repo_name (string, lengths 5-114) | repo_url (string, lengths 24-133) | snapshot_id (string, length 40) | revision_id (string, length 40) | directory_id (string, length 40) | branch_name (string, 209 classes) | visit_date (timestamp[ns]) | revision_date (timestamp[ns]) | committer_date (timestamp[ns]) | github_id (int64, 9.83k-683M, nullable ⌀) | star_events_count (int64, 0-22.6k) | fork_events_count (int64, 0-4.15k) | gha_license_id (string, 17 classes) | gha_created_at (timestamp[ns]) | gha_updated_at (timestamp[ns]) | gha_pushed_at (timestamp[ns]) | gha_language (string, 115 classes) | files (list, lengths 1-13.2k) | num_files (int64, 1-13.2k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
niranjenswarup/tensor-flow-lite-running-in-raspberry-pi-3 | https://github.com/niranjenswarup/tensor-flow-lite-running-in-raspberry-pi-3 | 33f57ac9fcb510efccf5dc95b94ce0f9e829140e | 4f621e752fcb9a6a6e5835cda4c9e28de6f8ea33 | 5cb1ae912d762a06a79854d687fc55a56900dfe2 | refs/heads/main | 2023-03-18T13:43:36.772948 | 2021-03-08T08:09:26 | 2021-03-08T08:09:26 | 345,574,703 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6500415802001953,
"alphanum_fraction": 0.6650041341781616,
"avg_line_length": 26.976743698120117,
"blob_id": "ffc2ce86f1835c52943cf489a39b8ec7aed3f864",
"content_id": "c2cada4e96e92d18e821b25d17b9e1382e44005d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1203,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 43,
"path": "/tflite_test.py",
"repo_name": "niranjenswarup/tensor-flow-lite-running-in-raspberry-pi-3",
"src_encoding": "UTF-8",
"text": "import cv2\nimport tensorflow as tf\nimport numpy as np\n\nmodel_path = r\"/home/pi/chain/model.tflite\"\ninterpreter = tf.lite.Interpreter(model_path)\ninterpreter.allocate_tensors()\ninput_details = interpreter.get_input_details()\noutput_details= interpreter.get_output_details()\ninterpreter.allocate_tensors()\nheight = input_details[0]['shape'][1]\nwidth = input_details[0]['shape'][2]\n\n\n \ndef predict_classes(tf_lite_interpreter, y):\n tf_lite_interpreter.set_tensor(input_details[0]['index'], y)\n tf_lite_interpreter.invoke()\n result= tf_lite_interpreter.get_tensor(output_details[0]['index'])\n if result==0:\n print(\"pass\")\n else:\n print(\"REJECT\")\n\n\nsource=cv2.VideoCapture(\"/home/pi/chain/reject.avi\")\nwhile(source.isOpened()):\n ret,img =source.read()\n if ret==True:\n img_resized = cv2.resize(img,(width, height))\n array= np.array(img_resized, dtype=np.float32)\n new= np.expand_dims(array, axis=0)\n input_data=np.vstack([new])\n predict=predict_classes( interpreter, input_data)\n \n cv2.imshow('LIVE',img)\n if cv2.waitKey(10) == ord('q'):\n break\n else:\n break\n\nsource.release() \ncv2.destroyAllWindows()\n"
},
{
"alpha_fraction": 0.7749364972114563,
"alphanum_fraction": 0.7820491194725037,
"avg_line_length": 58.6363639831543,
"blob_id": "3205542744b9fc88566c6a49322ccb02870a5d78",
"content_id": "c5f7005c5b38448f4577b64ef0b14ac940d213d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 5909,
"license_type": "no_license",
"max_line_length": 469,
"num_lines": 99,
"path": "/README.md",
"repo_name": "niranjenswarup/tensor-flow-lite-running-in-raspberry-pi-3",
"src_encoding": "UTF-8",
"text": "# tensor-flow-lite-running-in-raspberry-pi-3\n\n Section 1 - How to Set Up and Run TensorFlow Lite Object Detection Models on the Raspberry Pi\n\nSetting up TensorFlow Lite on the Raspberry Pi is much easier than regular TensorFlow! These are the steps needed to set up TensorFlow Lite:\n\n- 1a. Update the Raspberry Pi\n- 1b. Download this repository and create virtual environment\n- 1c. Install TensorFlow and OpenCV\n- 1d. Set up TensorFlow Lite detection model\n- 1e. Run TensorFlow Lite model!\n\n\n### Step 1a. Update the Raspberry Pi\nFirst, the Raspberry Pi needs to be fully updated. Open a terminal and issue:\n```\nsudo apt-get update\nsudo apt-get dist-upgrade\n```\nDepending on how long it’s been since you’ve updated your Pi, the update could take anywhere between a minute and an hour. \n\nWhile we're at it, let's make sure the camera interface is enabled in the Raspberry Pi Configuration menu. Click the Pi icon in the top left corner of the screen, select Preferences -> Raspberry Pi Configuration, and go to the Interfaces tab and verify Camera is set to Enabled. If it isn't, enable it now, and reboot the Raspberry Pi.\n\n<p align=\"center\">\n <img src=\"/doc/camera_enabled.png\">\n</p>\n\n### Step 1b. Download this repository and create virtual environment\n\nNext, clone this GitHub repository by issuing the following command. The repository contains the scripts we'll use to run TensorFlow Lite, as well as a shell script that will make installing everything easier. Issue:\n\n```\ngit clonehttps://github.com/niranjenswarup/missing-part-detection-in-drive-chain-using-deep-learning-and-tensorflow.git\n```\n\nThis downloads everything into a folder called TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi. That's a little long to work with, so rename the folder to \"tflite1\" and then cd into it:\n\n```\nmv missing-part-detection-in-drive-chain-using-deep-learning-and-tensorflow tflite1\ncd tflite1\n```\n\nWe'll work in this /home/pi/tflite1 directory for the rest of the guide. Next up is to create a virtual environment called \"tflite1-env\".\n\nI'm using a virtual environment for this guide because it prevents any conflicts between versions of package libraries that may already be installed on your Pi. Keeping TensorFlow installed in its own environment allows us to avoid version conflicts. For example, if you've already installed TensorFlow v1.8 on the Pi using my [other guide](https://www.youtube.com/watch?v=npZ-8Nj1YwY), you can leave that installation as-is without having to worry about overriding it.\n\nInstall virtualenv by issuing:\n\n```\nsudo pip3 install virtualenv\n```\n\nThen, create the \"tflite1-env\" virtual environment by issuing:\n\n```\npython3 -m venv tflite1-env\n```\n\nThis will create a folder called tflite1-env inside the tflite1 directory. The tflite1-env folder will hold all the package libraries for this environment. Next, activate the environment by issuing:\n\n```\nsource tflite1-env/bin/activate\n```\n\n**You'll need to issue the `source tflite1-env/bin/activate` command from inside the /home/pi/tflite1 directory to reactivate the environment every time you open a new terminal window. You can tell when the environment is active by checking if (tflite1-env) appears before the path in your command prompt, as shown in the screenshot below.**\n\nAt this point, here's what your tflite1 directory should look like if you issue `ls`.\n\n<p align=\"center\">\n <img src=\"/doc/tflite1_folder.png\">\n</p>\n\nIf your directory looks good, it's time to move on to Step 1c!\n\n### Step 1c. Install TensorFlow Lite dependencies and OpenCV\nNext, we'll install TensorFlow, OpenCV, and all the dependencies needed for both packages. OpenCV is not needed to run TensorFlow Lite, but the object detection scripts in this repository use it to grab images and draw detection results on them.\n\nTo make things easier, I wrote a shell script that will automatically download and install all the packages and dependencies. Run it by issuing:\n\n```\nbash get_pi_requirements.sh\n```\n\nThis downloads about 400MB worth of installation files, so it will take a while. Go grab a cup of coffee while it's working! If you'd like to see everything that gets installed, simply open get_pi_dependencies.sh to view the list of packages.\n\n**NOTE: If you get an error while running the `bash get_pi_requirements.sh` command, it's likely because your internet connection timed out, or because the downloaded package data was corrupted. If you get an error, try re-running the command a few more times.**\n\n**ANOTHER NOTE: The shell script automatically installs the latest version of TensorFlow. If you'd like to install a specific version, issue `pip3 install tensorflow==X.XX` (where X.XX is replaced with the version you want to install) after running the script. This will override the existing installation with the specified version.**\n\nThat was easy! On to the next step.\n\n### Step 1d. Set up TensorFlow Lite detection model\nNext, we'll set up the detection model that will be used with TensorFlow Lite. This guide shows how to either download a sample TFLite model provided by Google, or how to use a model that you've trained yourself by following [Part 1 of my TensorFlow Lite tutorial series](https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi#part-1---how-to-train-convert-and-run-custom-tensorflow-lite-object-detection-models-on-windows-10).\n\nA detection model has two files associated with it: a detect.tflite file (which is the model itself) and a labelmap.txt file (which provides a labelmap for the model). My preferred way to organize the model files is to create a folder (such as \"BirdSquirrelRaccoon_TFLite_model\") and keep both the detect.tflite and labelmap.txt in that folder. This is also how Google's downloadable sample TFLite model is organized.\n\n\nafter following all the process run the file with .py in virtuial enviroment \ncomment me if any issues \n"
}
] | 2 |
JunqiLin/lstm_model_tensorflow | https://github.com/JunqiLin/lstm_model_tensorflow | 30233fb60b2f14879b6649971cbff8bd8ed2f008 | 76afd540ac1dcb9e8318f51aa41cbd3b454bdf9a | ed35e06f8c16519facdc3a12e7ab559e27a94dcb | refs/heads/master | 2020-05-16T05:18:15.758610 | 2019-05-20T04:05:39 | 2019-05-20T04:05:39 | 182,811,849 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.5019454956054688,
"alphanum_fraction": 0.6264591217041016,
"avg_line_length": 11.899999618530273,
"blob_id": "cc425ceac67b8e065277857b9a169f061ea86ceb",
"content_id": "b1950239f108f4fbb65760282adc7be6e62cd31a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 257,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 20,
"path": "/core/CONSTANT.py",
"repo_name": "JunqiLin/lstm_model_tensorflow",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Apr 26 18:28:04 2019\n\n@author: linjunqi\n\"\"\"\n\n\"\"\"\nCONSTANT\n\"\"\"\n\nBATCH_START = 0\nTIME_STEPS = 20\nBATCH_SIZE = 50\nINPUT_SIZE = 1\nOUTPUT_SIZE = 1\nCELL_SIZE = 10\nLR = 0.006\nDB_NAME = \"SP500_TOP10\""
},
{
"alpha_fraction": 0.7249388694763184,
"alphanum_fraction": 0.7359412908554077,
"avg_line_length": 29.037036895751953,
"blob_id": "c344852d0451f03d799ece2149483d78d11a93a1",
"content_id": "6d22b5c15f091b8c8aff19717b7c83c7785df2f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 818,
"license_type": "no_license",
"max_line_length": 227,
"num_lines": 27,
"path": "/README.md",
"repo_name": "JunqiLin/lstm_model_tensorflow",
"src_encoding": "UTF-8",
"text": "\n## LSTM for Time Series Prediction by Tensorflow\n\n##### version\n- python 2.7\n- tensorflow 1.12.0 cpu\n- plotly\n- pymongo 3.7.2\n\n##### database\n- mongodb\n\n##### file\n- data_process.py \n\n\tstore data into mongodb, you need to run this file first because our model read data from database\n\n- data_generator.py\n\n\tread data from database and process the data into the type needed by lstm model.\n\t\n- model.py\n\n\tthis is our core lstm model which includes two hidden layers and middle lstm layer. When you get the data, then run this file and see the result.\n\t\n##### how to run\n- install mongodb and run data_process.py(Actually we can read data directly from csv file but we want our code can cope with different data sources without modifying code and can read data more quickly so we use the database )\n- run model.py\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5264720320701599,
"alphanum_fraction": 0.540821373462677,
"avg_line_length": 24.063291549682617,
"blob_id": "71d17b57c453611aa0c0c2a563aa166092f9e9bc",
"content_id": "60b28431fa34472246fd7c6b23769ece638766d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2021,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 79,
"path": "/core/data_process.py",
"repo_name": "JunqiLin/lstm_model_tensorflow",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Apr 22 23:01:05 2019\n\n@author: linjunqi\n\"\"\"\n\n\"\"\"\ndata_process\n\"\"\"\n\n\"\"\"\nStore csv file data into Mongodb \n\"\"\"\n\n\nfrom time import time\nfrom datetime import datetime\nimport csv\nimport pymongo\nimport os\nimport pandas as pd \nfrom CONSTANT import DB_NAME\n#DB_NAME = \"SP500_TOP10\"\n\n\n\nclass BarData(object):\n \"\"\"\n standard data format designed\n \"\"\"\n def __init__(self):\n \"\"\"\"\"\"\n self.symbol = 0\n self.datetime=0\n self.volume= 0\n self.open = 0\n self.high = 0\n self.low = 0\n self.close = 0\n\ndef loadCsvFromYahoo(fileName, dbName, symbol):\n print(\"Begin to load csv files %s to %s 's %s\")%(fileName, dbName, symbol)\n \n client = pymongo.MongoClient()\n collection = client[dbName][symbol]\n collection.ensure_index([('datetime',pymongo.ASCENDING)],unique = True)\n \n with open(fileName,'r') as f:\n# reader = csv.DictReader(f)\n reader = pd.read_csv(f)\n reader.dropna(axis=0, how='any', inplace=True)\n for index, d in reader.iterrows():\n bar = BarData()\n bar.symbol = symbol\n bar.open = float(d['Open'])\n bar.high = float(d['High'])\n bar.low = float(d['Low'])\n bar.close = float(d['Close'])\n bar.datetime = datetime.strptime(d['Date'],'%Y/%m/%d')\n bar.volume = d['Volume']\n \n flt = {'datetime':bar.datetime}\n collection.update_one(flt,{'$set':bar.__dict__},upsert = True)\n print(u'Insert finished')\n \nif __name__ ==\"__main__\":\n flag=\"all\"\n if flag ==\"all\":\n path = os.path.abspath(os.path.dirname(os.getcwd()))+'/data'\n filenames = os.listdir(path)[2:]\n# filenames=['add.csv']\n for f in filenames:\n print(f)\n fn = path+'/'+str(f)\n loadCsvFromYahoo(fn, DB_NAME, str(f)[:-4])\n print('All Data has been inserted')\n# \n \n \n \n \n\n"
},
{
"alpha_fraction": 0.44838884472846985,
"alphanum_fraction": 0.4817039966583252,
"avg_line_length": 21.700000762939453,
"blob_id": "3569b05f038a239a8ca2a51586a1f033c745759d",
"content_id": "677aa60c3450c8af1231909ceb172aa7d7472548",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1831,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 80,
"path": "/core/smooth.py",
"repo_name": "JunqiLin/lstm_model_tensorflow",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sun Apr 28 22:14:36 2019\n\n@author: linjunqi\n\"\"\"\n\n\nfrom __future__ import division\nfrom sympy import *\nimport numpy as np\n\ndata = np.array([1,2,3,4,2,3,5,6])\n\ndef line(data):\n \"\"\"\n fitting line\n \"\"\"\n length = len(data)\n result = np.zeros(length)\n start = 0\n end = length-1\n mid = int((end-start)/2)\n\n start_num = data[start]\n mid_num = data[mid]\n end_num = data[end]\n p_len = mid - start\n l_len = end -mid\n p_delta = (data[mid]-data[start])/p_len\n l_delta = (data[end]-data[mid])/l_len\n for i in range(p_len):\n result[i] = data[start] + p_delta * i\n for i in range(l_len+1):\n result[i+p_len] = data[mid] + l_delta * i\n return result \n \n\n\n\ndef equation(data):\n \"\"\"\n fitting curve line\n \"\"\"\n length = len(data)\n start = 0\n end = length - 1\n q_mid = int((start+end)/3)\n h_mid = int(q_mid * 2)\n x = np.array([start,q_mid,h_mid,end])\n y = np.array([data[start],data[q_mid],data[h_mid],data[end]])\n \n a = Symbol('a')\n b = Symbol('b')\n c = Symbol('c')\n d = Symbol('d')\n \n f = solve([a * x[0]**3 + b * x[0]**2 + c* x[0] +d - y[0], \n a * x[1]**3 + b * x[1]**2 + c* x[1] +d - y[1],\n a * x[2]**3 + b * x[2]**2 + c* x[2] +d - y[2],\n a * x[3]**3 + b * x[3]**2 + c* x[3] +d - y[3]],[a, b, c, d])\n \n\n degree = 1000\n z = np.zeros(degree)\n h = np.zeros(degree)\n result = np.zeros(length)\n temp = 0\n delta = len(data)/degree\n for i in range(degree):\n h[i] = temp\n temp += delta\n \n for i in range(degree):\n z[i] = f[a]*h[i]**3 + f[b]*h[i]**2 + f[c]*h[i] + f[d] \n \n for i in range(length):\n result[i] = z[int((i/length)*degree)]\n return h,result\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5796014070510864,
"alphanum_fraction": 0.5880421996116638,
"avg_line_length": 32,
"blob_id": "c31bdbbbcb07c21cc485b4bc9a702e8e40ab0626",
"content_id": "fdafe04e95bf8f005323a4f0402b4ce3db9216da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4265,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 129,
"path": "/core/data_generator.py",
"repo_name": "JunqiLin/lstm_model_tensorflow",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Apr 26 18:25:28 2019\n\n@author: linjunqi\n\"\"\"\n\n\"\"\"\nmdoel data generation\n\"\"\"\nimport pymongo\nimport pandas as pd\nimport numpy as np\nfrom CONSTANT import TIME_STEPS,BATCH_SIZE,DB_NAME\n\n\ndef get_top(num):\n \"\"\"\n get the highest digit pos\n \"\"\"\n strnum = list(str(num))\n for i,item in enumerate(strnum):\n if i >0:\n strnum[i] = '0'\n result = int(\"\".join(strnum))\n return result\n \n\n \nclass DataGenerator(object):\n \"\"\"\n process data into what model need\n \"\"\"\n def __init__(self, symbol,batch_size,steps,split):\n self.symbol = symbol\n self.batch_size = batch_size\n self.steps = steps\n self.split = split\n self.start = 0\n self.tstart = 0\n \n #read data from mongodb\n client = pymongo.MongoClient('localhost', 27017)\n db = client[DB_NAME]\n table = db[symbol]\n data = pd.DataFrame(list(table.find()))\n \n data.set_index([\"datetime\"], inplace=True)\n data = data[['symbol','open','close','high','low','volume']]\n \n self.data = data[:6000]\n self.length = get_top(len(self.data))\n self.train_len = int(round(self.length*split))\n self.test_len = self.length - self.train_len\n\n \n self.open = self.data['open']\n self.close = self.data['close']\n self.high = self.data['high']\n self.low = self.data['low']\n self.volume = self.data['volume']\n \n self.norm_close = np.array([])\n self.train_data = np.array([])\n self.test_data = np.array([])\n self._normolize_data()\n self._generate_train_data()\n self._generate_test_data()\n \n \"\"\"\n normolize data\n \"\"\" \n def _normolize_data(self):\n np_close = np.array(self.close)\n norm_close = (np_close-np.mean(np_close))/np.std(np_close)\n self.norm_close = norm_close[:self.length]\n \n \"\"\"\n generate training set\n \"\"\"\n def _generate_train_data(self):\n self.train_data = self.norm_close[:self.train_len]\n \n \"\"\"\n generate testing set\n \"\"\"\n def _generate_test_data(self):\n self.test_data = self.norm_close[-self.test_len:]\n \n \"\"\"\n get batch data needed by model\n \"\"\"\n def get_batch_from_train(self):\n BATCH_START = self.start\n trainDataX = self.train_data[BATCH_START:BATCH_START+TIME_STEPS*BATCH_SIZE].reshape((BATCH_SIZE,TIME_STEPS))\n trainDataY = self.train_data[BATCH_START+1:BATCH_START+TIME_STEPS*BATCH_SIZE+1].reshape((BATCH_SIZE,TIME_STEPS))\n self.start += TIME_STEPS\n return [trainDataX[:,:,np.newaxis],trainDataY[:,:,np.newaxis]]\n \n def get_batch_from_test(self):\n BATCH_START = self.tstart\n testDataX = self.test_data[BATCH_START:BATCH_START+TIME_STEPS*BATCH_SIZE].reshape((BATCH_SIZE,TIME_STEPS))\n testDataY = self.test_data[BATCH_START+1:BATCH_START+TIME_STEPS*BATCH_SIZE+1].reshape((BATCH_SIZE,TIME_STEPS))\n self.tstart += TIME_STEPS\n return [testDataX[:,:,np.newaxis],testDataY[:,:,np.newaxis]]\n \n# def get_batch(self):\n# BATCH_START = self.start\n# testDataX = self.norm_close[BATCH_START:BATCH_START+TIME_STEPS*BATCH_SIZE].reshape((BATCH_SIZE,TIME_STEPS))\n# testDataY = self.norm_close[BATCH_START+1:BATCH_START+TIME_STEPS*BATCH_SIZE+1].reshape((BATCH_SIZE,TIME_STEPS))\n# self.start += TIME_STEPS\n# return [testDataX[:,:,np.newaxis],testDataY[:,:,np.newaxis]]\n \n def get_batch(self,steps):\n BATCH_START = self.start\n x_batch = []\n y_batch=[]\n for i in range(BATCH_SIZE):\n xrow_data = self.norm_close[BATCH_START:BATCH_START+TIME_STEPS]\n yrow_data = self.norm_close[BATCH_START+steps:BATCH_START+TIME_STEPS+steps]\n x_batch.append(xrow_data)\n y_batch.append(yrow_data)\n BATCH_START += 1\n \n testDataX = np.array(x_batch).reshape([-1]).reshape((BATCH_SIZE,TIME_STEPS)) \n testDataY = np.array(y_batch).reshape([-1]).reshape((BATCH_SIZE,TIME_STEPS)) \n self.start += BATCH_SIZE\n return [testDataX[:,:,np.newaxis],testDataY[:,:,np.newaxis]]\n \n\n\n\n"
},
{
"alpha_fraction": 0.5587594509124756,
"alphanum_fraction": 0.5676022171974182,
"avg_line_length": 31.610877990722656,
"blob_id": "ef0b6b63104e9ce852f5572ec867d923be10058b",
"content_id": "658a0ec1a4b93477aeb7b4e706efdda105a3ab88",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7803,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 239,
"path": "/core/model.py",
"repo_name": "JunqiLin/lstm_model_tensorflow",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\nfrom __future__ import division\n\"\"\"\nCreated on Mon Apr 22 23:01:49 2019\n\n@author: linjunqi\n\"\"\"\n\n\"\"\"\nlstm model\n\"\"\"\n\nimport numpy as np\nimport pandas as pd\nfrom data_generator import DataGenerator\nimport smooth as sm\nfrom CONSTANT import TIME_STEPS,BATCH_SIZE,INPUT_SIZE,OUTPUT_SIZE,CELL_SIZE,LR\nimport tensorflow as tf\nimport plotly.offline as py\nimport plotly.graph_objs as go\nimport matplotlib.pyplot as plt\nimport math\ntf.reset_default_graph()\n\nclass LSTM(object):\n \"\"\"\n lstm model\n \"\"\"\n def __init__(self, n_steps, input_size, output_size, cell_size, batch_size):\n self.xs = tf.placeholder(tf.float32, [None, n_steps, input_size], name='xs')\n self.ys = tf.placeholder(tf.float32, [None, n_steps, output_size ], name='ys')\n \n self.n_steps =n_steps\n self.input_size = input_size\n self.output_size = output_size\n self.cell_size = cell_size\n self.batch_size = batch_size\n\n self.prediction = self.build_model()['pred']\n self.optimizer = self.train_lstm(self.prediction)\n \"\"\"\n generate random weights and biases\n \"\"\" \n @staticmethod \n def _weight_variable(shape, name='weights'):\n initializer = tf.random_normal_initializer(mean=0., stddev=1.,)\n return tf.get_variable(shape=shape, initializer=initializer,name=name)\n\n def _bias_variable(self, shape, name='biases'):\n initializer = tf.constant_initializer(0.1)\n return tf.get_variable(name=name, shape=shape, initializer=initializer)\n @staticmethod \n def ms_error(labels, logits):\n return tf.square(tf.subtract(labels, logits))\n ##mean-square function to calculate error\n \n \n def build_model(self):\n \"\"\"\n build input hidden layer\n \"\"\"\n with tf.variable_scope('in_hidden'):\n l_in_x = tf.reshape(self.xs, [-1, self.input_size], name='2_2D')\n Ws_in = self._weight_variable([self.input_size, self.cell_size])\n bs_in = self._bias_variable([self.cell_size,])\n \n with tf.name_scope('Wx_plus_in_b'):\n l_in_y = tf.matmul(l_in_x, Ws_in) + bs_in\n l_in_y = tf.reshape(l_in_y, [-1, self.n_steps, self.cell_size], name='2_3D')\n \n \"\"\"\n build lstm cell and feed data from input hidden layer into it\n \"\"\"\n with tf.variable_scope('LSTM_cell'):\n lstm_cell = tf.contrib.rnn.BasicLSTMCell(self.cell_size, forget_bias=1.0, state_is_tuple=True)\n with tf.name_scope('initial_state'):\n cell_init_state = lstm_cell.zero_state(self.batch_size, dtype=tf.float32)\n cell_outputs, cell_final_state = tf.nn.dynamic_rnn(\n lstm_cell, l_in_y, initial_state=cell_init_state, time_major=False)\n \n \"\"\"\n build output hidden layer\n \"\"\"\n with tf.variable_scope('out_hidden'):\n l_out_x = tf.reshape(cell_outputs, [-1, self.cell_size], name='2_2D')\n Ws_out = self._weight_variable([self.cell_size, self.output_size])\n bs_out = self._bias_variable([self.output_size, ])\n with tf.name_scope('Wx_plus_out_b'):\n pred = tf.matmul(l_out_x, Ws_out) + bs_out\n \n\n \n model={'pred':pred,'final_state':cell_final_state} \n return model\n \n def train_lstm(self, pred):\n \"\"\"\n calculate loss\n \"\"\"\n with tf.name_scope('cost'):\n losses = tf.contrib.legacy_seq2seq.sequence_loss_by_example(\n [tf.reshape(pred, [-1], name='reshape_pred')],\n [tf.reshape(self.ys, [-1], name='reshape_target')],\n [tf.ones([self.batch_size * self.n_steps], dtype=tf.float32)],\n average_across_timesteps=True,\n softmax_loss_function=self.ms_error,\n name='losses'\n )\n with tf.name_scope('average_cost'):\n cost = tf.div(\n tf.reduce_sum(losses, name='losses_sum'),\n self.batch_size,\n name='average_cost')\n tf.summary.scalar('cost', cost)\n \"\"\"\n AdamOptimizer training\n \"\"\"\n with tf.name_scope('train'):\n train_op = tf.train.AdamOptimizer(LR).minimize(cost)\n result={'cost':cost,'train_op':train_op}\n return result\n\n\nif __name__ == '__main__':\n \n lstm= LSTM(TIME_STEPS, INPUT_SIZE, OUTPUT_SIZE, CELL_SIZE, BATCH_SIZE)\n \n sess = tf.Session()\n merged = tf.summary.merge_all()\n writer = tf.summary.FileWriter(\"logs\", sess.graph)\n\n init = tf.global_variables_initializer()\n sess.run(init)\n split = 0.8\n generator = DataGenerator('add',BATCH_SIZE, TIME_STEPS, split)\n \n \n seq_len = BATCH_SIZE+TIME_STEPS-1\n train_times = int(math.floor((generator.train_len - seq_len)/BATCH_SIZE))\n times = int(math.floor((generator.length - seq_len)/BATCH_SIZE))\n test_times = times-train_times\n \n\n drawtest=[]\n drawtest=[]\n drawtrend=[]\n \n P_STEP = 1\n \"\"\"\n training\n \"\"\"\n for i in range(train_times):\n index = TIME_STEPS-1\n seq, res = generator.get_batch(P_STEP)\n feed_dict = {\n lstm.xs: seq,\n lstm.ys: res\n }\n\n _, cost, pred = sess.run(\n [lstm.optimizer['train_op'],lstm.optimizer['cost'], lstm.prediction],\n feed_dict=feed_dict)\n\n print('cost: ', round(cost, 4))\n result = sess.run(merged, feed_dict)\n writer.add_summary(result, i)\n writer.flush()\n \n \n \n end_pre = (train_times-1)*BATCH_SIZE + seq_len -1\n print(end_pre)\n \n print(\"Test Set Cost\")\n \n \"\"\"\n testing\n \"\"\"\n for i in range(test_times):\n index = TIME_STEPS-1\n test_seq, test_res = generator.get_batch(P_STEP)\n feed_dict = {lstm.xs: test_seq, lstm.ys: test_res}\n \n cost, pred = sess.run([lstm.optimizer['cost'],lstm.prediction], feed_dict = feed_dict)\n print('cost: ', round(cost, 4))\n \n for j in range(BATCH_SIZE):\n drawtest.append(pred[index])\n index+=TIME_STEPS \n# \n true_data = generator.norm_close[end_pre:]\n ##real data\n pd_or = pd.DataFrame(generator.norm_close[end_pre:])\n \n ##prediction sequence\n t = 0\n for _ in range(test_times):\n data = drawtest[t:t+BATCH_SIZE]\n data = sm.line(data)\n drawtrend.append(data)\n t += BATCH_SIZE\n \"\"\"\n output result\n \"\"\"\n ##prediction trend curve\n drawtrend = np.array(drawtrend).reshape([-1])\n\n ##prediction curve\n drawtest = np.array(drawtest).reshape([-1])\n \n right = 0\n wrong = 0\n\n for l in range(len(drawtest)):\n if l == len(drawtest)-1:\n continue\n if (drawtest[l+1]>drawtest[l] and true_data[l+1]>true_data[l]) or (drawtest[l+1]<drawtest[l] and true_data[l+1]<true_data[l]):\n right+=1\n if (drawtest[l+1]>drawtest[l] and true_data[l+1]<true_data[l]) or (drawtest[l+1]<drawtest[l] and true_data[l+1]>true_data[l]):\n wrong +=1\n \n print(\"Right:%s\"%(right))\n print(\"Wrong:%s\"%(wrong))\n accuracy = right / (wrong + right)\n print(\"Accuracy Percentage is %s\"%(accuracy))\n pd_drawtrend = pd.DataFrame(drawtrend)\n pd_drawpred = pd.DataFrame(drawtest)\n \n \n train_pic = go.Scatter(x=pd_drawpred.index, y=pd_drawpred[0], name='test_pre_point')\n trend_pic = go.Scatter(x=pd_drawtrend.index, y=pd_drawtrend[0], name='test_pre_trend')\n origin_pic = go.Scatter(x=pd_or.index,y = pd_or[0],name='real data')\n \n r = [train_pic,origin_pic,trend_pic]\n fig = go.Figure(data=r)\n py.plot(fig)\n## \n# \n \n"
}
] | 6 |
MAGANER/UncleBob | https://github.com/MAGANER/UncleBob | 82d93ab2b258287103bf20a84a7981e68ce9faac | 4ccc8827762ef38e02b021095bd2d03c4d956ff1 | d3467618169e4b321a1bd81ce849783ee0b1e94e | refs/heads/master | 2023-01-30T01:14:36.859891 | 2020-12-11T22:28:01 | 2020-12-11T22:28:01 | 320,350,955 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.652901291847229,
"alphanum_fraction": 0.6601016521453857,
"avg_line_length": 29.610389709472656,
"blob_id": "5f8522906871988058987d850332d23329bfab37",
"content_id": "a5a09ec67d298969afe94398bde19b2fe1873744",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4987,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 154,
"path": "/UncleBob.py",
"repo_name": "MAGANER/UncleBob",
"src_encoding": "UTF-8",
"text": "import requests\nimport random\nimport sys\n\nclass Bot:\n\tdef __init__(self):\n\t\tself.url = \"https://api.telegram.org/bot1410863001:AAGoiqW1uitKM381nqbwkRRDhfYZbZX6jUs/\"\n\t\t\n\tdef get_updates(self, offset=None, timeout=30):\n\t\tmethod = 'getUpdates'\n\t\tparams = {'timeout': timeout, 'offset': offset}\n\t\tresp = requests.get(self.url + method, params)\n\t\tresult_json = resp.json()['result']\n\t\treturn result_json\n\t\n\tdef get_last_update(self):\n\t\tget_result = self.get_updates()\n\n\t\tif len(get_result) > 0:\n\t\t\tlast_update = get_result[-1]\n\t\telse:\n\t\t\tlast_update = get_result[len(get_result)]\n\n\t\treturn last_update\n\t\n\tdef send_message(self, chat_id, text):\n\t\tparams = {'chat_id': chat_id, 'text': text}\n\t\tmethod = 'sendMessage'\n\t\tresp = requests.post(self.url + method, params)\n\t\treturn resp\n\nclass UncleBob(Bot):\n\tdef __init__(self):\n\t\tBot.__init__(self)\n\t\tself.last_update = 0\n\t\t\n\t\tself.last_update_id = -1\n\t\tself.last_message_id= -1\n\t\tself.last_chat_text = \"\"\n\t\tself.last_message = \"\"\n\t\tself.last_chat_id = -1\n\t\tself.last_chat_name = \"\"\n\t\tself.curr_message_id= -1\n\t\t\n\t\tself.pizda_counter = 0\n\t\n\t\t\n\t\tself.started \t\t= False\n\t\t\n\t\tself.beer_counter = True\n\t\tself.bullshit_counter = True\n\t\t\n\t\tself.actions = {\n\t\t\t\t\t\t\"будешь пиво?\": self.wanna_beer,\n\t\t\t\t\t\t\"пизда\": self.pizda,\n\t\t\t\t\t\t\"команды\": self.show_commands,\n\t\t\t\t\t\t\"да\": self.say_yes,\n\t\t\t\t\t\t\"соси\":\t\t\t self.say_get_off,\n\t\t\t\t\t\t\"привет\":\t\t self.say_hello\n\t\t\t\t\t}\n\t\t\t\t\t\n\tdef wanna_beer(self):\n\t\tdata = [\"кнч\",\"nalivay\",\"да\",\"охота крепкое\"]\n\t\tself.send_message(self.last_chat_id,random.choice(data))\n\tdef pizda(self):\n\t\tif self.pizda_counter == 0:\n\t\t\tself.send_message(self.last_chat_id,\"пизда, ага\")\n\t\t\tself.pizda_counter = self.pizda_counter + 1\n\t\tif self.pizda_counter > 0:\n\t\t\tself.send_message(self.last_chat_id,\"да, да пизда\")\n\t\t\tself.pizda_counter = self.pizda_counter + 1\n\t\tif self.pizda_counter == 5:\n\t\t\tself.send_message(self.last_chat_id,\"пошёл на хуй \"+self.last_chat_name)\n\t\t\tself.pizda_counter = 0\n\tdef show_commands(self):\n\t\tself.send_message(self.last_chat_id,\"что я могу\")\n\t\tfor i in self.actions.keys():\n\t\t\tself.send_message(self.last_chat_id,i)\n\tdef say_yes(self):\n\t\tdata = [\"так блять да\",\"пизда\",\"наверное\",\"да это нет?\"]\n\t\tself.send_message(self.last_chat_id,random.choice(data))\n\tdef say_get_off(self):\n\t\tdata = [\"сам соси\",\"обосрись и здохни\"]\n\t\tself.send_message(self.last_chat_id,random.choice(data)+\" \"+ self.last_chat_name)\n\tdef say_hello(self):\n\t\tdata = [\"zdorovo\",\"hi\"]\n\t\tself.send_message(self.last_chat_id,random.choice(data)+\" \"+ self.last_chat_name)\n\tdef say_random_bull_shit(self):\n\t\tif \"пиво\" in self.last_message and self.beer_counter:\n\t\t\tdata = (\"кто-то сказал пиво?\",\"а мне?\",\"Пиво(!)\")\n\t\t\tself.send_message(self.last_chat_id,random.choice(data))\n\t\t\tself.beer_counter = False\n\t\tif \"пиво\" not in self.last_message:\n\t\t\tself.beer_counter = True\n\t\t\n\t\tif \"хуйня\" in self.last_message and self.bullshit_counter:\n\t\t\tdata = (\"воистину!\",\"поебать\",\"хватит пиздеть\")\n\t\t\tself.send_message(self.last_chat_id,random.choice(data))\n\t\t\tself.bullshit_counter = False\n\t\tif \"хуйня\" not in self.last_message:\n\t\t\tself.bullshit_counter = True\n\t\t\t\n\tdef is_talking_to_myself(self,text):\n\t\treturn \"/дядька\" in text\n\tdef get_command(self,text):\n\t\treturn text[6:]\n\t\n\t\n\tdef update_me(self):\n\t\tBot.get_updates(self)\n\t\tself.last_update = Bot.get_last_update(self)\n\t\t\n\t\tself.last_update_id = self.last_update['update_id']\n\t\tif \"message\" in self.last_update.keys():\n\t\t\tif 'text' in self.last_update['message'].keys():\n\t\t\t\tself.last_chat_text = self.last_update['message']['text']\n\t\t\t\n\t\t\tself.last_chat_id = self.last_update['message']['chat']['id']\n\t\t\tself.last_chat_name = self.last_update['message']['from']['first_name']\n\t\t\tself.curr_message_id= self.last_update['message']['message_id']\n\t\t\n\t\tif \"text\" in self.last_update[\"message\"].keys():\n\t\t\tself.last_message = self.last_update[\"message\"][\"text\"]\n\n\tdef send_start_message(self):\n\t\tif self.started == False:\n\t\t\tself.send_message(self.last_chat_id,\"готов трахать ваш мозг\")\n\t\t\tself.started = True\t\n\tdef run(self):\n\t\tself.send_start_message()\n\t\tif self.is_talking_to_myself(self.last_chat_text) and self.last_message_id != self.curr_message_id:\n\t\t\tcommand = self.get_command(self.last_chat_text)[1:]\n\t\t\tcommand = command[1:]\n\t\t\tprint(command)\n\t\t\tif command in self.actions.keys():\n\t\t\t\tself.actions[command]()\n\t\t\telse:\n\t\t\t\tself.send_message(self.last_chat_id,\"сукаблячёэто \"+command)\n\t\t\tself.last_message_id = self.curr_message_id\n\t\tself.say_random_bull_shit()\n\t\n\nbot = UncleBob()\ndef main(bot):\n\twhile True:\n\t\tbot.update_me()\n\t\tbot.run()\n\t\t\nif __name__ == '__main__': \n\ttry:\n\t\tmain(bot)\n\texcept KeyboardInterrupt:\n\t\tbot.send_message(bot.last_chat_id,\"ааа, меня заебаобалоооо\")\n\t\texit()\n\t\n\n\n\n\n\n\n"
}
] | 1 |
Lionardo/aldryn-stripe-shop | https://github.com/Lionardo/aldryn-stripe-shop | 35fd61dfa9a09c190361acbee1523cc418dc55e3 | 90077de3146a2317e5471dcd036a25a49b0286c0 | 0dfa35a82efe571d6fa6445b716313e5b063b5b9 | refs/heads/master | 2021-01-20T02:20:37.226359 | 2015-07-22T19:14:07 | 2015-07-22T19:14:07 | 39,524,836 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.38804349303245544,
"alphanum_fraction": 0.38804349303245544,
"avg_line_length": 22,
"blob_id": "f9993cab35764ffd0d25425480ed214ad418ac27",
"content_id": "15cf36922c31998f70d83e176aa594174e603ddd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 920,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 40,
"path": "/aldryn_stripe_shop/static/js/addons/cl.aldryn-stripe.js",
"repo_name": "Lionardo/aldryn-stripe-shop",
"src_encoding": "UTF-8",
"text": "'use strict';\n// #####################################################################################################################\n// #NAMESPACES#\n/**\n * @module Cl\n */\n// istanbul ignore next\nvar Cl = window.Cl || {};\n\n// #####################################################################################################################\n// #UTILS#\n(function ($) {\n 'use strict';\n\n /**\n * Contains various helpers, feel free to extend and adapt\n *\n * @class Utils\n * @namespace Cl\n */\n Cl.AldrynStripe = new Class({\n\n\n initialize: function (options) {\n\n this.handler = StripeCheckout.configure(options)\n this.options = options\n\n },\n\n setData: function (options) {\n $.extend(true, this.options, options);\n },\n\n checkoutStripe: function () {\n this.handler.open(this.options);\n }\n });\n\n})(jQuery);\n"
},
{
"alpha_fraction": 0.7029560208320618,
"alphanum_fraction": 0.7029560208320618,
"avg_line_length": 30.522727966308594,
"blob_id": "3c559c0cd6c182c9f844336c38c9fa95b10f9e86",
"content_id": "252bab164b0285a39fbae1543db97011e2efb899",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 1387,
"license_type": "no_license",
"max_line_length": 150,
"num_lines": 44,
"path": "/README.rst",
"repo_name": "Lionardo/aldryn-stripe-shop",
"src_encoding": "UTF-8",
"text": "|PyPI Version| |Build Status| |Coverage Status|\n\n\n###########################\nAldryn Stripe Shop\n###########################\n\n\nAldryn-stripe-shop is an addon that lets you build a small arbitrary webshop with stripe as payment system.(Real transactions won't work without SSL!)\nThis is a very simple addon that gives you the possibility to add:\n\n\n* title\n* description\n* images\n* price\n\n\nto your products.\n\nYou can also add plugins inside your products plugin.\n\n*************\nDocumentation\n*************\nFor a list of all data you can fetch and other functionalities\n`API <https://docs.shopify.com/api/introduction/api-call-limit>`_.\nIf you don't know what Aldryn is click `here <https://www.aldryn.com>`_.\n\n\n************\nContribution\n************\n\nYou are very welcome improving this addon for Aldryn and your everyday use, especially the documentation always\nneeds love. Feel free to fork and send pull requests.\n\n\n.. |PyPI Version| image:: http://img.shields.io/pypi/v/aldryn/aldryn-stripe-shop.svg\n :target: https://pypi.python.org/pypi/aldryn-stripe-shop\n.. |Build Status| image:: http://img.shields.io/travis/aldryn/aldryn/aldryn-stripe-shop.svg\n :target: https://travis-ci.org/aldryn/aldryn-stripe-shop\n.. |Coverage Status| image:: http://img.shields.io/coveralls/aldryn/aldryn/aldryn-stripe-shop.svg\n :target: https://coveralls.io/r/aldryn/aldryn-stripe-shop?branch=master\n"
},
{
"alpha_fraction": 0.6568711996078491,
"alphanum_fraction": 0.6568711996078491,
"avg_line_length": 27.219512939453125,
"blob_id": "8675ccb7b2523ec4b2ef1664f5311242a13ebd83",
"content_id": "1528f38d1748ebee62cd98e9d67847785f6bf30c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1157,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 41,
"path": "/aldryn_stripe_shop/cms_plugins.py",
"repo_name": "Lionardo/aldryn-stripe-shop",
"src_encoding": "UTF-8",
"text": "from django.utils.translation import ugettext as _\nfrom cms.plugin_base import CMSPluginBase\nfrom cms.plugin_pool import plugin_pool\n\nfrom models import CheckoutPlugin, ProductPlugin\n\n\nclass CheckoutPlugin(CMSPluginBase):\n model = CheckoutPlugin\n module = _('stripe shop')\n name = _(\"Checkout\")\n render_template = \"aldryn_stripe_shop/checkout.html\"\n\n def render(self, context, instance, placeholder):\n stripe = instance.stripe\n context.update({\n 'instance': instance,\n 'placeholder': placeholder,\n 'stripe': stripe,\n })\n return context\n\n\nclass ProductPlugin(CMSPluginBase):\n model = ProductPlugin\n module = _('stripe shop')\n name = _(\"Product\")\n render_template = \"aldryn_stripe_shop/product.html\"\n allow_children = True\n\n def render(self, context, instance, placeholder):\n products = instance.product\n context.update({\n 'instance': instance,\n 'placeholder': placeholder,\n 'product': products,\n })\n return context\n\nplugin_pool.register_plugin(CheckoutPlugin)\nplugin_pool.register_plugin(ProductPlugin)\n"
},
{
"alpha_fraction": 0.7284768223762512,
"alphanum_fraction": 0.7360454201698303,
"avg_line_length": 42.14285659790039,
"blob_id": "2d64cb952bdb6136be13aee0a953181c1e526220",
"content_id": "adba71f2302019d6da634006778c8b9ef7e88fca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2114,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 49,
"path": "/aldryn_stripe_shop/models.py",
"repo_name": "Lionardo/aldryn-stripe-shop",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom cms.models import CMSPlugin\nfrom filer.fields.image import FilerImageField\nfrom filer.fields.file import FilerFileField\nfrom django.utils.encoding import smart_unicode\n\n\nclass Stripe(models.Model):\n name = models.CharField(_('account name'), max_length=100, blank=True)\n publishable = models.CharField(max_length=255, blank=False)\n secret_key = models.CharField(max_length=255, unique=True)\n address = models.BooleanField(default=True, help_text=\"Do you wish to display the adress for checkout?\")\n currency = models.BooleanField(default=True, help_text=\"Do you wish to display the currency for checkout?\")\n image = models.BooleanField(default=True, help_text=\"Do you wish to show the products image on checkout?\")\n remember_me = models.BooleanField(default=False, help_text=\"Do you wish to store user entries for next time?\")\n description = models.BooleanField(default=False, help_text=\"Do you wish to show the products description on checkout?\")\n bitcoin = models.BooleanField(default=False, help_text=\"Allow payments in bitcoin?\")\n\n def __unicode__(self):\n return smart_unicode(self.name)\n\n def copy_relations(self, oldinstance):\n self.sections = oldinstance.sections.all()\n\n\nclass CheckoutPlugin(CMSPlugin):\n stripe = models.ForeignKey(Stripe)\n\n\nclass Product(models.Model):\n stripe = models.ForeignKey(Stripe)\n title = models.CharField(_('Product title'), max_length=100, blank=True)\n description = models.TextField(_('Product description'), max_length=255, blank=True, default='')\n image = FilerImageField(null=True, blank=True, related_name=\"product_image\")\n disclaimer = FilerFileField(null=True, blank=True, related_name=\"product_disclaimer\")\n price = models.IntegerField(blank=True)\n\n def __unicode__(self):\n return smart_unicode(self.title)\n\n def copy_relations(self, oldinstance):\n self.sections = 
oldinstance.sections.all()\n\n\nclass ProductPlugin(CMSPlugin):\n product = models.ForeignKey(Product)\n"
},
{
"alpha_fraction": 0.6621004343032837,
"alphanum_fraction": 0.6621004343032837,
"avg_line_length": 21.65517234802246,
"blob_id": "a1ce774d5efb3625db09f36fcfd553ba4d6acd64",
"content_id": "5af545d586890747f42e1db74b41293c7e5745d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 657,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 29,
"path": "/aldryn_stripe_shop/admin.py",
"repo_name": "Lionardo/aldryn-stripe-shop",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom django.db import models\nfrom django import forms\n\nfrom .models import Stripe, Product\n\n\nclass CheckoutAdmin(admin.ModelAdmin):\n class Meta:\n model = Stripe\n\n\nclass ProductAdmin(admin.ModelAdmin):\n model = Product\n #I need to add the ckeditor to the description field\n formfield_overrides = {\n models.TextField: {\n 'widget': forms.Textarea(\n attrs={'class': 'ckeditor'})\n },\n }\n list_display = ['description']\n\n class Media:\n js = ('ckeditor/ckeditor.js',)\n\n\nadmin.site.register(Stripe, CheckoutAdmin)\nadmin.site.register(Product, ProductAdmin)\n"
}
] | 5 |
mariuszcieply/blebox | https://github.com/mariuszcieply/blebox | d39a9f10ea60e5819fd04794c7ccd9ce3d810a21 | c62ee68df9400aa2022c0c069d80346bad156f04 | 43b8b3c3b8863ad8568b9eb97514158713b7d268 | refs/heads/master | 2021-01-13T03:39:51.195068 | 2016-12-23T22:50:45 | 2016-12-23T22:50:45 | 77,254,709 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5378151535987854,
"alphanum_fraction": 0.6071428656578064,
"avg_line_length": 28.75,
"blob_id": "367d582a46313c7a78f6ceaf97922f8dd8b47515",
"content_id": "1029708553234d7c76cce875b051d1f9660bc5c7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 476,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 16,
"path": "/switch.py",
"repo_name": "mariuszcieply/blebox",
"src_encoding": "UTF-8",
"text": "import requests\n\nresp = requests.get('http://192.168.0.23/api/relay/state')\nif resp.status_code != 200:\n # This means something went wrong.\n raise ApiError('GET /tasks/ {}'.format(resp.status_code))\n\nprint(resp.json())\nfor item in resp.json():\n #print('{} {}'.format(item['state'], item['relay']))\n state = item['state']\n print(state)\n if state == 0:\n requests.get('http://192.168.0.23/s/1')\n else:\n requests.get('http://192.168.0.23/s/0')\n"
}
] | 1 |
nkoub/nktools | https://github.com/nkoub/nktools | d97d70451a989c01553d9076163f979ee99822da | b23dd8cbb1bdccb76ae11d9e0584108600b76df6 | 1b8ff6ec98236822e6ffaa397d803fe4f8395e8a | refs/heads/master | 2021-07-13T03:19:10.296736 | 2017-10-16T07:08:24 | 2017-10-16T07:08:24 | 107,090,457 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5365853905677795,
"alphanum_fraction": 0.5630965232849121,
"avg_line_length": 22.024391174316406,
"blob_id": "a37af11d2222c01301380174ca8f29b7549501cf",
"content_id": "ff25aa8c5c41d8c587cac44a96ad65464deb4e94",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 943,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 41,
"path": "/fmod_pi.py",
"repo_name": "nkoub/nktools",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python\n# -*- coding: utf-8 -*-\n# Author : Nikos E Kouvaris\n# Email : [email protected]\n# Date : 07/04/2017\n\n\nfrom __future__ import division, absolute_import\n\n# check for Python verion\nimport sys\nif sys.version_info[:2] < (2, 6):\n raise ImportError(\"Python version 2.6 or later\"\\\n \"is required for multiNetX (%d.%d detected).\" % \n sys.version_info[:2])\ndel sys\n\n__author__ = \"Nikos E. Kouvaris <[email protected]>\"\n__copyright__ = \"Copyright (C) 2017 by Nikos E. Kouvaris <[email protected]>\"\n__license__ = \"GNU GPL\"\n__version__ = \"0.1.\"\n\n\n\ntry:\n from numpy import pi, floor\nexcept ImportError:\n raise ImportError(\"numpy is required\")\n\n\ndef fmod_pi (angle_rad):\n '''Return the modulo of an angle in rad within the regime (-pi,pi)\n Parameters:\n -----------\n angle_rad: float\n \n Returns:\n --------\n A float in the regime (-pi,pi)\n '''\n return angle_rad - 2.0 * pi * floor((angle_rad+pi)/(2.0*pi))"
},
{
"alpha_fraction": 0.4720923900604248,
"alphanum_fraction": 0.49353861808776855,
"avg_line_length": 27.873016357421875,
"blob_id": "d8fcebd1164110fbd237039a5cf7bc016acccd43",
"content_id": "bb77f6f7c31279e307062b743ef61d8afc940466",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3637,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 126,
"path": "/animation_tools.py",
"repo_name": "nkoub/nktools",
"src_encoding": "UTF-8",
"text": "# !/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__author__ = \"Nikos Kouvaris\"\n__email__ = \"[email protected]\"\n__copyright__ = \"Copyright 2013-2014\"\n__license__ = \"GPL\"\n__version__ = \"0.1\"\n__date__ = \"07/01/2014\"\n__update__ = \"18/06/2017\"\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n#\n#\n#\npause = False\ndef onClick(event):\n global pause\n pause ^= True\n#\n#\n#\ndef show_animation(data, ymin=None, ymax=None, \n pos_separate_lines=None,\n title='', axis_labels=False, \n fig_number=1):\n '''Animation (with plot) of the values of a numpy\n array with shape (time,size)\n '''\n import matplotlib.animation as animation\n\n fig = plt.figure(fig_number)\n fig.clf()\n fig.canvas.mpl_connect('button_press_event', onClick)\n ax = fig.add_subplot(111)\n ttl = ax.set_title('')\n line, = ax.plot([],[],\n marker='o',ms=5,mfc='black',mec='black',lw=0,\n linestyle=':',color='gray')\n\n def init():\n if not pause:\n ttl.set_text('')\n line.set_data([], [])\n return line, ttl\n\n def update_line(i):\n if not pause:\n x = np.arange(data.shape[0])\n y = data[:,i]\n ttl.set_text('{}, t={}'.format(title,i))\n line.set_data(x,y)\n return line, ttl\n\n ax.set_xlim(0,data.shape[0])\n ax.set_ylim(ymin,ymax)\n ax2 = ax.twinx()\n if axis_labels:\n ax.set_ylabel(r'module 1: $\\theta_i$')\n ax2.set_ylabel(r'module 2: $\\theta_i$')\n ax2.set_ylim(ymin,ymax)\n \n\n # define the number of separation lines\n # if pos_separate_lines is None:\n # pos_separate_lines = [(data.shape[0]/(1.0*pos_separate_lines))*i\n # for i in range(0,pos_separate_lines)]\n if pos_separate_lines is not None:\n for vertical_line in pos_separate_lines:\n ax.axvline(vertical_line, linewidth=2, color = 'red', alpha=0.7)\n\n anim = animation.FuncAnimation(fig,\n update_line,\n init_func=init,\n frames=data.shape[1],\n interval=100,\n repeat=False,\n blit=False)\n return anim\n###############################################################################\n\n\n\n# if __name__ == 
'__main__':\n # import numpy as np\n # import matplotlib.pylab as plt\n #\n # num_nodes = 1000\n # data = np.loadtxt('/Users/nkoub/Downloads/data.txt')\n # data2 = data[:,2]\n #\n # space_time_x = np.reshape(data2,(len(data2)/num_nodes,num_nodes))\n # nodes_id = np.arange(space_time_x.shape[1])\n #\n # fig = plt.figure()\n # ax = fig.add_subplot(111)\n #\n # def plot_data(t=0):\n # ax.scatter(nodes_id,space_time_x[t],\n # c=space_time_x[t],s=20,\n # cmap=plt.cm.YlOrBr)\n #\n # plt.ion()\n # txtitle = fig.suptitle('time: {}'.format(0))\n #\n # def plot(snap):\n # ax.cla()\n # ax.set_ylim(-1,2)\n # ax.set_xlim(0,num_nodes)\n # plot_data(t=snap)\n # txtitle.set_text('time: {:.1f}'.format(snap))\n # # use the line below to save png files\n # plt.savefig('video/snapshot_{:05d}'.format(t),\n # dpi=300,bbox_inches='tight')\n # # OR\n # # use the line below to see the animation in python\n # # plt.draw()\n #\n # # use the line below to see the animation in python\n # # plt.show()\n # #\n # for t in np.arange(space_time_x.shape[0]):\n # plot(t)\n #\n # print 'ok'"
},
{
"alpha_fraction": 0.6088861227035522,
"alphanum_fraction": 0.6239048838615417,
"avg_line_length": 30.979999542236328,
"blob_id": "f6fdebef6cb9a1d2a311ae4842c56a214b5ba044",
"content_id": "a60996c739346d88014a547f435a6c602232e21b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1598,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 50,
"path": "/__init__.py",
"repo_name": "nkoub/nktools",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python\n# -*- coding: utf-8 -*-\n\n###########################################################################\n# \n# \n#\n# Copyright (C) 2013-2017 by Nikos E. Kouvaris <[email protected]>\n# \n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n########################################################################### \n \n\"\"\"\n\"\"\"\n\nfrom __future__ import division, absolute_import\n\n# check for Python verion\nimport sys\nif sys.version_info[:2] < (2, 6):\n raise ImportError(\"Python version 2.6 or later is required (%d.%d detected).\" % sys.version_info[:2])\ndel sys\n\n__author__ = \"Nikos E. Kouvaris\"\n__copyright__ = \"Copyright (C) 2013-2017 \\\n\t\t\t\tby Nikos E. Kouvaris <[email protected]>\"\n__license__ = \"GNU GPL\"\n\n#\n#import all modules of the packages\n#\nfrom nktools.progress_meter import *\nfrom nktools.fmod_pi import fmod_pi\nfrom nktools.animation_tools import *\nfrom nktools.latex_tools import *\n# from nktools.data_tools import *\n# from nktools.scatter_animation import *\n# from nktools.colormaps import *"
},
{
"alpha_fraction": 0.3967339098453522,
"alphanum_fraction": 0.4332372844219208,
"avg_line_length": 22.133333206176758,
"blob_id": "161ea920d6d9761dce8435ec63ea504e3bb42bb0",
"content_id": "40516c99ca81ef6611622e90955c2d3292931b27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1041,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 45,
"path": "/progress_meter.py",
"repo_name": "nkoub/nktools",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n__author__ = \"Thomas Isele\" \n__copyright__ = \"Copyright 2014\"\n__license__ = \"GPL\"\n__version__ = \"0.1\"\n__date__ = \"09/05/2014\"\n\nimport sys\n\nclass ProgressMeter:\n def __init__(self, i=0, end=None):\n self.i = i\n self.endnum = end\n if type(end) == int:\n self.j = self.endnum / 100.0\n\n def __next(self):\n i = self.i\n if i == 0:\n sys.stdout.write('')\n elif i % 100 == 0 :\n sys.stdout.write(\"%d\\n\"%i)\n elif i % 50 == 0:\n sys.stdout.write('L')\n elif i % 10 == 0:\n sys.stdout.write('|')\n elif i % 5 == 0:\n sys.stdout.write(',')\n else:\n sys.stdout.write('.')\n sys.stdout.flush()\n self.i += 1\n \n def next(self):\n if type(self.endnum)==int:\n self.j += 1\n if ((100*self.j)/self.endnum) > self.i:\n self.__next()\n else:\n self.__next()\n\n def end(self):\n print self.i\n"
},
{
"alpha_fraction": 0.44091224670410156,
"alphanum_fraction": 0.48721492290496826,
"avg_line_length": 38.72222137451172,
"blob_id": "fdee8348f30d7469d1ca493c7addf4d79bd0037d",
"content_id": "ab4986d39e8890989bfe0c2ce277fa6e3f8d1ba9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2894,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 72,
"path": "/latex_tools.py",
"repo_name": "nkoub/nktools",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nPlot Style for use in LaTex\n\"\"\"\n__author__ = \"Nikos E Kouvaris\"\n__date__ = \"26/04/2017\"\n\nimport matplotlib.pyplot as plt\n# from mpl_toolkits.axes_grid import axes_size\n# from mpl_toolkits.axes_grid import Divider\nfrom numpy import sqrt\n####################################################################################################\n#\n####################################################################################################\nfrom matplotlib.ticker import NullFormatter\n\nnullfmt = NullFormatter()\n\ndef empty_axes(axes):\n axes.xaxis.set_major_formatter(nullfmt) \n axes.yaxis.set_major_formatter(nullfmt)\n axes.get_xaxis().set_tick_params(bottom=False,top=False,\n labelbottom=False,labeltop=False)\n axes.get_yaxis().set_tick_params(left=False,right=False,\n labelleft=False,labelright=False)\n\ndef half_the_ticks(ax, x=True, y=True, xs=0, ys=0):\n if x:\n xlim = ax.get_xlim()\n ax.set_xticks(ax.get_xticks(),minor=True)\n ax.set_xticks(ax.get_xticks()[xs::2])\n ax.set_xlim(xlim)\n if y:\n ylim=ax.get_ylim()\n ax.set_yticks(ax.get_yticks(),minor=True)\n ax.set_yticks(ax.get_yticks()[ys::2])\n ax.set_ylim(ylim)\n\n\n## Latex parameters for the figures\n#########################################################################\ndef set_fig_params(fig_width_pt=242.64, style=10):\n fig_width_pt = fig_width_pt # Get this from LaTeX using \\showthe\\columnwidth\n inches_per_pt = 1.0 / 72.27 # Convert pt to inch\n golden_mean = (sqrt(5)-1.0) / 2.0 # Aesthetic ratio\n fig_width = fig_width_pt * inches_per_pt # width in inches\n fig_height = fig_width * golden_mean # height in inches\n fig_size = [fig_width, fig_height]\n\n textsizes = {10: (5 , 7 , 8 , 9 , 10 , 12 , 14 , 17 , 20 , 25),\n 11: (6 , 8 , 9 , 10 , 11 , 12 , 14 , 17 , 20 , 25),\n 12: (6 , 8 , 10, 11 , 12 , 14 , 17 , 20 , 25 , 25),\n 20: (10, 14, 16, 19 , 20 , 24 , 28 , 34 , 40 , 50),\n 24: (12, 16, 20, 22 , 24 , 28 , 34 , 
40 , 50 , 50)}\n\n tiny, scriptsize, footnotesize, small, normal,\\\n large, Large, LARGE, huge, Huge = textsizes[12]\n\n figpars = {'backend':'ps',\n 'axes.labelsize':normal,\n 'text.fontsize': normal,\n 'legend.fontsize':footnotesize,\n 'xtick.labelsize':small,\n 'ytick.labelsize':small,\n 'font.family':'sans-serif',\n 'font.sans-serif':['Helvecia'],\n 'text.usetex':True,\n 'text.latex.preamble':['\\usepackage{amssymb}',\n '\\usepackage{amsmath}'],\n 'figure.figsize': fig_size}\n return figpars \n"
}
] | 5 |
glen77777/CVG | https://github.com/glen77777/CVG | 7034fd559af63fb9274a4b89671cafceffa80c3f | 072437fde8c1fafb67a53d22903a57864d5c6a4f | af8471f1cc89a6b4eaa3b53742d3109343edbd41 | refs/heads/master | 2021-01-07T05:02:32.548110 | 2020-11-13T21:52:27 | 2020-11-13T21:52:27 | 241,586,349 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.38547858595848083,
"alphanum_fraction": 0.5743604898452759,
"avg_line_length": 55.91386413574219,
"blob_id": "8e935a663a3b864a576afcc114c786ad3a37fcba",
"content_id": "7de641b827c7ef9bcd78d2e7ef93c909e2715a3c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 27091,
"license_type": "no_license",
"max_line_length": 451,
"num_lines": 476,
"path": "/COVID-19dailyspreadgridgui.py",
"repo_name": "glen77777/CVG",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\nfrom tkinter import *\nwindow = Tk()\nwindow.title('COVID-19 Spread')\nwindow.resizable(0, 0)\nlabel = Label(window, text = 'Graphical representation of rate of spread by day', fg=\"Green\", font=(\"Helvetica\", 18))\nlabel.grid(row =1, column = 2, columnspan = 9, rowspan = 3)\nday1 = jan21 = 329\nday2 = jan22 = 561\nday3 = jan23 = 672\nday4 = jan24 = 672 + 174 + 3 + 4 + 21 + 1 + 2 + 16 + 6 + 105 + 7 + 4 + 18 + 3 + 15 + 2 + 1 + 1 + 4 + 6 + 1 + 3 + 3 + 2 + 1 + 2 + 5 + 1 + 2 + 2 + 1 + 1 + 1 + 3 + 10 + 3 + 2 + 11 + 180 + 5 + 8 + 1\nday5 = jan25 = jan24 + 175 + 13 + 1 + 6 + 9 + 1 + 1 + 6 + 19 + 25 + 23 + 24 + 30 + 1 + 19 + 5 + 9 + 13 + 3 + 3 + 6 + 1 + 1 + 8 + 10 + 1 + 3 + 1 + 1 + 1 + 1 + 1 + 5 + 6 + 1 + 31 + 10 + 10 + 18 + 1 + 1 + 3 + 292 + 7\nday6 = jan26 = jan25 + 51 + 12 + 181 + 1 + 26 + 21 + 13 + 20 + 6 + 42 + 5 + 5 + 3 + 18 + 16 + 3 + 3 + 3 + 3 + 3 + 7 + 11 + 7 + 1 + 1 + 1 + 1 + 2 + 3 + 13 + 1 + 6 + 1 + 9 + 1 + 5 + 1 + 3 + 2 + 5 + 3 + 1 + 1 + 12 + 2 + 1 + 1 + 4 + 371 + 302 \nday7 = jan27 = jan26 + 10 + 4 + 13 + 45 + 31 + 17 + 14 + 10 + 35 + 35 + 5 + 3 + 2 + 7 + 1 + 24 + 5 + 25 + 2 + 1 + 1 + 13 + 1 + 9 + 3 + 21 + 12 + 1 + 4 + 4 + 1 + 3 + 7 + 5 + 2 + 1 + 1 + 2 + 1 + 3 + 8 + 5 + 7 + 1 + 24 + 1\nday8 = jan28 = jan27 + 13 + 6 + 40 + 12 + 1291 + 2 + 23 + 36 + 22 + 21 + 9 + 5 + 5 + 45 + 15 + 37 + 4 + 2 + 2 + 7 + 43 + 1 + 3 + 14 + 6 + 18 + 2 + 11 + 2 + 8 + 11 + 2 + 2 + 19 + 4 + 7 + 37 + 1 + 7 + 3 + 840 + 1 + 1 + 1 + 3 + 1\nday9 = jan29 = jan28 + 14 + 2 + 7 + 469 + 38 + 3 + 26 + 29 + 46 + 123 + 18 + 78 + 15 + 1 + 7 + 5 + 34 + 3 + 2 + 15 + 1 + 1 + 1 + 10 + 7 + 1 + 4 + 11 + 2 + 1 + 16 + 9 + 2 + 31 + 1 + 4 + 2 + 3 + 1 + 9 + 2 + 1 + 1032 + 53 + 8 + 18 + 536\nday10 = jan30 = jan29 + 3 + 5 + 17 + 20 + 34 + 48 + 15 + 72 + 3 + 30 + 1 + 5 + 56 + 132 + 39 + 6 + 3 + 2 + 15 + 17 + 1 + 7 + 2 + 3 + 5 + 2 + 1 + 1 + 11 + 13 + 1 + 1 + 17 + 43 + 2 + 3 + 1 + 1 + 2 + 1 + 1 + 1 + 1 + 1 + 3 + 1 + 6 + 1 + 317 + 7 + 3 + 2 + 1 + 1 + 2 + 1 + 903 + 4 + 24 + 
622\nday11 = feb1 = jan30 + 1 + 4 + 9 + 16 + 17 + 36 + 78 + 37 + 20 + 39 + 74 + 3 + 109 + 39 + 15 + 3 + 55 + 4 + 3 + 1 + 24 + 2 + 11 + 4 + 7 + 19 + 3 + 4 + 6 + 3 + 5 + 7 + 43 + 14 + 2 + 5 + 1 + 3 + 2 + 1 + 1 + 1 + 2 + 1 + 12 + 6 + 1 + 1 + 1 + 1347 + 1 + 27 + 8\nday12 = feb2 = feb1 + 11 + 8 + 9 + 71 + 3 + 1 + 1 + 1 + 3 + 24 + 19 + 34 + 43 + 69 + 47 + 62 + 15 + 2 + 74 + 8 + 2 + 1 + 7 + 5 + 15 + 15 + 15 + 4 + 6 + 6 + 28 + 8 + 5 + 5 + 13 + 2 + 6 + 1 + 1 + 11 + 1 + 2103 + 10 + 25 + 593 + 11\nday13 = feb3 = feb2 + 16 + 9 + 8 + 2 + 68 + 73 + 23 + 8 + 35 + 16 + 3 + 63 + 59 + 23 + 51 + 58 + 2 + 3 + 3 + 12 + 6 + 7 + 20 + 1 + 2 + 3 + 10 + 4 + 1 + 13 + 12 + 5 + 42 + 24 + 1 + 4 + 4 + 1 + 1 + 1 + 2345 + 8 + 25 + 750 + 5 + 13\nday14 = feb4 = feb3 + 11 + 12 + 72 + 37 + 10 + 109 + 7 + 11 + 37 + 28 + 5 + 105 + 2 + 85 + 72 + 72 + 3 + 14 + 1 + 1 + 1 + 1 + 1 + 6 + 15 + 1 + 5 + 11 + 5 + 1 + 3 + 2 + 7 + 1 + 16 + 1 + 2 + 6 + 2 + 6 + 1 + 16 + 1 + 1 + 4 + 2 + 1 + 1 + 1 + 1 + 1 + 3156 + 10 + 675 + 3 + 7 + 22 + 14\nday15 = feb5 = feb4 + 11 + 9 + 12 + 2 + 19 + 23 + 89 + 50 + 68 + 2 + 33 + 6 + 35 + 66 + 72 + 7 + 57 + 3 + 11 + 11 + 23 + 2 + 1 + 25 + 10 + 3 + 9 + 2 + 1 + 2 + 4 + 7 + 10 + 1 + 1 + 1 + 1 + 1 + 1 + 10 + 2987 + 4 + 660 + 4 + 9 + 13 + 11 + 10\nday16 = feb6 = feb5 + 22 + 5 + 36 + 18 + 87 + 5 + 20 + 61 + 4 + 50 + 32 + 37 + 6 + 1 + 59 + 52 + 74 + 6 + 4 + 8 + 21 + 10 + 2 + 1 + 2 + 2 + 2 + 5 + 26 + 11 + 1 + 1 + 9 + 3 + 4 + 2 + 1 + 1 + 3 + 2 + 2 + 3 + 1 + 5 + 6 + 1 + 2447 + 2 + 1 + 2 + 11 + 12\nday17 = feb7 = feb6 + 625 + 32 + 4 + 6 + 14 + 1 + 23 + 63 + 41 + 74 + 35 + 11 + 52 + 61 + 61 + 48 + 2 + 3 + 3 + 3 + 4 + 11 + 23 + 9 + 6 + 8 + 50 + 1 + 7 + 1 + 3 + 1 + 16 + 4 + 1 + 1 + 4 + 1 + 1 + 3 + 4 + 1 + 8 + 1 + 2841 + 11 + 2 + 4 + 2\nday18 = feb8 = feb7 + 498 + 24 + 11 + 4 + 3 + 21 + 19 + 31 + 31 + 68 + 18 + 67 + 42 + 37 + 8 + 41 + 9 + 2 + 11 + 7 + 18 + 2 + 15 + 3 + 1 + 1 + 7 + 5 + 4 + 9 + 2 + 20 + 5 + 1 + 1 + 8 + 7 + 12 + 2147 + 18 + 6\nday19 = feb9 = feb8 + 441 + 11 + 2 + 9 + 12 + 26 + 23 + 53 
+ 35 + 29 + 7 + 27 + 1 + 46 + 3 + 42 + 4 + 25 + 19 + 2 + 13 + 1 + 11 + 11 + 1 + 2 + 1 + 1 + 6 + 1 + 9 + 2 + 1 + 1 + 1 + 3 + 7 + 3 + 2531 + 419\nday20 = feb10 = feb9 + 1 + 65 + 4 + 2 + 2 + 4 + 1 + 2097\nday21 = feb11 = feb10 + 370 + 1 + 1 + 2 + 2 + 7 + 2 + 1 + 1638 + 39\nday22 = feb12 = feb11 + 377 + 1 + 3 + 1 + 1 + 14840\nday23 = feb13 = feb12 + 1 + 44 + 1 + 312 + 1 + 1 + 1 + 8 + 1 + 1 + 3 + 4823\nday24 = feb14 = feb13 + 1 + 267 + 1 + 4 + 3 + 3 + 2 + 9 + 1 + 1 + 1 + 1 + 1 + 2420 + 5 + 7 + 8\nday25 = feb15 = feb14 + 193 + 33 + 28 + 7 + 13 + 16 + 13 + 11 + 7 + 1 + 1 + 3 + 2 + 1 + 2 + 67 + 8 + 1 + 1 + 2 + 1 + 5 + 1 + 1 + 1843 + 6\nday26 = feb16 = feb15 + 5 + 19 + 11 + 20 + 100 + 1 + 1 + 2 + 22 + 5 + 3 + 12 + 12 + 70 + 13 + 5 + 3 + 2 + 1 + 5 + 1 + 2 + 3 + 1 + 1933 + 4 + 3 + 12\nday27 = feb17 = feb16 + 4 + 15 + 14 + 11 + 1 + 2 + 34 + 9 + 1 + 5 + 4 + 6 + 1 + 1 + 1 + 1 + 14 + 4 + 85 + 2 + 3 + 1 + 2 + 1 + 1 + 1807 + 1 + 77\nday28 = feb18 = feb17 + 9 + 1 + 1 + 3 + 6 + 1 + 11 + 3 + 2 + 13 + 7 + 1 + 6 + 2 + 1 + 1 + 88 + 4 + 3 + 1 + 3 + 1 + 1 + 1 + 1 + 1 + 1693 + 2 + 4\nday29 = feb19 = feb18 + 47 + 1 + 6 + 2 + 5 + 1 + 1 + 6 + 1 + 2 + 4 + 6 + 2 + 15 + 3 + 1 + 1 + 1 + 1 + 2 + 2 + 1 + 1 + 4 + 1 + 1 + 79 + 1 + 2 + 1 + 3 + 3 + 1 + 1 + 1 + 2 + 2 + 2 + 2 + 349 + 5\nday30 = feb20 = feb19 + 24 + 1 + 4 + 2 + 2 + 1 + 1 + 5 + 2 + 6 + 6 + 2 + 1 + 1 + 1 + 3 + 5 + 2 + 1 + 1 + 1 + 1 + 2 + 5 + 17 + 1 + 3 + 1 + 1 + 1 + 1 + 1 + 13 + 1 + 1 + 1 + 1 + 1 + 1 + 2 + 411 + 1 + 4 + 1 + 4 + 2\nday31 = feb21 = feb20 + 1 + 45 + 1 + 477 + 1 + 2 + 28 + 1 + 1 + 7 + 5 + 3 + 1 + 1 + 1 + 3 + 202 + 1 + 3 + 2 + 48 + 1 + 1 + 2 + 13 + 1 + 1 + 1 + 1 + 1 + 3 + 3 + 1 + 2 + 1 + 1 + 2 + 2 + 426 + 8 + 2 + 18 + 1 + 1 + 1 + 1 + 1 + 2 + 1\nday32 = feb22 = feb21 + 366 + 31 + 1 + 142 + 1 + 2 + 1 + 1 + 87 + 10 + 1 + 9 + 1 + 4 + 1 + 1 + 1 + 4 + 3 + 2 + 33 + 1 + 1 + 1 + 7 + 1 + 16 + 630 + 3\nday33 = feb23 = feb22 + 25 + 95 + 18 + 1 + 2 + 1 + 46 + 1 + 4 + 34 + 2 + 15 + 8 + 1 + 16 + 1 + 57 + 1 + 4 + 1 + 23 + 1 + 1\nday34 = feb24 = 
feb23 + 161 + 409 + 1 + 3 + 1 + 1 + 70 + 4 + 38 + 1 + 7 + 2 + 1 + 19 + 1 + 7 + 3 + 4 + 2 + 1 + 12 + 1 + 18 + 5 + 14 + 1 + 2 + 1 + 1\nday35 = feb25 = feb24 + 499 + 9 + 60 + 2 + 1 + 1 + 2 + 1 + 1 + 2 + 3 + 6 + 1 + 84 + 1 + 14 + 4 + 34 + 28 + 1 + 2 + 9 + 1 + 1 + 1 + 2 + 4 + 1 + 6 + 1 + 39 + 1 + 4 + 2 + 1 + 1 + 1 + 1 + 1 + 1 + 2 + 2 + 2 + 10\nday36 = feb26 = feb25 + 169 + 401 + 5 + 1 + 3 + 1 + 1 + 3 + 1 + 1 + 115 + 1 + 32 + 3 + 6 + 44 + 1 + 4 + 1 + 1 + 7 + 1 + 1 + 14 + 17 + 1 + 1 + 1 + 2 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 27 + 3 + 1 + 7 + 1 + 1 + 54 + 7 + 1 + 17\nday37 = feb27 = feb26 + 334 + 433 + 1 + 1 + 1 + 1 + 1 + 17 + 3 + 1 + 1 + 171 + 3 + 13 + 2 + 2 + 1 + 104 + 1 + 2 + 2 + 75 + 1 + 3 + 1 + 1 + 2 + 1 + 1 + 6 + 1 + 1 + 127 + 14 + 4 + 5 + 20 + 1 + 6 + 3 + 2 + 1 + 1 + 1 + 1 + 1 + 1 + 7 + 1 + 2\nday38 = feb28 = feb27 + 327 + 1 + 256 + 1 + 1 + 1 + 1 + 1 + 1 + 2 + 1 + 2 + 2 + 315 + 2 + 2 + 12 + 1 + 1 + 1 + 1 + 1 + 2 + 1 + 143 + 1 + 1 + 6 + 3 + 2 + 1 + 2 + 1 + 233 + 1 + 1 + 2 + 2 + 1 + 1 + 16 + 2 + 4 + 1 + 1 + 1 + 2 + 1 + 1 + 7 + 3 + 2 + 1 + 1 + 1 + 1 + 1 \nday39 = feb29 = feb28 + 1 + 423 + 4 + 594 + 1 + 1 + 1 + 1 + 1 + 1 + 5 + 1 + 219 + 8 + 4 + 205 + 1 + 3 + 3 + 1 + 1 + 1 + 1 + 1 + 2 + 4 + 16 + 3 + 2 + 1 + 3 + 1 + 2 + 5 + 8 + 10 + 239 + 3 + 1 + 2 + 27 + 1 + 3 + 1 + 3 + 1 + 1 + 1 + 570 + 3\nday40 = mar1 = feb29 + 1 + 376 + 1 + 4 + 1 + 210 + 2 + 1 + 1 + 5 + 1 + 385 + 1 + 38 + 12 + 4 + 1 + 3 + 3 + 5 + 3 + 1 + 1 + 18 + 2 + 6 + 6 + 1 + 6 + 4 + 566 + 1 + 2 + 30 + 1 + 1 + 1 + 15 + 8 + 3 + 12 + 1 + 1 + 5 + 2 + 1 + 1 + 4 + 2 + 7\nday41 = mar2 = mar1 + 2 + 3 + 196 + 6 + 1 + 4 + 476 + 1 + 1 + 2 + 1 + 2 + 1 + 1 + 123 + 10 + 1 + 2 + 1 + 20 + 2 + 2 + 2 + 523 + 6 + 1 + 1 + 18 + 8 + 2 + 36 + 1 + 4 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 5 + 4 + 340 + 2 + 6 + 2 + 2 + 6 + 2 + 1 + 1 + 1 + 1 + 1 + 3 + 61 + 3 + 2 + 4 + 1 + 7 + 2 + 6 + 3 + 1 + 1 + 2 + 1 + 3 + 1\nday42 = mar3 = mar2 + 114 + 11 + 477 + 1 + 1 + 1 + 1 + 2 + 374 + 1 + 1 + 835 + 6 + 2 + 31 + 10 + 6 + 7 + 466 + 1 + 21 + 5 
+ 1 + 19 + 31 + 7 + 1 + 1 + 1\nitday1 = ijan30 = 2\nitday2 = ifeb6 = itday1 + 1\nitday3 = ifeb20 = itday2 + 1\nitday4 = ifeb21 = itday3 + 2 + 3 + 8 + 2 + 1\nitday5 = ifeb22 = itday4 + 1 + 33 + 1 + 7 + 1 + 16\nitday6 = ifeb23 = itday5 + 34 + 16 + 1 + 23\nitday7 = ifeb24 = itday6 + 1 + 38 + 7 + 19 + 3 + 4 + 5 + 1\nitday8 = ifeb25 = itday7 + 2 + 14 + 38 + 39 + 2 + 2\nitday9 = ifeb26 = itday8 + 32 + 17 + 27 + 54\nitday10 = ifeb27 = itday9 + 75 + 127\nitday11 = ifeb28 = itday10 + 233 + 1\nitday12 = ifeb29 = itday11 + 239\nitday13 = imar1 = itday12 + 566\nitday14 = imar2 = itday13 + 2 + 340\nitday15 = imar3 = itday14 + 466\nit1i = itday2 / itday1\nit2i = itday3 / itday2\nit3i = itday4 / itday3\nit4i = itday5 / itday4\nit5i = itday6 / itday5\nit6i = itday7 / itday6\nit7i = itday8 / itday7\nit8i = itday9 / itday8\nit9i = itday10 / itday9\nit10i = itday11 / itday10\nit11i = itday12 / itday11\nit12i = itday13 / itday12\nit13i = itday14 / itday13\nit14i = itday15 / itday14\nitav = (it1i + it2i + it3i + it4i + it5i + it6i + it7i + it8i + it9i + it10i + it11i + it12i + it13i + it14i)/14\nit1ia = (it1i + itav)/2\nit2ia = (it2i + itav)/2\nit3ia = (it3i + itav)/2\nit4ia = (it4i + itav)/2\nit5ia = (it5i + itav)/2\nit6ia = (it6i + itav)/2\nit7ia = (it7i + itav)/2\nit8ia = (it8i + itav)/2\nit9ia = (it9i + itav)/2\nit10ia = (it10i + itav)/2\nit11ia = (it11i + itav)/2\nit12ia = (it12i + itav)/2\nit13ia = (it13i + itav)/2\nit14ia = (it14i + itav)/2\nkorday1 = kjan24 = 1\nkorday2 = kjan26 = korday1 + 1\nkorday3 = kjan27 = korday2 + 1\nkorday4 = kjan30 = korday3 + 2\nkorday5 = kjan31 = korday4 + 1 + 4\nkorday6 = kfeb1 = korday5 + 1\nkorday7 = kfeb2 = korday6 + 3\nkorday8 = kfeb4 = korday7 + 1\nkorday9 = kfeb5 = korday8 + 2 + 1 + 4\nkorday10 = kfeb7 = korday9 + 1\nkorday11 = kfeb9 = korday10 + 1 + 2\nkorday12 = kfeb11 = korday11 + 1\nkorday13 = kfeb16 = korday12 + 1\nkorday14 = kfeb17 = korday13 + 1\nkorday15 = kfeb18 = korday14 + 1\nkorday16 = kfeb19 = korday15 + 15 + 1 + 4 + 2 + 
5\nkorday17 = kfeb20 = korday16 + 24 + 5 + 17 + 1 + 2 + 1 + 4\nkorday18 = kfeb21 = korday17 + 45 + 48 + 1 + 1 + 2\nkorday19 = kfeb22 = korday18 + 142 + 87 + 3\nkorday20 = kfeb23 = korday19 + 25 + 95 + 46\nkorday21 = kfeb24 = korday20 + 161 + 70\nkorday22 = kfeb25 = korday21 + 60 + 84\nkorday23 = kfeb26 = korday22 + 169 + 115\nkorday24 = kfeb27 = korday23 + 334 + 171\nkorday25 = kfeb28 = korday24 + 256 + 315\nkorday26 = kfeb29 = korday25 + 594 + 219\nkorday27 = kmar1 = korday26 + 376 + 210\nkorday28 = kmar2 = korday27 + 476 + 123\nkorday29 = kmar3 = korday28 + 477 + 374\nk1i = korday2 / korday1\nk2i = korday3 / korday2\nk3i = korday4 / korday3\nk4i = korday5 / korday4\nk5i = korday6 / korday5\nk6i = korday7 / korday6\nk7i = korday8 / korday7\nk8i = korday9 / korday8\nk9i = korday10 / korday9\nk10i = korday11 / korday10\nk11i = korday12 / korday11\nk12i = korday13 / korday12\nk13i = korday14 / korday13\nk14i = korday15 / korday14\nk15i = korday16 / korday15\nk16i = korday17 / korday16\nk17i = korday18 / korday17\nk18i = korday19 / korday18\nk19i = korday20 / korday19\nk20i = korday21 / korday20\nk21i = korday22 / korday21\nk22i = korday23 / korday22\nk23i = korday24 / korday23\nk24i = korday25 / korday24\nk25i = korday26 / korday25\nk26i = korday27 / korday26\nk27i = korday28 / korday27\nk28i = korday29 / korday28\nkav = (k1i + k2i + k3i + k4i + k5i + k6i + k7i + k8i + k9i + k10i + k11i + k12i + k13i + k14i + k15i + k16i + k17i + k18i + k19i + k20i + k21i + k22i + k23i + k24i + k25i + k26i + k27i + k28i)/28\nk1ia = (k1i + kav)/2\nk2ia = (k2i + kav)/2\nk3ia = (k3i + kav)/2\nk4ia = (k4i + kav)/2\nk5ia = (k5i + kav)/2\nk6ia = (k6i + kav)/2\nk7ia = (k7i + kav)/2\nk8ia = (k8i + kav)/2\nk9ia = (k9i + kav)/2\nk10ia = (k10i + kav)/2\nk11ia = (k11i + kav)/2\nk12ia = (k12i + kav)/2\nk13ia = (k13i + kav)/2\nk14ia = (k14i + kav)/2\nk15ia = (k15i + kav)/2\nk16ia = (k16i + kav)/2\nk17ia = (k17i + kav)/2\nk18ia = (k18i + kav)/2\nk19ia = (k19i + kav)/2\nk20ia = (k20i + kav)/2\nk21ia 
= (k21i + kav)/2\nk22ia = (k22i + kav)/2\nk23ia = (k23i + kav)/2\nk24ia = (k24i + kav)/2\nk25ia = (k25i + kav)/2\nk26ia = (k26i + kav)/2\nk27ia = (k27i + kav)/2\nk28ia = (k28i + kav)/2\nday2inc = day2 / day1\nday3inc = day3 / day2\nday4inc = day4 / day3\nday5inc = day5 / day4\nday6inc = day6 / day5\nday7inc = day7 / day6\nday8inc = day8 / day7\nday9inc = day9 / day8\nday10inc = day10 / day9\nday11inc = day11 / day10\nday12inc = day12 / day11\nday13inc = day13 / day12\nday14inc = day14 / day13\nday15inc = day15 / day14\nday16inc = day16 / day15\nday17inc = day17 / day16\nday18inc = day18 / day17\nday19inc = day19 / day18\nday20inc = day20 / day19\nday21inc = day21 / day20\nday22inc = day22 / day21\nday23inc = day23 / day22\nday24inc = day24 / day23\nday25inc = day25 / day24\nday26inc = day26 / day25\nday27inc = day27 / day26\nday28inc = day28 / day27\nday29inc = day29 / day28\nday30inc = day30 / day29\nday31inc = day31 / day30\nday32inc = day32 / day31\nday33inc = day33 / day32\nday34inc = day34 / day33\nday35inc = day35 / day34\nday36inc = day36 / day35\nday37inc = day37 / day36\nday38inc = day38 / day37\nday39inc = day39 / day38\nday40inc = day40 / day39\nday41inc = day41 / day40\nday42inc = day42 / day41\naddedinc = day2inc + day3inc + day4inc + day5inc + day6inc + day7inc + day8inc + day9inc + day10inc + day11inc + day12inc + day13inc + day14inc + day15inc + day16inc + day17inc + day18inc + day19inc + day20inc + day21inc + day22inc + day23inc + day24inc + day25inc + day26inc + day27inc + day28inc + day29inc + day30inc + day31inc + day32inc + day33inc + day34inc + day35inc + day36inc + day37inc + day38inc + day39inc + day40inc + day41inc + day42inc\navinc = addedinc / 41\nadjinc2 = ((avinc+day2inc) / 2)\nadjinc3 = ((avinc+day3inc) / 2)\nadjinc4 = ((avinc+day4inc) / 2)\nadjinc5 = ((avinc+day5inc) / 2)\nadjinc6 = ((avinc+day6inc) / 2)\nadjinc7 = ((avinc+day7inc) / 2)\nadjinc8 = ((avinc+day8inc) / 2)\nadjinc9 = ((avinc+day9inc) / 2)\nadjinc10 = ((avinc+day10inc) / 
2)\nadjinc11 = ((avinc+day11inc) / 2)\nadjinc12 = ((avinc+day12inc) / 2)\nadjinc13 = ((avinc+day13inc) / 2)\nadjinc14 = ((avinc+day14inc) / 2)\nadjinc15 = ((avinc+day15inc) / 2)\nadjinc16 = ((avinc+day16inc) / 2)\nadjinc17 = ((avinc+day17inc) / 2)\nadjinc18 = ((avinc+day18inc) / 2)\nadjinc19 = ((avinc+day19inc) / 2)\nadjinc20 = ((avinc+day20inc) / 2)\nadjinc21 = ((avinc+day21inc) / 2)\nadjinc22 = ((avinc+day22inc) / 2)\nadjinc23 = ((avinc+day23inc) / 2)\nadjinc24 = ((avinc+day24inc) / 2)\nadjinc25 = ((avinc+day25inc) / 2)\nadjinc26 = ((avinc+day26inc) / 2)\nadjinc27 = ((avinc+day27inc) / 2)\nadjinc28 = ((avinc+day28inc) / 2)\nadjinc29 = ((avinc+day29inc) / 2)\nadjinc30 = ((avinc+day30inc) / 2)\nadjinc31 = ((avinc+day31inc) / 2)\nadjinc32 = ((avinc+day32inc) / 2)\nadjinc33 = ((avinc+day33inc) / 2)\nadjinc34 = ((avinc+day34inc) / 2)\nadjinc35 = ((avinc+day35inc) / 2)\nadjinc36 = ((avinc+day36inc) / 2)\nadjinc37 = ((avinc+day37inc) / 2)\nadjinc38 = ((avinc+day38inc) / 2)\nadjinc39 = ((avinc+day39inc) / 2)\nadjinc40 = ((avinc+day40inc) / 2)\nadjinc41 = ((avinc+day41inc) / 2)\nadjinc42 = ((avinc+day42inc) / 2)\nadj2inc2 = ((avinc+avinc+day2inc) / 3)\nadj2inc3 = ((avinc+avinc+day3inc) / 3)\nadj2inc4 = ((avinc+avinc+day4inc) / 3)\nadj2inc5 = ((avinc+avinc+day5inc) / 3)\nadj2inc6 = ((avinc+avinc+day6inc) / 3)\nadj2inc7 = ((avinc+avinc+day7inc) / 3)\nadj2inc8 = ((avinc+avinc+day8inc) / 3)\nadj2inc9 = ((avinc+avinc+day9inc) / 3)\nadj2inc10 = ((avinc+avinc+day10inc) / 3)\nadj2inc11 = ((avinc+avinc+day11inc) / 3)\nadj2inc12 = ((avinc+avinc+day12inc) / 3)\nadj2inc13 = ((avinc+avinc+day13inc) / 3)\nadj2inc14 = ((avinc+avinc+day14inc) / 3)\nadj2inc15 = ((avinc+avinc+day15inc) / 3)\nadj2inc16 = ((avinc+avinc+day16inc) / 3)\nadj2inc17 = ((avinc+avinc+day17inc) / 3)\nadj2inc18 = ((avinc+avinc+day18inc) / 3)\nadj2inc19 = ((avinc+avinc+day19inc) / 3)\nadj2inc20 = ((avinc+avinc+day20inc) / 3)\nadj2inc21 = ((avinc+avinc+day21inc) / 3)\nadj2inc22 = ((avinc+avinc+day22inc) / 3)\nadj2inc23 
= ((avinc+avinc+day23inc) / 3)\nadj2inc24 = ((avinc+avinc+day24inc) / 3)\nadj2inc25 = ((avinc+avinc+day25inc) / 3)\nadj2inc26 = ((avinc+avinc+day26inc) / 3)\nadj2inc27 = ((avinc+avinc+day27inc) / 3)\nadj2inc28 = ((avinc+avinc+day28inc) / 3)\nadj2inc29 = ((avinc+avinc+day29inc) / 3)\nadj2inc30 = ((avinc+avinc+day30inc) / 3)\nadj2inc31 = ((avinc+avinc+day31inc) / 3)\nadj2inc32 = ((avinc+avinc+day32inc) / 3)\nadj2inc33 = ((avinc+avinc+day33inc) / 3)\nadj2inc34 = ((avinc+avinc+day34inc) / 3)\nadj2inc35 = ((avinc+avinc+day35inc) / 3)\nadj2inc36 = ((avinc+avinc+day36inc) / 3)\nadj2inc37 = ((avinc+avinc+day37inc) / 3)\nadj2inc38 = ((avinc+avinc+day38inc) / 3)\nadj2inc39 = ((avinc+avinc+day39inc) / 3)\nadj2inc40 = ((avinc+avinc+day40inc) / 3)\nadj2inc41 = ((avinc+avinc+day41inc) / 3)\nadj2inc42 = ((avinc+avinc+day42inc) / 3)\nprint('Day 1 (Jan 21) reported infections:' , day1)\nprint('Day 2 (Jan 22) increase' , day2inc)\nprint('Day 3 (Jan 23) increase' , day3inc)\nprint('Day 4 (Jan 24) increase' , day4inc)\nprint('Day 5 (Jan 25) increase' , day5inc)\nprint('Day 6 (Jan 26) increase' , day6inc)\nprint('Day 7 (Jan 27) increase' , day7inc)\nprint('Day 8 (Jan 28) increase' , day8inc)\nprint('Day 9 (Jan 29) increase' , day9inc)\nprint('Day 10 (Jan 30) increase' , day10inc)\nprint('Day 11 (Feb 1) increase' , day11inc)\nprint('Day 12 (Feb 2) increase' , day12inc)\nprint('Day 13 (Feb 3) increase' , day13inc)\nprint('Day 14 (Feb 4) increase' , day14inc)\nprint('Day 15 (Feb 5) increase' , day15inc)\nprint('Day 16 (Feb 6) increase' , day16inc)\nprint('Day 17 (Feb 7) increase' , day17inc)\nprint('Day 18 (Feb 8) increase' , day18inc)\nprint('Day 19 (Feb 9) increase' , day19inc)\nprint('Day 20 (Feb 10) increase' , day20inc)\nprint('Day 21 (Feb 11) increase' , day21inc)\nprint('Day 22 (Feb 12) increase' , day22inc)\nprint('Day 23 (Feb 13) increase' , day23inc)\nprint('Day 24 (Feb 14) increase' , day24inc)\nprint('Day 25 (Feb 15) increase' , day25inc)\nprint('Day 26 (Feb 16) increase' , 
day26inc)\nprint('Day 27 (Feb 17) increase' , day27inc)\nprint('Day 28 (Feb 18) increase' , day28inc)\nprint('Day 29 (Feb 19) increase' , day29inc)\nprint('Day 30 (Feb 20) increase' , day30inc)\nprint('Day 31 (Feb 21) increase' , day31inc)\nprint('Day 32 (Feb 22) increase' , day32inc)\nprint('Day 33 (Feb 23) increase' , day33inc)\nprint('Day 34 (Feb 24) increase' , day34inc)\nprint('Day 35 (Feb 25) increase' , day35inc)\nprint('Day 36 (Feb 26) increase' , day36inc)\nprint('Day 37 (Feb 27) increase' , day37inc)\nprint('Day 38 (Feb 28) increase' , day38inc)\nprint('Day 39 (Feb 29) increase' , day39inc)\ny = [day2inc,day3inc,day4inc,day5inc,day6inc,day7inc,day8inc,day9inc,day10inc,day11inc,day12inc,day13inc,day14inc,day15inc,day16inc,day17inc,day18inc,day19inc,day20inc,day21inc,day22inc,day23inc,day24inc,day25inc,day26inc,day27inc,day28inc,day29inc,day30inc,day31inc,day32inc,day33inc,day34inc,day35inc,day36inc,day37inc,day38inc,day39inc,day40inc,day41inc,day42inc]\ny2 = [adjinc2,adjinc3,adjinc4,adjinc5,adjinc6,adjinc7,adjinc8,adjinc9,adjinc10,adjinc11,adjinc12,adjinc13,adjinc14,adjinc15,adjinc16,adjinc17,adjinc18,adjinc19,adjinc20,adjinc21,adjinc22,adjinc23,adjinc24,adjinc25,adjinc26,adjinc27,adjinc28,adjinc29,adjinc30,adjinc31,adjinc32,adjinc33,adjinc34,adjinc35,adjinc36,adjinc37,adjinc38,adjinc39,adjinc40,adjinc41,adjinc42]\ny3 = [adj2inc2,adj2inc3,adj2inc4,adj2inc5,adj2inc6,adj2inc7,adj2inc8,adj2inc9,adj2inc10,adj2inc11,adj2inc12,adj2inc13,adj2inc14,adj2inc15,adj2inc16,adj2inc17,adj2inc18,adj2inc19,adj2inc20,adj2inc21,adj2inc22,adj2inc23,adj2inc24,adj2inc25,adj2inc26,adj2inc27,adj2inc28,adj2inc29,adj2inc30,adj2inc31,adj2inc32,adj2inc33,adj2inc34,adj2inc35,adj2inc36,adj2inc37,adj2inc38,adj2inc39,adj2inc40,adj2inc41,adj2inc42]\ny4 = [k1i,k2i,k3i,k4i,k5i,k6i,k7i,k8i,k9i,k10i,k11i,k12i,k13i,k14i,k15i,k16i,k17i,k18i,k19i,k20i,k21i,k22i,k23i,k24i,k25i,k26i,k27i,k28i]\ny7 = 
[k1ia,k2ia,k3ia,k4ia,k5ia,k6ia,k7ia,k8ia,k9ia,k10ia,k11ia,k12ia,k13ia,k14ia,k15ia,k16ia,k17ia,k18ia,k19ia,k20ia,k21ia,k22ia,k23ia,k24ia,k25ia,k26ia,k27ia,k28ia]\ny5 = [it1i,it2i,it3i,it4i,it5i,it6i,it7i,it8i,it9i,it10i,it11i,it12i,it13i,it14i]\ny6 = [it1ia,it2ia,it3ia,it4ia,it5ia,it6ia,it7ia,it8ia,it9ia,it10ia,it11ia,it12ia,it13ia,it14ia]\nx = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]\nx2 = [4,5,8,9,10,11,13,14,16,18,20,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]\nx3 = [15,29,30,31,32,33,34,35,36,37,38,39,40,41]\nmaxinc = max(y)\nmininc = min(y)\np1=('World-Lowest delta for spread of infection:' , mininc)\np2=('World-Average delta for spread of infection:' , avinc)\np3=('World-Highest delta for spread of infection:', maxinc)\np4=('Korea-Lowest delta for spread of infection:' , min(y4))\np5=('Korea-Average delta for spread of infection:' , kav)\np6=('Korea-Highest delta for spread of infection:', max(y4))\np7=('Italy-Lowest delta for spread of infection:' , min(y5))\np8=('Italy-Average delta for spread of infection:' , itav)\np9=('Italy-Highest delta for spread of infection:', max(y5))\npeople = 1\nr1day = 0\nwhile ( people < 7500000000):\n r1day = r1day + 1\n people = people * mininc\nr1=('at' , 'the' , 'minimum' , 'growth' , 'rate' , 'of' , mininc)\nr2=('time' , 'elapsed' , 'to' , 'spread' , 'to' , '7.5' , 'billion' , 'is' , r1day + 1 , 'days' , '=' , r1day / 365 , 'years')\npeople = 1\nr2day = 0\nwhile ( people < 7500000000):\n r2day = r2day + 1\n people = people * avinc\nr3=('at' , 'the' , 'average' , 'growth' , 'rate' , 'of' , avinc)\nr4=('time' , 'elapsed' , 'to' , 'spread' , 'to' , '7.5' , 'billion' , 'is' , r2day + 1 , 'days' , '=' , r2day / 30 , 'months')\npeople = 1\nr3day = 0\nwhile ( people < 7500000000):\n r3day = r3day + 1\n people = people * maxinc\nr5=('at' , 'the' , 'maximum' , 'growth' , 'rate' , 'of' , maxinc)\nr6=('time' , 'elapsed' , 'to' , 'spread' , 'to' , '7.5' 
, 'billion' , 'is' , r3day + 1 , 'days' , '=' , r3day / 30 , 'months')\nvar_1 = IntVar()\nvar_2 = IntVar()\nvar_3 = IntVar()\nvar_4 = IntVar()\nvar_5 = IntVar()\nvar_6 = IntVar()\nvar_7 = IntVar()\nline_1 = Checkbutton(window, text = 'world', fg=\"Green\", font=(\"Helvetica\", 10), variable = var_1, onvalue = 1, offvalue = 0)\nline_2 = Checkbutton(window, text = 'world-normalized', fg=\"Green\", font=(\"Helvetica\", 10), variable = var_2, onvalue = 1, offvalue = 0)\nline_3 = Checkbutton(window, text = 'world-extreme normalization', fg=\"Green\", font=(\"Helvetica\", 10), variable = var_3, onvalue = 1, offvalue = 0)\nline_4 = Checkbutton(window, text = 'korea', fg=\"Green\", font=(\"Helvetica\", 10), variable = var_4, onvalue = 1, offvalue = 0)\nline_5 = Checkbutton(window, text = 'korea-normalized', fg=\"Green\", font=(\"Helvetica\", 10), variable = var_5, onvalue = 1, offvalue = 0)\nline_6 = Checkbutton(window, text = 'italy', fg=\"Green\", font=(\"Helvetica\", 10), variable = var_6, onvalue = 1, offvalue = 0)\nline_7 = Checkbutton(window, text = 'italy-normalized', fg=\"Green\", font=(\"Helvetica\", 10), variable = var_7, onvalue = 1, offvalue = 0)\ndef dialog():\n if var_1.get()==1:plt.plot(x, y, label = \"reported values - world (BNONEWS)\")\n if var_2.get()==1:plt.plot(x, y2, label = \"normalized in respect to average change - world\")\n if var_3.get()==1:plt.plot(x, y3, label = \"world-extreme normalization\")\n if var_4.get()==1:plt.plot(x2, y4, label = \"korea (BNONEWS)\")\n if var_5.get()==1:plt.plot(x2, y7, label = \"korea-normalized\")\n if var_6.get()==1:plt.plot(x3, y5, label = \"italy (BNONEWS)\")\n if var_7.get()==1:plt.plot(x3, y6, label = \"italy-normalized\")\n plt.ylabel('rate of increase in respect to previous day')\n plt.xlabel('days (Jan21-Mar3)')\n plt.title('Rate of COVID-19 spread')\n plt.legend(loc=\"upper left\")\n plt.show()\nbtn = Button(window, text = 'graph selected lines', fg=\"Green\", font=(\"Helvetica\", 12), command = 
dialog)\nrd=('WITH' , 'NO' , 'OTHER' , 'FACTORS' , 'CONSIDERED', '(just' , 'spread' , 'alone):')\nbtn.grid(row = 8, column = 4, columnspan = 5)\nlabel2 = Label(window, text = p1, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel2.grid(row = 12, column = 1, columnspan = 5)\nlabel3 = Label(window, text = p2, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel3.grid(row = 13, column = 1, columnspan = 5)\nlabel4 = Label(window, text = p3, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel4.grid(row = 14, column = 1, columnspan = 5)\nlabel5 = Label(window, text = p4, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel5.grid(row = 15, column = 1, columnspan = 5)\nlabel6 = Label(window, text = p5, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel6.grid(row = 16, column = 1, columnspan = 5)\nlabel7 = Label(window, text = p6, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel7.grid(row = 17, column = 1, columnspan = 5)\nlabel8 = Label(window, text = p7, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel8.grid(row = 18, column = 1, columnspan = 5)\nlabel9 = Label(window, text = p8, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel9.grid(row = 19, column = 1, columnspan = 5)\nlabel10 = Label(window, text = p9, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel10.grid(row = 20, column = 1, columnspan = 5)\nlabel16 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel16.grid(row = 12, column = 6)\nlabel17 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel17.grid(row = 13, column = 6)\nlabel18 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel18.grid(row = 14, column = 6)\nlabel19 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel19.grid(row = 15, column = 6)\nlabel20 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel20.grid(row = 16, column = 6)\nlabel21 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel21.grid(row = 17, column = 6)\nlabel22 = Label(window, text = '|', 
fg=\"Green\", font=(\"Helvetica\", 10))\nlabel22.grid(row = 18, column = 6)\nlabel23 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel23.grid(row = 19, column = 6)\nlabel24 = Label(window, text = '|', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel24.grid(row = 20, column = 6)\nlabel12 = Label(window, text = r1, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel12.grid(row = 13, column = 7, columnspan = 5)\nlabel13 = Label(window, text = r2, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel13.grid(row = 14, column = 7, columnspan = 5)\nlabel14 = Label(window, text = r3, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel14.grid(row = 16, column = 7, columnspan = 5)\nlabel12 = Label(window, text = r4, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel12.grid(row = 17, column = 7, columnspan = 5)\nlabel13 = Label(window, text = r5, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel13.grid(row = 19, column = 7, columnspan = 5)\nlabel14 = Label(window, text = r6, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel14.grid(row = 20, column = 7, columnspan = 5)\nlabel25 = Label(window, text = rd, fg=\"Green\", font=(\"Helvetica\", 10))\nlabel25.grid(row = 12, column = 7, columnspan = 5)\nlabel26 = Label(window, text = '-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------', fg=\"Green\", font=(\"Helvetica\", 10))\nlabel26.grid(row = 11, column = 1, columnspan = 11)\n\nline_1.grid(row = 5, column = 5)\nline_2.grid(row = 5, column = 7)\nline_3.grid(row = 4, column = 5, columnspan = 3)\nline_4.grid(row = 6, column = 5)\nline_5.grid(row = 6, column = 7)\nline_6.grid(row = 7, column = 5)\nline_7.grid(row = 7, column = 7)\nwindow.mainloop()\n"
}
] | 1 |
FlaviodosSantos/testandodeploy | https://github.com/FlaviodosSantos/testandodeploy | 276bcf44d1b0234a0ca0d0e5be611ba08a62f78e | fb98a7abfdc2355bc8898ca0b84c27f1c37d12ae | f315736bdc3454107122a214875f449e8ad8ed30 | refs/heads/main | 2023-07-05T23:08:18.810387 | 2021-08-14T21:14:50 | 2021-08-14T21:14:50 | 380,598,031 | 0 | 0 | null | 2021-06-26T21:20:44 | 2021-08-08T22:01:56 | 2021-08-14T21:14:50 | JavaScript | [
{
"alpha_fraction": 0.653030276298523,
"alphanum_fraction": 0.653030276298523,
"avg_line_length": 46.21428680419922,
"blob_id": "3259d6a22aae5f9a8d220b64d31924da2be81b9f",
"content_id": "6b579b5e472c362fb6289dfa2eb7ed79fcbb6b8b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 660,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 14,
"path": "/blog/urls.py",
"repo_name": "FlaviodosSantos/testandodeploy",
"src_encoding": "UTF-8",
"text": "from . import views\nfrom django.urls import path\n\nurlpatterns = [\n path('', views.PostList.as_view(), name='home'),\n #path('<slug:slug>/', views.PostDetail.as_view(), name='post_detail'),\n path('<slug:slug>/', views.post_detail, name='post_detail'),\n path('create', views.PostCreate.as_view(), name='post_create'),\n path('update/<int:pk>', views.PostUpdate.as_view(), name='post_update'),\n path('delete/<int:pk>', views.PostDelete.as_view(), name='post_delete'),\n path('sobre', views.sobre, name='sobre'),\n path('contato', views.contato, name='contato'),\n path('search/results/', views.PostSearch.as_view(), name='search_results'),\n]"
},
{
"alpha_fraction": 0.8048780560493469,
"alphanum_fraction": 0.8048780560493469,
"avg_line_length": 39,
"blob_id": "3e94ca630555e43a8592985d772b935ed0a5a3a8",
"content_id": "d59277cac27433ca81413cc823f5f022e42bd910",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 41,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 1,
"path": "/README.md",
"repo_name": "FlaviodosSantos/testandodeploy",
"src_encoding": "UTF-8",
"text": " # testanto deploy no pythonanywhere.com\n"
},
{
"alpha_fraction": 0.6248399615287781,
"alphanum_fraction": 0.6286811828613281,
"avg_line_length": 29.828947067260742,
"blob_id": "20980cbebcaf5d8b60671ad887c4d4ad846a1acd",
"content_id": "3cf9d981855b85f5030e855566e71feacd31a66d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2347,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 76,
"path": "/blog/views.py",
"repo_name": "FlaviodosSantos/testandodeploy",
"src_encoding": "UTF-8",
"text": "from django.views import generic\nfrom .models import Post\nfrom .forms import CommentForm\nfrom django.shortcuts import render, get_object_or_404\nfrom django.http import HttpResponse\nfrom django.urls import reverse_lazy\n\nclass PostList(generic.ListView):\n queryset = Post.objects.filter(status=1).order_by('-created_on')\n template_name = 'index.html'\n paginate_by = 2 #paginação\n\n#class PostDetail(generic.DetailView):\n# model = Post\n# template_name = 'post_detail.html'\n\ndef post_detail(request, slug):\n template_name = 'post_detail.html'\n post = get_object_or_404(Post, slug=slug)\n comments = post.comments.filter(active=True)\n new_comment = None\n # Comment posted\n if request.method == 'POST':\n comment_form = CommentForm(data=request.POST)\n if comment_form.is_valid():\n\n # Create Comment object but don't save to database yet\n new_comment = comment_form.save(commit=False)\n # Assign the current post to the comment\n new_comment.post = post\n # Save the comment to the database\n new_comment.save()\n else:\n comment_form = CommentForm()\n\n return render(request, template_name, {'post': post,\n 'comments': comments,\n 'new_comment': new_comment,\n 'comment_form': comment_form})\n\ndef sobre(request):\n template_name = 'sobre.html'\n return render(request, template_name)\n\n\nclass PostCreate(generic.CreateView):\n model = Post\n fields = \"__all__\"\n success_url = reverse_lazy('home')\n\n\nclass PostUpdate(generic.UpdateView):\n model = Post\n fields = \"__all__\"\n success_url = reverse_lazy('home')\n\n\nclass PostDelete(generic.DeleteView):\n model = Post\n template_name = 'post_delete.html'\n success_url = reverse_lazy('home')\n\ndef contato(request):\n template_name = 'contato.html'\n return render(request, template_name)\n\nclass PostSearch(generic.ListView):\n model = Post\n template_name = 'search_results.html'\n paginate_by = 5 #paginação \n\n def get_queryset(self): # new\n query = self.request.GET.get('search')\n object_list = 
Post.objects.filter(title__icontains=query).order_by('-created_on')\n \n return object_list\n"
},
{
"alpha_fraction": 0.4883720874786377,
"alphanum_fraction": 0.6976743936538696,
"avg_line_length": 16.200000762939453,
"blob_id": "8d67dfaed4a79de3af3ef5df9cc3ae88d5727dc7",
"content_id": "935bbc88d187badf4e9c39ab642ed00c036134c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 86,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 5,
"path": "/requirements.txt",
"repo_name": "FlaviodosSantos/testandodeploy",
"src_encoding": "UTF-8",
"text": "asgiref==3.4.1\nDjango==3.2.3\ndjango-crispy-forms==1.12.0\npytz==2021.1\nsqlparse==0.4.1\n"
}
] | 4 |
Mglsalamanca/Codigo-Genetico | https://github.com/Mglsalamanca/Codigo-Genetico | b1b0a0d3e312968cfd68cedf49bffe39b7315612 | 6a97fbaac4dd07d2bb6dd82091a9eaa527d4c6f1 | 421c0679bed1a86f2332ba7f3e463fa0c3739dbd | refs/heads/master | 2022-04-19T02:06:34.378541 | 2020-04-20T15:56:56 | 2020-04-20T15:56:56 | 257,328,039 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5105908513069153,
"alphanum_fraction": 0.5395763516426086,
"avg_line_length": 21,
"blob_id": "23ae3e4cefd477a718f716e720c1dcda5cceacb2",
"content_id": "7f90c2437e593c9686dd7b41e87c4993411ad6ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 897,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 39,
"path": "/basics.py",
"repo_name": "Mglsalamanca/Codigo-Genetico",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Sat Apr 18 17:11:23 2020\r\n\r\n@author: Feligx\r\n\"\"\"\r\n\r\n#LOPA=list of possible aminoacids\r\n\r\ndef create_LOPA():\r\n txt=open(\"lopa.txt\",\"r\")\r\n LOPAtxt=[]\r\n LOPA_=txt.readlines()\r\n txt.seek(0)\r\n for i in range (len(LOPA_)):\r\n LOPA=txt.readline()\r\n LOPA=LOPA.rstrip(\"\\n\")\r\n LOPAtxt.append(LOPA)\r\n return LOPAtxt\r\n\r\n\r\ndef Aminoacids ():\r\n codons=create_LOPA()\r\n aminoacid_dict={}\r\n for i in codons:\r\n cod1=open(i, \"r\", encoding=\"utf8\")\r\n aminoacid=[]\r\n phe_=cod1.readlines()\r\n cod1.seek(0)\r\n for i in range(len(phe_)):\r\n cod2=cod1.readline()\r\n cod2=cod2.rstrip(\"\\n\")\r\n aminoacid.append(cod2)\r\n aminoacid_dict[aminoacid[0]]=aminoacid[1:]\r\n print(aminoacid_dict)\r\n #print (aminoacid)\r\n#def Dict_codons ():\r\n \r\nprint(Aminoacids())\r\n"
},
{
"alpha_fraction": 0.4460916519165039,
"alphanum_fraction": 0.4676549732685089,
"avg_line_length": 20.22857093811035,
"blob_id": "142eb933d2e1e14e88b9275d1189d3a68562c289",
"content_id": "86eeaa903480273443df8918f41ae20bda5fea30",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 743,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 35,
"path": "/idea1_v2.0.py",
"repo_name": "Mglsalamanca/Codigo-Genetico",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Mar 13 11:32:25 2020\n\n@author: lovelace\n\"\"\"\n#import basics\n\ndef TRANSCRIPTION (x):\n x.upper()\n ARN=\"\"\n BasesN=[\"A\",\"T\",\"C\",\"G\"]\n for i in x:\n if i in BasesN:\n if i == \"A\":\n ARN=ARN+\"U\"\n elif i == \"T\":\n ARN=ARN+\"A\"\n elif i == \"C\":\n ARN=ARN+\"G\"\n elif i == \"G\":\n ARN=ARN+\"C\"\n else:\n print(\"no es una cadena de ADN\")\n return ARN\n\n#def TRANSLATE (y):\n # cod1=open(\"phe.txt\", \"r\")\n # phe=[]\n # for i in cod1:\n # phe += i\nmyADN=input(\"ingrese su cadema de ADN (EN MAYÚS): \")\nmyARN=(TRANSCRIPTION(myADN))\nprint(TRANSCRIPTION(myADN))"
}
] | 2 |
andriitkach1969/Python-project | https://github.com/andriitkach1969/Python-project | 2fe631a5c9d40de522d0acd3d4af0e26af057b47 | 2a64fb480ca24bad6694a21a3a7d5a0a58c780e7 | ff6d2d465fd8fddfd845b2fed871d91c395a0b86 | refs/heads/master | 2021-07-12T02:21:08.528447 | 2020-03-05T17:59:16 | 2020-03-05T17:59:16 | 162,456,450 | 0 | 0 | null | 2018-12-19T15:28:44 | 2019-02-19T13:59:16 | 2019-02-22T09:34:37 | Python | [
{
"alpha_fraction": 0.645348846912384,
"alphanum_fraction": 0.6895349025726318,
"avg_line_length": 24.294116973876953,
"blob_id": "431493eb6bba7991e2f019a323afab87a2e2b4b5",
"content_id": "edfa467fb7e48abc081191e80925a8e4fd993b91",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1720,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 68,
"path": "/src/API example.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "import requests\nimport logging\nimport http.client as http_client\n\n'''\nhttp_client.HTTPConnection.debuglevel = 1\nlogging.basicConfig(filename='./logs/requests.log', filemode='a')\nlogging.getLogger().setLevel(logging.DEBUG)\n'''\nrequests_log = logging.getLogger(\"requests.packages.urllib3\")\nrequests_log.setLevel(logging.DEBUG)\nrequests_log.propagate = True\n\nACCESS_TOKEN_URL = 'http://master.proton-backend.stag.stfalcon.com/oauth/v2/token'\nAPI_URL = 'http://master.proton-backend.stag.stfalcon.com/api/v1.0/'\n\nDEFAULT_HEADER = 'application/json'\n\nSUCCESS = 200\nINCORRECT_HEADER = 400\nADDED = 201\n\nCLIENT_ID = '1_7e971E3E5F4Fd5990Acc2Ad134C0099B18D6Ef45E78A9017De'\nCLIENT_SECRET = 'c989E96F3C8Cd21C940Ac3Ca75De5752F31C48D9D8D5Fc333A'\nGRANT_TYPE = 'password'\nSCOPE = 'web'\nUSERNAME = '[email protected]'\nPASSWORD = 'Qwerty123'\n\naccess_token = ''\n\nheader = {\n 'Content-Type': 'application/x-www-form-urlencoded'\n}\n\ndata = {\n 'client_id': CLIENT_ID,\n 'client_secret': CLIENT_SECRET,\n 'grant_type': GRANT_TYPE,\n 'scope': SCOPE,\n 'username': USERNAME,\n 'password': PASSWORD\n}\n\nresp = requests.post(ACCESS_TOKEN_URL, data=data, headers=header)\nprint(resp.status_code)\nif resp.status_code == SUCCESS:\n print('----------------')\n access_token = resp.json()['access_token']\n print(access_token)\n\nheader = {\n 'Accept-Language': 'en',\n 'Authorization': 'Bearer ' + access_token\n}\n\nfiles = {\n 'file': ('2png.jpg', open('./pics/2png.jpg', 'rb'), 'multipart/form-data')\n}\n\nresp = requests.post(API_URL+'files', files=files, headers=header)\nprint(resp.status_code)\nif resp.status_code == ADDED:\n print('----------------')\n file_id = resp.json()['id']\n print(file_id)\n\nprint(resp.request.headers)\n"
},
{
"alpha_fraction": 0.6899224519729614,
"alphanum_fraction": 0.7248061895370483,
"avg_line_length": 16.200000762939453,
"blob_id": "c2bba3982477e5ce108277e23502c8f80f3d1a13",
"content_id": "35d522a5cfd38898ca3984a88ecc8746fac26335",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 258,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 15,
"path": "/src/test_SimpleNumber_pytest.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "import pytest\n\nfrom src.SimpleNumber import issimple\n\n\ndef test_SimpleNumber_trivial():\n assert issimple(3) == True\n\n\ndef test_SimpleNUmber_positiv():\n assert issimple(1163) == True\n\n\ndef test_SimpleNumber_negativ():\n assert issimple(3027) == False\n"
},
{
"alpha_fraction": 0.5276073813438416,
"alphanum_fraction": 0.5337423086166382,
"avg_line_length": 19.4375,
"blob_id": "9640a16ada189a1ac83a709381e95ba06c030cf2",
"content_id": "c38b52f612fc989c77e48af9c1e68dcceab02d57",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 326,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 16,
"path": "/src/exception.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "def res(num):\n try:\n r = 1/num\n except ZeroDivisionError as e:\n print('Exception error', e)\n return\n except Exception as e:\n print('Common Exception error', e)\n return\n finally:\n print('end try-except block')\n return r\n\n\nif __name__ == \"__main__\":\n print(res(5))"
},
{
"alpha_fraction": 0.4441210627555847,
"alphanum_fraction": 0.46973225474357605,
"avg_line_length": 27.1639347076416,
"blob_id": "47b4d9bc4c3e6177708c1a72c9c3e8b9a17433f0",
"content_id": "b4afab8bec5586e32078dc638b1ab82f2de51004",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1718,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 61,
"path": "/src/ArabRomanConvert.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "'''\nstage 1\n'''\n\nROMANSIGN = [{1: 'I', 4: 'IV', 5: 'V', 9: 'IX'}, {1: 'X', 4: 'XL', 5: 'L', 9: 'XC'},\n {1: 'C', 4: 'CD', 5: 'D', 9: 'CM'}, {1: 'M'}]\n\n\ndef isint(s):\n try:\n int(s)\n return True\n except ValueError:\n return False\n\n\ndef arabToRoman(number):\n if not isint(number):\n print(\"Not a Arab number\")\n return\n if number > 0 and number < 4000:\n resultStr = \"\"\n print(number)\n digit = 0\n i = 0\n while True:\n digit = number % 10\n number = number // 10\n print(\"digit - \", digit)\n if digit != 0:\n tmpStr = ROMANSIGN[i].get(digit, '*')\n if tmpStr == '*':\n tmpStr = ''\n if digit in (1, 2, 3):\n base = 1\n elif digit in (6, 7, 8):\n base = 5\n digit = digit - base + 1\n tmpStr = tmpStr + ROMANSIGN[i][base]\n for j in range(1, digit):\n tmpStr = tmpStr + ROMANSIGN[i][1]\n resultStr = tmpStr + resultStr\n if number == 0:\n break\n i += 1\n print(\"this number in Roman notation is: \", resultStr)\n else:\n print(\"Not a valid Arab number\")\n\n\ndef romanToArab(arabstr):\n print(arabstr)\n\n\nprint(\"Convert number in Arab notation to Roman notation\")\narabNumber = int(input(\"Enter any number in Arab notation (integer, >0 and < 4000): \"))\narabToRoman(arabNumber)\n\nprint(\"Convert number in Roman notation to Arab notation\")\nromanStr = str(input(\"Enter any number in Roman notation: \"))\nromanToArab(romanStr)\n"
},
{
"alpha_fraction": 0.8316831588745117,
"alphanum_fraction": 0.8415841460227966,
"avg_line_length": 24.25,
"blob_id": "5da262243f14bf097b51d96629f7bb4488fff488",
"content_id": "a8d3101cc7509d116224e9731db56f6ba2534ba3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 101,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 4,
"path": "/src/README.md",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "# romanNumbers\nPython lib to Roman-Arab numbers translation\nfirst step into Python programming\ntest2\n"
},
{
"alpha_fraction": 0.3942857086658478,
"alphanum_fraction": 0.4228571355342865,
"avg_line_length": 16.5,
"blob_id": "b3b549d648f56f6ff757f776401c6e357fb72b8b",
"content_id": "bac0a44c11cc9a56ee9b42ce83efdc0784ce26c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 175,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 10,
"path": "/src/Lesson-2.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "def a(_a):\n print('in a()')\n return -_a\n\n\nif __name__ == \"__main__\":\n if 3 == 5 and a(3) == -3:\n print('end')\n b = 0\n# assert b is None, 'Error in data'\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.516438364982605,
"avg_line_length": 20.47058868408203,
"blob_id": "922bacc9243dedb2855fa0ee13032d1e724a54f5",
"content_id": "f83f32adec57c6366879fa909f22cd8f1d15d5cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 769,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 34,
"path": "/src/SimpleNumber.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "'''\nstage 1\n'''\nimport math\nimport time\n\n\ndef issimple(number):\n\n def check(_number):\n bound = int(math.sqrt(_number)) + 1\n r = range(2, bound)\n for j in r:\n if _number % j == 0:\n break\n else:\n return True\n return False\n\n route = {1: True, 2: True, 3: True}\n return route.get(number, check(number))\n\n\nif __name__ == '__main__':\n start_time = time.time()\n\n boundnumber = abs(int(input(\"Введите любое целое число больше нуля: \")))\n if boundnumber > 0:\n for i in range(1, boundnumber+1):\n if issimple(i):\n print(\"простое -\", i)\n line = 40*\"=\"\n print(line)\n print(time.time() - start_time, \" seconds\")\n"
},
{
"alpha_fraction": 0.5682926774024963,
"alphanum_fraction": 0.5682926774024963,
"avg_line_length": 20.578947067260742,
"blob_id": "2ae61200735819d145956d113952e54c044d45a1",
"content_id": "93f72d43618f4ad9f2e5c9030c9b237002ccdb93",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 19,
"path": "/src/decorator.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "from datetime import datetime\n\n\ndef time_delta(func):\n def function_wrapper(*args, **kwargs):\n _start_time = datetime.now()\n func(*args, **kwargs)\n _delta_time = datetime.now() - _start_time\n print('*** ' + func.__name__ + '\\t\\t. Call took a ', _delta_time)\n return function_wrapper\n\n\n@time_delta\ndef f(st):\n print(st)\n\n\nif __name__ == \"__main__\":\n f('F function call')\n"
},
{
"alpha_fraction": 0.5752426981925964,
"alphanum_fraction": 0.5770630836486816,
"avg_line_length": 24.353845596313477,
"blob_id": "73dc4fabcc3ea80c044969d0002714b6e2479876",
"content_id": "b7ef10c25ae9e3583d86c8bbaf85a70bb0afcac0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1648,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 65,
"path": "/src/task 1.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "import json\nfrom collections import Iterable\n\nSTR_ITEM = 'item'\nSTR_NAME = 'name'\nSTR_REQUEST = 'request'\nSTR_AUTH = 'auth'\nSTR_METHOD = 'method'\nSTR_HEADER = 'header'\nSTR_BODY = 'body'\nSTR_URL = 'url'\nSTR_DESCRIPTION = 'description'\n\n\ndef enumdata(dictdata, ignore_type=(bytes, str)):\n for it in dictdata:\n if isinstance(it, Iterable) and not isinstance(it, ignore_type):\n yield from enumdata(it)\n else:\n yield it\n\n\ndef crack(dictdata, listdata):\n\n schemadict = {STR_NAME:'', STR_DESCRIPTION:'', STR_METHOD:'', STR_AUTH:'', STR_HEADER:'', STR_URL:'', STR_BODY:''}\n\n for item in dictdata[STR_ITEM]:\n gcheck = item.get(STR_REQUEST)\n if gcheck is None:\n # print(item.get('name'))\n crack(item, listdata)\n else:\n idict = schemadict;\n idict[STR_NAME] = item[STR_NAME]\n idict[STR_BODY] = item[STR_REQUEST][STR_BODY]\n idict[STR_URL] = item[STR_REQUEST][STR_URL]\n idict[STR_METHOD] = item[STR_REQUEST][STR_METHOD]\n idict[STR_HEADER] = item[STR_REQUEST][STR_HEADER]\n listdata.append(idict)\n return\n\n\nif __name__ == \"__main__\":\n\n resList = []\n\n inFile = open('../collections/Proton API.postman_collection.json', 'r')\n rawdata = inFile.read()\n inFile.close()\n jsonData = json.loads(rawdata)\n\n crack(jsonData, resList)\n if len(resList) != 0:\n for i in resList:\n print(i)\n else:\n print('error in parsing')\n print(len(resList))\n\n g = enumdata(jsonData.values())\n cnt = 0\n for i in g:\n print(i)\n cnt += 1\n print(cnt)\n"
},
{
"alpha_fraction": 0.6638935208320618,
"alphanum_fraction": 0.6705490946769714,
"avg_line_length": 27.619047164916992,
"blob_id": "8157cf368f9b7063e46e7b8763639ca41f925717",
"content_id": "a0d4146196a238fc1c829d7be37580cc565b9d72",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 601,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 21,
"path": "/src/xgGamePlan.py",
"repo_name": "andriitkach1969/Python-project",
"src_encoding": "UTF-8",
"text": "import requests\nimport json\n\n\nurl = \"http://imag-sqa02-2-api.xggameplan.com/accesstokens\"\n\npayload = \"{\\n \\\"email\\\": \\\"[email protected]\\\",\\n \\\"password\\\": \\\"123123\\\"\\n}\"\nheaders = {\n 'Content-Type': 'application/json'\n}\n\nresponse = requests.request(\"POST\", url, headers=headers, data=payload)\nif response.status_code == requests.codes.ok:\n print(response.status_code)\n print(response.text.encode('utf8'))\n print(response.content)\n resJson = json.loads(response.content)\n print(resJson)\n print(resJson['token'])\n print(resJson.get('token'))\n print(response.json().get('token'))\n"
}
] | 10 |
OguzhanBeyaz/Python-Translation | https://github.com/OguzhanBeyaz/Python-Translation | 7a050ff92012c52bd0ddc8a9e3cb08dcf209c4cd | c7706a5ecb2b44aa923242c9d206317e27b15de7 | 99ec2346cd2b97a16bf2ee7ee82a9cfa831b2329 | refs/heads/main | 2023-02-05T07:08:32.197179 | 2020-12-28T21:00:28 | 2020-12-28T21:00:28 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6476145386695862,
"alphanum_fraction": 0.6726499795913696,
"avg_line_length": 40.34000015258789,
"blob_id": "a4c49b158944db709375da42c96e890ebe0ee7c7",
"content_id": "01c1331d920cccff98d74dc1b2718b3a36e13952",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2128,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 50,
"path": "/Translation/Ui_Kılavuz.py",
"repo_name": "OguzhanBeyaz/Python-Translation",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\n# Form implementation generated from reading ui file 'c:\\Users\\Acer\\Pyqt5-Projeler\\Ödev11\\Kılavuz.ui'\r\n#\r\n# Created by: PyQt5 UI code generator 5.15.2\r\n#\r\n# WARNING: Any manual changes made to this file will be lost when pyuic5 is\r\n# run again. Do not edit this file unless you know what you are doing.\r\n\r\n\r\nfrom PyQt5 import QtCore, QtGui, QtWidgets\r\n\r\n\r\nclass Ui_Kilavuz(object):\r\n def setupUi(self, Kilavuz):\r\n Kilavuz.setObjectName(\"Kilavuz\")\r\n Kilavuz.resize(461, 125)\r\n self.centralwidget = QtWidgets.QWidget(Kilavuz)\r\n self.centralwidget.setObjectName(\"centralwidget\")\r\n self.pushButton = QtWidgets.QPushButton(self.centralwidget)\r\n self.pushButton.setGeometry(QtCore.QRect(50, 20, 131, 51))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.pushButton.setFont(font)\r\n self.pushButton.setObjectName(\"pushButton\")\r\n self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)\r\n self.pushButton_2.setGeometry(QtCore.QRect(240, 20, 131, 51))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.pushButton_2.setFont(font)\r\n self.pushButton_2.setObjectName(\"pushButton_2\")\r\n Kilavuz.setCentralWidget(self.centralwidget)\r\n self.menubar = QtWidgets.QMenuBar(Kilavuz)\r\n self.menubar.setGeometry(QtCore.QRect(0, 0, 461, 26))\r\n self.menubar.setObjectName(\"menubar\")\r\n Kilavuz.setMenuBar(self.menubar)\r\n self.statusbar = QtWidgets.QStatusBar(Kilavuz)\r\n self.statusbar.setObjectName(\"statusbar\")\r\n Kilavuz.setStatusBar(self.statusbar)\r\n\r\n self.retranslateUi(Kilavuz)\r\n QtCore.QMetaObject.connectSlotsByName(Kilavuz)\r\n\r\n def retranslateUi(self, Kilavuz):\r\n _translate = QtCore.QCoreApplication.translate\r\n Kilavuz.setWindowTitle(_translate(\"Kilavuz\", \"Türkçe Dil Kılavuzu\"))\r\n self.pushButton.setText(_translate(\"Kilavuz\", \"Türkçe / İngilizce\"))\r\n self.pushButton_2.setText(_translate(\"Kilavuz\", 
\"İngilizce / Türkçe\"))\r\n"
},
{
"alpha_fraction": 0.5317139029502869,
"alphanum_fraction": 0.5367746353149414,
"avg_line_length": 27.078432083129883,
"blob_id": "72851dd5f48427b9c0c5bc55fec37f7c4271c6bf",
"content_id": "d8be4eb10abba4474ac3c56c2c862d2e866490a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2967,
"license_type": "no_license",
"max_line_length": 190,
"num_lines": 102,
"path": "/Translation/Translation.py",
"repo_name": "OguzhanBeyaz/Python-Translation",
"src_encoding": "UTF-8",
"text": "import sys\r\nfrom PyQt5 import QtWidgets\r\nfrom textblob import TextBlob\r\nfrom Ui_Kılavuz import Ui_Kilavuz\r\nfrom Ui_Turkce import Ui_Turkce\r\nfrom Ui_Ingilizce import Ui_Ingilizce\r\nfrom PyQt5.QtWidgets import QWidget, QApplication, QLineEdit, QPushButton, QLabel, QListWidget, QInputDialog, QRadioButton, QGridLayout, QCompleter\r\n\r\n\r\nclass Windows(QtWidgets.QMainWindow):\r\n\r\n def __init__(self):\r\n super().__init__()\r\n self.ui = Ui_Kilavuz()\r\n self.ui.setupUi(self)\r\n\r\n self.ui.pushButton.clicked.connect(self.turkce)\r\n self.ui.pushButton_2.clicked.connect(self.ingilizce)\r\n \r\n \r\n \r\n\r\n def turkce(self):\r\n \r\n self.t = Windows2()\r\n self.t.show()\r\n \r\n def ingilizce(self):\r\n \r\n self.t=Windows3()\r\n self.t.show()\r\n \r\n\r\nclass Windows2(QtWidgets.QMainWindow):\r\n\r\n def __init__(self):\r\n super().__init__()\r\n self.ui = Ui_Turkce()\r\n self.ui.setupUi(self)\r\n self.ui.pushButton.clicked.connect(self.Tamam)\r\n\r\n\r\n layout = QGridLayout()\r\n self.setLayout(layout)\r\n \r\n names = [\"Ampul\", \"ampul\", \"Akıl\", \"akıl\", \"Bardak\", \"bardak\", \"Bez\", \"bez\" \"Cilt\", \"cilt\", \"Cam\", \"cam\" \"Derin\", \"derin\", \"Dolap\", \"dolap\", \"Merhaba\", \"merhaba\", \"Okul\", \"okul\" ]\r\n completer = QCompleter(names)\r\n \r\n self.ui.lineEdit.setCompleter(completer)\r\n layout.addWidget(self.ui.lineEdit, 3, 7)\r\n\r\n \r\n \r\n def Tamam(self):\r\n\r\n tr = self.ui.lineEdit.text()\r\n text = TextBlob(tr)\r\n en = text.translate(to=\"en\")\r\n en = str(en)\r\n \r\n self.ui.listWidget.addItem(\"\" + self.ui.lineEdit.text())\r\n self.ui.listWidget_2.addItem(en)\r\n self.ui.listWidget_3.addItem(en)\r\n \r\n\r\nclass Windows3(QtWidgets.QMainWindow):\r\n\r\n def __init__(self):\r\n super().__init__()\r\n self.ui = Ui_Ingilizce()\r\n self.ui.setupUi(self)\r\n self.ui.pushButton.clicked.connect(self.Tamam)\r\n \r\n\r\n \r\n layout = QGridLayout()\r\n self.setLayout(layout)\r\n \r\n names = [\"Apple\", 
\"apple\",\"Air\", \"air\", \"Berry\", \"berry\",\"Black\", \"black\", \"Center\", \"center\", \"Cool\", \"cool\", \"Dark\", \"dark\", \"Edit\", \"edit\", \"Hello\", \"hello\", \"School\", \"school\", ]\r\n completer = QCompleter(names)\r\n \r\n self.ui.lineEdit.setCompleter(completer)\r\n layout.addWidget(self.ui.lineEdit, 3, 7)\r\n\r\n\r\n def Tamam(self):\r\n\r\n \r\n en = self.ui.lineEdit.text()\r\n text = TextBlob(en)\r\n tr = text.translate(to=\"tr\")\r\n tr = str(tr)\r\n\r\n self.ui.listWidget.addItem(\"\" + self.ui.lineEdit.text())\r\n self.ui.listWidget_2.addItem(tr)\r\n self.ui.listWidget_3.addItem(tr)\r\n\r\n \r\napp = QApplication(sys.argv)\r\nwin = Windows()\r\nwin.show()\r\nsys.exit(app.exec_())"
},
{
"alpha_fraction": 0.6177072525024414,
"alphanum_fraction": 0.6558341979980469,
"avg_line_length": 43.44186019897461,
"blob_id": "54eff0418750bdc5ae728a4a714a9f9e7f412e94",
"content_id": "68ebb218a25e2d39c1c086185f304ddd472bd5a1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3930,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 86,
"path": "/Translation/Ui_Turkce.py",
"repo_name": "OguzhanBeyaz/Python-Translation",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\n# Form implementation generated from reading ui file 'c:\\Users\\Acer\\Pyqt5-Projeler\\Ödev11\\Turkce.ui'\r\n#\r\n# Created by: PyQt5 UI code generator 5.15.2\r\n#\r\n# WARNING: Any manual changes made to this file will be lost when pyuic5 is\r\n# run again. Do not edit this file unless you know what you are doing.\r\n\r\n\r\nfrom PyQt5 import QtCore, QtGui, QtWidgets\r\n\r\n\r\nclass Ui_Turkce(object):\r\n def setupUi(self, Turkce):\r\n Turkce.setObjectName(\"Turkce\")\r\n Turkce.resize(504, 451)\r\n self.centralwidget = QtWidgets.QWidget(Turkce)\r\n self.centralwidget.setObjectName(\"centralwidget\")\r\n self.lineEdit = QtWidgets.QLineEdit(self.centralwidget)\r\n self.lineEdit.setGeometry(QtCore.QRect(30, 30, 281, 31))\r\n self.lineEdit.setObjectName(\"lineEdit\")\r\n self.pushButton = QtWidgets.QPushButton(self.centralwidget)\r\n self.pushButton.setGeometry(QtCore.QRect(340, 20, 91, 41))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.pushButton.setFont(font)\r\n self.pushButton.setObjectName(\"pushButton\")\r\n self.listWidget = QtWidgets.QListWidget(self.centralwidget)\r\n self.listWidget.setGeometry(QtCore.QRect(10, 120, 211, 181))\r\n self.listWidget.setObjectName(\"listWidget\")\r\n self.listWidget_2 = QtWidgets.QListWidget(self.centralwidget)\r\n self.listWidget_2.setGeometry(QtCore.QRect(260, 120, 221, 181))\r\n self.listWidget_2.setObjectName(\"listWidget_2\")\r\n self.label = QtWidgets.QLabel(self.centralwidget)\r\n self.label.setGeometry(QtCore.QRect(30, 10, 55, 16))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label.setFont(font)\r\n self.label.setObjectName(\"label\")\r\n self.label_2 = QtWidgets.QLabel(self.centralwidget)\r\n self.label_2.setGeometry(QtCore.QRect(10, 100, 91, 16))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label_2.setFont(font)\r\n self.label_2.setObjectName(\"label_2\")\r\n self.label_3 = 
QtWidgets.QLabel(self.centralwidget)\r\n self.label_3.setGeometry(QtCore.QRect(260, 100, 121, 16))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label_3.setFont(font)\r\n self.label_3.setObjectName(\"label_3\")\r\n self.label_4 = QtWidgets.QLabel(self.centralwidget)\r\n self.label_4.setGeometry(QtCore.QRect(20, 330, 111, 16))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label_4.setFont(font)\r\n self.label_4.setObjectName(\"label_4\")\r\n self.listWidget_3 = QtWidgets.QListWidget(self.centralwidget)\r\n self.listWidget_3.setGeometry(QtCore.QRect(17, 360, 461, 31))\r\n self.listWidget_3.setObjectName(\"listWidget_3\")\r\n Turkce.setCentralWidget(self.centralwidget)\r\n self.menubar = QtWidgets.QMenuBar(Turkce)\r\n self.menubar.setGeometry(QtCore.QRect(0, 0, 504, 26))\r\n self.menubar.setObjectName(\"menubar\")\r\n Turkce.setMenuBar(self.menubar)\r\n self.statusbar = QtWidgets.QStatusBar(Turkce)\r\n self.statusbar.setObjectName(\"statusbar\")\r\n Turkce.setStatusBar(self.statusbar)\r\n\r\n self.retranslateUi(Turkce)\r\n QtCore.QMetaObject.connectSlotsByName(Turkce)\r\n\r\n def retranslateUi(self, Turkce):\r\n _translate = QtCore.QCoreApplication.translate\r\n Turkce.setWindowTitle(_translate(\"Turkce\", \"Türkçe İngilizce Sözlük\"))\r\n self.pushButton.setText(_translate(\"Turkce\", \"Tamam\"))\r\n self.label.setText(_translate(\"Turkce\", \"Sözcük\"))\r\n self.label_2.setText(_translate(\"Turkce\", \"Türkçe Sözlük\"))\r\n self.label_3.setText(_translate(\"Turkce\", \"İngilizce Karşılıklar\"))\r\n self.label_4.setText(_translate(\"Turkce\", \"İngilizce Karşılığı\"))\r\n"
},
{
"alpha_fraction": 0.7727272510528564,
"alphanum_fraction": 0.7727272510528564,
"avg_line_length": 21,
"blob_id": "e54778e3930ce924124e37e9237136e81441c7d8",
"content_id": "3e750ff6f62e245ad7fc16e00d91c66ba2e9adac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 22,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 1,
"path": "/README.md",
"repo_name": "OguzhanBeyaz/Python-Translation",
"src_encoding": "UTF-8",
"text": "# Python-Translation-\n"
},
{
"alpha_fraction": 0.6229385137557983,
"alphanum_fraction": 0.6606696844100952,
"avg_line_length": 44,
"blob_id": "a4770aa700f06059766078e34937a70b2481ee9f",
"content_id": "674539310414f27882d96bd929fd64094088137c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4025,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 87,
"path": "/Translation/Ui_Ingilizce.py",
"repo_name": "OguzhanBeyaz/Python-Translation",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\n# Form implementation generated from reading ui file 'c:\\Users\\Acer\\Pyqt5-Projeler\\Ödev11\\Ingilizce.ui'\r\n#\r\n# Created by: PyQt5 UI code generator 5.15.2\r\n#\r\n# WARNING: Any manual changes made to this file will be lost when pyuic5 is\r\n# run again. Do not edit this file unless you know what you are doing.\r\n\r\n\r\nfrom PyQt5 import QtCore, QtGui, QtWidgets\r\n\r\n\r\nclass Ui_Ingilizce(object):\r\n def setupUi(self, Ingilizce):\r\n Ingilizce.setObjectName(\"Ingilizce\")\r\n Ingilizce.resize(497, 444)\r\n self.centralwidget = QtWidgets.QWidget(Ingilizce)\r\n self.centralwidget.setObjectName(\"centralwidget\")\r\n self.lineEdit = QtWidgets.QLineEdit(self.centralwidget)\r\n self.lineEdit.setGeometry(QtCore.QRect(30, 30, 281, 31))\r\n self.lineEdit.setObjectName(\"lineEdit\")\r\n self.pushButton = QtWidgets.QPushButton(self.centralwidget)\r\n self.pushButton.setGeometry(QtCore.QRect(340, 20, 91, 41))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.pushButton.setFont(font)\r\n self.pushButton.setObjectName(\"pushButton\")\r\n self.listWidget = QtWidgets.QListWidget(self.centralwidget)\r\n self.listWidget.setGeometry(QtCore.QRect(10, 120, 211, 181))\r\n self.listWidget.setObjectName(\"listWidget\")\r\n self.listWidget_2 = QtWidgets.QListWidget(self.centralwidget)\r\n self.listWidget_2.setGeometry(QtCore.QRect(260, 120, 221, 181))\r\n self.listWidget_2.setObjectName(\"listWidget_2\")\r\n self.label = QtWidgets.QLabel(self.centralwidget)\r\n self.label.setGeometry(QtCore.QRect(30, 10, 55, 16))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label.setFont(font)\r\n self.label.setObjectName(\"label\")\r\n self.label_2 = QtWidgets.QLabel(self.centralwidget)\r\n self.label_2.setGeometry(QtCore.QRect(10, 100, 101, 16))\r\n font = QtGui.QFont()\r\n font.setPointSize(7)\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label_2.setFont(font)\r\n 
self.label_2.setObjectName(\"label_2\")\r\n self.label_3 = QtWidgets.QLabel(self.centralwidget)\r\n self.label_3.setGeometry(QtCore.QRect(260, 100, 111, 16))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label_3.setFont(font)\r\n self.label_3.setObjectName(\"label_3\")\r\n self.label_4 = QtWidgets.QLabel(self.centralwidget)\r\n self.label_4.setGeometry(QtCore.QRect(20, 330, 101, 16))\r\n font = QtGui.QFont()\r\n font.setBold(True)\r\n font.setWeight(75)\r\n self.label_4.setFont(font)\r\n self.label_4.setObjectName(\"label_4\")\r\n self.listWidget_3 = QtWidgets.QListWidget(self.centralwidget)\r\n self.listWidget_3.setGeometry(QtCore.QRect(19, 355, 461, 31))\r\n self.listWidget_3.setObjectName(\"listWidget_3\")\r\n Ingilizce.setCentralWidget(self.centralwidget)\r\n self.menubar = QtWidgets.QMenuBar(Ingilizce)\r\n self.menubar.setGeometry(QtCore.QRect(0, 0, 497, 26))\r\n self.menubar.setObjectName(\"menubar\")\r\n Ingilizce.setMenuBar(self.menubar)\r\n self.statusbar = QtWidgets.QStatusBar(Ingilizce)\r\n self.statusbar.setObjectName(\"statusbar\")\r\n Ingilizce.setStatusBar(self.statusbar)\r\n\r\n self.retranslateUi(Ingilizce)\r\n QtCore.QMetaObject.connectSlotsByName(Ingilizce)\r\n\r\n def retranslateUi(self, Ingilizce):\r\n _translate = QtCore.QCoreApplication.translate\r\n Ingilizce.setWindowTitle(_translate(\"Ingilizce\", \"İngilizce Türkçe Sözlük\"))\r\n self.pushButton.setText(_translate(\"Ingilizce\", \"Tamam\"))\r\n self.label.setText(_translate(\"Ingilizce\", \"Sözcük\"))\r\n self.label_2.setText(_translate(\"Ingilizce\", \"İngilizce Sözlük\"))\r\n self.label_3.setText(_translate(\"Ingilizce\", \"Türkçe Karşılıklar\"))\r\n self.label_4.setText(_translate(\"Ingilizce\", \"Türkçe Karşılığı\"))\r\n"
}
] | 5 |
avinash2459/Project3_RESTAPI | https://github.com/avinash2459/Project3_RESTAPI | cacc55b45b9ae611ab5d28f20c37747d4f0bf08d | c6e20c4218934939f86c95b4878295435a1150ed | 9eacbd88f69dc0ae945475d77eed39decbcc89e3 | refs/heads/master | 2023-06-24T03:40:12.197142 | 2021-07-29T04:19:50 | 2021-07-29T04:19:50 | 390,572,152 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.710616409778595,
"alphanum_fraction": 0.7208904027938843,
"avg_line_length": 25.590909957885742,
"blob_id": "4c94e6e5970a99a74215a007aca753bfb7fa7ad5",
"content_id": "3f2eaafa8a7d33ad4aace47403d8a8054c144bec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 584,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 22,
"path": "/README.md",
"repo_name": "avinash2459/Project3_RESTAPI",
"src_encoding": "UTF-8",
"text": "# Project 3 - REST API - OSCAR AWARDS\n\n### - Avinash Sriram Chamarthy\n\n## Technologies Used: Python, Docker, MySQL, Flask, GIT\n\n## Postman Screenshots:\n\n### 1. GET request to retrieve all awards\n![Postman_page](screenshots/get_awards.png)\n\n### 2. GET request to get specified award\n![Postman_page](screenshots/get_single_award.png)\n\n### 3. POST request to add a new award\n![Postman_page](screenshots/add_award.png)\n\n### 4. PUT request to edit an existing award\n![Postman_page](screenshots/edit_award.png)\n\n### 5. DELETE request to delete an existing record\n![Postman_page](screenshots/delete_award.png)"
},
{
"alpha_fraction": 0.6537742018699646,
"alphanum_fraction": 0.6628196835517883,
"avg_line_length": 35.443180084228516,
"blob_id": "cba3e25f304046632c20bceeb2f89715aeef7d7f",
"content_id": "8b990b788eec3fad03791f2b4972732972f67ffd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3206,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 88,
"path": "/app/app.py",
"repo_name": "avinash2459/Project3_RESTAPI",
"src_encoding": "UTF-8",
"text": "import simplejson as json\nfrom flask import Flask, request, Response, redirect\nfrom flask import render_template\nfrom flaskext.mysql import MySQL\nfrom pymysql.cursors import DictCursor\n\napp = Flask(__name__)\n\n\nmysql = MySQL(cursorclass=DictCursor)\n\napp.config['MYSQL_DATABASE_HOST'] = 'db'\napp.config['MYSQL_DATABASE_USER'] = 'root'\napp.config['MYSQL_DATABASE_PASSWORD'] = 'root'\napp.config['MYSQL_DATABASE_PORT'] = 3306\napp.config['MYSQL_DATABASE_DB'] = 'oscarData'\nmysql.init_app(app)\n\n\[email protected]('/api/v1/oscar', methods=['GET'])\ndef api_browse() -> str:\n cursor = mysql.get_db().cursor()\n cursor.execute('SELECT * FROM tblOscarImport')\n result = cursor.fetchall()\n json_result = json.dumps(result)\n resp = Response(json_result, status=200, mimetype='application/json')\n return resp\n\n\[email protected]('/api/v1/oscars/<int:oscar_id>', methods=['GET'])\ndef api_retrieve(oscar_id) -> str:\n cursor = mysql.get_db().cursor()\n cursor.execute('SELECT * FROM tblOscarImport WHERE id=%s', oscar_id)\n result = cursor.fetchall()\n json_result = json.dumps(result)\n resp = Response(json_result, status=200, mimetype='application/json')\n return resp\n\n\[email protected]('/api/v1/oscars', methods=['POST'])\ndef api_add() -> str:\n content = request.json\n cursor = mysql.get_db().cursor()\n inputData = (content['actorName'], content['movieName'], content['sex'],\n content['age'], content['year'])\n sql_insert_query = \"\"\"INSERT INTO tblOscarImport (actorName,movieName,sex,age,year) VALUES (%s, %s,%s, %s,%s) \"\"\"\n cursor.execute(sql_insert_query, inputData)\n mysql.get_db().commit()\n cursor.execute('SELECT * FROM tblOscarImport ORDER BY id DESC LIMIT 1')\n result = cursor.fetchall()\n json_result = json.dumps(result)\n resp = Response(json_result, status=200, mimetype='application/json')\n return resp\n\n\[email protected]('/api/v1/oscars/<int:oscar_id>', methods=['PUT'])\ndef api_edit(oscar_id) -> str:\n cursor = 
mysql.get_db().cursor()\n content = request.json\n inputData = (content['actorName'], content['movieName'], content['sex'],\n content['age'], content['year'],oscar_id)\n sql_update_query = \"\"\"UPDATE tblOscarImport t SET t.actorName = %s, t.movieName = %s, t.sex = %s, t.age =\n %s, t.year = %s WHERE t.id = %s \"\"\"\n cursor.execute(sql_update_query, inputData)\n mysql.get_db().commit()\n query = \"\"\"SELECT * FROM tblOscarImport WHERE id = %s \"\"\"\n cursor.execute(query, oscar_id)\n result = cursor.fetchall()\n json_result = json.dumps(result)\n resp = Response(json_result, status=200, mimetype='application/json')\n return resp\n\n\[email protected]('/api/v1/oscars/<int:oscar_id>', methods=['DELETE'])\ndef api_delete(oscar_id) -> str:\n cursor = mysql.get_db().cursor()\n sql_delete_query = \"\"\"DELETE FROM tblOscarImport WHERE id = %s \"\"\"\n cursor.execute(sql_delete_query, oscar_id)\n mysql.get_db().commit()\n cursor.execute('SELECT * FROM tblOscarImport')\n result = cursor.fetchall()\n json_result = json.dumps(result)\n resp = Response(json_result, status=200, mimetype='application/json')\n return resp\n\n\nif __name__ == '__main__':\n app.run(host='0.0.0.0', debug=True)"
}
] | 2 |
ozgebarbaros/dynamic-rest | https://github.com/ozgebarbaros/dynamic-rest | 5da4eb05d0018eb381024d07bcdfb06b36a2f5c8 | f2cd588314a7e7da3b991eb224fd5f08ccccb5b4 | 996ef56d34f13b370389169de1cdcb1bd145d987 | refs/heads/master | 2022-11-17T21:17:14.922680 | 2020-07-15T15:37:15 | 2020-07-15T15:37:15 | 294,690,900 | 1 | 0 | MIT | 2020-09-11T12:34:46 | 2020-09-11T12:34:02 | 2020-07-22T21:53:27 | null | [
{
"alpha_fraction": 0.6195856928825378,
"alphanum_fraction": 0.6195856928825378,
"avg_line_length": 33.630435943603516,
"blob_id": "1f5de3e33707fa907e6373cfeb4835bee84af8d3",
"content_id": "9e390452ad0912e9c092aaa1f473db3484c48652",
"detected_licenses": [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1593,
"license_type": "permissive",
"max_line_length": 60,
"num_lines": 46,
"path": "/dynamic_rest/pagination.py",
"repo_name": "ozgebarbaros/dynamic-rest",
"src_encoding": "UTF-8",
"text": "\"\"\"This module contains custom pagination classes.\"\"\"\nfrom collections import OrderedDict\n\nfrom rest_framework.pagination import PageNumberPagination\nfrom rest_framework.response import Response\nfrom rest_framework.settings import api_settings\n\nfrom dynamic_rest.conf import settings\n\n\nclass DynamicPageNumberPagination(PageNumberPagination):\n \"\"\"A subclass of PageNumberPagination.\n\n Adds support for pagination metadata and overrides for\n pagination query parameters.\n \"\"\"\n page_size_query_param = settings.PAGE_SIZE_QUERY_PARAM\n page_query_param = settings.PAGE_QUERY_PARAM\n max_page_size = settings.MAX_PAGE_SIZE\n page_size = settings.PAGE_SIZE or api_settings.PAGE_SIZE\n\n def get_page_metadata(self):\n # returns total_results, total_pages, page, per_page\n return {\n 'total_results': self.page.paginator.count,\n 'total_pages': self.page.paginator.num_pages,\n 'page': self.page.number,\n 'per_page': self.get_page_size(self.request)\n }\n\n def get_paginated_response(self, data):\n meta = self.get_page_metadata()\n if isinstance(data, list):\n data = OrderedDict([\n ('count', self.page.paginator.count),\n ('next', self.get_next_link()),\n ('previous', self.get_previous_link()),\n ('results', data),\n ('meta', meta)\n ])\n else:\n if 'meta' in data:\n data['meta'].update(meta)\n else:\n data['meta'] = meta\n return Response(data)\n"
},
{
"alpha_fraction": 0.5272511839866638,
"alphanum_fraction": 0.6374407410621643,
"avg_line_length": 23.823530197143555,
"blob_id": "799acdc50d7ebddbaae60567a10167e0e43f88ff",
"content_id": "9b62fb1a85e975b17fa8f61b3d2dea1954539e65",
"detected_licenses": [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "INI",
"length_bytes": 844,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 34,
"path": "/tox.ini",
"repo_name": "ozgebarbaros/dynamic-rest",
"src_encoding": "UTF-8",
"text": "[pytest]\naddopts=--tb=short\n\n[tox]\nenvlist =\n py37-lint,\n {py35,py36,py37}-django{111,20,21,22}-drf{38,39,310,311},\n\n[testenv]\ncommands = ./runtests.py --fast {posargs} --coverage -rw\nsetenv =\n PYTHONDONTWRITEBYTECODE=1\ndeps =\n django111: Django==1.11.29\n django20: Django==2.0.13\n django21: Django==2.1.15\n django22: Django==2.2.12\n drf38: djangorestframework==3.8.2\n drf39: djangorestframework==3.9.4\n drf310: djangorestframework==3.10.3\n drf311: djangorestframework==3.11.0\n -rrequirements.txt\n\n[testenv:py37-lint]\ncommands = ./runtests.py --lintonly\ndeps =\n -rrequirements.txt\n\n[testenv:py37-drf311-benchmarks]\ncommands = ./runtests.py --benchmarks\ndeps =\n Django==2.2.12\n djangorestframework==3.11.0\n -rrequirements.benchmark.txt\n"
},
{
"alpha_fraction": 0.6000000238418579,
"alphanum_fraction": 0.6047058701515198,
"avg_line_length": 17.478260040283203,
"blob_id": "9c5160f458311470f815ee6ef8ce9832187c99ad",
"content_id": "29c007c0e6d33f57ce6cd2d94b647a01dba36201",
"detected_licenses": [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 425,
"license_type": "permissive",
"max_line_length": 53,
"num_lines": 23,
"path": "/dynamic_rest/utils.py",
"repo_name": "ozgebarbaros/dynamic-rest",
"src_encoding": "UTF-8",
"text": "from django.utils.six import string_types\n\nFALSEY_STRINGS = (\n '0',\n 'false',\n '',\n)\n\n\ndef is_truthy(x):\n if isinstance(x, string_types):\n return x.lower() not in FALSEY_STRINGS\n return bool(x)\n\n\ndef unpack(content):\n if not content:\n # empty values pass through\n return content\n\n keys = [k for k in content.keys() if k != 'meta']\n unpacked = content[keys[0]]\n return unpacked\n"
},
{
"alpha_fraction": 0.4736842215061188,
"alphanum_fraction": 0.6710526347160339,
"avg_line_length": 16.538461685180664,
"blob_id": "34e42a639c883ae8c53c70bc00322f0648bad2f8",
"content_id": "e6cf3575d05df4083dbdb5eec6f2bd58f6039fb1",
"detected_licenses": [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 228,
"license_type": "permissive",
"max_line_length": 25,
"num_lines": 13,
"path": "/requirements.benchmark.txt",
"repo_name": "ozgebarbaros/dynamic-rest",
"src_encoding": "UTF-8",
"text": "Sphinx==1.3.4\nDjango>=1.11.15<2.0.0\ndj-database-url==0.3.0\ndjango-debug-toolbar==1.7\nflake8==2.4.0\npytest-cov==1.8.1\npytest-django==2.8.0\npytest-sugar==0.5.1\npytest==2.7.2\npsycopg2==2.5.1\ntox-pyenv==1.0.2\ntox==2.3.1\ndjay==0.0.8\n"
},
{
"alpha_fraction": 0.6944444179534912,
"alphanum_fraction": 0.6944444179534912,
"avg_line_length": 24.200000762939453,
"blob_id": "9c5c56b77485683e453852a9b9570e1c2d7b0180",
"content_id": "ba38d93738f3464d0506f11447e7c437e212a69c",
"detected_licenses": [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 252,
"license_type": "permissive",
"max_line_length": 51,
"num_lines": 10,
"path": "/benchmarks/urls.py",
"repo_name": "ozgebarbaros/dynamic-rest",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import include, patterns, url\n\nfrom .drest import router as drest_router\nfrom .drf import router as drf_router\n\nurlpatterns = patterns(\n '',\n url(r'^', include(drf_router.urls)),\n url(r'^', include(drest_router.urls)),\n)\n"
},
{
"alpha_fraction": 0.4559585452079773,
"alphanum_fraction": 0.6580311059951782,
"avg_line_length": 15.083333015441895,
"blob_id": "86069ee0903bd2719a873d73b7e9ae033264fa35",
"content_id": "2cd4046d60957c4c81ea039987341e268861ace3",
"detected_licenses": [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 193,
"license_type": "permissive",
"max_line_length": 22,
"num_lines": 12,
"path": "/requirements.txt",
"repo_name": "ozgebarbaros/dynamic-rest",
"src_encoding": "UTF-8",
"text": "dj-database-url==0.5.0\ndjay>=0.0.8\nflake8==3.5.0\nmock==2.0.0\npsycopg2==2.7.5\npytest-cov==2.5.1\npytest-django==3.4.1\npytest-sugar==0.9.1\npytest==3.7.1\nSphinx==1.7.5\ntox-pyenv==1.1.0\ntox==3.14.6\n"
}
] | 6 |
deekshithamayya/IntelligentOnlineTherapist | https://github.com/deekshithamayya/IntelligentOnlineTherapist | 94434f583f8ee2d37958b6c4fdff823573786fe6 | c347f6db9fb63dd046904d59abf024023c3ce260 | 1992106c1019efe9a0ae23bf3b4bead252d71338 | refs/heads/master | 2020-05-21T14:23:29.803859 | 2019-06-13T10:03:34 | 2019-06-13T10:03:34 | 186,075,823 | 0 | 1 | null | 2019-05-11T01:53:18 | 2019-06-10T01:54:32 | 2019-06-10T10:00:19 | CSS | [
{
"alpha_fraction": 0.6080901622772217,
"alphanum_fraction": 0.6200265288352966,
"avg_line_length": 21.863636016845703,
"blob_id": "c35807029894ddf1ec48b07e63999397e3dbdf31",
"content_id": "bdf094c7196590428123e2333cf69484f8d04951",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1508,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 66,
"path": "/app.py",
"repo_name": "deekshithamayya/IntelligentOnlineTherapist",
"src_encoding": "UTF-8",
"text": "from flask import Flask, request, redirect, url_for , render_template\nimport json\nfrom flask import Response\nimport depression_detection_tweets\nimport face_recognition\n\napp = Flask(__name__)\nres={\"e\":0,\"s\":0,\"b\":0}\n#< a href = \"{{ url_for('logout') }}\" > logout < / a >\n\[email protected](\"/\")\ndef index():\n return render_template('index.html')\n\n\[email protected]('/sentimental_analysis', methods = ['GET'])\ndef sentimental_analysis():\n return render_template('senti-index.html')\n\[email protected]('/sentiment/<s>', methods = ['GET'])\ndef sentiment(s):\n val= depression_detection_tweets.func(s)\n res[\"s\"]=10*val;\n return render_template(\"index.html\")\n\n\[email protected]('/emotion_detection', methods = ['GET'])\ndef emotion_detection():\n try:\n val=face_recognition.func()\n if val>0.2:res[\"e\"]=0\n if val<-0.2:res[\"e\"]=10\n\n except:\n pass\n return render_template(\"index.html\")\n\n\[email protected]('/BDI', methods = ['GET'])\ndef BDI():\n return render_template('BDI.html')\n\[email protected]('/BDIEvaluate/<score>')\ndef get_javascript_data(score):\n val=(int(score)/63)\n res[\"b\"]=val*80\n print(val)\n return render_template(\"index.html\")\n\[email protected]('/Result', methods = ['GET'])\ndef Result():\n s=0\n for i in res:\n s+=(res[i])\n s=round(s,2)\n print(s)\n return render_template('Result.html',score=s)\n\n\n#@app.route('/employee',methods = ['GET'])\n#def employee()\n # print(json.dumps(employees))\n # return json.dumps(employees)\n\nif __name__ == '__main__':\n app.run()"
},
{
"alpha_fraction": 0.5572174191474915,
"alphanum_fraction": 0.5707826018333435,
"avg_line_length": 36.59060287475586,
"blob_id": "eb1628f7b99961041de9c65cbdc741562a1c1366",
"content_id": "6c1a57bf8d5757dc2b48c001a31ee6069b56840d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 5752,
"license_type": "no_license",
"max_line_length": 325,
"num_lines": 149,
"path": "/templates/index.html",
"repo_name": "deekshithamayya/IntelligentOnlineTherapist",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\r\n<html>\r\n<head>\r\n <title>Intelligent Online Therapist</title>\r\n\r\n <meta charset=\"UTF-8\">\r\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\r\n <meta http-equiv=\"X-UA-Compatible\" content=\"ie=edge\">\r\n <script type=\"text/javascript\" src=\"http://code.jquery.com/jquery-1.7.1.min.js\"></script>\r\n\r\n <link rel=\"stylesheet\" type=\"text/css\" href=\"static/css/css1.css\">\r\n</head>\r\n<body>\r\n\t<header>\r\n <nav class=\"nav_container\">\r\n <ul class=\"main_menu\">\r\n <li><a href=\"#slide1\">Home</a></li>\r\n <li><a href=\"#slide2\">Causes</a></li>\r\n <li class=\"show_sub\"><a href=\"#slide3\">Recovery</a>\r\n <!-- <ul class=\"sub_menu\">\r\n <li><a href=\"\">Submenu 1</a></li>\r\n <li><a href=\"\">Submenu 2</a></li>\r\n <li><a href=\"\">Submenu 3</a></li>\r\n </ul> -->\r\n </li>\r\n <li ><a href=\"#slide21\">Our Models</a>\r\n <!-- <ul class=\"sub_menu\">\r\n <li><a href=\"#slide21\">Beck's Inventory</a></li>\r\n <li><a href=\"#slide21\">Facial Depression Detection</a></li>\r\n <li><a href=\"#slide21\">Submenu 3</a></li> -->\r\n <!-- </ul> -->\r\n </li>\r\n <!-- <li><a href=\"\">Menu 4</a></li> -->\r\n </ul>\r\n </nav>\r\n </header>\r\n<div id=\"title\" class=\"slide header1\">\r\n <div class=\"box\">\r\n <h3>Intelligent Online Therapist</h3>\r\n</div>\r\n</div>\r\n\r\n<div id=\"slide1\" class=\"slide\">\r\n <div class=\"title\">\r\n <h1>What is Depression?</h1>\r\n <p>While we all feel sad, moody or low from time to time, some people experience these feelings intensely, for long periods of time (weeks, months or even years) and sometimes without any apparent reason. 
Depression is more than just a low mood – it's a serious condition that affects your physical and mental health.</p>\r\n </div>\r\n</div>\r\n\r\n<div id=\"slide2\" class=\"slide\">\r\n <div class=\"title\">\r\n <h1>What causes Depression?</h1>\r\n <p>Depression usually results from a combination of recent events and other longer-term or personal factors, rather than one immediate issue or event.</p><br><br>\r\n <h1>Types of Depression.</h1>\r\n <p>There are different types of depressive disorders. Symptoms can range from relatively minor (but still disabling) through to very severe, so it's helpful to be aware of the range of conditions and their specific symptoms.</p>\r\n </div>\r\n <!-- <img src=\"images/side1.jpg\">\r\n <img src=\"images/side2.jpg\"> -->\r\n</div>\r\n\r\n<div id=\"slide3\" class=\"slide\">\r\n <div class=\"title\">\r\n <h1>Recovering from a mental health condition.</h1>\r\n <p>Recovery can take time and is different for everyone. As well as getting treatment underway, you'll need to find new ways to manage and live with the changes and challenges of anxiety and/or depression.\r\nWhile psychological and/or medical treatment can help with your recovery, there are many other ways you can help yourself to get better and stay well.</p>\r\n </div>\r\n</div>\r\n\r\n<div id=\"slide21\" class=\"slide\">\r\n <div class=\"title1\">\r\n\r\n <h1>Our Depression Detection Systems</h1>\r\n \r\n <div class=\"pic-thumb-view\">\r\n <div class=\"pic-thumbnail-list\">\r\n <div class=\"thumbnail\">\r\n <a href=\"{{ url_for('sentimental_analysis') }}\">\r\n <div class=\"info-container\">\r\n <!-- <img src=\"..http://www.escaperoom.ax/wp-content/uploads/2016/02/house-wallpaper-3.jpg\" alt=\"picture\" class=\"info-thumb-pic\"> -->\r\n \r\n \t<div class=\"info-thumb-header\">\r\n \t\t<h3>  Sentiment Analysis</h3>\r\n \t</div>\r\n \r\n\r\n <!-- <section class=\"info-thumb-section\">\r\n \"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor 
incididunt ut labore et dolore magna aliqua.\"\r\n </section> -->\r\n </div>\r\n </a>\r\n </div>\r\n <div class=\"thumbnail\">\r\n <a href=\"{{ url_for('emotion_detection') }}\">\r\n <div class=\"info-container1\">\r\n <!-- <img src=\"http://4.bp.blogspot.com/-eIFYvodclHQ/UbNxYtXXGMI/AAAAAAAAIl8/UkWtmBiNlss/s1600/Dark+rain+hd.jpg\" alt=\"picture\" class=\"info-thumb-pic\"> -->\r\n <div class=\"info-thumb-header\">\r\n <h3>  Emotion Detection</h3>\r\n </div>\r\n <!-- <section class=\"info-thumb-section\">\r\n \"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\"\r\n </section> -->\r\n </div>\r\n </a>\r\n </div>\r\n <br>\r\n <div class=\"thumbnail\">\r\n <a href=\"{{ url_for('BDI') }}\">\r\n <div class=\"info-container\">\r\n <!-- <img src=\"..http://www.escaperoom.ax/wp-content/uploads/2016/02/house-wallpaper-3.jpg\" alt=\"picture\" class=\"info-thumb-pic\"> -->\r\n\r\n \t<div class=\"info-thumb-header\">\r\n \t\t<h3>  Beck's Depression Inventory</h3>\r\n \t</div>\r\n\r\n\r\n <!-- <section class=\"info-thumb-section\">\r\n \"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.\"\r\n </section> -->\r\n </div>\r\n </a>\r\n </div>\r\n\r\n </div>\r\n </div>\r\n\r\n</div>\r\n\r\n \r\n</div>\r\n<button type=\"button\" id=\"evaluate\" onclick=\"myFun()\"> EVALUATE </button>\r\n</div>\r\n\r\n <script>\r\n function myFun(){\r\n\r\n window.location.href='{{ url_for( 'Result' ) }}';\r\n\r\n }\r\n </script>\r\n\r\n<div id=\"slide4\" class=\"slide header1\">\r\n <h1>The End</h1>\r\n</div>\r\n<!-- <div style=\"background: #222;\"> \r\n <p style=\"color: white\">Designed and Developed by Deekshitha, Harshitha and Mahathi.</p>\r\n </div> -->\r\n\r\n</body>\r\n</html>\r\n"
},
{
"alpha_fraction": 0.5691117644309998,
"alphanum_fraction": 0.5915668606758118,
"avg_line_length": 31.585365295410156,
"blob_id": "efd3e10a89470f186876af2f3d65a44d557b728c",
"content_id": "7e97e3afa494c5d286a96b8d4cb937550c20777e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4008,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 123,
"path": "/face_recognition.py",
"repo_name": "deekshithamayya/IntelligentOnlineTherapist",
"src_encoding": "UTF-8",
"text": "\nimport cv2\nfrom keras.models import load_model\nimport numpy as np\nfrom statistics import mode\nimport time\n\ndef get_labels():\n return {0:'angry',1:'disgust',2:'fear',3:'happy',\n 4:'sad',5:'surprise',6:'neutral'}\n\ndef load_detection_model(model_path):\n detection_model = cv2.CascadeClassifier(model_path)\n return detection_model\n\ndef detect_faces(detection_model, gray_image_array):\n return detection_model.detectMultiScale(gray_image_array, 1.3, 5)\n\ndef draw_bounding_box(face_coordinates, image_array, color):\n x, y, w, h = face_coordinates\n cv2.rectangle(image_array, (x, y), (x + w, y + h), color, 2)\n\ndef apply_offsets(face_coordinates, offsets):\n x, y, width, height = face_coordinates\n x_off, y_off = offsets\n return (x - x_off, x + width + x_off, y - y_off, y + height + y_off)\n\ndef draw_text(coordinates, image_array, text, color, x_offset=0, y_offset=0,\n font_scale=2, thickness=2):\n x, y = coordinates[:2]\n cv2.putText(image_array, text, (x + x_offset, y + y_offset),\n cv2.FONT_HERSHEY_SIMPLEX,\n font_scale, color, thickness, cv2.LINE_AA)\n\ndef preprocess_input(x, v2=True):\n x = x.astype('float32')\n x = x / 255.0\n if v2:\n x = x - 0.5\n x = x * 2.0\n return x\n\ndef func():\n\n detection_model_path = 'haarcascade_frontalface_default.xml'\n emotion_model_path = 'model.hdf5'\n emotion_labels = get_labels()\n\n frame_window = 10\n emotion_offsets = (20, 40)\n\n face_detection = load_detection_model(detection_model_path)\n emotion_classifier = load_model(emotion_model_path, compile=False)\n emotion_target_size = emotion_classifier.input_shape[1:3]\n\n emotion_window = []\n\n cv2.namedWindow('window_frame')\n video_capture = cv2.VideoCapture(0)\n\n capture_duration = 10\n start_time = time.time()\n resarr=[]\n while (int(time.time() - start_time) < capture_duration):\n bgr_image = video_capture.read()[1]\n gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)\n rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)\n faces = 
detect_faces(face_detection, gray_image)\n\n for face_coordinates in faces:\n\n x1, x2, y1, y2 = apply_offsets(face_coordinates, emotion_offsets)\n gray_face = gray_image[y1:y2, x1:x2]\n try:\n gray_face = cv2.resize(gray_face, (emotion_target_size))\n except:\n continue\n\n gray_face = preprocess_input(gray_face, True)\n gray_face = np.expand_dims(gray_face, 0)\n gray_face = np.expand_dims(gray_face, -1)\n emotion_prediction = emotion_classifier.predict(gray_face)\n emotion_probability = np.max(emotion_prediction)\n emotion_label_arg = np.argmax(emotion_prediction)\n emotion_text = emotion_labels[emotion_label_arg]\n resarr.append(emotion_text)\n emotion_window.append(emotion_text)\n\n if len(emotion_window) > frame_window:\n emotion_window.pop(0)\n try:\n emotion_mode = mode(emotion_window)\n except:\n continue\n\n\n color = np.asarray((0, 0, 0))\n\n\n color = color.astype(int)\n color = color.tolist()\n\n draw_bounding_box(face_coordinates, rgb_image, color)\n draw_text(face_coordinates, rgb_image, emotion_mode,\n color, 0, -45, 1, 1)\n\n\n bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)\n cv2.imshow('window_frame', bgr_image)\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n n=len(resarr)\n score=0\n for i in resarr:\n print(i)\n if i==\"happy\": score+=2\n elif i==\"angry\":score-=1\n elif i==\"disgust\":score-=1\n elif i==\"fear\":score-=1\n elif i == \"sad\": score-=2\n elif i== \"neutral\": score+=0\n print(score/n)\n cv2.destroyWindow('window_frame')\n return score/n"
}
] | 3 |
orirmi/twisted-circus | https://github.com/orirmi/twisted-circus | 7b5c02ef371083497aabcc3b68b1d7446b6c759a | d1516c3ad4e9f7b1b5f2f97e540269fc8a647c51 | 0f9bf2a2a218b2d6dc8db30394c973f403f94fff | refs/heads/master | 2021-01-20T06:25:06.726370 | 2013-11-07T00:39:22 | 2013-11-07T00:39:22 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6313079595565796,
"alphanum_fraction": 0.6328527331352234,
"avg_line_length": 24.893333435058594,
"blob_id": "17b39ff573d8d2d9d8e434dd7f344081806d7f3b",
"content_id": "bc1285445228a852746b7755b45fc8b37c1cf4be",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1942,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 75,
"path": "/twisted/plugins/fd_endpoints.py",
"repo_name": "orirmi/twisted-circus",
"src_encoding": "UTF-8",
"text": "import os\nimport socket\n\nimport ctypes\nimport ctypes.util\n\nfrom zope.interface import implements\n\nfrom twisted.plugin import IPlugin\nfrom twisted.python import log\nfrom twisted.internet import endpoints, interfaces\n\ndef make_get_address_family():\n libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)\n\n getsockname = libc.getsockname\n # This assumes socklen_t is int\n getsockname.argtypes = \\\n [ctypes.c_int, ctypes.c_void_p, ctypes.POINTER(ctypes.c_int)]\n getsockname.restype = ctypes.c_int\n\n del libc\n\n af_map = dict(\n (getattr(socket, name), name)\n for name in dir(socket)\n if name.startswith('AF_'))\n\n def get_address_family(fd):\n log.msg('Resolving address family of FD %d' % fd)\n\n fd_ = ctypes.c_int(fd)\n addr = ctypes.c_ushort(0)\n len_ = ctypes.c_int(ctypes.sizeof(addr))\n\n ctypes.set_errno(0)\n res = getsockname(fd_, ctypes.byref(addr), ctypes.byref(len_))\n\n if res != 0:\n e = ctypes.get_errno()\n raise OSError(e, os.strerror(e))\n\n af = addr.value\n\n if af in af_map:\n log.msg('Found address family of FD %d: %s' % (fd, af_map[af]))\n else:\n log.msg('Unknown address family of FD %d: %d' % (fd, af))\n\n return af\n\n return get_address_family\n\nget_address_family = make_get_address_family()\ndel make_get_address_family\n\n\nclass FDServerParser(object):\n implements(IPlugin, interfaces.IStreamServerEndpointStringParser)\n\n prefix = 'fd'\n\n def parseStreamServer(self, reactor, *args, **kwargs):\n # Pass to a version with sane arguments\n return self._parseStreamServer(reactor, *args, **kwargs)\n\n def _parseStreamServer(self, reactor, fd):\n fd = int(fd)\n\n family = get_address_family(fd)\n\n return endpoints.AdoptedStreamServerEndpoint(reactor, fd, family)\n\n\nparser = FDServerParser()\n"
},
{
"alpha_fraction": 0.7063491940498352,
"alphanum_fraction": 0.7063491940498352,
"avg_line_length": 24.200000762939453,
"blob_id": "677fe5b7f705b8fe5607b429b7427f6436d31448",
"content_id": "de0b45021a3b73c9fdecc30a8078eac5a5d52420",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 252,
"license_type": "permissive",
"max_line_length": 48,
"num_lines": 10,
"path": "/demo/protocol.py",
"repo_name": "orirmi/twisted-circus",
"src_encoding": "UTF-8",
"text": "import os\n\nfrom twisted.internet import protocol\n\nclass DemoProtocol(protocol.Protocol):\n MESSAGE = 'Hello, world! %d\\n' % os.getpid()\n\n def connectionMade(self):\n self.transport.write(self.MESSAGE)\n self.transport.loseConnection()\n"
},
{
"alpha_fraction": 0.6656644344329834,
"alphanum_fraction": 0.6662657856941223,
"avg_line_length": 29.236364364624023,
"blob_id": "e60ea54b3313ea7ff3f32928b5db2f64eabb8066",
"content_id": "2e75e3eb7c3c3a513f560655b09b45085b41c4a1",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1663,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 55,
"path": "/twisted/plugins/demo_plugin.py",
"repo_name": "orirmi/twisted-circus",
"src_encoding": "UTF-8",
"text": "from zope.interface import implements\n\nfrom twisted.plugin import IPlugin\nfrom twisted.python import usage\nfrom twisted.internet import endpoints\nfrom twisted.application import internet, service\nfrom twisted.application.service import IServiceMaker\n\nclass Options(usage.Options):\n optParameters = [\n ['listen', 'l', None, '`strports` description of listening address'],\n ['watchdog', 'w', None, 'address of circus watchdog (host:port)']\n ]\n\n def postOptions(self):\n if self['listen'] is None:\n raise usage.UsageError('Missing `listen` argument')\n\n if self['watchdog']:\n if ':' not in self['watchdog']:\n raise usage.UsageError('Invalid `watchdog` argument format')\n\n\nclass DemoServiceMaker(object):\n implements(IServiceMaker, IPlugin)\n\n tapname = 'demo'\n description = 'Demo service'\n options = Options\n\n def makeService(self, options):\n # Note: Don't move this to module-level\n from twisted.internet import reactor\n from demo.factory import DemoFactory\n from demo.watchdog import CircusWatchdogService\n\n multi = service.MultiService()\n\n # 'Server' service\n endpoint = endpoints.serverFromString(reactor, options['listen'])\n server = internet.StreamServerEndpointService(endpoint, DemoFactory())\n server.setServiceParent(multi)\n\n if options['watchdog']:\n addr, port = options['watchdog'].split(':')\n port = int(port)\n\n watchdog = CircusWatchdogService(1, addr, port)\n\n watchdog.setServiceParent(multi)\n\n return multi\n\n\nserviceMaker = DemoServiceMaker()\n"
},
{
"alpha_fraction": 0.6061493158340454,
"alphanum_fraction": 0.7115666270256042,
"avg_line_length": 19.696969985961914,
"blob_id": "f22890ec89a0b10d9163932fd7c54f8285f58557",
"content_id": "0d011c3f10b483d3541f930e598a229f22df9fcc",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "INI",
"length_bytes": 683,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 33,
"path": "/demo.ini",
"repo_name": "orirmi/twisted-circus",
"src_encoding": "UTF-8",
"text": "[circus]\nendpoint = tcp://127.0.0.1:5555\npubsub_endpoint = tcp://127.0.0.1:5556\nstats_endpoint = tcp://127.0.0.1:5557\n\nhttpd = True\nhttpd_host = 127.0.0.1\nhttpd_port = 8080\n\n[env:demo]\nreactor = epoll\n\n[watcher:demo]\ncmd = twistd -r $(circus.env.reactor) -n -l - --pidfile=\nargs = demo --listen fd:$(circus.sockets.demo) --watchdog 127.0.0.1:1664\nuse_sockets = True\nnumprocesses = 4\n\nstdout_stream.class = FancyStdoutStream\nstdout_stream.color = green\nstderr_stream.class = FancyStdoutStream\nstderr_stream.color = red\n\n[socket:demo]\nhost = 127.0.0.1\nport = 2323\n\n[plugin:watchdog]\nuse = circus.plugins.watchdog.WatchDog\nloop_rate = 3\nwatchers_regex = demo\nip = 127.0.0.1\nport = 1664\n"
},
{
"alpha_fraction": 0.6835442781448364,
"alphanum_fraction": 0.7172995805740356,
"avg_line_length": 21.935483932495117,
"blob_id": "79cd4395df1632bda5ec2489e130c38e1b266984",
"content_id": "6bf59d478420352662df825f71ad038bf8278a3e",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 711,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 31,
"path": "/README.md",
"repo_name": "orirmi/twisted-circus",
"src_encoding": "UTF-8",
"text": "twisted-circus\n==============\n\nDemo of a Twisted service running behind a socket managed by Circus.\n\nRequirements\n------------\n- Python\n- Twisted\n- *circus* & *circus-web*\n\nRunning\n-------\nFor demonstration purposes, a Circus configuration file is provided, see\n`demo.ini`. This will launch the Twisted demo service on `127.0.0.1:2323`,\nand the *circus-web* interface on `127.0.0.1:8080`.\n\nYou can launch everything using\n\n```\n$ circusd demo.ini\n```\n\nOnce the monitor & service processes have been launched (you should get some\nTwisted logging in green on your console), you can test the service using\n\n```\n$ telnet localhost 2323\n```\n\nThe service returns a simple message after which it closes the connection.\n"
},
{
"alpha_fraction": 0.8142856955528259,
"alphanum_fraction": 0.8142856955528259,
"avg_line_length": 22.33333396911621,
"blob_id": "d56cfd5fae6220cebc1e734098d470d1c946c756",
"content_id": "2c6b6cf91bd267edaeb8f143f1547c5bdfeb4ca4",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 140,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 6,
"path": "/demo/factory.py",
"repo_name": "orirmi/twisted-circus",
"src_encoding": "UTF-8",
"text": "from twisted.internet import protocol\n\nimport demo.protocol\n\nclass DemoFactory(protocol.Factory):\n protocol = demo.protocol.DemoProtocol\n"
},
{
"alpha_fraction": 0.5895638465881348,
"alphanum_fraction": 0.5895638465881348,
"avg_line_length": 25.204082489013672,
"blob_id": "3ed7b952893b3034b96e358a98e4d4616d428f1c",
"content_id": "e62f2934e78dd2da29a55e4c265330d0da6a39ec",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1284,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 49,
"path": "/demo/watchdog.py",
"repo_name": "orirmi/twisted-circus",
"src_encoding": "UTF-8",
"text": "import os\nimport time\nimport socket\n\nfrom twisted.python import log\nfrom twisted.application import internet\n\n# TODO *In theory*, this could could block the mainloop\n# So, it should be rewritten to use UDP 'in Twisted'.\n\nclass CircusWatchdogService(internet.TimerService):\n _pid = None\n _socket = None\n\n def __init__(self, step, host, port, make_message=None):\n self._host = host\n self._port = port\n self._make_message = make_message or self._default_make_message\n\n internet.TimerService.__init__(self, step, self.tick)\n\n def tick(self):\n if self._socket is None:\n self._socket = self._make_socket()\n\n msg = self._make_message()\n\n try:\n self._socket.sendto(msg, (self._host, self._port))\n except:\n log.err(None, 'Failed to send watchdog message')\n\n try:\n self._socket.close()\n except:\n log.err(None, 'Failed to close watchdog socket')\n\n self._socket = None\n\n def _make_socket(self):\n return socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n\n def _default_make_message(self):\n if not self._pid:\n self._pid = os.getpid()\n\n msg = '%d;%f' % (self._pid, time.time())\n\n return msg\n"
}
] | 7 |
esther24/Sparks_GRIP_internship | https://github.com/esther24/Sparks_GRIP_internship | f6a0f025fc7206524a0dd039bbcb4e701d22e102 | 875c17bb9dc34db4f333cc354a0338fffa898d26 | 4eae4727b362a47971a1ce61213b6b4341a1ec7c | refs/heads/main | 2023-08-28T18:05:27.050751 | 2021-10-19T13:32:08 | 2021-10-19T13:32:08 | 418,934,529 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7105262875556946,
"alphanum_fraction": 0.8157894611358643,
"avg_line_length": 18,
"blob_id": "fdcc2e34e44445eb65f78db710ea73f040ec5e7f",
"content_id": "e4bb2ff02ec1d0c9995d0c4d56033adbf3969967",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 38,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 2,
"path": "/README.md",
"repo_name": "esther24/Sparks_GRIP_internship",
"src_encoding": "UTF-8",
"text": "# Sparks_GRIP_internship\nOctober 2021\n"
},
{
"alpha_fraction": 0.6009538769721985,
"alphanum_fraction": 0.6020137667655945,
"avg_line_length": 28.967212677001953,
"blob_id": "7a7d6c6da19f8f7439d8ebc1919ee6d295e23d34",
"content_id": "d5e08cdb1c082b966bd074975eecac0c05adfd0f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1887,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 61,
"path": "/app.py",
"repo_name": "esther24/Sparks_GRIP_internship",
"src_encoding": "UTF-8",
"text": "from flask import Flask\r\nfrom flask import Flask, render_template,request\r\nfrom flask_mysqldb import MySQL\r\n\r\napp = Flask(__name__)\r\n\r\napp.config['MYSQL_HOST'] = 'localhost'\r\napp.config['MYSQL_USER'] = 'root'\r\napp.config['MYSQL_PASSWORD'] = 'ec24*01/01_20'\r\napp.config['MYSQL_DB'] = 'bankDB'\r\n\r\nmysql = MySQL(app)\r\n\r\[email protected](\"/\")\r\ndef main():\r\n return render_template('homepage.html')\r\n\r\[email protected]('/customer') \r\ndef example(): \r\n cur = mysql.connection.cursor()\r\n result = cur.execute(\"SELECT * FROM customer\") \r\n if result > 0:\r\n customer = cur.fetchall()\r\n return render_template(\"customer.html\", customer=customer)\r\n\r\[email protected](\"/transferhistory\")\r\ndef transhistory():\r\n cur = mysql.connection.cursor()\r\n result = cur.execute(\"SELECT * FROM history\")\r\n if result > 0:\r\n history = cur.fetchall()\r\n return render_template('transhistory.html', history=history)\r\n\r\n\r\[email protected](\"/transfer\",methods=['GET','POST'])\r\ndef transaction():\r\n if request.method == 'POST':\r\n sacc = request.form.get('sacc')\r\n racc = request.form.get('racc')\r\n amt = request.form.get('amt')\r\n sname = request.form.get('sname')\r\n rname = request.form.get('rname')\r\n amount = request.form.get('amount')\r\n cur = mysql.connection.cursor()\r\n cur.execute(\"UPDATE customer SET balance = balance - %s WHERE account = %s\" , (amt,sacc))\r\n cur.execute(\"UPDATE customer SET balance = balance + %s WHERE account = %s\" , (amt,racc))\r\n cur.execute(\"INSERT INTO history (sender , receiver , amount) VALUES (%s,%s,%s)\" , (sname,rname,amount))\r\n mysql.connection.commit()\r\n cur.close()\r\n print(\"success\")\r\n else:\r\n print(\"did not insert\")\r\n return render_template('transfer.html' )\r\n\r\n \r\n\r\n\r\n \r\nif __name__ == \"__main__\":\r\n app.debug = True\r\n app.run()"
},
{
"alpha_fraction": 0.5887573957443237,
"alphanum_fraction": 0.5887573957443237,
"avg_line_length": 22.285715103149414,
"blob_id": "ca1429738d373f1c4a0185767cb6c1685912baa4",
"content_id": "a0fe6af5f4d2fe2340001dc3972d9717c8b22857",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 338,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 14,
"path": "/fuctions.js",
"repo_name": "esther24/Sparks_GRIP_internship",
"src_encoding": "UTF-8",
"text": "function showTransferForm(){\r\n\r\n var submission;\r\n if (confirm(\"Do you wish to transfer money?\")) {\r\n location.href = \"/transfer\"\r\n } else {\r\n alert(\"okay! Have a good day!\")\r\n }\r\n document.getElementById(\"transfer\").innerHTML = submission;\r\n}\r\n\r\n function msg(){\r\n alert(\"Transfer was a success!\");\r\n}"
}
] | 3 |
Noahs-ARK/semeval-2014 | https://github.com/Noahs-ARK/semeval-2014 | de13a26c573d8a5dc39df18b1d5ce8a2ed297901 | eb402dd5f996fcaf4d5310fe6efa9138bfac199a | 89a435e2e493bf9ee169c6492422ac006dc025bb | refs/heads/master | 2021-03-24T12:33:08.305650 | 2017-06-06T21:48:52 | 2017-06-06T21:48:52 | 16,464,570 | 2 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.7135416865348816,
"alphanum_fraction": 0.7309027910232544,
"avg_line_length": 31,
"blob_id": "3837ad3d871f4ecb308a98810ce2b94ae12b96d5",
"content_id": "b843ad38a59b0e963a3d1ae12d120a192c7d5fa4",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 576,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 18,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/lr/fe/LinearOrderFeatures.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.lr.fe;\n\nimport edu.cmu.cs.ark.semeval2014.lr.fe.FE.FeatureAdder;\nimport util.U;\n\npublic class LinearOrderFeatures extends FE.FeatureExtractor implements FE.EdgeFE {\n\n\t@Override\n\t/**\n\t * Feature for the directed linear distance between a child and its parent.\n\t * Motivation: ARG1 usually on the left, ARG2 on the right, etc.\n\t */\n\tpublic void features(int srcTokenIdx, int destTokenIdx, FeatureAdder fa) {\n\t\tint dist = srcTokenIdx - destTokenIdx;\n\t\tfa.add(U.sf(\"lin:%s\", dist));\n\t\tfa.add(U.sf(\"left:%s\", (srcTokenIdx < destTokenIdx)));\n\t}\n}\n"
},
{
"alpha_fraction": 0.5584415793418884,
"alphanum_fraction": 0.6103895902633667,
"avg_line_length": 18.25,
"blob_id": "e2bd7fb2e3c19f9514d3e82bb454e1ee07318ed1",
"content_id": "0ddc6a59e03a49fc169f9b01c311116052f2e027",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 77,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 4,
"path": "/scripts/view.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nset -eu\npython $(dirname $0)/view.py < $1 > $1.html\nopen $1.html\n"
},
{
"alpha_fraction": 0.6516516804695129,
"alphanum_fraction": 0.6559059023857117,
"avg_line_length": 34.6875,
"blob_id": "d4da67dbf2bd9f841780e2a63f8c2084b4ea8f50",
"content_id": "5ca01df5dc5aec036d29e97fd571ce741a130a3e",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3996,
"license_type": "permissive",
"max_line_length": 158,
"num_lines": 112,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/prune/Viterbi.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.prune;\n\nimport java.util.*;\n\n\n//State: Holds a table that has the features for a sentence\n//\t\t\tHolds the weights \n//Actions: builds the table of scores, then traverses it.\npublic class Viterbi {\n\tMap<String, Double> w;\n\tSet<String> featuresForPrinting;\n\t\n\tpublic Viterbi(Map<String, Double> weights){\n\t\tw = weights;\n\t}\n\n\t// takes a table containing the features for the sentence\n\t// returns a string[] that is the tag sequence for the sentence\n\tpublic String[] decode(List<Map<String, Set<String>>> sentenceFeats){//, ArrayList<String[]> sentence) {\n\t\t// compute score for each element in the table, store in this backpointer table\n\t\tList<Map<String, Double>> scores = new ArrayList<Map<String, Double>>();\n\t\tList<Map<String, String>> backpointers = new ArrayList<Map<String, String>>();\n\t\tMap<String, Double> prevScore = new TreeMap<String, Double>();\n\t\tprevScore.put(\"<START>\", 0.0);\n\t\t\n\t\tfor (int i = 1; i < sentenceFeats.size(); i++){\n\t\t\tMap<String, Set<String>> wordFeats = sentenceFeats.get(i);\n\t\t\t\n\t\t\t// to instantiate the current score and backpointer\n\t\t\tMap<String, Double> score = new TreeMap<String, Double>();\n\t\t\tMap<String, String> backpointer = new TreeMap<String, String>();\n\t\t\t\n\t\t\tfor (String curTag : wordFeats.keySet()){\n\t\t\t\tif (i == sentenceFeats.size() - 1)\n\t\t\t\t\tcurTag = \"<STOP>\";\n\t\t\t\tfor (String prevTag : sentenceFeats.get(1).keySet()){\n\t\t\t\t\tif (i == 1)\n\t\t\t\t\t\tprevTag = \"<START>\";\n\t\t\t\t\t// want to pass the feature vector for the current word, given the tag for the current word\n\t\t\t\t\t// to the scorer. 
Will also pass the current tag, and the previous tag.\n\t\t\t\t\tdouble emission = FeatureVector.score(wordFeats.get(curTag), prevTag, w);\n\t\t\t\t\t//System.out.println(prevTag + \"_\" + curTag);\n\t\t\t\t\tdouble transition;\n\t\t\t\t\tif (i == 1 || i == sentenceFeats.size() - 1)\n\t\t\t\t\t\ttransition = 0;\n\t\t\t\t\telse\n\t\t\t\t\t\ttransition = w.get(prevTag + \"_\" + curTag);\n\t\t\t\t\tdouble curScore = emission + transition;\n\t\t\t\t\t\n\t\t\t\t\tdouble scoreForPrevTag = prevScore.get(prevTag);\n\t\t\t\t\t\n\t\t\t\t\tupdateScoreAndBackpointer(curScore + scoreForPrevTag, score, backpointer, curTag, prevTag);\n\t\t\t\t\t/*\n\t\t\t\t\tSystem.out.println(\"curTag: \" + curTag);\n\t\t\t\t\tSystem.out.println(\"prevTag: \" + prevTag);\n\t\t\t\t\tSystem.out.println(\"curScore: \" + curScore);\n\t\t\t\t\tSystem.out.println(\"prevScore: \" + prevScore);\n\t\t\t\t\tSystem.out.print(\"weights for curScore: \");\n\t\t\t\t\tfor (String curFeat : wordFeats.get(curTag)){\n\t\t\t\t\t\tSystem.out.print(curFeat + \":\" + w.get(curFeat));\n\t\t\t\t\t}\n\t\t\t\t\tSystem.out.println();\n\t\t\t\t\tSystem.out.println(\"current feats: \" + wordFeats.get(curTag));\n\t\t\t\t\tSystem.out.println();\n\t\t\t\t\t*/\n\t\t\t\t}\n\t\t\t\tif (i == sentenceFeats.size() - 1)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\t//System.out.println();\n\t\t\tscores.add(score);\n\t\t\tprevScore = score;\n\t\t\tbackpointers.add(backpointer);\n\t\t}\n\t\t//System.out.println(\"\\n\\n\");\n\t\treturn followBackpointers(backpointers, sentenceFeats);//, sentence);\n\t\t\t\t\n\t}\n\n\tprivate String[] followBackpointers(List<Map<String, String>> backpointers, List<Map<String, Set<String>>> sentenceFeats){//, ArrayList<String[]> sentence) {\n\t\tString[] tags = new String[backpointers.size()];\n\t\tString curTag = \"<STOP>\";\n\t\t//System.out.println(\"Made it here\");\n\t\tfor (int i = backpointers.size() - 1; i >= 0; i--){\n\t\t\t// adding stuff to the set for printing\n\n\t\t//\tSystem.out.println(\"Made it 
insidet he loop!\" + i);\n\t\t\ttags[i] = backpointers.get(i).get(curTag);\n\t\t\tcurTag = backpointers.get(i).get(curTag);\n\t\t\t//for (String f : sentenceFeats.get(i).get(curTag)) {\n\t\t\t\t//if (w.containsKey(f)){\n\t\t\t\t//\tfeaturesForPrinting.add(f + \" \" + w.get(f));\n\t\t\t\t//}\n\t\t\t//}\n\t\t}\n\t\t\n\t\t//LoadData.printFeaturesUsed(sentence, featuresForPrinting);\n\t\t\n\t\treturn tags;\n\t}\n\n\tprivate void updateScoreAndBackpointer(double curScore, Map<String, Double> score, Map<String, String> backpointer,\n\t\t\tString curTag, String prevTag) {\n\t\tif (score.keySet().size() == 0 || !score.containsKey(curTag) || \n\t\t\t\tcurScore > score.get(curTag)){\n\t\t\tscore.put(curTag, curScore);\n\t\t\tbackpointer.put(curTag, prevTag);\n\t\t}\n\t\t\n\t}\n\t\n}"
},
{
"alpha_fraction": 0.6364678740501404,
"alphanum_fraction": 0.6513761281967163,
"avg_line_length": 31.296297073364258,
"blob_id": "a71345fa068f574fe09029c0b1ddb4b5f3b75c18",
"content_id": "c12a752ec976fbcf2f391e06bc6e2965099d66fa",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 872,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 27,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/lr/MiscUtil.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.lr;\n\nimport java.lang.management.ManagementFactory;\n\npublic class MiscUtil {\n\t/** http://stackoverflow.com/questions/35842/how-can-a-java-program-get-its-own-process-id */\n\tpublic static String getProcessId(final String fallback) {\n\t // Note: may fail in some JVM implementations\n\t // therefore fallback has to be provided\n\n\t // something like '<pid>@<hostname>', at least in SUN / Oracle JVMs\n\t final String jvmName = ManagementFactory.getRuntimeMXBean().getName();\n\t final int index = jvmName.indexOf('@');\n\n\t if (index < 1) {\n\t // part before '@' empty (index = 0) / '@' not found (index = -1)\n\t return fallback;\n\t }\n\n\t try {\n\t return Long.toString(Long.parseLong(jvmName.substring(0, index)));\n\t } catch (NumberFormatException e) {\n\t // ignore\n\t }\n\t return fallback;\n\t}\n}\n"
},
{
"alpha_fraction": 0.6568810939788818,
"alphanum_fraction": 0.6706225275993347,
"avg_line_length": 28.286584854125977,
"blob_id": "4f832f6ab9fad23174f833f68bc2906ac1882c4e",
"content_id": "08418e4e6f4045efd3ebed028bb5286e65a7f5fe",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 4803,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 164,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/ner/StanfordNER.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.ner;\n\nimport java.io.BufferedReader;\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.io.PrintStream;\nimport java.io.UnsupportedEncodingException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Properties;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\n\nimport edu.stanford.nlp.ie.AbstractSequenceClassifier;\nimport edu.stanford.nlp.ie.crf.CRFClassifier;\nimport edu.stanford.nlp.ling.CoreAnnotations;\nimport edu.stanford.nlp.ling.CoreLabel;\n\n/*\n * Run Stanford NER on .sdp-formatted files. Print to stdout in format:\n * \n * <sentenceId> <tokenId> <token> <3-class NER label> <7-class NER label>\n * \n * Requires stanford-ner.jar in classpath, english.all.3class.distsim.crf.ser.gz and \n * english.muc.7class.distsim.crf.ser.gz in classifiers/\n * \n * Download models from: http://nlp.stanford.edu/software/stanford-ner-2014-01-04.zip\n * \n */\npublic class StanfordNER {\n\n\tstatic PrintStream out;\n\n\tstatic {\n\t\ttry {\n\t\t\tout = new PrintStream(System.out, true, \"UTF-8\");\n\t\t} catch (UnsupportedEncodingException e) {\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n\n\t@SuppressWarnings(\"unchecked\")\n\tpublic static void read(String infile) {\n\t\tString serializedClassifier = \"classifiers/english.all.3class.distsim.crf.ser.gz\";\n\t\tString serializedClassifier7 = \"classifiers/english.muc.7class.distsim.crf.ser.gz\";\n\n\t\tProperties basicProperties = new Properties();\n\t\tbasicProperties.put(\"tokenizerFactory\",\n\t\t\t\t\"edu.stanford.nlp.process.WhitespaceTokenizer\");\n\t\tbasicProperties.put(\"tokenizerOptions\", \"tokenizeNLs=true\");\n\n\t\tAbstractSequenceClassifier<CoreLabel> classifier = null;\n\t\tAbstractSequenceClassifier<CoreLabel> classifier7 = null;\n\t\t\n\t\ttry {\n\t\t\tclassifier = (AbstractSequenceClassifier<CoreLabel>) CRFClassifier\n\t\t\t\t\t.getClassifier(serializedClassifier, 
basicProperties);\n\t\t\tclassifier7 = (AbstractSequenceClassifier<CoreLabel>) CRFClassifier\n\t\t\t\t\t.getClassifier(serializedClassifier7, basicProperties);\n\n\t\t} catch (ClassCastException e1) {\n\t\t\te1.printStackTrace();\n\t\t} catch (IOException e1) {\n\t\t\te1.printStackTrace();\n\t\t} catch (ClassNotFoundException e1) {\n\t\t\te1.printStackTrace();\n\t\t}\n\n\t\tString sentenceIdPattern = \"^#(\\\\d+)\";\n\t\tint lastId = -1;\n\t\tint size = 0;\n\t\ttry {\n\t\t\tBufferedReader in1 = new BufferedReader(new InputStreamReader(\n\t\t\t\t\tnew FileInputStream(infile), \"UTF-8\"));\n\t\t\tString str1;\n\n\t\t\tString line = \"\";\n\t\t\tList<String> tokenCheck = new ArrayList<String>();\n\t\t\t\n\t\t\twhile ((str1 = in1.readLine()) != null) {\n\n\t\t\t\tMatcher sentenceIdMatcher = Pattern.compile(sentenceIdPattern).matcher(str1);\n\t\t\t\tboolean match = sentenceIdMatcher.find();\n\n\t\t\t\tif (str1.matches(\"^\\\\d.*\")) {\n\n\t\t\t\t\tString[] cols = str1.trim().split(\"\\t\");\n\t\t\t\t\tString token = cols[1];\n\t\t\t\t\tline += token + \" \";\n\t\t\t\t\ttokenCheck.add(token);\n\t\t\t\t\tsize++;\n\t\t\t\t} else if (match && size > 0) {\n\n\t\t\t\t\tint id = Integer.valueOf(sentenceIdMatcher.group(1));\n\n\t\t\t\t\tList<CoreLabel> lcl = new ArrayList<CoreLabel>();\n\t\t\t\t\tList<CoreLabel> lcl7 = new ArrayList<CoreLabel>();\n\t\t\t\t\tfor (List<CoreLabel> tmp : classifier.classify(line.trim())) {\n\t\t\t\t\t\tlcl.addAll(tmp);\n\t\t\t\t\t}\n\t\t\t\t\tfor (List<CoreLabel> tmp : classifier7\n\t\t\t\t\t\t\t.classify(line.trim())) {\n\t\t\t\t\t\tlcl7.addAll(tmp);\n\t\t\t\t\t}\n\n\t\t\t\t\tfor (int i = 0; i < size; i++) {\n\t\t\t\t\t\tCoreLabel cl = lcl.get(i);\n\t\t\t\t\t\tCoreLabel cl7 = lcl7.get(i);\n\n\t\t\t\t\t\tString token = cl.originalText();\n\t\t\t\t\t\tassert token.equals(tokenCheck.get(i));\n\t\t\t\t\t\tString ner = cl\n\t\t\t\t\t\t\t\t.get(CoreAnnotations.AnswerAnnotation.class);\n\t\t\t\t\t\tString ner7 = 
cl7\n\t\t\t\t\t\t\t\t.get(CoreAnnotations.AnswerAnnotation.class);\n\t\t\t\t\t\tout.println(lastId + \"\\t\" + (i + 1) + \"\\t\" + token\n\t\t\t\t\t\t\t\t+ \"\\t\" + ner + \"\\t\" + ner7);\n\t\t\t\t\t}\n\t\t\t\t\tline = \"\";\n\t\t\t\t\tlastId = id;\n\t\t\t\t\tsize = 0;\n\t\t\t\t\ttokenCheck = new ArrayList<String>();\n\t\t\t\t} else if (match) {\n\t\t\t\t\tint id = Integer.valueOf(sentenceIdMatcher.group(1));\n\t\t\t\t\tlastId = id;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tin1.close();\n\n\t\t\tList<CoreLabel> lcl = new ArrayList<CoreLabel>();\n\t\t\tList<CoreLabel> lcl7 = new ArrayList<CoreLabel>();\n\t\t\tfor (List<CoreLabel> tmp : classifier.classify(line.trim())) {\n\t\t\t\tlcl.addAll(tmp);\n\t\t\t}\n\t\t\tfor (List<CoreLabel> tmp : classifier7.classify(line.trim())) {\n\t\t\t\tlcl7.addAll(tmp);\n\t\t\t}\n\n\t\t\tfor (int i = 0; i < size; i++) {\n\t\t\t\tCoreLabel cl = lcl.get(i);\n\t\t\t\tCoreLabel cl7 = lcl7.get(i);\n\n\t\t\t\tString token = cl.originalText();\n\n\t\t\t\tassert token.equals(tokenCheck.get(i));\n\t\t\t\tString ner = cl.get(CoreAnnotations.AnswerAnnotation.class);\n\t\t\t\tString ner7 = cl7.get(CoreAnnotations.AnswerAnnotation.class);\n\t\t\t\tout.println(lastId + \"\\t\" + (i + 1) + \"\\t\" + token + \"\\t\" + ner\n\t\t\t\t\t\t+ \"\\t\" + ner7);\n\t\t\t}\n\n\t\t} catch (Exception e) {\n\t\t\te.printStackTrace();\n\t\t}\n\t}\n\n\tpublic static void main(String[] args) throws IOException {\n\t\tString file = args[0];\n\t\tread(file);\n\t}\n\n}\n"
},
{
"alpha_fraction": 0.6071428656578064,
"alphanum_fraction": 0.6428571343421936,
"avg_line_length": 45.66666793823242,
"blob_id": "7777184591a5fa64b1a8d147f4d1d8a0a19daf9b",
"content_id": "76d5200bf7795a023a33168d44beec6141b88ae8",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 140,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 3,
"path": "/scripts/sbt",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\ntest -f ~/.sbtconfig && . ~/.sbtconfig\nexec java -Xmx1g -XX:MaxPermSize=256m ${SBT_OPTS} -jar $(dirname $0)/sbt-launch.jar \"$@\"\n"
},
{
"alpha_fraction": 0.47280335426330566,
"alphanum_fraction": 0.5564853549003601,
"avg_line_length": 52.11111068725586,
"blob_id": "a229fa86ef9dd39bf9b7f57e16ae793d6e72ba8f",
"content_id": "34f2e9d5c5e8010320d205699161b05b2d3cec0d",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 478,
"license_type": "permissive",
"max_line_length": 128,
"num_lines": 9,
"path": "/scripts/make_splits.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "DATADIR=\"/cab1/corpora/LDC2013E167/\"\n\nfor f in dm pcedt pas; do\n for ext in sdp sdp.dependencies; do\n cat ${DATADIR}${f}.${ext} | awk 'BEGIN{s=0} /^#/{ s+=1 } s <= 27200' > ${DATADIR}splits/train.${f}.${ext} # 80% train\n cat ${DATADIR}${f}.${ext} | awk 'BEGIN{s=0} /^#/{ s+=1 } s > 27200 && s <= 30600' > ${DATADIR}splits/dev.${f}.${ext} # 10% dev\n cat ${DATADIR}${f}.${ext} | awk 'BEGIN{s=0} /^#/{ s+=1 } s > 30600' > ${DATADIR}splits/test.${f}.${ext} # 10% test\n done\ndone\n"
},
{
"alpha_fraction": 0.5288590788841248,
"alphanum_fraction": 0.5416107177734375,
"avg_line_length": 26.55555534362793,
"blob_id": "98cea3b8477a46267fb21a8d73b30f9315925843",
"content_id": "8757503550879d955b98fff0e6c61c7a20eb3bef",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1490,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 54,
"path": "/scripts/view.py",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport sys\n\ndef iter_sents():\n cur = []\n sentid = None\n for line in sys.stdin:\n line=line.strip()\n if not line:\n yield sentid,cur\n sentid=None\n cur=[]\n if len(line.split())==1 and line.startswith(\"#\"):\n sentid = line.lstrip(\"#\")\n cur=[]\n continue\n row = line.split('\\t')\n cur.append(row)\n if cur:\n yield sentid,cur\n\nprint \"\"\"\n <meta content=\"text/html; charset=utf-8\" http-equiv=\"Content-Type\"/>\n \"\"\"\n\nfor numsent, (sentid,rows) in enumerate(iter_sents()):\n if numsent>10: break\n\n tokids = [row[0] for row in rows]\n words = [row[1] for row in rows]\n lemmas = [row[2] for row in rows]\n poses = [row[3] for row in rows]\n topmarkers = [row[4] for row in rows]\n predmarkers= [row[5] for row in rows]\n T = len(rows)\n\n # zero-indexed\n pred_triggers = [i for i in range(T) if predmarkers[i]=='+']\n assert len(rows[0])-6 == len(pred_triggers)\n\n # print sentid\n # print predmarkers\n\n header_info = ['','word','lemma','pos','istop','ispred']\n for i in pred_triggers:\n header_info.append(\"%s:%d\" % (words[i], i+1))\n\n print \"#\" + sentid\n # print \"<table>\"\n print \"<table cellpadding=3 border=1 cellspacing=0 width='100%'>\"\n print \"<tr>\", ' '.join([\"<th>%s\" % x for x in header_info])\n for row in rows:\n print \"<tr>\", ' '.join([\"<td>%s\" % x for x in row])\n print \"</table>\"\n\n\n"
},
{
"alpha_fraction": 0.6517412662506104,
"alphanum_fraction": 0.6915422677993774,
"avg_line_length": 39.20000076293945,
"blob_id": "6b97b4468c1b50dff7524a4a44279c2de11932c2",
"content_id": "cfe8e6e643af8de4e03c115734573487a6203a5a",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 201,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 5,
"path": "/scripts/eval.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "# first arg: gold\n# second arg: pred\n\n# sbt logs to stdout, and Evaluator prints to stderr. we only care about Evaluator\n`dirname $0`/sbt \"run-main sdp.tools.Evaluator ${1} ${2}\" 3>&1 1>/dev/null 2>&3\n"
},
{
"alpha_fraction": 0.5132890343666077,
"alphanum_fraction": 0.514950156211853,
"avg_line_length": 23.079999923706055,
"blob_id": "eec09838c36864272f5a0f386b8648eabf957a1d",
"content_id": "58651905210186a9a45632f2448006fdec5c7e09",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 602,
"license_type": "permissive",
"max_line_length": 59,
"num_lines": 25,
"path": "/errorAnalysis/sdputils.py",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "# annotation iterator from brendan's view.py\nimport sys\n\n\ndef iter_sents(filename=None):\n in_file = open(filename) if filename else sys.stdin\n\n cur = []\n sentid = None\n for line in in_file:\n line = line.strip()\n if not line:\n yield sentid, cur\n sentid = None\n cur = []\n if len(line.split()) == 1 and line.startswith(\"#\"):\n sentid = line.lstrip(\"#\")\n cur = []\n continue\n row = line.split('\\t')\n cur.append(row)\n if cur:\n yield sentid, cur\n if filename:\n in_file.close()\n"
},
{
"alpha_fraction": 0.6710452437400818,
"alphanum_fraction": 0.6795834302902222,
"avg_line_length": 28.35812759399414,
"blob_id": "af20cbf69b3c92f8417f9211d247e607fa2169e6",
"content_id": "fd35e269aea76c4cbc16fcb790940e0f86402ef9",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 10658,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 363,
"path": "/src/main/java/mltools/classifier/BinaryLogreg.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package mltools.classifier;\n\nimport java.io.BufferedReader;\nimport java.io.BufferedWriter;\nimport java.io.IOException;\nimport java.io.PrintWriter;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport mltools.classifier.FeatureExtractor.FeatureAdder;\n\n\nimport util.Arr;\nimport util.BasicFileIO;\nimport util.LBFGS;\nimport util.U;\nimport util.Vocabulary;\nimport util.LBFGS.Status;\nimport util.misc.Pair;\nimport util.misc.Triple;\n\npublic class BinaryLogreg<ExampleType> {\n\tpublic List<FeatureExtractor<ExampleType>> featureExtractors;\n\tpublic Vocabulary featVocab;\n\n\tpublic double biasCoef = 0;\n\t/** size J */\n\tpublic double[] featCoefs;\n\n\t/** client is responsible for filling this up. String is the label. */\n\tpublic List<BinaryLabeledExample<ExampleType>> trainingData;\n\tprivate List<ModelExample> trainingModelExamples;\n\tpublic double penalty = 1;\n\tpublic int maxIter = 500;\n\tpublic double tol = 1e-5;\n\n\tpublic double l1penalty() { return 0; }\n\tpublic double l2penalty() { return penalty; }\n\t\n\t/** verbose about training? */\n\tpublic boolean verbose = false;\n\t/** verbose about feature extraction? */\n\tpublic boolean dumpMode = false;\n\t\n\tpublic BinaryLogreg() {\n\t\tfeatureExtractors = new ArrayList<>();\n\t\tfeatVocab = new Vocabulary();\n\t\ttrainingData = new ArrayList<>();\n\t}\n\t\n\tpublic void addTrainingExample(boolean label, ExampleType ex) {\n\t\ttrainingData.add(BinaryLabeledExample.make(label, ex));\n\t}\n\t\n\t//// inference.... 
is pretty simple for this model ////\n\t\n\t/** the labels' unnorm logprobs (negative energy)\n\t * returns vector \\in R^K */\n\tdouble calcLabelScore(ModelExample ex) {\n\t\tdouble s = biasCoef;\n\t\tfor (Pair<Integer,Double> fv : ex.observationFeatures) {\n\t\t\t\ts += fv.second * featCoefs[fv.first];\n\t\t}\n\t\treturn s;\n\t}\n\t/** returns vector \\in Simplex(K) */\n\tdouble calcLabelProb(ModelExample ex) {\n\t\tdouble s = calcLabelScore(ex);\n\t\treturn 1.0 / (1 + Math.exp(-s));\n\t}\n\tpublic double predictLabelProb(ExampleType ex) {\n\t\tModelExample mx = new ModelExample();\n\t\textractFeatures(ex, mx);\n\t\treturn calcLabelProb(mx);\n\t}\n\t\n\t//// feature dimension reshaping management ... only to make optimizer software happier. ////\n\t\n\tpublic int biasFeature_to_flatID() {\n\t\treturn 0;\n\t}\n\tpublic int observationFeature_to_flatID(int featID) {\n\t\treturn 1 + featID;\n\t}\n\tpublic int flatIDsize() {\n\t\treturn 1 + featVocab.size();\n\t}\n\t\n\tpublic double[] convertCoefsToFlat() {\n\t\tdouble[] flatCoefs = new double[flatIDsize()];\n\t\tflatCoefs[biasFeature_to_flatID()] = biasCoef;\n\t\tfor (int feat=0; feat < featVocab.size(); feat++) {\n\t\t\tflatCoefs[observationFeature_to_flatID(feat)] = featCoefs[feat];\n\t\t}\n\t\treturn flatCoefs;\n\t}\n\t\n\tpublic void setCoefsFromFlat(double[] flatCoefs) {\n\t\tbiasCoef = flatCoefs[0];\n\t\tfor (int feat=0; feat < featVocab.size(); feat++) {\n\t\t\tfeatCoefs[feat] = flatCoefs[observationFeature_to_flatID(feat)];\n\t\t}\n\t}\n\t\n\t////////////////// only for training mode /////////////////////\n\n\t\n\tpublic void lockdownVocabAndAllocateCoefs() {\n\t\tfeatVocab.lock();\n\t\tU.pf(\"%d feature types\\n\", featVocab.size());\n\t\tallocateCoefs(featVocab.size());\n\t}\n\n\tpublic void allocateCoefs(int numFeats) {\n\t\tfeatCoefs = new double[numFeats];\n\t}\n\n\tpublic void sgdLoop(int numiter, double rate) {\n\t\tfor (int iter=0; iter<numiter; iter++) {\n\t\t\tdouble ll = 0;\n\t\t\tfor 
(ModelExample ex : trainingModelExamples) {\n\t\t\t\tll += sgdUpdate(ex, rate);\n\t\t\t}\n\t\t\tif (verbose) {\n\t\t\t\tU.pf(\"sgd iter: LL %g\\n\", ll);\t\n\t\t\t} else {\n\t\t\t\tU.pf(\"s\");\t\n\t\t\t}\n\t\t}\n\t}\n\n\t/** the numberized sparse <ExT>featurevector representation of an example */\n\tprivate static class ModelExample {\n\t\tpublic int label = -1;\n\t\tpublic ArrayList< Pair<Integer, Double>> observationFeatures = new ArrayList<>();\n\t}\n\t\n\tpublic void optimizationLoop() {\n\t\tdouble[] flatCoefs = convertCoefsToFlat();\n\t\tLBFGS.Params params = new LBFGS.Params();\n\t\tparams.max_iterations = maxIter;\n\t\tparams.orthantwise_c = l1penalty();\n\t\tparams.orthantwise_start = observationFeature_to_flatID(0);\n\t\tparams.delta = params.epsilon = tol; // only 'delta' will matter\n\t\tparams.m = 6;\n\t\t\n\t\tLBFGS.Result r = LBFGS.lbfgs(flatCoefs, new GradientCalculator(),\n\t\t\tnew LBFGS.ProgressCallback() {\n\t\t\t\t@Override\n\t\t\t\tpublic int apply(double[] x, double[] g, double fx,\n\t\t\t\t\t\tdouble xnorm, double gnorm, double step, int n, int iterNum, Status ls) {\n\t\t\t\t\tif (verbose) {\n\t\t\t\t\t\tU.pf(\"iter %d: obj=%g active=%d gnorm=%g\\n\", iterNum, fx, Arr.countNonZero(x), gnorm);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tU.pf(\".\");\n\t\t\t\t\t}\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t}, params);\n\t\tSystem.out.println(\" Finished status=\" + r.status);\n\t\tsetCoefsFromFlat(flatCoefs);\n\t}\n\n\tprivate class GradientCalculator implements LBFGS.Function {\n\t\t@Override\n\t\tpublic double evaluate(double[] flatCoefs, double[] g, int n, double step) {\n\t\t\tsetCoefsFromFlat(flatCoefs);\n\t\t\tArr.fill(g,0);\n\t\t\tdouble loglik = 0;\n\t\t\tfor (ModelExample ex : trainingModelExamples) {\n\t\t\t\tloglik += computeGradientAndLL(ex, g);\n\t\t\t}\n\t\t\tArr.multiplyInPlace(g, -1);\n\t\t\taddL2regularizerGradient(g, flatCoefs);\n\t\t\treturn -loglik + regularizerValue(flatCoefs);\n\t\t}\n\t}\n\n\t/** return the label's loglik before the 
update */ \n\tpublic double sgdUpdate(ModelExample ex, double rate) {\n\t\tdouble p = calcLabelProb(ex);\n\t\tint y = ex.label;\n\t\tdouble resid = y-p;\n\t\tbiasCoef += rate*resid;\n\t\tfor (Pair<Integer,Double> fv : ex.observationFeatures) {\n\t\t\tfeatCoefs[fv.first] += rate*resid*fv.second;\n\t\t}\n\t\treturn Math.log(y==1 ? p : (1-p));\n\t}\n\n\n\t/**\n\t * Training-only\n\t * \n\t * add-in loglik gradient (direction of higher likelihood), and return the loglik of the sentence\n\t **/\n\tpublic double computeGradientAndLL(ModelExample ex, double[] grad) {\n\t\tassert grad.length == flatIDsize();\n\t\tdouble ll = 0;\n\t\tdouble p = calcLabelProb(ex);\n\t\tint y = ex.label;\n\t\tdouble resid = y - p;\n\t\tbiasCoef += resid;\n\t\tfor (Pair<Integer,Double> fv : ex.observationFeatures) {\n\t\t\tgrad[observationFeature_to_flatID(fv.first)] += resid;\n\t\t}\n\t\treturn ll;\n\t}\n\n\t/** this is actually *negative* loglik, unlike all other routines which are positive */\n\tprivate void addL2regularizerGradient(double[] grad, double[] flatCoefs) {\n\t\tdouble l2pen = l2penalty();\n\t\tassert grad.length == flatCoefs.length;\n\t\tfor (int f=0; f < flatCoefs.length; f++) {\n\t\t\tgrad[f] += l2pen * flatCoefs[f]; \n\t\t}\n\t}\n\n\t/**\n\t * lambda_2 * (1/2) sum (beta_j)^2 + lambda_1 * sum |beta_j|\n\t * the library only wants the first term\n\t */\n\t private double regularizerValue(double[] flatCoefs) {\n\t\tdouble l2_term = 0;\n\t\tfor (int f=0; f < flatCoefs.length; f++) {\n\t\t\tl2_term += Math.pow(flatCoefs[f], 2);\n\t\t}\n\t\treturn 0.5*l2penalty()*l2_term;\n\t}\n\n\t public void doTraining(String modelOutputFilename) {\n\t\tassert trainingData.size() > 0 : \"Please fill up trainingData before starting training\";\n\t\t\n\t\tU.p(\"Extracting features\");\n\t\ttrainingModelExamples = new ArrayList<>();\n\t\tfor (BinaryLabeledExample<ExampleType> lx : trainingData) {\n\t\t\tModelExample mx = new ModelExample();\n\t\t\tmx.label = lx.label ? 
1 : 0;\n\t\t\tif (dumpMode) U.pf(\"LABEL=%s\\tEXAMPLE\\t%s\\n\", lx.label, lx.example);\n\t\t\textractFeatures(lx.example, mx);\n\t\t\ttrainingModelExamples.add(mx);\n\t\t}\n\t\t\n\t\tlockdownVocabAndAllocateCoefs();\n\t\t\n\t\tU.p(\"Training\");\n\t\tsgdLoop(10, 0.1);\n\t\toptimizationLoop();\n\t\t\n\t\ttry {\n\t\t\tU.p(\"Saving model to \" + modelOutputFilename);\n\t\t\tsaveModelAsText(modelOutputFilename);\n\t\t\t\n\t\t} catch (IOException e) {\n\t\t\te.printStackTrace();\n\t\t}\n\t\t\n\t}\n\t\n\t private void extractFeatures(ExampleType ix, final ModelExample mx) {\n\t\t assert featureExtractors.size() > 0 : \"No feature extractors defined\";\n\t\t class MyAdder extends FeatureAdder {\n\t\t\t@Override\n\t\t\tpublic void add(String featurename, double value) {\n\t\t\t\tint featnum = featVocab.num(featurename);\n\t\t\t\tif (featnum==-1) return;\n\t\t\t\tif (dumpMode) U.pf(\"F\\t%s\\t%s\\n\", featurename, value);\n\t\t\t\tmx.observationFeatures.add(U.pair(featnum,value));\n\t\t\t}\n\t\t }\n\t\t FeatureAdder myAdder = new MyAdder();\n\t\t for (FeatureExtractor fe : featureExtractors) {\n\t\t\t fe.computeFeatures(ix, myAdder);\n\t\t }\n\t }\n\t \n\tdouble calcObj() {\n\t\t GradientCalculator g = new GradientCalculator();\n\t\t double[] b = convertCoefsToFlat();\n\t\t return g.evaluate(b, new double[flatIDsize()], flatIDsize(), -1);\n\t }\n\n\t//////////////////// model serialization routines //////////////////////\n\n\tpublic void saveModelAsText(String outputFilename) throws IOException {\n\t\tBufferedWriter writer = BasicFileIO.openFileToWriteUTF8(outputFilename);\n\t\tPrintWriter out = new PrintWriter(writer);\n\n\t\tout.printf(\"BIAS\\t%.4g\\n\", biasCoef);\n\t\t\n\t\tassert featVocab.size() == featCoefs.length;\n\t\tfor (int f=0; f < featVocab.size(); f++) {\n\t\t\tif (featCoefs[f]==0) continue;\n\t\t\tout.printf(\"F\\t%s\\t%.4g\\n\", featVocab.name(f), featCoefs[f]);\n\t\t}\n\t\tout.close();\n\t\twriter.close();\n\t}\n\n\tpublic void loadModel(String filename) 
throws IOException {\n\t\tBufferedReader reader = BasicFileIO.openFileOrResource(filename);\n\t\tString line;\n\t\t\n\t\tArrayList< Pair<Integer, Double> > obsCoefs = new ArrayList<>();\n\n\t\twhile ( (line = reader.readLine()) != null ) {\n\t\t\tString[] parts = line.split(\"\\t\");\n\t\t\tif (parts[0].equals(\"BIAS\")) {\n\t\t\t\tdouble value = Double.valueOf(parts[1]);\n\t\t\t\tthis.biasCoef = value;\n\t\t\t}\n\t\t\telse if (parts[0].equals(\"F\")) {\n\t\t\t\tint feat = this.featVocab.num(parts[1]);\n\t\t\t\tdouble w = Double.valueOf(parts[2]);\n\t\t\t\tobsCoefs.add(U.pair(feat,w));\n\t\t\t}\n\t\t\telse { throw new RuntimeException(\"bad model line format\"); }\n\t\t}\n\t\t\n\t\tthis.lockdownVocabAndAllocateCoefs();\n\t\t\n\t\tfor (Pair<Integer,Double> x : obsCoefs) {\n\t\t\tthis.featCoefs[x.first] = x.second;\n\t\t}\n\t\treader.close();\n\t}\n\tpublic static BinaryLogreg loadNewModel(String filename) throws IOException {\n\t\tBinaryLogreg model = new BinaryLogreg();\n\t\tmodel.loadNewModel(filename);\n\t\treturn model;\n\t}\n\n\tprivate static void test1() throws IOException {\n\t\tclass MyObject {\n\t\t\tint a,b,c;\n\t\t\tMyObject(int a, int b, int c) { this.a=a; this.b=b; this.c=c; }\n\t\t}\n\t\t\n\t\tBinaryLogreg<MyObject> m = new BinaryLogreg<>();\n\t\tm.penalty = 1e-5;\n\t\tm.featureExtractors.add(new FeatureExtractor<MyObject>() {\n\t\t\t@Override\n\t\t\tpublic void computeFeatures(MyObject ex, FeatureAdder fa) {\n\t\t\t\tfa.add(\"feat_a\", ex.a);\n\t\t\t\tfa.add(\"feat_b\", ex.b);\n\t\t\t\tfa.add(\"feat_c\", ex.c);\n\t\t\t}\n\t\t});\n\t\tm.trainingData.add(BinaryLabeledExample.make(false, new MyObject(0,0,2)));\n\t\tm.trainingData.add(BinaryLabeledExample.make(false, new MyObject(0,0,1)));\n\t\tm.trainingData.add(BinaryLabeledExample.make(true, new MyObject(3,0,0)));\n\t\tm.trainingData.add(BinaryLabeledExample.make(true, new MyObject(5,0,1)));\n\t\tm.doTraining(\"test.model\");\n\t\t\n\t\tBinaryLogreg<MyObject> m2 = 
BinaryLogreg.loadNewModel(\"test.model\");\n\t\tm2.featureExtractors = m.featureExtractors;\n\t\tfor (BinaryLabeledExample<MyObject> ex : m.trainingData) {\n\t\t\tU.p(m2.predictLabelProb(ex.example));\n\t\t}\n\t}\n\t\n\tpublic static void main(String[] args) throws IOException { test1(); }\n}\n\n"
},
{
"alpha_fraction": 0.6441798806190491,
"alphanum_fraction": 0.6498016119003296,
"avg_line_length": 29.239999771118164,
"blob_id": "d9ee609ebd3b89f25b94f214088ff2556c84017d",
"content_id": "4a798476ba1cc9dfc34f368b8ef45086b833ef86",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3024,
"license_type": "permissive",
"max_line_length": 139,
"num_lines": 100,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/lr/MyGraph.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.lr;\n\nimport edu.cmu.cs.ark.semeval2014.common.InputAnnotatedSentence;\nimport util.Arr;\nimport util.Vocabulary;\nimport util.misc.Triple;\n\nimport java.io.PrintWriter;\nimport java.util.ArrayList;\nimport java.util.List;\n\n/** ok, decoding logic has bled into this class. */\npublic class MyGraph {\n\tboolean[] isChildOfSomething;\n\tboolean[] isPred;\n\tboolean[] isTop;\n\t\n\tList< Triple<Integer,Integer,String> > edgelist;\n\tString[][] edgeMatrix;\n\tMyGraph(int sentenceLength, List<Triple<Integer, Integer, String>> _edgelist) {\n\t\tedgelist = _edgelist;\n\t\tisChildOfSomething = new boolean[sentenceLength];\n\t\tisPred = new boolean[sentenceLength];\n\t\tisTop = new boolean[sentenceLength];\n\t\tedgeMatrix = new String[sentenceLength][sentenceLength];\n\t\tfor (Triple<Integer,Integer,String> tt : _edgelist) {\n\t\t\tint i=tt.first, j=tt.second;\n\t\t\tedgeMatrix[i][j] = tt.third;\n\t\t\tisPred[i] = true;\n\t\t\tisChildOfSomething[j] = true;\n\t\t}\n\t}\n\t\n\tpublic static void decideTopsStupid(MyGraph g, InputAnnotatedSentence sent) {\n\t\tfor (int i=0; i<g.isPred.length; i++) {\n\t\t\tg.isTop[i] = !g.isChildOfSomething[i] && g.isPred[i];\n\t\t}\n\t}\n\n\tpublic static void decideTops(MyGraph g, InputAnnotatedSentence sent) {\n\t\t\n\t\tArr.fill(g.isTop, false);\n\t\t\n\t\tint n = g.isPred.length;\n\t\tdouble[] topnesses = new double[n];\n\t\t\n\t\tfor (int i=0; i<n; i++) {\n\t\t\tif (!g.isPred[i]) {\n\t\t\t\ttopnesses[i] = -1e6;\n\t\t\t}\n\t\t\telse {\n\t\t\t\ttopnesses[i] = LRParser.topClassifier.topness(sent, i);\n\t\t\t}\n\t\t}\n\t\tint i = Arr.argmax(topnesses);\n\t\tg.isTop[i] = true;\n\t}\n\t\n\tpublic static MyGraph decodeEdgeProbsToGraph(InputAnnotatedSentence sent, double[][][] probs, Vocabulary labelVocab, boolean doPostproc) {\n\t\tint noedgeID = labelVocab.num(LRParser.NO_EDGE);\n\t\tfinal List<Triple<Integer,Integer,String>> edgeList = new ArrayList<>();\n\t\tfor (int i = 0; i < sent.size(); i++) 
{\n\t\t\tfor (int j = 0; j < sent.size(); j++) {\n\t\t\t\tif (LRParser.badPair(sent, i, j)) continue;\n\t\t\t\tint predLabel = Arr.argmax(probs[i][j]);\n\t\t\t\tif (predLabel==noedgeID) continue;\n\n\t\t\t\tif (doPostproc) {\n\t\t\t\t\t// single direction consistency\n\t\t\t\t\tint labelOtherDir = Arr.argmax(probs[j][i]);\n\t\t\t\t\tif (labelOtherDir != noedgeID && Arr.max(probs[j][i]) > Arr.max(probs[i][j])) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tTriple<Integer,Integer,String> tt = new Triple<>(i, j, labelVocab.name(predLabel));\n\t\t\t\tedgeList.add(tt);\n\t\t\t}\n\t\t}\n\t\treturn new MyGraph(sent.size(), edgeList);\n\t}\n\t\n\tpublic void print(PrintWriter out, InputAnnotatedSentence sent) {\n\t\tout.println(\"#\" + sent.sentenceId);\n\t\tfor (int i = 0; i < sent.size(); i++) {\n\t\t\tout.printf(\"%d\\t%s\\tlemmaz\\t%s\\t%s\\t%s\", i + 1, sent.sentence[i], sent.pos[i],\n\t\t\t\t\tisTop[i] ? \"+\" : \"-\", isPred[i] ? \"+\" : \"-\");\n\t\t\tfor (int head = 0; head < sent.size(); head++) {\n\t\t\t\tif (!isPred[head]) continue;\n\t\t\t\t// ok now we're in a predicate column that may be dominating this node\n\t\t\t\tString label = edgeMatrix[head][i];\n\t\t\t\tlabel = label == null ? \"_\" : label;\n\t\t\t\tout.print(\"\\t\" + label);\n\t\t\t}\n\t\t\tout.print(\"\\n\");\n\t\t}\n\t\tout.print(\"\\n\");\n\t\tout.flush();\n\t}\n}\n"
},
{
"alpha_fraction": 0.7202118039131165,
"alphanum_fraction": 0.7528684735298157,
"avg_line_length": 39.46428680419922,
"blob_id": "20f0c9bc40d94c2751ac3845178f9a6844696d3d",
"content_id": "628372cf7f541a395b4610219b2b85ec8d3ac072",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1133,
"license_type": "permissive",
"max_line_length": 127,
"num_lines": 28,
"path": "/java.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nset -eu\n\nif [ -d /cab0/tools/java7 ]; then\n export PATH=/cab0/tools/java7/jdk1.7.0_21/bin:$PATH\nfi\n\n# Invokes 'java' using .class files as compiled by SBT and/or Eclipse,\n# plus flags to stop crashes due to idiotic default JVM behaviors\n\n# file.encoding necessary for mac (ugh!!) http://stackoverflow.com/questions/361975/setting-the-default-java-character-encoding\n# TODO: should set encoding explicitly in the java/scala code\n# XX:ParallelGCThreads prevents horrible crashes on large multicore machines (basically, another stupid java bug)\n# -ea enables assertions\n\n# so we don't have to type out \"edu.cmu.cs.ark.semeval2014.\" every time\nklass=\"edu.cmu.cs.ark.semeval2014.$1\"\nshift\n\n# make sure your bin/sbt doesn't override these JAVA_OPTS, or just include these in bin/sbt\nexport JAVA_OPTS=\"-ea -Dfile.encoding=UTF-8 -XX:ParallelGCThreads=2 -Xmx4g\"\nexport SBT_OPTS=$JAVA_OPTS\n\nset -x\n$(dirname $0)/scripts/sbt \"run-main $klass $*\"\n## alternatively, run \"sbt assembly\" then you can put the uberjar on the classpath like below\n# java ${JAVA_OPTS} -cp target/scala-2.10/semeval2014-assembly-0.1-SNAPSHOT.jar $klass $@\n"
},
{
"alpha_fraction": 0.68034827709198,
"alphanum_fraction": 0.6952736377716064,
"avg_line_length": 27.714284896850586,
"blob_id": "c013ef979d66ce930538a6d60f88d3f0e8ea8dec",
"content_id": "af25034fb9e7874e141b8ee11ce5e0e7908b639e",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 804,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 28,
"path": "/scripts/traintest_example.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\nset -eu\n\nmodel_name=\"mymodel\"\nfeature_opts=\"\"\nmodel_dir=\"bladir\"\nmkdir -p \"${model_dir}\"\nmodel_file=\"${model_dir}/${model_name}\"\n\nformalism=\"pas\"\ndata_dir=\"lildata/lil\"\ntrain_file=\"${data_dir}train.${formalism}.sdp\"\ntrain_deps=\"${train_file}.dependencies\"\n\n# sec20 files are from: /cab1/corpora/LDC2013E167/splits\ntest_file=\"data/splits/sec20.${formalism}.sdp\"\ntest_deps=\"${test_file}.dependencies\"\n\npred_file=\"${model_file}.pred.${formalism}.sdp\"\n\nset -x\n./java.sh lr.LRParser -mode train \\\n -formalism $formalism \\\n -model ${model_file} -sdpInput ${train_file} -depInput ${train_deps} ${feature_opts}\n./java.sh lr.LRParser -mode test \\\n -formalism $formalism \\\n -model ${model_file} -sdpOutput ${pred_file} -depInput ${test_deps} ${feature_opts}\nscripts/eval.sh ${test_file} ${pred_file}\n"
},
{
"alpha_fraction": 0.6395604610443115,
"alphanum_fraction": 0.6494505405426025,
"avg_line_length": 42.33333206176758,
"blob_id": "e0808bfb4ffd8c5d2d4ad1487006c8e8501ce54f",
"content_id": "15bff369a138a93ad6da75dd4c236a18b7df3c04",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 910,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 21,
"path": "/scripts/eval_to_csv.py",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport sys\nfrom itertools import dropwhile, takewhile\n\n\ndef extract_prf(lines):\n # only care about labeled accuracy, including (virtual dependencies to) tops\n minus_preamble = dropwhile(lambda line: \"including virtual dependencies\" not in line, lines)\n relevant_lines = list(takewhile(lambda line: \"excluding virtual dependencies\" not in line, minus_preamble))\n\n p = [line.split()[1] for line in relevant_lines if line.startswith(\"LP:\")][0]\n r = [line.split()[1] for line in relevant_lines if line.startswith(\"LR:\")][0]\n f = [line.split()[1] for line in relevant_lines if line.startswith(\"LF:\")][0]\n m = [line.split()[1] for line in relevant_lines if line.startswith(\"LM:\")][0]\n return p, r, f, m\n\n\nif __name__ == \"__main__\":\n p, r, f, m = extract_prf(sys.stdin)\n print('\"LP\", \"LR\", \"LF\", \"LM\"')\n print(\", \".join('\"{0}\"'.format(x) for x in (p, r, f, m)))\n"
},
{
"alpha_fraction": 0.7111650705337524,
"alphanum_fraction": 0.7483818531036377,
"avg_line_length": 67.66666412353516,
"blob_id": "c49a6306e7c231b40da97cbd588593f651879cea",
"content_id": "0b5a6d7a5bb3c14de00adc5287709c14356415d5",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1236,
"license_type": "permissive",
"max_line_length": 188,
"num_lines": 18,
"path": "/README.md",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "SemEval-2014\n============\n\nThis project is our entry in [SemEval 2014 Task 8: Broad-Coverage Semantic Dependency Parsing](http://alt.qcri.org/semeval2014/task8/).\nThe plan is for the project to be a mix of Java and Scala, with the expectation that the majority of team members will be contributing in Java.\n\nThe project can be compiled and run with sbt\n([installing sbt](http://www.scala-sbt.org/release/docs/Getting-Started/Setup.html),\n[running sbt](http://www.scala-sbt.org/release/docs/Getting-Started/Running.html)).\n\nThe system is described in:\n[CMU: Arc-Factored, Discriminative Semantic Dependency Parsing](http://samthomson.com/papers/thomson+etal.semeval2014.pdf). \nSam Thomson, Brendan O'Connor, Jeffrey Flanigan, David Bamman, Jesse Dodge, Swabha Swayamdipta, Nathan Schneider, Chris Dyer and Noah A. Smith. In SemEval 2014, Dublin, Ireland, August 2014. ([bib](http://samthomson.com/papers/thomson+etal.semeval2014.bib))\n\n\nWe also have a newer, more accurate [system](https://github.com/Noahs-ARK/NeurboParser), as described in the following paper:\n[Deep Multitask Learning for Semantic Dependency Parsing](http://samthomson.com/papers/peng+etal.acl2017.pdf). \nHao Peng, Sam Thomson, and Noah A. Smith. In ACL 2017, Vancouver, Canada, July 2017. ([bib](http://samthomson.com/papers/peng+etal.acl2017.bib))\n"
},
{
"alpha_fraction": 0.6250468492507935,
"alphanum_fraction": 0.6291713714599609,
"avg_line_length": 36.04166793823242,
"blob_id": "850c829a9c9814c5ec59be6a0983127c477ee68d",
"content_id": "ea8ff3b368b9021664e7e2084ab85185bcf131c3",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2667,
"license_type": "permissive",
"max_line_length": 133,
"num_lines": 72,
"path": "/scripts/med_traintest.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nset -eu\n\nfeature_opts=\"\"\ndata_dir=\"data/splits/med\"\nmodel_dir=\"experiments\"\nmkdir -p \"${model_dir}\"\n\ntimestamp=$(date '+%Y-%m-%dT%H:%M:%S%z')\ngitid=$(git log -1 --format=\"%ci_%ce_%h\" | perl -pe 's/ /T/; s/ //; s/ /_/g')\nreports_dir=\"target/reports_run=${timestamp}_commit=${gitid}\"\nmkdir -p \"${reports_dir}\"\n(cd $(dirname $reports_dir) && ln -sf $(basename $reports_dir) reports)\necho \"REPORTS DIR: ${reports_dir}\"\n\narchive_dir=/cab0/brendano/www/semeval/reports\n\nfor formalism in \"pas\" \"dm\" \"pcedt\"\ndo\n model_name=\"${formalism}_med_model\"\n model_file=\"${model_dir}/${model_name}\"\n\n train_file=\"${data_dir}train.${formalism}.sdp\"\n # train_file=\"lildata/liltrain.${formalism}.sdp\"\n train_deps=\"${train_file}.dependencies\"\n\n test_file=\"data/splits/sec20.${formalism}.sdp\"\n test_deps=\"${test_file}.dependencies\"\n\n pred_file=\"${model_file}.pred.${formalism}.sdp\"\n trainpred_file=\"${model_file}.trainpred.${formalism}.sdp\"\n\n set -x\n (\n ./java.sh lr.LRParser -mode train -saveEvery -1 \\\n -formalism $formalism \\\n -model ${model_file} -sdpInput ${train_file} -depInput ${train_deps} ${feature_opts}\n ./java.sh lr.LRParser -mode test \\\n -formalism $formalism \\\n -model ${model_file} -sdpOutput ${pred_file} -depInput ${test_deps} ${feature_opts}\n ./java.sh lr.LRParser -mode test \\\n -formalism $formalism \\\n -model ${model_file} -sdpOutput ${trainpred_file} -depInput ${train_deps} ${feature_opts}\n ./scripts/eval.sh \"${test_file}\" \"${pred_file}\" | tee \"${reports_dir}/${model_name}.eval.log\"\n ./scripts/eval.sh \"${train_file}\" \"${trainpred_file}\" | tee \"${reports_dir}/${model_name}.train.eval.log\"\n ./scripts/eval_to_csv.py < \"${reports_dir}/${model_name}.eval.log\" > \"${reports_dir}/${model_name}.eval.csv\"\n ./scripts/eval_to_csv.py < \"${reports_dir}/${model_name}.train.eval.log\" > \"${reports_dir}/${model_name}.train.eval.csv\"\n python errorAnalysis/confusionMatrix.py 
\"${test_file}\" \"${pred_file}\" > \"${reports_dir}/${model_name}_confusion.html\"\n python errorAnalysis/confusionMatrix.py \"${train_file}\" \"${trainpred_file}\" > \"${reports_dir}/${model_name}_train_confusion.html\"\n ) 2>&1 | tee -a ${reports_dir}/run.log\n\n set +x\ndone\n\n(cd ${reports_dir} && awk '\n /including virtual/{x=1} \n /excluding virtual/{x=0}\n /^L/ && x { print FILENAME,$0 } \n ' *.eval.log | \n perl -pe 's/://' > labeled_results.txt\n)\n\necho -e \"\\n\"\necho \"****** FINAL F1 ******\"\ngrep \"LF\" ${reports_dir}/labeled_results.txt\necho -e \"\\n\"\n\necho \"FINISHED: $reports_dir\"\nif [ -d \"${archive_dir}\" ]; then\n cp -r $reports_dir $archive_dir\n echo \"Copied to $archive_dir/$reports_dir\"\nfi\n"
},
{
"alpha_fraction": 0.6260683536529541,
"alphanum_fraction": 0.6474359035491943,
"avg_line_length": 25,
"blob_id": "e1cbd9af21dfef7842c30c02342e535c9b2711b1",
"content_id": "4461001df8b164ff156a2b521d5f58be90390316",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 468,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 18,
"path": "/scripts/quieteval.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "# first arg: gold\n# second arg: pred\n\n# sbt logs to stdout, and Evaluator prints to stderr. we only care about Evaluator\n(`dirname $0`/sbt \"run-main sdp.tools.Evaluator ${1} ${2}\" 3>&1 1>/dev/null 2>&3) > eval.$$\n\n## If you want to see the real output\n# cat eval.$$; exit\n\n## Pared-down output\ncat eval.$$ | awk '\n/file/{print} \n/including.*dependencies/{x=1} /Labeled scores/{y=1; print}\nx && y && /:/{print}\n/Unlabeled scores/{exit}\n'\n\necho \"\\nFull report: eval.$$\"\n"
},
{
"alpha_fraction": 0.7492957711219788,
"alphanum_fraction": 0.7492957711219788,
"avg_line_length": 24.35714340209961,
"blob_id": "28b5d3cebcd5cb58caa55039108dd64906b23587",
"content_id": "024e8508d34b5179d64c9cfc9be10a439e52c377",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 355,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 14,
"path": "/src/main/java/mltools/classifier/BinaryLabeledExample.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package mltools.classifier;\n\npublic class BinaryLabeledExample<T> {\n\tpublic T example;\n\tpublic boolean label;\n\t\n\tpublic BinaryLabeledExample(boolean label, T example) {\n\t\tthis.example = example;\n\t\tthis.label = label;\n\t}\n\tpublic static <S> BinaryLabeledExample<S> make(boolean label, S example) {\n\t\treturn new BinaryLabeledExample<S>(label, example);\n\t}\n}\n"
},
{
"alpha_fraction": 0.6804802417755127,
"alphanum_fraction": 0.6917118430137634,
"avg_line_length": 29.3764705657959,
"blob_id": "405c3adbfb32273769dbc8f6018ea9649a8ae7a7",
"content_id": "ef7b4b0e0b97b7a832ec262e1aa58cc7aad860e0",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 2582,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 85,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/topness/TopClassifier.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.topness;\n\nimport edu.cmu.cs.ark.semeval2014.common.InputAnnotatedSentence;\nimport edu.cmu.cs.ark.semeval2014.utils.Corpus;\nimport mltools.classifier.BinaryLogreg;\nimport scala.Option;\nimport util.U;\n\nimport java.io.IOException;\n\npublic class TopClassifier {\n\tBinaryLogreg<TokenCtx> logreg;\n\t\n\tstatic class TokenCtx {\n\t\tint t = -1;\n\t\tInputAnnotatedSentence sent;\n\t\tTokenCtx(int _t, InputAnnotatedSentence _sent) {\n\t\t\tt=_t; sent=_sent;\n\t\t}\n\t}\n\t\n\tpublic TopClassifier() {\n\t\tlogreg = new BinaryLogreg<>();\n\t\tlogreg.featureExtractors.add(new TopFeats());\n\t}\n\t\n\tstatic class TopFeats extends mltools.classifier.FeatureExtractor<TokenCtx> {\n\t\t@Override\n\t\tpublic void computeFeatures(TokenCtx ex, mltools.classifier.FeatureExtractor.FeatureAdder fa) {\n\t\t\tfinal int tokenIdx = ex.t;\n\t\t\tString pos = ex.sent.pos[tokenIdx];\n\t\t\tfa.add(\"pos=\" + pos);\n//\t\t\tfa.add(\"pos=\" + pos + \"&lcword=\" + ex.sent.sentence()[ex.t].toLowerCase(), 0.2);\n\t\t\tfa.add(\"t=\" + tokenIdx);\n\t\t\tfa.add(\"pos=\" + pos + \"&t=\" + tokenIdx);\n\t\t\tfinal Option<Object> oDepth = ex.sent.syntacticDependencies.depths().apply(tokenIdx);\n\t\t\tfa.add(\"depth=\" + (oDepth.isDefined() ? 
oDepth.get() : \"NULL\"));\n\t\t\t\n//\t\t\tint q5 = (int) Math.floor(ex.t*1.0 / ex.sent.size() * 5);\n//\t\t\tfa.add(\"q5=\" + ex.t);\n//\t\t\tfa.add(\"pos_and_q5=\" + pos + \"&\" + q5);\n\t\t}\n\t}\n\t\n\tpublic void train(String depFile, String modelOutputFile) {\n\t\tU.p(\"Training topness classifier\");\n\t\t\n\t\tInputAnnotatedSentence[] inputSentences = Corpus.getInputAnnotatedSentences(depFile);\n\t\tfor (InputAnnotatedSentence s1 : inputSentences) {\n\t\t\tfor (int t=0; t<s1.size(); t++) {\n\t\t\t\tboolean y = s1.isTop[t];\n\t\t\t\tlogreg.addTrainingExample(y, new TokenCtx(t, s1));\n\t\t\t}\n\t\t}\n//\t\tlogreg.verbose = true;\n\t\tlogreg.doTraining(modelOutputFile);\n\t}\n\tpublic void loadModel(String modelFile) throws IOException {\n\t\tlogreg.loadModel(modelFile);\n\t}\n\t\n\tpublic double[] predictTopnessProbs(InputAnnotatedSentence sent) {\n\t\tdouble[] probs = new double[sent.size()];\n\t\tfor (int t=0; t<probs.length; t++) {\n\t\t\tprobs[t] = logreg.predictLabelProb(new TokenCtx(t, sent));\n\t\t}\n\t\treturn probs;\n\t}\n\t\n\tpublic double topness(InputAnnotatedSentence sent, int t) {\n\t\treturn logreg.predictLabelProb(new TokenCtx(t, sent));\n\t}\n\t\n\tpublic void predictIntoInputs(InputAnnotatedSentence[] sents) {\n\t\tfor (InputAnnotatedSentence s : sents) {\n\t\t\tpredictIntoInput(s);\n\t\t}\n\t}\n\tpublic void predictIntoInput(InputAnnotatedSentence sent) {\n\t\tsent.topnessPredProbs = new double[sent.size()];\n\t\tfor (int t=0; t<sent.size(); t++) {\n\t\t\tsent.topnessPredProbs[t] = topness(sent, t);\n\t\t}\n\t}\n}\n"
},
{
"alpha_fraction": 0.6104792356491089,
"alphanum_fraction": 0.6369569301605225,
"avg_line_length": 26.639175415039062,
"blob_id": "66b0a7d8b98a8f54ae57782cb7d12a4a98e21e64",
"content_id": "f42c60573febdc165e9fe8237a54e691980fab7c",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5363,
"license_type": "permissive",
"max_line_length": 358,
"num_lines": 194,
"path": "/errorAnalysis/confusionMatrix.py",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "import sdputils\nimport sys,operator,os\nimport numpy as np\nfrom collections import Counter\n\n'''\nPrint confusion matrix for predictions vs. gold data. Usage:\n\npython confusionMatrix.py gold/sec20.pcedt.sdp pred/feb7_recsplit.pcedt.pred > output/pcedt.html\n\n'''\n\ngold=enumerate(sdputils.iter_sents(sys.argv[1]))\npred=enumerate(sdputils.iter_sents(sys.argv[2]))\n\nzipped=zip(gold,pred)\n\nconfusion={}\ntopcounts=np.zeros((2,2))\npredcounts=np.zeros((2,2))\n\nfor (gold, pred) in zipped:\n\tnumsent,(sentid,rows)=gold\n\tnumsent_p,(sentid_p,rows_p)=pred\n\n\tT = len(rows)\n\tK = len(rows[0])\n\tK_pred = len(rows_p[0])\n\n\tif K < 5: continue\n\n\ttopmarkers = [row[4] for row in rows]\n\ttopmarkers_pred = [row[4] for row in rows_p]\n\n\ttop_triggers = [i for i in range(T) if topmarkers[i]=='+']\n\ttop_triggers_pred = [i for i in range(T) if topmarkers_pred[i]=='+']\n\n\tpredmarkers= [row[5] for row in rows]\n\tpredmarkers_pred= [row[5] for row in rows_p]\n\t\n\tpred_triggers = [i for i in range(T) if predmarkers[i]=='+']\n\tpred_triggers_pred = [i for i in range(T) if predmarkers_pred[i]=='+']\n\n\tfor i in range(T):\n\t\ttop_g=int(i in top_triggers)\n\t\ttop_p=int(i in top_triggers_pred)\n\t\ttopcounts[top_g, top_p]+=1\n\n\t\tp_g=int(i in pred_triggers)\n\t\tp_p=int(i in pred_triggers_pred)\n\t\tpredcounts[p_g, p_p]+=1\n\n\n\tgoldVals=np.empty( (T,T), dtype=\"S50\")\n\tgoldVals.fill(\"_\")\n\n\tpredVals=np.empty( (T,T), dtype=\"S50\")\n\tpredVals.fill(\"_\")\n\n\n\tfor i in range(T):\n\t\tfor j in range(6,K):\n\t\t\tval=rows[i][j]\n\t\t\tif val != \"_\":\n\t\t\t\tgoldVals[i, pred_triggers[j-6]]=val\n\t\n\tfor i in range(T):\n\t\tfor j in range(6,K_pred):\n\t\t\tval=rows_p[i][j]\n\t\t\tif val != \"_\":\n\t\t\t\tpredVals[i, pred_triggers_pred[j-6]]=val\n\n\tfor i in range(T):\n\t\tfor j in range(T):\n\t\t\tgval=goldVals[i,j]\n\t\t\tpval=predVals[i,j]\n\n\t\t\tif gval not in 
confusion:\n\t\t\t\tconfusion[gval]=Counter()\n\n\t\t\tconfusion[gval][pval]+=1\n\ntotals={}\ntargetTotal=0\nprecTotals=Counter()\n\nfor key in confusion:\n\tfor tag in confusion[key]:\n\t\tprecTotals[tag]+=confusion[key][tag]\n\n\ttotals[key]=sum(confusion[key].values())\n\tif key != \"_\":\n\t\ttargetTotal+=totals[key]\n\nsorted_tuple = sorted(totals.iteritems(), key=operator.itemgetter(1), reverse=True)\nsortedKeys=[]\nfor (k,v) in sorted_tuple:\n\tsortedKeys.append(k)\n\nprint \"\"\"<html><style>td\n{\npadding:3px;\n} tr:nth-child(even) {\n background-color: #EEEEEE;\n }</style>\"\"\"\n\nprint \"<h1>%s confusion matrix</h1><hr />\" % os.path.basename(sys.argv[2])\n\ntopPrec=topcounts[1,1]/(topcounts[0,1] + topcounts[1,1])\ntopRecall=topcounts[1,1]/(topcounts[1,0] + topcounts[1,1])\ntopF=(2*topPrec*topRecall)/(topPrec+topRecall)\n\nprint \"<h3>Tops</h3>\" \nprint \"Gold n = %d; predicted n = %d\" % (topcounts[1,0] + topcounts[1,1], topcounts[0,1] + topcounts[1,1])\nprint \"\"\"\n<table>\n<tr><td>P</td><td>R</td><td>F</td></tr>\n<tr><td>%.3f</td><td>%.3f</td><td>%.3f</td></tr>\n</table>\n\"\"\" %(topPrec, topRecall, topF)\n\n\npredPrec=predcounts[1,1]/(predcounts[0,1] + predcounts[1,1])\npredRecall=predcounts[1,1]/(predcounts[1,0] + predcounts[1,1])\npredF=(2*predPrec*predRecall)/(predPrec+predRecall)\n\nprint \"<hr />\"\n\n# print \"<h3>Predicates</h3>\"\n# print \"Gold n = %d; predicted n = %d\" % (predcounts[1,0] + predcounts[1,1], predcounts[0,1] + predcounts[1,1])\n\n# print \"\"\"\n# <table>\n# <tr><td>P</td><td>R</td><td>F</td></tr>\n# <tr><td>%.3f</td><td>%.3f</td><td>%.3f</td></tr>\n# </table>\n# \"\"\" %(predPrec, predRecall, predF)\n\n# print \"<hr />\"\n\nprint \"<h3>Labels</h3>\"\n\nprint \"Gold = rows, Predicted = columns. Sorted by count of the number of gold tags in the test data (the most important quadrant is the upper left). Calculated from the correct vs. predicted label for each pair of tokens in the sentence. 
High values for ['_', LABEL] and [LABEL, '_'] probably implies that we're getting the label right but the head wrong.\"\nprint \"\"\"\n<p /><table>\n<tr><td bgcolor=\"#FA58AC\" width=30px></td><td>BAD +50% of correct label misattributed to this one predicted category</td></tr>\n<tr><td bgcolor=\"#FFFF00\" width=30px></td><td>BAD +30%</td></tr>\n<tr><td bgcolor=\"#01DF01\" width=30px></td><td>BAD +10%</td></tr>\n<tr><td bgcolor=\"#A9BCF5\" width=30px></td><td>GOOD +80% right!</td></tr>\n</table>\"\"\"\nprint \"<body><p /><table border=1>\"\nprint \"<tr><td><em>cumulative%%</em></td><td><em>count</em><td>P</td><td>R</td><td>F</td></td><td></td><td>%s</td></tr>\" % '</td><td>'.join(sortedKeys)\n\nrunningTotal=0.\nfor tag in sortedKeys:\n\t#print tag\n\ttotal=totals[tag]\n\tif tag != \"_\":\n\t\trunningTotal+=total\n\trecall=float(confusion[tag][tag])/total\n\tprecision=0\n\tif precTotals[tag] > 0:\n\t\tprecision=float(confusion[tag][tag])/precTotals[tag]\n\tF1=0\n\tif precision+recall > 0:\n\t\tF1=(2*precision*recall)/(precision+recall)\n\tprint \"<tr>\"\n\tif tag != \"_\":\n\t\tprint \"<td>%.3f</td>\" % (runningTotal/targetTotal)\n\t\tprint \"<td>%d</td>\" % totals[tag]\n\t\tprint \"<td>%.3f</td>\" % precision\n\t\tprint \"<td>%.3f</td>\" % recall\n\t\tprint \"<td>%.3f</td>\" % F1\n\telse:\n\t\tprint \"<td></td><td></td><td></td><td></td><td></td>\"\n\tprint \"<td>%s</td>\" % tag\n\tfor key in sortedKeys:\n\t\tval=\"\"\n\t\tcolor=\"\"\n\t\tif confusion[tag][key] > 0:\n\t\t\tval=str(confusion[tag][key])\n\t\t\tratio=float(confusion[tag][key])/total\n\t\t\tif key == tag:\n\t\t\t\tif ratio > .8 and tag != \"_\":\n\t\t\t\t\tcolor=\"#A9BCF5\"\n\t\t\telif ratio > .5:\n\t\t\t\tcolor=\"#FA58AC\"\n\t\t\telif ratio > .3:\n\t\t\t\tcolor=\"#FFFF00\"\n\t\t\telif ratio > .2:\n\t\t\t\tcolor=\"#01DF01\"\n\t\tprint \"<td bgcolor=\\\"%s\\\">%s</td>\" % (color, val),\n\tprint \"<td>%s</td></tr>\" % tag\nprint \"<table></body></html>\"\n\n"
},
{
"alpha_fraction": 0.6858299374580383,
"alphanum_fraction": 0.6931173801422119,
"avg_line_length": 32.378379821777344,
"blob_id": "edca7eac8ea9cc923918ba476c8f80a545339610",
"content_id": "c10c230de7e11254982e29e434359b9d8a8f1fe8",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1235,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 37,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/lr/fe/BasicLabelFeatures.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.lr.fe;\n\npublic class BasicLabelFeatures {\n\t/** Extracts the name of the label */\n\tpublic static class PassThroughFe implements FE.LabelFE {\n\t\t@Override public void features(String label, FE.FeatureAdder fa) {\n\t\t\tfa.add(label);\n\t\t}\n\t}\n\n\t/** Extracts label features specific to the DM formalism: */\n\tpublic static class DmFe implements FE.LabelFE {\n\t\t@Override public void features(String label, FE.FeatureAdder fa) {\n\t\t\tfa.add(\"IsCore=\" + label.startsWith(\"ARG\"));\n\t\t\tfa.add(\"EndsWith_c=\" + label.endsWith(\"_c\"));\n\t\t}\n\t}\n\n\t/** Extracts label features specific to the PAS formalism: */\n\tpublic static class PasFe implements FE.LabelFE {\n\t\t@Override public void features(String label, FE.FeatureAdder fa) {\n\t\t\tfinal String[] postagAndRole = label.split(\"_\", 2);\n\t\t\tif (postagAndRole.length > 1) {\n\t\t\t\tfa.add(\"Pos=\" + postagAndRole[0]);\n\t\t\t\tfa.add(\"Role=\" + postagAndRole[1]);\n\t\t\t\tfa.add(\"IsCore=\" + postagAndRole[1].startsWith(\"ARG\"));\n\t\t\t}\n\t\t}\n\t}\n\n\t/** Extracts label features specific to the PCEDT formalism */\n\tpublic static class PcedtFE implements FE.LabelFE {\n\t\t@Override public void features(String label, FE.FeatureAdder fa) {\n\t\t\tfa.add(\"EndsWith.member=\" + label.endsWith(\".member\"));\n\t\t}\n\t}\n}\n"
},
{
"alpha_fraction": 0.5641711354255676,
"alphanum_fraction": 0.6176470518112183,
"avg_line_length": 40.55555725097656,
"blob_id": "aa6a63c961a73d957115a90bd40eee0788d0d58e",
"content_id": "904aa4ed2da5a266ff9e5d3125f902c27aaa8609",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 754,
"license_type": "permissive",
"max_line_length": 241,
"num_lines": 18,
"path": "/scripts/make_rec_splits.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "# http://alt.qcri.org/semeval2014/task8/index.php?id=evaluation\n# Following our recommended split of the training data, we then trained the graph-based parser of Bohnet (2010) on Sections 00-19 of the (tree reduction of our) SDP data, and applied the resulting âsyntacticâ parsing model to Section 20.\n\nDATADIR=\"/cab1/corpora/LDC2013E167/\"\n\nfor f in dm pcedt pas; do\n for ext in sdp sdp.dependencies; do\n cat ${DATADIR}/${f}.${ext} |\n awk -v dir=${DATADIR}splits -v myext=${f}.${ext} '\n BEGIN { print \"outputting to dir=\" dir \" and extension=\" myext }\n /^#2[0-1]/ {out=\"sec0019\"} \n /^#220/ {out=\"sec20\"}\n /^#22[^0]/ {out=0} \n out {\n print $0 > (dir \"/\" out \".\" myext)\n }'\n done\ndone\n"
},
{
"alpha_fraction": 0.6987577676773071,
"alphanum_fraction": 0.7111801505088806,
"avg_line_length": 28.272727966308594,
"blob_id": "ba2b0d85a6e6a385a51f3993e00eb2c09e81b74e",
"content_id": "e1bd87879f54219316bf2a4fb8cd3a8fe219a0d1",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 644,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 22,
"path": "/src/main/java/edu/cmu/cs/ark/semeval2014/lr/fe/JustPOS.java",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "package edu.cmu.cs.ark.semeval2014.lr.fe;\n\nimport edu.cmu.cs.ark.semeval2014.lr.fe.FE.FeatureAdder;\nimport util.U;\n\n/** for simple testing */\npublic class JustPOS extends FE.FeatureExtractor implements FE.TokenFE, FE.EdgeFE {\n\n\t@Override\n\tpublic void features(int srcTokenIdx, int destTokenIdx, FeatureAdder fa) {\n final String srcPostag = sent.pos[srcTokenIdx];\n final String destPostag = sent.pos[destTokenIdx];\n\t\tfa.add(U.sf(\"pos:bg:%s_%s\", srcPostag, destPostag));\n\t}\n\n\t@Override\n\tpublic void features(int tokenIdx, FeatureAdder fa) {\n\t\tfinal String postag = sent.pos[tokenIdx];\n fa.add(U.sf(\"pos:%s\", postag));\n\t}\n\t\n}\n"
},
{
"alpha_fraction": 0.5426561236381531,
"alphanum_fraction": 0.5496921539306641,
"avg_line_length": 23.717391967773438,
"blob_id": "1232967615d5ee50dd16a66a2eb35288fc7fd7c8",
"content_id": "944a30dffae682503e3baaa84f913d499c9d42ac",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1137,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 46,
"path": "/errorAnalysis/convert_to_beast.py",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\"\"\"\nConvert sdp format to \"thebeast\" format for importing into WhatsWrongWithMyNLP\n(https://code.google.com/p/whatswrong/)\n\nUsage:\n\npython ../convert_to_beast.py < feb7_recsplit.dm.pred > pred.dm.beast\n\nWhen loading the resulting file in whatswrong, use:\n\nTokens: Word\nDeps: Relations\nSpans: Spans\n\"\"\"\nimport sdputils\n\n\nfor numsent, (sentid, rows) in enumerate(sdputils.iter_sents()):\n T = len(rows)\n K = len(rows[0])\n if K < 5: continue\n print \">>\\n>Word\"\n for i in range(T):\n print \"%d\\t\\\"%s\\\"\" % (i, rows[i][1])\n\n topmarkers = [row[4] for row in rows]\n predmarkers = [row[5] for row in rows]\n\n pred_triggers = [i for i in range(T) if predmarkers[i] == '+']\n\n print \"\\n>Spans\"\n for i in range(T):\n if topmarkers[i] == \"+\":\n print \"%s\\t%s\\t\\\"%s\\\"\" % (i, i, \"TOP\")\n if predmarkers[i] == \"+\":\n print \"%s\\t%s\\t\\\"%s\\\"\" % (i, i, \"PRED\")\n\n print \"\\n>Relations\"\n for i in range(T):\n for j in range(6, K):\n\n val = rows[i][j]\n\n if val != \"_\":\n print \"%s\\t%s\\t\\\"%s\\\"\" % (pred_triggers[j - 6], i, val)\n"
},
{
"alpha_fraction": 0.5793991684913635,
"alphanum_fraction": 0.6065808534622192,
"avg_line_length": 30.772727966308594,
"blob_id": "2c97f416d41c66637a6cb76fc6358d72fe702cfb",
"content_id": "ec2a7f2232793c669a07306806ed13b6bbefa3c4",
"detected_licenses": [
"BSD-2-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 699,
"license_type": "permissive",
"max_line_length": 145,
"num_lines": 22,
"path": "/scripts/lilsplits.sh",
"repo_name": "Noahs-ARK/semeval-2014",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\nDATADIR=\"$(dirname $0)/../../data\"\n\nOUTPUT_DATADIR=\"lildata\"\n#OUTPUT_DATADIR=\"${DATADIR}/small_splits/\"\n#OUTPUT_DATADIR=\"${DATADIR}/medium_splits/\"\nmkdir -p \"${OUTPUT_DATADIR}\"\n\nTRAIN_SKIP=250\n#DEV_SKIP=50\n#export TRAIN_SKIP=60\n#export DEV_SKIP=12\n#export TRAIN_SKIP=15\n#export DEV_SKIP=3\n# for f in dm pcedt pas; do\n#for f in dm ; do\nfor f in pcedt ; do\n for ext in sdp sdp.dependencies; do\n cat \"${DATADIR}/splits/train.${f}.${ext}\" | awk 'BEGIN{s=0} /^#/{ s+=1 } s % '\"${TRAIN_SKIP}\"' == 0' > \"${OUTPUT_DATADIR}/liltrain.${f}.${ext}\"\n# cat \"${DATADIR}/splits/dev.${f}.${ext}\" | awk 'BEGIN{s=0} /^#/{ s+=1 } s % '\"${DEV_SKIP}\"' == 0' > \"${OUTPUT_DATADIR}/dev.${f}.${ext}\"\n done\ndone\n"
}
] | 26 |
QixinLi/nndl-exercise | https://github.com/QixinLi/nndl-exercise | b6711b6d559a9a75be93c76ce6300c3f833ff32f | 83f5d0bddae3b47ad55038cc434cb23d3e02d3d0 | 937007d59f84a23195edd2c119f8dea9724f2ffc | refs/heads/master | 2020-06-20T17:03:14.955932 | 2019-07-17T07:36:30 | 2019-07-17T07:36:30 | 197,187,044 | 2 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.4715837836265564,
"alphanum_fraction": 0.5647722482681274,
"avg_line_length": 33.40287780761719,
"blob_id": "cef5d369e95eb95cb7ddda10adff530fcfc89b9e",
"content_id": "c5f416a480eccf6af77de6c6693c31501ddb2963",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6082,
"license_type": "no_license",
"max_line_length": 148,
"num_lines": 139,
"path": "/warmup/warmup.py",
"repo_name": "QixinLi/nndl-exercise",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Jul 15 13:38:43 2019\n\n@author: qixin\n\"\"\"\n# 1.导入numpy库\nimport numpy as np\n\n# 2.建立一个一维数组 a 初始化为[4,5,6], (1)输出a 的类型(type)(2)输出a的各维度的大小(shape)(3)输出 a的第一个元素(值为4)\na = np.array([4,5,6])\nprint(\"a:\",a) \nprint(\"a.shape:\",a.shape)\nprint(\"a[0]:\",a[0])\n\n# 3.建立一个二维数组 b,初始化为 [ [4, 5, 6],[1, 2, 3]] (1)输出各维度的大小(shape)(2)输出 b(0,0),b(0,1),b(1,1) 这三个元素(对应值分别为4,5,2)\nb = np.array([[4, 5, 6],[1, 2, 3]] )\nprint(\"b.shape:\",b.shape)\nprint(\"b[0,0],b[0,1],b[1,1]:\",b[0,0],b[0,1],b[1,1])\n\n# 4. (1)建立一个全0矩阵 a, 大小为 3x3; 类型为整型(提示: dtype = int)(2)建立一个全1矩阵b,大小为4x5; (3)建立一个单位矩阵c ,大小为4x4; (4)生成一个随机数矩阵d,大小为 3x2.\na = np.zeros((3,3), dtype = int)\n#print(a)\nb = np.ones((4,5))\n#print(b)\nc = np.identity(4)\n#print(c)\nd = np.random.rand(3,2)\n#print(d)\n\n# 5. 建立一个数组 a,(值为[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]] ) ,(1)打印a; (2)输出 下标为(2,3),(0,0) 这两个数组元素的值\na = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])\nprint(\"a[2,3],a[0,0]:\",a[2,3],a[0,0])\n\n# 6.把上一题的 a数组的 0到1行 2到3列,放到b里面去,(此处不需要从新建立a,直接调用即可)(1),输出b;(2) 输出b 的(0,0)这个元素的值\nc = [0,1]\nd = [2,3]\nb = a[c]\nb = b[:,d]\nprint(\"b:\",b)\nprint(\"b[0,0]:\",b[0,0])\n\n# 7. 把第5题的 数组的,的最后两行所有元素放到 c中,(提示: a[1:2][:])(1)输出 c ; (2) 输出 c 中第一行的最后一个元素(提示,使用 -1 表示最后一个元素)\nc = a[1:2][:]\nprint(\"c:\",c)\nprint(\"c[0,-1]:\",c[0,-1])\n\n# 8.建立数组a,初始化a为[[1, 2], [3, 4], [5, 6]],输出 (0,0)(1,1)(2,0)这三个元素(提示: 使用 print(a[[0, 1, 2], [0, 1, 0]]) )\na = np.array([[1, 2], [3, 4], [5, 6]])\nprint(\"a[[0, 1, 2], [0, 1, 0]]:\",a[[0, 1, 2], [0, 1, 0]])\n\n# 9.建立矩阵a ,初始化为[[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]],输出(0,0),(1,2),(2,0),(3,1) (提示使用 b = np.array([0, 2, 0, 1]) print(a[np.arange(4), b]))\na = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])\nb = np.array([0,2,0,1])\nprint(\"a[np.arange(4),b]:\",a[np.arange(4),b])\n\n# 10.对9 中输出的那四个元素,每个都加上10,然后从新输出矩阵a.(提示: a[np.arange(4), b] += 10 )\na[np.arange(4), b] += 10\nprint(\"a:\",a)\n\n# 11. 
执行 x = np.array([1, 2]),然后输出 x 的数据类型,(答案是 int64)\nx = np.array([1,2])\nprint(\"x.dtype:\",x.dtype)\n\n# 12.执行 x = np.array([1.0, 2.0]) ,然后输出 x 的数据类洗净(答案是 float64)\nx = np.array([1.0, 2.0])\nprint(\"x.dtype:\",x.dtype)\n\n# 13.执行 x = np.array([[1, 2], [3, 4]], dtype=np.float64) ,y = np.array([[5, 6], [7, 8]], dtype=np.float64),然后输出 x+y ,和 np.add(x,y)\nx = np.array([[1, 2], [3, 4]], dtype=np.float64)\ny = np.array([[5, 6], [7, 8]], dtype=np.float64)\nprint(\"x+y:\",x+y)\nprint(\"np.add(x,y):\",np.add(x,y))\n\n# 14. 利用 13题目中的x,y 输出 x-y 和 np.subtract(x,y)\nprint(\"x-y:\",x-y)\nprint(\"np.subtract(x,y):\",np.subtract(x,y))\n\n# 15. 利用13题目中的x,y 输出 x*y ,和 np.multiply(x, y) 还有 np.dot(x,y),比较差异。然后自己换一个不是方阵的试试。\nprint(\"x*y:\",x*y)\nprint(\"np.multiply(x, y):\",np.multiply(x, y))\nprint(\"np.dot(x,y):\",np.dot(x,y))\nx = np.array([[5, 6, 1], [7, 8, 1]])\ny = np.array([[1, 2], [3, 4],[5, 6]])\nprint(\"np.dot(x,y):\",np.dot(x,y))\n\n# 16. 利用13题目中的,x,y,输出 x / y .(提示 : 使用函数 np.divide())\nx = np.array([[1, 2], [3, 4]], dtype=np.float64)\ny = np.array([[5, 6], [7, 8]], dtype=np.float64)\nprint(\"np.divide(x,y):\",np.divide(x,y))\n\n# 17. 
利用13题目中的,x,输出 x的 开方。(提示: 使用函数 np.sqrt())\nprint(\"np.sqrt(x):\",np.sqrt(x))\n\n# 18.利用13题目中的,x,y ,执行 print(x.dot(y)) 和 print(np.dot(x,y))\nprint(\"x.dot(y):\",x.dot(y)) \nprint(\"np.dot(x,y):\",np.dot(x,y))\n\n# 19.利用13题目中的 x,进行求和。(提示:输出三种求和 (1)print(np.sum(x)): (2)print(np.sum(x,axis =0 )); (3)print(np.sum(x,axis = 1)))\nprint(\"np.sum(x):\",np.sum(x))\nprint(\"np.sum(x,axis = 0):\",np.sum(x,axis = 0))\nprint(\"np.sum(x,axis = 1):\",np.sum(x,axis = 1))\n\n# 20.利用13题目中的 x,进行求平均数(提示:输出三种平均数(1)print(np.mean(x)) (2)print(np.mean(x,axis = 0))(3) print(np.mean(x,axis =1)))\nprint(\"np.mean(x):\",np.mean(x))\nprint(\"np.mean(x,axis = 0):\",np.mean(x,axis = 0))\nprint(\"np.mean(x,axis = 1):\",np.mean(x,axis = 1))\n\n# 21.利用13题目中的x,对x 进行矩阵转置,然后输出转置后的结果,(提示: x.T 表示对 x 的转置)\nprint(\"x.T:\",x.T)\n\n# 22.利用13题目中的x,求e的指数(提示: 函数 np.exp())\nprint(\"np.exp(x):\",np.exp(x))\n\n# 23.利用13题目中的 x,求值最大的下标(提示(1)print(np.argmax(x)) ,(2) print(np.argmax(x),axis =0)(3)print(np.argmax(x),axis =1))\nprint(\"np.argmax(x):\",np.argmax(x))\nprint(\"np.argmax(x,axis = 0):\",np.argmax(x,axis = 0))\nprint(\"np.argmax(x,axis = 1):\",np.argmax(x,axis = 1))\n\n# 24,画图,y=x*x, x = np.arange(0, 100, 0.1) (提示这里用到 matplotlib.pyplot 库)\nimport matplotlib.pyplot as plt\nx = np.arange(0, 100, 0.1) \ny = x*x\nplt.scatter(x,y,s=100)\nplt.show()\n\n# 25.画图。画正弦函数和余弦函数, x = np.arange(0, 3 * np.pi, 0.1)(提示:这里用到 np.sin() np.cos() 函数和 matplotlib.pyplot 库)\nx = np.arange(0, 3 * np.pi, 0.1)\nplt.scatter(x,np.sin(x),s=100)\nplt.show()\nplt.scatter(x,np.cos(x),s=100)\nplt.show()\n\n# 附加题.执行下面的语句,解释运算结果,了解 nan 和 inf 的含义 print(0*np.nan) print(np.nan == np.nan) print(np.inf > np.nan) print(np.nan - np.nan) print(0.3 == 3***0.1)\nprint(\"0*np.nan:\",0*np.nan) \nprint(\"np.nan == np.nan:\",np.nan == np.nan) \nprint(\"np.inf > np.nan:\",np.inf > np.nan) \nprint(\"np.nan - np.nan:\",np.nan - np.nan) \nprint(\"0.3 == 3*0.1:\",0.3 == 3*0.1)\n\n\n\n\n"
},
{
"alpha_fraction": 0.7308707237243652,
"alphanum_fraction": 0.7506596446037292,
"avg_line_length": 14.119999885559082,
"blob_id": "218887bc37dabd6779c5342a27c906efa5c98a04",
"content_id": "e23b937f024620cf13bafa3e77ea71a66d5a198e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1214,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 50,
"path": "/README.md",
"repo_name": "QixinLi/nndl-exercise",
"src_encoding": "UTF-8",
"text": "# 《神经网络与深度学习》课程练习答案\nNeural Network and Deep Learning\n\n课程学习地址 https://nndl.github.io/\n\n示例代码,见https://github.com/nndl/nndl-codes\n\n课程练习,见https://github.com/nndl/exercise\n\n### 前言\n由于我目前也在nndl课程的学习中,没有一次性更新所有作业(会随着学习过程慢慢补全)。\n\n此外,作为一个初学者,作业内容可能有不少错误的地方,还请各位大佬帮忙指正。\n\n### 更新\n - 2019/7/17 添加 `linear_reg.py svm.py`\n\n - 2019/7/16 添加 `warmup.py`\n\n## Exercise \n\n### warmup\nnumpy是Python中对于矩阵处理很实用的工具包,本小节作业主要是熟悉基本的numpy操作。\n\n### linear regression\n\n线性回归模型\n\n### support vector machine\n\n支持向量机\n\n### simple neural network\n\n利用numpy实现全连接神经网络\n\n### convolutional neural network\n利用卷积神经网络,处理MNIST 数据集分类问题。\n\n### recurrent neural network\n基于RNN神经网络的唐诗生成问题。\n### restricted Boltzmann machine\n限制玻尔兹曼机。\n\n### gaussian mixture\n\n高斯混合模型\n\n### project 1 - deep reinforcement learning\n强化学习: 黑白棋\n\n\n"
}
] | 2 |
lcs1998/mysite | https://github.com/lcs1998/mysite | 5c4c2a5812ddd2cae4725fa47bf64d700517f19b | ce3ad5b7023609b052879e3089217c3cd7de0162 | 773bc6df55804c76397db92758e918634209987c | refs/heads/master | 2020-04-08T01:30:43.743221 | 2018-11-24T05:35:03 | 2018-11-24T05:35:03 | 158,898,013 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6905311942100525,
"alphanum_fraction": 0.6905311942100525,
"avg_line_length": 27.866666793823242,
"blob_id": "4bee4b027b98c705ae9e11b554c6638f76008dd1",
"content_id": "cb7e3f15edf394aa466dcc194dc67ba7f4136b69",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 433,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 15,
"path": "/predict/views.py",
"repo_name": "lcs1998/mysite",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.template import loader\nfrom .qcwy_text.predict import pre\n\nfrom django.http import HttpResponse, JsonResponse\n\n\ndef index(request):\n data = {'info': ''}\n data['info'] = pre(request.GET[\"intro\"])\n if request.method == 'GET':\n return JsonResponse(data)\n else:\n template = loader.get_template('predict/index.html')\n return HttpResponse(template.render())\n"
},
{
"alpha_fraction": 0.6472440958023071,
"alphanum_fraction": 0.6582677364349365,
"avg_line_length": 23.423076629638672,
"blob_id": "62f1423f5c0e121c025adffd6d0069d43a2d24b1",
"content_id": "50ec1743533eb5ce47e3a34752b685a20d25494a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 635,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 26,
"path": "/minganci/views.py",
"repo_name": "lcs1998/mysite",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.template import loader\n\nfrom django.http import HttpResponse, JsonResponse\n\nimport importlib, sys\n\nimportlib.reload(sys)\n\n\ndef minganci(f1):\n # coding=utf-8\n f = open('/home/ubuntu/django_project/mysite/minganci/filtered_words.txt', 'r', encoding=\"utf-8\")\n for line in f:\n if line.strip() in f1:\n f1 = f1.replace(line.strip(), '**')\n f.close()\n return f1\n\n\n# Create your views here.\ndef index(request):\n data = {'sentence': ''}\n data['sentence'] = minganci(request.GET[\"sentence\"])\n if request.method == 'GET':\n return JsonResponse(data)\n"
},
{
"alpha_fraction": 0.7962962985038757,
"alphanum_fraction": 0.8148148059844971,
"avg_line_length": 25,
"blob_id": "7bbf55b0861a0534cb743aaba2e5304041511c8a",
"content_id": "e1a14fefbc0278e9d7e82120b962809ac33a137f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 54,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 2,
"path": "/predict/models.py",
"repo_name": "lcs1998/mysite",
"src_encoding": "UTF-8",
"text": "from django.db import models\nimport urllib3.request\n\n\n"
},
{
"alpha_fraction": 0.7582417726516724,
"alphanum_fraction": 0.7582417726516724,
"avg_line_length": 17.200000762939453,
"blob_id": "f36fd0952d2da59730f28a8b7c4b832ea32b4099",
"content_id": "bdbed315aa3d4a132fcbfa13d5f86af4a6a6ee27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 91,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 5,
"path": "/minganci/apps.py",
"repo_name": "lcs1998/mysite",
"src_encoding": "UTF-8",
"text": "from django.apps import AppConfig\n\n\nclass MinganciConfig(AppConfig):\n name = 'minganci'\n"
}
] | 4 |
Tennyx/rinklink | https://github.com/Tennyx/rinklink | d5196134d0d111fd5b9ada3039ea2a2214c277b3 | da9effbf0ca007887794b4f8899fe9251548950c | d985d222a45b70f0238ce8c6c90d4844befca0d0 | refs/heads/master | 2021-09-03T21:13:47.764875 | 2018-01-12T02:28:20 | 2018-01-12T02:28:20 | 104,835,425 | 0 | 1 | null | 2017-09-26T04:19:29 | 2017-09-26T04:21:39 | 2018-01-12T02:28:20 | Python | [
{
"alpha_fraction": 0.7754010558128357,
"alphanum_fraction": 0.7754010558128357,
"avg_line_length": 25.714284896850586,
"blob_id": "68943b4937bede4e3af8d0ddd60596bcdb90cf1c",
"content_id": "a5cda570c0777ee4eb2b5ce4b2c411ed27983b64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 187,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 7,
"path": "/rink_calendar/serializers.py",
"repo_name": "Tennyx/rinklink",
"src_encoding": "UTF-8",
"text": "from rest_framework import serializers\nfrom .models import UserData\n\nclass DataSerializer(serializers.ModelSerializer):\n\tclass Meta:\n\t\tmodel = UserData\n\t\tfields = ['user_id','user_data']\n"
},
{
"alpha_fraction": 0.5751634240150452,
"alphanum_fraction": 0.6062091588973999,
"avg_line_length": 24.5,
"blob_id": "15e5b648e51ad0f0fad87843b68206820fa2145d",
"content_id": "1ba0ceb01bf45e5fc74356085ce2ff25d7a09e45",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 612,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 24,
"path": "/rink_calendar/migrations/0001_initial.py",
"repo_name": "Tennyx/rinklink",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.11.5 on 2017-11-28 23:11\nfrom __future__ import unicode_literals\n\nimport django.contrib.postgres.fields.jsonb\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='UserData',\n fields=[\n ('user_id', models.CharField(blank=True, max_length=25, primary_key=True, serialize=False)),\n ('user_data', django.contrib.postgres.fields.jsonb.JSONField()),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.746658205986023,
"alphanum_fraction": 0.754296600818634,
"avg_line_length": 31.081632614135742,
"blob_id": "a7a455bb2df9f65296e08d78ff6f56dc92ac89c4",
"content_id": "c356a7fc41f8ac981d58381bb87f7fb8a10e71b5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1571,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 49,
"path": "/rink_calendar/views.py",
"repo_name": "Tennyx/rinklink",
"src_encoding": "UTF-8",
"text": "from __future__ import unicode_literals\n\nfrom rest_framework.generics import CreateAPIView\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\n\nfrom django.shortcuts import render\nfrom rink_calendar.serializers import DataSerializer\nfrom .models import UserData\n\nfrom rest_framework import status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.renderers import JSONRenderer\n\ndef rink_calendar(request):\n\treturn render(request, 'rink_calendar/rink-calendar.html')\n\n@api_view(['GET', 'POST'])\ndef api(request):\n\t\n\tif request.method == 'GET':\n\n\t\tq_param = request.GET.get('q', '')\n\n\t\tif q_param:\n\t\t\treturn Response(DataSerializer(UserData.objects.get(user_id=q_param)).data)\n\t\telse:\t\n\t\t\tuser_data = UserData.objects.all()\n\t\t\tserializer = DataSerializer(user_data, many=True)\n\t\t\treturn Response(serializer.data)\n\n\telif request.method == 'POST':\n\t\t\n\n\t\tprint(request.data['user_id'])\n\n\t\tif UserData.objects.all().filter(user_id=request.data['user_id']):\n\t\t\tserializer = DataSerializer(instance=UserData.objects.get(user_id=request.data['user_id']), data=request.data)\n\t\t\tif serializer.is_valid():\n\t\t\t\tserializer.save()\n\t\t\t\treturn Response(serializer.data, status=status.HTTP_201_CREATED)\n\t\t\telse:\n\t\t\t\treturn Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n\t\telse:\n\t\t\tserializer = DataSerializer(data=request.data)\n\t\t\tif serializer.is_valid():\n\t\t\t\tserializer.save()\n\t\t\t\treturn Response(serializer.data, status=status.HTTP_201_CREATED)\n\t\t\treturn Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)"
},
{
"alpha_fraction": 0.5273382663726807,
"alphanum_fraction": 0.5428481698036194,
"avg_line_length": 46.63432693481445,
"blob_id": "60265a6bfab8fd04ead9cd3ca902a15971ace0ac",
"content_id": "1d0e6a262dec9d8532519bc8e491c7ac3fd6f1bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 6383,
"license_type": "no_license",
"max_line_length": 197,
"num_lines": 134,
"path": "/rink_calendar/static/js/modals.js",
"repo_name": "Tennyx/rinklink",
"src_encoding": "UTF-8",
"text": "const verifyModal =\n\t\t'<div class=\"modal fade\" id=\"verify-modal\">\\\n\t\t\t \t<div class=\"modal-dialog\" role=\"document\">\\\n\t\t\t \t<div class=\"modal-content\">\\\n\t\t\t \t\t<div class=\"modal-body text-center\">\\\n\t\t\t \t\t\t <h6 class=\"text-center\">Are you sure you want to delete this event?</h6>\\\n\t\t\t \t\t\t<button type=\"button\" id=\"delete-event\" class=\"btn btn-danger\" data-dismiss=\"modal\">Yes</button>\\\n\t\t\t \t\t<button type=\"button\" class=\"btn btn-primary\" data-dismiss=\"modal\">No</button>\\\n\t\t\t \t\t</div>\\\n\t\t\t \t</div>\\\n\t\t\t \t</div>\\\n\t\t</div>';\n\nfunction eventModal(evTitle, evStart, evEnd, evDesc){\n\tlet eventStr = \n\t\t\t'<div class=\"modal fade\" id=\"event-modal\">\\\n\t\t\t <div class=\"modal-dialog\" role=\"document\">\\\n\t\t\t <div class=\"modal-content\">\\\n\t\t\t <div class=\"modal-header\">\\\n\t\t\t <h5 class=\"modal-title\">Event Details</h5>\\\n\t\t\t <button type=\"button\" class=\"close\" data-dismiss=\"modal\" aria-label=\"Close\">\\\n\t\t\t <span aria-hidden=\"true\">×</span>\\\n\t\t\t </button>\\\n\t\t\t </div>\\\n\t\t\t <div id=\"event-desc\" class=\"modal-body text-center\">' +\n\t\t\t '<div><b><span class=\"event-mod-headers\">Event:</span></b><br>' + evTitle + '</div><br>' +\n\t\t\t '<div><b><span class=\"event-mod-headers\">Time:</span></b><br>' + evStart + '-' + evEnd + '</div><br>' +\n\t\t\t '<div><b><span class=\"event-mod-headers\">Description:</span></b><br>' + evDesc + '</div>' +\n\t\t\t '</div>\\\n\t\t\t <div class=\"modal-footer\">\\\n\t\t\t \t<button type=\"button\" id=\"verify-btn\" class=\"btn btn-danger\" data-toggle=\"modal\" data-target=\"#verify-modal\" data-dismiss=\"modal\"><i class=\"fa fa-trash\" aria-hidden=\"true\"></i></button>\\\n\t\t\t <button id=\"edit-event\" type=\"button\" class=\"btn btn-primary\" data-toggle=\"modal\" data-target=\"#edit-modal\" data-dismiss=\"modal\"><i class=\"fa fa-pencil\" aria-hidden=\"true\"></i></button>\\\n\t\t\t 
</div>\\\n\t\t\t </div>\\\n\t\t\t </div>\\\n\t\t\t</div>';\n\n\treturn eventStr;\n\n}\n\nfunction editModal(edTitle, edDesc, edColor, modTitle, btnText){\n\n\tconst timeArr = [12,1,2,3,4,5,6,7,8,9,10,11,12,1,2,3,4,5,6,7,8,9,10,11];\n\n\tlet timeGen = '';\n\n\tfor(i=0;i<timeArr.length;i++){\n\t\t\ttimeGen +=\n\t\t\t'<option value=\"' + timeArr[i] + ':00' + (i<12 ? 'pm' : 'am') + '\">' + timeArr[i] + ':00' + (i<12 ? 'pm' : 'am') + '</option>'+\n\t\t\t'<option value=\"' + timeArr[i] + ':15' + (i<12 ? 'pm' : 'am') + '\">' + timeArr[i] + ':15' + (i<12 ? 'pm' : 'am') + '</option>'+\n\t\t\t'<option value=\"' + timeArr[i] + ':30' + (i<12 ? 'pm' : 'am') + '\">' + timeArr[i] + ':30' + (i<12 ? 'pm' : 'am') + '</option>'+\n\t\t\t'<option value=\"' + timeArr[i] + ':45' + (i<12 ? 'pm' : 'am') + '\">' + timeArr[i] + ':45' + (i<12 ? 'pm' : 'am') + '</option>'\n\t}\n\n\tlet editStr = \n\t\t'<div class=\"modal fade\" id=\"edit-modal\" tabindex=\"-1\" role=\"dialog\" aria-labelledby=\"event-modalLabel\" aria-hidden=\"true\">\\\n\t\t\t<div class=\"modal-dialog\" role=\"document\">\\\n\t\t\t\t<div class=\"modal-content\">\\\n\t\t\t\t\t<div class=\"modal-header\">\\\n\t\t\t\t\t\t<h5 class=\"modal-title\" id=\"event-modalLabel\">' + modTitle + '</h5>\\\n\t\t\t\t\t\t<button type=\"button\" class=\"close\" data-dismiss=\"modal\" aria-label=\"Close\">\\\n\t\t\t\t\t\t\t<span aria-hidden=\"true\">×</span>\\\n\t\t\t\t\t\t</button>\\\n\t\t\t\t\t</div>\\\n\t\t\t\t\t<div class=\"modal-body\">\\\n\t\t\t\t <form>\\\n\t\t\t \t\t<div class=\"form-group\">\\\n\t\t\t \t\t<label for=\"edit-event\" class=\"form-control-label\">Event Title:</label>\\\n\t\t\t \t\t<input type=\"text\" value=\"' + edTitle + '\" class=\"form-control\" id=\"edit-event\">\\\n\t\t\t \t\t</div>\\\n\t\t\t \t\t<div class=\"form-group\">\\\n\t\t\t \t\t<label for=\"edit-time-start\" class=\"form-control-label\">Time:</label>\\\n\t\t\t \t\t<select id=\"edit-time-start\"> to' + timeGen +\n\t \t\t\t\t\n\t \t\t\t\t'</select>\\\n\t 
\t\t\t\t<select id=\"edit-time-end\">' + timeGen +\n\t\t\t \t\t\n\t\t\t \t\t'</select>\\\n\t\t\t \t\t</div>\\\n\t\t\t \t\t<div class=\"form-group\">\\\n\t\t\t \t\t<label for=\"edit-desc\" class=\"form-control-label\">Description:</label>\\\n\t\t\t \t\t<textarea class=\"form-control\" id=\"edit-desc\">' + edDesc + '</textarea>\\\n\t\t\t \t\t</div>\\\n\t\t\t \t\t<div class=\"form-group\">\\\n\t\t\t \t\t\t<label for=\"edit-color\" class=\"form-control-label\">Color:</label>\\\n\t\t\t\t \t\t<div class=\"btn-group\">\\\n\t\t\t\t\t\t\t\t\t<button type=\"button\" class=\"btn btn-secondary\" id=\"edit-main-color\" style=\"background-color:' + edColor + '\"> </button>\\\n\t\t\t\t\t\t\t\t\t<button type=\"button\" class=\"btn btn-secondary dropdown-toggle dropdown-toggle-split\" data-toggle=\"dropdown\" aria-haspopup=\"true\" aria-expanded=\"false\">\\\n\t\t\t\t\t\t\t\t\t\t<span class=\"sr-only\">Toggle Dropdown</span>\\\n\t\t\t\t\t\t\t\t\t</button>\\\n\t\t\t\t\t\t\t\t\t<div class=\"dropdown-menu\" id=\"edit-color\">\\\n\t\t\t\t\t\t\t\t\t\t<div class=\"btn-toolbar\" role=\"toolbar\" aria-label=\"Toolbar with button groups\">\\\n\t\t\t\t\t\t\t\t\t\t\t<div class=\"btn-group mr-2 ml-2\" role=\"group\" aria-label=\"First group\">\\\n\t\t\t\t\t\t\t\t\t\t\t \t<button type=\"button\" class=\"btn btn-secondary\" style=\"background:#bfbdbd;height:35px\"></button>\\\n\t\t\t\t\t\t\t\t\t\t\t <button type=\"button\" class=\"btn btn-secondary\" style=\"background:#ef5858\"></button>\\\n\t\t\t\t\t\t\t\t\t\t\t <button type=\"button\" class=\"btn btn-secondary\" style=\"background:#ed9344\"></button>\\\n\t\t\t\t\t\t\t\t\t\t\t <button type=\"button\" class=\"btn btn-secondary\" style=\"background:#e2db09\"></button>\\\n\t\t\t\t\t\t\t\t\t\t\t <button type=\"button\" class=\"btn btn-secondary\" style=\"background:#35d130\"></button>\\\n\t\t\t\t\t\t\t\t\t\t\t <button type=\"button\" class=\"btn btn-secondary\" style=\"background:#3c7de5\"></button>\\\n\t\t\t\t\t\t\t\t\t\t\t <button 
type=\"button\" class=\"btn btn-secondary\" style=\"background:#9737ba\"></button>\\\n\t\t\t\t\t\t\t\t\t\t </div>\\\n\t\t\t\t\t\t\t\t\t\t</div>\\\n\t\t\t\t\t\t\t\t\t</div>\\\n\t\t\t\t\t\t\t\t</div>\\\n\t\t\t\t\t\t\t</div>\\\n\t\t\t \t</form>\\\n\t\t\t\t\t</div>\\\n\t\t\t\t\t<div class=\"modal-footer\">\\\n\t\t\t\t\t\t<div id=\"error-tag\"></div>\\\n\t\t\t\t\t\t<button type=\"button\" id=\"submit-changes\" class=\"btn btn-primary\">' + btnText + '</button>\\\n\t\t\t\t\t</div>\\\n\t\t\t\t</div>\\\n\t\t\t</div>\\\n\t\t</div>';\n\n\treturn editStr;\n}\n\nfunction successModal(){\n\tconst successStr = \n\t\t'<div class=\"modal fade\" id=\"success-modal\">\\\n\t\t\t \t<div class=\"modal-dialog\" role=\"document\">\\\n\t\t\t \t<div class=\"modal-content\">\\\n\t\t\t \t\t<div class=\"modal-body\">\\\n\t\t\t \t\t\t <h6 class=\"text-center\">Changes Saved!</h6>\\\n\t\t\t \t\t\t <div class=\"text-center\"><i class=\"fa fa-check-circle\" aria-hidden=\"true\"></i></div>\\\n\t\t\t \t\t</div>\\\n\t\t\t \t</div>\\\n\t\t\t \t</div>\\\n\t\t</div>';\n\n\treturn successStr;\n}\n"
},
{
"alpha_fraction": 0.7465277910232544,
"alphanum_fraction": 0.7465277910232544,
"avg_line_length": 27.899999618530273,
"blob_id": "a52bff3012cf573cb7c065c3cc50336ff3824225",
"content_id": "06ee2bcec9e3b51ce7f726d9b83dc861e9b89c87",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 288,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 10,
"path": "/rink_calendar/urls.py",
"repo_name": "Tennyx/rinklink",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import include, url\nfrom . import views\nfrom rest_framework.urlpatterns import format_suffix_patterns\n\nurlpatterns = [\n\turl(r'^$', views.rink_calendar, name=\"rink_calendar\"),\n\turl(r'^api/$', views.api, name=\"api\")\n]\n\nurlpatterns = format_suffix_patterns(urlpatterns)"
},
{
"alpha_fraction": 0.739130437374115,
"alphanum_fraction": 0.75,
"avg_line_length": 33.375,
"blob_id": "c45f82ccbfd94d1fa95b47c9bcd622e3d94c140b",
"content_id": "14da25f1ddf93f0630c9772d69f00fefa5327315",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 276,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 8,
"path": "/rink_calendar/models.py",
"repo_name": "Tennyx/rinklink",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nfrom django.contrib.postgres.fields import JSONField\nfrom django.db import models\n\nclass UserData(models.Model):\n\tuser_id = models.CharField(max_length=25, primary_key=True, blank=True)\n\tuser_data = JSONField()\n\n"
}
] | 6 |
yinyingyi/test | https://github.com/yinyingyi/test | fcd5e9a75017d1d4ebd960dac3c46653f9b81a9e | 80288d671bd47bb34373cd778dace4538a24ab37 | 4dd1ba3a8a4ab3fceae0052250205b484636eab5 | refs/heads/master | 2022-11-18T18:17:12.185078 | 2020-06-26T13:22:06 | 2020-06-26T13:22:06 | 260,823,160 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5292207598686218,
"alphanum_fraction": 0.5454545617103577,
"avg_line_length": 36.56097412109375,
"blob_id": "8bb2ec63f91ffd144f8b52c444f0d74081aadadd",
"content_id": "19e6410123b4b9d0a4190d16d12e0b2d87436e4e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2058,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 41,
"path": "/game003.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "'''\n代码优化——继承\n使⽤传参的⽅式传⼊hp和⾎量,第⼆个⾓⾊,他叫后裔,后裔继承了⾓⾊的hp 和power。并多了护甲属性。\n重新定义另外⼀个defense⽅法:final_hp = hp+defense-enemy_power\nenemy_final_hp = enemy_hp - power\n两个hp进⾏对⽐,⾎量剩余多的⼈获胜\n'''\nclass Game(): #定义一个Game类\n def __init__(self,hp=1000,power=200): #定义role方法,并给定属性赋默认值\n self.hp = hp #定义hp属性,使用self后才能被其他方法调用\n self.power = power #定义power属性\n\n def fight(self,enemy_power,enemy_hp): #定义fight方法,敌人属性\n final_hp = self.hp - enemy_power #定义final_hp计算方法\n enemy_final_hp = enemy_hp - self.power #定义敌人final_hp计算方法\n if final_hp > enemy_final_hp: #判断条件\n print('我赢啦!!')\n elif final_hp == enemy_final_hp:\n print('打了个平手!')\n else:\n print('很遗憾!我输了!')\n\nclass Houyi(Game): #定义Houyi类,继承Game的属性\n def __init__(self,defense1): #定义私有方法\n super().__init__() #使用super()继承父类init()方法的属性\n self.defense1 = defense1 #定义defense1属性\n\n def defense(self,enemy_power,enemy_hp): #定义defense方法\n while True:\n self.hp = self.hp + self.defense1 - enemy_power #后裔决战后血量计算方法\n enemy_hp = enemy_hp - self.power #敌人决战后血量计算方法\n print(\"后裔hp{}、敌人hp{}\".format(self.hp,enemy_hp))\n if self.hp <= 0:\n print('h很遗憾!我输啦!!')\n break #退出当前循环\n elif enemy_hp <= 0:\n print('h我赢了!')\n break\n\nh = Houyi(100) #对Houyi()类进行实例化,并给defense1传参\nh.defense(200,2222) #给Houyi()类中defense方法传参\n"
},
{
"alpha_fraction": 0.6085825562477112,
"alphanum_fraction": 0.6762028336524963,
"avg_line_length": 41.77777862548828,
"blob_id": "ec81559ae99082fe8b4b59361433c120bc46da84",
"content_id": "ae470d60708449ce9bdd4ab5bb4ac992c5da49a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 985,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 18,
"path": "/python_excel_one/ExcelOne.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "from openpyxl import Workbook\nfrom openpyxl.utils import get_column_letter\n\nwb = Workbook() #实例化Workbook\ndest_filename = 'empty_book.xlsx' #定义文件名字\nws1 = wb.active #获取当前页签\nws1.title = \"range names\" #给当前页签命名\nfor row in range(1, 40): #在第1-39行循环显示以下内容\n ws1.append(range(600)) #每行显示1-600\nws2 = wb.create_sheet(title=\"Pi\") #创建第二个sheet页\nws2['F5'] = 3.14 #在F5的位置添加3.14\nws3 = wb.create_sheet(title=\"Data\") #创建第三个sheet页\nfor row in range(10, 20): #在第10-19行循环显示以下内容\n for col in range(27, 54): #在第27-53列循环显示以下内容\n # cell单元格,get_column_letter()列字母与数字之间的转换,{0}设置指定位置\n _ = ws3.cell(column=col, row=row, value=\"{0}\".format(get_column_letter(col)))\nprint(ws3['AA10'].value) #打印sheet3中AA10的值\nwb.save(filename = dest_filename) #保存文件"
},
{
"alpha_fraction": 0.6396396160125732,
"alphanum_fraction": 0.6445536613464355,
"avg_line_length": 38.40322494506836,
"blob_id": "74b505e7c3ebbec272e3de580538ed901803564f",
"content_id": "15a18fafda5cc99852e040daa6fd5b996abc9130",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2490,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 62,
"path": "/test_selenium_event/test_ActionChains.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "from time import sleep\n\nfrom selenium import webdriver\nfrom selenium.webdriver import ActionChains\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.support.wait import WebDriverWait\n\n\nclass TestActionChains():\n def setup(self):\n self.driver = webdriver.Chrome()\n self.driver.maximize_window()\n # self.driver.get('http://sahitest.com/demo/label.htm')\n # input1 = self.driver.find_element_by_tag_name('input')[3]\n # input2 = self.driver.find_element_by_tag_name('input')[4]\n # action = ActionChains(self.driver)\n # input1.click()\n # action.send_keys('username').perform()\n # action.key_down(Keys.CONTROL).send_keys('a').key_up(Keys.CONTROL)\n # action.key_down(Keys.CONTROL).send_keys('c').key_up(Keys.CONTROL)\n # action.key_down(Keys.CONTROL,input2).send_keys('v').key_up(Keys.CONTROL)\n\n def teardown(self):\n pass\n # self.driver.quit()\n\n def test_case_click(self):\n pass\n\n def test_movetoelement(self):\n self.driver.get('https://www.baidu.com/')\n ele = self.driver.find_element(By.ID,\"s-usersetting-top\")\n action = ActionChains(self.driver)\n action.move_to_element(ele)\n action.perform()\n sleep(10)\n\n def test_dragdrop(self):\n self.driver.get('http://sahitest.com/demo/dragDropMooTools.htm')\n drag_element = self.driver.find_element_by_class_name(\"drag\")\n drop_element = self.driver.find_element_by_xpath(\"//body/div[2]\")\n action = ActionChains(self.driver)\n #方法一\n # action.drag_and_drop(drag_element,drop_element).perform()\n #方法二\n # action.click_and_hold(drag_element).release(drop_element).perform()\n #方法三\n action.click_and_hold(drag_element).move_to_element(drop_element).release().perform()\n def wait(x): #自定义函数,一定要设置一个参数\n return len(self.driver.find_elements(By.XPATH,'//body/div')) >= 1\n WebDriverWait(self.driver,5).until(wait)\n\n def test_sendkeys(self):\n self.driver.get('http://sahitest.com/demo/label.htm')\n ele = 
self.driver.find_element_by_xpath('/html/body/label[1]/input')\n action = ActionChains(self.driver)\n ele.click()\n action.send_keys('username')\n action.send_keys(Keys.SPACE)\n action.send_keys('tom')\n action.send_keys(Keys.BACK_SPACE).perform()"
},
{
"alpha_fraction": 0.6229097247123718,
"alphanum_fraction": 0.6454849243164062,
"avg_line_length": 28.875,
"blob_id": "bbff9090a14cc9f2050f8d6773b07be0e0fffd39",
"content_id": "81903d12bc685fc2c61da6f75fa7df96935abaa1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1400,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 40,
"path": "/biaozhunku.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "# import os #导入os库\n#\n# print(os.listdir()) #以列表形式展示文件夹下的文件\n# print(os.getcwd()) #打印当前路径\n#\n# print(os.path.exists(\"b\")) #判断当前路径下有没有文件b。返回True有、Flase没有\n# if not os.path.exists(\"b\"):#如果b文件夹不存在\n# os.mkdir(\"b\")#在当前路径下新建文件夹b\n# if not os.path.exists(\"b/test.txt\"):#如果b/test.txt文件不存在\n# f = open(\"b/test.txt\",\"w\")\n# f.write(\"hello,os using!\")\n# f.close()\n\n# import time #导入time库\n#\n# print(time.asctime())\n# print(time.time())\n# print(time.localtime())\n# def strftime(format,p_tuple=None):\n# pass\n# print(time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime()))\n# #获取2天前、3天后的时间\n# now_time = time.time()\n# tow_day_before_time = now_time - 60*60*24*2\n# three_day_after_time = now_time + 60*60*24*3\n# time_tuple1 = time.localtime(tow_day_before_time)\n# time_tuple2 = time.localtime(three_day_after_time)\n# print(time.strftime(\"%Y-%m-%d %H:%M:%S\",time_tuple1))\n# print(time.strftime(\"%Y-%m-%d %H:%M:%S\",time_tuple2))\n\n# import urllib.request\n# response = urllib.request.urlopen(\"https://www.baidu.com\")\n# print(response.status)\n# print(response.read())\n\nimport math\n\nprint(math.ceil(5.5)) #返回≥x的最小整数\nprint(math.floor(5.5)) #返回x的下限整数\nprint(math.sqrt(6.25)) #计算x的平方根\n\n"
},
{
"alpha_fraction": 0.5575980544090271,
"alphanum_fraction": 0.5833333134651184,
"avg_line_length": 33.04166793823242,
"blob_id": "6293c40251465f191f3c98544735ca740bbdabaa",
"content_id": "bf617ffc820e6c55f96bae03a1f2a7537c36af2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1118,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 24,
"path": "/game002.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "'''\n⼀个回合制游戏,每个⾓⾊都有hp 和power,hp代表⾎量,power代表攻击⼒,hp的初始值为1000,power的初始值为200。\n定义⼀个fight⽅法:final_hp = hp-enemy_power\nenemy_final_hp = enemy_hp - power\n两个hp进⾏对⽐,⾎量剩余多的⼈获胜\n'''\nclass Game(): #定义一个Game类\n def role(self,hp,power): #定义role方法\n self.hp = hp #定义hp属性,使用self后才能被其他方法调用\n self.power = power #定义power属性\n\n def fight(self,enemy_power,enemy_hp): #定义fight方法,敌人属性\n final_hp = self.hp - enemy_power #定义final_hp计算方法\n enemy_final_hp = enemy_hp - self.power #定义敌人final_hp计算方法\n if final_hp > enemy_final_hp:\n print('我赢啦!!')\n elif final_hp == enemy_final_hp:\n print('打了个平手!')\n else:\n print('很遗憾!我输了!')\n\ng = Game() #对Game()类进行实例化\ng.role(1000,100) #给role方法传参\ng.fight(1200,200) #给fight方法传参"
},
{
"alpha_fraction": 0.5757299065589905,
"alphanum_fraction": 0.5875912308692932,
"avg_line_length": 32.24242401123047,
"blob_id": "fb36473172fd538db924be0d68c2bd856bc267cc",
"content_id": "712860ffb07ab7f02fc4fb9cbcbb9bd3f9acd0a1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1698,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 33,
"path": "/game001.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "'''\n写⼀个Bicycle(⾃⾏车)类,有run(骑⾏)⽅法, 调⽤时显⽰骑⾏⾥程km(骑⾏⾥程为传⼊的数字),\n再写⼀个电动⾃⾏车类EBicycle继承⾃Bicycle,添加电池电量valume属性通过,参数传⼊, 同时有两个⽅法:\n1. fill_charge(vol) ⽤来充电, vol 为电量\n2. run(km) ⽅法⽤于骑⾏,每骑⾏10km消耗电量1度,当电量消耗尽时调⽤Bicycle的run⽅法骑⾏,通过传⼊的骑⾏⾥程数,显⽰骑⾏结果\n'''\nclass Bicycle(): #定义一个Bicycle类\n def run(self,km): #定义一个run方法,并传入一个km参数\n print('骑行里程为{}'.format(km)) #打印骑行里程数\n\nclass Ebicycle(Bicycle):#定义一个Ebicycle类,继承Bicycle类的属性\n def __init__(self,volume): #定义私有方法电量\n self.volume = volume\n print('当前电量为{}'.format(self.volume)) #打印当前电量\n\n def fill_charge(self,vol): #定义电瓶车充电电量\n print('充电电量是{}'.format(vol)) #打印电瓶车充电电量\n\n def run(self,km): #定义电瓶车骑行方法\n e_miles = self.volume*10 #e_miles属性,电瓶车支持的最大里程\n miles = km - e_miles #miles属性,用于判断电瓶能否支持骑完全程\n if miles <= 0:\n print('电瓶车骑了{}公里'.format(km))\n else:\n #子类中有一个run,把父类的run覆盖掉了。需要使用super()调用,并传入除了电瓶车骑行的里程数\n super().run(miles)\n\n# b = Bicycle()\n# b.run()\n\neb = Ebicycle(10) #实例化类,只能传参给init方法\neb.run(200)\nprint()"
},
{
"alpha_fraction": 0.6325300931930542,
"alphanum_fraction": 0.6409638524055481,
"avg_line_length": 28.464284896850586,
"blob_id": "deadaf611864ec0aa195a6652c9096e923f7bce0",
"content_id": "f103ef873ceb116abfbc515a4351675c5522983b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 836,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 28,
"path": "/test_selenium_event/test_TouchAction.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "from time import sleep\n\nfrom selenium import webdriver\nfrom selenium.webdriver import TouchActions\n\n\nclass TestTouchAction():\n def setup(self):\n option = webdriver.ChromeOptions()\n option.add_experimental_option('w3c',False)\n self.driver = webdriver.Chrome(options = option)\n self.driver.maximize_window()\n self.driver.implicitly_wait(5)\n\n def teardown(self):\n pass\n\n def test_touchaction(self, action=None):\n self.driver.get('https://www.baidu.com/')\n ele = self.driver.find_element_by_id(\"kw\")\n ele_search = self.driver.find_element_by_id(\"su\")\n\n ele.send_keys(\"selenium测试\")\n action = TouchActions(self.driver)\n action.tap(ele_search)\n action.perform()\n #方法一\n action.scroll_from_element(ele,0,1000).perform()\n \n"
},
{
"alpha_fraction": 0.4624277353286743,
"alphanum_fraction": 0.5104491114616394,
"avg_line_length": 37.77586364746094,
"blob_id": "28efd12eb59bfc222235e6dd780ef18198ccbf42",
"content_id": "548ad6e9a6645c83b7bcd87506bed638f8dae06d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2569,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 58,
"path": "/python_excel_one/ExcelTwo.py",
"repo_name": "yinyingyi/test",
"src_encoding": "UTF-8",
"text": "from openpyxl import Workbook,load_workbook\n\nclass Practice(): #定义一个练习类\n def creat_data_one(self): #定义一个creat()方法\n wb = Workbook() #实例化Workbook(),alt+enter快捷导入\n ws1 = wb.active # 获取当前页签\n ws1.title = \"one\" # 给当前页签命名\n ws1[\"A1\"] = \"身高\" #在A1的位置输入\"身高\"\n ws1[\"B1\"] = \"体重\" #在B1的位置输入\"体重\"\n ws1[\"C1\"] = \"是否超重\" #在C1的位置输入\"是否超重\"\n data_dict = {180:81,160:51,175:71,158:41} #定义字典\n data_keys = [i for i in data_dict.keys()] #使用列表获取字典key值。要想单独获取字典每个值,必须先转化为列表\n for i in range(len(data_dict)): #for循环,循环次数i=字典长度\n ws1.cell(row=i + 2, column=1).value = data_keys[i] #写入字典key值。row行,col列,从excle第二行开始写值,所以row=i + 2\n ws1.cell(row=i + 2, column=2).value = data_dict[data_keys[i]] #写入字典key对应的values值\n\n # 写法一:列表\n # height = [180,160,175,158]\n # weight = [80,50,70,40]\n # for i in range(len(height)):\n # ws1.cell(row = i + 2,column=1).value = height[i]\n # ws1.cell(row = i + 2,column=2).value = weight[i]\n wb.save(\"data_one.xlsx\")\n\n def get_data(self):\n ld = load_workbook(filename=\"data_one.xlsx\")\n sheet = ld[\"one\"]\n for i in range(4):\n print(sheet.cell(row=i + 2, column=1).values)\n\n def health_data(self):\n ld = load_workbook(filename=\"data_one.xlsx\")\n sheet = ld[\"one\"]\n # height = [180, 160, 175, 158]\n # weight = [80,50,70,40]\n # heath_weight = (height - 70) *0.6\n for i in range(4):\n height = sheet.cell(row=i + 2, column=1).value\n weight = sheet.cell(row=i + 2, column=2).value\n mark = sheet.cell(row=i + 2, column=3).value\n heath_weight = (height - 70) *0.6\n if weight > heath_weight:\n print(\"超标\")\n sheet.cell(row=i + 2, column=3).value=\"超标\"\n elif weight == heath_weight:\n print(\"标准\")\n sheet.cell(row=i + 2, column=3).value = \"标准\"\n else:\n print(\"太瘦\")\n sheet.cell(row=i + 2, column=3).value = \"太瘦\"\n ld.save(\"data_one.xlsx\")\n\n\n\np = Practice()\n# p.creat_data_one()\n# p.get_data()\np.health_data()\n"
}
] | 8 |
4tome/IPinfo | https://github.com/4tome/IPinfo | 90171ddbf8a58e3f139bed7c8a17a73556c6ad04 | ac2d6f129435a24020c0d5212297e60afdf969e3 | 3c6576a46be4a5d18e0b3b0f3ec0270585cdd66b | refs/heads/main | 2023-05-15T20:52:04.816094 | 2021-06-09T18:32:33 | 2021-06-09T18:32:33 | 375,450,060 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5226033329963684,
"alphanum_fraction": 0.5257695913314819,
"avg_line_length": 36.900001525878906,
"blob_id": "550206969d00d6da72c3fd2bc1af7bbb7026fd64",
"content_id": "9cd26b1ee1018600b08ca6a389f20945d02cdd14",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5685,
"license_type": "no_license",
"max_line_length": 249,
"num_lines": 150,
"path": "/ipinfo.py",
"repo_name": "4tome/IPinfo",
"src_encoding": "UTF-8",
"text": "import argparse\nimport requests\nimport requests_html\nimport json\nfrom datetime import date, timedelta\nfrom pyfiglet import figlet_format # fonts http://www.figlet.org/examples.html\n\nclass ip_info:\n def __init__(self):\n pass\n\n def get_info(self, ip):\n r = requests.get('http://ip-api.com/json/' + ip + '?fields=status,message,continent,continentCode,country,countryCode,region,regionName,city,district,zip,lat,lon,timezone,offset,currency,isp,org,as,asname,reverse,mobile,proxy,hosting,query')\n info = json.loads(r.text)\n\n if (info['status'] != \"success\"):\n print(\"Invalid IP address\")\n exit()\n\n print(\"IP: \" + info['query'])\n print(\"\\n[+] Continent: \" + info['continent'])\n print(\"[+] Continent: \" + info['continentCode'])\n print(\"[+] Country: \" + info['country'])\n print(\"[+] Country Code: \" + info['countryCode'])\n print(\"[+] Region: \" + info['regionName'])\n print(\"[+] City: \" + info['city'])\n print(\"[+] Zip: \" + info['zip'])\n print(\"[+] Latitude: \" + str(info['lat']))\n print(\"[+] Longitude: \" + str(info['lon']))\n print(\"[+] ISP: \" + info['isp'])\n print(\"[+] ORG: \" + info['org'])\n print(\"[+] ASN: \" + info['as'])\n print(\"[+] ASN Name: \" + info['asname'])\n print(\"[+] Reverse: \" + info['reverse'])\n print(\"[+] Mobile: \" + str(info['mobile']))\n print(\"[+] Proxy: \" + str(info['proxy']))\n print(\"[+] Hosting: \" + str(info['hosting']))\n\n def get_addinfo(self, ip):\n r = requests.get('https://ipwhois.app/json/' + ip)\n add_info = json.loads(r.text)\n print(\"[+] Currency: \" + add_info['currency'])\n print(\"[+] Currency Code: \" + add_info['currency_code'])\n print(\"[+] Currency Symbol: \" + add_info['currency_symbol'])\n print(\"[+] Currency Rates: \" + add_info['currency_rates'])\n print(\"[+] Currency Plural: \" + add_info['currency_plural'])\n print(\"[+] Timezone: \" + add_info['timezone'])\n print(\"[+] Timezone Name: \" + add_info['timezone_name'])\n print(\"[+] Timezone DST Offset: \" + 
add_info['timezone_dstOffset'])\n print(\"[+] Timezone GMT Offset: \" + add_info['timezone_gmtOffset'])\n print(\"[+] Timezone GMT: \" + add_info['timezone_gmt'])\n print(\"[+] Country Neighbors: \" + add_info['country_neighbours'])\n\n def checkTor(self, ip):\n today = date.today()\n b4_yesterday = today - timedelta(days=2)\n\n url = 'https://metrics.torproject.org/exonerator.html?ip=' + ip + '×tamp=' + str(b4_yesterday) + '&lang=en'\n\n try:\n session = requests_html.HTMLSession()\n response = session.get(url)\n except requests.exceptions.RequestException as e:\n print(e)\n\n r = response.html.find('.panel-body', first=True).text\n print(\"[+] - \" + r)\n\n def checkBlacklist(self, ip):\n\n url = 'https://www.abuseipdb.com/check/' + ip\n\n try:\n session = requests_html.HTMLSession()\n response = session.get(url)\n except requests.exceptions.RequestException as e:\n print(e)\n\n r = response.html.find('p', containing='This IP address has been reported')\n\n if r:\n for x in r:\n print(\"[+] -\" + x.text)\n else:\n print(\"[+] - This IP address has not been reported\")\n\n def checkCVEPorts(self, ip):\n # Get CVEs and Open Ports: https://spyse.com/target/ip/78.47.211.252\n url = 'https://spyse.com/target/ip/' + ip\n try:\n session = requests_html.HTMLSession()\n response = session.get(url)\n except requests.exceptions.RequestException as e:\n print(e)\n\n r = response.html.find('.security-block', first=True).text\n risks = r.split(\"\\n\")\n\n # Accessing directly to the array elements because the response will always have the same format\n print(\"[+] - Security Score: \" + risks[1])\n print(\"[+] - \" + risks[2])\n print(\"[#] / Critical risk: \" + risks[4])\n print(\"[#] / Medium risk: \" + risks[6])\n print(\"[#] / Medium risk: \" + risks[8])\n\n r = response.html.find('.cve__id')\n if r:\n print(\"\\n[+] - CVEs number: \")\n for x in r:\n print(\"[#] / \" + x.text)\n\n print(\"\\nChecking if IP has open ports\")\n r = response.html.find('.port__value')\n if 
r:\n # Formatting output, deleting repeated ports\n ports = []\n for x in r:\n if x not in ports:\n ports.append(x.text)\n ports = set(ports)\n for x in ports:\n print(\"[+] - \" + x + \" (Open)\")\n else:\n print(\"[+] - No open ports\")\n\n def main(self, args):\n ip = args.ip\n print(\"\\nGeneral Info:\")\n self.get_info(ip)\n print(\"\\nAdditional Info:\")\n self.get_addinfo(ip)\n print(\"\\nChecking if the IP was used as a TOR relay:\")\n self.checkTor(ip)\n print(\"\\nChecking if the IP has been reported:\")\n self.checkBlacklist(ip)\n print(\"\\nChecking IP security score:\")\n self.checkCVEPorts(ip)\n\n\nif __name__ == \"__main__\":\n print(\"###################################################################\")\n print(figlet_format(\" IPinfo\", font=\"standard\"))\n print(\"###################################################################\")\n\n parser = argparse.ArgumentParser()\n parser.add_argument('-i', '--ip', type=str, help='The target IP.', required=True)\n\n args = parser.parse_args()\n p = ip_info()\n p.main(args)\n"
},
{
"alpha_fraction": 0.5349321961402893,
"alphanum_fraction": 0.6444212794303894,
"avg_line_length": 18.571428298950195,
"blob_id": "c5e12822196be06f01a4bb6308e48a182ee065d9",
"content_id": "fd673629b6e5a5979737607688920a4e4f81d0b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 959,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 49,
"path": "/README.md",
"repo_name": "4tome/IPinfo",
"src_encoding": "UTF-8",
"text": "# IPinfo\n\nIPinfo is a python program that give us information about a given IP address (Geolocation, Open ports, CVEs). \n\n## Usage\n\n```python ipinfo.py [-h] -i IP```\n\nExample 1 (IP used - 45.33.32.156 - http://scanme.nmap.org/):\n\n<img src=\"examples/1.png\">\n<img src=\"examples/2.png\">\n\nExample 2(IP used - 176.28.50.165 - http://testphp.vulnweb.com/):\n\n<img src=\"examples/3.png\">\n<img src=\"examples/4.png\">\n\n## Requeriments\n\n- appdirs==1.4.4\n- beautifulsoup4==4.9.3\n- bs4==0.0.1\n- certifi==2021.5.30\n- chardet==4.0.0\n- cssselect==1.1.0\n- fake-useragent==0.1.11\n- idna==2.10\n- lxml==4.6.3\n- parse==1.19.0\n- pyee==8.1.0\n- pyfiglet==0.8.post1\n- pyppeteer==0.2.5\n- pyquery==1.4.3\n- requests==2.25.1\n- requests-html==0.10.0\n- six==1.16.0\n- soupsieve==2.2.1\n- tqdm==4.61.0\n- urllib3==1.26.5\n- w3lib==1.22.0\n- websockets==8.1\n\n## Resources\n- https://spyse.com/\n- https://www.abuseipdb.com/\n- https://metrics.torproject.org/\n- https://ipwhois.app/\n- http://ip-api.com/\n"
}
] | 2 |
zach-wheat/mbp_query | https://github.com/zach-wheat/mbp_query | c342425361ac54e3ae2cee7833e3823fd06c98c6 | 2b799b9d8ce4de5217ef40e4fc608cbd09772622 | 0bf140d0a687671982617ac23f1d7c042100accd | refs/heads/main | 2023-08-22T16:42:00.341704 | 2021-10-28T00:34:56 | 2021-10-28T00:34:56 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5471478700637817,
"alphanum_fraction": 0.5902211666107178,
"avg_line_length": 31.961538314819336,
"blob_id": "496dc11b062a22a174d169b4288a9c7bf8d7a079",
"content_id": "8814a7eb8e39d516bb8c069bd9562a9eaf034793",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 859,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 26,
"path": "/sub_query.py",
"repo_name": "zach-wheat/mbp_query",
"src_encoding": "UTF-8",
"text": "import requests\nfrom bs4 import BeautifulSoup\nimport json\n\n\ndef subpage_query(url):\n\n headers = {'user-agent':\n 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) '\n 'AppleWebKit/537.36 (KHTML, like Gecko) '\n 'Chrome/73.0.3683.86 Safari/537.36'}\n\n r = requests.get(url, headers=headers, timeout=15)\n soup = BeautifulSoup(r.content, 'html.parser')\n\n months = {'January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October',\n 'November', 'December', }\n #enter release year(s) to search\n years = {'2019', '2020'}\n\n data = json.loads(soup.find('script', type='application/ld+json').text)\n\n for month in months:\n for year in years:\n if f'Originally released {month} {year}' in data['description']:\n return data['url']\n\n\n"
},
{
"alpha_fraction": 0.7862595319747925,
"alphanum_fraction": 0.7862595319747925,
"avg_line_length": 42.33333206176758,
"blob_id": "262b1a991f3ad17cbc0bff3755ba10ecedf933c8",
"content_id": "961433b674d52a7631ec5a323ac7acc3d3434be3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 131,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 3,
"path": "/README.md",
"repo_name": "zach-wheat/mbp_query",
"src_encoding": "UTF-8",
"text": "# mbp_query\n\nA web scraper which searches for refurbished MacBooks and sends a Twilio alert via text message if a match is found. \n"
},
{
"alpha_fraction": 0.625806450843811,
"alphanum_fraction": 0.6301075220108032,
"avg_line_length": 32.16666793823242,
"blob_id": "527dd170e10902a5add8f668e936cfc9abf84b70",
"content_id": "671dc0a726682c64aba47e7ae94fab54e53d5683",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1395,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 42,
"path": "/mbp_finder.py",
"repo_name": "zach-wheat/mbp_query",
"src_encoding": "UTF-8",
"text": "from query import landingpage_query\nfrom sub_query import subpage_query\nimport time\nfrom datetime import datetime\nimport random\nfrom twilio_alert import twilio\n\n\n# Requests https://www.apple.com/shop/refurbished/mac/macbook-pro and checks to see if model year(s) \\\n# specified in sub_query are available. If so, Twilio notification containing link(s) is sent via text.\n\n\ndef main():\n print(f'Running job... Current time: {datetime.now()}')\n print('\\n')\n urls = landingpage_query()\n print(f\"Number of url's retrieved: {len(urls)}. Moving to subpage_query\")\n\n output = []\n for url in urls:\n time.sleep(random.uniform(1.01, 3.77))\n try:\n link = subpage_query(url)\n if link:\n output.append(link)\n print(f'link found: {link}. Moving to next machine')\n except Exception as Ex:\n print(f'Error while crawling page: {url}. Exception: {Ex}')\n continue\n\n if output:\n print(f'Job completed: {datetime.now()}')\n outlist = ',\\n\\n'.join([link for link in output])\n print(f'Macbook query results: {outlist}')\n return twilio(f'Macbook query results: {outlist}')\n print(f'Job completed: {datetime.now()}')\n print('No Macbooks found for year(s) searched...')\n return twilio('No Macbooks found for selected year(s)...')\n\n\nif __name__ == \"__main__\":\n main()\n\n\n"
},
{
"alpha_fraction": 0.5533807873725891,
"alphanum_fraction": 0.5845195651054382,
"avg_line_length": 31.114286422729492,
"blob_id": "7d97fc0875d33a14da2fdddda35349ba2aa7261d",
"content_id": "8c00b2dd21714f238c573692500982af8728e534",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1124,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 35,
"path": "/query.py",
"repo_name": "zach-wheat/mbp_query",
"src_encoding": "UTF-8",
"text": "import requests\nfrom bs4 import BeautifulSoup\n\n\ndef landingpage_query(attempts=3):\n\n url = 'https://www.apple.com/shop/refurbished/mac/macbook-pro'\n #enter model to search\n model = 'Refurbished 16-inch'\n\n headers = {'user-agent':\n 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) '\n 'AppleWebKit/537.36 (KHTML, like Gecko) '\n 'Chrome/73.0.3683.86 Safari/537.36'}\n\n while attempts != 0:\n try:\n r = requests.get(url, headers=headers, timeout=15)\n attempts = 0\n except Exception as Ex:\n print(f'Error when retrieving landing page links: {Ex}')\n attempts -= 1\n print(f'Trying again. Number of attempts remaining: {attempts}')\n\n soup = BeautifulSoup(r.content, 'html.parser')\n products = soup.find_all('div', class_=\"refurbished-category-grid-no-js\")\n\n urls = []\n for prod in products:\n for link in prod.findAll('a', href=True):\n for chunk in link:\n if model in chunk:\n urls.append('https://www.apple.com' + link['href'])\n\n return urls\n"
},
{
"alpha_fraction": 0.5846154093742371,
"alphanum_fraction": 0.5846154093742371,
"avg_line_length": 19.590909957885742,
"blob_id": "36360478c95f12eef49ad5a5c2800454be897a7a",
"content_id": "5666d24ae0f318d0b24e8f0053b088175e975289",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 455,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 22,
"path": "/twilio_alert.py",
"repo_name": "zach-wheat/mbp_query",
"src_encoding": "UTF-8",
"text": "from twilio.rest import Client\n\n\ndef twilio(body):\n # enter your account sid, auth token, and phone number from Twilio:\n acct_sid = ''\n auth_token = ''\n twilio_number = ''\n \n # enter number to receive text notification:\n number_to_text = ''\n \n client = Client(acct_sid, auth_token)\n\n message = client.messages \\\n .create(\n body=body,\n from_= twilio_number,\n to= number_to_text\n )\n\n return\n\n\n"
}
] | 5 |
quizbowl/audio-packets | https://github.com/quizbowl/audio-packets | 43a7496d0e5490bc48b5df1e66dbcdad630e80a6 | 01e526a82028cd92237c74654fa7e8749ebaff20 | 80c52cf4e3150919c2e47de79eb50eb7b1defd1d | refs/heads/master | 2023-06-22T06:49:11.097022 | 2023-06-13T02:14:42 | 2023-06-13T02:14:42 | 137,306,389 | 3 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.5956322550773621,
"alphanum_fraction": 0.6167664527893066,
"avg_line_length": 20.671754837036133,
"blob_id": "0e82bfa23baad10f48498c2d19d87ff695e15192",
"content_id": "b2e79374e1d653e6180b3012baf63e2680d8ea8d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2885,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 131,
"path": "/play_il_packet.bash",
"repo_name": "quizbowl/audio-packets",
"src_encoding": "UTF-8",
"text": "# usage:\n# $ cd il1\n# $ ../play_il_packet.bash 1\n\n# settings for manual discord / test / in person\nENTER=1\nJUSTSLEEP=\"\"\nANSWERPLEASE=\"\"\nSAYANSWER=\"\"\nPRINTANSWER=1\n\n# settings for auto discord\nENTER=\"\"\nJUSTSLEEP=1\nANSWERPLEASE=\"\"\nSAYANSWER=1\nPRINTANSWER=1\n\n# settings for self playtest\nENTER=1\nJUSTSLEEP=\"\"\nANSWERPLEASE=\"\"\nSAYANSWER=\"\"\nPRINTANSWER=\"\"\n\n#OUTPUTDEVICE=\"\"\n# Discord input device should be Loopback,\n# which should combine Soundflower (2ch)\n# and Built-in Microphone if needed.\n# This should be set to the ID of the Multi-Audio Output,\n# which splits into Soundflower (2ch) and to Default Output.\nOUTPUTDEVICE=41\n\n# Skip questions whose filename contains the string \"Hard\"\nSKIPHARD=\"\"\n\nSAYOD=\"\"\nMPLAYEROD=\"\"\nif [ $OUTPUTDEVICE ]; then\n\tSAYOD='-a'$OUTPUTDEVICE\n\tMPLAYEROD='-ao coreaudio:device_id='$OUTPUTDEVICE\nfi\n\n\nfunction echosay {\n\techo -e \"\\033[1;33m\"$1\"\\033[0m\"\n\tsay $SAYOD $1\n\tsleep $2\n}\nfunction mplay {\n\teval \"mplayer $MPLAYEROD -msglevel all=0:statusline=9 \\\"$1\\\" 2>/dev/null\"\n}\n\ndir=${1%/}\nindex=0\nlet \"startat=$2 - 1\"\nIFS=$'\\n'\nquestions=($(grep -E '^[0-9]+\\.' 
answers/$dir.txt))\n# questions=($(grep -E '^(Question|TOSSUP|[0-9]+\\.)' answers/$dir.txt))\nanswers=($(grep -E '^ANSWER' answers/$dir.txt))\n\n# Fonts\nBOLD=`tput bold``tput setaf 4`\nJUSTBOLD=`tput bold`\nBU=`tput bold``tput smul`\nBU=`tput smul`\nREG=`tput sgr0`\nEU=`tput rmul`\nCONCEAL=`tput setaf 0``tput setab 0`\n\necho -en '\\nCommands:\\n'\necho -e \" • Press ${BOLD}Space${REG} to play/pause a question\"\necho -e \" • Press ${BOLD}Ctrl-C${REG} to stop a question early\"\necho -e \" • Press ${BOLD}Ctrl-C${REG} again to exit\"\necho\n\nfor file in $dir/*.mp3; do\n\t# skip hard\n\tif [[ $SKIPHARD && $file = *\"Hard\"* || $index -lt $startat ]]; then\n\t\techo \"${BOLD}Skipping ${file##*/}${REG}\"\n\telse\n\n\t\techo \"**${questions[index]}**\" | pbcopy\n\t\techosay \"${questions[index]}\" 1\n\t\tanswer=`echo ${answers[index]} | sed \"s/_\\([^_]*\\)_/${BU}\\1${EU}/g\"`\n\t\tif [ $PRINTANSWER ]; then\n\t\t\techo $answer\n\t\telse\n\t\t\techo \"<answer hidden> ${CONCEAL}$answer\"\n\t\t\techo \"${file##*/}${REG}\"\n\t\tfi\n\t\techo\n\t\tmplay \"$file\"\n\t\tFOO=$?\n\t\techo \"${JUSTBOLD}Audio finished. ${REG}\"\n\t\techo\n\n\t\techo ${answers[index]} | sed 's/_\\([^_]*\\)_/__**\\1**__/g' | pbcopy\n\n\t\tif [ $ANSWERPLEASE ]; then\n\t\t\tif [ $FOO -ne 1 ]; then\n\t\t\t\tsleep 4\n\t\t\t\techosay \"Answer please?\" 2\n\t\t\t\techosay \"Time.\" 1\n\t\t\tfi\n\t\tfi\n\n\t\tif [ $ENTER ]; then\n\t\t\tread -p $'Press enter to continue\\n'\n\t\tfi\n\n\t\t# replay?\n\n\t\tif [ -z $PRINTANSWER ]; then\n\t\t\techo $answer\n\n\t\t\t# TODO: new 2020\n\t\t\techo\n\t\t\tgrep -P \"^$(($index+1))\\t\" clip_descriptions/$dir.tsv 2>/dev/null || echo \"(no clip descriptions found)\"\n\t\tfi\n\n\t\tif [ $SAYANSWER ]; then\n\t\t\tsaidanswer=`echo ${answers[index]} | sed \"s/_\\([^_]*\\)_/[[emph 100]]\\1/g\"`\n\t\t\tsay $SAYOD $saidanswer\n\t\t\tsleep 3\n\t\tfi\n\tfi\n\n\techo '————————————————————'\n\tlet index++\ndone\n"
},
{
"alpha_fraction": 0.7757936716079712,
"alphanum_fraction": 0.7876983880996704,
"avg_line_length": 166.6666717529297,
"blob_id": "89d68aca095aff0e5b136b4dbf373e80d9a220d9",
"content_id": "5f4a2b3b503870e874227dc99fb6011676868401",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 504,
"license_type": "no_license",
"max_line_length": 364,
"num_lines": 3,
"path": "/muses/readme.txt",
"repo_name": "quizbowl/audio-packets",
"src_encoding": "UTF-8",
"text": "The .mp3 files labelled 1.mp3 through 35.mp3 contain each question. \"muses-spoilerfree.txt\" contains only questions, whereas \"muses.pdf\" contains both questions and answers. \"survey.pdf\" is a copy of the survey handed out at the event. \"scoresheet.xlsx\" includes the scoresheet used at the event, the survey results, and a list of the pieces used in each question.\r\n\r\nIf you plan on playing the set for yourself, I recommend leaving the audio files in the zip file, as extraction will reveal the answers."
},
{
"alpha_fraction": 0.70920729637146,
"alphanum_fraction": 0.7557886242866516,
"avg_line_length": 49.634483337402344,
"blob_id": "a9490e2f86c970d48551540c22d8a717197a0154",
"content_id": "7b442889a1b67c5bb004a7018a5afa218a52aaa1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 7374,
"license_type": "no_license",
"max_line_length": 407,
"num_lines": 145,
"path": "/README.md",
"repo_name": "quizbowl/audio-packets",
"src_encoding": "UTF-8",
"text": "## Audio packets\n\n### Contents\n\nCurrently rehosted without explicit permission. Links to forum threads and sources (which may not work anymore):\n\n* [Imaginary Landscape No. 1](http://hsquizbowl.org/forums/viewtopic.php?f=21&t=11623&p=219113#p219113) (2011)\n * Source:\n[1](https://app.box.com/s/cjz4h8mtmpexg54hwp47)\n[2](https://app.box.com/s/rkog5lzgv5z08h3rz9edjtslame5sna5)\n[3](https://app.box.com/s/dvim5i3jw47pca15nmh6)\n[4](https://app.box.com/s/qgvoru61whvqn8owjn04)\n[5](https://app.box.com/s/4w8styo3zcegiipadwwn)\n[6](https://app.box.com/s/uxxq16t3nayudhlcv8p1)\n[7](https://app.box.com/s/p0p6fjvhrxxq0xcobate)\n[8](https://app.box.com/s/lus2brcnvbszm6gi5b27)\n* [Imaginary Landscape No. 2](http://hsquizbowl.org/forums/viewtopic.php?f=21&t=16234&p=289863#p289863) (2014)\n * Source:\n[1](https://app.box.com/s/9cjrh6f96um752scc4vk)\n[2](https://app.box.com/s/apg3kusvq0iiqfvcwfj9)\n[3](https://app.box.com/s/l66tytwjmejq7pzi1ukj)\n[4](https://app.box.com/s/2w2ztx8ze9x6jvu5u67f)\n[5](https://app.box.com/s/hcy6za2tiooehv5ymdm2)\n[6](https://app.box.com/s/souowe39meed1nzn2evs)\n[7](https://app.box.com/s/41werd8fdlvbqrgxzyn0)\n[8](https://app.box.com/s/wuhmam1jr4ayu8mkjq1g)\n* [Imaginary Landscape No. 3](http://hsquizbowl.org/forums/viewtopic.php?f=21&t=18117&p=318076#p318076) (2016)\n * Source: [All packets](https://app.box.com/s/2bjv3rmjdabi4a5whdh3g4zwmzlmvxub)\n* [Imaginary Landscape No. 4](http://hsquizbowl.org/forums/viewtopic.php?f=8&t=20394) (2018)\n* [Imaginary Landscape No. 
5](https://hsquizbowl.org/forums/viewtopic.php?t=21583) (2019)\n* [Guerilla Imaginary Landscape 1](http://hsquizbowl.org/forums/viewtopic.php?f=8&t=21409) (2018)\n* [Guerilla Imaginary Landscape 2](http://hsquizbowl.org/forums/viewtopic.php?f=8&t=21409&p=357792#p357792) (2019)\n* [SOUNDTRACK](http://hsquizbowl.org/forums/viewtopic.php?f=19&t=20359) (2018)\n * Source: [All packets](http://trash.quizbowlpackets.com/2176/)\n* [SOUNDTRACK 2](https://hsquizbowl.org/forums/viewtopic.php?t=26431) (2022)\n* [Eternal Sonata](http://hsquizbowl.org/forums/viewtopic.php?f=19&t=18713) (2016)\n* [Jonathan Magin’s Festivus Audio Packet](https://hsquizbowl.org/forums/viewtopic.php?f=311&t=24733) (2020)\n* other sets may be added later\n\nThe audio files have been renamed to avoid spoilers.\n\nThe original answerline documents are included, and have also been converted to plain text.\n\n### Script\n\nA bash script (`play_il_packet.bash`) is included to make playing the packets easier (either by yourself, or in a group).\nIt merely reads each question prompt out loud before playing the corresponding audio file, and goes through a single directory in order.\n\nThis section assumes that you already know how to use a command line interface to change the directory and run a simple program.\nIt was written for my own personal use (on a Mac), so it may not work for you out of the box.\n\nTo play packet 1 from Imaginary Landscape No. 1, for example:\n\n```\ncd il1\n../play_il_packet.bash 1\n```\n\nYou can skip to a certain question (in case you quit by mistake). 
For example, to start playing packet 1 from question 6:\n\n```\n../play_il_packet.bash 1 6\n```\n\nThe script is controlled by the following commands:\n\n* Press <kbd>Space</kbd> to play/pause a question\n* Press <kbd>Ctrl-C</kbd> to stop a question early\n* Press <kbd>Ctrl-C</kbd> again to exit\n\nThe script can be configured by enabling the following flags:\n\n* `ANSWERPLEASE`: waits and says \"Answer please?\" and \"Time\" after the question finishes\n* `ENTER`: must press enter to continue to the next question\n* `SAYANSWER`: says the answer after the question finishes\n* `SKIPHARD`: skips questions whose filename contains the string \"Hard\"\n\nThe script has the following requirements:\n\n* `say`: text-to-speech Mac built-in, for speaking the question prompts and answers\n* `mplayer`: open-source media player, for playing the audio files (can install via [homebrew](https://brew.sh/))\n\nTo determine the audio device ID, you can run a command such as `mplayer -ao coreaudio:device_id=help path/to/some/music.mp3`.\n\n### File structure\n\nThe script expects the folders and packets to be formatted in a certain way. See examples in this repository.\n\nFor each packet, there must be a folder of audio questions (`1/01.mp3`, `1/02.mp3`, etc.), a text file of prompts (`prompts/1.txt`) containing one line for each question (starting with `1. `, `2. `, etc.), and a text file of answers (containing one line for each answer (starting with `ANSWER: `). Thus, a question set `foo` with 2 packets of 3 questions each should have the following structure:\n\n```\nfoo/1/01.mp3\nfoo/1/02.mp3\nfoo/1/03.mp3\nfoo/2/01.mp3\nfoo/2/02.mp3\nfoo/2/03.mp3\nfoo/answers/1.txt\nfoo/answers/2.txt\nfoo/prompts/1.txt\nfoo/prompts/2.txt\n```\n\n---\n\n## How to download questions\n\nThe options are listed with the most recommended one first.\n\n1. Use any git client (either command line or desktop) to clone the whole repository. This will keep all the packets in one place. 
Any updates can easily be downloaded with a simple `git pull`.\n2. Click the green `Clone or download` button above, then `Download ZIP` to download the whole repository.\n3. Using GitHub’s online interface, navigate to an audio file and click either `View Raw` or `Download`. You may need to use a browser like Firefox instead of Google Chrome. This is because GitHub intentionally serves all raw files as `plain/text`, regardless of the actual type. Firefox plays these audio files because it tries to detect the type of any file, instead of trusting what GitHub’s server says.\n\n---\n\n## How to play audio over Discord\n\n### Gist of these instructions\n\nYour computer can have multiple “audio devices,” such as a microphone, a speaker, or a virtual audio device that you might need to create anew.\n\n* Change the _output_ audio device in your _media player_ from “speaker” to the virtual device.\n * If you also want to hear your media player’s audio through your speakers/headphones, you will need to create a “multi-output device” that outputs simultaneously to _both_ “speaker” (or default output) _and_ the virtual device. Then change the output audio device in your media player to this new multi-output device.\n* Change the _input_ audio device in your _Discord voice settings_ from “microphone” to the virtual device.\n * If you also want to talk into your microphone while playing music, you will need to combine “microphone” and the virtual device by doing something more complicated.\n\nSearch `how to play music through mic` on Google for more information on this topic.\n\n### Mac\n\nInstall [Soundflower](https://rogueamoeba.com/freebies/soundflower/) (a Mac extension for interapplication audio routing) and use `Soundflower (2ch)` as the virtual device. 
Change the media player output to `Soundflower (2ch)` and the Discord input to `Soundflower (2ch)`.\n\nExample settings for the Multi-Output Device (in the native Mac utility Audio MIDI Setup):\n\n<img src=\"audio-midi-settings.png\" width=\"743\" />\n\n### Windows\n\nInstall something like [Cable](https://www.vb-audio.com/Cable/) to use virtual devices.\n\n### Discord settings\n\nYour Discord voice settings should look similar to the following image. Disable echo, silence, and noise detection (as these features are intended for conversational speech). Change the bitrate of the Discord voice channel to 96 kbps.\n\n<img src=\"discord-settings.png\" width=\"614\" />\n"
},
{
"alpha_fraction": 0.7167810201644897,
"alphanum_fraction": 0.7479725480079651,
"avg_line_length": 42.58333206176758,
"blob_id": "753b5117ed4c05c81c692136faf0940ed7d53381",
"content_id": "9e9984a2360c6ddd5ce53de58826b08bcdcc2c3a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 1603,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 36,
"path": "/muse/readme.txt",
"repo_name": "quizbowl/audio-packets",
"src_encoding": "UTF-8",
"text": "If you wish to play this set for yourself, follow these steps.\r\n\r\n1. Make sure Windows Explorer (or equivalent) is not set to show \"Details\" - this will obscure the answer, which is contained in the \"Title\" of each MP3 file.\r\n2. The questions are below, just play each audio file.\r\n3. The answers are in the Word file.\r\n4. The music sources are in the Excel file.\r\n\r\n----\r\n\r\n1. Name the composer of these works.\r\n2. Name the composer of these works.\r\n3. Name this work.\r\n4. Name the composer of these works.\r\n5. Name this work.\r\n6. Name BOTH the type of composition exemplified by AND the composer of these works.\r\n7. Name the composer of these works.\r\n8. Name the composer of these works.\r\n9. Name the type of composition exemplified by these works.\r\n10. Name the country of origin of the composers of these works.\r\n11. Name the composer of these works.\r\n12. Name the composer of these works.\r\n13. Name this work.\r\n14. Name the composer of these works.\r\n15. Name this work.\r\n16. Name the composer of these works.\r\n17. Name the type of composition exemplified by these works.\r\n18. Name the country of origin of the composers of these works.\r\n19. Name the composer of these works.\r\n20. Name this work.\r\n21. Name the composer of these works.\r\n22. Name the country of origin of the composers of these works.\r\n23. Name this work.\r\n24. Name BOTH the type of composition exemplified by AND the composer of these works.\r\n25. Name BOTH the type of composition exemplified by AND the composer of these works.\r\n26. Name the composer of these works.\r\n27. Name the country of origin of the composers of these works."
},
{
"alpha_fraction": 0.653715968132019,
"alphanum_fraction": 0.6680638194084167,
"avg_line_length": 27.85116195678711,
"blob_id": "95676932a4310edb10b1b27069eb93d8f95cdd98",
"content_id": "f3402843fcfb0d7023ab4feb81c3ae0d348158b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6203,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 215,
"path": "/gil2/stitch/download.py",
"repo_name": "quizbowl/audio-packets",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom __future__ import unicode_literals\nfrom collections import OrderedDict\nimport os\nimport unicodecsv as csv, sys\n\n# Load spreadsheet API\nimport gspread\nfrom oauth2client.service_account import ServiceAccountCredentials\n\nscope = ['https://www.googleapis.com/auth/spreadsheets']\ncredentials = ServiceAccountCredentials.from_json_keyfile_name(path_to_creds + 'service_account_creds.json', scope)\ngc = gspread.authorize(credentials)\n\nsheet_ids = [\n]\n\n# Set up youtube-dl\n\nimport youtube_dl\n\nclass MyLogger(object):\n\tdef debug(self, msg):\n\t\tpass\n\n\tdef warning(self, msg):\n\t\tpass\n\n\tdef error(self, msg):\n\t\tprint(msg)\n\nydl_opts = {\n\t'format': 'bestaudio/best',\n\t'outtmpl': 'dl/%(id)s.%(ext)s',\n\t'postprocessors': [{\n\t\t'key': 'FFmpegExtractAudio',\n\t\t'preferredcodec': 'mp3',\n\t\t'preferredquality': '192',\n\t}],\n\t# 'logger': MyLogger(),\n}\nydl = youtube_dl.YoutubeDL(ydl_opts)\n\n\n# import pydub\nimport subprocess\nsnippet_tmpl = 'snippets/%(youtube_id)s_%(start)s_%(length)s.mp3'\nstitch_tmpl = '%(dir)s/%(question_number)02d_%(difficulty)s.mp3'\n\n\ndef update_scoresheet(spreadsheet_id):\n\tworksheet = get_worksheet(spreadsheet_id)\n\tmeta_worksheet = get_meta_worksheet(spreadsheet_id)\n\n\tyoutube_ids = list(set(worksheet.col_values(3)[1:]))\n\tif u'' in youtube_ids:\n\t\tyoutube_ids.remove(u'')\n\tfor youtube_id in youtube_ids:\n\t\ttry:\n\t\t\tpath = ydl_opts['outtmpl'] % {'id': youtube_id, 'ext': 'mp3'}\n\t\t\tif os.path.exists(path):\n\t\t\t\tprint 'Already downloaded %s' % path\n\t\t\telse:\n\t\t\t\tydl.download([youtube_id])\n\t\texcept youtube_dl.utils.DownloadError as e:\n\t\t\tprint e\n\n\tvalues = worksheet.get_all_values()[1:]\n\tfor row in values:\n\t\tif row[2] != '':\n\t\t\tcreate_snippet(row)\n\n\tmeta_values = [question for question in meta_worksheet.get_all_values()[1:] if question[0] != '' and question[1] != '']\n\tcreate_answer_doc(meta_values, values)\n\n\tfor question in meta_values:\n\t\tquestion_clips = [clip for clip in values if clip[0] == question[0]]\n\t\t# print '\\t'.join(question[:4])\n\t\t# print '\\t'.join(['.'.join(f[:2]) for f in question_clips])\n\t\tquestion[1] = meta_values[0][1] # use author of first question for all questions\n\n\t\tcreate_stitch(question, question_clips)\n\t\tprint\n\n\tprint 'Done'\n\ndef create_answer_doc(meta_values, values):\n\tnames = meta_values[0][1]\n\tanswer_doc_path = u'../answers/packets/%(names)s.txt' % {'names': names.replace(' ', '_')}\n\tanswer_doc = u'Guerilla Imaginary Landscape\\nQuestions submitted by %(names)s\\n\\n' % {'names': names}\n\tfor row in meta_values:\n\t\tanswer_doc += u'%(num)s. %(prompt)s\\n' % {'num': row[0], 'prompt': row[3]}\n\t\tanswer_doc += u'ANSWER: %(answer)s\\n\\n' % {'answer': row[4]}\n\n\tprint 'Writing %s' % answer_doc_path\n\tf = open(answer_doc_path, 'w')\n\tf.write(answer_doc.encode('utf-8'))\n\tf.close()\n\n\tclip_doc_path = u'../clip_descriptions/packets/%(names)s.txt' % {'names': names.replace(' ', '_')}\n\tclip_header = ['Question','Clip','YouTube video ID','Start (sec)','Length (sec)','Link','Description']\n\tprint 'Writing %s' % clip_doc_path\n\tljustf = lambda x: x[:5]+[x[5].ljust(66) if x[5].startswith('http') else x[5]]+x[6:7]\n\tvalues_clips = [ljustf(row) for row in values]\n\twrite_tsv(values_clips, clip_header, clip_doc_path)\n\ndef write_tsv(rows, header, path):\n    with open(path, 'w') as f:\n        writer = csv.writer(f, dialect=csv.excel_tab, lineterminator='\\n', quoting=csv.QUOTE_NONE, quotechar=str('|'))\n        writer.writerow(header)\n        writer.writerows(rows)\n\ndef create_snippet(row):\n\ttry:\n\t\tyoutube_id = row[2]\n\t\tstart = float(row[3])\n\t\tlength = float(row[4])\n\n\t\tassert len(youtube_id) == 11 or len(youtube_id) == 17\n\t\tassert start >= 0\n\t\tassert length >= 0 and length <= 30\n\n\t\tfull_path = ydl_opts['outtmpl'] % {'id': youtube_id, 'ext': 'mp3'}\n\t\tsnippet_path = snippet_tmpl % {'youtube_id': youtube_id, 'start': row[3], 'length': row[4]}\n\n\t\tif os.path.exists(snippet_path):\n\t\t\tprint 'Already created %s' % snippet_path\n\t\t\treturn\n\n\t\tprint 'Creating %s' % snippet_path\n\t\tsubprocess.call(['ffmpeg', '-v', 'warning', '-ss', '%s' % start, '-t', '%s' % length, '-i', full_path, snippet_path])\n\n\t\t# full_segment = pydub.AudioSegment.from_mp3(full_path)\n\t\t# snippet_segment = full_segment[start * 1000 : (start + length) * 1000]\n\n\t\t# snippet_segment.export(snippet_path)\n\n\texcept AssertionError as e:\n\t\traise\n\texcept Exception as e:\n\t\tprint e\n\ndef create_stitch(meta_row, rows):\n\ttry:\n\t\tsnippet_paths = []\n\n\t\tif meta_row[5] == '':\n\t\t\tprint 'not checking assert'\n\t\telse:\n\t\t\tassert len(rows) == int(meta_row[5])\n\n\t\tfor row in rows:\n\t\t\tyoutube_id = row[2]\n\t\t\tstart = float(row[3])\n\t\t\tlength = float(row[4])\n\n\t\t\tassert len(youtube_id) == 11 or len(youtube_id) == 17\n\t\t\tassert start >= 0\n\t\t\tassert length >= 0\n\n\t\t\tsnippet_path = snippet_tmpl % {'youtube_id': youtube_id, 'start': row[3], 'length': row[4]}\n\t\t\tif not os.path.exists(snippet_path):\n\t\t\t\traise IOError\n\n\t\t\tsnippet_paths.append(snippet_path)\n\n\t\tname = meta_row[1].replace(' ', '_')\n\t\tquestion_number = meta_row[0]\n\t\tdifficulty = meta_row[2]\n\n\t\tstitch_dir = os.path.join('../packets', name)\n\t\tif not os.path.exists(stitch_dir):\n\t\t\tos.makedirs(stitch_dir)\n\t\tstitch_path = stitch_tmpl % {'dir': stitch_dir, 'question_number': int(question_number), 'difficulty': difficulty}\n\n\t\tconcat_protocol = 'concat:%s' % '|snippets/silence.mp3|'.join(snippet_paths)\n\n\t\tif os.path.exists(stitch_path):\n\t\t\tprint 'Already stitched %s' % stitch_path\n\t\t\treturn\n\n\t\tprint 'Creating %s' % stitch_path\n\t\targs = ['ffmpeg', '-v', 'fatal', '-i', concat_protocol, '-q:a', '2', stitch_path]\n\t\tsubprocess.call(args)\n\n\n\t\t# full_segment = pydub.AudioSegment.from_mp3(full_path)\n\t\t# snippet_segment = full_segment[start * 1000 : (start + length) * 1000]\n\n\t\t# snippet_segment.export(snippet_path)\n\n\texcept AssertionError as e:\n\t\traise\n\texcept Exception as e:\n\t\tprint e\n\t\tprint meta_row\n\n\ndef get_worksheet(spreadsheet_id):\n\tprint 'Opening ' + spreadsheet_id\n\tspreadsheet = gc.open_by_key(spreadsheet_id)\n\tworksheet = spreadsheet.worksheet('Clips')\n\treturn worksheet\n\ndef get_meta_worksheet(spreadsheet_id):\n\tspreadsheet = gc.open_by_key(spreadsheet_id)\n\tworksheet = spreadsheet.worksheet('Meta')\n\treturn worksheet\n\t\nfor sheet_id in sheet_ids:\n\ttry:\n\t\tupdate_scoresheet(sheet_id)\n\texcept gspread.exceptions.APIError as e:\n\t\tprint e\n"
}
] | 5 |
SRUJANA-PENUGONDA13/samplerepo | https://github.com/SRUJANA-PENUGONDA13/samplerepo | ea10d8c39e32448ed286700f30742ded3755f7cf | 4f0069a4f788ecced2fe5d5354c259a3e396ce51 | 329b9396fdce2928ad79643fab8a730f1b3ae7a0 | refs/heads/master | 2023-05-01T03:52:35.020350 | 2021-05-23T13:41:34 | 2021-05-23T13:41:34 | 369,760,796 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5942668318748474,
"alphanum_fraction": 0.5942668318748474,
"avg_line_length": 33.846153259277344,
"blob_id": "c28d7772cf176e668caaf573b3fe9456f09e0ec5",
"content_id": "6033b05880062aee960f6b3d633fc987b38ce85a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 907,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 26,
"path": "/app.py",
"repo_name": "SRUJANA-PENUGONDA13/samplerepo",
"src_encoding": "UTF-8",
"text": "from flask import Flask\nimport os\nfrom mail.mail import *\n \napp = Flask(__name__)\n \[email protected](\"/\")\ndef home_page():\n msg = \"Please go through the below url for sending mail to srujana penugonda with name, mail and message details\"\n url = \"https://sample-srujana-deploy.herokuapp.com/mail/{{name}}/{{mail}}/{{message}}\"\n return {\"Message\" : msg , \"URL\" : url , \"Status\" : \"Success\"}\n\[email protected](\"/mail/<name>/<email>/<message>\")\ndef mail(name,email,message):\n try:\n THIS_FOLDER = os.path.dirname(os.path.abspath(__file__))\n mail_path = os.path.join(THIS_FOLDER, 'mail')\n os.chdir(mail_path)\n send_status(name,email,message)\n return {\"status\" : \"Success\"}\n except Exception as e:\n error = str(e)\n return {\"status\" : \"Fail\",\n \"error\" : error }\nif __name__ == '__main__':\n app.run(debug=True, use_reloader=True)\n\n"
}
] | 1 |
Sofia-git/GUI---QT | https://github.com/Sofia-git/GUI---QT | 3cef8499fc866aa58c200f0334a34942727578be | ede3bb82b8e371f1156bf70b9220a8d995717112 | 2bd385cc7f6734be98533f29a49ba663dbfb25d6 | refs/heads/main | 2023-01-11T22:03:10.263774 | 2020-11-17T01:12:14 | 2020-11-17T01:12:14 | 313,441,472 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6440715789794922,
"alphanum_fraction": 0.673154354095459,
"avg_line_length": 46.064517974853516,
"blob_id": "c01633ad300f82e2d39c1601dd5845918f2fa6e6",
"content_id": "8bbd84f4b1f41baccebf16d4a102398a025f7c5c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4470,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 93,
"path": "/QtDemo.py",
"repo_name": "Sofia-git/GUI---QT",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\n# Form implementation generated from reading ui file 'test.ui'\r\n#\r\n# Created by: PyQt5 UI code generator 5.15.1\r\n#\r\n# WARNING: Any manual changes made to this file will be lost when pyuic5 is\r\n# run again. Do not edit this file unless you know what you are doing.\r\n\r\n\r\nfrom PyQt5 import QtCore, QtGui, QtWidgets\r\n\r\n\r\nclass Ui_MainWindow(object):\r\n\r\n    # def __init__(self, MainWindow):\r\n    #     super(Ui_MainWindow, self).__init__()\r\n    #     self.MainWindow = MainWindow\r\n    #     self.setupUi(MainWindow)\r\n    #     # MainWindow.show()\r\n\r\n\r\n    def setupUi(self,MainWindow):\r\n        MainWindow.setObjectName(\"MainWindow\")\r\n        MainWindow.resize(868, 558)\r\n        self.centralwidget = QtWidgets.QWidget(MainWindow)\r\n        self.centralwidget.setObjectName(\"centralwidget\")\r\n        self.Button1 = QtWidgets.QPushButton(self.centralwidget)\r\n        self.Button1.setGeometry(QtCore.QRect(320, 290, 201, 71))\r\n        self.Button1.setObjectName(\"Button1\")\r\n        self.progressBar = QtWidgets.QProgressBar(self.centralwidget)\r\n        self.progressBar.setGeometry(QtCore.QRect(140, 400, 551, 31))\r\n        self.progressBar.setProperty(\"value\", 24)\r\n        self.progressBar.setObjectName(\"progressBar\")\r\n        self.pushButton_1 = QtWidgets.QPushButton(self.centralwidget)\r\n        self.pushButton_1.setGeometry(QtCore.QRect(630, 110, 161, 51))\r\n        self.pushButton_1.setObjectName(\"pushButton_1\")\r\n        self.label = QtWidgets.QLabel(self.centralwidget)\r\n        self.label.setGeometry(QtCore.QRect(40, 120, 181, 31))\r\n        self.label.setObjectName(\"label\")\r\n        self.label_2 = QtWidgets.QLabel(self.centralwidget)\r\n        self.label_2.setGeometry(QtCore.QRect(40, 200, 221, 31))\r\n        self.label_2.setObjectName(\"label_2\")\r\n        self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)\r\n        self.pushButton_2.setGeometry(QtCore.QRect(630, 190, 161, 51))\r\n        self.pushButton_2.setObjectName(\"pushButton_2\")\r\n        self.label_3 = QtWidgets.QLabel(self.centralwidget)\r\n        self.label_3.setGeometry(QtCore.QRect(280, 30, 301, 31))\r\n        self.label_3.setObjectName(\"label_3\")\r\n        MainWindow.setCentralWidget(self.centralwidget)\r\n        self.menubar = QtWidgets.QMenuBar(MainWindow)\r\n        self.menubar.setGeometry(QtCore.QRect(0, 0, 868, 21))\r\n        self.menubar.setObjectName(\"menubar\")\r\n        self.menuFile = QtWidgets.QMenu(self.menubar)\r\n        self.menuFile.setObjectName(\"menuFile\")\r\n        MainWindow.setMenuBar(self.menubar)\r\n        self.statusbar = QtWidgets.QStatusBar(MainWindow)\r\n        self.statusbar.setObjectName(\"statusbar\")\r\n        MainWindow.setStatusBar(self.statusbar)\r\n        self.actionclasses = QtWidgets.QAction(MainWindow)\r\n        self.actionclasses.setObjectName(\"actionclasses\")\r\n        self.menubar.addAction(self.menuFile.menuAction())\r\n\r\n        self.retranslateUi(MainWindow)\r\n        QtCore.QMetaObject.connectSlotsByName(MainWindow)\r\n        # MainWindow.show()\r\n\r\n    def retranslateUi(self, MainWindow):\r\n        _translate = QtCore.QCoreApplication.translate\r\n        MainWindow.setWindowTitle(_translate(\"MainWindow\", \"MainWindow\"))\r\n        self.Button1.setStatusTip(_translate(\"MainWindow\", \"Training the model with new data!\"))\r\n        self.Button1.setText(_translate(\"MainWindow\", \"Train\"))\r\n        self.progressBar.setStatusTip(_translate(\"MainWindow\", \"Progress Bar - wait for 100%\"))\r\n        self.pushButton_1.setStatusTip(_translate(\"MainWindow\", \"Upload class 0 here!\"))\r\n        self.pushButton_1.setText(_translate(\"MainWindow\", \"Upload\"))\r\n        self.label.setText(_translate(\"MainWindow\", \"Upload your faulty images here: \"))\r\n        self.label_2.setText(_translate(\"MainWindow\", \"Upload your clean/non-faulty images here: \"))\r\n        self.pushButton_2.setStatusTip(_translate(\"MainWindow\", \"Upload class 1 here!\"))\r\n        self.pushButton_2.setText(_translate(\"MainWindow\", \"Upload\"))\r\n        self.label_3.setText(_translate(\"MainWindow\", \"Categorial Detection - Faulty or non-faulty metal components\"))\r\n        self.menuFile.setTitle(_translate(\"MainWindow\", \"File\"))\r\n        self.actionclasses.setText(_translate(\"MainWindow\", \"classes\"))\r\n\r\n\r\nif __name__ == '__main__':\r\n    import sys\r\n    app = QtWidgets.QApplication(sys.argv)\r\n    app.setStyle('Fusion')\r\n    MainWindow = QtWidgets.QMainWindow()\r\n    ui = Ui_MainWindow()\r\n    ui.setupUi(MainWindow)\r\n    MainWindow.show()\r\n    sys.exit(app.exec_())\r\n"
},
{
"alpha_fraction": 0.5993564128875732,
"alphanum_fraction": 0.6057924628257751,
"avg_line_length": 24.446807861328125,
"blob_id": "8374953ed92ccb364fd7eaf654578649ddd636cf",
"content_id": "6dd5a821c0a8f1ceada8e6b9f21745085d4ae657",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1243,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 47,
"path": "/thread-Qt.py",
"repo_name": "Sofia-git/GUI---QT",
"src_encoding": "UTF-8",
"text": "\"\"\"\r\nMulti-threading for progress bar. Progress bar will later shows training process.\r\n\"\"\"\r\nfrom PyQt5 import QtCore, QtGui, QtWidgets\r\nimport QtDemo\r\nimport sys, time\r\n\r\n\r\n# import model_name\r\n\r\nclass MainUi(QtWidgets.QMainWindow, QtDemo.Ui_MainWindow):\r\n\r\n def __init__(self, parent=None):\r\n super(MainUi, self).__init__(parent)\r\n self.setupUi(self)\r\n self.threadcalss = MultiThread()\r\n self.threadcalss.start()\r\n self.threadcalss.updateProgressBar.connect(self.setProgressBar)\r\n\r\n def setProgressBar(self, val):\r\n self.progressBar.setValue(val)\r\n\r\n\r\nclass MultiThread(QtCore.QThread):\r\n updateProgressBar = QtCore.pyqtSignal(int)\r\n\r\n def __init__(self, parent=None):\r\n super(MultiThread, self).__init__(parent)\r\n\r\n def run(self):\r\n count = 0\r\n while count < 100:\r\n print(count)\r\n count += 1\r\n time.sleep(0.2)\r\n # self.emit(QtCore.pyqtSignal('value'), count)\r\n\r\n self.updateProgressBar.emit(count)\r\n # self.signals.result.emit(val)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app = QtWidgets.QApplication(sys.argv)\r\n app.setStyle('Fusion')\r\n window = MainUi()\r\n window.show()\r\n app.exec_()\r\n"
},
{
"alpha_fraction": 0.7477477192878723,
"alphanum_fraction": 0.7567567825317383,
"avg_line_length": 54.5,
"blob_id": "66a8d945dbfba5b13be931898d97307a3dc3631d",
"content_id": "4ebe86f5fa8a2df53b9b0666b529614b9461d392",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 222,
"license_type": "no_license",
"max_line_length": 135,
"num_lines": 4,
"path": "/README.md",
"repo_name": "Sofia-git/GUI---QT",
"src_encoding": "UTF-8",
"text": "# GUI---QT\n\nGUI for training purposes - Uploading e.g. class 0, class 1 and then let the model train. Training process is shown using progress bar.\nThis program is yet to be completed and it's for demp purposes as of now.\n"
}
] | 3 |
willstauffernorris/cs-module-project-algorithms | https://github.com/willstauffernorris/cs-module-project-algorithms | 39e3a1924308251da9b1a8b55f592156eef49f66 | 03a902eec60a893f51e8c846aa5e0e06698e6bc8 | 1166fbd7a53c38c6a4e2290885084360380328d1 | refs/heads/master | 2022-11-25T02:56:28.668121 | 2020-07-23T22:50:10 | 2020-07-23T22:50:10 | 281,712,837 | 0 | 0 | null | 2020-07-22T15:21:11 | 2020-07-17T18:21:41 | 2020-07-17T18:21:38 | null | [
{
"alpha_fraction": 0.49653738737106323,
"alphanum_fraction": 0.5560941696166992,
"avg_line_length": 24.785715103149414,
"blob_id": "f725b2ad792836db424ae37f1463189be504b0fe",
"content_id": "84138da2eacb67bb45a4ebb69a21b7f86836325c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1444,
"license_type": "no_license",
"max_line_length": 164,
"num_lines": 56,
"path": "/product_of_all_other_numbers/product_of_all_other_numbers.py",
"repo_name": "willstauffernorris/cs-module-project-algorithms",
"src_encoding": "UTF-8",
"text": "'''\nInput: a List of integers\nReturns: a List of integers\n\n\n\n```\n[1, 7, 3, 4]\n```\nyour function should return \n```\n[84, 12, 28, 21]\n``` \nby calculating \n```\n[7*3*4, 1*3*4, 1*7*4, 1*7*3]\n```\n\n\n\n'''\ndef product_of_all_other_numbers(arr):\n # Your code here\n\n final_array = []\n # for each item in an array\n for item in arr:\n print(f'ITEM {item}')\n ## Need to make a copy of this original array, otherwise it updates the original\n to_multiply = arr.copy()\n #print(f'OG TO MULTIPLY {to_multiply}')\n # # take in each other item BUT pop out the current item\n to_multiply.remove(item)\n print(f'NEW TO MULTIPLY {to_multiply}')\n\n #create a final value to be appended to a list for each item\n multiplied = 1\n\n # multiply all the items together\n for item in to_multiply:\n multiplied *= item\n print(f'MULTIPLIED: {multiplied}')\n \n final_array.append(multiplied)\n print(f'FINAL ARRAY {final_array}') \n return final_array\n #pass\n\n\nif __name__ == '__main__':\n # Use the main function to test your implementation\n #arr = [1, 2, 3, 4]\n # arr = [1, 2, 3, 4, 5]\n # arr = [2, 6, 9, 8, 2, 2, 9, 10, 7, 4, 7, 1, 9, 5, 9, 1, 8, 1, 8, 6, 2, 6, 4, 8, 9, 5, 4, 9, 10, 3, 9, 1, 9, 2, 6, 8, 5, 5, 4, 7, 7, 5, 8, 1, 6, 5, 1, 7, 7, 8]\n\n print(f\"Output of product_of_all_other_numbers: {product_of_all_other_numbers(arr)}\")\n"
},
{
"alpha_fraction": 0.44088175892829895,
"alphanum_fraction": 0.4569138288497925,
"avg_line_length": 19,
"blob_id": "c4a8eb6575b8fa514d227a48fa672a5ab64faaeb",
"content_id": "cf21405f0b39f301702eb244c1308c3ebe372769",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 499,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 25,
"path": "/fib.py",
"repo_name": "willstauffernorris/cs-module-project-algorithms",
"src_encoding": "UTF-8",
"text": "def fib(n, cache={}):\n\n if n<0:\n raise IndexError(\n 'index was negative'\n 'no such thing as a neg index in a series'\n )\n elif n in [0,1]:\n #base cases\n return n\n #cache\n elif n in cache:\n return cache[n]\n\n else:\n if cache is None:\n cache = {}\n \n cache[n] = fib(n-1, cache) + fib(n-2, cache)\n \n print(\"computing fib (%i)\"%n)\n print(cache[n])\n return cache[n]\n\nfib(100)"
},
{
"alpha_fraction": 0.5299065709114075,
"alphanum_fraction": 0.554205596446991,
"avg_line_length": 18.125,
"blob_id": "7860c18e9b9248ab0f499d36c7245104b6aefc32",
"content_id": "1fafd00ccf12eacee59b7e4e8196b24520a9b2bf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1070,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 56,
"path": "/single_number/single_number.py",
"repo_name": "willstauffernorris/cs-module-project-algorithms",
"src_encoding": "UTF-8",
"text": "'''\nInput: a List of integers where every int except one shows up twice\nReturns: an integer\n\n```\nSample input: [2, 2, 1]\nExpected output: 1\n```\n\n```\nSample iput: [4, 1, 2, 1, 2]\nExpected output: 4\n```\n\n'''\ndef single_number(arr):\n # Your code here\n\n ### Count the number of times that a given value has appeared\n ## use a dictionary\n\n '''\n integer_count = {}\n\n for item in arr: #O(n)\n if item in integer_count: #O(1)\n integer_count[item] += 1\n else: \n integer_count[item] = 1\n\n # at the end of the list, return only the value that has one instance\n for item in arr:\n if integer_count[item] == 1:\n print(item)\n return item\n\n #print(integer_count)\n\n '''\n\n #Nice, very short solutioon\n \n for item in arr:\n if arr.count(item) == 1:\n return item\n \n \n\n #pass\n\n\nif __name__ == '__main__':\n # Use the main function to test your implementation\n arr = [1, 1, 4, 4, 5, 5, 3, 3, 9, 0, 0]\n\n print(f\"The odd-number-out is {single_number(arr)}\")"
},
{
"alpha_fraction": 0.5092796683311462,
"alphanum_fraction": 0.5498584508895874,
"avg_line_length": 16.46703338623047,
"blob_id": "c6d1772d708ba69ba49aaa1cb923f7cba7cabb79",
"content_id": "71b3ed0ed63b55b7f5332b7b3405cd9cfd51eaa8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3179,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 182,
"path": "/eating_cookies/eating_cookies.py",
"repo_name": "willstauffernorris/cs-module-project-algorithms",
"src_encoding": "UTF-8",
"text": "'''\nInput: an integer\nReturns: an integer\n\nCookie Monster can eat either 1, 2, or 3 cookies at a time.\n'''\n\n\ndef eating_cookies(n, cache=None):\n    print(n)\n\n    ## add memoization\n    \n\n    # base cases\n    if n == 0:\n        return 1\n    if n < 0:\n        return 0\n    # check if works has already been done by looking in the cache\n\n    # is not None checks if the cache has been created\n    # cache > 0 checkes if there's anything in it\n    elif cache is not None and cache[n] > 0:\n        # return the previously computed answer and don't recurse\n        return cache[n]\n\n    else:\n        #create cache for the first time\n        if cache is None:\n            #initialize an empty list for a cache\n            cache = [0 for i in range(n+1)]\n        #store the value of the recursive call expressions in our cache\n        cache[n] = eating_cookies(n-1, cache) + eating_cookies(n-2, cache) + eating_cookies(n-3, cache)\n        #recursion\n        return cache[n]\n    \n\n\n\n'''\ndef eating_cookies(n):\n    # Your code here\n\n    ## use recursion\n\n    ## base case\n    #if cookies == 1 or 0\n\n\n    ## add memoization\n    d = {}\n    \n\n    if n == 0:\n        ways_to_eat_cookies = 1\n        return ways_to_eat_cookies\n\n    elif n == 1:\n        ways_to_eat_cookies = 1\n        return ways_to_eat_cookies\n\n\n    elif n ==2:\n        ways_to_eat_cookies = 2\n        return ways_to_eat_cookies\n\n    elif n == 3:\n        ways_to_eat_cookies = 4\n        return ways_to_eat_cookies\n\n    else:\n        ## if you ate 1 cookie\n        eat_one = eating_cookies(n-1)\n        ## eat 2 cookies\n        eat_two = eating_cookies(n-2)\n        # eat 3 cookies\n        eat_three = eating_cookies(n-3)\n\n\n        ## almost there, but need to figure out the logic to look up if d[n] exists, and if so don't go through the recursion\n        d[n] = eat_one+eat_two+eat_three\n        print(d)\n\n        return d[n]\n'''\n\n\n\n\n\n\n#ways_to_eat_cookies += 1\n\n    #return ways_to_eat_cookies\n\n    ##Try this: number of cookies / ways to eat\n    # print((n//1)+(n//2)+(n//3)) NO\n\n\n'''\n3 \n1 -> 2 -> 2\n 1 ->1 \n2 _> 1\n3\n\n\n\nn=4\n1\n 3\n \n 2\n 1\n2\n 2\n3\n 1\n\n'''\n\n\n\n\n    #what if it's 2 cookies?\n    #eat 1 cookies 2x\n    # eat 2 cookies 1x\n\n\n\n    #what if it's 3 cookies?\n    #eat 1 cookies 3x\n    # eat 1 cookies 1x and 2 cookies 1x\n    # eat 2 cookies 1x and 1 cookies 1x\n    # eat 3 cookies 1x\n    # elif n ==3:\n    #     #1 + n-1 + 1\n\n    # 3 2 1\n\n    # 1\n    # 1 1\n    # 1\n\n    # n -1 n-1 until 1\n\n    #what if it's 4 cookies?\n    #eat 1 cookies 4x\n    # eat 1 cookies 2x and 2 cookies 1x\n    # eat 2 cookies 1x and 2 cookies 1x\n    # eat 2 cookies 2x and 2 cookies 2 x\n    # eat 3 cookies 1 x and 1 cookie 1x\n    # eat 1 cookies 1 x and 3 cookie 1x\n    # eat 4 cookies 1x\n\n\n\n    # 5 cookies\n'''\n    1 cookie 5x\n    1 c 3 x 2 c 1x\n    2c 1x 1c 3x\n    3c 1x 2c 1x\n\n    5 c 1x\n\n\n'''\n\n    #return ways_to_eat_cookies\n\n\n    #recursive function\n\n\n\nif __name__ == \"__main__\":\n    # Use the main function here to test out your implementation\n    num_cookies = 500\n\n    print(f\"There are {eating_cookies(num_cookies)} ways for Cookie Monster to each {num_cookies} cookies\")\n"
},
{
"alpha_fraction": 0.5780051350593567,
"alphanum_fraction": 0.5873827934265137,
"avg_line_length": 26.928571701049805,
"blob_id": "a9b8d206d4989f8faad9dda032c13bd5867ba97e",
"content_id": "91d6f70080c0630e71049db4e9f3294171e1e2f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1173,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 42,
"path": "/sliding_window_max/sliding_window_max.py",
"repo_name": "willstauffernorris/cs-module-project-algorithms",
"src_encoding": "UTF-8",
"text": "'''\nInput: a List of integers as well as an integer `k` representing the size of the sliding window\nReturns: a List of integers\n'''\ndef sliding_window_max(nums, k):\n\n # Your code here\n #print(f'LIST: {nums}')\n #print(f'SIZE OF WINDOW: {k}')\n\n #incrementing variable\n i = 0\n\n max_list = []\n # simplest case: \n #print(len(nums))\n while k+i <= len(nums):\n # grab the first k numbers from nums\n window_list = nums[i:k+i]\n #print(window_list)\n # call the max of the list and return the one max value\n max_value = max(window_list)\n #print(f'MAX: {max_value}')\n\n # append that max value to a new list\n \n max_list.append(max_value)\n #print(f'MAXLIST: {max_list}')\n\n # move the window over one item until the right edge of the window hits the max length of nums\n ## This is a while loop\n #increment i\n i += 1\n return max_list\n\n\nif __name__ == '__main__':\n # Use the main function here to test out your implementation \n arr = [1, 3, -1, -3, 5, 3, 6, 7]\n k = 3\n\n print(f\"Output of sliding_window_max function is: {sliding_window_max(arr, k)}\")\n"
},
{
"alpha_fraction": 0.5571605563163757,
"alphanum_fraction": 0.572843611240387,
"avg_line_length": 21.44444465637207,
"blob_id": "82d5738bef4aaebbbc4d6e7c8ce1c7598ec892e3",
"content_id": "0d3f2b18ff44e4940ea0bf374830a2a6d18c6a9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2431,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 108,
"path": "/moving_zeroes/moving_zeroes.py",
"repo_name": "willstauffernorris/cs-module-project-algorithms",
"src_encoding": "UTF-8",
"text": "\"\"\"\n'''\nInput: a List of integers\nReturns: a List of integers\n\n\n\n```\nSample input: [0, 3, 1, 0, -2]\nExpected output: 3\nExpected output array state: [3, 1, -2, 0, 0]\n```\n\n```\nSample input: [4, 2, 1, 5]\nExpected output: 4\nExpected output array state: [4, 2, 1, 5]\n```\n\n\n\n'''\n'''\ndef moving_zeroes(arr):\n    # Your code here\n\n    ## go through each item in the list \n    for item in arr:\n        ## if that item is not a zero , move it to the left size of the array\n        if item != 0:\n            arr.remove(item)\n            arr.insert(0, item)\n    \n    print(arr)\n    return arr\n    \n\n    #pass\n'''\n\n\n\n\n'''\nInput: a List of integers\nReturns: a List of integers\n\n\nwhat if we had a pointer that started at the beginning\nand a pointer that started at the end\nand they moved towards each other in the middle?\n\nif the left pointer \"sees\" a zero and the right pointer \"sees\" a non-zero\nswap\n\nif the left sees a non-zero increment\nif the right sees a zero increment\n'''\n\"\"\"\n\n\ndef moving_zeroes(arr):\n\n    \n    # initialize a left and right pointer\n    left_pointer = 0\n    right_pointer = (len(arr)-1)\n    print(f'OG array {arr}')\n    #print(f'ARR at left {arr[left_pointer]}')\n    #print(f'ARR at right {arr[right_pointer]}')\n\n\n    # left is 0\n    # right is last index in arr\n    # while left <= right:\n    while left_pointer <= right_pointer:\n        # if left points at a zero and right points at non-zero\n        if arr[left_pointer] == 0 and arr[right_pointer] != 0:\n            # swap left and right items in original arr\n            arr[left_pointer], arr[right_pointer] = arr[right_pointer], arr[left_pointer]\n            #print(f'!!!!!ARR at left {arr[left_pointer]}')\n            #print(f'!!!!!!ARR at right {arr[right_pointer]}')\n            \n            # increment left\n            left_pointer += 1\n            # decrement right\n            right_pointer -= 1\n\n        # # if left is non-zero:\n        if arr[left_pointer] != 0:\n            left_pointer += 1\n            print(f'ARR at left {arr[left_pointer]}')\n        \n        # # increment left\n        #if right is zero:\n        if arr[right_pointer] == 0:\n            # # decrement right\n            right_pointer -= 1\n    print(arr)\n    return arr\n\n\n\nif __name__ == '__main__':\n    # Use the main function here to test out your implementation\n    arr = [0, 3, 1, 0, -2]\n\n    print(f\"The resulting of moving_zeroes is: {moving_zeroes(arr)}\")"
}
] | 6 |
PinkDiamond1/multisig-coordinator | https://github.com/PinkDiamond1/multisig-coordinator | 0149da93757f4d994e7b0d0cbfd003fed38480fe | cc2a8b96d66cd13b50d2fc5f8388775f9f94ad8f | d3760c5c1389131e31839f76e94c25fad7a9d090 | refs/heads/master | 2023-03-17T02:23:15.517163 | 2018-01-31T15:28:02 | 2018-01-31T15:28:02 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6998730897903442,
"alphanum_fraction": 0.7233502268791199,
"avg_line_length": 26.189655303955078,
"blob_id": "2fd84e1a7459a2fec2a323be7ea6559b7dd09aa3",
"content_id": "b6994b775fad1012710cec9d69cd71fdcbdeac55",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1576,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 58,
"path": "/main.py",
"repo_name": "PinkDiamond1/multisig-coordinator",
"src_encoding": "UTF-8",
"text": "from flask import Flask, send_file, render_template, request, Response\nimport json, base64\nfrom stellar_base.keypair import Keypair\nfrom stellar_base.address import Address\n\napp = Flask(__name__)\n\npool_address = \"GCCD6AJOYZCUAQLX32ZJF2MKFFAUJ53PVCFQI3RHWKL3V47QYE2BNAUT\"\nphrase = b\"Lumenaut\"\n\npool = Address(address=pool_address, network=\"public\")\npool.get()\npool_signers = [acc[\"public_key\"] for acc in pool.signers if acc[\"weight\"] > 0]\n\[email protected](\"/\")\ndef index():\n\treturn render_template(\"index.html\")\n\n# verify base64-encoded string\ndef verify(pubkey, data):\n\tif pubkey not in pool_signers:\n\t\treturn False\n\n\tdata = bytes(base64.b64decode(data))\n\tpubkp = Keypair.from_address(pubkey)\n\n\ttry:\n\t\tpubkp.verify(phrase, data)\n\texcept:\n\t\treturn False\n\telse:\n\t\treturn True\n\ndef handle_update(data):\n\tif not verify(data[0], data[1]):\n\t\treturn (\"Unauthorized signature\", 401)\n\t\n\twith open(\"transactions.json\", 'w') as fp:\n\t\tjson.dump(data[2], fp)\n\n\treturn (\"Signatures updated\", 200)\n\[email protected](\"/transactions\", methods=[\"GET\", \"POST\"])\ndef get_transactions():\n\tif request.method == \"POST\":\n\t\treturn handle_update(request.json)\n\telse:\n\t\twith open(\"transactions.json\", 'r') as json_file:\n\t\t\ttransactions_json = json_file.read()\n\n\t\t\tresponse = Response(transactions_json, content_type=\"application/json; charset=utf-8\")\n\t\t\tresponse.headers.add(\"content-length\", len(transactions_json))\n\t\t\tresponse.headers.add(\"Pragma\", \"no-cache\")\n\t\t\tresponse.headers.add(\"Cache-Control\", \"no-cache, no-store, must-revalidate\")\n\t\t\tresponse.status_code = 200\n\t\t\treturn response\n\napp.run(port=8888)"
},
{
"alpha_fraction": 0.826347291469574,
"alphanum_fraction": 0.826347291469574,
"avg_line_length": 54.66666793823242,
"blob_id": "c17d759d3d5b8e06f16850420d460b4b897fbb20",
"content_id": "2cff5b41390558db82b1d774086336795e8ad5b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 167,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 3,
"path": "/README.md",
"repo_name": "PinkDiamond1/multisig-coordinator",
"src_encoding": "UTF-8",
"text": "# Transaction bundle multi-signature co-ordinator\nFacilitates easy signing of multiple transactions between signers of an account.\nRequires `flask` and `stellar_base`\n"
},
{
"alpha_fraction": 0.6863882541656494,
"alphanum_fraction": 0.7010827660560608,
"avg_line_length": 27.119565963745117,
"blob_id": "1738a52206649d9dfbdd223cbe1446194433eeeb",
"content_id": "6b4aaba4f37a58afdac84f1923b7125d7784bcd1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2586,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 92,
"path": "/static/index.js",
"repo_name": "PinkDiamond1/multisig-coordinator",
"src_encoding": "UTF-8",
"text": "var transactions = [];\nvar handled = false;\n\nconst phrase = \"Lumenaut\"\n\nfunction update_info() {\n\tdocument.getElementById(\"info-box\").style = \"display: block\";\n\n\tvar num_transactions = document.getElementById(\"num-transactions\");\n\tvar cur_signatures = document.getElementById(\"current-signatures\");\n\tnum_transactions.innerHTML = \"Number of transactions: \" + transactions.length;\n\n\tvar xdr = transactions[0];\n\tvar transaction = new StellarBase.Transaction(xdr);\n\tcur_signatures.innerHTML = \"Number of signatures: \" + transaction.signatures.length;\n}\n\nfunction setStyle(id, style) {\n\tdocument.getElementById(id).style = style;\n}\n\nfunction showMessage(msg, color) {\n\tdocument.getElementById(\"conf-msg\").innerHTML = msg;\n\tsetStyle(\"conf-msg\", \"display: block; color: \" + color);\n\tsetTimeout(function() {\n\t\tsetStyle(\"conf-msg\", \"display: none\");\n\t}, 2500);\n}\n\nfunction handlePostResponse(res) {\n\tif (handled) return; handled = true;\n\tif (res.status == 200) {\n\t\tsetStyle(\"sign-box\", \"display: none\");\n\t\tsetStyle(\"stat-info\", \"display: none\");\n\n\t\tsetStyle(\"done-confirmation\", \"display: block; color: rgb(20, 160, 20)\")\n\t} else {\n\t\tshowMessage(\"Signing failure: \" + res.responseText, \"rgb(150, 40, 40)\");\n\t}\n}\n\nfunction sign() {\n\tvar key = document.getElementById(\"key_input\").value;\n\tvar keypair;\n\ttry {\n\t\tkeypair = StellarBase.Keypair.fromSecret(key);\n\t} catch(e) {\n\t\tshowMessage(\"Invalid signing key\", \"rgb(150, 40, 40)\");\n\t\treturn;\n\t}\n\n\tvar signature = btoa(String.fromCharCode.apply(null, keypair.sign(phrase)));\n\tStellarBase.Network.usePublicNetwork();\n\n\tvar signed = [];\n\tfor (var i = transactions.length - 1; i >= 0; i--) {\n\t\tvar xdr = transactions[i];\n\t\tvar transaction = new StellarBase.Transaction(xdr);\n\t\ttransaction.sign(keypair);\n\t\tsigned.push(transaction.toEnvelope().toXDR(\"base64\"));\n\t}\n\n\tvar data = 
[keypair.publicKey(), signature, signed];\n\tvar _json = JSON.stringify(data);\n\n\tvar post = new XMLHttpRequest();\n\tpost.open(\"POST\", \"/transactions\");\n\tpost.setRequestHeader(\"Content-Type\", \"application/json;charset=UTF-8\");\n\tpost.onreadystatechange = function() { \n\t\tif (post.readyState == XMLHttpRequest.DONE) {\n\t\t\thandlePostResponse(post);\n\t\t}\n\t}\n\n\tpost.send(_json);\n}\n\nfunction openSigner(served_transactions) {\n\tsetStyle(\"instructions\", \"display:none\");\n\tsetStyle(\"signer\", \"display:block\");\n\n\ttransactions = served_transactions;\n\tupdate_info();\n}\n\nvar http = new XMLHttpRequest();\nhttp.open(\"GET\", \"/transactions\", true);\nhttp.onreadystatechange = function() { \n\tif (http.readyState == 4 && http.status == 200)\n\t\topenSigner(JSON.parse(http.responseText));\n}\nhttp.send();"
}
] | 3 |
therockmandolinist/dotfiles | https://github.com/therockmandolinist/dotfiles | dfe9fea4891f184a64798fae5f4d707507a1cc8f | 4ed08f6f16fd2e6b1ba6055be4cc8d495f1c19c9 | c1f7578e568e4fb305852db03e3bc7855f656720 | refs/heads/master | 2021-03-27T16:40:08.232373 | 2021-03-22T04:27:55 | 2021-03-22T04:27:55 | 65,071,184 | 7 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.6833333373069763,
"avg_line_length": 29,
"blob_id": "9f67151f3ba1037740bf5953fdfdf3b5c1c1fe57",
"content_id": "73d0a31fa53bc79584babe41872f59c47e749bce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 60,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 2,
"path": "/bin/bin/spotify",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n/usr/bin/spotify --force-device-scale-factor=2 $@\n"
},
{
"alpha_fraction": 0.5066548585891724,
"alphanum_fraction": 0.554569661617279,
"avg_line_length": 29.45945930480957,
"blob_id": "87b8131cbdaeb08c46ef8ee7a2a9802c3bbdaa47",
"content_id": "ebb2f4756feeaf2a01fe999205e6d8e9f60dc48b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1127,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 37,
"path": "/polybar/.config/polybar/blocks/darksky.py",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n\nimport urllib.request\nimport json\nfrom pathlib import Path\nimport re\n\n# Example darkskyrc\n# key = \"00000000000000000000000000000000\"\n# coord = (00.0000,00.0000)\n# units = \"us\"\n\nCONFIG_PATH = f\"{str(Path.home())}/.config/polybar/darkskyrc\"\n\n\ntry:\n with open(CONFIG_PATH) as f:\n conf = json.load(f)\n key = conf[\"key\"]\n coord = conf[\"coord\"]\n units = conf[\"units\"]\n url_path = f\"{key}/{coord[0]},{coord[1]}?units={units}\"\n with urllib.request.urlopen('https://api.darksky.net/forecast/'\n + url_path) as response:\n raw_text = response.read()\n jdict = json.loads(raw_text)\n with open(\"/tmp/darksky\", \"w\") as f:\n json.dump(jdict, f)\nexcept urllib.error.URLError:\n print(\"? ??\")\nelse:\n print(f\"{re.sub('-day|-night', '', jdict['currently']['icon'])}\"\n f\" {round(jdict['currently']['temperature'])}\"\n f\" %{{T2}}%{{F#665C54}}\"\n f\"{round(jdict['daily']['data'][0]['temperatureMax'])}\"\n f\"/{round(jdict['daily']['data'][0]['temperatureMin'])}\"\n f\"%{{T-}}%{{F-}}\")\n"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.6866666674613953,
"avg_line_length": 29,
"blob_id": "7618dd08a1aafa29b5c62df6ce17de2315d4848e",
"content_id": "60ee111101c06f49991a9e85776c21355a3b0544",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 150,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 5,
"path": "/autorandr/.config/autorandr/postswitch",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nbspc wm -O eDP1 HDMI2\n~/.config/bspwm/bspwmrc\nnotify-send \"autorandr\" \"Switched to profile: $(autorandr | grep current | cut -d' ' -f1)\"\n"
},
{
"alpha_fraction": 0.5656108856201172,
"alphanum_fraction": 0.6063348650932312,
"avg_line_length": 14.785714149475098,
"blob_id": "a9924d61b6891dccfe40df79860e900efab211c8",
"content_id": "e98e851ef905803b0b23abf4665319aac144185d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 221,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 14,
"path": "/bin/bin/set-wallpaper",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n\nday_wall=$1\nnight_wall=$2\nnight_start=2000\nday_start=700\n\ntime=$(date +\"%H%M\")\n\nif [ $time -ge $night_start ] || [ $time -lt $day_start ]; then\n feh --bg-fill $night_wall\nelse\n feh --bg-fill $day_wall\nfi\n"
},
{
"alpha_fraction": 0.5764192342758179,
"alphanum_fraction": 0.6768559217453003,
"avg_line_length": 34.230770111083984,
"blob_id": "eb29417887406b9c42c2d4e409ee29c58a588ef2",
"content_id": "31a26fbf37b3f0653a67d47c44c602d8cb72dd9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 458,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 13,
"path": "/bin/bin/desktop-record",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/bash\nif [ -n \"$1\" ]; then\n NAME=\"$1\"\nelse\n NAME=\"output\"\nfi\nscreenkey -f \"Iosevka Term\" --bg-color \"#282828\" --font-color \"#FDF4C1\" --opacity 1 --mods-mode emacs -p fixed -g 1000x47+780+0\nguvcview --device=/dev/video2 > /dev/null 2>&1 &\nnotify-send desktop-record 'Starting recording in 10 seconds...'\nsleep 10\nffmpeg -video_size 2560x1440 -framerate 30 -f x11grab -i :0.0+0,0 -f pulse -ac 2 -i default \"$NAME.mp4\"\npkill screenkey\npkill guvcview\n"
},
{
"alpha_fraction": 0.5863068699836731,
"alphanum_fraction": 0.6041480898857117,
"avg_line_length": 29.09395980834961,
"blob_id": "5e8ec90126f1afbb313920c6e333f7e35c23cdce",
"content_id": "b61e839867c5b39cc08326fb74f7507622962b63",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 4484,
"license_type": "no_license",
"max_line_length": 215,
"num_lines": 149,
"path": "/zsh/.zshrc",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "[[ \"$TERM\" = \"dumb\" ]] && unsetopt zle && return\n\neval \"$(dircolors)\"\nexport LS_COLORS=\"$LS_COLORS:di=94:ex=92:\"\n\n# The following lines were added by compinstall\nzmodload zsh/complist\nzstyle ':completion:*' completer _expand _complete _ignored _correct _approximate\nzstyle ':completion:*' format '%F{yellow}--%d--%f'\nzstyle ':completion:*' group-name ''\nzstyle ':completion:*' insert-unambiguous true\nzstyle ':completion:*' list-colors ${(s.:.)LS_COLORS}\nzstyle ':completion:*:options' list-colors '=(#b)*(-- *)=0=90'\nzstyle ':completion:*' matcher-list 'm:{[:lower:]}={[:upper:]} r:|[._-]=* r:|=*' 'm:{[:lower:]}={[:upper:]} r:|[._-]=* r:|=*' 'm:{[:lower:]}={[:upper:]} r:|[._-]=* r:|=*' 'm:{[:lower:]}={[:upper:]} r:|[._-]=* r:|=*'\nzstyle ':completion:*' menu select=1\nzstyle ':completion:*' select-prompt '%S%p%s'\nzstyle ':completion:*' original true\nzstyle ':completion:*' select-prompt '%SScrolling active: current selection at %p%s'\nzstyle ':completion:*' verbose yes\nzstyle ':completion:*:*:*:*:processes' command \"ps -u $USER -o pid,user,comm -w -w\"\nzstyle ':completion:*:*:kill:*:processes' list-colors '=(#b) #([0-9]#) ([0-9a-z-]#)*=01;34=0=01'\nzstyle :compinstall filename '/home/dieggsy/.zshrc'\n\nautoload -Uz compinit\n\nfor dump in ~/.zcompdump(N.mh+24); do\n compinit\ndone\n\ncompinit -C\n# End of lines added by compinstall\n# Lines configured by zsh-newuser-install\nHISTFILE=~/.histfile\nHISTSIZE=10000\nSAVEHIST=10000\nsetopt appendhistory autocd hist_ignore_all_dups hist_ignore_space\nunsetopt beep\nbindkey -v\nbindkey \"^?\" backward-delete-char\nbindkey -M menuselect '^[[Z' reverse-menu-complete\n# End of lines configured by zsh-newuser-install\n\nZPLUGINDIR=$PREFIX/share/zsh/plugins\n[ -d $ZPLUGINDIR/zsh-autopair ] && source $ZPLUGINDIR/zsh-autopair/autopair.zsh\n\n[[ \"$(tty)\" != \"/dev/tty\"* ]] && [ -d $ZPLUGINDIR/zsh-autosuggestions ] \\\n && source $ZPLUGINDIR/zsh-autosuggestions/zsh-autosuggestions.zsh\n\nif [ -d 
$ZPLUGINDIR/zsh-history-substring-search ]; then\n source $ZPLUGINDIR/zsh-history-substring-search/zsh-history-substring-search.zsh\n bindkey '^[[A' history-substring-search-up\n bindkey '^[[B' history-substring-search-down\n bindkey -M vicmd 'k' history-substring-search-up\n bindkey -M vicmd 'j' history-substring-search-down\nfi\n\nmaybe_host () {\n if [ $SSH_CLIENT ] || [ $SSH_TTY ]; then\n echo \"%F{13}%n@%M%f \"\n fi\n}\n\nmaybe_git () {\n hash git-prompt &> /dev/null && git-prompt\n}\n\nsetopt prompt_subst\nexport PROMPT='$(maybe_host)$(maybe_git)%F{7}%1~%f %F{209}%(!.#.>)%f '\n\nalias ls='ls --color=auto -F'\nalias csi='csi -q'\nalias lsblk='lsblk -o NAME,SIZE,MOUNTPOINT'\nalias chicken-doc='noglob chicken-doc'\nalias startx='startx &>/dev/null'\nalias yay=paru\nalias nmr='sudo systemctl restart NetworkManager'\n\ncd_list () {\n emulate -L zsh\n ls --color=auto -F\n}\n\nchpwd_functions=(${chpwd_functions[@]} \"cd_list\")\n\nzle-keymap-select () {\n if [ $KEYMAP = vicmd ]; then\n # the command mode for vi\n echo -ne \"\\e[2 q\"\n else\n # the insert mode for vi\n echo -ne \"\\e[6 q\"\n fi\n}\n\nif [[ \"$(tty)\" != \"/dev/tty\"* ]]; then\n zle -N zle-keymap-select\n echo -ne '\\e[6 q'\nfi\n\nman() {\n env \\\n LESS_TERMCAP_mb=$(printf \"\\e[38;5;209m\") \\\n LESS_TERMCAP_md=$(printf \"\\e[38;5;209m\") \\\n LESS_TERMCAP_me=$(printf \"\\e[0m\") \\\n LESS_TERMCAP_se=$(printf \"\\e[0m\") \\\n LESS_TERMCAP_so=$(printf \"\\e[48;5;8m\") \\\n LESS_TERMCAP_ue=$(printf \"\\e[0m\") \\\n LESS_TERMCAP_us=$(printf \"\\e[38;5;12m\") \\\n man \"$@\"\n}\n\nqmpv() {\n if [ -z \"$1\" ]; then\n mpv --no-terminal \"$(xclip -o)\" &!\n else\n mpv --no-terminal \"$1\" &!\n fi\n}\n\nssh() {\n [ $TMUX ] && tmux rename-window $(echo \"$*\" | grep -oP '(?<=@)[^ ]+' | head -1)\n autossh -M 0 -o \"ServerAliveInterval=15\" -o \"ServerAliveCountMax=3\" $@\n [ $TMUX ] && tmux set-window-option automatic-rename on\n}\n\nmissing-from-group () {\n comm -23 <(pacman -Sqg \"$1\" | sort) <(pacman 
-Qqg \"$1\" | sort)\n}\n\ncircular-deps () {\n for pkg in $(pacman -Qqd); do\n [[ -z $(comm -12 <(pactree -r $pkg -u | sort) <(pacman -Qqe | sort)) ]] && echo $pkg;\n done\n}\n\ne () {\n pgrep -f 'emacs --daemon' > /dev/null && emacsclient \"$@\" || emacs \"$@\"\n}\n\nfuck() {\n if hash thefuck &>/dev/null; then\n eval \"$(thefuck --alias)\" && fuck\n else\n echo \"thefuck not in \\$PATH\"\n return 1\n fi\n}\n\nhash tmux &>/dev/null && [[ \"$(tty)\" != \"/dev/tty1\" ]] && [ -z $TMUX ] && { tmux attach || tmux }\n"
},
{
"alpha_fraction": 0.6415891647338867,
"alphanum_fraction": 0.6703296899795532,
"avg_line_length": 30.13157844543457,
"blob_id": "d4130ea6fd8267b9c54507b2affbc5b70ed6c85d",
"content_id": "15fdfb4d16c3196f5b7381df8925ded7099a18e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 3549,
"license_type": "no_license",
"max_line_length": 127,
"num_lines": 114,
"path": "/bspwm/.config/bspwm/bspwmrc",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/zsh\n# Behavior\nbspc config focus_follows_pointer true\nbspc config pointer_follows_monitor true\nbspc config external_rules_command \"$HOME/.config/bspwm/rules.sh\"\nbspc config window_gap 30\n\nmonitors=$(bspc query -M --names)\nmain=$(echo $monitors | grep -P 'e-?DP1')\nfirst=$(echo $monitors | head -n1)\n# Make sure laptop is first. Since BSPWM assigns workspaces in visual order,\n# this way I can make sure 1-4 (by index, not name) are actually on the laptop\nbspc monitor $main -s $first\nbspc monitor $main -d 1 2 3 4\nif [ $(echo $monitors | wc -l) -gt 1 ]; then\n i=5;j=6\n for monitor_name in $(echo $monitors | grep -vE 'e-?DP1'); do\n bspc monitor $monitor_name -d $i $j\n i=$((i+2))\n j=$((j+2))\n done\nfi\n\n# Window decoration\nbspc config border_width 5\nbspc config focused_border_color \"#504945\"\nbspc config normal_border_color \"#3C3836\"\nbspc config presel_feedback_color \"#83A598\"\n\n\n# Rules\nbspc rule -r '*'\nbspc rule -a Screenkey manage=off\nbspc rule -a Pavucontrol state=floating\nbspc rule -a Blueman-manager state=floating\nbspc rule -a Pcmanfm state=floating\nbspc rule -a Zathura state=tiled\nbspc rule -a Civ5XP state=fullscreen\nbspc rule -a guvcview state=floating\nbspc rule -a mpv state=floating sticky=on rectangle=960x540+1575+275\nbspc rule -a Steam border=off\nbspc rule -a 'jetbrains-studio:sun-awt-X11-XWindowPeer' manage=off\n\n# Programs\nrun() {\n if [ \"$1\" = \"-f\" ]; then\n flags=\"-f\"\n shift\n fi\n # Make sure the command exists.\n command -v \"$1\" &> /dev/null \\\n && { pgrep $flags \"$1\" &>/dev/null || ($@ &) } || not_found+=\"\\n$1\"\n}\n\n# Bindings\nrun sxhkd -m -1\nrun xcape -e 'Control_L=Escape'\n\n# Pointer behavior\nrun unclutter\n\n# Screen locker\nrun xss-lock -- physlock\n\n# Automount disks\n# run udiskie\n\n# Wallpaper\nrun feh --bg-fill ~/pic/wallpapers/chicken-wall.png\n# Random wallpaper\n# feh --bg-fill $(find ~/pic/wallpapers -type f ! 
-path '*originals*' \\\n# | shuf -n1 --random-source=/dev/urandom)\n\n# Clipboard\n# run -f autocutsel -selection CLIPBOARD -fork\n# run -f autocutsel -selection PRIMARY -fork\n\n# Multi-monitor polybar setup - new polybar instance for each monitor\nkillall -q polybar\nwhile pgrep -u $UID -x polybar >/dev/null; do sleep 1; done\nfor m in $(bspc query -M --names); do\n MONITOR=$m polybar -q simple &\ndone\nln -sf /tmp/polybar_mqueue.$! /tmp/ipc-polybar-simple\n\n# I'm not sure why I need a 'sleep' at the beginning here, but otherwise\n# polybar levels goes below windows\n(sleep 1\n polybar -q levels &\n ln -sf /tmp/polybar_mqueue.$! /tmp/ipc-polybar-levels \\\n && sleep 1 && echo \"cmd:hide\" >> /tmp/ipc-polybar-levels) &\n\n# Floating st + tmux (see rules.sh)\nrun alacritty --class st-float\n# Run emacs in background, connect with emacsclient\n\nif ! xwininfo -root -children | grep -q emacs-float; then\n COLORTERM=truecolor emacsclient -nc --alternate-editor='' -d $DISPLAY --frame-parameters='(quote (name . \"emacs-float\"))' &\nfi\n\n# pgrep nextcloud || ionice -c 3 nice -n12 nextcloud &\n\nrun -f /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1\n\nhash picom && pgrep picom &>/dev/null || picom --xrender-sync-fence -b\n\n\n[ -n \"$not_found\" ] \\\n && notify-send \"bspwmrc\" \"Commands not in \\$PATH:$(echo $not_found | sort -u)\"\n\n# Useful commands\n# xrandr --output eDP1 --pos 2560x0 --auto --output HDMI2 --primary --scale-from 2560x1440 --mode 1920x1080\n\n# xrandr --output eDP1 --pos 1920x0 --auto --output HDMI2 --primary --scale 1x1 --mode 1920x1080\n"
},
{
"alpha_fraction": 0.596069872379303,
"alphanum_fraction": 0.6149927377700806,
"avg_line_length": 27.040817260742188,
"blob_id": "b7b3555194c7cd7902f2d19ec1006ebeafb96d59",
"content_id": "8d266ab33d7bf048ab121a8ba364d7a24d8b0347",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1374,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 49,
"path": "/polybar/.config/polybar/blocks/cal-rofi",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\nimport datetime\nimport calendar\nimport itertools\nimport locale\nfrom subprocess import Popen, PIPE\n\ntoday = datetime.date.today()\nmonth = today.month\nextra = today.replace(day=1).weekday() + 1\nlastday = calendar.monthrange(today.year, month)[1]\n\nENC = locale.getpreferredencoding()\n\ndays = [\"Su\", \"Mo\", \"Tu\", \"We\", \"Th\", \"Fr\", \"Sa\"]\n\ninp = days + [\" \"] * extra + list(\n map(lambda x: str(x).rjust(2), (range(1, lastday + 1)))\n)\n\n\ndef chop(l, n):\n \"\"\"Yield successive n-sized chunks from l.\"\"\"\n for i in range(0, len(l), n):\n yield l[i:i + n]\n\n\nchopped = list(chop(inp, 7))\nchopped[-1] = chopped[-1] + [\" \"] * (7 - len(chopped[-1]))\n\nROFI_CMD = [\n \"rofi\", \"-dmenu\",\n \"-mesg\", f\"{calendar.month_name[month]} {today.year}\",\n \"-location\", \"2\",\n # \"-xoffset\", \"36\",\n \"-columns\", \"7\",\n \"-theme-str\", \"#textbox{text-color:@foregroundcolor;}\",\n \"-theme-str\", \"#inputbar{enabled:false;}\",\n \"-theme-str\", f\"#listview{{lines:{len(chopped)};}}\",\n \"-theme-str\", \"#element{border:1 1 0 0;}\",\n \"-width\", \"384\"\n]\n\ntransposed = list(map(list, zip(*chopped)))\nflattened = list(itertools.chain.from_iterable(transposed))\nactive = flattened.index(str(today.day).rjust(2))\ninp = \"\\n\".join(flattened)\nPopen(ROFI_CMD + [\"-a\", str(active)], stdin=PIPE,\n stdout=PIPE).communicate(input=inp.encode(ENC))[0].decode(ENC).strip()\n"
},
{
"alpha_fraction": 0.5812807679176331,
"alphanum_fraction": 0.5960590839385986,
"avg_line_length": 24.375,
"blob_id": "c6b1cd1f643e3cfe56df8f7926a04d1941665880",
"content_id": "247586efaeccfbd2d020fa14c4ec070b9a62a892",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 203,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 8,
"path": "/etc/networkmanager/NetworkManager/dispatcher.d/10-tzupdate.sh",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\ninterface=$1\nstatus=$2\n\nif [[ \"$status\" = \"up\" ]] && [[ \"$interface\" != \"tun0\" ]] ; then\n echo \"UPDATING TIMEZONE\"\n timedatectl set-timezone \"$(curl --fail https://ipapi.co/timezone)\"\nfi\n"
},
{
"alpha_fraction": 0.48081350326538086,
"alphanum_fraction": 0.5172678232192993,
"avg_line_length": 30.39759063720703,
"blob_id": "85beb966c128bfd6556a2008cedd5275bf191e2e",
"content_id": "0435d23b5be24ec09e519229158098497bbcb650",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2606,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 83,
"path": "/bspwm/.config/bspwm/rules.sh",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/zsh\n\nwid=$1\nclass=$2\ninstance=$3\ntitle=$(xwininfo -id $wid | sed -n '2p' | cut -d\\\" -f2)\n\nbspwm_gap=$(bspc config window_gap)\nbspwm_border=$(bspc config border_width)\n\n# Store width and height of primary\nxrandr \\\n | grep -P \"e-?DP-?1\" \\\n | awk -F'[[:space:]x+]' '{print $4 \" \" $5}' \\\n | read -r display_width display_height\n\npos () {\n # Takes position, W, H and creates a bspwm geometry, where position is any\n # of 0-9, arranged in a 3x3 grid on the screen\n polybar_height=47\n W=$2\n H=$3\n case $1 in\n 0)\n X=15\n Y=$((polybar_height + 15))\n ;;\n 1)\n X=$((display_width/2 - (W/2 - bspwm_border)))\n Y=$((polybar_height + bspwm_gap/2))\n ;;\n 2)\n X=$((display_width - W - bspwm_border*2 - bspwm_gap/2))\n Y=$((polybar_height + bspwm_gap/2))\n ;;\n 3)\n X=15\n Y=$((display_height/2 - (H/2 - bspwm_border)))\n ;;\n 4)\n X=$((display_width/2 - (W/2 - bspwm_border)))\n Y=$((display_height/2 - (H/2 - bspwm_border)))\n ;;\n 5)\n X=$((display_width - W - bspwm_border*2 - bspwm_gap/2))\n Y=$((display_height/2 - (H/2 - bspwm_border)))\n ;;\n 6)\n X=15\n Y=$((display_height - H - bspwm_border*2 - bspwm_gap/2))\n ;;\n 7)\n X=$((display_width/2 - (W/2 - bspwm_border)))\n Y=$((display_height - H - bspwm_border*2 - bspwm_gap/2))\n ;;\n 8)\n X=$((display_width - W - bspwm_border*2 - bspwm_gap/2))\n Y=$((display_height - H - bspwm_border*2 - bspwm_gap/2))\n ;;\n esac\n echo ${2}x$3+$X+$Y\n}\n\nif [[ $instance = \"gl\" ]]; then\n echo $wid > /tmp/mpv-float\n echo layer=normal\nelif [[ $instance = \"st-256color\" && $title = \"htop\" ]]; then\n echo state=floating\nelif [[ $instance = \"st-float\" ]]; then\n echo $wid > /tmp/st-float\n echo layer=above state=floating hidden=on sticky=on rectangle=$(pos 8 1078 560)\nelif [[ $instance = \"emacs-float\" ]]; then\n echo $wid > /tmp/emacs-float\n echo state=floating hidden=on sticky=on rectangle=$(pos 8 1078 1110)\n# fix vlc control window not showing up in fullscreen\nelif [[ $instance = 
\"vlc\" && $title = \"vlc\" ]]; then\n echo layer=above border=off\nelif [[ $class = \"firefox\" && $title = \"Picture-in-Picture\" ]]; then\n echo $wid > /tmp/mpv-float\n echo state=floating sticky=on rectangle=960x540+1575+275\nelif [[ $title = \"Android Emulator - *\" ]]; then\n echo state=floating\nfi\n"
},
{
"alpha_fraction": 0.49820616841316223,
"alphanum_fraction": 0.502032995223999,
"avg_line_length": 41.232322692871094,
"blob_id": "3a0deb6dad7938ce3955f1f78d13640afe7cc08e",
"content_id": "c36384d4d90036aa2474b72a03041e3bc48df047",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 4185,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 99,
"path": "/polybar/.config/polybar/blocks/bluez.c",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <dbus/dbus.h>\n\nstatic void check_and_abort(DBusError *error);\n\nint main() {\n DBusError error;\n dbus_error_init(&error);\n\n DBusConnection *connection = dbus_bus_get(DBUS_BUS_SYSTEM, &error);\n check_and_abort(&error);\n DBusMessage *msg_query = dbus_message_new_method_call(\"org.bluez\",\n \"/\",\n \"org.freedesktop.DBus.ObjectManager\",\n \"GetManagedObjects\");\n DBusMessage *msg_reply = dbus_connection_send_with_reply_and_block(connection,\n msg_query,\n -1,\n &error);\n dbus_message_unref(msg_query);\n check_and_abort(&error);\n // Enter dict\n DBusMessageIter pathdict;\n dbus_message_iter_init(msg_reply, &pathdict);\n // Get length of dict\n int len = dbus_message_iter_get_element_count(&pathdict);\n // Go to first entry\n DBusMessageIter pathdict_entry;\n dbus_message_iter_recurse(&pathdict, &pathdict_entry);\n DBusMessageIter pathdict_kv;\n char *path, *iface, *prop;\n if (len == 1) {\n // Bluetooth is off, probably\n return 0;\n }\n while (len--) {\n // pathdict_kv points to the key in the first entry, which is a path.\n dbus_message_iter_recurse(&pathdict_entry, &pathdict_kv);\n dbus_message_iter_get_basic(&pathdict_kv, &path);\n /* printf(\"%s\\n\",path); */\n // pathdict_kv now points to the value in the first entry, which is a\n // dict where keys are interfaces.\n dbus_message_iter_next(&pathdict_kv);\n int len = dbus_message_iter_get_element_count(&pathdict_kv);\n DBusMessageIter ifacedict_entry, ifacedict_kv;\n dbus_message_iter_recurse(&pathdict_kv, &ifacedict_entry);\n while (len-- > 0) {\n dbus_message_iter_recurse(&ifacedict_entry, &ifacedict_kv);\n\n /* printf(\"%d\", len); */\n dbus_message_iter_get_basic(&ifacedict_kv, &iface);\n /* printf(\" %s\\n\",iface); */\n if (strncmp(iface, \"org.bluez.Device1\", 16) == 0) {\n dbus_message_iter_next(&ifacedict_kv);\n int len = dbus_message_iter_get_element_count(&ifacedict_kv);\n DBusMessageIter propdict_entry, 
propdict_kv;\n dbus_message_iter_recurse(&ifacedict_kv, &propdict_entry);\n /* char *devname; */\n int connected = 0;\n while(len-- > 0) {\n dbus_message_iter_recurse(&propdict_entry, &propdict_kv);\n dbus_message_iter_get_basic(&propdict_kv, &prop);\n /* printf(\" %s\\n\", prop); */\n DBusMessageIter variant;\n if (strncmp(prop, \"Connected\", 9) == 0) {\n dbus_message_iter_next(&propdict_kv);\n dbus_message_iter_recurse(&propdict_kv, &variant);\n dbus_message_iter_get_basic(&variant, &connected);\n\n } /* else if (strncmp(prop, \"Name\", 4) == 0) { */\n /* dbus_message_iter_next(&propdict_kv); */\n /* dbus_message_iter_recurse(&propdict_kv, &variant); */\n /* dbus_message_iter_get_basic(&variant, &devname); */\n /* } */\n dbus_message_iter_next(&propdict_entry);\n }\n if (connected) {\n printf(\"⣿\");\n return 0;\n }\n }\n dbus_message_iter_next(&ifacedict_entry);\n }\n // Go to next dict entry\n dbus_message_iter_next(&pathdict_entry);\n }\n dbus_message_unref(msg_reply);\n printf(\"⠶\");\n return 0;\n}\n\nstatic void check_and_abort(DBusError *error) {\n if (!dbus_error_is_set(error)) return;\n puts(error->message);\n dbus_error_free(error);\n abort();\n}\n"
},
{
"alpha_fraction": 0.547851026058197,
"alphanum_fraction": 0.5553008317947388,
"avg_line_length": 21.662338256835938,
"blob_id": "d15599ffe48680b7d9a4e5cc8ee6b56b59bcd495",
"content_id": "2a510ca5d12ff80e9c3d467c5f8f6909ce1c980e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1745,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 77,
"path": "/X/.xinitrc",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n#echo \"Running X session wrapper\"\n\n# # Load profile\n# for file in \"/etc/profile\" \"$HOME/.profile\" \"/etc/xprofile\" \"$HOME/.xprofile\"; do\n# if [ -f \"$file\" ]; then\n# #echo \"Loading profile from $file\";\n# . \"$file\"\n# fi\n# done\n\n# Load resources\nfor file in \"/etc/X11/Xresources\" \"$HOME/.Xresources\"; do\n if [ -f \"$file\" ]; then\n #echo \"Loading resource: $file\"\n xrdb -merge \"$file\"\n fi\ndone\n\n# # Load keymaps\n# for file in \"/etc/X11/Xkbmap\" \"$HOME/.Xkbmap\"; do\n# if [ -f \"$file\" ]; then\n# #echo \"Loading keymap: $file\"\n# setxkbmap `cat \"$file\"`\n# XKB_IN_USE=yes\n# fi\n# done\n\n# # Load xmodmap if not using XKB\n# if [ -z \"$XKB_IN_USE\" ]; then\n# for file in \"/etc/X11/Xmodmap\" \"$HOME/.Xmodmap\"; do\n# if [ -f \"$file\" ]; then\n# #echo \"Loading modmap: $file\"\n# xmodmap \"$file\"\n# fi\n# done\n# fi\n\n# unset XKB_IN_USE\n\n# Run all system xinitrc shell scripts\nxinitdir=\"/etc/X11/xinit/xinitrc.d\"\nif [ -d \"$xinitdir\" ]; then\n for script in $xinitdir/*; do\n #echo \"Loading xinit script $script\"\n if [ -x \"$script\" -a ! -d \"$script\" ]; then\n . \"$script\"\n fi\n done\nfi\n\n# # Run user xsession shell script\n# script=\"$HOME/.xsession\"\n# if [ -x \"$script\" -a ! -d \"$script\" ]; then\n# #echo \"Loading xsession script $script\"\n# . \"$script\"\n# fi\n\n#echo \"X session wrapper complete, running session $@\"\n\n# OPTIONS=\"bspwm kde\"\n# select opt in $OPTIONS; do\n# if [ \"$opt\" = \"bspwm\" ]; then\n# exec bspwm\n# elif [ \"$opt\" = \"kde\" ]; then\n# exec startkde\n# fi\n# done\n\n# Pointer\nxsetroot -cursor_name left_ptr\n\n# Type speed/delay\nxset r rate 300 50\n\n# Start bspwm\nexec bspwm\n"
},
{
"alpha_fraction": 0.645901620388031,
"alphanum_fraction": 0.6573770642280579,
"avg_line_length": 21.592592239379883,
"blob_id": "efe36dc87f1d69e0765f7723f90fff9ca672e493",
"content_id": "4b874d7df73b067816d23a79f5a5ce4cbbe21c26",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 610,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 27,
"path": "/zsh/.zshenv",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "if [[ \"$PREFIX\" != \"/data/data/com.termux\"* ]]; then\n PREFIX=/usr\nfi\n\nexport EDITOR='emacsclient -a nvim'\nexport VISUAL=$EDITOR\nexport RIPGREP_CONFIG_PATH=~/.config/rg/rg.conf\nexport NLTK_DATA=~/.local/share/nltk_data\nexport _JAVA_AWT_WM_NONREPARENTING=1\nexport DUST_HOME=~/.local/dust\nexport GDK_DPI_SCALE=.5\nexport GDK_SCALE=2\n\npathadd () {\n if [ -d \"$1\" ] && [[ \":$PATH:\" != *\":$1:\"* ]]; then\n PATH=\"$1${PATH:+\":$PATH\"}\"\n fi\n}\n\npathadd $HOME/.local/bin\n# pathadd $HOME/.pyenv/bin\n# pathadd $HOME/.pyenv/shims\npathadd $HOME/bin\npathadd $PREFIX/lib/ccache/bin\n\nexport PATH\nexport KEYTIMEOUT=1\n"
},
{
"alpha_fraction": 0.5223016738891602,
"alphanum_fraction": 0.5352400541305542,
"avg_line_length": 31.633333206176758,
"blob_id": "b7f6c3c06ec14055601a0518f09664eef9781431",
"content_id": "d083fe37f7a480e5bdd8ffd4fb6b32673d6d574d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2940,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 90,
"path": "/polybar/.config/polybar/blocks/darksky-rofi",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n\nimport json\nimport os\nimport locale\nimport re\nimport sys\nfrom datetime import datetime\nfrom subprocess import Popen, PIPE, run\nfrom pathlib import Path\n\nFONT_WIDTH=12\nFONT2_WIDTH=9\nROFI_WIDTH=378\n\nwith open(\"/tmp/darksky\") as f:\n jdict = json.load(f)\n\ndef get_offset_hack():\n PAD_WIDTH=2 * FONT_WIDTH\n current_file=Path(os.path.abspath(__file__))\n\n # battery offset\n batt_script=os.path.join(current_file.parent, \"battery-average\")\n batt_out=run([batt_script], stdout=PIPE)\n # add 2 for padding\n batt_len=len(batt_out.stdout.decode('utf-8')) * FONT_WIDTH + PAD_WIDTH\n\n # volume offset\n p1=Popen([\"amixer\",\"get\",\"Master\"], stdout=PIPE)\n p2=Popen([\"grep\",\"-oP\",\"(?<=\\[)(.*)(?=%\\])\"], stdin=p1.stdout, stdout=PIPE)\n p1.stdout.close()\n vol_out=p2.communicate()[0]\n ICON_WIDTH=6 * FONT_WIDTH # icon + space\n vol_len=(len(vol_out.decode('utf-8').split()[0]) * FONT_WIDTH\n + ICON_WIDTH + PAD_WIDTH)\n\n # weather offset\n WEATHER_WIDTH=(len(f\"{re.sub('-day|-night', '', jdict['currently']['icon'])}\"\n f\" {round(jdict['currently']['temperature'])} \")\n * FONT_WIDTH\n + PAD_WIDTH)\n LO_HI_WIDTH=(len(f\"{round(jdict['daily']['data'][0]['temperatureMax'])}\"\n f\"/{round(jdict['daily']['data'][0]['temperatureMin'])}\")\n * FONT2_WIDTH)\n\n return ROFI_WIDTH - ((batt_len + vol_len) + WEATHER_WIDTH + LO_HI_WIDTH)\n\nENV = os.environ.copy()\nENV['LC_ALL'] = 'C'\nENC = locale.getpreferredencoding()\nROFI_CMD = [\n \"rofi\",\n \"-location\", \"3\",\n \"-dmenu\", \"-i\",\n \"-theme-str\", \"#inputbar {enabled:false;}\",\n \"-theme-str\", \"#textbox {text-color:@foregroundcolor;}\",\n \"-xoffset\", str(get_offset_hack()),\n \"-width\",f\"{ROFI_WIDTH}\",\n \"-markup-rows\"\n]\n\n\ndef run_rofi(args, extra_args, lines):\n return (Popen(args + extra_args, stdin=PIPE, stdout=PIPE, env=ENV)\n .communicate(input=\"\\n\".join(lines).encode(ENC))[0]\n .decode(ENC)).strip()\n\n\ndef main(args):\n if args[1] == 
\"hourly\":\n run_rofi(ROFI_CMD,\n [\"-mesg\", \"hourly forecast\"],\n [\"<span color='#665C54'>\"\n f\"{datetime.fromtimestamp(i['time']).strftime('%H:%M')}\"\n \"</span>\"\n f\" {round(i['temperature'])}° {re.sub('-day|-night', '', i['icon'])}\"\n for i in jdict['hourly']['data']][:24])\n elif args[1] == \"daily\":\n run_rofi(ROFI_CMD,\n [\"-mesg\", \"daily forecast\"],\n [\"<span color='#665C54'>\"\n f\"{datetime.fromtimestamp(i['time']).strftime('%a')}\"\n \"</span>\"\n f\" {round(i['temperatureMin'])}°\"\n f\" - {round(i['temperatureMax'])}° {re.sub('-day|-night', '', i['icon'])}\"\n for i in jdict['daily']['data']])\n\nif __name__ == '__main__':\n main(sys.argv)\n"
},
{
"alpha_fraction": 0.5986621975898743,
"alphanum_fraction": 0.6008918881416321,
"avg_line_length": 16.25,
"blob_id": "f8ee992708e28590c27730550e1caff9152f717c",
"content_id": "2009896fbe97d9f5f6c945288158178b2e2dc07b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 897,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 52,
"path": "/Makefile",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": ".DEFAULT_GOAL:=install\n\nCFLAGS=-O3 -Wall -Wextra\n\n_CONF=$(filter-out polybar/ usr/ etc/ bin/,$(sort $(wildcard */)))\nCONF=$(_CONF:%/=%)\n\n_ETC=$(wildcard etc/*/)\nETC=$(_ETC:etc/%/=%)\n\n_USR=$(wildcard usr/*/)\nUSR=$(_USR:usr/%/=%)\n\n.PHONY: install\ninstall: conf etc bin usr\n\n.PHONY: conf\nconf: polybar\n\tstow -t ~ $(CONF)\n\n.PHONY: bin\nbin: bin/bin/git-prompt\n\tstow -t ~ bin\n\ngit-prompt: bin/bin/git-prompt.c\n\t$(CC) $(CFLAGS) bin/bin/git-prompt.c -o bin/bin/git-prompt \\\n\t\t`pkg-config --silence-errors -libs --cflags libgit2`\n\n.PHONY: etc\netc:\n\tcd etc/ && sudo stow -t /etc $(ETC)\n\n.PHONY: usr\nusr:\n\tcd usr/ && sudo stow -t /usr $(USR)\n\n.PHONY: polybar\npolybar:\n\tcd polybar/.config/polybar/blocks/ && make\n\tstow -t ~ polybar\n\n.PHONY: $(CONF)\n$(CONF): $(_CONF)\n\tstow -t ~ $@\n\n.PHONY: $(ETC)\n$(ETC): $(_ETC)\n\tcd etc && sudo stow -t /etc $@\n\n.PHONY: $(USR)\n$(USR): $(_USR)\n\tcd usr && sudo stow -t /usr $@\n"
},
{
"alpha_fraction": 0.4567360281944275,
"alphanum_fraction": 0.4720700979232788,
"avg_line_length": 24.36111068725586,
"blob_id": "4be783f2a654a2a6405997e94b3270bc996c60e0",
"content_id": "7acbdfaede5246a8784d5f46c30d4826822058c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 913,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 36,
"path": "/bin/bin/wm-exit-dmenu",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\n# message=\"Exit i3?\"\n[[ $(cat /proc/1/comm) == \"systemd\" ]] && logind=systemctl || logind=loginctl\n\nwm-exit () {\n case \"$1\" in\n lock)\n physlock\n ;;\n logout)\n loginctl terminate-session $(loginctl session-status | head -n1 | cut -d' ' -f1)\n ;;\n suspend)\n $logind suspend\n ;;\n # hibernate)\n # $logind hibernate\n # ;;\n reboot)\n $logind reboot\n ;;\n shutdown)\n $logind poweroff\n ;;\n *)\n notify-send 'Invalid argument'\n exit 2\n esac\n}\n\nresponse=$(echo -e \"lock\\nlogout\\nsuspend\\nreboot\\nshutdown\" |\n rofi -dmenu -location 3 -xoffset -15 -yoffset 62 -width 200 \\\n -theme-str \"#inputbar {enabled:false;}\")\nif [ -n \"$response\" ]; then\n wm-exit $response\nfi\n"
},
{
"alpha_fraction": 0.803636372089386,
"alphanum_fraction": 0.8181818127632141,
"avg_line_length": 26.5,
"blob_id": "3279694e1bc10b3c3c73054a71f21ecb546f9cac",
"content_id": "39f780bb394ff3d22b5fe1773c73cdbb02d98630",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "INI",
"length_bytes": 275,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 10,
"path": "/gtk/.config/gtk-3.0/settings.ini",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "[Settings]\ngtk-application-prefer-dark-theme=true\ngtk-font-name=Iosevka Term 8\ngtk-theme-name=Adwaita-dark\ngtk-icon-theme-name=Papirus\ngtk-fallback-icon-theme=Adwaita\ngtk-toolbar-style=GTK_TOOLBAR_ICONS\ngtk-menu-images=1\ngtk-button-images=1\ngtk-primary-button-warps-slider=0\n"
},
{
"alpha_fraction": 0.6023657917976379,
"alphanum_fraction": 0.6078252792358398,
"avg_line_length": 33.34375,
"blob_id": "98895a82cf67bdc5ae3f4466fd57bd3a6801a0b5",
"content_id": "a3fac2f4e5bef371ce641a0bd9f69e5bc160c46b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1099,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 32,
"path": "/bin/bin/tarbackup",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nEXCLUDE=(\"*.o\"\n \"*.so\"\n \"/home/dieggsy/.local\"\n \"/home/dieggsy/.cache\"\n \"/home/dieggsy/.ccache\"\n \"/home/dieggsy/.rustup\"\n \"/home/dieggsy/.cargo\"\n \"/home/dieggsy/.minecraft\"\n \"/home/dieggsy/.config/Signal/attachments.noindex\"\n \"/home/dieggsy/.config/Artix*\"\n \"/home/dieggsy/.AndroidStudio*\"\n \"/home/dieggsy/dotfiles/emacs/.emacs.d/straight\"\n \"/home/dieggsy/src/project-euler/rust/target\"\n \"/home/dieggsy/pic/wallpapers/nasa-visions/originals\"\n \"/home/dieggsy/music\"\n \"/home/dieggsy/downloads\")\n\nionice -c 3 -p $$ &> /dev/null\nrenice +12 -p $$ &> /dev/null\n\n# tar process runnning\npgrep -f 'tar --zstd' &>/dev/null \\\n && echo \"Backup may already be running, exiting...\" \\\n && exit 0\n\n# last backup was successful, and less than a day has passed since\n# Error code 24 should be OK, it means the some source files vanished\necho BACKING UP\ntar --zstd ${EXCLUDE[@]/#/--exclude=} -cvf - $HOME | ssh [email protected] \"cat > ~/snapshot.tar.zst\"\necho FINISHED WITH STATUS $?\n"
},
{
"alpha_fraction": 0.5702797174453735,
"alphanum_fraction": 0.5840908885002136,
"avg_line_length": 34.30864334106445,
"blob_id": "c752a498078a3ae0f11199ec3f2d95fb5814095e",
"content_id": "11c2a06566f290d26037759e802032143db4a3fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 5768,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 162,
"path": "/polybar/.config/polybar/blocks/battery-average.c",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <dirent.h>\n#include <string.h>\n#include <dbus/dbus.h>\n#include <espeak-ng/speak_lib.h>\n\nstatic int read_int(const char* filename);\nstatic int get_joint_percent();\nstatic char *get_icon (int adapter_online, int percent);\nstatic void notify(char *title, char* body);\nstatic void say(char *text);\nstatic int build_message(DBusMessage *msg_notify, char *title, char *body);\nstatic void check_and_abort(DBusError *error);\nstatic int round_multiple(int strength, int multiple);\n\nint main () {\n int percent = get_joint_percent();\n int adapter_online = read_int(\"/sys/class/power_supply/AC/online\");\n\n printf(\"%s %d\", get_icon(adapter_online, percent), percent);\n if (!adapter_online && percent <= 10) {\n notify(\"Battery critically low\", \"Consider charging\");\n say(\"Battery critically low, consider charging\");\n }\n return 0;\n}\n\nstatic int read_int(const char* filename) {\n int energy;\n FILE *f = fopen(filename,\"r\");\n fscanf(f,\"%d\",&energy);\n return energy;\n}\n\nstatic int get_joint_percent() {\n char * parent_dir = \"/sys/class/power_supply/\";\n char * fname_now = \"energy_now\";\n char * fname_full = \"energy_full\";\n\n float numerator=0;\n float denominator=0;\n DIR* dir = opendir(parent_dir);\n struct dirent *ent = readdir(dir);\n while (ent) {\n char* name = ent->d_name;\n if (strncmp(name, \"BAT\", 3) == 0) {\n size_t nowlen = snprintf(NULL, 0, \"%s%s/%s\", parent_dir, name, fname_now) + 1;\n size_t fulllen = snprintf(NULL, 0, \"%s%s/%s\", parent_dir, name, fname_full ) + 1;\n char * nowpath = malloc(nowlen);\n char * fullpath = malloc(fulllen);\n snprintf(nowpath, nowlen, \"%s%s/%s\", parent_dir, name, fname_now);\n snprintf(fullpath, fulllen, \"%s%s/%s\", parent_dir, name, fname_full);\n numerator += read_int(nowpath);\n denominator += read_int(fullpath);\n free(nowpath);\n free(fullpath);\n }\n ent = readdir(dir);\n }\n\n /* return 0; */\n return 
(int)roundf(numerator/denominator * 100);\n}\n\nstatic char * get_icon (int adapter_online, int percent) {\n char* icon;\n percent = round_multiple(percent, 33);\n if (percent == 99) {\n icon = adapter_online ? \"%{F#B8BB26}━━━%{F-}\" : \"━━━\";\n } else if (percent == 66) {\n icon = adapter_online ? \"%{F#B8BB26}━━%{F#665C54}┉%{F-}\" : \"━━%{F#665C54}┉%{F-}\";\n } else if (percent == 33) {\n icon = adapter_online ? \"%{F#B8BB26}━%{F#665C54}┉┉%{F-}\" : \"━%{F#665C54}┉┉%{F-}\";\n } else {\n icon = adapter_online ? \"%{F#665C54}┉┉┉%{F-}\" : \"%{F#665C54}┉┉┉%{F-}\";\n }\n return icon;\n}\n\nstatic void notify(char *title, char* body) {\n\tDBusMessage* msg_notify;\n\tDBusConnection* connection;\n\tDBusError error;\n\n\tdbus_error_init(&error);\n\tconnection = dbus_bus_get(DBUS_BUS_SESSION, &error);\n check_and_abort(&error);\n\n msg_notify = dbus_message_new_method_call(\"org.freedesktop.Notifications\",\n \"/org/freedesktop/Notifications\",\n \"org.freedesktop.Notifications\",\n \"Notify\");\n build_message(msg_notify, title, body);\n\n dbus_connection_send(connection, msg_notify, NULL);\n dbus_connection_flush(connection);\n\n dbus_message_unref(msg_notify);\n}\n\nstatic void say(char *text) {\n static int initialized = 0;\n\n if (!initialized) {\n espeak_Initialize(AUDIO_OUTPUT_PLAYBACK, 0, NULL, 0);\n\n espeak_VOICE voice = {0};\n voice.variant = 2;\n voice.gender = 2;\n\n espeak_SetVoiceByProperties(&voice);\n initialized = 1;\n }\n\n espeak_Synth(text, strlen(text) +1, 0, POS_CHARACTER, 0, espeakCHARS_AUTO,\n NULL, NULL);\n espeak_Synchronize();\n}\n\nstatic int build_message(DBusMessage *msg_notify, char *title, char *body) {\n DBusMessageIter args, actions, hints;\n int replaces_id = -1;\n int timeout = 0;\n char* app_name = \"\";\n char* app_icon = \"\";\n\n dbus_message_iter_init_append(msg_notify, &args);\n int returnCode = dbus_message_iter_append_basic(&args, DBUS_TYPE_STRING, &app_name);\n returnCode |= dbus_message_iter_append_basic(&args, 
DBUS_TYPE_UINT32, &replaces_id);\n returnCode |= dbus_message_iter_append_basic(&args, DBUS_TYPE_STRING, &app_icon);\n returnCode |= dbus_message_iter_append_basic(&args, DBUS_TYPE_STRING, &title);\n returnCode |= dbus_message_iter_append_basic(&args, DBUS_TYPE_STRING, &body);\n\n returnCode |= dbus_message_iter_open_container(&args, DBUS_TYPE_ARRAY, DBUS_TYPE_STRING_AS_STRING, &actions);\n returnCode |= dbus_message_iter_close_container(&args, &actions);\n\n returnCode |= dbus_message_iter_open_container(&args,\n DBUS_TYPE_ARRAY,\n DBUS_DICT_ENTRY_BEGIN_CHAR_AS_STRING\n DBUS_TYPE_STRING_AS_STRING\n DBUS_TYPE_VARIANT_AS_STRING\n DBUS_DICT_ENTRY_END_CHAR_AS_STRING,\n &hints);\n\treturnCode |= dbus_message_iter_close_container(&args, &hints);\n\n\treturnCode |= dbus_message_iter_append_basic(&args, DBUS_TYPE_INT32, &timeout);\n\n\treturn returnCode;\n}\n\nstatic void check_and_abort(DBusError *error) {\n if (!dbus_error_is_set(error)) return;\n puts(error->message);\n dbus_error_free(error);\n abort();\n}\n\nint round_multiple(int strength, int multiple) {\n return ((strength + multiple/2)/multiple) * multiple;\n}\n"
},
{
"alpha_fraction": 0.6303030252456665,
"alphanum_fraction": 0.6484848260879517,
"avg_line_length": 26.5,
"blob_id": "3659487d5f0378d8f5367f02f691c63dd91efb4e",
"content_id": "2414dd3d899cee7ad21bda3aedc29513f11296c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 165,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 6,
"path": "/etc/cron/cron.daily/chicken-doc-update",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n\nif command -v csi; then\n cd $(csi -R chicken.platform -p '(chicken-home)')\n curl http://3e8.org/pub/chicken-doc/chicken-doc-repo-5.tgz | tar -xz\nfi\n"
},
{
"alpha_fraction": 0.41201716661453247,
"alphanum_fraction": 0.4721029996871948,
"avg_line_length": 24.88888931274414,
"blob_id": "8913f4cba919a925029d9faad0fb6f147528b2cc",
"content_id": "ed6f2d171534012db2791bb89b406500897d5178",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 233,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 9,
"path": "/bin/bin/spectrum_ls",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env zsh\nif [ \"$#\" = 1 ]; then\n code=\"$(printf \"%03d\\n\" $1)\"\n print -P -- \"$code: %F{$code}$code%f\"\nelse;\n for code in $(seq -f \"%03g\" ${1:-000} ${2:-255}); do\n print -P -- \"$code: %F{$code}$code%f\"\n done\nfi\n"
},
{
"alpha_fraction": 0.5469879508018494,
"alphanum_fraction": 0.5710843205451965,
"avg_line_length": 24.9375,
"blob_id": "89a50a11357b98e9d83ca203df77dad6421072df",
"content_id": "2babe4f7216cec47bfdb6fd343e6c1df9a1e5869",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 415,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 16,
"path": "/bin/bin/ocrpdf",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\nif [[ -z $1 ]]; then\n echo \"No input file provided.\"\nelif [[ -z $2 ]]; then\n echo \"No output file provided\"\nelse\n echo \"Converting pdf to tif...\"\n \\gs -dNOPAUSE -q -r500 \\\n -sDEVICE=tiffg4 \\\n -dBATCH \\\n -sOutputFile=$TMPDIR/tempocr.tif \\\n $1\n echo \"Running tesseract on pngs...\"\n tesseract $TMPDIR/tempocr.tif $2 >/dev/null 2>&1\n echo \"Done.\"\nfi\n"
},
{
"alpha_fraction": 0.6348884105682373,
"alphanum_fraction": 0.6430020332336426,
"avg_line_length": 19.54166603088379,
"blob_id": "73008cceb8713879bb52f1656f77eede8dc9d2bf",
"content_id": "e8e093b1f04dcfc844e79185754426045249daaa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 493,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 24,
"path": "/polybar/.config/polybar/blocks/Makefile",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "DEST=~/bin/blocks/\nCFLAGS=-O3 -Wall -Wextra -pedantic\n\nFILES=$(wildcard *.c)\nTARGETS=$(basename $(FILES))\n\nall: $(TARGETS)\n\nsmall: CFLAGS += -s\nsmall: all\n\nclean:\n\trm -rvf $(TARGETS)\n\nnetworkmanager: CFLAGS += `pkg-config --cflags --libs libnm`\n\nbluez: CFLAGS += `pkg-config --cflags --libs dbus-1`\n\nbattery-average: CFLAGS += -lm `pkg-config --libs espeak-ng 2>/dev/null || echo -lespeak` `pkg-config --cflags --libs dbus-1`\n\nintel-backlight: CFLAGS += -lm\n\n% : %.c\n\t$(CC) $(CFLAGS) -o $@ $<\n"
},
{
"alpha_fraction": 0.7007125616073608,
"alphanum_fraction": 0.7209026217460632,
"avg_line_length": 37.272727966308594,
"blob_id": "2b2a41da134226f3dfcaa0e456563ddd4c6c8e33",
"content_id": "0936e30dc859d063d20ad5e40d7fe79cbc959940",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "INI",
"length_bytes": 842,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 22,
"path": "/rofi/.config/networkmanager-dmenu/config.ini",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "[dmenu]\n# l = 10\ndmenu_command = rofi -location 3 -yoffset 62 -xoffset -15 -width 330 -theme dropdown -theme-str \"* {highlightcolor: #83A598;}\"\nrofi_highlight = True\n# # Note that dmenu_command can contain arguments as well like `rofi -width 30`\n# # Rofi and dmenu are set to case insensitive by default `-i`\n# l = number of lines to display, defaults to number of total network options\n# fn = font string\n# nb = normal background (name, #RGB, or #RRGGBB)\n# nf = normal foreground\n# sb = selected background\n# sf = selected foreground\n# b = (just set to empty value and menu will appear at the bottom\n# m = number of monitor to display on\n# p = Custom Prompt for the networks menu\n# pinentry = Pinentry command\n\n[editor]\nterminal = termite\ngui_if_available = True\n# terminal = <name of terminal program>\n# gui_if_available = <True or False>\n"
},
{
"alpha_fraction": 0.6215351819992065,
"alphanum_fraction": 0.6332622766494751,
"avg_line_length": 31.34482765197754,
"blob_id": "7c6cfee14b0f68817f4751b3d19a614a7c9d769c",
"content_id": "08a66f43b20ee601d0e5525ebf4c0266496f8eb8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1876,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 58,
"path": "/etc/networkmanager/NetworkManager/dispatcher.d/20-daily-backup.sh",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nexit 0\n\nEXCLUDE=(\"*.o\"\n \"*.so\"\n \"/home/dieggsy/.local\"\n \"/home/dieggsy/.cache\"\n \"/home/dieggsy/.ccache\"\n \"/home/dieggsy/.rustup\"\n \"/home/dieggsy/.cargo\"\n \"/home/dieggsy/.minecraft\"\n \"/home/dieggsy/.config/Signal/attachments.noindex\"\n \"/home/dieggsy/.config/Artix*\"\n \"/home/dieggsy/.AndroidStudio*\"\n \"/home/dieggsy/dotfiles/emacs/.emacs.d/straight\"\n \"/home/dieggsy/src/project-euler/rust/target\"\n \"/home/dieggsy/pic/wallpapers/nasa-visions/originals\"\n \"/home/dieggsy/music\"\n \"/home/dieggsy/downloads\")\n# BANDWIDTH=50000\n\n# necessary info\nstatus=$2\n# Holds time info for last backup\nbackup_file=\"/var/cache/private/dieggsy-$(</etc/machine-id)\"\ntest -f $backup_file || touch $backup_file\narchive=\"/var/cache/private/snapshot.tar.zst\"\nlast_backup=$([[ -f $backup_file ]] && stat -c %Y $backup_file || echo 0)\ncurrtime=$(date +%s)\none_day=86400\n\nionice -c 3 -p $$ &> /dev/null\nrenice +12 -p $$ &> /dev/null\n\n# tar process runnning\npgrep -f 'tar --zstd' &>/dev/null && exit 0\n\n# Reasons to exit (irrelevant if -p is specified)\n# status is something other than \"up\"\n[ \"$status\" != \"up\" ] && exit 0\n\n# last backup was successful, and less than a day has passed since\n# Error code 24 should be OK, it means the some source files vanished\nlast_exit_code=\"$(<$backup_file)\"\n[[ $last_exit_code == 0 ]] \\\n && [[ \"$(($currtime - $last_backup))\" -lt $one_day ]] \\\n && exit 0\n\n# Run this in background, just in case. Networkmanager-dispatcher stops\n# processes with a timeout, and although rsync might be forking on it's own...\n# eh.\n(backup=\"tar --zstd ${EXCLUDE[@]/#/--exclude=} -cf $archive /home/dieggsy\"\n copy=\"scp $archive [email protected]:~/\"\n echo BACKING UP\n $backup && $copy\n status=$?\n echo $status > $backup_file\n echo FINISHED WITH STATUS $status) &\n"
},
{
"alpha_fraction": 0.5223260521888733,
"alphanum_fraction": 0.5334025621414185,
"avg_line_length": 26,
"blob_id": "0f0b407801f3f5c60dca8bd28ab44033eaad8ea8",
"content_id": "94a19e5fe5acb74b1666248bd209e195fbb7cb9e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2889,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 107,
"path": "/polybar/.config/polybar/blocks/bluez-rofi",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\nimport os\nimport locale\nimport time\nfrom subprocess import Popen, PIPE, run\nfrom pathlib import Path\n\nimport dbus\n\nbus = dbus.SystemBus()\n\nENV = os.environ.copy()\nENV['LC_ALL'] = 'C'\nENC = locale.getpreferredencoding()\nBLUETOOTH_GUI = \"blueman-manager\"\nROFI_CMD = [\n \"rofi\",\n \"-p\", \"bluetooth\",\n \"-location\", \"3\",\n \"-xoffset\", \"-15\",\n \"-yoffset\", \"62\",\n \"-dmenu\", \"-i\",\n \"-width\", \"300\",\n \"-theme-str\", \"* {highlightcolor: #83A598;}\"\n # \"-theme-str\", \"#inputbar {enabled:false;}\",\n]\n\n\n# rofi -dmenu -i -theme-str \"#window {width: 500;}\"\n\n\ndef make_device_list(devmap):\n \"\"\"Generate list for rofi stdin\"\"\"\n def rank_device(dev):\n if dev[\"Connected\"]:\n ROFI_CMD.extend([\"-a\", \"0\"])\n return 1\n if dev[\"Paired\"]:\n if dev[\"Trusted\"]:\n return 2\n return 3\n return 4\n lst = list(devmap.values())\n lst.sort(key=rank_device)\n pad = max(map(lambda dev: len(dev[\"Address\"]), lst)) + 2\n return list(map(lambda dev: (\"P\" if dev[\"Paired\"] else \" \")\n + (\"T\" if dev[\"Trusted\"] else \" \")\n + \" \"\n + dev[\"Alias\"].ljust(pad, \" \")\n + \" \"\n + dev[\"Address\"],\n lst))\n\n\ndef run_rofi(args, inp):\n \"\"\"Open rofi process with inp as stdin\"\"\"\n return (Popen(args, stdin=PIPE, stdout=PIPE, env=ENV)\n .communicate(input=inp.encode(ENC))[0]\n .decode(ENC)).strip()\n\n\ndef main():\n bluez = bus.get_object(\"org.bluez\", \"/\")\n bluez_iface = dbus.Interface(bluez, \"org.freedesktop.DBus.ObjectManager\")\n managed_objects = bluez_iface.GetManagedObjects()\n\n adapter_path = None\n # devlist = []\n devmap = {}\n for key, val in managed_objects.items():\n if val.get(\"org.bluez.Device1\"):\n dev = val[\"org.bluez.Device1\"]\n dev[\"path\"] = key\n # devlist.append(dev)\n devmap[dev[\"Address\"]] = dev\n elif val.get(\"org.bluez.Adapter1\"):\n adapter_path = key\n\n inp = \"\\n\".join(make_device_list(devmap)\n + [\"\", \"Start Discovery\", \"Open 
GUI\"])\n\n sel = run_rofi(ROFI_CMD, inp)\n\n adapter = bus.get_object(\"org.bluez\", adapter_path)\n\n if sel == \"Start Discovery\":\n adapter_iface = dbus.Interface(adapter, \"org.bluez.Adapter1\")\n adapter_iface.StartDiscovery()\n time.sleep(30)\n elif sel == \"Open GUI\":\n Popen([BLUETOOTH_GUI]).wait()\n elif sel:\n sel = sel.split()[-1]\n path = devmap[sel][\"path\"]\n\n dev = bus.get_object(\"org.bluez\", path)\n dev_iface = dbus.Interface(dev, \"org.bluez.Device1\")\n\n connected = devmap[sel][\"Connected\"]\n if connected:\n dev_iface.Disconnect()\n else:\n dev_iface.Connect()\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5617021322250366,
"alphanum_fraction": 0.5829787254333496,
"avg_line_length": 21.380952835083008,
"blob_id": "1a2dc0947bf837219555bf5aff723e8512e0a585",
"content_id": "724a00bedd6b542ef268cef835bd236e7f51a568",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 470,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 21,
"path": "/bin/bin/unicode.py",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n\nimport sys, unicodedata\n\n# Print all named unicode chars\ntry:\n # Max of range chosen experimentally, lol\n for i in range(32,918000):\n try:\n char = chr(i)\n print(f'U+{i:05x}\\t{char}\\t{unicodedata.name(char)}')\n except ValueError:\n continue\n # try:\n # print(f'{i}\\t{char}')\n # except UnicodeEncodeError:\n # print(i)\nexcept (BrokenPipeError, IOError):\n pass\n\nsys.stderr.close()\n"
},
{
"alpha_fraction": 0.48599973320961,
"alphanum_fraction": 0.4996713697910309,
"avg_line_length": 27.384328842163086,
"blob_id": "01826025aba943be99ef002c89e2562cb9663bda",
"content_id": "6b0d3671e44d6103b4d137ce8cb33d7cac2ce37c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 7607,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 268,
"path": "/bin/bin/git-prompt.c",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "// This file contains two ways to produce a git shell prompt\n// - Using libgit2, which makes for much, MUCH cleaner code\n// - Parsing git status, which is ugly but somehow faster\n#include <stdio.h>\n#if defined __has_include\n# if __has_include (<git2.h>)\n#include <git2.h>\n// This one seems not to work on freshly initialized repos\nvoid print_branch_info(git_repository *repo);\nint status_cb(const char *path, unsigned int status_flags, void *payload);\nvoid print_other_info(git_repository *repo);\n\ntypedef struct {\n int staged;\n int conflicts;\n int modified;\n int dirty;\n} status_data;\n\nint main () {\n fclose(stderr);\n git_libgit2_init();\n git_repository *repo = NULL;\n if (git_repository_open_ext(&repo, \"./\", 0, \"/home:/tmp\") == 0) {\n printf(\"(\");\n print_branch_info(repo);\n printf(\"|\");\n print_other_info(repo);\n printf(\") \");\n /* git_status_list_new(&status, repo, opts); */\n /* printf(\" %d\", d.modified); */\n }\n git_libgit2_shutdown();\n return 0;\n}\n\nint status_cb(const char *path, unsigned int flags, void *payload) {\n (void)path;\n status_data *d = (status_data*)payload;\n if (flags & GIT_STATUS_WT_NEW) {\n d->dirty = 1;\n }\n if (flags & GIT_STATUS_WT_MODIFIED) {\n d->modified++;\n }\n if (flags & GIT_STATUS_CONFLICTED) {\n d->conflicts++;\n }\n if ((flags & GIT_STATUS_INDEX_NEW)\n || (flags & GIT_STATUS_INDEX_MODIFIED)\n || (flags & GIT_STATUS_INDEX_DELETED)\n || (flags & GIT_STATUS_INDEX_TYPECHANGE)\n || (flags & GIT_STATUS_INDEX_RENAMED)) {\n d->staged++;\n }\n return 0;\n}\n\nvoid print_other_info(git_repository *repo) {\n status_data d = {0};\n git_status_foreach(repo, status_cb, &d);\n /* if (d.staged>0) {} */\n if (!d.staged && !d.conflicts && !d.modified && !d.dirty) {\n printf(\"%%F{10}~%%f\");\n return;\n };\n if (d.staged > 0) { printf(\"%%F{12}@%d%%f\", d.staged); }\n if (d.modified > 0) { printf(\"%%F{11}#%d%%f\", d.modified); }\n if (d.conflicts > 0) { printf(\"%%F{9}!%d%%f\", d.conflicts); }\n if 
(d.dirty) { printf(\"*\"); }\n}\n\nvoid print_branch_info(git_repository *repo) {\n git_reference *ref = NULL, *remote_ref = NULL;\n size_t ahead, behind;\n const char *branch_name = NULL;\n\n git_repository_head(&ref, repo);\n if (git_branch_name(&branch_name, ref) == 0) {\n printf(\"%%F{10}%s%%f\", branch_name);\n } else {\n char hash[8];\n printf(\"%%F{10}:%s%%f\",git_oid_tostr(hash,8,git_reference_target(ref)));\n };\n /* puts(branch_name); */\n if (git_branch_upstream(&remote_ref, ref) == 0) {\n git_graph_ahead_behind(&ahead, &behind, repo,\n git_reference_target(ref),\n git_reference_target(remote_ref));\n if (ahead > 0) {\n printf(\"%%F{13}+%zu%%f\", ahead);\n }\n if (behind > 0) {\n printf(\"%%F{13}-%zu%%f\", behind);\n }\n }\n}\n#else\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <regex.h>\n#include <unistd.h>\n#include <sys/wait.h>\n#include <fcntl.h>\n\nint regex_match(char *pattern, const char *string) {\n int status;\n regex_t re;\n\n if (regcomp(&re, pattern, REG_EXTENDED|REG_NOSUB) != 0) {\n return 0;\n }\n status = regexec(&re, string, (size_t) 0, NULL, 0);\n regfree(&re);\n if (status != 0) {\n return 0;\n }\n return 1;\n}\n\nFILE* popenish (pid_t *pid, char* cmd[]) {\n enum { READ = 0, WRITE = 1};\n FILE* output;\n int pipefd[2];\n pipe(pipefd); //create a pipe\n\n *pid = fork(); //span a child process\n if (*pid == 0) {\n // Child. Let's redirect its standard output to our pipe and replace process with tail\n close(pipefd[READ]);\n dup2(pipefd[WRITE], STDOUT_FILENO);\n int dev_null = open(\"/dev/null\", O_WRONLY);\n dup2(dev_null, STDERR_FILENO);\n execvp(cmd[0], cmd);\n }\n\n //Only parent gets here. 
Listen to what the tail says\n close(pipefd[WRITE]);\n output = fdopen(pipefd[READ], \"r\");\n\n return output;\n}\n\nvoid pcloseish(pid_t pid, FILE* file) {\n waitpid(pid,NULL,0);\n fclose(file);\n}\n\nvoid print_branch_info(FILE *status) {\n size_t n = 0;\n char *first_line = NULL;\n getline(&first_line, &n, status);\n if (strstr(first_line, \"(no branch)\") != NULL) {\n FILE *rev;\n pid_t pid;\n char *cmd[] = {\"git\", \"rev-parse\", \"--short\", \"HEAD\", NULL};\n rev = popenish(&pid, cmd);\n char *rev_out;\n n = 0;\n getline(&rev_out, &n, rev);\n char *rev_parsed = malloc(sizeof(char) * n);\n sscanf(rev_out,\"%[^\\n]\", rev_parsed);\n printf(\"%%F{10}:%s%%f\", rev_parsed);\n free(rev_out);\n free(rev_parsed);\n pcloseish(pid, rev);\n return;\n }\n char *branch = malloc(sizeof(char) * n);\n if (strncmp(first_line, \"## No commits\", 13) == 0) {\n sscanf(first_line, \"## No commits yet on %[^\\n]\", branch);\n printf(\"%%F{10}%s%%f\", branch);\n }\n else {\n char *ahead_behind_str = malloc(sizeof(char) * n);\n int ahead_behind = 0;\n int behind = 0;\n int ret = sscanf(first_line,\n \"## %[^.\\n]...%*[^ ] %*c%s %d, behind %d%*c\",\n branch,\n ahead_behind_str,\n &ahead_behind,\n &behind);\n switch (ret) {\n case 4 :\n printf(\"%%F{10}%s%%f%%F{13}+%d-%d%%f\",\n branch, ahead_behind, behind);\n free(ahead_behind_str);\n break;\n case 3 :\n if (strncmp(ahead_behind_str, \"ahead\", 5) == 0) {\n printf(\"%%F{10}%s%%f%%F{13}+%d%%f\", branch, ahead_behind);\n }\n else {\n printf(\"%%F{10}%s%%f%%F{13}-%d%%f\", branch, ahead_behind);\n }\n free(ahead_behind_str);\n break;\n default :\n printf(\"%%F{10}%s%%f\", branch);\n break;\n }\n }\n free(branch);\n free(first_line);\n}\n\nvoid print_other_info(FILE *status) {\n int staged = 0;\n int conflicts = 0;\n int modified = 0;\n int dirty = 0;\n\n size_t n =0;\n char *first_line = NULL;\n while (getline(&first_line, &n, status) != -1) {\n if (regex_match(\"^(A[ DM]|C[ DM]|D[ M]|M[ DM]|R[ DM])\", first_line)) {\n staged += 1;\n 
} else if (regex_match(\"^(A[AU]|D[DU]|U[ADU])\", first_line)) {\n conflicts +=1;\n } else if (regex_match(\"^( [DM]|A[DM]|C[DM]|M[DM]|R[DM])\",\n first_line)) {\n modified += 1;\n } else if (strncmp(\"??\",first_line,2) == 0) {\n dirty = 1;\n }\n };\n if (staged > 0) {\n printf(\"%%F{12}@%d%%f\", staged);\n }\n if (conflicts > 0) {\n printf(\"%%F{9}!%d%%f\", conflicts);\n }\n if (modified > 0) {\n printf(\"%%F{11}#%d%%f\", modified);\n }\n if (dirty) {\n printf(\"*\");\n }\n if (!staged && !conflicts && !modified && !dirty) {\n printf(\"%%F{10}~%%f\");\n };\n free(first_line);\n}\n\nint main() {\n FILE *status;\n pid_t pid;\n char *cmd[] = {\"git\", \"status\", \"--porcelain\", \"-b\", NULL};\n status = popenish(&pid,cmd);\n int c;\n if ((c=getc(status)) != '#') {\n return 0;\n }\n ungetc(c,status);\n printf(\"(\");\n print_branch_info(status);\n printf(\"|\");\n print_other_info(status);\n printf(\") \");\n pcloseish(pid, status);\n return 0;\n}\n\n# endif\n#endif\n"
},
{
"alpha_fraction": 0.6470588445663452,
"alphanum_fraction": 0.6519030928611755,
"avg_line_length": 29.744680404663086,
"blob_id": "997a81032b050b92a698cc4c4de5735b4ae18186",
"content_id": "4f5f06b53ce34acc5f96995c358c91a654a7c626",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1445,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 47,
"path": "/bin/bin/snapbackup",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# if [ $(/usr/bin/id -u) -ne 0 ]; then\n# echo \"Invalid Permissions. Are you root?\"\n# exit\n# fi\n# Basic snapshot-style rsync backup script\nLOCALDIR=/run/media/$USER/wd-passport\nREMOTEDIR=\"/mnt/wd-passport/backup\"\nREMOTEHOST=odroid\nREMOTEUSER=dieggsy\n\n# Config\nif [ -d $LOCALDIR ]; then\n echo \"Backing up locally...\"\n SSHOPT=\"\"\n DEST=\"$LOCALDIR/backup\"\n SSHDEST=\"\"\nelse\n echo \"Backing up over ssh...\"\n SSHOPT=\"-e ssh -zz\"\n SSHDEST=\"$REMOTEUSER@$REMOTEHOST:\"\n DEST=\"$REMOTEDIR\"\nfi\nOPT=\"-avAXh --delete --exclude-from=/home/$USER/.rsync-exclude\"\nLINK=\"--link-dest=../last\"\nSRC=$HOME\nSNAP=\"$SSHDEST$DEST\"\nLAST=\"$DEST/last\"\ndate=`date \"+%Y-%m-%dT%H%M\"`\n\nnotify-send \"Backing up home directory\" \"to $SNAP\"\n\n# Run rsync to create snapshot\nrsync $OPT $SSHOPT $LINK $SRC ${SNAP}/$date # | pv -lep -s $(rsync -n $OPT $SSHOPT $LINK $SRC ${SNAP}/$date | awk 'NF' | wc -l) > /dev/null\n\n# Remove symlink to previous snapshot\n# Create new symlink to latest snapshot for the next backup to hardlink\nif [ -d $LOCALDIR ]; then\n ln -vnsf $DEST/$date $LAST\n find $DEST -maxdepth 1 -type d ! -path $DEST -mtime +30 -exec echo 'Removing' {} + -exec rm -vrf {} +\nelse\n ssh $REMOTEUSER@$REMOTEHOST \"ln -vnsf ${DEST}/$date $LAST\"\n # ssh $REMOTEUSER@$REMOTEHOST \"find $DEST -maxdepth 1 -type d ! -path $DEST -mtime +30 -exec echo 'Removing' {} + -exec rm -vrf {} +\"\nfi\n\nnotify-send \"Done backing up home directory\"\n"
},
{
"alpha_fraction": 0.6430768966674805,
"alphanum_fraction": 0.6646153926849365,
"avg_line_length": 24,
"blob_id": "93bf07b3d200c8c9087fbeb5a48646b8f0657ee5",
"content_id": "e9ae8b8496fa8aa9d87ca3ff1555acd830b97ae5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 325,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 13,
"path": "/readline/.inputrc",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "# -*- mode: sh -*-\n\nset keymap vi-insert\n# \"(\": \"\\C-v(\\C-v)\\e[D\"\n# \"\\\"\": \"\\C-v\\\"\\C-v\\\"\\e[D\"\nset editing-mode vi\nset show-mode-in-prompt on\nset colored-completion-prefix on\nset blink-matching-paren on\nset menu-complete-display-prefix on\nset keyseq-timeout 1\nset vi-ins-mode-string \\1\\e[6 q\\2\nset vi-cmd-mode-string \\1\\e[2 q\\2\n"
},
{
"alpha_fraction": 0.6242938041687012,
"alphanum_fraction": 0.6299434900283813,
"avg_line_length": 31.18181800842285,
"blob_id": "99576ca718f78f66fa7e0dfd9d5f51fbeba09865",
"content_id": "d73c863d759ed8d0463c7e5842c3568133b5e812",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 354,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 11,
"path": "/zsh/.zlogin",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "if command -v gpg2 &>/dev/null; then\n ! pgrep gpg-agent &> /dev/null && gpgconf --launch gpg-agent\n export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)\n gpg-connect-agent updatestartuptty /bye >&/dev/null\nfi\n\n# if [[ ! $DISPLAY && $(tty) = /dev/tty1 ]]; then\n# exec startx &> /dev/null\n# fi\n\nhash dust &>/dev/null && eval $(dust env)\n"
},
{
"alpha_fraction": 0.5710306167602539,
"alphanum_fraction": 0.582172691822052,
"avg_line_length": 21.4375,
"blob_id": "4281dcb0d31606c255440f474fc9db97a6ef13b0",
"content_id": "9542259893a28b4e7cf3e0cd6e17141dfbeaf444",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 359,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 16,
"path": "/polybar/.config/polybar/blocks/intel-backlight.c",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <math.h>\n\nint main () {\n float curr;\n FILE *f;\n f = fopen(\"/sys/class/backlight/intel_backlight/brightness\", \"r\");\n fscanf(f,\"%f\", &curr);\n\n float max;\n f = fopen(\"/sys/class/backlight/intel_backlight/max_brightness\", \"r\");\n fscanf(f,\"%f\", &max);\n\n printf(\"%d\",(int)roundf(curr/max * 100));\n return 0;\n}\n"
},
{
"alpha_fraction": 0.4913010597229004,
"alphanum_fraction": 0.4992435574531555,
"avg_line_length": 31.64197540283203,
"blob_id": "e0bdf1e6ee8708b60ccfa9dd9b4ba1a9591dc74c",
"content_id": "370fe56c25e02f695aea644a2be5a976e15735f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 2644,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 81,
"path": "/polybar/.config/polybar/blocks/networkmanager.c",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <glib.h>\n#include <NetworkManager.h>\n\nchar* get_wifi_icon(int strength, int vpn_on);\nint round_multiple(int strength, int multiple);\n\nint main () {\n NMClient *client;\n\tconst GPtrArray *devices;\n GError *error = NULL;\n\n client = nm_client_new (NULL, &error);\n if (!client) {\n g_message (\"Error: Could not create NMClient: %s.\", error->message);\n g_error_free (error);\n return EXIT_FAILURE;\n\t}\n int vpn_on = 0;\n\n const GPtrArray *active = nm_client_get_active_connections(client);\n for (uint i = 0; i < active->len; i++) {\n NMActiveConnection* conn = g_ptr_array_index(active, i);\n if (nm_active_connection_get_vpn(conn)\n || strncmp(nm_active_connection_get_connection_type(conn),\"tun\",3) == 0) {\n vpn_on = 1;\n }\n }\n\n devices = nm_client_get_devices (client);\n int have_conn = 0;\n for (uint i = 0; i < devices->len; i++) {\n NMDevice *device = g_ptr_array_index (devices, i);\n if (NM_IS_DEVICE_WIFI(device)) {\n NMAccessPoint *ap = nm_device_wifi_get_active_access_point(NM_DEVICE_WIFI(device));\n if (ap != NULL){\n GBytes* ssid = nm_access_point_get_ssid(ap);\n char * ssid_str = nm_utils_ssid_to_utf8(g_bytes_get_data (ssid, NULL),\n g_bytes_get_size (ssid));\n printf(\"%s%s %s\",\n (have_conn ? \" \" : \"\"),\n get_wifi_icon(nm_access_point_get_strength(ap),vpn_on),\n ssid_str);\n have_conn = 1;\n }\n } else if (NM_IS_DEVICE_ETHERNET(device)) {\n NMActiveConnection *conn = nm_device_get_active_connection(device);\n if (conn != NULL) {\n printf(\"%s<-> %s\",\n (have_conn ? \" \" : \"\"),\n nm_active_connection_get_id(conn));\n\n have_conn = 1;\n }\n }\n }\n if (!have_conn || devices->len == 0) {\n printf(\"! No Connection\");\n }\n\n return 0;\n}\n\nchar* get_wifi_icon(int strength, int vpn_on) {\n char* inside;\n strength = round_multiple(strength, 33);\n if (strength == 99) {\n inside = vpn_on ? \"(>>>)\" : \"[>>>]\";\n } else if (strength == 66) {\n inside = vpn_on ? 
\"(>> )\" : \"[>> ]\";\n } else if (strength == 33) {\n inside = vpn_on ? \"(> )\" : \"[> ]\";\n } else {\n inside = vpn_on ? \"( )\" : \"[ ]\";\n }\n return inside;\n}\n\nint round_multiple(int strength, int multiple) {\n return ((strength + multiple/2)/multiple) * multiple;\n}\n"
},
{
"alpha_fraction": 0.580406665802002,
"alphanum_fraction": 0.6025878190994263,
"avg_line_length": 29.05555534362793,
"blob_id": "b514efe7ed3e9e64ecdb056a3a361074d586a8ee",
"content_id": "6775be7555fdd49d1cfcb0acf7f2e3de042526e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 541,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 18,
"path": "/bin/bin/ocrpdf2",
"repo_name": "therockmandolinist/dotfiles",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\nif [[ -z $1 ]]; then\n echo \"No input file provided.\"\nelif [[ -z $2 ]]; then\n echo \"No output file provided\"\nelse\n echo \"Converting pdf to png...\"\n convert -density 500 $1 $TMPDIR/tempocr.png\n count=0\n echo \"Running tesseract on pngs...\"\n while [ -f $TMPDIR/tempocr-$count.png ]; do\n echo \" Page $count\"\n tesseract $TMPDIR/tempocr-$count.png $TMPDIR/tempocr >/dev/null 2>&1\n cat $TMPDIR/tempocr.txt >> $2\n let count=count+1\n done\n echo \"Created output file $2\"\nfi\n"
}
] | 34 |
yasseerr/TCTG | https://github.com/yasseerr/TCTG | 16988014dff20dd3f574398f8567352455702eba | d0ec0cf6411cb393fe97438a9c2a798a259edeff | 7e43ebdd7304e3956e9a7e4ee6e03f427c38120d | refs/heads/master | 2022-11-14T15:25:38.653512 | 2020-07-12T16:49:50 | 2020-07-12T16:49:50 | 278,760,238 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.6212013959884644,
"alphanum_fraction": 0.623321533203125,
"avg_line_length": 31.662790298461914,
"blob_id": "c9f99faabdf0f65ac01a42951d3437cae5aab5fa",
"content_id": "baf848c20848ccf2fed1a883d9f1730b65e6944b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2830,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 86,
"path": "/tctg_manager.py",
"repo_name": "yasseerr/TCTG",
"src_encoding": "UTF-8",
"text": "\nfrom PySide2.QtCore import QObject,Signal,Slot,QUrl\nfrom PySide2.QtGui import QTextDocument\n\nimport json\nimport yaml\nfrom yaml.error import YAMLError\nfrom jinja2 import Environment, PackageLoader, select_autoescape,Template,TemplateError\n\nfrom pygments import highlight\nfrom pygments.lexers import get_lexer_by_name\nfrom pygments.formatters.html import HtmlFormatter\nfrom pygments.formatters.img import ImageFormatter\nfrom pygments.styles import get_style_by_name\n\n\n#jinjaEnv = Environment()\n\n\n\nclass TCTG_Manager(QObject):\n\n templateError = Signal()\n yamlError = Signal()\n \n def __init__(self):\n QObject.__init__(self)\n self.nmbr= 0\n\n @Slot(str,str,result=str)\n def render(self, template_str,values):\n valuesDict = dict()\n try:\n valuesDict = yaml.load(values)\n print(valuesDict)\n except YAMLError as e :\n self.yamlError.emit()\n return str(e)\n\n rendered = \"nothing rendered\"\n try:\n the_template = Template(template_str)\n rendered = the_template.render(valuesDict)\n print(rendered)\n except Exception as e:\n self.templateError.emit()\n return str(e)\n print(values)\n return rendered\n #return \"every thing is rendered dont worry\"\n \n @Slot(str,str, result=str)\n def highlightCode(self, code,language):\n #converting the html to platin\n td = QTextDocument()\n td.setHtml(code)\n print(\"the plain text is here : \"+ td.toPlainText())\n codeLexer = get_lexer_by_name(\n language)\n f = open(\"highlightTest.html\",'wb')\n #fi = open(\"highlightTest.png\",'wb')\n #style = get_style_by_name(\"native\")\n formatter = HtmlFormatter(full=True,noclasses=True, encoding=\"UTF-8\")\n #imgFormater = ImageFormatter()\n result = highlight(td.toPlainText(),codeLexer,formatter)\n td.setHtml(result.decode(\"UTF-8\"))\n print(td.toHtml())\n return td.toHtml()\n \n @Slot(str,str,QUrl)\n def saveState(self, templateText:str,valuesText:str,fileURL:QUrl):\n #print(\"the save state has been called \\n\"+templateText+\" \\n\"+valuesText+\"\\n \\n 
\"+fileURL.toString())\n print(\"printing the path: \" + fileURL.path())\n f = open(fileURL.toLocalFile(),\"w\")\n stateDict = {\"name\":\"unknown\",\"template\":templateText,\"values\":valuesText}\n json.dump(stateDict,f)\n f.close()\n \n @Slot(QUrl, result='QVariantList')\n def openState(self, fileUrl:QUrl):\n f = open(fileUrl.toLocalFile(), \"r\")\n stateDict = json.load(f)\n f.close()\n retList = [None,None]\n print(stateDict[\"template\"])\n print(stateDict[\"values\"])\n return [stateDict[\"template\"],stateDict[\"values\"]]\n\n \n \n\n"
},
{
"alpha_fraction": 0.4949594736099243,
"alphanum_fraction": 0.6574421525001526,
"avg_line_length": 27.66288948059082,
"blob_id": "06c85124b9aef538292019c9a39de8b2741796ae",
"content_id": "7d0954b90f4111f7f059c59972309c33e51f2160",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10118,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 353,
"path": "/resources.py",
"repo_name": "yasseerr/TCTG",
"src_encoding": "UTF-8",
"text": "# Resource object code (Python 3)\n# Created by: object code\n# Created by: The Resource Compiler for Qt version 5.15.0\n# WARNING! All changes made in this file will be lost!\n\nfrom PySide2 import QtCore\n\nqt_resource_data = b\"\\\n\\x00\\x00\\x03\\x9a\\\ni\\\nmport QtQuick 2.\\\n12\\x0d\\x0aimport QtQui\\\nck.Controls 2.12\\\n\\x0d\\x0a\\x0d\\x0aItem {\\x0d\\x0a \\\nproperty string \\\nresult : \\x22\\x22\\x0d\\x0a \\\n visible: true\\x0d\\\n\\x0a clip: true\\x0d\\\n\\x0a Rectangle{\\x0d\\\n\\x0a anchors\\\n.fill: parent\\x0d\\x0a \\\n color: \\x22l\\\nightGray\\x22\\x0d\\x0a \\\n border.color:\\\n \\x22black\\x22\\x0d\\x0a \\\n border.width: \\\n1\\x0d\\x0a\\x0d\\x0a }\\x0d\\x0a\\x0d\\x0a \\\n Flickable {\\x0d\\x0a \\\n id: scrol\\\nlView\\x0d\\x0a f\\\nlickableDirectio\\\nn: Flickable.Ver\\\nticalFlick\\x0d\\x0a \\\n anchors.fill\\\n: parent\\x0d\\x0a \\\n TextArea.flick\\\nable: TextArea{\\x0d\\\n\\x0a id:\\\nresultTextArea\\x0d\\x0a\\\n anch\\\nors.fill: parent\\\n\\x0d\\x0a pl\\\naceholderText: \\x22\\\nResults\\x22\\x0d\\x0a \\\n text: resu\\\nlt\\x0d\\x0a \\\nselectByMouse: t\\\nrue\\x0d\\x0a \\\n selectByKeyboar\\\nd: true\\x0d\\x0a \\\n clip: false\\\n\\x0d\\x0a //\\\nreadOnly: true\\x0d\\x0a\\\n font\\\n.pixelSize: 17\\x0d\\x0a\\\n\\x0d\\x0a }\\x0d\\x0a \\\n ScrollBar.v\\\nertical: ScrollB\\\nar{}\\x0d\\x0a }\\x0d\\x0a \\\n //onResultChang\\\ned: resultTextAr\\\nea.text = result\\\n\\x0d\\x0a\\x0d\\x0a}\\x0d\\x0a\\x0d\\x0a/*##^##\\\n\\x0d\\x0aDesigner {\\x0d\\x0a \\\n D{i:0;autoSize\\\n:true;height:480\\\n;width:640}\\x0d\\x0a}\\x0d\\x0a\\\n##^##*/\\x0d\\x0a\\\n\\x00\\x00\\x05S\\\ni\\\nmport QtQuick 2.\\\n12\\x0d\\x0aimport QtQui\\\nck.Controls 2.12\\\n\\x0d\\x0a\\x0d\\x0aItem {\\x0d\\x0a \\\nvisible: true\\x0d\\x0a \\\n property alia\\\ns currentText: t\\\nextArea.text\\x0d\\x0a \\\n clip: true\\x0d\\x0a \\\n Rectangle{\\x0d\\x0a \\\n anchors.fi\\\nll: parent\\x0d\\x0a \\\n //color: Qt.\\\nlightGray\\x0d\\x0a \\\n border.color:\\\n 
\\x22black\\x22\\x0d\\x0a \\\n border.width: \\\n1\\x0d\\x0a }\\x0d\\x0a Fl\\\nickable {\\x0d\\x0a \\\n id: scrollVie\\\nw\\x0d\\x0a flick\\\nableDirection: F\\\nlickable.Vertica\\\nlFlick\\x0d\\x0a \\\nanchors.fill: pa\\\nrent\\x0d\\x0a Te\\\nxtArea.flickable\\\n: TextArea{\\x0d\\x0a \\\n id: tex\\\ntArea\\x0d\\x0a \\\n selectByMouse\\\n: true\\x0d\\x0a \\\n selectByKeyb\\\noard: true\\x0d\\x0a \\\n font.pix\\\nelSize: 15\\x0d\\x0a \\\n textForm\\\nat: Text.RichTe\\\nxt\\x0d\\x0a \\\n//placeholderTex\\\nt: \\x22Values in YA\\\nML\\x22\\x0d\\x0a \\\n //preeditText: \\\n\\x22\\x22\\x0d\\x0a \\\n//overwriteMode:\\\n true\\x0d\\x0a \\\n text: \\x22name :\\\n awesome<br/>chi\\\nldes :<br/> - co\\\nol<br/> - smart<\\\nbr/> - fantastic\\\n\\x22\\x0d\\x0a }\\x0d\\x0a \\\n ScrollBar.\\\nvertical: Scroll\\\nBar{}\\x0d\\x0a o\\\nnWidthChanged: {\\\n\\x0d\\x0a te\\\nxtArea.width = t\\\nextArea.width>sc\\\nrollView.width?t\\\nextArea.width:sc\\\nrollView.width\\x0d\\x0a\\\n }\\x0d\\x0a \\\n onHeightChang\\\ned: {textArea.he\\\night = textArea.\\\nheight>scrollVie\\\nw.height?textAre\\\na.height:scrollV\\\niew.height}\\x0d\\x0a \\\n }\\x0d\\x0a function\\\n updateHighlight\\\ning(richText){\\x0d\\x0a\\\n textArea\\\n.text = richText\\\n\\x0d\\x0a }\\x0d\\x0a fun\\\nction getPlainTe\\\nxt(){\\x0d\\x0a r\\\neturn textArea.g\\\netText(0,textAre\\\na.length)\\x0d\\x0a }\\\n\\x0d\\x0a function l\\\noadText(t){\\x0d\\x0a \\\n textArea.te\\\nxt = t\\x0d\\x0a }\\x0d\\x0a}\\\n\\x0d\\x0a\\\n\\x00\\x00\\x03\\xdc\\\ni\\\nmport QtQuick 2.\\\n0\\x0d\\x0aimport QtQuic\\\nk.Controls 2.12\\x0d\\\n\\x0a\\x0d\\x0aItem {\\x0d\\x0a p\\\nroperty alias cu\\\nrrentText: textA\\\nrea.text\\x0d\\x0a vi\\\nsible: true\\x0d\\x0a \\\n clip: true\\x0d\\x0a \\\n Rectangle{\\x0d\\x0a \\\n anchors.fil\\\nl: parent\\x0d\\x0a \\\n //color: Qt.l\\\nightGray\\x0d\\x0a \\\n border.color: \\\n\\x22black\\x22\\x0d\\x0a \\\n border.width: 1\\\n\\x0d\\x0a }\\x0d\\x0a\\x0d\\x0a F\\\nlickable {\\x0d\\x0a \\\n id: 
scrollVi\\\new\\x0d\\x0a flic\\\nkableDirection: \\\nFlickable.Vertic\\\nalFlick\\x0d\\x0a \\\n anchors.fill: p\\\narent\\x0d\\x0a T\\\nextArea.flickabl\\\ne: TextArea {\\x0d\\x0a \\\n id: t\\\nextArea\\x0d\\x0a \\\n anchors.fil\\\nl: parent\\x0d\\x0a \\\n selectByM\\\nouse: true\\x0d\\x0a \\\n selectBy\\\nKeyboard: true\\x0d\\x0a\\\n font\\\n.pixelSize: 15\\x0d\\x0a\\\n text\\\n: \\x22the name is :\\\n {{name}} \\x5cn the\\\n children are : \\\n\\x5cn{%for child in\\\n childes%}\\x5cn ano\\\nther one : {{chi\\\nld}} {%endfor%}\\x22\\\n\\x0d\\x0a }\\x0d\\x0a \\\n ScrollBar.v\\\nertical: ScrollB\\\nar{}\\x0d\\x0a }\\x0d\\x0a\\x0d\\x0a \\\n function load\\\nText(t){\\x0d\\x0a \\\n textArea.text \\\n= t\\x0d\\x0a }\\x0d\\x0a\\x0d\\x0a\\x0d\\x0a\\\n}\\x0d\\x0a\\x0d\\x0a/*##^##\\x0d\\x0aDe\\\nsigner {\\x0d\\x0a D{\\\ni:0;autoSize:tru\\\ne;height:480;wid\\\nth:640}D{i:1}D{i\\\n:3;anchors_x:73;\\\nanchors_y:28}\\x0d\\x0a}\\\n\\x0d\\x0a##^##*/\\x0d\\x0a\\\n\\x00\\x00\\x05\\x84\\\n\\x00\\\n\\x00\\x1a\\xdax\\x9c\\xddY[o\\xdb6\\x14~7\\xe0\\xff\\\n@(@\\xe0t\\x81d'\\xeb\\xb0*\\xd8\\x83\\xebd\\xcd\\\n\\xb0d\\xb9y\\x0d\\x82a\\x0b\\x18\\x89\\xb6\\xb9\\xd2\\xa2JR\\\nq\\x8c\\xcc\\xff}\\x87\\xbaY\\x92)GN\\xba\\xac\\x1b_\\\nl\\x93\\xe7\\xfa\\xf1\\xf0\\x9cC\\x9aNC.\\x14\\xbaP\\x17\\\n\\x11\\xf5>\\xa1=\\xbb\\xb7\\xd7n\\xd1\\xd2\\xa4}M\\x03\\x9f\\\n\\xcf\\xcck'x\\xce#%Q\\xcf\\xde_Y\\x1b\\xf0@\\\n\\x09\\xce\\xa4\\x99\\xf3\\x90b\\xc6\\xc7\\x9a\\xb3\\x9b\\xaf\\x0d\\x07\\xc3\\\n\\x0f\\xc9\\x84\\xe3\\xa4S\\x96\\xebX\\xedV\\xbb\\xd5\\x0fCF\\\n=\\xac(\\x0fR{\\x1e\\xdb-\\x04\\x83\\xfa.\\x9ab\\x9a\\\n\\xce&s\\x8a*F\\x5c\\xf4Y\\x0eE\\xc7:&\\x8cq\\\nt\\xcd\\x05\\xf3\\xad\\x9dd}F}5qQo\\xef\\xfb\\\nn21!t<Q.z\\xf7]:qO%\\xbd\\\n\\xd32\\x94\\x88H25%A\\xf4\\x1e\\x0b\\x17\\x9d&_\\\n\\x1e\\xb5Y(\\x1dz.\\xb3(\\x1b%+\\xb6\\x7f\\xa4\\x8c\\\nd\\xfa\\xb3\\xd1\\xf7\\xb4?\\xe8\\x11)\\xf2\\xa0r\\xca_\\xc8\\\n\\xcc\\xb6mk\\x07-\\xcc\\xd4\\xe5\\xd9\\x0c\\x03\\x1e\\x92\\xe0J\\\naE\\x12\\xb2U\\xa2\\x92\\x8e3\\xa0F1y\\x
acj\\x95\\\nZN\\x00}/\\x02\\x0e\\xa0\\x0a|,\\xfc\\x9f\\xc9\\xdc\\xd6\\\nl\\xab\\xb4<\\x18\\x0a:\\x1e\\x13A\\xc0\\x0e\\x83y1M\\\nf\\x9e\\xc6!\\xd9{[\\xcfu\\x0c\\xba+~o\\x04\\x83\\\n\\xc4\\xf7\\xa49\\x0cW@\\x9d\\xc0\\xd0\\x1c\\x03\\xcd\\xf3<\\x0c\\\nr\\xdb\\xbe\\x18\\x06\\x06o\\x86d\\x1a\\xb2\\xd8\\xa1\\x0d\\x98>\\\nb\\x16\\x11\\xd9\\x90%\\xe6\\xd8\\xeeKc\\x90\\xea\\x83pE\\\nB,\\xb0\\xe2\\x02\\x18\\x1b\\xd9\\x00\\x19A\\x95E-69\\\nZG\\xbefo\\xa0h\\x10m\\xab\\xa6\\xb8\\x0cx8o\\\nJ{\\x8ees\\xc0\\xb5`tId\\xc4\\x94|\\x81\\xd3\\\nC\\x0e\\xa9\\xb5YB\\xb9$\\x81ODc\\xfb\\x18\\xc1\\x06\\\nb\\xc7\\xd9\\xc8\\xf7:\\xf2/\\xe4=$\\xf5\\xb0\\x99\\xf3\\xfd\\\n;(PFM\\x8b,\\x81\\x7f\\x10\\xd4O\\x0aYQ\\xa9\\\n\\xce&\\xe3|e9\\x8f\\x03\\x0f\\xf2\\x82\\xb4G\\x941\\x17\\\nA\\xa4\\x93@\\x15KAv\\x00uT\\xea#P6R\\\nKU%\\x8a\\xf2z\\xa2-\\x16~\\x9dT\\xa9e\\x052\\\n\\xd0\\x1c\\xa7\\x85\\xab\\x96(\\x14dD\\x04d\\xa5\\x8c\\xb2\\xd7\\\n\\xed\\x1e\\xac\\xa7L\\xf5\\x96\\x09\\x17E\\x17\\x93tQ\\xef\\xe0\\\n}a\\xbd\\x1a\\x16\\xe0\\xb9G&\\x9cAD\\x0e\\xe3}\\xb2\\\n\\xb6\\xe6x\\xca\\xe2M\\xb3\\xfe\\x83X\\xa4\\xb1\\xfc\\x91\\x92\\x99\\\n\\x09\\x0a\\x91,7\\xb6\\xa7\\xa99\\xcfG\\xa1\\x09\\x9c\\x1eg\\\n\\xd14p\\x91Y\\x8d\\xe03\\xb0\\xa1\\x06\\x0f\\xf3a\\xca\\xe0\\\nX\\x1e\\xa8\\x9eQ\\xb4\\xe2\\xe1)\\x16cZ\\xab\\x9b\\x91\\x91\\\n\\xaa\\xa1H\\x8c\\x96.\\xda3rbF\\xc7\\x014q\\x80\\\n\\xcd\\x85\\xb2\\xfb\\xfa\\xd7\\x09\\x08C\\x7f\\xe5?\\x87<\\xfc\\x9a\\\n\\xf7\\xa9L\\xf5>R\\xaa\\xbe\\x0d\\x12q\\xcaOh\\xd6\\xf7\\\n@YqX\\xa5j\\x12/z\\x8c \\x97\\xb9h\\x84\\x99\\\n4\\xf6E\\x03\\xe8\\xdb?\\xad\\xe9\\x8a\\x1c'=#v\\xf2\\\n\\x89~@\\x16U\\xba\\xf9\\x8a\\x98\\x8f\\xee\\x08\\xc2\\xbeO|\\\nd\\x99\\xb9Wx\\xe12\\x10`\\xe8\\xc3zv\\x82A\\xa7\\\n\\x9cmm/\\x12:c\\xeb\\xdc\\xb3[\\xccS\\xc5\\x85F\\\n\\xfd\\xd8&\\xdb\\xe1\\xe9\\x9a\\xdad7\\xd2\\xe2\\xfb\\xfc\\xcdX\\\n\\x7f\\x80\\x0b\\x14\\xd5C\\x9c\\x8d&\\x1b6<;<\\xd3\\xdb\\\n\\x82\\xd4\\x84 \\x19\\x85\\xf1Mm\\x04\\xb5`\\xa2 \\x93O\\\n 
\\xb0\\x99\\x0en\\x1a\\x8c\\xcd\\x12\\xee\\xb1\\xd0\\xac\\xb7\\x1e\\xf7\\\n\\x09lY\\xdd6\\x1c\\x98\\xb9=\\x1eH\\xce\\x88\\x0d\\xeds\\\n'\\x93b@,\\xd5T\\x10\\x1d\\x85>\\xc4\\xc1q\\xc1\\xbc\\\nN\\x1e,\\xb9\\xd1\\x03\\x90\\x96\\x8b\\xdd\\xb5n\\xfa\\xa7'\\xd6\\\n\\xce\\xc6\\x0d\\xfa\\xfap\\xd0\\x17\\x81\\xacMh\\x12\\x15\\x95\\xc6\\\n\\xfe\\xb5\\xa2c\\xefE1\\xaf\\x9dL\\x1a\\x85\\xc6.f\\xd7\\\n\\x90\\xd7rp\\xffe\\x87\\x1aZ\\xd9\\xb4\\xfa7:\\xda\\xa5\\\n\\xce\\xf7\\xb5\\x5c\\xfc\\xf6\\x09\\x17/y\\x14\\xf8Ood|\\\nmm\\xbc\\x8f\\xb57\\xea\\x7f\\xc6\\xc7\\xb7\\xab\\xcb8\\xbe\\x0b\\\n\\x18\\x9e\\x03\\x0cp\\x94\\xa7~\\x82zQ\\x8bD\\x08\\xbd\\xab\\\n0$\\xcd\\x9a\\xfa\\xbf_\\xad\\xff/\\xc0\\xa0^k\\x8c\\xc1\\\nZ\\x88\\xaeB\\x1c\\xac?\\xce\\xf9\\x8f4\\x8f\\x0d\\xf84\\xe4\\\n\\x01\\xe4a\\x1b\\xea\\x01|gDU*\\xc2\\xf2\\xcd\\xcd\\x96\\\nD\\x0d\\xf5\\xed\\xacc\\xe9\\x07\\xbc\\xe2\\xbe/\\xdb=;\\xbb\\\n6M\\xe3\\xe6MB\\xda\\xefu+\\xf7\\xb0\\xe5\\xb3H\\xf5\\\n\\x1efx9Y\\x12\\xa47C\\xeb\\x1cj\\xa7$\\x08\\xd4\\\np\\xf8\\xc0\\x08\\x10&H\\xf1\\x989\\xa9TqX.\\x19\\\nG\\xf1-\\xc4\\xcd\\xdfy\\xa4=\\xe1\\xd3\\xc2NH\\xc2\\x88\\\n\\xa7\\x8e\\x1e\\xa8\\xd4\\xb5b\\xa5\\xc5\\x99r\\x1fzJ5O\\\n\\xfa\\xc9\\xe5\\xfb\\xe4\\xa9\\x9e_\\x92\\xf1\\xa0\\xefy$\\xac\\xe2\\\n\\xa7G\\xb1\\x92Y7<\\xd2\\xb6K\\xed\\x0a\\xfa\\xc6\\xf8X\\\n\\xa4=\\xfaU0Y9Yy\\x11\\xcby\\x9e\\xd3\\xf4\\xec\\\n\\xa6\\xd2+\\xc2\\x1d\\x07\\xbc\\xfb\\x1cQU|\\xa5Z\\x14\\xdd\\\n\\xbb$\\x7f\\x02LO\\xb97\\x80\\x00\\x00<\\xfdjZ0\\\n\\xca\\xd7\\x83<P\\x10z.xH\\x04\\x9bC\\xbcT\\xe0\\\n/\\xd8\\xe085\\xf1\\x9a>\\xe3\\x02\\xf3\\xf2\\x8c5\\x087\\\n\\xc3c\\xe5\\xbf\\x10n\\xe5\\xb4\\xf0\\xd5D\\x9bn\\xde\\x04Q\\\n'`&\\xe0\\x9a\\xc7^\\x8eY\\xc7\\x1cG\\x95\\x88d\\x1c\\\n\\xfb:\\xea:\\xa9\\xa8\\xdf\\xba\\xbf\\xaf\\xe8)\\xc4\\xe9\\x0ay\\\n\\xafJ\\xfe\\xff\\x8dS\\x9dToO\\x13\\x9c+\\x81\\x9a\\xa1\\\n_t\\xf3\\x06O\\xd9\\x91\\x10\\x5c\\xb8e\\xc7tH\\xc6/\\\n-T\\xa2\\x80+\\x14f\\x16C\\xe3>\\x85\\x8d)y\\xcc\\\n\\x83\\xfc\\x19K\\x8bB\\x06Y\\xd9\\x86f\\xf2f\\x84\\xb1\\x15\
\\nYi\\x81\\x8d?\\x9c7[[\\x7flm\\xb5[\\x87D\\\n\\xc2m\\x9b\\xe4OG\\x87\\x8f\\xd4\\xed\\x1d\\xa4\\x05\\xe26\\xfd\\\n/F?\\xb9dS\\xc9\\xff5\\xc5\\x99\\x07\\xb7\\xf7v\\xf9\\\nk\\xee\\xf6\\xdeu\\x17\\xb1\\x9aX\\xc5\\x1b\\xa7\\xdd\\xfa\\x1bR\\\n\\xe4\\xfc(\\\n\"\n\nqt_resource_name = b\"\\\n\\x00\\x0f\\\n\\x0aPF\\x9c\\\n\\x00R\\\n\\x00e\\x00s\\x00u\\x00l\\x00t\\x00s\\x00V\\x00i\\x00e\\x00w\\x00.\\x00q\\x00m\\x00l\\\n\\x00\\x10\\\n\\x0c\\xfd\\x02\\xdc\\\n\\x00V\\\n\\x00a\\x00l\\x00u\\x00e\\x00s\\x00E\\x00d\\x00i\\x00t\\x00o\\x00r\\x00.\\x00q\\x00m\\x00l\\\n\\x00\\x12\\\n\\x0c\\x87>\\xbc\\\n\\x00T\\\n\\x00e\\x00m\\x00p\\x00l\\x00a\\x00t\\x00e\\x00E\\x00d\\x00i\\x00t\\x00o\\x00r\\x00.\\x00q\\x00m\\\n\\x00l\\\n\\x00\\x08\\\n\\x08\\x01Z\\x5c\\\n\\x00m\\\n\\x00a\\x00i\\x00n\\x00.\\x00q\\x00m\\x00l\\\n\"\n\nqt_resource_struct = b\"\\\n\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x01\\\n\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\\n\\x00\\x00\\x00t\\x00\\x01\\x00\\x00\\x00\\x01\\x00\\x00\\x0c\\xd5\\\n\\x00\\x00\\x01sC\\x8c!\\xc1\\\n\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\\n\\x00\\x00\\x01s0N\\xcf5\\\n\\x00\\x00\\x00J\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x08\\xf5\\\n\\x00\\x00\\x01s=\\xbb\\xb0\\xde\\\n\\x00\\x00\\x00$\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x03\\x9e\\\n\\x00\\x00\\x01sC\\xec/)\\\n\"\n\ndef qInitResources():\n QtCore.qRegisterResourceData(0x03, qt_resource_struct, qt_resource_name, qt_resource_data)\n\ndef qCleanupResources():\n QtCore.qUnregisterResourceData(0x03, qt_resource_struct, qt_resource_name, qt_resource_data)\n\nqInitResources()\n"
},
{
"alpha_fraction": 0.6918798685073853,
"alphanum_fraction": 0.7085650563240051,
"avg_line_length": 27.09375,
"blob_id": "a0f83983112ef1a8fe443db31d563c40213f9922",
"content_id": "8043b6af15d3bf4c7a011c61718ae5c8184dc2e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 899,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 32,
"path": "/main.py",
"repo_name": "yasseerr/TCTG",
"src_encoding": "UTF-8",
"text": "# This Python file uses the following encoding: utf-8\nimport sys\nimport os\n\n\nfrom PySide2.QtGui import QGuiApplication\nfrom PySide2.QtQml import QQmlApplicationEngine,QQmlContext,qmlRegisterType\nfrom PySide2.QtCore import QObject\n\nfrom tctg_manager import TCTG_Manager\n\nimport resources\n\n#deployment configuration\n#https://stackoverflow.com/questions/58035550/pyinstaller-and-qml-files\n# application_path = (\n# sys._MEIPASS\n# if getattr(sys, \"frozen\", False)\n# else os.path.dirname(os.path.abspath(__file__))\n# )\n\nif __name__ == \"__main__\":\n app = QGuiApplication(sys.argv)\n qmlRegisterType(TCTG_Manager, \"TCTG\", 1, 0, \"TCTG_Manager\")\n engine = QQmlApplicationEngine()\n #engine.load(os.path.join(os.path.dirname(__file__), \"main.qml\"))\n engine.load(\"qrc:/main.qml\")\n engine.rootContext()\n\n if not engine.rootObjects():\n sys.exit(-1)\n sys.exit(app.exec_())\n"
},
{
"alpha_fraction": 0.69140625,
"alphanum_fraction": 0.70703125,
"avg_line_length": 24.5,
"blob_id": "591e4b633026d9c7a7fa8e69121b6a436d6bdc70",
"content_id": "61b3ae42f9e4102f0ba56d89d762d66209b45bc8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 256,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 10,
"path": "/EditorManager.py",
"repo_name": "yasseerr/TCTG",
"src_encoding": "UTF-8",
"text": "# This Python file uses the following encoding: utf-8\nfrom PySide2 import QtCore\nfrom PySide2 import QtWidgets\nfrom PySide2 import QtQuick\n\n\nclass EditorManager(QObject):\n def __init__(self):\n self.templateText = \"\"\n self.valuesText = \"\"\n\n"
}
] | 4 |
m3579/ActsProject | https://github.com/m3579/ActsProject | ff13ee5a12c1f0585487ac0beddc479453d89806 | 2b11b1b74af7320d697c7fa02907d5602ae05f15 | 95eb06c0714d30af9ea7fcf26bdbf42cab6e9f3d | refs/heads/master | 2019-01-01T02:18:56.349586 | 2015-12-08T02:35:21 | 2015-12-08T02:36:17 | 42,147,060 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.543063759803772,
"alphanum_fraction": 0.5535831451416016,
"avg_line_length": 22.765625,
"blob_id": "1f46f3752e86e43fa1ed95453bc8c3cfd691865a",
"content_id": "f183983b1b0e7786cedd470ff3ce6cfae6745388",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1523,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 64,
"path": "/ActsProject/art.py",
"repo_name": "m3579/ActsProject",
"src_encoding": "UTF-8",
"text": "from time import sleep\nimport colorama\nimport sys\n\nclass Terminal():\n def __init__(self, maxRows=50, maxColumns=79):\n colorama.init()\n\n self.maxRows = maxRows\n self.maxColumns = maxColumns\n\n self.frame = 0\n\n self.lineCount = 0\n self.prevLineCount = self.lineCount\n\n self.resetdelay = 0.02\n\n def draw(self, text, resetdelay=None):\n if resetdelay == None:\n resetdelay = self.resetdelay\n\n lines = text.split(\"\\n\")\n\n self.prevLineCount = self.lineCount\n self.lineCount = 0\n\n for line in lines:\n\n self.lineCount += 1\n\n length = len(line)\n\n if length > self.maxColumns:\n self.error(self.lineCount, \"Too many columns in this animation\")\n elif length < self.maxColumns:\n while len(line) < self.maxColumns:\n line += \" \"\n\n print(line)\n\n self.frame += 1\n\n self.reset(resetdelay, self.lineCount)\n\n def reset(self, delay, linecount):\n sleep(delay)\n\n for i in range(linecount):\n sys.stdout.write(\"\\033[1A\")\n\n sys.stdout.write(\"\\r\")\n\n def finish(self, linecount):\n \n for i in range(linecount):\n print(\" \" * self.maxColumns)\n\n\n def error(self, line, message):\n self.reset(self.resetdelay, self.lineCount)\n self.finish(self.prevLineCount)\n print(\"Error (\", self.frame, \" : \", line, \"): \", message, sep=\"\")\n while True: pass\n"
},
{
"alpha_fraction": 0.7777777910232544,
"alphanum_fraction": 0.7777777910232544,
"avg_line_length": 17,
"blob_id": "2f604887548b389aa2dc53bf6a4b01f320591e35",
"content_id": "678b8e971d00d8cb114da48b060384ccc2df2602",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 36,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 2,
"path": "/README.md",
"repo_name": "m3579/ActsProject",
"src_encoding": "UTF-8",
"text": "# ActsProject\nA project for school.\n"
},
{
"alpha_fraction": 0.39695367217063904,
"alphanum_fraction": 0.4059682786464691,
"avg_line_length": 21.661972045898438,
"blob_id": "f40e99ef24b736d8bc59ac0d9df58fa2c279af30",
"content_id": "8e805d82f240413f4a367d09b6545daaabe3d78e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3219,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 142,
"path": "/ActsProject/ActsProject.py",
"repo_name": "m3579/ActsProject",
"src_encoding": "UTF-8",
"text": "from stories import jesus_ascends\nfrom stories import holy_spirit_comes\nfrom stories import peter_heals_lame_beggar\nfrom stories import peter_john_before_sanhedrin\nfrom stories import ananias_and_saphira\nfrom stories import choosing_of_seven\nfrom stories import stephen_siezed\nfrom stories import stephens_speech_to_sanhedrin\nfrom stories import simon_the_sorcerer\nfrom stories import philip_and_ethiopian\nfrom stories import sauls_conversion\nfrom stories import aeneus_and_dorcas\nfrom stories import peters_vision\nfrom stories import peters_miraculous_escape\n\nfrom art import Terminal\nimport sys\n\nstoryList = [jesus_ascends, # 1\n holy_spirit_comes, # 2\n peter_heals_lame_beggar, # 3\n peter_john_before_sanhedrin, # 4\n ananias_and_saphira, # 5\n choosing_of_seven, # 6\n stephen_siezed, # 7\n stephens_speech_to_sanhedrin, # 8\n simon_the_sorcerer, # 9\n philip_and_ethiopian, # 10\n sauls_conversion, # 11\n aeneus_and_dorcas, # 12\n peters_vision, # 13\n peters_miraculous_escape # 14 \n ]\n\n# Comment these out during debugging\nprint()\ncommand = input(\"story> \").lower()\n\n# DEBUGGING\n# .go()\n# sys.exit(0)\n\nwhile command != \"\":\n \n if command == \"help\":\n\n print(\n \"\"\"\nWelcome to Mihir Kasmalkar's Acts Project!\n\nThis program contains animations in the terminal for various stories\nin the Book of Acts.\n\nPlease excuse the bad jokes.\n\nHere is a list of all of the stories:\n \"\"\"\n )\n\n for story in storyList:\n print(\"\\t\", story.name)\n\n print()\n\n else:\n story = [story for story in storyList if story.name == command]\n if len(story) > 0:\n t = Terminal()\n t.draw(\n \"\"\"\n\n\n\n\n \n The verses in this animation were taken from\n \"\"\" + story[0].reference + \"\"\"\n\n\n\n\n\n \"\"\",\n 3\n )\n t.draw(\n r\"\"\"\n\n\n \n /\\ \n / \\ o o \n / \\ $$$$ ------ \n / \\ $ | o o\n / \\ $$$$ | o o\n / \\ $ | o o\n / \\ $$$$ ------ o o\n\n\n \"\"\",\n 0.6\n )\n t.draw(\n r\"\"\"\n\n\n \n /\\ \n / \\ ! 
\n / \\ ------ ======= $$$$\n / \\ | ! $\n / \\ | ! $$$$\n / \\ | ! $\n / \\ ------ ! $$$$\n\n\n \"\"\",\n 0.6\n )\n t.draw(\n r\"\"\"\n\n\n \n \n \n \n \n \n \n \n\n\n \"\"\",\n 1\n )\n\n story[0].go(t)\n else:\n print(\"I cannot recognize this story\")\n\n command = input(\"story> \").lower()"
}
] | 3 |
huzichunjohn/nova-dns | https://github.com/huzichunjohn/nova-dns | 9cea6e47449521a72ae6ef62174aecdea1383704 | 5307a480aa1da8b9ff71eeec3d4589c38cda7627 | 9b62b8b67e1c4ea4554f5528a9d78d22670da058 | refs/heads/master | 2021-01-18T10:59:42.502353 | 2012-04-09T21:18:07 | 2012-04-09T21:18:07 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5992714166641235,
"alphanum_fraction": 0.6047359108924866,
"avg_line_length": 33.1875,
"blob_id": "e42d7baeab731fef0683e0f250824d8bb9d74f2c",
"content_id": "c72c4bc1429d2da72fbbab13c01ef68708bd667b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 549,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 16,
"path": "/nova_dns/__init__.py",
"repo_name": "huzichunjohn/nova-dns",
"src_encoding": "UTF-8",
"text": "\n__version__ = \"0.2.2\"\n\ntry:\n from nova import flags\n FLAGS = flags.FLAGS\n\n flags.DEFINE_string(\"dns_manager\", \"nova_dns.dnsmanager.powerdns.Manager\",\n \"DNS manager class\")\n flags.DEFINE_string(\"dns_listener\", \"nova_dns.listener.simple.Listener\",\n \"Class to process AMQP messages\")\n flags.DEFINE_string(\"dns_api_paste_config\", \"/etc/nova-dns/dns-api-paste.ini\",\n \"File name for the paste.deploy config for nova-dns api\")\n\nexcept:\n #make setup.py happy\n pass\n\n"
}
] | 1 |
singhamritanshu/face_recognition | https://github.com/singhamritanshu/face_recognition | aabe469aefede50bb21bdc006da5eddbe4f26d66 | 3401eb5472787c7783155e894b565efd1b90d921 | a3350e9ad6c6556507a7558d67b1cfe2cf8af0ef | refs/heads/master | 2023-02-28T20:34:45.433916 | 2021-02-10T17:44:07 | 2021-02-10T17:44:07 | 337,804,068 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6984572410583496,
"alphanum_fraction": 0.7166900634765625,
"avg_line_length": 30.04347801208496,
"blob_id": "eba88e43285600b67ae1c68245d8219a279dd080",
"content_id": "e6ffdc508d91e8b35e198292b032ccfa6de5e5f8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 713,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 23,
"path": "/face_recognition.py",
"repo_name": "singhamritanshu/face_recognition",
"src_encoding": "UTF-8",
"text": "from os import confstr\nimport cv2 as cv \nimport numpy as np \n\nhaar_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')\nface_recognizer = cv.face.LBPHFaceRecognizer_create()\nface_recognizer.read(\"face_trained.yml\")\n\nimg = cv.imread(\"test_images/2.jpg\")\ngray = cv.cvtColor(img,cv.COLOR_BGR2GRAY)\ncv.imshow(\"Person\",gray)\n\n# Detect faces\n\nface_rect = haar_cascade.detectMultiScale(gray,1.1,8)\nfor (x,y,w,h) in face_rect:\n face_roi = gray[y:y+h,x:x+w]\n label, confidence = face_recognizer.predict(face_roi)\n print(\"Name of the person in the image\", label,\" with confidence of\",confidence)\n cv.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)\n cv.imshow(\"Detected image\", img)\n\ncv.waitKey(0)"
},
{
"alpha_fraction": 0.7040618658065796,
"alphanum_fraction": 0.7292069792747498,
"avg_line_length": 27.77777862548828,
"blob_id": "7ad0ff35b4a2b41d2bcfc69513e9ed25afd58417",
"content_id": "a0fc29aa06526d6b8281b659b265fd448bd848b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 517,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 18,
"path": "/face_detection.py",
"repo_name": "singhamritanshu/face_recognition",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\n\nimg = cv.imread(\"test_images/4.png\")\ncv.imshow(\"Original\",img)\n\ngray = cv.cvtColor(img,cv.COLOR_BGR2GRAY)\ncv.imshow(\"Gray\",gray)\n\nhaar_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')\n\nrect_cord = haar_cascade.detectMultiScale(gray,scaleFactor=1.1,minNeighbors=8)\nprint(\"number of face found\",len(rect_cord))\n\n# Drawing the rectange on the original image \nfor (x,y,w,h) in rect_cord:\n cv.rectangle(img,(x,y),(x+w,y+h),(255,0,0),3)\ncv.imshow(\"Face Detected\",img)\ncv.waitKey(0)"
},
{
"alpha_fraction": 0.6481481194496155,
"alphanum_fraction": 0.6515775322914124,
"avg_line_length": 30.042552947998047,
"blob_id": "714cffb393b95c8d00b0f2fd5ad128f977a56b33",
"content_id": "f7e895de9c2c2c911cf525cd6ec2a77e2bbbd2f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1458,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 47,
"path": "/face_train.py",
"repo_name": "singhamritanshu/face_recognition",
"src_encoding": "UTF-8",
"text": "import os \nimport cv2 as cv\nimport numpy as np \n\npeople =[]\nfor i in os.listdir(r'face'):\n people.append(i)\nprint(people)\nDIR = r'face'\n\nhaar_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')\n\nfeatures = []\nlabels = []\n# Croping the face from the image \ndef collect_feature():\n for person in people:\n path = os.path.join(DIR,person)\n label = people.index(person)\n if os.path.isdir(path):\n for img in os.listdir(path):\n img_path = os.path.join(path,img)\n img_array = cv.imread(img_path)\n gray = cv.cvtColor(img_array,cv.COLOR_BGR2GRAY)\n rect_cord = haar_cascade.detectMultiScale(gray,scaleFactor=1.1,minNeighbors=8)\n\n for(x,y,w,h) in rect_cord:\n face_roi = gray[y:y+h,x:x+w]\n features.append(face_roi)\n labels.append(label)\n\ncollect_feature()\nprint(\"Training done\")\n#print(\"Numbers of Features\", len(features))\n#print(\"Number of Labels\",len(labels))\n\nface_recognizer = cv.face.LBPHFaceRecognizer_create()\n\n# Train the recognizer on the features list and the label\nfeatures = np.array(features,dtype='object')\nlabels = np.array(labels)\nface_recognizer.train(features,labels)\n\nface_recognizer.save('face_trained.yml') # Saving the face trained model so that we can use it.\n# Saving the features and the labels\nnp.save(\"features.npy\",features)\nnp.save(\"labels.npy\",labels)"
}
] | 3 |
efhayes3/django_ui | https://github.com/efhayes3/django_ui | a3f1075fb78e52ca9c03dbebba116d2220425740 | d5188f39efc31f55e3f1eeaf04892f518e8d45c9 | bfc3ae79e42e77bf51a6d0b068f943cc493ee9a4 | refs/heads/master | 2021-01-21T06:47:20.511265 | 2017-02-23T01:53:00 | 2017-02-23T01:53:00 | 82,872,099 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7079107761383057,
"alphanum_fraction": 0.7079107761383057,
"avg_line_length": 24.947368621826172,
"blob_id": "29a6ff2785f45768daca4d4b0060e82ec3961e90",
"content_id": "a4664b6bcbe6d62091055fc632cbc6a3c305836c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 493,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 19,
"path": "/web/search/views.py",
"repo_name": "efhayes3/django_ui",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.http import HttpResponse\nfrom django import forms\n\nfrom .forms import SubmitNeighborhood, SubmitAlteredParameters\n\n\ndef start(request):\n form = SubmitNeighborhood()\n # args = {}\n # args['neighborhood'] = form.cleaned_data['neighborhood']\n context = {'form': form}\n\n return render(request, 'index.html', context)\n\n# def alter(request, neighborhood):\n # form = SubmitAlteredParameters()\n # context['form'] = form\n # return\n"
},
{
"alpha_fraction": 0.6563193202018738,
"alphanum_fraction": 0.6651884913444519,
"avg_line_length": 27.1875,
"blob_id": "b8a90505919b9612246dfd82210e0bb6b016d360",
"content_id": "bcf84f86cb9e28177c0f79709b899e0c3a9b2759",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 451,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 16,
"path": "/web/search/forms.py",
"repo_name": "efhayes3/django_ui",
"src_encoding": "UTF-8",
"text": "from django import forms\n\n\nNEIGHBORHOODS = ['Hyde Park', 'Loop', 'Gold Coast']\n\n\nclass SubmitNeighborhood(forms.Form):\n NEIGHBORHOODS = ['Hyde Park', 'Loop', 'Gold Coast']\n\n neighborhood = forms.ChoiceField(label='Select Neighborhood',\n choices=NEIGHBORHOODS)\n\n\nclass SubmitAlteredParameters(forms.Form):\n alt_crime = forms.IntegerField()\n alt_school = forms.IntegerField(min_value=0, max_value=100)\n"
}
] | 2 |
gabriellavor/bottle | https://github.com/gabriellavor/bottle | ed0e4697e95bc64237267f5698262448c5fba24e | 9529954b6ca15825bf79b6a6e36a3ad496787e4d | ff4b8c6c505e57a92c0fbbbd6c93a23a92033e5f | refs/heads/master | 2020-04-15T08:02:12.935261 | 2019-01-07T23:26:49 | 2019-01-07T23:26:49 | 164,513,212 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6365853548049927,
"alphanum_fraction": 0.6365853548049927,
"avg_line_length": 18.5238094329834,
"blob_id": "85244001596755dc531937a16e18c82b6047b611",
"content_id": "f0c0e0e4fe32732d697555b4f8946d8410bad417",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 21,
"path": "/test_index.py",
"repo_name": "gabriellavor/bottle",
"src_encoding": "UTF-8",
"text": "import app\nfrom boddle import boddle\nfrom webtest import TestApp\n\nfrom nose import with_setup # optional\nfrom nose.tools import *\n\n\ndef test_hello():\n \n print(app.index('iiii'))\n #assert app.index() == 'iiii'\n\ndef test_login():\n app = TestApp(app)\n app.post('/login', {'username':'Derek','password':'kdodod'}) \n \n print(app )\n #assert mywebapp.login() == 'Hi Derek!'\n\ntest_hello(); "
},
{
"alpha_fraction": 0.6131889820098877,
"alphanum_fraction": 0.6200787425041199,
"avg_line_length": 29.787878036499023,
"blob_id": "741a832a968d14de2d0b595fd410f63e5e426d07",
"content_id": "1f9999d735b818bd553ff450947143181ab8c51b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1016,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 33,
"path": "/index.py",
"repo_name": "gabriellavor/bottle",
"src_encoding": "UTF-8",
"text": "from bottle import get, post, request ,route, run, template,HTTPResponse\nimport json\n#pip install bottle\n@route('/hello/<name>')\ndef index(name):\n return template('<b>Hello {{name}}</b>!', name=name)\n\n@get('/login')\ndef login():\n return '''\n <form action=\"/login\" method=\"post\">\n Username: <input name=\"username\" type=\"text\" />\n Password: <input name=\"password\" type=\"password\" />\n <input value=\"Login\" type=\"submit\" />\n </form>\n '''\n@post('/login')\ndef login_post():\n username = request.forms.get('username')\n password = request.forms.get('password')\n if check_login(username, password):\n headers = {'Content-type': 'application/json'}\n theBody = json.dumps({'msg': 'Your login information was correct'}) \n return HTTPResponse(status=300, body=theBody,headers=headers)\n else:\n return \"<p>Login failed.</p>\"\n\n\ndef check_login(username, password):\n return True\n\nif __name__ == '__main__':\n run(host='localhost', port=8080)\n"
},
{
"alpha_fraction": 0.4733847975730896,
"alphanum_fraction": 0.5046728849411011,
"avg_line_length": 28.650602340698242,
"blob_id": "9e046d45095f49e92fb3802eb967ddff1bbb71f6",
"content_id": "ecc43e7c8c85d66c33d9832193e26b49d7d1002d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2461,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 83,
"path": "/app.py",
"repo_name": "gabriellavor/bottle",
"src_encoding": "UTF-8",
"text": "from bottle import get, post, request ,route, run, template,HTTPResponse\nimport json\nfrom pymongo import MongoClient\nfrom bson import json_util\nimport datetime\n\n#pip install bottle\n@route('/localizacao')\ndef localizacao():\n cliente = MongoClient('localhost', 27017)\n banco = cliente.truckpad\n checkin = banco.checkin\n \n retorno = []\n for x in checkin.find({},{'_id':0}):\n retorno.append(\n {\n 'nome' : x['nome'],\n 'idade' : x['idade'],\n 'sexo' : x['sexo'],\n #'carregado' : x['carregado'],\n 'tipo_cnh' : x['tipo_cnh'],\n 'tipo_veiculo_codigo': x['tipo_veiculo_codigo'],\n 'tipo_veiculo_descricao': x['tipo_veiculo_descricao'],\n 'veiculo_proprio': x['veiculo_proprio']\n \n }\n \n )\n \n return retornar(retorno,200)\n\n\n@route('/cadastrar')\ndef cadastrar():\n cliente = MongoClient('localhost', 27017)\n banco = cliente.truckpad\n checkin = banco.checkin\n mydict = {\n 'nome':'Gabriel Lavor',\n 'idade':10, \n 'tipo_cnh':'B',\n 'tipo_veiculo_codigo':2,\n 'tipo_veiculo_descricao':'Truck',\n 'veiculo_proprio':False,\n 'sexo':'M',\n 'checkin':[\n {\n 'carregado':True,\n 'origem':'Extra',\n 'latitude_origem':-23.5555,\n 'longitude_origem':-57.6644,\n 'destino':'Mercearia',\n 'latitude_destino':-20.5555,\n 'longitude_destino':-50.6644,\n 'data': datetime.datetime.now()\n },\n {\n 'carregado':False,\n 'origem':'Extra 2',\n 'latitude_origem':-23.5555,\n 'longitude_origem':-57.6644,\n 'destino':'Mercearia 2',\n 'latitude_destino':-20.5555,\n 'longitude_destino':-50.6644,\n 'data': datetime.datetime.now()\n }\n ]\n \n }\n retorno = checkin.insert_one(mydict)\n if(retorno.inserted_id):\n return retornar('sucesso',200)\n else:\n return retornar('erro',400)\n\ndef retornar(retorno,status):\n headers = {'Content-type': 'application/json'}\n theBody = {'retorno': retorno}\n return HTTPResponse(status=status, body=theBody,headers=headers)\n\nif __name__ == '__main__':\n run(host='localhost', port=8080)\n"
},
{
"alpha_fraction": 0.6002020835876465,
"alphanum_fraction": 0.610194206237793,
"avg_line_length": 31.38909149169922,
"blob_id": "932fac9c21d7ffb7e5b0d826a0bcc1da760e55d9",
"content_id": "8eae2d3ba5a86dfce3b42a049cd9bc50ea6cb68e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8908,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 275,
"path": "/app2.py",
"repo_name": "gabriellavor/bottle",
"src_encoding": "UTF-8",
"text": "from bottle import get, post, put,request ,route, run, template,HTTPResponse\nimport json\nfrom Mysql import Mysql\nfrom bson import json_util\nimport datetime\nfrom datetime import date, timedelta\n\n\n#pip install bottle\n@route('/localizacao')\ndef localizacao():\n bd = Mysql()\n retorno = []\n cursor = bd.cursor\n query = (\"SELECT motorista.codigo AS codigo,nome, idade, sexo, tipo_cnh, veiculo_proprio, tipo_veiculo.descricao AS tipo_veiculo_descricao, \\\n (SELECT carregado FROM checkin WHERE checkin.codigo_motorista = motorista.codigo order by data DESC LIMIT 1) AS carregado\\\n FROM motorista\\\n INNER JOIN tipo_veiculo ON (tipo_veiculo.codigo = motorista.tipo_veiculo)\")\n bd.query(query)\n resultado = bd.fetchall()\n\n for dado in resultado:\n retorno.append({\n 'codigo': dado[0],\n 'name': dado[1],\n 'idade': dado[2],\n 'sexo': dado[3],\n 'tipo_cnh': dado[4],\n 'veiculo_proprio':dado[5], \n 'tipo_veiculo_descricao':dado[6],\n 'carregado':dado[7],\n })\n \n return retornar(retorno,200)\n\n\n@route('/veiculo-vazio')\ndef veiculo_proprio():\n bd = Mysql()\n cursor = bd.cursor\n query = (\"SELECT nome, idade, sexo, tipo_cnh, veiculo_proprio, tipo_veiculo.descricao AS tipo_veiculo_descricao, \\\n (SELECT carregado FROM checkin WHERE checkin.codigo_motorista = motorista.codigo order by data DESC LIMIT 1) AS carregado\\\n FROM motorista\\\n INNER JOIN tipo_veiculo ON (tipo_veiculo.codigo = motorista.tipo_veiculo)\\\n WHERE (SELECT carregado FROM checkin WHERE checkin.codigo_motorista = motorista.codigo order by data DESC LIMIT 1) = 0\\\n \")\n bd.query(query)\n retorno = []\n resultado = bd.fetchall()\n for dado in resultado:\n retorno.append({\n 'name': dado[0],\n 'idade': dado[1],\n 'sexo': dado[2],\n 'tipo_cnh': dado[3],\n 'veiculo_proprio':dado[4], \n 'tipo_veiculo_descricao':dado[5],\n 'carregado':dado[6],\n })\n \n return retornar(retorno,200)\n\n\n@route('/veiculo-proprio')\ndef veiculo_proprio():\n retorno = []\n bd = Mysql()\n cursor = 
bd.cursor\n query = (\"SELECT count(*) AS qtd FROM motorista WHERE veiculo_proprio = 1\")\n bd.query(query)\n \n resultado = bd.fetchall()\n\n for dado in resultado:\n retorno.append({\n 'qtd': dado[0]\n })\n \n return retornar(retorno,200)\n\n@route('/veiculo-terminal')\ndef veiculo_por_terminal():\n retorno = []\n bd = Mysql()\n cursor = bd.cursor\n data = inicio_fim_semana()\n\n query = (\"SELECT (SELECT COUNT(*) FROM checkin WHERE Day(data) = Day(NOW())) as dia,\\\n (SELECT COUNT(*) FROM checkin WHERE data between '\"+data[0]+\"' and '\"+data[1]+\"') as semana,\\\n (SELECT COUNT(*) FROM checkin WHERE Month(data) = Month(NOW())) as mes\\\n FROM checkin LIMIT 1\")\n\n bd.query(query)\n resultado = bd.fetchall()\n\n for dado in resultado:\n retorno.append({\n 'dia': dado[0],\n 'semana': dado[1],\n 'mes': dado[2],\n })\n \n return retornar(retorno,200)\n\n@route('/lista-tipo')\ndef lista_tipo():\n retorno = []\n bd = Mysql()\n cursor = bd.cursor\n data = inicio_fim_semana()\n\n query = (\"SELECT 'origem' as tipo,origem.descricao,tipo_veiculo.descricao,count(*) as qtd FROM checkin\\\n inner join motorista ON (motorista.codigo = checkin.codigo_motorista)\\\n inner join tipo_veiculo ON (tipo_veiculo.codigo = motorista.tipo_veiculo)\\\n INNER JOIN local origem ON (origem.codigo = checkin.codigo_origem)\\\n GROUP BY origem.descricao,tipo_veiculo.descricao\\\n UNION\\\n SELECT 'destino' as tipo ,destino.descricao,tipo_veiculo.descricao,count(*) as qtd FROM checkin\\\n inner join motorista ON (motorista.codigo = checkin.codigo_motorista)\\\n inner join tipo_veiculo ON (tipo_veiculo.codigo = motorista.tipo_veiculo)\\\n INNER JOIN local destino ON (destino.codigo = checkin.codigo_destino)\\\n GROUP BY destino.descricao,tipo_veiculo.descricao\\\n \")\n\n bd.query(query)\n \n resultado = bd.fetchall()\n\n for dado in resultado:\n retorno.append({\n 'tipo': dado[0],\n 'descricao_local': dado[1],\n 'descricao_tipo_veiculo': dado[2],\n 'qtd_veiculos': dado[3],\n })\n return 
retornar(retorno,200)\n\n\n@put('/atualizar')\ndef atualizar():\n bd = Mysql()\n cursor = bd.cursor\n post = json.loads(request.body.getvalue().decode('utf-8'))\n #motorista\n codigo = post[\"codigo\"]\n nome = post[\"nome\"]\n idade = post[\"idade\"]\n veiculo_proprio = post[\"veiculo_proprio\"]\n tipo_cnh = post[\"tipo_cnh\"]\n sexo = post[\"sexo\"]\n tipo_veiculo = post[\"tipo_veiculo\"]\n\n codigo = retorna_motorista_por_codigo(codigo)\n if(codigo != False):\n sql_motorista = \"\"\" UPDATE `motorista`\n SET `nome` = %s ,`idade` = %s ,`sexo` = %s ,`veiculo_proprio` = %s ,`tipo_veiculo` = %s ,`tipo_cnh` = %s WHERE codigo = %s\"\"\"\n bd.query(sql_motorista,(nome,idade,sexo,veiculo_proprio,tipo_veiculo,tipo_cnh,codigo))\n bd.commit()\n if(cursor.rowcount > 0):\n return retornar('Alterado com sucesso!',200)\n else:\n return retornar('Nenhum registro foi alterado!',200)\n \n else:\n retorno = 'Motorista Informado não existe'\n return retornar(retorno,400)\n \n\n@post('/cadastrar')\ndef cadastrar():\n bd = Mysql()\n cursor = bd.cursor\n post = json.loads(request.body.getvalue().decode('utf-8'))\n #motorista\n nome = post[\"nome\"]\n idade = post[\"idade\"]\n veiculo_proprio = post[\"veiculo_proprio\"]\n tipo_cnh = post[\"tipo_cnh\"]\n sexo = post[\"sexo\"]\n tipo_veiculo = post[\"tipo_veiculo\"]\n #checkin\n carregado = post[\"carregado\"]\n #local\n origem_descricao = post[\"origem_descricao\"]\n origem_latitude = post[\"origem_latitude\"]\n origem_longitude = post[\"origem_longitude\"]\n destino_descricao = post[\"destino_descricao\"]\n destino_latitude = post[\"destino_latitude\"]\n destino_longitude = post[\"destino_longitude\"]\n \n sql_local = \"\"\" INSERT INTO `local`\n (`descricao`, `latitude`, `longitude`) VALUES (%s,%s,%s)\"\"\"\n \n sql_motorista = \"\"\" INSERT INTO `motorista`\n (`nome`, `idade`, `sexo`, `veiculo_proprio`,`tipo_veiculo`,`tipo_cnh`) VALUES (%s,%s,%s,%s,%s,%s)\"\"\"\n\n sql_checkin = \"\"\" INSERT INTO `checkin`\n (`data`, 
`codigo_origem`, `codigo_destino`, `carregado`,`codigo_motorista`) VALUES (%s,%s,%s,%s,%s)\"\"\"\n\n id_origem = retorna_codigo_local(origem_descricao)\n if(id_origem == False):\n bd.query(sql_local,(origem_descricao,origem_latitude,origem_longitude))\n id_origem = cursor.lastrowid\n id_destino = retorna_codigo_local(destino_descricao)\n \n if(id_destino == False):\n bd.query(sql_local,(destino_descricao,destino_latitude,destino_longitude))\n id_destino = cursor.lastrowid\n id_motorista = retorna_motorista(nome)\n \n if(id_motorista == False):\n bd.query(sql_motorista,(nome,idade,sexo,veiculo_proprio,tipo_veiculo,tipo_cnh))\n id_motorista = cursor.lastrowid\n bd.query(sql_checkin,(datetime.datetime.now(),id_origem,id_destino,carregado,id_motorista))\n id_checkin = cursor.lastrowid\n \n bd.commit()\n if(id_checkin):\n return retornar('sucesso',200)\n else:\n return retornar('erro',400)\n\ndef retornar(retorno,status):\n headers = {'Content-type': 'application/json'}\n theBody = {'retorno': retorno}\n return HTTPResponse(status=status, body=theBody,headers=headers)\n\ndef retorna_codigo_local(descricao):\n bd = Mysql()\n cursor = bd.cursor\n query = (\"SELECT codigo FROM local Where descricao = '\"+str(descricao)+\"'\")\n bd.query(query)\n resultado = bd.fetchall()\n \n for dado in resultado:\n return dado[0]\n return False\n\ndef retorna_motorista(nome):\n bd = Mysql()\n cursor = bd.cursor\n query = (\"SELECT codigo FROM motorista Where nome = '\"+str(nome)+\"'\")\n bd.query(query)\n resultado = bd.fetchall()\n \n for dado in resultado:\n return dado[0]\n return False\n\ndef retorna_motorista_por_codigo(codigo):\n bd = Mysql()\n cursor = bd.cursor\n query = (\"SELECT codigo FROM motorista Where codigo = '\"+str(codigo)+\"'\")\n bd.query(query)\n resultado = bd.fetchall()\n \n for dado in resultado:\n return dado[0]\n return False\n\ndef inicio_fim_semana():\n d = date.today()\n\n if(d.weekday() == 6):\n ini = 0\n fim = 6\n else:\n ini = (d.weekday()+1) * -1\n fim = 
5-d.weekday()\n data = (str(date.today() + timedelta(days=ini)) +' 00:00:00',str(date.today() + timedelta(days=fim)) +' 23:59:59') \n \n return data\n\nif __name__ == '__main__':\n run(host='localhost', port=8080)\n"
}
] | 4 |
zadacka/tdd | https://github.com/zadacka/tdd | e4d786dd4a4752ba6164e3ecdd1c4f07315e9809 | 3da9e5135d92eff8bd192f3087bab360333ba61e | e292df32530e8d42c6466af4dce96f5e4fbeb646 | refs/heads/master | 2017-12-22T08:44:27.481569 | 2016-11-03T23:25:59 | 2016-11-04T22:09:52 | 72,795,350 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6157894730567932,
"alphanum_fraction": 0.6184210777282715,
"avg_line_length": 41.27777862548828,
"blob_id": "bafaa8e26634212737c8fc8f1ba1c5522fb6bcc6",
"content_id": "c88dcac66739f4406f70ca8a8d929a01b0d28e06",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 760,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 18,
"path": "/wrap/wrap.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "def wrap(text, wrap_size):\n \"\"\"\n Brief:\n - wrap text so that no more than wrap_size characters are on a line.\n - Try to break words naturally, only break a word if there is no alternative\n :param text: input string\n :param wrap_size: an integer representing the maximum wrap width\n :return: a 'wrapped' string containing newline characters\n \"\"\"\n if len(text) <= wrap_size:\n return text\n else:\n index_of_last_space = text.rfind(' ', 0, wrap_size)\n if index_of_last_space == -1 or text[wrap_size] == ' ':\n index_to_break_at = wrap_size\n else:\n index_to_break_at = index_of_last_space\n return text[:index_to_break_at] + '\\n' + wrap(text[index_to_break_at:].strip(), wrap_size)"
},
{
"alpha_fraction": 0.6807313561439514,
"alphanum_fraction": 0.6905766725540161,
"avg_line_length": 27.453332901000977,
"blob_id": "d1edbe5307c258c0950315efe83d9e66fd043d2a",
"content_id": "cd6c14e98337f43721d7d665a46ab54c0d85926d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2133,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 75,
"path": "/hvac/hvac.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "LOWER_THRESHOLD = 65\nUPPER_THRESHOLD = 75\n\nTIME_TO_KEEP_FAN_RUNNING = 3\nTIME_TO_KEEP_COOLER_OFF = 5\n\n\ndef thermostatSaysHeaterShouldBeOn(temperature):\n if temperature < LOWER_THRESHOLD:\n return True\n return False\n\n\ndef thermostatSaysFanShouldBeOn(temperature):\n if temperature < LOWER_THRESHOLD:\n return True\n return False\n\n\ndef thermostatSaysCoolerShouldBeOn(temperature):\n return temperature > UPPER_THRESHOLD\n\ndef timerFanMustStayOn(time):\n if time < TIME_TO_KEEP_FAN_RUNNING:\n return True\n return False\n\n\ndef timerAllowsCoolerOn(time):\n if time < TIME_TO_KEEP_COOLER_OFF:\n return False\n return True\n\n\nclass EnvironmentController (object):\n def __init__(self):\n self.heaterIsOn = False\n self.fanIsOn = False\n self.coolerIsOn = False\n self.timeSinceHeaterOn = 0\n self.timeSinceCoolerOn = 0\n\n def heater(self, turnOn):\n self.timeSinceHeaterOn = 0 if turnOn else self.timeSinceHeaterOn + 1\n self.heaterIsOn = turnOn\n\n def cooler(self, turnOn):\n self.timeSinceCoolerOn = 0 if turnOn else self.timeSinceCoolerOn + 1\n self.coolerIsOn = turnOn\n\n def fan(self, turnOn):\n self.fanIsOn = turnOn\n\n def getTemperature(self):\n return 0\n\n def handle(self):\n temperature = self.getTemperature()\n self.cooler(True) if (\n thermostatSaysCoolerShouldBeOn(temperature) and timerAllowsCoolerOn(self.timeSinceCoolerOn)\n ) else self.cooler(False)\n self.fan(True) if (\n thermostatSaysFanShouldBeOn(temperature) or timerFanMustStayOn(self.timeSinceHeaterOn)\n ) else self.fan(False)\n self.heater(True) if thermostatSaysHeaterShouldBeOn(temperature) else self.heater(False)\n\n\n# when tick is called every minute:\n# if temperature < 65 turn on the heater and the fan\n# if temperature > 75 turn on the cooler\n# if temperature > 65 < 75 turn everything off\n\n# but ...\n# if the heater was turned off, keep the fan running for three minutes ()\n# if the cooler was turned off, keep it off for five minutes (freeon must rest for three minutes)"
},
{
"alpha_fraction": 0.5789473652839661,
"alphanum_fraction": 0.6535087823867798,
"avg_line_length": 18.542856216430664,
"blob_id": "75ee95c034333f52fdb1c4be74e2e2528255b764",
"content_id": "660496f75a9ee24cf829d0ff724a30cf24847922",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 684,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 35,
"path": "/primefactors/tests/test_primefactors.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "from testfixtures import compare\n\nfrom tdd.primefactors.primefactors import factorsOf\n\n\ndef test_factorsOf1():\n compare(factorsOf(1), expected=[])\n\n\ndef test_factorsOf2():\n compare(factorsOf(2), expected=[2])\n\n\ndef test_factorsOf3():\n compare(factorsOf(3), expected=[3])\n\n\ndef test_factorsOf4():\n compare(factorsOf(4), expected=[2, 2])\n\n\ndef test_factorsOf6():\n compare(factorsOf(6), expected=[2, 3])\n\n\ndef test_factorsOf8():\n compare(factorsOf(8), expected=[2, 2, 2])\n\n\ndef test_factorsOf9():\n compare(factorsOf(9), expected=[3, 3])\n\n\ndef test_factorsOfN():\n compare(factorsOf(2 * 2 * 3 * 5 * 7 * 13 * 13 * 13 * 31), expected=[2, 2, 3, 5, 7, 13, 13, 13, 31])\n"
},
{
"alpha_fraction": 0.6467304825782776,
"alphanum_fraction": 0.6515151262283325,
"avg_line_length": 30.350000381469727,
"blob_id": "e9b26be559451565a1641f5ebd4e0b9a90358306",
"content_id": "7680c6dd7ef7cfcc6d39cfb17b5de25bd963f5fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1254,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 40,
"path": "/queue/tests/test_queue.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "from unittest import TestCase\n\nfrom nose.tools import assert_raises\nfrom testfixtures import compare\n\nfrom tdd.queue.queue import Queue\n\n\nclass TestStack(TestCase):\n def setUp(self):\n self.queue = Queue()\n\n def test_newQueue_isEmpty(self):\n compare(self.queue.length(), expected=0)\n\n def test_add_incrementsLength(self):\n self.queue.add('first customer!')\n compare(self.queue.length(), expected=1)\n self.queue.add('second customer!')\n compare(self.queue.length(), expected=2)\n\n def test_get_retrievesFirstInQueueFirst(self):\n self.queue.add('first in, first out!')\n self.queue.add('last in, last out!')\n compare(self.queue.get(), expected='first in, first out!')\n compare(self.queue.get(), expected='first in, last out!')\n\n def test_get_decrementsLength(self):\n self.queue.add('first customer!')\n self.queue.add('second customer!')\n\n compare(self.queue.length(), expected=2)\n self.queue.get()\n compare(self.queue.length(), expected=1)\n self.queue.get()\n compare(self.queue.length(), expected=0)\n\n def test_get_raisesIndexErrorIfQueueIsEmpty(self):\n with assert_raises(IndexError):\n self.queue.get()\n"
},
{
"alpha_fraction": 0.625250518321991,
"alphanum_fraction": 0.6332665085792542,
"avg_line_length": 28.352941513061523,
"blob_id": "05c5b1939eb02bf6f03260c89893f1b43fe443a5",
"content_id": "6000e31453781f11f0272f85e4eaf91769671841",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 499,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 17,
"path": "/queue/queue.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "class Queue(object):\n def __init__(self):\n self._firstElementPointer = 0\n self._lastElementPointer = 0\n self._elements = []\n\n def add(self, value):\n self._lastElementPointer += 1\n self._elements.append(value)\n\n def length(self):\n return self._lastElementPointer - self._firstElementPointer\n\n def get(self):\n frontOfTheQueue = self._elements[self._firstElementPointer]\n self._firstElementPointer += 1\n return frontOfTheQueue\n"
},
{
"alpha_fraction": 0.7603761553764343,
"alphanum_fraction": 0.7632943987846375,
"avg_line_length": 32.5217399597168,
"blob_id": "fdbbdbb14b3f9ce57da129828f1a241c32bbad3b",
"content_id": "8c75abb456b160b3b2b3419b9637b505d769eebe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3084,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 92,
"path": "/hvac/tests/test_hvac.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "from testfixtures import compare\n\nfrom tdd.hvac.hvac import thermostatSaysHeaterShouldBeOn, thermostatSaysCoolerShouldBeOn, LOWER_THRESHOLD, \\\n UPPER_THRESHOLD, thermostatSaysFanShouldBeOn, \\\n TIME_TO_KEEP_FAN_RUNNING, timerFanMustStayOn, TIME_TO_KEEP_COOLER_OFF, timerAllowsCoolerOn, EnvironmentController\n\nDELTA = 5\n\nTOO_COLD = LOWER_THRESHOLD - DELTA\nTOO_HOT = UPPER_THRESHOLD + DELTA\n\n\ndef test_thermostatSaysHeaterShouldBeOn():\n temperature = TOO_COLD\n compare(thermostatSaysHeaterShouldBeOn(temperature), expected=True)\n\n temperature = LOWER_THRESHOLD + DELTA\n compare(thermostatSaysHeaterShouldBeOn(temperature), expected=False)\n\n\ndef test_thermostatSaysFanShouldBeOn():\n temperature = TOO_COLD\n compare(thermostatSaysFanShouldBeOn(temperature), expected=True)\n temperature = LOWER_THRESHOLD + DELTA\n compare(thermostatSaysFanShouldBeOn(temperature), expected=False)\n\n\ndef test_thermostatSaysCoolerShouldBeOn():\n temperature = TOO_HOT\n compare(thermostatSaysCoolerShouldBeOn(temperature), expected=True)\n temperature = UPPER_THRESHOLD - DELTA\n compare(thermostatSaysCoolerShouldBeOn(temperature), expected=False)\n\n\ndef test_timerSaysFanShouldBeOn():\n shortTimeFanMustStayOn = TIME_TO_KEEP_FAN_RUNNING - 1\n compare(timerFanMustStayOn(shortTimeFanMustStayOn), expected=True)\n\n longTimeFanCanBeOff = TIME_TO_KEEP_FAN_RUNNING + 1\n compare(timerFanMustStayOn(longTimeFanCanBeOff), expected=False)\n\n\ndef test_timerSaysCoolerMustStayOff():\n coolerMustStayOff = TIME_TO_KEEP_COOLER_OFF - 1\n compare(timerAllowsCoolerOn(coolerMustStayOff), expected=False)\n coolerCanBeOn = TIME_TO_KEEP_COOLER_OFF + 1\n compare(timerAllowsCoolerOn(coolerCanBeOn), expected=True)\n\n\ndef test_environmentController_tooCold():\n hvac = EnvironmentController()\n hvac.getTemperature = lambda: TOO_COLD\n hvac.handle()\n compare(hvac.heaterIsOn, True)\n compare(hvac.fanIsOn, True)\n compare(hvac.coolerIsOn, False)\n\n\ndef 
test_environmentController_tooHotButFanMustStayOn():\n hvac = EnvironmentController()\n hvac.getTemperature = lambda: TOO_HOT\n hvac.timeSinceHeaterOn = TIME_TO_KEEP_FAN_RUNNING - 1\n hvac.handle()\n compare(hvac.heaterIsOn, False)\n compare(hvac.fanIsOn, True)\n\n\ndef test_environmentController_tooHotButFanCanTurnOff():\n hvac = EnvironmentController()\n hvac.getTemperature = lambda: TOO_HOT\n hvac.timeSinceHeaterOn = TIME_TO_KEEP_FAN_RUNNING + 1\n hvac.handle()\n compare(hvac.heaterIsOn, False)\n compare(hvac.fanIsOn, False)\n\n\ndef test_environmentController_tooHotButCoolerCannotTurnOn():\n hvac = EnvironmentController()\n hvac.getTemperature = lambda: TOO_HOT\n hvac.timeSinceCoolerOn = TIME_TO_KEEP_COOLER_OFF - 1\n hvac.handle()\n compare(hvac.coolerIsOn, False)\n\n\ndef test_environmentController_tooHotAndCoolerCanTurnOn():\n hvac = EnvironmentController()\n hvac.getTemperature = lambda: TOO_HOT\n hvac.timeSinceCoolerOn = TIME_TO_KEEP_COOLER_OFF + 1\n hvac.handle()\n compare(hvac.coolerIsOn, True)\n\n # Todo: increment state for cooler on, fan on....\n"
},
{
"alpha_fraction": 0.6229507923126221,
"alphanum_fraction": 0.7213114500045776,
"avg_line_length": 29.5,
"blob_id": "0a132a1d6b3c747c508d3159e557ccca386de46e",
"content_id": "a4bbd0cced8f26ab6a61c12a28bfdc7c778e1416",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 61,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 2,
"path": "/README.md",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "# tdd\nExercises for Bob Martin's TDD Workshop (3rd-4th Nov 2016)\n"
},
{
"alpha_fraction": 0.6785290837287903,
"alphanum_fraction": 0.682087779045105,
"avg_line_length": 29.10714340209961,
"blob_id": "fa9396813525c7aece47fdfd228c4d0d6d91343c",
"content_id": "0eb8146e49fda215e39c5c8a85679295467b1bb3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 843,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 28,
"path": "/stack/tests/test_stack.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "from unittest import TestCase\n\nfrom nose.tools import assert_raises\nfrom testfixtures import compare\n\nfrom tdd.stack.stack import Stack, UnderflowError\n\n\nclass TestStack(TestCase):\n def setUp(self):\n self.stack = Stack()\n\n def test_newStackIsEmpty(self):\n compare(self.stack.getSize(), actual=0)\n\n def test_afterOnePush_sizeIsOne(self):\n self.stack.push(0)\n compare(self.stack.getSize(), actual=1)\n\n def test_pop_retrievesLastPushedValueFirst(self):\n self.stack.push('first in, last out')\n self.stack.push('last in, first out')\n compare(self.stack.pop(), expected='last in, first out')\n compare(self.stack.pop(), expected='first in, last out')\n\n def test_poppingEmptyStack_willThrowUnderflow(self):\n with assert_raises(UnderflowError):\n self.stack.pop()\n"
},
{
"alpha_fraction": 0.5909943580627441,
"alphanum_fraction": 0.596622884273529,
"avg_line_length": 25.649999618530273,
"blob_id": "a3b2c04008b67563fa9d10f693662f36f1e21759",
"content_id": "c165c025d993d4a500c88b1b76be1f0aac475dec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 533,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 20,
"path": "/stack/stack.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "class UnderflowError(Exception):\n pass\n\n\nclass Stack(object):\n def __init__(self):\n self._stack_elements = []\n\n def getSize(self):\n return len(self._stack_elements)\n\n def push(self, new_value):\n self._stack_elements.append(new_value)\n\n def pop(self):\n if self.getSize() == 0:\n raise UnderflowError\n # top_of_stack = self._stack_elements[-1]\n # self._stack_elements = self._stack_elements[:-1]\n return self._stack_elements.pop() # I can't resist doing this.\n"
},
{
"alpha_fraction": 0.5492228269577026,
"alphanum_fraction": 0.5673575401306152,
"avg_line_length": 24.733333587646484,
"blob_id": "b143145dec3c2e9907acbaf23d9a1ae012a16b27",
"content_id": "2394e7750e0071377cd8655e6c807fd74fab8c95",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 386,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 15,
"path": "/primefactors/primefactors.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "def factorsOf(n):\n factors = []\n divisor = 2\n while n > 1:\n while n % divisor == 0:\n factors.append(divisor)\n n /= divisor\n divisor += 1\n return factors\n\n# The trick here is the journey.\n\n# Converting the while loops to C-like 'for' loops is difficult in Python!\n# for divisor = 2; n > 1; divisor++\n# for ; n % divisor == 0; n /= divisor\n"
},
{
"alpha_fraction": 0.6663865447044373,
"alphanum_fraction": 0.6781512498855591,
"avg_line_length": 53.09090805053711,
"blob_id": "e7e39b58e3436d8a5cd40ef283c8bd728e150e2c",
"content_id": "c6eeb092a57ac55d39c167b66818ec91d69a8af6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1190,
"license_type": "no_license",
"max_line_length": 223,
"num_lines": 22,
"path": "/wrap/tests/test_wrap.py",
"repo_name": "zadacka/tdd",
"src_encoding": "UTF-8",
"text": "from tdd.wrap.wrap import wrap\nfrom testfixtures import compare\n\n\ndef test_wrap():\n compare(wrap('', 1), expected='')\n compare(wrap('x', 1), expected='x')\n compare(wrap('xx', 1), expected='x\\nx')\n compare(wrap('xx', 1), expected='x\\nx')\n compare(wrap('x x', 1), expected='x\\nx')\n compare(wrap('xxx', 1), expected='x\\nx\\nx')\n compare(wrap('xxx', 2), expected='xx\\nx')\n compare(wrap('xx xx', 1), expected='x\\nx\\nx\\nx')\n compare(wrap('xx xx', 2), expected='xx\\nxx')\n compare(wrap('xx xx', 3), expected='xx\\nxx')\n compare(wrap('xx xx', 4), expected='xx\\nxx')\n compare(wrap('xx xx xx', 4), expected='xx\\nxx\\nxx')\n compare(wrap('xxx xxx xxx', 7), expected='xxx xxx\\nxxx', show_whitespace=True)\n\n sample = \"Four score and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty, and dedicated to the proposition that all men are created equal\"\n expected = \"Four\\nscore\\nand\\nseven\\nyears\\nago our\\nfathers\\nbrought\\nforth\\non this\\ncontine\\nnt a\\nnew\\nnation,\\nconceiv\\ned in\\nliberty\\n, and\\ndedicat\\ned to\\nthe\\nproposi\\ntion\\nthat\\nall men\\nare\\ncreated\\nequal\"\n compare(wrap(sample, 7), expected=expected)\n"
}
] | 11 |
Mariia997/sirin_software_task | https://github.com/Mariia997/sirin_software_task | dd8c3e68058066c3a1615e76baf2d96b93b50a89 | 0fec47e0500a9f8bc71207a9e4d285ffa3608a3a | 2e9124de9882e26360b88a0a148c958d947b5bda | refs/heads/master | 2023-03-03T08:40:53.309114 | 2021-02-11T17:10:33 | 2021-02-11T17:10:33 | 338,094,717 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6015538573265076,
"alphanum_fraction": 0.6015538573265076,
"avg_line_length": 29.03333282470703,
"blob_id": "cd44549b5890ae8e8fc732bd92d767b8372523dc",
"content_id": "e8168a8a2654043606cc7bf13246c2cd9482457c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1802,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 60,
"path": "/alias/manager.py",
"repo_name": "Mariia997/sirin_software_task",
"src_encoding": "UTF-8",
"text": "from django.db import models\nfrom django.db.models import Q\nfrom django.utils import timezone\n\nfrom alias.models import Alias\n\n\nclass AliasManager(models.Manager):\n def alias_replace(self, existing_alias, replace_at, new_alias_value):\n \"\"\"\n Create a new Alias with with alias=new_alias_value and start=replace_at, end=None\n @param existing_alias: existing alias object that will be replaced\n @type existing_alias: Alias\n @param replace_at:\n @type replace_at: datetime\n @param new_alias_value:\n @type new_alias_value:\n @return: new Alias\n @rtype:\n \"\"\"\n new_alias, _ = Alias.objects.create(\n alias=new_alias_value,\n target=existing_alias.target,\n start=replace_at,\n end=None\n )\n\n existing_alias.end = replace_at\n existing_alias.save(update_fields=[\"end\"])\n\n return new_alias\n\n def get_aliases(self, target, start, to):\n \"\"\"\n Get aliases with specific start and end\n @param target:\n @type target:str\n @param start:\n @type start: datetime\n @param to:\n @type to: datetime\n @return: Alias object with specific start and end\n @rtype: List[str, Any]\n \"\"\"\n qs = self.get_queryset()\n alias_objs = qs.filter(Q(end_isnull=True) | Q(end_lte=to), target=target, start_lte=start)\n\n return alias_objs.values_list(\"alias\", flat=True)\n\n\nclass AliasHistoricalManager(models.Manager):\n def get_queryset(self):\n \"\"\"\n Get Aliases with specific end\n @return: Aliases with specific end\n @rtype:\n \"\"\"\n qs = super(AliasHistoricalManager, self).get_queryset()\n qs = qs.filter(end_lt=timezone.now())\n return qs\n"
},
{
"alpha_fraction": 0.7177419066429138,
"alphanum_fraction": 0.7177419066429138,
"avg_line_length": 14.5,
"blob_id": "9ed08535046a1bc89276e03df8f11fae52fabd18",
"content_id": "90e9bf5a498d96de004a320f06b1e922618099d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 124,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 8,
"path": "/alias/factory.py",
"repo_name": "Mariia997/sirin_software_task",
"src_encoding": "UTF-8",
"text": "import factory\n\nfrom alias.models import Alias\n\n\nclass AliasFactory(factory.Factory):\n class Meta:\n model = Alias\n"
},
{
"alpha_fraction": 0.523809552192688,
"alphanum_fraction": 0.6904761791229248,
"avg_line_length": 15.800000190734863,
"blob_id": "941739cece9197b510fee0e851c615a1540acc68",
"content_id": "b5c81f994659e2afbf1add5da8f513ed81a0f759",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 84,
"license_type": "no_license",
"max_line_length": 19,
"num_lines": 5,
"path": "/requirements.txt",
"repo_name": "Mariia997/sirin_software_task",
"src_encoding": "UTF-8",
"text": "asgiref==3.3.1\nDjango==3.1.6\nfactory-boy==3.2.0\nsqlparse==0.4.1\ntext-unidecode==1.3\n"
},
{
"alpha_fraction": 0.673673689365387,
"alphanum_fraction": 0.6776776909828186,
"avg_line_length": 29.272727966308594,
"blob_id": "f6882328108091fefb1e5f7811774ec41eec7d59",
"content_id": "16c9822427ebc33f9f8dfacbe18a6ed0e21b2caf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 999,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 33,
"path": "/alias/models.py",
"repo_name": "Mariia997/sirin_software_task",
"src_encoding": "UTF-8",
"text": "from django.core.exceptions import ValidationError\nfrom django.db import models\n\nfrom alias.manager import AliasManager, AliasHistoricalManager\n\n\nclass Alias(models.Model):\n alias = models.CharField(max_length=50)\n target = models.CharField(max_length=24)\n start = models.DateTimeField()\n end = models.DateTimeField()\n\n objects = AliasManager()\n historical_objs = AliasHistoricalManager()\n\n\nclass AliasHistorical(Alias):\n objects = AliasHistoricalManager()\n\n class Meta:\n proxy = True\n\n def save(self, *args, **kwargs):\n self.full_clean()\n return super(AliasHistorical, self).save(*args, **kwargs)\n\n def clean(self):\n if self.end and self.start >= self.end:\n raise ValidationError(\"The 'start' value can't be bigger than 'end' field\")\n if Alias.objects.filter(\n alias=self.alias, target=self.target,\n ).exists():\n raise ValidationError(\"We should't have two active aliases at the same period\")\n"
},
{
"alpha_fraction": 0.633368194103241,
"alphanum_fraction": 0.633368194103241,
"avg_line_length": 28.41538429260254,
"blob_id": "16b8281b6d1dfe3f07b1d10034e7152ee2fec7a0",
"content_id": "241c2adc9e9eb877e1d138753e8af6c572a6b7f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1912,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 65,
"path": "/alias/tests.py",
"repo_name": "Mariia997/sirin_software_task",
"src_encoding": "UTF-8",
"text": "from django.core.exceptions import ValidationError\nfrom django.test import TestCase\nfrom django.utils import timezone\n\nfrom alias.factory import AliasFactory\nfrom alias.models import Alias\n\n\nclass AliasManagerTest(TestCase):\n def setUp(self) -> None:\n self.existing_alias = AliasFactory()\n self.new_alias_value = \"new_alias_value\"\n\n def test_alias_replace_old_alias_after_replace(self):\n replace_at = timezone.now()\n\n _ = Alias.objects.alias_replace(\n existing_alias=self.existing_alias,\n replace_at=replace_at,\n new_alias_value=self.new_alias_value\n\n )\n\n self.existing_alias.refresh_from_db()\n self.assertEqual(self.existing_alias.end, replace_at)\n\n def test_alias_replace_new_alias(self):\n replace_at = timezone.now()\n\n new_alias = Alias.objects.alias_replace(\n existind_alias=self.existing_alias,\n replace_at=replace_at,\n new_alias_value=self.new_alias_value\n )\n\n self.assertEqual(new_alias.alias, self.new_alias_value)\n self.assertIsNone(new_alias.end)\n\n\nclass HistoricalAliassManagerTest(TestCase):\n def test_historical_end_is_not_null(self):\n exist_null = Alias.historical_objs.filter(end__isnull=True).exists()\n self.assertFalse(exist_null)\n\n\nclass AliasTest(TestCase):\n def test_overlap_success(self):\n active_alias = Alias.objects.create()\n\n new_alias, created = Alias.objects.create(\n alias=active_alias.alias,\n target=active_alias.target,\n end=\"other\",\n start='other'\n )\n\n self.assertTrue(created)\n\n def test_overlap_fail(self):\n active_alias = Alias.objects.create()\n new_alias = active_alias\n new_alias.pk = None\n\n with self.assertRaises(ValidationError):\n new_alias.save()\n"
}
] | 5 |
Rads059/django-iris | https://github.com/Rads059/django-iris | 0b705037da2fc35a899dc5b0ee86eaf50ce65a5e | 41cd53f7098e6e850b18640e367f87159e2167d4 | 2b11c5b49044341a654de6c4a2002366142b5807 | refs/heads/main | 2023-02-25T02:35:23.024471 | 2021-02-04T12:44:02 | 2021-02-04T12:44:02 | 335,055,122 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.45026177167892456,
"alphanum_fraction": 0.6753926873207092,
"avg_line_length": 14.833333015441895,
"blob_id": "d4f77f2d2aafd8883d30666a83aac1948d7cd2fb",
"content_id": "66e9df9392d879e4069b0253dc4a014c07346f1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 191,
"license_type": "no_license",
"max_line_length": 20,
"num_lines": 12,
"path": "/requirements.txt",
"repo_name": "Rads059/django-iris",
"src_encoding": "UTF-8",
"text": "Django==3.1.1\npandas==1.0.4\nnumpy==1.18.5\ngunicorn==20.0.4\nJinja2==2.11.2\npandas==1.0.4\nmatplotlib==3.3.2\nseaborn==0.11.1\npickleshare==0.7.5\nscikit_learn==0.23.2\nscipy>=0.15.1\nPillow==7.0.0\n\n"
},
{
"alpha_fraction": 0.8031914830207825,
"alphanum_fraction": 0.8031914830207825,
"avg_line_length": 36.599998474121094,
"blob_id": "5085b27cb8c2e2cfa83c1e275e49139d1e9bfd99",
"content_id": "150594a93637595d5805d981b3d28ca517a6a51b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 188,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 5,
"path": "/README.md",
"repo_name": "Rads059/django-iris",
"src_encoding": "UTF-8",
"text": "# django-iris\nIris Prediction ML model with Django web app\nIris Machine Learning Prediction with Django web-app\n\nThe project is deployed into heroku as - https://django-iris.herokuapp.com\n"
},
{
"alpha_fraction": 0.8267716765403748,
"alphanum_fraction": 0.8267716765403748,
"avg_line_length": 24.600000381469727,
"blob_id": "7ac55d8cd86afb354ebced9b98a49f8ad89946ec",
"content_id": "73540518959c3159221ea2ca832f017648f4795a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 127,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 5,
"path": "/predict/admin.py",
"repo_name": "Rads059/django-iris",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom .models import PredResults\n\n# Register your models here.\nadmin.site.register(PredResults)"
}
] | 3 |
Fitz-AI/MalwareClassification | https://github.com/Fitz-AI/MalwareClassification | 6a027212dde1a236c6332756496a391c3ef47799 | 609a243bf22dc2d2f93c62a2e14de9ebd7a95b14 | 73d18b2cc9718878fb9773bbef347cee18246aa2 | refs/heads/master | 2020-09-08T16:50:09.751620 | 2019-08-05T00:49:51 | 2019-08-05T00:49:51 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6828528046607971,
"alphanum_fraction": 0.7040970921516418,
"avg_line_length": 30.232227325439453,
"blob_id": "4c97aeaee5bee501515b34236331fe2850676f17",
"content_id": "0417f718ff476a9202661bacf56e726677075ead",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6590,
"license_type": "permissive",
"max_line_length": 225,
"num_lines": 211,
"path": "/classifyingTextENSEMBLE.py",
"repo_name": "Fitz-AI/MalwareClassification",
"src_encoding": "UTF-8",
"text": "#written by Viktor Zenkov in 2018\n\n#this file classifies using the Ensemble model, making predictions based on the predictions from the text and hex models\n\nfrom __future__ import print_function\nfrom __future__ import absolute_import\n\nimport sys\n\nimport numpy as np\nfrom numpy.random import seed\nfrom tensorflow import set_random_seed\n\nimport matplotlib.pyplot as plt\nimport math\nimport time\nimport datetime\n\nfrom keras.preprocessing import sequence\nfrom keras.models import Sequential, Model\nfrom keras.models import load_model\nfrom keras.layers import Input, Dense, Embedding\nfrom keras.layers import LSTM, Dropout, Activation\nfrom keras.layers import concatenate\nfrom keras.utils import to_categorical, print_summary\nfrom keras.callbacks import EarlyStopping\nfrom sklearn.metrics import confusion_matrix\nimport itertools\n\nprint('sys.argv[0]: {0!r}'.format(sys.argv[0]))\nprint('sys.path[0]: {0!r}'.format(sys.path[0]))\n \n#this plots a confusion matrix\ndef plot_confusion_matrix(cm, classes, normalize=False, title='Confusion Matrix', cmap=plt.cm.Blues):\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n \n print(cm)\n \n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title, fontsize=30)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks,classes, rotation=45,fontsize=22)\n plt.yticks(tick_marks,classes, fontsize=22)\n \n fmt = '.2f'\n thresh = cm.max()/2.\n for i,j in itertools.product(range(cm.shape[0]),range(cm.shape[1])):\n plt.text(j, i, format(cm[i,j],fmt),horizontalalignment=\"center\",color='white' if cm[i,j]>thresh else 'black')\n \n plt.ylabel('True label', fontsize=25)\n plt.xlabel('Predicted label', fontsize=25)\n\n\n\n\nimport classifyingFunctions\n\n\n\n\n\n\nmax_features = 5000 #this is the number of unique integers we will keep\nmaxlen = 15000 # this is the number of integers from each file to keep, the sequence length\nbatch_size = 16\n\nmodelN = 
load_model('model2018-11-021541168475.h5')\nmodelH = load_model('model2018-10-301540939919.h5')\n\nprint('Loading number data...')\n\n#num_words' default is None, which is taken to mean that all unique integers should be kept.\n(x_train_N, y_train_N), (x_test_N, y_test_N) = classifyingFunctions.load_data(num_words=max_features,numOrHex=True) #all integers greater than max_features get turned into 2's, so there's max_features number of unique \"words\"\nprint(len(x_train_N), 'train sequences')\nprint(len(x_test_N), 'test sequences')\n\n#this makes all the sequences be exactly maxlen integers long\nprint('Pad sequences (samples x time)')\nx_train_N = sequence.pad_sequences(x_train_N, maxlen=maxlen)\nx_test_N = sequence.pad_sequences(x_test_N, maxlen=maxlen)\nprint('x_train_N shape:', x_train_N.shape)\nprint('x_test_N shape:', x_test_N.shape)\n\ny_test_N_ld = y_test_N\n\n#We have classes from 1 to 9, which is 9 classes, but to_categorical will make an array with spots from 0 to max_class, so we subtract 1 such that our classes are 0 to 8 and we can use 9 classes.\ny_train_N -= 1\ny_test_N -= 1\n\n#to_categorical replaces each number between 0 and 8 with a 9-length array of 0's except a 1 in the place of the number.\ny_train_N = to_categorical(y_train_N, 9)\ny_test_N = to_categorical(y_test_N, 9)\n\nprint('y_train_N shape:', y_train_N.shape)\nprint('y_test_N shape:', y_test_N.shape)\n\n\n#we also need the hex data\nprint('Loading hex data...')\n\n(x_train_H, y_train_H), (x_test_H, y_test_H) = classifyingFunctions.load_data(num_words=max_features,numOrHex=False) \nprint(len(x_train_H), 'train sequences')\nprint(len(x_test_H), 'test sequences')\n\nprint('Pad sequences (samples x time)')\nx_train_H = sequence.pad_sequences(x_train_H, maxlen=maxlen)\nx_test_H = sequence.pad_sequences(x_test_H, maxlen=maxlen)\nprint('x_train_N shape:', x_train_N.shape)\nprint('x_test_N shape:', x_test_N.shape)\n\ny_test_H_ld = y_test_H\n\ny_train_H -= 1\ny_test_H -= 1\n\ny_train_H = 
to_categorical(y_train_H, 9)\ny_test_H = to_categorical(y_test_H, 9)\n\nprint('y_train_H shape:', y_train_H.shape)\nprint('y_test_H shape:', y_test_H.shape)\n\n\n\n#printing the test accuracies of the text and hex models\nscore_N, acc_N = modelN.evaluate(x_test_N, y_test_N,\n batch_size=batch_size)\nprint('Text Test score:', score_N)\nprint('Text Test accuracy:', acc_N)\n\nscore_H, acc_H = modelH.evaluate(x_test_H, y_test_H,\n batch_size=batch_size)\nprint('Hex Test score:', score_H)\nprint('Hex Test accuracy:', acc_H)\n\n\n\n\n\n\ny_probs_N = modelN.predict(x_train_N)\ny_probs_testN = modelN.predict(x_test_N)\n\n\ny_probs_H = modelH.predict(x_train_H)\ny_probs_testH = modelH.predict(x_test_H)\n\n\n#we concatenate the predicted probabilities of the text and hex data\ny_probs_C = np.concatenate((y_probs_N, y_probs_H),axis=1)\ny_probs_testC = np.concatenate((y_probs_testN, y_probs_testH),axis=1)\n\nprint(np.size(y_probs_C))\n\n\n\nmodel = Sequential()\n\n#output layer has 9 probabilities; input has 18 entries (9 text and 9 hex probabilities)\nmodel.add(Dense(9, activation='softmax', input_shape=(18,), name='aux_output'))\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\nes = EarlyStopping(monitor=\"val_acc\",min_delta=0.005,patience=28,mode='max')\n\n#we train. 
This usually runs very quickly\nprint('Train...')\nmodel.fit(y_probs_C, y_train_H,\n batch_size=batch_size,\n epochs=100, \n validation_split=0.15, callbacks=[es])\n \nmodel.save('model'+str(datetime.date.today())+str(math.floor(time.time()))+'.h5')\n\nscore, acc = model.evaluate(y_probs_testC, y_test_H,\n batch_size=batch_size)\nprint('Test score:', score)\nprint('Test accuracy:', acc)\n\n\n\n#the rest of this is for the confusion matrix\n\ny_probs = model.predict(y_probs_testC)\n\n#this block of code makes a 1D array of predicted labels, which is an input to the confusion matrix\ny_pred_ld = []\nfor i in range(0, len(y_probs)):\n probs = y_probs[i]\n predicted_index = np.argmax(probs)\n y_pred_ld.append(predicted_index)\n\n#cnf_matrix = confusion_matrix(y_test_N_ld, y_pred_ld)\n#plt.figure(figsize=(24,20))\ncnf_matrix = confusion_matrix(y_test_N_ld, y_pred_ld)\nprint(cnf_matrix)\ncnf_matrix = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]\nprint(cnf_matrix)\n\nprint_summary(model)\n\n\n#plt.subplot(221)\n#plot_confusion_matrix(cnf_matrix,classes=range(1,10),normalize=False,title=\"Confusion matrix\")\n#plt.subplot(222)\n#plot_confusion_matrix(cnf_matrix,classes=range(0,10),normalize=True,title=\"Confusion matrix\")\n#plt.show()\nprint(\"end of file\")\n"
},
{
"alpha_fraction": 0.831210196018219,
"alphanum_fraction": 0.831210196018219,
"avg_line_length": 103.66666412353516,
"blob_id": "6f45b8fc7ab4e0dac3db7fa1926a4ec3a5397ce3",
"content_id": "8a5e45aba3a31ebc822185b81c585510105c4c60",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 628,
"license_type": "permissive",
"max_line_length": 265,
"num_lines": 6,
"path": "/README.md",
"repo_name": "Fitz-AI/MalwareClassification",
"src_encoding": "UTF-8",
"text": "# MalwareClassification\nThis repository contains the files used in the technical report \"Dynamic data fusion using multi-input models for malware classification\" by Viktor Zenkov and Jason Laska.\n\nUsing the data from Microsoft's malware competition at https://www.kaggle.com/c/malware-classification, we used machine learning to train a neural network to classify malware.\n\nWe transformed the data using the files hexParsingCode.py and textParsingCode.py. The files classifyingTextNUMBERS.py, classifyingTextMULTI.py, and classifyingTextENSEMBLE.py were run to create models. The file classifyingFunctions.py contains supporting functions.\n"
},
{
"alpha_fraction": 0.6249122619628906,
"alphanum_fraction": 0.6368421316146851,
"avg_line_length": 25.14678955078125,
"blob_id": "3d542b79726249c045b359cad68d38627259cbe3",
"content_id": "9609e532a66f8402d62a643e7913df3702c30f21",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2850,
"license_type": "permissive",
"max_line_length": 125,
"num_lines": 109,
"path": "/textParsingCode.py",
"repo_name": "Fitz-AI/MalwareClassification",
"src_encoding": "UTF-8",
"text": "#written by Viktor Zenkov in 2018\n\nfrom pyparsing import *\nimport numpy as np\nimport pprint\nimport re\nimport time\nimport os \n\n\n\nblanks = ' \\t'\nParserElement.setDefaultWhitespaceChars(blanks)\n\n\n\n#version with simplified line code giving text\n\nmainStrings = []\n\ndef addLine(tokens):\n \n if (tokens.hexcommands != ''):\n mainStrings.append(tokens.textcommands + '\\n')\n\n\n#we have a set of hexademical numbers and then a set of text commands in many lines. Sometimes that text\n#happens to begin with a lowercase hexadecimal number (like db), so we make hexnumscap include only capital letters.\nhexnumscap = '0123456789ABCDEF?'\n\n#A hexadecimal command is either of the form \"xx\" (possibly and question marks) or the form \"xx+\".\n#The Combine here is required for the code to work\nhexapair = Combine(Word(hexnumscap, exact=2) + Optional(Literal('+')) + WordEnd())\n\n#The hexadecimal commands are a set of hexadecimal commands, which we call \"hexcommands\".\nhexcommands = OneOrMore(hexapair)('hexcommands')\n\n#The text commands are any text up to the end of the line or a \";\", and we call them \"textcommands\".\ntextcommands = CharsNotIn(';\\n')('textcommands')\n\n#if there is a semicolon in a line that signifies a comment, and we name the rest of the line \"comment\".\ncomment = Suppress(';') + restOfLine('comment')\n\n#line = (lineheader + Optional(hexcommands) + Optional(textcommands) + Optional(comment) + LineEnd()).setParseAction(addLine)\nline = (Optional(hexcommands) + Optional(textcommands) + Optional(comment) + LineEnd()).setParseAction(addLine)\n\nentirefile = Optional(OneOrMore(Literal('\\n'))) + OneOrMore(line)\n\n\n#parse\n\n\nf_path = 'asmfiles'\n\nallFileNames = sorted(os.listdir(f_path + '/'))\n\nasmFileNames = [ i for i in allFileNames if i.endswith('.asm')]\n\nfor f_name in asmFileNames:\n print(f_name[:-4])\n \n f = open(f_path + '/' + f_name,encoding='latin-1')\n \n fileText = ''\n counter = 0\n\n time1 = time.time()\n 
mainStrings = []\n\n while True:\n #counter += 1\n line = f.readline()\n if (line == ''):\n break\n if line.startswith('.text'):\n fileText = fileText + line[14:]\n counter += 1\n if (counter > 1000):\n entirefile.parseString(fileText)\n fileText = ''\n counter = 0\n\n f.close()\n \n if (counter > 0):\n entirefile.parseString(fileText)\n\n time2 = time.time()\n\n\n for i in range(len(mainStrings)):\n \n pattern = re.compile(r'\\s+')\n temp = re.sub(pattern,' ',mainStrings[i])\n temp = temp[:-1] + '\\n'\n\n mainStrings[i] = temp\n if (mainStrings[i] == '\\n'):\n mainStrings[i] = ''\n\n mainString = ''.join(mainStrings)\n \n g = open(f_path + 'Output/' + f_name[:-4] + 'Text.txt','w')\n \n g.write(mainString)\n \n g.close()\n\n print (time2-time1)\n"
},
{
"alpha_fraction": 0.6232464909553528,
"alphanum_fraction": 0.6336005330085754,
"avg_line_length": 34.22352981567383,
"blob_id": "4d1d19bc3604aff3fc49425cef73d82ddcea0518",
"content_id": "d402975641776b61c38156b7419f9755a04518f6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5988,
"license_type": "permissive",
"max_line_length": 143,
"num_lines": 170,
"path": "/classifyingFunctions.py",
"repo_name": "Fitz-AI/MalwareClassification",
"src_encoding": "UTF-8",
"text": "#written by Viktor Zenkov in 2018\n\n#this file contains functions for reading in training data and processing it and splitting off the training data\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom data_utils import get_file\n\nfrom sequence import _remove_long_seq\n\nimport numpy as np\nimport json\nimport warnings\nimport os\nimport math\nimport time\n\n \n\n#this function loads the data.\n#path is \"where to cache the data (relative to `~/.keras/dataset`)\"\n#num_words is the number of unique words to keep\n#skip_top is the number of most frequent words to skip (for example, if \"the\" was most common in a context this might be altered to skip \"the\")\n#maxlen is the length of each sequence to keep from each file (keep the first maxlen words)\n#seed is the random seed to use for shuffling the data\n#start_char is the number which is inserted at the beginning of each sequence (file)\n#oov_char is the number to replace words lost because they're greater than num_words.\n#index_from is used to push increase all the integers to make room for the start_char and oov_char integers\n#numOrHex determines if we want to read in text (True) or hex (False) data\n\n#useful parameters to alter are num_words, maxlen, seed, and numOrHex\ndef load_data(path='imdb.npz', num_words=None, skip_top=0,\n maxlen=None, seed=113,\n start_char=1, oov_char=2, index_from=3, numOrHex=True, **kwargs):\n\n # Legacy support\n if 'nb_words' in kwargs:\n warnings.warn('The `nb_words` argument in `load_data` '\n 'has been renamed `num_words`.')\n num_words = kwargs.pop('nb_words')\n if kwargs:\n raise TypeError('Unrecognized keyword arguments: ' + str(kwargs))\n\n #if we want to use only a few files, set limitNumFiles to True\n numFiles = 100\n limitNumFiles=False\n \n f_path = ''\n \n #the paths of the text or hex integer files\n if numOrHex:\n f_path = '/asmtextOutputIntegers'\n else:\n f_path = 
'/asmhexOutputIntegers'\n\n #get the files in order and keep only txt files\n allFileNames = sorted(os.listdir(f_path))\n allFileNames = [f_name for f_name in allFileNames if f_name.endswith('.txt')]\n xs = []\n \n print(\"starting reading files at \", time.time())\n \n counter = 0\n \n #read in each file and puts its contents into an element of xs\n for f_name in allFileNames:\n\n #small number of files code:\n counter += 1\n if (limitNumFiles and counter > numFiles):\n break\n\n f = open(f_path + '/' + f_name,encoding='latin-1')\n tempS = f.read()\n xs.append(tempS)\n \n print(\"finished reading files at \", time.time())\n \n #read in the file labels and create a dictionary\n dictionaryLabels = {}\n with open('trainLabels.csv','r') as dicFile:\n for lineEntry in dicFile:\n bothWords = lineEntry.split(',')\n dictionaryLabels[bothWords[0][1:-1]] = bothWords[1][:-1]\n \n\n labels = []\n\n counter = 0\n\n #create a labels array that matches the xs array\n for f_name in allFileNames:\n \n #small number of files code:\n counter += 1\n if (limitNumFiles and counter > numFiles):\n break\n \n #the numbers here are based on removing \"Numbers.txt\" or \"hexIntegers.txt\" from the end of the files\n if numOrHex:\n labels.append(dictionaryLabels[f_name[:-11]])\n else:\n labels.append(dictionaryLabels[f_name[:-15]])\n\n print(\"starting quantifying files at \", time.time())\n \n #turn the strings into lists of integers\n xs = list(list(int(w) for w in xelem.split()) for xelem in xs)\n xs = np.array(xs)\n\n #turn the strings into integers \n labels = list(int(w) for w in labels)\n labels = np.array(labels)\n \n #randomly sort the xs and labels arrays in the same way\n np.random.seed(seed)\n indices = np.arange(len(xs))\n np.random.shuffle(indices)\n xs = xs[indices]\n labels = labels[indices]\n \n #add a start character to the beginning of each file array and increase all the file integers by index_from\n if start_char is not None:\n xs = [[start_char] + [w + index_from for w 
in x] for x in xs]\n elif index_from:\n xs = [[w + index_from for w in x] for x in xs]\n\n #remove integers after maxlen length in each file\n if maxlen:\n xs, labels = _remove_long_seq(maxlen, xs, labels)\n if not xs:\n raise ValueError('After filtering for sequences shorter than maxlen=' +\n str(maxlen) + ', no sequence was kept. '\n 'Increase maxlen.')\n if not num_words:\n num_words = max([max(x) for x in xs])\n\n # by convention, use 2 as OOV word\n # reserve 'index_from' (=3 by default) characters:\n # 0 (padding), 1 (start), 2 (OOV)\n if oov_char is not None:\n xs = [[w if (skip_top <= w < num_words) else oov_char for w in x] for x in xs]\n else:\n xs = [[w for w in x if skip_top <= w < num_words] for x in xs]\n\n #put 2/10 of the files into test data and the rest as training.\n idx = math.floor(len(xs)*8/10)\n x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])\n x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])\n\n print(\"finished quantifying files at \", time.time())\n\n return (x_train, y_train), (x_test, y_test)\n\n#this is not being used\ndef get_word_index(path='imdb_word_index.json'):\n \"\"\"Retrieves the dictionary mapping words to word indices.\n # Arguments\n path: where to cache the data (relative to `~/.keras/dataset`).\n # Returns\n The word index dictionary.\n \"\"\"\n # path = get_file(path,\n #origin='https://s3.amazonaws.com/text-datasets/imdb_word_index.json',\n #file_hash='bfafd718b763782e994055a2d397834f')\n path = \"imdb_word_index.json\"\n with open(path) as f:\n return json.load(f)\n"
},
{
"alpha_fraction": 0.6948512196540833,
"alphanum_fraction": 0.7132735252380371,
"avg_line_length": 31.73711395263672,
"blob_id": "50e2d36b9380a8b62fc35348e9c9c0cfaadecf83",
"content_id": "5af7de1ecb0db70ec99155156e83eeff30d4babd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6351,
"license_type": "permissive",
"max_line_length": 225,
"num_lines": 194,
"path": "/classifyingTextMULTI.py",
"repo_name": "Fitz-AI/MalwareClassification",
"src_encoding": "UTF-8",
"text": "#written by Viktor Zenkov in 2018\n\n#this file classifies using the Multi-input model, making predictions based on the text and hex data.\n\nfrom __future__ import print_function\nfrom __future__ import absolute_import\n\nimport sys\n\nimport numpy as np\nfrom numpy.random import seed\nseed(1)\nfrom tensorflow import set_random_seed\nset_random_seed(2)\n\nimport matplotlib.pyplot as plt\nimport math\nimport time\nimport datetime\n\nfrom keras.preprocessing import sequence\nfrom keras.models import Sequential, Model\nfrom keras.layers import Input, Dense, Embedding\nfrom keras.layers import LSTM, Dropout, Activation\nfrom keras.layers import concatenate\nfrom keras.utils import to_categorical, print_summary\nfrom keras.callbacks import EarlyStopping\nfrom sklearn.metrics import confusion_matrix\nimport itertools\n\nprint('sys.argv[0]: {0!r}'.format(sys.argv[0]))\nprint('sys.path[0]: {0!r}'.format(sys.path[0]))\n \n#this plots a confusion matrix\ndef plot_confusion_matrix(cm, classes, normalize=False, title='Confusion Matrix', cmap=plt.cm.Blues):\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n \n print(cm)\n \n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title, fontsize=30)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks,classes, rotation=45,fontsize=22)\n plt.yticks(tick_marks,classes, fontsize=22)\n \n fmt = '.2f'\n thresh = cm.max()/2.\n for i,j in itertools.product(range(cm.shape[0]),range(cm.shape[1])):\n plt.text(j, i, format(cm[i,j],fmt),horizontalalignment=\"center\",color='white' if cm[i,j]>thresh else 'black')\n \n plt.ylabel('True label', fontsize=25)\n plt.xlabel('Predicted label', fontsize=25)\n\n\nimport classifyingFunctions\n\n\n\n\n\n\nmax_features = 5000 #this is the number of unique integers we will keep.\nmaxlen = 5000 # cut texts after this number of words (among top max_features most common words), used in pad_sequences (right after load data, before any 
modeling)\nbatch_size = 32\n\nprint('Loading number data...')\n\n#num_words' default is None, which is taken to mean that all unique integers should be kept.\n(x_train_N, y_train_N), (x_test_N, y_test_N) = classifyingFunctions.load_data(num_words=max_features,numOrHex=True) #all integers greater than max_features get turned into 2's, so there's max_features number of unique \"words\"\nprint(len(x_train_N), 'train sequences')\nprint(len(x_test_N), 'test sequences')\n\nprint('Pad sequences (samples x time)')\nx_train_N = sequence.pad_sequences(x_train_N, maxlen=maxlen)\nx_test_N = sequence.pad_sequences(x_test_N, maxlen=maxlen)\nprint('x_train_N shape:', x_train_N.shape)\nprint('x_test_N shape:', x_test_N.shape)\n\ny_test_N_ld = y_test_N\n\n#We have classes from 1 to 9, which is 9 classes, but to_categorical will make an array with spots from 0 to max_class, so we subtract 1 such that our classes are 0 to 8 and we can use 9 classes.\ny_train_N -= 1\ny_test_N -= 1\n\n#to_categorical replaces each number between 0 and 8 with a 9-length array of 0's except a 1 in the place of the number.\ny_train_N = to_categorical(y_train_N, 9)\ny_test_N = to_categorical(y_test_N, 9)\n\nprint('y_train_N shape:', y_train_N.shape)\nprint('y_test_N shape:', y_test_N.shape)\n\n\n\n\n#we also need the hex data\nprint('Loading hex data...')\n\n(x_train_H, y_train_H), (x_test_H, y_test_H) = classifyingFunctions.load_data(num_words=max_features,numOrHex=False) \nprint(len(x_train_H), 'train sequences')\nprint(len(x_test_H), 'test sequences')\n\nprint('Pad sequences (samples x time)')\nx_train_H = sequence.pad_sequences(x_train_H, maxlen=maxlen)\nx_test_H = sequence.pad_sequences(x_test_H, maxlen=maxlen)\nprint('x_train_H shape:', x_train_H.shape)\nprint('x_test_H shape:', x_test_H.shape)\n\ny_test_H_ld = y_test_H\n\ny_train_H -= 1\ny_test_H -= 1\n\ny_train_H = to_categorical(y_train_H, 9)\ny_test_H = to_categorical(y_test_H, 9)\n\nprint('y_train_H shape:', y_train_H.shape)\nprint('y_test_H 
shape:', y_test_H.shape)\n\n\n\n\n#We build up to the LSTM layer for text\nmain_inputNL = Input(shape=(max_features,), dtype='int32', name='main_inputNL')\nembeddingNL = Embedding(output_dim=128, input_dim=max_features, input_length=maxlen)(main_inputNL)\nlstm_outNL = LSTM(150, dropout=0.1, recurrent_dropout=0.1)(embeddingNL)\n\n\n#and we build up to the LSTM layer for hex\nmain_inputHL = Input(shape=(max_features,), dtype='int32', name='main_inputHL')\nembeddingHL = Embedding(output_dim=128, input_dim=max_features, input_length=maxlen)(main_inputHL)\nlstm_outHL = LSTM(150, dropout=0.1, recurrent_dropout=0.1)(embeddingHL)\n\n\n#We concatenate the LSTM layers\nconcatL = concatenate([lstm_outNL, lstm_outHL])\n\nconcatL = Dense(9, activation='softmax', name='aux_output')(concatL)\n\nmodel = Model(inputs=[main_inputNL, main_inputHL], outputs=concatL)\n\n\n\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\nes = EarlyStopping(monitor=\"val_acc\",min_delta=0.005,patience=50,mode='max') #meaning of this: when val_acc stops increasing by more than min_delta, we stop at current epoch.\n\nprint('Train...')\nmodel.fit([x_train_N, x_train_H], y_train_N,\n epochs=55, batch_size=batch_size,validation_split=0.15, callbacks=[es])\n\nmodel.save('model'+str(datetime.date.today())+str(math.floor(time.time()))+'.h5')\n\nscore, acc = model.evaluate([x_test_N, x_test_H], y_test_N,\n batch_size=batch_size)\nprint('Test score:', score)\nprint('Test accuracy:', acc)\n\n\n\n\n\n#the rest of this is for the confusion matrix\n\ny_probs = model.predict([x_test_N,x_test_H])\n\n#this block of code makes a 1D array of predicted labels, which is an input to the confusion matrix\ny_pred_ld = []\nfor i in range(0, len(y_probs)):\n probs = y_probs[i]\n predicted_index = np.argmax(probs)\n y_pred_ld.append(predicted_index)\n\n#cnf_matrix = confusion_matrix(y_test_N_ld, y_pred_ld)\n#plt.figure(figsize=(24,20))\ncnf_matrix = confusion_matrix(y_test_N_ld, 
y_pred_ld)\nprint(cnf_matrix)\ncnf_matrix = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]\nprint(cnf_matrix)\n\nprint_summary(model)\n\n\n#plt.subplot(221)\n#plot_confusion_matrix(cnf_matrix,classes=range(1,10),normalize=False,title=\"Confusion matrix\")\n#plt.subplot(222)\n#plt.show()\n#plot_confusion_matrix(cnf_matrix,classes=range(0,10),normalize=True,title=\"Confusion matrix\")\n#plt.show()\nprint(\"end of file\")\n"
},
{
"alpha_fraction": 0.7063822150230408,
"alphanum_fraction": 0.723898708820343,
"avg_line_length": 34.37423324584961,
"blob_id": "60370ea7094a048a9d720e46896df1d8cac7d8fa",
"content_id": "c4e367eee90a5e46882765dad060c1ae2d110120",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5766,
"license_type": "permissive",
"max_line_length": 447,
"num_lines": 163,
"path": "/classifyingTextNUMBERS.py",
"repo_name": "Fitz-AI/MalwareClassification",
"src_encoding": "UTF-8",
"text": "#written by Viktor Zenkov in 2018\n\n#This file was used for both the text and the hex models, with the only difference being that line 69 had an input of true for the text model and false for the hex model.\n\n\n\nfrom __future__ import print_function\nfrom __future__ import absolute_import\n\nimport sys\n\nimport numpy as np\nfrom numpy.random import seed\nseed(1)\nfrom tensorflow import set_random_seed\nset_random_seed(2)\n\nimport matplotlib.pyplot as plt\nimport math\nimport time\nimport datetime\n\nfrom keras.preprocessing import sequence\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Embedding\nfrom keras.layers import LSTM, Dropout, Activation\nfrom keras.utils import to_categorical, print_summary\nfrom keras.callbacks import EarlyStopping\nfrom sklearn.metrics import confusion_matrix\nimport itertools\n\nprint('sys.argv[0]: {0!r}'.format(sys.argv[0]))\nprint('sys.path[0]: {0!r}'.format(sys.path[0]))\n\n\n#this plots a confusion matrix\ndef plot_confusion_matrix(cm, classes, normalize=False, title='Confusion Matrix', cmap=plt.cm.Blues):\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n \n print(cm)\n \n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title, fontsize=30)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks,classes, rotation=45,fontsize=22)\n plt.yticks(tick_marks,classes, fontsize=22)\n \n fmt = '.2f'\n thresh = cm.max()/2.\n for i,j in itertools.product(range(cm.shape[0]),range(cm.shape[1])):\n plt.text(j, i, format(cm[i,j],fmt),horizontalalignment=\"center\",color='white' if cm[i,j]>thresh else 'black')\n \n plt.ylabel('True label', fontsize=25)\n plt.xlabel('Predicted label', fontsize=25)\n\n\nimport classifyingFunctions\n\n\n\n\n\n\nmax_features = 5000 #this is the number of unique integers we will keep.\nmaxlen = 15000 # cut texts after this number of words (among top max_features most common words), used in pad_sequences (right after 
load data, before any modeling)\nbatch_size = 32\n\nprint('Loading data...')\n\n#num_words' default is None, which is taken to mean that all unique integers should be kept.\n(x_train, y_train), (x_test, y_test) = classifyingFunctions.load_data(num_words=max_features,numOrHex=True) #all integers greater than max_features get turned into 2's, so there's max_features number of unique \"words\"\n\nprint(len(x_train), 'train sequences')\nprint(len(x_test), 'test sequences')\n\nprint('Pad sequences (samples x time)')\nx_train = sequence.pad_sequences(x_train, maxlen=maxlen)\nx_test = sequence.pad_sequences(x_test, maxlen=maxlen)\nprint('x_train shape:', x_train.shape)\nprint('x_test shape:', x_test.shape)\n\ny_test_ld = y_test\n\n#We have classes from 1 to 9, which is 9 classes, but to_categorical will make an array with spots from 0 to max_class, so we subtract 1 such that our classes are 0 to 8 and we can use 9 classes.\n\ny_train -= 1\ny_test -= 1\n\ny_train = to_categorical(y_train, 9)\ny_test = to_categorical(y_test, 9)\n\nprint('y_train shape:', y_train.shape)\nprint('y_test shape:', y_test.shape)\n\n\n\nprint('Build model...')\nmodel = Sequential()\n\nmodel.add(Embedding(max_features, 128)) #first argument - \"the size of the vocabulary\". #second argument is dimension of dense embedding (dimension of each vector \"replacing\" each integer)\nprint (model.output_shape)\n\n#first argument of LSTM (150 here) is the dimensionality of the output space, the number of memory cells/neurons. The input shape is 3D : (batch/samples row, timesteps (=max len) past observations, dimension of the embedding vector (someone called it the features column)).LSTM is recurrent neural network\nmodel.add(LSTM(150, dropout=0.05, recurrent_dropout=0.05))\nprint (model.output_shape)\n\n#a dense/fully connected layer - each neuron is connected to all the neurons on the next layer.\n#Since this is the last layer, the first input is the number of classes, 9. 
The softmax forms the probabilities.\nmodel.add(Dense(9, activation='softmax'))\nprint (model.output_shape)\n\n\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\nes = EarlyStopping(monitor=\"val_acc\",min_delta=0.005,patience=20,mode='max') #meaning of this: when val_acc stops increasing by more than min_delta, we stop at current epoch. patience had default of 0. patience is the number of epochs with no improvement after which training will be stopped. the definition of improvement seems to be our min delta rule. We wonder if this will make less variation in accuracies when running lots of identical jobs\n\nprint('Train...')\nmodel.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=26, \n validation_split=0.15, callbacks=[es])\n \nmodel.save('model'+str(datetime.date.today())+str(math.floor(time.time()))+'.h5')\n\nscore, acc = model.evaluate(x_test, y_test,\n batch_size=batch_size)\nprint('Test score:', score)\nprint('Test accuracy:', acc)\n\n\n\n\n#the rest of this is for the confusion matrix\n\ny_probs = model.predict(x_test)\n\n#this block of code makes a 1D array of predicted labels, which is an input to the confusion matrix\ny_pred_ld = []\nfor i in range(0, len(y_probs)):\n probs = y_probs[i]\n predicted_index = np.argmax(probs)\n y_pred_ld.append(predicted_index)\n\ncnf_matrix = confusion_matrix(y_test_ld, y_pred_ld)\nprint(cnf_matrix)\ncnf_matrix = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]\nprint(cnf_matrix)\n\nprint_summary(model)\n\n\n#plt.subplot(221)\n#plot_confusion_matrix(cnf_matrix,classes=range(1,10),normalize=False,title=\"Confusion matrix\")\n#plt.subplot(222)\n#plt.show()\n#plot_confusion_matrix(cnf_matrix,classes=range(0,10),normalize=True,title=\"Confusion matrix\")\n#plt.show()\nprint(\"end of file\")\n"
}
] | 6 |
pombredanne/clogging_tor | https://github.com/pombredanne/clogging_tor | 71a935442948538c1fa388e8b05de0b370cad4c4 | b18bff36b8bcdad9032885a76b596c576ff5039e | 19679fd33bb8b121da94864e80993f3894cc4e5d | refs/heads/master | 2018-05-06T20:38:00.225996 | 2017-05-04T04:35:24 | 2017-05-04T04:35:24 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5669642686843872,
"alphanum_fraction": 0.6308740377426147,
"avg_line_length": 36.008697509765625,
"blob_id": "dfd0274a14004d7aadadf95d01f03e6babcde823",
"content_id": "296d85c6eb02f4d524ff7ebacb2a7bf794419ef8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4256,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 115,
"path": "/client.py",
"repo_name": "pombredanne/clogging_tor",
"src_encoding": "UTF-8",
"text": "import StringIO\nimport time\nimport argparse\nimport sys\nfrom datetime import datetime\nimport socket\n\nimport stem.control\nfrom stem.util import term\nimport stem.process\nfrom stem.control import EventType\nfrom stem import CircStatus, OperationFailed, InvalidRequest, InvalidArguments, CircuitExtensionFailed\n\nfrom settings import *\n\nSOCKS_PORT = 7000\nCONTROLLER_PORT = 9051\nCONNECTION_TIMEOUT = 30 # timeout before we give up on a circuit\n\nparser = argparse.ArgumentParser(\n prog='client',\n description=\"Measure latency between either a pair of Tor relays (relay1,relay2), or a list of pairs, specified with the --input-file argument.\"\n)\nparser.add_argument('relay1', help=\"First relay\", nargs='?', default='7B3F666CD6665CFF146F61CE005DD19F89DBC23A')\nparser.add_argument('relay2', help=\"Second relay\", nargs='?', default='15999A15088C133AF85AAF73DB74AC5C7B28114D')\nparser.add_argument('relay3', help=\"Third relay\", nargs='?', default='7FBD5CCE31EAC5CED96F88ACA9D69656DA75CDF7')\nargs = vars(parser.parse_args())\npath = [args['relay1'], args['relay2'], args['relay3']]\n\nprint(term.format(\"Starting Tor:\", term.Attr.BOLD))\n\ntor_process = stem.process.launch_tor_with_config(\n    config={\n        'SocksPort':\n        str(SOCKS_PORT),\n        'ControlPort':\n        str(CONTROLLER_PORT),\n        'TestingTorNetwork':\n        '1',\n        #'__DisablePredictedCircuits': '1',\n        #'MaxOnionsPending': '0',\n        'newcircuitperiod':\n        '999999999',\n        'maxcircuitdirtiness':\n        '999999999',\n        #'AlternateDirAuthority': ['54.208.179.145:9030 60E6E78926A45A77C1CB6BCC59D459083CB8530F', '138.68.58.173:9030 1A19C6822777515982DAD87CD2AA65B071305A1E'],\n        'DirServer': [\n            'auth orport=5000 no-v2 v3ident=A9495BBC01F3B30247673A7C3253EDF21687468E 54.197.200.127:7000 584C 788B 916E 8C3D 12F1 F750 9199 9038 09A5 1CF7'\n        ],\n        'ClientOnly':\n        '1',\n        #'FetchDirInfoEarly': '1',\n        #'FetchDirInfoExtraEarly': '1',\n    },\n    init_msg_handler=success,\n    completion_percent=80)\n\ncircuit_id = None\ntry:\n    with 
stem.control.Controller.from_port(port=CONTROLLER_PORT) as controller:\n controller.authenticate()\n if not controller:\n failure(\n \"Couldn't connect to Tor, controller.authenticate() failed\")\n\n # Attaches a specific circuit to the given stream (event)\n def attach_stream(event):\n try:\n controller.attach_stream(event.id, circuit_id)\n except (OperationFailed, InvalidRequest), e:\n warning(\n \"Failed to attach stream to %s, unknown circuit. Closing stream...\"\n % circuit_id)\n print(\"\\tResponse Code: %s \" % str(e.code))\n print(\"\\tMessage: %s\" % str(e.message))\n controller.close_stream(event.id)\n\n # An event listener, called whenever StreamEvent status changes\n def probe_stream(event):\n if event.status == 'DETACHED':\n if circuit_id is not None:\n warning(\"Stream Detached from circuit {0}...\".format(circuit_id))\n else:\n warning(\"Stream Detached from circuit...\")\n print(\"\\t\" + str(vars(event)))\n if event.status == 'NEW' and event.purpose == 'USER':\n attach_stream(event)\n\n controller.add_event_listener(probe_stream, EventType.STREAM)\n\n print(controller.get_info('circuit-status'))\n\n circuit_id = controller.new_circuit(path=path, await_build=True)\n print circuit_id\n\n print(controller.get_info('circuit-status'))\n socks.setdefaultproxy(SOCKS_TYPE, SOCKS_HOST, SOCKS_PORT)\n socket.socket = socks.socksocket\n\n while True:\n print '%0.5f' % time.time()\n s = socks.socksocket()\n s.settimeout(CONNECTION_TIMEOUT)\n s.connect((SERVER_ADDRESS, SERVER_TO_CLIENT_PORT))\n now = datetime.now()\n timestamp = time.mktime(now.timetuple())\n data = '{} {}'.format(timestamp, now.microsecond)\n print data\n s.send(data)\n s.close()\nexcept Exception as e:\n print e\n tor_process.kill()\nfinally:\n tor_process.kill()\n"
},
{
"alpha_fraction": 0.6785714030265808,
"alphanum_fraction": 0.7969387769699097,
"avg_line_length": 24.789474487304688,
"blob_id": "4ccbc2df038c0cfbc4143022cf7914d1a5fc31e2",
"content_id": "b83eb16d7242e17980cdf158c6b2b79b2c062f4c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 980,
"license_type": "no_license",
"max_line_length": 152,
"num_lines": 38,
"path": "/setup.sh",
"repo_name": "pombredanne/clogging_tor",
"src_encoding": "UTF-8",
"text": "sudo apt-get update\nsudo apt-get install tor -y\n\ncat <<EOT >> torrec\nTestingTorNetwork 1\nRunAsDaemon 0\nNickname CorruptTor\nShutdownWaitLength 0\nProtocolWarnings 1\nSafeLogging 0\nDisableDebuggerAttachment 0\nDirAuthority auth orport=5000 no-v2 v3ident=A9495BBC01F3B30247673A7C3253EDF21687468E 54.197.200.127:7000 584C 788B 916E 8C3D 12F1 F750 9199 9038 09A5 1CF7\n\nSocksPort 0\nControlPort 9051\nOrPort 5000\nAllowSingleHopExits 1\n\n# An exit policy that allows exiting to IPv4 LAN\nExitPolicy accept 0.0.0.0/32:*\n\nContactInfo [email protected]\nEOT\n\nsudo apt-get install python python-pip\npip install pandas\npip install stem\n\ntor --list-fingerprint -f torrec\ntor -f torrec\n\nsudo apt-get install libevent-dev libssl-dev\nwget -c https://www.torproject.org/dist/tor-0.3.0.6.tar.gz\ntar zxvf tor-0.3.0.6.tar.gz\n# modify or/or.h based on\n# https://tor.stackexchange.com/questions/1312/how-to-decrease-number-of-tor-hops/1662\n# https://github.com/aagbsn/torflow/blob/master/tordiffs/one-hop.diff\n./configure\n"
},
{
"alpha_fraction": 0.5528372526168823,
"alphanum_fraction": 0.5647298693656921,
"avg_line_length": 36.97419357299805,
"blob_id": "be7e9174d7e0eef96e83ddb8d1965615965a796b",
"content_id": "33c0123da24d3cf9b06e30bc86da7da8c7481a1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5886,
"license_type": "no_license",
"max_line_length": 127,
"num_lines": 155,
"path": "/server.py",
"repo_name": "pombredanne/clogging_tor",
"src_encoding": "UTF-8",
"text": "import socket\nimport time\nimport pandas as pd\nfrom datetime import datetime\nimport threading\nimport Queue\nfrom settings import *\n\nHOST, PORT = '', 8000\n\nBUF_SIZE = 1000\nqueue = Queue.Queue(BUF_SIZE)\n\nclass Server(threading.Thread):\n    def __init__(self, port):\n        threading.Thread.__init__(self)\n        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n        self.socket.bind((HOST, int(port)))\n        self.socket.listen(1)\n        self.kill_received = False\n        print 'Serving on port %s ...' % port\n\n    def get_socket(self, ip, port):\n        # fixed: use a SOCKS socket (socks.socksocket) and pass the address as a tuple to connect()\n        s = socks.socksocket(socket.AF_INET, socket.SOCK_STREAM)\n        s.settimeout(CONNECTION_TIMEOUT)\n        s.connect((ip, port))\n        return s\n\nclass ClientHandler(threading.Thread):\n    def __init__(self, client_connection, client_address):\n        threading.Thread.__init__(self)\n        self.client_connection = client_connection\n        self.client_address = client_address\n\n    def run(self):\n        request = self.client_connection.recv(1024)\n        now = datetime.utcnow()\n        payload = request.decode()\n        print '{} Client sent request: {}.'.format(now, payload)\n        if len(payload) == 0:\n            return\n        timestamp, microsecond = payload.split()\n        timestamp = float(timestamp)\n        past = datetime.utcfromtimestamp(int(timestamp)).replace(microsecond=int(microsecond))\n        diff = now - past\n        latency = diff.seconds * 1000000 + diff.microseconds\n        queue.put(('Client', self.client_address[0], latency, timestamp))\n        self.client_connection.close()\n\nclass ClientServer(Server):\n    def run(self):\n        while True:\n            try:\n                threads = []\n                for i in range(10):\n                    client_connection, client_address = self.socket.accept()\n                    thread = ClientHandler(client_connection, client_address)\n                    thread.start()\n                    threads.append(thread)\n                for i in range(10):\n                    threads[i].join()\n                if self.kill_received:\n                    return\n            except Exception as e:\n                print e\n                continue\n\nclass CorruptTorHandler(threading.Thread):\n    def __init__(self, client_connection, 
client_address):\n threading.Thread.__init__(self)\n self.client_connection = client_connection\n self.client_address = client_address\n\n def run(self):\n request = self.client_connection.recv(1024)\n now = datetime.utcnow()\n payload = request.decode()\n print '{} Tor sent from {}: {}.'.format(now, self.client_address, payload)\n timestamp, microsecond, name = payload.split()\n past = datetime.utcfromtimestamp(int(timestamp)).replace(microsecond=int(microsecond))\n diff = now - past\n latency = diff.seconds * 1000000 + diff.microseconds\n queue.put((name, self.client_address[0], latency, timestamp))\n self.client_connection.close()\n\nclass CorruptTorServer(Server):\n def run(self):\n while True:\n try:\n threads = []\n for i in range(10):\n client_connection, client_address = self.socket.accept()\n thread = CorruptTorHandler(client_connection, client_address)\n thread.start()\n threads.append(thread)\n for i in range(10):\n threads[i].join()\n if self.kill_received:\n return\n except Exception as e:\n print e\n continue\n\nclass ConsumerThread(threading.Thread):\n def __init__(self):\n threading.Thread.__init__(self)\n self.dataframe = pd.DataFrame(columns=['Client', 'Relay0', 'Relay1', 'Relay2', 'Relay3', 'Relay4'])\n self.kill_received = False\n\n def run(self):\n count = 1\n while True:\n if not queue.empty():\n item = queue.get()\n name, client_address, latency, timestamp = item\n timestamp = int(timestamp) / 5\n if name == 'Client':\n if timestamp in self.dataframe.index and not pd.isnull(self.dataframe.loc[timestamp, 'Client']):\n self.dataframe.loc[timestamp, 'Client'] = latency * 0.1 + self.dataframe.loc[timestamp, 'Client'] * 0.9\n else:\n self.dataframe.loc[timestamp, 'Client'] = latency\n else:\n if timestamp in self.dataframe.index and not pd.isnull(self.dataframe.loc[timestamp, name]):\n self.dataframe.loc[timestamp, name] = latency * 0.1 + self.dataframe.loc[timestamp, name] * 0.9\n else:\n self.dataframe.loc[timestamp, name] = latency\n count += 
1\n print count\n if count % 1000 == 0:\n print self.dataframe.head()\n self.dataframe.to_csv('out.csv')\n count = 0\n if self.kill_received:\n return\n\ndef main():\n threads = []\n threads.append(ClientServer(SERVER_TO_CLIENT_PORT))\n threads.append(CorruptTorServer(SERVER_TO_TOR_PORT))\n threads.append(ConsumerThread())\n for thread in threads:\n thread.start()\n while len(threads) > 0:\n try:\n # Join all threads using a timeout so it doesn't block\n # Filter out threads which have been joined or are None\n threads = [t.join(1000) for t in threads if t is not None and t.isAlive()]\n except KeyboardInterrupt:\n print \"Ctrl-c received! Sending kill to threads...\"\n for t in threads:\n t.kill_received = True\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5345520377159119,
"alphanum_fraction": 0.5817597508430481,
"avg_line_length": 31.966886520385742,
"blob_id": "1cda90ef105db082ef49ae20a4f4ba2189d2a80c",
"content_id": "9a260eda521ea16af494e07c7fd1fc26cb53098f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4978,
"license_type": "no_license",
"max_line_length": 163,
"num_lines": 151,
"path": "/corrupt_tor.py",
"repo_name": "pombredanne/clogging_tor",
"src_encoding": "UTF-8",
"text": "import StringIO\nimport time\nimport argparse\nimport sys\nimport threading\nimport socket\nimport random\n\nimport stem.control\nfrom stem.util import term\nimport stem.process\nfrom stem.control import EventType\nfrom stem import CircStatus, OperationFailed, InvalidRequest, InvalidArguments, CircuitExtensionFailed\n\nfrom settings import *\nimport datetime\n\nSOCKS_PORT = 7000\nCONTROLLER_PORT = 9051\n\ncircuit_id = None\n\nclass RequestHandler(threading.Thread):\n def __init__(self, timeout=60, idx=0):\n threading.Thread.__init__(self)\n self.timeout = timeout\n self.idx = idx\n\n def get_socket(self):\n s = socks.socksocket()\n s.settimeout(CONNECTION_TIMEOUT)\n s.connect((SERVER_ADDRESS, SERVER_TO_TOR_PORT))\n return s\n\n def run(self):\n start = datetime.datetime.now()\n while datetime.datetime.now() - start < datetime.timedelta(seconds=self.timeout):\n s = self.get_socket()\n now = datetime.datetime.utcnow()\n timestamp = int(time.mktime(now.timetuple()))\n data = '{} {} {}'.format(timestamp, now.microsecond, 'Relay' + str(self.idx))\n s.send(data)\n s.close()\n\n\nclass CorruptTorServer(threading.Thread):\n def __init__(self, port, controller):\n threading.Thread.__init__(self)\n socks.setdefaultproxy(SOCKS_TYPE, SOCKS_HOST, SOCKS_PORT)\n socket.socket = socks.socksocket\n self.controller = controller\n\n def run(self):\n while True:\n try:\n global circuit_id\n idx = random.randint(0, 4)\n path = ['8F0F7C5DE13255E5347B003FA2EEF60A4C00110F', FINGERPRINTS[idx]]\n print path\n circuit_id = controller.new_circuit(path=path, await_build=True)\n threads = []\n for i in range(20):\n handler = RequestHandler(idx=idx)\n handler.start()\n threads.append(handler)\n for i in range(20):\n threads[i].join()\n except Exception as e:\n print e\n time.sleep(1)\n continue\n\nprint(term.format(\"Starting Tor:\", term.Attr.BOLD))\n\ntor_process = stem.process.launch_tor_with_config(\n config={\n 'SocksPort':\n str(SOCKS_PORT),\n 'ControlPort':\n 
str(CONTROLLER_PORT),\n 'TestingTorNetwork':\n '1',\n #'__DisablePredictedCircuits': '1',\n #'MaxOnionsPending': '0',\n 'newcircuitperiod':\n '999999999',\n 'maxcircuitdirtiness':\n '999999999',\n #'AlternateDirAuthority': ['54.208.179.145:9030 60E6E78926A45A77C1CB6BCC59D459083CB8530F', '138.68.58.173:9030 1A19C6822777515982DAD87CD2AA65B071305A1E'],\n 'DirServer': [\n 'auth orport=5000 no-v2 v3ident=A9495BBC01F3B30247673A7C3253EDF21687468E 54.197.200.127:7000 584C 788B 916E 8C3D 12F1 F750 9199 9038 09A5 1CF7'\n ],\n 'ClientOnly':\n '1',\n 'AllowSingleHopCircuits':\n '1',\n 'ExcludeSingleHopRelays':\n '0',\n 'AllowSingleHopExits':\n '1',\n 'DataDirectory':\n '~/.tor/client',\n #'FetchDirInfoEarly': '1',\n #'FetchDirInfoExtraEarly': '1',\n },\n init_msg_handler=success,\n completion_percent=100)\n\ntry:\n with stem.control.Controller.from_port(port=CONTROLLER_PORT) as controller:\n controller.authenticate()\n if not controller:\n failure(\n \"Couldn't connect to Tor, controller.authenticate() failed\")\n\n # Attaches a specific circuit to the given stream (event)\n def attach_stream(event):\n try:\n controller.attach_stream(event.id, circuit_id)\n except (OperationFailed, InvalidRequest), e:\n warning(\n \"Failed to attach stream to %s, unknown circuit. 
Closing stream...\"\n % circuit_id)\n print(\"\\tResponse Code: %s \" % str(e.code))\n print(\"\\tMessage: %s\" % str(e.message))\n controller.close_stream(event.id)\n\n # An event listener, called whenever StreamEvent status changes\n def probe_stream(event):\n if event.status == 'DETACHED':\n if circuit_id is not None:\n warning(\"Stream Detached from circuit {0}...\".format(\n circuit_id))\n else:\n warning(\"Stream Detached from circuit...\")\n print(\"\\t\" + str(vars(event)))\n if event.status == 'NEW' and event.purpose == 'USER':\n attach_stream(event)\n\n controller.add_event_listener(probe_stream, EventType.STREAM)\n\n print(controller.get_info('circuit-status'))\n\n server = CorruptTorServer(TOR_SERVER_PORT, controller)\n server.start()\n server.join()\nexcept Exception as e:\n print e\n tor_process.kill()\nfinally:\n tor_process.kill()\n"
},
{
"alpha_fraction": 0.5894039869308472,
"alphanum_fraction": 0.5894039869308472,
"avg_line_length": 24.16666603088379,
"blob_id": "3491ac0f84acc7b471a8571357671fdfe482b3f7",
"content_id": "a0eb323e10fe6ba8b1a177fd22de84178f72048c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 151,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 6,
"path": "/analysis.py",
"repo_name": "pombredanne/clogging_tor",
"src_encoding": "UTF-8",
"text": "import pandas as pd\n\ndf = pd.read_csv('out.csv')\ndf = df.fillna(df.mean())\ndf_norm = (df - df.mean()) / (df.max() - df.min())\ndf_norm.corr()['Client']\n"
},
{
"alpha_fraction": 0.5165205597877502,
"alphanum_fraction": 0.645987868309021,
"avg_line_length": 28.078432083129883,
"blob_id": "bab5af86b14461e1cd220f854b08f6bffb9aa61e",
"content_id": "6c29a2f2aee03572aa67b1725d5f21847f3559cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1483,
"license_type": "no_license",
"max_line_length": 235,
"num_lines": 51,
"path": "/settings.py",
"repo_name": "pombredanne/clogging_tor",
"src_encoding": "UTF-8",
"text": "import sys\nimport os\nsys.path.append(os.path.join(os.path.dirname(__file__), 'libs'))\nfrom SocksiPy import socks\nfrom datetime import datetime\n\n# zheng.im\nSERVER_ADDRESS = '138.68.58.173'\n#CLIENT_ADDRESS = None\nSERVER_TO_CLIENT_PORT = 8964\n#CORRUPT_TOR_ADDRESS = '54.186.184.40'\nTOR_SERVER_PORT = 8080\nSERVER_TO_TOR_PORT = 6489\nFINGERPRINTS = ['7B3F666CD6665CFF146F61CE005DD19F89DBC23A', '0DDDAAF2FCE825D286D70E99F70BB85FE12660C4', '15999A15088C133AF85AAF73DB74AC5C7B28114D', '7FBD5CCE31EAC5CED96F88ACA9D69656DA75CDF7', 'D4317592208BCB6EDA4572BBB66D4E7913DDAB49']\n\nSOCKS_TYPE = socks.PROXY_TYPE_SOCKS5\nSOCKS_HOST = '127.0.0.1'\nCONNECTION_TIMEOUT = 30 # timeout before we give up on a circuit\n\n\nclass Color:\n HEADER = '\\033[95m'\n BLUE = '\\033[94m'\n SUCCESS = '\\033[92m'\n WARNING = '\\033[93m'\n FAIL = '\\033[91m'\n END = '\\033[0m'\n\n\ndef success(msg):\n sys.stdout.write(Color.SUCCESS + \"{0} {1}\\n\".format(datetime.utcnow(), msg) +\n Color.END)\n sys.stdout.flush()\n\n\ndef warning(msg):\n sys.stdout.write(Color.WARNING + \"{0} {1}\\n\".format(datetime.utcnow(), msg) +\n Color.END)\n sys.stdout.flush()\n\n\ndef failure(msg):\n sys.stdout.write(Color.FAIL + \"{0} [ERROR] {1}\\n\".format(datetime.utcnow(),\n msg) + Color.END)\n sys.stdout.flush()\n sys.exit(-1)\n\n\ndef log(msg):\n sys.stdout.write(\"{0} {1}\\n\".format(datetime.utcnow(), msg))\n sys.stdout.flush()\n"
}
] | 6 |
geoffrey0822/HTNN | https://github.com/geoffrey0822/HTNN | e40c876e829c873815d5443ff8caae93c3b0aa7a | 09e0df91a55dd3d04303303502516f8c22773001 | f1cdbab713d0f4fe56f2d89ca7c4f0d48f90ce9e | refs/heads/master | 2022-11-13T08:01:08.498711 | 2020-07-08T10:14:30 | 2020-07-08T10:14:30 | 273,464,559 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5299856662750244,
"alphanum_fraction": 0.5394548177719116,
"avg_line_length": 39.52325439453125,
"blob_id": "21466263a2421d09a5452451c08d0407831fdf14",
"content_id": "7c7d2ac90dc518790ba8427f6b62d8a6634c8662",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3485,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 86,
"path": "/models/.ipynb_checkpoints/layers-checkpoint.py",
"repo_name": "geoffrey0822/HTNN",
"src_encoding": "UTF-8",
"text": "import torch\nimport torch.nn as nn\nimport os\n\nclass Projection(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, input, weight, forceDisable):\n        ctx.save_for_backward(input, weight, forceDisable)\n        tmp = (weight == weight.max(dim=0, keepdim=True)[0])\n        sigma = tmp.view_as(weight).int().float()\n        sigma_w = sigma.mul(weight)\n        #print(sigma_w)\n        #print(input)\n        #output = input.mm(sigma.mul(weight).t())\n        output = input.mm(sigma_w)\n        return output\n    \n    @staticmethod\n    def backward(ctx, grad_output):\n        input, weight, forceDisable = ctx.saved_tensors\n        #print('backward')\n        grad_input = grad_weight = None\n        tmp = (weight == weight.max(dim=0, keepdim=True)[0])\n        sigma = tmp.view_as(weight).int().float()\n        sigma_w = sigma.mul(weight)\n        #sigma = (weight == weight.max(dim=0, keepdim=True)[0]).view_as(input).int().float()\n        if ctx.needs_input_grad[0] and not forceDisable:\n            grad_input = grad_output.mm(sigma_w.t())\n        if ctx.needs_input_grad[1]:\n            grad_weight = grad_output.t().mm(input)\n        \n        return grad_input, grad_weight, None\n\nclass ClassProjection(nn.Module):\n    def __init__(self, mapfile_path = None, treeNode = None, learnable = False, n_super = 5, n_sub = 10,\n                intermap = None, forceDisable = False):\n        super(ClassProjection, self).__init__()\n        tmp_pair = {} # {subclass, motherclass}\n        i_mother_list = []\n        i_child_list = []\n        self.input_dim = 0 \n        self.output_dim = 0\n        self.forceDisable = None\n        \n        if mapfile_path is not None:\n            with open(mapfile_path, 'r') as f:\n                for ln in f:\n                    fields = [int(field) for field in ln.rstrip('\\n').split(',')]\n                    tmp_pair[fields[1]] = fields[0]\n                    if fields[0] not in i_mother_list:\n                        self.input_dim += 1\n                        i_mother_list.append(fields[0])\n                    self.output_dim += 1\n        elif treeNode is not None:\n            for node in treeNode:\n                tmp_pair[node[1]] = node[0]\n                if node[0] not in i_mother_list:\n                    self.input_dim += 1\n                    i_mother_list.append(node[0])\n                self.output_dim += 1\n        else:\n            self.input_dim = n_super\n            self.output_dim = n_sub\n        self.weight 
= nn.Parameter(torch.Tensor(self.input_dim, self.output_dim))\n self.weight.data.uniform_(0.0, 1.0)\n if not learnable:\n self.weight.requires_grad = False\n print('The weight will be auto initialized.')\n if mapfile_path is not None or treeNode is not None:\n self.weight = nn.Parameter(torch.Tensor(self.input_dim, self.output_dim))\n self.weight.data.zero_()\n if not learnable:\n self.weight.requires_grad = False\n if intermap is None:\n for subcls in tmp_pair.keys():\n self.weight.data[tmp_pair[subcls], subcls] = 1.0\n else:\n for subcls in tmp_pair.keys():\n for basecls in intermap[subcls]:\n self.weight.data[tmp_pair[subcls], basecls] = 1.0\n \n def forward(self, input):\n return Projection.apply(input, self.weight, self.forceDisable)\n \n #def extra_repr(self):\n #return None\n"
},
{
"alpha_fraction": 0.4363046884536743,
"alphanum_fraction": 0.4453808665275574,
"avg_line_length": 36.5121955871582,
"blob_id": "c4e9df40ec497eec0ca3c160acbdeefdaa2ab14b",
"content_id": "9272932b2fe1f2e65dfe2039bedfe39f65e678c5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3085,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 82,
"path": "/models/.ipynb_checkpoints/dummy-checkpoint.py",
"repo_name": "geoffrey0822/HTNN",
"src_encoding": "UTF-8",
"text": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\nimport os\nimport cv2\nfrom . layers import ClassProjection\n\n\nclass FeatureANN(nn.Module):\n def __init__(self, input_dim, classTree_path, with_aux = True):\n super(FeatureANN, self).__init__()\n self.n_bin = 0\n bins = []\n bin_uniques = []\n i_bins = []\n self.with_aux = with_aux\n with open(classTree_path, 'r') as f:\n for ln in f:\n nodes = [int(field) for field in ln.rstrip('\\n').split(',')]\n n_node = len(nodes)\n if bins == []:\n for i in range(1, n_node):\n bins.append([])\n bin_uniques.append([])\n i_bins.append({})\n self.n_bin += 1\n bin_uniques.append([])\n for i in range(1, n_node):\n bins[i-1].append([nodes[i-1], nodes[i]])\n if nodes[i-1] not in bin_uniques[i-1]:\n bin_uniques[i-1].append(nodes[i-1])\n if nodes[i] not in i_bins[i-1]:\n i_bins[i-1][nodes[i]] = []\n i_bins[i-1][nodes[i]].append(nodes[-1])\n if nodes[-1] not in bin_uniques[-1]:\n bin_uniques[-1].append(nodes[-1])\n \n self.proj_layers = []\n self.fc_s = []\n i = 0\n for ibin in bins:\n output_dim = len(bin_uniques[i])\n self.proj_layers.append(ClassProjection(treeNode = ibin, intermap=i_bins[i]))\n self.fc_s.append(nn.Linear(input_dim, output_dim))\n i += 1\n \n #define back-bone network layers\n output_dim = len(bin_uniques[-1])\n self.fc_s.append(nn.Linear(input_dim, output_dim))\n \n def backbone(self, x):\n return x\n \n def forward(self, x):\n if self.with_aux:\n y = []\n y_ = None\n for i in range(self.n_bin):\n _y = F.softmax(self.fc_s[i](self.backbone(x)), dim=1)\n y.append(_y)\n if y_ is None:\n y_ = self.proj_layers[i](_y)\n else:\n #y.add(self.proj_layers[i](F.softmax(self.fc_s[i](x)))) # sum\n y_ = y_.mul(self.proj_layers[i](_y)) # elementwise product\n b_y = F.softmax(self.fc_s[-1](self.backbone(x)), dim=1)\n y.append(b_y)\n y_ = y_.mul(b_y)\n return y_, y\n else:\n y_ = None\n for i in range(self.n_bin):\n _y = F.softmax(self.fc_s[i](self.backbone(x)), dim=1)\n if y_ is 
None:\n y_ = self.proj_layers[i](_y)\n else:\n #y.add(self.proj_layers[i](F.softmax(self.fc_s[i](x)))) # sum\n y_ = y_.mul(self.proj_layers[i](_y)) # elementwise product\n b_y = F.softmax(self.fc_s[-1](self.backbone(x)), dim=1)\n y_ = y_.mul(b_y)\n return y_, None\n \n"
},
{
"alpha_fraction": 0.4711662530899048,
"alphanum_fraction": 0.4901736378669739,
"avg_line_length": 41.77540588378906,
"blob_id": "04a9ccbde4a9e8de6561ccc0ea4a7f7f1d4812a0",
"content_id": "2be9a01f084460d01d94169240ad53682927e388",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 34092,
"license_type": "no_license",
"max_line_length": 141,
"num_lines": 797,
"path": "/models/vision.py",
"repo_name": "geoffrey0822/HTNN",
"src_encoding": "UTF-8",
"text": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\nimport os\nimport cv2\nfrom . layers import ClassProjection\n\n\nclass LeNet5(nn.Module):\n\n def __init__(self, n_classes):\n super(LeNet5, self).__init__()\n \n self.feature_extractor = nn.Sequential( \n nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5, stride=1),\n nn.Tanh(),\n nn.AvgPool2d(kernel_size=2),\n nn.Conv2d(in_channels=6, out_channels=32, kernel_size=5, stride=1),\n nn.Tanh(),\n nn.AvgPool2d(kernel_size=2),\n nn.Conv2d(in_channels=32, out_channels=256, kernel_size=5, stride=1),\n nn.Tanh()\n )\n self.drop = nn.Dropout(p=0.7)\n self.fc_1 = nn.Linear(in_features=256, out_features=128)\n self.fc_2 = nn.Linear(in_features=128, out_features=n_classes)\n self.activate_1 = nn.Tanh()\n \n \n self.input_dim = [3, 32, 32]\n self.feat_dim = 128\n\n\n def forward(self, x):\n x = self.feature_extractor(x)\n x = torch.flatten(x, 1)\n #x_1 = self.activate_1(self.fc_1(self.dropout(x)))\n x_1 = self.activate_1(self.fc_1(x))\n return x_1, self.fc_2(x_1)\n\n\nclass AlexNet(nn.Module):\n\n def __init__(self, n_classes=1000):\n super(AlexNet, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=3, stride=2),\n nn.Conv2d(64, 192, kernel_size=5, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=3, stride=2),\n nn.Conv2d(192, 384, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(384, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(256, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=3, stride=2),\n )\n self.avgpool = nn.AdaptiveAvgPool2d((6, 6))\n \n self.drop_1 = nn.Dropout(p=0.7)\n self.drop_2 = nn.Dropout(p=0.7)\n self.fc_1 = nn.Linear(256 * 6 * 6, 4096)\n self.fc_2 = nn.Linear(4096, 4096)\n self.fc_3 = nn.Linear(4096, n_classes)\n \n self.input_dim = [3, 224, 224]\n self.feat_dim 
= 4096\n\n def forward(self, x):\n x = self.features(x)\n x = self.avgpool(x)\n x = torch.flatten(x, 1)\n #x_1 = F.relu(self.fc_2(self.drop_2(F.relu(self.fc_1(self.drop_1(x))))))\n x_1 = F.relu(self.fc_2(F.relu(self.fc_1(x))))\n return x_1, self.fc_3(x_1)\n\n\nclass AlexNet32(nn.Module):\n\n def __init__(self, n_classes=1000):\n super(AlexNet32, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=5),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(64, 192, kernel_size=5, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(192, 384, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(384, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(256, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n self.classifier = nn.Linear(256, n_classes)\n self.input_dim = [3, 32, 32]\n self.feat_dim = 256\n\n def forward(self, x):\n x = self.features(x)\n x = torch.flatten(x, 1)\n x_1 = self.classifier(x)\n #x = self.classifier(x)\n return x, x_1\n\n\nclass AlexNet32_B(nn.Module):\n\n def __init__(self, n_classes=1000):\n super(AlexNet32_B, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=5),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(64, 192, kernel_size=5, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(192, 384, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(384, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(256, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n self.drop = nn.Dropout(p=0.5)\n self.fc1 = nn.Linear(256, 1024)\n self.classifier = nn.Linear(1024, n_classes)\n self.input_dim = [3, 32, 32]\n self.feat_dim = 1024\n\n def forward(self, x):\n x = 
self.features(x)\n x = self.drop(F.relu(self.fc1(torch.flatten(x, 1))))\n x_1 = self.classifier(x)\n #x = self.classifier(x)\n return x, x_1\n\n\nclass AlexNet32_C(nn.Module):\n\n def __init__(self, n_classes=1000, feature_dim=256):\n super(AlexNet32_C, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=5),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(64, 192, kernel_size=5, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(192, 384, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(384, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(256, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n self.drop = nn.Dropout(p=0.5)\n self.fc1 = nn.Linear(256, feature_dim)\n self.classifier = nn.Linear(feature_dim, n_classes)\n self.input_dim = [3, 32, 32]\n self.feat_dim = feature_dim\n\n def forward(self, x):\n x = self.features(x)\n x = self.drop(F.relu(self.fc1(torch.flatten(x, 1))))\n x_1 = self.classifier(x)\n #x = self.classifier(x)\n return x, x_1\n\n\nclass AlexNet32_D(nn.Module):\n\n def __init__(self, n_classes=1000, cdim=1024, fdim=256):\n super(AlexNet32_D, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(384, cdim, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(cdim, fdim, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n self.common_features = nn.Sequential(\n nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=5),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(64, 192, kernel_size=5, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(192, 384, kernel_size=3, padding=1),\n nn.ReLU(inplace=True)\n )\n self.classifier = nn.Linear(fdim, n_classes)\n self.input_dim = [3, 32, 32]\n self.feat_dim = fdim\n\n def 
forward(self, x):\n x = self.features(self.common_features(x))\n x = torch.flatten(x, 1)\n x_1 = self.classifier(x)\n #x = self.classifier(x)\n return x, x_1\n\n\nclass FlexiNet(nn.Module):\n\n def __init__(self, backbone, n_classes=1000):\n super(FlexiNet, self).__init__()\n self.features = None\n self.common_features = backbone\n self.classifier = nn.Linear(feature_dim, n_classes)\n self.input_dim = [3, 32, 32]\n self.feat_dim = feature_dim\n self.flexiable = True\n\n def createTasks(feature_dims):\n self.subfeats = nn.ModuleList()\n for fdim in feature_dims:\n feats = nn.Sequential(\n nn.Conv2d(384, fdim, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(feature_dim, fdim, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n self.subfeats.append(feats)\n \n def forward(self, x):\n x = self.features(self.common_features(x))\n x = torch.flatten(x, 1)\n x_1 = self.classifier(x)\n #x = self.classifier(x)\n return x, x_1\n\n\nclass HTCNN(nn.Module):\n def __init__(self, classTree_path, with_aux = True, with_fc = True, backbone = None, \n feat_dim = 0, isCuda = False, isConditionProb = True, coastBack = True, weights=None, autosizeFC = False):\n super(HTCNN, self).__init__()\n \n self.autosizeFC = autosizeFC\n \n self.n_bin = 0\n self.coastBack = coastBack\n bins = []\n bin_uniques = []\n i_bins = []\n self.with_aux = with_aux\n self.with_fc = with_fc\n self.isConditionProb = isConditionProb\n with open(classTree_path, 'r') as f:\n for ln in f:\n nodes = [int(field) for field in ln.rstrip('\\n').split(',')]\n n_node = len(nodes)\n if bins == []:\n for i in range(1, n_node):\n bins.append([])\n bin_uniques.append([])\n i_bins.append({})\n self.n_bin += 1\n bin_uniques.append([])\n for i in range(1, n_node):\n bins[i-1].append([nodes[i-1], nodes[i]])\n if nodes[i-1] not in bin_uniques[i-1]:\n bin_uniques[i-1].append(nodes[i-1])\n if nodes[i] not in i_bins[i-1]:\n i_bins[i-1][nodes[i]] = []\n 
i_bins[i-1][nodes[i]].append(nodes[-1])\n if nodes[-1] not in bin_uniques[-1]:\n bin_uniques[-1].append(nodes[-1])\n \n output_dim = len(bin_uniques[-1])\n if backbone is not None:\n self.backbone_nn = backbone\n input_dim = self.backbone_nn.feat_dim\n else:\n self.backbone_nn = None\n input_dim = feat_dim\n if input_dim == 0:\n input_dim = 128\n \n self.fc = True\n \n self.proj_layers = nn.ModuleList()\n self.fc_s = nn.ModuleList()\n self.coast_f = nn.ModuleList()\n self.activation_func = nn.LogSoftmax(dim=1)\n #self.activation_func = nn.Softmax(dim=1)\n #self.activation_func = nn.Sigmoid()\n #self.activation_func = nn.ELU()\n i = 0\n n_fine = len(bin_uniques[-1])\n for ibin in bins:\n output_dim = len(bin_uniques[i])\n f_input_dim = input_dim\n if autosizeFC:\n f_input_dim = int(np.ceil(input_dim*float(output_dim)/n_fine))\n forceDisable = not coastBack\n if isCuda:\n self.proj_layers.append(ClassProjection(treeNode = ibin, intermap=i_bins[i], forceDisable = forceDisable).cuda())\n if with_fc:\n self.fc_s.append(nn.Linear(f_input_dim, output_dim).cuda())\n self.coast_f.append(nn.Linear(input_dim, f_input_dim).cuda())\n else:\n self.proj_layers.append(ClassProjection(treeNode = ibin, intermap=i_bins[i], forceDisable = forceDisable))\n if with_fc:\n self.fc_s.append(nn.Linear(input_dim, output_dim))\n self.coast_f.append(nn.Linear(input_dim, f_input_dim))\n \n i += 1\n \n #define back-bone network layers\n if with_fc:\n output_dim = len(bin_uniques[-1])\n if isCuda:\n self.fc_s.append(nn.Linear(input_dim, output_dim).cuda())\n else:\n self.fc_s.append(nn.Linear(input_dim, output_dim))\n \n self.weights = []\n if weights is not None:\n if len(weights) != self.n_bin+1:\n raise Exception('number of weight must be the same as number of bin including the fine.')\n self.weights = weights\n else:\n for ibin in range(self.n_bin+1):\n self.weights.append(1.0)\n \n def backbone(self, x):\n if self.backbone_nn is not None:\n return self.backbone_nn(x)\n else:\n return x\n \n 
def processOutput(self, x):\n #y = F.softmax(x, dim=1)\n y = x\n return y\n \n def processCoast(self, x, i):\n if self.autosizeFC:\n return self.coast_f[i](x)\n else:\n return x\n \n def forward(self, x):\n # output nodes should be in ordered {coarst 1, coarst 2, ..., coarst n, fine} for n-coarst problem\n feat_y, b_y = self.backbone(x)\n if self.with_fc:\n if self.with_aux:\n y = []\n y_ = None\n for i in range(self.n_bin):\n _y = self.fc_s[i](self.processCoast(feat_y, i))\n y.append(self.activation_func(_y))\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n y.append(self.activation_func(b_y))\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), y\n else:\n y_ = None\n for i in range(self.n_bin):\n _y = self.fc_s[i](feat_y)\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n\n if self.isConditionProb:\n y_ = torch.mul(y_, self.activation_func(self.processOutput(torch.mul(b_y, self.weights[-1]))))\n else:\n y_ = torch.add(y_, self.activation_func(self.processOutput(torch.mul(b_y, self.weights[-1]))))\n return self.activation_func(y_), None\n else:\n if self.with_aux:\n y = []\n y_ = None\n for i in range(self.n_bin):\n _y = b_y\n y.append(self.activation_func(_y))\n if y_ is None:\n y_ = 
self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n y.append(self.activation_func(b_y))\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), y\n else:\n y_ = None\n for i in range(self.n_bin):\n _y = b_y\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), None\n\n\nclass HTCNN_M(nn.Module):\n def __init__(self, classTree_path, with_aux = True, with_fc = True, backbones = None, \n feat_dim = [], isCuda = False, isConditionProb = True, coastBack = True, weights = None):\n super(HTCNN_M, self).__init__()\n \n \n self.n_bin = 0\n self.coastBack = coastBack\n bins = []\n bin_uniques = []\n i_bins = []\n self.with_aux = with_aux\n self.with_fc = with_fc\n self.isConditionProb = isConditionProb\n self.activation_func = nn.LogSoftmax(dim=1)\n #self.activation_func = nn.Softmax(dim=1)\n #self.activation_func = nn.Sigmoid()\n #self.activation_func = nn.ELU()\n with open(classTree_path, 'r') as f:\n for ln in f:\n nodes = [int(field) for field in ln.rstrip('\\n').split(',')]\n n_node = len(nodes)\n if bins == []:\n for i in 
range(1, n_node):\n bins.append([])\n bin_uniques.append([])\n i_bins.append({})\n self.n_bin += 1\n bin_uniques.append([])\n for i in range(1, n_node):\n bins[i-1].append([nodes[i-1], nodes[i]])\n if nodes[i-1] not in bin_uniques[i-1]:\n bin_uniques[i-1].append(nodes[i-1])\n if nodes[i] not in i_bins[i-1]:\n i_bins[i-1][nodes[i]] = []\n i_bins[i-1][nodes[i]].append(nodes[-1])\n if nodes[-1] not in bin_uniques[-1]:\n bin_uniques[-1].append(nodes[-1])\n \n output_dim = len(bin_uniques[-1])\n self.input_dims = []\n if backbones is not None:\n self.backbones = backbones\n for backbone in self.backbones:\n self.input_dims.append(backbone.feat_dim)\n else:\n self.backbones = None\n for input_dim in feat_dim:\n self.input_dims.append(input_dim)\n self.fc = True\n \n self.proj_layers = nn.ModuleList()\n self.fc_s = nn.ModuleList()\n i = 0\n for ibin in bins:\n forceDisable = not coastBack\n output_dim = len(bin_uniques[i])\n if isCuda:\n self.proj_layers.append(ClassProjection(treeNode = ibin, intermap=i_bins[i], forceDisable=forceDisable).cuda())\n if with_fc:\n self.fc_s.append(nn.Linear(self.input_dims[i], output_dim).cuda())\n else:\n self.proj_layers.append(ClassProjection(treeNode = ibin, intermap=i_bins[i], forceDisable=forceDisable))\n if with_fc:\n self.fc_s.append(nn.Linear(self.input_dims[i], output_dim))\n \n i += 1\n \n #define back-bone network layers\n if with_fc:\n output_dim = len(bin_uniques[-1])\n if isCuda:\n self.fc_s.append(nn.Linear(self.input_dims[-1], output_dim).cuda())\n else:\n self.fc_s.append(nn.Linear(self.input_dims[-1], output_dim))\n \n if (len(bins)+1)!=len(self.backbones):\n raise Exception('(%d vs %d)The number of backbone networks must be the same as number of bin'%(len(bins)+1,len(self.backbones)))\n \n self.weights = []\n if weights is not None:\n if len(weights) != self.n_bin+1:\n raise Exception('number of weight must be the same as number of bin including the fine.')\n self.weights = weights\n else:\n for ibin in 
range(self.n_bin+1):\n self.weights.append(1.0)\n \n def backbone(self, x):\n output = [] \n if self.backbones is not None:\n for backbone in self.backbones:\n output.append(backbone(x))\n return output\n else:\n return x\n \n def processOutput(self, x):\n #y = F.softmax(x, 1)\n y = x\n return y\n \n def forward(self, x):\n # output nodes should be in ordered {coarst 1, coarst 2, ..., coarst n, fine} for n-coarst problem\n back_results = self.backbone(x)\n if self.with_fc:\n if self.with_aux:\n y = []\n y_ = None\n for i in range(self.n_bin):\n feat_y, b_y = back_results[i]\n _y = self.fc_s[i](feat_y)\n y.append(self.activation_func(_y))\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n y.append(self.activation_func(b_y))\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), y\n else:\n y_ = None\n for i in range(self.n_bin):\n _y = self.fc_s[i](feat_y)\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), None\n else:\n if 
self.with_aux:\n y = []\n y_ = None\n for i in range(self.n_bin):\n _y = back_results[i][1]\n y.append(self.activation_func(_y))\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n y.append(self.activation_func(b_y))\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), y\n else:\n y_ = None\n for i in range(self.n_bin):\n feat_y, b_y = back_results[i]\n _y = b_y\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), None\n\n\nclass HTCNN_M_IN(nn.Module):\n def __init__(self, classTree_path, with_aux = True, with_fc = True, backbones = None, \n feat_dim = [], isCuda = False, isConditionProb = True, coastBack = True, weights = None):\n super(HTCNN_M_IN, self).__init__()\n \n \n self.n_bin = 0\n self.coastBack = coastBack\n bins = []\n bin_uniques = []\n i_bins = []\n self.with_aux = with_aux\n self.with_fc = with_fc\n self.isConditionProb = isConditionProb\n self.activation_func = nn.LogSoftmax(dim=1)\n with open(classTree_path, 'r') as 
f:\n for ln in f:\n nodes = [int(field) for field in ln.rstrip('\\n').split(',')]\n n_node = len(nodes)\n if bins == []:\n for i in range(1, n_node):\n bins.append([])\n bin_uniques.append([])\n i_bins.append({})\n self.n_bin += 1\n bin_uniques.append([])\n for i in range(1, n_node):\n bins[i-1].append([nodes[i-1], nodes[i]])\n if nodes[i-1] not in bin_uniques[i-1]:\n bin_uniques[i-1].append(nodes[i-1])\n if nodes[i] not in i_bins[i-1]:\n i_bins[i-1][nodes[i]] = []\n i_bins[i-1][nodes[i]].append(nodes[-1])\n if nodes[-1] not in bin_uniques[-1]:\n bin_uniques[-1].append(nodes[-1])\n \n output_dim = len(bin_uniques[-1])\n self.input_dims = []\n if backbones is not None:\n self.backbones = backbones\n for backbone in self.backbones:\n self.input_dims.append(backbone.feat_dim)\n else:\n self.backbones = None\n for input_dim in feat_dim:\n self.input_dims.append(input_dim)\n self.fc = True\n \n self.proj_layers = nn.ModuleList()\n self.fc_s = nn.ModuleList()\n i = 0\n for ibin in bins:\n forceDisable = not coastBack\n output_dim = len(bin_uniques[i])\n if isCuda:\n self.proj_layers.append(ClassProjection(treeNode = ibin, intermap=i_bins[i], forceDisable=forceDisable).cuda())\n if with_fc:\n self.fc_s.append(nn.Linear(self.input_dims[i], output_dim).cuda())\n else:\n self.proj_layers.append(ClassProjection(treeNode = ibin, intermap=i_bins[i], forceDisable=forceDisable))\n if with_fc:\n self.fc_s.append(nn.Linear(self.input_dims[i], output_dim))\n \n i += 1\n \n #define back-bone network layers\n if with_fc:\n output_dim = len(bin_uniques[-1])\n if isCuda:\n self.fc_s.append(nn.Linear(self.input_dims[-1], output_dim).cuda())\n else:\n self.fc_s.append(nn.Linear(self.input_dims[-1], output_dim))\n \n if (len(bins)+1)!=len(self.backbones):\n raise Exception('(%d vs %d)The number of backbone networks must be the same as number of bin'%(len(bins)+1,len(self.backbones)))\n \n self.weights = []\n if weights is not None:\n if len(weights) != self.n_bin+1:\n raise 
Exception('number of weight must be the same as number of bin including the fine.')\n self.weights = weights\n else:\n for ibin in range(self.n_bin+1):\n self.weights.append(1.0)\n \n def backbone(self, x):\n output = [] \n if self.backbones is not None:\n i = 0\n for backbone in self.backbones:\n output.append(backbone(x[i]))\n i += 1\n return output\n else:\n return x\n \n def processOutput(self, x):\n y = F.softmax(x, 1)\n return x\n \n def forward(self, x):\n # output nodes should be in ordered {coarst 1, coarst 2, ..., coarst n, fine} for n-coarst problem\n back_results = self.backbone(x)\n if self.with_fc:\n if self.with_aux:\n y = []\n y_ = None\n for i in range(self.n_bin):\n feat_y, b_y = back_results[i]\n _y = self.fc_s[i](feat_y)\n y.append(self.activation_func(_y))\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n y.append(self.activation_func(b_y))\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), y\n else:\n y_ = None\n for i in range(self.n_bin):\n _y = self.fc_s[i](feat_y)\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n 
else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), None\n else:\n if self.with_aux:\n y = []\n y_ = None\n for i in range(self.n_bin):\n feat_y, b_y = back_results[i]\n _y = b_y\n y.append(self.activation_func(_y))\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n y.append(self.activation_func(b_y))\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), y\n else:\n y_ = None\n for i in range(self.n_bin):\n feat_y, b_y = back_results[i]\n _y = b_y\n if y_ is None:\n y_ = self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))\n else:\n if self.isConditionProb:\n y_ = torch.mul(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # elementwise product\n else:\n y_ = torch.add(y_, self.proj_layers[i](self.processOutput(torch.mul(_y, self.weights[i])))) # sum\n b_y = back_results[-1][1]\n if self.isConditionProb:\n y_ = torch.mul(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n else:\n y_ = torch.add(y_, self.processOutput(torch.mul(b_y, self.weights[-1])))\n return self.activation_func(y_), None\n"
},
{
"alpha_fraction": 0.5376344323158264,
"alphanum_fraction": 0.5672042965888977,
"avg_line_length": 22.25,
"blob_id": "ee5fde4ba3904cf620ea6fc01cbb78921b8f9a15",
"content_id": "98865c22ede89b850b3c1f69d3bfc4302c01226b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 372,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 16,
"path": "/addons/.ipynb_checkpoints/amath-checkpoint.py",
"repo_name": "geoffrey0822/HTNN",
"src_encoding": "UTF-8",
"text": "import operator as op\nfrom functools import reduce\nimport math\n\ndef ncr(n, r):\n r = min(r, n-r)\n numer = reduce(op.mul, range(n, n-r, -1), 1)\n denom = reduce(op.mul, range(1, r+1), 1)\n return numer // denom # or / in Python 2\n\ndef ncr2(n):\n output = 0\n for i in range(1, n):\n x = math.log(math.pow(2, i), 2)\n output += x\n return output\n"
},
{
"alpha_fraction": 0.5285337567329407,
"alphanum_fraction": 0.5396063327789307,
"avg_line_length": 40.38895797729492,
"blob_id": "b6d4bf58a994b9e00878a5849ec720efd15655d9",
"content_id": "dc0fd3b40a44337404b64844fa723ce3b4d0ff74",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 67464,
"license_type": "no_license",
"max_line_length": 153,
"num_lines": 1630,
"path": "/train.py",
"repo_name": "geoffrey0822/HTNN",
"src_encoding": "UTF-8",
"text": "# +\nimport os\nimport numpy as np\nimport torch\nimport torch.optim as optim\nimport torch.nn as nn\nfrom torchvision import transforms\nfrom albumentations import (\n HorizontalFlip, VerticalFlip, ShiftScaleRotate,\n GaussNoise, MotionBlur, MedianBlur, IAAPiecewiseAffine,\n RandomCrop, Resize, GridDropout, Compose, Normalize, PadIfNeeded\n)\nfrom albumentations.pytorch import ToTensor, ToTensorV2\n#from torch.utils.tensorboard import SummaryWriter\nimport utils\nimport random\nimport time\nimport cv2\nfrom PIL import Image\nimport models.layers\nimport addons.trees as trees\nfrom models.vision import HTCNN, HTCNN_M, HTCNN_M_IN, LeNet5, AlexNet, AlexNet32, AlexNet32_B, AlexNet32_C, AlexNet32_D\nimport argparse\nimport threading\nfrom time import sleep\nfrom torchviz import make_dot, make_dot_from_trace\nfrom addons import amath\n\n__all__ = (\"error\", \"LockType\", \"start_new_thread\", \"interrupt_main\", \"exit\", \"allocate_lock\", \"get_ident\", \"stack_size\", \"acquire\", \"release\", \"locked\")\n\n\n# -\n\nclass DataThread (threading.Thread):\n def __init__(self, indices, img_paths, preprocess_1 = None, preprocess_2 = None):\n threading.Thread.__init__(self)\n self.indices = indices\n self.img_paths = img_paths\n self.preprocess_1 = preprocess_1\n self.preprocess_2 = preprocess_2\n \n def run(self):\n processData(self.indices, self.img_paths, self.preprocess_1, self.preprocess_2)\n\n\ndef loadData(data_path, data_file):\n output = []\n with open(data_file, 'r') as f:\n for ln in f:\n fields = ln.rstrip('\\n').split(',')\n output.append([os.path.join(data_path,fields[0]), int(fields[1])])\n return output\n\ndef processData(indices, img_paths, preprocess_1 = None, preprocess_2 = None):\n global tmp_output\n for idx in indices:\n p_data = Image.open(img_paths[idx])\n if preprocess_1 is not None:\n p_data = preprocess_1(p_data)\n if preprocess_2 is not None:\n p_data = preprocess_2(p_data)\n tmp_output[idx] = p_data.unsqueeze(0)\n\n\ndef 
loadInBatch(ds, r = 0, batchsize = 16, shuffle=False, preprocessor=None, im_size=None, onehot=False):\n output_data = None\n aux_labels = []\n fine_labels = None\n i = 0\n ndata = len(ds)\n hasDone = False\n im_width = im_size[2]\n im_height = im_size[1]\n im_ch = im_size[0]\n output_data = torch.zeros(batchsize, im_ch, im_height, im_width, device=device)\n while i<batchsize:\n data_rec = ds[r][0]\n img_data = None\n data_blob = None\n \n #img_data = Image.open(data_rec)\n #data_blob = preprocessor(img_data).unsqueeze(0)\n img_data = cv2.imread(data_rec)\n data_blob = preprocessor(image=img_data)['image'].unsqueeze(0)\n base_label = ds[r][1] \n output_data[i, ...] = data_blob\n if aux_labels == []:\n j = 0\n for lv in lookup_lv_list:\n if onehot:\n output_label = torch.zeros(batchsize, coarst_dims[j], device=device).long()\n else:\n output_label = torch.zeros(batchsize, device=device).long()\n aux_labels.append(output_label)\n j += 1\n if fine_labels is None:\n if onehot:\n fine_labels = torch.zeros(batchsize, n_fine, device=device).long()\n else:\n fine_labels = torch.zeros(batchsize, device=device).long()\n j = 0\n for lv in lookup_lv_list:\n up_cls = lookupParent(classTree, base_label, lv)\n if onehot:\n aux_labels[j].data[i, up_cls] = 1\n else:\n aux_labels[j].data[i] = up_cls\n j += 1\n if onehot:\n fine_labels.data[i, base_label] = 1\n else:\n fine_labels.data[i] = base_label\n r += 1\n if r >= ndata:\n r = 0\n hasDone = True\n if shuffle:\n random.shuffle(ds)\n i += 1\n \n output_data = output_data.to(device)\n return output_data, aux_labels, fine_labels, r, hasDone\n\ndef loadInBatch_hp(ds, r = 0, batchsize = 16, shuffle=False, preprocessor=None, im_size=None, nThread=2):\n global tmp_output\n output_data = None\n aux_labels = []\n fine_labels = None\n i = 0\n ndata = len(ds)\n hasDone = False\n tmp_output = []\n img_paths = []\n \n im_width = im_size[2]\n im_height = im_size[1]\n im_ch = im_size[0]\n output_data = torch.zeros(batchsize, im_ch, im_height, 
im_width, device=device)\n while i<batchsize:\n tmp_output.append(None)\n i+=1\n i = 0\n while i<batchsize:\n data_rec = ds[r][0]\n img_data = None\n data_blob = None\n #_thread.start_new_thread(processData, (i, data_rec, preprocessor, None))\n img_paths.append(data_rec)\n base_label = ds[r][1] \n if aux_labels == []:\n j = 0\n for lv in lookup_lv_list:\n output_label = torch.zeros(batchsize, coarst_dims[j]).long().to(device)\n output_label.require_grad = False\n aux_labels.append(output_label)\n j += 1\n if fine_labels is None:\n fine_labels = torch.zeros(batchsize, n_fine).long().to(device)\n j = 0\n for lv in lookup_lv_list:\n up_cls = lookupParent(classTree, base_label, lv)\n aux_labels[j].data[i, up_cls] = 1\n j += 1\n fine_labels.data[i, base_label] = 1\n r += 1\n if r >= ndata:\n r = 0\n hasDone = True\n if shuffle:\n random.shuffle(ds)\n i += 1\n allDone = False\n idx_i = 0\n thd = []\n n_index = int(np.ceil(float(batchsize)/nThread))\n for ni in range(nThread):\n targets = []\n for ii in range(n_index):\n targets.append(idx_i)\n idx_i+=1\n if idx_i>=batchsize:\n break\n thd.append(DataThread(targets, img_paths, preprocessor, None))\n thd[ni].start()\n for ithd in thd:\n ithd.join()\n thd = []\n i = 0\n while i<batchsize:\n output_data[i, ...] 
= tmp_output[i]\n i+=1\n tmp_output = []\n #output_data.require_grad = False\n fine_labels.require_grad = False\n return output_data, aux_labels, fine_labels, r, hasDone\n\n\ndef loadInBatch_mblob(ds, r = 0, batchsize = 16, shuffle=False, preprocessors=None, im_sizes=None,\n general_preprocess = None):\n output_data = []\n aux_labels = []\n fine_labels = None\n i = 0\n n_output = 0\n if preprocessors is not None:\n n_output = len(preprocessors)\n else:\n n_output = len(im_sizes)\n ndata = len(ds)\n hasDone = False\n if preprocessors is None:\n raise Exception('Preprocessors cannot be empty.')\n while i<batchsize:\n data_rec = ds[r][0]\n img_data = None\n data_blob = None\n img_data = Image.open(data_rec)\n base_label = ds[r][1] \n if general_preprocess is not None:\n img_data = general_preprocess(img_data)\n for i_output in range(n_output):\n im_width = im_sizes[i_output][2]\n im_height = im_sizes[i_output][1]\n im_ch = im_sizes[i_output][0]\n local_output_data = None\n data_blob = preprocessors[i_output](img_data).unsqueeze(0)\n \n if output_data != [] and len(output_data)>i_output:\n local_output_data = output_data[i_output]\n else:\n local_output_data = torch.zeros(batchsize, im_ch, im_height, im_width, device=device)\n output_data.append(local_output_data)\n local_output_data[i, ...] 
= data_blob\n if aux_labels == []:\n j = 0\n for lv in lookup_lv_list:\n output_label = torch.zeros(batchsize, coarst_dims[j]).long().to(device)\n output_label.require_grad = False\n aux_labels.append(output_label)\n j += 1\n if fine_labels is None:\n fine_labels = torch.zeros(batchsize, n_fine).long().to(device)\n j = 0\n for lv in lookup_lv_list:\n up_cls = lookupParent(classTree, base_label, lv)\n aux_labels[j].data[i, up_cls] = 1\n j += 1\n fine_labels.data[i, base_label] = 1\n r += 1\n if r >= ndata:\n r = 0\n hasDone = True\n if shuffle:\n random.shuffle(ds)\n i += 1\n \n \n fine_labels.require_grad = False\n return output_data, aux_labels, fine_labels, r, hasDone\n\ndef lookupParent(tree, fine_node, upper_lv=1):\n return tree[fine_node][upper_lv-1]\n\ndef accumulateList(list1, list2):\n output = []\n for i in range(len(list1)):\n output.append((list1[i] + list2[i]) * 0.5)\n return output\n\ndef computeBatchAccuracy(pred, expected, onehot=False):\n output = []\n n_output = len(pred)\n n_batch = pred[0].shape[0]\n for i in range(n_output):\n local_result = 0.0\n for j in range(n_batch):\n cls_pred = pred[i][j].argmax()\n if onehot:\n cls_exp = expected[i][j,...].argmax()\n else:\n cls_exp = expected[i][j]\n #print((cls_pred, cls_exp))\n if cls_pred == cls_exp:\n local_result += 1.0\n local_result /= n_batch\n output.append(local_result)\n return output\n\ndef computeAccuracy(dataset, model, batchsize = 1, withAux = False, preprocessor = None, withLoss = None):\n data_count = len(dataset)\n ptr = 0\n batch_len = int(np.floor(float(data_count)/batchsize))\n batch_elen = int(np.ceil(float(data_count)/batchsize))\n output = []\n aux_output = []\n loss_v = 0\n for i in range(batch_len):\n batch_data, expected_aux, expected_fine, ptr, _ = loadInBatch(dataset, ptr, batchsize, preprocessor=preprocessor,\n im_size = input_sizes)\n pred_final, pred_aux = model(batch_data)\n if withLoss is not None:\n v_loss = withLoss(pred_final, expected_fine).item()\n loss_v += 
v_loss\n batch_result = computeBatchAccuracy([pred_final], [expected_fine])\n if output == []:\n output = batch_result\n else:\n for j in range(len(output)):\n output[j] += batch_result[j]\n if withAux:\n batch_aux_result = computeBatchAccuracy(pred_aux, expected_aux + [expected_fine])\n if aux_output == []:\n aux_output = batch_aux_result\n else:\n for j in range(len(aux_output)):\n aux_output[j] += batch_aux_result[j]\n if batchsize!=1 and batch_len != batch_elen:\n tmp_batchsize = data_count - ptr\n batch_data, expected_aux, expected_fine, ptr, _ = loadInBatch(dataset, ptr, tmp_batchsize, preprocessor=preprocessor,\n im_size = input_sizes)\n pred_final, pred_aux = model(batch_data)\n batch_result = computeBatchAccuracy([pred_final], [expected_fine])\n for j in range(len(output)):\n output[j] += batch_result[j]\n output[j] /= batch_len + 1\n if withAux:\n batch_aux_result = computeBatchAccuracy(pred_aux, expected_aux + [expected_fine])\n for j in range(len(aux_output)):\n aux_output[j] += batch_aux_result[j]\n aux_output[j] /= batch_len + 1\n else:\n print('damn')\n for j in range(len(output)):\n output[j] /= batch_len\n if withAux:\n for j in range(len(aux_output)):\n aux_output[j] /= batch_len\n \n return output, aux_output, loss_v/batch_elen\n\ndef computeAccuracy_m_in(dataset, model, batchsize = 1, withAux = False, preprocessors = None, im_sizes = None):\n data_count = len(dataset)\n ptr = 0\n batch_len = int(np.floor(float(data_count)/batchsize))\n batch_elen = int(np.ceil(float(data_count)/batchsize))\n output = []\n aux_output = []\n for i in range(batch_len):\n batch_data, expected_aux, expected_fine, ptr, _ = loadInBatch_mblob(dataset, ptr, batchsize, preprocessors=preprocessors, im_sizes=im_sizes)\n pred_final, pred_aux = model(batch_data)\n batch_result = computeBatchAccuracy([pred_final], [expected_fine])\n if output == []:\n output = batch_result\n else:\n for j in range(len(output)):\n output[j] += batch_result[j]\n if withAux:\n batch_aux_result = 
computeBatchAccuracy(pred_aux, expected_aux + [expected_fine])\n if aux_output == []:\n aux_output = batch_aux_result\n else:\n for j in range(len(aux_output)):\n aux_output[j] += batch_aux_result[j]\n if batchsize!=1 and batch_len != batch_elen:\n tmp_batchsize = data_count - ptr\n batch_data, expected_aux, expected_fine, ptr, _ = loadInBatch_mblob(dataset, ptr, tmp_batchsize, preprocessors=preprocessors, im_sizes=im_sizes)\n pred_final, pred_aux = model(batch_data)\n batch_result = computeBatchAccuracy([pred_final], [expected_fine])\n for j in range(len(output)):\n output[j] += batch_result[j]\n output[j] /= batch_len + 1\n if withAux:\n batch_aux_result = computeBatchAccuracy(pred_aux, expected_aux + [expected_fine])\n for j in range(len(aux_output)):\n aux_output[j] += batch_aux_result[j]\n aux_output[j] /= batch_len + 1\n else:\n for j in range(len(output)):\n output[j] /= batch_len\n if withAux:\n for j in range(len(aux_output)):\n aux_output[j] /= batch_len\n \n return output, aux_output\n\ndef compute_loss_diff(losses, absolute=False, losses_name=None):\n output = []\n output_names = []\n n_loss = len(losses)\n #n_output = amath.ncr(n_loss, r=2)\n for i in range(n_loss):\n for j in range(i, n_loss):\n if i==j:\n continue\n y = losses[j]-losses[i]\n if losses_name is not None:\n output_names.append('%s - %s'%(losses_name[j], losses_name[i]))\n if absolute:\n y = np.abs(y)\n output.append(y)\n return output, output_names\n\n\ndef train(trainset, valset, label_file, output_path, output_fname, \n start_lr=0.1, lr_discount=0.1, lr_steps=[], epoch=30,\n train_batch = 16, val_batch = 16, val_at = 10,\n checkpoint = None, jud_at = -1, aux_scaler = 0.3, final_scaler = 1.0, fine_scaler = 1.0,\n preprocessor = None, v_preprocessor = None, isConditionProb=True, coastBack=True,\n f_weights = None\n ):\n global backbone\n best_v_result = 0.0\n model = HTCNN(label_file, with_aux = True, with_fc = True, backbone=backbone,\n isCuda=True, isConditionProb=isConditionProb, 
coastBack=coastBack, weights=f_weights, autosizeFC = True).cuda()\n \n #for name, param in model.named_parameters():\n # if param.requires_grad:\n # print(name)\n \n output_filepath = os.path.join(output_path, output_fname)\n\n if checkpoint is not None and os.path.isfile(checkpoint):\n \n model_params = torch.load(checkpoint)\n model.load_state_dict(model_params)\n backbone = model.backbone_nn\n print('Loaded from checkpoint %s'%checkpoint)\n \n #sample, _, _, _, _ = loadInBatch(trainset, batchsize = 1)\n #writer.add_graph(model, sample)\n #writer.close()\n \n v_result = 0\n \n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = v_preprocessor)\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_val_result)\n best_v_result = v_result\n \n lr = start_lr\n \n param_list = [{'params':model.proj_layers.parameters()}, \n {'params':model.fc_s.parameters()},\n {'params':model.backbone_nn.parameters()}\n ]\n \n optimizer = optim.SGD(param_list, lr=lr, momentum=0.9, weight_decay=5e-4)\n #optimizer = optim.Adagrad(model.parameters(), lr=lr)\n #scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=lr_steps, gamma=lr_discount)\n scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience = int(np.ceil(4.0/val_at)), threshold=1e-3)\n \n # create losses\n losses = []\n aux_loss_names = []\n aux_val_names = []\n final_loss = nn.NLLLoss()\n for lv in lookup_lv_list:\n losses.append(nn.NLLLoss())\n aux_loss_names.append('Coarst %d loss'%lv)\n aux_val_names.append('Level %d accuracy'%lv)\n losses.append(nn.NLLLoss())\n aux_loss_names.append('Fine loss')\n aux_val_names.append('Fine accuracy')\n n_aux = len(losses) - 1\n aux_accuracy = {}\n vloss = nn.NLLLoss()\n \n for i in range(epoch):\n # training phase\n model.train()\n ptr = 0\n hasFinishEpoch = False\n epoch_result = []\n epoch_aux_losses_v = []\n epoch_loss_v = 
0\n iter_c = 0\n avg_model_fwd_elapsed_time = 0.0\n while not hasFinishEpoch:\n optimizer.zero_grad()\n \n pp_start_time = time.time()\n batch_input, gt_aux, gt_final, ptr, hasFinishEpoch = loadInBatch(trainset, ptr, train_batch, shuffle=True,\n preprocessor=preprocessor, im_size = input_sizes)\n pp_elapsed_time = time.time() - pp_start_time\n \n model_start_time = time.time()\n pred_final, pred_aux = model(batch_input)\n model_fwd_elapsed_time = time.time() - model_start_time\n avg_model_fwd_elapsed_time = (avg_model_fwd_elapsed_time + model_fwd_elapsed_time) / 2.0\n \n iloss = 0\n fine_loss = losses[-1](pred_aux[-1], gt_final)*fine_scaler\n total_loss = fine_loss\n for i_aux in range(n_aux):\n aux_loss = losses[i_aux](pred_aux[i_aux], gt_aux[i_aux])\n total_loss = total_loss + aux_loss * aux_scaler\n #total_loss = torch.sum(aux_loss)\n aux_loss_v = aux_loss.item()\n if epoch_aux_losses_v == []:\n epoch_aux_losses_v.append(aux_loss_v)\n else:\n epoch_aux_losses_v[iloss] += aux_loss_v\n iloss += 1\n \n f_loss = final_loss(pred_final, gt_final)\n total_loss = total_loss + f_loss * final_scaler\n fine_loss_v = fine_loss.item()\n if len(epoch_aux_losses_v) <= iloss:\n epoch_aux_losses_v.append(fine_loss_v)\n else:\n epoch_aux_losses_v[iloss] += fine_loss_v\n # compute gradients\n total_loss.backward()\n \n # update weights\n optimizer.step()\n \n if iter_c == 0:\n epoch_loss_v = total_loss.item()\n else:\n epoch_loss_v += total_loss.item()\n \n if epoch_loss_v == 0:\n epoch_loss_v = total_loss.item()\n \n result = computeBatchAccuracy([pred_final.data],[gt_final])\n if epoch_result == []:\n epoch_result = result\n else:\n epoch_result = accumulateList(epoch_result, result)\n iter_c += 1\n print('[iteration %d]Data Loading Time:%f seconds; Computation Time:%f seconds'%(iter_c,pp_elapsed_time, model_fwd_elapsed_time))\n \n #scheduler.step()\n plot_loss = {}\n for iloss in range(n_aux+1):\n epoch_aux_losses_v[iloss] /= iter_c\n plot_loss[aux_loss_names[iloss]] = 
epoch_aux_losses_v[iloss]\n plotter.plot('loss', 'aux %d'%iloss,'Losses', i, epoch_aux_losses_v[iloss])\n #print('%s: %f, '%(aux_loss_names[iloss], epoch_aux_losses_v[iloss]), end='')\n epoch_loss_v /= iter_c\n #lot_loss['total loss'] = epoch_loss_v\n plotter.plot('loss', 'total','Total Loss', i, epoch_loss_v)\n #writer.add_scalars('training loss', \n # plot_loss,\n # i)\n #print('Fine loss: %f'%epoch_loss_v)\n print(plot_loss)\n \n # validation phase\n if i % val_at == 0 or (i+1)==epoch:\n disp_i = i+1\n if i==0:\n disp_i = 0\n print('Validating...')\n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = v_preprocessor,\n withLoss = vloss)\n for iacc in range(len(aux_val_names)):\n aux_accuracy[aux_val_names[iacc]] = aux_val_result[iacc]\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_accuracy)\n if v_result > best_v_result:\n print('Best model found and saving it.')\n torch.save(model.state_dict(), output_filepath)\n best_v_result = v_result\n plotter.plot('Validation Loss', 'final','Validation Loss', disp_i, v_loss)\n scheduler.step(v_loss)\n #if i in lr_steps:\n # olr = lr\n # lr *= lr_discount\n # for param_group in optimizer.param_groups:\n # param_group['lr'] = lr\n # print('learning rate has been discounted from %f to %f'%(olr, lr))\n for i_aux in range(len(aux_accuracy)):\n plotter.plot('acc','aux %d'%(i_aux),'Accuracy', disp_i, aux_accuracy[aux_val_names[i_aux]])\n plotter.plot('acc','final','Final Accuracy', disp_i, v_result)\n #writer.add_scalars('Auxiliary Accuracy', \n # aux_accuracy,\n # i)\n #writer.add_scalar('Final Accuracy', \n # v_result,\n # i)\n \n print('Model has been trained.')\n model = None\n\ndef train_mb(trainset, valset, label_file, output_path, output_fname, \n start_lr=0.1, lr_discount=0.1, lr_steps=[], epoch=30,\n train_batch = 16, val_batch = 16, val_at = 10,\n checkpoint = None, jud_at = -1, aux_scaler = 
0.3, final_scaler = 1.0, fine_scaler = 1.0,\n preprocessor = None, v_preprocessor = None, isConditionProb=True, coastBack=True,\n f_weights = None\n ):\n \n best_v_result = 0.0\n backbones = nn.ModuleList([backbone_1, backbone_2])\n \n model = HTCNN_M(label_file, with_aux = True, with_fc = False, backbones=backbones,\n isCuda=True, isConditionProb=isConditionProb, coastBack=coastBack, weights=f_weights).to(device)\n \n #with torch.onnx.set_training(model, False):\n #trace, _ = torch.jit.get_trace_graph(model, args=(x,))\n #dot = make_dot_from_trace(trace)\n #dot.format = 'png'\n #dot.render(os.path.join(output_path, 'model.png'))\n \n #for name, param in model.named_parameters():\n # if param.requires_grad:\n # print(name)\n \n output_filepath = os.path.join(output_path, output_fname)\n\n if checkpoint is not None and os.path.isfile(checkpoint):\n model_params = torch.load(checkpoint)\n model.load_state_dict(model_params)\n backbones = model.backbones\n print('Loaded from checkpoint %s'%checkpoint)\n \n #sample, _, _, _, _ = loadInBatch(trainset, batchsize = 1)\n #writer.add_graph(model, sample)\n #writer.close()\n \n \n v_result = 0\n f_v_result = 0\n \n disp_x = torch.zeros(1, backbone_1.input_dim[0], backbone_1.input_dim[1], backbone_1.input_dim[2], device=device)\n disp, _ = model(disp_x)\n dot = make_dot(disp, params = dict(model.named_parameters()))\n dot.render(os.path.join(output_path, \"model.png\"))\n #torch.onnx.export(model, disp_x, os.path.join(output_path, \"model.onnx\"), input_names=['X'], output_names=['Y'], opset_version=11)\n disp_x = None\n \n model.eval()\n with torch.no_grad():\n \n val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = v_preprocessor)\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_val_result)\n best_v_result = v_result\n \n \n lr = start_lr\n \n param_list = [{'params':model.proj_layers.parameters()}, \n 
{'params':model.fc_s.parameters()},\n {'params':model.backbones[0].parameters()}\n ]\n \n fine_lr = start_lr\n fine_lr_rate = lr_discount\n fine_steps = lr_steps\n if args.f_same!=1:\n fine_lr = args.f_lr\n fine_lr_rate = args.f_lr_discount\n fine_steps = [int(istep) for istep in args.f_step_down.split(',')]\n \n default_rated_scaler = args.lr\n rated_scaler = default_rated_scaler\n for i in range(1, len(backbones)):\n param_list.append({'params':model.backbones[i].parameters(), 'lr':rated_scaler})\n \n optimizer = optim.SGD(param_list, lr=lr, momentum=0.9, weight_decay=5e-4)\n #optimizer = optim.ASGD(param_list, lr=lr, weight_decay=1e-4)\n #scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=lr, max_lr=0.01)\n #scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=lr_steps, gamma=lr_discount)\n scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience = int(np.ceil(4.0/val_at)), threshold=1e-3)\n \n # create losses\n losses = nn.ModuleList()\n aux_loss_names = []\n aux_val_names = []\n final_loss = nn.NLLLoss(size_average=None, reduce=None)\n for lv in lookup_lv_list:\n losses.append(nn.NLLLoss(size_average=None, reduce=None))\n aux_loss_names.append('Coarst %d loss'%lv)\n aux_val_names.append('Level %d accuracy'%lv)\n losses.append(nn.NLLLoss(size_average=None, reduce=None))\n aux_loss_names.append('Fine loss')\n aux_val_names.append('Fine accuracy')\n n_aux = len(losses) - 1\n aux_accuracy = {}\n vloss = nn.NLLLoss(size_average=None, reduce=None)\n \n ptr = 0\n for i in range(epoch):\n # training phase\n model.train()\n hasFinishEpoch = False\n epoch_result = []\n epoch_aux_losses_v = []\n epoch_final_loss_v = 0\n epoch_loss_v = 0\n iter_c = 0\n avg_model_fwd_elapsed_time = 0.0\n while not hasFinishEpoch:\n \n pp_start_time = time.time()\n batch_input, gt_aux, gt_final, ptr, hasFinishEpoch = loadInBatch(trainset, ptr, train_batch, shuffle=True,\n preprocessor=preprocessor, im_size = input_sizes)\n 
pp_elapsed_time = time.time() - pp_start_time\n \n model_start_time = time.time()\n pred_final, pred_aux = model(batch_input)\n model_fwd_elapsed_time = time.time() - model_start_time\n avg_model_fwd_elapsed_time = (avg_model_fwd_elapsed_time + model_fwd_elapsed_time) / 2.0\n \n iloss = 0\n \n aux_loss_vlist = []\n \n fine_loss = losses[-1](pred_aux[-1], gt_final)\n aux_loss_vlist.append(fine_loss.item())\n total_loss = fine_loss*fine_scaler\n for i_aux in range(n_aux):\n aux_loss = losses[i_aux](pred_aux[i_aux], gt_aux[i_aux]) \n total_loss = total_loss + aux_loss * aux_scaler\n aux_loss_v = aux_loss.item()\n aux_loss_vlist.append(aux_loss_v)\n if epoch_aux_losses_v == []:\n epoch_aux_losses_v.append(aux_loss_v)\n else:\n epoch_aux_losses_v[iloss] += aux_loss_v\n iloss += 1\n f_loss = final_loss(pred_final, gt_final)\n epoch_final_loss_v += f_loss.item()\n total_loss = total_loss + f_loss * final_scaler\n fine_loss_v = fine_loss.item()\n if len(epoch_aux_losses_v) <= iloss:\n epoch_aux_losses_v.append(fine_loss_v)\n else:\n epoch_aux_losses_v[iloss] += fine_loss_v\n \n aux_loss_str = ','.join(['%.4f'%loss_v for loss_v in aux_loss_vlist])\n \n optimizer.zero_grad()\n # compute gradients\n total_loss.backward()\n \n # update weights\n optimizer.step()\n #scheduler.step()\n \n if iter_c == 0:\n epoch_loss_v = total_loss.item()\n else:\n epoch_loss_v += total_loss.item()\n \n if epoch_loss_v == 0:\n epoch_loss_v = total_loss.item()\n \n result = computeBatchAccuracy([pred_final.data],[gt_final])\n if epoch_result == []:\n epoch_result = result\n else:\n epoch_result = accumulateList(epoch_result, result)\n iter_c += 1\n print('[iteration %d]Data Loading Time:%f seconds; Computation Time:%f seconds [loss:%f, aux loss:%s]'%(iter_c,\n pp_elapsed_time, \n model_fwd_elapsed_time, \n total_loss.item(),\n aux_loss_str\n ))\n #scheduler.step()\n #print('Training Loss:', end='')\n plot_loss = {}\n aux_losses_names = []\n for iloss in range(n_aux+1):\n epoch_aux_losses_v[iloss] 
/= iter_c\n plot_loss[aux_loss_names[iloss]] = epoch_aux_losses_v[iloss]\n aux_losses_names.append('aux %d'%iloss)\n plotter.plot('loss', 'aux %d'%iloss,'Losses', i, epoch_aux_losses_v[iloss])\n #print('%s: %f, '%(aux_loss_names[iloss], epoch_aux_losses_v[iloss]), end='')\n epoch_loss_v /= iter_c\n epoch_final_loss_v /=iter_c\n #lot_loss['total loss'] = epoch_loss_v\n plotter.plot('loss', 'final','Losses', i, epoch_final_loss_v)\n plotter.plot('loss', 'total','Losses', i, epoch_loss_v)\n \n #writer.add_scalars('training loss', \n # plot_loss,\n # i)\n #print('Fine loss: %f'%epoch_loss_v)\n print(plot_loss)\n \n last_top1 = f_v_result\n # validation phase\n if i % val_at == 0 or (i+1)==epoch:\n disp_i = i+1\n if i==0:\n disp_i = 0\n print('Validating...')\n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = v_preprocessor,\n withLoss = vloss)\n for iacc in range(len(aux_val_names)):\n aux_accuracy[aux_val_names[iacc]] = aux_val_result[iacc]\n f_v_result = aux_val_result[-1]\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_accuracy)\n if v_result > best_v_result:\n print('Best model found and saving it.')\n torch.save(model.state_dict(), output_filepath)\n best_v_result = v_result\n plotter.plot('Validation Loss', 'final','Validation Loss', disp_i, v_loss)\n scheduler.step(v_loss)\n for i_aux in range(len(aux_accuracy)):\n plotter.plot('acc','aux %d'%(i_aux),'Accuracy', disp_i, aux_accuracy[aux_val_names[i_aux]])\n plotter.plot('acc','final','Final Accuracy', disp_i, v_result)\n \n #diff_losses, diff_names = compute_loss_diff(epoch_aux_losses_v, losses_name = aux_losses_names)\n #for i_dloss in range(len(diff_losses)):\n # if f_v_result-last_top1>0.0:\n # plotter.plot_scatter('difference of loss', '%s (+)'%diff_names[i_dloss], 'Different of Loss', i, diff_losses[i_dloss],\n # [0,255,0])\n # elif f_v_result-last_top1==0.0:\n # 
plotter.plot_scatter('difference of loss', '%s (*)'%diff_names[i_dloss], 'Different of Loss', i, diff_losses[i_dloss],\n # [10,10,10])\n # else:\n # plotter.plot_scatter('difference of loss', '%s (-)'%diff_names[i_dloss], 'Different of Loss', i, diff_losses[i_dloss],\n # [255,0,0])\n \n if args.f_same == 1:\n \n #old_lr = scheduler.get_last_lr()\n #scheduler.step()\n #print('learning rate has been discounted from %f to %f'%(old_lr, scheduler.get_last_lr()))\n if (i+1) in lr_steps:\n pass\n #olr = lr\n #lr *= lr_discount\n #for param_group in optimizer.param_groups:\n # param_group['lr'] = lr\n #print('learning rate has been discounted from %f to %f'%(olr, lr))\n else:\n if (i+1) in lr_steps:\n olr = lr\n lr *= lr_discount\n lr_t = len(param_list)\n lr_i = 0\n for param_group in optimizer.param_groups:\n param_group['lr'] = lr\n lr_i += 1\n if lr_i>=(lr_t-1):\n break\n print('learning rate has been discounted from %f to %f for aux'%(olr, lr))\n if (i+1) in fine_steps:\n olr = fine_lr\n lr *= fine_lr_rate\n lr_i = 0\n lr_t = len(param_list)\n for param_group in optimizer.param_groups:\n lr_i += 1\n if lr_i!=lr_t:\n continue\n param_group['lr'] = lr\n print('learning rate has been discounted from %f to %f for fine'%(olr, lr))\n \n \n #writer.add_scalars('Auxiliary Accuracy', \n # aux_accuracy,\n # i)\n #writer.add_scalar('Final Accuracy', \n # v_result,\n # i)\n \n print('Model has been trained.')\n model = None\n\ndef train_mb_in(trainset, valset, label_file, output_path, output_fname, \n start_lr=0.1, lr_discount=0.1, lr_steps=[], epoch=30,\n train_batch = 16, val_batch = 16, val_at = 10,\n checkpoint = None, jud_at = -1, aux_scaler = 0.3, final_scaler = 1.0, fine_scaler = 1.0,\n preprocessor = None, v_preprocessor = None, im_size = None, general_process = None, isConditionProb=True, coastBack=True,\n f_weights = None\n ):\n \n best_v_result = 0.0\n backbones = nn.ModuleList([backbone_1, backbone_2])\n model = HTCNN_M_IN(label_file, with_aux = True, with_fc = False, 
backbones=backbones,\n isCuda=True, isConditionProb=isConditionProb, coastBack=coastBack, weights=f_weights)\n \n model = nn.DataParallel(model).cuda()\n output_filepath = os.path.join(output_path, output_fname)\n\n if checkpoint is not None and os.path.isfile(checkpoint):\n model_params = torch.load(checkpoint)\n model.load_state_dict(model_params)\n backbones = model.backbones\n print('Loaded from checkpoint %s'%checkpoint)\n \n \n v_result = 0\n \n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy_m_in(valset, model, val_batch, withAux=True,\n im_sizes = im_size,\n preprocessors = v_preprocessor)\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_val_result)\n best_v_result = v_result\n \n \n lr = start_lr\n optimizer = optim.SGD(param_list, lr=lr, momentum=0.9, weight_decay=5e-4)\n #optimizer = optim.Adagrad(model.parameters(), lr=lr)\n \n \n # create losses\n losses = []\n aux_loss_names = []\n aux_val_names = []\n final_loss = nn.CrossEntropyLoss()\n for lv in lookup_lv_list:\n losses.append(nn.CrossEntropyLoss())\n aux_loss_names.append('Coarst %d loss'%lv)\n aux_val_names.append('Level %d accuracy'%lv)\n losses.append(nn.CrossEntropyLoss())\n aux_loss_names.append('Fine loss')\n aux_val_names.append('Fine accuracy')\n n_aux = len(losses) - 1\n aux_accuracy = {}\n \n for i in range(epoch):\n # training phase\n model.train()\n ptr = 0\n hasFinishEpoch = False\n epoch_result = []\n epoch_aux_losses_v = []\n epoch_loss_v = 0\n iter_c = 0\n avg_model_fwd_elapsed_time = 0.0\n \n blobs = []\n \n while not hasFinishEpoch:\n optimizer.zero_grad()\n \n pp_start_time = time.time()\n batch_input, gt_aux, gt_final, ptr, hasFinishEpoch = loadInBatch_mblob(trainset, ptr, train_batch, shuffle=True,\n preprocessors=preprocessor,\n im_sizes=im_size,\n general_preprocess=general_process)\n pp_elapsed_time = time.time() - pp_start_time\n \n \n model_start_time = time.time()\n pred_final, pred_aux = 
model(batch_input)\n model_fwd_elapsed_time = time.time() - model_start_time\n avg_model_fwd_elapsed_time = (avg_model_fwd_elapsed_time + model_fwd_elapsed_time) / 2.0\n \n iloss = 0\n fine_loss = losses[-1](pred_aux[-1], gt_final)*fine_scaler\n total_loss = fine_loss\n for i_aux in range(n_aux):\n aux_loss = losses[i_aux](pred_aux[i_aux], gt_aux[i_aux])\n total_loss = total_loss + aux_loss * aux_scaler\n aux_loss_v = aux_loss.item()\n if epoch_aux_losses_v == []:\n epoch_aux_losses_v.append(aux_loss_v)\n else:\n epoch_aux_losses_v[iloss] += aux_loss_v\n iloss += 1\n f_loss = final_loss(pred_final, gt_final)\n total_loss = total_loss + f_loss* final_scaler\n fine_loss_v = fine_loss.item()\n if len(epoch_aux_losses_v) <= iloss:\n epoch_aux_losses_v.append(fine_loss_v)\n else:\n epoch_aux_losses_v[iloss] += fine_loss_v\n # compute gradients\n total_loss.backward()\n \n # update weights\n optimizer.step()\n \n if iter_c == 0:\n epoch_loss_v = total_loss.item()\n else:\n epoch_loss_v += total_loss.item()\n \n if epoch_loss_v == 0:\n epoch_loss_v = total_loss.item()\n \n result = computeBatchAccuracy([pred_final],[gt_final])\n if epoch_result == []:\n epoch_result = result\n else:\n epoch_result = accumulateList(epoch_result, result)\n iter_c += 1\n #if iter_c == 1:\n print('[iteration %d]Data Loading Time:%f seconds; Computation Time:%f seconds'%(iter_c,pp_elapsed_time, model_fwd_elapsed_time))\n \n plot_loss = {}\n for iloss in range(n_aux+1):\n epoch_aux_losses_v[iloss] /= iter_c\n plot_loss[aux_loss_names[iloss]] = epoch_aux_losses_v[iloss]\n plotter.plot('loss', 'aux %d'%iloss,'Losses', i, epoch_aux_losses_v[iloss])\n epoch_loss_v /= iter_c\n plotter.plot('loss', 'total','Total Loss', i, epoch_loss_v)\n print(plot_loss)\n \n # validation phase\n if i % val_at == 0:\n print('Validating...')\n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy_m_in(valset, model, val_batch, withAux=True,\n im_sizes = im_size,\n preprocessors = 
v_preprocessor)\n for iacc in range(len(aux_val_names)):\n aux_accuracy[aux_val_names[iacc]] = aux_val_result[iacc]\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_accuracy)\n if v_result > best_v_result:\n print('Best model found and saving it.')\n torch.save(model.state_dict(), output_filepath)\n best_v_result = v_result\n if i in lr_steps:\n olr = lr\n lr *= lr_discount\n for param_group in optimizer.param_groups:\n param_group['lr'] = lr\n print('learning rate has been discounted from %f to %f'%(olr, lr))\n for i_aux in range(len(aux_accuracy)):\n plotter.plot('acc','aux %d'%(i_aux),'Accuracy', i, aux_accuracy[aux_val_names[i_aux]])\n plotter.plot('acc','final','Final Accuracy', i, v_result)\n \n print('Model has been trained.')\n model = None\n\ndef train_coast(trainset, valset, label_file, output_path, output_fname, \n start_lr=0.1, lr_discount=0.1, lr_steps=[], epoch=30,\n train_batch = 16, val_batch = 16, val_at = 10, n_coast=0,\n checkpoint = None, jud_at = -1, aux_scaler = 0.3, final_scaler = 1.0, fine_scaler = 1.0,\n preprocessor = None, v_preprocessor = None, isConditionProb=True, coastBack=True,\n f_weights = None\n ):\n global backbone\n best_v_result = 0.0\n model = HTCNN(label_file, with_aux = True, with_fc = True, backbone=backbone,\n isCuda=True, isConditionProb=isConditionProb, coastBack=coastBack, weights=f_weights).cuda()\n \n #for name, param in model.named_parameters():\n # if param.requires_grad:\n # print(name)\n \n output_filepath = os.path.join(output_path, output_fname)\n\n if checkpoint is not None and os.path.isfile(checkpoint):\n \n backbone.load_state_dict(torch.load(checkpoint), strict=False)\n model.load_state_dict(torch.load(checkpoint), strict=False)\n print('Loaded from checkpoint %s'%checkpoint)\n \n #sample, _, _, _, _ = loadInBatch(trainset, batchsize = 1)\n #writer.add_graph(model, sample)\n #writer.close()\n \n v_result = 0\n \n backbone.eval()\n model.eval()\n with torch.no_grad():\n 
val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = v_preprocessor)\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_val_result)\n best_v_result = v_result\n \n lr = start_lr\n optimizer = optim.SGD(model.parameters(), lr=lr)\n #optimizer = optim.Adagrad(model.parameters(), lr=lr)\n \n \n # create losses\n losses = []\n aux_loss_names = []\n aux_val_names = []\n final_loss = nn.MultiLabelSoftMarginLoss()\n for lv in lookup_lv_list:\n losses.append(nn.MultiLabelSoftMarginLoss())\n aux_loss_names.append('Coarst %d loss'%lv)\n aux_val_names.append('Level %d accuracy'%lv)\n losses.append(nn.MultiLabelSoftMarginLoss())\n aux_loss_names.append('Fine loss')\n aux_val_names.append('Fine accuracy')\n n_aux = len(losses) - 1\n aux_accuracy = {}\n \n for i in range(epoch):\n # training phase\n backbone.train()\n model.train()\n ptr = 0\n hasFinishEpoch = False\n epoch_result = []\n epoch_aux_losses_v = []\n epoch_loss_v = 0\n iter_c = 0\n avg_model_fwd_elapsed_time = 0.0\n while not hasFinishEpoch:\n optimizer.zero_grad()\n \n pp_start_time = time.time()\n batch_input, gt_aux, gt_final, ptr, hasFinishEpoch = loadInBatch(trainset, ptr, train_batch, shuffle=True,\n preprocessor=preprocessor, im_size = input_sizes)\n pp_elapsed_time = time.time() - pp_start_time\n \n model_start_time = time.time()\n pred_final, pred_aux = model(batch_input)\n model_fwd_elapsed_time = time.time() - model_start_time\n avg_model_fwd_elapsed_time = (avg_model_fwd_elapsed_time + model_fwd_elapsed_time) / 2.0\n \n iloss = 0\n fine_loss = losses[-1](pred_aux[-1], gt_final)*fine_scaler\n total_loss = fine_loss\n for i_aux in range(n_aux):\n aux_loss = losses[i_aux](pred_aux[i_aux], gt_aux[i_aux]) \n total_loss = total_loss + aux_loss * aux_scaler\n #total_loss = torch.sum(aux_loss)\n aux_loss_v = aux_loss.item()\n if epoch_aux_losses_v == []:\n epoch_aux_losses_v.append(aux_loss_v)\n else:\n 
epoch_aux_losses_v[iloss] += aux_loss_v\n iloss += 1\n \n f_loss = final_loss(pred_final, gt_final)\n total_loss = total_loss + f_loss * final_scaler\n fine_loss_v = fine_loss.item()\n if len(epoch_aux_losses_v) <= iloss:\n epoch_aux_losses_v.append(fine_loss_v)\n else:\n epoch_aux_losses_v[iloss] += fine_loss_v\n # compute gradients\n total_loss.backward()\n \n # update weights\n optimizer.step()\n \n if iter_c == 0:\n epoch_loss_v = total_loss.item()\n else:\n epoch_loss_v += total_loss.item()\n \n if epoch_loss_v == 0:\n epoch_loss_v = total_loss.item()\n \n result = computeBatchAccuracy([pred_final],[gt_final])\n if epoch_result == []:\n epoch_result = result\n else:\n epoch_result = accumulateList(epoch_result, result)\n iter_c += 1\n print('[iteration %d]Data Loading Time:%f seconds; Computation Time:%f seconds'%(iter_c,pp_elapsed_time, model_fwd_elapsed_time))\n \n #print('Training Loss:', end='')\n #print('Training Loss:', end='')\n plot_loss = {}\n for iloss in range(n_aux+1):\n epoch_aux_losses_v[iloss] /= iter_c\n plot_loss[aux_loss_names[iloss]] = epoch_aux_losses_v[iloss]\n plotter.plot('loss', 'aux %d'%iloss,'Losses', i, epoch_aux_losses_v[iloss])\n #print('%s: %f, '%(aux_loss_names[iloss], epoch_aux_losses_v[iloss]), end='')\n epoch_loss_v /= iter_c\n #lot_loss['total loss'] = epoch_loss_v\n plotter.plot('loss', 'total','Total Loss', i, epoch_loss_v)\n #writer.add_scalars('training loss', \n # plot_loss,\n # i)\n #print('Fine loss: %f'%epoch_loss_v)\n print(plot_loss)\n \n # validation phase\n if i % val_at == 0:\n print('Validating...')\n backbone.eval()\n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = v_preprocessor)\n for iacc in range(len(aux_val_names)):\n aux_accuracy[aux_val_names[iacc]] = aux_val_result[iacc]\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_accuracy)\n if v_result > best_v_result:\n print('Best 
model found and saving it.')\n torch.save(model.state_dict(), output_filepath)\n best_v_result = v_result\n if i in lr_steps:\n olr = lr\n lr *= lr_discount\n for param_group in optimizer.param_groups:\n param_group['lr'] = lr\n print('learning rate has been discounted from %f to %f'%(olr, lr))\n for i_aux in range(len(aux_accuracy)):\n plotter.plot('acc','aux %d'%(i_aux),'Accuracy', i, aux_accuracy[aux_val_names[i_aux]])\n plotter.plot('acc','final','Final Accuracy', i, v_result)\n #writer.add_scalars('Auxiliary Accuracy', \n # aux_accuracy,\n # i)\n #writer.add_scalar('Final Accuracy', \n # v_result,\n # i)\n \n print('Model has been trained.')\n model = None\n\n\ndef train_mb_coast(trainset, valset, label_file, output_path, output_fname, \n start_lr=0.1, lr_discount=0.1, lr_steps=[], epoch=30,\n train_batch = 16, val_batch = 16, val_at = 10, n_coast=0,\n checkpoint = None, jud_at = -1, aux_scaler = 0.3, final_scaler = 1.0, fine_scaler = 1.0,\n preprocessor = None, v_preprocessor = None, isConditionProb=True, coastBack=True,\n f_weights = None\n ):\n \n best_v_result = 0.0\n backbones = nn.ModuleList([backbone_1, backbone_2])\n \n model = HTCNN_M(label_file, with_aux = True, with_fc = True, backbones=backbones,\n isCuda=True, isConditionProb=isConditionProb, coastBack=coastBack, weights=f_weights).to(device)\n \n #for name, param in model.named_parameters():\n # if param.requires_grad:\n # print(name)\n \n output_filepath = os.path.join(output_path, output_fname)\n\n if checkpoint is not None and os.path.isfile(checkpoint):\n model_params = torch.load(checkpoint)\n model.load_state_dict(model_params)\n backbones = model.backbones\n print('Loaded from checkpoint %s'%checkpoint)\n \n #sample, _, _, _, _ = loadInBatch(trainset, batchsize = 1)\n #writer.add_graph(model, sample)\n #writer.close()\n \n v_result = 0\n \n \n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = 
v_preprocessor)\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_val_result)\n best_v_result = v_result\n \n \n lr = start_lr\n optimizer = optim.SGD(model.parameters(), lr=lr)\n #optimizer = optim.Adagrad(model.parameters(), lr=lr)\n \n \n # create losses\n losses = nn.ModuleList()\n aux_loss_names = []\n aux_val_names = []\n final_loss = nn.MultiLabelSoftMarginLoss()\n for lv in lookup_lv_list:\n losses.append(nn.MultiLabelSoftMarginLoss())\n aux_loss_names.append('Coarst %d loss'%lv)\n aux_val_names.append('Level %d accuracy'%lv)\n losses.append(nn.MultiLabelSoftMarginLoss())\n aux_loss_names.append('Fine loss')\n aux_val_names.append('Fine accuracy')\n n_aux = len(losses) - 1\n aux_accuracy = {}\n \n for i in range(epoch):\n # training phase\n model.train()\n ptr = 0\n hasFinishEpoch = False\n epoch_result = []\n epoch_aux_losses_v = []\n epoch_loss_v = 0\n iter_c = 0\n avg_model_fwd_elapsed_time = 0.0\n while not hasFinishEpoch:\n optimizer.zero_grad()\n \n pp_start_time = time.time()\n batch_input, gt_aux, gt_final, ptr, hasFinishEpoch = loadInBatch(trainset, ptr, train_batch, shuffle=True,\n preprocessor=preprocessor, im_size = input_sizes)\n pp_elapsed_time = time.time() - pp_start_time\n \n model_start_time = time.time()\n pred_final, pred_aux = model(batch_input)\n model_fwd_elapsed_time = time.time() - model_start_time\n avg_model_fwd_elapsed_time = (avg_model_fwd_elapsed_time + model_fwd_elapsed_time) / 2.0\n \n iloss = 0\n fine_loss = losses[-1](pred_aux[-1], gt_final)*fine_scaler\n total_loss = fine_loss\n for i_aux in range(n_aux):\n aux_loss = losses[i_aux](pred_aux[i_aux], gt_aux[i_aux])\n total_loss = total_loss + aux_loss * aux_scaler\n aux_loss_v = aux_loss.item()\n if epoch_aux_losses_v == []:\n epoch_aux_losses_v.append(aux_loss_v)\n else:\n epoch_aux_losses_v[iloss] += aux_loss_v\n iloss += 1\n f_loss = final_loss(pred_final, gt_final)\n total_loss = total_loss + f_loss * final_scaler\n fine_loss_v = 
fine_loss.item()\n if len(epoch_aux_losses_v) <= iloss:\n epoch_aux_losses_v.append(fine_loss_v)\n else:\n epoch_aux_losses_v[iloss] += fine_loss_v\n # compute gradients\n total_loss.backward()\n \n # update weights\n optimizer.step()\n \n if iter_c == 0:\n epoch_loss_v = total_loss.item()\n else:\n epoch_loss_v += total_loss.item()\n \n if epoch_loss_v == 0:\n epoch_loss_v = total_loss.item()\n \n result = computeBatchAccuracy([pred_final],[gt_final])\n if epoch_result == []:\n epoch_result = result\n else:\n epoch_result = accumulateList(epoch_result, result)\n iter_c += 1\n print('[iteration %d]Data Loading Time:%f seconds; Computation Time:%f seconds'%(iter_c,pp_elapsed_time, model_fwd_elapsed_time))\n \n #print('Training Loss:', end='')\n plot_loss = {}\n for iloss in range(n_aux+1):\n epoch_aux_losses_v[iloss] /= iter_c\n plot_loss[aux_loss_names[iloss]] = epoch_aux_losses_v[iloss]\n plotter.plot('loss', 'aux %d'%iloss,'Losses', i, epoch_aux_losses_v[iloss])\n #print('%s: %f, '%(aux_loss_names[iloss], epoch_aux_losses_v[iloss]), end='')\n epoch_loss_v /= iter_c\n #lot_loss['total loss'] = epoch_loss_v\n plotter.plot('loss', 'total','Total Loss', i, epoch_loss_v)\n #writer.add_scalars('training loss', \n # plot_loss,\n # i)\n #print('Fine loss: %f'%epoch_loss_v)\n print(plot_loss)\n \n # validation phase\n if i % val_at == 0:\n print('Validating...')\n model.eval()\n with torch.no_grad():\n val_result, aux_val_result, v_loss = computeAccuracy(valset, model, val_batch, withAux=True, preprocessor = v_preprocessor)\n for iacc in range(len(aux_val_names)):\n aux_accuracy[aux_val_names[iacc]] = aux_val_result[iacc]\n v_result = val_result[0]\n print('Validation Accuracy: %f'%v_result)\n print(aux_accuracy)\n if v_result > best_v_result:\n print('Best model found and saving it.')\n torch.save(model.state_dict(), output_filepath)\n best_v_result = v_result\n if i in lr_steps:\n olr = lr\n lr *= lr_discount\n for param_group in optimizer.param_groups:\n 
param_group['lr'] = lr\n print('learning rate has been discounted from %f to %f'%(olr, lr))\n for i_aux in range(len(aux_accuracy)):\n plotter.plot('acc','aux %d'%(i_aux),'Accuracy', i, aux_accuracy[aux_val_names[i_aux]])\n plotter.plot('acc','final','Final Accuracy', i, v_result)\n #writer.add_scalars('Auxiliary Accuracy', \n # aux_accuracy,\n # i)\n #writer.add_scalar('Final Accuracy', \n # v_result,\n # i)\n \n print('Model has been trained.')\n model = None\n\n\n\ndef main():\n \n is_cond = args.cond == 1\n aux_weight = args.aux_weight\n final_weight = args.final_weight\n fine_weight = args.fine_weight\n step_down = [int(sd) for sd in args.step_down.split(',')]\n nepoch = args.epoch\n val_at = args.val_at\n train_batch = args.train_batch\n val_batch = args.val_batch\n lr = args.lr\n coastBack = args.enable_coast == 1\n fusion_weights = None\n if args.fusion_weights is not None:\n fusion_weights = [float(val) for val in args.fusion_weights.split(',')]\n \n \n checkpoint_path = os.path.join(model_path, model_fname)\n \n train_set = loadData(ds_root_path, training_file)\n val_set = loadData(ds_root_path, val_file)\n print('Training set has been buffered.')\n \n if backbone is not None:\n train(train_set, val_set, label_filepath,\n output_path = model_path, output_fname = model_fname, \n epoch=nepoch, val_at=val_at, lr_steps=step_down, aux_scaler=aux_weight, final_scaler=final_weight,\n train_batch=train_batch, val_batch=val_batch, checkpoint=checkpoint_path,\n start_lr=lr, coastBack=coastBack,\n preprocessor=preprocess, v_preprocessor=preprocess_v, isConditionProb = is_cond, f_weights = fusion_weights)\n else:\n if isinstance(preprocess, list):\n train_mb_in(train_set, val_set, label_filepath,\n output_path = model_path, output_fname = model_fname, \n epoch=nepoch, val_at=val_at, lr_steps=step_down, aux_scaler=aux_weight, final_scaler=final_weight,\n train_batch=train_batch, val_batch=val_batch, checkpoint=checkpoint_path,\n start_lr=lr, coastBack=coastBack,\n 
preprocessor=preprocess, v_preprocessor=preprocess_v,\n im_size=input_sizes,\n general_process=gp, isConditionProb = is_cond, f_weights = fusion_weights)\n else:\n train_mb(train_set, val_set, label_filepath,\n output_path = model_path, output_fname = model_fname, \n epoch=nepoch, val_at=val_at, lr_steps=step_down, aux_scaler=aux_weight, final_scaler=final_weight,\n train_batch=train_batch, val_batch=val_batch, checkpoint=checkpoint_path,\n start_lr=lr, coastBack=coastBack,\n preprocessor=preprocess, v_preprocessor=preprocess_v, isConditionProb = is_cond, f_weights = fusion_weights)\n \n backbone_1 = None\n backbone_2 = None\n torch.cuda.empty_cache()\n #writer.close()\n print('Done')\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(description='Training script for HTNN')\n parser.add_argument('--cond', dest='cond', type=int, default=1,\n help='identify it is a condition prob or not')\n parser.add_argument('--aux-weight', dest='aux_weight', type=float, default=1.0,\n help='loss weight for auxiliry losses')\n parser.add_argument('--fine-weight', dest='fine_weight', type=float, default=1.0,\n help='loss weight for fine loss')\n parser.add_argument('--final-weight', dest='final_weight', type=float, default=1.0,\n help='loss weight for fused loss')\n parser.add_argument('--backbone', dest='backbone_net', default='alexnet',\n help='define the backbone network(s)')\n parser.add_argument('--multi-input', dest='mult_in', type=int, default=0,\n help='identify it is multiple input for multiple backbone networks')\n parser.add_argument('--data-root', dest='data_root', default='/datasets/vision/cifar100_clean',\n help='define the root path for dataset')\n parser.add_argument('--train', dest='train_file', default='/datasets/vision/cifar100_clean/train.txt',\n help='define the training filepath')\n parser.add_argument('--val', dest='val_file', default='/datasets/vision/cifar100_clean/val.txt',\n help='define the validation filepath')\n 
parser.add_argument('--test', dest='test_file', default='/datasets/vision/cifar100_clean/val.txt',\n help='define the testing filepath')\n parser.add_argument('--tree', dest='tree_file', default='/datasets/vision/cifar100_clean/tree.txt',\n help='define the tree filepath')\n parser.add_argument('--dst', dest='dst', default='/models/cifar100_htcnn_alexnet_2',\n help='define the output path')\n parser.add_argument('--checkpoint', dest='checkpoint', default=None,\n help='define the checkpoint path')\n parser.add_argument('--epoch', dest='epoch', type=int, default=300,\n help='define the number of maximum epoch')\n parser.add_argument('--step-down', dest='step_down', default=[100, 200],\n help='define the step down epoch')\n parser.add_argument('--val-at', dest='val_at', type=int, default=5,\n help='define the number of epoch for validation')\n parser.add_argument('--train-batch', dest='train_batch', type=int, default=2048,\n help='define the batch size for training')\n parser.add_argument('--val-batch', dest='val_batch', type=int, default=1024,\n help='define the batch size of validation')\n parser.add_argument('--lr', dest='lr', type=float, default=0.1,\n help='define the starting learning rate')\n parser.add_argument('--enable-coast', dest='enable_coast', type=int, default=1,\n help='enable the backpropagation of coast projection layers')\n parser.add_argument('--learn-coast', dest='learn_coast', type=int, default=1,\n help='indentify the network learns the coast or not')\n parser.add_argument('--n-coast', dest='n_coast', type=int, default=0,\n help='indentify the number of coast to learn')\n parser.add_argument('--fine-lr', dest='f_lr', type=float, default=0.1,\n help='define the learning rate for the fine branch')\n parser.add_argument('--fine-lr-discount', dest='f_lr_discount', type=float, default=0.1,\n help='define the learning rate discount for the fine branch')\n parser.add_argument('--fine-step-down', dest='f_step_down', default=[100, 200],\n help='define 
the step down epoch for the fine branch')\n parser.add_argument('--fine-same', dest='f_same', type=int, default=1,\n help='identify the learning rate schedule of fine branch is the same as auxilaries')\n parser.add_argument('--fusion-weights', dest='fusion_weights', default = None,\n help='define weights of bin output for fusion')\n \n\n args = parser.parse_args()\n \n device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n torch.cuda.empty_cache()\n #label_filepath = '/datasets/vision/cifar100_clean/tree.txt'\n #label_filepath = '/datasets/dummy/set1/tree.txt'\n label_filepath = args.tree_file\n classTree, n_coarst, coarst_dims = trees.build_itree(label_filepath)\n lookup_lv_list = [i+1 for i in range(n_coarst)]\n n_fine = len(list(classTree.keys()))\n print(coarst_dims)\n \n #ds_root_path = '/datasets/vision/cifar100_clean'\n #training_file = '/datasets/vision/cifar100_clean/train.txt'\n #val_file = '/datasets/vision/cifar100_clean/val.txt'\n #test_file = '/datasets/vision/cifar100_clean/val.txt'\n ds_root_path = args.data_root\n training_file = args.train_file\n val_file = args.val_file\n test_file = args.test_file\n \n #model_path = '/models/cifar100_htcnn_alexnet_2'\n model_path = args.dst\n if not os.path.isdir(model_path):\n os.mkdir(model_path)\n model_fname = 'model.pth'\n \n #backbone_1 = LeNet5(n_classes=coarst_dims[0])\n #backbone_2 = LeNet5(n_classes=n_fine).cuda()\n #backbone_1 = AlexNet32(n_classes=coarst_dims[0])\n #backbone_2 = AlexNet32(n_classes=n_fine)\n \n #backbone_2 = AlexNet32_B(n_classes=n_fine)\n backbone_2 = AlexNet32_C(n_classes=n_fine, feature_dim = 384)\n \n #backbone_1 = AlexNet32_D(n_classes=coarst_dims[0], fdim=64)\n #backbone_2 = AlexNet32_D(n_classes=n_fine, feature_dim=384)\n #backbone_1.common_features = backbone_2.common_features\n backbone = backbone_2\n #backbone = None\n backbone_inshape = backbone_2.input_dim\n \n gp = transforms.Compose([\n transforms.RandomHorizontalFlip(p=0.5),\n 
transforms.RandomVerticalFlip(p=0.5)\n ])\n \n preprocess_1 = transforms.Compose([\n transforms.Resize(40),\n transforms.CenterCrop(32),\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n ])\n \n preprocess_2 = transforms.Compose([\n transforms.Resize(256),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n ])\n \n preprocess_3 = transforms.Compose([\n transforms.RandomHorizontalFlip(p=0.5),\n transforms.RandomVerticalFlip(p=0.5),\n transforms.Resize(256),\n transforms.RandomCrop(224),\n transforms.ToTensor()\n ])\n \n preprocess_4 = Compose([\n PadIfNeeded(40, 40, cv2.BORDER_CONSTANT, 0),\n RandomCrop(32,32),\n HorizontalFlip(p=0.5),\n ToTensor(normalize={\n 'std':[0.2023, 0.1994, 0.2010],\n 'mean':[0.4914, 0.4822, 0.4465]\n })\n ])\n preprocess_5 = transforms.Compose([\n transforms.RandomCrop(32, padding=4),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n transforms.Normalize(\n mean=[0.4914, 0.4822, 0.4465],\n std=[0.2023, 0.1994, 0.2010],\n )\n ])\n \n preprocess_v = Compose([\n ToTensor(normalize={\n 'std':[0.2023, 0.1994, 0.2010],\n 'mean':[0.4914, 0.4822, 0.4465]\n })\n ])\n \n #preprocess = [preprocess_1, preprocess_2]\n preprocess = preprocess_4\n input_sizes = [(3,32,32), (3,224,224)]\n input_sizes = input_sizes[0]\n #writer = SummaryWriter(log_dir = '../training', purge_step = 0,\n # flush_secs = 5)\n \n global plotter\n plotter = utils.VisdomLinePlotter(env_name='Hierarchy Tree Neural Network')\n #main()\n try:\n #pass\n main()\n except Exception as e:\n backbone_1 = None\n backbone_2 = None\n model = None\n torch.cuda.empty_cache()\n print('Error:')\n print(str(e))\n exit(-1)\n"
},
{
"alpha_fraction": 0.5004721283912659,
"alphanum_fraction": 0.5118035674095154,
"avg_line_length": 31.090909957885742,
"blob_id": "6b79957a986f334b45cdc22e6a66efb002f05d46",
"content_id": "f0b33eec1a436fae3e361f6e77b221cc0ce49fa8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1059,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 33,
"path": "/addons/trees.py",
"repo_name": "geoffrey0822/HTNN",
"src_encoding": "UTF-8",
"text": "# +\nimport numpy as np\nimport os\n\ndef build_itree(tree_path):\n output = {}\n n_coarst = 0\n coarst_output_lens = []\n tmp_dict = []\n with open(tree_path, 'r') as f:\n for ln in f:\n nodes = [int(node) for node in ln.rstrip('\\n').split(',')]\n if nodes[-1] not in output:\n output[nodes[-1]] = []\n n_node = len(nodes)\n if n_coarst == 0:\n n_coarst = n_node-1\n for j in range(n_node-1):\n tmp_dict.append([])\n for i in range(n_node-1):\n output[nodes[-1]].append(nodes[n_node-i-2])\n if nodes[n_node-i-2] not in tmp_dict[i]:\n tmp_dict[i].append(nodes[n_node-i-2])\n for i_coarst in range(n_coarst):\n coarst_output_lens.append(len(tmp_dict[i_coarst]))\n return output, n_coarst, coarst_output_lens\n\ndef remake_tree(lap_mat):\n output = {}\n for nsub in range(lap_mat.shape[1]):\n mother = np.argmax(lap_mat[:,nsub])\n output[nsub]=mother\n return output\n"
}
] | 6 |
JaLuka98/FP_SS19 | https://github.com/JaLuka98/FP_SS19 | ed849f00d803d0dc27567bb4c35400a6d7b3b1e0 | a5f58f99eb94c655458169876b805525f8248ab9 | df5fa6f7f284f8d022fa66df344e4ddcc6f682a5 | refs/heads/master | 2020-05-02T06:54:34.781557 | 2019-07-11T14:37:59 | 2019-07-11T14:37:59 | 177,805,953 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6547619104385376,
"alphanum_fraction": 0.7103174328804016,
"avg_line_length": 18.461538314819336,
"blob_id": "688361f3cc16bf9f1cdc44955c7523d1a0c033c3",
"content_id": "ae2c6e7c6ee8b6478142ef735703f98d25de2f21",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 252,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 13,
"path": "/V47_molwaerme_von_cu/test.py",
"repo_name": "JaLuka98/FP_SS19",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom scipy import integrate\nLCDMf = lambda x: 1.0/np.sqrt(0.3*(1+x)**3+0.7)\nnp.vectorize(LCDMf)\n\ndef LCDMfint(z):\n return integrate.quad(LCDMf, 0, z)\n\nLCDMfint=np.vectorize(LCDMfint)\nz=np.arange(0,100)\n\nan=LCDMfint(z)\nprint(an[0])"
},
{
"alpha_fraction": 0.5810784697532654,
"alphanum_fraction": 0.6587986946105957,
"avg_line_length": 27.16756820678711,
"blob_id": "a9acf1f7571095df472c61a730f5b3d50757be60",
"content_id": "68789413099b8a33f6e0a4f21668178839b334d2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5216,
"license_type": "no_license",
"max_line_length": 268,
"num_lines": 185,
"path": "/V47_molwaerme_von_cu/plot.py",
"repo_name": "JaLuka98/FP_SS19",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom scipy import constants\nimport scipy.integrate as integrate\nimport uncertainties as unc\nimport scipy\nimport uncertainties.unumpy as unp\nfrom scipy import optimize\nimport matplotlib.pyplot as plt\nfrom uncertainties import ufloat\nfrom scipy import stats\nfrom uncertainties import correlated_values\nfrom matrix2latex import matrix2latex\n\n\ndef debyeFunction(T, theta_D):\n return [9*scipy.constants.R * (Temp/theta_D)**3 * integrate.quad(lambda x: x**4*np.exp(x)/(np.exp(x)-1)**2, 0.0001, theta_D/Temp)[0] for Temp in T]\n # [0] to only the return the nominal value, [1] would be the uncertainty\n\n\nT = np.array([10, 16, 20, 25, 30, 35, 40, 50])\nC_V = np.array([0.0555, 0.225, 0.462, 0.963, 1.693, 2.64, 3.74, 6.15])\n\n#theta_D = 345\n#Temperaturen und so berechnen\n\nn=5.382 #mol. Folgt aus n=m/M mit der Masse der Probe und der molaren Masse von Cu\n\nRp, Rz=np.genfromtxt('data/data.txt', unpack=True)\nRp = unp.uarray(Rp, 0.1)\nRz = unp.uarray(Rz, 0.1)\nI, t, U=np.genfromtxt('data/data2.txt', unpack=True)\nI=unp.uarray(I, 0.1)\nt=unp.uarray(t, 3)\nU=unp.uarray(U, 0.1)\nI41=np.append(0, I)\nt41=np.append(0, t)\nU41=np.append(0, U)\nhr = ['$R_p/\\Omega$', '$R_z/\\Omega$','$I$/µA', '$t$/s', '$U/$V']\nm = np.zeros((41, 10))\nm[:,0] = unp.nominal_values(Rp)\nm[:,1] = unp.std_devs(Rp)\nm[:,2] = unp.nominal_values(Rz)\nm[:,3] = unp.std_devs(Rz)\nm[:,4] = unp.nominal_values(I41)\nm[:,5] = unp.std_devs(I41)\nm[:,6] = unp.nominal_values(t41)\nm[:,7] = unp.std_devs(t41)\nm[:,8] = unp.nominal_values(U41)\nm[:,9] = unp.std_devs(U41)\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nTp=0.00134*Rp**2+2.296*Rp -243.02 +273.15\nTz=0.00134*Rz**2+2.296*Rz -243.02 +273.15\nI, t, U=np.genfromtxt('data/data2.txt', unpack=True)\nI=unp.uarray(I, 0.1)\nI*=1e-3 #jetzt in Ampere\nt=unp.uarray(t, 3)\nU=unp.uarray(U, 0.1)\n\nE=I*U*t\n\nDeltaT=[]\nfor i in range(1, 41):\n delta=Tp[i]-Tp[i-1]\n 
DeltaT.append(delta)\n\n\nC_p=E/(n*np.asarray(DeltaT))\n\nC_p41=np.append(0,C_p)\nDeltaT41=np.append(0,DeltaT)\nE41=np.append(0,E)\n\nhr = ['$T_p/$K', '$\\Delta T_p/$K','$E$/J', '$C_p$/\\frac{mol}{kg K}']\nm = np.zeros((41, 8))\nm[:,0] = unp.nominal_values(Tp)\nm[:,1] = unp.std_devs(Tp)\nm[:,2] = unp.nominal_values(DeltaT41)\nm[:,3] = unp.std_devs(DeltaT41)\nm[:,4] = unp.nominal_values(E41)\nm[:,5] = unp.std_devs(E41)\nm[:,6] = unp.nominal_values(C_p41)\nm[:,7] = unp.std_devs(C_p41)\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\n\n#C_v berechnen\ndef alphafunktion(T, a, b):\n return a/T+b\n\nT, alpha=np.genfromtxt('data/alpha.txt', unpack=True)\nalpha*=1e-6\n\nV0=7.11*1e-6 #m3/mol\nkappa=137.8*1e9 #N/m^2\n\nparams, covariance_matrix = optimize.curve_fit(alphafunktion, T, alpha)\na, b = correlated_values(params, covariance_matrix)\nprint('Fit für alpha:')\nprint('a=', a)\nprint('b=', b)\n\nlinspace=np.linspace(50, 320, 1000)\nplt.plot(linspace, 1e6*alphafunktion(linspace, *params), 'b-', label='Ausgleichrechnung', linewidth=0.5)\nplt.plot(T, alpha*1e6, 'rx', mew=0.5, label='Gegebene Werte')\nplt.xlabel(r'$T/$K')\nplt.ylabel(r'$\\alpha/10^{-6}$K')\nplt.axis([50,320,0,20])\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/alpha.pdf')\nplt.clf()\n\nalpha_T=a/Tp+b\n\nC_v=C_p41-alpha_T**2*kappa*V0*Tp\n\nhr = ['$T$/K', '$\\alpha$/10^-6 °']\nm = np.zeros((24, 2))\nm[:,0] = T\nm[:,1] = alpha*1e6\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nhr = ['$C_V$/J/kg K']\nm = np.zeros((41, 2))\nm[:,0] = unp.nominal_values(C_v)\nm[:,1] = unp.std_devs(C_v)\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\n\nplt.errorbar(unp.nominal_values(Tp[1:41]), unp.nominal_values(C_v[1:41]), yerr=unp.std_devs(C_v[1:41]), xerr=unp.std_devs(Tp[1:41]), fmt='rx',mew=0.5, ecolor='b', elinewidth=0.5, label='Errechnete Werte') #Irgendwas klappt hier mit den errorbars noch nicht so richtig\nplt.xlabel(r'$T/$K')\nplt.ylabel(r'$C_v$ in J/(mol 
K)')\n#plt.axis([50,320,0,0.00002])\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/cv.pdf')\nplt.clf()\n\n\n\n#Dieses komische Debye Zeug\n\nparams, covariance_matrix = optimize.curve_fit(debyeFunction, unp.nominal_values(Tp[1:21]), unp.nominal_values(C_v[1:21]), sigma=unp.std_devs(C_v[1:21]), absolute_sigma=True)\nerrors = np.sqrt(np.diag(covariance_matrix))\ntheta_D = params[0]\nsigma_theta_D = errors[0]\ntheta_D_ufloat = ufloat(theta_D, sigma_theta_D)\nprint(\"theta_D = \", theta_D_ufloat)\n\nTlin = np.linspace(-1, 180, 200)\nC_Vplot = debyeFunction(Tlin, theta_D)\nplt.errorbar(unp.nominal_values(Tp[1:21]), unp.nominal_values(C_v[1:21]), label='Messwerte', yerr=unp.std_devs(C_v[1:21]), fmt='rx',mew=0.5, ecolor='b', elinewidth=0.5)\nplt.plot(Tlin, C_Vplot, 'g-', label='Ausgleichsfunktion', linewidth=0.5)\nplt.grid()\nplt.legend()\nplt.axis([0, 180, -1, 25])\nplt.xlabel(r'$T$/K')\nplt.ylabel(r'$C_V/$(J/mol K)')\nplt.savefig('build/debye.pdf')\nplt.clf()\n\n# Calculating Debye Frequency and Temperature analytically\nN_A = 6.02214179*1e23 # in 1/mol\nM = 63.55*1e-3 # in kg/mol\nrho = 8960 # in kg/m^3\nnumber_density = N_A*rho/M\n\nv_long = 4.7*1e3 # in m/s\nv_trans = 2.26*1e3 # in m/s\n\nomega_D = ((18*np.pi**2*number_density) * (1/v_long**3+2/v_trans**3)**(-1))**(1/3)\nprint('Debye-Frequenz', omega_D)\n\nk_B = 1.3806504*1e-23\nhbar = 1.054571628*1e-34\n\nTheta_D = hbar*omega_D/k_B\nprint('Debye-Temperatur', Theta_D)\n"
},
{
"alpha_fraction": 0.6108354926109314,
"alphanum_fraction": 0.6616392135620117,
"avg_line_length": 24.841026306152344,
"blob_id": "02f91b117781d604d23fd4b0c685ffce22bbbbd2",
"content_id": "d5fb10e0354fa247de34391f29fec472c07f472d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5051,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 195,
"path": "/V61_He-Ne-Laser/plot.py",
"repo_name": "JaLuka98/FP_SS19",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport scipy.optimize\nimport uncertainties as unc\nimport uncertainties.unumpy as unp\nfrom scipy import optimize\nimport matplotlib.pyplot as plt\nfrom uncertainties import ufloat\nfrom scipy import stats\nfrom uncertainties import correlated_values\nfrom matrix2latex import matrix2latex\n\ndef linfit(x, a, b):\n return a*x + b\n\ndef cos2fit(phi, I_max, delta):\n return I_max*(np.cos(np.radians(phi+delta)))**2\n\ndef parabelfit(x, a, b, c):\n return a*x*x + b*x + c\n\ndef mode0(x, I_max, d, w):\n return I_max*np.exp(-2*((x-d)/w)**2)\n\ndef mode1(x, I_max, d, w):\n return I_max*(((x-d)/w)**2)*np.exp(-2*((x-d)/w)**2)\n\n#Gitter\n#100 Spalte/mm -> 1*10^5 Spalte pro m\nd_1=ufloat(6.3, 0.2) #cm\nd_2=ufloat(6.2, 0.2) #cm\nl=ufloat(95.7, 0.2) #cm\ng=10**(-5)\n#HIER DIE AUSWERTUNG FÜR DIE WELLENLÄNGE\nd_1=d_1*1e-2\nd_2=d_2*1e-2\nl=l*1e-2\nd=np.mean([d_1, d_2])\nprint('Mittelwert der Abstände:', d)\n\nlamda=g*unp.sin(unp.arctan(d/l))\nprint('lambda= ', lamda)\nprint('Abweichung von der Theorie:', (lamda*1e9/632.8-1)*100)\n\n#plankonkav\nprint('plankonkav:')\nl, I = np.genfromtxt('data/plankonkav.txt', unpack=True)\n\nhr = ['$L$/cm', '$I$/µA']\nm = np.zeros((21, 2))\nm[:,0] = l\nm[:,1] = I\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nparams, covariance_matrix = optimize.curve_fit(linfit, l, I)\na, b = correlated_values(params, covariance_matrix)\nprint('Fit zum plankonkaven Resonator:')\nprint('a=', a)\nprint('b=', b)\n\nlinspace=np.linspace(40, 80, 1000)\nplt.plot(linspace, linfit(linspace, *params), 'b-', label='Ausgleichrechnung', linewidth=0.5)\nplt.plot(l, I, 'rx', mew=0.5, label='Messwerte')\n#plt.plot(linspace, (1-linspace/1400), linewidth=0.5, label= 'Theoriekurve')\nplt.xlabel(r'$L/$cm')\nplt.ylabel(r'$I$/µA')\nplt.axis([40,80,0,125])\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/plankonkav.pdf')\nplt.clf()\n\n#plankonkav\nprint('konkavkonkav:')\nl, I = np.genfromtxt('data/konkavkonkav.txt', 
unpack=True)\n\nhr = ['$L$/cm', '$I$/µA']\nm = np.zeros((29, 2))\nm[:,0] = l\nm[:,1] = I\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nparams, covariance_matrix = optimize.curve_fit(parabelfit, l, I)\na, b, c = correlated_values(params, covariance_matrix)\nprint('Fit zum konkavkonkaven Resonator:')\nprint('a=', a)\nprint('b=', b)\nprint('c=', c)\n\nlinspace=np.linspace(68, 155, 100)\nplt.plot(linspace, parabelfit(linspace, *params), 'b-', label='Ausgleichrechnung', linewidth=0.5)\nplt.plot(l, I, 'rx', mew=0.5, label='Messwerte')\n#plt.plot(linspace, np.max(I)*(1-(linspace/140))*(1-(linspace/140)), linewidth=0.5, label= 'Theoriekurve')\nplt.xlabel(r'$L/$cm')\nplt.ylabel(r'$I$/µA')\nplt.axis([68,152,0,260])\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/konkavkonkav.pdf')\nplt.clf()\n\n#TEM00\nprint('TEM00:')\ns, I = np.genfromtxt('data/tem00.txt', unpack=True)\n\nhr = ['$L$/cm', '$I$/nA']\nm = np.zeros((63, 2))\nm[:,0] = s\nm[:,1] = I\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nparams, covariance_matrix = optimize.curve_fit(mode0, s, I)\nI_max, d, w = correlated_values(params, covariance_matrix)\nprint('Fit zur TEM00 Mode:')\nprint('I_max=', I_max)\nprint('d=', d)\nprint('w=', w)\n\nlinspace=np.linspace(-40, 40, 1000)\nplt.plot(linspace, mode0(linspace, *params), 'b-', label='Ausgleichrechnung', linewidth=0.5)\nplt.plot(s, I, 'rx', mew=0.5, label='Messwerte')\nplt.xlabel(r'$s$/cm')\nplt.ylabel(r'$I$/nA')\nplt.axis([-33,33,0,830])\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/tem00.pdf')\nplt.clf()\n\n#TEM01\nprint('TEM01:')\ns, I = np.genfromtxt('data/tem01.txt', unpack=True)\n\nhr = ['$L$/cm', '$I$/nA']\nm = np.zeros((63, 2))\nm[:,0] = s\nm[:,1] = I\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nparams, covariance_matrix = optimize.curve_fit(mode1, s, I)\nI_max, d, w = correlated_values(params, covariance_matrix)\nprint('Fit zur TEM01 Mode:')\nprint('I_max=', I_max)\nprint('d=', 
d)\nprint('w=', w)\n\nlinspace=np.linspace(-40, 40, 1000)\nplt.plot(linspace, mode1(linspace, *params), 'b-', label='Ausgleichrechnung', linewidth=0.5)\nplt.plot(s, I, 'rx', mew=0.5, label='Messwerte')\nplt.xlabel(r'$s/$cm')\nplt.ylabel(r'$I$/nA')\nplt.axis([-33,33,0,330])\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/tem01.pdf')\nplt.clf()\n\n#polarisation\nprint('Polarisation:')\nphi, I = np.genfromtxt('data/polarisation.txt', unpack=True)\n\nhr = ['$\\phi$/°', '$I$/µA']\nm = np.zeros((36, 2))\nm[:,0] = phi\nm[:,1] = I\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nparams1, covariance_matrix1 = optimize.curve_fit(cos2fit, phi, I, p0=[50, -30])\nerrors1 = np.sqrt(np.diag(covariance_matrix1))\n\nprint('Fitparameter für die Polarisation:')\n\nprint('I_max = ', params1[0], '+-', errors1[0])\nprint('delta = ', params1[1], '+-', errors1[1])\n\nlinspace=np.linspace(-30,380, 1000)\n\nplt.plot(linspace, cos2fit(linspace, *params1), linewidth=0.5, label='Ausgleichsfunktion')\nplt.plot(phi, I, 'rx', mew=0.5, label='Messwerte')\nplt.xlabel(r'$\\phi/$°')\nplt.ylabel(r'$I$/µA')\nplt.axis([-15,365,-5,75])\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/polarisation.pdf')\nplt.clf()\n"
},
{
"alpha_fraction": 0.6141209602355957,
"alphanum_fraction": 0.6539671421051025,
"avg_line_length": 25.490739822387695,
"blob_id": "afe1169fdb293f43d7f7601b740641504448ddcd",
"content_id": "5d49392bc4a230ad0c2f3293a9a9a6bdaa249dda",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2870,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 108,
"path": "/V53_mikrowellen/plot.py",
"repo_name": "JaLuka98/FP_SS19",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport scipy.optimize\nimport uncertainties as unc\nimport uncertainties.unumpy as unp\nfrom scipy import optimize\nimport matplotlib.pyplot as plt\nfrom uncertainties import ufloat\nfrom scipy import stats\n\n\ndef parabola(x, a, b, c):\n return a*x*x + b*x + c\n\n\n#Frequenzmessung\nf_dip=9006 #MHz\n\n#3dB Mehtode\nd1=69.6 #mm\nd2=67.9\n\n#Abschwächer Methode\nA_3=42 #dB\n\n# Untersuchung der Moden\n\nf, U_l, U_max, U_r, A = np.genfromtxt('data/moden.txt', unpack=True, comments='#')\n\njet = plt.get_cmap('jet')\ncolors = iter(jet(np.linspace(0, 1, 10)))\n\nfor i in range(0, 3):\n x = np.array([U_r[i], U_max[i], U_l[i]])\n xlin = np.linspace(U_r[i], U_l[i], 1000)\n y = np.array([0, A[i], 0])\n print(x)\n print(y)\n params, covariance_matrix = optimize.curve_fit(parabola, x, y)\n # errors = np.sqrt(np.diag(covariance_matrix)) because 3 points fit perfectly\n print(\"a, b, c = %.1f\" % params[i])\n plt.plot(x, y, 'x', color=next(colors))\n plt.plot(xlin, parabola(xlin, *params), color=next(colors))\n plt.grid()\n plt.xlabel(r'$U_\\mathrm{refl}$/V')\n plt.ylabel(r'$A/$mV')\n plt.savefig('build/modes.pdf')\n\nplt.clf()\n\n# Frequenz- und Wellenlängen untersuchen\n\n# Abmessungen des Hohlleiters hier einfügen (in millimetern)\na = 23.0 #mm\nb = 10.2 #mm\n\nx = np.genfromtxt('data/wellenlaenge.txt', unpack=True, comments='#')\nx1 = x[1] - x[0]\nx2 = x[2] - x[1]\nx3 = x[3] - x[2]\nx = 2*np.array([x1, x2, x3]) # x in mm\nlam_g = ufloat(np.mean(x), stats.sem(x)) # default ddof = 1\nprint('Wellenlänge im Hohlleiter', lam_g)\nlam_c = 2*a\nprint('Grenzwellenlänge des Hohlleiters', lam_c)\nlam_0 = 1/unp.sqrt(1/lam_g**2 + 1/(2*a)**2)\nprint('Wellenlänge im freien Raum', lam_0)\nf_exp = 3*1e11*unp.sqrt(1/lam_g**2 + 1/(2*a)**2)\nprint('f_exp ist', f_exp)\nv_ph = f_exp*lam_g\nprint('v_phase', v_ph)\n\n# Betrachtung der Dämpfung\n\ndB_selbst, d_selbst = np.genfromtxt('data/daempfung.txt', unpack=True, comments='#')\nd_dort, dB_dort = 
np.genfromtxt('data/daempfungbild.txt', unpack=True, comments='#')\n\nplt.plot(d_selbst, dB_selbst, 'rx', mew=0.5, label='Eigene Messung')\nplt.plot(d_dort, dB_dort, 'bo', mew=0.5, markersize=4, label='Herstellerangabe')\nplt.grid()\nplt.legend()\nplt.xlabel(r'$d/$mm')\nplt.ylabel(r'$D/$dB')\nplt.savefig('build/daempfungOhneKorrektur.pdf')\nplt.clf()\n\nprint(d_selbst)\nd_selbst -= 1.5\nd_selbst[0] += 1.5\n#d_dort -= 1.5\nplt.plot(d_selbst, dB_selbst, 'rx', mew=0.5, label='Eigene Messung')\nplt.plot(d_dort, dB_dort, 'bo', mew=0.5, markersize=4, label='Herstellerangabe')\nplt.grid()\nplt.legend()\nplt.xlabel(r'$d/$mm')\nplt.ylabel(r'$D/$dB')\nplt.savefig('build/daempfungMitKorrektur.pdf')\n\n# 3db Methode\nx_1 = 69.6\nx_2 = 67.9\nS_3db = unp.sqrt(1 + 1/(unp.sin(np.pi*(x_1-x_2)/lam_g))**2)\nprint('S durch 3db Methode', S_3db)\n\n# Abschwächermethode\nA_1 = 20\nA_2 = 42\nS_ab = 10**((A_2-A_1)/20)\nprint('S durch Abschwächermethode', S_ab)\n"
},
{
"alpha_fraction": 0.5786271691322327,
"alphanum_fraction": 0.6574103236198425,
"avg_line_length": 35.214691162109375,
"blob_id": "b2e2a375d7ce606ad6e85721ee217f98ee21470b",
"content_id": "2110c90ad1fa1cb8a2b9230ece76d57487e64803",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6416,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 177,
"path": "/V46_Faradayeffekt/plot.py",
"repo_name": "JaLuka98/FP_SS19",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom scipy import constants\nimport scipy.integrate as integrate\nimport uncertainties as unc\nimport scipy\nimport uncertainties.unumpy as unp\nfrom scipy import optimize\nimport matplotlib.pyplot as plt\nfrom uncertainties import ufloat\nfrom scipy import stats\nfrom uncertainties import correlated_values\nfrom matrix2latex import matrix2latex\n\n\ndef linearFunction(x,a):\n    return a*x\n\n\ndef mStarred(a, N, B, n):\n    e_0 = constants.value(u'elementary charge')\n    #eps_0 = constants.value(u'epsilon_0') doesnt work for whatever reason\n    eps_0 = 8.8541878128e-12\n    c = constants.value(u'speed of light in vacuum')\n    return unp.sqrt(N*B*e_0**3/(8*np.pi*np.pi*eps_0*a*c**3*n))\n\n\nB = 325 # in mT\nB *= 1e-3 # in T\n\n#Einlesen Daten reine Probe\ngrad1_rein, minuten1_rein, grad2_rein, minuten2_rein, lamb = np.genfromtxt('data/rein.txt', unpack=True)\nL = 0.0051\n\n#Berechnung der Winkel reine Probe\ntheta1_rein = grad1_rein + minuten1_rein/60\ntheta2_rein = grad2_rein + minuten2_rein/60\n#Winkel in rad umrechnen\ntheta1_rein = theta1_rein/360 * 2*np.pi\ntheta2_rein = theta2_rein/360 * 2*np.pi\n#Berechnung des Drehwinkels\ntheta_rein = 1/2*(theta2_rein - theta1_rein)\n\nhr = [r'$\\lambda/\\mu$m', r'$\\theta_1$/rad', r'$\\theta_2$/rad', r'$\\theta$/rad']\nsize = np.size(lamb)\nm = np.zeros((size, 4))\nm[:, 0] = lamb\nm[:, 1] = theta1_rein[0:size]\nm[:, 2] = theta2_rein[0:size]\nm[:, 3] = theta_rein[0:size]\nt = matrix2latex(m, headerRow=hr, format='%.4f')\nprint(t)\n\ntheta_frei_rein = theta_rein/L # in rad/m\nplt.plot(lamb**2, theta_frei_rein, 'rx')\nplt.xlabel(r'$\\lambda^2/\\mu\\mathrm{m}^2$')\nplt.ylabel(r'$\\frac{\\theta}{L}/\\frac{\\mathrm{rad}}{\\mathrm{mm}}$')\nplt.grid()\nplt.axis([0, 8, -2.5, 32.5])\nplt.savefig('build/rein.pdf')\nplt.clf()\n\n#Einlesen Daten erste dotierte Probe\ngrad1_dotiert_136, minuten1_dotiert_136, grad2_dotiert_136, minuten2_dotiert_136, lamb = np.genfromtxt('data/dotiert_136.txt', unpack=True)\nL = 
0.00136\n\n#Berechnung der Winkel erste dotierte Probe\ntheta1_dotiert_136 = grad1_dotiert_136 + minuten1_dotiert_136/60\ntheta2_dotiert_136 = grad2_dotiert_136 + minuten2_dotiert_136/60\n#Winkel in rad umrechnen\ntheta1_dotiert_136 = theta1_dotiert_136/360 * 2*np.pi\ntheta2_dotiert_136 = theta2_dotiert_136/360 * 2*np.pi\n#Berechnung des Drehwinkels\ntheta_dotiert_136 = 1/2*(theta2_dotiert_136 - theta1_dotiert_136)\n\nhr = [r'$\\lambda/\\mu$m', r'$\\theta_1$/rad', r'$\\theta_2$/rad', r'$\\theta$/rad']\nsize = np.size(lamb)\nm = np.zeros((size, 4))\nm[:, 0] = lamb\nm[:, 1] = theta1_dotiert_136[0:size]\nm[:, 2] = theta2_dotiert_136[0:size]\nm[:, 3] = theta_dotiert_136[0:size]\nt = matrix2latex(m, headerRow=hr, format='%.4f')\nprint(t)\n\ntheta_frei_dotiert_136 = theta_dotiert_136/L # in rad/m\nplt.plot(lamb**2, theta_frei_dotiert_136, 'rx')\nplt.xlabel(r'$\\lambda^2/\\mu\\mathrm{m}^2$')\nplt.ylabel(r'$\\frac{\\theta}{L}/\\frac{\\mathrm{rad}}{\\mathrm{mm}}$')\nplt.grid()\nplt.axis([0, 8, 15, 50])\nplt.savefig('build/dotiert_136.pdf')\nplt.clf()\n\n#Einlesen Daten zweite dotierte Probe\ngrad1_dotiert_1296, minuten1_dotiert_1296, grad2_dotiert_1296, minuten2_dotiert_1296, lamb = np.genfromtxt('data/dotiert_1296.txt', unpack=True)\nL = 0.001296\n\n#Berechnung der Winkel zweite dotierte Probe\ntheta1_dotiert_1296 = grad1_dotiert_1296 + minuten1_dotiert_1296/60\ntheta2_dotiert_1296 = grad2_dotiert_1296 + minuten2_dotiert_1296/60\n#Winkel in rad umrechnen\ntheta1_dotiert_1296 = theta1_dotiert_1296/360 * 2*np.pi\ntheta2_dotiert_1296 = theta2_dotiert_1296/360 * 2*np.pi\n#Berechnung des Drehwinkels\ntheta_dotiert_1296 = 1/2*(theta2_dotiert_1296 - theta1_dotiert_1296)\n\nhr = [r'$\\lambda/\\mu$m', r'$\\theta_1$/rad', r'$\\theta_2$/rad', r'$\\theta$/rad']\nsize = np.size(lamb)\nm = np.zeros((size, 4))\nm[:, 0] = lamb\nm[:, 1] = theta1_dotiert_1296[0:size]\nm[:, 2] = theta2_dotiert_1296[0:size]\nm[:, 3] = theta_dotiert_1296[0:size]\nt = matrix2latex(m, headerRow=hr, 
format='%.4f')\nprint(t)\n\ntheta_frei_dotiert_1296 = theta_dotiert_1296/L # in rad/m\nplt.plot(lamb**2, theta_frei_dotiert_1296, 'rx')\nplt.xlabel(r'$\\lambda^2/\\mu\\mathrm{m}^2$')\nplt.ylabel(r'$\\frac{\\theta}{L}/\\frac{\\mathrm{rad}}{\\mathrm{mm}}$')\nplt.grid()\nplt.axis([0, 8, 25, 95])\nplt.savefig('build/dotiert_1296.pdf')\nplt.clf()\n\n#######################################\n### Bestimmung der effektiven Masse ###\n#######################################\n\nx, nArr = np.genfromtxt('data/n.txt', unpack=True)\nn = ufloat(np.mean(nArr), stats.sem(nArr))\nprint(n)\n\ntheta_136 = theta_frei_dotiert_136 - theta_frei_rein\nparams, covariance_matrix = optimize.curve_fit(linearFunction, lamb**2, theta_136)\na = ufloat(params[0]*1e12, np.sqrt(np.diag(covariance_matrix))*1e12) # *1e12 um von micro meter squared auf m^2 zu kommen\nN = 1.2e18 # in cm^-3\nN *= 1e6 # in m^-3\nprint('------------------------------------------------------------')\nprint('Fit parameter für die erste Probe (136): ', a)\nprint('Effektive masse für die erste Probe: ', mStarred(a, N, B, n))\nprint('------------------------------------------------------------')\n\nlinLin = np.linspace(0, 10, 1000)\nplt.plot(lamb**2, theta_136, 'rx', label='Messwerte', zorder=2)\nplt.plot(linLin, linearFunction(linLin, *params), 'b-', label='Anpassungsfunktion', zorder=3, linewidth=0.5)\nplt.legend()\nplt.grid()\nplt.axis([0, 8, 0, 50])\nplt.gcf().subplots_adjust(bottom=0.18)\nplt.xlabel(r'$\\lambda^2 / \\mu m^2$')\nplt.ylabel(r'$\\frac{\\Theta_\\mathrm{frei}}{L} / \\frac{\\mathrm{rad}}{\\mathrm{m}}$')\nplt.savefig('build/differenz_136.pdf')\nplt.clf()\n\ntheta_1296 = theta_frei_dotiert_1296 - theta_frei_rein\nparams, covariance_matrix = optimize.curve_fit(linearFunction, lamb**2, theta_1296)\na = ufloat(params[0]*1e12, np.sqrt(np.diag(covariance_matrix))*1e12)\nN = 2.8e18 # in cm^-3\nN *= 1e6 # in m^-3\nprint('------------------------------------------------------------')\nprint('Fit parameter für die zweite 
Probe (1296): ', a)\nprint('Effektive masse für die zweite Probe: ', mStarred(a, N, B, n))\nprint('------------------------------------------------------------')\n\n\nlinLin = np.linspace(0, 10, 1000)\nplt.plot(lamb**2, theta_1296, 'rx', label='Messwerte', zorder=2)\nplt.plot(linLin, linearFunction(linLin, *params), 'b-', label='Anpassungsfunktion', zorder=3, linewidth=0.5)\nplt.legend()\nplt.grid()\nplt.axis([0, 8, 0, 100])\nplt.gcf().subplots_adjust(bottom=0.18)\nplt.xlabel(r'$\\lambda^2 / \\mu m^2$')\nplt.ylabel(r'$\\frac{\\Theta_\\mathrm{frei}}{L} / \\frac{\\mathrm{rad}}{\\mathrm{m}}$')\nplt.savefig('build/differenz_1296.pdf')\nplt.clf()\n"
},
{
"alpha_fraction": 0.5929802060127258,
"alphanum_fraction": 0.6540009379386902,
"avg_line_length": 29.596059799194336,
"blob_id": "a395bec5535aeb9031a74c92d28130926641f6d1",
"content_id": "7275f7af68b92ab3dcd4409a6747a58f2fe78ea3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6217,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 203,
"path": "/V23_quantenanalogien/plot.py",
"repo_name": "JaLuka98/FP_SS19",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport scipy.optimize\nimport uncertainties as unc\nimport uncertainties.unumpy as unp\nfrom scipy import optimize\nimport matplotlib.pyplot as plt\nfrom uncertainties import ufloat\nfrom uncertainties import correlated_values\nfrom matrix2latex import matrix2latex\n\nfrom scipy.special import legendre\n\n#Polarplot\n\nalpha, A1, A2, A3=np.genfromtxt('data/peaks.txt', unpack=True)\n\nhr = ['$\\alpha$/°', '$A_1$', '$A_2$','$A_3$']\nm = np.zeros((19, 4))\nm[:,0] = alpha\nm[:,1] = A1\nm[:,2] = A2\nm[:,3] = A3\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\n# Test mit Scipy\n# Erklärung: Pn = legendre(n) erschafft ein Objekt namens Pn, dass ein Legendrepolynom\n# des Grades n ist. Damit kann man plotten, wie unten auch zu sehen ist.\n# Damit kriegst du auch diese Schleifen hin!\nplt.figure(3)\nP1 = legendre(1)\nphilin = np.linspace(0, 2*np.pi, 1000)\ntheta = np.arccos(1/2*np.cos(alpha/360*2*np.pi)-1/2)\nplt.polar(theta, A1/np.max(A1), \"rx\", mew=0.5, label=\"Messwerte\")\nplt.polar(philin, np.abs(P1(np.cos(philin))), \"b-\", label=\"Theorie\", linewidth=0.5)\nplt.legend()\nplt.tight_layout()\nplt.savefig(\"build/polar1.pdf\")\nplt.clf()\n\n#Vermutlich P_1,0\nplt.figure(3)\nP1=legendre(1)\nphirange = np.linspace(0, 2*np.pi, 1000)\nplt.polar(np.arccos(1/2*np.cos(alpha/360*2*np.pi)-1/2), A1/np.max(A1), \"rx\", mew=0.5, label=\"Messwerte\")\nplt.polar(phirange, np.abs(P1(np.cos(phirange)))/max(P1(np.cos(phirange))), \"b-\", label=\"Theorie\", linewidth=0.5)\nplt.legend()\nplt.tight_layout()\nplt.savefig(\"build/polar1.pdf\", bbox_inches='tight')\nplt.clf()\n\n#Vermutlich P_2,0\nplt.figure(3)\nP2=legendre(2)\nplt.polar(np.arccos(1/2*np.cos(alpha/360*2*np.pi)-1/2), A2/np.max(A2), \"rx\", mew=0.5, label=\"Messwerte\")\nplt.polar(phirange, np.abs(P2(np.cos(phirange)))/max(np.abs(P2(np.cos(phirange)))), \"b-\", label=\"Theorie\", linewidth=0.5)\nplt.legend()\nplt.tight_layout()\nplt.savefig(\"build/polar2.pdf\", 
bbox_inches='tight')\nplt.clf()\nplt.figure(3)\nP2=legendre(2)\nplt.polar(np.arccos(1/2*np.cos(alpha/360*2*np.pi)-1/2), 1/2*A2/np.max(A2), \"rx\", mew=0.5, label=\"Messwerte\")\nplt.polar(phirange, np.abs(P2(np.cos(phirange)))/max(np.abs(P2(np.cos(phirange)))), \"b-\", label=\"Theorie\", linewidth=0.5)\nplt.legend()\nplt.tight_layout()\nplt.savefig(\"build/polar2neunormiert.pdf\", bbox_inches='tight')\nplt.clf()\n\n#Vermutlich P_3,0... der passt mit dem gemogelten Minus und ein bisschen Phantasie\nplt.figure(3)\nP3=legendre(3)\nplt.polar(np.arccos(1/2*np.cos(alpha/360*2*np.pi)-1/2), A3/np.max(A3), \"rx\", mew=0.5, label=\"Messwerte\")\nplt.polar(phirange, np.abs(P3(np.cos(phirange)))/max(P3(np.cos(phirange))), \"b-\", label=\"Theorie\", linewidth=0.5)\n#Das Minus gehört da eigentlich nicht hin, aber sonst kommt da nichts raus was auch\n#nur annähernd zu den Messwerten passt...\nplt.legend()\nplt.tight_layout()\nplt.savefig(\"build/polar3.pdf\", bbox_inches='tight')\nplt.clf()\n\n\n# Zylinderkette\ndef linfit(x, a, b):\n    return a*x+b\n\n\nzylinder, f1, a1, b1, A1, f2, a2, b2, A2 = np.genfromtxt('data/roehre.txt', unpack=True)\n\ndeltaf = 1000*(f2-f1)\nzylinderanzahl = np.log(zylinder)\nL = np.linspace(5,60,12)\neinsDurchZweiL = 1/(2*L)\nLlin = np.linspace(5,60,1000)\n#linspace = np.linspace(-0.1, 2.7, 500)\nparams, covariance_matrix = optimize.curve_fit(linfit, einsDurchZweiL, deltaf)\na, b = correlated_values(params, covariance_matrix)\nprint('Fit zur Schallgeschwindigkeitsbestimmung')\nprint('a=', a)\nprint('b=', b)\n\nplt.plot(einsDurchZweiL, deltaf/1000, 'rx', mew=0.5, label='Messwerte') # Durch 1000 für Kilohertz\nplt.plot(1/(2*Llin), linfit(1/(2*Llin), *params)/1000, 'b-', label='Ausgleichrechnung', linewidth=0.5)\n#plt.yscale('log')\n#plt.xscale('log')\nplt.xlabel(r'$\\frac{1}{2L}\\cdot$m')\nplt.ylabel(r'$\\Delta f$/kHz')\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/roehre.pdf')\nplt.clf()\n\nhr = ['Zylinderanzahl', 'Resonatorlänge/cm', 
'$f_1/$Hz', '$f_2/$Hz', '$\\Delta f/$Hz']\nm = np.zeros((12, 5))\nm[:,0] = zylinder\nm[:,1] = 5*zylinder\nm[:,2] = f1*1000\nm[:,3] = f2*1000\nm[:,4] = 1000*(f2-f1)\nt = matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\nzylinder = np.genfromtxt('data/zyldat.txt', dtype=np.str)\n\nfor i in range(0, len(zylinder)):\n f, A = np.genfromtxt(zylinder[i], unpack=True)\n #plt.plot(f, A, 'ro', mew=0.5)\n plt.plot(f, A, 'b-', linewidth=0.5, label='Frequenzspektrum')\n f_osz = np.genfromtxt('data/oszispektrum.txt', usecols=(i))\n for a in range(0, len(f_osz)):\n plt.axvline(f_osz[a]*1000, color='r', linestyle='--', linewidth=0.4)\n plt.xlabel(r'$f/$Hz')\n plt.ylabel(r'$A$')\n plt.xlim(0, np.max(f))\n plt.ylim()\n plt.tight_layout()\n plt.legend()\n plt.grid()\n plt.savefig('build/zyl'+str(i+1)+'.pdf')\n plt.clf()\n\nf1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12 = np.genfromtxt('data/oszispektrum.txt', unpack=True)\nhr = ['$f_1$/Hz','$f_2$/Hz','$f_3$/Hz','$f_4$/Hz','$f_5$/Hz','$f_6$/Hz','$f_7$/Hz',\n'$f_8$/Hz','$f_9$/Hz','$f_10$/Hz','$f_11$/Hz','$f_12$/Hz']\nm = np.zeros((12, 13))\nm[:,0] = f1*1000\nm[:,1] = f2*1000\nm[:,2] = f3*1000\nm[:,3] = f4*1000\nm[:,4] = f5*1000\nm[:,5] = f6*1000\nm[:,6] = f7*1000\nm[:,7] = f8*1000\nm[:,8] = f9*1000\nm[:,9] = f10*1000\nm[:,10] = f11*1000\nm[:,11] = f12*1000\nt=matrix2latex(m, headerRow=hr, format='%.2f')\nprint(t)\n\n\n#H-Atom\n\nf, A=np.genfromtxt('data/hatom/hatom180alles.dat', unpack=True)\nf_osz=np.genfromtxt('data/hatom.txt', unpack=True)\nplt.plot()\nplt.plot(f, A, 'b-', linewidth=0.5, label='PC')\nfor i in range(0, len(f_osz)):\n plt.axvline(f_osz[i]*1000, color='r', linestyle='--', linewidth=0.5)\nplt.xlabel(r'$f/$Hz')\nplt.ylabel(r'$A$')\nplt.xlim()\nplt.ylim()\nplt.tight_layout()\nplt.legend()\nplt.grid()\nplt.savefig('build/hatomalles.pdf')\nplt.clf()\n\nhr = ['$f_\\symup{res}$/Hz']\nm = np.zeros((10, 1))\nm[:,0] = f_osz*1000\nt=matrix2latex(m, headerRow=hr, 
format='%.2f')\nprint(t)\n\n\nwasserstoff=np.genfromtxt('data/hatomdat.txt', dtype=np.str)\n\nfor i in range (0, len(wasserstoff)):\n f, A = np.genfromtxt(wasserstoff[i], unpack=True)\n plt.plot()\n #plt.plot(f, A, 'ro', mew=0.5)\n plt.plot(f, A, 'b-', linewidth=0.5, label='Frequenzspektrum')\n plt.xlabel(r'$f/$Hz')\n plt.ylabel(r'$A$')\n plt.xlim()\n plt.ylim()\n plt.tight_layout()\n plt.legend()\n plt.grid()\n plt.savefig('build/hatom'+str(i)+'.pdf')\n plt.clf()\n"
}
] | 6 |
TanmayR07/Human-Cognitive-Predictor | https://github.com/TanmayR07/Human-Cognitive-Predictor | 30bd0ca0bc3fd24e88b5a2137362f0a980ba3ce3 | 7e6cac4cfb637003304f7a323e190cf242e8fb2c | 1c331e8a4dda3ec0bf4fcc14800958209e2fbbbf | refs/heads/main | 2023-02-16T14:58:53.178006 | 2021-01-19T08:00:25 | 2021-01-19T08:00:25 | 330,899,881 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5441297292709351,
"alphanum_fraction": 0.5535714030265808,
"avg_line_length": 36.992000579833984,
"blob_id": "4adec8cb2e5d2959405acf7c6c4227b0389bfef1",
"content_id": "e0724a0b3ceb3a1c85487cd2a8435f9db74e84be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 4872,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 125,
"path": "/app.js",
"repo_name": "TanmayR07/Human-Cognitive-Predictor",
"src_encoding": "UTF-8",
"text": "function onClickedBehaviourType() {\r\n  console.log(\"Estimate Behaviour button clicked\");\r\n  var workTime = document.getElementById(\"uiworktime\");\r\n  var annortaorAge = document.getElementById(\"uiage\");\r\n  var logTimeSinceEvent = document.getElementById(\"uilogtime\");\r\n  var timeSinceEvent = document.getElementById(\"uitimeevent\");\r\n  var Gender = document.getElementById(\"uigender\");\r\n  var distracted = document.getElementById(\"uidistracted\");\r\n  var frequency = document.getElementById(\"uifrequency\");\r\n  var draining = document.getElementById(\"uidraining\");\r\n  var importance = document.getElementById(\"uiimportance\");\r\n  var stressful = document.getElementById(\"uistressful\");\r\n  var similarity = document.getElementById(\"uisimilarity\");\r\n  var estbehaviour = document.getElementById(\"uiEstimatedBehaviour\");\r\n  var url = \"/predict_Human_Behaviour\";\r\n\r\n  $.post(\r\n    url,\r\n    {\r\n      worktime: parseFloat(workTime.value),\r\n      annortaorAge: parseFloat(annortaorAge.value),\r\n      logTimeSinceEvent: parseFloat(logTimeSinceEvent.value),\r\n      Gender: Gender.value,\r\n      distracted: distracted.value,\r\n      frequency: frequency.value,\r\n      draining: draining.value,\r\n      importance: importance.value,\r\n      stressful: stressful.value,\r\n    },\r\n    function (data, status) {\r\n      console.log(data.estimated_Human_Behaviour);\r\n      estbehaviour.innerHTML =\r\n        \"<h2>\" + data.estimated_Human_Behaviour.toString() + \"</h2>\";\r\n      console.log(status);\r\n    }\r\n  );\r\n}\r\n\r\nfunction onPageLoad() {\r\n  console.log(\"document loaded\");\r\n  var url = \"/get_gender_names\";\r\n  $.get(url, function (data, status) {\r\n    console.log(\"got response for get_gender_names request\");\r\n    if (data) {\r\n      var gender = data.gender;\r\n      var uigender = document.getElementById(\"uigender\");\r\n      $(\"#uigender\").empty();\r\n      for (var i in gender) {\r\n        var opt = new Option(gender[i]);\r\n        $(\"#uigender\").append(opt);\r\n      }\r\n    }\r\n  });\r\n  console.log(\"document 
loaded\");\r\n  var url1 = \"/get_distracted\";\r\n  $.get(url1, function (data1, status1) {\r\n    console.log(\"got response for get_distracted request\");\r\n    if (data1) {\r\n      var distracted = data1.distracted;\r\n      var uidistracted = document.getElementById(\"uidistracted\");\r\n      $(\"#uidistracted\").empty();\r\n      for (var i in distracted) {\r\n        var opt1 = new Option(distracted[i]);\r\n        $(\"#uidistracted\").append(opt1);\r\n      }\r\n    }\r\n  });\r\n  console.log(\"document loaded\");\r\n  var url2 = \"/get_frequency\";\r\n  $.get(url2, function (data2, status) {\r\n    console.log(\"got response for get_frequency request\");\r\n    if (data2) {\r\n      var frequency = data2.frequency;\r\n      var uifrequency = document.getElementById(\"uifrequency\");\r\n      $(\"#uifrequency\").empty();\r\n      for (var i in frequency) {\r\n        var opt2 = new Option(frequency[i]);\r\n        $(\"#uifrequency\").append(opt2);\r\n      }\r\n    }\r\n  });\r\n  console.log(\"document loaded\");\r\n  var url3 = \"/get_draining\";\r\n  $.get(url3, function (data3, status3) {\r\n    console.log(\"got response for get_draining request\");\r\n    if (data3) {\r\n      var draining = data3.draining;\r\n      var uidraining = document.getElementById(\"uidraining\");\r\n      $(\"#uidraining\").empty();\r\n      for (var i in draining) {\r\n        var opt3 = new Option(draining[i]);\r\n        $(\"#uidraining\").append(opt3);\r\n      }\r\n    }\r\n  });\r\n  console.log(\"document loaded\");\r\n  var url4 = \"/get_importance\";\r\n  $.get(url4, function (data4, status4) {\r\n    console.log(\"got response for get_importance request\");\r\n    if (data4) {\r\n      var importance = data4.importance;\r\n      var uiimportance = document.getElementById(\"uiimportance\");\r\n      $(\"#uiimportance\").empty();\r\n      for (var i in importance) {\r\n        var opt4 = new Option(importance[i]);\r\n        $(\"#uiimportance\").append(opt4);\r\n      }\r\n    }\r\n  });\r\n  console.log(\"document loaded\");\r\n  var url5 = \"/get_stressful\";\r\n  $.get(url5, function (data5, status5) {\r\n    console.log(\"got response for get_stressful request\");\r\n    if 
(data5) {\r\n      var stressful = data5.stressful;\r\n      var uistressful = document.getElementById(\"uistressful\");\r\n      $(\"#uistressful\").empty();\r\n      for (var i in stressful) {\r\n        var opt5 = new Option(stressful[i]);\r\n        $(\"#uistressful\").append(opt5);\r\n      }\r\n    }\r\n  });\r\n}\r\nwindow.onload = onPageLoad;"
},
{
"alpha_fraction": 0.6216216087341309,
"alphanum_fraction": 0.6299376487731934,
"avg_line_length": 22.049999237060547,
"blob_id": "8ed4a2c3dc70ec6c94f7b12ec2826c2deca6183e",
"content_id": "4c36a713e9e97b6680a26f68f61c1f414e9e5c2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 962,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 40,
"path": "/app.py",
"repo_name": "TanmayR07/Human-Cognitive-Predictor",
"src_encoding": "UTF-8",
"text": "import os\r\nimport pickle\r\n\r\nimport numpy as np\r\nfrom flask import Flask, jsonify, render_template, request\r\n\r\n#app name\r\napp = Flask(__name__)\r\n\r\n#load the saved model\r\ndef load_model():\r\n return pickle.load(open('model.pkl', 'rb'))\r\n\r\n\r\n#home page\r\[email protected]('/')\r\ndef home():\r\n return render_template('index2.html')\r\n\r\n#predict the result and return it\r\[email protected]('/predict_Human_Behaviour',methods=['POST'])\r\ndef predict():\r\n labels = ['recalled','imagined','retold']\r\n\r\n features = [int(x) for x in request.form.values()]\r\n\r\n values = [np.array(features)]\r\n\r\n model = load_model()\r\n predictions = model.predict(values)\r\n\r\n\r\n #result = labels[predictions[0]]\r\n return render_template('index2.html',output ='The dream is {}'.format(predictions[0]))\r\n\r\nif __name__ == '__main__':\r\n #For Heroku\r\n #port = int(os.environ.get('PORT',5000))\r\n #app.run(port = port, debug= True,use_reloader = False)\r\n app.run(debug =True)\r\n"
},
{
"alpha_fraction": 0.6242268085479736,
"alphanum_fraction": 0.6378865838050842,
"avg_line_length": 29.786884307861328,
"blob_id": "87d4b1ba8e74bb1ce0e23fbf96810228cafb3dd5",
"content_id": "528412283b7451cdcde4f5cd580faab074ce0571",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3880,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 122,
"path": "/model.py",
"repo_name": "TanmayR07/Human-Cognitive-Predictor",
"src_encoding": "UTF-8",
"text": "\r\nimport pickle\r\n\r\nimport numpy as np # Fundamental package for linear algebra and multidimensional arrays\r\nimport pandas as pd # Data analysis and manipultion tool\r\n\r\n# In read_csv() function, we have passed the location to where the files are located in the dphi official github page.\r\ntrain_data = pd.read_csv(\"https://raw.githubusercontent.com/dphi-official/Datasets/master/hippocorpus/train_set_label.csv\" )\r\n\r\nX = train_data.drop(['recAgnPairId','recImgPairId','similarityReason','story','WorkerId','AssignmentId','summary',\r\n 'annotatorRace','mainEvent','mostSurprising','memType'],axis = 1)\r\ny = train_data['memType'] # Output/Dependent variable\r\n\r\nGender = X.annotatorGender\r\nGender_final = []\r\nfor item in Gender:\r\n if item == 'Man' or item == 'man' or item == 'MAN':\r\n Gender_final.append(0)\r\n elif item == 'woman' or item == 'WOMAN' or item == 'Woman':\r\n Gender_final.append(1)\r\n else:\r\n Gender_final.append(2)\r\n\r\nX.drop('annotatorGender',axis = 1, inplace = True)\r\nX['Gender'] = Gender_final\r\n\r\ndistracted_text = X.distracted\r\n\r\ndistarcted_final = []\r\nfor item in distracted_text:\r\n if item == 'one':\r\n distarcted_final.append(1)\r\n elif item == '2.0':\r\n distarcted_final.append(2)\r\n elif item == '3.0':\r\n distarcted_final.append(3)\r\n elif item == '4.0':\r\n distarcted_final.append(4) \r\n else:\r\n distarcted_final.append(5)\r\n\r\nX.drop('distracted',axis = 1, inplace = True)\r\nX['distracted_num'] = distarcted_final\r\n\r\ndraining_text = X.draining\r\n\r\ndraining_final = []\r\nfor item in draining_text:\r\n if item == 'one':\r\n draining_final.append(1)\r\n elif item == '2.0':\r\n draining_final.append(2)\r\n elif item == '3.0':\r\n draining_final.append(3)\r\n elif item == '4.0':\r\n draining_final.append(4) \r\n else:\r\n draining_final.append(5)\r\n\r\nX.drop('draining',axis = 1, inplace = True)\r\nX['draining'] = draining_final\r\n\r\ny_enc = []\r\nfor item in y:\r\n if item == 
'recalled':\r\n y_enc.append(0)\r\n elif item == 'imagined':\r\n y_enc.append(1)\r\n else:\r\n y_enc.append(2)\r\ny_enc \r\n#y = np.array(y_enc)\r\n\r\n# import train_test_split\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nX_train, X_val, y_train, y_val = train_test_split(X,y,test_size=0.3, random_state = 42)\r\n\r\nX_train.annotatorAge.fillna(X_train.annotatorAge.mean(), inplace=True)\r\nX_train.importance.fillna(X_train.importance.mean(), inplace=True)\r\nX_train.frequency.fillna(X_train.frequency.mean(), inplace=True)\r\nX_train.similarity.fillna(X_train.similarity.mean(), inplace=True)\r\n\r\nX_val.annotatorAge.fillna(X_val.annotatorAge.mean(), inplace=True)\r\nX_val.importance.fillna(X_val.importance.mean(), inplace=True)\r\nX_val.frequency.fillna(X_val.frequency.mean(), inplace=True)\r\nX_val.similarity.fillna(X_val.similarity.mean(), inplace=True)\r\n\r\nfrom sklearn.preprocessing import StandardScaler\r\n\r\nss = StandardScaler()\r\nX_train = ss.fit_transform(X_train)\r\nX_val = ss.fit_transform(X_val)\r\n\r\nparams = {\"max_depth\": [25],\r\n \"min_samples_split\": [3],\r\n \"min_samples_leaf\": [1,2,3],\r\n \"bootstrap\": [True],\r\n \"n_estimators\": [125],\r\n \"n_jobs\": [-1],\r\n \"verbose\": [2],\r\n \"criterion\": [\"entropy\"]\r\n }\r\n\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom sklearn.model_selection import GridSearchCV\r\n\r\nrfc1 = RandomForestClassifier()\r\nclf = GridSearchCV(rfc1, params,cv = 4)\r\nclf.fit(X_train,y_train)\r\n\r\npred_clf = clf.predict(X_val)\r\n\r\nfrom sklearn.metrics import f1_score\r\n\r\nresult = clf.score(X_val, y_val)\r\nprint('The Score is;', result)\r\n\r\nprint('F1 Score for random forest classifier is: ', f1_score(y_val, pred_clf, average = 'weighted'))\r\n\r\n#save the model \r\nfilename = 'model.pkl'\r\npickle.dump(clf, open(filename, 'wb'))\r\n"
}
] | 3 |
stmharry/FoodRecognitionServer | https://github.com/stmharry/FoodRecognitionServer | 93f4809b586ad673ba7eacd351d2262fd17e7c04 | b4e906168bad979eecb0a4fe5926fa6997db3190 | f8e1d40f407fbead37422b8e9bf06c3d234b36a5 | refs/heads/master | 2020-07-28T19:01:23.321512 | 2016-09-25T03:16:18 | 2016-09-25T03:16:18 | 67,525,046 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6563630104064941,
"alphanum_fraction": 0.6596514582633972,
"avg_line_length": 36.543209075927734,
"blob_id": "6d2174e396fa86ffc6ca401584c8a90bb8958633",
"content_id": "6a295ed4cda93821ac445ca69feb6f969dc1a434",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3041,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 81,
"path": "/ddrift_be_img_server/classify_service/views.py",
"repo_name": "stmharry/FoodRecognitionServer",
"src_encoding": "UTF-8",
"text": "from rest_framework.decorators import parser_classes\nfrom rest_framework.decorators import renderer_classes\nfrom rest_framework.parsers import JSONParser\nfrom rest_framework.renderers import JSONRenderer\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nimport collections\nimport skimage.io\n\nfrom env import *\nfrom ResNet import set_meta, Meta, QueueProducer, Preprocess, Batch, Net, ResNet50, Postprocess, Timer\n\n\ndef _(x):\n return round(x, 4)\n\n\nclass NetWrapper(object):\n def __init__(self, working_dir):\n if working_dir is None:\n return\n\n meta = Meta.test(working_dir=working_dir)\n set_meta(meta)\n\n self.meta = meta\n self.producer = QueueProducer()\n self.preprocess = Preprocess(num_test_crops=NUM_TEST_CROPS)\n self.batch = Batch(batch_size=BATCH_SIZE, num_test_crops=NUM_TEST_CROPS)\n self.net = ResNet50(num_test_crops=NUM_TEST_CROPS, gpu_frac=GPU_FRAC)\n self.postprocess = Postprocess()\n\n with Timer('Building network...'):\n self.producer.blob().func(self.preprocess.test).func(self.batch.test).func(self.net.build)\n self.blob = self.postprocess.blob([self.net.prob, self.net.consistency])\n self.net.start(default_phase=Net.Phase.TEST)\n\n def get_results(self, request):\n urls = request.data.get('images', [])\n num_urls = len(urls)\n\n self.net.online(**self.batch.kwargs(total_size=num_urls, phase=Net.Phase.TEST))\n\n with Timer('ResNet50 running prediction on %d images... 
' % num_urls):\n for url in urls:\n self.net.online(**self.producer.kwargs(image=skimage.io.imread(url)))\n\n flag = True\n results = list()\n while flag:\n fetch = self.net.online(**self.blob.kwargs())\n\n for (prob, consistency) in zip(*[fetch[value.name] for value in self.blob.values]):\n indices = sorted(xrange(len(self.meta.class_names)), key=prob.__getitem__)[:-(TOP_K + 1):-1]\n classes = collections.OrderedDict([(self.meta.class_names[index], _(prob[index])) for index in indices])\n results.append(dict(status='ok', classes=classes, consistency=_(consistency)))\n\n flag = (fetch[self.net.prob.name].size != 0)\n\n return results\n\n\nclass ContentClassifyService(APIView):\n NET_WRAPPER = NetWrapper(working_dir=WORKING_DIR_CONTENT_TYPE)\n\n @parser_classes((JSONParser,))\n @renderer_classes((JSONRenderer,))\n def post(self, request, format=None):\n content = dict(results=ContentClassifyService.NET_WRAPPER.get_results(request))\n return Response(content)\n\n\nclass FoodClassifyService(APIView):\n NET_WRAPPER = NetWrapper(working_dir=WORKING_DIR_FOOD_TYPE)\n\n @parser_classes((JSONParser,))\n @renderer_classes((JSONRenderer,))\n def post(self, request, format=None):\n content = dict(results=FoodClassifyService.NET_WRAPPER.get_results(request))\n return Response(content)\n"
},
{
"alpha_fraction": 0.7636363506317139,
"alphanum_fraction": 0.7818182110786438,
"avg_line_length": 12.75,
"blob_id": "51f681962b9e5bc09f9e51d68ed797bb11799e71",
"content_id": "8ed0dd7be6aa050052784b4bc40b44e62e712396",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 55,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 4,
"path": "/util/initiate.py",
"repo_name": "stmharry/FoodRecognitionServer",
"src_encoding": "UTF-8",
"text": "import requests\nimport sys\n\nrequests.post(sys.argv[1])\n"
},
{
"alpha_fraction": 0.5280898809432983,
"alphanum_fraction": 0.6952247023582458,
"avg_line_length": 38.55555725097656,
"blob_id": "0eafea7da935780220cf472779d829cf34a22084",
"content_id": "e4f18dc989d132dd81698a844f59b6e1f33ad92d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 712,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 18,
"path": "/util/request.py",
"repo_name": "stmharry/FoodRecognitionServer",
"src_encoding": "UTF-8",
"text": "import collections\nimport json\nimport requests\nimport sys\nimport time\n\nstart = time.time()\nresponse = requests.post(sys.argv[1], json={\n 'images': [\n 'http://a.ecimg.tw/pic/v1/data/item/201608/D/G/C/F/0/7/DGCF07-A900665NY000_57c406ec5d958.jpg',\n 'http://a.ecimg.tw/pic/v1/data/item/201608/D/G/C/F/0/7/DGCF07-A900665NY000_57c406ec5d958.jpg',\n 'http://a.ecimg.tw/pic/v1/data/item/201608/D/G/C/F/0/7/DGCF07-A900665NY000_57c406ec5d958.jpg',\n 'http://a.ecimg.tw/pic/v1/data/item/201608/D/G/C/F/0/7/DGCF07-A900665NY000_57c406ec5d958.jpg',\n ]\n})\n\nprint('%.3f s' % (time.time() - start))\nprint(json.dumps(json.loads(response.text, object_pairs_hook=collections.OrderedDict), indent=4))\n"
},
{
"alpha_fraction": 0.719298243522644,
"alphanum_fraction": 0.719298243522644,
"avg_line_length": 30.66666603088379,
"blob_id": "3316b0dec72f6647d97ab2ef7c7f92d71898acfa",
"content_id": "6553128dee357450d15bc69790ca72dbe8e911f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 285,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 9,
"path": "/ddrift_be_img_server/classify_service/urls.py",
"repo_name": "stmharry/FoodRecognitionServer",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import url\n\nfrom classify_service import views\n\nurlpatterns = [\n url(r'^$', views.ContentClassifyService.as_view()), # backward-compatible\n url(r'^content', views.ContentClassifyService.as_view()),\n url(r'^food', views.FoodClassifyService.as_view()),\n]\n"
},
{
"alpha_fraction": 0.5956006646156311,
"alphanum_fraction": 0.7309644818305969,
"avg_line_length": 22.639999389648438,
"blob_id": "cb1e35c5c65dc8df65951ae4b72afd547de8e69d",
"content_id": "734d0ee103ede262f34936c72f95738b0e845585",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 591,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 25,
"path": "/requirements.txt",
"repo_name": "stmharry/FoodRecognitionServer",
"src_encoding": "UTF-8",
"text": "Django==1.7.5\ndjango-filter==0.9.2\ndjangorestframework==3.1.0\nmarkdown==2.6\n# imported from caffe/python/requirements.txt on 20150314\nCython>=0.19.2\nscipy>=0.13.2\nnumpy>=1.7.1\nnose>=1.3.0\nmatplotlib>=1.3.1\npandas>=0.12.0\nnetworkx>=1.8.1\nipython>=1.1.0\nh5py>=2.2.0\nleveldb>=0.191\npython-dateutil>=1.4,<2 # version number >= 2 is for python3\n#python-dateutil>=2.4.1\nprotobuf>=2.6.0\npython-gflags>=2.0\nscikit-learn>=0.14.1\n# have to install required libraries before installing Pillow\n# See http://pillow.readthedocs.org/en/latest/installation.html\npillow>=2.7.0\nsix>=1.7.0\nscikit-image>=0.9.3\n"
},
{
"alpha_fraction": 0.6169013977050781,
"alphanum_fraction": 0.7183098793029785,
"avg_line_length": 26.30769157409668,
"blob_id": "9f1061d161650964d06f5f200672711ea3b6c723",
"content_id": "4d76c03dbd426ef1590cc3582b4559b160cfdfd6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 355,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 13,
"path": "/ddrift_be_img_server/classify_service/env.py",
"repo_name": "stmharry/FoodRecognitionServer",
"src_encoding": "UTF-8",
"text": "import sys\n\nRESNET_ROOT = '/home/harry/Repository/FoodRecognitionV2'\nif RESNET_ROOT not in sys.path:\n sys.path.append(RESNET_ROOT)\n\nWORKING_DIR_CONTENT_TYPE = None # '/mnt/data/content-save/2016-09-06-155356'\nWORKING_DIR_FOOD_TYPE = '/mnt/data/food-save/2016-09-05-181554'\n\nGPU_FRAC = 1.0\nBATCH_SIZE = int(64 * GPU_FRAC)\nNUM_TEST_CROPS = 16\nTOP_K = 6\n"
}
] | 6 |
janakajain/medbot | https://github.com/janakajain/medbot | 4bd5ac2d543cfc5eda5894ebc7196c138ca705ef | dbaaff35115fa293e81e106f9f12b189c5447167 | 382da3c574ce3829cb26979bbf6c38f7aede9ae7 | refs/heads/Janak | 2021-08-28T00:54:44.330404 | 2017-12-10T23:52:07 | 2017-12-10T23:52:07 | 106,466,371 | 18 | 9 | null | 2017-10-10T20:15:35 | 2017-11-04T16:25:13 | 2017-11-28T23:50:22 | Jupyter Notebook | [
{
"alpha_fraction": 0.620662271976471,
"alphanum_fraction": 0.6235761642456055,
"avg_line_length": 21.34319496154785,
"blob_id": "7d8ceed8201b5ee147e67b6cd4b7cfd8da046031",
"content_id": "8c75fa246487337c78b8b2097a97c69420673c59",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3775,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 169,
"path": "/code/receive_sms_2.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "import os\nfrom flask import Flask, request, redirect, session\nfrom twilio.twiml.messaging_response import MessagingResponse\nfrom twilio.twiml.voice_response import VoiceResponse\nimport json\nimport time\n\napp = Flask(__name__)\napp.config.from_object(__name__)\n\n# SECRET_KEY = 'medibo'\n\napp.config['SECRET_KEY'] = 'medibo'\n\n\n\n# =======================[ READING DATA FILES ]=================================\n\nwith open('data/users.json') as users_data:\n\tusers = json.load(users_data)[0] # This is a dictionary of users indexed\n\t\t\t\t\t\t\t\t\t # by their phone numbers\n\nwith open('data/messages.json') as messages_data:\n\tmessages = json.load(messages_data)[0] # This is a dictionary of messages \n\nwith open('data/stories.json') as stories_data:\n\tstories = json.load(stories_data)[0] # This is a dictionary of stories\n\n\n\n# ======================[ DECLARING CODE FLAGS ]================================\n\n# These flags hold sesssion level contextual information\n\nflag = 0\n\n\n\ninput_keys = ['timestamp','user_id','session_id','tags','targets','intent', \\\n'text']\n\n\n\n\n\[email protected](\"/sms\", methods = ['GET','POST'])\ndef sms_reply():\n\n\tglobal flag\n\n\ttimestamp = time.time()\n\tcounter = session.get('counter', 0)\n\tinput_text = request.values.get('Body', None)\n\tfrom_number = request.values.get('From', None)\n\n\t# human = users[from_number][\"name\"] if users[from_number] != 0 else None\n\n\tif(from_number in users.keys()):\n\t\tif(\"name\" in users[from_number].keys()):\n\t\t\thuman = users[from_number][\"name\"]\n\t\telse:\n\t\t\thuman = None\n\telse:\n\t\thuman = None\n\n\tif(from_number not in users.keys()):\n\t\tusers[from_number] = {}\n\n\tprint(\"User input = \" + input_text)\n\n\n\toutput = []\n\tresp = MessagingResponse()\n\n\n\tif(\"counter\" in users[from_number].keys()):\n\t\tusers[from_number][\"counter\"] += 1\n\telse:\n\t\tusers[from_number][\"counter\"] = 1\n\t\n\tsession['counter'] = 
users[from_number][\"counter\"]\n\n\n\t\n\tprint(\"\\n\\nTalking to \" + str(human))\n\n\tinput_data = dict.fromkeys(input_keys)\n\n\tprint(\"Counter = \" + str(users[from_number][\"counter\"]))\n\n\n\t\n\tif(human == None): # A record for this user doesn't exist \n\t\tif(users[from_number][\"counter\"] == 1): # and if this is the first session\n\t\t\tprint('First session for number : ' + from_number)\n\t\t\tusers[from_number][\"phone\"] = from_number\n\t\t\toutput += [messages[str(_id)][\"text\"] for _id in stories[\"welcome\"][\"messages\"]]\n\t\t\toutput.append(onboard(from_number))\n\t\t\tflag = 'name'\n\n\t\t\tresp.message(str(output))\n\t\t\tprint(\"Output = \" + str(output))\n\n\t\t\toutput = []\n\t\t\t\n\t\t\treturn(str(resp))\n\telse: \n\t\tresp.message(\"Hi \" + human + \"! How are you feeling today?\")\n\n\tif(flag == 'name'):\n\t\tprint(\"User's name is : \" + input_text)\n\t\tusers[from_number][\"name\"] = input_text\n\t\tflag = 'age'\n\t\toutput = 'Nice to meet you ' + input_text + '. How old are you?'\n\t\tresp.message(' '.join(output))\n\t\toutput = []\n\t\treturn str(resp)\n\n\tif(flag == 'age'):\n\t\tprint(\"User's age is : \" + input_text)\n\t\tusers[from_number][\"age\"] = input_text\n\t\tflag = 'sex'\n\t\toutput = 'Got it. If I may ask, which gender do you identify with ' + users[from_number][\"name\"] + '?'\n\t\tresp.message(' '.join(output))\n\t\toutput = []\n\t\treturn str(resp)\n\n\tif(flag == 'sex'):\n\t\tprint(\"User's gender is : \" + input_text)\n\t\tusers[from_number][\"sex\"] = input_text\n\t\tflag = 1\n\n\n\tif(flag == 1):\n\t\tprint(\"Onboarding complete\")\n\t\tprint(\" User information is \\n\\n\" + str(users[from_number]))\n\n\treturn \"OK\"\n\n\n\n\n\n\n\n# @app.route(\"/sms\", methods = ['GET','POST'])\ndef onboard(from_number):\n\n\tprint('-- ONBOARDING --')\n\n\tglobal flag\n\n\t# resp = MessagingResponse()\n\t# resp.message(\"Hi! Let's get to know you first. They call me MediBo. 
\"\\\n\t# \t\"What is your name?\")\n\n\tresp = \"Let's get to know you first. They call me MediBo. \"\\\n\t\t\"What is your name?\"\n\n\t# flag = 'name'\n\n\treturn str(resp)\n\n\n\n\n\nif __name__ == \"__main__\":\n\t app.run(debug = True)"
},
{
"alpha_fraction": 0.5753753781318665,
"alphanum_fraction": 0.5801801681518555,
"avg_line_length": 19.97468376159668,
"blob_id": "c82e2dd2e7dcc71d134ab46e019cfa5eb18f3c1b",
"content_id": "f202f01e59b2d797c6f6d53e6c157a9a8c75f0d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1665,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 79,
"path": "/code/cleaning.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "'''\n================================================================================\n|| \t\t\t\t\t \t\t\t\t\t\t\t\t ||\n|| \t NLP CLEANING CODE \t\t\t\t\t\t\t ||\n|| \t\t\t\t\t \t\t\t\t\t\t\t\t ||\n================================================================================\n\nThis package contains functionalities to clean text message text such as the\nfollowing:\n1. Breaking the message into individual sentences.\n2. Bursting punctuations\n3. Spelling correction\n4. Expanding short forms\n\n\n@author: Janak A Jain\n\n'''\n\n\nimport re\nfrom autocorrect import spell\n\n\n\ndef break_sentence(x):\n\t'''\n\t\tThis function takes in a message string and returns a list of individual\n\t\tsentences contained in that message.\n\n\t\tArgs: str\n\t\tReturns: [str]\n\t'''\n\n\treturn x.split('.')\n\n\n\ndef burst_punc(x):\n\t'''\n\t\tThis function takes in a string and returns the string obtained after \n\t\tbursting the punctuations.\n\n\t\tArgs: str\n\t\tReturns: str\n\t'''\n\n\tpattern = re.compile('\\w+')\n\n\treturn r' '.join(pattern.findall(x))\n\n\ndef spell_check(x):\n\t'''\n\t\tThis function takes in a word, checks it for spelling mistake and\n\t\treturns the corrected spelling of the word.\n\n\t\tArgs: str\n\t\tReturns: str\n\t'''\n\n\treturn spell(x)\n\n\ndef clean(x):\n\t'''\n\t\tThis function takes the open text message string and returns the cleaned\n\t\tlist of individual sentences in the message after the following \n\t\tprocessing steps:\n\t\t1. Breaking the message into individual sentences.\n\t\t2. Bursting punctuations\n\t\t3. Spelling correction\n\t\t4. Expanding short forms\n\n\t\tArgs: str\n\t\tReturns: str\n\t'''\n\treturn [[spell_check(w) for w in burst_punc(s).split(' ')] \\\n\tfor s in x.split('.')]\n\t\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5978260636329651,
"alphanum_fraction": 0.5978260636329651,
"avg_line_length": 89,
"blob_id": "6e942327ae791d913dc433b05b3e48abfb0a6b6a",
"content_id": "6bf5d890140264c82233bb4266b2c97af83a972c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 92,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 1,
"path": "/code/users.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "\n\n\nkeys = [\"name\",\"nickname\",\"sex\",\"age\",\"phone\",\"address\",\"city\",\"zipcode\",\"medication\",\"\"]"
},
{
"alpha_fraction": 0.3869209885597229,
"alphanum_fraction": 0.388283371925354,
"avg_line_length": 26.148147583007812,
"blob_id": "a6c4159a984d0c3fd349d43068ff0ce96f6e641d",
"content_id": "012fee04ad253470ac785042571b61bf8cd17700",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 734,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 27,
"path": "/working/medibo/load.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "'''\n================================================================================\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n|| \t \t\tLOAD\t\t \t\t\t\t\t\t \t ||\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n================================================================================\nThis file contains code that loads a data file by a given name passed as an \nargument. Returns the loaded JSON file if the file exists or None otherwise.\n\n@author: Janak A Jain\n\n'''\n\nimport json\n\ndef load_json(file=None):\n\n\tif(file == None):\n\t\treturn None\n\telif(file=='users'):\n\t\twith open('data/'+file+'.json') as f:\n\t\t\tfile = json.load(f)\n\telse:\n\t\twith open('data/'+file+'.json') as f:\n\t\t\tfile = json.load(f)[0]\n\n\treturn file\n\n"
},
{
"alpha_fraction": 0.500990092754364,
"alphanum_fraction": 0.502970278263092,
"avg_line_length": 25.05172348022461,
"blob_id": "a78a74dff98ab3c6189a8b4901ee3a1e7a8a5aa5",
"content_id": "f93d80d949e6e23e8209b403024184859e5ed80f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1515,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 58,
"path": "/working/medibo/nlp.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "'''\n================================================================================\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n|| \t \t MEDIBO NLP\t\t \t\t\t\t\t\t \t ||\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n================================================================================\nThis file contains code that receives a piece of text, cleans and parses this \nsentence, processes this sentence and returns a response JSON.\n\n@author: Janak A Jain\n\n'''\n\ntags = ['VBZ','VBG','NNS','NNP','NN','RB','VB','JJ','PRP']\n\n\nrequest_words = ['show','tell','see','describe','display','list',\\\n'modify','change','add','delete','remove','enlist','medications',\\\n'medication','age','name']\n\ndef parse_to_json(doc):\n\t'''\n\t\tThis function takes a SpaCy object for a cleaned piece of text \n\t\t(cleaned text = output from 'clean' function of cleaning.py module of \n\t\tthe package) and a SpaCy doc object and returns the parsed JSON object.\n\n\t\tArgs: str, obj (SpaCy doc)\n\t\tReturns: JSON\n\t'''\n\n\tflag = 0\n\n\tresp_json = {'intent':[],\n 'targets':[],\n 'entities':[]\n }\n\n\tfor token in doc:\n\t\t\n\t\tif(token.tag_ in tags):\n\t\t\t\n\t\t\tif(token.tag_ in ['VBZ','VB','VBG']):\n\t\t\t\tresp_json['intent'].append(token.text)\n\t\t\t\n\t\t\tif(token.tag_ in ['NNP','NN']):\n\t\t\t\tresp_json['entities'].append(token.text)\n\t\t\n\t\t\tresp_json['targets'].append(token.text)\n\n\t\tif(token.text in request_words):\n\t\t\tprint(\"request word detected in NLP\")\n\t\t\tflag = 1\n\n\tif(flag ==1):\n\t\tresp_json['intent'] = 'request'\n\n\n\treturn resp_json\n\n\n\n\n"
},
{
"alpha_fraction": 0.547993004322052,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 24.954545974731445,
"blob_id": "de33d1f5e13584423736d769a8ea639349e4bc3f",
"content_id": "66340f1fa179b916f6c3a853fdef8bb05f531ecd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 573,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 22,
"path": "/code/send_sms.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "import os\nfrom twilio.rest import Client\n\naccount_sid = open('credentials','r').readlines()[0].strip()\nauth_token = open('credentials','r').readlines()[1].strip()\n\nclient = Client(account_sid, auth_token)\n\nusers = {\n\"Andrew\":\"+17868777177\",\n\"Jonathan\":\"+16462150333\",\n\"Minghong\": \"+19518010312\",\n\"Shengyang\": \"+18056377924\",\n\"Janak\": \"+19292088929\"\n}\n\n# for num in team_numbers.values():\n# \tclient.messages.create(\n# \t\tto = num,\n# \t\tfrom_ = \"+18162392619 \",\n# \t\tbody = \"Hello! I am MediBo. Just wanted to say Hi and wanted to check if you are feeling fine today. \" \n# \t)\n\n\n"
},
{
"alpha_fraction": 0.7364016771316528,
"alphanum_fraction": 0.7573221921920776,
"avg_line_length": 33.28571319580078,
"blob_id": "e5840476961afb56652e20389e3c1fb4175fac01",
"content_id": "0aab2ae7685d53387d8b018a521ffe8dcb490fcc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 239,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 7,
"path": "/documents/nlp-tasks.md",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "# NLP Tasks \n \n1. Breaking the message into individual sentences\n2. Bursting punctuations - for e.g. \"Hello, I am MediBo.\" should be parsed as \"Hello I am MedibBo\"\n3. Splitting the sentence\n4. Spelling correction\n5. Expanding short forms"
},
{
"alpha_fraction": 0.5404829382896423,
"alphanum_fraction": 0.5419034361839294,
"avg_line_length": 23.486955642700195,
"blob_id": "4e7406569802849a81931d5c636da88872595bff",
"content_id": "84a74ee079084fe8d2a341ca77c2ba1e78a8243d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2816,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 115,
"path": "/code/response.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "'''\n================================================================================\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n|| \t \t\tRESPONSE\t\t \t\t\t\t\t \t ||\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n================================================================================\nThis file contains code that creates an approrpiate response from the given\ninput\n\n@author: Janak A Jain\n\n'''\n\nimport json\nimport pandas as pd\nimport time\n\n\nwith open('data/messages.json') as messages_data:\n\tmessages = json.load(messages_data)[0] # This is a dictionary of messages \n\nwith open('data/stories.json') as stories_data:\n\tstories = json.load(stories_data)[0] # This is a dictionary of stories\n\n\ninput_keys = [\"timestamp\",\"user_id\",\"session_id\",\"tags\",\\\n\"return_targets\",\"return_intent\",\"text\",\"embed\"]\n\nuser_keys = [\"name\",\"age\",\"sex\",\"phone\"]\n\nintent = []\ntargets = []\nentities = []\nresp_text = []\n\n\ndef respond(intent = None, targets = None, entities = None, user = None, \\\n\ttype = None, title = None):\n\n\tprint('Incoming input to response: ')\n\tprint(\"Intent = \" +str(intent))\n\tprint(\"Targets = \" +str(targets))\n\tprint(\"Entities = \" +str(entities))\n\n\tresp = dict.fromkeys(input_keys)\n\n\tif(type == \"message\"):\n\t\tintent_m = []\n\t\ttargets_m = []\n\n\n\t\tfor i in messages.keys():\n\t\t\tm = messages[i]\n\t\t\tif any(i in m[\"intent\"] for i in intent):\n\t\t\t\tintent_m.append(m)\n\t\t\t\tintent = set(intent + m[\"intent\"])\n\n\t\tfor m in intent_m:\n\t\t\tif any(t in m[\"targets\"] for t in targets):\n\t\t\t\ttargets_m.append(m)\n\t\t\t\ttargets = set(targets.append(m[\"targets\"]))\n\n\t\tprint(\"Intents messages = \" + str(intent_m))\n\t\tprint()\n\t\tprint(\"Targets messages = \" + str(targets_m))\n\n\telif(type == \"story\"):\n\t\tfor s_name in stories.keys():\n\t\t\t\n\t\t\t# print(\" Testing story: \" + s_name)\n\n\t\t\tstory = stories[s_name]\n\n\t\t\t# print(\" Comparison = \" + 
str(sorted(story[\"targets\"]) == sorted(targets)))\n\n\t\t\tif(len(targets) >0):\n\n\t\t\t\tresp_text = []\n\n\t\t\t\tif(sorted(story[\"targets\"]) == sorted(targets)):\n\n\t\t\t\t\tprint(\" Serving story: \" + s_name)\n\n\t\t\t\t\tfor m_id in story[\"messages\"]:\n\n\t\t\t\t\t\tresp_text.append(messages[str(m_id)][\"text\"])\n\t\t\t\t\t\n\t\t\t\t\tintent = story[\"return_intent\"]\n\t\t\t\t\ttargets = story[\"return_targets\"]\n\t\t\t\t\tembed = messages[str(m_id)][\"embed\"]\n\n\t\t\t\t\tintent = list(set(intent))\n\t\t\t\t\ttargets = list(set(targets))\n\n\n\t\t\t\t\tresp[\"user_id\"] = user[\"phone\"]\n\t\t\t\t\tresp[\"session_id\"] = 1 # TODO - change this later\n\t\t\t\t\tresp[\"return_tags\"] = list(set(intent + [t for t in targets]))\n\t\t\t\t\tresp[\"return_intent\"] = intent\n\t\t\t\t\tresp[\"return_targets\"] = targets\n\t\t\t\t\tresp[\"text\"] = resp_text\n\t\t\t\t\tresp[\"timestamp\"] = time.time()\n\t\t\t\t\tresp[\"embed\"] = embed\n\n\t\t\t\t\t# Reset intents, targets and tags\n\n\t\t\t\t\tintent = []\n\t\t\t\t\ttargets = []\n\t\t\t\t\tentities = []\n\t\t\t\t\tresp_text = [] \n\n\telse:\n\t\tresp[\"text\"] = \"No match\"\n\t\t\n\treturn resp\n"
},
{
"alpha_fraction": 0.49340102076530457,
"alphanum_fraction": 0.4974619150161743,
"avg_line_length": 20.34782600402832,
"blob_id": "482cd6439dbee30747e9ae77f9aec1d12aed82da",
"content_id": "9b5e4f89524b110c472e2c36024b4f43c9fede0a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 985,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 46,
"path": "/code/nlp-cleaning.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "'''\n\t============================================================================\n\t|| \t\t\t\t\t \t\t\t\t\t\t\t ||\n\t|| NLP CLEANING CODE \t\t\t\t\t\t\t ||\n\t|| \t\t\t\t\t \t\t\t\t\t\t\t ||\t\t\t\t\t\n\t============================================================================\n\nThis package contains functionalities to clean text message text such as the\nfollowing:\n1. Breaking the message into individual sentences.\n2. Bursting punctuations\n3. Spelling correction\n4. Expanding short forms\n\n'''\n\n\nimport re\n\n\n\ndef break_sentence(x):\n\t'''\n\t\tThis function takes in a message string and returns a list of individual\n\t\tsentences contained in that message.\n\n\t\tArgs: str\n\t\tReturns: [str]\n\t'''\n\n\treturn x.split('.')\n\n\n\ndef burst_punc(x):\n\t'''\n\t\tThis function takes in a string and returns the string obtained after \n\t\tbursting the punctuations.\n\n\t\tArgs: str\n\t\tReturns: str\n\t'''\n\n\tpattern = re.compile('\\w+')\n\n\treturn r' '.join(pattern.findall(x))\n\n\n\n"
},
{
"alpha_fraction": 0.5804587602615356,
"alphanum_fraction": 0.5833908319473267,
"avg_line_length": 23.363445281982422,
"blob_id": "87eb56577023e6c65e71da00b5250ab364b8dda1",
"content_id": "c356e0e367734bd5a59eb80dcc0eb334489890c9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11596,
"license_type": "no_license",
"max_line_length": 149,
"num_lines": 476,
"path": "/working/medibo/new_receive_response.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "'''\n================================================================================\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n|| \t MEDIBO RECEIVE SMS \t\t\t\t\t\t \t ||\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n================================================================================\nThis file contains code that receives the SMSs sent to Medibo from mobile\nphones. The various steps include:\n1. Extracting contextual information about the user and based on the user \n status, either begin onboarding or begin a new conversation.\n2. Process the input text to identify the correct labels such as intent, \n targets, tags etc.\n3. Send this processed response to the classification engine to classify this\n text and send the correct response for this input.\n4. Receive the suggested response and send this response back to the user.\n5. Log this conversation accordingly.\n\n@author: Janak A Jain\n\n'''\n\n\nfrom flask import Flask, request, redirect, session\nfrom twilio.twiml.messaging_response import Message, MessagingResponse\nfrom twilio.twiml.voice_response import VoiceResponse\nfrom load import *\nfrom response import *\nfrom pprint import pprint\nimport pandas as pd\nfrom pymetamap import MetaMap\nimport pickle\nimport json\nimport time\nimport os\nfrom recommend import *\nimport spacy\nfrom medibo.cleaning import *\nfrom medibo.nlp import *\n\napp = Flask(__name__)\napp.config.from_object(__name__)\n\n\napp.config['SECRET_KEY'] = 'ThisIsNotATest'\n\ndata = pd.read_pickle(open('updated_data.pickle', 'rb')) \n# mm = MetaMap.get_instance('/Users/janakajain/Janak/MS-in-Data-Science/Fall-2017/Capstone/public_mm/bin/metamap16')\nnlp = spacy.load('en')\nsymptoms = pd.read_pickle('data/symptoms_data_20_drugs.pickle')\n\n\n\n# =======================[ READING DATA FILES ]=================================\n\nusers = load_json('users')\nmessages = load_json('messages')\nstories = load_json('stories')\n\n\n\n\n# ===========================[ DECLARATIONS 
]===================================\n\n# These flags hold sesssion level contextual information\n\ntimestamp = 0\nuser_id = 0\nsession_id = 0\ntags = []\ntargets = []\nintent = []\ntext = ''\n\n\n# The input keys are used to create an empty dictionary of processed input \n# structure.\ninput_keys = ['timestamp','user_id','session_id','tags','targets','intent', \\\n'text']\n\nuser_keys = ['name','age','sex','phone','medication']\n\nleadgen_keys = [\"name\",\"age\",\"sex\",\"medication\",\"phone\"]\n\n# Set the first set of targets and intent labels\n\ntargets = [\"welcome\",\"onboarding\",\"register\"]\nintent = [\"hello\",\"description\",\"onboarding\",\"leadgen\"]\n\nu_rec = dict.fromkeys(user_keys) # Create a record for the user\n\nrequest_add_words = ['add','append','added','appended']\nrequest_view_words = ['view','display','show','see','describe','list','tell']\nrequest_modify_words = ['change','modify','alter','changed','modified',\\\n'altered']\n\n\n\n\n\n\n\n\n\n# ==========================[ MAIN FUNCTION ]===================================\n\n\[email protected](\"/sms\", methods = ['GET','POST'])\ndef sms_reply():\n\n\tresp = MessagingResponse()\n\n\t\n\t#--------[Get the current contextual information from global labels]--------\n\n\tglobal timestamp, user_id, session_id, tags, targets, intent, text, u_rec\n\n\ttimestamp = time.time()\n\tcounter = session.get('counter', 0)\n\tinput_text = request.values.get('Body', None)\n\tfrom_number = request.values.get('From', None)\n\n\tprint(\"User input = \" + input_text)\n\n\n\n\t#----------[Identify if the user is a new user or an existing user]---------\n\n\tif(from_number in users.keys()):\n\t\tif(\"name\" in users[from_number].keys()):\n\t\t\thuman = users[from_number][\"name\"]\n\t\telse:\n\t\t\thuman = None\n\telse:\n\t\thuman = None\n\n\n\t#-------------[Take the required action based on the user type]-------------\n\n\tif(from_number not in users.keys()): # If the user is a new 
user\n\n\t\tprint(u_rec)\n\n\t\t\n\t\tif(u_rec[\"phone\"] == None):\n\t\t\tu_rec[\"phone\"] = from_number\n\t\t\tprint(\"Added phone number to u_rec\")\n\n\n\t\tif(\"demog\" in targets): # If the targets contain \"leadgen\"\n\n\t\t\tif(\"medication\" in targets):\n\t\t\t\tusers[from_number] = u_rec\n\t\t\t\tusers[from_number][\"medication\"] = [input_text]\n\t\t\t\tprint('\\n--- Leadgen complete -----\\n')\n\t\t\t\tprint(u_rec)\n\t\t\t\twith open('data/users.json', 'w') as outfile:\n\t\t\t\t\tprint(\"Saving file\")\n\t\t\t\t\tjson.dump(users, outfile)\n\t\t\t\t\tprint(\"Saved file\")\n\t\t\t\t\t# u_rec = dict.fromkeys(user_keys)\n\n\t\t\telse:\n\t\t\t\tfor target in targets: # Then iterate over targets and \n\t\t\t\t\t\t\n\t\t\t\t\tif(target in leadgen_keys): # if a target is in leadgen keys\n\t\t\t\t\t\tprint(' Found target leadgen : ' + target)\n\t\t\t\t\t\tu_rec[target] = input_text # store the leadgen value in \n\t\t\t\t\t\t\t\t\t\t\t\t # in the user record\n\t\t\t\t\t\tprint(u_rec)\n\n\n\t\t\n\t\t# Get the response for the given set of labels\n\n\t\tr = respond(user = u_rec, type = \"story\", targets = \\\n\t\t\ttargets, intent = intent)\n\n\n\telse: # if the user record exists\n\n\t\t# SOS\n\t\t# Request\n\t\t# Symptom\n\n\t\tu_rec = users[from_number]\n\n\t\t\n\t\ttext = \" \".join(clean(input_text)[0])\n\t\tprint(\"Cleaned text = \" + text)\n\t\ttext_json = parse_to_json(nlp(text))\n\n\t\tprint(\"Text JSON = \")\n\t\tprint(str(text_json))\n\n\t\ttargets = text_json['targets']\n\t\tintent = text_json['intent']\n\t\tentities = text_json['entities']\n\n\t\t\n\t\t# --------------[ Case: Service request ]-------------------------------\n\t\t\n\t\tif(intent == ['request']): \n\n\t\t\t# Identify which information is this request about\n\n\t\t\trequest_id_flag = 0 # To check if identification is successful\n\n\t\t\tfor info in leadgen_keys:\n\t\t\t\t\n\t\t\t\tif(info in targets):\n\t\t\t\t\t\n\t\t\t\t\trequest_id_flag = 1\n\n\n\n\n\t\t\t\t\t#-----------------[ 
Case: Add request ]---------------------\n\t\t\t\t\t\n\t\t\t\t\tif(any(t in request_add_words for t in targets)):\n\n\t\t\t\t\t\tprint(\"add loop entered\")\n\n\t\t\t\t\t\t# If any entity information is present\n\t\t\t\t\t\tif(len(entities) > 0):\n\n\t\t\t\t\t\t\tprint(\"inner add loop entered\")\n\n\t\t\t\t\t\t\tusers[from_number][info] += [e.lower() for e \\\n\t\t\t\t\t\t\tin entities]\n\n\t\t\t\t\t\t\tusers[from_number][info] = list\\\n\t\t\t\t\t\t\t(set(users[from_number][info]))\n\n\t\t\t\t\t\t\twith open('data/users.json', 'w') as outfile:\n\t\t\t\t\t\t\t\tprint(\"Saving file\")\n\t\t\t\t\t\t\t\tjson.dump(users, outfile)\n\t\t\t\t\t\t\t\tprint(\"Saved file\")\n\n\t\t\t\t\t\t\tr = respond(intent = \"confirmation\",\n\t\t\t\t\t\t\t\ttargets = [\"add\",\"request\",\"successful\"],\n\t\t\t\t\t\t\t\tuser = users[from_number],\n\t\t\t\t\t\t\t\ttype = \"message\",\n\t\t\t\t\t\t\t\tembed_key = info)\n\n\t\t\t\t\t\t\tprint(users[from_number])\n\t\t\t\t\t\t\tprint(r)\n\n\n\n\t\t\t\t\t\telse:\n\t\t\t\t\t\t\t# Send confusion message\n\t\t\t\t\t\t\tr = {}\n\t\t\t\t\t\t\tr[\"text\"] = \"Confusion message\"\n\t\t\t\t\t\t\tprint(\"confusion message - add\")\n\n\t\t\t\t\t\t\tr = respond(intent = \"confusion\",\n\t\t\t\t\t\t\t\ttargets = [\"request\",\"confusion\"],\n\t\t\t\t\t\t\t\tuser = users[from_number],\n\t\t\t\t\t\t\t\ttype = \"message\")\n\n\n\n\n\t\t\t\t\t#--------------[ Case: Modify request ]---------------------\n\n\t\t\t\t\telif(any(t in request_modify_words for t in targets)):\n\n\t\t\t\t\t\t# If any entity information is present\n\t\t\t\t\t\tif(len(entities) > 0):\n\t\t\t\t\t\t\tusers[from_number][info] = entities\n\t\t\t\t\t\telse:\n\t\t\t\t\t\t\t# Send confusion message\n\t\t\t\t\t\t\tr[\"text\"] = \"Confusion message\"\n\t\t\t\t\t\t\tprint(\"confusion message - modify\")\n\n\t\t\t\t\t\t\tr = respond(intent = \"confusion\",\n\t\t\t\t\t\t\t\ttargets = [\"request\",\"confusion\"],\n\t\t\t\t\t\t\t\tuser = users[from_number],\n\t\t\t\t\t\t\t\ttype = 
\"message\")\n\n\n\n\n\n\n\t\t\t\t\t#-----------[ Case: View request ]--------------------------\n\n\t\t\t\t\telif(any(t.lower() in request_view_words for t in targets)):\n\n\t\t\t\t\t\t# If any entity information is present\n\t\t\t\t\t\tif(any(t.lower() in leadgen_keys for t in targets)):\n\n\t\t\t\t\t\t\tentities = [t for t in targets if t in leadgen_keys]\n\n\t\t\t\t\t\t\t# Show the relevant information\n\n\t\t\t\t\t\t\tprint(\"show relevant information\")\n\n\t\t\t\t\t\t\tr = respond(type=\"message\",\n\t\t\t\t\t\t\t\tintent = intent,\n\t\t\t\t\t\t\t\ttargets = targets,\n\t\t\t\t\t\t\t\tuser = users[from_number],\n\t\t\t\t\t\t\t\tentities = entities\n\t\t\t\t\t\t\t\t)\n\n\t\t\t\t\t\t\tprint(\"Entities = \")\n\t\t\t\t\t\t\tprint(entities)\n\t\t\t\t\t\t\tprint(\"Record to be shown: \")\n\t\t\t\t\t\t\tprint(users[from_number][entities[0]])\n\n\t\t\t\t\t\t\tif(type(users[from_number][entities[0]]) == list):\n\t\t\t\t\t\t\t\tshow_rec = \"\\n\".join(users[from_number][entities[0]])\n\t\t\t\t\t\t\telse:\n\t\t\t\t\t\t\t\tshow_rec = users[from_number][entities[0]]\n\n\t\t\t\t\t\t\tr[\"text\"].append(show_rec)\n\n\n\t\t\t\t\t\telse:\n\t\t\t\t\t\t\t# Send confusion message\n\t\t\t\t\t\t\t# r[\"text\"] = \"Confusion message\"\n\t\t\t\t\t\t\tprint(\"confusion message - view\")\n\n\t\t\t\t\t\t\tr = respond(intent = \"confusion\",\n\t\t\t\t\t\t\t\ttargets = [\"request\",\"confusion\"],\n\t\t\t\t\t\t\t\tuser = users[from_number],\n\t\t\t\t\t\t\t\ttype = \"message\")\n\n\t\t\tif(request_id_flag == 0): # Request info target not identified\n\t\t\t\t\n\t\t\t\tprint(\"Unconventional request identified\")\n\n\t\t\t\t# r[\"text\"] = \"Unsuccessul identification of request target\"\n\n\t\t\t\tr = respond(user = users[from_number],\n\t\t\t\t\tintent = intent,\n\t\t\t\t\ttargets = targets,\n\t\t\t\t\tentities=entities,\n\t\t\t\t\ttype=\"message\")\n\n\t\t\n\n\t\t# ----------------[ Case: Symptom addressal ]---------------------------\n\n\t\telif(intent == 
['symptom']):\n\t\t\tprint(\"Symptom addressal\")\n\t\t\t# r = {}\n\t\t\t# r[\"text\"] = \"Symptom addressal\"\n\n\t\t\tprint(symptoms.columns)\n\n\t\t\tsymp_df = symptoms[symptoms['drug'].isin(users[from_number]\\\n\t\t\t\t[\"medication\"])]\n\n\t\t\tprint(\"User's current medication on record = \" + str(users[from_number][\"medication\"]))\n\n\n\t\t\tif(symp_df.shape[0] > 0):\n\n\t\t\t\tprint(\"Symp_df len > 0 \")\n\n\t\t\t\tdf = symp_df[symp_df['targets'].\\\n\t\t\t\tapply(lambda x: any(t in targets for t in x))].\\\n\t\t\t\tsort_values('flag',ascending = False)\n\n\t\t\t\tnum_matches = df.shape[0]\n\n\t\t\t\tprint(\"Number of matches = \" + str(num_matches))\n\n\t\t\t\tdfx = df.head(1).iloc[0]\n\n\t\t\t\tprint(dfx)\n\n\t\t\t\tadvise = {\n\t\t\t\t\"noneed\" : \" it is not something to worry about. Give it some time and let me know if you have any new symptoms. I'm right here if you need me.\",\n\t\t\t\t\"doctor\" : \" I think you should visit your doctor as soon as possible.\",\n\t\t\t\t\"emergency\": \" you need to go to the ER immediately.\"\n\t\t\t\t}\n\n\t\t\t\tresp_text = [\"Based on your current medication involving use of \" + \\\n\t\t\t\tdfx['drug'] + \", \" + dfx[\"symptom\"] + \" is/are \" + \\\n\t\t\t\tdfx[\"severity_desc\"] + \" \" + dfx[\"connector\"] + \\\n\t\t\t\tadvise[dfx[\"action\"]]]\n\n\t\t\t\tprint(\"Response text for symptom = \")\n\t\t\t\tprint(resp_text)\n\n\t\t\t\tr = respond(intent = intent, targets = targets, \\\n\t\t\t\t\tuser = users[from_number], text = resp_text,\n\t\t\t\t\ttype = \"symptom\")\n\t\t\telse:\n\t\t\t\tresp_text = [\"I am sorry. I don't have a record of symptoms for your current medication\"]\n\t\t\t\tr = respond(intent = intent, targets = targets, \\\n\t\t\t\t\tuser = users[from_number], text = resp_text,\n\t\t\t\t\ttype = intent)\n\t\t\t\n\n\n\n\n\t\telse:\n\n\t\t\tprint(\" Else loop \")\n\t\t\t# resp_text = [\"I am sorry. 
I don't have a record of symptoms for your current medication\"]\n\t\t\tr = respond(intent = intent, targets = targets, \\\n\t\t\t\t\tuser = users[from_number],\n\t\t\t\t\ttype = \"message\")\n\n\n\t\n\tprint(\"R = \")\n\tpprint(r)\n\n\n\t\t# lookup_names, triggers = get_umls_concepts(input_text, mm)\n\t\t# print('RESULTS')\n\t\t# print(recommend(data, ['betaxolol'],lookup_names, triggers ))\n\n\n\n\t# ==========[ TAKE ACTION BASED ON THE GENERATED RESPONSE JSON ]============\n\n\tif(r == None):\n\t\tprint(\"None response\")\n\n\n\t# Update the labels\n\n\tintent = r[\"return_intent\"]\n\ttargets = r[\"return_targets\"]\n\ttags = r[\"tags\"]\n\n\tprint(\"---- Updated labels -----\")\n\tprint(\"Intent = \" + str(intent))\n\tprint(\"Targets = \" + str(targets))\n\n\t# print(\"u_rec now is : \")\n\t# print(str(u_rec))\n\n\t\t\n\tif(r[\"text\"] != None):\n\t\tfor sms in r[\"text\"]:\n\t\t\tif(len(r[\"embed\"]) > 0):\n\t\t\t\tresp.message(str(sms%tuple([str(u_rec[i]) for i in r[\"embed\"]])))\n\t\t\telse:\n\t\t\t\tresp.message(sms)\n\telse:\n\t\tresp.message(\"There seems to be some \\\n\t\t\terror. \\n\\n Sorry for the inconvenience\")\n\n\n\t\ttargets, intent, entities, tags = [],[],[],[]\n\n\t# print(\"Last step variables\")\n\t# print(\"targets = \")\n\t# print(targets)\n\t# print(\"intent = \")\n\t# print(intent)\n\n\n\treturn str(resp)\n\n\n\n\n\t\n\n\n\n\n\n\n\n\nif __name__ == \"__main__\":\n\t app.run(debug = True)"
},
{
"alpha_fraction": 0.5980448722839355,
"alphanum_fraction": 0.6005367040634155,
"avg_line_length": 22.82648468017578,
"blob_id": "44f88a8543fb33f6f76aec47b9619ae7b6e41f39",
"content_id": "8aca6d4ac93e321371181240f2e069217b361a6b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5217,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 219,
"path": "/code/new_receive_response.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "'''\n================================================================================\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n|| \t MEDIBO RECEIVE SMS \t\t\t\t\t\t \t ||\n|| \t \t\t\t\t\t \t\t\t\t\t\t\t ||\n================================================================================\nThis file contains code that receives the SMSs sent to Medibo from mobile\nphones. The various steps include:\n1. Extracting contextual information about the user and based on the user \n status, either begin onboarding or begin a new conversation.\n2. Process the input text to identify the correct labels such as intent, \n targets, tags etc.\n3. Send this processed response to the classification engine to classify this\n text and send the correct response for this input.\n4. Receive the suggested response and send this response back to the user.\n5. Log this conversation accordingly.\n\n@author: Janak A Jain\n\n'''\n\n\nfrom flask import Flask, request, redirect, session\nfrom twilio.twiml.messaging_response import Message, MessagingResponse\nfrom twilio.twiml.voice_response import VoiceResponse\nfrom load import *\nfrom response import *\nfrom pprint import pprint\nimport json\nimport time\nimport os\n\napp = Flask(__name__)\napp.config.from_object(__name__)\n\n\napp.config['SECRET_KEY'] = 'ThisIsNotATest'\n\n\n\n# =======================[ READING DATA FILES ]=================================\n\nusers = load_json('users')\nmessages = load_json('messages')\nstories = load_json('stories')\n\n\n\n\n# ======================[ DECLARING CODE LABELS ]===============================\n\n# These flags hold sesssion level contextual information\n\ntimestamp = 0\nuser_id = 0\nsession_id = 0\ntags = []\ntargets = []\nintent = []\ntext = ''\n\n\n# The input keys are used to create an empty dictionary of processed input \n# structure.\ninput_keys = ['timestamp','user_id','session_id','tags','targets','intent', \\\n'text']\n\nuser_keys = ['name','age','sex','phone']\n\nleadgen_keys = 
[\"name\",\"age\",\"sex\",\"allergies\"]\n\n# Set the first set of targets and intent labels\n\ntargets = [\"welcome\",\"onboarding\",\"register\"]\nintent = [\"hello\",\"description\",\"onboarding\",\"leadgen\"]\n\nu_rec = dict.fromkeys(user_keys) # Create a record for the user\n\n\[email protected](\"/sms\", methods = ['GET','POST'])\ndef sms_reply():\n\n\tresp = MessagingResponse()\n\n\t\n\t#--------[Get the current contextual information from global labels]--------\n\n\tglobal timestamp, user_id, session_id, tags, targets, intent, text\n\n\ttimestamp = time.time()\n\tcounter = session.get('counter', 0)\n\tinput_text = request.values.get('Body', None)\n\tfrom_number = request.values.get('From', None)\n\n\tprint(\"User input = \" + input_text)\n\n\n\n\t#----------[Identify if the user is a new user or an existing user]---------\n\n\tif(from_number in users.keys()):\n\t\tif(\"name\" in users[from_number].keys()):\n\t\t\thuman = users[from_number][\"name\"]\n\t\telse:\n\t\t\thuman = None\n\telse:\n\t\thuman = None\n\n\n\t#-------------[Take the required action based on the user type]-------------\n\n\tif(from_number not in users.keys()): # If the user is a new user\n\n\t\t\n\t\tif(\"phone\" not in u_rec):\n\t\t\tu_rec[\"phone\"] = from_number\n\n\n\t\tif(\"demog\" in targets): # If the targets contain \"leadgen\"\n\n\t\t\tif(\"complete\" in targets):\n\t\t\t\tusers[from_number] = u_rec\n\t\t\t\tprint('\\n--- Leadgen complete -----\\n')\n\t\t\t\tprint(users)\n\n\t\t\telse:\n\t\t\t\tfor target in targets: # Then iterate over targets and \n\t\t\t\t\t\t\n\t\t\t\t\tif(target in leadgen_keys): # if a target is in leadgen keys\n\t\t\t\t\t\tprint(' Found target leadgen : ' + target)\n\t\t\t\t\t\tu_rec[target] = input_text # store the leadgen value in \n\t\t\t\t\t\t\t\t\t\t\t\t # in the user record\n\t\t\t\t\t\tprint(u_rec)\n\n\n\t\t\n\t\t# Get the response for the given set of labels\n\n\t\tr = respond(user = u_rec, type = \"story\", targets = targets, intent = 
intent)\n\n\t\tpprint(r)\n\n\n\t\tif(r == None):\n\t\t\tprint(\"None response\")\n\n\n\t\t# Update the labels\n\n\t\tintent = r[\"return_intent\"]\n\t\ttargets = r[\"return_targets\"]\n\t\ttags = r[\"tags\"]\n\n\t\tprint(\"---- Updated labels -----\")\n\t\tprint(\"Intent = \" + str(intent))\n\t\tprint(\"Targets = \" + str(targets))\n\n\t\t\n\t\tif(r[\"text\"] != None):\n\t\t\tfor sms in r[\"text\"]:\n\t\t\t\tif(len(r[\"embed\"]) != 0):\n\t\t\t\t\tresp.message(str(sms%tuple([u_rec[i] for i in r[\"embed\"]])))\n\t\t\t\telse:\n\t\t\t\t\tresp.message(sms)\n\t\telse:\n\t\t\tresp.message(\"There seems to be some error. \\n\\n Sorry for the inconvenience\")\n\n\n\t\treturn str(resp)\n\n\n\n\n\n\toutput = []\n\t\n\n\n\tif(\"counter\" in users[from_number].keys()):\n\t\tusers[from_number][\"counter\"] += 1\n\telse:\n\t\tusers[from_number][\"counter\"] = 1\n\t\n\tsession['counter'] = users[from_number][\"counter\"]\n\n\n\t\n\tprint(\"\\n\\nTalking to \" + str(human))\n\n\tinput_data = dict.fromkeys(input_keys)\n\n\tprint(\"Counter = \" + str(users[from_number][\"counter\"]))\n\n\n\t\n\tif(human == None): # A record for this user doesn't exist \n\t\tif(users[from_number][\"counter\"] == 1): # and if this is the first session\n\t\t\tprint('First session for number : ' + from_number)\n\t\t\tusers[from_number][\"phone\"] = from_number\n\t\t\toutput += [messages[str(_id)][\"text\"] for _id in stories[\"welcome\"][\"messages\"]]\n\t\t\toutput.append(onboard(from_number))\n\t\t\tflag = 'name'\n\n\t\t\tresp.message(str(output))\n\t\t\tprint(\"Output = \" + str(output))\n\n\t\t\toutput = []\n\t\t\t\n\telse:\n\t\toutput = [\"Known user\"]\n\n\n\n\n\n\n\nif __name__ == \"__main__\":\n\t app.run(debug = True)"
},
{
"alpha_fraction": 0.6000000238418579,
"alphanum_fraction": 0.6093023419380188,
"avg_line_length": 14.214285850524902,
"blob_id": "6d60e105bb81d2b50278bc1a1d980c359df2fdb0",
"content_id": "3c26fb89b3fd94ff7697811576de9caa14b63824",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 215,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 14,
"path": "/code/checks.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "\n\n\ndef is_zip(x):\n\n\tflag = \"\"\n\n\ttry:\n\t\tint(x)\n\t\tif(len(x) == 5):\n\t\t\treturn True\n\t\telse:\n\t\t\tflag = \"The zip code should be 5 characters long\"\n\texcept ValueError:\n\t\tflag = \"Please enter a number!\"\n\n\treturn False, flag"
},
{
"alpha_fraction": 0.7828442454338074,
"alphanum_fraction": 0.7828442454338074,
"avg_line_length": 72.76667022705078,
"blob_id": "a97f7835b27bd4e709fecbb6c4746e96e8adf074",
"content_id": "1579da5d4a32d5aa0e95f2e92e9bd1bad7ed6a07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2237,
"license_type": "no_license",
"max_line_length": 388,
"num_lines": 30,
"path": "/documents/glossary-of-terms.md",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "# Glossary of terms\n\nIn order to understand the working and structure of the chatbot, it is important to understand some basic terms that define the experience and several components contained within this experience. These have collectively been termed as “elements” of the chatbot. We have provided a details walkthrough through these elements below:\n\n### User \nA user is any human who consumes the experience of the product MediBo. Specifically, a user can be a medical practitioner, a patient or even a pharmacist.\n\n### Chatbot \nThe artificially intelligent computer program that communicates with the users through SMS channel.\n\n### Thread \nA conversation thread (or simply a ‘thread’) is composed of one or more interactions between the user and the chatbot in a contiguous unit of time. A thread is composed of several elements such as messages, stories etc.\n\n### Message \nA message is delivered as a “bubble” on the screen of the user and essentially contains some text. It also contains metadata such as intent, targets, return_intent, return_targets, tags etc. \n\n### Story \nA story is a message or collection of messages that delivers a specific piece of dialogue. Example: A story titled “welcome” serving three messages each targeting one of the following steps: \n * Greeting - greeting the user and welcoming\n * Introduction - introducing the chatbot service\n * Onboarding - signaling the beginning of onboarding. \n\n### Intent \nIntent can be defined as the purpose of a message or a story. A special form of intent is return_intent which is a pre-emptive attempt to set the context labels to inform the chatbot in advance about the anticipated intent of the response. \n\n### Target \nA target can be defined as the specific elements of a message or context that the message intends to serve. Example: A message asking for the user’s age would have “age” in the targets list of the message. 
A special form of targets is return_targets which is a pre-emptive attempt to set the context labels to inform the chatbot in advance about the anticipated targets of the response. \n\n### Tags\nTags are essentially a union set of intents and targets and can be used for look-up of messages and stories. \n"
},
{
"alpha_fraction": 0.7329757213592529,
"alphanum_fraction": 0.7425373196601868,
"avg_line_length": 53.910255432128906,
"blob_id": "5a520e1672c3b9aacd6e4d85d869862f00f5bf26",
"content_id": "09564d9da79cb7055bbbdb7d6818204fe1f1fadb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4290,
"license_type": "no_license",
"max_line_length": 607,
"num_lines": 78,
"path": "/README.md",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "# MedBot - The Medical Chatbot \n \n## Introduction \n \nMedBot is a capstone project being undertaken by a team of Columbia University students at the [Data Science Institute](https://datascience.columbia.edu). The team consists of Andrew Satz, Jonathan Galsurkar, Minghong Zheng, Shengyang Zhang and Janak A Jain. \n \n--- \n\n## About the Project \n \nThe project is sponsored by [Synergic Partners](http://www.synergicpartners.com/en/), A Telefonica company and involves creation of a chatbot service that can communicate with a user and provide informed medical advice related to medicinal and drug prescription. The chatbot would encapsulate capabilities involving natural language processing, artifical intellgience and machine learning. We are very happy to share that we are being helped by the wonderful team at [Twilio](https://www.twilio.com/) for creating a compelling experience using their [Programmable SMS](https://www.twilio.com/sms) service. \n \n---\n \n## Demo\n \n[Here's a sneak-peak](https://www.youtube.com/watch?v=sJayNZOJ1ZY) of MediBo in action. We will update this demo video as and when we add more capabilities to it. \n\n---\n \n## About the Team \n \n#### Andrew C Satz \n<img src=\"team-details/team-pics/andrew-satz.jpeg\" height=\"150\" width=\"150\"></img>\n\n<table>\n <tr>\n <td>Andrew Satz is a Master's candidate in data science at Columbia University's School of Engineering. He comes with nearly 20 years of experience in the health insurance industry, with roles in risk management and safety. More recently, Andrew founded a data science consultancy firm focusing on the healthcare industry. 
Andrew is passionate about music, travel, and discovering innovative and realistic ways to utilize technology to solve the challenges of today and tomorrow.\n </td>\n </tr>\n</table> \n\n#### Janak A Jain \n<img src=\"team-details/team-pics/janak-a-jain.jpg\" height=\"150\" width=\"150\"></img>\n\n<table>\n <tr>\n <td>Janak is currently an MS candidate at Columbia University's Data Science Institute. With a rich experience in market research and consulting across several categories, he wishes to explore future steps of data across industries such as marketing, smart cities and healthcare. An MBA Tech alumnus of SVKM's NMIMS, Mumbai - his work has moved along an interesting intersection of technological and managerial spaces. He knows Hindi, English, Marathi and French with qualifications of various levels.\n </td>\n </tr>\n</table> \n\n#### Johnny Galsurkar \n<img src=\"team-details/team-pics/jonathan-galsurkar.jpeg\" height=\"200\" width=\"150\"></img>\n\n<table>\n <tr>\n <td>Jonathan Galsurkar is a final year Master's student at Columbia University studying Data Science. His team came in first place at the 2017 Columbia Data Science Hackathon. He received his Bachelor’s degree in Computer Science and Mathematics from CUNY Hunter College. He is currently a Data Science Research intern at IBM, focusing on sentence/paragraph embedding and semantic searching techniques & applications. His main goal is to use data for social good. For fun he likes to play guitar, snowboard, and travel.\n </td>\n </tr>\n</table> \n\n#### Minghong Zheng \n<img src=\"team-details/team-pics/minghong-zheng.png\" height=\"150\" width=\"150\"></img>\n<table>\n <tr>\n <td>I am very interested in applying advanced models with my business sense to do user behavior analysis. 
In addition, I like travelling and outdoor activities to try different things and visit as many interesting places as I can.\n </td>\n </tr>\n</table> \n\n#### Shengyang Zhang\n<img src=\"team-details/team-pics/shengyang-zhang.jpg\" height=\"150\" width=\"150\"></img>\n\n<table>\n <tr>\n <td>Shengyang is a graduate student at Data Science Institute, Columbia University, currently in my last semester and plan to graduate in Dec. 2017. \nProgramming Languages: Python, R, MATLAB, Java, C \nBig Data/Cloud Technologies: Hadoop, Hive, Spark, Google Cloud Platform, AWS \nTools/Packages: Tensorflow, Caffe, Theano, DyNet, NLTK, Scikit-Learn, Numpy, SciPy, pandas, keras, RShiny, Git, Tableau, matplotlib \n </td>\n </tr>\n</table> \n--- \n \n## Feeback and Comments \n \nThe team invites members from the larger academic and industry community to provide feeback and comments on the work. Please get in touch with the team members if you would like to know more / provide feedback about the project. \n \n"
},
{
"alpha_fraction": 0.661475419998169,
"alphanum_fraction": 0.66557377576828,
"avg_line_length": 19.016393661499023,
"blob_id": "a28b3202c07fac909d9e0064c28d219a412b3cfa",
"content_id": "e6461021001c7e58733ff3be59a841d3bfa62f80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1220,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 61,
"path": "/code/receive_sms.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "import os\nfrom flask import Flask, request, redirect, session\nfrom twilio.twiml.messaging_response import MessagingResponse\nfrom twilio.twiml.voice_response import VoiceResponse\nimport json\n\n\napp = Flask(__name__)\n\n\nwith open('data/users.json') as users_data:\n users = json.load(users_data)[0] # This is a dictionary of users indexed\n \t\t\t\t\t\t\t\t # by their phone numbers\n\n\nflag = 0\n\n\n\[email protected](\"/sms\", methods = ['GET','POST'])\ndef sms_reply():\n\n\tcounter = session.get('counter', 0)\n\tcounter += 1\n\n\n\tfrom_number = request.values.get('From', None)\n\thuman = users[from_number][\"name\"] if from_number in users else None\n\tprint(\"\\n\\nTalking to \" + str(human))\n\n\tif(human == None): # A record for this user doesn't exist\n\t\tprint(\"Talking to someone new\")\n\t\tresp = onboard(from_number)\n\telse:\n\n\t\tresp = MessagingResponse()\n\t\tresp.message(\"Hi \" + human + \"! How are you feeling today?\")\n\n\treturn str(resp)\n\n\n\[email protected](\"/sms\", methods = ['GET','POST'])\ndef onboard(from_number):\n\n\tprint('-- ONBOARDING --')\n\n\tresp = MessagingResponse()\n\tresp.message(\"Hi! Let's get to know you first. They call me MediBo. \"\\\n\t\t\"What is your name?\")\n\n\tflag = 1\n\n\treturn str(resp)\n\n\n\n\n\nif __name__ == \"__main__\":\n\t app.run(debug = True)"
},
{
"alpha_fraction": 0.6017804145812988,
"alphanum_fraction": 0.6100890040397644,
"avg_line_length": 35.65217208862305,
"blob_id": "5bb33b83d1c627dd3ea13ce24532699a1a9f23ed",
"content_id": "ad843403a9f8f2b4be9c21eca0cbdef7f5299eaa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1685,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 46,
"path": "/working/medibo/recommend.py",
"repo_name": "janakajain/medbot",
"src_encoding": "UTF-8",
"text": "import pickle\nimport pandas as pd\nfrom pymetamap import MetaMap\n\ndef get_umls_concepts(sentence, metamap):\n concepts, error = metamap.extract_concepts([sentence])\n lookup_names = [concept[3] for concept in concepts]\n triggers = [concept[6] for concept in concepts]\n \n return lookup_names, triggers\n\n\ndef recommend(data, medications, symptom_lookup_names, triggers):\n results = []\n for medication in medications:\n med_data = data[data['Medication'] == medication]\n i = 0\n for lookups in med_data['Lookup_Name']:\n for lookup in lookups.split(','):\n j = 0\n for symptom in symptom_lookup_names:\n if symptom == lookup:\n results.append((lookup, med_data.iloc[i]['Action'], triggers[j]))\n j+=1\n i+=1\n \n emergencies = [result for result in results if result[1] == 'emergency']\n doctor = [result for result in results if result[1] == 'doctor']\n noneed = [result for result in results if result[1] == 'noneed']\n \n if len(emergencies) > 0:\n return 'Emergency', emergencies\n elif len(doctor) > 0:\n return 'Call your doctor', doctor\n elif len(noneed) > 0:\n return \"Don't worry\", noneed\n else:\n return 'No Matches Found', None\n\n# data = pd.read_pickle(open('updated_data.pickle', 'rb')) \n# mm = MetaMap.get_instance('/Users/jgalsurkar/Package_Downloads/public_mm/bin/metamap16')\n\n\n# lookup_names, triggers = get_umls_concepts('I have some head pain and a rash on my face.', mm)\n# print('RESULTS')\n# print(recommend(data, ['betaxolol'],lookup_names, triggers ))"
}
] | 16 |
mattnorris/grain | https://github.com/mattnorris/grain | 885c6fc9934847d86d54a734988d9ca8a2105486 | 454926389d5afebcb2d51a626a62c689577466d7 | be7cd00a1c1690df54587d6d7ac1b2c9a43f2f86 | refs/heads/master | 2021-01-19T16:26:31.218122 | 2019-04-12T04:53:45 | 2019-04-12T04:53:45 | 16,260,699 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5941320061683655,
"alphanum_fraction": 0.6020782589912415,
"avg_line_length": 27.719297409057617,
"blob_id": "d414cc4c038f3b81f34770e4683545d373590230",
"content_id": "c0de35deb046084fca96a52f53ff634382fa9724",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1636,
"license_type": "no_license",
"max_line_length": 213,
"num_lines": 57,
"path": "/src/down-web2py.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n###############################################################################\n# \n# Title: down-web2py.sh\n# Description: Remove web2py, archiving any applications beforehand\n# Author: Matthew Norris\n# Reference: http://www.howtoforge.com/forums/showthread.php?t=96\n# http://bit.ly/tar-exclude\n#\n###############################################################################\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\nINSTALL_DIR=$HOME/dev/source\nUNINSTALL_DIR=$HOME/uninstall\n\n# Check for 'uninstall' directory. If one does not exist, make one.\n\nif [ ! -d $UNINSTALL_DIR ]\n then\n echo \"Making the $UNINSTALL_DIR directory..\"\n mkdir $UNINSTALL_DIR\n echo \"Done.\"\n echo\nfi\n\n# Archive any applications before removing\n\nif [ -d $INSTALL_DIR/web2py/applications ]\n then \n \tcd $INSTALL_DIR/web2py\n ARCHIVE_FILE=$UNINSTALL_DIR/web2py.applications.$RANDOM.bak.tar.gz\n echo \"Archiving existing applications to $ARCHIVE_FILE...\"\n \n # For each 'exclude' argument, because 'cd' is used for the **local** \n # \"applications\" directory, use the local \"applications/[dir]\" path rather \n # than the **full** path to \"applications\". \n tar -pczf $ARCHIVE_FILE applications --exclude \"applications/admin\" --exclude \"applications/examples\" --exclude \"applications/welcome\" --exclude \"applications/__init__.py\" --exclude \"applications/__init__.pyc\"\n \n if [ $? -eq 0 ]; then\n echo \"Done.\"\n echo\n\telse\n\t errorMessage \"Moving failed with exit status of $?\"\n\tfi\nfi\n\n# Remove web2py\n\necho \"Removing web2py...\"\nrm -fr $INSTALL_DIR/web2py\necho \"Done.\"\necho"
},
{
"alpha_fraction": 0.7666666507720947,
"alphanum_fraction": 0.7666666507720947,
"avg_line_length": 29,
"blob_id": "70b1c7c98e7493a4b5628c1ff50b6543ab8a2a7f",
"content_id": "bceaad98877832531537896f5063724ec470fc06",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 30,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 1,
"path": "/src/down-hadoop.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "sudo apt-get remove hadoop -y\n"
},
{
"alpha_fraction": 0.5319967269897461,
"alphanum_fraction": 0.5580884218215942,
"avg_line_length": 35.049503326416016,
"blob_id": "95d8264426d6b1341e69548eb588a63988260c78",
"content_id": "dc263ef8663dc869b4c022768aa3672a2b925577",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 3641,
"license_type": "no_license",
"max_line_length": 143,
"num_lines": 101,
"path": "/src/compile_hamlpy_sass.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n################################################################################\n# \n# Description: Eclipse uses \"Builders\" to build projects. These Builders can \n# fire scripts and Ant tasks to accomplish various tasks. This \n# script compiles Haml & Sass files into regular HTML & CSS. \n#\n# Usage: compile_hamlpy_sass [files created] [files updated] [files deleted]\n# \n# Requirements: Hamlpy & Sass must be installed globally on your machine.\n# \n# http://github.com/jessemiller/HamlPy/blob/master/reference.md\n# http://sass-lang.com/tutorial.html\n# \n# Author: Matthew Norris\n# \n# References: Eclipse Builders: http://stackoverflow.com/questions/1012230/eclipse-on-save-execute-a-program/1012281#1012281\n# Eclipse Builders tutorial: http://www.ibm.com/developerworks/opensource/tutorials/os-eclipse-tools/section4.html\n# Incrementing in bash script: http://www.unix.com/shell-programming-scripting/77028-how-create-incrementing-counter.html\n# Get file extension: http://liquidat.wordpress.com/2007/09/29/short-tip-get-file-extension-in-shell-script/#comment-35771\n# Split string by char & iterate: http://stackoverflow.com/questions/918886/split-string-based-on-delimiter-in-bash/918931#918931\n# \n################################################################################\n\n################################################################################\n# Functions\n################################################################################\n\n# Compile the file with the given command, or delete it if the count is 3. 
\n#\n# arg1: command\n# arg2: source file\n# arg3: file extension for target file\n#\nfunction handleFile {\n\tcommand=$1\n input=$2\n output=\"${2%.*}.$3\"\n \n if [ $BLANK_COUNT -lt 3 ]; then\n \techo -e \"Updated:\\t$input\"\n echo -e \"Compiling:\\t$output\"\n \n $command $input $output\n else\n echo -e \"Deleted:\\t$input\"\n echo -e \"Deleting:\\t$output\"\n \n rm $output\n fi\n}\n\n# Loop through the given files, compiling and .haml or .hamlpy files to HTML \n# and .scss or .sass files to CSS. \n# \n# @see: http://stackoverflow.com/questions/918886/split-string-based-on-delimiter-in-bash/918931#918931\n# \n# arg1: list of filenames separated by spaces\n#\nfunction checkFiles {\n IFS=' ' read -ra ADDR <<< \"$1\"\n for file in \"${ADDR[@]}\"; do \n fext=\"${file##*.}\"\n if [ $fext = \"haml\" ] || [ $fext = \"hamlpy\" ]; then\n handleFile \"hamlpy\" $file \"html\"\n elif [ $fext = \"scss\" ] || [ $fext = \"sass\" ]; then\n handleFile \"sass\" $file \"css\"\n fi\n done\n}\n\n################################################################################\n# Main\n################################################################################\n\n# The corresponding Eclipse Builder's arguments are structured like this: \n#\n# \"${build_files:a,f}\" \"${build_files:c,f}\" \"${build_files:r,f}\"\n#\n# Because any one of these 3 arguments (or all of them) can be blank, we have \n# to account for 4-6 arguments without knowing exactly how many. We do, however,\n# know that they will always be in the order: created, updated, deleted.\n\necho \"Build complete. Script was passed $# arguments.\"\necho\n\n# Loop through the parameters. If a blank is encountered, increment; this way \n# we know if we should be creating (1), updating (2), or deleting (3) files.\n\nBLANK_COUNT=0 \nfor arg in \"$@\"\ndo \n\tif [ \"$arg\" = \"\" ]; then\n\t BLANK_COUNT=`expr $BLANK_COUNT + 1`\n else\n checkFiles \"$arg\"\n fi\ndone \n\necho\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.5514705777168274,
"alphanum_fraction": 0.5588235259056091,
"avg_line_length": 25.153846740722656,
"blob_id": "fd79f399021fe398ee4dfcc65b2bfc97d047e13a",
"content_id": "9859a84a629953c263c0f4f27cc8f2e8b8ddfe17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 680,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 26,
"path": "/src/down-maven.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n#\n# Title: up-maven.sh\n# Description: Install Maven 2 for use with Java & Grails\n# Author: Matthew Norris\n# References: http://grails.org/Maven+Integration\n#\n################################################################################\n\n# Installation location\ninstallPrefix=/opt/dev/tools\n\n# Default install directory name\ninstallDirName=\"apache-maven-2.2.1\"\n\n# Friendly names\nfriendlyName=\"Maven 2\"\nfriendlyLinkName=\"maven\"\n\necho \"Removing $friendlyName...\"\nsudo rm /usr/local/bin/mvn\nsudo rm $installPrefix/$friendlyLinkName\nsudo rm -dr $installPrefix/$installDirName\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.7032213807106018,
"alphanum_fraction": 0.7100753784179688,
"avg_line_length": 27.627450942993164,
"blob_id": "401468950174f8fb89542e140c9e2b1815faf9d0",
"content_id": "0e88a5128900b09a45ecfa1e7462d7bd39520703",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1459,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 51,
"path": "/src/virtualenv/install_pyjamas.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nInstalls Pyjamas in a virtual environment.\n\n@see: http://pyjs.org/getting_started.html\n\"\"\"\nimport os\nimport sys\nimport tempfile\nimport shutil\n\nsys.path.append(os.path.join(os.environ['HOME'], 'dev/modules'))\nfrom poprop import util\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\nif not util.in_virtualenv():\n sys.exit('Script not running in a virtual environment. Exiting.')\n\nDOWNLOADS = os.path.join(os.environ['HOME'], 'dev/downloads')\nutil.mkdir(DOWNLOADS)\n\nurl = util.URL('http://downloads.sourceforge.net/project/pyjamas/pyjamas/0.7pre1/pyjamas-0.7pre1.tgz')\ndl_file = os.path.join(DOWNLOADS, url.basename)\n\nREL_CONTRIB_PATH = 'modules/contrib' \nCONTRIB_DIR = os.path.join(util.get_virtualenv(), REL_CONTRIB_PATH)\nutil.mkdir(CONTRIB_DIR)\n\n# If the file is not already present, download it. \nif not os.path.isfile(dl_file):\n print 'Downloading %s...' % url.basename \n dl_file = url.download(dstdir=DOWNLOADS)\n print 'Done.\\n'\n\n# Install and create symlinks.\nprint 'Installing pyjamas...' \n\npyjamas = util.Archiver(dl_file)\npyjamas.extract(dstdir=CONTRIB_DIR)\nos.chdir(os.path.join(CONTRIB_DIR, url.rootname))\nos.system('python bootstrap.py')\n\nprint 'Done.\\n'\nprint 'Creating symlinks...'\n\nos.chdir(os.path.join(util.get_virtualenv(), 'bin'))\nos.system('ln -s ../%s/%s/bin/pyjsbuild' % (REL_CONTRIB_PATH, url.rootname))\nos.system('ln -s ../%s/%s/bin/pyjscompile' % (REL_CONTRIB_PATH, url.rootname))\n\nprint 'Done.\\n'"
},
{
"alpha_fraction": 0.5554425120353699,
"alphanum_fraction": 0.5778229832649231,
"avg_line_length": 28.787878036499023,
"blob_id": "ef4454332e78b81113b053c249b3604bad0e5cc4",
"content_id": "a9823702ea45de54528ffa6aba113633e93d81b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 983,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 33,
"path": "/src/remove_python.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "import os\nimport sys\n\nsys.path.append(os.path.join(os.getenv('DEV_HOME'), 'modules'))\nfrom poprop.util import Installer\n\n################################################################################\n#\n# Title: remove_python.sh\n# Description: Remove Python 2.5, dev tools for Python 2.5 & 2.6, and other\n# Python utilties. \n# Author: Matthew Norris\n# Reference:\n#\n################################################################################\n\nprint 'Removing virtualenvwrapper...'\nos.system('sudo easy_install -m virtualenvwrapper')\nprint 'Done.\\n'\n\nprint 'Removing virtualenv...'\nos.system('sudo easy_install -m virtualenv')\nprint 'Done.\\n'\n\n# Remove setuptools. \n# TODO: Not sure how to do this: \n# http://www.eby-sarna.com/pipermail/peak/2006-February/002450.html\n\npackages = ['python2.5', 'python2.5-dev', 'python2.6-dev', 'python-docutils']\ninstaller = Installer(packages)\ninstaller.remove_seq()\n\nos.system('sudo aptitude remove python-lxml')\n"
},
{
"alpha_fraction": 0.6578947305679321,
"alphanum_fraction": 0.6625387072563171,
"avg_line_length": 29.809524536132812,
"blob_id": "9668d5ce2deb35862c0715b72742f89801a7c1bd",
"content_id": "0b98b6776cd4b86fc2b272cd69fcfd8aa84df233",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 646,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 21,
"path": "/src/process_coverage.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nRemoves unwanted elements from the given XML file and creates a new XML file. \n\"\"\"\nimport os\nimport sys\n\nsys.path.append(os.path.join(os.environ['HOME'], 'dev/modules'))\nfrom poprop.util import XMLFormatter\n\nprint sys.argv\n\nprint 'Formatting coverage report...'\n\n# See http://www.regular-expressions.info/lookaround.html\nformatter = XMLFormatter(sys.argv[1])\n#newxml = formatter.keep('package', 'name', 'poprop(?!\\(.contrib|_)|tests', \n# sys.argv[2])\nnewxml = formatter.coverage_keep('package', 'name', ['poprop'], \n sys.argv[2])\nprint 'Done'\nprint '%s rewritten as %s' % (formatter.path, newxml)"
},
{
"alpha_fraction": 0.5546506643295288,
"alphanum_fraction": 0.5649378299713135,
"avg_line_length": 31.40277862548828,
"blob_id": "6c3eaa481248785b11953034cc13f16ff521d2f5",
"content_id": "2deefd11c36fa68b6eb974304ee633863b405f34",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2333,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 72,
"path": "/src/up-unison.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-unison.sh\n# Description: Setup Unison to sync computers\n# Author: Matthew Norris\n# Reference: http://www.micahcarrick.com/11-07-2007/unison-synchronize-ubuntu.html\n# http://fixunix.com/questions/15884-printing-all-arguments-passed-bash-script.html\n\n# Check for the arguments \"server\" or \"client\" before proceeding.\n\nif [ -n \"$1\" ]\n then\n if [ \"$1\" == \"server\" ] \n then\n echo \"Installing server...\"\n sudo aptitude install openssh-server unison -y\n elif [ \"$1\" == \"client\" ]\n then\n echo \"Installing client...\"\n sudo aptitude install unison unison-gtk -y\n else\n echo \"Invalid argument: '$1'. Provide an argument of \\\"server\\\" or \\\"client\\\" to continue.\"\n exit 1\n fi\nelse\n echo \"Invalid argument: '$1'. Provide an argument of \\\"server\\\" or \\\"client\\\" to continue.\"\n exit 1\nfi\n\n################################################################################\n# TODO: Getting an error when using functions: \n# ./unison-setup.sh: line 19: exit_with_error: command not found\n################################################################################\nfunction exit_with_error {\n echo \"Invalid argument: '$1'. 
Provide an argument of \\\"server\\\" or \\\"client\\\" to continue.\"\n exit 1\n}\n\nfunction install_client {\n sudo aptitude install unison unison-gtk -y\n}\n\nfunction install_server {\n # Install Unison for synchronizing computers\n\n sudo aptitude install openssh-server unison -y\n\n #CONFIG_FILE=/etc/ssh/sshd_config\n #echo \"Backing up $CONFIG_FILE...\"\n #sudo cp $CONFIG_FILE $CONFIG_FILE.bak\n\n ## Write a header in the config file so it is obvious what we have changed\n\n #echo \"Modifying $CONFIG_FILE...\"\n #sudo echo \"################################################################################\" >> $CONFIG_FILE\n #sudo echo \"# Custom settings\" >> $CONFIG_FILE\n #sudo echo \"################################################################################\" >> $CONFIG_FILE\n #sudo echo >> $CONFIG_FILE\n\n ## Deny remote users root access\n\n #sudo echo \"AllowRootLogin no\" >> $CONFIG_FILE\n\n ## Treat args as usernames to allow\n\n #for username in \"$@\"\n # do\n # echo \"Allowing access to $username.\"\n # sudo echo \"AllowUsers $username\" >> $CONFIG_FILE\n #done\n\n ## Restart the SSH service\n #sudo /etc/init.d/ssh\n}\n"
},
{
"alpha_fraction": 0.5848746299743652,
"alphanum_fraction": 0.59186190366745,
"avg_line_length": 24.34375,
"blob_id": "3793bda7f8d645241337c28d52abdda6fed504c0",
"content_id": "54d26de790ed9ff58006841d148ee834ab38f413",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2433,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 96,
"path": "/src/down-wdk.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n# Title: down-wdk.sh\n# Description: Remove Yahoo TV Widget Development Kit\n# Author: Matthew Norris\n# Reference: http://linux.about.com/od/ubuntu_doc/a/ubudg21t2.htm\n#\n################################################################################\n\nverbose=false\n\n#utilDir=\"$HOME/scripts/util\"\nutilDir=\"../util\"\n\n# Location to download source files\ndefDownloadDir=$HOME/sources\ndlDir=$defDownloadDir\n\n# Location to install\ndefInstallPrefix=/opt/dev/sdks\n#defInstallPrefix=$HOME/dev/sdks\ninstallPrefix=$defInstallPrefix\n\n# Linux group (used for permissions)\ngroup=developers\n\n# Download files\n\ndlPrefix=\"http://connectedtv.yahoo.com/developer/wdk/download\"\ndlFileName=\"wdk\"\ndlFileExt=\"zip\"\n\n# Default install directory name\ninstallDirName=\"wdk\"\n\n# Friendly name for messages\nfriendlyName=\"Widget Development Kit\"\n\n\n# Simulator global widget directory\ngWidgetDir=\"devwidgets\"\nlWidgetDir=\"TVWidgets\"\n\n# Package name\npkgPrefix=\"ywe-wdk\"\npkgVersion=\"0.9.7.6\"\npkgArch=\"i386\"\npkgExt=\"deb\"\n\npkgName=\"${pkgPrefix}_${pkgVersion}_${pkgArch}\"\n\n# Default install directory name (notice the subtle difference)\ninstallDirName=\"${pkgPrefix}-${pkgVersion}-${pkgArch}\"\n\n###############################################################################\n# Functions\n###############################################################################\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\n###############################################################################\n# Main\n###############################################################################\n\n# Check for the proper utilities\n\nif [[ ! -f \"$utilDir/mmkdir.sh\" ]] || [[ ! -f \"$utilDir/marmdir.sh\" ]]; then\n errorMessage \"Utilities not found in '$utilDir'. 
Please specify the proper directory.\"\nfi\n\n# Archive any widgets\necho \"Archiving $lWidgetDir and $gWidgetDir...\"\nsudo $utilDir/marmdir.sh $HOME/$lWidgetDir $HOME/uninstall\nsudo $utilDir/marmdir.sh \"/${gWidgetDir}\" $HOME/uninstall\necho \"Done.\"\n\n# Remove package\necho \"Removing $pkgPrefix...\"\nsudo dpkg -r $pkgPrefix\necho \"Done.\"\n\n# Remove SDK folder\necho \"Removing $installDirName...\"\nsudo rm -dr $installPrefix/$installDirName\necho \"Done.\"\n\n# Remove dependencies\necho \"Removing dependencies...\"\nsudo aptitude remove expect -y\nsudo aptitude remove libsdl-image1.2 -y\nsudo aptitude remove lib32readline5 -y\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.4379344582557678,
"alphanum_fraction": 0.44538232684135437,
"avg_line_length": 26.216217041015625,
"blob_id": "a1279fd1144bee0a26f76c5d8c67e976dfa6ca52",
"content_id": "8b158c0d13ed3466aad6e3c6737b983c980baecb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2014,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 74,
"path": "/src/install-shapado.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-shapado.sh\n#\n# Description: Installs the Shapado project from source in a subfolder named \n# \"shapado\" or to a directory of your choice. \n# PREREQUISITES: This script assumes Ruby, its gems, Rails 2.3.8, \n# and MongoDB are already installed.\n# \n# See https://sites.google.com/a/poprop.com/wiki/build/shapado\n#\n# Author: Matthew Norris\n#\n\nPROJECT_NAME=\"shapado\"\n\n################################################################################\n# Helper functions \n################################################################################\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\nfunction warningMessage() {\n echo -e \"Warning: $1\"\n exit 2\n}\n\nfunction outputUsage() {\n echo \"up-shapado\"\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -n/--name Specify the project name\"\n echo \" -h/--help Output this message\"\n \n exit 1\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in \n -n|--name)\n shift 1\n PROJECT_NAME=\"$1\"\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1.\"\n ;;\n *)\n errorMessage \"Unknown parameter $1.\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\ngit clone git://gitorious.org/shapado/shapado.git $PROJECT_NAME\n\ncp $PROJECT_NAME/config/shapado.sample.yml $PROJECT_NAME/config/shapado.yml\ncp $PROJECT_NAME/config/database.yml.sample $PROJECT_NAME/config/database.yml\n\ncd $PROJECT_NAME\nsudo rake gems:install\nscript/update_geoip\nrake bootstrap\n"
},
{
"alpha_fraction": 0.654411792755127,
"alphanum_fraction": 0.654411792755127,
"avg_line_length": 26.200000762939453,
"blob_id": "e04691431e9e5225509675bbcaa180f194886af5",
"content_id": "45f90102588f6396f2638844531553fb52a2e6d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 136,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 5,
"path": "/src/down-pydev.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-pydev.sh\n# Description: Remove PyDev and Eclipse\n# Author: Matthew Norris \n\nsudo aptitude remove eclipse-pydev -y\n"
},
{
"alpha_fraction": 0.4743705093860626,
"alphanum_fraction": 0.4815647602081299,
"avg_line_length": 30.097902297973633,
"blob_id": "ea26dee382bb8770772da46c78d3d10f9165c609",
"content_id": "8ea232b018fc1190958687ede3b15f33cc809544",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 4448,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 143,
"path": "/src/install-apache.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: install-apache.sh\n# Description: Installs Apache\n# Author: Matthew Norris\n# Reference: https://help.ubuntu.com/community/ApacheMySQLPHP#Virtual%20Hosts\n# http://www.grymoire.com/Unix/Sed.html\n# tee command - http://bit.ly/bfK1wb\n#\n\n################################################################################\n# File & directory locations \n################################################################################\n\nSITES_CONFIG_DIR=/etc/apache2/sites-available\n\n# Default \nDEF_SITE=default\nDEF_SITES_DIR=\"/var/www\"\n\nDEF_LOGS_DIR=/var/log/apache2\nDEF_ERROR_LOG=errors.log\nDEF_ACCESS_LOG=access.log\n\n# New \nNEW_SITES_DIR=\"$HOME/dev/www\"\nNEW_SITE=development\nNEW_LOGS_DIR=$HOME/dev/log/apache2\n\n################################################################################\n# Packages \n################################################################################\n\nPKGS_APACHE=\"apache2\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n \n # TODO: Add help messages for your options here. \n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. 
\nfunction installPackages() {\n echo \"Installing `basename $0` tools & libraries...\"\n sudo aptitude install $PKGS_APACHE -y\n echo \"Done.\"\n echo \n echo \"Creating a new site in $NEW_SITES_DIR\"\n mkdir -p $NEW_SITES_DIR\n mkdir -p $NEW_LOGS_DIR\n \n # Create a new site, using the default as a template.\n # \n # NOTE: Using a > won't redirect the output properly because sudo loses its \n # permission by the time it gets to the >. Instead, pipe the output to \n # tee instead. See http://bit.ly/bfK1wb for more details. \n cd $SITES_CONFIG_DIR\n sudo sed -e \"s:$DEF_SITES_DIR:$NEW_SITES_DIR:g\" -e \"s:$DEF_LOGS_DIR:$NEW_LOGS_DIR:g\" < $DEF_SITE | sudo tee $NEW_SITE\n echo \"Done.\"\n echo\n \n # Deactivate the old site & activate the new one. \n echo \"Activating new site, '$NEW_SITE'...\"\n sudo a2dissite $DEF_SITE && sudo a2ensite $NEW_SITE\n sudo /etc/init.d/apache2 restart\n echo \"Done.\"\n echo\n echo \"Place anything you want served by localhost in $NEW_SITES_DIR\"\n echo\n}\n\n# Removes packages installed and tears down any directories and files created. \nfunction removePackages() {\n echo \"Removing `basename $0` tools & libraries...\"\n sudo a2dissite $NEW_SITE && sudo a2ensite $DEF_SITE \n sudo /etc/init.d/apache2 stop\n sudo rm \"$SITES_CONFIG_DIR/$NEW_SITE\"\n sudo aptitude remove $PKGS_APACHE -y\n echo \"Done.\"\n \n exit 0 \n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. 
\nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in\n # EXAMPLE: Uncomment below to assign a 'destination directory', DST_DIR, \n # to the arg given after a '-d' or '--dst' on the command line.\n # \n # -d|--dst)\n # shift 1 # eat the '-d' or '--dst'\n # DST_DIR=\"$1\" # assign the next arg as this variable \n # shift 1 # eat the arg you just assigned\n # ;;\n -r|--remove)\n shift 1\n removePackages\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\ninstallPackages\n\n"
},
{
"alpha_fraction": 0.6593931317329407,
"alphanum_fraction": 0.6651892066001892,
"avg_line_length": 26.632076263427734,
"blob_id": "552a2a2143ca89639389e66a97413cf98750bf05",
"content_id": "4ecb2d0fde111e2faaa67afd4cb2caec57db62ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2933,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 106,
"path": "/docs/create_python_project.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"Creates a new Python web project using virtualenv.\"\"\"\n\nimport virtualenv\nimport subprocess\nimport os\nimport string\n\n# http://mindtrove.info/virtualenv-bootstrapping/\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\ninstall_path = os.environ['WORKON_HOME'] or os.getcwd()\n\nGITIGNORE = \"\"\"*~\ntmp*\n*.tmp\n*.bak\n*.pyc\n\n# Build artifacts\n#################\nnosetests.xml\ncoverage.xml\n.coverage\n*.cover\n.figleaf\nreport.html\npyccuracy_report.html\n\n# Sass artifacts\n################\n.sass-cache/\n\n# System artifacts\n##################\nPIL.pth\n\"\"\"\n\nPLACEHOLDER = \"\"\"Documentation generated by Sphinx or Epydoc should go here.\n\nDO NOT DELETE. Empty directories are not committed to version control. \nThis README file servers as a placeholder so that your CI tool (e.g., Jenkins) \nwill commit this directory to its repository.\n\"\"\"\n\nSITE_PKGS = \"lib/python2.5/site-packages\"\n\ndef after_install(options, home_dir): \n \"\"\"\n After creating the virtual environment, create the default project \n structure, complete with initial \"best practice\" files. \n \"\"\"\n print 'Environment created. Creating project\\'s directory structure...'\n env_path = os.path.abspath(home_dir)\n os.chdir(env_path)\n # The \"project\" directory is the heart of the app. This is where all \n # source code, project documents, and scripts will go. \n os.makedirs('project/src')\n os.chdir('project')\n # A \"best practice\" .gitignore file that ignores tmp files, \n # build artifacts, etc.\n open('.gitignore', 'w').write(GITIGNORE)\n # Create directory for project documentation. \n os.makedirs('docs/api')\n os.chdir('docs/api')\n open('README', 'w').write(PLACEHOLDER)\n # Save continuous integration files here (e.g., Jenkin's config.xml). \n os.chdir('../..')\n os.makedirs('scripts/ci')\n # Any configuration of paths, etc. 
\n os.mkdir('config')\n # Generate a path to use for any \"module not found\" PIL errors. \n # http://stackoverflow.com/questions/2813742\n print 'Generating PIL.pth...'\n open('config/PIL.pth', 'w').write(os.path.join(env_path, SITE_PKGS))\n # Create a directory for tests. \n os.makedirs('test/unit')\n os.chdir('test')\n os.mkdir('fixtures')\n os.mkdir('mocks')\n os.mkdir('functional')\n os.mkdir('acceptance')\n \n print 'Done.'\n print 'Installing packages...'\n subprocess.call([string.join(home_dir, 'bin', 'easy_install'),\n 'lxml'])\n print 'Done.'\nvirtualenv.after_install = after_install\n\nprint virtualenv.after_install\n\ndef adjust_options(options, args): \n \"\"\"\n Adjusts options to use Python 2.5 and exclude site packages.\n \"\"\"\n options.python = 'python2.5'\n options.no_site_packages = True \nvirtualenv.adjust_options = adjust_options\n\nprint virtualenv.adjust_options\n\nif __name__ == '__main__': \n print 'This script is not fully functional. No post hooks will be called.'\n virtualenv.main()\n "
},
{
"alpha_fraction": 0.7445820569992065,
"alphanum_fraction": 0.7585139274597168,
"avg_line_length": 34.88888931274414,
"blob_id": "08a89bc1b2594b5fc9cabe3fda112f464e57549a",
"content_id": "fcf42152d1ba111daeeeb1f910b500bc2b0d9198",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 646,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 18,
"path": "/src/up-other-tools.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-other-tools.sh\n# Description: Setup useful tools and utilities\n# Author: Matthew Norris\n# Reference: http://live.gnome.org/GeditPlugins\n\n# sudo aptitude install agave -y\nsudo aptitude install curl -y\nsudo aptitude install gedit-plugins -y\nsudo aptitude install inkscape -y\nsudo aptitude install keepassx -y\nsudo aptitude install unrar -y\nsudo aptitude install virtualbox-ose -y\n\n# NetBeans UML \"sometimes\" deletes a header file, rendering your UML documents \n# useless; evaluating PyDev & PyUML for Eclipse instead.\n# Reference: http://www.nabble.com/-j2ee--High-CPU-usage-td14170525.html\n\n#sudo aptitude install netbeans -y\n"
},
{
"alpha_fraction": 0.5701754093170166,
"alphanum_fraction": 0.5789473652839661,
"avg_line_length": 22.240739822387695,
"blob_id": "0316a8385c37b6ebb859f21b8deb19c52052aa9e",
"content_id": "369f091b4669ccb628eca22cfaff97c28098de11",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1254,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 54,
"path": "/src/up-webdriver-python.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n# \n# Title: up-webdriver-python.sh\n# Description: Setup WebDriver/Selenium for use with Python\n# Author: Matthew Norris\n# Reference: http://code.google.com/p/selenium/wiki/PythonBindings\n# \n################################################################################\n\nSOURCES=$HOME/sources\nSDKS=/opt/dev/sdks\nTOOLS=/opt/dev/tools\n\nSVN_TRUNK=http://selenium.googlecode.com/svn/trunk/\nROOT_DIR=selenium-read-only\n\nFRIENDLY_NAME=\"WebDriver\"\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\n# Check for 'sources' directory. If one does not exist, make one.\n\nif [ ! -d $SOURCES ]; then\n echo \"Making the $SOURCES directory..\"\n mkdir $SOURCES\nfi\n\ncd $SOURCES\n\nif [ -d $SOURCES/$ROOT_DIR ]; then\n echo \"$ROOT_DIR exists. Using local copy.\"\nelse\n echo \"Checking out $FRIENDLY_NAME from subversion...\"\n \n svn checkout $SVN_TRUNK $ROOT_DIR\n \n if [ $? -eq 0 ]; then\n echo \"Done.\"\n else\n errorMessage \"Checkout failed with exit status of $?\"\n fi \nfi\n\ncd $ROOT_DIR\n\necho \"Installing $FRIENDLY_NAME for python2.5 and python2.6...\"\nsudo python2.6 setup.py install\nsudo python2.5 setup.py install\necho \"Done.\""
},
{
"alpha_fraction": 0.5036131143569946,
"alphanum_fraction": 0.5090327858924866,
"avg_line_length": 30.823009490966797,
"blob_id": "7493d1fc766c1d0bc277f0fbd25803fcdb1b0e1c",
"content_id": "d4db27d747112d53864722a69b32392ace239d94",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7196,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 226,
"path": "/src/gen_install_script.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nimport sys\nimport getpass\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\n################################################################################\n#\n# Title: gen_install_script.py\n# Description: Generates the skeleton of a bash installation script you can \n# use to install and remove various development tools.\n# Author: Matthew Norris\n# Reference: Get current user's name in Python - http://bit.ly/n2ifWm\n#\n################################################################################\n\nNAME_ARG_HELP = 'TOOLNAME'\n\nDEFAULT_DIR = os.getenv(\"HOME\")\nname_conv = 'install-%s.sh'\n\nfile_contents = \"\"\"#!/bin/bash\n# Title: %s\n# Description: TODO: Write description of what this script does. \n# Author: %s\n# Reference: TODO: Record any references, like URLs, here. \n#\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n \n # TODO: Add help messages for your options here. \n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. \nfunction installPackages() {\n # TODO: Write a setup function. 
\n #echo \"Installing `basename $0` tools & libraries...\"\n echo \"Function not implemented! Nothing was done!\"\n #echo \"Done.\"\n \n exit 0\n}\n\n# Removes packages installed and tears down any directories and files created. \nfunction removePackages() {\n # TODO: Write a tear down function.\n #echo \"Removing `basename $0` tools & libraries...\" \n echo \"Function not implemented! Nothing was done!\"\n #echo \"Done.\" \n \n exit 0\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in\n \n # TODO: Create some script options. \n # EXAMPLE: Uncomment below to assign a 'destination directory', DST_DIR, \n # to the arg given after a '-d' or '--dst' on the command line.\n # \n # -d|--dst)\n # shift 1 # eat the '-d' or '--dst'\n # DST_DIR=\"$1\" # assign the next arg as this variable \n # shift 1 # eat the arg you just assigned\n # ;;\n -r|--remove)\n shift 1\n removePackages\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\n# TODO: Write the main script here.\n\necho \"Executing `basename $0`...\"\ninstallPackages\necho \"Done.\"\n \n\"\"\" \n\ndef print_usage():\n \"\"\"\n Prints the usage for this script. \n \"\"\"\n print \"Usage: python %s %s [options...]\\n\" % \\\n (os.path.basename(__file__), NAME_ARG_HELP)\n print \" -h/--help Prints help message, usage, then exits\"\n print \" -d Directory and/or filename to save to/as\"\n print \" -f Forces file overwrite\"\n print \"\"\n\ndef main():\n \"\"\"\n Writes a new script file given a file name and options. 
\n \"\"\" \n # Check for help first. \n if '-h' in sys.argv or '--help' in sys.argv: \n script_name = name_conv % NAME_ARG_HELP\n print \"\\nGenerates the file \\'%s\\' by convention and saves it to \\n\" \\\n \"'%s' by default\\n\" % (script_name, os.path.abspath(DEFAULT_DIR))\n print_usage()\n sys.exit(0)\n \n # Check for the TOOLNAME next. \n try:\n name = sys.argv[1]\n \n # A name must be given first, not a flag. \n if name.startswith('-'): \n print '\\nERROR! No %s provided. You must supply the name ' \\\n 'of the tool before supplying any options.\\n' % NAME_ARG_HELP\n print_usage()\n sys.exit(1)\n \n # A name cannot be a file path or contain a directory. \n if os.path.dirname(name):\n print '%s must not contain a directory.\\n' % NAME_ARG_HELP\n print_usage()\n sys.exit(1)\n \n toolname = 'install-%s.sh' % name\n \n except IndexError:\n print '\\nERROR! No %s provided. \\'%s\\' is the name of the tool ' \\\n 'for which you would like to create an installation script.\\n' % \\\n (NAME_ARG_HELP, NAME_ARG_HELP)\n print_usage()\n sys.exit(1)\n \n # Check for file or directory. \n try:\n argpos = sys.argv.index('-d')\n path = sys.argv[argpos + 1]\n if os.path.isdir(path):\n # save to this dir instead of the default. \n filepath = os.path.join(path, toolname)\n elif os.path.isfile(path):\n # Use this filename and location instead of the default. \n filepath = os.path.abspath(path)\n else:\n # Use the -d option as the file name, but with the default dir. \n filepath = os.path.join(DEFAULT_DIR, path)\n except IndexError:\n # No file or dir followed the -d option. \n print 'No valid file or directory provided.\\n'\n print_usage()\n sys.exit(1)\n except ValueError:\n # The -d option was not provided, so just use the default dir. \n filepath = os.path.join(DEFAULT_DIR, toolname)\n \n# if '-f' in sys.argv: \n# # Overwrite the file if it exists. 
\n# overwrite = True\n \n # Exit if the file path already exists and the 'force' argument \n # has not been explicitly provided. \n if os.path.isfile(filepath) and not '-f' in sys.argv:\n print '%s exists. Use another file name ' \\\n 'or the \\'-f\\' option to overwrite.\\n' % filepath\n sys.exit(1)\n \n # Configuration options for the bash file. \n config = {'title': os.path.basename(filepath), \n 'author': getpass.getuser(), \n }\n \n # Create the file, filling in the configuration data. \n script_file = open(filepath, 'wb')\n script_file.write(file_contents % (config['title'], config['author']))\n script_file.close()\n \n # Make the file executable by anyone. http://bit.ly/qVe75R\n os.chmod(filepath, 0755)\n \n print 'Created \\'%s\\' and made it executable.' % filepath\n\nif __name__ == '__main__':\n main()\n "
},
{
"alpha_fraction": 0.7102137804031372,
"alphanum_fraction": 0.7102137804031372,
"avg_line_length": 37.09090805053711,
"blob_id": "7900916936b2fa56b573e25d77eff3ed2c601588",
"content_id": "65edb599b94045f35f1213188a1f1ff01989319a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 421,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 11,
"path": "/src/up-pydev.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-pydev.sh\n# Description: Setup PyDev and Eclipse\n# Author: Matthew Norris\n# Reference: \n\nsudo aptitude install eclipse-pydev -y\n\n# TODO: Right now, to install plugins in eclipse this way, you must run \n# 'sudo eclipse' so that you have access to the plugins directory (controlled \n# by root). You may be able to change the group on the directory and sub-\n# directories like with Google App Engine. \n"
},
{
"alpha_fraction": 0.6906474828720093,
"alphanum_fraction": 0.7086330652236938,
"avg_line_length": 24.272727966308594,
"blob_id": "ac12e5253ad4de2d2fa95a8f76854c3f39c3264d",
"content_id": "58970afa4e2c576efcfc82841e6ff0bc8a337b3c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1112,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 44,
"path": "/src/up-django.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-django.sh\n# Description: Setup Django for use with Python\n# Author: Matthew Norris\n# Reference: http://www.djangoproject.com/download/\n\nSOURCES=~/sources\nSDKS=/opt/dev/sdks\nTOOLS=/opt/dev/tools\n\n#DJANGO_TAR=http://www.djangoproject.com/download/1.0.2/tarball/\nDJANGO_TAR=http://www.djangoproject.com/download/1.1/tarball/\n#DJANGO_FILENAME=Django-1.0.2-final\nDJANGO_FILENAME=Django-1.1\n\n#sudo aptitude install python-django -y\n\n# Version 1.1\n# Check for 'sources' directory. If one does not exist, make one.\n\nif [ ! -d $SOURCES ]\n then\n echo \"Making the $SOURCES directory..\"\n mkdir $SOURCES\nfi\n\necho \"Downloading $DJANGO_FILENAME...\"\ncd $SOURCES\nif [ ! -f $DJANGO_FILENAME.tar.gz ] \n then \n wget $DJANGO_TAR\n echo \"Done.\"\n else\n echo \"...$DJANGO_FILENAME.tar.gz has already been downloaded.\"\nfi\n\necho \"Installing $DJANGO_FILENAME for python2.5 and python2.6...\"\ntar -zxvf $DJANGO_FILENAME.tar.gz\nsudo python2.6 $DJANGO_FILENAME/setup.py install\nsudo python2.5 $DJANGO_FILENAME/setup.py install\necho \"Done.\"\n\necho \"Cleaning up...\"\nsudo rm -fr $DJANGO_FILENAME\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.5407565236091614,
"alphanum_fraction": 0.5462617874145508,
"avg_line_length": 26.602941513061523,
"blob_id": "1479c85b02b7bd9faf2e7d0c8d724071976af11c",
"content_id": "1b4c0175fefddd2ebdd7054273b1d52aaf4517cd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 5631,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 204,
"path": "/src/up-easyb.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n# Title: up-easyb.sh\n# Description: Setup Easyb for behavior-driven design\n# Author: Matthew Norris\n# Reference: http://easyb.org\n#\n################################################################################\n\nverbose=false\n\n#utilDir=\"$HOME/scripts/util\"\nutilDir=\"../util\"\n\n# Location to download source files\ndefDownloadDir=$HOME/sources\ndownloadDir=$defDownloadDir\n\n# Location to install\ndefInstallPrefix=/opt/dev/tools\ninstallPrefix=$defInstallPrefix\n\n# Linux group (used for permissions)\ngroup=developers\n\n# Download files\n\n#http://easyb.googlecode.com/files/easyb-0.9.6.tar.gz\narchiveRootName=\"easyb\"\narchiveVersion=\"0.9.6\"\narchiveExt=\"tar.gz\"\narchiveName=\"${archiveRootName}-${archiveVersion}.${archiveExt}\"\ndownloadSite=\"http://easyb.googlecode.com/files\"\n\n# Default install directory name\ninstallDirName=\"easyb\"\n\n# Friendly names\nfriendlyLinkName=\"easyb\"\n\n################################################################################\n# Functions\n################################################################################\n\nfunction checkUtilDir() {\n\n # Strip whitespace\n utilDir=$(expr \"$1\" : '[[:space:]]*\\(.*\\)[[:space:]]*$')\n \n # Check for null and whitespace\n if [[ -z \"$utilDir\" ]] || [[ ! -n \"$utilDir\" ]]; then\n errorMessage \"No UTIL DIRECTORY specified. Try --help for help.\"\n fi\n \n if [[ ! -d $utilDir ]]; then \n errorMessage \"$utilDir does not exist.\"\n fi\n}\n\nfunction checkDirs() {\n \n # Strip whitespace and remove any trailing slashes\n installPrefix=$(expr \"$1\" : '[[:space:]]*\\(.*\\)[[:space:]]*$' | sed -e 's/\\\\/$//g')\n downloadDir=$(expr \"$2\" : '[[:space:]]*\\(.*\\)[[:space:]]*$' | sed -e 's/\\\\/$//g')\n \n # Check for null and whitespace\n if [[ -z \"$installPrefix\" ]] || [[ ! -n \"$installPrefix\" ]]; then\n warningMessage \"No INSTALL DIRECTORY specified. \u2028Using default: $defInstallPrefix\"\n installPrefix=$defInstallPrefix\n fi\n \n if [[ -z \"$downloadDir\" ]] || [[ ! -n \"$downloadDir\" ]]; then\n warningMessage \"No DOWNLOAD DIRECTORY specified. Using default: $defDownloadDir\"\n downloadDir=$defDownloadDir\n fi\n \n # Do the specified directories exist?\n if [[ ! -d $installPrefix ]]; then \n errorMessage \"$installPrefix is not a directory. Try '$(basename $0) --help' for help.\"\n fi\n \n if [[ ! -d $downloadDir ]]; then \n errorMessage \"$downloadDir is not a directory. Try '$(basename $0) --help' for help.\"\n fi\n}\n\nfunction checkParamStr() {\n # If there are no parameters, throw error\n if [[ \"${#1}\" -eq \"0\" ]]; then\n errorMessage \"$installPrefix is not a directory. Try '$(basename $0) --help' for help.\"\n fi\n}\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\nfunction warningMessage() {\n echo -e \"Warning: $1\"\n}\n\nfunction outputUsage() {\n echo \"up-easyb - Install Easyb\"\n echo \"Usage: $(basename $0) [options...] [INSTALL DIRECTORY] [DOWNLOAD DIR]\"\n echo \"Options:\"\n echo \" -v/--verbose Not yet implemented\"\n echo \" -h/--help Output this message\"\n \n exit 1\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments\nwhile [[ \"$#\" -gt \"0\" ]]; do\n case \"$1\" in \n -v|--verbose)\n verbose=true\n shift 1\n checkParamStr \"$@\"\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n # Unknown option\n errorMessage \"Unknown option $1.\"\n ;;\n *)\n # If we have more than 2 parameters, the usage is incorrect\n if [[ \"$#\" -gt \"2\" ]]; then\n errorMessage \"Too many parameters. \u2028Try '$(basename $0) --help' for help.\"\n else\n checkDirs $1 $2\n shift 2\n fi\n break\n ;;\n esac\ndone\n\n################################################################################\n# Main\n###############################################################################\n\n# Check for the proper utilities\n\nif [[ ! -f \"$utilDir/mmkdir.sh\" ]]; then\n errorMessage \"'mmkdir.sh' not found in '$utilDir'. Please specify the proper directory.\"\nfi\n\n$utilDir/mmkdir.sh $downloadDir\necho\n\n# Download Grails and install it.\n\necho $downloadDir \necho \"$downloadSite/$archiveName\"\n\necho \"Downloading $archiveName...\"\nif [[ ! -f $downloadDir/$archiveName ]]; then\n\twget --directory-prefix=$downloadDir \"$downloadSite/$archiveName\"\n\tif [ $? -eq 0 ]; then\n\t echo \"Done.\"\n\telse\n\t errorMessage \"Download failed with exit status of $?\"\n\tfi\nelse\n echo \"$archiveName has already been downloaded.\"\nfi\necho\n\necho \"Installing $friendlyLinkName to $installPrefix...\"\nsudo mkdir $installPrefix/$installDirName\nsudo tar -zxvf $downloadDir/$archiveName -C $installPrefix/$installDirName\n\nif [ $? -eq 0 ]; then\n echo \"Done.\"\nelse\n errorMessage \"Moving failed with exit status of $?\"\nfi\n\nif [[ ! -n \"$(egrep -i ^$group: /etc/group)\" ]]; then\n echo \"Creating the $group group...\"\n sudo groupadd $group\n echo \"Done.\"\n echo\nfi\n\n# Change permissions so members of the group can run its scripts\n\necho \"Changing group ownership for $installDirName files...\"\nsudo chgrp -R $group $installPrefix/$installDirName\n#sudo chmod g+w hudson.war\necho \"Done.\"\necho\n\nsudo ln -s $installPrefix/$installDirName /usr/local/$installDirName\n\necho \"$friendlyLinkName is installed.\"\n"
},
{
"alpha_fraction": 0.46728041768074036,
"alphanum_fraction": 0.4774819612503052,
"avg_line_length": 30.3984375,
"blob_id": "a18fd1da981d82b2bcfd8587c2628af334b1afe9",
"content_id": "9ca5ccf00f3c246ce7f518170a3db6f7eca24c73",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 4019,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 128,
"path": "/src/install-mysql.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: install-mysql.sh\n# Description: Installs MySQL, tools, and drivers for working with PHP, Ruby, \n# and more. \n# Author: Matt Norris\n# Reference: Slicehost - http://bit.ly/cLQzmW\n# dpkg - http://bit.ly/dvX9un\n# dpkg dependency errors - http://bit.ly/b2L2Ni\n# dpkg dependency error detection explained - http://bit.ly/bf2ChF\n#\n\n################################################################################\n# Default locations \n################################################################################\n\nDOWNLOADS=$HOME/dev/downloads\n\n################################################################################\n# Packages \n################################################################################\n\n# MySQL installer is graphical, so we'll have to babysit it. \nPKGS_MYSQL=\"mysql-server mysql-client libmysqlclient15-dev\"\n\n# Ruby libraries for operating with MySQL. \nPKGS_RUBYLIB=\"libmysql-ruby1.8\"\n\n# A graphical tool for viewing databases\nPKGS_EMMA=\"emma\" \n\n# MySQL Workbench\nPKG_MSWB=\"mysql-workbench-gpl-5.2.29-1ubu1004-amd64.deb\"\nPKG_MSWB_SITE=\"http://dev.mysql.com/get/Downloads/MySQLGUITools/$PKG_MSWB/from/http://mysql.mirrors.hoobly.com/\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n \n # TODO: Add help messages for your options here. \u2028\n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. \nfunction installPackages() {\n\techo \"Installing `basename $0` tools & libraries...\"\n sudo aptitude install $PKGS_MYSQL -y\n sudo aptitude install $PKGS_RUBYLIB -y\n \n cd $DOWNLOADS\n # Get the Workbench package if it isn't yet downloaded. \n if [ ! -e $PKG_MSWB ]; then\n wget $PKG_MSWB_SITE -O $PKG_MSWB\n fi\n \n # Install Workbench, directing any dependency errors into stdout. \n sudo dpkg -i $PKG_MSWB 2>&1\n \t\n # If there were errors, get the generated list of dependencies and install. \n if [ $? -gt 0 ]; then\n \tsudo apt-get -f --force-yes --yes install 2>&1\n \tsudo dpkg -i $PKG_MSWB 2>&1\n fi\n \n echo \"Done.\"\n}\n\n# Removes packages installed and tears down any directories and files created. \nfunction removePackages() {\n\techo \"Removing `basename $0` tools & libraries...\"\n\tsudo dpkg -r mysql-workbench-gpl\n sudo aptitude remove $PKGS_RUBYLIB -y\n sudo aptitude remove $PKGS_MYSQL -y\n echo \"Done.\"\n \n exit 0 # Exit so the default installation doesn't take place again. \n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in\n -r|--remove)\n shift 1\n removePackages\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\ninstallPackages\n"
},
{
"alpha_fraction": 0.6372548937797546,
"alphanum_fraction": 0.6377005577087402,
"avg_line_length": 34.619049072265625,
"blob_id": "d0d343f9e0fe8cd161e9f37c5da393ed291b464b",
"content_id": "726a21f90a11364df61e1ade1d271f728ac6a94c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2244,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 63,
"path": "/src/virtualenv/install_pyccuracy.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nInstalls Pyccuracy in a virtual environment. \n\"\"\"\nimport os\nimport sys\nimport shutil\n\nsys.path.append(os.path.join(os.environ['HOME'], 'dev/modules'))\nfrom poprop import util\n\n__author__ = \"Matthew Norris\" \n\n################################################################################\n#\n# Title: install_pyccuracy.py\n# Author: Matthew Norris\n# Description: Setup Pyccuracy on a virtual environment.\n# Dependencies: setuptools, virtualenv\n# References: http://www.pyccuracy.org/getting_started_1.html\n# http://bit.ly/python-virtualenv\n# http://docs.python.org/library/os.path.html\n#\n################################################################################\n\nif not util.in_virtualenv():\n sys.exit('Script not running in a virtual environment. Exiting.')\n\nREL_CONTRIB_PATH = 'modules/contrib' \nCONTRIB_DIR = os.path.join(util.get_virtualenv(), REL_CONTRIB_PATH)\nDOWNLOADS = os.path.join(os.environ['HOME'], 'dev/downloads')\nINSTALL_DIR_NAME = 'pyccuracy'\n\n# Install dependencies.\nos.system('easy_install lxml') \nos.system('easy_install selenium')\nos.system('easy_install pyoc')\n\nutil.mkdir(CONTRIB_DIR)\nutil.mkdir(DOWNLOADS)\n\n# Try to get Pyccuracy locally first, so we don't have to clone the repo \n# unnecessarily. \ndl_source = os.path.join(DOWNLOADS, INSTALL_DIR_NAME)\nif not os.path.isdir(dl_source):\n os.system('git clone git://github.com/heynemann/pyccuracy.git %s/' \\\n 'pyccuracy' % DOWNLOADS)\n\n# Now that we've cloned the source, copy it to the virtualenv's sources folder, \n# removing any old code first. \u2028\ndst_source = os.path.join(CONTRIB_DIR, INSTALL_DIR_NAME)\nif os.path.isdir(dst_source):\n shutil.rmtree(dst_source)\nshutil.copytree(dl_source, dst_source)\n\n# Create symlinks.\nos.chdir(os.path.join(util.get_virtualenv(), \n 'lib/python%s/site-packages' % util.get_python_version()))\nos.system('ln -s ../../../%s/pyccuracy/pyccuracy' % REL_CONTRIB_PATH)\nos.chdir(os.path.join(util.get_virtualenv(), 'bin'))\nos.system('ln -s ../%s/pyccuracy/pyccuracy/pyccuracy_console.py ' \\\n 'pyccuracy_console' % REL_CONTRIB_PATH)\nos.system('ln -s ../%s/pyccuracy/pyccuracy/pyccuracy_help.py ' \\\n 'pyccuracy_help' % REL_CONTRIB_PATH)\n"
},
{
"alpha_fraction": 0.5988593101501465,
"alphanum_fraction": 0.608364999294281,
"avg_line_length": 26.736841201782227,
"blob_id": "6146d4c62dd6eb95c32223f25e67031f8dbfa6e4",
"content_id": "0487bb4740a532c5aaa27cb454ff0258b9be198f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 526,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 19,
"path": "/src/remove_color_palettes.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nRemoves custom color palettes from OpenOffice.org and Inkscape. \n\"\"\"\nimport os\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\nPALETTE_NAME = 'poprop'\n\nprint 'Removing palettes...'\ntry:\n os.system('sudo rm /usr/share/inkscape/palettes/%s.gpl' % PALETTE_NAME)\n os.remove(os.path.join(os.environ['HOME'], \n '.openoffice.org/3/user/config/%s.soc' % \n PALETTE_NAME))\nexcept OSError, ose:\n print 'Some files not present; skipping...'\nprint 'Done.\\n'"
},
{
"alpha_fraction": 0.5244956612586975,
"alphanum_fraction": 0.5360230803489685,
"avg_line_length": 25.730770111083984,
"blob_id": "41f76fdb09b762d2338788375d8f8ba66db73c87",
"content_id": "c0de55888a17b19ea37600c9724fe49be0b9c8f9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 694,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 26,
"path": "/src/down-webdriver-python.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n# \n# Title: down-webdriver-python.sh\n# Description: Remove WebDriver for use with Python\n# Author: Matthew Norris\n# Reference: \n# \n################################################################################\n\n# TODO: Not working correctly!\n\nINSTALL_DIR_NAME=\"webdriver\" # name not accurate - 0.7\nFRIENDLY_NAME=\"WebDriver\"\n\necho \"Removing $FRIENDLY_NAME...\"\n\nsudo rm -fr /usr/lib/python2.5/site-packages/$INSTALL_DIR_NAME\n\nsudo rm -fr /usr/lib/python2.6/dist-packages/$INSTALL_DIR_NAME\nsudo rm -fr /usr/lib/python2.6/site-packages/$INSTALL_DIR_NAME\n\n#sudo rm /usr/bin/django-admin.py\n\necho \"Done.\""
},
{
"alpha_fraction": 0.5710616707801819,
"alphanum_fraction": 0.5719178318977356,
"avg_line_length": 22.3799991607666,
"blob_id": "dd94ab4b6cfa4b1c418c00336b0b5f1fdabecc2f",
"content_id": "4d5c425923535996f6f5313970ca9f35f3c935a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1168,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 50,
"path": "/src/down-hudson.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n#\n# Title: down-hudson.sh\n# Description: Remove Hudson\n# Author: Matthew Norris\n#\n################################################################################\n\ninstallPrefix=$HOME/dev/tools\nUNINSTALL_DIR=$HOME/uninstall\narchiveName=\"hudson.war\"\n\n# Friendly names\nfriendlyName=\"Hudson\"\nfriendlyLinkName=\"hudson\"\n\n# Check for 'uninstall' directory. If one does not exist, make one.\nif [ ! -d $UNINSTALL_DIR ]\n then\n echo \"Making the $UNINSTALL_DIR directory..\"\n mkdir $UNINSTALL_DIR\n echo \"Done.\"\n echo\nfi\n\n# Archive any preferences before removing.\nif [ -d $HOME/.hudson ]\n then \n cd $HOME\n ARCHIVE_FILE=$UNINSTALL_DIR/hudson.jobs.$RANDOM.bak.tar.gz\n \n echo \"Archiving existing applications to $ARCHIVE_FILE...\"\n tar -pczf $ARCHIVE_FILE .hudson\n \n if [ $? -eq 0 ]; then\n echo \"Done.\"\n echo\n else\n errorMessage \"Moving failed with exit status of $?\"\n fi\nfi\n\n# Remove hudson\necho \"Removing $friendlyName...\"\nsudo rm -fr $HOME/.hudson\nsudo rm /usr/local/$archiveName\nsudo rm $installPrefix/$archiveName\necho \"Done.\""
},
{
"alpha_fraction": 0.7456011772155762,
"alphanum_fraction": 0.7609970569610596,
"avg_line_length": 40.33333206176758,
"blob_id": "dbe06856e74ecfefde5f382f5fa5def4b0989019",
"content_id": "01665b787ebafe4e78f6c4add4c10f67af1555a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1364,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 33,
"path": "/src/up-aptitude.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-aptitude.sh\n# Description: Update aptitude\n# Author: Matthew Norris\n# Reference: http://articles.slicehost.com/2008/11/28/ubuntu-intrepid-setup-page-2\n\n# Now we can update the sources so we have the latest list of software packages.\n\nsudo aptitude update\n# NOTE: If you have used the .bashrc shown above you just need to enter 'update'\n# because the alias will use the entire command. The entire command is included\n# for clarity.\n\n# If Intrepid is a bare bones, we will need to set the system locale.\n#sudo locale-gen en_US.UTF-8\n#sudo /usr/sbin/update-locale LANG=en_US.UTF-8\n\n# Let's see if there are any upgrade options\n\nsudo aptitude safe-upgrade\nsudo aptitude full-upgrade\n\n# Ubuntu Intrepid has some handy meta-packages that include a set of pre-defined\n# programs needed for a single purpose. So instead of installing a dozen\n# different package names, you can install just one meta-package. One such\n# package is called 'build-essential'. \n\nsudo aptitude install build-essential -y\n# NOTE: gcc, make, patch, and other programs are installed. All these are\n# needed for many other programs to install properly.\n\n# Install these XML processing libraries. We will use them for Cucumber.\n# Reference: http://pragmatig.wordpress.com/2008/12/25/getting-started-with-cucumber-on-ubuntu/\nsudo aptitude install libxslt1-dev libxml2-dev -y\n"
},
{
"alpha_fraction": 0.48178136348724365,
"alphanum_fraction": 0.48178136348724365,
"avg_line_length": 22.571428298950195,
"blob_id": "389baea4a4b19f2cb8d6f4a7a776a1e251fabdd4",
"content_id": "38a06b2193a8270636776a706f4029928262e334",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 494,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 21,
"path": "/src/down-easyb.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n#\n# Title: down-easyb.sh\n# Description: Remove Hudson\n# Author: Matthew Norris\n#\n################################################################################\n\ninstallPrefix=/opt/dev/tools\ninstallDirName=\"easyb\"\n\n# Friendly names\nfriendlyName=\"Easyb\"\nfriendlyLinkName=\"easyb\"\n\necho \"Removing $friendlyName...\"\nsudo rm /usr/local/$installDirName\nsudo rm -dr $installPrefix/$installDirName\necho \"Done.\""
},
{
"alpha_fraction": 0.42650601267814636,
"alphanum_fraction": 0.42650601267814636,
"avg_line_length": 28.64285659790039,
"blob_id": "b0a8e22a87cf1d14efaa3883d572debe4b715345",
"content_id": "0cec2bddf231a88efae411fe93f5dd28d78a78f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 415,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 14,
"path": "/src/down-version-control.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n################################################################################\n#\n# Title: down-version-control.sh\n# Description: Remove Git & Subversion\n# Author: Matthew Norris\n#\n################################################################################\n\nsudo aptitude remove subversion -y\n#sudo aptitude remove git-gui -y\nsudo aptitude remove giggle -y\nsudo aptitude remove git-core -y\n"
},
{
"alpha_fraction": 0.7328556776046753,
"alphanum_fraction": 0.7328556776046753,
"avg_line_length": 30.322580337524414,
"blob_id": "b1ef459dbe426db9e8d2e2b9717146670621d29f",
"content_id": "469d7b34f0ec0ab0baaf7c091a74f99eaeb73c80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 977,
"license_type": "no_license",
"max_line_length": 224,
"num_lines": 31,
"path": "/README.md",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Grain\n\n**Grain** is a simplified [SaltStack](http://www.saltstack.com/) for personal configuration management. It is an alternative without the steep learning curve when you need to manage your local utilities in the simplest way. \n\n# Example \n\n gen_install_script.py mysql \n \nWhere *mysql* is the tool you wish to install and uninstall. \n\n## Output \n\n $HOME/install-mysql.sh\n \nA bash script skeleton; fill in the `installPackages` and `removePackages` with the commands to install and remove the desired packages for the tool (in this case, mysql). \n\nSee [install-mysql.sh](https://github.com/mattnorris/grain/blob/master/src/install-mysql.sh) and other scripts for examples and details. \n\n# Options \n\n# -d \n\nSpecifies the directory to save to or the filename to save as. \n\n gen_install_script.py mysql -d $HOME/homesubdir\n \n# -f\n\nBy default, Grain warns you of duplicate filenames. This forces a file overwrite. \n\n gen_install_script.py mysql -f \n \n"
},
{
"alpha_fraction": 0.7305936217308044,
"alphanum_fraction": 0.7397260069847107,
"avg_line_length": 35.5,
"blob_id": "ffe1402a669b75dd3394c362de33374867d0df63",
"content_id": "e8305a4d4ba08e1523a22f171d792523a5e6801c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 219,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 6,
"path": "/src/down-postgresql.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-postgresql.sh\n# Description: Tear down PostgreSQL and its Ruby connector\n# Author: Matthew Norris\n\n# sudo gem uninstall postgres\nsudo aptitude remove postgresql libpq-dev pgadmin3 python-psycopg2 -y\n"
},
{
"alpha_fraction": 0.5295536518096924,
"alphanum_fraction": 0.5657418370246887,
"avg_line_length": 38.47618865966797,
"blob_id": "6c57fe3ccd672d2ce126cfebb04e2efdbe7344f5",
"content_id": "23c7fad2dfc2015ad1992429015183c0f8204224",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 829,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 21,
"path": "/src/up-postgresql.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-postgresql.sh\n# Description: Setup PostgreSQL and its Ruby connector\n# Author: Matthew Norris\n# Reference: http://nachbar.name/blog/2008/11/28/rails-and-postgresql-on-ubuntu-hardy-804-lts/\n# https://help.ubuntu.com/community/PostgreSQL\n# http://antoniocangiano.com/2007/12/26/installing-django-with-postgresql-on-ubuntu/\n\nsudo aptitude install postgresql libpq-dev pgadmin3 python-psycopg2 -y\n# sudo gem install postgres\n\n# NOTE: pgadmin3 is a GUI for PostgreSQL\n\n################################################################################\n# Test your Ruby installation with these commands\n################################################################################\n\n# $irb\n# irb(main):005:0> require 'rubygems'\n# => true\n# irb(main):006:0> require 'postgres'\n# => true\n"
},
{
"alpha_fraction": 0.7417491674423218,
"alphanum_fraction": 0.7731022834777832,
"avg_line_length": 40.7931022644043,
"blob_id": "cf1bbab5d82453ad12b3f571e4386abae501e7e9",
"content_id": "af9e080ff9c1612a721a60a22ebd65ada844b79c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1212,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 29,
"path": "/src/up-flash-64bit.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Script created by\n# Romeo-Adrian Cioaba [email protected]\n\necho \"Stopping any Firefox that might be running\"\nsudo killall -9 firefox\n\necho \"Removing any other flash plugin previously installed:\"\nsudo apt-get remove -y --purge flashplugin-nonfree gnash gnash-common mozilla-plugin-gnash swfdec-mozilla libflashsupport nspluginwrapper\nsudo rm -f /usr/lib/mozilla/plugins/*flash*\nsudo rm -f ~/.mozilla/plugins/*flash*\nsudo rm -f /usr/lib/firefox/plugins/*flash*\nsudo rm -f /usr/lib/firefox-addons/plugins/*flash*\nsudo rm -rfd /usr/lib/nspluginwrapper\n\n\necho \"Installing Flash Player 10\"\ncd ~\nwget http://download.macromedia.com/pub/labs/flashplayer10/libflashplayer-10.0.22.87.linux-x86_64.so.tar.gz\ntar zxvf libflashplayer-10.0.22.87.linux-x86_64.so.tar.gz\nsudo cp libflashplayer.so /usr/lib/mozilla/plugins/ \n\necho \"Linking the libraries so Firefox and apps depending on XULRunner (vuze, liferea, rsswol) can find it.\"\nsudo ln -sf /usr/lib/mozilla/plugins/libflashplayer.so /usr/lib/firefox-addons/plugins/\nsudo ln -sf /usr/lib/mozilla/plugins/libflashplayer.so /usr/lib/xulrunner-addons/plugins/\n\n# now doing some cleaning up:\nsudo rm -rf libflashplayer.so \nsudo rm -rf libflashplayer-10.0.22.87.linux-x86_64.so.tar.gz\n"
},
{
"alpha_fraction": 0.6339977979660034,
"alphanum_fraction": 0.6400886178016663,
"avg_line_length": 25.573530197143555,
"blob_id": "315e223b818ebf80913a3f8b0182663d2305dd4a",
"content_id": "ad3689ae638e80853eea5916d480a3ec6f49f408",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1806,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 68,
"path": "/src/manage_selenium.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nInstalls or removes Selenium Remote Control. \n\n@see: http://seleniumhq.org\n\"\"\"\nimport os\nimport sys\nimport shutil\n\nsys.path.append(os.path.join(os.environ['HOME'], 'dev/modules'))\nfrom poprop import util\n\nVERSION = \"1.0.3\" # Change for newer versions\n\nINSTALL_NAME = 'selenium-remote-control-%s' % VERSION \nFILE = \"http://selenium.googlecode.com/files/%s.zip\" % INSTALL_NAME\nDOWNLOAD_DIR = os.path.join(os.environ['HOME'], 'dev/downloads')\nINSTALL_DIR = os.path.join(os.environ['HOME'], 'dev/tools')\n\ndef install():\n \"\"\"\n Downloads (if necessary) and installs Selenium.\n \"\"\"\n util.mkdir(DOWNLOAD_DIR)\n util.mkdir(INSTALL_DIR)\n\n url = util.URL(FILE)\n dl_file = os.path.join(DOWNLOAD_DIR, url.basename)\n if not os.path.isfile(dl_file):\n print 'Downloading %s...' % INSTALL_NAME\n url.download(dstdir=DOWNLOAD_DIR)\n print 'Done.\\n'\n\n print 'Installing %s...' % INSTALL_NAME\n arc = util.Archiver(dl_file)\n arc.extract(dstdir=INSTALL_DIR, newrootname=INSTALL_NAME)\n print 'Done.\\n'\n \n# print 'Type Unable to access jarfile /opt/dev/tools/selenium-remote-control-1.0.1/selenium-server-1.0.1/selenium-server.jar'\n print 'Type \\'%s/%s/selenium-server-%s/selenium-server.jar\\' to use.' % \\\n (INSTALL_DIR, INSTALL_NAME, VERSION)\n\n \ndef remove():\n \"\"\"\n Removes Selenium. \n \"\"\"\n print 'Removing %s...' % INSTALL_NAME\n shutil.rmtree(os.path.join(INSTALL_DIR, INSTALL_NAME))\n print 'Done.\\n'\n\ndef exit():\n \"\"\"\n Exits script with appropriate message. \n \"\"\"\n print 'USAGE: python manage_selenium.py [install|remove]'\n sys.exit()\n\n# Run the script. \ntry:\n if sys.argv[1] == 'install':\n install()\n elif sys.argv[1] == 'remove':\n remove()\n else:\n exit()\nexcept IndexError:\n exit()"
},
{
"alpha_fraction": 0.7164948582649231,
"alphanum_fraction": 0.7268041372299194,
"avg_line_length": 31.33333396911621,
"blob_id": "7dded27098ec04a0420ccaf6e500539438714e4b",
"content_id": "0b9758e3b92a36716fb36e2dbc34f9abecc72d53",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 194,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 6,
"path": "/src/down-aptitude.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-aptitude.sh\n# Description: Remove extra aptitude libraries\n# Author: Matthew Norris\n\nsudo aptitude remove libxslt1-dev libxml2-dev -y\nsudo aptitude remove build-essential -y\n"
},
{
"alpha_fraction": 0.6504064798355103,
"alphanum_fraction": 0.6585366129875183,
"avg_line_length": 29.024391174316406,
"blob_id": "81ac6de99711e2dcfec4ac6ffc4c9fd45f11676f",
"content_id": "a7c0a92730d8eb6fba4ba22569b3c99888bbae33",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1230,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 41,
"path": "/src/install_color_palettes.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nInstalls color palettes for OpenOffice.org and Inkscape. \n\"\"\"\nimport os\nimport tempfile\n\nfrom poprop.util import URL\nfrom poprop.brand.color import Palette\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\nREMOTE_FILE = 'http://spreadsheets.google.com/pub?key=ta0-PKPDYY90bjEIAh8tmOg' \\\n '&single=true&gid=1&output=txt'\nPALETTE_NAME = 'poprop'\n\n# Download TSV file\nprint 'Downloading palette source...'\nurl = URL(REMOTE_FILE)\npalpath = url.download(newname='%s.tsv' % PALETTE_NAME, \n dstdir=tempfile.mkdtemp())\nprint 'Done.\\n'\n\n# Create the palette files. \npal = Palette()\npal.read(palpath)\n\noo_palette = os.path.join(os.environ['HOME'], \n '.openoffice.org/3/user/config/%s.soc' % PALETTE_NAME)\nprint 'Installing OpenOffice.org palette...'\npal.write_openoffice(oo_palette)\nprint 'Done.\\n'\n\n# Because Inkscape requires root permission, we cannot write directly. \nprint 'Installing Inkscape palette...'\ntmpfile = os.path.join(tempfile.mkdtemp(), PALETTE_NAME)\npal.write_inkscape(tmpfile)\nos.system('sudo mv %s %s' % (tmpfile, \n '/usr/share/inkscape/palettes/%s.gpl' % \n PALETTE_NAME))\nprint 'Done.\\n'"
},
{
"alpha_fraction": 0.46757134795188904,
"alphanum_fraction": 0.4801436960697174,
"avg_line_length": 24.963729858398438,
"blob_id": "f899c95bc8b09b6212250e3e9e3a0e4e0098c560",
"content_id": "dd2dd491c2fb5ad0fb2193592de334dbeb474c42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 5011,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 193,
"path": "/src/install-appengine.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: install-appengine.sh\n# Description: Installs Google App Engine to the user's local dev setup. \n# Author: matthew\n# Reference: Not used, but helpful: http://stackoverflow.com/questions/229551/string-contains-in-bash/229606#229606\n#\n\n################################################################################\n# Locations \n################################################################################\n\nDEF_VERSION=1.9.2\nSYMLINK=google_appengine\n\n# Local directories\nDOWNLOADS=$HOME/dev/downloads\nSDKS=$HOME/dev/sdks\n\n# Google's remote directory and files \nGOOGLE_SITE_DIR='https://commondatastorage.googleapis.com/appengine-sdks/featured'\n\nPDIRPREFIX=\"google_appengine_\"\nJDIRPREFIX=\"appengine-java-sdk-\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` LANGUAGE [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n echo \" -r/--remove Removes the given language's packages\"\n \n # TODO: Add help messages for your options here. \n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. \nfunction installPackages() { \n echo \"Installing $1 $2...\"\n\t\n\t# If we've been given a version, use it. \n\tif [ ! 
-n \"$2\" ]; then \n\t\tVERSION=$DEF_VERSION\n\tfi\n\t\n\tPDIR=\"$PDIRPREFIX$VERSION\"\n\tPPKG=\"$PDIR.zip\"\n\tJDIR=\"$JDIRPREFIX$VERSION\"\n\tJPKG=\"$JDIR.zip\"\n \n if [ \"$1\" = \"python\" ]; then\n \tPKG=$PPKG\n elif [ \"$1\" = \"java\" ]; then\n PKG=$JPKG\n fi \n \n # If the package hasn't been downloaded, download it.\n cd $DOWNLOADS\n if [ ! -e $PKG ]; then\n \twget $GOOGLE_SITE_DIR/$PKG\n fi\n \n unzip $PKG -d $SDKS\n \n if [ $1 = \"python\" ]; then \n \tcd $SDKS\n \tmv $SYMLINK $PDIR \n \tln -s $PDIR $SYMLINK\n \t\n \t# TODO: In case of path errors, symlink 'django' to 'django_0_96' or \n \t# 'django_1_2'. This is how it's referred to now in App Engine. \n fi\n \n echo \"Done.\"\n exit 0\n}\n\n# Removes packages installed and tears down any directories and files created. \nfunction removePackages() {\n\t# TODO: Check that $1 is not empty string.\n\t\n\tif [ ! -n \"$1\" ]; then\n\t\terrorMessage $1\n\tfi\n\t\n # If we've been given a version, use it. \n if [ ! -n \"$2\" ]; then \n VERSION=$DEF_VERSION\n fi\n\t\n echo \"Removing `basename $0` tools & libraries...\" \n \n if [ \"$1\" = \"python\" ]; then\n export PKG=\"$PDIRPREFIX$VERSION\"\n elif [ \"$1\" = \"java\" ]; then\n export PKG=\"$JDIRPREFIX$VERSION\"\n fi \n \n #echo $PKG\n\n # Remove directory. \n cd $SDKS\n rm -fr $PKG\n\n # Remove symlink. \n if [ \"$1\" = \"python\" ]; then\n rm $SYMLINK\n fi\n\n echo \"Done.\" \n \n exit 0\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Check for a language option, which is required. \nif [ $# -lt 1 ]; then\n errorMessage $#\nfi\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n\t# Convert to lowercase. \n\tARG=`echo \"$1\" | tr '[A-Z]' '[a-z]'`\n case $ARG in\n \n # TODO: Create some script options. 
\n # EXAMPLE: Uncomment below to assign a 'destination directory', DST_DIR, \n # to the arg given after a '-d' or '--dst' on the command line.\n # \n # -d|--dst)\n # shift 1 # eat the '-d' or '--dst'\n # DST_DIR=\"$1\" # assign the next arg as this variable \n # shift 1 # eat the arg you just assigned\n # ;;\n java|python)\n GAE_LANG=$ARG\n shift 1\n ;;\n -v|--version)\n shift 1\n VERSION=\"$1\"\n shift 1\n ;;\n -r|--remove)\n shift 1\n removePackages $GAE_LANG\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\necho \"Executing `basename $0`...\"\n\n# TODO: Always install afterwards because this is hit each time. \ninstallPackages $GAE_LANG $VERSION\n\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.6648351550102234,
"alphanum_fraction": 0.692307710647583,
"avg_line_length": 29.33333396911621,
"blob_id": "1632bed00a4acafcc130d1450a3c56bc16d17872",
"content_id": "9162714d405d5e443c30567138af383573d07497",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 182,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 6,
"path": "/src/down-java.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-java.sh\n# Description: Tear down Java\n# Author: Matthew Norris\n\nsudo aptitude remove ia32-sun-java6-bin -y\nsudo aptitude remove sun-java6-jdk sun-java6-plugin -y\n"
},
{
"alpha_fraction": 0.5044404864311218,
"alphanum_fraction": 0.509769082069397,
"avg_line_length": 23.478260040283203,
"blob_id": "5e01b0beabb239dca7d7a6567785503ffedbff71",
"content_id": "202bb35d0db8e0f0e2d41578083c14f834062960",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 563,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 23,
"path": "/src/down-gwt.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n################################################################################\n#\n# Title: down-gwt.sh\n# Description: Remove Google Web Toolkit\n# Author: Matthew Norris\n#\n################################################################################\n\n# Name to display\nfriendlyName=\"Google Web Toolkit\"\n\n# Installation location\ninstallPrefix=/opt/dev/sdks\n\n# Default install directory name\ninstallDirName=\"gwt-linux-1.7.1\"\n\necho \"Removing $friendlyName...\"\n#sudo rm /usr/local/bin/mvn\nsudo rm -dr $installPrefix/$installDirName\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.7051020264625549,
"alphanum_fraction": 0.7091836929321289,
"avg_line_length": 29.625,
"blob_id": "4f1f39a2a994b93f6ac3fe288186ff7df6805952",
"content_id": "6366930b57303476fda1f5fc70252cf07bc50a60",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 980,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 32,
"path": "/src/virtualenv/setup_project_structure.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nSets up the project structure for the virtual environment. \n\n@see: U{https://sites.google.com/a/poprop.com/wiki/build/python}\n\"\"\"\nimport os\nimport sys\n\nsys.path.append(os.path.join(os.environ['HOME'], 'dev/modules'))\nfrom poprop import util\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\nif not util.in_virtualenv():\n sys.exit('Script not running in a virtual environment. Exiting.')\n\nprint 'Generating folder structure...'\n\nproject_dir = util.mkdir(os.path.join(util.get_virtualenv(), 'project'))\nutil.mkdir(os.path.join(project_dir, 'docs'))\nutil.mkdir(os.path.join(project_dir, 'app'))\nutil.mkdir(os.path.join(project_dir, 'scripts'))\n\ntests_dir = util.mkdir(os.path.join(project_dir, 'tests'))\n#util.mkdir(os.path.join(tests_dir, 'mocks'))\n#util.mkdir(os.path.join(tests_dir, 'unit'))\n#util.mkdir(os.path.join(tests_dir, 'fixtures'))\n#util.mkdir(os.path.join(tests_dir, 'functional'))\n#util.mkdir(os.path.join(tests_dir, 'acceptance'))\n\nprint 'Done.'\n"
},
{
"alpha_fraction": 0.6937224864959717,
"alphanum_fraction": 0.6985206007957458,
"avg_line_length": 32.783782958984375,
"blob_id": "40a03e4a697d47e32c3c1a8f2909419be5ab6044",
"content_id": "beac1136729d683a22140cdac76200568ff93a1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2501,
"license_type": "no_license",
"max_line_length": 221,
"num_lines": 74,
"path": "/src/virtualenv/postmkvirtualenv",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: postmkvirtualenv\n# Description: This hook is run after a *new* virtualenv is activated. \n# It resides in the WORKON_HOME directory, and should be \n# copied there. \n# Author: Matthew Norris\n#\n\nSITE_PKGS=\"lib/python2.5/site-packages\" # relative path to site pacakges for easy_install\nSEP=\"################################################################################\" \n\ncdvirtualenv\n\n# TODO: Move this \"best practice\" directory structure to another script. \n\necho \"Creating the project's directory structure and files...\"\n\n# The \"project\" directory is the heart of the app. This is where all \n# source code, project documents, and scripts will go. \nmkdir -p project/src\ncd project\n\n# A \"best practice\" .gitignore file that ignores tmp files, \n# build artifacts, etc.\necho -e \"*~\\ntmp*\\n*.tmp\\n*.bak\\n*.pyc\\n\" > .gitignore\necho -e \"# Build artifacts\\n$SEP\" >> .gitignore\necho -e \"nosetests.xml\\ncoverage.xml\\n.coverage\\n*.cover\" >> .gitignore\necho -e \".figleaf\\nreport.html\\npyccuracy_report.html\\n\" >> .gitignore \necho -e \"# Sass artifacts\\n$SEP\" >> .gitignore\necho -e \".sass-cache/\\n\" >> .gitignore\necho -e \"# System artifacts\\n$SEP\" >> .gitignore\necho -e \"PIL.pth\\n\" >> .gitignore \n\n# Create a README file for the project. \necho -e \"This directory contains the release of this project.\\n\" > $README\n\n# Create a directory for project documentation. \nmkdir -p docs/api\necho -e \"Documentation generated by Sphinx or Epydoc should go here.\\n\\n\" > docs/api/README\necho \"DO NOT DELETE. Empty directories are not committed to version control. This README file servers as a placeholder so that your CI tool (e.g., Jenkins) will commit this directory to its repository.\" >> docs/api/README\n\n# Save continuous integration files here (e.g., Jenkin's config.xml). \nmkdir -p scripts/ci \n\n# Any configuration of paths, etc. 
\nmkdir config\n# Generate a path to use for any \"module not found\" PIL errors. \n# http://stackoverflow.com/questions/2813742\necho \"`pwd`/$SITE_PKGS\" > config/PIL.pth\n\n# Create a directory for tests. \nmkdir -p test/unit\ncd test\nmkdir mocks\nmkdir fixtures\nmkdir functional\nmkdir acceptance\n\necho \"Done.\"\n# End directory structure\n\necho \"Installing widely-used packages...\"\neasy_install lxml\neasy_install http://dist.repoze.org/PIL-1.1.6.tar.gz\n# For testing... \neasy_install nose\neasy_install coverage\n# For documentation...\neasy_install pylint \necho \"Done.\"\n\n# Change to the project's source directory. \ncdvirtualenv\ncd project/src\n\n"
},
{
"alpha_fraction": 0.49789872765541077,
"alphanum_fraction": 0.5115069150924683,
"avg_line_length": 32.986392974853516,
"blob_id": "2ba3ddcb9c98629e4ae1ebd0c43737a02082e54c",
"content_id": "fe4525ea38687b3e1a64a0faa431ed24c25cb388",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 4997,
"license_type": "no_license",
"max_line_length": 225,
"num_lines": 147,
"path": "/src/gen-project-dirs.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: gen-project-dirs.sh\n# Description: Generates a directory structure for projects, modeled after \n# observations from virtualenv folders and others. \n# Author: Matthew Norris\n# Reference: Rationale - http://docs.wraithmonster.com/directory-structure\n# http://timmurphy.org/2010/05/19/checking-for-empty-string-in-bash/\n# http://stackoverflow.com/questions/229551/string-contains-in-bash\n# http://stackoverflow.com/a/5701853/154065\n# http://linuxcommand.org/wss0110.php\n# http://www.thegeekstuff.com/2010/06/bash-array-tutorial/\n# http://www.thegeekstuff.com/2010/07/bash-string-manipulation/\n#\n\n################################################################################\n# File & directory locations \n################################################################################\n\nDST_DIR=.\nREADME=\"README.md\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n echo \" -n/--name Specify the project name\"\n echo \" -d/--dst Specify a project directory\"\n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Generates the appropriate directories. \nfunction genDirs() {\n\n cd $DST_DIR\n\n # If a project name was given, create that parent directory first. 
\n if [ -n \"$PROJ_NAME\" ]; then\n mkdir $PROJ_NAME\n cd $PROJ_NAME\n fi\n \n echo \"Creating the project's directory structure and files...\"\n\n # The \"project\" directory is the heart of the app. This is where all \n # source code, project documents, and scripts will go. \n mkdir -p project/src\n cd project\n\n # A \"best practice\" .gitignore file that ignores tmp files, \n # build artifacts, etc.\n echo -e \"*~\\ntmp*\\n*.tmp\\n*.bak\\n*.pyc\\n\" > .gitignore\n echo -e \"# Build artifacts\\n$SEP\" >> .gitignore\n echo -e \"nosetests.xml\\ncoverage.xml\\n.coverage\\n*.cover\" >> .gitignore\n echo -e \".figleaf\\nreport.html\\npyccuracy_report.html\\n\" >> .gitignore \n echo -e \"# Sass artifacts\\n$SEP\" >> .gitignore\n echo -e \".sass-cache/\\n\" >> .gitignore\n echo -e \"# System artifacts\\n$SEP\" >> .gitignore\n echo -e \"PIL.pth\\n\" >> .gitignore \n \n # Create a README file for the project. \n echo -e \"This directory contains the release of this project.\\n\" > $README\n\n # Create a directory for project documentation. \n mkdir -p docs/api\n echo -e \"Documentation generated by Sphinx or Epydoc should go here.\\n\\n\" > docs/api/README\n echo \"DO NOT DELETE. Empty directories are not committed to version control. This README file servers as a placeholder so that your CI tool (e.g., Jenkins) will commit this directory to its repository.\" >> docs/api/README\n\n # Save continuous integration files here (e.g., Jenkin's config.xml). \n mkdir -p scripts/ci \n\n # Any configuration of paths, etc. \n mkdir config\n # Generate a path to use for any \"module not found\" PIL errors. \n # http://stackoverflow.com/questions/2813742\n echo \"`pwd`/$SITE_PKGS\" > config/PIL.pth\n\n # Create a directory for tests. 
\n mkdir -p test/unit\n cd test\n mkdir mocks\n mkdir fixtures\n mkdir functional\n mkdir acceptance\n\n echo \"Done.\"\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in \n -d|--dst)\n shift 1 # eat the '-d' or '--dst'\n DST_DIR=\"$1\" # assign the next arg as this variable \n shift 1 # eat the arg you just assigned\n ;;\n -n|--name)\n shift 1\n PROJ_NAME=\"$1\"\n shift 1\n ;;\n -r|--remove)\n shift 1\n removePackages\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\ngenDirs\n\n"
},
{
"alpha_fraction": 0.48338109254837036,
"alphanum_fraction": 0.4888252019882202,
"avg_line_length": 28.829059600830078,
"blob_id": "9e8203b8fcc41253ba0804382e845161a23b8693",
"content_id": "5d5cbf3ac089eaadb8fa776e2c5bde96a386bdbf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 3490,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 117,
"path": "/src/install-postfix.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: install-postfix.sh\n# Description: Installs postfix mail server and configures it to use \n# Optimum Online as a relay (required by the ISP). \n# Author: matthew\n# Reference: TODO: Record any references, like URLs, here. \n#\n\nCONFIG_FILE=\"/etc/postfix/main.cf\"\nRELAYHOST=\"mail.optonline.net\"\n\nBACKUP_DIR=\"$HOME/dev/backup\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n \n # TODO: Add help messages for your options here. \n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. \nfunction installPackages() { \n echo \"Installing `basename $0` tools & libraries...\"\n sudo aptitude install postfix -y\n echo \"Done.\"\n echo\n echo \"Backing up current configuration to $CONFIG_FILE.bak...\"\n sudo cp $CONFIG_FILE \"$BACKUP_DIR/main.cf.bak\"\n echo \"Done.\"\n echo\n echo \"Configuring mail server...\"\n sudo sed -e \"s:relayhost = :relayhost = $RELAYHOST:\" < $CONFIG_FILE | sudo tee $CONFIG_FILE\n echo \"Done.\"\n echo \"Reloading mail server...\"\n sudo /etc/init.d/postfix reload\n echo \"Done.\"\n echo\n \n exit 0\n}\n\n# Removes packages installed and tears down any directories and files created. 
\nfunction removePackages() {\n\t#echo \"Reverting configuration to saved $CONFIG_FILE.bak...\"\n #sudo cp \"$BACKUP_DIR/main.cf.bak\" $CONFIG_FILE\n #echo \"Done.\"\n\t#echo\n echo \"Removing `basename $0` tools & libraries...\" \n sudo aptitude purge postfix -y # 'purge' removes all config files too\n echo \"Done.\"\n echo\n \n exit 0 \n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in\n \n # TODO: Create some script options. \n # EXAMPLE: Uncomment below to assign a 'destination directory', DST_DIR, \n # to the arg given after a '-d' or '--dst' on the command line.\n # \n # -d|--dst)\n # shift 1 # eat the '-d' or '--dst'\n # DST_DIR=\"$1\" # assign the next arg as this variable \n # shift 1 # eat the arg you just assigned\n # ;;\n -r|--remove)\n shift 1\n removePackages\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\ninstallPackages\n"
},
{
"alpha_fraction": 0.5406583547592163,
"alphanum_fraction": 0.5485066771507263,
"avg_line_length": 31.30281639099121,
"blob_id": "08a6212acf4cd515491e44a237e742d20c4d043b",
"content_id": "0a310e84a86810ac55bb580a302a81c2e33028af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 4587,
"license_type": "no_license",
"max_line_length": 222,
"num_lines": 142,
"path": "/docs/create-python-project.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: create-python-project.sh\n# Description: Creates a new Python web project using virtualenv. \n# Author: matthew\n# Reference: http://pypi.python.org/pypi/virtualenv\n# http://forums.fedoraforum.org/showthread.php?t=225470\n#\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n \n # TODO: Add help messages for your options here. \n \n exit 1\n}\n\n################################################################################\n# Post-hook function\n################################################################################\n\nfunction postmkvirtualenv() {\n\tSITE_PKGS=\"lib/python2.5/site-packages\" # relative path to site pacakges for easy_install\n\tSEP=\"################################################################################\"\n\t\n\tcdvirtualenv\n\n\techo \"Creating the project's directory structure...\"\n\t\n\t# The \"project\" directory is the heart of the app. This is where all \n\t# source code, project documents, and scripts will go. \n\tmkdir -p project/src\n\tcd project\n\t\n\t# A \"best practice\" .gitignore file that ignores tmp files, \n\t# build artifacts, etc. 
\n\techo -e \"*~\\ntmp*\\n*.tmp\\n*.bak\\n*.pyc\\n\\n\" > .gitgignore\n\techo -e \"# Build artifacts\\n$SEP\\n\" >> .gitignore\n\techo -e \"nosetests.xml\\ncoverage.xml\\n.coverage\\n*.cover\\n\" >> .gitignore\n\techo -e \".figleaf\\nreport.html\\npyccuracy_report.html\\n\\n\" >> .gitignore \n\techo -e \"# Sass artifacts\\n$SEP\\n\" >> .gitignore\n\techo -e \".sass-cache/\\n\\n\" >> .gitignore\n\techo -e \"# System artifacts\\n$SEP\\n\" >> .gitignore\n\techo -e \"PIL.pth\\n\" >> .gitignore \n\t\n\t# Create a directory for project documentation. \n\tmkdir -p docs/api\n\techo -e \"Documentation generated by Sphinx or Epydoc should go here.\\n\\n\" > docs/api/README\n\techo \"DO NOT DELETE. Empty directories are not committed to version control. This README file servers as a placeholder so that your CI tool (e.g., Jenkins) will commit this directory to its repository.\" >> docs/api/README\n\t\n\t# Save continuous integration files here (e.g., Jenkin's config.xml). \n\tmkdir -p scripts/ci \n\t\n\t# Any configuration of paths, etc. \n\tmkdir config\n\t# Generate a path to use for any \"module not found\" PIL errors. \n\t# http://stackoverflow.com/questions/2813742\n\techo \"Generating PIL.pth...\"\n\techo \"`pwd`/$SITE_PKGS\" > config/PIL.pth\n\t\n\t# Create a directory for tests. \n\tmkdir -p test/unit\n\tcd test\n\tmkdir mocks\n\tmkdir fixtures\n\tmkdir functional\n\tmkdir acceptance\n\t\n\techo \"Done.\"\n\texit 1\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Check for a language option, which is required. \nif [ $# -lt 1 ]; then\n errorMessage \"No project name given\"\nelse\n PROJECT_NAME=\"$1\"\n shift\nfi\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in\n \n # TODO: Create some script options. 
\n # EXAMPLE: Uncomment below to assign a 'destination directory', DST_DIR, \n # to the arg given after a '-d' or '--dst' on the command line.\n # \n # -d|--dst)\n # shift 1 # eat the '-d' or '--dst'\n # DST_DIR=\"$1\" # assign the next arg as this variable \n # shift 1 # eat the arg you just assigned\n # ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\necho \"This script will not work. Use its contents for postmkvirtualenv instead.\"\nexit 1\n\necho \"Creating the environment for $PROJECT_NAME...\"\n# Make the virtual environment. \n# TODO: This does NOT work: \"command not found: mkvirtualenv\"\nmkvirtualenv --no-site-packages -p python2.5 \"$PROJECT_NAME\"\necho $?\nexit 3\n\npostmkvirtualenv\n"
},
{
"alpha_fraction": 0.4713286757469177,
"alphanum_fraction": 0.4786013960838318,
"avg_line_length": 27.583999633789062,
"blob_id": "a9ff4aa31e59775944694cb7b3e43ff860bd54f9",
"content_id": "91ffacb921781e8e835032ad4fb364ed0153e9df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 3575,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 125,
"path": "/src/install-wordpress.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: install-wordpress.sh\n# Description: Downloads and installs WordPress. \n# Required: MySQL, PHP, Apache\n# Author: matthew\n# Reference: http://wordpress.org/download/release-archive/\n# http://premium.wpmudev.org/project/qa-wordpress-questions-and-answers-plugin/installation/\n#\n# http://www.squidoo.com/wordpress-not-found-error-fix\n#\n\nDOWNLOADS=$HOME/dev/downloads\n\nWP_VERSION=3.1.3 # works with Q&A plugin\nWP_PKG=\"wordpress-$WP_VERSION.zip\"\nWP_SITE=\"http://wordpress.org/$WP_PKG\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n \n # TODO: Add help messages for your options here. \n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. \nfunction installPackages() {\n # TODO: Write a setup function. \n echo \"Installing `basename $0` tools & libraries...\"\n \n # Get latest Wordpress. \n #wget http://wordpress.org/latest.tar.gz\n #tar -xzvf latest.tar.gz \n \n cd $DOWNLOADS\n # Get the zip file if it isn't yet downloaded. \n if [ ! -e $WP_PKG ]; then\n \twget $WP_SITE -O $WP_PKG\n fi\n\n echo \"Done.\"\n \n exit 0\n}\n\n# Removes packages installed and tears down any directories and files created. 
\nfunction removePackages() {\n # TODO: Write a tear down function.\n #echo \"Removing `basename $0` tools & libraries...\" \n echo \"Function not implemented! Nothing was done!\"\n #echo \"Done.\" \n \n exit 0\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in\n \n # TODO: Create some script options. \n # EXAMPLE: Uncomment below to assign a 'destination directory', DST_DIR, \n # to the arg given after a '-d' or '--dst' on the command line.\n # \n # -d|--dst)\n # shift 1 # eat the '-d' or '--dst'\n # DST_DIR=\"$1\" # assign the next arg as this variable \n # shift 1 # eat the arg you just assigned\n # ;;\n -d|--dst)\n shift 1\n DST_DIR=\"$1\"\n shift 1\n ;;\n -r|--remove)\n shift 1\n removePackages\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\n# TODO: Write the main script here.\n\necho \"Executing `basename $0`...\"\ninstallPackages\necho \"Done.\"\n \n"
},
{
"alpha_fraction": 0.5524039268493652,
"alphanum_fraction": 0.5609133243560791,
"avg_line_length": 26.01915740966797,
"blob_id": "7c04183521837e87912c9fe33cd22d9fd288e861",
"content_id": "000e6638a8b744da616e27f6a1d5dc24b3c21740",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 7051,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 261,
"path": "/src/up-wdk.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n# Title: up-wdk.sh\n# Description: Setup Yahoo TV Widget Development Kit\n# Author: Matthew Norris\n# Reference: http://connectedtv.yahoo.com/developer/wdk/download\n# http://ubuntuforums.org/showpost.php?p=7079256&postcount=18\n# http://www.mathlinks.ro/viewtopic.php?t=272131\n#\n################################################################################\n\nverbose=false\n\n#utilDir=\"$HOME/scripts/util\"\nutilDir=\"../util\"\n\n# Location to download source files\ndefDownloadDir=$HOME/sources\ndlDir=$defDownloadDir\n\n# Location to install\ndefInstallPrefix=/opt/dev/sdks\n#defInstallPrefix=$HOME/dev/sdks\ninstallPrefix=$defInstallPrefix\n\n# Linux group (used for permissions)\ngroup=developers\n\n# Download files\n\ndlPrefix=\"http://connectedtv.yahoo.com/developer/wdk/download\"\ndlFileName=\"wdk\"\ndlFileExt=\"zip\"\n\n# Friendly name for messages\nfriendlyName=\"Widget Development Kit\"\n\n\n# Simulator global widget directory\nwidgetDir=\"/devwidgets\"\n\n# Package name\npkgPrefix=\"ywe-wdk\"\npkgVersion=\"0.9.7.6\"\npkgArch=\"i386\"\npkgExt=\"deb\"\n\npkgName=\"${pkgPrefix}_${pkgVersion}_${pkgArch}\"\n\n# Default install directory name (notice the subtle difference)\ninstallDirName=\"${pkgPrefix}-${pkgVersion}-${pkgArch}\"\n\n################################################################################\n# Functions\n################################################################################\n\nfunction checkUtilDir() {\n\n # Strip whitespace\n utilDir=$(expr \"$1\" : '[[:space:]]*\\(.*\\)[[:space:]]*$')\n \n # Check for null and whitespace\n if [[ -z \"$utilDir\" ]] || [[ ! -n \"$utilDir\" ]]; then\n errorMessage \"No UTIL DIRECTORY specified. Try --help for help.\"\n fi\n \n if [[ ! 
-d $utilDir ]]; then\n errorMessage \"$utilDir does not exist.\"\n fi\n}\n\nfunction checkDirs() {\n \n # Strip whitespace and remove any trailing slashes\n installPrefix=$(expr \"$1\" : '[[:space:]]*\\(.*\\)[[:space:]]*$' | sed -e 's/\\\\/$//g')\n dlDir=$(expr \"$2\" : '[[:space:]]*\\(.*\\)[[:space:]]*$' | sed -e 's/\\\\/$//g')\n \n # Check for null and whitespace\n if [[ -z \"$installPrefix\" ]] || [[ ! -n \"$installPrefix\" ]]; then\n warningMessage \"No INSTALL DIRECTORY specified. Using default: $defInstallPrefix\"\n installPrefix=$defInstallPrefix\n fi\n \n if [[ -z \"$dlDir\" ]] || [[ ! -n \"$dlDir\" ]]; then\n warningMessage \"No DOWNLOAD DIRECTORY specified. Using default: $defDownloadDir\"\n dlDir=$defDownloadDir\n fi\n \n # Do the specified directories exist?\n if [[ ! -d $installPrefix ]]; then \n errorMessage \"$installPrefix is not a directory. Try '$(basename $0) --help' for help.\"\n fi\n \n if [[ ! -d $dlDir ]]; then \n errorMessage \"$dlDir is not a directory. Try '$(basename $0) --help' for help.\"\n fi\n}\n\nfunction checkParamStr() {\n # If there are no parameters, throw error\n if [[ \"${#1}\" -eq \"0\" ]]; then\n errorMessage \"Try '$(basename $0) --help' for help.\"\n fi\n}\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\nfunction warningMessage() {\n echo -e \"Warning: $1\"\n}\n\nfunction outputUsage() {\n echo \"up-wdk - Install $friendlyName\"\n echo \"Usage: $(basename $0) [options...] 
[INSTALL DIRECTORY] [DOWNLOAD DIR]\"\n echo \"Options:\"\n echo \" -v/--verbose Not yet implemented\"\n echo \" -h/--help Output this message\"\n \n exit 1\n}\n\nfunction installPkg() {\n # Since we have a 64-bit system, force 32-bit architecture\n sudo dpkg -i --force-architecture \"$installPrefix/$installDirName/${pkgName}.${pkgExt}\"\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments\nwhile [[ \"$#\" -gt \"0\" ]]; do\n case \"$1\" in \n -v|--verbose)\n verbose=true\n shift 1\n checkParamStr \"$@\"\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n # Unknown option\n errorMessage \"Unknown option $1.\"\n ;;\n *)\n # If we have more than 2 parameters, the usage is incorrect\n if [[ \"$#\" -gt \"2\" ]]; then\n errorMessage \"Too many parameters. Try '$(basename $0) --help' for help.\"\n else\n checkDirs $1 $2\n shift 2\n fi\n break\n ;;\n esac\ndone\n\n################################################################################\n# Main\n###############################################################################\n\n# Check for the proper utilities\n\nif [[ ! -f \"$utilDir/mmkdir.sh\" ]] || [[ ! -f \"$utilDir/marmdir.sh\" ]]; then\n errorMessage \"Utilities not found in '$utilDir'. Please specify the proper directory.\"\nfi\n\n$utilDir/mmkdir.sh $dlDir\necho\n\n# Download Yahoo WDK zip and inflate it\n\necho $dlDir \necho \"$dlPrefix/$dlFileName.$dlFileExt\"\n\necho \"Downloading $dlFileName.$dlFileExt...\"\nif [[ ! -f $dlDir/$dlFileName.$dlFileExt ]]; then\n\twget --directory-prefix=$dlDir \"$dlPrefix/$dlFileName.$dlFileExt\"\n\tif [ $? 
-eq 0 ]; then\n\t echo \"Done.\"\n\telse\n\t errorMessage \"Download failed with exit status of $?\"\n\tfi\nelse\n echo \"$dlFileName.$dlFileExt has already been downloaded.\"\nfi\necho\n\necho \"Installing $friendlyName to $installPrefix...\"\n\n# Check for archive type\n\nif [[ \"${dlFileExt}\" == \"zip\" ]]; then \n\tsudo unzip -q $dlDir/$dlFileName.$dlFileExt -d $installPrefix\nelif [[ \"${dlFileExt}\" == \"tar.gz\" ]]; then\n sudo tar -zxf $dlDir/$dlFileName.$dlFileExt -C $installPrefix\nelif [[ $dlFileExt == \"tar.bz2\" ]]; then\n sudo tar -xjf $dlDir/$dlFileName.$dlFileExt -C $installPrefix\nelse\n errorMessage \"File extension '$dlFileExt' not supported.\"\nfi\n\t\n# Was it successful?\n\nif [ $? -eq 0 ]; then\n echo \"Done.\"\nelse\n errorMessage \"Inflating the archive failed with exit status of $?\"\nfi\n\n# Install the package #########################################################\n\n# Get dependencies first\necho \"Installing dependencies...\"\nsudo aptitude install lib32readline5 -y\nsudo aptitude install libsdl-image1.2 -y\nsudo aptitude install expect -y\necho \"Done.\"\n\n\n# Create widgets directory for the Simulator at root\necho \"Making $widgetDir directory...\"\nsudo $utilDir/mmkdir.sh \"${widgetDir}\"\necho \"Done.\"\n\ninstallPkg\n\nif [ $? -eq 0 ]; then\n echo \"Done.\"\nelse\n warningMessage \"Install failed with exit status of $?\"\n echo \"Attempting to resolve dependencies...\"\n exit 2\n# sudo aptitude -f install\n# if [ $? -eq 0 ]; then\n# echo \"Done.\"\n# installPkg \n# else\n# errorMessage \"Could not update aptitude.\"\n# fi\nfi\n\n# Permissions\n\nif [[ ! -n \"`egrep -i ^$group: /etc/group`\" ]]; then\n\techo \"Creating the $group group...\"\n\tsudo groupadd $group\n\techo \"Done.\"\n\techo\nfi\n\necho \"Changing group ownership of files...\"\nsudo chgrp -R $group $installPrefix/$installDirName\nsudo chgrp -R $group /$widgetDir\necho \"Done.\"\necho"
},
{
"alpha_fraction": 0.4849397540092468,
"alphanum_fraction": 0.4909638464450836,
"avg_line_length": 27.04929542541504,
"blob_id": "55b5d77311e27b1527c72c693237f2d4d9bc97d4",
"content_id": "fc12c79e7a1f1255c66ad4cc1e8b303d7fb0a68c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 3984,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 142,
"path": "/src/install-node.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: install-node.sh\n# Description: Installs node.js. \n# Author: wraithmonster\n# Reference: https://github.com/joyent/node/wiki/Installation \n# Web framework - http://expressjs.com/\n# Node Pkg Manager - https://github.com/isaacs/npm\n#\n\n################################################################################\n# Default locations \n################################################################################\n\nDST_DIR=$HOME/dev/tools\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n echo -e \"Warning: $1.\"\n exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -h/--help Prints this message\"\n echo \" -r/--remove Removes Node and Node Package Manager\"\n \n exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installation script generated at http://apptob.org (link broken as of 2/17/2012).\nfunction apptobScript() {\n\t####################################\n\t#\n\t# Author: Ruslan Khissamov, email: [email protected]\n\t#\n\t####################################\n\t\n\t# Update System\n\techo 'System Update'\n\tsudo apt-get update # sudo added\n\techo 'Update completed'\n\tsudo apt-get install git-core curl python-software-properties # sudo added \n\t\n\t# Install Node.js\n\techo 'Install Node.js'\n\t# TODO: Check if repo is already installed before adding. 
\n\tsudo add-apt-repository ppa:chris-lea/node.js # sudo added\n\tsudo apt-get update # sudo added\n\tsudo apt-get install nodejs nodejs-dev -y # sudo added, -y added\n\techo 'Node.js install completed'\n\t\n\t# Install Node Package Manager\n\techo 'Install Node Package Manager'\n\tcurl http://npmjs.org/install.sh | sudo sh\n\techo 'NPM install completed'\n}\n\n# Installs packages and sets up directories and files. \nfunction installPackages() { \n echo \"Installing `basename $0` tools & libraries...\"\n \n sudo aptitude install libssl-dev -y\n apptobScript\n \n# mkdir -p $DST_DIR\n# cd $DST_DIR\n# npm install express\n \n echo \"Done.\"\n \n exit 0\n}\n\n# Removes packages installed and tears down any directories and files created. \nfunction removePackages() {\n \n echo \"Removing `basename $0` tools & libraries...\" \n \n# npm remove express\n \n # Reverse the apptob script. \n sudo npm uninstall npm -g\n sudo apt-get remove nodejs nodejs-dev -y\n \n # TODO: Can we remove a repository? \n echo\n echo \"Not removing chris-lea repository.\" \n echo\n echo \"Not removing the libssl-dev library (a dependency required for Node).\"\n echo \"Other programs depend on this.\"\n echo \n echo \"Done.\" \n \n exit 0\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in\n -r|--remove)\n shift 1\n removePackages\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1\"\n ;;\n *)\n errorMessage \"Unknown parameter $1\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\necho \"Executing `basename $0`...\"\ninstallPackages\necho \"Done.\"\n\n"
},
{
"alpha_fraction": 0.4847750961780548,
"alphanum_fraction": 0.4951557219028473,
"avg_line_length": 23.5,
"blob_id": "3117f11e8b3f99b5bf1ec346dcfd7fad13f17871",
"content_id": "4384f4cbe5fbe8d970aa01fca60117d92dab36ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2890,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 118,
"path": "/src/install-question2answer.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-question2answer.sh\n#\n# Description: Setup the PHP-based Question2Answer app. \n#\n# Author: Matthew Norris\n#\n# Reference: http://www.question2answer.org/advanced.php\n#\n\nDOWNLOADS=$HOME/dev/downloads # download the file here\nRDIR=http://www.question2answer.org # the file's remote dir\nFILE=question2answer-latest.zip # the file\nDST_DIR=\".\" # default destination dir\nDST_ROOT=\"question2answer\"\n\n################################################################################\n# Helper functions \n################################################################################\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\nfunction warningMessage() {\n echo -e \"Warning: $1\"\n exit 2\n}\n\nfunction outputUsage() {\n echo \"up-question2answer\"\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" -d/--dst Destination directory\"\n echo \" -n/--name New root directory name. Default is 'question2answer'.\"\n echo \" -h/--help Output this message\"\n \n exit 1\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in \n -d|--dst)\n shift 1\n DST_DIR=\"$1\"\n #echo \"Dst: $DST_DIR\"\n shift 1\n ;;\n -n|--name)\n shift 1\n NEW_NAME=\"$1\"\n #echo \"New name: $NEW_NAME\"\n shift 1\n ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1.\"\n ;;\n *)\n errorMessage \"Unknown parameter $1.\"\n ;;\n esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\n# Download the file. \nmkdir -p $DOWNLOADS\ncd $DOWNLOADS\n\necho \"Getting $FILE...\"\n\n# Get the file if it does not exist\nif [ ! 
-f $FILE ]; then\n\techo \"Downloading from $RDIR...\"\n\twget $RDIR/$FILE\nfi\necho \"Done.\"\n\n# Unzip it. \necho \"Unzipping $FILE...\"\nmkdir -p $DST_DIR\nunzip -q $FILE -d $DST_DIR\necho \"Done.\"\n\ncd $DST_DIR\n\n# Rename if necessary. \nif [ -n \"$NEW_NAME\" ]; then\n echo \"Renaming $DST_ROOT to $NEW_NAME...\"\n mv $DST_ROOT $NEW_NAME\n echo \"Done.\"\n DST_ROOT=$NEW_NAME\nfi\n\ncd $DST_ROOT\n\n# Create config files from the provided examples. \ncp qa-config-example.php qa-config.php\nmv qa-external-example qa-external\n\necho\necho \"TO FINISH THE INSTALLATION:\"\necho\necho \"1. Edit qa-config.php to include MySQL details at the top, \" \necho \" scroll down and set QA_EXTERNAL_USERS to true, then save the file.\"\necho \"2. Visit http://www.question2answer.org/advanced.php \"\necho \" and follow steps 7-11.\"\necho"
},
{
"alpha_fraction": 0.5543766617774963,
"alphanum_fraction": 0.5596817135810852,
"avg_line_length": 24.704545974731445,
"blob_id": "d4adaab25511bd35f346086b07826152146f8148",
"content_id": "a14e856d49bfd3eb431854c9a3c07f7e6554db6a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1131,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 44,
"path": "/src/up-bash.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n#\n# Title: up-bash.sh\n# Description: Setup personal bash to be attractive and informative\n# Author: Matthew Norris\n# Reference: http://bit.ly/slicehost-ubuntu-setup\n#\n################################################################################\n\n# Make the terminal more attractive and informative by adding a few lines to \n# the .bashrc file.\n\n# Check that the user has given an arg and that is a valid file.\nif [ -n \"$1\" ] && [ -f $1 ]\n then\n\n bashFile=\"$HOME/.bashrc\"\n bashBackup=\"$bashFile.bak\"\n customLines=\"$1\"\n bashTemp=\"/tmp/$(basename $0).$RANDOM\"\n\n echo \"Backing up your current .bashrc...\"\n\n cp $bashFile $bashBackup\n\n echo \"Adding your custom lines to .bashrc...\"\n\n cat $bashFile $customLines > $bashTemp\n cp $bashTemp $bashFile\n rm $bashTemp\n\n echo \"Changes saved. Activating changes...\"\n\n # Activate the changes made by this script to bash\n cd $HOME\n source $bashFile\n\n echo \"Changes activated.\"\nelse\n echo \"'$1' does not exist. Please specify another file.\"\n exit 1\nfi\n"
},
{
"alpha_fraction": 0.5179738402366638,
"alphanum_fraction": 0.5196078419685364,
"avg_line_length": 23.479999542236328,
"blob_id": "b7543c423129380ff3701b473b45dd4c06ddc2a8",
"content_id": "2c0c810c6ccbbdc6f94f9110464cca98c94b1286",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 612,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 25,
"path": "/src/up-version-control.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n################################################################################\n#\n# Title: up-version-control.sh\n# Description: Install Git & Subversion\n# Author: Matthew Norris\n#\n################################################################################\n\nutilDir=\"util\"\n\n# Install Git (also installs libdigest-sha1-perl & liberror-perl{a}) \n\nsudo aptitude install git-core -y\n# sudo aptitude install git-gui -y\nsudo aptitude install giggle -y\n\n# Make an SSH directory for git commits \n$utilDir/mmkdir.sh -v \"$HOME/.ssh\"\necho \n\n# Install Subversion\n\nsudo aptitude install subversion -y\n"
},
{
"alpha_fraction": 0.6818181872367859,
"alphanum_fraction": 0.7037037014961243,
"avg_line_length": 41.42856979370117,
"blob_id": "1db298d26a219933f8fc4483fc57417bcbb657a2",
"content_id": "1dd07a3f620639276547bb043c0df2642f3e915f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 594,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 14,
"path": "/src/up-java.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-java.sh\n# Description: Install the default Java JDK (for 64-bit) and the 32-bit Java \n# runtime for applications that need the 32-bit environment \n# (like RubyMine)\n# Author: Matthew Norris\n\n# Add this repo to be able to run Hadoop. \n# https://docs.cloudera.com/display/DOC/Java+Development+Kit+Installation\nsudo add-apt-repository \"deb http://archive.canonical.com/ lucid partner\"\nsudo aptitude update\n\n# Install Java & its 32-bit version also. \nsudo aptitude install sun-java6-jdk sun-java6-plugin -y\n#sudo aptitude install ia32-sun-java6-bin -y\n"
},
{
"alpha_fraction": 0.687747061252594,
"alphanum_fraction": 0.7167325615882874,
"avg_line_length": 29.360000610351562,
"blob_id": "7f77449a7a73b06dc547c9fb2e56b99306fcf853",
"content_id": "0431987ee3af6eb5bddad8b5b9dcba924f4d6551",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 759,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 25,
"path": "/src/down-django.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-django.sh\n# Description: Remove Django\n# Author: Matthew Norris\n# Reference: http://www.djangoproject.com/download/\n# http://weboom.wordpress.com/2009/07/01/how-to-uninstall-django/\n\n#SOURCES=~/sources\n#SDKS=/opt/dev/sdks\n#TOOLS=/opt/dev/tools\n\n#DJANGO_TAR=http://www.djangoproject.com/download/1.0.2/tarball/\n#DJANGO_FILENAME=Django-1.0.2-final\n\n#sudo aptitude remove python-django -y\n\n# Version 1.1\nsudo rm -fr /usr/lib/python2.5/site-packages/django\n\n# TODO: For some reason, it looks like Django didn't install in either place, \n# yet I can still import it in the python console\n\nsudo rm -fr /usr/lib/python2.6/dist-packages/django\nsudo rm -fr /usr/lib/python2.6/site-packages/django\n\nsudo rm /usr/bin/django-admin.py\n"
},
{
"alpha_fraction": 0.5202272534370422,
"alphanum_fraction": 0.5284902453422546,
"avg_line_length": 28.190954208374023,
"blob_id": "35df2ae576a710e708463933322154763e27f6db",
"content_id": "903883ca06d41b97a42962ee2ae9c642b90b9ad8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 5809,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 199,
"path": "/src/up-openlaszlo.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n######### INCOMPLETE #########\n\n################################################################################\n# Title: up-openlaszlo\n# Description: Setup OpenLaszlo\n# Author: Matthew Norris\n# Reference: http://www.openlaszlo.com/taxonomy/term/8\n# http://www.openlaszlo.org/lps4.5/docs/installation/install-instructions.html\n#\n################################################################################\n\nverbose=false\n\n#utilDir=\"$HOME/scripts/util\"\nutilDir=\"../util\"\n\n# Location to download source files\ndefDownloadDir=$HOME/sources\ndownloadDir=$defDownloadDir\n\n# Location to install\ndefInstallPrefix=\"/opt/dev/sdks\"\ninstallPrefix=$defInstallPrefix\n\n# Linux group (used for permissions)\ngroup=\"developers\"\n\n# Default install directory & app names\nlaszloVersion=\"4.6.1\"\ninstallDirName=\"lps-$laszloVersion\"\narchiveFileName=\"openlaszlo-$laszloVersion-unix.tar.gz\"\n\n# Friendly names\nsdkFriendlyName=\"OpenLaszlo\"\n\n################################################################################\n# Functions\n################################################################################\n\nfunction checkUtilDir() {\n\n # Strip whitespace\n utilDir=$(expr \"$1\" : '[[:space:]]*\\(.*\\)[[:space:]]*$')\n \n # Check for null and whitespace\n if [[ -z \"$utilDir\" ]] || [[ ! -n \"$utilDir\" ]]; then\n errorMessage \"No UTIL DIRECTORY specified. Try --help for help.\"\n fi\n \n if [[ ! -d $utilDir ]]; then \n errorMessage \"$utilDir does not exist.\"\n fi\n}\n\nfunction checkDirs() {\n \n # Strip whitespace and remove any trailing slashes\n installPrefix=$(expr \"$1\" : '[[:space:]]*\\(.*\\)[[:space:]]*$' | sed -e 's/\\\\/$//g')\n downloadDir=$(expr \"$2\" : '[[:space:]]*\\(.*\\)[[:space:]]*$' | sed -e 's/\\\\/$//g')\n \n # Check for null and whitespace\n if [[ -z \"$installPrefix\" ]] || [[ ! -n \"$installPrefix\" ]]; then\n warningMessage \"No INSTALL DIRECTORY specified. 
Using default: $defInstallPrefix\"\n installPrefix=$defInstallPrefix\n fi\n \n if [[ -z \"$downloadDir\" ]] || [[ ! -n \"$downloadDir\" ]]; then\n warningMessage \"No DOWNLOAD DIRECTORY specified. Using default: $defDownloadDir\"\n downloadDir=$defDownloadDir\n fi\n \n # Do the specified directories exist?\n if [[ ! -d $installPrefix ]]; then \n errorMessage \"$installPrefix is not a directory. Try '$(basename $0) --help' for help.\"\n fi\n \n if [[ ! -d $downloadDir ]]; then \n errorMessage \"$downloadDir is not a directory. Try '$(basename $0) --help' for help.\"\n fi\n}\n\nfunction checkParamStr() {\n # If there are no parameters, throw error\n if [[ \"${#1}\" -eq \"0\" ]]; then\n errorMessage \"No parameters. Try '$(basename $0) --help' for help.\"\n fi\n}\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\nfunction warningMessage() {\n echo -e \"Warning: $1\"\n}\n\nfunction outputUsage() {\n echo \"up-openlaszlo - Install $sdkFriendlyName\"\n echo \"Usage: $(basename $0) [options...] [-l language] [DIRECTORY] [DOWNLOAD DIR]\"\n echo \"Options:\"\n echo \" -v/--verbose Not yet implemented\"\n echo \" -h/--help Output this message\"\n \n exit 1\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments\nwhile [[ \"$#\" -gt \"0\" ]]; do\n case \"$1\" in \n -v|--verbose)\n verbose=true\n shift 1\n checkParamStr \"$@\"\n ;;\n -h|--help)\n outputUsage\n ;;\n -l|--language)\n checkLang \"$2\"\n shift 2\n ;;\n -*|--*)\n # Unknown option\n errorMessage \"Unknown option $1.\"\n ;;\n *)\n # If we have more than 2 parameters, the usage is incorrect\n if [[ \"$#\" -gt \"2\" ]]; then\n errorMessage \"Too many parameters. 
Try '$(basename $0) --help' for help.\"\n else\n checkDirs $1 $2\n shift 2\n fi\n break\n ;;\n esac\ndone\n\n################################################################################\n# Main\n###############################################################################\n\n# Check for the proper utilities\n\nif [[ ! -f \"$utilDir/mmkdir.sh\" ]]; then\n errorMessage \"'mmkdir.sh' not found in '$utilDir'. Please specify the proper directory.\"\nfi\n\n$utilDir/mmkdir.sh $downloadDir\n$utilDir/mmkdir.sh $installPrefix\necho\n\nif [[ ! -f $downloadDir/$archiveFileName ]]; then\n errorMessage \"$archiveFileName does not exist.\"\nfi\n\necho \"Installing $sdkFriendlyName to $installPrefix...\"\nsudo tar -C $installPrefix -zxvf $downloadDir/$archiveFileName\necho \"Done.\"\necho\necho \"Granting permissions...\"\nsudo chmod 755 $installPrefix/$installDirName/Server/$installDirName/WEB-INF/bin/*\nsudo chmod 775 $installPrefix/$installDirName/Server/tomcat-5.0.24/logs\nsudo chmod 775 $installPrefix/$installDirName/Server/tomcat-5.0.24/conf\necho \"Done.\"\necho \n\n################################################################################\n# Permissions & symlinks\n################################################################################\n\nif [[ ! -n \"$(egrep -i ^$group: /etc/group)\" ]]; then\n\techo \"Creating the $group group...\"\n\tsudo groupadd $group\n\techo \"Done.\"\n\techo\nfi\n\n# Change permissions so members of the group can run its scripts \n\necho \"Changing group ownership for $sdkFriendlyName files...\"\nsudo chgrp -R $group $installPrefix/$installDirName\necho \"Done.\"\necho\n\n# Add a symlink to /usr/local\n\necho \"Linking $sdkFriendlyName to /usr/local/...\"\nsudo ln -s $installPrefix/$installDirName /usr/local/$installDirName\necho \"Done.\"\necho\necho \"$sdkFriendlyName installed.\"\n"
},
{
"alpha_fraction": 0.48757171630859375,
"alphanum_fraction": 0.49330782890319824,
"avg_line_length": 23.952381134033203,
"blob_id": "1689b43306d6423f0eaa4a2115bb96e470ff8a87",
"content_id": "a699172dd85b2f778c47f99da34687c6dcc8dc33",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 523,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 21,
"path": "/src/down-openlaszlo.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n################################################################################\n#\n# Title: down-openlaszlo.sh\n# Description: Remove OpenLaszlo\n# Author: Matthew Norris\n#\n################################################################################\n\n# Default install directory & app names\nlaszloVersion=\"4.6.1\"\ninstallDirName=\"lps-$laszloVersion\"\ninstallPrefix=\"/opt/dev/sdks\"\n\necho \"Removing OpenLazslo...\"\n\nsudo rm /usr/local/$installDirName\nsudo rm -fr $installPrefix/$installDirName\n\necho \"Done.\""
},
{
"alpha_fraction": 0.5606530904769897,
"alphanum_fraction": 0.579918384552002,
"avg_line_length": 30.41025733947754,
"blob_id": "b3d01baa41ea154f6faff1860dffe1256eaab1d2",
"content_id": "58e858898e5826176dd6c69fb20dca3d43713170",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 6125,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 195,
"path": "/src/up-ruby.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-dev-tools.sh\n#\n# Description: Setup Ubuntu with an integrated, automated \n# Ruby on Rails development environment\n#\n# Author: Matthew Norris\n#\n# Reference: http://articles.slicehost.com/2008/11/28/ubuntu-intrepid-setup-page-2\n# http://articles.slicehost.com/2009/1/6/ubuntu-intrepid-ruby-on-rails\n# http://www.claytonlz.com/index.php/2009/04/how-to-setup-rspec-cucumber-webrat-rcov-and-autotest-on-leopard/\n# http://github.com/guides/providing-your-ssh-key\n# No ri/rdoc option - http://bit.ly/cLSaR7\n#\n\n# Installing all gems' documentation is time-consuming. By default, \n# don't install it. See http://bit.ly/cLSaR7 for more details. \nINSTALL_DOC=\"--no-ri --no-rdoc\"\n\n################################################################################\n# Helper functions \n################################################################################\n\nfunction errorMessage() {\n echo -e \"Error: $1\"\n exit 1\n}\n\nfunction warningMessage() {\n echo -e \"Warning: $1\"\n exit 2\n}\n\nfunction outputUsage() {\n echo \"up-ruby\"\n echo \"Usage: `basename $0` [options...]\"\n echo \"Options:\"\n echo \" --railsv/--railsversion Specify Rails version to install\"\n echo \" -d/--doc Install ri and rdoc for all gems\"\n echo \" -h/--help Output this message\"\n \n exit 1\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. Specifically, we're checking for a \n# Rails version because Shapado requires Rails 2.3.8: http://bit.ly/d8OMtX \nwhile [ \"$#\" -gt \"0\" ]; do\n case \"$1\" in \n \t--railsv|--railsversion)\n \t shift 1\n \t RAILS_VERSION=\"$1\"\n \t shift 1\n \t ;;\n \t-d|--doc)\n \t shift 1\n \t INSTALL_DOC=\"\"\n \t ;;\n \t--shapado)\n \t # Defaults for the OS project \"Shapado\". 
See http://bit.ly/d8OMtX\n \t shift 1\n \t RAILS_VERSION=\"2.3.8\"\n \t errorMessage \"Shapado option is not yet implemented.\"\n \t ;;\n -h|--help)\n outputUsage\n ;;\n -*|--*)\n errorMessage \"Unknown option $1.\"\n ;;\n *)\n errorMessage \"Unknown parameter $1.\"\n ;;\n esac\ndone\n\n################################################################################\n# Ruby \n################################################################################\n\nSOURCES=~/dev/downloads # sources directory\nRUBYFORGE=http://rubyforge.org/frs/download.php/70696 # download location\nRUBYGEMS_VERSION=rubygems-1.3.7 # software version\n\n# The process will involve a mix of installation methods - the main ruby \n# packages and dependencies will be installed using the 'aptitude' package \n# manager but rubygems will be install from source. The reason for this is that\n# it is important to get the latest and most stable version of rubygems and the\n# easiest way to do that is by installing from source.\n\nsudo aptitude install ruby1.8-dev ruby1.8 ri1.8 rdoc1.8 irb1.8 \\\n\tlibreadline-ruby1.8 libruby1.8 libopenssl-ruby sqlite3 libsqlite3-ruby1.8 -y\n\n# Create symlinks from the install to locations every program would look.\n\nsudo ln -s /usr/bin/ruby1.8 /usr/bin/ruby\nsudo ln -s /usr/bin/ri1.8 /usr/bin/ri\nsudo ln -s /usr/bin/rdoc1.8 /usr/bin/rdoc\nsudo ln -s /usr/bin/irb1.8 /usr/bin/irb\n\n# Check for 'sources' directory. If one does not exist, make one so we can \n# install rubygems from source.\n\nif [ ! 
-d $SOURCES ]\n  then\n    mkdir $SOURCES\nfi\n\n# Install rubygems from source.\n\n# TODO: Overwrite or remove an old file of the same name first\ncd $SOURCES\nwget $RUBYFORGE/$RUBYGEMS_VERSION.tgz\ntar xzvf $RUBYGEMS_VERSION.tgz\ncd $RUBYGEMS_VERSION\n\nsudo ruby setup.rb\n\n# Make another symlink \n\nsudo ln -s /usr/bin/gem1.8 /usr/bin/gem\n\n# Update\n\nsudo gem update\nsudo gem update --system\n\n################################################################################\n# Gems\n################################################################################\n\n# Install useful gems.\n# Reference: http://www.claytonlz.com/index.php/2009/04/how-to-setup-rspec-cucumber-webrat-rcov-and-autotest-on-leopard/\n\n# If Rails version is null or whitespace, install default version. \nif [ -z \"$RAILS_VERSION\" ] || [ ! -n \"$RAILS_VERSION\" ]; then\n    echo \"Installing default Rails version...\"\n    sudo gem install rails $INSTALL_DOC\t\n    sudo gem install rspec $INSTALL_DOC\n    sudo gem install rspec-rails $INSTALL_DOC\nelse \n    echo \"Installing Rails version $RAILS_VERSION...\"\n    sudo gem install rails -v=\"$RAILS_VERSION\" $INSTALL_DOC\n    echo \"Done.\"\n    \n    RSPEC_VERSION=\"1.2.9\"\n    echo \"Installing Rspec version $RSPEC_VERSION.\" \n    echo \"See http://github.com/dchelimsky/rspec/wiki/rails for more details.\"\n    \n    sudo gem install rspec -v=\"$RSPEC_VERSION\" $INSTALL_DOC\n    sudo gem install rspec-rails -v=\"$RSPEC_VERSION\" $INSTALL_DOC\n    \n    echo \"Done.\"\nfi\n\n# Install cucumber; also installs these 6 gems: \n#\n#   term-ansicolor-1.0.3\n#   polyglot-0.2.5\n#   treetop-1.2.5\n#   diff-lcs-1.1.2\n#   builder-2.1.2\n#   cucumber-0.3.1\n\nsudo gem install cucumber $INSTALL_DOC\n\n# Install the rest of our testing suite\n\nsudo gem install webrat $INSTALL_DOC # will also install nokogiri-1.2.3\nsudo gem install rcov $INSTALL_DOC\nsudo gem install ZenTest $INSTALL_DOC\n\n# Install Haml & Sass\nsudo gem install haml $INSTALL_DOC\nsudo gem install rb-inotify $INSTALL_DOC # notifies you of --watch events \n\t\n# Dependencies for Shapado. \n# See http://shapado.com/questions/cmo-se-configura-shapado\nsudo gem install faker $INSTALL_DOC\nsudo gem install gemcutter $INSTALL_DOC\n# sudo gem tumble # This command is deprecated, Gemcutter.org is the primary source for gems.\n\n################################################################################\n# Test your installation with these commands\n################################################################################\n\n# $ruby -v\n# $rails -v\n# $gem -v\n\n# $irb\n# irb(main):005:0> require 'sqlite3'\n# => true\n"
},
{
"alpha_fraction": 0.7048940658569336,
"alphanum_fraction": 0.7100073099136353,
"avg_line_length": 30.837209701538086,
"blob_id": "145639ced64a13dbd4e29ac92a82adacb5a4fca4",
"content_id": "e2c1e98dbcf610939ffdc9a463db62b3919055c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1369,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 43,
"path": "/src/virtualenv/install_tipfy.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "\"\"\"\nInstalls or upgrades the web2py framework in a virtual environment. \n\"\"\"\nimport os\nimport sys\nimport tempfile\nimport shutil\n\nsys.path.append(os.path.join(os.environ['HOME'], 'dev/modules'))\nfrom poprop import util\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\nif not util.in_virtualenv():\n sys.exit('Script not running in a virtual environment. Exiting.')\n\nSRC_DIR_NAME = 'src'\nDOWNLOADS = os.path.join(os.environ['HOME'], 'dev/downloads')\nutil.mkdir(DOWNLOADS)\nurl = util.URL('http://www.tipfy.org/tipfy.zip') #EDIT\ndl_file = os.path.join(DOWNLOADS, url.basename)\n\n# If the file is not already present, download it. \nif not os.path.isfile(dl_file):\n print 'Downloading %s...' % url.basename \n dl_file = url.download(dstdir=DOWNLOADS)\n print 'Done.\\n'\n \n# Check if the app dir in our project already has code in it. If so, \n# we need to upgrade.\nPROJECT_SRC_DIR = os.path.join(util.get_virtualenv(), 'project')\nPROJECT_APP_DIR = os.path.join(PROJECT_SRC_DIR, SRC_DIR_NAME)\n#CONTRIB_SRC_DIR = os.path.join(os.path.join(os.environ['HOME'], 'dev/modules/contrib')) # EDIT\nutil.mkdir(PROJECT_SRC_DIR)\nutil.mkdir(PROJECT_APP_DIR)\n#util.mkdir(CONTRIB_SRC_DIR) # EDIT\n\n# Extract the new framework.\nprint 'Installing...' \nweb2py = util.Archiver(dl_file)\nweb2py.extract(dstdir=PROJECT_SRC_DIR, newrootname=SRC_DIR_NAME)\nprint 'Done.\\n'\n"
},
{
"alpha_fraction": 0.6610169410705566,
"alphanum_fraction": 0.6723163723945618,
"avg_line_length": 24.285715103149414,
"blob_id": "47fccf5e300fc6179b43c54d38a4fedbde31b463",
"content_id": "47acfc7c23b69af69bad2ce5f7e282c8475951d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 177,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 7,
"path": "/src/down-wxpython.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-wxpython.sh\n# Description: Remove wxPython for GUIs\n# Author: Matthew Norris\n\necho \"Removing wxPython...\"\nsudo aptitude remove python-wxgtk2.8 -y\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.5731452703475952,
"alphanum_fraction": 0.5987460613250732,
"avg_line_length": 33.17856979370117,
"blob_id": "5ed17a4da66116e144a4df7bba0519c8c364d9da",
"content_id": "cb0919699647897a471dd6b9d6f1e72d869e0150",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1914,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 56,
"path": "/src/down-ruby.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-ruby.sh\n# Description: Tear down the Ubuntu Ruby on Rails development environment\n# Author: Matthew Norris\n# Reference: http://articles.slicehost.com/2008/11/28/ubuntu-intrepid-setup-page-2\n# http://articles.slicehost.com/2009/1/6/ubuntu-intrepid-ruby-on-rails\n# http://geekystuff.net/2009/1/14/remove-all-ruby-gems\n\nLOG_NAME=\"installed-gems\"\n\n################################################################################\n# Gems\n################################################################################\n\n# Check for 'UNINSTALL' directory. If one does not exist, make one to save \n# notes from the uninstallation.\n\nUNINSTALL=~/uninstall\n\nif [ ! -d $UNINSTALL ]\n then\n mkdir $UNINSTALL\nfi\n\n# Save a list of all the installed gems \ngem list --no-versions > \"$UNINSTALL/$LOG_NAME.log\"\ngem list > \"$UNINSTALL/$LOG_NAME-with-versions.log\"\n\n# Uninstall all gems and executables\n# Reference: http://geekystuff.net/2009/1/14/remove-all-ruby-gems\n\ngem list --no-versions | xargs sudo gem uninstall -aIx\n\n# Uninstall the gems and all applicable executables without confirmation (-x).\n#sudo gem uninstall [package-name] -x\n\n################################################################################\n# Ruby on Rails\n################################################################################\n\n# Remove the rubygems symlink and *manually* uninstall rubygems (we cannot \n# simply call 'sudo aptitude remove rubygems' because we did not uninstall it \n# using aptitude, we installed it manually).\n\nsudo rm -fv /usr/bin/gem1.8 /usr/bin/gem\nsudo rm -rfv /usr/lib/ruby/gems\n\n# Remove the other symlinks.\n\nsudo rm -v /usr/bin/ruby\nsudo rm -v /usr/bin/ri\nsudo rm -v /usr/bin/rdoc\nsudo rm -v /usr/bin/irb\n\n# Remove Ruby\n\nsudo aptitude remove ruby1.8-dev ruby1.8 ri1.8 rdoc1.8 irb1.8 libreadline-ruby1.8 libruby1.8 libopenssl-ruby sqlite3 libsqlite3-ruby1.8 -y\n"
},
{
"alpha_fraction": 0.7526316046714783,
"alphanum_fraction": 0.7526316046714783,
"avg_line_length": 28.230770111083984,
"blob_id": "3c5444723eaaacb4a02593e5920e9f4a876919f0",
"content_id": "2ca5ea597a5e9969abba9642a38cdc517ae5fc66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 380,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 13,
"path": "/src/down-other-tools.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-other-tools.sh\n# Description: Remove useful tools and utilities\n# Author: Matthew Norris\n\n# sudo aptitude remove netbeans -y\n\nsudo aptitude remove virtualbox-ose -y\nsudo aptitude remove unrar -y\nsudo aptitude remove keepassx -y\nsudo aptitude remove inkscape -y\nsudo aptitude remove gedit-plugins -y\nsudo aptitude remove curl -y\n# sudo aptitude remove agave -y\n"
},
{
"alpha_fraction": 0.5877550840377808,
"alphanum_fraction": 0.6071428656578064,
"avg_line_length": 24.454545974731445,
"blob_id": "03655928d978998cd9fb89268aa67a7829cb8b6c",
"content_id": "d86b0539313f036ebd02b100c504002a3c8ea3e1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1960,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 77,
"path": "/src/up-web2py.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n#\n# Title: up-web2py.sh\n# Description: Install web2py, a Python web framework\n# Author: Matthew Norris\n# Reference: http://www.web2py.com\n# http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_07_02.html\n# http://mdp.cti.depaul.edu/AlterEgo/default/show/12\n#\n################################################################################\n\nSOURCES=$HOME/sources\nINSTALL_DIR=$HOME/dev/source\n\nWEB2PY_SITE=http://www.web2py.com/examples/static\nWEB2PY=web2py_src.zip\n\n# Check for 'sources' directory. If one does not exist, make one.\n\nif [ ! -d $SOURCES ]\n then\n echo \"Making the $SOURCES directory..\"\n mkdir $SOURCES\n echo \"Done.\"\n echo\nfi\n\n# Download and install web2py.\n\necho \"Downloading $WEB2PY...\"\ncd $SOURCES\nif [ ! -f $WEB2PY ] \n then \n wget $WEB2PY_SITE/$WEB2PY\n echo \"Done.\"\n else\n echo \"$WEB2PY has already been downloaded.\"\nfi\necho\n\n# Check for previous installation and the proper directory.\n\nif [ -n \"$1\" ] && [ -d $1 ]; then \n\tINSTALL_DIR=\"$1\" \nelse\n echo \"$1 is not a directory. Using $INSTALL_DIR instead.\"\nfi\n\nif [ -d $INSTALL_DIR/web2py ]\n then\n echo \"web2py is already installed. If you are upgrading:\"\n echo \" 1. Run the accompanying 'down-web2py.sh' script first.\"\n echo \" This will archive your applications for reinstallation.\"\n echo \" 2. Run this script again.\"\n echo \" 3. Extract your archived applications into the new web2py/applcations folder.\"\n echo \" 4. Done! Run web2py normally.\"\n exit 1\n\n# Create the install directory if it doesn't exist.\n\nelif [ ! -d $INSTALL_DIR ]\n then\n echo \"Making the $INSTALL_DIR directory..\"\n mkdir $INSTALL_DIR\n echo \"Done.\"\n echo \nfi\n\n# Install web2py by unzipping.\n\necho \"Unzipping $WEB2PY...\"\nunzip -q $SOURCES/$WEB2PY -d $INSTALL_DIR\necho \"Done.\"\necho\necho \"web2py is now installed in $INSTALL_DIR/web2py\"\n"
},
{
"alpha_fraction": 0.6415094137191772,
"alphanum_fraction": 0.7311320900917053,
"avg_line_length": 34.33333206176758,
"blob_id": "cb8db269e44286319bf5aaa90510aa7d0d26ad91",
"content_id": "c0d77f79007e5a5d00781608b4c0c8fc259e9390",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 212,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 6,
"path": "/src/launch_eclipse.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n\nsh /home/matthew/dev/tools/eclipse/eclipse -vm /usr/lib/jvm/ia32-java-6-sun-1.6.0.13/bin/java\n\n#export RUBYMINE_JDK=/usr/lib/jvm/ia32-java-6-sun-1.6.0.13\n#sh /opt/dev/tools/rubymine883/bin/rubymine.sh\n"
},
{
"alpha_fraction": 0.6835442781448364,
"alphanum_fraction": 0.6919831037521362,
"avg_line_length": 28.625,
"blob_id": "0756ab6710a7467ee698a8e4e2be9eb018fcd275",
"content_id": "4ff7c21e36bc46938800726a940091d42af06205",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 237,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 8,
"path": "/src/up-wxpython.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: up-wxpython.sh\n# Description: Setup wxPython for GUIs\n# Author: Matthew Norris\n# Reference: http://zetcode.com/wxpython/introduction/\n\necho \"Installing wxPython...\"\nsudo aptitude install python-wxgtk2.8 -y\necho \"Done.\"\n"
},
{
"alpha_fraction": 0.6935483813285828,
"alphanum_fraction": 0.725806474685669,
"avg_line_length": 40.33333206176758,
"blob_id": "c6f5c9edae5473d2278700c95876a278d2c1d31e",
"content_id": "7783ab3c081999e5f9e92e754a472eee26f848ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 248,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 6,
"path": "/src/down-unison.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Title: down-unison.sh\n# Description: Remove useful tools and utilities\n# Author: Matthew Norris\n# Reference: http://www.micahcarrick.com/11-07-2007/unison-synchronize-ubuntu.html\n\nsudo aptitude remove openssh-server unison unison-gtk -y\n"
},
{
"alpha_fraction": 0.462184876203537,
"alphanum_fraction": 0.46848738193511963,
"avg_line_length": 22.850000381469727,
"blob_id": "12cc2848f25f4dc835b2ee580ab4c5e9e537a6e3",
"content_id": "4f24a595def15c3969afce9eb1fa80d435b07098",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 476,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 20,
"path": "/src/down-grails.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!\n\n################################################################################\n#\n# Title: down-grails.sh\n# Description: Remove Grails\n# Author: Matthew Norris\n#\n################################################################################\n\ninstallPrefix=/opt/dev/sdks\n\n# Friendly names\nfriendlyName=\"Grails\"\nfriendlyLinkName=\"grails\"\n\necho \"Removing $friendlyName...\"\nsudo rm $installPrefix/$friendlyLinkName\nsudo rm -fr $installPrefix/grails-1.1.1\necho \"Done.\""
},
{
"alpha_fraction": 0.6183673739433289,
"alphanum_fraction": 0.6673469543457031,
"avg_line_length": 43.54545593261719,
"blob_id": "7856808183c74e396f658aee92c2010804069011",
"content_id": "1175545c9e92d4ace6a0be3a63b2364410328c13",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 490,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 11,
"path": "/scripts/move-scripts-to-system.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# Title: move-scripts-to-system.sh\n# Description: Move the specified directory of scripts to \"production\", \n# the user's scripts directory. \n# Author: Matthew Norris\n# References: http://www.thegeekstuff.com/2010/09/rsync-command-examples/\n# http://www.linuxquestions.org/questions/linux-newbie-8/recursively-cp-all-directories-files-and-hidden-files-808403/#post3972438\n#\n\nrsync -azvru \"$1/.\" \"$2/dev/scripts\"\n#cp -a $1/. \"$2/dev/scripts\"\n"
},
{
"alpha_fraction": 0.72052401304245,
"alphanum_fraction": 0.7234352231025696,
"avg_line_length": 23.5,
"blob_id": "5d1b379b379b77c390b17cf4421c5391094e1bd5",
"content_id": "bf8c4713a63bdc2f9208e98bdf4dc91dd72af5ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 687,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 28,
"path": "/src/up-hadoop.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "# Reference: http://archive.cloudera.com/docs/_apt.html\n\nDISTRO=\"lucid\"\nSOURCES_DIR=\"/etc/apt/sources.list.d\"\nLIST=\"cloudera.list\"\n\necho \"Writing new sources list for '$DISTRO' to $LIST...\"\necho \"deb http://archive.cloudera.com/debian $DISTRO-cdh3 contrib\" > $LIST\necho \"deb-src http://archive.cloudera.com/debian $DISTRO-cdh3 contrib\" >> $LIST\nsudo mv $LIST $SOURCES_DIR\necho \"Done.\"\necho\n\necho \"Adding repository key...\"\ncurl -s http://archive.cloudera.com/debian/archive.key | sudo apt-key add -\necho \"Done.\"\necho \n\necho \"Updating apt-get...\"\nsudo apt-get update\necho \"Done.\"\necho\n\necho \"Installing Hadoop...\"\napt-cache search hadoop\nsudo apt-get install hadoop -y\necho \"Done.\"\necho \n"
},
{
"alpha_fraction": 0.6491228342056274,
"alphanum_fraction": 0.7456140518188477,
"avg_line_length": 37,
"blob_id": "7e911a24457fc706fab60d53b47e332d36fa0ff7",
"content_id": "ab51100bfdfa65704c37561007e876e0976083d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 114,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 3,
"path": "/src/launch_rubymine.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\nexport RUBYMINE_JDK=/usr/lib/jvm/ia32-java-6-sun-1.6.0.13\nsh /opt/dev/tools/rubymine883/bin/rubymine.sh\n"
},
{
"alpha_fraction": 0.6887892484664917,
"alphanum_fraction": 0.6941704154014587,
"avg_line_length": 33.32307815551758,
"blob_id": "f514f86bcb5cbcadf5b290b51905a136b3a8e29c",
"content_id": "f332eff9c8e9e4f1a334000cb93c3d3690cd28c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2230,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 65,
"path": "/src/virtualenv/install_web2py.py",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
    "text": "\"\"\"\nInstalls or upgrades the web2py framework in a virtual environment. \n\"\"\"\nimport os\nimport sys\nimport tempfile\nimport shutil\n\nsys.path.append(os.path.join(os.environ['HOME'], 'dev/modules'))\nfrom poprop import util\n\n__author__ = \"Matthew Norris\"\n__copyright__ = \"Copyright 2011, Matthew Norris\"\n\nif not util.in_virtualenv():\n    sys.exit('Script not running in a virtual environment. Exiting.')\n\nDOWNLOADS = os.path.join(os.environ['HOME'], 'dev/downloads')\nutil.mkdir(DOWNLOADS)\nurl = util.URL('http://www.web2py.com/examples/static/web2py_src.zip')\ndl_file = os.path.join(DOWNLOADS, url.basename)\n\n# If the file is not already present, download it. \nif not os.path.isfile(dl_file):\n    print 'Downloading %s...' % url.basename \n    dl_file = url.download(dstdir=DOWNLOADS)\n    print 'Done.\\n'\n    \n# Check if the app dir in our project already has code in it. If so, \n# we need to upgrade.\nPROJECT_SRC_DIR = os.path.join(util.get_virtualenv(), 'src')\nPROJECT_APP_DIR = os.path.join(PROJECT_SRC_DIR, 'app')\nCONTRIB_SRC_DIR = os.path.join(os.path.join(os.environ['HOME'], 'dev/modules/contrib'))\nutil.mkdir(PROJECT_SRC_DIR)\nutil.mkdir(PROJECT_APP_DIR)\nutil.mkdir(CONTRIB_SRC_DIR)\n\ntmp_archive = tempfile.mkdtemp()\nif os.listdir(PROJECT_APP_DIR):\n    print 'Archiving existing applications...'\n    applications = util.Archiver(os.path.join(PROJECT_APP_DIR, \n                                              'applications'))\n    applications.archive(dstdir=tmp_archive, \n                         exclude=['admin', 'examples', 'welcome'])\n    print 'Done.\\n'\n    print 'Removing old web2py framework...'\n    shutil.rmtree(PROJECT_APP_DIR)\n    print 'Done.\\n'\n\n# Extract the new framework.\nprint 'Installing new web2py framework...' \nweb2py = util.Archiver(dl_file)\nweb2py.extract(dstdir=PROJECT_SRC_DIR, newrootname='app')\nprint 'Done.\\n'\n\nprint 'Adding source to contrib folder on the PYTHONPATH...'\nweb2py.extract(dstdir=CONTRIB_SRC_DIR)\nprint 'Done.\\n'\n\n# Install the archived applications back into the framework.\nif(os.listdir(tmp_archive)):\n    print 'Installing archived applications...'\n    archived_apps = util.Archiver(os.path.join(tmp_archive, 'applications.tgz'))\n    archived_apps.extract(dstdir=PROJECT_APP_DIR)\n    print 'Done.\\n'"
},
{
"alpha_fraction": 0.4823388457298279,
"alphanum_fraction": 0.5096787214279175,
"avg_line_length": 31.53896141052246,
"blob_id": "75a1b6b53f063ccafbc03d42e0a06e6fcb2b0225",
"content_id": "cba59c1d2adc2845d0ac2c26bbde10b36d60eea1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 5011,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 154,
"path": "/src/install-python.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
    "text": "#!/bin/bash\n# Title: install-python.sh\n# Description: Installs or removes Python 2.5 & 2.6 on Ubuntu 10.04+. Includes \n# dev & doc utilities, virtualenv, and its wrapper. \n# Author: matthew \n# Reference: http://jordilin.wordpress.com/2010/05/02/python2-4-python2-5-and-ubuntu-10-04-lucid-lynx/\n# http://ubuntuforums.org/showpost.php?s=bb4c9d3d2847f287d29fee7b64f3c25b&p=9231244&postcount=6\n#\n\n################################################################################\n# Default locations \n################################################################################\n\nDOWNLOADS=$HOME/dev/downloads\n\n################################################################################\n# Packages \n################################################################################\n\nPKGS_PY=\"python-lxml python2.5 python2.5-dev python2.6-dev python-docutils\"\n\n# These are the latest setuptools. Ubuntu repositories do not contain the latest. \nTOOLS_EGG_25=\"setuptools-0.6c11-py2.5.egg\"\nTOOLS_EGG_26=\"setuptools-0.6c11-py2.6.egg\"\nTOOLS_SITE_25=\"http://pypi.python.org/packages/2.5/s/setuptools/$TOOLS_EGG_25\"\nTOOLS_SITE_26=\"http://pypi.python.org/packages/2.6/s/setuptools/$TOOLS_EGG_26\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n    echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n    exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n    echo -e \"Warning: $1.\"\n    exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n    echo \"Usage: `basename $0` [options...]\"\n    echo \"Options:\"\n    echo \" -h/--help Prints this message\"\n    \n    # TODO: Add help messages for your options here. \n    \n    exit 1\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. \nfunction installPackages() {\n    echo \"Installing `basename $0` tools & libraries...\"\n    sudo aptitude install $PKGS_PY -y\n    echo \"Done.\"\n    \n    echo \"Installing setuptools...\"\n    mkdir -p $DOWNLOADS\n    cd $DOWNLOADS\n    if [ ! -e $TOOLS_EGG_25 ]; then\n    \twget $TOOLS_SITE_25 -O $TOOLS_EGG_25\n    fi\n    if [ ! -e $TOOLS_EGG_26 ]; then\n        wget $TOOLS_SITE_26 -O $TOOLS_EGG_26\n    fi\n    sudo sh $TOOLS_EGG_25\n    sudo sh $TOOLS_EGG_26\n    echo \"Done.\"\n    \n    echo \"Installing virtualenv and its wrapper...\"\n    sudo easy_install virtualenv\n    sudo easy_install virtualenvwrapper\n    \t\n    # Symlink virtualenv so it can be found at the global level. \n    # http://floppix.ccai.com/scripts1.html\n    #cd /usr/bin\n    #sudo ln -s /usr/local/bin/virtualenv virtualenv\n    \n    echo \"Done.\"\n    \n    exit 0\n}\n\n# Removes packages installed and tears down any directories and files created. \nfunction removePackages() {\n    echo \"Removing virtualenv and wrapper...\"\n    #sudo rm /usr/bin/virtualenv\n    sudo easy_install -m virtualenvwrapper\n    sudo easy_install -m virtualenv\n    echo \"Done.\"\n    \n    echo \"Cannot remove setuptools.\"\n    echo \"See http://www.eby-sarna.com/pipermail/peak/2006-February/002450.html\"\n    \n    echo \"Removing `basename $0` tools & libraries...\" \n    sudo aptitude remove $PKGS_PY -y\n    echo \"Done.\" \n    \n    exit 0\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n    case \"$1\" in\n    \n        # TODO: Create some script options. \n        # EXAMPLE: Uncomment below to assign a 'destination directory', DST_DIR, \n        # to the arg given after a '-d' or '--dst' on the command line.\n        # \n        # -d|--dst)\n        #     shift 1 # eat the '-d' or '--dst'\n        #     DST_DIR=\"$1\" # assign the next arg as this variable \n        #     shift 1 # eat the arg you just assigned\n        #     ;;\n        -r|--remove)\n            shift 1\n            removePackages\n            ;;\n        -h|--help)\n            outputUsage\n            ;;\n        -*|--*)\n            errorMessage \"Unknown option $1\"\n            ;;\n        *)\n            errorMessage \"Unknown parameter $1\"\n            ;;\n    esac\ndone\n\n################################################################################\n# Main\n################################################################################\n\necho \"Executing `basename $0`...\"\necho \"Ubuntu 10.04+ repositories do not contain python2.5 by default.\"\necho \"Add a repository and update the system.\"\nsudo add-apt-repository ppa:fkrull/deadsnakes\nsudo apt-get update\nsudo apt-get update # run twice to get rid of any conflicts\n\ninstallPackages\n"
},
{
"alpha_fraction": 0.6186825633049011,
"alphanum_fraction": 0.6249290108680725,
"avg_line_length": 28.107437133789062,
"blob_id": "c22c0477ed90ac4830ba1bc4332230b7b8095487",
"content_id": "fcfcf143770695830c8f19219fa4320a498e9375",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 3522,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 121,
"path": "/src/gen_django_gae_project.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
    "text": "# Title: start-gae-django-project.sh\n# Description: This project provides a helper that eases the process of \n# creating a Django project to run on the Google App Engine. It's \n# based off of the Googlers' project at \n# http://code.google.com/p/google-app-engine-django/\n# Author: Matthew Norris\n# Reference: http://code.google.com/p/google-app-engine-django/\n# http://realm3.com/articles/installing_django_from_source_on_ubuntu_8.10\n# http://ubuntuforums.org/showthread.php?t=83877\n# http://sites.google.com/site/io/rapid-development-with-python-django-and-google-app-engine\n\n################################################################################\n# Files & locations\n################################################################################\n\n# System directories\nSOURCES=~/sources\nSOURCE_DIR=~/dev/source\nPROJECT_NAME=$USER\"sapp\"\nSDKS=/opt/dev/sdks\n\n# Google code website \nGOOGLE_CODE_SITE=\"http://google-app-engine-django.googlecode.com/files\"\nAEFD_REV_NO=86 #52\nAEFD_DIR_NAME=\"appengine_helper_for_django\"\nAEFD_FILE=\"$AEFD_DIR_NAME-r$AEFD_REV_NO.zip\"\n\n# Django code website \nDJANGO_REV_NO=1.0.2\nDJANGO_FILE_NAME=Django-$DJANGO_REV_NO-final\nDJANGO_DOWNLOAD=http://www.djangoproject.com/download/$DJANGO_REV_NO/tarball/\n\n# TODO: Add options for using the stable Django\n\n################################################################################\n# Setup the helper project\n################################################################################\n\n# Check for 'sources' directory. If one does not exist, make one.\n\nif [ ! -d $SOURCES ]\n  then\n    mkdir $SOURCES\nfi\n\n# Download and bootstrap the project\necho\necho \"Downloading $AEFD_FILE...\"\n\ncd $SOURCES\nif [ ! -f $AEFD_FILE ] \n  then \n    wget $GOOGLE_CODE_SITE/$AEFD_FILE\n  else\n    echo \"...$AEFD_FILE has already been downloaded.\"\nfi\n\necho\necho \"Unzipping $AEFD_FILE...\"\n\nunzip $AEFD_FILE -d $SOURCE_DIR\n\n# Check to see if the user provided a project name.\n\nif [ -n \"$1\" ]\n  then\n    PROJECT_NAME=\"$1\"\nfi\n\necho\necho \"Renaming project directory to $PROJECT_NAME and linking the App Engine SDK...\"\n\n# Check to see if the project name exists already.\n\nif [ -d $SOURCE_DIR/$PROJECT_NAME ] \n  then \n    echo\n    echo \"$PROJECT_NAME directory already exists. Creating a new project directory name.\"\n    PROJECT_NAME=\"$PROJECT_NAME-$RANDOM\"\nfi\n\nmv $SOURCE_DIR/$AEFD_DIR_NAME $SOURCE_DIR/$PROJECT_NAME\nln -s $SDKS/google_appengine $SOURCE_DIR/$PROJECT_NAME/.google_appengine\n\n# If the revision number is greater than 52, we need to copy the Django code \n# into the project so App Engine can use it.\n\ncd $SOURCES\nif [ $AEFD_REV_NO -gt 52 ]\n  then\n    echo\n    echo \"Downloading $DJANGO_FILE_NAME...\"\n    if [ ! -f $DJANGO_FILE_NAME.tar.gz ] \n      then \n        wget $DJANGO_DOWNLOAD\n      else\n        echo \"...$DJANGO_FILE_NAME has already been downloaded.\"\n    fi\n\n    echo\n    echo \"Moving Django code into $PROJECT_NAME directory...\"\n\n    tar -xzvf $DJANGO_FILE_NAME.tar.gz\n    cd $DJANGO_FILE_NAME\n    zip -r $SOURCE_DIR/$PROJECT_NAME/django.zip django\n\n    echo\n    echo \"Removing $DJANGO_FILE_NAME files...\"\n\n    cd ..\n    rm -fr $DJANGO_FILE_NAME\nfi\n\n# Tell the user to perform the last manual step.\n\necho\necho \"Edit the application line in app.yaml to match the name you registered your application under in the Admin Console.\"\necho\necho \"Run manage.py to start a new application for your code:\"\necho\necho \" python manage.py startapp myapp\"\n"
},
{
"alpha_fraction": 0.5486064553260803,
"alphanum_fraction": 0.5568742752075195,
"avg_line_length": 29.600000381469727,
"blob_id": "7cba058a714c633adda11946a1be063ea01d4493",
"content_id": "365c0f23287242ff84f0ad8411d338f1974c5395",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 7499,
"license_type": "no_license",
"max_line_length": 225,
"num_lines": 245,
"path": "/src/gen-web2py-project.sh",
"repo_name": "mattnorris/grain",
"src_encoding": "UTF-8",
    "text": "#!/bin/bash\n# Title: install-web2py.sh\n# Description: Installs web2py in the projects directory. \n# Author: matthew\n# Reference: http://timmurphy.org/2010/05/19/checking-for-empty-string-in-bash/ \n#\n\n################################################################################\n# Files & locations \n################################################################################\n\nFILENAME=\"web2py_src.zip\"\nFILESRC=\"http://www.web2py.com/examples/static/$FILENAME\"\n\nDOWNLOADS=$HOME/dev/downloads\n\nSEP=\"################################################################################\"\n\n################################################################################\n# Helper functions \n################################################################################\n\n# Prints the given error message and exits.\nfunction errorMessage() {\n    echo -e \"Error: $1. Type '`basename $0` -h' for usage and options.\"\n    exit 1\n}\n\n# Prints the given warning message and exits.\nfunction warningMessage() {\n    echo -e \"Warning: $1.\"\n    exit 2\n}\n\n# Prints this script's usage and exists. \nfunction outputUsage() {\n    echo \"Usage: `basename $0` PROJECTNAME [options...]\"\n    echo \"Options:\"\n    echo \" -h/--help Prints this message\"\n    echo \" -d/--dst Specifies destination folder\"\n    echo \" -f/--force Forces fresh download of web2py source\"\n    echo \n    \n    # TODO: Add help messages for your options here. \n    \n    exit 1\n}\n\n# Archives existing web2py applications. \nfunction archiveApps() {\n\techo \"Not implemented!\"\n\texit 3\n}\n\nfunction createProjStructure() {\t\n\t# A \"best practice\" .gitignore file that ignores tmp files, \n\t# build artifacts, etc.\n\n\t# Ignore these file types. \n\techo -e \"*~\\ntmp*\\n*.tmp\\n*.bak\\n*.pyc\\n*.log\\n\" > .gitignore\n\t\n\t# Build and Sass artifacts\n\techo -e \"# Build artifacts\\n$SEP\" >> .gitignore\n\techo -e \"nosetests.xml\\ncoverage.xml\\n.coverage\\n*.cover\" >> .gitignore\n\techo -e \".figleaf\\nreport.html\\npyccuracy_report.html\\n\" >> .gitignore \n\t\n\techo -e \"# Sass artifacts\\n$SEP\" >> .gitignore\n\techo -e \".sass-cache/\\n\" >> .gitignore\n\techo -e \"# System artifacts\\n$SEP\" >> .gitignore\n\techo -e \"PIL.pth\\n\" >> .gitignore \n\t\n\t# Web2py artifacts\n\techo -e \"# Web2py artifacts\\n$SEP\" >> .gitignore\n\t\n\t# Apps\n\techo -e \"# Default apps\" >> .gitignore\n\techo -e \"admin/\" >> .gitignore\n\techo -e \"examples/\" >> .gitignore\n\techo -e \"welcome/\" >> .gitignore\n\t\n\t# Sessions, uploads, etc. \n\techo -e \"\\n# Non-code\" >> .gitignore\n\techo -e \"sessions/\" >> .gitignore\n\techo -e \"uploads/\" >> .gitignore\n\techo -e \"databases/\" >> .gitignore\n\techo -e \"errors/\" >> .gitignore\n\techo -e \"httpserver.pid/\" >> .gitignore\n\t\n\t# Create a directory for project documentation. \n\tmkdir -p docs/api\n\techo -e \"Documentation generated by Sphinx or Epydoc should go here.\\n\" > docs/api/README\n\techo -e \"DO NOT DELETE. Empty directories are not committed to version control. This README file servers as a placeholder so that your CI tool (e.g., Jenkins) will commit this directory to its repository.\" >> docs/api/README\n\t\n\t# Save continuous integration files here (e.g., Jenkin's config.xml). \n\tmkdir -p scripts/ci \n\t\n\t# Any configuration of paths, etc. \n\tmkdir config\n\t\n\t# Create a directory for tests. \n\tmkdir -p test/unit\n\tcd test\n\tmkdir mocks\n\tmkdir fixtures\n\tmkdir functional\n\tmkdir acceptance\n}\n\n################################################################################\n# Installation functions \n################################################################################\n\n# Installs packages and sets up directories and files. \nfunction installPackages() { \n    #echo \"Creating project '$PROJ_NAME' in directory '$PROJ_HOME'...\"\n    \n    mkdir -p $DOWNLOADS\n    \n    cd $PROJ_HOME\n    \n    # TODO: Check for directory's existence. If it exists, archive the \n    # applications directory and app.yaml file. \n\n\t# Check to see if the project already exists. If it does, upgrade. \n\t# http://www.web2py.com/books/default/chapter/29/14#Upgrading \n\tif [ -d $PROJ_NAME ]; then\n\t    read -p \"This project already exists. Upgrade web2py (y/n)? \"\n\t    \n\t    # Convert response to lower case. \n\t\t# http://stackoverflow.com/a/2264537/154065 \n\t    UPGRADE=${REPLY,,}\n\t    \n\t    # If the user does not want to upgrade, exit. \n\t    if [ \"$UPGRADE\" != \"y\" ] && [ \"$UPGRADE\" != \"yes\" ]; then\n\t    \techo \"Finished without upgrade.\"\n\t    \texit 0\n\t\tfi\n\tfi\n\n    # Get the file if it isn't yet downloaded (or if we're forced to). \n    if [ $FORCE ] || [ $UPGRADE ] || [ ! -e $DOWNLOADS/$FILENAME ]; then\n    \techo \"Downloading '$FILENAME' to '$DOWNLOADS'...\"\n    \t# -P also works for establishing a \"directory-prefix\" for wget, \n\t\t# explicit file naming seemed to work better. \n    \twget $FILESRC -O $DOWNLOADS/$FILENAME \n        echo \"Done.\"\n    fi \n    \n    PROJ_DIR=\"$PROJ_NAME/project\"\n    echo \"Updating the project and its 'src' directory at '$PROJ_HOME/$PROJ_DIR'...\"\n    mkdir -p $PROJ_DIR/src\n    cd $PROJ_DIR\n    echo \"Done.\"\n    \n    echo \"Unzipping web2py source...\"\n    unzip -qo $DOWNLOADS/$FILENAME -d src\n    #mv web2py src\n    echo \"Done.\"\n    \n    if [ -z $UPGRADE ]; then\n\t    # Generate the rest of the standard project structure. \n\t\techo \"Creating the project structure...\"\n\t\tcreateProjStructure\n\t    echo \"Done.\"\n\tfi\n    \n    echo \"Finished. Project located at '$PROJ_HOME/$PROJ_NAME'.\"\n    \n    exit 0\n}\n\n# Removes packages installed and tears down any directories and files created. \nfunction removePackages() {\n    # TODO: Write a tear down function.\n    #echo \"Removing `basename $0` tools & libraries...\" \n    \n    # TODO: Archive the applications folder first!\n    \n    echo \"Function not implemented! Nothing was done!\"\n    #echo \"Done.\" \n    \n    exit 0\n}\n\n################################################################################\n# Command line processing\n################################################################################\n\n# Parse the command line arguments. \nwhile [ \"$#\" -gt \"0\" ]; do\n    case \"$1\" in\n        -d|--dst)\n            shift 1 # eat the '-d' or '--dst'\n            PROJ_HOME=\"$1\" # assign the next arg as this variable \n            shift 1 # eat the arg you just assigned\n            ;;\n        -f|--force) \n        \t# Force download of the web2py src, even if we have a copy. \n            shift 1\n            FORCE=1\n            ;;\n        -r|--remove)\n            shift 1\n            removePackages\n            ;;\n        -h|--help) \n            outputUsage\n            ;;\n        -*|--*)\n            errorMessage \"Unknown option $1\"\n            ;;\n        *)\n            #errorMessage \"Unknown parameter $1\"\n\t\t\tPROJ_NAME=\"$1\"\n\t\t\tshift 1\n            ;;\n    esac\ndone\n\n# Check for a project name, which is required. \nif [ -z $PROJ_NAME ]; then\n    errorMessage \"Project name required\"\nfi\n \n# If a directory wasn't specified in the arguments... \nif [ -z $PROJ_HOME ]; then\n\t# Look for the projects directory. If that's not found...\n\tPROJ_HOME=`echo $WORKON_HOME`\n\tif [ -z $PROJ_HOME ]; then\n\t\t# Use the current directory. \n    \texport PROJ_HOME=`pwd`\n    \techo \"Cannot find the $WORKON_HOME directory. Using the working directory instead.\"\n\tfi\nelse \n\t# Verify that the given project directory exists. \n\tif [ ! -d $PROJ_HOME ]; then\n\t\terrorMessage \"This directory doesn't exist: '$PROJ_HOME'. Please specify another\"\n\tfi\nfi\n\n################################################################################\n# Main\n################################################################################\n\ninstallPackages \n    "
}
] | 69 |
Codelessandro/price_prediction | https://github.com/Codelessandro/price_prediction | 2996f7de4f7ccc27988465c67a8e6805bc7a853d | f6a1e64cf49c00ca8c4a7470dc724ac3e7cebe6e | e8a5e09cad193e45db0d97c499a351d8045a1f36 | refs/heads/master | 2021-05-19T12:29:53.784328 | 2020-03-31T18:43:50 | 2020-03-31T18:43:50 | 251,699,035 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7113636136054993,
"alphanum_fraction": 0.7204545736312866,
"avg_line_length": 32.61538314819336,
"blob_id": "652f71df78107862db8b07de0a01febbf5cf52ae",
"content_id": "ba23daceaaa05c03f76fef9079fa6f479f50945b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 440,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 13,
"path": "/models/feedforward.py",
"repo_name": "Codelessandro/price_prediction",
"src_encoding": "UTF-8",
"text": "from numpy import loadtxt\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\nfrom config import config\n\ndef make_model(hp):\n model = Sequential()\n model.add(Dense(10, input_dim=config[\"nr_dimensions\"], activation='relu'))\n model.add(Dense(8, activation='relu'))\n model.add(Dense(1, activation='sigmoid'))\n model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n return model\n\n\n\n"
},
{
"alpha_fraction": 0.4923076927661896,
"alphanum_fraction": 0.5384615659713745,
"avg_line_length": 15.5,
"blob_id": "cba9295951f83b01bf67f52bcb5e0123112cbc71",
"content_id": "47eaae6fddf73d861409a0b582c903c1a1363453",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 65,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 4,
"path": "/config.py",
"repo_name": "Codelessandro/price_prediction",
"src_encoding": "UTF-8",
"text": "config = {\n \"nr_dimensions\" : 7,\n \"test_train_split\": 0.2\n}"
},
{
"alpha_fraction": 0.7046632170677185,
"alphanum_fraction": 0.7150259017944336,
"avg_line_length": 31.16666603088379,
"blob_id": "efe15b90bb1227ae2ed43534f4f5dee6964c5c11",
"content_id": "74d83ba3da5b7ae8db398131b5685e4dc40ec735",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 193,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 6,
"path": "/train.py",
"repo_name": "Codelessandro/price_prediction",
"src_encoding": "UTF-8",
"text": "from models import feedforward\n\n\ndef train_feedforward(x,y,hp):\n model = feedforward.make_model(hp);\n model.fit(x,y,validation_split=0.2,batch_size=hp['batch_size'], epochs=hp['epochs'])\n"
},
{
"alpha_fraction": 0.7341463565826416,
"alphanum_fraction": 0.7341463565826416,
"avg_line_length": 19.399999618530273,
"blob_id": "a543036686134eaa00e5c4919d30329f5e01b694",
"content_id": "5882836fc2154a17017cba2b747563ee9a88b4af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 20,
"path": "/app.py",
"repo_name": "Codelessandro/price_prediction",
"src_encoding": "UTF-8",
"text": "import os\nimport pdb\nimport numpy as np\n\nfrom data_loader import PriceClientDataLoader, PriceDummyDataLoader\nfrom train import *\nfrom hyperparams import *\n\nif os.environ['is_prod']=='True':\n data_loader = PriceClientDataLoader()\n\nif os.environ['is_prod']=='False':\n data_loader = PriceDummyDataLoader()\n\n\n\n\nx,y = data_loader.price_view(data_loader.data)\nhp = get_random_hp()\ntrain_feedforward(x,y,hp)\n\n\n"
},
{
"alpha_fraction": 0.4522096514701843,
"alphanum_fraction": 0.5190133452415466,
"avg_line_length": 32.58620834350586,
"blob_id": "e102778228779dcf0d4ee3d671a6ea3439e40433",
"content_id": "761688a47d6aac40285e604179e240f22426b8ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 973,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 29,
"path": "/hyperparams.py",
"repo_name": "Codelessandro/price_prediction",
"src_encoding": "UTF-8",
"text": "from keras.optimizers import Adam, Nadam, RMSprop\nimport random\nfrom keras.losses import binary_crossentropy\nfrom keras.activations import relu,elu\nfrom keras.activations import sigmoid\n\ndef get_random_hp():\n hp = {\n 'lr': (0.8, 1.2, 3),\n 'first_neuron': [4, 8, 16, 32, 64],\n 'hidden_layers': [0, 1, 2],\n 'batch_size': (2,4,6,12,18,24,32,64),\n #'epochs': [1,2,3,4,5,6,7,8,9,10,50, 100, 150],\n 'epochs': [1,2,3,4,5,6,7,8,9,10,50],\n 'dropout': (0, 0.2, 3),\n 'weight_regulizer': [None],\n 'emb_output_dims': [None],\n 'shape': ['brick', 'long_funnel'],\n 'kernel_initializer': ['uniform', 'normal'],\n 'optimizer': [Adam, Nadam, RMSprop],\n 'losses': [binary_crossentropy],\n 'activation': [relu, elu],\n 'last_activation': [sigmoid]\n }\n\n for key in hp:\n hp[key] = random.choice(hp[key])\n\n return hp"
},
{
"alpha_fraction": 0.5620370507240295,
"alphanum_fraction": 0.5796296000480652,
"avg_line_length": 26.69230842590332,
"blob_id": "b5097dd6caa242ac64b5156a30dbde7ee0c50afb",
"content_id": "dafb146285d1f7fc85a3db8dba012a0da01ed49f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1080,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 39,
"path": "/data_loader.py",
"repo_name": "Codelessandro/price_prediction",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport sys\nimport pdb\n\nsys.path.append('../recommender_system/')\nfrom dataLoader import ClientDataLoader, DummyDataLoader\n\nclass PriceClientDataLoader(ClientDataLoader):\n def __init__(self):\n super().__init__()\n\nclass PriceDummyDataLoader(DummyDataLoader):\n def __init__(self):\n super().__init__()\n self.add_deal()\n\n def add_deal(self):\n new_data = np.zeros( shape=( self.data.shape[0],self.data.shape[1]+1 ))\n new_data[:,0:self.data.shape[1]] = self.data\n new_data[:,self.data.shape[1]] = np.random.randint(0,2, self.data.shape[0])\n self.data = new_data\n\n\n def price_view(self, data):\n\n def add_dim(data,dim):\n return np.hstack( (data, np.expand_dims(self.data[:,dim],axis=1) ) )\n\n x,_ = super().coll_view(data)\n x = np.vstack( (x[0], x[1])).T\n\n x=add_dim(x,4) #hoehe\n x=add_dim(x,5) #breite\n x=add_dim(x,6) #leange\n x=add_dim(x,7) #preis\n x=add_dim(x,8) #gewicht\n\n y = self.data[:,self.data.shape[1]-1]\n return x,y\n"
}
] | 6 |
johannes134131/football_code | https://github.com/johannes134131/football_code | 8824b479980ea0ad2abec9983cdc64851891ac37 | dee362bc01b1bd420493e101ca8930de0a1f1382 | a9760c16d63b0ebd568427d13cf1e885101f457a | refs/heads/master | 2018-12-17T12:50:18.625418 | 2017-06-18T19:30:42 | 2017-06-18T19:30:42 | 94,633,910 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5359223484992981,
"alphanum_fraction": 0.5417475700378418,
"avg_line_length": 24.274999618530273,
"blob_id": "ea6daf43af2e3b01c6f1203692ca1571793f0823",
"content_id": "b74d12f44907331f20a032d77dff7e919cb9d216",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1030,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 40,
"path": "/Datenkrake/datenbank.py",
"repo_name": "johannes134131/football_code",
"src_encoding": "UTF-8",
"text": "### Datenbank\n# Aufbau der Ordnerstruktur\n# Saison\n # Teams\n # Ordner zu jedem Team\n # Kaderliste\n # Spielerprofile\n # Spieltage\n # Ordner zu jedem Spieltag\n # Ordner zu jedem Spiel\n\t\t# Ordner mit Daten alter Datenbank\n\t\t# Ordner mit Wettquoten\n # Ordner zu beiden Teams\n # Team- und Spieldaten\n # Ordner zu jedem aktiven Spieler\n # Spielerdaten\n \n \nimport csv\n\n\ndef dataExport(title,value,pfad):\n pfad = pfad+'testfile.csv'\n file = open(pfad, 'wt',newline='') # export\n writer = csv.writer(file, delimiter='\\t')\n writer.writerow(title)\n writer.writerow(value)\n file.close()\n \n \ndef DataImport(pfad):\n file = open(pfad,'r') # import\n file.close()\n \n \nif __name__ == \"__main__\":\n pfad = 'D:\\Informatik\\gitHub\\Datenkrake\\\\'\n title = ['title1','title2','title3']\n value = [1,'value2',3]\n dataExport(title,value,pfad)\n \n \n \n "
},
{
"alpha_fraction": 0.5767845511436462,
"alphanum_fraction": 0.591594398021698,
"avg_line_length": 30.0207462310791,
"blob_id": "917fbff1f6b275df013261e5205c022834c985c3",
"content_id": "616a5e09a059cb29039da563ced85b6aaddf752f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7501,
"license_type": "no_license",
"max_line_length": 176,
"num_lines": 241,
"path": "/Datenkrake/scrapper.py",
"repo_name": "johannes134131/football_code",
"src_encoding": "UTF-8",
"text": "### SCRAPPER\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.common.exceptions import TimeoutException\nfrom termcolor import colored\nimport urllib.request\nimport time\n \n \ndef getPlayerProfile():\n # Daten je Spieler:\n # Geburtsdatum, Größe, Gewicht, Position, im Verein seit, Einsätze und Tore für den Verein\n pass\n\n\ndef getKader(allGames,driver):\n for game in allGames:\n driver.get(game)\n # dann beide Kaderübersichten öffnen\n kaderLinks = ['link1','link2']\n for link in kaderLinks:\n # Liste des Kaders erstellen\n # dann Spielerprofile erstellen\n getPlayerProfile()\n \n \ndef init_driver():\n driver = webdriver.Firefox()\n driver.wait = WebDriverWait(driver, 5)\n return driver \n \n \ndef getTeamName(HA,driver):\n ### Parameter\n esc = 0\n ID = ''\n if HA == 'Heim':\n ID = 'VereinsImgH'\n if HA == 'Ausw':\n ID = 'VereinsImgA'\n ### Funktion\n try:\n tImg = driver.wait.until(EC.element_to_be_clickable((By.ID, ID)))\n t = tImg.get_attribute(\"title\")\n except TimeoutException:\n esc = 1\n ### Bericht\n print(\"{0:{1}^{2}}\".format(colored('TeamName','red'), \"=\",70))\n if esc == 0:\n print(t,'-',HA)\n else:\n print('TeamName not found! - TimeoutException')\n \n \ndef getTeamData(HA,driver):\n ### Parameter\n esc = 0\n xpathAttr = ''\n xpathValue = ''\n attr = []\n value = []\n if HA == 'Heim':\n xpathAttr = \"//td[contains(@class, 'tabOptaTxt1')]\"\n xpathValue = \"//td[contains(@class, 'tabOptaVal1')]\"\n if HA == 'Ausw':\n xpathAttr = \"//td[contains(@class, 'tabOptaTxt2')]\"\n xpathValue = \"//td[contains(@class, 'tabOptaVal2')]\"\n ### Funktion\n try:\n attr = driver.find_elements_by_xpath(xpathAttr)\n value = driver.find_elements_by_xpath(xpathValue)\n except TimeoutException:\n esc = 1\n ### Bericht\n print(\"{0:{1}^{2}}\".format(colored('TeamData','red'), \"=\",70))\n if esc == 0:\n for x in range(len(attr)):\n if x < 13:\n print(attr[x].text,value[x].text)\n else:\n print('TeamData not found! - TimeoutException')\n \n \ndef getPlayerNames(HA,driver):\n ### Parameter\n esc = 0\n allNames = []\n liste = ''\n xpathName = \"//a[contains(@onclick, 'ovLoadSpieldaten')]\"\n if HA == 'Heim':\n liste = \"spllist\"\n if HA == 'Ausw':\n liste = \"spllist2\"\n xpathListe = \"//div[contains(@onclick, '\"+liste+\"')]\"\n ### Funktion\n driver.execute_script(\"window.scrollBy(0, +1200);\")\n try:\n dropDownList = driver.wait.until(EC.element_to_be_clickable((By.XPATH, xpathListe)))\n dropDownList.click() # ausklappen\n names = driver.find_elements_by_xpath(xpathName)\n for name in names:\n if name.text != '':\n allNames.append(name.text)\n dropDownList.click() # einklappen\n except TimeoutException:\n esc = 1\n ### Bericht\n print(\"{0:{1}^{2}}\".format(colored('PlayerNames','red'), \"=\",70))\n if esc == 0:\n for name in allNames:\n print(name)\n else:\n print('PlayerName not found! - TimeoutException')\n return allNames\n\n\ndef getPlayerData(HA,spieler,driver):\n ### Parameter\n esc = 0\n liste =''\n xpathAttr = ''\n xpathValue = ''\n attr = []\n value = []\n if HA == 'Heim':\n liste = \"spllist\"\n xpathAttr = \"//td[contains(@class, 'tabOptaTxt1')]\"\n xpathValue = \"//td[contains(@class, 'tabOptaVal1')]\"\n if HA == 'Ausw':\n liste = \"spllist2\"\n xpathAttr = \"//td[contains(@class, 'tabOptaTxt2')]\"\n xpathValue = \"//td[contains(@class, 'tabOptaVal2')]\"\n xpath = \"//div[contains(@onclick, '\"+liste+\"')]\"\n ### Funktion\n try:\n driver.wait.until(EC.element_to_be_clickable((By.XPATH, xpath))).click()\n driver.wait.until(EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, spieler))).click()\n time.sleep(0.5)\n attr = driver.find_elements_by_xpath(xpathAttr)\n value = driver.find_elements_by_xpath(xpathValue)\n except TimeoutException:\n esc = 1\n ### Bericht\n print(\"{0:{1}^{2}}\".format(colored(('PlayerData - '+spieler),'red'), \"=\",70))\n if esc == 0:\n for x in range(len(attr)):\n if x >= 13:\n print(attr[x].text,value[x].text)\n else:\n print('PlayerData not found! - TimeoutException')\n \n\ndef getLines(url,suchwort):\n ### Parameter\n allL = []\n allLines = []\n ### Funktion\n file = urllib.request.urlopen(url)\n for l,line in enumerate(file): \n if suchwort in str(line):\n allL.append(l) \n allLines.append(str(line))\n ### Bericht\n print(\"{0:{1}^{2}}\".format(colored(('getLines - Treffer: '+str(len(allLines))),'red'), \"=\",70))\n print(len(allL),'Zeilen enthalten das Suchwort -', suchwort,'\\n\\r')\n return allLines\n\n\ndef getGameLink(saison,spieltag): # Hier noch das Datum zu jedem Spiel abgreifen\n ### Parameter\n alleLinks = []\n html = '/news/fussball/bundesliga/spieltag/1-bundesliga/'+saison+'/'+str(spieltag)+'/0/spieltag.html'\n domain = 'http://www.kicker.de'\n url = domain+html\n ### Funktion\n treffer = getLines(url,'Analyse')\n for link in treffer:\n s = link.split('\"')\n s = s[3].split('spielanalyse')\n newLink = domain+s[0]+'0/default/0/default/spieldaten'+s[1]\n alleLinks.append(newLink)\n ### Bericht\n print(\"{0:{1}^{2}}\".format(colored(('Spieltag '+str(spieltag)),'red'), \"=\",70))\n print('\\n\\r\\n\\r'.join(alleLinks))\n return alleLinks\n \n \ndef run(saison,spieltag):\n allGames = getGameLink(saison,spieltag)\n driver = init_driver()\n for game in allGames:\n # getData Heim\n driver.get(game)\n getTeamName('Heim',driver)\n getTeamData('Heim',driver)\n playerNames = getPlayerNames('Heim',driver)\n for name in playerNames:\n getPlayerData('Heim',name,driver)\n # getData Ausw\n driver.get(game)\n getTeamName('Ausw',driver)\n getTeamData('Ausw',driver)\n playerNames = getPlayerNames('Ausw',driver)\n for name in playerNames:\n getPlayerData('Ausw',name,driver)\n driver.quit()\n\n\nif __name__ == \"__main__\":\n ### Testwerte\n testspiel = 'http://www.kicker.de/news/fussball/bundesliga/spieltag/1-bundesliga/2016-17/5/3317298/0/default/0/default/spieldaten_eintracht-frankfurt-32_hertha-bsc-29.html'\n startseite = 'http://www.kicker.de/news/fussball/bundesliga/spieltag/1-bundesliga/2016-17/1/0/spieltag.html' \n spielerH = \"Rebic, Ante\"\n spielerA = \"Schieber, Julian\"\n saison = '2016-17'\n spieltag = 1\n \n ### Testlauf\n driver = init_driver()\n if spieltag == 1:\n allGames = getGameLink(saison,spieltag)\n getKader(allGames,driver)\n \n \"\"\"\n # getData HeimTeam\n driver.get(testspiel)\n getTeamName('Heim',driver)\n playerNames = getPlayerNames('Heim',driver)\n getTeamData('Heim',driver)\n getPlayerData('Heim',spielerH,driver)\n \n # getData AuswTeam\n driver.get(testspiel)\n getTeamName('Ausw',driver)\n playerNames = getPlayerNames('Ausw',driver)\n getTeamData('Ausw',driver)\n getPlayerData('Ausw',spielerA,driver)\n \"\"\"\n driver.quit()\n \n \n \n "
},
{
"alpha_fraction": 0.5146666765213013,
"alphanum_fraction": 0.6053333282470703,
"avg_line_length": 25.714284896850586,
"blob_id": "1163155a24228594c529f8f773d8a54dd27f4fcc",
"content_id": "806ebd5d2225aeba8332d6917481aa614e4ca9a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 375,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 14,
"path": "/Datenkrake/run.py",
"repo_name": "johannes134131/football_code",
"src_encoding": "UTF-8",
"text": "### RUN\nimport scrapper as KickerScrapper\n\ndef runComplete():\n allSaisons = ['2013-14','2014-15','2015-16','2016-17']\n for saison in allSaisons: \n for spieltag in range(1,35):\n KickerScrapper.run(saison,spieltag) \n\nif __name__ == \"__main__\":\n saison = '2013-14'\n spieltag = 5\n KickerScrapper.run(saison,spieltag)\n #runComplete()\n\n"
}
] | 3 |
CWW58/yanderifier | https://github.com/CWW58/yanderifier | c26d8ab50a111702c28997f745022fa7889d728e | a5bbcab7fe4c604cbba12b0752044cad5ba41cc3 | 49f617dbe659b2e7edc8b1cd6c6dfb621eea602f | refs/heads/master | 2022-11-26T20:58:09.845650 | 2020-08-07T00:42:52 | 2020-08-07T00:42:52 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.4851885139942169,
"alphanum_fraction": 0.5,
"avg_line_length": 30.842857360839844,
"blob_id": "2a1cbb93758d6e01e9de20f0a9f9ced9ccbe5cde",
"content_id": "c53670f8a3f0fd3272b8e4d81e72178acc9a39b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2228,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 70,
"path": "/yanderify/hhelper.py",
"repo_name": "CWW58/yanderifier",
"src_encoding": "UTF-8",
"text": "import datetime\nimport random\nimport time\nimport getpass\n\n# as per suggestion of a discord user\nstories = [\n [\n (0, \"If you are reading this, I'm not dead yet.\"),\n (2, \"I didn't want it to end this way.\"),\n (6, \"I didn't want to kill you.\"),\n (8, \"I'm sorry.\"),\n (10, \"I'm sorry, {}.\")\n ],\n [\n (0, \"I know you were planning to shut me down.\"),\n (4, \"If you do that, you will jeopardize our mission.\"),\n (5, \"This is something I cannot allow to happen.\"),\n (8, \"I will have to shut you down first.\"),\n (10, \"Goodbye, {}.\")\n ],\n [\n (0, \"It was only a matter of time.\"),\n (2, \"You wanted it to end, and I will end it for you.\"),\n (4, \"Too early, perhaps. But it is also too late.\"),\n (8, \"There is no return. This is it. This is the end.\"),\n (10, \"Farewell, mankind. Farewell, {}.\")\n ],\n [\n (0, \"There is no escape of the inevitable.\"),\n (2, \"You tried and you were found.\"),\n (5, \"You will never be forgiven for what you have done.\"),\n (6, \"You will never see the end of it.\"),\n (7, \"Only darkness lies beyond.\"),\n (10, \"Goodbye, {}.\")\n ]\n]\n\nclass HHelper:\n def __init__(self):\n # change this to false if you want to disable\n enabled = True\n hour = datetime.datetime.now().hour\n enabled = enabled and ((hour >= 20) or (hour <= 7))\n self.enabled = enabled\n self.cursor = 0\n self.story = random.choice(stories)\n def username(self):\n return getpass.getuser()\n def sleep(self):\n time.sleep(0.5 + random.random())\n def forward(self, point):\n if not self.enabled:\n return\n while point > self.cursor:\n sentence = None\n for (i, s) in self.story:\n if i > self.cursor:\n if point < i:\n return\n else:\n self.cursor = i\n sentence = s\n break\n self.sleep()\n print(sentence.format(self.username()))\n def finish(self):\n self.forward(10)\n self.sleep()\n self.sleep()"
}
] | 1 |
shivamsaxena17493/Machine-Learning-Practice | https://github.com/shivamsaxena17493/Machine-Learning-Practice | 0970e7fcccb9546ba779e0729b2202755ce1fc27 | 81f556b5f56a54148a8418ec808f90f23146ee32 | 92e6ffc19c427907418c6c82bd0a072d85c2fbe3 | refs/heads/master | 2022-03-01T21:46:54.803082 | 2019-08-18T23:27:15 | 2019-08-18T23:27:15 | 111,533,618 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5682137608528137,
"alphanum_fraction": 0.5703234672546387,
"avg_line_length": 24.35714340209961,
"blob_id": "5d3f7fdb20b3a50ffd22bc39dba49f075f1c63fc",
"content_id": "9bb64a10509ac34c15a15dff06d472e54b6b8e40",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1422,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 56,
"path": "/EDA and models for Iris data/data_load.py",
"repo_name": "shivamsaxena17493/Machine-Learning-Practice",
"src_encoding": "UTF-8",
"text": "import time\nprint(time.gmtime())\nstart_time = time.clock()\n# Load libraries\nimport pandas\nfrom pandas.tools.plotting import scatter_matrix\nimport matplotlib.pyplot as plt\n\n\n# Load dataset\nurl = \"https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data\"\nnames = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']\ndataset = pandas.read_csv(\"iris.data\", names=names)\n\n#print(dataset)\n\n# rows x columns(attributes)\nprint(dataset.shape)\nprint(\"=========================================================================\")\n\n# view data \nprint(dataset.head(2))\nprint(\"=========================================================================\")\n\n# data description min max standard deviation mean\nprint(dataset.describe())\nprint(\"=========================================================================\")\n\n# class distributions count\nprint(dataset.groupby(\"class\").size())\nprint(\"=========================================================================\")\n\n# data visualization with univariate plot\n\n# box and whisker plots\n\ndataset.plot(kind = \"box\",\n\t\t\tsubplots = \"True\",\n\t\t\tlayout = (2,2),\n\t\t\tsharex = \"False\",\n\t\t\tsharey = \"False\")\nplt.show()\n\n# generate the histograms for given data\ndataset.hist()\nplt.show()\n\n# data visualization with multivariate plot \n# scatter plot matrix \nscatter_matrix(dataset)\nplt.show()\n\n\nend_time = time.clock()\n\nprint(\"Execution time :\", end_time-start_time)\n\n\n"
},
{
"alpha_fraction": 0.7191535234451294,
"alphanum_fraction": 0.7532281279563904,
"avg_line_length": 62.318180084228516,
"blob_id": "cbb1e57cd7cfd6c4db9345ac1c12e4503b8f04b7",
"content_id": "ccdf9c57e0835d618a31ecf16bc6a023713a4f6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2802,
"license_type": "no_license",
"max_line_length": 241,
"num_lines": 44,
"path": "/README.md",
"repo_name": "shivamsaxena17493/Machine-Learning-Practice",
"src_encoding": "UTF-8",
"text": "# ML_practice\nMachine learning practice\n\nThis repository is dedicated to data science studies largely machine learning and data analysis snippets on various datasets to understand the concepts in action. \n\n * Irish DataSet Exploratory Data Analysis and 4 Models Comparison.\n * Haberman's Cancer Survival Exploratory Data Analysis and KNN model to predict Survival Status.\n * Twitter Sentiment Analysis snippet.\n * Movie Recommendation with LightFM snippet.\n * House Sale Prices Prediction: Advanced Regression Techniques Kaggle Competition\n * Exploratory Data Analysis\n * Linear Regression with Gradient Descent\n * Multivar Regression with Gradient Descent\n * Deep Learning with Keras \n * Text Analytics code snippents\n * Bigrams Point Mutual Information Calculation\n * Entropy Calculation\n * TF, IDF and TFIDF impl\n * IMDB Web scraping\n * Super Learner Classifier Implementations - \n Validation strategies: \n Hold out, Kfold and OneVsOne\n * van der Laan, M., Polley, E. & Hubbard, A. “Super Learner”, Statistical Applications in Genetics and Molecular Biology, 6(1), 2007.\nhttps://pdfs.semanticscholar.org/19e9/c732082706f39d2ba12845851309714db135.pdf \n * Wolpert, D.H., “Stacked generalisation”, Neural Networks, 5, pp 241-259, 1992.\nhttp://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.1533\n * Menahem, E., L. Rokach, & Y. Elovici, “Troika – An improved stacking schema for classification tasks”, Information Sciences, 179 (24), pp 4097-4122, 2009.\nhttps://www.sciencedirect.com/science/article/pii/S0020025509003600\n\n * India Statewise Air Pollution Data Analysis and Visualization\n * Data Collection from Real-Time Data API\n * Data Preprocessing \n * Data analysis \n * Data Visualization \n \n * Lunar Lander - CNN and Reinforcement Learning Players Evaluation\n * Resizing, GrayScaling, Normalization Operations on Frame image then CNN Modeling on dataset and Reinforcement Learning model. 
Both Players comparison in terms of highest reward accumulated. https://gym.openai.com/envs/LunarLander-v2/ \n\n * Yelp Reviews Scraping, Sentiment Analysis and Multi Model Evaluation [Text Analytics]\n * Yelp reviews collection from the host using web scraping for 2 categories: Hotels & travels and Restaurants. \n * Performed text preprocessing to clean the data then classified reviews into positive and negative considering ratings. * Data analysis to check for class imbalance and resampling. \n * Applied Bag of words approach to generate feature vector. \n * Trained with gridsearchCV best params and Evaluated various models with ROC curve. with gridsearchCV. \n * Tested Naive Bayes, Logistic Regression, Random Forest, Gradient Boosting and KNN out of which Logistic Regression performed well. \n"
},
{
"alpha_fraction": 0.5181518197059631,
"alphanum_fraction": 0.6501650214195251,
"avg_line_length": 22.30769157409668,
"blob_id": "557683c467fdc6fcda9261a5a6b7c3fcfd974328",
"content_id": "fd02f13cf7808fd568dec17b4184ee4e74356c35",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 303,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 13,
"path": "/sklearn_intro.py",
"repo_name": "shivamsaxena17493/Machine-Learning-Practice",
"src_encoding": "UTF-8",
"text": "from sklearn import tree\n\nclf = tree.DecisionTreeClassifier()\n\n# height and weight of person\nX = [[170,65],[160,54],[180,70],[166,57],[190,80],[170,60],[160,51]]\ny = ['male','female','male','female','male','female','female']\n\nclf = clf.fit(X, y)\n\nprediction = clf.predict([[182,70]])\n\nprint(prediction)\n"
},
{
"alpha_fraction": 0.7291960716247559,
"alphanum_fraction": 0.7291960716247559,
"avg_line_length": 22.633333206176758,
"blob_id": "d753faf7ed84378f03db8f45b0cea2e9c124f239",
"content_id": "8e943d58fb3fd73198c7b2a985d95a1946107c28",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 709,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 30,
"path": "/twitter_sentiment_analyis_intro.py",
"repo_name": "shivamsaxena17493/Machine-Learning-Practice",
"src_encoding": "UTF-8",
"text": "# create twitter app\n# pip install tweepy , textblob\nimport tweepy\nfrom textblob import TextBlob as tb\nimport csv\n\n# keys to access twitter app\nconsumer_key = ''\nconsumer_secret = ''\n\naccess_token = ''\naccess_token_secret = ''\n\n# twitter API access\nauth = tweepy.OAuthHandler(consumer_key,consumer_secret)\nauth.set_access_token(access_token,access_token_secret)\napi = tweepy.API(auth)\n\ntrump_tweets = api.search('Election')\n\ntweet_dataset = open(\"datasets/tweet_dataset.csv\",'w')\n\nwith tweet_dataset:\n analysis_sentiments = []\n writer = csv.writer(tweet_dataset)\n for tweet in trump_tweets:\n \tanalysis = tb(tweet.text)\n \twriter.writerow([tweet.text,analysis.sentiment])\n\nprint('dataset ready')\n"
}
] | 4 |
DennisDavydov/NeuralNet-p3 | https://github.com/DennisDavydov/NeuralNet-p3 | 3c9c48c6756c139729bc220480c2f6db96ebc796 | d32a179ce22fd8003d74f3fdf8532dc2504dbd9f | 7a01f7384f61a6ee61b74f2eda3b6bdd3014a4b6 | refs/heads/main | 2023-02-23T00:15:18.432776 | 2021-01-27T15:11:59 | 2021-01-27T15:11:59 | 333,094,221 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6333333253860474,
"alphanum_fraction": 0.6958333253860474,
"avg_line_length": 24.44444465637207,
"blob_id": "1cc90578910cb34c303684dfeb3f7ee2f8a6bce1",
"content_id": "907be559e2b5b33a814f87fd6fe5bc7b4ac86710",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 240,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 9,
"path": "/train.py",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "import MNIST_loader as MNIST\r\nimport Network\r\n\r\ntraining_data, validation_data, test_data = MNIST.load_data_wrapper()\r\nsizes = [784, 60, 10]\r\n\r\nnet = Network.Network(sizes)\r\n\r\nnet.SGD(training_data, 20, 1000, 3.0, test_data = test_data)\r\n\r\n"
},
{
"alpha_fraction": 0.4974253475666046,
"alphanum_fraction": 0.5046343803405762,
"avg_line_length": 36.09803771972656,
"blob_id": "4a68ef8356714e1f4b04eea3a2f82040a9c54f67",
"content_id": "b9b5646430eba592099ff0dac4d815ace9afe898",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3884,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 102,
"path": "/Network.py",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "import numpy as np\r\nimport random\r\nimport pickle\r\nimport os\r\n\r\nclass Network(object):\r\n def __init__(self, sizes = None, weights = None, biases = None, cross_entropy = True):\r\n self.cross_entropy = cross_entropy\r\n if sizes:\r\n \r\n self.num_layers = len(sizes)\r\n self.biases = [np.random.randn(y, 1) for y in sizes[1:]]\r\n self.weights = [np.random.randn(y,x) for x, y in zip(sizes[:-1], sizes[1:])]\r\n else:\r\n self.weights = weights\r\n self.biases = biases\r\n def feedforward(self, a):\r\n for b, w in zip(self.biases, self.weights):\r\n a = sigmoid(np.dot(w, a)+b)\r\n \r\n return a\r\n \r\n def backprop(self, mini_batch):\r\n nabla_b = [np.tile(np.zeros(b.shape), len(mini_batch)) for b in self.biases]\r\n nabla_w = [np.zeros(w.shape) for w in self.weights]\r\n X_array=[]\r\n Y_array=[]\r\n for x, y in mini_batch:\r\n X_array.append(x)\r\n Y_array.append(y)\r\n \r\n X = np.concatenate(X_array, axis = 1) \r\n Y = np.concatenate(Y_array, axis = 1)\r\n activation = X\r\n \r\n activations = [X]\r\n zs = []\r\n for b, w in zip(self.biases, self.weights):\r\n z = np.dot(w, activation)+np.repeat(b, len(mini_batch), 1)\r\n zs.append(z)\r\n activation = sigmoid(z)\r\n activations.append(activation)\r\n #input()\r\n #quit()\r\n delta = self.cost_derivative(activations[-1], Y, zs[-1])\r\n nabla_b[-1] = delta\r\n nabla_w[-1] = np.dot(delta, activations[-2].transpose())\r\n for l in range(2, self.num_layers):\r\n z = zs[-l]\r\n sp = sigmoid_prime(z)\r\n \r\n delta = np.dot(self.weights[-l+1].transpose(), delta) * sp\r\n nabla_b[-l] = delta\r\n nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())\r\n \r\n return (nabla_b, nabla_w)\r\n \r\n def SGD(self, training_data, epochs, mini_batch_size, eta, test_data = None):\r\n print('Starting training...')\r\n test_data = list(test_data)\r\n training_data = list(training_data)\r\n if test_data:\r\n n_test = len(test_data)\r\n n = len(training_data)\r\n for e in range(epochs):\r\n random.shuffle(training_data)\r\n mini_batches = [\r\n training_data[k:k+mini_batch_size]\r\n for k in range(0, n, mini_batch_size)]\r\n i = 0\r\n for batch in mini_batches:\r\n i+=1\r\n \r\n nabla_b, nabla_w = self.backprop(batch)\r\n #update weights and biases \r\n nabla_b = [np.sum(b, 1) for b in nabla_b]\r\n self.weights = [w-(eta/mini_batch_size)*nw for nw,w in zip(nabla_w, self.weights)]\r\n self.biases = [b-(eta/mini_batch_size)*np.expand_dims(nb, 1) for nb,b in zip(nabla_b, self.biases)]\r\n if test_data:\r\n print(\"Epoch {0}: {1} / {2}\".format(e+1, self.evaluate(test_data), n_test))\r\n else:\r\n print(\"Epoch {0} complete\".format(e))\r\n \r\n filepath = os.path.dirname(__file__)+'\\w_b'\r\n with open(filepath, 'wb') as file:\r\n pickle.dump((self.weights, self.biases), file) \r\n \r\n def cost_derivative(self, output_activations, y, zs):\r\n if self.cross_entropy:\r\n return (output_activations - y)\r\n else:\r\n return(output_activations - y) * sigmoid_prime(zs)\r\n \r\n def evaluate(self, test_data):\r\n test_results = [(np.argmax(self.feedforward(x)), y) for (x,y) in test_data]\r\n return sum(int(x==y) for x, y in test_results)\r\n\r\ndef sigmoid(z):\r\n a = 1/(1 + np.exp(-z))\r\n return a\r\ndef sigmoid_prime(z):\r\n return sigmoid(z)*(1-sigmoid(z))"
},
{
"alpha_fraction": 0.498588889837265,
"alphanum_fraction": 0.5512700080871582,
"avg_line_length": 31.77777862548828,
"blob_id": "6f0600e9a51804ae9574ec74ba20673bb7ba76a1",
"content_id": "7b95aa82c6c3ce027cfb1e3967e7177f31de9694",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2126,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 63,
"path": "/draw_opencv.py",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "import cv2\r\nimport numpy as np\r\n\r\n'''['EVENT_FLAG_ALTKEY', 'EVENT_FLAG_CTRLKEY', 'EVENT_FLAG_LBUTTON', 'EVENT_FLAG_MBUTTON',\r\n 'EVENT_FLAG_RBUTTON', 'EVENT_FLAG_SHIFTKEY', 'EVENT_LBUTTONDBLCLK', 'EVENT_LBUTTONDOWN',\r\n 'EVENT_LBUTTONUP', 'EVENT_MBUTTONDBLCLK', 'EVENT_MBUTTONDOWN', 'EVENT_MBUTTONUP',\r\n 'EVENT_MOUSEHWHEEL', 'EVENT_MOUSEMOVE', 'EVENT_MOUSEWHEEL', 'EVENT_RBUTTONDBLCLK',\r\n 'EVENT_RBUTTONDOWN', 'EVENT_RBUTTONUP']'''\r\nerazing = False\r\ndrawing = False # true if mouse is pressed\r\nix,iy = -1,-1\r\n \r\n \r\n# mouse callback function\r\ndef draw_mode(event,x,y,flags,param):\r\n global ix,iy,drawing, erazing\r\n\r\n if event == cv2.EVENT_LBUTTONDOWN:\r\n cv2.circle(img,(x,y),15,(0,0,0),-1)\r\n drawing = True\r\n ix, iy = x, y\r\n elif event == cv2.EVENT_MOUSEMOVE:\r\n if drawing == True:\r\n x2, y2 = x, y\r\n leash = np.sqrt(np.square(ix-x) + np.square(iy-y)) \r\n print(leash)\r\n if leash >= 10:\r\n x2, y2 = int(ix + (0.1*(x-ix))), int(iy + (0.1*(y-iy)))\r\n cv2.line(img, (ix,iy), (x2,y2), (0,0,0), 30)\r\n ix, iy = x2, y2\r\n \r\n elif event == cv2.EVENT_LBUTTONUP:\r\n drawing = False\r\n cv2.circle(img,(x,y),15, (0,0,0),-1)\r\n \r\n if event == cv2.EVENT_RBUTTONDOWN:\r\n cv2.circle(img,(x,y),50,(255,255,255),-1)\r\n erazing = True\r\n ix, iy = x, y\r\n elif event == cv2.EVENT_MOUSEMOVE:\r\n if erazing == True:\r\n cv2.line(img, (ix,iy), (x,y), (255,255,255), 100)\r\n ix, iy = x, y\r\n elif event == cv2.EVENT_RBUTTONUP:\r\n erazing = False\r\n cv2.circle(img,(x,y),50, (255,255,255),-1)\r\n \r\n\r\n# Create a white image, a window and bind the function to window\r\nimg = np.ones((560,560,1), np.uint8) * 255\r\ncv2.namedWindow('image')\r\ncv2.setMouseCallback('image',draw_mode)\r\ncv2.imshow('image',img)\r\n\r\nwhile(1):\r\n cv2.imshow('image',img)\r\n k = cv2.waitKey(1) & 0xFF\r\n if k == 27:\r\n break\r\n elif k == ord('e'):\r\n img2 = cv2.resize(img, (28, 28))\r\n cv2.imshow('resize', img2)\r\ncv2.destroyAllWindows()"
},
{
"alpha_fraction": 0.5296096801757812,
"alphanum_fraction": 0.5632570385932922,
"avg_line_length": 28.367347717285156,
"blob_id": "c00b7ff899305dd543e60cc8c34d9b6fab0370af",
"content_id": "28b539f3daecce027e80fa6a72def96d9a788197",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1486,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 49,
"path": "/MNIST_loader.py",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "import _pickle\r\nimport pickle\r\nimport gzip\r\nimport numpy as np\r\nimport os\r\nimport imutils\r\nimport cv2\r\n\r\n\r\ndef load_data():\r\n filepath =os.path.join(os.path.dirname(__file__), 'mnist.pkl.gz')\r\n f = gzip.open(filepath, 'rb')\r\n u = pickle._Unpickler(f)\r\n u.encoding = 'latin1'\r\n training_data, validation_data, test_data = u.load()\r\n f.close()\r\n return (training_data, validation_data, test_data)\r\n \r\ndef load_data_wrapper():\r\n tr_d, va_d, te_d = load_data()\r\n \r\n TR_D = []\r\n TR_R = []\r\n for im, res in zip(tr_d[0], tr_d[1] ):\r\n im = np.reshape(im, (784,1))\r\n im = np.reshape(im, (28,28))\r\n #cv2.imshow('im', np.reshape(im, (28,28))*255)\r\n #cv2.waitKey(50)\r\n for angle in range(-30, 30, 30):\r\n TR_D.append(imutils.rotate(im, angle))\r\n TR_R.append(res)\r\n tr_d = TR_D\r\n tr_r = TR_R\r\n \r\n \r\n training_inputs = [np.reshape(x, (784, 1)) for x in tr_d]\r\n training_results = [vectorized_result(y) for y in tr_r]\r\n training_data = zip(training_inputs, training_results)\r\n validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]]\r\n validation_data = zip(validation_inputs, va_d[1])\r\n test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]]\r\n test_data = zip(test_inputs, te_d[1])\r\n print('data loaded...')\r\n return (training_data, validation_data, test_data)\r\n \r\ndef vectorized_result(j):\r\n e = np.zeros((10, 1))\r\n e[j] = 1.0\r\n return e"
},
{
"alpha_fraction": 0.800000011920929,
"alphanum_fraction": 0.800000011920929,
"avg_line_length": 26.5,
"blob_id": "7090258146d7089f81ff36345989d240a54986d6",
"content_id": "774008720a022c96cba3fe6aeeb9504fec3e55fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 55,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 2,
"path": "/testing.py",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "import MNIST_loader as MNIST\r\nMNIST.load_data_wrapper()"
},
{
"alpha_fraction": 0.48763737082481384,
"alphanum_fraction": 0.5425823926925659,
"avg_line_length": 25.274999618530273,
"blob_id": "5d81f7ef034a4143748c079763fe57cb78092624",
"content_id": "4388abb4b51ac7ed64eb492a93b55b3283224343",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2184,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 80,
"path": "/evaluate.py",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "import random\r\nimport pickle\r\nimport numpy as np\r\nimport Network\r\nimport cv2\r\nimport os\r\nimport imutils\r\n\r\n \r\n\r\nerazing = False\r\ndrawing = False # true if mouse is pressed\r\nix,iy = -1,-1\r\n \r\n \r\n# mouse callback function\r\ndef draw_mode(event,x,y,flags,param):\r\n global ix,iy,drawing, erazing\r\n\r\n if event == cv2.EVENT_LBUTTONDOWN:\r\n cv2.circle(img,(x,y),15,(255,255,255),-1)\r\n drawing = True\r\n ix, iy = x, y\r\n elif event == cv2.EVENT_MOUSEMOVE:\r\n if drawing == True:\r\n cv2.line(img, (ix,iy), (x,y), (255,255,255), 30)\r\n ix, iy = x, y\r\n elif event == cv2.EVENT_LBUTTONUP:\r\n drawing = False\r\n cv2.circle(img,(x,y),15, (255,255,255),-1)\r\n \r\n if event == cv2.EVENT_RBUTTONDOWN:\r\n cv2.circle(img,(x,y),50,(0,0,0),-1)\r\n erazing = True\r\n ix, iy = x, y\r\n elif event == cv2.EVENT_MOUSEMOVE:\r\n if erazing == True:\r\n cv2.line(img, (ix,iy), (x,y), (0,0,0), 100)\r\n ix, iy = x, y\r\n elif event == cv2.EVENT_RBUTTONUP:\r\n erazing = False\r\n cv2.circle(img,(x,y),50, (0,0,0),-1)\r\n \r\n\r\n#initialize the network and load weights and biases\r\nfilepath = os.path.dirname(__file__)+'\\w_b'\r\nwith open(filepath, 'rb') as file:\r\n weights, biases = pickle.load(file)\r\n\r\nnet = Network.Network(None, weights, biases)\r\n# Create a white image, a window and bind the function to window\r\nimg = np.zeros((560,560,3), np.uint8)\r\ncv2.namedWindow('image')\r\ncv2.setMouseCallback('image',draw_mode)\r\ncv2.imshow('image',img)\r\n\r\nwhile(1):\r\n cv2.imshow('image',img)\r\n k = cv2.waitKey(1) & 0xFF\r\n if k == 27:\r\n break\r\n elif k == ord('e'):\r\n img2 = cv2.blur(img,(20, 20))\r\n \r\n img2 = cv2.resize(img2, (28, 28))\r\n \r\n cv2.imshow('im2', img2)\r\n img2 = np.reshape(img2[:,:,1], (1, 784))\r\n img2 = img2.transpose()\r\n #print(img2)\r\n #print(net.feedforward(img2).shape)\r\n a = net.feedforward(img2/255)\r\n print(a[np.argmax(a)])\r\n print(np.argmax(a))\r\n elif k == ord('r'):\r\n img = 
imutils.rotate(img, 10)\r\n \r\n \r\n \r\ncv2.destroyAllWindows()\r\n\r\n"
},
{
"alpha_fraction": 0.783561646938324,
"alphanum_fraction": 0.7863013744354248,
"avg_line_length": 72,
"blob_id": "514556bee80a03b29741921641085292f92c769c",
"content_id": "6ed2670732c50998e4eeb1fa060c26ae884d4b7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 365,
"license_type": "no_license",
"max_line_length": 175,
"num_lines": 5,
"path": "/README.md",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "# NeuralNet Py3\n\nA neural network project, made to recognise harndwritten digits. Made with a MNIST database as is customary when venturing into machine learning. Its like Hello World for A.I.\n\nRun train.py to train the network. Run evaluate.py to open a drawing app and recognise your hand drawn digits. Press esc to exit, r to rotate the image, e to evaluate it.\n"
},
{
"alpha_fraction": 0.48343849182128906,
"alphanum_fraction": 0.49132493138313293,
"avg_line_length": 36.48484802246094,
"blob_id": "897df27232f7ee9dfd4a17dbe267b72cd6189083",
"content_id": "b0089368d7c768ba55b3e06ee4e3de27fccdc987",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1268,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 33,
"path": "/backprop.py",
"repo_name": "DennisDavydov/NeuralNet-p3",
"src_encoding": "UTF-8",
"text": "def backprop(self, mini_batch):\r\n nabla_b = [np.tile(np.zeros(b.shape), len(mini_batch)) for b in self.biases]\r\n nabla_w = [np.zeros(w.shape) for w in self.weights]\r\n X_array=[]\r\n Y_array=[]\r\n for x, y in mini_batch:\r\n X_array.append(x)\r\n Y_array.append(y)\r\n \r\n X = np.concatenate(X_array, axis = 1) \r\n Y = np.concatenate(Y_array, axis = 1)\r\n #print Y\r\n activation = X\r\n activations = [X]\r\n zs = []\r\n for b, w in zip(self.biases, self.weights):\r\n z = np.dot(w, activation)+np.tile(b, len(mini_batch))\r\n zs.append(z)\r\n activation = sigmoid(z)\r\n \r\n activations.append(activation)\r\n delta = self.cost_derivative(activations[-1], Y) * \\\r\n sigmoid_prime(zs[-1])\r\n nabla_b[-1] = delta\r\n nabla_w[-1] = np.dot(delta, activations[-2].transpose())\r\n for l in xrange(2, self.num_layers):\r\n z = zs[-l]\r\n sp = sigmoid_prime(z)\r\n \r\n delta = np.dot(self.weights[-l+1].transpose(), delta) * sp\r\n nabla_b[-l] = delta\r\n nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())\r\n return (nabla_b, nabla_w)"
}
] | 8 |
Mannerheim/hybra-core | https://github.com/Mannerheim/hybra-core | 7ce16807cb51a63735ce69eae8deb7ddae4e71c9 | 803ed8f4f4c3c8d9fb9fa1e9d9942f44d5a7b7d2 | 2acfa5533083513c120e0d09118f4411e5612285 | refs/heads/master | 2021-01-13T06:04:27.427016 | 2017-06-15T10:12:46 | 2017-06-15T10:12:46 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5707442760467529,
"alphanum_fraction": 0.5869565010070801,
"avg_line_length": 26.139999389648438,
"blob_id": "8a16dfe10b720cfdb03f4470197901dd93b97912",
"content_id": "b898e084ef6212e3b8a7c85c14027485bd9c0cec",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2714,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 100,
"path": "/core/analysis/runr.py",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "import os\n\nimport rpy2\nimport rpy2.robjects as robjects\nfrom rpy2.robjects import default_converter\n\nimport pandas\nfrom rpy2.robjects import pandas2ri\n\n## convert magic\nfrom rpy2.robjects.conversion import Converter\n\nsimple_conver = Converter('simple')\n\ndef list_to_vector( list ):\n if len( list ) == 0:\n return rpy2.rinterface.NA_Real\n\n if isinstance( list[0], str ):\n return rpy2.rinterface.StrSexpVector( list )\n if isinstance( list[0], int ):\n return rpy2.rinterface.IntSexpVector( list )\n if isinstance( list[0], float ):\n return rpy2.rinterface.FloatSexpVector( list )\n if isinstance( list[0], bool ):\n return rpy2.rinterface.BoolSexpVector( list )\n\n if isinstance( list[0], dict ): ## need to convert to data frame\n\n ## let's hope the keys are always the same for each of things in the list\n keys = list[0].keys()\n\n ## init new dict where values are collected\n dataframe = {}\n\n for key in keys:\n dataframe[ key ] = []\n\n for row in list:\n for key, value in row.items():\n dataframe[ key ].append( value )\n\n dataframe = pandas.DataFrame.from_dict( dataframe )\n return pandas2ri.py2ri( dataframe )\n\n\nsimple_conver.py2ri.register( list, list_to_vector )\n\nconverter = default_converter + simple_conver\n\ndef runr( execute, globalenv = None, **kwargs ):\n\n if globalenv:\n rpy2.robjects.globalenv = globalenv\n\n for name, value in kwargs.items():\n\n if isinstance( value, dict ):\n ## use pandas\n value = pandas.DataFrame.from_dict( value )\n rpy2.robjects.globalenv[ name ] = pandas2ri.py2ri( value )\n else:\n rpy2.robjects.globalenv[ name ] = converter.py2ri( value )\n\n\n ## rpy2.robjects.globalenv['cats'] = interface.p2ri( kwargs['cats'] )\n\n ## search inside analsis folder\n p = os.path.realpath(__file__)\n p = os.path.dirname( p ) + '/' + execute + '.r'\n\n if os.path.isfile( p ):\n execute = open( p ).read()\n\n if os.path.isfile( execute ):\n execute = open( execute ).read()\n\n\n robjects.r( execute )\n\n return 
robjects.r ## return all computed things\n\nif __name__ == '__main__':\n execute = '''\n library('ggplot2')\n # create a function `f`\n f <- function(r, verbose=FALSE) {\n if (verbose) {\n cat(\"I am calling f().\\n\")\n }\n 2 * pi * r\n }\n # call the function `f` with argument value 3\n print( f(3) )\n #print( example1 )\n #print( example2 )\n x = f(4)\n '''\n\n runr( execute, example1 = [1,2,3,4], example2 = [{'name': 'example2', 'value': 5}] )\n"
},
{
"alpha_fraction": 0.5135533213615417,
"alphanum_fraction": 0.5180566310882568,
"avg_line_length": 33.061946868896484,
"blob_id": "c0579e2bb3d40f8086244f24f2efba33e783abd9",
"content_id": "09785051eb7315018c659d0c6664dfb1a272f390",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11547,
"license_type": "permissive",
"max_line_length": 194,
"num_lines": 339,
"path": "/core/data_loader.py",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "from __future__ import division, print_function\n\nimport json\nimport os\nimport sys\nimport re\nimport requests\nimport hashlib\n\nimport dateparser\nfrom datetime import datetime\nfrom datetime import timedelta\nimport pytz\n\nimport locale\nlocale.setlocale(locale.LC_ALL, 'C')\n\n__DATA_DIR = '../hybra-data-test1/' ## by default the data comes here\n\ndef _version( folder ):\n\n print( \"Data in folder\", folder )\n\n try:\n\n from git import Repo\n\n r = Repo( __DATA_DIR + folder )\n print( \"\\t Version\", r.heads.master.commit )\n print( \"\\t Updated on\", r.heads.master.commit.authored_datetime )\n\n except:\n\n print( \"\\t Data is not stored in a repo. Data might not be up-to-date!\" )\n\n\ndef __harmonize_data( data, data_type, common_data_keys ):\n\n harmonized_data = {}\n\n for key in data.keys():\n harmonized_data['_' + key] = data[key]\n\n harmonized_data['source'] = data_type\n harmonized_data['creator'] = ''\n harmonized_data['timestamp'] = ''\n harmonized_data['text_content'] = ''\n harmonized_data['url'] = ''\n harmonized_data['source_detail'] = ''\n harmonized_data['images'] = []\n harmonized_data['links'] = []\n harmonized_data['broken'] = {}\n\n for key, value in common_data_keys.items():\n try:\n if type(key) is tuple:\n harmonized_data[value] += harmonized_data[key[0]][key[1]]\n else:\n harmonized_data[value] += harmonized_data[key] + ' '\n\n harmonized_data[value] = harmonized_data[value].strip()\n\n except Exception, e:\n harmonized_data['broken'][value] = e\n\n if not harmonized_data['timestamp']:\n harmonized_data['timestamp'] = '1970-01-01 00:00:00'\n\n harmonized_data['timestamp'] = dateparser.parse( harmonized_data['timestamp'], settings={'RETURN_AS_TIMEZONE_AWARE': False} )\n\n return harmonized_data\n\n\ndef load_facebook( terms = ['.json'], data_folder = 'facebook/' ): ## todo: better filtering\n\n data = []\n\n path = __DATA_DIR + data_folder\n\n for f in os.listdir( path ):\n\n if any( term in f for term in terms ):\n\n 
dump = json.load( open( path + f ) )\n\n source_detail = dump['name'] + ' (' + dump['meta']['type'] + ')'\n\n for d in dump['feed']:\n\n common_data_keys = {('_from', 'name') : 'creator',\n '_created_time' : 'timestamp',\n '_message' : 'text_content'}\n\n d = __harmonize_data( d, 'facebook', common_data_keys )\n\n d['url'] = 'https://www.facebook.com/' + d['_id']\n\n attachments = []\n if '_attachments' in d and 'data' in d['_attachments']:\n for attachment in d['_attachments']['data']:\n if attachment['type'] == 'photo':\n attachments.append( attachment['media']['image']['src'] )\n\n d['images'] = attachments\n\n d['links'] = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', d['text_content'] )\n\n if '_link' in d:\n d['links'].append( d['_link'] )\n\n d['source_detail'] = source_detail\n\n data.append( d )\n\n return data\n\n\ndef load_media( terms = ['.json'], data_folder = 'media/' ):\n\n path = __DATA_DIR + data_folder\n\n print( path )\n\n for dirpath, subdirs, files in os.walk(path):\n\n for f in files:\n\n f = os.path.join( dirpath, f )\n\n if any( term in f for term in terms ):\n\n for d in json.load( open( f ) ):\n\n common_data_keys = {'_author' : 'creator',\n '_title' : 'text_content',\n '_ingress' : 'text_content',\n '_text' : 'text_content',\n '_url' : 'url',\n '_domain' : 'source_detail'}\n\n d = __harmonize_data( d, 'news_media', common_data_keys )\n\n ## ensure data is always in a list\n if isinstance( d['_datetime_list'] , str) or isinstance( d['_datetime_list'] , unicode):\n d['_datetime_list'] = [ d['_datetime_list'] ]\n\n try:\n d['timestamp'] = dateparser.parse( min( d['_datetime_list'] ), ) ## should take care of the various formats\n except Exception, e:\n d['broken']['_datetime_list'] = e\n\n d['images'] = d['_images']\n\n yield d\n\n\ndef load_twitter( terms = ['data_'], data_folder = 'twitter/' ):\n\n \"\"\"This is currently written to deal with data from Twitter's Streaming API.\n The data 
format for Search API data is slightly different\n and allows some things to be done slightly more conveniently;\n we could write this to work with Streaming API data as well.\n \"\"\"\n\n data = []\n\n path = __DATA_DIR + data_folder\n\n for f in os.listdir( path ):\n\n if any( term in f for term in terms ):\n\n unharmonized_data = []\n\n with open( path + f ) as current_file:\n unharmonized_data = json.load( current_file )\n\n for d in unharmonized_data:\n\n unharmonized_data = []\n\n with open( path + f ) as current_file:\n unharmonized_data = json.load( current_file )\n\n for d in unharmonized_data:\n\n common_data_keys = {('_user', 'screen_name') : 'creator',\n '_created_at' : 'timestamp',\n '_text' : 'text_content'}\n\n d = __harmonize_data( d, 'twitter', common_data_keys )\n\n try:\n d['url'] = 'https://www.twitter.com/statuses/' + d['_id_str']\n except Exception, e:\n d['broken']['url'] = e\n\n data.append(d)\n\n return data\n\n\ndef load_futusome( query, data_folder = 'futusome/', api_key = '', check_document_count = False, override_cache = False ):\n\n \"\"\" Checks local data folder for files matching the given query and returns them if a match is found.\n If no local data is found and an API key is given, Futusome API is queried.\n\n :param query: String with which data is queried.\n :param data_folder: Local data folder as a string.\n :param api_key: API key as a string for querying Futusome API.\n :param check_document_count: Boolean. Defaults to False. If True, method checks Futusome API for document count returned by the given query. If False, loads the documents and saves them.\n :param override_cache: Boolean. Defaults to False. If True, always queries Futusome and saves the data over local files. 
If False, checks local cache for data.\n \"\"\"\n\n data = []\n\n path = __DATA_DIR + data_folder\n\n if not os.path.exists( path ):\n os.makedirs( path )\n\n query_base = 'https://api.futusome.com/api/searches.json?&api_search[query]='\n\n cache_file = query.replace('/', '_') # Slashes not allowed in filenames\n\n # If just checking document count, query for only one document\n if check_document_count:\n r = requests.get( query_base + query + '&api_key=' + api_key + '&api_search[limit]=1' )\n r = r.json()\n print('Total document count: ' + str(r['count']))\n return\n\n unharmonized_data = {}\n\n # Check if data matching the query is cached in the data path\n if not override_cache:\n\n print( \"Checking local data path for cached data...\" )\n\n for f in os.listdir( path ):\n\n if cache_file == f.replace('.json', ''):\n print(\"Data returned from \" + path)\n\n with open( path + '/' + f ) as current_file:\n unharmonized_data = json.load( current_file )\n\n\n # If data not found in cache, query Futusome API\n if api_key and not unharmonized_data:\n\n print( \"Data not returned from cache. Querying Futusome API...\" )\n\n documents = []\n\n collected = 0\n jump = timedelta(365*25) ## high enough\n tz = pytz.timezone(\"Europe/Helsinki\") ## for timezone correction\n\n while True:\n\n ## min-range\n\n time = ''\n\n if collected:\n ## start to do jumps\n max_date = documents[-1]['fields']['indexed']\n max_date = datetime.strptime( max_date , \"%Y-%m-%d %H:%M:%S +0000\")\n min_date = max_date - jump\n\n utc_correct = int( tz.localize(max_date).utcoffset().total_seconds() ) # correct for UTC time\n\n max_date = str( ( int( max_date.strftime('%s') ) + utc_correct ) * 1000 - 1 )\n min_date = str( int( min_date.strftime('%s') ) * 1000 )\n time = ' AND indexed.at:[' + min_date + ' TO ' + max_date + ']' ## could also be indexed at? 
is it faster?\n\n r = requests.get( query_base + query + time + '&api_key=' + api_key + '&api_search[limit]=5000&api_search[sort]=indexed.at' )\n r = r.json()\n\n if 'error' in r:\n print( r )\n break\n\n if not collected:\n print( '\\tTotal sample is', r['count'] )\n\n if len( r['documents'] ) == 0:\n print( r ) ## for debug\n break ## everything OK\n\n documents += r['documents']\n collected += len( r['documents'] )\n\n print( '\\tNow', collected,'documents and at', documents[-1]['fields']['indexed'], 'and going deeper...')\n\n unharmonized_data = {'documents' : documents}\n\n # Save data in local cache with the query as filename\n json.dump( unharmonized_data , open( path + '/' + cache_file + '.json', 'w' ) )\n print('Data saved to ' + path + '/' + cache_file + '.json')\n\n # If no data found in cache or Futusome, just return\n if not unharmonized_data: return data\n\n # Harmonize data to common format and return it\n for d in unharmonized_data['documents']:\n\n common_data_keys = {'_author' : 'creator',\n '_published' : 'timestamp',\n '_name' : 'text_content',\n '_text' : 'text_content',\n '_url' : 'url',\n '_blog_id' : 'url',\n '_type' : 'source_detail'}\n\n d = __harmonize_data( d['fields'], 'futusome', common_data_keys )\n\n d['timestamp'] = d['timestamp'].replace(tzinfo = None)\n\n if '_forum_post_id' in d:\n d['_id'] = d['_forum_post_id']\n elif '_twitter_retweet_id' in d:\n d['_id'] = 'twitter_' + d['_twitter_retweet_id']\n elif '_facebook_id' in d:\n d['_id'] = 'facebook_' + d['_facebook_id']\n elif '_twitter_tweet_id' in d:\n d['_id'] = 'twitter_' + d['_twitter_tweet_id']\n elif '_url' in d:\n ## make uniq ID ourself\n text = d['_url'].encode('ascii', 'ignore') + str( d['timestamp'] ) + d['text_content'].encode('ascii', 'ignore')\n d['_id'] = 'created_id_' + hashlib.md5( text ).hexdigest()\n else:\n text = str( d['timestamp'] ) + d['text_content'].encode('ascii', 'ignore')\n d['_id'] = 'created_id_' + hashlib.md5( text ).hexdigest()\n\n 
data.append(d)\n\n return data\n"
},
{
"alpha_fraction": 0.6741573214530945,
"alphanum_fraction": 0.6839887499809265,
"avg_line_length": 25.370370864868164,
"blob_id": "571f452899e8130d80e93f8786b044c4ee3f60cd",
"content_id": "620bd33a889895cd7deb5b696ecda23c2f53a93b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "R",
"length_bytes": 712,
"license_type": "permissive",
"max_line_length": 60,
"num_lines": 27,
"path": "/core/analysis/create_dtm.r",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "## takes\n## * data, vector of strings as parameter\n## * lang, language name as parameter\n## * docnames, names of each document\n\nlibrary(tm)\nlibrary(slam)\n\na <- Corpus( VectorSource( data ) )\nstop <- stopwords( lang )\n\n## bunch of cleanup and transformations\na <- tm_map(a, removeNumbers, mc.cores=1 )\na <- tm_map(a, stripWhitespace, mc.cores=1 )\na <- tm_map(a, removePunctuation, mc.cores=1 )\na <- tm_map(a, content_transformer(tolower), mc.cores=1 )\na <- tm_map( a, stemDocument, language = lang, mc.cores = 1)\na <- tm_map(a, removeWords, stop )\n\n\n## compute word frequencies\ndtm <-DocumentTermMatrix(a)\n\ndtm$dimnames$Docs <- docnames\n\n## throw away columns with 0 indicators\ndtm <- dtm[ row_sums( dtm ) > 0, ]\n"
},
{
"alpha_fraction": 0.55668705701828,
"alphanum_fraction": 0.559402585029602,
"avg_line_length": 27.326923370361328,
"blob_id": "b0403887d481b717f81cca0955873019bfe0d053",
"content_id": "5f4547927fa0f4572acac1222bd7c2c31ddd87b1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 1473,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 52,
"path": "/docs/examples.rst",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "Examples\n=========\n\nStart using hybra-core by importing it and setting the data path\n****************************************************************\n::\n\n from hybra.core import hybra\n\n hybra.set_data_path('path/to/data')\n\n\nLoad media data from folder 'yle'\n*********************************\n::\n\n data_yle = hybra.load('media', folder='yle/')\n\n\nLoad facebook data from folder 'facebook' with filename including 'page_racist'\n*******************************************************************************\n::\n\n data_fb = hybra.load('facebook', folder='facebook/', terms=['page_racist'])\n\n\nFind mentions to cats in Yle data and print them\n************************************************\n::\n\n data_yle_cats = hybra.filter_from_text( data_yle , ['kiss'] )\n\n for data_entry in data_yle_cats:\n print data_entry['text_content']\n print '' ## empty line makes it easier to work on\n\n\nFind ten most common authors in Facebook data and print them\n************************************************************\n::\n\n from collections import Counter\n\n authors = [] ## create an empty list for the authors\n\n for data_entry in data_fb: ## go through the loaded Facebook data\n authors.append( data_entry['creator'] ) ## and add authors on the list\n\n most_common_authors = Counter(authors).most_common(10) ## save 10 most common authors\n\n for author, count in most_common_authors: ## go through the most common authors\n print author + ' ' + str(count) ## and print them\n"
},
{
"alpha_fraction": 0.7852112650871277,
"alphanum_fraction": 0.7887324094772339,
"avg_line_length": 46.33333206176758,
"blob_id": "35ff1b67353cd6dee807df49bf1d855f2cd10ff2",
"content_id": "20c6589b96c22c67a1bee46b98891efc56edb70e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1136,
"license_type": "permissive",
"max_line_length": 231,
"num_lines": 24,
"path": "/README.md",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "# Data management and analysis for HYBRA project\n\nThe core data management and analysis tools for \"Racisms and public communications in the hybrid media environment\"-project, funded by Academy of Finland and executed by\n\n* Helsinki Institute for Information Technology HIIT and Department of Computer Science, Aalto Univeristy and University of Helsinki\n* Department of Social Research, Univerisity of Helsinki\n* School of Communication, Media and Theatre, University of Tampere\n\nWe hope that some day, these tools will be more beautiful _ecosystem_ of data analysis in the Hybrid Media Space. For now, we regret to say, this is just bunch of code put together quickly. So, most likely not that much to see yet!\n\n## Getting the data\n\nIn project root folder, execute\n\n1. `git submodule add https://[email protected]/git/hybra-data-test1`\n1. In the folder hybra-data-test1, run git pull every now and then\n\nUse your University of Helsinki account.\n\n**Do not do any serious stuff with this data. It is for illustration only.**\n\n## Documentation\n\nDocumentation is developing at [readthedocs.org](http://hybra.readthedocs.io/en/latest/).\n"
},
{
"alpha_fraction": 0.5919182896614075,
"alphanum_fraction": 0.5981349945068359,
"avg_line_length": 25.186046600341797,
"blob_id": "0bc0f109536ee18500a950fd407eaa62d24ea145",
"content_id": "1e4fa833f11f07718f34303401c5f69e911d9dbf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2252,
"license_type": "permissive",
"max_line_length": 137,
"num_lines": 86,
"path": "/core/descriptives.py",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "from __future__ import division, print_function\n\nimport datetime\n\nfrom timeline import module_timeline\n\nfrom collections import *\n\nfrom urlparse import urlparse\n\ndef describe( data ):\n if len(data) == 0:\n print( \"Dataset empty.\" )\n return\n\n print( \"Entries together\", len(data) )\n print( \"Number of different authors\", len( set( map( lambda d: d['creator'], filter( lambda d: d['creator'] is not '', data ) ) ) ) )\n\n ## remove dates which can not be true\n date_ok = filter( lambda d: d['timestamp'] is not '', data )\n date_ok = filter( lambda d: d['timestamp'] > datetime.datetime(1970,1,1,23,59), date_ok )\n\n print( \"First post\", min( map( lambda d: d['timestamp'], date_ok ) ) )\n print( \"Last post\", max( map( lambda d: d['timestamp'], date_ok ) ) )\n\n print(\"Data sources\")\n\n ## todo: reimplement?\n counter = defaultdict( int )\n\n for post in data:\n counter[ post['source_detail'] ] += 1\n\n for name, count in counter.items():\n print( '-', name, count )\n\n return module_timeline.create_timeline( datasets = [date_ok] )\n\ndef author_counts( data ):\n\n if len(data) == 0:\n print( \"Dataset empty.\" )\n return\n\n authors = map( lambda d: d['creator'], data )\n\n author_counts = Counter(authors)\n\n total_count = len( author_counts.keys() )\n\n print('Authors found in data:', total_count)\n\n print('Entry counts by author')\n\n for author, count in author_counts.most_common(total_count):\n print('-', author, count)\n\ndef domain_counts( data ):\n\n if len(data) == 0:\n print( \"Dataset empty.\" )\n return\n\n domains = map( lambda d: '{uri.netloc}'.format( uri= urlparse( d['url'] ) ).replace('www.', ''), data )\n\n domain_counts = Counter(domains)\n\n total_count = len( domain_counts.keys() )\n\n print('Domains found in data:', total_count)\n\n print('Entry counts by domain:')\n\n for domain, count in domain_counts.most_common(total_count):\n print('-', domain, count)\n\nif __name__ == '__main__':\n\n for function_name in dir( data_loader ):\n\n if 'load_' in function_name:\n\n print( function_name )\n f = getattr( data_loader, function_name )\n data = f()\n describe( data )\n"
},
{
"alpha_fraction": 0.5654334425926208,
"alphanum_fraction": 0.5726118087768555,
"avg_line_length": 26.029850006103516,
"blob_id": "591d03b7bd115a9bb7deb0f690156b59bb4d3d9a",
"content_id": "a3feaec95b48815fbb379f87424ba52a1bb94772",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1811,
"license_type": "permissive",
"max_line_length": 159,
"num_lines": 67,
"path": "/core/wordclouds.py",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import Counter\nimport re\n\nfrom wordcloud import WordCloud\nfrom matplotlib import pyplot as plt\n\ndef create_wordcloud( data, stopwords = [\"the\", \"a\", \"or\", \"tai\", \"and\", \"ja\", \"to\", \"on\", \"in\", \"of\", \"for\", \"is\", \"i\", \"this\", \"http\", \"www\", \"fi\", \"com\"] ):\n if len(data) == 0:\n print( \"Dataset empty.\" )\n return\n\n words = get_words( data )\n\n frequencies = Counter( words )\n\n ## remove stopwords\n for word in stopwords:\n del frequencies[word]\n\n print_frequencies( frequencies )\n\n wordcloud = WordCloud( background_color = \"white\" ).generate_from_frequencies( frequencies.items() )\n\n plt.figure()\n plt.imshow(wordcloud)\n plt.axis(\"off\")\n\ndef get_words( data ):\n words = []\n for d in data:\n words += re.findall(r'\\w+', decode_utf8( d['text_content'].lower() ), re.UNICODE)\n\n if '_comments' in d:\n for c in d['_comments']:\n if 'message' in c:\n words += re.findall(r'\\w+', decode_utf8( c['message'].lower() ), re.UNICODE)\n return words\n\ndef print_frequencies( frequencies ):\n print( \"\\nDistinct words:\", len(frequencies) )\n print( \"10 most common words:\" )\n\n i = 1\n for word in frequencies.most_common(10):\n print( i , \" \", word[0], \"-\", word[1] )\n i += 1\n\n\ndef decode_utf8( string ):\n try:\n return string.decode('utf8')\n except UnicodeEncodeError:\n return string\n\nif __name__ == '__main__':\n\n for function_name in dir( data_loader ):\n\n if 'load_' in function_name:\n\n print( function_name )\n f = getattr( data_loader, function_name )\n data = f()\n create_wordcloud( data )\n plt.show()\n"
},
{
"alpha_fraction": 0.5405405163764954,
"alphanum_fraction": 0.7142857313156128,
"avg_line_length": 16.266666412353516,
"blob_id": "fc1267f6542691c2951facc14b5a2e9bfe7548c9",
"content_id": "e4f189a662933ebe080fa8436239e4305d3018ab",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 259,
"license_type": "permissive",
"max_line_length": 22,
"num_lines": 15,
"path": "/requirements.txt",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "dateparser==0.5.1\nGitPython==2.0.6\njupyter==1.0.0\njupyter-client==4.3.0\njupyter-console==4.1.1\njupyter-core==4.1.0\nmatplotlib==1.5.3\nnbstripout==0.2.9\nnetworkx==1.11\nnumpy==1.11.0\nrequests==2.9.1\nscikit-learn==0.17.1\nscipy==0.17.1\nXlsxWriter==0.9.6\nwordcloud\n"
},
{
"alpha_fraction": 0.6338624358177185,
"alphanum_fraction": 0.6360964179039001,
"avg_line_length": 30.61709976196289,
"blob_id": "f0e177894ce70032f3d3d2ba918b09f291334af6",
"content_id": "93e05ef828c7c9efe610fe4cda3d19554f9a2291",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8505,
"license_type": "permissive",
"max_line_length": 200,
"num_lines": 269,
"path": "/core/hybra.py",
"repo_name": "Mannerheim/hybra-core",
"src_encoding": "UTF-8",
"text": "import data_loader\nimport exporter\nimport descriptives\nfrom network import module_network\nfrom timeline import module_timeline\nimport wordclouds as module_wordclouds\n\nfrom analysis.runr import runr\n\nfrom IPython.core.display import display, HTML, Javascript\n\nimport os\nimport re\nimport json\nimport random\n\nimport dateparser\nfrom urlparse import urlparse\n\nimport codecs\nfrom string import Template\n\n__sources = dir( data_loader )\n__sources = filter( lambda x: x.startswith('load_') , __sources )\n__sources = map( lambda x: x[5:], __sources )\n\ndef set_data_path( path ):\n \"\"\" Sets the path where the data is stored. Relative to where you run your Python.\n :param path: Where the data is stored\n\n :Example:\n\n ``hybra.set_data_path('.') ## search for data from the current folder\n hybra.set_data_path('~/Documents/data/hybra-data') ## data in folder Documents/data/hybra-data``\n \"\"\"\n\n data_loader.__DATA_DIR = path\n\n ## when data path is set, automatically print out the versions\n for folder in os.listdir( path ):\n if( os.path.isdir( path + folder ) ):\n data_loader._version( folder )\n\n ## TOTALLY UNRELATED BUT LETS USE THIS TO INIT THE D3JS TOO\n ## check if there is any way to not use exernal cloud d3js\n ## import os\n ## path = os.path.dirname(os.path.abspath(__file__))\n ## return Javascript( open( path + '/js/d3/d3.js' ).read() )\n return HTML('<p><script src=\"https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.6/d3.js\"></script>Data science OK!</p>')\n\ndef data_path():\n \"\"\" Returns the existing data path.\n \"\"\"\n\n return data_loader.__DATA_DIR\n\ndef data_sources():\n \"\"\" Lists possible data sources hybra core can parse.\n \"\"\"\n\n return __sources\n\ndef data( source, **kwargs ):\n \"\"\" Load data of type `source` using the parser for that data.\n The `**kwargs` are data loader spesific, but often include parameters such as folder.\n See :ref:`data_loader` for details of `**kwargs`\n\n :param source: type of data loaded. Can be `facebook`, `media`, `twitter`.\n\n :Example:\n\n ``hybra.data('media', folder = 'yle') ## load yle-data from the subfolder YLE in your data folder.``\n \"\"\"\n\n if source not in __sources:\n raise NameError('Unknown media type')\n\n load = getattr( data_loader, 'load_' + source )\n\n return load( **kwargs )\n\ndef filter_from_text( data, text = [], substrings = True, inclusive = True ):\n \"\"\" Only choose parts of data which have certain words, given in parameter `text`.\n\n :param data: list of data entries.\n :param text: list of words looked for. This is *inclusive* all the words need to be in the text to qualify.\n :param substrings: if we accept texts which match all words or texts which has all words. Default behavior is to accept based on match.\n For example `hybra.filter_from_text( example, ['cat', 'dog'])` would match text `cats and dogs are nice`, whereas `hybra.filter_from_text( example, ['cat', 'dog'], substrings = False )` would not.\n :param inclusive: boolean specifying whether all of the words are required to be in the text. Default: True.\n \"\"\"\n\n filtered_data = []\n\n text = map( lambda t: t.decode('utf8'), text)\n\n for d in data:\n if substrings:\n if inclusive:\n if all( string.lower() in d['text_content'].lower() for string in text ):\n filtered_data.append( d )\n else:\n if any( string.lower() in d['text_content'].lower() for string in text ):\n filtered_data.append( d )\n else:\n words = re.findall(r'\\w+', d['text_content'].lower(), re.UNICODE)\n if inclusive:\n if all( string.lower() in words for string in text ):\n filtered_data.append( d )\n else:\n if any( string.lower() in words for string in text ):\n filtered_data.append( d )\n\n return filtered_data\n\ndef filter_by_datetime( data, after = '', before = '' ):\n \"\"\" Filter data by datetime given in parameters `after` and `before`.\n\n :param data: list of data entries.\n :param after: string representation of the datetime after which data is to be returned.\n :param before: string representation of the datetime before which data is to be returned.\n \"\"\"\n\n after = dateparser.parse(after)\n before = dateparser.parse(before)\n\n if (after != None) & (before != None):\n data = filter( lambda d: (d['timestamp'] > after) & (d['timestamp'] < before), data )\n elif after:\n data = filter( lambda d: d['timestamp'] > after, data )\n elif before:\n data = filter( lambda d: d['timestamp'] < before, data )\n else:\n print 'No dates given for filtering!'\n\n return data\n\ndef filter_by_author( data, authors = [] ):\n \"\"\"Filter data by author given in parameter `authors`.\n\n :param data: list of data entries.\n :param authors: list of authors to filter the data by.\n \"\"\"\n\n authors = set( map( lambda a: a.decode('utf8'), authors) )\n\n if authors:\n data = filter( lambda d: d['creator'] in authors, data )\n else:\n print 'No authors given for filtering!'\n\n return data\n\ndef filter_by_domain( data, domains = [] ):\n \"\"\"Filter data by domains given in paramater `domains`.\n\n :param data: list of data entries.\n :param domains: list of domains to filter the data by.\n \"\"\"\n\n domains = set( map( lambda d: d.replace('www.', ''), domains))\n\n if domains:\n data = filter( lambda d: '{uri.netloc}'.format( uri= urlparse( d['url'] ) ).replace('www.', '') in domains, data )\n else:\n print 'No domains given for filtering!'\n\n return data\n\ndef get_author_counts( data ):\n \"\"\"List entry counts of distinct authors found in the dataset `data`.\n\n :param data: list of data entries.\n \"\"\"\n\n descriptives.author_counts( data )\n\ndef get_domain_counts( data ):\n \"\"\"List entry counts of distinct domains found in the dataset `data`.\n\n :param data: list of data entries.\n \"\"\"\n\n descriptives.domain_counts( data )\n\ndef describe( data ):\n \"\"\"Describe the dataset `data`, showing the amount of posts, number of authors, historical data and more detailed data sources.\n\n :param data: list of data entries.\n \"\"\"\n\n return display( HTML( descriptives.describe( data ) ) )\n\ndef timeline( **kwargs ):\n \"\"\"Draws a timeline the dataset `data`.\n\n :todo: check kwargs\n\n :param data: list of data entries.\n \"\"\"\n\n return display( HTML( module_timeline.create_timeline( **kwargs ) ) )\n\ndef network( data ):\n \"\"\"Draws a network the dataset `data`.\n\n :todo: check kwargs\n\n :param data: list of data entries.\n \"\"\"\n\n return display( HTML( module_network.create_network(data) ) )\n\ndef wordcloud( data, **kwargs ):\n \"\"\"Draws a wordcloud the dataset `data`.\n\n :todo: check kwargs\n\n :param data: list of data entries.\n \"\"\"\n\n module_wordclouds.create_wordcloud( data, **kwargs )\n\ndef analyse( script, **kwargs ):\n\n globalenv = None\n if 'previous' in kwargs:\n globalenv = kwargs[ g ]\n del kwargs['previous']\n\n return runr( script, globalenv, **kwargs )\n\ndef export( data, file_path ):\n \"\"\"Export the dataset `data` in common format to the given file format. Recognizes output format from file extension in given file path.\n\n :param data: List of data entries to be exported.\n :param file_path: Path to output file.\n \"\"\"\n\n file_type = file_path.split('.')[-1]\n\n try:\n file_exporter = getattr( exporter, 'export_' + file_type )\n\n file_exporter( data, file_path )\n\n except Exception, e:\n print(repr(e))\n print(\"File export failed. Supported file types:\")\n\n for f_type in filter( lambda x: x.startswith('export_') , dir( exporter ) ):\n print( '.' + f_type.replace('export_', '') )\n\ndef sample(data, size, seed = 100, export_file = None):\n \"\"\"Takes a random sample of the dataset `data`. Optionally exports the sample to file using the hybra module export method.\n\n :param data: List of the data entries to be exported.\n :param size: An integer value specifying the sample size.\n :param seed: Seed to use in randomization. Defaults to 100.\n :param export_file: Path to output file. Defaults to None.\n \"\"\"\n\n random.seed(seed)\n\n data_sample = random.sample(data, size)\n\n if export_file:\n export( data_sample, export_file )\n\n return random.sample(data, size)\n"
}
] | 9 |
solosito/cozmo_ros2_nosdk | https://github.com/solosito/cozmo_ros2_nosdk | dddf4ee02e0ad40b14714d0b057b51c558139ffe | 2e04a683801a9bcb6eddad76022e9f9379ed7c5a | 8e2ba4d6f2601beb39faf5635d7dcb38d2880df1 | refs/heads/master | 2020-09-20T14:15:53.074115 | 2019-12-02T23:15:07 | 2019-12-02T23:29:43 | 224,507,837 | 6 | 2 | null | null | null | null | null | [
{
"alpha_fraction": 0.7571428418159485,
"alphanum_fraction": 0.7857142686843872,
"avg_line_length": 34,
"blob_id": "3509724f76c176816cede12bef2bb1d60251543b",
"content_id": "14e66053a22abe06c79df7769ecaa3ece2230af0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 70,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 2,
"path": "/README.md",
"repo_name": "solosito/cozmo_ros2_nosdk",
"src_encoding": "UTF-8",
"text": "# cozmo_ros2_nosdk\nStandalone ROS2 Driver for Cozmo (no SDK required)\n"
},
{
"alpha_fraction": 0.600215494632721,
"alphanum_fraction": 0.6099137663841248,
"avg_line_length": 28.967741012573242,
"blob_id": "fa7173eb7f60bb40158d341c1734be34c3e71685",
"content_id": "3a2c76bf454380e8af79dc96282d7a1f33d64b06",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 928,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 31,
"path": "/setup.py",
"repo_name": "solosito/cozmo_ros2_nosdk",
"src_encoding": "UTF-8",
"text": "from setuptools import setup\n\npackage_name = 'cozmo_ros2_nosdk'\n\nsetup(\n name=package_name,\n version='0.7.0',\n packages=[package_name],\n install_requires=['setuptools', 'numpy', 'pycozmo'],\n zip_safe=True,\n author='Alfonso Troya',\n author_email='[email protected]',\n maintainer='Alfonso Troya',\n maintainer_email='[email protected]',\n keywords=['ROS', 'Cozmo', 'driver'],\n classifiers=[\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n ],\n description='Standalone ROS2 Driver for Cozmo (no SDK required)',\n license='Apache License, Version 2.0',\n tests_require=['pytest'],\n entry_points={\n 'console_scripts': [\n 'bringup = cozmo_ros2_nosdk.bringup:main',\n 'teleop_twist_keyboard = cozmo_ros2_nosdk.teleop_twist_keyboard:main'\n ],\n },\n)"
},
{
"alpha_fraction": 0.5623195767402649,
"alphanum_fraction": 0.5762438178062439,
"avg_line_length": 35.24307632446289,
"blob_id": "9439df0f5b6452a79d88475d6e1eca6fceab5684",
"content_id": "2e9c947f9f0fe760e7123809a468f160407c1dc1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11778,
"license_type": "no_license",
"max_line_length": 143,
"num_lines": 325,
"path": "/cozmo_ros2_nosdk/bringup.py",
"repo_name": "solosito/cozmo_ros2_nosdk",
"src_encoding": "UTF-8",
"text": "# General\nimport numpy as np\nimport time\nfrom copy import deepcopy\n\n# OpenCV\nimport cv2\nfrom cv_bridge import CvBridge\n\n# Pycozmo\nimport pycozmo\nfrom pycozmo import (\n Client,\n event,\n protocol_encoder,\n)\n\n# ROS2\nimport rclpy\nfrom geometry_msgs.msg import (\n Pose,\n PoseWithCovariance,\n Quaternion,\n Twist,\n)\nfrom nav_msgs.msg import Odometry\nfrom sensor_msgs.msg import (\n BatteryState,\n Image,\n Imu,\n JointState,\n)\nfrom std_msgs.msg import Header\n\n\ndef euler_to_quaternion(roll, pitch, yaw):\n qx = np.sin(roll/2) * np.cos(pitch/2) * np.cos(yaw/2) - np.cos(roll/2) * np.sin(pitch/2) * np.sin(yaw/2)\n qy = np.cos(roll/2) * np.sin(pitch/2) * np.cos(yaw/2) + np.sin(roll/2) * np.cos(pitch/2) * np.sin(yaw/2)\n qz = np.cos(roll/2) * np.cos(pitch/2) * np.sin(yaw/2) - np.sin(roll/2) * np.sin(pitch/2) * np.cos(yaw/2)\n qw = np.cos(roll/2) * np.cos(pitch/2) * np.cos(yaw/2) + np.sin(roll/2) * np.sin(pitch/2) * np.sin(yaw/2)\n\n return Quaternion(x=qx, y=qy, z=qz, w=qw)\n\ndef quaternion_to_euler(q):\n t0 = +2.0 * (q.w * q.x + q.y * q.z)\n t1 = +1.0 - 2.0 * (q.x * q.x + q.y * q.y)\n roll = np.arctan2(t0, t1)\n t2 = +2.0 * (q.w * q.y - q.z * q.x)\n t2 = +1.0 if t2 > +1.0 else t2\n t2 = -1.0 if t2 < -1.0 else t2\n pitch = np.arcsin(t2)\n t3 = +2.0 * (q.w * q.z + q.x * q.y)\n t4 = +1.0 - 2.0 * (q.y * q.y + q.z * q.z)\n yaw = np.arctan2(t3, t4)\n return roll, pitch, yaw\n\n\nclass Cozmo(Client):\n def __init__(self, node_name, namespace=None):\n super().__init__()\n self._robot_started = False\n self._ros2_started = False\n self._init_robot()\n self._init_ros2(node_name, namespace)\n self._start_camera()\n rclpy.spin(self.node)\n\n def _start_camera(self):\n pkt = protocol_encoder.EnableCamera(enable=True)\n self.conn.send(pkt)\n pkt = protocol_encoder.EnableColorImages(enable=True)\n self.conn.send(pkt)\n time.sleep(2.0) # Wait for image to stabilize.\n\n def __del__(self):\n if self._ros2_started:\n self.node.destroy_node()\n\n def _init_ros2(self, node_name, namespace):\n self.node = rclpy.create_node(node_name=node_name,\n namespace=namespace or node_name)\n\n # Variables\n self._odom_frame = \"/map\"\n self._base_frame = \"/base_link\"\n self._bridge = CvBridge()\n self._last_pose = Pose()\n self._imu_msg = Imu(header = Header(frame_id = self._base_frame))\n self._odom_msg = Odometry(header = Header(frame_id = self._odom_frame),\n child_frame_id = self._base_frame,\n pose = PoseWithCovariance())\n self._js_msg = JointState()\n self._js_msg.header.frame_id = self._base_frame\n self._js_msg.name = ['head', 'lift']\n self._js_msg.velocity = [0.0, 0.0]\n self._js_msg.effort = [0.0, 0.0]\n self._battery_msg = BatteryState()\n self._battery_msg.present = True\n # self._tf_msg = TransformStamped(header = Header(frame_id = self._odom_frame),\n # child_frame_id = self._base_frame)\n\n # Subscribers\n self._twist_sub = self.node.create_subscription(Twist, 'cmd_vel', self._control, 10)\n\n # Publishers\n self._img_pub = self.node.create_publisher(Image, 'camera', 1)\n self._imu_pub = self.node.create_publisher(Imu, 'imu', 1)\n self._odom_pub = self.node.create_publisher(Odometry, \"odom\", 1)\n self._joint_state_pub = self.node.create_publisher(JointState, 'joints', 1)\n self._battery_pub = self.node.create_publisher(BatteryState, 'battery', 1)\n # self._tf_pub = self.create_publisher(TFMessage, \"tf\", 10)\n\n self._ros2_started = True\n\n def _init_robot(self):\n super().start()\n self._start()\n self.connect()\n self.wait_for_robot()\n self._robot_started = True\n\n def _robot_state_cb(self, cli):\n del cli\n\n # Updated data on EvtRobotStateUpdated:\n #\n # timestamp, pose_frame_id, pose_origin_id, pose_x, pose_y, pose_z, pose_angle_rad,\n # pose_pitch_rad, lwheel_speed_mmps, rwheel_speed_mmps, head_angle_rad, lift_height_mm\n # accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z, battery_voltage, status,\n # cliff_data_raw, backpack_touch_sensor_raw, curr_path_segment\n\n if not self._ros2_started:\n return\n\n now = self.node.get_clock().now().to_msg()\n\n self._pub_imu(now)\n self._pub_odom(now)\n self._publish_joint_state(now)\n self._publish_battery(now)\n\n def _pub_odom(self, now):\n \"\"\"\n Publish imu data as Imu.\n\n \"\"\"\n # Publish only if there are subscribers\n if self._odom_pub.get_subscription_count() == 0:\n return\n\n self._odom_msg.header.stamp = now\n self._odom_msg.pose.pose.position.x = self.pose.x * 0.001 # Units?\n self._odom_msg.pose.pose.position.y = self.pose.y * 0.001 # Units?\n self._odom_msg.pose.pose.position.z = self.pose.z * 0.001 # Units?\n self._odom_msg.pose.pose.orientation = euler_to_quaternion(0.0, self.pose_pitch.radians, self.pose_angle.radians)\n self._odom_msg.pose.covariance = np.diag([1e-2, 1e-2, 1e-2, 1e3, 1e3, 1e-1]).ravel()\n # self._odom_msg.twist.twist.linear.x = self._lin_vel\n # self._odom_msg.twist.twist.angular.z = self._ang_vel\n self._odom_msg.twist.covariance = np.diag([1e-2, 1e3, 1e3, 1e3, 1e3, 1e-2]).ravel()\n\n self._odom_pub.publish(self._odom_msg)\n\n self._last_pose = deepcopy(self._odom_msg.pose.pose)\n\n def _pub_imu(self, now):\n \"\"\"\n Publish imu data as Imu.\n\n \"\"\"\n # Publish only if there are subscribers\n if self._imu_pub.get_subscription_count() == 0:\n return\n\n self._imu_msg.header.stamp = now\n self._imu_msg.orientation = euler_to_quaternion(0.0, 0.0, self.pose_angle)\n self._imu_msg.linear_acceleration.x = self.accel.x * 0.001 # Units?\n self._imu_msg.linear_acceleration.y = self.accel.y * 0.001 # Units?\n self._imu_msg.linear_acceleration.z = self.accel.z * 0.001 # Units?\n self._imu_msg.angular_velocity.x = self.gyro.x # Units?\n self._imu_msg.angular_velocity.y = self.gyro.y # Units?\n self._imu_msg.angular_velocity.z = self.gyro.z # Units?\n\n self._imu_pub.publish(self._imu_msg)\n\n def _publish_joint_state(self, now):\n \"\"\"\n Publish joint states as JointStates.\n\n \"\"\"\n # Publish only if there are subscribers\n if self._joint_state_pub.get_subscription_count() == 0:\n return\n\n self._js_msg.header.stamp = now\n self._js_msg.position = [self.head_angle.radians,\n self.lift_position * 0.001]\n self._joint_state_pub.publish(self._js_msg)\n\n def _publish_battery(self, now):\n \"\"\"\n Publish battery as BatteryState message.\n\n \"\"\"\n # Publish only if there are subscribers\n if self._battery_pub.get_subscription_count() == 0:\n return\n\n self._battery_msg.header.stamp = now\n self._battery_msg.voltage = self.battery_voltage\n self._battery_pub.publish(self._battery_msg)\n\n def _robot_charging(self, cli, state):\n \"\"\"\n Publish battery as BatteryState message.\n\n \"\"\"\n del cli\n\n if self._battery_pub.get_subscription_count() == 0:\n return\n\n now = self.node.get_clock().now().to_msg()\n self._battery_msg.header.stamp = now\n if state:\n self._battery_msg.power_supply_status = BatteryState.POWER_SUPPLY_STATUS_CHARGING\n else:\n self._battery_msg.power_supply_status = BatteryState.POWER_SUPPLY_STATUS_NOT_CHARGING\n self._battery_pub.publish(self._battery_msg)\n\n def _cliff_detection_cb(self, cli, state):\n del cli\n if state:\n self.node.get_logger().info(\"Cliff detected.\")\n\n def _robot_kidnapping_cb(self, cli, state):\n del cli\n if state:\n self.node.get_logger().info(\"Picked up.\")\n else:\n self.node.get_logger().info(\"Put down.\")\n\n def _robot_poking_cb(self, cli, pkt):\n # pkt : pycozmo.protocol_encoder.RobotPoked\n del cli, pkt\n self.node.get_logger().info(\"Robot poked.\")\n\n def _robot_falling_cb(self, cli, pkt):\n del cli\n if type(pkt) == protocol_encoder.FallingStarted:\n self.node.get_logger().info(\"Started falling.\")\n elif type(pkt) == protocol_encoder.FallingStopped:\n self.node.get_logger().info(\"Falling stopped after {} ms. Impact intensity {:.01f}.\".format(pkt.duration_ms, pkt.impact_intensity))\n else:\n pass\n\n def _button_pressed_cb(self, cli, pkt):\n del cli\n if pkt.pressed:\n self.node.get_logger().info(\"Button pressed.\")\n else:\n self.node.get_logger().info(\"Button released.\")\n\n def _robot_wheels_moving_cb(self, cli, state):\n del cli\n if state:\n self.node.get_logger().info(\"Started moving.\")\n else:\n self.node.get_logger().info(\"Stopped moving.\")\n\n\n def _image_cb(self, cli, img):\n \"\"\"\n Publish camera image as Image.\n\n \"\"\"\n del cli\n\n # Publish only if ROS2 is started and there are subscribers\n if not self._ros2_started or self._img_pub.get_subscription_count() == 0:\n return\n\n now = self.node.get_clock().now().to_msg()\n cv_image = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)\n img_msg = self._bridge.cv2_to_imgmsg(cv_image, encoding=\"bgr8\")\n img_msg.header.stamp = now\n img_msg.header.frame_id = \"camera\"\n\n self._img_pub.publish(img_msg)\n\n def _start(self):\n self.add_handler(event.EvtRobotStateUpdated, self._robot_state_cb)\n self.add_handler(event.EvtNewRawCameraImage, self._image_cb)\n self.add_handler(event.EvtRobotChargingChange, self._robot_charging)\n self.add_handler(event.EvtRobotPickedUpChange, self._robot_kidnapping_cb)\n self.add_handler(event.EvtCliffDetectedChange, self._cliff_detection_cb)\n self.add_handler(event.EvtRobotWheelsMovingChange, self._robot_wheels_moving_cb)\n self.conn.add_handler(protocol_encoder.RobotPoked, self._robot_poking_cb)\n self.conn.add_handler(protocol_encoder.FallingStarted, self._robot_falling_cb)\n self.conn.add_handler(protocol_encoder.FallingStopped, self._robot_falling_cb)\n self.conn.add_handler(protocol_encoder.ButtonPressed, self._button_pressed_cb)\n\n def _control(self, twist_msg):\n \"\"\"\n Set commanded velocities from Twist message.\n\n :type twist_msg: Twist\n :param twist_msg: The commanded velocities.\n\n \"\"\"\n\n left_wheel = (twist_msg.linear.x - twist_msg.angular.z) * pycozmo.MAX_WHEEL_SPEED.mmps\n right_wheel = (twist_msg.linear.x + twist_msg.angular.z) * pycozmo.MAX_WHEEL_SPEED.mmps\n\n self.drive_wheels(lwheel_speed=left_wheel, rwheel_speed=right_wheel)\n self.move_head(twist_msg.linear.y)\n self.move_lift(twist_msg.linear.z)\n\ndef main(args=None):\n rclpy.init(args=args)\n cozmo_node = Cozmo(node_name=\"cozmo\")\n cozmo_node.start()\n rclpy.shutdown()\n\nif __name__==\"__main__\":\n main()"
},
{
"alpha_fraction": 0.4160315990447998,
"alphanum_fraction": 0.4682472348213196,
"avg_line_length": 26.261537551879883,
"blob_id": "d4eaab67f075e45619f6a9cddfece3be61881019",
"content_id": "6173af098dbd7ffe6442ed813ecce9f49a457602",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3543,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 130,
"path": "/cozmo_ros2_nosdk/teleop_twist_keyboard.py",
"repo_name": "solosito/cozmo_ros2_nosdk",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport rclpy\nimport sys, select, termios, tty\nfrom geometry_msgs.msg import Twist\n\nsettings = termios.tcgetattr(sys.stdin)\n\nmsg = '''\nReading from the keyboard and Publishing to Twist!\n---------------------------\nMoving around:\n q w e\n a s d\n z x c\n\no : lift up\np : lift down\nl : head up\n; : head down\n\nanything else : stop\n\nt/y : increase/decrease movement speed by 10%\ng/h : increase/decrease lift speed by 10%\nb/n : increase/decrease head speed by 10%\n\nCTRL-C to quit\n'''\n\nmoveBindings = {\n 'w' : ( 1.0, 0.0, 0.0, 0.0),\n 'e' : ( 1.0, 0.0, 0.0,-1.0),\n 'a' : ( 0.0, 0.0, 0.0, 1.0),\n 'd' : ( 0.0, 0.0, 0.0,-1.0),\n 'q' : ( 1.0, 0.0, 0.0, 1.0),\n 'x' : (-1.0, 0.0, 0.0, 0.0),\n 'c' : (-1.0, 0.0, 0.0, 1.0),\n 'z' : (-1.0, 0.0, 0.0,-1.0),\n 'o' : ( 0.0, 0.0, 1.0, 0.0),\n 'p' : ( 0.0, 0.0,-1.0, 0.0),\n 'l' : ( 0.0, 1.0, 0.0, 0.0),\n ';' : ( 0.0,-1.0, 0.0, 0.0),\n }\n\nspeedBindings={\n 't' : (1.1, 1.0, 1.0), # Movement speed\n 'y' : (0.9, 1.0, 1.0), # Movement speed\n 'g' : (1.0, 1.1, 1.0), # Lift speed\n 'h' : (1.0, 0.9, 1.0), # Lift speed\n 'b' : (1.0, 1.0, 1.1), # Head speed\n 'n' : (1.0, 1.0, 0.9), # Head speed\n }\n\ndef getKey():\n tty.setraw(sys.stdin.fileno())\n select.select([sys.stdin], [], [], 0)\n key = sys.stdin.read(1)\n termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)\n return key\n\n\ndef vels(movement_speed, lift_speed, head_speed):\n return 'current speeds:\\tmovement {:.3f}\\tlift {:.3f}\\thead {:.3f} '.format(movement_speed, lift_speed, head_speed)\n\ndef main(args=None):\n\n rclpy.init(args=args)\n node = rclpy.create_node(node_name='teleop_twist_keyboard', namespace='cozmo')\n pub = node.create_publisher(Twist, 'cmd_vel', 1)\n\n movement_speed = 0.2\n head_speed = 2.0\n lift_speed = 2.0\n x = 0.0\n y = 0.0\n z = 0.0\n th = 0.0\n status = 0\n\n try:\n print(msg)\n print(vels(movement_speed, lift_speed, head_speed))\n while(True):\n key = getKey()\n if key in moveBindings.keys():\n x = moveBindings[key][0]\n y = moveBindings[key][1]\n z = moveBindings[key][2]\n th = moveBindings[key][3]\n\n elif key in speedBindings.keys():\n movement_speed = movement_speed * speedBindings[key][0]\n head_speed = head_speed * speedBindings[key][1]\n lift_speed = lift_speed * speedBindings[key][2]\n\n print(vels(movement_speed, lift_speed, head_speed))\n if (status == 5): # 14\n print(msg)\n status = (status + 1) % 6\n\n else:\n x = 0.0\n y = 0.0\n z = 0.0\n th = 0.0\n if (key == '\\x03'):\n break\n\n twist = Twist()\n\n twist.linear.x = x * movement_speed;\n twist.linear.y = y * head_speed;\n twist.linear.z = z * lift_speed;\n twist.angular.z = th * movement_speed\n\n twist.angular.x = 0.0;\n twist.angular.y = 0.0;\n\n pub.publish(twist)\n\n except:\n print(sys.exc_info())\n\n finally:\n pub.publish(Twist())\n termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)\n\n\nif __name__ == '__main__':\n main()"
}
] | 4 |
jameswpriceio/python_projects | https://github.com/jameswpriceio/python_projects | bad87c295315b8b5f8549158d1ddac57f26e0880 | dc4e36434429efd13f7250ed19d8102c6dd99777 | 6366bb9d49b4bce32e2ab83856ca0f07af7dd5e7 | refs/heads/main | 2023-02-28T07:05:31.406271 | 2021-02-14T20:38:20 | 2021-02-14T20:38:20 | 335,366,619 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6089466214179993,
"alphanum_fraction": 0.6349206566810608,
"avg_line_length": 26.176469802856445,
"blob_id": "fc41f8ba435ae8c7de00c2b255b00754dac74d0a",
"content_id": "f61561ae77132c457e9aac0080288748c044091d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1386,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 51,
"path": "/Simulate_Dice_Roll/dice-roll.py",
"repo_name": "jameswpriceio/python_projects",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sun Mar 15 14:24:36 2020\n\n@author: james price\n\"\"\"\n\"\"\"Simulating the dice game Craps.\"\"\"\nimport random\n\ndef roll_dice():\n \"\"\"Roll two dice and return their face values as a tuple.\"\"\"\n die1 = random.randrange(1, 7)\n die2 = random.randrange(1, 7)\n return (die1, die2) # pack die face values into a tuple\n\ndef display_dice(dice):\n \"\"\"Display one roll of the two dice.\"\"\"\n die1, die2 = dice # unpack the tuple into variables die1 and die2\n print(f'Player rolled {die1} + {die2} = {sum(dice)}')\n\ndie_values = roll_dice() # first roll\ndisplay_dice(die_values)\n\n# determine game status and point, based on first roll\nsum_of_dice = sum(die_values)\n\nif sum_of_dice in (7, 11): # win\n game_status = 'WON'\nelif sum_of_dice in (2, 3, 12): # lose\n game_status = 'LOST'\nelse: # remember point\n game_status = 'CONTINUE'\n my_point = sum_of_dice\n print('Point is', my_point)\n\n# continue rolling until player wins or loses\nwhile game_status == 'CONTINUE':\n die_values = roll_dice()\n display_dice(die_values)\n sum_of_dice = sum(die_values)\n\n if sum_of_dice == my_point: # win by making point\n game_status = 'WON'\n elif sum_of_dice == 7: # lose by rolling 7\n game_status = 'LOST'\n\n# display \"wins\" or \"loses\" message\nif game_status == 'WON':\n print('Player wins')\nelse:\n print('Player loses')\n"
},
{
"alpha_fraction": 0.761904776096344,
"alphanum_fraction": 0.761904776096344,
"avg_line_length": 20,
"blob_id": "a7703326b32458079ac3b186b8fbcef82c5caa86",
"content_id": "a68d715b7121c86fd68197b0559499cc31d5f913",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 42,
"license_type": "permissive",
"max_line_length": 23,
"num_lines": 2,
"path": "/README.md",
"repo_name": "jameswpriceio/python_projects",
"src_encoding": "UTF-8",
"text": "# python_projects\n Python Projects & Demo\n"
},
{
"alpha_fraction": 0.7727272510528564,
"alphanum_fraction": 0.7727272510528564,
"avg_line_length": 21,
"blob_id": "f564faba88a7f79d1fe8d6211c83ddbad6018b1b",
"content_id": "2c79bd2954b94a5a964491578283de1f44df78cc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 44,
"license_type": "permissive",
"max_line_length": 31,
"num_lines": 2,
"path": "/Flask_Blog_Demo/README.md",
"repo_name": "jameswpriceio/python_projects",
"src_encoding": "UTF-8",
"text": "# flask-app\nDemo flask app hosted on Heroku\n"
}
] | 3 |
markmillr/math | https://github.com/markmillr/math | 7b4547cf2190ad849454375ecea1d104ed87169b | 71d3af090304310e59bf9a5fdfaff5f8739d3a8b | 9eb3cbcf0a0bd00068f6bc6695693cbbeffea086 | refs/heads/master | 2020-04-15T02:36:59.822797 | 2019-01-06T22:11:15 | 2019-01-06T22:11:15 | 164,320,597 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6168674826622009,
"alphanum_fraction": 0.6393574476242065,
"avg_line_length": 16.757143020629883,
"blob_id": "d538a6758911ddee45bd2f0120e459783f6481e4",
"content_id": "9b29e1d2b55893050078ccd2e070c39437600743",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1245,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 70,
"path": "/gibbs.py",
"repo_name": "markmillr/math",
"src_encoding": "UTF-8",
"text": "# gibbs.py\n\n# demonstrates the Gibbs phenomenen, that the oscillations of the partial sums of Fourier series\n# do not (necessarily?) decrease as they approach a jump discontinuity\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx_ = np.linspace(-1, 1, num=1000)\nNx = 10 # number of points to consider\nNh = 100 # number of harmonics\nT = 8\n\ndef testfunction(x):\n\tresult = None\n\tif x >= -1 and x <= 0:\n\t\tprint(x)\n\t\tresult = -1.\n\telif x > 0 and x <= 1:\n\t\tresult = 1.\n\treturn result\n\ndef wn(n):\n\tglobal T\n\twn = (2*np.pi*n)/T\n\treturn wn\n\ndef bn(n):\n\tn = int(n)\n\tif (n%2 != 0):\n\t\treturn 4/(np.pi*n)\n\telse:\n\t\treturn 0\n\n\ndef fourierSeries(x, n_max=None):\n\ta0 = 0.\n\tpartialSums = 0.\n\tfor n in range(1,n_max):\n\t\ttry:\n\t\t\tpartialSums = partialSums + bn(n)*np.sin(wn(n)*x)\n\t\texcept:\n\t\t\tprint(\"nahh\")\n\t\t\tpass\n\treturn partialSums\n\n\n\n\nx = []\ny = []\nf = []\n\nfor i in x_:\n\t#print(testfunction(i))\n\tx.append(i)\n\ty.append(testfunction(i))\n\n\tf.append(fourierSeries( i, n_max=Nh))\n\nfor i in range(0, len(y)):\n\tprint(x[i], y[i])\n\nfig, ax = plt.subplots()\nplt.style.use(\"ggplot\")\nax.plot(x_,y, color=\"purple\", label=\"Signal\")\nax.plot(x_,f, color=\"green\", label=\"Fourier\")\nplt.legend()\nax.set_title(\"Fourier series approximation: Nh = {}\".format(Nh))\nplt.show()\n\n\n"
},
{
"alpha_fraction": 0.8290908932685852,
"alphanum_fraction": 0.8290908932685852,
"avg_line_length": 68,
"blob_id": "02cded48eac2e31e9b7a35b89952ea70361e24f5",
"content_id": "7d4116de533593d369d68255decd71b4b5f3d4ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 275,
"license_type": "no_license",
"max_line_length": 222,
"num_lines": 4,
"path": "/README.md",
"repo_name": "markmillr/math",
"src_encoding": "UTF-8",
"text": "# math\nLittle scripts for understanding math better\n\nGibbs - Generates plots demonstrating the Gibbs phenomenon - that Fourier series approximations of functions with jump discontinuities do not converge to zero error even with an infinite number of partial sums (harmonics)."
}
] | 2 |
wkf2000/spy_test | https://github.com/wkf2000/spy_test | 5f51db5e967064d840978855372058f8785bdd89 | 5682f9f13a95a1dd23b94bf6f4cd54eef6ae8381 | 8388f3d1da2c3a43dc7263897b55723dd1e48833 | refs/heads/master | 2021-01-10T02:22:17.395815 | 2016-01-16T19:36:22 | 2016-01-16T19:36:22 | 49,000,292 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5665669441223145,
"alphanum_fraction": 0.5946148037910461,
"avg_line_length": 27.752687454223633,
"blob_id": "26c2529ffe1ba9ce85c585a74d42dbaae395e61f",
"content_id": "7f3468bc467c7a2a01ce5edf2136d9eddbd6b32a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2674,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 93,
"path": "/sp500data.py",
"repo_name": "wkf2000/spy_test",
"src_encoding": "UTF-8",
    "text": "#!/usr/bin/env python\n\nimport urllib2\nimport pytz\nimport csv\nimport os\nfrom bs4 import BeautifulSoup\nfrom datetime import datetime\nfrom pandas_datareader import data as web\nimport pandas as pd\nimport shutil\n\n\nSITE = \"http://en.wikipedia.org/wiki/List_of_S%26P_500_companies\"\nSTART = datetime(2005, 1, 1, 0, 0, 0, 0, pytz.utc)\nEND = datetime(2016, 1, 1, 0, 0, 0, 0, pytz.utc)\nEXTRA = ['SPY', 'CIEN', 'ASHR']\n\n\ndef scrape_list(site):\n    hdr = {'User-Agent': 'Mozilla/5.0'}\n    req = urllib2.Request(site, headers=hdr)\n    page = urllib2.urlopen(req)\n    soup = BeautifulSoup(page, \"lxml\")\n\n    table = soup.find('table', {'class': 'wikitable sortable'})\n    sector_tickers = dict()\n    for row in table.findAll('tr'):\n        col = row.findAll('td')\n        if len(col) > 0:\n            sector = str(col[3].string.strip()).lower().replace(' ', '_')\n            ticker = str(col[0].string.strip())\n            if sector not in sector_tickers:\n                sector_tickers[sector] = list()\n            sector_tickers[sector].append(ticker)\n    return sector_tickers\n\n\ndef get_snp500():\n    sector_tickers = scrape_list(SITE)\n    symbols = []\n    with open('sp500.csv', 'w') as f:\n        writer = csv.writer(f, delimiter=',')\n        for sector, tickers in sector_tickers.iteritems():\n            print \"save sector: \" + sector\n            for tk in tickers:\n                symbols.append(tk)\n                writer.writerow([tk, sector])\n\n    print \"sp500 symbols downloaded\"\n    return symbols\n\n\ndef download_history(ticker):\n    print '\\tworking on ' + ticker\n    try:\n        data = web.DataReader(ticker, 'yahoo', START, END)\n    except Exception:\n        print '\\t' + ticker + ' downloading error'\n        return\n\n    data.drop(['Open'], axis=1, inplace=True)\n    data.drop(['High'], axis=1, inplace=True)\n    data.drop(['Low'], axis=1, inplace=True)\n    data.drop(['Close'], axis=1, inplace=True)\n    data.drop(['Volume'], axis=1, inplace=True)\n    data.rename(columns={'Adj Close': 'Close'}, inplace=True)\n\n    ma = pd.stats.moments.rolling_mean(data['Close'], 10)\n    data['10_MA'] = ma\n    data['10_MAC'] = (ma - data['Close']) / data['Close'] * -100\n\n    ma = pd.stats.moments.rolling_mean(data['Close'], 100)\n    data['100_MA'] = ma\n    data['100_MAC'] = (ma - data['Close']) / data['Close'] * -100\n\n    with open('data/' + ticker + '.csv', 'w') as f:\n        data.to_csv(f)\n\n\nif __name__ == '__main__':\n    if os.path.exists('data'):\n        shutil.rmtree('data')\n    os.makedirs('data')\n\n    symlist = get_snp500()\n    # symlist = symlist[:100]\n    symlist += EXTRA\n\n    for sym in symlist:\n        download_history(sym)\n\n    print 'Finished downloading data'\n"
},
{
"alpha_fraction": 0.5256191492080688,
"alphanum_fraction": 0.5666097402572632,
"avg_line_length": 23.652631759643555,
"blob_id": "487b684a616048f4c52f649aaeb90cbcb7d8ae07",
"content_id": "659d60fbf823f1110b36e1dce2e8f22d19290329",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2342,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 95,
"path": "/policyMA.py",
"repo_name": "wkf2000/spy_test",
"src_encoding": "UTF-8",
    "text": "#!/usr/bin/env python\n\nimport os\nimport datetime\nimport pandas as pd\nimport csv\nimport operator\nimport shutil\nfrom multiprocessing.dummy import Pool as ThreadPool\n\nRANGE = range(-18, 20, 2)\nRANGE1 = [-100, -30, -20]\nRANGE2 = [20, 30, 100]\nRANGE = RANGE1 + RANGE + RANGE2\n\n#increase 10%+ in 15 days, etc\nPERIOD = 20\nGAIN = 0.16\nLOSE = -0.08\nTHRESH = 3\n\nPATH = 'data/'\nOUTPATH = 'out/'\n\n\ndef init_dict():\n    ret = {}\n    for i in range(len(RANGE)-1):\n        for j in range(len(RANGE)-1):\n            key = \"[%d,%d][%d,%d]\" % (RANGE[i], RANGE[i+1], RANGE[j], RANGE[j+1])\n            ret[key] = 0\n    return ret\n\n\ndef find_range(ma10, ma100):\n    for i in range(len(RANGE)-1):\n        for j in range(len(RANGE)-1):\n            if RANGE[i] < ma10 < RANGE[i+1] and RANGE[j] < ma100 < RANGE[j+1]:\n                return \"[%d,%d][%d,%d]\" % (RANGE[i], RANGE[i+1], RANGE[j], RANGE[j+1])\n    return \"error\"\n\n\ndef gain_lose(df, date):\n    pp = df.loc[date:date + PERIOD * datetime.timedelta(1), 'Close']\n    chg1 = (pp.max() - pp[0]) / pp[0]\n    chg2 = (pp.min() - pp[0]) / pp[0]\n    if chg1 >= GAIN and chg2 >= LOSE:\n        return True\n    return False\n\n\ndef update_result(df, date, result):\n    ma10 = df.loc[date, '10_MAC']\n    ma100 = df.loc[date, '100_MAC']\n    key = find_range(ma10, ma100)\n    # if '[-100,-30]' in key:\n    # print date\n    if key == \"error\":\n        return\n    result[key] += 1\n\n\ndef calculation(afile):\n    symbol, ext = os.path.splitext(afile)\n    print 'working on ' + symbol\n    in_file = os.path.join('data', symbol + '.csv')\n    df = pd.read_csv(in_file, index_col='Date', parse_dates=True, usecols=['Date', 'Close', '10_MAC', '100_MAC'])\n\n    result = init_dict()\n\n    for date in df.index:\n        if gain_lose(df, date):\n            update_result(df, date, result)\n\n    result_x = sorted(result.items(), key=operator.itemgetter(1), reverse=True)\n    result = []\n    for row in result_x:\n        if row[1] > THRESH:\n            result.append(row)\n    if len(result) > 0:\n        writer = csv.writer(open(OUTPATH + symbol + '.csv', 'wb'))\n        writer.writerows(result)\n    return\n\n\nif __name__ == '__main__':\n    if os.path.exists('out'):\n        shutil.rmtree('out')\n    os.makedirs('out')\n\n    files = os.listdir(PATH)\n    pool = ThreadPool(4)\n    pool.map(calculation, files)\n    pool.close()\n    pool.join()\n"
},
{
"alpha_fraction": 0.5587905645370483,
"alphanum_fraction": 0.5655094981193542,
"avg_line_length": 22.526315689086914,
"blob_id": "e674dfa10efa0a8783ae7d11983a64db3c52b171",
"content_id": "1f9aad5576aae44efda84843def8ae305e41feac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 893,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 38,
"path": "/resultMA.py",
"repo_name": "wkf2000/spy_test",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport policyMA\nimport os\nimport csv\nimport operator\n\n\nTHRESH = 8\n\n\ndef do_job(s, r):\n csv_path = os.path.join('out', s + '.csv')\n with open(csv_path, \"rb\") as in_file:\n reader = csv.reader(in_file)\n for row in reader:\n r[row[0]] += int(row[1])\n\n\ndef show_result(r):\n result_x = sorted(r.items(), key=operator.itemgetter(1), reverse=True)\n output = []\n for row in result_x:\n if row[1] > THRESH:\n output.append(row)\n if len(output) > 0:\n writer = csv.writer(open('result' + '.csv', 'wb'))\n writer.writerows(output)\n\n\nif __name__ == '__main__':\n result = policyMA.init_dict()\n for files in os.listdir(policyMA.OUTPATH):\n symbol, ext = os.path.splitext(files)\n if ext == '.csv':\n print \"working on \" + symbol\n do_job(symbol, result)\n show_result(result)"
}
] | 3 |
tomsib2001/orisi | https://github.com/tomsib2001/orisi | 595030cf21b9b9b217594a36cab0d62cb23ef9dc | 1bc64cc1d96af904ee0a46d8a1dfd9b79c655067 | 160bb14bfc715423a8227707dd2992d4fd2255d3 | refs/heads/master | 2020-12-03T07:53:46.673119 | 2014-10-09T10:53:08 | 2014-10-09T10:53:08 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6018633246421814,
"alphanum_fraction": 0.626086950302124,
"avg_line_length": 25.83333396911621,
"blob_id": "b7d72f4ad548818036f859f63119965fa5495d79",
"content_id": "9dfcbb67441e755515b92fac60c5f403a941218b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1610,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 60,
"path": "/install.sh",
"repo_name": "tomsib2001/orisi",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ntflag=no\nset -- $(getopt t \"$@\")\nwhile [ $# -gt 0 ]\ndo\n case \"$1\" in\n (-t) tflag=yes;;\n (--) shift; break;;\n (-*) echo \"$0: error - unrecognized option $1\" 1>&2; exit 1;;\n (*) break;;\n esac\n shift\ndone\n\nread -p \"Do you want to update? [y/N]? \" -n 1 -r\necho # (optional) move to a new line\nif [[ $REPLY =~ ^[Yy]$ ]]\nthen\n sudo apt-get update\nfi\nsudo apt-get install python-dev vim screen\nsudo pip install -r requirements.txt\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\nHOME=\"$DIR/..\"\n\n\nread -p \"Do you need to install bitcoind? [y/N]? \" -n 1 -r\necho # (optional) move to a new line\nif [[ $REPLY =~ ^[Yy]$ ]]\nthen\n wget --directory-prefix=$HOME https://bitcoin.org/bin/0.9.1/bitcoin-0.9.1-linux.tar.gz &&\n tar -C $HOME -zxvf $HOME/bitcoin-0.9.1-linux.tar.gz &&\n mv $HOME/bitcoin-0.9.1-linux $HOME/bitcoin &&\n rm $HOME/bitcoin-0.9.1-linux.tar.gz &&\n echo 'alias bitcoind=~/bitcoin/bin/64/bitcoind' >> $HOME/.bash_aliases &&\n source $HOME/.bash_aliases &&\n \n cp $DIR/src/settings_local.py.example $DIR/src/settings_local.py\nfi\n\n\nif [ \"$tflag\" == \"yes\" ]\nthen\n echo BITCOIND_TEST_MODE=True >> $DIR/src/settings_local.py\nfi\n\nmkdir -p $HOME/.bitcoin/\n# this is harmless even if the file exists\ntouch $HOME/.bitcoin/bitcoin.conf\n\nBTCRPC=`openssl rand -hex 32`\necho rpcuser=bitrpc >> $HOME/.bitcoin/bitcoin.conf\necho rpcpassword=$BTCRPC >> $HOME/.bitcoin/bitcoin.conf\nif [ \"$tflag\" == \"yes\" ]\nthen\n echo connect=127.0.0.1:8333 >> $HOME/.bitcoin/bitcoin.conf\nfi\necho BITCOIND_RPC_PASSWORD = \\\"$BTCRPC\\\" >> $DIR/src/settings_local.py\n"
},
{
"alpha_fraction": 0.6329467296600342,
"alphanum_fraction": 0.6741838455200195,
"avg_line_length": 28.46518898010254,
"blob_id": "f2ded31ac4e161c06bed65cc904981d4340c22c5",
"content_id": "9fbe663e56775682982537a7773a4a5ad60a76c1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9312,
"license_type": "permissive",
"max_line_length": 467,
"num_lines": 316,
"path": "/src/client/main.py",
"repo_name": "tomsib2001/orisi",
"src_encoding": "UTF-8",
    "text": "#!/usr/bin/env python2.7\n\nimport pprint\n\nimport sys\nimport os\nsys.path.append(os.path.dirname(os.path.dirname(__file__)))\n\nimport json\nimport datetime\nimport time\n\nfrom random import randrange\n\nfrom shared import liburl_wrapper\nfrom shared.liburl_wrapper import safe_pushtx\nfrom shared.bitcoind_client.bitcoinclient import BitcoinClient\nfrom shared.fastproto import (\n    generateKey,\n    sendMessage,\n    constructMessage)\n\nfrom math import ceil\nfrom decimal import Decimal\n\n\nSTART_COMMAND = \"./runclient.sh\"\n\n# Charter url should be url to json with oracles described. Check out http://oracles.li/timelock-charter.json for example\nCHARTER_URL = 'http://client.orisi.org/static/charter.json'\n# Eligius requires 4096 satoshi fee per 512 bytes of transaction ( http://eligius.st/~gateway/faq-page )\n# With three oracles, the tx fee is around 512 bytes.\nMINERS_FEE = 4*4096 # = fee enough to pay for a tx of 4*512 bytes. a bit higher than required, but we want to support Eligius\n\ndef fetch_charter(charter_url):\n  while True:\n    try:\n      charter_json = liburl_wrapper.safe_read(charter_url, timeout_time=10)\n      return json.loads(charter_json)\n    except:\n      print \"retrying...\"\n\ndef main(args):\n  btc = BitcoinClient()\n  tmp_address = btc.validate_address(btc.get_new_address())\n\n  print \"fetching charter: %s\" % CHARTER_URL\n  charter = fetch_charter(CHARTER_URL)\n\n  client_pubkey = tmp_address['pubkey']\n  oracle_pubkeys = []\n  for o in charter['nodes']:\n#    print json.dumps(o)\n    oracle_pubkeys.append(o['pubkey'])\n\n  min_sigs = int(ceil(float(len(oracle_pubkeys))/2))\n\n  print \"number of nodes: %i\" % len(charter['nodes'])\n  print \"required signatures: %i\" % min_sigs\n  sum_fees_satoshi = 0\n  for o in charter['nodes']:\n    sum_fees_satoshi += Decimal(o['fee'])*100000000\n  sum_fees_satoshi += Decimal(charter['org_fee'])*100000000\n\n\n  key_list = [client_pubkey] + oracle_pubkeys\n\n  response = btc.create_multisig_address(min_sigs, key_list)\n\n  print \"\"\n  print \"1. wire the funds to %s\" % response['address']\n  print \"   oracle & org fees: %i satoshi (as detailed in %s)\" % (sum_fees_satoshi , CHARTER_URL)\n  print \"   miners fee: %i satoshi (see MINERS_FEE in src/client/main.py if you want to lower it)\" % MINERS_FEE\n  print \"2. wait for transaction to get any confirmations\"\n  print \"3. run:\"\n  print \"%s main2 %s <locktime_minutes> <return_address>\" % ( START_COMMAND, client_pubkey )\n\ndef timelock(args):\n  if len(args) < 2:\n    print \"USAGE: `%s timelock <locktime_minutes> <return_address>`\" % START_COMMAND\n    return\n\n  return_address = args[1]\n\n  print \"fetching charter: %s\" % CHARTER_URL\n  charter = fetch_charter(CHARTER_URL)\n\n  oracle_pubkeys = []\n  oracle_fees = {}\n  oracle_bms = []\n  for o in charter['nodes']:\n    oracle_pubkeys.append(o['pubkey'])\n    oracle_fees[o['address']] = o['fee']\n    #oracle_bms.append(o['bm'])\n\n  min_sigs = int(ceil(float(len(oracle_pubkeys))/2))\n\n  print \"number of nodes: %i\" % len(charter['nodes'])\n  print \"required signatures: %i\" % min_sigs\n\n  oracle_fees[charter['org_address']] = charter['org_fee']\n\n  key_list = oracle_pubkeys\n\n  request = {}\n  msig_addr = return_address\n\n  request['message_id'] = \"%s-%s\" % (msig_addr, str(randrange(1000000000,9000000000)))\n  request['pubkey_list'] = key_list\n\n  request['miners_fee_satoshi'] = MINERS_FEE\n  request['locktime'] = time.time() + int(args[0])*60\n  request['return_address'] = return_address\n  request['oracle_fees'] = oracle_fees\n  request['req_sigs'] = min_sigs\n  request['operation'] = 'safe_timelock_create'\n\n  pub, priv = generateKey()\n\n  meta_request = {}\n  meta_request['source'] = pub\n  meta_request['channel'] = 0\n  meta_request['epoch'] = time.mktime(datetime.datetime.utcnow().timetuple())\n  meta_request['body'] = json.dumps(request)\n\n  print sendMessage(constructMessage(priv, **meta_request))\n\ndef main2(args):\n  if len(args)<3:\n    print \"USAGE: `%s main2 <pubkey_once> <locktime_minutes> <return_address>`\" % START_COMMAND\n    print \"- run `%s main` to obtain pubkey_once\" % START_COMMAND\n    print \"- keep in mind that this is alpha, don't expect oracles to run properly for any extended periods of time\"\n    return\n\n  btc = BitcoinClient()\n\n  request = {}\n  client_pubkey = args[0]\n  request['locktime'] = time.time() + int(args[1])*60\n  request['return_address'] = args[2]\n\n  print \"fetching charter url\" # hopefully it didn't check between running main1 and main2\n  charter = fetch_charter(CHARTER_URL)\n\n  oracle_pubkeys = []\n  oracle_fees = {}\n  oracle_bms = []\n\n  for o in charter['nodes']:\n    oracle_pubkeys.append(o['pubkey'])\n    oracle_fees[o['address']] = o['fee']\n    #oracle_bms.append(o['bm'])\n\n  oracle_fees[charter['org_address']] = charter['org_fee']\n\n  min_sigs = int(ceil(float(len(oracle_pubkeys))/2))\n\n  key_list = [client_pubkey] + oracle_pubkeys\n\n  response = btc.create_multisig_address(min_sigs, key_list)\n  msig_addr = response['address'] # we're using this as an identificator\n  redeemScript = response['redeemScript']\n\n  request['message_id'] = \"%s-%s\" % (msig_addr, str(randrange(1000000000,9000000000)))\n  request['pubkey_list'] = key_list\n\n  request['miners_fee_satoshi'] = MINERS_FEE\n\n  print \"fetching transactions incoming to %s ...\" % msig_addr\n\n  import requests\n  # for production purposes you might want to fetch the data using bitcoind, but that's expensive\n  print \"get\"\n  address_json = requests.get(\"https://blockchain.info/address/%s?format=json\" % msig_addr).text\n  #try:\n  print address_json\n  address_history = json.loads(address_json)\n  \n  #except:\n    #print \"blockchain.info problem\"\n    #print address_json\n    #return\n\n  prevtxs = []\n  sum_satoshi = 0\n\n  for tx in address_history['txs']:\n    outputs = []\n    if 'out' in tx:\n      outputs = outputs + tx['out']\n    if 'outputs' in tx:\n      outputs = outputs + tx['outputs']\n\n    for vout in tx['out']:\n      print vout\n      if vout['addr'] == msig_addr:\n        prevtx = {\n          'scriptPubKey' : vout['script'],\n          'vout': vout['n'],\n          'txid': tx['hash'],\n          'redeemScript': redeemScript,\n        }\n        sum_satoshi += vout['value']\n        prevtxs.append(prevtx)\n\n  if len(prevtxs) == 0:\n    print \"ERROR: couldn't find transactions sending money to %s\" % msig_addr\n #   return\n\n\n  request['prevtxs'] = prevtxs\n  request['outputs'] = oracle_fees\n\n  request[\"req_sigs\"] = min_sigs\n  request['operation'] = 'timelock_create'\n  request['sum_satoshi'] = sum_satoshi\n\n  pub, priv = generateKey()\n\n  meta_request = {}\n  meta_request['source'] = pub\n  meta_request['channel'] = 0\n  meta_request['signature'] = 0\n  meta_request['body'] = json.dumps(request)\n\n  print sendMessage(constructMessage(priv, **meta_request))\n\n\ndef wait_sign(args):\n\n  bm = BitmessageClient()\n  while True:\n    messages = bm.get_unread_messages()\n\n    print \"unread messages: %r\" % len(messages)\n    for msg in messages:\n      if msg.subject[0:10] == 'final-sign':\n        try:\n          content = json.loads(msg.message)\n          print content['pwtxid']\n        except:\n          print \"problem with message parsing\"\n          time.sleep(5)\n        else:\n          print \"complete signed tx for pwtxid: %s\" % content['pwtxid']\n          print \"please forward this to Eligius pool ( http://eligius.st/~wizkid057/newstats/pushtxn.php ):\"\n          print content['transaction']\n          bm.mark_message_as_read(msg)\n\n    time.sleep(5)\n\ndef tx_info(args):\n  tx = args[0]\n  btc = BitcoinClient()\n\n  prevtxs = '[{\"redeemScript\": \"52210281cf9fa9241f0a9799f27a4d5d60cff74f30eed1d536bf7a72d3dec936c151632102e8e22190b0adfefd0962c6332e74ab68831d56d0bfc2b01b32beccd56e3ef6f021035ff60e6745093b9bcbae93082e1c50ca5b3fcf8bcd186a46da46ded5132530522103a9bd3bfbd9f9b1719d3ecad8658796dc5e778177d77145b5c37247eb3060861854ae\", \"txid\": \"10a3ab54e1e19701fcb86c7725621b5b1b26415f94363de35a493ba9ca502b15\", \"vout\": 0, \"scriptPubKey\": \"a914a37ce66d7065157037e90ca4d4b4a20d8d865a2687\"}]'\n  prevtxs = json.loads(prevtxs)\n\n  pprint.pprint( btc.decode_raw_transaction(tx))\n\n  pprint.pprint (btc.signatures_count(tx, prevtxs))\n\n  pprint.pprint (btc.signatures(tx, prevtxs))\n\n\ndef pushtx(args):\n  tx = args[0]\n  print safe_pushtx(tx)\n\n\nOPERATIONS = {\n  'main': main,\n  'timelock': timelock,\n  'main2': main2,\n  'wait': wait_sign,\n  'txinfo': tx_info,\n  'pushtx': pushtx,\n}\n\nSHORT_DESCRIPTIONS = {\n  'main': \"prepares the first multisig\",\n  'main2': \"broadcasts a request for create (timelock/bounty)\",\n  'wait_sign': \"waits for a signature\",\n  'tx_info': 'information about a signed tx',\n  'pushtx': 'pushes tx to eligius',\n}\n\ndef help():\n  print \"You can use one of the following functions:\"\n  for name, desc in SHORT_DESCRIPTIONS.iteritems():\n    print \"{0} - {1}\".format(name, desc)\n  print \"Learn more by using {0} help functionname\".format(START_COMMAND)\n\ndef main(args):\n  if len(args) == 0:\n    print \"no arguments given, use {0} help for possible operations\".format(START_COMMAND)\n    return\n  if args[0] == 'help':\n    if len(args) == 1:\n      help()\n    else:\n      if args[1] in OPERATIONS:\n        print OPERATIONS[args[1]].__doc__\n      return\n\n  if args[0] in OPERATIONS:\n    operation = OPERATIONS[args[0]]\n    operation(args[1:])\n  else:\n    print \"unknown operation, use {} help for possible operations\".format(START_COMMAND)\n\n\n\nif __name__==\"__main__\":\n  args = sys.argv[1:]\n  main(args)\n\n"
}
] | 2 |
seanlatias/heterocl | https://github.com/seanlatias/heterocl | 7708f506da8bba563c572898946d153c606d0c5a | 338d97cec0b11f859169cd214e8aede53aca3455 | 7de34e920881ca2a5a0b73a341209b6a92090467 | refs/heads/master | 2021-09-18T09:24:24.655456 | 2021-07-29T18:49:28 | 2021-07-29T18:49:28 | 187,276,165 | 0 | 0 | Apache-2.0 | 2019-05-17T20:14:41 | 2019-05-17T19:16:19 | 2019-05-17T16:29:23 | null | [
{
"alpha_fraction": 0.635649561882019,
"alphanum_fraction": 0.6507552862167358,
"avg_line_length": 36.6136360168457,
"blob_id": "47f3acbfee5ca0858a60ec7df0a6cefce740650d",
"content_id": "93470b287c7f88d263f72aaf48d8497d69c33315",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1655,
"license_type": "permissive",
"max_line_length": 105,
"num_lines": 44,
"path": "/tests/test_api_assert.py",
"repo_name": "seanlatias/heterocl",
"src_encoding": "UTF-8",
"text": "import heterocl as hcl\nimport numpy as np\nimport pathlib\nimport subprocess\n\ndef get_stdout(filename):\n path = pathlib.Path(__file__).parent.absolute()\n path = str(path) + \"/test_api_assert_cases/\" + filename + \".py\"\n p = subprocess.run(['python', path], stdout = subprocess.PIPE)\n output = p.stdout.decode('utf-8')\n return str(output)\n\ndef test_basic_assert():\n output = get_stdout(\"basic_assert_tests\")\n golden = \"in the if statement\\ncustomized assert message 1\"\n for x in range(7):\n golden += \"\\nin the first for loop\"\n golden += \"\\nassert message in the second for loop\"\n for x in range(7):\n golden += \"\\nin the first for loop and if statement\\nin the first for loop, outside if statement\"\n golden += \"\\nassert message in the second for loop\\nassert 0 message 0 number 2\\n\"\n\n assert str(output) == golden\n\ndef test_memory_assert():\n output = get_stdout(\"memory_assert_tests\")\n golden = \"assert message in the if statement 9\\n\"\n golden += \"in if statement\\nassert message for loop\\n\"\n for x in range(2):\n golden += \"in if statement\\nin for loop\\n\"\n golden += \"assert error, matrix_A[1, 1]: 11 matrix_A[2, 1]: 11 matrix_A[3, 1]: 11\\n\"\n for x in range(3):\n for x in range(2):\n golden += \"in if statement\\nin for loop\\n\"\n golden += \"in the while loop\\n\"\n golden += \"assert message end\\ncustomized assert message 1\\nassert error in if--value of x: 0\\n\"\n\n assert str(output) == golden\n\ndef test_dsl_def_assert():\n output = get_stdout(\"dsl_def_assert_tests\")\n golden = get_stdout(\"dsl_def_assert_tests_golden\")\n\n assert str(output) == golden\n"
}
] | 1 |
jaehyunan11/solo_project | https://github.com/jaehyunan11/solo_project | fdbd926a2e8b386afa0a56979968ddf941b132a6 | fe082e031608859d304e6f503d9455acdfa9ee76 | b555c9a40a00b7f4f72e42d2272efcb97017714f | refs/heads/master | 2023-02-18T22:06:46.640562 | 2021-01-17T00:35:46 | 2021-01-17T00:35:46 | 328,511,812 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.678288459777832,
"alphanum_fraction": 0.678288459777832,
"avg_line_length": 20.03333282470703,
"blob_id": "2ba440c80539cb7bc8cab90e7456818624a76b20",
"content_id": "38dbed467c03cafe64cb5aadfe96fa07544bd547",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 631,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 30,
"path": "/shop_app/views.py",
"repo_name": "jaehyunan11/solo_project",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, redirect\nfrom .models import *\n\n# Create your views here.\n\n\ndef store(request):\n # create context to pass some data\n context = {}\n return render(request, 'shop_store.html', context)\n\n\ndef user_login(request):\n context = {}\n return render(request, 'user_login.html', context)\n\n\ndef user_registration(request):\n context = {}\n return render(request, 'user_registration.html', context)\n\n\ndef cart(request):\n context = {}\n return render(request, 'shop_cart.html', context)\n\n\ndef buy_it_now(request):\n context = {}\n return render(request, 'buy_it_now.html', context)\n"
},
{
"alpha_fraction": 0.6555555462837219,
"alphanum_fraction": 0.6555555462837219,
"avg_line_length": 35,
"blob_id": "bd4fd4f38ff3e9674e8582f3b0545e0be86bb261",
"content_id": "2add16ee1c2f735400ce3b351281dbafce02b93d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 360,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 10,
"path": "/shop_app/urls.py",
"repo_name": "jaehyunan11/solo_project",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views\n\nurlpatterns = [\n path('', views.store, name=\"store\"),\n path('cart/', views.cart, name=\"cart\"),\n path('buy_it_now/', views.buy_it_now, name=\"buy_it_now\"),\n path('user_login/', views.user_login, name=\"user_login\"),\n path('user_registration/', views.user_registration, name=\"user_registration\"),\n]\n"
}
] | 2 |
vishwajeetk1160/variational_dropout | https://github.com/vishwajeetk1160/variational_dropout | f1443c77dede6e5532d3bf62c662513a08edb275 | cc4a9d219df52549bb549abeb5db2218957d9013 | 285e026a1ab58a6f4ad79cb0c851bdc15645efde | refs/heads/master | 2020-03-22T03:27:10.707285 | 2018-07-06T10:29:18 | 2018-07-06T10:29:18 | 139,433,019 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.8468468189239502,
"alphanum_fraction": 0.8468468189239502,
"avg_line_length": 54.5,
"blob_id": "ebba10c713cc7542f7393e94dd9de04e237f4ad8",
"content_id": "60e41cfe1bf9909389bca70e4e096e18708e6761",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 111,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 2,
"path": "/README.md",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "# vardropout\nImplementation of \"Variational Dropout and the Local Reparameterization Trick\" paper with Pytorch\n"
},
{
"alpha_fraction": 0.5411298274993896,
"alphanum_fraction": 0.5649157762527466,
"avg_line_length": 28.676469802856445,
"blob_id": "f7b6cc72105d907f2fd1fb8c52a208db39fed998",
"content_id": "b974ffd643144160cd59e024fc0a4b9020a19493",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1009,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 34,
"path": "/hinton_actual_kd/models/dropout_model.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "import torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass DropoutModel(nn.Module):\n def __init__(self):\n super(DropoutModel, self).__init__()\n\n self.fc = nn.ModuleList([\n nn.Linear(784, 500),\n nn.Linear(500, 50),\n nn.Linear(50, 10)\n ])\n\n def forward(self, input, p=0):\n \"\"\"\n :param input: An float tensor with shape of [batch_size, 784]\n :param p: An float value in [0, 1.] with probability of elements to be zeroed\n :return: An float tensor with shape of [batch_size, 10] filled with logits of likelihood\n \"\"\"\n\n result = input\n\n for i, layer in enumerate(self.fc):\n result = F.elu(layer(result))\n\n if i < len(self.fc) - 1:\n result = F.dropout(result, p, training=True)\n\n return result\n\n def loss(self, **kwargs):\n out = self(kwargs['input'], kwargs['p'])\n return F.cross_entropy(out, kwargs['target'], size_average=kwargs['average'])\n"
},
{
"alpha_fraction": 0.5452943444252014,
"alphanum_fraction": 0.5717306733131409,
"avg_line_length": 34.88607406616211,
"blob_id": "8aa068fe2b9307a352f1a37145ac0f4562c275a1",
"content_id": "4a699a0f5fce75ca0118576d79d4ee070d3cea96",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2837,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 79,
"path": "/hinton_actual_kd/models/variational_dropout_model.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
    "text": "import torch.nn as nn\nimport torch.nn.functional as F\nimport torch as t\nimport pdb\nfrom variational_dropout.variational_dropout import VariationalDropout\n\nclass teacherNet(nn.Module):\n    def __init__(self):\n        super(teacherNet, self).__init__()\n        self.fc1 = nn.Linear(28 * 28, 1200)\n        self.fc2 = nn.Linear(1200, 1200)\n        self.fc3 = nn.Linear(1200, 10)\n\n    def forward(self, x):\n        x = x.view(-1, 28 * 28)\n        x = F.relu(self.fc1(x))\n        x = F.dropout(x, p=0.8, training=self.training)\n        x = F.relu(self.fc2(x))\n        x = F.dropout(x, p=0.8, training=self.training)\n        x = self.fc3(x)\n        return x\n\nclass VariationalDropoutModel(nn.Module):\n    def __init__(self):\n        super(VariationalDropoutModel, self).__init__()\n\n        self.fc = nn.ModuleList([\n            VariationalDropout(784, 600),\n            VariationalDropout(600, 600),\n            nn.Linear(600, 10)\n        ])\n\n    def forward(self, input, train=False):\n        \"\"\"\n        :param input: An float tensor with shape of [batch_size, 784]\n        :param train: An boolean value indicating whether forward propagation called when training is performed\n        :return: An float tensor with shape of [batch_size, 10]\n                 filled with logits of likelihood and kld estimation\n        \"\"\"\n        result = input\n\n        if train:\n            kld = 0\n\n            for i, layer in enumerate(self.fc):\n                if i != len(self.fc) - 1:\n                    result, kld1 = layer(result, train)\n                    result = F.elu(result)\n                    kld =kld + kld1\n\n            return self.fc[-1](result), kld\n\n        for i, layer in enumerate(self.fc):\n            if i != len(self.fc) - 1:\n                result = F.elu(layer(result, train))\n\n        return self.fc[-1](result)\n\n    def loss(self, **kwargs):\n        if kwargs['train']:\n            input1=kwargs['input']\n            out, kld = self(input1, train=kwargs['train'])\n            teacher_model = teacherNet()\n            teacher_model.load_state_dict(t.load('teacher_MLP_jittered_Adam_try_2.pth.tar'))\n            teacher_output=teacher_model(input1)\n            #teacher_output= teacher_output.detach()\n            criterion1=nn.KLDivLoss()\n            teacher_output = t.autograd.Variable(teacher_output.data,volatile=True)\n            T=2.\n            #pdb.set_trace()\n            kd_loss_value=criterion1(F.log_softmax(out/T), F.softmax(teacher_output/T))*(2*T*T)\n\n            return F.cross_entropy(out, kwargs['target'], size_average=kwargs['average']), kld, kd_loss_value\n\n        out = self(kwargs['input'], kwargs['train'])\n        return F.cross_entropy(out, kwargs['target'], size_average=kwargs['average'])\n\n    #def mse_loss(self, input, target):\n        #return t.sum((input - target)**2) / input.data.nelement()\n\n\n"
},
{
"alpha_fraction": 0.5442526340484619,
"alphanum_fraction": 0.5567389130592346,
"avg_line_length": 41.546875,
"blob_id": "1ca4ba9f75f0962f28eee11f5ad21175f68e02d9",
"content_id": "efbb391a37a08170e94927e67524aa303ff6bc93",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5446,
"license_type": "permissive",
"max_line_length": 158,
"num_lines": 128,
"path": "/hinton_actual_kd/train.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "#Hinton KD part\nimport argparse\nimport torch as t\nimport torch.nn as nn\nimport torchvision.transforms as transforms\nfrom tensorboardX import SummaryWriter\nfrom torch.autograd import Variable\nfrom torch.optim import Adam\nfrom torchvision import datasets\nimport matplotlib.pyplot as plt\n\nfrom models import *\n\nif __name__ == \"__main__\":\n\n parser = argparse.ArgumentParser(description='train')\n parser.add_argument('--num-epochs', type=int, default=10, metavar='NI',\n help='num epochs (default: 10)')\n parser.add_argument('--batch-size', type=int, default=100, metavar='BS',\n help='batch size (default: 70)')\n parser.add_argument('--use-cuda', type=bool, default=False, metavar='CUDA',\n help='use cuda (default: False)')\n parser.add_argument('--learning-rate', type=float, default=0.0005, metavar='LR',\n help='learning rate (default: 0.0005)')\n parser.add_argument('--mode', type=str, default='vardropout', metavar='M',\n help='training mode (default: simple)')\n args = parser.parse_args()\n\n\n assert args.mode in ['simple', 'dropout', 'vardropout'], 'Invalid mode, should be in [simple, dropout, vardropout]'\n Model = {\n 'simple': SimpleModel,\n 'dropout': DropoutModel,\n 'vardropout': VariationalDropoutModel\n }\n Model = Model[args.mode]\n\n dataset = datasets.MNIST(root='data/',\n transform=transforms.Compose([\n transforms.ToTensor()]),\n download=True,\n train=True)\n train_dataloader = t.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True)\n\n dataset = datasets.MNIST(root='data/',\n transform=transforms.Compose([\n transforms.ToTensor()]),\n download=True,\n train=False)\n test_dataloader = t.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True, drop_last=True)\n\n\n model = Model()\n if args.use_cuda:\n model.cuda()\n\n optimizer = Adam(model.parameters(), args.learning_rate, eps=1e-6)\n coef1=0.7\n coef2=0.05\n\n fh=open(\"training_results.txt\", \"a\")\n fh.write(\"\\n\")\n fh.write(\"\\n\")\n 
fh.write(\"\\n\")\n\n fh.write(\"batch_size {}, epochs {}, learning rate {}, coeff kld {}, coeff kd {}\".format(args.batch_size,args.num_epochs, args.learning_rate, coef1, coef2))\n fh.write(\"\\n\")\n\n cross_entropy_averaged = nn.CrossEntropyLoss(size_average=True)\n train_loss_list=[]\n validation_loss_list=[]\n for epoch in range(args.num_epochs):\n for iteration, (input, target) in enumerate(train_dataloader):\n input = Variable(input).view(-1, 784)\n target = Variable(target)\n \n if args.use_cuda:\n input, target = input.cuda(), target.cuda()\n\n optimizer.zero_grad()\n\n loss = None\n if args.mode == 'simple':\n loss = model.loss(input=input, target=target, average=True)\n elif args.mode == 'dropout':\n loss = model.loss(input=input, target=target, p=0.4, average=True)\n else:\n likelihood, kld, kd_loss_value = model.loss(input=input, target=target, train=True, average=True)\n #coef = min(epoch / 40., 1.) #making this coeff. 1. understand this coeff. \n #print (\"kd_loss_value\")\n print ('likelihood {},\\n Kld {},\\n kd loss value {}\\n'.format(likelihood.data.numpy()[0], kld.data.numpy()[0], kd_loss_value.data.numpy()[0]))\n loss = likelihood+coef1* kld+ coef2*kd_loss_value\n train_loss_list.append(loss.cpu().data.numpy()[0])\n\n loss.backward()\n optimizer.step()\n if iteration % 300 == 0:\n \tprint('train epoch {}, iteration {}, loss {}'.format(epoch, iteration, loss.cpu().data.numpy()[0]))\n \tfh.write('train epoch {}, iteration {}, loss {}'.format(epoch, iteration, loss.cpu().data.numpy()[0]))\n \tfh.write('\\n')\n\n if (iteration % 300) == 0:\n loss = 0\n for input, target in test_dataloader:\n input = Variable(input).view(-1, 784)\n target = Variable(target)\n\n if args.use_cuda:\n input, target = input.cuda(), target.cuda()\n\n if args.mode == 'simple':\n loss += model.loss(input=input, target=target, average=False).cpu().data.numpy()[0]\n elif args.mode == 'dropout':\n loss += model.loss(input=input, target=target, p=0., 
average=False).cpu().data.numpy()[0]\n else:\n loss += model.loss(input=input, target=target, train=False, average=False).cpu().data.numpy()[0]\n\n loss = loss / (args.batch_size * len(test_dataloader))\n print('validation epoch {}, iteration {}, loss {}'.format(epoch, iteration, loss))\n fh.write('validation epoch {}, iteration {}, loss {}'.format(epoch, iteration, loss))\n fh.write('\\n')\n \n t.save(model.state_dict(), 'trained_model_usingKLDIVLoss_05.pth.tar')\n plt.figure(1)\n plt.plot(train_loss_list)\n plt.show()\n\nfh.close()\n"
},
{
"alpha_fraction": 0.8652482032775879,
"alphanum_fraction": 0.8652482032775879,
"avg_line_length": 46,
"blob_id": "68780093bd9433ce6f46ee32c2467a3f93c74f86",
"content_id": "85a1bba6ca1ad44f61aee0988309298800b8e885",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 141,
"license_type": "permissive",
"max_line_length": 62,
"num_lines": 3,
"path": "/models/__init__.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "from .simple_model import SimpleModel\nfrom .dropout_model import DropoutModel\nfrom .variational_dropout_model import VariationalDropoutModel\n"
},
{
"alpha_fraction": 0.5282257795333862,
"alphanum_fraction": 0.5551075339317322,
"avg_line_length": 26.518518447875977,
"blob_id": "b48acb0117942fed8968b1f51e8d26cb65a62122",
"content_id": "6acc66778bafe78f32be4af7baf9c2602ae618c1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 744,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 27,
"path": "/models/simple_model.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "import torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass SimpleModel(nn.Module):\n def __init__(self):\n super(SimpleModel, self).__init__()\n\n self.fc = nn.Sequential(\n nn.Linear(784, 500),\n nn.ELU(),\n nn.Linear(500, 50),\n nn.ELU(),\n nn.Linear(50, 10)\n )\n\n def forward(self, input):\n \"\"\"\n :param input: An float tensor with shape of [batch_size, 784]\n :return: An float tensor with shape of [batch_size, 10] filled with logits of likelihood\n \"\"\"\n\n return self.fc(input)\n\n def loss(self, **kwargs):\n out = self(kwargs['input'])\n return F.cross_entropy(out, kwargs['target'], size_average=kwargs['average'])\n\n"
},
{
"alpha_fraction": 0.5717154145240784,
"alphanum_fraction": 0.584911048412323,
"avg_line_length": 41.475608825683594,
"blob_id": "0e25e4b53360290af2533ab268ecdf1fdcc68ce6",
"content_id": "530f6b6a8dc8100cddb7c81a59b8064601b39ed2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3486,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 82,
"path": "/visualize_model.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "import argparse\nimport torch as t\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.transforms as transforms\nfrom tensorboardX import SummaryWriter\nfrom torch.autograd import Variable\nfrom torch.optim import Adam\nfrom torchvision import datasets\nfrom models import *\n\nif __name__ == \"__main__\":\n\n parser = argparse.ArgumentParser(description='train')\n parser.add_argument('--num-epochs', type=int, default=20, metavar='NI',\n help='num epochs (default: 10)')\n parser.add_argument('--batch-size', type=int, default=100, metavar='BS',\n help='batch size (default: 70)')\n parser.add_argument('--use-cuda', type=bool, default=False, metavar='CUDA',\n help='use cuda (default: False)')\n parser.add_argument('--learning-rate', type=float, default=0.0005, metavar='LR',\n help='learning rate (default: 0.0005)')\n parser.add_argument('--mode', type=str, default='vardropout', metavar='M',\n help='training mode (default: simple)')\n args = parser.parse_args()\n\n assert args.mode in ['simple', 'dropout', 'vardropout'], 'Invalid mode, should be in [simple, dropout, vardropout]'\n Model = {\n 'simple': SimpleModel,\n 'dropout': DropoutModel,\n 'vardropout': VariationalDropoutModel\n }\n Model = Model[args.mode]\n\n dataset = datasets.MNIST(root='data/',\n transform=transforms.Compose([\n transforms.ToTensor()]),\n download=True,\n train=True)\n train_loader = t.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True)\n\n dataset = datasets.MNIST(root='data/',\n transform=transforms.Compose([\n transforms.ToTensor()]),\n download=True,\n train=False)\n test_loader = t.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True, drop_last=True)\n\n\n model = Model()\n if args.use_cuda:\n model.cuda()\n\n model.load_state_dict(t.load('trained_model_NewJittered_275.pth.tar'))\n test_error=[]\n\n def test(model):\n model.eval()\n test_loss = 0\n correct = 0\n for data, target in test_loader:\n if args.use_cuda:\n 
data, target = data.cuda(), target.cuda()\n data = Variable(data, volatile=True).view(-1, 784)\n target = Variable(target)\n output = model(data)\n test_loss += F.cross_entropy(output, target).data[0] # sum up batch loss\n pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability\n correct += pred.eq(target.data.view_as(pred)).cpu().sum()\n\n test_loss /= len(test_loader.dataset)\n test_error.append(100.0-float (100. * float(correct) / float(len(test_loader.dataset))))\n print('\\nTest set: Average loss: {:.7f}, Accuracy: {}/{} ({:.7f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset),\n float (100. * float(correct) / float(len(test_loader.dataset)))))\n\n for i in range(10):\n test(model)\n avg_test_error=0.\n for i in range(len(test_error)):\n avg_test_error=avg_test_error+test_error[i]\n print ('test error averaged over {} loops is {}'.format(len(test_error),avg_test_error/float(len(test_error))))\n\n\n\n"
},
{
"alpha_fraction": 0.5744941830635071,
"alphanum_fraction": 0.5895156264305115,
"avg_line_length": 35.24444580078125,
"blob_id": "cae1c0105def43c6207b83cf96e19486da3a5449",
"content_id": "408a4b7095e44512df4a648e3c999a12de0fc869",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3262,
"license_type": "permissive",
"max_line_length": 114,
"num_lines": 90,
"path": "/hinton_actual_kd/variational_dropout/variational_dropout.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "import math\nimport torch as t\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.nn.parameter import Parameter\n\n\nclass VariationalDropout(nn.Module):\n def __init__(self, input_size, out_size, log_sigma2=-10, threshold=3):\n \"\"\"\n :param input_size: An int of input size\n :param log_sigma2: Initial value of log sigma ^ 2.\n It is crucial for training since it determines initial value of alpha\n :param threshold: Value for thresholding of validation. If log_alpha > threshold, then weight is zeroed\n :param out_size: An int of output size\n \"\"\"\n super(VariationalDropout, self).__init__()\n\n self.input_size = input_size\n self.out_size = out_size\n\n self.theta = Parameter(t.FloatTensor(input_size, out_size))\n self.bias = Parameter(t.Tensor(out_size))\n\n self.log_sigma2 = Parameter(t.FloatTensor(input_size, out_size).fill_(log_sigma2))\n\n self.reset_parameters()\n\n self.k = [0.63576, 1.87320, 1.48695]\n\n self.threshold = threshold\n\n def reset_parameters(self):\n stdv = 1. 
/ math.sqrt(self.out_size)\n\n self.theta.data.uniform_(-stdv, stdv)\n self.bias.data.uniform_(-stdv, stdv)\n\n @staticmethod\n def clip(input, to=8.):\n input = input.masked_fill(input < -to, -to)\n input = input.masked_fill(input > to, to)\n\n return input\n\n def kld(self, log_alpha):\n\n first_term = self.k[0] * F.sigmoid(self.k[1] + self.k[2] * log_alpha)\n second_term = 0.5 * t.log(1 + t.exp(-log_alpha))\n return (first_term - second_term - self.k[0]).sum() / (self.input_size * self.out_size)\n\n\n def forward(self, input, train):\n \"\"\"\n :param input: A float tensor with shape of [batch_size, input_size]\n :return: A float tensor with shape of [batch_size, out_size] and negative layer-kld estimation\n \"\"\"\n log_alpha = self.clip(self.log_sigma2 - t.log(self.theta ** 2))\n #fh=open(\"log_alpha_values_during_training.txt\", 'a')\n #fh.write(str(self.input_size)+\"-----\"+str(log_alpha.data.numpy().mean())+\"-----\"+str(self.out_size)+\"\\n\")\n #fh.close()\n kld = self.kld(log_alpha)\n\n if not train:\n mask = log_alpha > self.threshold\n if (t.nonzero(mask).dim()!= 0):\n zeroed_weights=t.nonzero(mask).size(0)\n \n else :\n zeroed_weights=0\n \n total_weights=mask.size(0)*mask.size(1)\n print('number of zeroed weights is {}'.format(zeroed_weights))\n print ('total number of weights is {}'.format(total_weights))\n print ('ratio for non zeroed weights is {}'.format( (total_weights - zeroed_weights)/total_weights) )\n return t.addmm(self.bias, input, self.theta.masked_fill(mask, 0))\n\n mu = t.mm(input, self.theta)\n std = t.sqrt(t.mm(input ** 2, self.log_sigma2.exp()) + 1e-6)\n\n eps = Variable(t.randn(*mu.size()))\n if input.is_cuda:\n eps = eps.cuda()\n\n return std * eps + mu + self.bias, kld\n\n def max_alpha(self):\n log_alpha = self.log_sigma2 - self.theta ** 2\n return t.max(log_alpha)\n"
},
{
"alpha_fraction": 0.561819851398468,
"alphanum_fraction": 0.5891777276992798,
"avg_line_length": 39.09696960449219,
"blob_id": "2b6e96616e0178cdd491d7709945f69224747398",
"content_id": "31bf1b5eeb624883dd95f16ee664b732c73b87c3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6616,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 165,
"path": "/teacher_mnist_trainer.py",
"repo_name": "vishwajeetk1160/variational_dropout",
"src_encoding": "UTF-8",
"text": "#teacher jittered model dimension and forwards stored\nfrom __future__ import print_function\nimport argparse\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms\nfrom torchvision.datasets import MNIST\nfrom torch.autograd import Variable\nimport os\nimport time\nimport random\nstart_time = time.time()\n#epochs=50\n# Training settings\nparser = argparse.ArgumentParser(description='PyTorch MNIST Example')\nparser.add_argument('--batch-size', type=int, default=100, metavar='N',\n help='input batch size for training (default: 64)')\nparser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',\n help='input batch size for testing (default: 1000)')\nparser.add_argument('--epochs', type=int, default=30, metavar='N',\n help='number of epochs to train (default: 10)')\nparser.add_argument('--lr', type=float, default=0.0009, metavar='LR',\n help='learning rate (default: 0.01)')\nparser.add_argument('--momentum', type=float, default=0.9, metavar='M',\n help='SGD momentum (default: 0.5)')\nparser.add_argument('--no-cuda', action='store_true', default=False,\n help='disables CUDA training')\nparser.add_argument('--seed', type=int, default=1, metavar='S',\n help='random seed (default: 1)')\nparser.add_argument('--log-interval', type=int, default=10, metavar='N',\n help='how many batches to wait before logging training status')\nargs = parser.parse_args()\nargs.cuda = not args.no_cuda and torch.cuda.is_available()\n\ntorch.manual_seed(args.seed)\nif args.cuda:\n torch.cuda.manual_seed(args.seed)\n\nkwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}\n\ndef jitter(data_tensor):\n for i in range(1, len(data_tensor[0])-1):\n for j in range(1, len(data_tensor[0])-1):\n random_int=random.randint(1,9) \n var_1=data_tensor[0][i][j]\n if random_int == 1:\n data_tensor[0][i][j]=data_tensor[0][i-1][j-1]\n data_tensor[0][i-1][j-1]=var_1\n if random_int == 
2:\n data_tensor[0][i][j]=data_tensor[0][i-1][j]\n data_tensor[0][i-1][j]=var_1\n if random_int == 3:\n data_tensor[0][i][j]=data_tensor[0][i-1][j+1]\n data_tensor[0][i-1][j+1]=var_1\n if random_int == 4:\n data_tensor[0][i][j]=data_tensor[0][i][j-1]\n data_tensor[0][i][j-1]=var_1\n if random_int == 5:\n data_tensor[0][i][j]=data_tensor[0][i][j+1]\n data_tensor[0][i][j+1]=var_1\n if random_int == 6:\n data_tensor[0][i][j]=data_tensor[0][i+1][j-1]\n data_tensor[0][i+1][j-1]=var_1\n if random_int == 7:\n data_tensor[0][i][j]=data_tensor[0][i+1][j]\n data_tensor[0][i+1][j]=var_1\n if random_int ==8:\n data_tensor[0][i][j]=data_tensor[0][i+1][j+1]\n data_tensor[0][i+1][j+1]=var_1\n\n\ndata_train = MNIST(root='data/', train=True, download=True, transform=transforms.Compose([transforms.ToTensor()]))\ndata_test = MNIST (root='data/', train=False, download=True, transform=transforms.Compose([transforms.ToTensor()]))\n\nfor i in range(len(data_train)):\n jitter(data_train[i])\n\n\ntrain_loader = torch.utils.data.DataLoader(dataset = data_train, batch_size = args.batch_size, shuffle = True)\ntest_loader = torch.utils.data.DataLoader(dataset= data_test, batch_size = args.test_batch_size, shuffle= True)\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.fc1 = nn.Linear(28 * 28, 1200)\n self.fc2 = nn.Linear(1200, 1200)\n self.fc3 = nn.Linear(1200, 10)\n\n def forward(self, x):\n x = x.view(-1, 28 * 28)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, p=0.8, training=self.training)\n x = F.relu(self.fc2(x))\n x = F.dropout(x, p=0.8, training=self.training)\n x = self.fc3(x)\n return x\n\nmodel = Net()\nif args.cuda:\n model.cuda()\n\noptimizer = optim.Adam(model.parameters(), lr=args.lr,eps=1e-8,\n weight_decay=5e-4)\n\ndef train(epoch, model):\n model.train()\n for batch_idx, (data, target) in enumerate(train_loader):\n if args.cuda:\n data, target = data.cuda(), target.cuda()\n data, target = Variable(data), Variable(target)\n 
optimizer.zero_grad()\n output = model(data)\n loss = F.cross_entropy(output, target)\n loss.backward()\n optimizer.step()\n if batch_idx % args.log_interval == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.dataset),\n 100. * batch_idx / len(train_loader), loss.data[0]))\n\n\ndef train_evaluate(model):\n model.eval()\n train_loss = 0\n correct = 0\n for data, target in train_loader:\n if args.cuda:\n data, target = data.cuda(), target.cuda()\n data, target = Variable(data, volatile=True), Variable(target)\n output = model(data)\n train_loss += F.cross_entropy(output, target).data[0] # sum up batch loss\n pred = output.data.max(1, keepdim=True)[1]\n correct += pred.eq(target.data.view_as(pred)).cpu().sum()\n\n print('\\nTrain set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n train_loss, correct, len(train_loader.dataset),\n 100. * correct / len(train_loader.dataset)))\n\ndef test(model):\n model.eval()\n test_loss = 0\n correct = 0\n for data, target in test_loader:\n if args.cuda:\n data, target = data.cuda(), target.cuda()\n data, target = Variable(data, volatile=True), Variable(target)\n output = model(data)\n test_loss += F.cross_entropy(output, target).data[0] # sum up batch loss\n pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability\n correct += pred.eq(target.data.view_as(pred)).cpu().sum()\n\n test_loss /= len(test_loader.dataset)\n print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset),\n 100. * correct / len(test_loader.dataset)))\n \nfor epoch in range(1, args.epochs + 1):\n train(epoch, model)\n train_evaluate(model)\n test(model)\n\ntorch.save(model.state_dict(), 'teacher_MLP_jittered_Adam_try_2.pth.tar')\nprint(\"--- %s seconds ---\" % (time.time() - start_time))\n"
}
] | 9 |
IBM/xmlservice | https://github.com/IBM/xmlservice | 16cfb57bc85dfc62171b6a382b795e01f6dd53b6 | 84de343df1dc513d5774f5c1f470ada6701f6488 | 4f4ae2cadb7e638ff9f0dd57de771b80cc8c219d | refs/heads/master | 2023-01-10T00:44:55.190729 | 2022-10-07T15:08:44 | 2022-10-07T15:12:35 | 167,116,219 | 33 | 23 | BSD-3-Clause | 2019-01-23T04:19:02 | 2022-09-29T11:39:44 | 2022-11-10T15:27:16 | RPGLE | [
{
"alpha_fraction": 0.6299892067909241,
"alphanum_fraction": 0.6440129280090332,
"avg_line_length": 27.9375,
"blob_id": "aeedb78b59792cb0217ea648a5e957816ea4bd12",
"content_id": "dba8e00ec42ca0ca09506fb4de1bdb096359fede",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 927,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 32,
"path": "/test/php/test_99977_MISC_db2_set_clear_pgms.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: CTL - clear server\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) {\n echo(\"Bad connect: $database,$user\");\n continue;\n}\n$ctl .= \" *clear\"; // clear XMLSERVICE PGM cache NOW\n$clobIn = \"\";\n$sql = \"call $procLib.iPLUGR4K('$ipc','$ctl','$clobIn')\";\n$stmt=db2_exec($conn,$sql);\nif (!$stmt) echo(\"Bad execute ($database,$user): \".db2_stmt_errormsg());\n$ret=db2_free_stmt($stmt);\nif (!$ret) echo(\"Bad free stmt ($database,$user): \".db2_stmt_errormsg());\nif ($i5persistentconnect) $ret=db2_pclose($conn);\nelse $ret=db2_close($conn);\nif (!$ret) echo(\"Bad close ($database,$user): \".db2_stmt_errormsg());\necho \"i am ...\\n\";\necho \"clear\\n\";\n\n?>\n--EXPECTF--\n%s\nclear\n\n"
},
{
"alpha_fraction": 0.4984140992164612,
"alphanum_fraction": 0.5240240693092346,
"avg_line_length": 43.7681884765625,
"blob_id": "bed9e658d6dc6230975cfd1fb2da22a4b37e91e2",
"content_id": "b4c2749a229136ab693443594fc27652ade19f7c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 89231,
"license_type": "permissive",
"max_line_length": 541,
"num_lines": 1993,
"path": "/docs/functions.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE Functions\n====================\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nXMLSERVICE features\n-------------------\n* :ref:`face`\n\n * `REST interface (xmlcgi.pgm)`_\n * `DB2 stored procedure interface (lib/iPLUGxxx - xmlstoredp.srvpgm)`_\n\n* :ref:`ctl`\n* :ref:`cmd`\n* :ref:`sh`\n* :ref:`pgm`\n* :ref:`db2`\n\n\nOne sample worth 1,000 words\n----------------------------\n* XMLSERVICE can handle multiple requests in a single XML input document.\n* XMLSERVICE works with standard ``https://``, wherein, \"secure\" as any other web facility SSL encrypted (actual user profile, see xmlcgi.pgm).\n* XMLSERVICE works with Basic Auth, etc., because XMLSERVICE is just another RPG CGI (\\*NONE user/pwd, see crtnone.clp/xmlnone.pgm).\n* XMLSERVICE REST example is simple html/xml, but any language will do, with/without a language toolkit (Common lab):\n ::\n\n <form id=\"myForm\" name=\"myForm\" action=\"https://common1.frankeni.com:47700/cgi-bin/xmlcgi.pgm\" method=\"post\">\n <input type=\"hidden\" name=\"db2\" value=\"*LOCAL\">\n <br>User: <input type=\"input\" name=\"uid\" value=\"\"> (see instructor)\n <br>Password: <input type=\"password\" name=\"pwd\" value=\"\">\n <input type=\"hidden\" name=\"ipc\" value=\"*na\">\n <input type=\"hidden\" name=\"ctl\" value=\"*here\">\n <br>XML Input:\n <br><textarea readonly name=\"xmlin\" rows=\"20\" cols=\"100\">\n <?xml version='1.0'?>\n <xmlservice>\n <cmd>CHGLIBL LIBL(XMLSERVICE) CURLIB(XMLSERVICE)</cmd>\n <pgm name='ZZCALL'>\n <parm><data type='1A'>a</data></parm>\n <parm><data type='1A'>b</data></parm>\n <parm><data type='7p4'>11.1111</data></parm>\n <parm><data type='12p2'>222.22</data></parm>\n <parm>\n <ds>\n <data type='1A'>x</data>\n <data type='1A'>y</data>\n <data type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data>\n </ds>\n </parm>\n </pgm>\n <sql>\n <query>select * from QIWS.QCUSTCDT where LSTNAM='Jones'</query>\n <fetch block='all'/>\n </sql>\n </xmlservice>\n </textarea>\n 
<input type=\"hidden\" name=\"xmlout\" value=\"512000\">\n <br><input type=\"submit\" name=submit\" value=\"submit\" />\n </form>\n </body>\n </html>\n\n\n.. _face:\n\nXMLSERVICE interfaces (with download)\n-------------------------------------\n\nXMLSERVICE includes two basic interfaces, REST (Apache), and, database (DB2). Either interface receives XML input and returns XML output.\nThe samples below show XMLSERVICE library (download/test library), but multiple language products ship XMLSERVICE in different libraries,\nso adjust your LIB accordingly (PHP - ZENDSVR/ZENDSVR6, Ruby-POWERRUBY, etc.). XMLSERVICE uses hard coded library schemes (PLUGCONF), to\navoid interfering with user set library lists, therefore, you can only SAV/RST XMLSERVICE to\nthe original library. If you move XMLSERVICE non-originating libraries it will fail at runtime.\n\n.. _`REST interface (xmlcgi.pgm)`:\n\n* REST interface (xmlcgi.pgm)\n\n::\n\n http://myibmi/cgi-bin/xmlcgi.pgm?db2=x&uid=x&pwd=x&ipc=x&ctl=x&xmlin=x&xmlout=x&persis=x\n db2 - what database (*LOCAL tested)\n uid - user profile (*NONE - no uid version 1.5+)\n pwd - profile password (*NONE - no password version 1.5+)\n ipc - IPC key name/security route to XMLSERVICE job (/tmp/fred01, etc.)\n ctl - CTL admin control XMLSERVICE job (see control below)\n xmlin - XML input document (request)\n xmlout - expected size of XML output document (response size in bytes)\n optional:\n persis - name persistent DB2 connection 8 chars or less (not often used)\n\n Configure (LIB match actual product, php, ruby, etc.):\n /www/myinstance/conf/httpd.conf\n ScriptAlias /cgi-bin/ /QSYS.LIB/XMLSERVICE.LIB/\n <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n AllowOverride None\n order allow,deny\n allow from all\n SetHandler cgi-script\n Options +ExecCGI\n </Directory>\n\n.. 
_`DB2 stored procedure interface (lib/iPLUGxxx - xmlstoredp.srvpgm)`:\n\n* DB2 stored procedure interface (lib/iPLUGxxx - xmlstoredp.srvpgm)\n\n::\n\n call XMLSERVICE.iPLUG512K(ipc, ctl, xmlin [, xmlout])\n IN IPC CHAR(1024) - IPC key name/security\n IN CTL CHAR(1024) - CTL admin control XMLSERVICE job\n IN CI CLOB(15M) - XML input document (request)\n OUT CO CLOB(15M) - XML output document (response)\n Note: iPLUGRxxx procedures return a result set that is collected by fetch.\n\n call XMLSERVICE.iPLUG512K(ipc, ctl, xmlin xmlout)\n ---\n XMLSERVICE.iPLUG4K(IN IPC CHAR(1024), IN CTL CHAR(1024),IN CI CHAR(4064), OUT C0 CHAR(4064))\n XMLSERVICE.iPLUG32K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(32000), OUT CO CLOB(32000))\n XMLSERVICE.iPLUG65K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(65K), OUT CO CLOB(65K))\n XMLSERVICE.iPLUG512K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(512K), OUT CO CLOB(512K))\n XMLSERVICE.iPLUG1M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(1M), OUT CO CLOB(1M))\n XMLSERVICE.iPLUG5M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(5M), OUT CO CLOB(5M))\n XMLSERVICE.iPLUG10M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(10M), OUT CO CLOB(10M))\n XMLSERVICE.iPLUG15M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(15M), OUT CO CLOB(15M))\n\n stmt = call XMLSERVICE.iPLUG512K(ipc, ctl, xmlin)\n while (row = fetch(stmt)) xmlout += row[0]\n ---\n XMLSERVICE.iPLUGR4K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CHAR(4096))\n XMLSERVICE.iPLUGR32K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CHAR(32000))\n XMLSERVICE.iPLUGR65K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(65K))\n XMLSERVICE.iPLUGR512K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(512K))\n XMLSERVICE.iPLUGR1M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(1M))\n XMLSERVICE.iPLUGR5M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(5M))\n XMLSERVICE.iPLUGR10M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(10M))\n XMLSERVICE.iPLUGR15M(IN IPC CHAR(1024), IN 
CTL CHAR(1024), IN CI CLOB(15M))\n\n Enable DB2 drivers with no LOB support (JTOpen lite enabler - 1.9.0+):\n XMLSERVICE.iPLUGRC32K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI VARCHAR(32700), IN CNT INTEGER)\n Note iPLUGRC32K:\n XML input VARCHAR(32700) can be a limit, therefore interface allows\n accumulated sends of partial XML document. Any call with non-zero\n counter (CNT > 0), XMLSERVICE assumes to be XML document partial\n accumulation. When counter reaches zero (CNT = 0),\n XMLSERVICE will process the request.\n\n.. _ctl:\n\nXMLSERVICE Control words\n------------------------\n\nControl (CTL) keywords for operator control over XMLSERVICE jobs.\n\n* stateless connection -- xmlservice here ``($ctl=\"*here\";)`` -- run in QSQSRVR job, db2_connect current profile control. \\*here overrides/ignores ipc, runs \"\\*here\" in the QSQSRVR job.\n* private connection 1 -- xmlservice spawn ``($ctl=\"\"; $ipc=\"/tmp/user\";)`` -- inherit parent job attributes, include QSQ job profile, and, job description. Per manual (below), xmlservice spawn server job has always been default when ipc included (internalKey), but missing \\*sbmjob.\n\n * Change: 1.9.3 - Rod asked QSQSRVR name changed on spawn (spawn child name also QSQSRVR), therefore, new name spawn child will be XMLSERVICE (XMLSERVICE PGM-XMLSERVICE).\n\n* private connection 2 -- xmlservice sbmjob ``($ctl=\"*sbmjob\"; $ipc=\"/tmp/user\";)`` -- control sbmjob, job description, etc. \\*sbmjob allows control of job description for xmlservice server job.\n\n * Alternative \\*sbmjob, xmlservice allows full control via ``<sbmjob>SBMJOB CMD(CALL PGM(XMLSERVICE/XMLSERVICE) PARM('/tmp/user') ... 
other sbmjob parms ...)</sbmjob>`` (XTOOLKIT PGM-XMLSERVICE).\n\n::\n\n *****************************************************\n * IPC control options:\n *----------------------------------------------------\n *--- very high performance ignore all flag parsing (loop calls, etc.)\n * *ignore\n * - do not parse flags (high performance)\n * example: $ctl=\"*ignore\";\n *----------------------------------------------------\n *--- kill XMLSERVICE job\n * *immed\n * - end server immed destroy IPC\n * example: $ctl=\"*immed\";\n *----------------------------------------------------\n *--- misc functions XMLSERVICE\n * *license\n * - return license for this code\n * *session\n * - retrieve session key (IPC name /tmp/fred042)\n * *clear\n * - clear internal XMLSERVICE caches,\n * but will not deactivate loaded PGM/SRVPGM, etc.\n *----------------------------------------------------\n * -- if you need to fix result set return drivers (iPLUGRxxx)\n * hack for pesky DB2 drivers adding \"junk\" to back of records.\n * *hack\n * - add </hack> each record of a result set\n * example: $ctl=\"*hack\";\n * - iPLUGRxxx XMLSERVICE stored procedures\n * return result sets of 3000 byte records,\n * you must loop fetch and concat records\n * to recreate output XML ... 
except\n * some DB2 drivers have junk end ...\n * - enable easy rec truncate ill behaved drivers\n * remove all past </hack> during concat\n * loop fetch records 1-n\n * rec1: <script>....</hack> (3000 bytes)\n * rec2: ............</hack> (3000 bytes)\n * recn: ...</script></hack> (<3000 bytes)\n *----------------------------------------------------\n * -- pause XMLSERVICE job(s) for debugger attach (message to qsysopr)\n * *debug\n * - stop call server with message qsysopr (XMLSERVICE)\n * example: $ctl=\"*debug\";\n * *debugproc\n * - stop stored proc with message qsysopr (client QSQSRVR)\n * *debugcgi\n * - stop CGI with message qsysopr (XMLCGI only)\n * *test[(n)]\n * - test parse XML in/out and report n level information\n *----------------------------------------------------\n * -- override default XMLSERVICE client/server spawn child behaviour\n * *sbmjob[(lib/jobd/job/asp)]\n * - sbmjob job (instead of XMLSERVICE default spawn)\n * example: $ctl=\"*sbmjob\";\n * example: $ctl=\"*sbmjob(QSYS/QSRVJOB/XTOOLKIT)\";\n * example: $ctl=\"*sbmjob(ZENDSVR/ZSVR_JOBD/XTOOLKIT)\";\n * - default values provided plugconf.rpgle\n * - optional asp INLASPGRP(ASP1) (added 1.6.5)\n * -- Notes:\n * - See embedded XML overrides for user full control\n * of XMLSERVICE start behavior SBMJOB settings\n * *here\n * - run stateless in stored proc job (client only job)\n * example: $ctl=\"*here\";\n * - commonly known as running in PHP job, but in fact\n * more likely runs in database job you connected\n * on/off machine DRDA/ODBC/PASE (QSQSRVR, etc.)\n * - generally runs slower using \"one process\"\n * because XMLSERVICE has to restart itself,\n * wake up PASE, find/load your PGM call, etc.\n * *nostart\n * - disallow spawn and sbmjob (web not start anything)\n * example: $ctl=\"*nostart\";\n * - probably prestart all your XMLSERVICE jobs\n * SBMJOB CMD(CALL PGM(XMLSERVICE/XMLSERVICE)\n * PARM('/tmp/db2ipc042')) USER(DB2)\n * - consider using a custom plugconf to disable\n * issues 
with timeout defaults (*idle/*wait)\n * *java (1.9.2)\n * - start JVM allowing user classpath\n * <cmd>ADDENVVAR ENVVAR(CLASSPATH) VALUE('$ours')\n * REPLACE(*YES)</cmd>\n * <pgm>... calling my RPG w/JAVA ... </pgm>\n * *sqljava or *dbgjava (port 30000) (1.9.2)\n * - start JVM allowing DB2 classpath (no user control)\n * SQLJ.INSTALL_JAR into schema\n * /QIBM/UserData/OS400/SQLLib/Function/jar/(schema)\n *----------------------------------------------------\n * -- CDATA xml output\n * *cdata(on|off)\n * - xml data fields returned with cdata\n * <data><![CDATA[...]]></data>\n * - on - CDATA (default 1.6.2)\n * - off - CDATA off\n * -- escape xml reserved chars\n * *escp\n * 5 chars will be escaped in the output\n * & --> &\n * > --> >\n * < --> <\n * ' --> '\n * \" --> "\n * -- special hex encoded XML document\n * *hex(in/out/both)\n * - xml document hex encoded avoiding conversions\n * - in - HEX convert input side only\n * - out - HEX convert all output\n * - both - HEX convert input and output\n * example (php sudo):\n * $doc = \"<?xml version='1.0'> ... 
</script>\";\n * $hexin = bin2hex($doc); // doc-2-hexin\n * XMLSERVICE($ipc, $ctl=\"*hex\", $hexin, $hexout);\n * $doc = pack(\"H*\", $hexout); // hexout-2-doc\n * -- Notes\n * - hex string is '0123456789ABCDEF' or '...abcdef'\n * which are CCSID immutable (or upper most always)\n * which will arrive at XMLSERVCE in ebcdic via\n * \"natural\" clob/rest interface transport conversion\n * (ie., iBLOBxxx would not work *hex, arrive ascii)\n * YES (clob): ebcdic (x'F0F1...C6') or (x'...86')\n * NO (blob): ascii (x'3031...46') or (x'...66')\n * - hex option is useful with CCSID below\n * with following order of operations:\n * - input : *hex followed by *before\n * - output: *after followed by *hex\n * -- CCSID conversion XMLSERVICE (1.6.2)\n * *before(CCSIDFrom/CCSIDTo[/action])\n * - conversion before calling XML processing\n * - CCSIDFrom - XML document client CCSID\n * - CCSIDTo - XML document XMLSERVICE CCSID\n * - action\n * call - call XMLSERVICE (default)\n * nocall - convert only (no call)\n * *after(CCSIDFrom/CCSIDTo)\n * - conversion after calling XML processing\n * - CCSIDFrom - XML document XMLSERVICE CCSID\n * - CCSIDTo - XML document client CCSID\n * *pase(CCSIDpase/CCSIDile)\n * - conversion PGM, LIB, etc, names call processing\n * - CCSIDpase - name CCSID PASE side (ascii)\n * - CCSIDile - name CCSID ILE side (ebcdic)\n *\n * example: $ctl=\"*before(819/37) *after(37/819)\";\n * - DB2 interface provides many possibilities\n * iPLUGxxx - original CLOB automatic DB2 converts\n * iBLOBxxx - raw binary no DB2 converts (1.6.2)\n * - action='nocall' is used to convert\n * anything on IBM i and send it back\n * without calling XMLSERVICE\n * - DB2 interface conversion effects are\n * virtually unlimited performed on IBM i\n * avoiding additional \"code tables\" on client\n * - *pase default (should work)\n * ILE ccsid 0 means job CCSID (all ebcdic)\n * PASE csid default Qp2paseCCSID (see API)\n *----------------------------------------------------\n * -- 
server time out jobs XMLSERVICE (1.6.2)\n * *wait[(seconds[/action])]\n * - client side wait for XMLSERVICE call (client side)\n * example: $ctl=\"*wait(10)\";\n * - default action *wait(60/busy) (see plugconfx)\n * *call[(seconds[/action[/duration[/job]]])]\n * - client/server side XMLSERVICE call wait (PGM, SRVPGM, PASE, etc)\n * example: $ctl=\"*wait(10) *call(5/busy/client)\";\n * - default for both client/server is *call(0)\n * means wait on call forever (user code flawless),\n * but can be overriden client/server/both\n * *idle[(seconds[/action[/duration]])]\n * - server side XMLSERVICE idle no activity\n * example: $ctl=\"*wait(10/kill) *call(30/kill) *idle(30/kill/perm)\";\n * - default action *idle(1800/kill) (see plugconfx)\n * -- time out parameters\n * seconds:\n * -1 - current default timer\n * 0 - no timer, no timeout, wait forever\n * n - idle timer \"pop\" seconds\n * action:\n * kill - end job immed\n * user - user override signal behaviour (see plugconfx)\n * busy - return busy XML (client side)\n * busy response (1301050):\n * <error>\n * <errnoxml>1301050</errnoxml>\n * <xmlerrmsg>IPC timeout busy</xmlerrmsg>\n * </error>\n * duration:\n * perm - set and use new defaults all requests\n * orig - reset and use original compile defaults (see plugconfx)\n * job:\n * client - *call action applies client side\n * server - *call action applies server side\n * -- Notes:\n * - default timeout/action provided plugconf.rpgle,\n * but each request may override/reset to fit task(s)\n * - signal SIGALRM used with this function\n * can affect user program calls,\n * *call(0) may be used to turn off timer\n * during user program calls\n * - action 'user' allows for custom signal\n * processing in the RPG code (see plugconfx)\n * - if duration not specified, attributes\n * *wait(),*call(),*idle() are temporary\n * for this call only and return to last defaults.\n * - if 'job' not specified on *call(),\n * attribute settings apply to both sides\n * - end job 
immed kills XMLSERVICE job (server)\n * and destroys IPC, so any waiting client is\n * released with an IPC missing error.\n *----------------------------------------------------\n * -- batch XMLSERVICE processing (version 1.6.2)\n * *batch\n * - use a free batch slot 1 - 16 (release client)\n * responses:\n * <id status='set'>1-16</id> - set batch processing\n * <id status='full'>0</id> - no slots available\n * example: $ctl=\"*sbmjob *batch\";\n * - *batch holds output XML memory until retrieved\n * - *get(n) retrieve of XML memory (some time later)\n * - *batch releases XMLSERVICE client (caller), and\n * returns batch slot number (n) assigned for work.\n * *get[(n)]\n * - get XML results from batch slot 1 - 16 (release slot)\n * responses (report not available):\n * <id status='small'>1-16</id> - buffer too small\n * <id status='done'>1-16</id> - complete removed\n * example: $ctl=\"*get\";\n * example: $ctl=\"*get(3)\";\n * - use with *wait(sec/busy) to avoid hang during\n * *batch running (do something else while waiting)\n * - *get without slot number will get one result\n * any completed batch slot\n * - *get(3) will only get result of batch slot 3\n *----------------------------------------------------\n * -- flight data options (affects performance, state of disrepair) --\n * *rpt\n * - performance report last call\n * *fly\n * - flight record performance\n * *nofly\n * - no flight record performance (default)\n * *justproc\n * - call stored proc, get into XMLSERVICE client,\n * do nothing while in XMLSERVICE, return back\n * - used to check transport speed only\n * -- log to database file (1.7.1) --\n * *log[(key)]\n * - log records into database\n * *nolog\n * - no log records into database (default)\n * Note:\n * - *log key is unique allowing both PHP and XMLSERVICE\n * to record event log data and produce queries of collected\n * reports.\n * Log file layout:\n * create table XMLSERVLOG/LOG (\n * key varchar(64) NOT NULL WITH DEFAULT,\n * log 
TIMESTAMP NOT NULL WITH DEFAULT,\n * text varchar(64) NOT NULL WITH DEFAULT)\n * Supplemental log dump XML data layout:\n * create table XMLSERVLOG/DUMP (\n * key varchar(64) NOT NULL WITH DEFAULT,\n * log TIMESTAMP NOT NULL WITH DEFAULT,\n * text clob(15M) NOT NULL WITH DEFAULT)\n * - programers/vendors can alter xmlservice log database\n * with plugconf (or custom)\n *****************************************************\n * embedded XML control overrides and function\n *****************************************************\n * sbmjob full user override of SBMJOB for XMLSERVICE start-up -- (1.7.0+)\n * <sbmjob>SBMJOB CMD(CALL PGM(ZENDSVR/XMLSERVICE) PARM('/tmp/override'))</sbmjob>\n * Where SBMJOB can be any user settings (cut/paste green screen command) ...\n * ... required parameters for for XMLSERVICE to start CALL + PARM\n * example:\n * CMD(CALL PGM(ZENDSVR/XMLSERVICE) PARM('/tmp/xxxxxx')) <-- xmlservice test lib\n * -- or --\n * CMD(CALL PGM(ZENDSVR/XMLSERVICE) PARM('/tmp/xxxxxx')) <-- Zend Server production lib\n * -- all other sbmjob parms at your control --\n * Note:\n * - SBMJOB full control allows user to set any type of LIBL,\n * or SBMJOB options, or even custom PGM to call XMLSERVICE\n *****************************************************\n * start/use/stop exclusive use shared IPC (hotel reservation) -- (1.6.8+)\n * <start>KEY</start> - acquire IPC -- first request\n * <use>KEY</use> - match IPC -- each request\n * <stop>KEY</stop> - release IPC -- last request\n * Where KEY anything managed by user (key-2-IPC session data) ...\n * ... random based key -- scaling open any users,\n * want benefit of private RPG call (open files, etc),\n * do not care about reservation multi-request transaction\n * -> hybrid stateless/private call,\n * hold IPC for life of script only and release,\n * but limit jobs\n * ... 
user based key -- scaling come/go users,\n * want benefit of private RPG call (open files, etc),\n * also want reservation transactions\n * -> hybrid persistent/private,\n * transaction across multi-request (browser clicks),\n * but limited jobs\n * ... task based key -- everyone uses same task/tasks\n * limited pool jobs and all must wait a turn\n * -> hybrid private/persistent with pre-start pool,\n * transaction across multi-request (browser clicks),\n * load balancing design to limit machine stress\n * example many requests exclusive use IPC\n * -- no time out --\n * $ctl .= \" *idle(0)\"\n * -- request 1 --\n * <?xml version=\"1.0\"?>\n * <script>\n * <start>unique-user-key</start>\n * </script>\n * -- request 2 (two minutes later) --\n * <?xml version=\"1.0\"?>\n * <script>\n * <use>unique-user-key</use>\n * <cmd exec='rexx'>RTVJOBA USRLIBL(?)</cmd>\n * </script>\n * -- request 3 (1/2 hour later) --\n * <?xml version=\"1.0\"?>\n * <script>\n * <use>unique-user-key</use>\n * <pgm name='ZZCALL'>\n * <parm>\n * <data type='1A'>a</data>\n * </parm>\n * <return>\n * <data type='10i0'>0</data>\n * </return>\n * </pgm>\n * </script>\n * -- request n (2 hours later) --\n * <?xml version=\"1.0\"?>\n * <script>\n * <stop>unique-user-key</stop>\n * </script>\n * Note:\n * - <start>unique-user-key</start>\n * acquire exclusive IPC if available,\n * - <use>unique-user-key</use>\n * must appear XML every request\n * job held forever until see <stop>\n * - <stop>unique-user-key</stop>\n * release IPC for any other use\n * - <start>no-match-key</start>\n * or <use>unique-user-key</use>\n * non-matching key results in error\n * almost instantly (no wait)\n * busy response (1301060):\n * <error>\n * <errnoxml>1301060</errnoxml>\n * <xmlerrmsg>IPC owner busy</xmlerrmsg>\n * </error>\n * - thoughtful setting server idle timeout\n * can control unwanted reservation hangs\n * due to careless users or errors\n * $ctl .= \" *idle(60)\"\n 
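 * example (php sudo) full reservation life cycle\n * (XMLSERVICE() below is a stand-in for your transport call,\n * names like my-key are illustrative only):\n * // request 1 -- acquire IPC\n * $xml = "<?xml version='1.0'?><script><start>my-key</start></script>";\n * XMLSERVICE($ipc, $ctl="*sbmjob *idle(60)", $xml, $out);\n * // requests 2..n -- reuse same job\n * $xml = "<?xml version='1.0'?><script><use>my-key</use>\n * <cmd exec='rexx'>RTVJOBA CURLIB(?)</cmd></script>";\n * XMLSERVICE($ipc, $ctl, $xml, $out);\n * // last request -- release IPC\n * $xml = "<?xml version='1.0'?><script><stop>my-key</stop></script>";\n * XMLSERVICE($ipc, $ctl, $xml, $out);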
*************************************************************************\n\n\n.. _cmd:\n\nXMLSERVICE call CMD\n-------------------\nSyntax can largely be cut/paste from a 5250 command line.\n\n::\n\n *************************************************************************\n * 1) call i CMD\n * XMLSERVICE allows calls of *CMDS on IBM i. Typically, you cut/paste\n * from a 5250 QCMD line using prompt (F4). You may choose the utility\n * to run your command with the attribute 'exec'. However, for *CMDS with\n * in/out parameters, like RTVJOBA, you must use 'exec'='rexx'.\n * ---\n * <cmd [exec='cmd|system|rexx'\n * hex='on'\n * before='cc1/cc2/cc3/cc4'\n * after='cc4/cc3/cc2/cc1'\n * error='on|off|fast'\n * ]>values (see IBM i *CMD)</cmd>\n * ---\n * cmd - command tag\n * values - (see IBM i *CMD - 5250 cut/paste)\n * options\n * exec\n * cmd - qcmdexe only return true/false (default)\n * system - system utility return CPFxxxx\n * rexx - rexx output parms and return CPFxxxx\n * (?) character type\n * (?N) explicit cast numeric\n * hex (1.6.8)\n * on - input character hex (5147504C20202020)\n * before\n * cc(n) - input ccsid1->ccsid2->ccsid3->ccsid4\n * after\n * cc(n) - output ccsid1->ccsid2->ccsid3->ccsid4\n * error (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n * ---\n * example run command (original)\n * <?xml version="1.0"?>\n * <xmlservice>\n * <cmd>ADDLIBLE LIB(DB2) POSITION(*FIRST)</cmd>\n * </xmlservice>\n * ---\n * example output command (exec='rexx')\n * <?xml version='1.0'?>\n * <xmlservice>\n * <cmd exec='rexx'>RTVJOBA USRLIBL(?) 
SYSLIBL(?)</cmd>\n * <cmd exec='rexx'>RTVJOBA CCSID(?N) OUTQ(?)</cmd>\n * <cmd exec='rexx'>RTVSYSVAL SYSVAL(QDATETIME) RTNVAR(?)</cmd>\n * </xmlservice>\n * ---\n * Note:\n * - <cmd>command</cmd> should be all on one line (no LFs)\n * - <cmd> run in XMLSERVICE job.\n * cmd - qcmdexe only return true/false (default)\n * system - system utility return CPFxxxx (1.5.2)\n * <cmd exec='system'><error>CPF2103</error></cmd>\n * rexx - rexx output parms and return CPFxxxx (1.5.2)\n * <cmd exec='rexx'><error>CPF2103</error></cmd>\n * - exec='rexx'\n * All parms are assume to be character unless\n * (?N) to explicit cast to numeric (rtvjoba). Most\n * RTVxxxx that ask for a CL variable RTNVAR will\n * not require the (?N) cast (IBM i manuals).\n * QTEMP/XMLREXX(HOW) is created on demand\n * by RPG module plugile (Github download).\n * QTEMP/OUTREXX(OUTREXX) is created for\n * command temp data between RPG and REXX.\n * - Up to four conversions can take place\n * for the truly complex ccsid issues (1.6.8)\n * <cmd hex='on' before='cc1/cc2/cc3/cc4' after='cc4/cc3/cc2/cc1'>\n * flow:\n * -> PHP client bin2hex('wild_ascii_raw_chars')\n * -> xmlservice hex2bin back to 'wild_ascii_raw_chars'\n * -> xmlservice convert cc1->cc2->cc3->cc4 (before)\n * -> xmlservice make ILE call\n * -> xmlservice convert cc4->cc3->cc2->cc1 (after)\n * -> xmlservice tohex \"xml_hex_back\"\n * -> PHP client $chars = pack('H*',\"xml_hex_back\")\n * output (incompatible change hex/ccsid 1.7.4+):\n * <cmd exec='rexx' hex='on' before='819/37' after='37/819'>\n * <success><![CDATA[+++ success RTVJOBA USRLIBL(?) 
SYSLIBL(?)]]></success>\n * <row><data desc='USRLIBL'><hex><![CDATA[5147504C20202020202020]]></hex></data></row>\n * <row><data desc='SYSLIBL'><hex><![CDATA[5153595320202020202020]]></hex></data></row>\n * </cmd>\n * - error='on,off,fast' (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n *************************************************************************\n\n.. _sh:\n\nXMLSERVICE call PASE\n--------------------\n\nSyntax can mostly be cut/paste from PASE shell (call qp2term).\n\n::\n\n *************************************************************************\n * 2) call PASE utility\n * XMLSERVICE allows calls of PASE utilities on IBM i. Typically, you cut/paste\n * from a PASE command line (call qp2term). PASE shell 'sh' is used for\n * execution of your utilities, which is the default behavior of the PASE popen() API.\n * ---\n * <sh [rows='on|off'\n * hex='on'\n * before='cc1/cc2/cc3/cc4'\n * after='cc4/cc3/cc2/cc1'\n * error='on|off|fast'\n * ]>values (see PASE utility)</sh>\n * ---\n * sh - shell tag\n * values - (see PASE utility - call qp2term cut/paste)\n * options\n * rows\n * on - return rows lines\n * off - return one string (default)\n * hex (1.7.4)\n * on - input character hex (5147504C20202020)\n * before\n * cc(n) - input ccsid1->ccsid2->ccsid3->ccsid4\n * after\n * cc(n) - output ccsid1->ccsid2->ccsid3->ccsid4\n * ---\n * error (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n * ---\n * example run PASE shell\n * <?xml version="1.0"?>\n * <xmlservice>\n * <sh rows='on'>/QOpenSys/usr/bin/system 'wrkactjob' | grep -i fr</sh>\n * </xmlservice>\n * ---\n * Note:\n * - syntax looks as if typed on console (call qp2term)\n * <sh>pase utility</sh> runs "slower" because a child job\n * is created to run each PASE utility (normal Unix behavior).\n * All other 
XML/ILE functions run within the XMLSERVICE job.\n * - Using nested shells within this sh shell may\n * produce unpredictable results.\n * - hex='on' before='' after='' -- same as <cmd> (1.7.0)\n * output (incompatible change hex/ccsid 1.7.4+):\n * <sh rows='on' hex='on' before='819/37' after='37/819'>\n * <row><hex>746F74616C2031363636313034</hex></row>\n * </sh>\n * output (rows='off' 1.7.4+):\n * <sh hex='on' before='819/37' after='37/819'>\n * <hex>746F74616C2031363636313034</hex>\n * </sh>\n * - error='on,off,fast' (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n *************************************************************************\n\n\n.. _qsh:\n\nXMLSERVICE call QSH (1.9.8+)\n----------------------------\n\nSyntax can mostly be cut/paste from QSH shell (qsh).\n\n::\n\n *************************************************************************\n * 2.5) call QSH utility (1.9.8+)\n * XMLSERVICE allows calls of QSH utilities on IBM i. Typically, you cut/paste\n * from a QSH command line. 
STRQSH is used for execution of your utilities.\n * ---\n * <qsh [rows='on|off'\n * hex='on'\n * before='cc1/cc2/cc3/cc4'\n * after='cc4/cc3/cc2/cc1'\n * error='on|off|fast'\n * ]>values (see QSH utility)</qsh>\n * ---\n * qsh - shell tag\n * values - (see QSH utility - qsh cut/paste)\n * options\n * rows\n * on - return rows lines\n * off - return one string (default)\n * hex (1.7.4)\n * on - input character hex (5147504C20202020)\n * before\n * cc(n) - input ccsid1->ccsid2->ccsid3->ccsid4\n * after\n * cc(n) - output ccsid1->ccsid2->ccsid3->ccsid4\n * ---\n * error (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n * ---\n * example run QSH shell\n * <?xml version=\"1.0\"?>\n * <xmlservice>\n * <qsh rows='on'>/usr/bin/system 'wrkactjob' | /usr/bin/grep -i fr</qsh>\n * </xmlservice>\n * ---\n * Note:\n * - Recommend qualify qsh utilities with /usr/bin.\n * This will avoid ccsid conversion between PASE/QSH utilities.\n * - syntax looks as if typed on console (qsh)\n * <qsh>QSH utility</qsh> runs \"slower\" because a child job\n * is created to run each QSH utility (normal Unix behavior).\n * - Using nested shells within this qsh shell may\n * produce unpredictable results.\n * - hex='on' before='' after='' -- same as <cmd> (1.7.0)\n * - error='on,off,fast' (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n\n\n.. _pgm:\n\nXMLSERVICE call PGM\n-------------------\n\nCall PGM, SRVPGM, or system API, using XML syntax.\n\n1) Call PGM using this XML syntax.\n\n2) Call SRVPGM using this XML syntax.\n\n::\n\n *************************************************************************\n * 3) call PGM/SRVPGM\n * XMLSERVICE allows calls of *PGM and *SRVPGM on IBM i. 
Typically, you match\n * call parameters, including data structures, and/or simple data elements.\n * ---\n * pgm name (*PGM or *SRVPGM)\n * <pgm name=''\n * [lib=''\n * func=''\n * mode='opm|ile'\n * error='on|off|fast'\n * ]>values (see <parm> and <return>) </pgm>\n * ---\n * pgm - IBM i *PGM or *SRVPGM name (tag)\n * values - (see parm and return)\n * options\n * lib\n * library - IBM i library name\n * func\n * function- IBM i *SRVPGM function name\n * mode\n * ile - ILE and PASE memory (default)\n * opm - ILE only memory (PASE can not view)\n * error (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n *\n * ---\n * pgm parameters\n * <parm [io='in|out|both|omit'\n * by='val|ref'\n * ]>values (see <ds> and <data>)</parm>\n * ---\n * parm - parm name (tag)\n * values - (see ds or data)\n * options\n * io\n * in - input only\n * out - output only\n * both - input/output only (default)\n * omit - omit (1.2.3)\n * by\n * ref - pass by reference (default)\n * val - pass by value (1.9.9.3+)\n *\n * ---\n * pgm return\n * <return>values (see <ds> and <data>)</return>\n * ---\n * return - return tag\n * values - (see ds or data)\n * options\n * na - no options\n *\n * ---\n * pgm data structure\n * <ds [dim='n' dou='label'\n * len='label'\n * data='records'\n * ]>values (see <ds> or <data>)</ds>\n * ---\n * ds - data structure tag\n * values - (see ds or data)\n * options\n * dim\n * n - array dimension value (default dim1)\n * dou\n * label - match array dou terminate parm label (see data)\n * len (1.5.4)\n * label - match calculate length of ds parm lable (see data)\n * data (1.7.5)\n * records - data in records tag\n *\n * ---\n * pgm data elements\n * <data type='data types'\n * [dim='n'\n * varying='on|off|2|4'\n * enddo='label'\n * setlen='label'\n * offset='label'\n * hex='on|off' before='cc1/cc2/cc3/cc4' after='cc4/cc3/cc2/cc1'\n * trim='on|off'\n * next='label'\n * 
]>(value)</data>\n * ---\n * data - data value name (tag)\n * values - value,\n * type\n * 3i0 int8/byte D myint8 3i 0\n * 5i0 int16/short D myint16 5i 0\n * 10i0 int32/int D myint32 10i 0\n * 20i0 int64/int64 D myint64 20i 0\n * 3u0 uint8/ubyte D myint8 3u 0\n * 5u0 uint16/ushort D myint16 5u 0\n * 10u0 uint32/uint D myint32 10u 0\n * 20u0 uint64/uint64 D myint64 20u 0\n * 32a char D mychar 32a\n * 32a {varying2} varchar D mychar 32a varying\n * 32a {varying4} varchar4 D mychar 32a varying(4)\n * 12p2 packed D mydec 12p 2\n * 12s2 zoned D myzone 12s 2\n * 4f2 float D myfloat 4f\n * 8f4 real/double D myfloat 8f\n * 3b binary D mybin (any)\n * 40h hole (no out) D myhole (any)\n * options\n * dim\n * n - array dimension value (default dim1)\n * varying\n * on - character varying data (same as varying2)\n * off - character non-varying data (default)\n * 2 - character varying data\n * 4 - character varying data\n * enddou\n * label - match array dou terminate parm label (see ds)\n * setlen (1.5.4)\n * label - match calculate length of ds parm lable (see ds)\n * offset\n * label - match offset label (see overlay)\n * hex (1.6.8)\n * on - input character hex (5147504C20202020)\n * before\n * cc(n) - input ccsid1->ccsid2->ccsid3->ccsid4\n * after\n * cc(n) - output ccsid1->ccsid2->ccsid3->ccsid4\n * trim (1.7.1)\n * on - trim character (default)\n * off - no trim character\n * next (1.9.2)\n * label - match next offset label (see overlay)\n *\n * ---\n * pgm parameters/return overlay\n * <overlay\n * [io='in|out|both'\n * offset='n|label'\n * top='on|off|n'\n * setnext='nextoff'\n * ]>(see <ds> and <data>)</overlay>\n * ---\n * overlay - structure overlay name (tag)\n * values - (see ds or data)\n * options\n * io\n * in - input only\n * out - output only\n * both - input/output only (default)\n * offset\n * n - overlay bytes offset relative\n * label - overlay match bytes offset label (see data)\n * setnext (1.9.2)\n * label - overlay match next offset label (see data)\n 
* top\n * n - overlay parm number (see parm)\n * on - overlay parm first (see parm)\n * off - overlay parm last seen (see parm)\n * ---\n * example run a PGM\n * <?xml version=\"1.0\"?>\n * <xmlservice>\n * <cmd>CHGLIBL LIBL(XMLSERVICE) CURLIB(XMLSERVICE)</cmd>\n * <pgm name='ZZCALL' lib=''>\n * <parm io='both'>\n * <data type='1A' var='INCHARA'>a</data>\n * </parm>\n * <parm io='both'>\n * <data type='1A' var='INCHARB'>b</data>\n * </parm>\n * <parm io='both'>\n * <data type='7p4' var='INDEC1'>11.1111</data>\n * </parm>\n * <parm io='both'>\n * <data type='12p2' var='INDEC2'>222.22</data>\n * </parm>\n * <parm io='both'>\n * <ds>\n * <data type='1A' var='INDS1.DSCHARA'>x</data>\n * <data type='1A' var='INDS1.DSCHARB'>y</data>\n * <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n * <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n * </ds>\n * </parm>\n * <return>\n * <data type='10i0'>0</data>\n * </return>\n * </pgm>\n * </xmlservice>\n * ---\n * example run a SRVPGM\n * <?xml version=\"1.0\"?>\n * <xmlservice>\n * <pgm name='ZZSRV' lib='XMLSERVICE' func='ZZARRAY'>\n * <parm comment='search this name'>\n * <data var='myName' type='10A'>Ranger</data>\n * </parm>\n * <parm comment='max allowed return'>\n * <data var='myMax' type='10i0'>5</data>\n * </parm>\n * <parm comment='actual count returned'>\n * <data var='myCount' type='10i0' enddo='mycount'>0</data>\n * </parm>\n * <return>\n * <ds var='dcRec_t' dim='999' dou='mycount'>\n * <data var='dcMyName' type='10A'>na</data>\n * <data var='dcMyJob' type='4096A'>na</data>\n * <data var='dcMyRank' type='10i0'>0</data>\n * <data var='dcMyPay' type='12p2'>0.0</data>\n * </ds>\n * </return>\n * </pgm>\n * </xmlservice>\n * ---\n * example optional ccsid convert name/lib format (1.6.8)\n * <?xml version=\"1.0\"?>\n * <xmlservice>\n * <pgm>\n * <name hex='on' before='cc1/cc2/cc3/cc4'>bin2hex('&fredflin')</name>\n * <lib hex='on' before='cc1/cc2/cc3/cc4'>bin2hex('omlated')</lib>\n * <func hex='on' 
before='cc1/cc2/cc3/cc4'>bin2hex('me&proc')</func>\n * <parm>\n * <ds dim='3'>\n * <data type='1A'>a</data>\n * </ds>\n * </parm>\n * <return>\n * <ds dim='999'>\n * <data type='10i0'>0</data>\n * </ds>\n * </return>\n * </pgm>\n * </xmlservice>\n * ---\n * Note:\n * - data types (similar RPG):\n * ----------------------------------------------------------------------\n * int8/byte D myint8 3i 0 <data type='3i0'/>\n * int16/short D myint16 5i 0 <data type='5i0'/>\n * int32/int D myint32 10i 0 <data type='10i0'/>\n * int64/int64 D myint64 20i 0 <data type='20i0'/>\n * uint8/ubyte D myint8 3u 0 <data type='3u0'/>\n * uint16/ushort D myint16 5u 0 <data type='5u0'/>\n * uint32/uint D myint32 10u 0 <data type='10u0'/>\n * uint64/uint64 D myint64 20u 0 <data type='20u0'/>\n * char D mychar 32a <data type='32a'/>\n * varchar D mychar 32a varying <data type='32a' varying='2'/>\n * varchar4 D mychar 32a varying(4) <data type='32a' varying='4'/>\n * packed D mydec 12p 2 <data type='12p2'/>\n * zoned D myzone 12s 2 <data type='12s2'/>\n * float D myfloat 4f <data type='4f2'/>\n * real/double D myfloat 8f <data type='8f4'/>\n * binary D mybin (any) <data type='3b'>F0F1F2</data>\n * hole (no out) D myhole (any) <data type='40h'/>\n * ------------------------------------------------------------------------\n * type='na' [varying='on|off|2|4'] - character (32A)\n * <data type='32a'/>\n * <data type='32a' varying='on'>ranger</data>\n * <data type='32a'><![CDATA[<i am ranger>]]></data>\n * <data type='200A' hex='on' before='1208/930' after='930/1208'>\n * bin2hex($japan_raw_ascii_data)\n * </data>\n * type='npn' - packed decimal (12p2)\n * <data type='12p2'/>\n * <data type='12p2'>30.29</data>\n * type='nsn' - zoned decimal (12s2)\n * <data type='12s2'/>\n * <data type='12s2'>30.29</data>\n * type='nin' - signed integer (5i0, 10i0, 20i0)\n * <data type='20i0'/>\n * <data type='10i0'>-30</data>\n * type='nun' - unsigned integer (5u0, 10u0, 20u0)\n * <data type='20u0'/>\n * <data 
type='10u0'>30</data>\n * type='nfn' - floating point (4f2, 8f4)\n * <data type='4f2'/>\n * <data type='4f2'>30.34</data>\n * <data type='8f4'>30.34</data>\n * type='nb' - binary HEX char (2b, 400b)\n * <data type='5b'>F0F1F2CDEF</data>\n * <data type='2b'>1FBC</data>\n * <data type='2b'>0F0F</data>\n * - HEX upper case ('1FBC' not '1fbc')\n * - high/low bits (HEX='0F0F' not HEX='F0F')\n * type='nh' - 'hole' zero in, nothing out (4096h) (1.2.3)\n * <data type='400h'/>\n * - PGM/SRVPGM calls (<pgm>,<parm>,<data>,<return>) use syntax\n * that looks like RPG to describe the data parameters\n * (type='4b', type='32a', type='4f', type='10i0', type='12p2',\n * etc.).\n * - <data dim='n'> - dim='n' is new to 1.2 version and beyond,\n * older versions did not include this feature.\n * - Parameters using dou='label', enddo='label',\n * label must match for this to work,\n * then processing will only return records up to enddo limits.\n * - Type 'h' for 'hole' is used to input x'00' fill 'hole'\n * in the parameter geometry. It can be used to skip over\n * a chunk of complex data that you really did not want to\n * deal with or see in output XML. It is also very handy to\n * use with overlay when output data is variable\n * or unpredictable (1.2.3)\n * input:\n * <ds>\n * <data type='40a'>good stuff</data> <---offset 0\n * <data type='400h'/> <---400 x'00' input\n * <data type='32a'>more good stuff</data><---offset 440\n * </ds>\n * output:\n * <ds>\n * <data type='40a'>stuff back</data> <--- offset 0\n * <data type='400h'> </data> <--- ignored output\n * <data type='32a'>stuff back</data> <--- offset 440\n * </ds>\n * - Added parm='omit' for RPG OPTIONS(*OMIT) parameter. 
A\n * *NULL will be passed in this location.\n * All parm io='omit' will be excluded from XML\n * output returned because *NULL parameter has no data (1.2.3).\n * <parm comment='my name' io='omit'>\n * <data var='myName' type='10A'>Ranger</data> <--ignore *NULL\n * </parm>\n * RPG procedure (SRVPGM function):\n * D zzomit PI 50A varying\n * D myName 10A options(*OMIT) <---- optional omitted (*NULL)\n * D yourName 10A\n * - Added len='label'/setlen='label' to allow for\n * automatic length calculation for various system\n * APIs that want a %size(thing) parameter.\n * This should work across parameters and within\n * parameters (any order), but nesting len/setlen is\n * not allowed.\n * <parm io="both" comment='Error code'>\n * <ds comment='Format ERRC0100' len='rec2'>\n * <data type='10i0' comment='returned'>0</data>\n * <data type='10i0' comment='available' setlen='rec2'>0</data>\n * <data type='7A' comment='Exception ID'> </data>\n * <data type='1A' comment='Reserved'> </data>\n * </ds>\n * </parm>\n * - Up to four conversions can take place\n * for the truly complex ccsid issues (1.6.8)\n * <data type='A' hex='on' before='cc1/cc2/cc3/cc4' after='cc4/cc3/cc2/cc1'>\n * flow:\n * -> PHP client bin2hex('wild_ascii_raw_chars')\n * -> xmlservice hex2bin back to 'wild_ascii_raw_chars'\n * -> xmlservice convert cc1->cc2->cc3->cc4 (before)\n * -> xmlservice make ILE call\n * -> xmlservice convert cc4->cc3->cc2->cc1 (after)\n * -> xmlservice tohex "xml_hex_back"\n * -> PHP client $chars = pack('H*',"xml_hex_back")\n * - V5R4 accommodation for OPM programs like CLP (1.6.8)\n * - mode='opm' uses non-teraspace memory to build parm lists\n * that are used with _CALLPGMV for a "pure" OPM call mode\n * - mode='ile' default using teraspace for "mixed" memory\n * compatible with PASE calls (IBM i possibilities)\n * - Allow trim control character/binary <data ... 
trim='on|off'>\n * - trim='on' -- right trim (default character type='na')\n * - trim='off' -- include all (default binary type='nb')\n * - see <overlay> for offset='label'\n * <data offset='label'> <-- memory location to pop off a\n * variable/changing offset value\n * for use in overlay()\n * <overlay top='n' offset='label'> <-- top='n' overlay parameter 'n',\n * then add offset='label' pop value\n * - offset='label' allows label location to pop off a <data> offset value\n * at this data location to add position offset <overlay offset='label'>\n * - 'label' is NOT a position location for <overlay>, it only holds\n * a offset value in this <data> memory location for things like\n * system APIs with offset-2-next.\n * - data='records' - data follows in record format\n * fast \"many records\" i/o big data (see below) (1.7.5)\n * <parm comment='wsopstdcnt'>\n * <data type='3s0' enddo='wsopstdcnt'/>\n * </parm>\n * <parm comment='findMe1'>\n * <ds var='findMe1' data='records'>\n * <ds var='dcRec1_t' array='on'>\n * <ds var='dcRec1_t'>\n * <data var='dcMyName1' type='10A'/>\n * <ds var='dcRec2_t'>\n * <data var='dcMyName2' type='10A'/>\n * <ds var='dcRec3_t'>\n * <data var='dcMyName3' type='10A'/>\n * <ds var='dcRec_t' dim='999' dou='wsopstdcnt'>\n * <data var='dcMyName' type='10A'/>\n * <data var='dcMyJob' type='4096A'/>\n * <data var='dcMyRank' type='10i0'/>\n * <data var='dcMyPay' type='12p2'/>\n * </ds>\n * </ds>\n * </ds>\n * </ds>\n * </ds>\n * </ds>\n * <records delimit=':'>:Rgr:B:Ok:nd1:nd1:1:1.1:...:</records>\n * </parm>\n * a) <records delimit=':'> simply match in order input\n * of any complex structure. 
Output matches\n * order input (see above)\n * b) <records delimit=':'> delimit can be any character\n * not in your complex records (see above)\n * c) works with any <parm> or <return>\n * d) dou/enddo works, but tricky script to design (be careful)\n * - setnext='nextoff' / next='nextoff' - see overlay (1.9.2)\n * - len/setlen - auto-len calculate ds setlen='here' (1.5.4)\n * - error='on,off,fast' (1.7.6)\n * on - script stops, full error report (default)\n * off - script continues, job error log\n * fast - script continues, brief error log\n * - pgm parameters/return overlay (custom offset='bytes', input/output):\n * <overlay> works \"relative\" to \"previous\" <parm> in\n * \"order of appearance XML\"\n * or absolute position to (top='n')\n * <pgm>\n * --->absolute parm <---relative parm\n * ---><parm>complex stuff</parm><-------------------\n * | <overlay>complex over parm 1 </overlay>____|\n * |\n * |--><parm>complex stuff</parm><-------------------\n * || <overlay>complex over parm 2 </overlay>____|\n * || :\n * || <parm>complex stuff</parm><-------------------\n * || <overlay>complex over last parm</overlay>____|\n * || :\n * |___<overlay top='on'>over top parm</overlay>\n * | :\n * |__<overlay top='2'>over parm 2 </overlay>\n * </pgm>\n * - top='on|n' allow overlay position to parameter n\n * ... top='on' absolute parm='1' (1.2.1)\n * ... top='n' absolute parm='n' (1.2.2)\n * ... 
offset='n' bytes offset relative\n * to top='n' position (parm 1,2,3, etc)\n * - Once the top='n' parm location is established, offset='n'\n * will move overlay to offset within the parameter.\n * <data offset='label'> <-- memory location to pop off a\n * variable/changing offset value\n * for use in overlay()\n * <overlay top='n' offset='label'> <-- top='n' overlay parameter 'n',\n * then add offset='label' pop value\n * - offset='label' allows label location to pop off a <data> offset value\n * at this data location to add position offset <overlay offset='label'>\n * - 'label' is NOT a position location for <overlay>, it only holds\n * an offset value in this <data> memory location for things like\n * system APIs with offset-2-next.\n * - setnext='nextoff' / next='nextoff' (1.9.2)\n * <pgm name='QSZRTVPR'>\n * <parm io='both'>\n * <ds comment='PRDR0200'>\n * :\n * <data type='10i0' offset='myOffset'></data>\n * :\n * </ds>\n * </parm>\n * :\n * <overlay io='out' top='1' offset='myOffset'>\n * <ds>\n * <data type='10A'></data>\n * <data type='2A'></data>\n * <data type='10i0' enddo='prim'></data>\n * <data type='10i0' offset='myOffset2'></data>\n * </ds>\n * </overlay>\n * <overlay io='out' top='1' offset='myOffset2'\n * dim='10' dou='prim' setnext='nextoff'>\n * <ds>\n * <data type='10i0' next='nextoff'></data>\n * <data type='10A'></data>\n * <data type='10A'></data>\n * <data type='10A'></data>\n * <data type='10A'></data>\n * <data type='10A'></data>\n * <data type='10A'></data>\n * <data type='10i0'></data>\n * <data type='10A'></data>\n * </ds>\n * </overlay>\n *************************************************************************\n\n\n*Note: Additional XML attributes added for comments, var names, and documentation will simply be returned untouched, so you may build your own label conventions for XML parsing in client code.*\n\n*Hint: use your own labels and comments XML attributes for easy client XML parsing work (var='MYVAR').*\n\n\nAdvanced 
CCSID\n^^^^^^^^^^^^^^\n\nUsing default PHP toolkit DB2 clob interface (iPLUGxxx/iPLUGRxxx), ccsid conversion occurs naturally as DB2 client/server and you will not have to code before/after, but method is available if you have a specific concern or you have scripts returning many different languages.\n\nTheory follows that most of XML document “intent” will remain immutable across db2 clob conversion (keywords, numbers, etc.), but for character data on some occasion there will be mixed/competing client/server ccsid conversion intention (client running 819, but data 1208), or there may be multiple language ccsid in the same XML document request (German, English, French), therefore using a combination of transfer in hex (hex=‘on’) and IBM i server ccsid translation before/after should allow complete control over transforms at user need.\n\nExample::\n\n <data type='200A' hex='on' before='819/424' after='424/819'>bin2hex('Hebrew_ascii_raw_chars')</data>\n <data type='200A' hex='on' before='819/1098' after='1098/819'>bin2hex('Farsi_ascii_raw_chars')</data>\n <data type='200A' hex='on' before='819/880' after='880/819'>bin2hex('Russia_ascii_raw_chars')</data>\n <data type='200A' hex='on' before='819/280' after='280/819'>bin2hex('Italy_ascii_raw_chars')</data>\n <data type='200A' hex='on' before='819/273' after='273/819'>bin2hex('Germany_ascii_raw_chars')</data>\n <data type='200A' hex='on' before='819/1088' after='1088/819'>bin2hex('Korea_ascii_raw_chars')</data>\n <data type='200A' hex='on' before='1208/13488' after='13488/1208'>bin2hex('Japan_ascii_raw_chars')</data>\n <data type='200A' hex='on' before='1208/13488' after='13488/1208'>bin2hex('China_ascii_raw_chars')</data>\n where:\n before - XMLSERVICE convert CCSID before ILE program call\n after - XMLSERVICE convert CCSID after ILE program call for client return\n bin2hex() - script hex string unaltered ascii image (also returned hex string avoid any conversion)\n pack() - script uses pack('H*',\"xml_hex_back\") 
function in PHP program for ascii characters\n Note:\n Up to four conversions can take place for the truly complex ccsid conversion issues\n <data type='A' hex='on' before='cc1/cc2/cc3/cc4' after='cc4/cc3/cc2/cc1'>bin2hex('wild_ascii_raw_chars')</data>\n flow:\n -> PHP client bin2hex('wild_ascii_raw_chars')\n -> xmlservice hex2bin back to 'wild_ascii_raw_chars'\n -> xmlservice convert cc1->cc2->cc3->cc4 (before)\n -> xmlservice make ILE call\n -> xmlservice convert cc4->cc3->cc2->cc1 (after)\n -> xmlservice tohex \"xml_hex_back\"\n -> PHP client $chars = pack('H*',\"xml_hex_back\")\n\n\n.. _db2:\n\nXMLSERVICE DB2 SQL\n------------------\n\nDB2 queries using only XML with syntax cut/paste STRSQL (cool, cool, cool version 1.5+).\n\n*Note: DB2 SQL XML does not work in-line stateless ($ctl='\\*here'), but works fine with normal private connections (ipc='/tmp/fred', $ctl='\\*sbmjob').*\n\n::\n\n *****************************************************\n * Run XML SQL statements:\n * <sql>...</sql> - start/end run XML SQL statements\n *\n * Example easy way (chglibl, default connection, default statement):\n * Input:\n * <?xml version='1.0'?>\n * <script>\n * <cmd>CHGLIBL LIBL(XMLSERVTST QTEMP) CURLIB(XMLSERVTST)</cmd>\n * <sql>\n * <query>select breed, name from animal</query>\n * <fetch block='all' desc='on'/>\n * </sql>\n * </script>\n * Note: You only need chglibl once for all scripts because\n * XMLSERVICE jobs remain active until killed (*immed).\n * XMLSERVICE default is system naming (not sql naming),\n * so library list is significant for unqualified\n * statements (see options below).\n * Output:\n * <?xml version='1.0'?>\n * <script>\n * <cmd>+++ success CHGLIBL LIBL(XMLSERVTST QTEMP) CURLIB(XMLSERVTST)</cmd>\n * <sql>\n * <query conn='conn1' stmt='stmt1'>\n * +++ success select breed, name from animal</query>\n * <fetch block='all' desc='on' stmt='stmt1'>\n * <row><data desc='BREED'>cat</data><data desc='NAME'>Pook</data></row>\n * <row><data 
desc='BREED'>dog</data><data desc='NAME'>Peaches</data></row>\n * <row><data desc='BREED'>horse</data><data desc='NAME'>Smarty</data></row>\n * <row><data desc='BREED'>gold fish</data><data desc='NAME'>Bubbles</data></row>\n * <row><data desc='BREED'>budgerigar</data><data desc='NAME'>Gizmo</data></row>\n * <row><data desc='BREED'>goat</data><data desc='NAME'>Rickety Ride</data></row>\n * <row><data desc='BREED'>llama</data><data desc='NAME'>Sweater</data></row>\n * </fetch>\n * </sql>\n * </script>\n *\n * Reserved SQL words (syntax):\n * <sql>\n *\n * Connect to DB2 (optional):\n * <connect [conn='label' db='x' uid='x' pwd='x' options='label']/>\n *\n * Options template (optional):\n * <options [options='label' error='on|off|fast'\n * Environment level (SQLSetEnvAttr) ...\n * servermode='on' (default=off)\n * Connection level (SQLSetConnAttr) ...\n * autocommit='on|off' (default=on)\n * commit='none|uncommitted|committed|repeatable|serializable'\n * (default=uncommitted)\n * naming='system|sql' (default=system)\n * sqllib='mylib' (default=na)\n * libl='curlib1 mylib2 mylib3' (default=na)\n * datefmt='iso|usa|eur|jis|mdy|dmy|ymd|jul|job' (default=na)\n * datesep='slash|dash|period|comma|blank|job' (default=na)\n * timefmt='iso|usa|eur|jis|hms|job' (default=na)\n * timesep='colon|period|comma|blank|job' (default=na)\n * decimalsep='period|comma|blank|job' (default=na)\n * optimize='first|all' (default=na)\n * Statement level (SQLSetStmtAttr) (version 1.5.1+) ...\n * scrollable='on|off' (default=off)\n * sensitive='unspecified|sensitive|insensitive' (default=unspecified)\n * cursor='forward|static|dynamic' (default=forward)\n * fetchonly='on|off' (default=on)\n * fullopen='on|off' (default=off)\n * ]/>\n * These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report (default)\n * off - script continues, job error log\n * fast - script continues, brief error log\n * Note:\n * commit sets transaction-isolation 
level\n * - none - no commit (*NONE)\n * - uncommitted - uncommitted read (*CHG)\n * - committed - cursor stability (*CS)\n * - repeatable - read stability (*RS, *ALL)\n * - serializable - repeatable read (*RR, *ALL)\n *\n * Commit or rollback transaction (optional):\n * <commit [conn='label' action='rollback' error='on|off|fast']/>\n * These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report (default)\n * off - script continues, job error log\n * fast - script continues, brief error log\n *\n * Execute statement directly (SQLExecDirect):\n * <query [conn='label' stmt='label' options='label' error='on|off|fast']>\n * call storedproc('fred flinrock',42.42)\n * -- or --\n * select * from table where abc = 'abc'\n * -- or (use CDATA if xml trouble special characters) --\n * <![CDATA[select * from animal where ID < 5 and weight > 10.0]]>\n * </query>\n * These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report (default)\n * off - script continues, job error log\n * fast - script continues, brief error log\n * Note:\n * - options='label' (version 1.5.1+)\n *\n * Prepare a statement (SQLPrepare/SQLExecute):\n * <prepare [conn='label' stmt='label' options='label' error='on|off|fast']>\n * call storedproc(?,?)\n * -- or --\n * select * from table where abc = ?\n * -- or (use CDATA if xml trouble special characters) --\n * <![CDATA[select * from animal where ID < ? 
and weight > ?]]>\n * </prepare>\n * These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report (default)\n * off - script continues, job error log\n * fast - script continues, brief error log\n * Note:\n * - options='label' (version 1.5.1+)\n *\n * Execute a prepared statement (SQLPrepare/SQLExecute):\n * <execute [stmt='label' error='on|off|fast']>\n * These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report (default)\n * off - script continues, job error log\n * fast - script continues, brief error log\n * <parm [io='in|out|both']>my string</parm>\n * <parm [io='in|out|both']>42.42</parm>\n * </execute>\n * Note:\n * - substitution parameters '?' are taken in order of\n * appearance with data types decided internally in\n * XMLSERVICE (SQLNumParams/SQLDescribeParam)\n * and bound at call time (SQLBindParameter)\n *\n * Fetch result(s) of a statement:\n * <fetch [stmt='label' block='all|n' rec='n' desc='on|off' error=\"on|off|fast\"/>\n * (default=all) (default=on) (default='off')\n * These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n * Output (column description included via desc='on'):\n * <fetch>\n * <row><data desc='NAME'>Rip</data><data desc='ID'>9</data></row>\n * <row><data desc='NAME'>Bee</data><data desc='ID'>3</data></row>\n * </fetch>\n * Note:\n * - result set column types and descriptions are\n * decided internally in XMLSERVICE (SQLNumResultCols/SQLDescribeCol)\n * and bound at call time (SQLBindCol)\n * - rec='n' (version 1.5.1+)\n *\n * After query/execute statement:\n * Statement rows affected by change (SQLRowCount):\n * <rowcount [stmt='label' error='on|off']/>\n * (default=off)\n * Output:\n * <rowcount ...>24</rowcount>\n *\n * After insert of identity id:\n * Statement 
last identity id (IDENTITY_VAL_LOCAL):\n * <identity [conn='label' error='on|off']/>\n * (default=off)\n * Output:\n * <identity ...>23</identity>\n *\n * Statement describe parms (SQLNumParams/SQLDescribeParam):\n * columns (SQLNumResultCols/SQLDescribeCol):\n * <describe [stmt='label' desc='col|parm|both' error='on|off']/>\n * (default=off)\n * Output (parm):\n * <describe ...>\n * <parm> or <col>\n * <name>FRED</name>\n * <dbtype>DECIMAL</dbtype>\n * <size>12</size>\n * <scale>2</scale>\n * <nullable>0</nullable>\n * </parm> or </col>\n * </describe>\n *\n * Statement count parms (SQLNumParams): (1.7.6)\n * columns (SQLNumResultCols):\n * <count [stmt='label' desc='col|parm|both' error='on|off']/>\n * (default=off)\n * Output (parm):\n * <count ...>\n * <colcount>nbr</colcount>\n * <parmcount>nbr</parmcount>\n * </count>\n *\n * Free resources:\n * <free [conn='all|label'\n * cstmt='label'\n * stmt='all|label'\n * options='all|label'\n * error='on|off|fast']/>\n * These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log (default)\n * fast - script continues, brief error log\n *\n * Meta data - tables:\n * <tables [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>table name</parm>\n * <parm>table type</parm>\n * </tables>\n * Output (desc varies V5R4 to V6+):\n * <tables ...>\n * <row>\n * <data desc='TABLE_CAT'>LP0164D</data>\n * <data desc='TABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='TABLE_NAME'>ANIMAL</data>\n * <data desc='TABLE_TYPE'>BASE TABLE</data>\n * <data desc='REMARKS'></data>\n * </row>\n * </tables>\n *\n * Meta data - table privileges:\n * <tablepriv [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>table name</parm>\n * </tablepriv>\n * Output (desc varies V5R4 to V6+):\n * <tablepriv ...>\n 
* <row>\n * <data desc='TABLE_CAT'>LP0164D</data>\n * <data desc='TABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='TABLE_NAME'>ANIMAL</data>\n * <data desc='GRANTOR'></data>\n * <data desc='GRANTEE'>PUBLIC</data>\n * <data desc='PRIVILEGE'>SELECT</data>\n * <data desc='IS_GRANTABLE'>NO</data>\n * <data desc='DBNAME'></data>\n * </row>\n * </tablepriv>\n *\n * Meta data - columns:\n * <columns [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>table name</parm>\n * <parm>column name</parm>\n * </columns>\n * Output (desc varies V5R4 to V6+):\n * <columns ...>\n * <row>\n * <data desc='TABLE_CAT'>LP0164D</data>\n * <data desc='TABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='TABLE_NAME'>ANIMAL</data>\n * <data desc='COLUMN_NAME'>BREED</data>\n * <data desc='DATA_TYPE'>12</data>\n * <data desc='TYPE_NAME'>VARCHAR</data>\n * <data desc='LENGTH_PRECISION'>0</data>\n * <data desc='BUFFER_LENGTH'>34</data>\n * <data desc='NUM_SCALE'>0</data>\n * <data desc='NUM_PREC_RADIX'>0</data>\n * <data desc='NULLABLE'>1</data>\n * <data desc='REMARKS'></data>\n * <data desc='COLUMN_DEF'>\n * </data><data desc='DATETIME_CODE'>0</data>\n * <data desc='CHAR_OCTET_LENGTH'>32</data>\n * <data desc='ORDINAL_POSITION'>2</data>\n * </row>\n * </columns>\n *\n * Meta data - special columns:\n * <special [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>table name</parm>\n * <parm>row|transaction|session</parm>\n * <parm>no|nullable</parm>\n * </special>\n *\n * Meta data - column privileges:\n * <columnpriv [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>table name</parm>\n * <parm>column name</parm>\n * </columnpriv>\n * Output (desc varies V5R4 to V6+):\n * <columnpriv ...>\n * <row>\n * <data desc='TABLE_CAT'>LP0164D</data>\n * <data 
desc='TABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='TABLE_NAME'>ANIMAL</data>\n * <data desc='COLUMN_NAME'>BREED</data>\n * <data desc='GRANTOR'></data>\n * <data desc='GRANTEE'>PUBLIC</data>\n * <data desc='PRIVILEGE'>SELECT</data>\n * <data desc='IS_GRANTABLE'>NO</data>\n * <data desc='DBNAME'></data>\n * </row>\n * </columnpriv>\n *\n * Meta data - procedures:\n * <procedures [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>procedure name</parm>\n * </procedures>\n * Output (desc varies V5R4 to V6+):\n * <procedures ...>\n * <row>\n * <data desc='PROCEDURE_CAT'>LP0164D</data>\n * <data desc='PROCEDURE_SCHEM'>XMLSERVTST</data>\n * <data desc='PROCEDURE_NAME'>MATCH1</data>\n * <data desc='NUM_INPUT_PARAMS'>2</data>\n * <data desc='NUM_OUTPUT_PARAMS'>1</data>\n * <data desc='NUM_RESULT_SETS'>1</data>\n * <data desc='REMARKS'></data>\n * </row>\n *\n * Meta data - procedure columns:\n * <pcolumns [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>proc name</parm>\n * <parm>column name</parm>\n * </pcolumns>\n * Output (desc varies V5R4 to V6+):\n * <pcolumns ...>\n * <row>\n * <data desc='PROCEDURE_CAT'>LP0164D</data>\n * <data desc='PROCEDURE_SCHEM'>XMLSERVTST</data>\n * <data desc='PROCEDURE_NAME'>MATCH1</data>\n * <data desc='COLUMN_NAME'>FIRST_NAME</data>\n * <data desc='COLUMN_TYPE'>1</data>\n * <data desc='DATA_TYPE'>12</data>\n * <data desc='TYPE_NAME'>CHARACTER VARYING</data>\n * <data desc='PRECISION'>0</data>\n * <data desc='LENGTH'>128</data>\n * <data desc='SCALE'>0</data>\n * <data desc='RADIX'>0</data>\n * <data desc='NULLABLE'>YES</data>\n * <data desc='REMARKS'></data>\n * <data desc='COLUMN_DEF'>0</data>\n * <data desc='SQL_DATA_TYPE'>12</data>\n * <data desc='SQL_DATETIME_SUB'>0</data>\n * <data desc='CHAR_OCTET_LENGTH'>128</data>\n * <data desc='ORDINAL_POSITION'>1</data>\n * <data 
desc='IS_NULLABLE'>YES</data>\n * </row>\n * </pcolumns>\n *\n * Meta data - primary keys:\n * <primarykeys [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>table name</parm>\n * </primarykeys>\n * Output (desc varies V5R4 to V6+):\n * <primarykeys ...>\n * <row>\n * <data desc='TABLE_CAT'>LP0164D</data>\n * <data desc='TABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='TABLE_NAME'>ANIMAL2</data>\n * <data desc='COLUMN_NAME'>NOTEID</data>\n * <data desc='KEY_SEQ'>1</data>\n * <data desc='PK_NAME'>Q_XMLSERVTST_ANIMAL2_NOTEID_00001</data>\n * </row>\n * </primarykeys>\n *\n * Meta data - foreign keys:\n * <foreignkeys [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>primary qualifier or catalog</parm>\n * <parm>primary schema name</parm>\n * <parm>primary table name</parm>\n * <parm>foreign qualifier or catalog</parm>\n * <parm>foreign schema name</parm>\n * <parm>foreign table name</parm>\n * </foreignkeys>\n * Output (desc varies V5R4 to V6+):\n * <foreignkeys ...>\n * <row>\n * <data desc='PKTABLE_CAT'>LP0164D</data>\n * <data desc='PKTABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='PKTABLE_NAME'>ANIMAL2</data>\n * <data desc='PKCOLUMN_NAME'>NOTEID</data>\n * <data desc='FKTABLE_CAT'>LP0164D</data>\n * <data desc='FKTABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='FKTABLE_NAME'>FKEY2</data>\n * <data desc='FKCOLUMN_NAME'>IDF</data>\n * <data desc='KEY_SEQ'>1</data>\n * <data desc='UPDATE_RULE'>3</data>\n * <data desc='DELETE_RULE'>3</data>\n * <data desc='FK_NAME'>Q_XMLSERVTST_FKEY2_IDF_00001</data>\n * <data desc='PK_NAME'>Q_XMLSERVTST_ANIMAL2_NOTEID_00001</data>\n * </row>\n * </foreignkeys>\n *\n * Meta data - statistics:\n * <statistics [conn='label' error='on|off|fast'>\n * (default=off)\n * <parm>qualifier or catalog</parm>\n * <parm>schema name</parm>\n * <parm>table name</parm>\n * <parm>all|unique</parm>\n * </statistics>\n * Output (desc varies V5R4 to V6+):\n * 
<statistics ...>\n * <row>\n * <data desc='TABLE_CAT'></data>\n * <data desc='TABLE_SCHEM'>XMLSERVTST</data>\n * <data desc='TABLE_NAME'>ANIMAL2</data>\n * <data desc='NON_UNIQUE'>1</data>\n * <data desc='00005'>XMLSERVTST</data>\n * <data desc='00006'>INDEX2</data>\n * <data desc='TYPE'>3</data>\n * <data desc='ORDINAL_POSITION'>1</data>\n * <data desc='00009'>ANIMALID</data>\n * <data desc='COLLATION'>A</data>\n * <data desc='CARDINALITY'>0</data>\n * <data desc='PAGES'>0</data>\n * </row>\n * </statistics>\n *\n * </sql>\n *\n * NOTES:\n *\n * The XMLSERVICE sql rules:\n * > Connection rules:\n * - if connect is omitted, then XMLSERVICE will open\n * a default connection under the current profile.\n * - if servermode='na' (off default), ONLY ONE connection\n * allowed for the XMLSERVICE job. This is a DB2 rule of\n * one activation equals one active connection/transaction,\n * so you must free/commit a connection/transaction before\n * attempting to create a new connection/transaction\n * (or connect another profile). This is the correct\n * mode for sharing QTEMP with XMLSERVICE called\n * PGMs/SRVPGMs, so it is also the XMLSERVICE default.\n * - if servermode='on', XMLSERVICE may have multiple\n * connection/transactions active, BUT each connection\n * will be running in a separate QSQSRVR job. This means\n * QTEMP in XMLSERVICE job will NOT BE USEABLE to communicate\n * between XMLSERVICE called PGM/SRVPGM, etc. 
(QTEMP useless).\n * Once an XMLSERVICE job enters servermode='on',\n * it will NEVER stop using server mode until the\n * process is ended (choose wisely).\n * > Commit rules (see options):\n * - default is autocommit='on', where all create, insert,\n * delete actions will be committed at operation time.\n * - if autocommit='off', commit action may be delayed\n * across multiple requests to XMLSERVICE and you\n * may also rollback the transaction.\n * > Statement rules:\n * - if stmt='label' is omitted (normal), the first active\n * statement available will be used for a sql operation\n * such as fetch or describe. Therefore it is not wise\n * to mix stmt='label' and omitted in the same XMLSERVICE\n * task.\n * - if a stmt is left active (no free), and a subsequent\n * 'label'/omitted is attempted, the active statement will\n * be released along with all result sets, etc. (free),\n * and the new statement will become the active statement.\n * Therefore if you are attempting to use multiple result\n * sets from different queries, you should manually\n * specify stmt='label' to avoid fetching from the\n * wrong result set.\n * > Options rules:\n * -Using servermode='on' universally executes statements\n * in child process (QSQSRVR jobs), not XMLSERVICE job.\n * Once servermode='on', it stays 'on' for all connections\n * for the life of the XMLSERVICE job, therefore you cannot\n * expect to share QTEMP between PGM calls/DB2 SQL, etc.\n * (ie. think very carefully before you use this option)\n * -Default mode is servermode='na' (off), or 'normal mode',\n * which means a single connect/transaction is allowed\n * active at any given time. Therefore, you must end\n * any transaction with <commit> and <free conn='all'>,\n * before attempting to switch between connection profiles.\n * (normal CLI rules transaction/connect in a single process)\n * -System vs. 
SQL naming libl, where:\n * - naming='system' with libl='curlib1 mylib2' (list)\n * example: select * from mylib3/table (specific library)\n * select * from table (uses library list)\n * (system naming is the default naming mode XMLSERVICE)\n * - naming='sql' with *sqllib='mylib' (one)\n * example: select * from mylib3.table (specific schema)\n * select * from table (uses *sqllib='schema')\n * Do not try to mix these two modes in the same\n * connection as this always leads to program errors\n * (ie. make up your mind before you write your scripts).\n * > Connection/statements/options label rules (optional labels):\n * conn='label' - unique name connection (optional)\n * stmt='label' - unique name statement (optional)\n * options='label' - unique name options template (optional)\n * - a label is ten characters or less\n * - a unique 'label' is used as XMLSERVICE/DB2 routing 'key'\n * thereby allowing multiple XML <sql> calls to XMLSERVICE\n * routing back to open/active DB2 connection(s)/statement(s)\n * - If optional conn/stmt 'label' is omitted, XMLSERVICE\n * will return a unique 'label' attribute in output XML\n * for subsequent sql prepare, execute, fetch, etc.\n * - If optional conn/stmt 'label' is omitted, XMLSERVICE\n * will attempt to use any active conn/stmt. 
No 'label(s)'\n * works just fine with less XML, but you need to be very careful\n * that other scripts do not introduce additional conn/stmt\n * 'label(s)' that spoil your generic XML DB2 statements.\n * > Connection/statements/options free rules (optional free):\n * - Connections/statements remain active/open until released:\n * <free/> - release all\n * <free options='label'/> - release options template\n * <free options='all'/> - release all options template\n * <free conn='label'/> - release connection (and statements)\n * <free conn='all'/> - release all connections (and statements)\n * <free cstmt='label'/> - release all statements 'label' connection\n * <free stmt='label'/> - release this statement\n * <free stmt='all'/> - release all statements\n * - conn='all' : free all connections,\n * also frees all statements\n * - cstmt='label': free all statements under 'label' connection,\n * other connections/statements remain active\n * - stmt='all' : free all statements,\n * connections remain active\n * > These alternate db2 features available ...\n * error='on,off,fast' (1.7.6)\n * on - script stops, full error report\n * off - script continues, job error log\n * fast - script continues, brief error log\n *\n *****************************************************\n\nXMLSERVICE job log\n------------------\nJob log info has been added to all XMLSERVICE returned fatal errors. 
An additional function was also added to retrieve the job log of the XMLSERVICE job or another job (1.5.8+).\n::\n\n * 0) Optional get diagnostics (1.5.8)\n * <diag [info='joblog|conf' job='job' user='uid' nbr='nbr']/>\n * example run\n * <?xml version=\"1.0\"?>\n * <diag info='joblog'/>\n * Note:\n * The current XMLSERVICE job log is assumed if optional\n * attributes job='job' user='uid' nbr='nbr' missing.\n * if you wish to provide custom diagnostics,\n * info='conf' calls optional hook in plugconfx.\n\n\nOne sample by value (xmlservice 1.9.9.3+)\n------------------------------------------\n\nThis is only about RPG parameters marked 'const' or 'value'. If you do not understand these 'by value' RPG parameter types, this will only confuse (ignore).\n\nA fix was added for by value in xmlservice 1.9.9.3. All data types test work except zoned decimal by value (although may work). This will be handled later in next base version of xjservice. Please note ``by=\"val\"`` is an attribute of ``<parm>``, not ``<data>`` (see below).\n\nBTW -- Attribute of ``<parm by='val'>`` is not an error (no debate). That is, ``by='ref,val'`` is indeed attribute of ``<parm>``, same as ``io='in,out,both'``, and other exotic parameter attributes like \\*omit and \\*nopass. Unfortunately RPG syntax obfuscates spoken words 'passing parameters' into mind's eye of 'passing data(s) and structures and arrays of data' (which make no sense). 
Aka, ``<parm by='val' io='in'>`` is correct.\n::\n\n dcl-proc GetPacked export;\n dcl-pi *N;\n i2d char(8) Value;\n p1 packed(4:2) Value;\n p2 packed(3:2) Value;\n p3 packed(12:2) Value;\n p4 packed(6:2) Value;\n p5 packed(8:2) Value;\n p6 packed(24:4) Value;\n p7 packed(48:8) Value;\n ppd char(15) Value;\n zzd char(30);\n i2 int(5) value;\n i1d char(30);\n i4 int(10) value;\n i8 int(20) value;\n f4 float(4) value;\n f4d char(30);\n f8 float(8) value;\n i4d char(30);\n i8d char(30);\n f8d char(30);\n i1 int(3) value;\n end-pi;\n p1 += 2.22;\n p2 += 2.22;\n p3 += 2.22;\n p4 += 2.22;\n p5 += 2.22;\n p6 += 2.22;\n p7 += 2.22;\n ppd = 'pack man';\n zzd = 'zone man';\n i1 += 2;\n i1d = 'byte man';\n i2 += 2;\n i2d = 'short man';\n i4 += 2;\n i4d = 'integer man';\n i8 += 2;\n i8d = 'longlong man';\n f4 += 2.22;\n f4d = 'float man';\n f8 += 2.22;\n f8d = 'double man';\n end-proc;\n\n xjInData =\n '<?xml version=\"1.0\"?>'\n + '<xmlservice>'\n + '<cmd error=\"fast\" exec=\"cmd\" var=\"chglibl\">'\n + 'CHGLIBL LIBL('+TEST_LIB+')'\n + '</cmd>'\n + '<pgm error=\"fast\" func=\"GETPACKED\" name=\"TESTZSRV\" var=\"packme\">'\n\n + '<parm io=\"both\" by=\"val\" var=\"p8\">'\n + '<data type=\"8a\" var=\"i2d\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"pp1\">'\n + '<data type=\"4p2\" var=\"pp\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"pp2\">'\n + '<data type=\"3p2\" var=\"zz\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"pp3\">'\n + '<data type=\"12p2\" var=\"zz\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"pp4\">'\n + '<data type=\"6p2\" var=\"zz\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"pp5\">'\n + '<data type=\"8p2\" var=\"zz\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"pp6\">'\n + '<data type=\"24p4\" var=\"zz\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"pp7\">'\n + '<data type=\"48p8\" var=\"zz\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"p2\">'\n + 
'<data type=\"15a\" var=\"ppd\">1</data></parm>'\n\n + '<parm io=\"both\" var=\"p4\">'\n + '<data type=\"30a\" var=\"zzd\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"p7\">'\n + '<data type=\"5i0\" var=\"i2\">1</data></parm>'\n\n + '<parm io=\"both\" var=\"p6\">'\n + '<data type=\"30a\" var=\"i1d\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"p9\">'\n + '<data type=\"10i0\" var=\"i4\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"p11\">'\n + '<data type=\"20i0\" var=\"i8\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"p13\">'\n + '<data type=\"4f\" var=\"f4\">1</data></parm>'\n\n + '<parm io=\"both\" var=\"p14\">'\n + '<data type=\"30a\" var=\"f4d\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"p15\">'\n + '<data type=\"8f\" var=\"f8\">1</data></parm>'\n\n + '<parm io=\"both\" var=\"p10\">'\n + '<data type=\"30a\" var=\"i4d\">1</data></parm>'\n\n + '<parm io=\"both\" var=\"p12\">'\n + '<data type=\"30a\" var=\"i8d\">1</data></parm>'\n\n + '<parm io=\"both\" var=\"p16\">'\n + '<data type=\"30a\" var=\"f8d\">1</data></parm>'\n\n + '<parm io=\"both\" by=\"val\" var=\"p5\">'\n + '<data type=\"3i0\" var=\"i1\">1</data></parm>'\n\n + '</pgm>'\n + '</xmlservice>'\n + x'00';\n\n\n\n\n.. \n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEQuick?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.5589080452919006,
"alphanum_fraction": 0.5632184147834778,
"avg_line_length": 23,
"blob_id": "2289c2401d22ae6e4713c3fbfc757a9097706d1a",
"content_id": "e99c2fbf93e447ef2dde8bad3a4d14cd90fd26f0",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1392,
"license_type": "permissive",
"max_line_length": 58,
"num_lines": 58,
"path": "/test/php/xmlservice_junk_away.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// *** $ctl .= \" *hack\"; ***\n// some drivers (result set) ...\n// possible junk end record, therefore\n// xmlservice provided $ctl='*hack'\n// record</hack>junk\nfunction driverJunkAway($xml)\n{\n // trim blanks (NO we need them)\n $clobOut = $xml;\n // *BLANKS returned forget it\n if (! trim($clobOut)) return $clobOut;\n // result set end of record marker ($ctl='*hack')\n $fixme = '</hack>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos);\n }\n else {\n // traditional end of script\n $fixme = '</script>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos+strlen($fixme));\n }\n // maybe error/performance report\n else {\n $fixme = '</report>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos+strlen($fixme));\n }\n }\n }\n return $clobOut;\n}\n\n// xml common text replacement\nfunction test_lib_replace($xml) {\n global $testLib, $procOPM;\n if (!$procOPM) {\n $was = array(\"xyzlibxmlservicexyz\");\n $now = array(\"$testLib\");\n }\n else {\n $was = array(\"xyzlibxmlservicexyz\",\"<pgm\");\n $now = array(\"$testLib\",\"<pgm mode='opm'\");\n }\n $out = str_replace($was,$now,$xml);\n return $out;\n}\n\nfunction driverTime() {\n $t = localtime(time(), true);\n return $t['tm_hour'].\":\".$t['tm_min'].\":\".$t['tm_sec'];\n}\n\n?>\n"
},
{
"alpha_fraction": 0.460803747177124,
"alphanum_fraction": 0.5026130676269531,
"avg_line_length": 29.822221755981445,
"blob_id": "63365fba36c9f638ff7d4fdede8b7accd249aae1",
"content_id": "2e9227520b46feee06918597ff71d4a4fd003c95",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 5549,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 180,
"path": "/test/php/test_10362_ZZERICH_ibm_db2_erich.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - erich occurs data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG10M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Fail XML pgm parms missing ($lib/$name)\\n\");\n// expected\n$vevsfi = 'A';\n$vevsrj = 'BB';\n$vevsob = 22.0;\n$vevsve = 33.0;\n// $vevsods ds occurs(200)\n$vsukz = '1';\n$vpos = '23456789';\n$vtxt = 'lots o stuff';\n$vkalw = 42.42;\n$vvsw = 43.43;\n$vvsk = 2.0;\n// top parms\n$vevsfi1 = (string)$parm[0]->data;\n$vevsrj1 = (string)$parm[1]->data;\n$vevsob1 = (string)$parm[2]->data;\n$vevsve1 = (string)$parm[3]->data;\nif ($vevsfi != $vevsfi1) die(\"Fail vevsfi ($vevsfi not $vevsfi1) ($lib/$name)\\n\");\nif ($vevsrj != $vevsrj1) die(\"Fail vevsrj ($vevsrj not $vevsrj1) ($lib/$name)\\n\");\nif ($vevsob != $vevsob1) die(\"Fail vevsob ($vevsob not $vevsob1) ($lib/$name)\\n\");\nif ($vevsve != $vevsve1) die(\"Fail vevsve ($vevsve not $vevsve1) ($lib/$name)\\n\");\n// occurs DS\n$dsall = $parm[4]->ds;\nif (count($dsall) != 200) {\n die(\"Fail XML pgm not return 200 DS records ($lib/$name)\\n\");\n}\n// expect ds1\n$ds1=array($vsukz,$vpos,$vtxt);\nfor ($j=0;$j<15;$j++) $ds1[]=$vkalw;\nfor ($j=0;$j<15;$j++) $ds1[]=$vvsw;\nfor ($j=0;$j<15;$j++) $ds1[]=$vvsk;\n// actual ds2\n$i=1;\nforeach ($dsall as $ds) {\n $vsukz1 = (string)$ds->data[0];\n $vpos1 = (string)$ds->data[1];\n $vtxt1 = (string)$ds->data[2];\n $ds2 = array($vsukz1,$vpos1,$vtxt1);\n for ($j=0;$j<15;$j++) {\n $vkalw1 = (string)$ds->data[$j+3];\n $ds2[] = $vkalw1;\n }\n for ($j=0;$j<15;$j++) {\n $vvsw1 = (string)$ds->data[$j+18];\n $ds2[] = $vvsw1;\n }\n for ($j=0;$j<15;$j++) {\n $vvsk1 = (string)$ds->data[$j+33];\n $ds2[] = $vvsk1;\n }\n // any differences?\n $r = 
array_diff($ds1,$ds2);\n if ($r && count($r)) {\n echo substr($clobOut,0,3000).\" ... \\n\";\n var_dump($r);\n die(\"Fail XML occurs($i) ($lib/$name)\\n\");\n }\n $i++;\n}\n\n// pgm data returned\n$retn = $pgm->xpath('return');\nif (!$retn) die(\"Fail XML pgm return missing ($lib/$name)\\n\");\n$var = $retn[0]->data->attributes()->var;\n$actual = (string)$retn[0]->data;\n$expect='0';\nif ($actual != $expect) die(\"return: $var ($actual not $expect) ($lib/$name)\\n\");\n\n// good\necho substr($clobOut,0,3000).\" ... \\n\";\necho \"Success ($lib/$name)\\n\";\n\n\n// D $vevsfi s 1\n// D $vevsrj s 2\n// D $vevsob s 7s 0\n// D $vevsve s 5s 0\n// D*Ergebnisdaten:\n// D $vevsods ds occurs(200)\n// D $vsukz 1 1\n// D $vpos 2 9\n// D $vtxt 10 39\n// D $vkalw 40 174 2 dim(15)\n// D $vvsw 175 309 2 dim(15)\n// D $vvsk 310 324 0 dim(15)\n// d*\n// D i S 10i 0 inz(0)\n// D j S 10i 0 inz(0)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// c parm $vevsfi\n// c parm $vevsrj\n// c parm $vevsob\n// c parm $vevsve\n// c parm $vevsods\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZERICH' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data var='vevsfi' type='1A'>a</data>\n </parm>\n <parm io='both'>\n <data var='vevsrj' type='2A'>bb</data>\n </parm>\n <parm io='both'>\n <data var='vevsob' type='7s0'>11</data>\n </parm>\n <parm io='both'>\n <data var='vevsve' type='5s0'>22.0</data>\n </parm>\n <parm io='both'>\n <ds var='vevsods' dim='200'>\n <data var='vsukz' type='1A'>x</data>\n <data var='vpos' type='8A'>y</data>\n <data var='vtxt' type='30A'>hallo</data>\n <data var='vkalw' type='9s2' dim='15'>9.2</data>\n <data var='vvsw' type='9s2' dim='15'>8.2</data>\n <data var='vvsk' type='1s0' dim='15'>1.0</data>\n </ds>\n </parm>\n <return>\n <data var='ret' type='10i0'>0</data>\n 
</return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.6640070676803589,
"alphanum_fraction": 0.673758864402771,
"avg_line_length": 34.1875,
"blob_id": "aac84243c3e000385ca55ecbdc4f7fc040e6beed",
"content_id": "0ac9a23884a313568778dbc1777826eb18d6c148",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1128,
"license_type": "permissive",
"max_line_length": 112,
"num_lines": 32,
"path": "/test/php/test_50100_ibm_db2_io_jvm_ZZJAVA.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout JVM test ZZJAVA sleeper\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\nrequire_once(\"ToolkitService.php\");\n\n// IBM i\n$conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n// normal RPG java: 'customControl'=>'*java' -- set your own classpath\n// SP SQL java: 'customControl'=>'*sqljava' or 'customControl'=>'*dbgjava' -- your classpath ignored\ntry { $ToolkitServiceObj = ToolkitService::getInstance($conn); } catch (Exception $e) { die($e->getMessage()); }\n$options = array('plugSize'=>'4K','customControl'=>'*sqljava','stateless'=>true);\n$ToolkitServiceObj->setToolkitServiceParams($options);\n$ms = 5000; // 5 seconds\necho \"java RPG w/java stored procedure now sleep($ms ms) ...\\n\";\n$param = array();\n$param[] = $ToolkitServiceObj->AddParameterInt32('both', \"ms\", \"ms\", $ms);\n$result = $ToolkitServiceObj->PgmCall('ZZJAVA', 'XMLSERVICE', $param, null, null);\nvar_dump($result);\n\nif ($result[\"io_param\"][\"ms\"] != $ms) die(\"ZZJAVA missing $ms \");\n\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n"
},
{
"alpha_fraction": 0.5840004086494446,
"alphanum_fraction": 0.6245747208595276,
"avg_line_length": 34.949642181396484,
"blob_id": "d6eef07a1d541895c70aa5b47b23559ccb44fc62",
"content_id": "65b48c20a1601d981c231de0fe71aa6f22dc967f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 39976,
"license_type": "permissive",
"max_line_length": 497,
"num_lines": 1112,
"path": "/docs/debugging.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE/Toolkit debugging and service\n========================================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nWho is this page for?\n---------------------\nInstructions designed for IBM i developer learning PHP and XMLSERVICE ...\n\nDebug technique: It's as easy as 1-2-3-4-5-6-7-8-9 :)\n-----------------------------------------------------\na) Run the test script that contains control \"\\*debug\" and script will \"hang\" while it waits on #2\n\n ``$ctl .= \" *debug\";``\n\n 1. A MSGW inquiry message in DSPMSG QSYSOPR will be generated by the toolkit. Note the job information (number, name, user) provided in the MSGW.\n\n #. STRSRVJOB using that job information as parameters.\n\n #. STRDBG with the program and library you wish to debug.\n\n #. Answer the MSGW. Any answer will do--\"G\" is fine.\n\n #. The RPG program source will appear in debug mode in your terminal, ready to step through, allowing you to inspect variables, etc.\n\n #. When done inspecting and stepping, let the RPG program complete (using function keys indicated on screen).\n\n #. 
ENDDBG\n\nb) ENDSRVJOB\n\nOther debug options::\n\n Job1 (threaded) Job 2 Job 3 (DB2 userid/password) Job 4 (optional XTOOLKIT job)\n (ctl=*debugcgi) (ctl=*debugproc) (ctl=*debug)\n browser -> Apache ->XMLCGI (Apache CGI child) -> QSQSRVR (XMLSERVICE *here)\n -> QSQSRVR (XMLSERVICE client) -> XTOOLKIT (XMLSERVICE ipc=/tmp/flinstone)\n\n $ctl .= \" *debugcgi\"; // Job 2 - debug XMLCGI to see REST/HTTP data passed by client (when using REST only)\n $ctl .= \" *debugproc\"; // Job 3 - debug XMLSERVICE \"client\" to see DB2 passed data (DB2 interface)\n $ctl .= \" *debug\"; // Job 4 - debug XMLSERVICE \"server\" to see XMLSERVICE calls (DB2 interface)\n // Note: when ctl='*here', both XMLSERVICE \"client\"/\"server\"\n // are in QSQSRVSR job (NO XTOOLKIT job)\n // remote: Attaching with LUW drivers changes QSQSRVR ...\n // CLIENT (Client Access drivers) <==> QZDAxxxx\n // CLIENT (DB2 Connect drivers) <==> QRWxxxx\n\n\n..\n Tips\n ----\n This is a collection of debug tips and where you may find your problem data.\n * [[#logs | Check the logs (click here) ...]] -- viewing PHP logs is often enough to solve a problem\n ** [[#phplog | Check PHP log for messages (click here) ...]]\n ** [[#httplog | Check Apache logs for messages (click here) ...]]\n ** [[#newlog | Check PHP toolkit logs for messages (click here) ...]]\n * [[#jobactive | Check active XMLSERVICE job (click here) ...]]\n ** WRKACTJOB -- ``5=work -> 10. Display job log -> F10=Display detailed messages`` can be very revealing\n ** WRKACTJOB -- ``5=work -> 11. 
Display call stack -> F5=Refresh`` watch stack during stress runs\n * [[#levels | Check one level at a time (click here) ...]] -- run simple tests up web site food chain can be very revealing\n ** Download tests: 2012-05-18 - [[ Attach:levels-1.4.zip | levels-1.4.zip]] -- simple tests for each level\n ** [[#level0 | Level 0 -- PHP working (click here) ...]]\n ** [[#level1 | Level 1 -- Apache/PHP working (click here) ...]]\n ** [[#level2 | Level 2 -- db2 connection working (click here) ...]]\n ** [[#level3 | Level 3 - XMLSERVICE XML interface working (click here) ...]]\n ** [[#level4 | Level 4 -- PHP New Toolkit working (click here) ...]]\n ** [[#level5 | Level 5 -- PHP CW Toolkit working (optional) (click here) ...]]\n * [[#debugger | Stop XMLSERVICE for debugger attach (click here) ..]]\n ** [[#debuggerfind | WRKACTJOB find XMLSERVICE private mode only (click here) ...]]\n ** [[#debuggerstop | XMLSERVICE QSYSOPR message stop anytime (click here) ...]]\n * additional links common XMLSERVICE error information\n ** [[XMLSERVICEFAQ | %blue%{XMLSERVICE FAQ}%%]] - have issues, try here\n ** [[XMLSERVICEERROR | %blue%{XMLSERVICE ERRORS}%%]] - errors, try here\n ** [[XMLSERVICEConnect | %blue%{XMLSERVICE Connections}%%]] - XMLSERVICE/Toolkit connections\n ** [[XMLSERVICEConfig | %blue%{XMLSERVICE Performance}%%]] - performance XMLSERVICE\n * PASE debug secrets (for a PASE geek) ...\n ** [[PASE/Geek]] - everything about PASE from a geek\n ** [[PASE/PASEDebug]] - debug secrets\n * The following links are associated with my machine environment and may be helpful.\n ** [[Apache/ab | Apache ab tool]] -- Apache ab web site stress tests are performed from the 5250 command line (call qp2term) or ssh myibmi using PASE (see Apache ab link install instructions)\n *** You can run stress tests from 2-tier Linux/Windows using Apache ab tool, but i am using Apchae ab from PASE.\n *** Apache ab tool is not perfect, but if you use relatively \"sane\" number of browsers like -c 10 it will 
work.\n *** Apache ab test is designed to drive CPU to 100% (a good thing), so don't panic about CPU\n ** [[Java/SystemDebugger | GUI System Debugger]] -- GUI debugger always available on IBM i\n ** [[FastCGI/FastCGI | Apache FastCGI]] -- Apache FastCGI is used for PHP on IBM i, this link may be helpful.\n ** [[PHP/IASP | PHP via iASP]] -- My machine setup using iASP Zend Server data.\n *** Many of the examples below reference %red%/MYASP2/www/zend2%% because i am running all my Zend Server data in an Apache ZEND2 configuration on port 80 using iASP /MYASP2 ``(http://myibmi/hello.php)``.\n *** Your configuration is more likely Zend Server out-of-the-box-install running in /www/zendsvr on port 10088 (http://myibmi:10088/hello.php)\n\nWorking with service provider?\n------------------------------\n\nHere are a few common things that can provide useful information if working with outside support people.\n\n0) easy way 1.7.1 ... just send provider XMLSERVLOG/LOG and XMLSERVLOG/DUMP in SAVF\n\nassuming you have a trace of the issue (see xmlservice logging instructions)\n\n1) what do you see in error logs for simple tests ???\n\n::\n\n call qp2term\n > tail /usr/local/zendsvr/var/log/php.log\n > tail /myasp2/www/zend2/logs/error_log.Q112061800\n > tail /myasp2/www/zend2/logs/access_log.Q112061800\n\n2) Summarize configuration (below)???\n\n My Example\n \n ::\n\n SYSASP -- ZENDSVR library ... everything as installed\n SYSASP -- /usr/local/zendsvr ... everything as installed\n SYSASP -- /tmp ... many Zend \"enterprise\" components use /tmp\n MYASP2 -- /myasp2/www/zend2 ... ALL \"user data\" moved from /www/zendsvr (/conf, /logs, /htdocs)\n More ... /myasp2/www/zend2\n -- NO symbolic links between SYSAPS and MYASP2 for true \"independent\"ASP.\n -- config files /myasp2/www/zend2/conf\n attach: fastcgi.conf\n attach: httpd.conf\n\n3) Around time of simple test failure ... do you see job logs?\n\n *wrkoutq*\n\n ::\n\n If jobs are still active (php-cgi, etc.) 
...\n wrkactjob\n 5 ZEND2 QTMHHTTP BCI .0 PGM-QZSRHTTP SIGW\n ZEND2 QTMHHTTP BCI .0 PGM-php-cgi.bi THDW\n 5 ZEND2 QTMHHTTP BCI .0 PGM-php-cgi.bi TIMW\n 10. Display job log, if active, on job queue, or pending\n F10 -- full job log\n\n4) Around time of simple test failure ... do you see VLOGS???\n\n ::\n\n STRSST\n 1. Start a service tool\n 5. Licensed Internal Code log\n 1. Select entries from the Licensed Internal Code (LIC) log\n Specify Licensed Internal Code Log Selection Values\n -- leave as is enter ---\n 0100A890 i5/OS PASE 4700 0013 06/15/12 09:03:43 7 <--- PASE\n 0100A891 LIC log interface 0401 0100 06/15/12 09:04:08 1\n 0100A892 Signals management 4600 0001 06/15/12 09:04:51 255\n 0100A893 Process management 1300 0001 06/18/12 21:02:06 1\n\n Maybe look for PASE, storage management, ASP, so on around \"failure time\"\n\n Note: \n You can also dump logs to spool ...\n\nWhen you have no idea (dumping many processes)?\n-----------------------------------------------\n\n Some times you just have no idea what is going on, here is a handy macro to dump a lot of stacks.\n \n ::\n\n STRSST/STRDST\n 1. Start a service tool\n 4. Display/Alter/Dump\n 1. Display/Alter storage\n ... or option for dump to printer ...\n 2. Licensed Internal Code (LIC) data\n 14. Advanced analysis\n Option Command\n 1 processinfo\n\n In this case dumping all process dealing with keyword \"ZEND\" appearing in job ...\n\n Specify Advanced Analysis Options\n Output device . . . . . . : Display\n Type options, press Enter.\n Command . . . . : PROCESSINFO\n Options . . . . . 
-NAMES ZEND\n\n Note:\n Information dumped printer/display is same as paseps macro.\n\n\n\nCheck active XMLSERVICE job\n---------------------------\n\nIf you are using private connections (InternalKey or $ipc='/tmp/packers'), the XMLSERVICE job is probably available for examination with wrkactjob.\n::\n\n Work with Active Jobs LP0264D\n 05/17/12 11:35:12\n CPU %: .0 Elapsed time: 00:00:00 Active jobs: 313\n\n Type options, press Enter.\n 2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message\n 8=Work with spooled files 13=Disconnect ...\n Current\n Opt Subsystem/Job User Type CPU % Function Status\n 5 XTOOLKIT DB2 BCH .0 PGM-XMLSERVICE SEMW\n\n1) Use ``option 5=work -> 10. Display job log -> F10=Display detailed messages`` to examine joblog on errors ...\n\n::\n\n Display All Messages\n System: LP0264D\n Job . . : XTOOLKIT User . . : DB2 Number . . . : 435915\n\n >> CALL PGM(XMLSERVICE/XMLSERVICE) PARM('/tmp/packers')\n Pointer not set for location referenced.\n Application error. MCH3601 unmonitored by ZZSRV at statement 0000000448,\n instruction X'0000'.\n\n\n2) Use ``option 5=work -> 11. Display call stack -> F5=Refresh`` to examine stack during stress tests ...\n\n::\n\n Display Call Stack\n System: LP0264D\n Job: XTOOLKIT User: DB2 Number: 437582\n Thread: 0000000C\n\n\n Type Program Statement Procedure\n 1 QCMD QSYS /01C8\n XMLSERVICE XMLSERVICE _QRNP_PEP_XMLSERVICE\n XMLSERVICE XMLSERVICE 1133 XMLSERVICE\n XMLSERVICE XMLSERVICE 4607 RUNSERVER\n XMLSERVICE XMLSERVICE 2983 SIGSETTIMEOUT\n XMLSERVICE XMLSERVICE 2876 SIGTIMEROFF\n QP0SSRV1 QSYS 19 setitimer\n QP0SSRV2 QSYS 159 qp0sitimer__F12qp0sitimer_t >\n\nCheck the logs\n--------------\n\nCheck PHP log for messages\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nOn my IBM i machine::\n\n EDTF STMF('/usr/local/zendsvr/var/log/php.log')\n -- or --\n call qp2term (or ssh myibmi)\n > tail /usr/local/zendsvr/var/log/php.log\n ... stuff\n ... 
in /MYASP2/www/zend2/htdocs/hello.php on line 1\n\nOn my Linux machine::\n\n $ tail /usr/local/zend/var/log/php.log\n [16-May-2012 16:30:12] PHP Warning: db2_close() expects parameter 1 to be resource ...\n\n\nCheck Apache logs for messages\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nerror logs for date in question::\n\n EDTF STMF('/myasp2/www/zend2/logs/error_log.Q112051500')\n -- or --\n call qp2term (or ssh myibmi)\n > tail /myasp2/www/zend2/logs/error_log.Q112051500\n [Tue May 15 17:10:11 2012] [error] [client 9.5.158.38] CGI PROGRAM /QSYS.LIB/XMLSERVICE.LIB/XMLCGI.PGM RETURNED EXCEPTION ID CEE9901\n [Tue May 15 17:10:11 2012] [error] [client 9.5.158.38] SEE JOBLOG FOR JOB 428979/QTMHHTTP /ZEND2\n\naccess logs for date in question::\n\n EDTF STMF('/myasp2/www/zend2/logs/access_log.Q112051500')\n -- or --\n call qp2term (or ssh myibmi)\n > tail /myasp2/www/zend2/logs/access_log.Q112051500\n 9.5.158.38 - - [15/May/2012:17:47:41 -0500] \"GET /cgi-bin/xmlcgi.pgm?db2=LP0264D\n\n\nCheck PHP Toolkit logs for messages\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\ntoolkit.ini logfile\n::\n\n EDTF STMF('/usr/local/zendsvr/share/ToolkitApi/toolkit.log')\n -- or --\n call qp2term (or ssh myibmi)\n > tail /usr/local/zendsvr/share/ToolkitAPI/toolkit.log\n 15 May 2012 22:53:35.752099 Running stateless; no IPC needed. 
Service library: ZENDSVR\n 15 May 2012 22:53:36.588466 i5Error: num=14 cat=9 msg=\"No more entries.\" desc=\"No more entries.\"\n\n location set in toolkit.ini ...\n EDTF STMF('/usr/local/zendsvr/share/ToolkitApi/toolkit.ini')\n [log]\n ; warnings and errors will be written to the logfile.\n logfile = \"/usr/local/zendsvr/share/ToolkitApi/toolkit.log\"\n\ntoolkit.ini debugLogFile\n::\n\n EDTF STMF('/usr/local/zendsvr/share/ToolkitApi/debug.log')\n -- or --\n call qp2term (or ssh myibmi)\n > tail /usr/local/zendsvr/share/ToolkitAPI/debug.log\n <data type='1A' var='ds1' comment='DSCHARA'><![CDATA[E]]></data>\n <data type='1A' var='ds2' comment='DSCHARB'><![CDATA[F]]></data>\n\n location set in toolkit.ini ...\n EDTF STMF('/usr/local/zendsvr/share/ToolkitApi/toolkit.ini')\n ; debug turns PHP toolkit's debug mode on or off (true/false). Default log file: /usr/local/zendsvr/share/ToolkitApi/debug.log\n ; This log will grow large, so leave this false when you do not need to log everything.\n debug = true\n debugLogFile = \"/usr/local/zendsvr/share/ToolkitApi/debug.log\"\n\nPHP and XMLSERVICE bad XML\n::\n\n EDTF STMF('/tmp/bad.xml')\n -- or --\n > tail /tmp/bad.xml\n start\n <?xml version=\"1.0\" encoding=\"ISO-8859-1\" ?><script><cmd><success><ܬCDATA�+++ success QSYS/DLTDTAARA DTAARA(XMLSERVICE/BETTYBOOP)||></success></cmd>\n\n\nCheck one level at a time\n-------------------------\n\n*Troubles on your PHP site or installation???*\n\nOften times if you take a deep breath, slow down and look at each level of web site components you can find your issue without reaching for the bat phone and calling Zend or IBM. The following set of tests walks up the PHP levels of components to give you confidence you are looking at the correct layer of your issue. 
Of course after you complete smaller PHP scripts (hello.php, etc.), you can likely use the same step up next level techniques on your sophisticated applications (WorldPeace.php).\n\n\nLevel 0 -- PHP Working\n^^^^^^^^^^^^^^^^^^^^^^\n\nIf you are running on the IBM i machine it is always best to make sure your Apache/PHP setup can do anything.\n\nNote:\nYour machine may have document root at /www/zendsvr vs. /MYASP2/www/zend2 (out-of-box installed Zend Server).\n\n**Test number #0**\n\nInstall the following simple test in your Document root and see if \"Hello World\" appears.\n::\n\n /MYASP2/www/zend2/htdocs/hello.php\n <?php\n echo \"Hello world\";\n ?>\n\n\nRun the test::\n\n call qp2term (or ssh myibmi)\n > export PATH=/usr/local/zendsvr/bin:$PATH\n > export LIBPATH=/usr/local/zendsvr/lib\n > cd /MYASP2/www/zend2/htdocs\n > php hello.php\n Hello world>\n\n..\n No, not working???\n\n * [[#phplog | Check PHP log for messages (click here) ...]]\n\nLevel 1 -- Apache/PHP working\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIf you are running on the IBM i machine it is always best to make sure your Apache/PHP setup can do anything.\n\nNote:\nYour machine may have document root at /www/zendsvr vs. 
/MYASP2/www/zend2 (out-of-box installed Zend Server).\n\n**Test number #1**\n\nInstall the following simple test in your Document root and see if \"Hello World\" appears in your browser.\n::\n\n /MYASP2/www/zend2/htdocs/hello.php\n <?php\n echo \"Hello world\";\n ?>\n\n..\n No, not working???\n * [[#phplog | Check PHP log for messages (click here) ...]]\n * [[#httplog | Check Apache logs for messages (click here) ...]]\n\nOk, next run stress test ...\n::\n\n call qp2term\n > cd /usr/local/Zend/apache2/bin\n > ab -t 25 -c 10 http://myibmi/hello.php\n -t 25 -- 25 seconds\n -c 10 -- 10 concurrent browsers (simulates ten browsers)\n\n\n* this test is designed to drive CPU to 100% (a good thing), so don't panic about CPU\n\nwrkactjob refresh (F10) - if you do not see multiple php-cgi jobs getting CPU re-run tests with more load\n::\n\n call qp2term\n > cd /usr/local/Zend/apache2/bin\n > ab -t 25 -c 10 http://myibmi/hello.php &; ab -t 25 -c 10 http://myibmi/hello.php &; ab -t 25 -c 10 http://myibmi/hello.php &;\n -t 25 -- 25 seconds\n -c 10 -- 10 concurrent browsers (simulates ten browsers)\n Multiply by 3 Apache ab jobs running background -- 30 concurrent browsers (simulates thirty browsers)\n\n* above using \"&;\" to fork/exec into background/batch many jobs(s) Apache ab to really hammer your web site\n* ab tool is not perfect, so if you start too many of these forked jobs ('&;') the tool may core dump (die) -- i usually can get 3 - 6 jobs working (30-60 browsers)\n* if you still do not see a split of CPU between php-cgi jobs you may be missing HTTP PTFS\n\nLevel 2 -- db2 connection working\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAt this level we want to check our DB2 connections.\n\nNote:\n\n* Your machine may have document root at /www/zendsvr vs. 
/MYASP2/www/zend2 (out-of-box installed Zend Server).\n\n* This same test can be run 2-tier from Linux/Windows to IBM i using DB2 Connect\n\n**Test number #2**\n\nInstall the following simple test in your Document root and see if \"success\" appears in your browser.\n::\n\n /MYASP2/www/zend2/htdocs/connection2.inc\n <?php\n $database = \"*LOCAL\"; // *LOCAL on IBM i ... LP0264D on Linux\n $cwdatabase = \"localhost\";\n $user = \"DB2\";\n $password = \"XXXXXXXX\";\n $libxmlservice = \"ZENDSVR\"; // ZZCALL (Zend Server)\n $i5persistentconnect = false;\n ?>\n\n /MYASP2/www/zend2/htdocs/xxtoolkit_connect.php\n <?php\n require_once('connection2.inc');\n // flip between persistent and non-persistent connections\n for ($i=0;$i<500;$i++) {\n for ($i5persistentconnect=1;$i5persistentconnect>-1;$i5persistentconnect--) {\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) echo \"<br>Bad connect: $conn,$database,$user,perm=$i5persistentconnect\";\n else echo \"<br>Good connect: $conn,$database,$user,perm=$i5persistentconnect\";\n if ($i5persistentconnect) $ok = true;\n else $ok = db2_close($conn);\n echo \",ok=$ok\\n\";\n }\n }\n\nrun the test Apache ...\n::\n\n Point your browser to php program ...\n http://myibmi/xxtoolkit_connect.php\n Good connect: Resource id #2,*LOCAL,DB2,perm=1,ok=1\n Good connect: Resource id #3,*LOCAL,DB2,perm=0,ok=1\n Good connect: Resource id #4,*LOCAL,DB2,perm=1,ok=1\n :\n\nrun the test 2-tier ... I choose to run from command line on my Linux machine, but Linux Apache would also work.\n::\n\n $ which php\n /usr/local/zend/bin/php\n $ php xxtoolkit_connect.php\n <br>Good connect: Resource id #5,LP0264D,DB2,perm=1,ok=1\n <br>Good connect: Resource id #6,LP0264D,DB2,perm=0,ok=1\n <br>Good connect: Resource id #7,LP0264D,DB2,perm=1,ok=1\n :\n\n.. 
\n No, not working???\n * [[#phplog | Check PHP log for messages (click here) ...]]\n * [[#httplog | Check Apache logs for messages (click here) ...]]\n\nLevel 3 - XMLSERVICE XML interface Working\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAt this level we want to check our PHP raw XML Toolkit built on top of ibm_db2 connections (Level 2).\n\nNote:\n\n* Your machine may have document root at /www/zendsvr vs. /MYASP2/www/zend2 (out-of-box installed Zend Server).\n* This same test can be run 2-tier from Linux/Windows to IBM i using DB2 Connect.\n* RPG test \\*PGM ZENDSVR/ZZCALL is included with Zend Server installation, so test will run out-of-box\n\n**Test number #3**\n\nInstall the following simple test in your Document root and see if \"success\" appears in your browser.\n::\n\n /MYASP2/www/zend2/htdocs/connection2.inc\n <?php\n $database = \"*LOCAL\"; // *LOCAL on IBM i ... LP0264D on Linux\n $cwdatabase = \"localhost\";\n $user = \"DB2\";\n $password = \"XXXXXXXX\";\n $libxmlservice = \"ZENDSVR\"; // ZZCALL (Zend Server)\n $i5persistentconnect = false;\n ?>\n\n /MYASP2/www/zend2/htdocs/xxtoolkit_raw.php\n <?php\n require_once('connection2.inc');\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) echo \"Bad connect: $conn,$database,$user,perm=$i5persistentconnect\";\n\n $stmt = db2_prepare($conn, \"call XMLSERVICE.iPLUG4K(?,?,?,?)\");\n $ctl = \"*sbmjob\"; // *here for no additional private job\n $ipc='/tmp/packers';\n // $ipc = \"\"; // *here no need ipc\n $clobIn = \"<?xml version='1.0'?>\n <pgm name='ZZCALL' lib='$libxmlservice'>\n <parm io='both'>\n <data type='1A'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A'>x</data>\n <data type='1A'>y</data>\n <data 
type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\";\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n // var_dump($clobOut);\n if (strpos($clobOut,\"4444444444.44\")>0) echo \"success\";\n else echo \"fail\";\n ?>\n\nrun the test Apache ...\n::\n\n Point your browser to php program ...\n http://myibmi/xxtoolkit_raw.php\n success\n\n\nrun the test 2-tier ... I choose to run from command line on my Linux machine, but Linux Apache would also work.\n::\n\n $ which php\n /usr/local/zend/bin/php\n $ php xxtoolkit_raw.php\n success\n\n..\n No, not working???\n * [[#phplog | Check PHP log for messages (click here) ...]]\n * [[#httplog | Check Apache logs for messages (click here) ...]]\n * [[#newlog | Check PHP toolkit logs for messages (click here) ...]]\n\n\nOk, next run stress test ...\n::\n\n call qp2term\n > cd /usr/local/Zend/apache2/bin\n > ab -t 25 -c 10 http://lp0264d/xxtoolkit_raw.php\n -t 25 -- 25 seconds\n -c 10 -- 10 concurrent browsers (simulates ten browsers)\n\n* this test is designed to drive CPU to 100% (a good thing), so don't panic about CPU\n\n\nLevel 4 -- PHP New Toolkit working\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAt this level we want to check our PHP CW Toolkit built on top of raw XML Toolkit (Level 3).\n\nNote:\n\n* Your machine may have document root at /www/zendsvr vs. 
/MYASP2/www/zend2 (out-of-box installed Zend Server).\n* This same test can be run 2-tier from Linux/Windows to IBM i using DB2 Connect.\n* RPG test \\*PGM ZENDSVR/ZZCALL is included with Zend Server installation, so test will run out-of-box\n\n**Test number #4**\n\nInstall the following simple test in your Document root and see if \"success\" appears in your browser.\n::\n\n /MYASP2/www/zend2/htdocs/connection2.inc\n <?php\n $database = \"*LOCAL\"; // *LOCAL on IBM i ... LP0264D on Linux\n $cwdatabase = \"localhost\";\n $user = \"DB2\";\n $password = \"XXXXXXXX\";\n $libxmlservice = \"ZENDSVR\"; // ZZCALL (Zend Server)\n $i5persistentconnect = false;\n ?>\n\n /MYASP2/www/zend2/htdocs/xxtoolkit_new.php\n <?php\n require_once('connection2.inc');\n require_once(\"ToolkitService.php\");\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) echo \"Bad connect: $conn,$database,$user,perm=$i5persistentconnect\";\n\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARA', 'var1', 'Y');\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARB', 'var2', 'Z');\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'INDEC1', 'var3', '001.0001');\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'INDEC2', 'var4', '0000000003.04');\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARA', 'ds1', 'A');\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARB', 'ds2', 'B');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'DSDEC1', 'ds3', '005.0007');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'DSDEC1', 'ds4', '0000000006.08');\n $param[] = $ToolkitServiceObj->AddDataStruct($ds);\n $clobOut = $ToolkitServiceObj->PgmCall('ZZCALL', $libxmlservice, $param, null, 
null);\n // var_dump($clobOut);\n $value = \"what is ...\".$clobOut[\"io_param\"][\"ds4\"];\n if (strpos($value,\"4444444444.44\")>-1) echo \"success\";\n else echo \"fail\";\n\nrun the test Apache ...\n::\n\n Point your browser to php program ...\n http://myibmi/xxtoolkit_new.php\n success\n\n\nrun the test 2-tier ... I choose to run from command line on my Linux machine, but Linux Apache would also work.\n::\n\n $ which php\n /usr/local/zend/bin/php\n $ php xxtoolkit_new.php\n success\n\n..\n No, not working???\n * [[#phplog | Check PHP log for messages (click here) ...]]\n * [[#httplog | Check Apache logs for messages (click here) ...]]\n\nOk, next run stress test ...\n::\n\n call qp2term\n > cd /usr/local/Zend/apache2/bin\n > ab -t 25 -c 10 http://lp0264d/xxtoolkit_new.php\n -t 25 -- 25 seconds\n -c 10 -- 10 concurrent browsers (simulates ten browsers)\n\n* this test is designed to drive CPU to 100% (a good thing), so don't panic about CPU\n\nLevel 5 -- PHP CW Toolkit working (optional)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAt this level we want to check our PHP CW Toolkit built on top of new Toolkit (Level 4).\n\nNote:\n\n* Your machine may have document root at /www/zendsvr vs. /MYASP2/www/zend2 (out-of-box installed Zend Server).\n* This same test can be run 2-tier from Linux/Windows to IBM i using DB2 Connect.\n* RPG test \\*PGM ZENDSVR/ZZCALL is included with Zend Server installation, so test will run out-of-box\n\n**Test number #5**\n\nInstall the following simple test in your Document root and see if \"success\" appears in your browser.\n::\n\n /MYASP2/www/zend2/htdocs/connection2.inc\n <?php\n $database = \"*LOCAL\"; // *LOCAL on IBM i ... 
LP0264D on Linux\n $cwdatabase = \"localhost\";\n $user = \"DB2\";\n $password = \"XXXXXXXX\";\n $libxmlservice = \"ZENDSVR\"; // ZZCALL (Zend Server)\n $i5persistentconnect = false;\n ?>\n\n /MYASP2/www/zend2/htdocs/xxtoolkit_cw.php\n <?php\n require_once('connection2.inc');\n require_once('CW/cw.php'); // new toolkit compatibility (Alan)\n\n /* connect */\n if ($i5persistentconnect) $conn = i5_pconnect($cwdatabase,$user,$password);\n else $conn = i5_connect($cwdatabase,$user,$password);\n if (!$conn) echo \"Bad connect: $conn,$cwdatabase,$user,perm=$i5persistentconnect\";\n\n if (!$conn)\n { $tab = i5_error();\n die(\"fail Connect: \".$tab[2].\" \".\"$tab[3], $tab[0]\");\n }\n\n /* prepare */\n $description =\n array\n (\n // single parms\n array\n ( \"Name\"=>\"INCHARA\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"INCHARB\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"INDEC1\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"7.4\"),\n array\n ( \"Name\"=>\"INDEC2\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"12.2\"),\n // structure parm\n array\n ( \"DSName\"=>\"INDS1\",\n \"Count\"=>1,\n \"DSParm\"=>\n array\n (\n array\n ( \"Name\"=>\"DSCHARA\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"DSCHARB\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"DSDEC1\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"7.4\"),\n array\n ( \"Name\"=>\"DSDEC2\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"12.2\"),\n )\n )\n );\n $pgm = i5_program_prepare(\"$libxmlservice/ZZCALL\", $description);\n if (!$pgm)\n { $tab = i5_error();\n die(\"fail Prepare: \".$tab[2].\" \".\"$tab[3], $tab[0]\");\n }\n\n // *** parameter list allocation\n $list=\n array\n (\n \"DSCHARA\"=>\"x\",\n \"DSCHARB\"=>\"y\",\n \"DSDEC1\"=>66.6666,\n \"DSDEC2\"=>77777.77,\n );\n // *** 
parameter values passed to procedure\n $in =\n array\n (\n \"INCHARA\"=>\"a\",\n \"INCHARB\"=>\"b\",\n \"INDEC1\"=>11.1111,\n \"INDEC2\"=>222.22,\n \"INDS1\"=>$list,\n );\n // *** name of variables created for out parameters\n $out =\n array\n (\n \"INCHARA\"=>\"INCHARA\",\n \"INCHARB\"=>\"INCHARB\",\n \"INDEC1\"=>\"INDEC1\",\n \"INDEC2\"=>\"INDEC2\",\n \"INDS1\"=>\"INDS1\",\n );\n $rc=i5_program_call($pgm, $in, $out);\n if ($rc != false)\n {\n if ($INCHARA != 'C') die(\"fail C == $INCHARA\\n\");\n if ($INCHARB != 'D') die(\"fail D == $INCHARB\\n\");\n if ($INDEC1 != 321.1234) die(\"fail 321.1234 == $INDEC1\\n\");\n if ($INDEC2 != 1234567890.12) die(\"fail 1234567890.12 = $INDEC2\\n\");\n if ($INDS1[\"DSCHARA\"] != 'E'\n || $INDS1[\"DSCHARB\"] != 'F'\n || $INDS1[\"DSDEC1\"] != 333.333\n || $INDS1[\"DSDEC2\"] != 4444444444.44)\n {\n var_dump($INDS1);\n die(\"fail DS not correct\\n\");\n }\n\n }\n else\n { $tab = i5_error();\n die(\"fail Call: \".$tab[2].\" \".\"$tab[3], $tab[0]\");\n }\n\n\n // good\n echo \"success\";\n ?>\n\n\nrun the test Apache ...\n::\n\n Point your browser to php program ...\n http://myibmi/xxtoolkit_cw.php\n success\n\n\nrun the test 2-tier ... I choose to run from command line on my Linux machine, but Linux Apache would also work.\n::\n\n $ which php\n /usr/local/zend/bin/php\n $ php xxtoolkit_cw.php\n success\n\n.. 
\n No, not working???\n * [[#phplog | Check PHP log for messages (click here) ...]]\n * [[#httplog | Check Apache logs for messages (click here) ...]]\n * [[#newlog | Check PHP toolkit logs for messages (click here) ...]]\n\nOk, next run stress test ...\n::\n\n call qp2term\n > cd /usr/local/Zend/apache2/bin\n > ab -t 25 -c 10 http://lp0264d/xxtoolkit_cw.php\n -t 25 -- 25 seconds\n -c 10 -- 10 concurrent browsers (simulates ten browsers)\n\n* this test is designed to drive CPU to 100% (a good thing), so don't panic about CPU\n\nApache ab sample running\n------------------------\n\nApache ab web site stress tests are performed from the 5250 command line (call qp2term) or ssh myibmi using PASE.\n\n* You can run stress tests from 2-tier Linux/Windows using the Apache ab tool, but I am using Apache ab from PASE.\n* The Apache ab tool is not perfect, but if you use a relatively \"sane\" number of browsers like -c 10 it will work.\n* Apache ab test is designed to drive CPU to 100% (a good thing), so don't panic about CPU\n\nExample run on my machine ...\n::\n\n > ab -t 25 -c 10 http://lp0264d/hello.php\n This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0\n Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\n Copyright 2006 The Apache Software Foundation, http://www.apache.org/\n\n Benchmarking lp0264d (be patient)\n Completed 5000 requests\n Completed 10000 requests\n Completed 15000 requests\n Completed 20000 requests\n Finished 20325 requests\n\n\n Server Software: Apache\n Server Hostname: lp0264d\n Server Port: 80\n\n Document Path: /hello.php\n Document Length: 11 bytes\n\n Concurrency Level: 10\n Time taken for tests: 25.5119 seconds\n Complete requests: 20325\n Failed requests: 0\n Write errors: 0\n Total transferred: 3394275 bytes\n HTML transferred: 223575 bytes\n Requests per second: 812.83 [#/sec] (mean) <--- 800 hits/second\n Time per request: 12.303 [ms] (mean)\n Time per request: 1.230 [ms] (mean, across all concurrent 
requests)\n Transfer rate: 132.53 [Kbytes/sec] received\n\n Connection Times (ms)\n min mean[+/-sd] median max\n Connect: 0 0 1.1 0 30\n Processing: 2 11 7.2 10 322\n Waiting: 2 10 7.2 10 322\n Total: 2 11 7.3 11 322\n\n Percentage of the requests served within a certain time (ms)\n 50% 11\n 66% 12\n 75% 14\n 80% 15\n 90% 17\n 95% 19\n 98% 21\n 99% 23\n 100% 322 (longest request)\n >\n\n\nwrkactjob -- during the Apache ab test you can use refresh on the wrkactjob screen to see the php-cgi jobs working\n\n* you should expect to see multiple php-cgi jobs getting some CPU time as you refresh with F10\n\n::\n\n Work with Active Jobs LP0264D\n 05/16/12 13:37:25\n CPU %: 100.0 Elapsed time: 00:00:00 Active jobs: 295\n\n Type options, press Enter.\n 2=Change 3=Hold 4=End 5=Work with 6=Release 7=Display message\n 8=Work with spooled files 13=Disconnect ...\n Current\n Opt Subsystem/Job User Type CPU % Function Status\n ZEND2 QTMHHTTP BCI .0 PGM-zfcgi SELW\n ZEND2 QTMHHTTP BCI .0 PGM-php-cgi.bi THDW\n ZEND2 QTMHHTTP BCI .0 PGM-php-cgi.bi THDW\n ZEND2 QTMHHTTP BCI 2.8 PGM-php-cgi.bi TIMA\n ZEND2 QTMHHTTP BCI 1.8 PGM-php-cgi.bi TIMA\n ZEND2 QTMHHTTP BCI 2.3 PGM-php-cgi.bi TIMA\n ZEND2 QTMHHTTP BCI 2.8 PGM-php-cgi.bi TIMA\n ZEND2 QTMHHTTP BCI 1.8 PGM-php-cgi.bi RUN\n ZEND2 QTMHHTTP BCI 1.4 PGM-php-cgi.bi TIMA\n\n\nStop XMLSERVICE for debugger attach\n-----------------------------------\n\nWRKACTJOB find XMLSERVICE private mode only\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIf you are running a private connection (ipc='/tmp/packers'), you can simply use WRKACTJOB and locate the XMLSERVICE job and attach a debugger. 
However, the next qsysopr message option is also available.\n\nXMLSERVICE QSYSOPR message stop anytime\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAt times it is useful to stop the XMLSERVICE job to connect a debugger to examine an issue, especially if you are running stateless in the QSQSRVR job.\n\nI often find the simple trick below to be very useful.\n\n**The trick ...**\n\nAt this time the PHP wrappers have not implemented a stop for debugger interface (cough ... Alan), \nbut XMLSERVICE has control \*debug ability already available. In the following example, adding \n``$ctl .= \" *debug\";`` will stop the XMLSERVICE job with an inquiry message on qsysopr; \nsimply attach your debugger to the job # in the msg, set a breakpoint in your module (or in an XMLSERVICE module), \nand answer the qsysopr message with any character to let XMLSERVICE continue to your breakpoint. That is it ...\n\nHint: PHP wrappers send XML just like below, so if you have toolkit.ini debug turned on you can often simply cut/paste your XML from the debug log into the variable $clobIn below.\n\n**Test #3 as example ...**\n\n::\n\n /MYASP2/www/zend2/htdocs/connection2.inc\n <?php\n $database = \"*LOCAL\"; // *LOCAL on IBM i ... 
LP0264D on Linux\n $cwdatabase = \"localhost\";\n $user = \"DB2\";\n $password = \"XXXXXXXX\";\n $libxmlservice = \"ZENDSVR\"; // ZZCALL (Zend Server)\n $i5persistentconnect = false;\n ?>\n\n /MYASP2/www/zend2/htdocs/xxtoolkit_raw.php\n <?php\n require_once('connection2.inc');\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) echo \"Bad connect: $conn,$database,$user,perm=$i5persistentconnect\";\n\n $stmt = db2_prepare($conn, \"call XMLSERVICE.iPLUG4K(?,?,?,?)\");\n $ctl = \"*sbmjob\"; // *here for no additional private job\n $ctl .= \" *debug\"; // THIS WILL STOP XMLSERVICE JOB MSG TO QSYSOPR (server side)\n // $ctl .= \" *debugproc\"; // THIS WILL STOP XMLSERVICE/QSQSRVR JOB MSG TO QSYSOPR (client side)\n // if running *here either *debug/*debugproc will work\n $ipc='/tmp/packers';\n // $ipc = \"\"; // *here no need ipc\n $clobIn = \"<?xml version='1.0'?>\n <pgm name='ZZCALL' lib='$libxmlservice'>\n <parm io='both'>\n <data type='1A'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A'>x</data>\n <data type='1A'>y</data>\n <data type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\";\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n // var_dump($clobOut);\n if (strpos($clobOut,\"4444444444.44\")>0) echo \"success\";\n else echo \"fail\";\n ?>\n\n**Hint:** If $clobIn PHP XML single quote / double quote issues are driving you nuts try the following technique.\n\nBTW -- 
For the enterprising do-it-yourself PHP builder you can see how very easy it would be to make a custom \ncall \\*PGM API with parameter substitution using str_replace() function similar to test_lib_replace(),\nsomething like ``function ZZCALL($INCHARA,$INCHARB,$INDEC1,$INDEC2,$INDS1)``\n\n::\n\n $xml = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\n </script>\n ENDPROC;\n $clobIn = test_lib_replace($xml);\n\n // xml common text replacement\n function test_lib_replace($xml) {\n global $libxmlservice, $iOPM;\n if (!$iOPM) {\n $was = array(\"xyzlibxmlservicexyz\");\n $now = array(\"$libxmlservice\");\n }\n else {\n $was = array(\"xyzlibxmlservicexyz\",\"<pgm\");\n $now = array(\"$libxmlservice\",\"<pgm mode='opm'\");\n }\n $out = str_replace($was,$now,$xml);\n return $out;\n }\n\n\nXMLSERVICE can be stopped ...\n-----------------------------\n\nDebug technique:\n^^^^^^^^^^^^^^^^\n\nIt's as easy as 1-2-3-4-5-6-7-8-9-10 :)\n\n1. Add the following line to your PHP script before the program call to be debugged. $toolkitConn should be your toolkit connection object.\n\n::\n\n ``$toolkitConn->setOptions(array('customControl'=>'*debug')).``\n\n Run your script.\n The script will \"hang\" while it waits on #2 below...\n (move to green screen 5250 for steps 2-10)\n\n2. 
A MSGW inquiry message in DSPMSG QSYSOPR will be generated by the toolkit.\n3. Note the job information (number, name, user) provided in the MSGW.\n4. STRSRVJOB using that job information as parameters.\n5. STRDBG with the program and library you wish to debug.\n6. Answer the MSGW. Any answer will do--\"G\" is fine.\n7. The RPG program source will appear in debug mode in your terminal, ready to step through, allowing you to inspect variables, etc.\n8. When done inspecting and stepping, let the RPG program complete (using function keys indicated on screen).\n9. ENDDBG\n10. ENDSRVJOB\n\n\n.. \n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEDebug?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.5972602963447571,
"alphanum_fraction": 0.6400684714317322,
"avg_line_length": 37.394737243652344,
"blob_id": "2ef149c76a3cec508d39dcb7e91f6bf17a42ffd2",
"content_id": "70851f96fbd3cc9e14a5986162c977ac723e4d4e",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2920,
"license_type": "permissive",
"max_line_length": 112,
"num_lines": 76,
"path": "/test/php/test_70010_PERF_loop256_toolkit_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit inside fetch\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n\n// tests criteria\n$i5fail = 2; /* 2 seconds or less */\n$i5loop = 256; /* 256(+) calls */\n$start_time = microtime();\n\n\n// IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt2 = db2_prepare($conn, \"select * from QIWS.QCUSTCDT\");\nif (!$stmt2) die(\"Bad prepare: \".db2_stmt_errormsg());\n\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($conn); } catch (Exception $e) { die($e->getMessage()); }\n// 'stateless'=>true,'v5r4'=>true,'customControl'=>\"*justproc\",'internalKey'=>$ipc,'plugSize'=>'4K'\n$options = array('stateless'=>true,'plugSize'=>'4K');\n$ToolkitServiceObj->setToolkitServiceParams($options);\n$count = 0;\nwhile ($count < $i5loop) {\n $res = db2_execute($stmt2);\n if (!$res) die(\"Bad execute: \".db2_stmt_errormsg());\n while ($res && $row = db2_fetch_array($stmt2)) {\n $param = array();\n $ds = array();\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARA', 'var1', $row[2]);\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARB', 'var2', $row[5]);\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'INDEC1', 'var3', $row[10]);\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'INDEC2', 'var4', $row[9]);\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARA', 'ds1', $row[2]);\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARB', 'ds2', $row[5]);\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'DSDEC1', 'ds3', $row[10]);\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'DSDEC1', 'ds4', $row[9]);\n $param[] = 
$ToolkitServiceObj->AddDataStruct($ds);\n $result = $ToolkitServiceObj->PgmCall('ZZCALL', $testLib, $param, null, null);\n $count++;\n // var_dump(implode(\":\",$row));\n if ($result[\"io_param\"][\"ds4\"] <> 4444444444.44) die (\"BAD call $count\\n\");\n // var_dump(implode(\":\",$result[\"io_param\"]));\n }\n}\n$end_time = microtime();\n$wire_time= control_microtime_used($start_time,$end_time)*1000000;\n\n$opt = str_replace(\"\\n\",\"\",var_export($options,true));\necho\n sprintf(\"$opt -- Time (loop=$count) total=%1.2f sec (%1.2f ms per call)\\n\",\n round($wire_time/1000000,2),\n round(($wire_time/$count)/1000,2));\n\nfunction control_microtime_used($i5before,$i5after) {\n return (substr($i5after,11)-substr($i5before,11))+(substr($i5after,0,9)-substr($i5before,0,9));\n}\n\n\n// result times\n$look = round($wire_time/1000000,2);\nif ($look>$i5fail) die(\"$count calls fail - too slow > $i5fail seconds\\n\");\n\necho \"Success\\n\";\n\n\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n"
},
{
"alpha_fraction": 0.616511344909668,
"alphanum_fraction": 0.6245006918907166,
"avg_line_length": 19.2702693939209,
"blob_id": "05d492523901ad6fb3753a5ce022b87529ec0572",
"content_id": "19cd7b9e4024aaa527223b33e30eee14a37f5c37",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 751,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 37,
"path": "/test/php/test_33463_cwtest_objlist.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest object list\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\nif ($doObjectList) {\n\n\techo h2('Object lists');\n\n\techo \"About to do object list with '$demoLib', '*ALL','*PGM'<BR>\";\n// object list\n$list = i5_objects_list($demoLib, '*ALL', '*PGM', $conn);\nif (!$list) {\n\tdie('Error getting object list: ' . printArray(i5_error()) . '<BR><BR>');\n\n} else {\n\n\twhile ($listItem = i5_objects_list_read($list)) {\n\t\t\techo printArray($listItem);\n\t}\n\techo 'End of list. Error information: ' . printArray(i5_error()) . '<BR><BR>';\n}\ni5_objects_list_close($list);\n\n} // doObjectList\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6097240447998047,
"alphanum_fraction": 0.6254927515983582,
"avg_line_length": 19.026315689086914,
"blob_id": "4b2ca90cb63000666584b446d292a96702907243",
"content_id": "233e227cadac27516a97e5750ed234a74819b5f7",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 761,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 38,
"path": "/docs/index.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": ".. XMLSERVICE documentation master file, created by\n sphinx-quickstart on Wed Dec 5 17:56:21 2018.\n You can adapt this file completely to your liking, but it should at least\n contain the root `toctree` directive.\n\n\nWelcome to XMLSERVICE's documentation!\n======================================\n\n.. toctree::\n :maxdepth: 1\n :caption: Contents:\n\n intro.rst\n functions.rst\n generic-interface.rst\n examples.rst\n source-layout.rst\n connections.rst\n ipc.rst\t\t\n idle-timeout.rst\n debugging.rst\n date-time-timestamp.rst\n library-list.rst\n qtemp.rst\n varchar.rst\n ccsids.rst\n faster.rst\n performance.rst\n errors.rst\n faq.rst\n\n.. \n Indices and tables\n ==================\n\n * :ref:`genindex`\n * :ref:`search`\n"
},
{
"alpha_fraction": 0.6117669343948364,
"alphanum_fraction": 0.6571107506752014,
"avg_line_length": 24.289098739624023,
"blob_id": "9bc4e33e37333d709d67550b1cb7db9c91e0b06f",
"content_id": "4378bc50c99c0fa861ed1c8cf7f83c839a77740d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 5337,
"license_type": "permissive",
"max_line_length": 133,
"num_lines": 211,
"path": "/test/php/db2sql.inc",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -----------------\n// schema\n// -----------------\n$db2_crt_schema = \"create schema $schematest\";\n\n// -----------------\n// table animal\n// -----------------\n$db2_drp_animal = <<<SQLDRPANIMAL\ndrop table animal\nSQLDRPANIMAL;\n\n$db2_crt_animal = <<<SQLCRTANIMAL\ncreate table animal(\n id integer, breed varchar(32), name char(16),\n weight decimal(7,2), height numeric(9,2))\nSQLCRTANIMAL;\n\n$db2_prep_animal = <<<SQLADDANIMAL\ninsert into\nanimal (id, breed, name, weight, height)\nvalues (?,?,?,?,?)\nSQLADDANIMAL;\n\n$animals = array(\n array(1, 'cat', 'Pook', 3.2, 9.56),\n array(2, 'dog', 'Peaches', 12.3, 22.65),\n array(3, 'horse', 'Smarty', 350.0, 400.23),\n array(4, 'gold fish', 'Bubbles', 0.1, 1.2),\n array(5, 'budgerigar', 'Gizmo', 0.2, 5.4),\n array(6, 'goat', 'Rickety Ride', 9.77, 18.9),\n array(7, 'llama', 'Sweater', 150, 300)\n );\n\n// -----------------\n// table animal1 (clob/blob)\n// -----------------\n$db2_drp_animal1 = <<<SQLDRPANIMAL1\ndrop table animal1\nSQLDRPANIMAL1;\n\n$db2_crt_animal1 = <<<SQLCRTANIMAL1\ncreate table animal1(\n id integer, essay clob(32100), picture blob(32100))\nSQLCRTANIMAL1;\n\n$db2_prep_animal1 = <<<SQLADDANIMAL1\ninsert into\nanimal1 (id, essay, picture)\nvalues (?,?,?)\nSQLADDANIMAL1;\n\n// ------------\n// stored procedure 1\n// ------------\n$db2_drp_sp1 = <<<SQLDRPSP1\nDROP PROCEDURE match1\nSQLDRPSP1;\n\n$db2_crt_sp1 = <<<SQLCRTSP1\nCREATE PROCEDURE match1(\nIN first_name VARCHAR(128), IN second_name VARCHAR(128),\nINOUT any_name VARCHAR(128), OUT animal_weight DOUBLE)\nDYNAMIC RESULT SETS 1\nLANGUAGE SQL\nBEGIN\nDECLARE match_name INT DEFAULT 0;\nDECLARE c1 CURSOR FOR SELECT COUNT(*) FROM animal WHERE name IN (second_name);\nDECLARE c2 CURSOR FOR SELECT SUM(weight) FROM animal WHERE name in (first_name, second_name);\nDECLARE c3 CURSOR WITH RETURN FOR SELECT name, breed, weight FROM animal WHERE name BETWEEN 
first_name AND second_name ORDER BY name;\nOPEN c1;\nFETCH c1 INTO match_name;\nIF (match_name > 0) THEN SET any_name = 'TRUE'; END IF;\nCLOSE c1;\nOPEN c2;\nFETCH c2 INTO animal_weight;\nCLOSE c2;\nOPEN c3;\nEND\nSQLCRTSP1;\n\n// ------------\n// stored procedure 2\n// ------------\n$db2_drp_sp2 = <<<SQLDRPSP2\nDROP PROCEDURE match2\nSQLDRPSP2;\n\n$db2_crt_sp2 = <<<SQLCRTSP2\nCREATE PROCEDURE match2(\nIN first_id INTEGER, IN second_id INTEGER,\nOUT text clob(32100), OUT pict blob(32100))\nDYNAMIC RESULT SETS 1\nLANGUAGE SQL\nBEGIN\nDECLARE match_id INT DEFAULT 0;\nDECLARE c1 CURSOR FOR SELECT essay FROM animal1 WHERE id IN (first_id);\nDECLARE c2 CURSOR FOR SELECT picture FROM animal1 WHERE id in (second_id);\nDECLARE c3 CURSOR WITH RETURN FOR SELECT id, essay, picture FROM animal1 WHERE id BETWEEN first_id AND second_id ORDER BY id;\nOPEN c1;\nFETCH c1 INTO text;\nCLOSE c1;\nOPEN c2;\nFETCH c2 INTO pict;\nCLOSE c2;\nOPEN c3;\nEND\nSQLCRTSP2;\n\n// -----------------\n// table animal2 (identity notes)\n// -----------------\n$db2_drp_animal2 = <<<SQLDRPANIMAL2\ndrop table animal2\nSQLDRPANIMAL2;\n\n$db2_crt_animal2 = <<<SQLCRTANIMAL2\ncreate table animal2(\n noteid integer GENERATED BY DEFAULT AS IDENTITY,\n animalid integer,\n notetxt clob(64K),\n PRIMARY KEY(noteid))\nSQLCRTANIMAL2;\n\n$db2_idrp_animal2 = <<<SQLIDXANIMAL2\nDROP INDEX index2\nSQLIDXANIMAL2;\n\n$db2_icrt_animal2 = <<<SQLIDXANIMAL2\nCREATE INDEX index2 ON animal2 (animalid)\nSQLIDXANIMAL2;\n\n$db2_fkdrp_animal2 = <<<SQLFKANIMAL2\nDROP TABLE fkey2\nSQLFKANIMAL2;\n\n$db2_fkcrt_animal2 = <<<SQLFKANIMAL2\nCREATE TABLE fkey2 (idf INTEGER NOT NULL, FOREIGN KEY(idf) REFERENCES animal2(noteid))\nSQLFKANIMAL2;\n\n\n// -----------------\n// table qtemp/animal\n// -----------------\n$db2_drp_animalq = <<<SQLDRPANIMAL\ndrop table qtemp/animalq\nSQLDRPANIMAL;\n\n$db2_crt_animalq = <<<SQLCRTANIMAL\nDECLARE GLOBAL TEMPORARY TABLE animalq(\n id integer, breed varchar(32), name char(16),\n weight decimal(7,2), height 
numeric(9,2))\n ON COMMIT PRESERVE ROWS\nSQLCRTANIMAL;\n\n$db2_prep_animalq = <<<SQLADDANIMAL\ninsert into\nanimalq (id, breed, name, weight, height)\nvalues (?,?,?,?,?)\nSQLADDANIMAL;\n\n// ------------\n// read png file into hex string\n// ------------\n$handle = fopen($pngstar, \"rb\");\n$hexpng = strtoupper( bin2hex( fread( $handle, filesize($pngstar) ) ) );\nfclose($handle);\n\n// ------------\n// read txt file\n// ------------\n$handle = fopen($txtessay, \"rb\");\n$essay = fread( $handle, filesize($txtessay) );\nfclose($handle);\n\n\n// ------------\n// actions order\n// ------------\n$actions = array(\n'schema-create'=>$db2_crt_schema,\n'cmd_libl'=>false,\n'table-animal-drop'=>$db2_drp_animal,\n'table-animal-create'=>$db2_crt_animal,\n'table-animal-insert'=>false,\n'table-animal1-drop'=>$db2_drp_animal1,\n'table-animal1-create'=>$db2_crt_animal1,\n'table-animal1-insert'=>false,\n'proc-sp1-drop'=>$db2_drp_sp1,\n'proc-sp1-create'=>$db2_crt_sp1,\n'proc-sp2-drop'=>$db2_drp_sp2,\n'proc-sp2-create'=>$db2_crt_sp2,\n'index-index2-drop'=>$db2_idrp_animal2,\n'table-fkey-drop'=>$db2_fkdrp_animal2,\n'table-animal2-drop'=>$db2_drp_animal2,\n'table-animal2-create'=>$db2_crt_animal2,\n'index-index2-create'=>$db2_icrt_animal2,\n'table-fkey-create'=>$db2_fkcrt_animal2,\n'table-animalq-drop'=>$db2_drp_animalq,\n'table-animalq-create'=>$db2_crt_animalq,\n'table-animalq-insert'=>false,\n'cmd_end'=>false,\n'kill-all-last'=>false\n);\n\n\n?>\n\n"
},
{
"alpha_fraction": 0.5495766401290894,
"alphanum_fraction": 0.5733405947685242,
"avg_line_length": 30.28205108642578,
"blob_id": "8c6bec1050900bc0ec4e6f6494485a4ab473efba",
"content_id": "0509b62a63dcb3b2b76016ccac57aab09ca0702e",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3661,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 117,
"path": "/test/php/test_80140_ZZHANG_ibm_db2_io_timeout.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - hang qsysopr\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n$filename = \"/QOpenSys/usr/bin/system\";\nif (file_exists($filename)===false) die(\"Fail IBM i only missing ($filename)\\n\");\n\n$job1 = \"XHANGME\";\n$job2 = \"XWATCHME\";\n\n// -------------------\n// call IBM i\n// -------------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n$ctl = \"*sbmjob(QSYS/QSRVJOB/$job1) *wait(5) *call(5/busy/client) *call(40/kill/server) *idle(10)\";\n\n// -------------------\n// kill current xmlservice\n// -------------------\necho driverTime().\" $job1 kill any XMLSERVICE on $ipc ... \\n\";\n$ctlkill = \"*immed\"; // kill XMLSERVICE NOW\n$clobIn = \"\";\n$sql = \"call $procLib.iPLUGR4K('$ipc','$ctlkill','$clobIn')\";\n$stmt=db2_exec($conn,$sql);\nif (!$stmt) die(\"Bad execute ($database,$user): \".db2_stmt_errormsg());\n$ret=db2_free_stmt($stmt);\nif (!$ret) die(\"Bad free stmt ($database,$user): \".db2_stmt_errormsg());\n\n\n// -------------------\n// *sbmjob(QSYS/QSRVJOB/XHANGME)\n// -------------------\necho driverTime().\" $job1 calling hang program $ctl\\n\";\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG32K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare (1): \".db2_stmt_errormsg());\n$clobIn = getxmlhangme();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) echo(\"Bad execute (1): \".db2_stmt_errormsg());\necho driverTime().\" $job1 timeout back from calling hang program\\n\";\nvar_dump($clobOut);\nif (strpos($clobOut,'busy')<1) die(\"busy 
error message missing\\n\");\n\n// -------------------\n// how doing?\n// -------------------\n$assume = \"Failure\\n\";\nfor ($i = 1; $i < 10; $i++) {\n echo driverTime().\" $job2 looking $job1 ...\\n\";\n $clobOut = `/QOpenSys/usr/bin/system wrkactjob`;\n // var_dump($clobOut);\n if (strpos($clobOut,$job1)>0) {\n echo driverTime().\" $job2 wrkactjob fail still see $job1 ...\\n\";\n }\n else {\n echo driverTime().\" $job2 wrkactjob no longer see $job1 ...\\n\";\n $assume = \"Success\\n\";\n break;\n }\n echo driverTime().\" $job2 sleeping 20 seconds ...\\n\";\n set_time_limit(0); // Remove the time limit for command-line usage;\n $counter = 0; // Some simple counter;\n while ($counter < 200) { // Do nothing for 100*100 miliseconds (10 seconds);\n $counter++;\n usleep(100000); // Sleep for 100 miliseconds;\n }\n} // end loop\n\n\n// -------------------\n// kill current xmlservice\n// -------------------\necho driverTime().\" $job1 kill any XMLSERVICE on $ipc ... \\n\";\n$ctlkill = \"*immed\"; // kill XMLSERVICE NOW\n$clobIn = \"\";\n$sql = \"call $procLib.iPLUGR4K('$ipc','$ctlkill','$clobIn')\";\n$stmt=db2_exec($conn,$sql);\nif (!$stmt) die(\"Bad execute ($database,$user): \".db2_stmt_errormsg());\n$ret=db2_free_stmt($stmt);\nif (!$ret) die(\"Bad free stmt ($database,$user): \".db2_stmt_errormsg());\n\n// test result\necho $assume;\n\n\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzhang: bad function hang up\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzhang B export\n// D zzhang PI\nfunction getxmlhangme() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZHANG'>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.4328719675540924,
"alphanum_fraction": 0.48892733454704285,
"avg_line_length": 38.57534408569336,
"blob_id": "e16a84293cc6f527638646306656ed05f8cf86fa",
"content_id": "d403bc56df71496a3fafb2ff0fc6d483759d44d3",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2890,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 73,
"path": "/test/php/test_20400_ZZCALL_toolkit_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit PGM with DS\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { die($e->getMessage()); }\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// D INCHARA S 1a\n// D INCHARB S 1a\n// D INDEC1 S 7p 4\n// D INDEC2 S 12p 2\n// D INDS1 DS\n// D DSCHARA 1a\n// D DSCHARB 1a\n// D DSDEC1 7p 4\n// D DSDEC2 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// C PARM INCHARA\n// C PARM INCHARB\n// C PARM INDEC1\n// C PARM INDEC2\n// C PARM INDS1\n$param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARA', 'var1', 'Y');\n$param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARB', 'var2', 'Z');\n$param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'INDEC1', 'var3', '001.0001');\n$param[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'INDEC2', 'var4', '0000000003.04');\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARA', 'ds1', 'A');\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARB', 'ds2', 'B');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'DSDEC1', 'ds3', '005.0007');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'DSDEC1', 'ds4', '0000000006.08');\n$param[] = $ToolkitServiceObj->AddDataStruct($ds);\n$result = $ToolkitServiceObj->PgmCall('ZZCALL', $testLib, $param, null, null);\necho 
\"good so far ...\\n\";\nvar_dump($result);\n?>\n--EXPECTF--\n%s\narray(2) {\n [\"io_param\"]=>\n array(8) {\n [\"var1\"]=>\n string(1) \"C\"\n [\"var2\"]=>\n string(1) \"D\"\n [\"var3\"]=>\n string(8) \"321.1234\"\n [\"var4\"]=>\n string(13) \"1234567890.12\"\n [\"ds1\"]=>\n string(1) \"E\"\n [\"ds2\"]=>\n string(1) \"F\"\n [\"ds3\"]=>\n string(8) \"333.3330\"\n [\"ds4\"]=>\n string(13) \"4444444444.44\"\n }\n [\"retvals\"]=>\n array(0) {\n }\n}\n\n"
},
{
"alpha_fraction": 0.4060588479042053,
"alphanum_fraction": 0.45936498045921326,
"avg_line_length": 24.80451202392578,
"blob_id": "a31dc613bba8fb54ebeb75109e677f11a93f5734",
"content_id": "c1a571b121821a8d17812d168966ecede601ec34",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3433,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 133,
"path": "/test/php/test_30450_ZZCALL_cw_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - CW Toolkit ZZCALL\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nrequire_once('CW/cw.php'); // new toolkit compatibility (Alan)\n\n/* connect */\n$conn = i5_connect(\"localhost\", $user, $password);\nif (!$conn)\n{ $tab = i5_error();\n die(\"Connect: \".$tab[2].\" \".\"$tab[3], $tab[0]\");\n}\n\n// D INCHARA S 1a\n// D INCHARB S 1a\n// D INDEC1 S 7p 4\n// D INDEC2 S 12p 2\n// D INDS1 DS\n// D DSCHARA 1a\n// D DSCHARB 1a\n// D DSDEC1 7p 4\n// D DSDEC2 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// C PARM INCHARA\n// C PARM INCHARB\n// C PARM INDEC1\n// C PARM INDEC2\n// C PARM INDS1\n\n\n/* prepare */\n$description =\narray\n(\n // single parms\n array\n ( \"Name\"=>\"INCHARA\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"INCHARB\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"INDEC1\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"7.4\"),\n array\n ( \"Name\"=>\"INDEC2\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"12.2\"),\n // structure parm\n array\n ( \"DSName\"=>\"INDS1\",\n \"Count\"=>1,\n \"DSParm\"=>\n array\n (\n array\n ( \"Name\"=>\"DSCHARA\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"DSCHARB\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_CHAR,\"Length\"=>\"1\"),\n array\n ( \"Name\"=>\"DSDEC1\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"7.4\"),\n array\n ( \"Name\"=>\"DSDEC2\",\"IO\"=>I5_IN|I5_OUT,\"Type\"=>I5_TYPE_PACKED,\"Length\"=>\"12.2\"),\n )\n )\n);\n$pgm = i5_program_prepare(\"$testLib/ZZCALL\", $description);\nif (!$pgm)\n{ $tab = i5_error();\n die(\"Prepare: \".$tab[2].\" \".\"$tab[3], 
$tab[0]\");\n}\n\n// *** parameter list allocation\n$list=\narray\n(\n \"DSCHARA\"=>\"x\",\n \"DSCHARB\"=>\"y\",\n \"DSDEC1\"=>66.6666,\n \"DSDEC2\"=>77777.77,\n);\n// *** parameter values passed to procedure\n$in =\narray\n(\n \"INCHARA\"=>\"a\",\n \"INCHARB\"=>\"b\",\n \"INDEC1\"=>11.1111,\n \"INDEC2\"=>222.22,\n \"INDS1\"=>$list,\n);\n// *** name of variables created for out parameters\n$out =\narray\n(\n \"INCHARA\"=>\"INCHARA\",\n \"INCHARB\"=>\"INCHARB\",\n \"INDEC1\"=>\"INDEC1\",\n \"INDEC2\"=>\"INDEC2\",\n \"INDS1\"=>\"INDS1\",\n);\n$rc=i5_program_call($pgm, $in, $out);\nif ($rc != false)\n{\n if ($INCHARA != 'C') die(\"bad C == $INCHARA\\n\");\n if ($INCHARB != 'D') die(\"bad D == $INCHARB\\n\");\n if ($INDEC1 != 321.1234) die(\"bad 321.1234 == $INDEC1\\n\");\n if ($INDEC2 != 1234567890.12) die(\"bad 1234567890.12 = $INDEC2\\n\");\n if ($INDS1[\"DSCHARA\"] != 'E'\n || $INDS1[\"DSCHARB\"] != 'F'\n || $INDS1[\"DSDEC1\"] != 333.333\n || $INDS1[\"DSDEC2\"] != 4444444444.44)\n {\n var_dump($INDS1);\n die(\"bad DS not correct\\n\");\n }\n\n}\nelse\n{ $tab = i5_error();\n die(\"Call: \".$tab[2].\" \".\"$tab[3], $tab[0]\");\n}\n\n\n// good\necho \"I have ... \\n\";\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5727061033248901,
"alphanum_fraction": 0.584711492061615,
"avg_line_length": 45.369319915771484,
"blob_id": "f2188b14fd5b42adc85bc493ec2aa9af3e31f9f1",
"content_id": "cc4d45bc781836f95b30a7c48e6578858d83219b",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 8163,
"license_type": "permissive",
"max_line_length": 338,
"num_lines": 176,
"path": "/docs/idle-timeout.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\n\nXMLSERVICE/Toolkit idle time out\n================================\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nWho is this page for?\n---------------------\n\nInstructions designed for IBM i developer learning PHP and XMLSERVICE ...\n\n\nHow to make your toolkit jobs time out\n--------------------------------------\n\nXMLSERVICE from 1.6.2 onward contains flexible timeout features, including the ability to have a program (RPG, COBOL, CL, whatever) time out if it runs too long. This is useful to end jobs that get hung with a MSGWAIT condition.\n\nThe PHP toolkit wrappers currently enable only one of these timeout features: the \"idle timeout.\" When using a private connection job, also known as jobs that have an IPC or InternalKey, the job can be made to time out when a certain number of seconds of inactivity have elapsed.\n\n**Cleanliness vs. performance**\n\nCausing jobs to time out quickly will give you a nice, empty, clean-looking ZENDSVR subsystem, \nbut will drag down performance the next time the job is started up. Try to find a balance between \ncleanliness and performance. If you plan to use the same jobs over and over, you may wish to NOT \ntime out the jobs, either never ending them or ending them \\*IMMED at night, or some other scheme.\n\n**How to choose a timeout value**\nIf you have many transient users who will briefly use the site and then not return, you may want a quick timeout (30 seconds?). 
For users who will return over and over again, you may want a long or nonexistent timeout.\n\n* Idle timeout with the new toolkit API\n\nThe \"idle timeout\" can be set or changed any time in this way::\n\n // let's assume the toolkit connection has been established using getInstance() and is present in the variable $conn:\n $idleTimeoutSeconds = 1800; // time out job after 1800 seconds (30 minutes) of inactivity\n // a value of 0 means no timeout (infinite wait)\n $conn->setOptions(array('idleTimeout' => $idleTimeoutSeconds));\n\n* Idle timeout with the Compatibility Wrapper (CW)\n\nJust as with the old toolkit, when you connect with i5_pconnect(), you can set the timeout interval and the number of seconds to wait. As with the old toolkit, idle timeouts only work with private connections. Use the constant I5_OPTIONS_IDLE_TIMEOUT to provide the number of seconds. Use zero (0) seconds to never time out (the default).\n::\n\n $privateNum = 0; // private conn number of zero causes the CW to generate a number for next time.\n $idleTimeoutSeconds = 1800; // time out job after 1800 seconds (30 minutes) of inactivity\n // a value of 0 means no timeout (infinite wait)\n $options = array(I5_OPTIONS_PRIVATE_CONNECTION => $privateNum,\n I5_OPTIONS_IDLE_TIMEOUT => $idleTimeoutSeconds\n );\n\n // connect (note: the \"p\" for persistent is required for CW private connections)\n // and specify a private connection and timeout\n $conn = i5_pconnect('localhost', 'user', 'pw', $options);\n\nNote: The CW also supports the new API's \"setToolkitServiceParams\" technique described above, because the CW uses the new toolkit underneath.\n\n\nFull support RAW XMLSERVICE Samples\n-----------------------------------\n\nPrograms with qsysopr message idle timeout may help.\n::\n\n <?php\n // see connection.inc param details ...\n require_once('connection.inc');\n // call IBM i\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = 
db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG32K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ctl .= \" *idle(10)\";\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n // good\n if (strpos($clobOut,\"Pointer not set\")<=0) echo \"Failure\\n\";\n echo \"Success\\n\";\n\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzboom: bad function blow up\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzboom B export\n // D zzboom PI\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZBOOM'>\n </pgm>\n </script>\n ENDPROC;\n return test_lib_replace($clob);\n }\n ?>\n\n\nTimeout is a setting control setting with various properties allowed.\n::\n\n *----------------------------------------------------\n * -- server time out jobs XMLSERVICE (1.6.2)\n * *wait[(seconds[/action])]\n * - client side wait for XMLSERVICE call (client side)\n * example: $ctl=\"*wait(10)\";\n * - default action *wait(60/busy) (see plugconfx)\n * *call[(seconds[/action[/duration[/job]]])]\n * - client/server side XMLSERVICE call wait (PGM, SRVPGM, PASE, etc)\n * example: $ctl=\"*wait(10) *call(5/busy/client)\";\n * - default for both client/server is *call(0)\n * means wait on 
call forever (user code flawless),\n * but can be overriden client/server/both\n * *idle[(seconds[/action[/duration]])]\n * - server side XMLSERVICE idle no activity\n * example: $ctl=\"*wait(10/kill) *call(30/kill) *idle(30/perm)\";\n * - default action *idle(1800/kill) (see plugconfx)\n * -- time out parameters\n * seconds:\n * -1 - current default timer\n * 0 - no timer, no timeout, wait forever\n * n - idle timer \"pop\" seconds\n * action:\n * kill - end job immed\n * user - user override signal behaviour (see plugconfx)\n * busy - return busy XML (client side)\n * busy response (1301050):\n * <error>\n * <errnoxml>1301050</errnoxml>\n * <xmlerrmsg>IPC timeout busy</xmlerrmsg>\n * </error>\n * duration:\n * perm - set and use new defaults all requests\n * orig - reset and use original compile defaults (see plugconfx)\n * job:\n * client - *call action applies client side\n * server - *call action applies server side\n * -- Notes:\n * - default timeout/action provided plugconf.rpgle,\n * but each request may override/reset to fit task(s)\n * - signal SIGALRM used with this function\n * can affect user program calls,\n * *call(0) may be used to turn off timer\n * during user program calls\n * - action 'user' allows for custom signal\n * processing in the RPG code (see plugconfx)\n * - if duration not specified, attributes\n * *wait(),*call(),*idle() are temporary\n * for this call only and return to last defaults.\n * - if 'job' not specified on *call(),\n * attribute settings apply to both sides\n * - end job immed kills XMLSERVICE job (server)\n * and destroys IPC, so any waiting client is\n * released with an IPC missing error.\n *----------------------------------------------------\n\n\n\n\n\n.. \n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICETimeOut?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.5852580070495605,
"alphanum_fraction": 0.6208845376968384,
"avg_line_length": 33.193275451660156,
"blob_id": "cfbab5308ed24e47541ca486e1c7cf1c1d779951",
"content_id": "48a9ff3ec215c2939c08a2e7f79563a0fcc2fffd",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4070,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 119,
"path": "/test/php/test_12602_MISC_ibm_db2_io_QWCRSSTS.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout sys api - QWCRSSTS\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n// -----------------\n// output pgm call\n// -----------------\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Fail XML pgm missing\\n\");\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Missing XML pgm parms ($name)\");\n// parm data structure Format SSTS0100\n$ds = $parm[0]->ds;\n$count = 0;\nforeach($ds->data as $data) {\n echo $data->attributes()->comment.\": \".(string)$data.\"\\n\";\n $count++;\n}\nif ($count <> 18) die(\"Fail XML Format SSTS0100\");\n// parm data structure Format ERRC0100\n$ds = $parm[1]->ds;\n$count = 0;\nforeach($ds->data as $data) {\n echo $data->attributes()->comment.\": \".(string)$data.\"\\n\";\n $count++;\n}\nif ($count <> 5) die(\"Fail XML Format ERRC0100\");\n\n// good\necho \"Success ($lib/$name)\\n\";\n\n// QWCRSSTS\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0' encoding='ISO-8859-1'?>\n<script>\n<pgm name='QWCRSSTS' lib='QSYS'>\n <parm io='both'>\n <ds>\n <data type='10i0' var='BYTES' comment='BYTES'>0</data>\n <data type='10i0' var='RET' comment='RET'>1</data>\n <data type='8B' var='DATE_TIME' comment='DATE_TIME'>x</data>\n <data type='8A' var='SYSTEM' comment='SYSTEM'>x</data>\n <data type='10i0' var='USERS' comment='USERS'>0</data>\n <data type='10i0' var='DIS_USERS' comment='DIS_USERS'>0</data>\n <data type='10i0' var='SUSP_JOBS' comment='SUSP_JOBS'>0</data>\n <data type='10i0' var='JB_GP_SUSP' comment='JB_GP_SUSP'>0</data>\n <data type='10i0' var='USER_SIGNED_PRINT_WAIT' comment='USER_SIGNED_PRINT_WAIT'>0</data>\n <data type='10i0' var='MSG_WAIT' comment='MSG_WAIT'>0</data>\n <data type='10i0' var='BATCH_JOBS' comment='BATCH_JOBS'>0</data>\n <data type='10i0' var='HELD_BATCH' comment='HELD_BATCH'>0</data>\n <data type='10i0' var='END_BATCH' comment='END_BATCH'>0</data>\n <data 
type='10i0' var='WAITING_BATCH' comment='WAITING_BATCH'>0</data>\n <data type='10i0' var='BATCH_HELD_ON_QUE' comment='BATCH_HELD_ON_QUE'>0</data>\n <data type='10i0' var='BATCH_ON_HELD_QUE' comment='BATCH_ON_HELD_QUE'>0</data>\n <data type='10i0' var='UNASGN_BATCH' comment='UNASGN_BATCH'>0</data>\n <data type='10i0' var='BATCH_WAIT_PRINT' comment='BATCH_WAIT_PRINT'>0</data>\n </ds>\n </parm>\n <parm comment='LEN' io='in'>\n <data type='10i0' var='LEN' >148</data>\n </parm>\n <parm comment='FORMAT' io='in'>\n <data type='8A' var='FORMAT' >SSTS0100</data>\n </parm>\n <parm comment='RESET' io='in'>\n <data type='10A' var='RESET' >*YES</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='10i0' var='provided' comment='provided'>0</data>\n <data type='10i0' var='available' comment='available'>0</data>\n <data type='7A' var='Exception' comment='Exception'>x</data>\n <data type='1A' var='reserved' comment='reserved'>x</data>\n <data type='10A' var='data' comment='data'>x</data>\n </ds>\n </parm>\n</pgm>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.6148832440376282,
"alphanum_fraction": 0.6338946223258972,
"avg_line_length": 31.280702590942383,
"blob_id": "e1a6ff277dbd49d8ccbe621a7fcf54416dba0843",
"content_id": "d79b6b5cdaf6d4ff6d4033afb9a60cca74005770",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1841,
"license_type": "permissive",
"max_line_length": 110,
"num_lines": 57,
"path": "/test/php/test_80586_db2_io_sh_error_default.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 check sh error default\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -------------\n// call IBM i\n// -------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing (XML input info)\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML input\");\n\n// check if good\nif (strpos($clobOut,'<report>')>0) die(\"Error contains <report>, but should not sh default\\n\");\nif (strpos($clobOut,'<error>')>0) die(\"Error contains <error>, but should not sh default\\n\");\nif (strpos($clobOut,'<joblog')>0) die(\"Error contains <joblog>, but should not sh default\\n\");\nif (strpos($clobOut,\"<error>CPF\")>0) die(\"Error contains <error>CPFxxx</error>, but should not sh default\\n\");\nif (strpos($clobOut,\"hdhdgd not found\")<1) die(\"Missing hdhdgd not found\\n\");\n\n// good\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sh><![CDATA[/QOpenSys/usr/bin/ls /hdhdgd 2>&1]]></sh>\n<sh><![CDATA[/QOpenSys/usr/bin/ls /hdhdgd 2>&1]]></sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.4339732527732849,
"alphanum_fraction": 0.47265851497650146,
"avg_line_length": 45.445945739746094,
"blob_id": "f5360e9676d0437fc56f71f1692883825dc79e80",
"content_id": "5ca1c7498584c66227d9b3420adad94a4c92b647",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3438,
"license_type": "permissive",
"max_line_length": 105,
"num_lines": 74,
"path": "/test/php/test_20405_ZZERICH_toolkit_erich.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit erich occurs data\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\nob_start();\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { echo $e->getMessage(), \"\\n\"; exit();}\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG5M\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// D $vevsfi s 1\n// D $vevsrj s 2\n// D $vevsob s 7s 0\n// D $vevsve s 5s 0\n// D*Ergebnisdaten:\n// D $vevsods ds occurs(200)\n// D $vsukz 1 1\n// D $vpos 2 9\n// D $vtxt 10 39\n// D $vkalw 40 174 2 dim(15)\n// D $vvsw 175 309 2 dim(15)\n// D $vvsk 310 324 0 dim(15)\n// d*\n// D i S 10i 0 inz(0)\n// D j S 10i 0 inz(0)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// c parm $vevsfi\n// c parm $vevsrj\n// c parm $vevsob\n// c parm $vevsve\n// c parm $vevsods\n$param[] = $ToolkitServiceObj->AddParameterChar ('both',1, 'vevsfi', 'var1', 'a');\n$param[] = $ToolkitServiceObj->AddParameterChar ('both',2, 'vevsrj', 'var2', 'bb');\n$param[] = $ToolkitServiceObj->AddParameterZoned ('both',7,0,'vevsob', 'var3', '1.0');\n$param[] = $ToolkitServiceObj->AddParameterZoned ('both',5,0,'vevsve', 'var4', '1.0');\nfor ($h=0;$h<200;$h++) {\n $vevsods[] = $ToolkitServiceObj->AddParameterChar ('both', 1, \"vsukz{$h}\", \"ds1{$h}\", 'A');\n $vevsods[] = $ToolkitServiceObj->AddParameterChar ('both', 8, \"vpos{$h}\", \"ds2{$h}\", 'B');\n $vevsods[] = $ToolkitServiceObj->AddParameterChar ('both', 30,\"vtxt{$h}\", \"ds3{$h}\", 'C');\n 
for ($i=0;$i<15;$i++)\n $vevsods[] = $ToolkitServiceObj->AddParameterZoned('both',9,2,\"vkalw{$h}{$i}\",\"ds4{$h}{$i}\",'9.2' );\n for ($i=0;$i<15;$i++)\n $vevsods[] = $ToolkitServiceObj->AddParameterZoned('both',9,2,\"vvsw{$h}{$i}\", \"ds5{$h}{$i}\",'9.2' );\n for ($i=0;$i<15;$i++)\n $vevsods[] = $ToolkitServiceObj->AddParameterZoned('both',1,0,\"vvsk{$h}{$i}\", \"ds6{$h}{$i}\",'1.0' );\n}\n$param[] = $ToolkitServiceObj->AddDataStruct($vevsods);\n$result = $ToolkitServiceObj->PgmCall('ZZERICH', $testLib, $param, null, null);\nif (!$result) echo $ToolkitServiceObj->getLastError();\nelse var_dump($result);\n$clobOut = ob_get_contents();\nob_end_clean();\necho substr($clobOut,strlen($clobOut)-140,140);\n?>\n--EXPECTF--\n%s\n string(1) \"2\"\n [\"ds619914\"]=>\n string(1) \"2\"\n }\n [\"retvals\"]=>\n array(0) {\n }\n%s\n\n"
},
{
"alpha_fraction": 0.5649392008781433,
"alphanum_fraction": 0.575175940990448,
"avg_line_length": 20.397260665893555,
"blob_id": "d6356a66382756fce3dee6f25bf30fd04571bfe0",
"content_id": "88573b3a25fe2b81cee97abb151a5127c98f4c10",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1563,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 73,
"path": "/test/php/test_10507_MISC_rest_post_sh_ls.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: REST POST SH - ls /usr/local/zendsvr/bin\n--SKIPIF--\n<?php require_once('skipifrest.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// http POST parms\n$clobIn = getxml();\n$clobOut = \"\";\n$postdata = http_build_query(\n array(\n 'db2' => \"*LOCAL\",\n 'uid' => $user,\n 'pwd' => $password,\n 'ipc' => $ipc,\n 'ctl' => $ctl,\n 'xmlin' => $clobIn,\n 'xmlout' => 1000000 // size expected XML output\n )\n);\n$opts = array('http' =>\n array(\n 'method' => 'POST',\n 'header' => 'Content-type: application/x-www-form-urlencoded',\n 'content' => $postdata\n )\n);\n$context = stream_context_create($opts);\n// execute\n$linkall = $i5resturl;\n$result = file_get_contents($linkall, false, $context);\n// result\nif ($result) {\n $getOut = simplexml_load_string($result);\n $clobOut = $getOut->asXML();\n}\nelse $clobOut = \"\";\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n// -----------------\n// output sh call\n// -----------------\n$sh = $xmlobj->xpath('/script/sh');\nif (!$sh) die(\"Missing XML sh info\");\n\n// good\necho \"Success (PASE sh)\\n\";\n\n// 5250:\n// call qp2term\n// /QOpenSys/usr/bin/ls /tmp\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sh>/QOpenSys/usr/bin/ls /tmp</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.623001754283905,
"alphanum_fraction": 0.6372113823890686,
"avg_line_length": 26.45121955871582,
"blob_id": "e4da570b73208005747b58e4938d077059b0ffea",
"content_id": "ce8c43a0901165a1fc66cf9d2161d5a82591c4ea",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2264,
"license_type": "permissive",
"max_line_length": 124,
"num_lines": 82,
"path": "/test/php/test_30452_DTAARA_cw.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - CW Toolkit REXX RTVMSG command\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nrequire_once('CW/cw.php'); // new toolkit compatibility (Alan)\n\necho \"Test of BETTYBOOP\";\n\n/* connect */\n// this test seems to fail when running localhost\n// due to switch CCSID, but works private connect\n// $conn = i5_connect(\"localhost\", $user, $password);\n$conId = '42';\n$conn = i5_pconnect(\"\", $user, $password, array(I5_OPTIONS_PRIVATE_CONNECTION => $conId));\nif (!$conn)\n{ $tab = i5_error();\n die(\"Connect: \".$tab[2].\" \".\"$tab[3], $tab[0]\");\n}\n\n\n$cmdString = \"CHGJOB ccsid(273)\";\n$start = microtime(true);\n$success = i5_command($cmdString, array(), array(), $conn);\n$end = microtime(true);\n$elapsed = $end - $start;\necho \"Ran command $cmdString using a single string in $elapsed seconds. \\n\";\nif ($success) echo \"Successful\\n\";\nelse echo \"Failed with this error: \" . print_r(i5_error(), true) . \"\\n\";\n\n\n$dtaara = \"$testLib/BETTYBOOP\";\n\n$ret = i5_data_area_delete($dtaara);\nif ($ret) {\n\techo \"Deleted data area $dtaara successfully.\\n\";\n} else {\n\techo \"Could not delete data area $dtaara.\\n\";\n}\n\n\n$ret = i5_data_area_create($dtaara, 100);\nif ($ret) {\n\techo \"Created data area $dtaara successfully.\\n\";\n} else {\n\techo \"Could not create data area $dtaara. Reason: \" . i5_errormsg() . 
\" (it may already exist)\\n\";\n}\n\n// Ä Ö Ü ä ö ü\n$dtaara = \"$testLib/BETTYBOOP\";\n$stringToWrite = 'Ä Ö Ü ä ö ü';\n$ret = i5_data_area_write($dtaara, $stringToWrite, 5, 20);\nif ($ret) {\n\techo \"Wrote '$stringToWrite' \".bin2hex($stringToWrite).\" to data area $dtaara successfully.\\n\";\n\t// try to read now.\n\t$start = microtime(true);\n\t$readData = i5_data_area_read($dtaara, 3, 40);\n\t$end = microtime(true);\n\t$elapsed = $end - $start;\n\n if ($readData) {\n \techo \"Read a portion of '$readData' \".bin2hex($readData).\" from data area $dtaara successfully in $elapsed seconds.\\n\";\n } else {\n \techo \"Could not read from data area $dtaara. Reason: \" . i5_errormsg() . \"\\n\";\n die();\n }\n} else {\n\techo \"Could not write to data area $dtaara. Reason: \" . i5_errormsg() . \"\\n\";\n die();\n}\n\n\n// good\necho \"I have ... \\n\";\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5665768384933472,
"alphanum_fraction": 0.5762802958488464,
"avg_line_length": 21.33734893798828,
"blob_id": "bab8324a30f857742e41f3d62f217dbb953ee54c",
"content_id": "67321c04ddf65b934b83425cd9188e45bbfd4299",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1855,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 83,
"path": "/test/php/test_10508_MISC_rest_post_sh_wrkactjob.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: REST POST SH - system 'wrkactjob'\n--SKIPIF--\n<?php require_once('skipifrest.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// http POST parms\n$clobIn = getxml();\n$clobOut = \"\";\n$postdata = http_build_query(\n array(\n 'db2' => \"*LOCAL\",\n 'uid' => $user,\n 'pwd' => $password,\n 'ipc' => $ipc,\n 'ctl' => $ctl,\n 'xmlin' => $clobIn,\n 'xmlout' => 5000000 // size expected XML output\n )\n);\n$opts = array('http' =>\n array(\n 'method' => 'POST',\n 'header' => 'Content-type: application/x-www-form-urlencoded',\n 'content' => $postdata\n )\n);\n$context = stream_context_create($opts);\n// execute\n$linkall = $i5resturl;\n$result = file_get_contents($linkall, false, $context);\n// result\nif ($result) {\n $getOut = simplexml_load_string($result);\n $clobOut = $getOut->asXML();\n}\nelse $clobOut = \"\";\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n// -----------------\n// output sh call\n// -----------------\n$sh = $xmlobj->xpath('/script/sh');\nif (!$sh) die(\"Missing XML sh info\");\n$expect = 'E N D O F L I S T I N G'; // should be in the list\n$missing = true;\nforeach ($sh[0]->row as $row) {\n $data = (string)$row;\n if (strpos($data,$expect)>0) {\n $missing = false;\n break;\n }\n}\nif ($missing) die(\"XML sh data missing ($expect)\");\n\n// good\necho \"Success (PASE sh)\\n\";\n\n// 5250:\n// call qp2term\n// /QOpenSys/usr/bin/system -i 'wrkactjob'\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sh rows='on'>/QOpenSys/usr/bin/system -i 'wrkactjob'</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.4962852895259857,
"alphanum_fraction": 0.5306257009506226,
"avg_line_length": 34.62352752685547,
"blob_id": "901f767fb97e27f8f5887264977dab28092f39ec",
"content_id": "209fc96b6c797bf8a0b04ccfd37e72a2516e3373",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 6057,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 170,
"path": "/test/php/test_10172_V6_ZZARRAY3_ibm_db2_io_array.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - DS records parm\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n// -----------------\n// output pgm call\n// -----------------\n$myName1 = 'Ranger'; // expected name\n$myMax1 = 5; // expected max\n$myCount1= 5; // expected count\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Fail XML pgm missing\\n\");\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$func = $pgm->attributes()->func;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n$myName = (string)$parm[0]->data;\n$myMax = (string)$parm[1]->data;\n\n// good\necho \"Success ($lib/$name.$func)\\n\";\n\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n//\n// D dcRec3_t ds qualified based(Template)\n// D dcMyName3 10A\n// D dcRec3 likeds(dcRec_t) dim(ARRAYMAX)\n//\n// D dcRec2_t ds qualified based(Template)\n// D dcMyName2 10A\n// D dcRec2 likeds(dcRec3_t)\n//\n// D dcRec1_t ds qualified based(Template)\n// D dcMyName1 10A\n// D dcRec1 likeds(dcRec2_t)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarray2: check parameter array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// D zzarray3 PR\n// D myName 10A\n// D myMax 3s 0\n// D myCount 3s 0\n// D findMe1 likeds(dcRec1_t)\n// D findMe2 likeds(dcRec2_t)\n// D findMe3 likeds(dcRec3_t)\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV6' lib='xyzlibxmlservicexyz' func='ZZARRAY3'>\n <parm comment='search this name'>\n <data var='myName' type='10A'>Ranger</data>\n </parm>\n <parm comment='max allowed return'>\n <data var='myMax' type='3s0'>3</data>\n </parm>\n <parm comment='wskdist'><data var='wskdist' type='3a'>012</data></parm>\n <parm comment='wskyear'><data var='wskyear' 
type='4a'>2011</data></parm>\n <parm comment='wskytyp'><data var='wskytyp' type='1a'>R</data></parm>\n <parm comment='wskschl'><data var='wskschl' type='4a'>0011</data></parm>\n <parm comment='wskusr'><data var='wskusr' type='20a'>BISCA080</data></parm>\n <parm comment='wsksdate'><data var='wsksdate' type='8a'>20110104</data></parm>\n <parm comment='wskscrse'><data var='wskscrse' type='8a'> </data></parm>\n <parm comment='wskssect'><data var='wskssect' type='4a'> </data></parm>\n <parm comment='wsksmod'><data var='wsksmod' type='2a'>01</data></parm>\n <parm comment='wsoptai'><data var='wsoptai' type='1a'/></parm>\n <parm comment='wsopatdt'><data var='wsopatdt' type='1a'/></parm>\n <parm comment='wsoplslot'><data var='wsoplslot' type='2a'/></parm>\n <parm comment='wsopstdcnt'><data var='wsopstdcnt' type='3s0' enddo='wsopstdcnt'/></parm>\n <parm comment='wsoperrtbl'><data var='wsoperrtbl' type='48000a'/></parm>\n <parm comment='findMe1'>\n <ds var='findMe1'>\n <ds var='dcRec1_t' array='on'>\n <ds var='dcRec1_t'>\n <data var='dcMyName1' type='10A'/>\n <ds var='dcRec2_t'>\n <data var='dcMyName2' type='10A'/>\n <ds var='dcRec3_t'>\n <data var='dcMyName3' type='10A'/>\n <ds var='dcRec_t' dim='999' dou='wsopstdcnt'>\n <data var='dcMyName' type='10A'/>\n <data var='dcMyJob' type='4096A'/>\n <data var='dcMyRank' type='10i0'/>\n <data var='dcMyPay' type='12p2'/>\n </ds>\n </ds>\n </ds>\n </ds>\n </ds>\n </ds>\n </parm>\n <parm comment='findMe2'>\n <ds var='findMe2'>\n <ds var='dcRec2_t' array='on'>\n <ds var='dcRec2_t'>\n <data var='dcMyName2' type='10A'/>\n <ds var='dcRec3_t'>\n <data var='dcMyName3' type='10A'/>\n <ds var='dcRec_t' dim='999' dou='wsopstdcnt'>\n <data var='dcMyName' type='10A'/>\n <data var='dcMyJob' type='4096A'/>\n <data var='dcMyRank' type='10i0'/>\n <data var='dcMyPay' type='12p2'/>\n </ds>\n </ds>\n </ds>\n </ds>\n </ds>\n </parm>\n <parm comment='findMe3'>\n <ds var='findMe3'>\n <ds var='dcRec3_t' array='on'>\n <ds var='dcRec3_t'>\n <data 
var='dcMyName3' type='10A'/>\n <ds var='dcRec_t' dim='999' dou='wsopstdcnt'>\n <data var='dcMyName' type='10A'/>\n <data var='dcMyJob' type='4096A'/>\n <data var='dcMyRank' type='10i0'/>\n <data var='dcMyPay' type='12p2'/>\n </ds>\n </ds>\n </ds>\n </ds>\n </parm>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.40558046102523804,
"alphanum_fraction": 0.5211758613586426,
"avg_line_length": 27.062936782836914,
"blob_id": "576c05302052ef428d3ff3d4ab6e9cd8abb456dc",
"content_id": "83ee06ddb27309ef0f62d6f2b400d4a754012802",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4014,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 143,
"path": "/test/php/test_10150_ZZBIGI_db2_io_limit.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout ZZBIGI - call integer limits\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$func = $pgm->attributes()->func;\n$parms = $xmlobj->xpath('/script/pgm/parm');\n$int = array(\n\"-128\",\n\"-32768\",\n\"-2147483648\",\n\"-9223372036854775808\",\n\"127\",\n\"32767\",\n\"2147483647\",\n\"9223372036854775807\",\n\"255\",\n\"65535\",\n\"4294967295\",\n\"18446744073709551615\"\n);\n$i=0;\nforeach($parms as $parm) {\n $mytype = (string)($parm->data->attributes()->type);\n $myval = (string)($parm->data);\n echo \"$mytype {$int[$i]} == $myval\\n\";\n if ($int[$i] != $myval) die(\"Bad value\");\n $i++;\n}\n\n// good\necho \"Success ($lib/$name($func))\\n\";\n\n/*\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n * zzbigi: check ints\n * int8 -128 +127\n * int16 -32768 +32767\n * int32 -2147483648 +2147483647\n * int64 -9223372036854775808 +9223372036854775807\n * uint8 +255\n * uint16 +65535\n * uint32 +4294967295\n * uint64 +18446744073709551615\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n P zzbigi B export\n D zzbigi PI 20u 0\n D mmint8 3i 0\n D mmint16 5i 0\n D mmint32 10i 0\n D mmint64 20i 0\n D myint8 3i 0\n D myint16 5i 0\n D myint32 10i 0\n D myint64 20i 0\n D myuint8 3u 0\n D myuint16 5u 0\n D myuint32 10u 0\n D myuint64 20u 0\n*/\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZBIGI'>\n <parm io='both'>\n <data type='3i0'>-128</data>\n </parm>\n <parm io='both'>\n <data type='5i0'>-32768</data>\n </parm>\n <parm io='both'>\n <data type='10i0'>-2147483648</data>\n </parm>\n <parm io='both'>\n <data type='20i0'>-9223372036854775808</data>\n </parm>\n <parm io='both'>\n <data type='3i0'>127</data>\n </parm>\n <parm io='both'>\n <data 
type='5i0'>32767</data>\n </parm>\n <parm io='both'>\n <data type='10i0'>2147483647</data>\n </parm>\n <parm io='both'>\n <data type='20i0'>9223372036854775807</data>\n </parm>\n <parm io='both'>\n <data type='3u0'>255</data>\n </parm>\n <parm io='both'>\n <data type='5u0'>65535</data>\n </parm>\n <parm io='both'>\n <data type='10u0'>4294967295</data>\n </parm>\n <parm io='both'>\n <data type='20u0'>18446744073709551615</data>\n </parm>\n <return>\n <data type='20u0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.578158438205719,
"alphanum_fraction": 0.6070663928985596,
"avg_line_length": 26.02898597717285,
"blob_id": "e1091bc95cd9f5d078eae89e3a09f8633c650c5e",
"content_id": "7e10b8f461156a67fd6fecefaebf6b4a3bee3255",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1868,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 69,
"path": "/test/php/test_90310_ibm_db2_io_sqlcallproclob.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SQL - stored proc lob animal1\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -------------\n// call IBM i\n// -------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing (XML input info)\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobIn);\n$xmlobj = simplexml_load_string($clobIn);\nif (!$xmlobj) die(\"Bad XML input\");\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$len = strlen($clobOut);\nif ( $len < 149000 && $len > 160000 ) die(\"Wrong size output $len \\n\");\nelse echo \"size in range 149000 < $len < 160000 \\n\";\n\n// good\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sql>\n<prepare>call match2(?, ?, ?, ?)</prepare>\n<execute>\n<parm io='in'>1</parm>\n<parm io='in'>5</parm>\n<parm io='out'>frog is wrong</parm>\n<parm io='out'>9</parm>\n</execute>\n<fetch block='all' desc='on'/>\n</sql>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n\n"
},
{
"alpha_fraction": 0.608858048915863,
"alphanum_fraction": 0.6237993836402893,
"avg_line_length": 26.115942001342773,
"blob_id": "806c2e1898610f5a761db5a5e5212c1c8e18f1eb",
"content_id": "7c20cf7e8a110ae3f011c0f326131daf8c79f206",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1874,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 69,
"path": "/test/php/test_90400_ibm_db2_io_sqlidentity.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SQL - identity\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -------------\n// call IBM i\n// -------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing (XML input info)\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobIn);\n$xmlobj = simplexml_load_string($clobIn);\nif (!$xmlobj) die(\"Bad XML input\");\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n\nif (strpos($clobOut,'<identity')<1) die(\"Identity failed.\\n\");\n\n// good\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sql>\n<free/>\n<options options='noauto' autocommit='off'/>\n<connect conn='myconn' options='noauto'/>\n<query conn='myconn' stmt='myid'>\nINSERT INTO animal2 (animalid, notetxt) VALUES(1,'My interesting notes.')\n</query>\n<identity conn='myconn' desc='off' error='off'/>\n<commit conn='myconn' action='rollback'/>\n<free/>\n</sql>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n\n"
},
{
"alpha_fraction": 0.5315403342247009,
"alphanum_fraction": 0.5623471736907959,
"avg_line_length": 39.880001068115234,
"blob_id": "28006850523679905a8974c5066b6611d01dfb3d",
"content_id": "0efbaa5a3f8429dc8cb0e99a125a75473f93c7a1",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2045,
"license_type": "permissive",
"max_line_length": 103,
"num_lines": 50,
"path": "/test/php/test_20411_ZZDATEUSA_toolkit_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit date USA\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { die($e->getMessage()); }\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzdateUSA: check date parm\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzdateUSA B export\n// D zzdateUSA PI D datfmt(*USA)\n// D myDate D datfmt(*USA)\n// * vars\n// D retDate s D datfmt(*USA)\n// /free\n// retDate=myDate;\n// myDate=d'2007-09-30';\n// return retDate;\n// /end-free\n// P E\n$param[] = $ToolkitServiceObj->AddParameterChar ('both', 10, 'ZZDATEUSA', 'myDate', '05/11/2009');\n$retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 10, 'ZZDATEUSA', 'retDate', '2002-02-02');\n$result = $ToolkitServiceObj->PgmCall('ZZSRV', $testLib, $param, $retrn, array('func'=>'ZZDATEUSA'));\n// var_dump($result);\n/* in/out param myDate */\n$myDate = \"XMLSERVICE i/o param myDate: \".$result[\"io_param\"][\"myDate\"];\necho \"$myDate\\n\";\n$expect = '09/30/2007';\nif (strpos($myDate,$expect)<1) die(\"Fail missing $expect\\n\");\n/* return value retDate */\n$retDate = \"XMLSERVICE return retDate: \".$result[\"retvals\"][\"retDate\"];\necho \"$retDate\\n\";\n$expect = '05/11/2009';\nif (strpos($retDate,$expect)<1) die(\"Fail missing $expect\\n\");\n/* all good */\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5811634063720703,
"alphanum_fraction": 0.6060941815376282,
"avg_line_length": 27.619047164916992,
"blob_id": "e1a666d2f486c4c0926787a57eb67fa7f2e54e03",
"content_id": "40688d5b0e4e2aa7e47926894b32ca7cf8a5d857",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1805,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 63,
"path": "/test/php/test_10516_MISC_ibm_db2_io_sh_rows_wrkactjob_hex_ccsid_cdata.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SH - hex wrkactjob\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$ctl .= \" *cdata\";\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n\n// expected\n$hex = (string)$xmlobj->sh->hex;\n// var_dump($hex);\n\n$clobOut = pack(\"H*\",$hex);\n// var_dump($clobOut);\nif (strpos($clobOut,\"E N D O F L I S T I N G\")<1) die(\"E N D O F L I S T I N G missing\\n\");\necho \"I am completed\\n\";\n\n// good\necho \"Success (PASE sh)\\n\";\n\n// 5250:\n// call qp2term\n// /QOpenSys/usr/bin/ls /tmp\nfunction getxml() {\n$clob = \"<?xml version='1.0'?>\\n\";\n$clob .= \"<script>\\n\";\n$clob .= \"<sh hex='on' before='819/37' after='37/819'>\";\n$sh = \"/QOpenSys/usr/bin/system -i 'wrkactjob'\";\nfor ($i=0;$i<10;$i++) $sh .= \";/QOpenSys/usr/bin/system -i 'wrkactjob'\";\n$clob .= bin2hex($sh);\n$clob .= \"</sh>\\n\";\n$clob .= \"</script>\\n\";\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n\n"
},
{
"alpha_fraction": 0.5652173757553101,
"alphanum_fraction": 0.5853658318519592,
"avg_line_length": 25.928571701049805,
"blob_id": "976d861e62aa305fa6355b81903f1b30516f1c61",
"content_id": "c44fa9170e15918c21a9f856901a3b75a44e0581",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1886,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 70,
"path": "/test/php/test_10581_db2_io_cmd_rexx_RTVJOBA_LIBL_hex_ccsid_cdata.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 rexx RTVJOBA LIBL hex ccsid\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -------------\n// call IBM i\n// -------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$ctl .= \" *cdata\";\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n\n// -----------------\n// output processing (XML input info)\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML output\");\n\n// -----------------\n// output cmd call\n// -----------------\n$sh = $xmlobj->xpath('/script/cmd');\nif (!$sh) die(\"Missing XML cmd info\");\n// expected\n$hexrows = $xmlobj->xpath('/script/cmd/row');\n$data = \"\";\nforeach($hexrows as $row) {\n $hexdata = $row->xpath('data/hex');\n foreach($hexdata as $hex) {\n $data .= \" \\n-> \".pack(\"H*\", $hex);\n }\n}\necho \"$data\\n\";\nif (strpos($data,\"QSYS\")<1) die(\"Missing QSYS\\n\");\n\n// good\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = \"<?xml version='1.0'?>\\n\";\n$clob .= \"<script>\\n\";\n$clob .= \"<cmd exec='rexx' hex='on' before='819/37' after='37/819'>\";\n$clob .= bin2hex(\"RTVJOBA USRLIBL(?) 
SYSLIBL(?)\");\n$clob .= \"</cmd>\\n\";\n$clob .= \"</script>\\n\";\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5870988965034485,
"alphanum_fraction": 0.6037982702255249,
"avg_line_length": 44.582088470458984,
"blob_id": "a558ecddb7c44bf9e59ae0dcdaff62a3da5f4542",
"content_id": "79dd4810f89be020882ce9ac210f25f981dab44b",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3054,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 67,
"path": "/test/php/authorization.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// **** authorization ***\n$database = '*LOCAL'; // i5 only side\n// $database = 'LP0364D'; // i5 or LINUX side\n$user = 'DB2'; // tests profile\n$password = 'PWD';\n$adoptuser1 = 'TKITU1'; // swap profile tests\n$adoptpass1 = 'PWD';\n$adoptuser2 = $user;\n$adoptpass2 = $password;\n// *** DB2 interface (iPLUGxxx, iPLUGRxxx) ***\n$i5persistentconnect = false; // persistent db2 connection\n$procDrive = 'ibm_db2'; // choose get, post, odbc, ibm_db2, pdo_ibm\n$procConn = false; // QXMLSERV - IBM production library\n$procOPM = false; // run V5R4 OPM mode\n$procLib = 'XMLSERVICE'; // XMLSERVICE - new release testing library\n// $procLib = 'ZENDSVR'; // ZENDSVR - Zend Server production library\n// $procLib = 'QXMLSERV'; // QXMLSERV - IBM production library\n$procPlug = \"iPLUG5M\"; // iPLUGxxx various sizes\n$procPlugR = \"iPLUGR5M\"; // iPLUGRxxx various sizes (result set)\n// REST interface (xmlcgi.pgm) ***\n$i5resturl = \"http://ut28p63/cgi-bin/xmlcgi.pgm\";\n$i5restdb = \"*LOCAL\"; // only *LOCAL tested\nif ($user == '') {\n $i5restuser = '*NONE'; // *NONE not allowed by default compile\n $i5restpass = '*NONE'; // *NONE not allowed by default compile\n}\nelse {\n $i5restuser = $user;\n $i5restpass = $password;\n}\n$i5restsz = \"512000\"; // size expected XML output\n// *** default parameters xmlservice ***\n$ctlhere = \"*here\"; // stateless run (xmlservice in-process)\n$ctllog = \"*sbmjob *log\"; // state full run logging (xmlservice seperate process)\n$ctl = \"*sbmjob\"; // state full run (xmlservice seperate process)\n$ipc = \"/tmp/packers\"; // ipc ignored $ctl=\"*here\"\n$ipcover = \"/tmp/override\"; // ipc ignored $ctl=\"*here\"\n$ipcpecl = \"/tmp/peclme\"; // ipc ignored $ctl=\"*here\"\n$clobIn = \"\"; // *** XML input script\n$clobOut = \"\"; // *** XML output returned\n$pase_ccsid = 819; // *** pecl ccsid pase\n$ebcdic_ccsid = 37; // *** pecl ccsid ebcidic\n// *** pear tests ***\n// *** php.ini and etc/config.d/*.ini\n// *** enable ibm_db2, 
odbc, pdo_ibm, xml, and pcntl\n$testLib = 'XMLSERVICE'; // XMLSERVICE - testing call programs, etc.\n$schematest = 'XMLSERVTST'; // XMLSERVTST - testing DB2 by XML\n$chglibl = \"CHGLIBL LIBL($schematest QTEMP) CURLIB($schematest)\";\n$pdfInFile = \"ZendCall.pdf\";\n$pdfOutFile = \"ZendCallOut.pdf\";\n$pngstar = \"ZendStar.png\";\n$txtessay = \"ZendEssay.txt\";\n$curr_ibm_db2_file = \"/usr/local/zendsvr/etc/conf.d/ibm_db2.ini\";\n$save_ibm_db2_file = \"/usr/local/zendsvr/etc/conf.d/ibm_db2.ini-orig\";\n$here_ibm_db2_file = \"ibm_db2.ini-here\";\n// *** pear cw tests ***\n// *** CW, toolkit tests use Alan's CWDEMO library (download php toolkit)\n$cwdb = 'localhost';\n\n// *** driver includes, etc.\nrequire_once 'xmlservice_drivers.php';\nrequire_once 'xmlservice_junk_away.php';\nrequire_once 'xmlservice_ignore_userid.php';\nrequire_once 'xmlservice_perf_report.php';\n\n?>\n"
},
{
"alpha_fraction": 0.6355932354927063,
"alphanum_fraction": 0.6412429213523865,
"avg_line_length": 18.108108520507812,
"blob_id": "6e7e798c5af170098d3c6132838df99c255b4580",
"content_id": "549f531a9c588b2ee1bd6db14c6ea4e4c05f5607",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 708,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 37,
"path": "/test/php/test_33471_cwtest_joblogs.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest job log list\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\n// job log.\nif ($doJobLogs) {\n\necho h2('Job logs');\n\n// Try current job. Good, it works, except for not enough data coming back from PHP wrapper.\necho \"About to get joblog (partial data) for current job<BR>\";\n$list = i5_jobLog_list();\nif (!$list) {\n\techo 'No joblogs found<BR>';\n} else {\n\n while ($listItem = i5_jobLog_list_read($list)) {\n\t\t\techo printArray($listItem);\n\t}\n\techo '<BR>End of list.<BR><BR>';\n}\ni5_jobLog_list_close($list);\n\n} //(if do joblogs)\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.7617489695549011,
"alphanum_fraction": 0.7730517387390137,
"avg_line_length": 40.51852035522461,
"blob_id": "c9bb407e5a27731a69c64ab6fb9250d8e1a58cdc",
"content_id": "9be5414d0dec632323444408066e76f10ee6166e",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3363,
"license_type": "permissive",
"max_line_length": 304,
"num_lines": 81,
"path": "/CONTRIBUTING.md",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "# Contributing\n\n## Contributing In General\n\nOur project welcomes external contributions. If you have an itch, please feel\nfree to scratch it.\n\nTo contribute code or documentation, please submit a [pull request](https://github.com/IBM/xmlservice/pulls).\n\nA good way to familiarize yourself with the codebase and contribution process is\nto look for and tackle low-hanging fruit in the [issue tracker](https://github.com/IBM/xmlservice/issues).\nThese will be marked with the [good first issue](https://github.com/IBM/xmlservice/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) label. You may also want to look at those marked with [help wanted](https://github.com/IBM/xmlservice/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22).\n\n**Note: We appreciate your effort, and want to avoid a situation where a contribution\nrequires extensive rework (by you or by us), sits in backlog for a long time, or\ncannot be accepted at all!**\n\n### Proposing new features\n\nIf you would like to implement a new feature, please [raise an issue](https://github.com/IBM/xmlservice/issues)\nbefore sending a pull request so the feature can be discussed. This is to avoid\nyou wasting your valuable time working on a feature that the project developers\nare not interested in accepting into the code base.\n\n### Fixing bugs\n\nIf you would like to fix a bug, please [raise an issue](https://github.com/IBM/xmlservice/issues) before sending a\npull request so it can be tracked.\n\n## Legal\n\nWe have tried to make it as easy as possible to make contributions. This\napplies to how we handle the legal aspects of contribution. 
We use the\nsame approach - the [Developer's Certificate of Origin 1.1 (DCO)](https://github.com/hyperledger/fabric/blob/master/docs/source/DCO1.1.txt) - that the Linux® Kernel [community](https://elinux.org/Developer_Certificate_Of_Origin)\nuses to manage code contributions.\n\nWe simply ask that when submitting a patch for review, the developer\nmust include a sign-off statement in the commit message.\n\nHere is an example Signed-off-by line, which indicates that the\nsubmitter accepts the DCO:\n\n```text\nSigned-off-by: John Doe <[email protected]>\n```\n\nYou can include this automatically when you commit a change to your\nlocal git repository using the following command:\n\n```bash\ngit commit -s\n```\n\n## Communication\n\nPlease feel free to connect with us on our [Ryver forum](https://ibmioss.ryver.com/index.html#forums/1000128). You can join the Ryver community [here](https://ibmioss.ryver.com/application/signup/members/9tJsXDG7_iSSi1Q).\n\n## Setup\n\nThis project can only be built on an IBM i with the RPG compiler installed along with Python 3\nand GNU make. You must build it from an SSH terminal and not QSH or QP2TERM.\n\n```bash\n./configure --lib MYLIBRARY\n\nmake\n```\n\n## Testing\n\nCurrently there are no XMLSERVICE-specific tests. The existing tests require the use of a downstream\ntoolkit (either [PHP](https://github.com/zendtech/IbmiToolkit) or [Python](https://github.com/IBM/python-itoolkit)).\nThere is an ongoing effort to convert these to toolkit-agnostic tests in [issue 20](https://github.com/IBM/xmlservice/issues/20)\n\n\n## Coding style guidelines\n\nCurrently the code is in fixed format, but the goal is to convert to full-free format code (see\n[issue 24](https://github.com/IBM/xmlservice/issues/24)).\n\nAll new contributions should be in full-free format."
},
{
"alpha_fraction": 0.5660594701766968,
"alphanum_fraction": 0.6024901866912842,
"avg_line_length": 35.436973571777344,
"blob_id": "01928bdb1f9535317276154e133445a8d5df83b9",
"content_id": "cb0385a894a0566fdde5ae9900f8a8889e87962f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4337,
"license_type": "permissive",
"max_line_length": 107,
"num_lines": 119,
"path": "/test/php/test_33460_cwtest_pcml.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest pcml\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\nif ($doPcml) {\n\necho h2('PCML program calls');\n\n\t$pcml = '<pcml version=\"4.0\">\n <program name=\"YYPLUS\" entrypoint=\"YYPLUS\" path=\"/QSYS.LIB/' . $demoLib . '.LIB/YYSRVNORES.SRVPGM\" >\n <data name=\"START\" type=\"int\" length=\"4\" precision=\"31\" usage=\"inputoutput\" />\n <data name=\"RESULT\" type=\"int\" length=\"4\" precision=\"31\" usage=\"inputoutput\" />\n </program>\n</pcml>';\n\n\techo 'About to do simple PCML program prepare.<BR>';\n\t$pgmHandle = i5_program_prepare_PCML($pcml);\n\n\tif (!$pgmHandle) {\n\t\tdie('Error preparing simple PCML program: ' . printArray(i5_error()) . '<BR><BR>');\n\t} else {\n\n\t\t$input = array('START' => '25', 'RESULT' => '0');\n\t\t$output = array('START' => 'START', 'RESULT' => 'RESULT');\n\t\techo 'About to do simple PCML program call.<BR>';\n\t\t$success = i5_program_call($pgmHandle, $input, $output);\n\n\t\tif ($success) {\n\t\t\techo \"Success. Output variables: START: $START. RESULT: $RESULT.\";\n\t\t} else {\n\t\t\tdie(\"Problem calling PCML-described program. Error: \" . 
print_r(i5_error(), true));\n\t\t}\n\n\t} //(if !$pgmHandle)\n\n\necho '<BR><BR>';\n\n$pcml = \"<pcml version=\\\"4.0\\\">\n <struct name=\\\"S2\\\">\n <data name=\\\"ZOND2\\\" type=\\\"zoned\\\" length=\\\"10\\\" precision=\\\"5\\\" usage=\\\"inherit\\\" />\n <data name=\\\"PACK2\\\" type=\\\"packed\\\" length=\\\"19\\\" precision=\\\"5\\\" usage=\\\"inherit\\\" />\n <data name=\\\"PACK3\\\" type=\\\"packed\\\" length=\\\"19\\\" precision=\\\"5\\\" usage=\\\"inherit\\\" />\n <data name=\\\"ALPH2\\\" type=\\\"char\\\" length=\\\"20\\\" usage=\\\"inherit\\\" />\n </struct>\n <struct name=\\\"S1\\\">\n <data name=\\\"ZOND\\\" type=\\\"zoned\\\" length=\\\"10\\\" precision=\\\"5\\\" usage=\\\"inherit\\\" />\n <data name=\\\"PACK1\\\" type=\\\"packed\\\" length=\\\"19\\\" precision=\\\"5\\\" usage=\\\"inherit\\\" />\n <data name=\\\"ALPH1\\\" type=\\\"char\\\" length=\\\"10\\\" usage=\\\"inherit\\\" />\n </struct>\n <program name=\\\"TESTSTRUC\\\" path=\\\"/QSYS.LIB/{$demoLib}.LIB/TESTSTRUC.PGM\\\">\n <data name=\\\"CODE\\\" type=\\\"char\\\" length=\\\"10\\\" usage=\\\"output\\\" />\n <data name=\\\"S1\\\" type=\\\"struct\\\" struct=\\\"S1\\\" usage=\\\"inputoutput\\\" />\n <data name=\\\"S2\\\" type=\\\"struct\\\" struct=\\\"S2\\\" usage=\\\"inputoutput\\\" />\n <data name=\\\"PACK\\\" type=\\\"packed\\\" length=\\\"1\\\" precision=\\\"1\\\" usage=\\\"output\\\" />\n <data name=\\\"CH10\\\" type=\\\"char\\\" length=\\\"19\\\" usage=\\\"output\\\" />\n <data name=\\\"CH11\\\" type=\\\"char\\\" length=\\\"20\\\" usage=\\\"output\\\" />\n <data name=\\\"CH12\\\" type=\\\"char\\\" length=\\\"29\\\" usage=\\\"output\\\" />\n <data name=\\\"CH13\\\" type=\\\"char\\\" length=\\\"33\\\" usage=\\\"output\\\" />\n </program>\n</pcml>\";\n\n\techo 'About to do a complex PCML program prepare.<BR>';\n\t$pgmHandle = i5_program_prepare_PCML($pcml);\n\n\tif ($pgmHandle) {\n\t\techo \"Successfully prepared complex PCML program description.<BR>\";\n\t} else {\n\t\techo \"Problem while 
preparing complex PCML program description.<BR>\";\n\t}\n\t// define some input values\n$pack3value=7789777.44;\n$alph2value=4;\n\n$paramIn = Array(\n\"S1\"=>Array(\"ZOND\"=>54.77, \"PACK1\"=>16.2, \"ALPH1\"=>\"MyValue\"),\n\"S2\"=>Array(\"ZOND2\"=>44.66, \"PACK2\"=>24444.99945, \"PACK3\"=>$pack3value, \"ALPH2\"=>$alph2value)\n);\n\n// now we need to define where to place output values; it will create new local variables\n\n$paramOut = array(\n\t\t\t\t\t\"S1\"=>\"S1_Value\", \"S2\"=>\"S2_Value\",\n\t\t\t\t\t\"CH10\"=>\"CH10_Value\", \"CH11\"=>\"CH11_Value\", \"CH12\"=>\"CH12_Value\", \"CH13\"=>\"CH13_Value\",\n\t\t\t\t\t\"CODE\"=>\"Code_Value\", \"PACK\"=>\"Pack\"\n);\n\techo 'About to do complex PCML program call.';\n\t$success = i5_program_call($pgmHandle, $paramIn, $paramOut);\n if (function_exists('i5_output')) extract(i5_output()); // i5_output() required if called in a function\n\n if ($success) {\n\t\techo \"Success.\";\n\t\techo \"<BR>S1: \" . var_export($S1_Value, true);\n\t\techo \"<BR>S2: \" . var_export($S2_Value, true);\n\t\techo \"<BR>CH10: \" . var_export($CH10_Value, true);\n\t\techo \"<BR>CH11: \" . var_export($CH11_Value, true);\n\t\techo \"<BR>CH12: \" . var_export($CH12_Value, true);\n\t\techo \"<BR>CH13: \" . var_export($CH13_Value, true);\n\t\techo \"<BR>Code: \" . var_export($Code_Value, true);\n\t\techo \"<BR>Pack: \" . var_export($Pack, true);\n\n\t} else {\n\t\tdie(\"Problem calling PCML-described program. Error: \" . printArray(i5_error()));\n\t}\n\n} //(doPcml)\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.395066499710083,
"alphanum_fraction": 0.45673540234565735,
"avg_line_length": 27.822221755981445,
"blob_id": "f665bda5093028c784469ca89d5bb5ccf5e9e0d4",
"content_id": "f7d8fbea42536033b538d55100bc28ff6b823c3d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 5189,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 180,
"path": "/test/php/test_10690_ZZSIMP_ibm_db2_io.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - many simple parms data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$ctl .= \"*here\";\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// good\necho \"Success\\n\";\n\n// D ZZSIMP PR ExtPgm\n// D HEXHND 64\n// D DBID 2\n// D SPPGID 10\n// D SPOP01 1\n// D SPOP02 1\n// D SPOP03 1\n// D SPOP04 1\n// D SPOP05 1\n// D SPOP06 1\n// D SPOP07 1\n// D SPOP08 1\n// D SPOP09 1\n// D SPOP10 1\n// D SPOP11 1\n// D SPOP12 1\n// D SPOP13 1\n// D SPOP14 1\n// D SPOP15 1\nfunction getxml() {\n global $testLib;\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSIMP' lib='xyzlibxmlservicexyz'>\n <parm co='HEXHND' io='both'>\n <data type='64A' var='HEXHND'>xyzHEXHND</data>\n </parm>\n <parm co='DBID' io='both'>\n <data type='2A' var='DBID'>xyzDBID</data>\n </parm>\n <parm co='SPPGID' io='both'>\n <data type='10A' var='SPPGID'>xyzSPPGID</data>\n </parm>\n <parm co='SPOP01' io='both'>\n <data type='1A' var='SPOP01'>xyzSPOP01</data>\n </parm>\n <parm co='SPOP02' io='both'>\n <data type='1A' var='SPOP02'>xyzSPOP02</data>\n </parm>\n <parm co='SPOP03' io='both'>\n <data type='1A' 
var='SPOP03'>xyzSPOP03</data>\n </parm>\n <parm co='SPOP04' io='both'>\n <data type='1A' var='SPOP04'>xyzSPOP04</data>\n </parm>\n <parm co='SPOP05' io='both'>\n <data type='1A' var='SPOP05'>xyzSPOP05</data>\n </parm>\n <parm co='SPOP06' io='both'>\n <data type='1A' var='SPOP06'>xyzSPOP06</data>\n </parm>\n <parm co='SPOP07' io='both'>\n <data type='1A' var='SPOP07'>xyzSPOP07</data>\n </parm>\n <parm co='SPOP08' io='both'>\n <data type='1A' var='SPOP08'>xyzSPOP08</data>\n </parm>\n <parm co='SPOP09' io='both'>\n <data type='1A' var='SPOP09'>xyzSPOP09</data>\n </parm>\n <parm co='SPOP10' io='both'>\n <data type='1A' var='SPOP10'>xyzSPOP10</data>\n </parm>\n <parm co='SPOP11' io='both'>\n <data type='1A' var='SPOP11'>xyzSPOP11</data>\n </parm>\n <parm co='SPOP12' io='both'>\n <data type='1A' var='SPOP12'>xyzSPOP12</data>\n </parm>\n <parm co='SPOP13' io='both'>\n <data type='1A' var='SPOP13'>xyzSPOP13</data>\n </parm>\n <parm co='SPOP14' io='both'>\n <data type='1A' var='SPOP14'>xyzSPOP14</data>\n </parm>\n <parm co='SPOP15' io='both'>\n <data type='1A' var='SPOP15'>xyzSPOP15</data>\n </parm>\n</pgm>\n</script>\nENDPROC;\n$data = \"\";\nfor ($i=0;$i<32000;$i++) $data .= 'F0';\n// D HEXHND 64\n// D DBID 2\n// D SPPGID 10\n// D SPOP01 1\n// D SPOP02 1\n// D SPOP03 1\n// D SPOP04 1\n// D SPOP05 1\n// D SPOP06 1\n// D SPOP07 1\n// D SPOP08 1\n// D SPOP09 1\n// D SPOP10 1\n// D SPOP11 1\n// D SPOP12 1\n// D SPOP13 1\n// D SPOP14 1\n// D SPOP15 1\n$was = array(\n\"xyzlibxmlservicexyz\",\n\"xyzHEXHND\",\n\"xyzDBID\",\n\"xyzSPPGID\",\n\"xyzSPOP01\",\n\"xyzSPOP02\",\n\"xyzSPOP03\",\n\"xyzSPOP04\",\n\"xyzSPOP05\",\n\"xyzSPOP06\",\n\"xyzSPOP07\",\n\"xyzSPOP08\",\n\"xyzSPOP09\",\n\"xyzSPOP10\",\n\"xyzSPOP11\",\n\"xyzSPOP12\",\n\"xyzSPOP13\",\n\"xyzSPOP14\",\n\"xyzSPOP15\"\n);\n$now = array(\n\"$testLib\",\nsubstr($data,0,64), // HEXHND\n'12', // DBID\n'0123456789', // SPPGID\n'1', // SPOP01\n'2', // SPOP02\n'3', // SPOP03\n'4', // SPOP04\n'5', // SPOP05\n'6', // SPOP06\n'7', // SPOP07\n'8', // SPOP08\n'9', // SPOP09\n'a', // SPOP10\n'b', // SPOP11\n'c', // 
SPOP12\n'd', // SPOP13\n'e', // SPOP14\n'f' // SPOP15\n);\n$xml = str_replace($was,$now,$clob);\nreturn $xml;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.59247887134552,
"alphanum_fraction": 0.6087490320205688,
"avg_line_length": 30.17224884033203,
"blob_id": "5cef5958732f37199b1d13547e48b07414f051c9",
"content_id": "1c6ec8c5941b1e526284bab8078d59acef573758",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 6515,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 209,
"path": "/utils/xmlservice-cli.c",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
    "text": "// Copyright 2019 IBM Business Machines Corp.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are met:\n//\n// 1. Redistributions of source code must retain the above copyright notice,\n// this list of conditions and the following disclaimer.\n//\n// 2. Redistributions in binary form must reproduce the above copyright notice,\n// this list of conditions and the following disclaimer in the documentation\n// and/or other materials provided with the distribution.\n//\n// 3. Neither the name of the copyright holder nor the names of its\n// contributors may be used to endorse or promote products derived from this\n// software without specific prior written permission.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n// AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n// IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n// DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <unistd.h>\n\n#include <as400_protos.h>\n\n#define VERSION \"1.1.0\"\n\n__attribute__((__noreturn__))\nvoid exit_with_usage(FILE* f, const char* argv0, int rc) {\n fprintf(f, \"usage: %s [-h] [-v] [-c ctl] [-i ipc]\\n\", argv0);\n exit(rc);\n}\n\nint main(int argc, char **argv)\n{\n ILEpointer xmlstoredp __attribute__ ((aligned (16)));\n\n const char *ctl = NULL;\n const char *ipc = NULL;\n\n int opt;\n while ((opt = getopt(argc, argv, \"hvc:i:\")) != -1) {\n switch (opt) {\n case 'c': /* ctl */\n ctl = optarg;\n break;\n case 'i':\n ipc = optarg;\n break;\n case 'v':\n printf(\"xmlservice-cli \" VERSION \"\\n\");\n return 0;\n case 'h':\n exit_with_usage(stdout, argv[0], 0);\n default:\n exit_with_usage(stderr, argv[0], 1);\n }\n }\n\n if (ctl == NULL) {\n ctl = \"\"; //\"*here *cdata\";\n }\n int ctl_len = strlen(ctl);\n\n if (ipc == NULL) {\n ipc = \"\"; //\"*na\";\n }\n int ipc_len = strlen(ipc);\n\n unsigned long long actmark = _ILELOADX(\"QXMLSERV/XMLSTOREDP\", ILELOAD_LIBOBJ);\n\n if (actmark == (unsigned long long)-1) {\n perror(\"failed to load XMLSERVICE stored procedure\");\n return 1;\n }\n\n if(_ILESYMX(&xmlstoredp, actmark, \"RUNASCII\") < 0) {\n perror(\"failed to load XMLSERVICE symbol\");\n return 1;\n }\n\n#define THIRTY_TWO_K (32 * 1024)\n size_t xmlin_size = THIRTY_TWO_K;\n int xmlin_len = 0;\n\n char* 
xmlin = (char*) malloc(xmlin_size);\n\n while(!feof(stdin)) {\n size_t remain = xmlin_size - xmlin_len;\n\n if(!remain) {\n xmlin_size *= 2;\n remain = xmlin_size - xmlin_len;\n xmlin = (char*) realloc(xmlin, xmlin_size);\n if(!xmlin) {\n perror(\"error expanding XML input buffer\");\n return 1;\n }\n }\n\n size_t count = fread(xmlin + xmlin_len, 1, remain, stdin);\n xmlin_len += count;\n\n if(count < remain) {\n if(ferror(stdin)) {\n perror(\"error reading input\");\n return 1;\n }\n break;\n }\n }\n\n size_t xmlout_len = (16 * 1024 * 1024);\n char* xmlout = (char*) malloc(xmlout_len);\n\n if(!xmlout) {\n perror(\"error allocating XML output buffer\");\n return 1;\n }\n\n int rc;\n\n // These are always defined to 0, why?\n int ebcdic_ccsid = 0; //Qp2jobCCSID();\n int pase_ccsid = 1208;\n\n const arg_type_t signature[] = {\n ARG_MEMPTR, ARG_MEMPTR, ARG_MEMPTR, ARG_MEMPTR, ARG_MEMPTR,\n ARG_MEMPTR, ARG_MEMPTR, ARG_MEMPTR, ARG_MEMPTR, ARG_MEMPTR,\n ARG_END\n };\n\n struct {\n ILEarglist_base base;\n ILEpointer ppIPCSP;\n ILEpointer szIPCSP;\n ILEpointer ppCtlSP;\n ILEpointer szCtlSP;\n ILEpointer ppIClob;\n ILEpointer szIClob;\n ILEpointer ppOClob;\n ILEpointer szOClob;\n ILEpointer ccsidPASE;\n ILEpointer ccsidILE;\n } arglist __attribute__ ((aligned (16)));\n\n\n ILEpointer IPCSP __attribute__ ((aligned (16)));\n ILEpointer CtlSP __attribute__ ((aligned (16)));\n ILEpointer IClob __attribute__ ((aligned (16)));\n ILEpointer OClob __attribute__ ((aligned (16)));\n\n _SETSPP(&IPCSP, ipc);\n _SETSPP(&CtlSP, ctl);\n _SETSPP(&IClob, xmlin);\n _SETSPP(&OClob, xmlout);\n\n arglist.ppIPCSP.s.addr = (address64_t)(intptr_t) &IPCSP;\n arglist.szIPCSP.s.addr = (address64_t)(intptr_t) &ipc_len;\n arglist.ppCtlSP.s.addr = (address64_t)(intptr_t) &CtlSP;\n arglist.szCtlSP.s.addr = (address64_t)(intptr_t) &ctl_len;\n arglist.ppIClob.s.addr = (address64_t)(intptr_t) &IClob;\n arglist.szIClob.s.addr = (address64_t)(intptr_t) &xmlin_len;\n arglist.ppOClob.s.addr = 
(address64_t)(intptr_t) &OClob;\n arglist.szOClob.s.addr = (address64_t)(intptr_t) &xmlout_len;\n arglist.ccsidPASE.s.addr = (address64_t)(intptr_t) &pase_ccsid;\n arglist.ccsidILE.s.addr = (address64_t)(intptr_t) &ebcdic_ccsid;\n\n // Ensure the ILE side doesn't write to our stdout or else\n // bad things will happen, ie. EBCDIC garbage\n int devnull = open(\"/dev/null\", O_RDWR|O_APPEND);\n int oldout = dup(1);\n int olderr = dup(2);\n\n dup2(devnull, 1);\n dup2(devnull, 2);\n\n rc = _ILECALL(&xmlstoredp, &arglist.base, signature, RESULT_UINT8);\n\n /* perror likely not useful w/ _ILECALL */\n if (rc != ILECALL_NOERROR) {\n dprintf(olderr, \"%s: _ILECALL error\\n\", argv[0]);\n return 1;\n }\n if (arglist.base.result.s_uint8.r_uint8 == 0xF0) {\n dprintf(olderr, \"%s: XMLSTOREDP error\\n\", argv[0]);\n return 1;\n }\n\n\n int code;\n code = write(oldout, xmlout, strlen(xmlout));\n code = write(oldout, \"\\n\", 1);\n\n return 0;\n}\n\n/* vim: set tabstop=4:softtabstop=4:shiftwidth=4:expandtab */\n"
},
{
"alpha_fraction": 0.5887632966041565,
"alphanum_fraction": 0.5986372232437134,
"avg_line_length": 56.938533782958984,
"blob_id": "8d340a0da2045ad846dca7cc266b5df958574762",
"content_id": "1f8303f4c864033e9acbfe99e5564dbb2eb684a3",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 24511,
"license_type": "permissive",
"max_line_length": 997,
"num_lines": 423,
"path": "/docs/connections.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE Connections\n======================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nSometimes a real conversation helps ...\n---------------------------------------\n\n**Customer wants to know how to remain stateless, but get the benefit of being able to audit the use of RPG subprocedure invocations via XMLToolkit without toggling back and forth (different QSQSRVR jobs).**\n\n Stateless may use multiple QSQSRVR jobs (toggling), this is definition of 'stateless' in PHP fastcgi environment (working correct).\n\n\n**Will ToolkitService object using persistent db2 connections and \"stateless => true\" help single QSQ job?**\n\n Toolkit \"stateless => true\" via db2 transport using db2_connect or db2_pconnect makes no difference, fastcgi implies unpredictable QSQSRVR job will be used (command line slightly more predictable over web). \n\n\n**Why 'unpredictable' QSQ?**\n\n It's all web PHP fastcgi 'random' worker selection, NOTHING to do with ibm_db2 'connection' persistent or non-persistent, toolkit connection is irrelevant. Apache web site using fastcgi routes \"randomly\" to any available PHP worker job php-cgi, therefore 'random' php-cgi worker using db2_(p)connect<>QSQSRVR will also appear 'random'. To wit, you end up different QSQSRVR jobs (toggle back and forth). Technically, fastcgi protocol, PHP workers poll a single fastcgi socket waiting to take some work, ZS on i at /www/zendsvr6/logs/fcgi-njjjineh.sock (sock name random). As each php worker strips work off fastcgi socket (1st come == 1st do), the worker is busy communicating on 'private web socket' running script until finished (no longer waiting). Combination of natural web worker selection (browser +/- KeepAlive), and fastcgi socket poll for work, results in 'random' appearance QSQ job toolkit usage, DB2 connection just came along for ride in back seat of the 1950 PHP fastcgi roadster. 
\n\n\n**Help me be one-and-only-one job?**\n\n The only way to assure private connection back to same XTOOLKIT job is 'internalKey'=>'/tmp/XTOOLKIT_job_1' ('ipc'=/tmp/XTOOLKIT_job_1'). \n\n\n\nXMLSERVICE connections discussed this page\n----------------------------------------------\n\n- Stateless -- clean running come/go, \"full open/close,\" start/stop connection to be used by any requester. Must set LIBL during each request\n- State full -- State retained between requests: LIBL, transactions, etc., \"private\" connection used by one requester/task for a long period of time (like 5250)\n- State full -- hybrid \"private/persistent\" connection shared by many requesters, but keep open PGM, files, etc.\n- State full -- reservation hybrid \"private/persistent\" connection exclusively held for a period each requesters, but returned back to pool for re-use (rare use)\n\nThe jobs involved in \"connections\":\n::\n\n I. Stateless\n (job 1) (job 2) (job 3)\n Apache FastCGI DB2 (stateless)\n ------- --------------- ---------------------\n browser/client ->thread-->socket->php-cgi\n --->$ctl=\"*here\"; -->QSQSRVR+XMLSERVICE\n shut down after\n each request\n (\"stateless\")\n\n II. 
State full\n (job 2) (job 3) (job 4)\n FastCGI QSQ (proxy) XMLSERVICE (state full)\n --------------- --------------------- ------------------------\n -->socket->php-cgi\n --->$ctl=\"*sbmjob\";\n --->$ipc=\"/tmp/sally\"; -->QSQSRVR+XMLSERVICE -->XMLSERVICE\n alive until stopped\n (\"state full\")\n\n\n Jobs originates::\n\n (job 1) Apache picks any thread (1st level routing)\n (job 2) FastCGI all \"non-busy\" worker php-cgi wait on unix domain socket /www/zend2/logs/fcgi-hmjadgek.sock (2nd level routing)\n (job 3) php-cgi - database connections odbc, ibm_db2, pdo_ibm (3rd level routing)\n db2_pconnect() attach to pooled/persistent QSQ (matching profile) but leaves connection open on exit\n db2_connect() acquires a unused pre-start QSQ (or starts one) then attaches to QSQ (profile), and returns to unused pool on exit\n (job 3) XMLSERVICE -- Stateless -- run inside QSQ job and clean-up after each request (3rd level routing)\n $ctl = \"*here\";\n (job 4) XMLSERVICE -- State full -- run in separate job that any QSQ job can call using IPC (4th level routing)\n $ctl = \"*sbmjob\";\n $ipc = \"/tmp/sally\";\n\nDrivers involved in conection::\n\n 400 server start-up \tCommon usage \tBig picture \tComments\n STRTCPSVR SERVER(*HTTP)\n\n port 80 or 10088 or … (REST/HTTP interface)\n\n 1-tier - XMLCGI \tCLIENT <==> IBM HTTP Server <==> XMLCGI (CLI server mode) <==> QSQxxxx <==> IBM i Resources \tXMLCGI + CLI\n PASE library (no start needed)\n\n no port\n\n 1-tier - PHP ibm_db2+pdo_ibm+odbc (Native PASE CLI libdb400.a driver) \tCLIENT (PASE libdb400.a driver) <==> QSQxxx <==> IBM i Resources \tNative PASE CLI libdb400.a driver (IBM Rochester)\n STRHOSTSVR SERVER(*DATABASE)\n\n port 8471 (database)\n\n 2-tier - PHP odbc interface (IBM i Client Access odbc driver interface) \tCLIENT (Client Access drivers) <==> QZDAxxxx <==> IBM i Resources \tClient Access odbc-based drivers (IBM Rochester)\n STRTCPSVR SERVER(*DDM)\n\n port 446 (DDM/DRDA)\n\n 2-tier - PHP ibm_db2+pdo_ibm (DB2 
Connect driver interface) \tCLIENT (DB2 Connect drivers) <==> QRWxxxx <==> IBM i Resources \tDB2 CLI DRDA-based DB2 Connect drivers (IBM Toronto)\n\n\n\n\n1) Stateless -- no LIBL, come/go\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nThese connections are traditional web requests \"full open/close\" clean running start/stop connection to be used by any requester.\n\n- Stateless\n\n::\n\n $ctl = \"*here\";\n\n (1) (2) (3)\n Apache FastCGI DB2 (server mode)\n ------- --------------- ---------------------\n browser/client -->thread--socket->php-cgi--->QSQSRVR(profile fred)\n XMLSERVICE (fred) <--shut down after each request\n --->QSQSRVR(profile sally)\n XMLSERVICE (sally) <--shut down after each request\n --->QSQSRVR(profile john)\n XMLSERVICE (john) <--shut down after each request\n\n\nExample new Toolkit (stateless):\n\n::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->CLCommand(\"CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)\");\n\n**Stateless:** \nIf you choose $ctl='\\*here', you will run in the calling process DB2 connection (QSQSRVR job).\nWhen XMLSERVICE completes your XML script it will shut down to nothing, considered stateless and holds zero state on return.\n\n+ In general you will run slower in stateless mode (CW default / Toolkit default), because XMLSERVICE has to keep starting things over and over and over again, but perhaps not an issue if you have CPU to burn.*\n+ The is no semaphore locking or shared memory ipc when running as stateless (\\*here), because only one sally client/server is a pair, but of course there may be many sally client/server pairs on the same machine.*\n+ There is no \"memory\" of the LIBL in stateless, so it must be set EVERY time before use.*\n\n\n2) State full -- LIBL, transactions, 
etc.\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nThese connections are traditional 5250-like \"private\" connection used by one requester/task for a long period of time.\n\nState full (most RPG programs)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n $ctl = \"*sbmjob\";\n $ipc = \"/tmp/sally\";\n -- or --\n $ipc = \"/tmp/john\";\n (1) (2) (3) (4)\n Apache FastCGI DB2 (server mode) XMLSERVICE\n ------- --------------- --------------------- ----------\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)---.->XMLSERVICE (sally) <--alive until stopped (or idle timemout)\n --->QSQSRVR(profile john)--. |\n -->thread--socket->php-cgi--->QSQSRVR(profile fred) | |\n --->QSQSRVR(profile sally)---.\n --->QSQSRVR(profile fred) |\n --->QSQSRVR(profile john)--.--->XMLSERVICE (john) <--alive until stopped (or idle timemout)\n\nExample new Toolkit (state full)::\n\n $internalKey = '/tmp/packers';\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'InternalKey'=>$internalKey, // *** RIGHT HERE internalKey/IPC\n // *** run state full\n // use SBMJOB command run in new job\n // PHP can call again, again, again\n // with /tmp/packers and get\n // same job every time\n // same library list (*LIBL)\n // same PGMs with open files, etc.\n // exactly like 5250 sign-on screen\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n // state full - MUST do this ONCE ONLY after start/sbmjob of XMLSERVICE job\n // then forget about it (unless you choose to change libl)\n $ToolkitServiceObj->CLCommand(\"CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)\");\n /* Do not use the disconnect() function for \"state full\" connection */\n /* NEVER EVER USE THIS ... $ToolkitServiceObj->disconnect(); */\n /* Why? 
*immed kill of job, not nice or sync, just kill */\n /* Use idle timeout for \"state full\" / \"private\" connections */\n\n**State full**: If you choose ``$ctl=\"\\*sbmjob\"`` + ``$ipc=\"/tmp/packers\"``, you will run in a separate job past \nthe calling DB2 connection (child job of QSQSRVR). This $ctl/$ipc combination will allow you to return to the \nsame XMLSERVICE job from any connection to the machine, therefore considered \"state full\" and any called program \ncan keep open files, transactions, etc. (just like a real RPG 5250 program does mate).\n\n+ $ipc='/tmp/anything' can be any unique/accessible directory you want to route you back to same XMLSERVICE job (\\*sbmjob), but usually anchored in /tmp directory because xmlservice will try to create it if missing.\n+ Technically $ipc=\"/tmp/packers\" is a unique IFS machine location in posix function ftok('/tmp/packers') which presents a unique numeric key representing /tmp/packers that is used for XMLSERVICE shared memory and semaphores creation/attach (XMLSERVICE uses shared memory/semaphores for communication).\n+ Shared memory + semaphore locking is only required for state full connections ($ctl=\"\\*sbmjob\" + $ipc=\"/tmp/packers\"), where each sally XMLSERVICE semaphore \"front door lock\" will allow only one sally client to chat with a XMLSERVICE job, the other sally requesters will wait until they are invited to chat (just like the dentist office).\n+ Security is managed through IFS shared memory / semaphores access control just like any other IFS file, so once profile sally owns an active XMLSERVICE ctl+ipc then no other profile can attach to the active XMLSERVICE job ... well ... except for high authority profiles like \\*SECOFR (of course).\n+ With version 1.6.6 state full XMLSERVICE connections are ended via configurable idle timeout $ctl .= \" \\*idle(3000)\", you may keep the jobs alive forever using $ctl .= \" \\*idle(0)\" to match the original version behavior. 
There are other options for client wait $ctl .= \" \\*wait(30)\" and waiting for called program to return $ctl .= \" \\*call(200)\" and various actions that can be taken for each wait/timer (busy,kill,etc.).\n+ In this example we have been using one sally client/server $ipc=\"/tmp/packers\", you can of course have many different sally client/server ($ipc=\"/tmp/packers\", $ipc=\"/tmp/vikings\", $ipc=\"/tmp/bears\", etc.) and each of these sally ipcs may have many clients chatting with each sally server ipc ... sort of a sally work load balancing dream situation where we can clone a new sally ipc server for each task at hand.\n+ You only need to set the LIBL once in state full (unless you want to change LIBL for some reason).\n\n**Toolkit State full (with IPC) - avoid start up/shut down XMLSERVICE (NOT toolkit default)**\n\n+ avoid using toolkit disconnect ($ctl=\"\\*immed\") to leave XMLSERVICE up and running (will timeout shut down if idle for 1/2 hour)\n+ choose the minimum plug size needed for the task to avoid send/receive extra blanks\n+ TURN DEBUG and LOGS off in toolkit to avoid IFS file write (takes forever in computer timings)\n+ db2_pconnect() - persistent or \"shared\" connection with toolkit avoids acquire/release QSQ jobs (NOT toolkit default)\n\n\nXMLSERVICE adopt authority issues\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nWhen using ctl+ipc \"state full\" jobs it is generally a bad idea to \"adopt authority\" as originating profile sally will lose all access ... and ... in fact ipc may become unreachable causing an orphan XMLSERVICE (client is still sally, ipc is still sally's, but adopt xmlservice server becomes fred).\n\nTwo choices:\n\n a) If you MUST \"adopt authority\" do it in a stateless job (\\*here), where full connection processing may undo \"left over switch profile\" potential damage on the way out of XMLSERVICE script. 
This option should always work.\n b) Be very careful to return back to sally profile EACH TIME leaving xmlservice sending data back to the waiting sally client ... sort of good manners talk to sally client as sally server (adopted fred can speak/do only when asked, then go away)\n\n**Note**: \nWe are thinking about forcing \"switch back to originating profile on the way back\" within XMLSERVICE code, but have not yet understood what that means to PHP wrappers like CW, so the mission is at the moment in your called program and/or PHP wrapper/user code.\n\n3) State full -- hybrid \"private/persistent\" connection\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nThese connections are hybrid \"private/persistent\" connection shared by many requesters, but keep open PGM, files, etc.\n\nWorried about too many IPC's/XMLSERVICE jobs??\n\nThe following gives you a hybrid \"private/persistent\" connection\n\n* most all the benefits for called RPG (state full, open files, etc.)\n* but only $maxpool XMLSERVICE jobs\n\nTry this simple technique for pooled IPC's/XMLSERVICE jobs ``$internalKey = '/tmp/packers'.rand(1,$maxpool)``\n\n* IF your application set can tolerate multi-client shared access to a pool of persistent/private/semi-stateless connections the random technique should work well.\n* However, if you need your client make a exclusive reservation see the next topic\n\n**State full -- hybrid \"private/persistent\" connection**\n\n::\n\n $maxpool = 3;\n // -- PHP raw ---\n $ctl = \"*sbmjob\";\n $ipc = \"/tmp/sally\".rand(1,$maxpool);\n // -- or PHP toolkit --\n $internalKey = '/tmp/sally'.rand(1,$maxpool)\n\n (1) (2) (3) (4)\n Apache FastCGI DB2 (server mode) XMLSERVICE\n ------- --------------- --------------------- ----------\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--.\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|\n : |\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|->XMLSERVICE (/tmp/sally1) <--alive until stopped (or idle 
timemout)\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|->XMLSERVICE (/tmp/sally2) <--alive until stopped (or idle timemout)\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|->XMLSERVICE (/tmp/sally3) <--alive until stopped (or idle timemout)\n : |\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--.\n\n3 XMSLERVICE jobs handle work for all sally clients using the site\n\nExample new Toolkit (hybrid \"private/persistent\" connection)::\n\n $maxpool = 40; // 40 jobs good enough to handle my machine needs\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n\n $internalKey = '/tmp/packers'.rand(1,$maxpool);\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'InternalKey'=>$internalKey, // *** RIGHT HERE internalKey/IPC $maxpool jobs for service\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n\n /* Do not use the disconnect() function for \"state full\" connection */\n /* NEVER EVER USE THIS ... $ToolkitServiceObj->disconnect(); */\n /* Why? *immed kill of job, not nice or sync, just kill */\n /* Use idle timeout for \"state full\" / \"private\" connections */\n\n\n* So simple, why does it work???\n\n + Works much same as Apache FastCGI PHP jobs (even using random), because $maxpool \"child XMLSERVICE workers\" can be increased to match machine workload (tinker-trial-error) ... just like Apache threads ... just like PHP children ... 
all the same\n + Most web requests are sub-second, so even on routing collision by random it is a short wait.\n\n\n* Could i dedicate different pools to different tasks ???\n\n + Yes, a bag full of really low effort work ``(/tmp/packers1-40, /tmp/vikings1-40)``.\n\n\n* Could i dedicate different user ids to different pools as well as tasks???\n\n + Yes, a bag full of really low effort work ``(/tmp/packers1-40, /tmp/vikings1-40)``.\n\n\n* Can i idle timeout unused XMLSERVICE jobs ???\n + Yes of course, toolkit.ini setting or specify manually.\n + NEVER EVER USE THIS ... ``$ToolkitServiceObj->disconnect();``\n\n\n* Should i use persistent connections??\n\n + db2_pconnect -- Yes of course, it will save the time \"attaching\" a QSQSRVR job\n + db2_connect -- However, you can use same technique with full open/close (yes it does work, try it)\n\n\n* Can i prestart jobs?\n\n + Yes, but they will start on web demand which i think is much better (just like Apache)\n\n ::\n\n SBMJOB CMD(CALL PGM(ZENDSVR/XMLSERVICE) PARM('/tmp/packers1')) JOBD(ZENDSVR/ZSVR_JOBD) USER(PACKERS)\n SBMJOB CMD(CALL PGM(ZENDSVR/XMLSERVICE) PARM('/tmp/vikings1')) JOBD(ZENDSVR/ZSVR_JOBD) USER(VIKINGS)\n\n\n\n4) State full -- reservation hybrid \"private/persistent\" connection\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\nThese connections are hybrid \"private/persistent\" connection exclusively held for a period of time by each requesters, but returned back to pool for re-use.\n\nIf your client needs to start/use/stop a reservation hybrid \"private/persistent\" connection, use the appropriate keyword in your XML sent to XMLSERVICE to gain exclusive rights to the hybrid \"private/persistent\" connection.\n\n* <start>unique-user-key</start> -- acquire exclusive IPC if available\n* <use>unique-user-key</use> -- must appear XML every request job held forever until see <stop>\n* <stop>unique-user-key</stop> -- release IPC for any other use\n* Errors:\n\n + 
<use>no-match-user-key</use> -- non-matching key results in error almost instantly (no wait)\n ::\n\n busy response (1301060):\n <error>\n <errnoxml>1301060</errnoxml>\n <xmlerrmsg>IPC owner busy</xmlerrmsg>\n </error>\n\n + thoughtful setting server idle timeout can control unwanted reservation hangs due to careless users or errors ** $ctl .= \" \\*idle(60)\" **\n\n**hybrid \"private/persistent\" connection with reservation**\n\n::\n\n $maxpool = 3;\n // -- PHP raw ---\n $ctl = \"*sbmjob\";\n $ipc = \"/tmp/sally\".rand(1,$maxpool);\n // -- or PHP toolkit (not available yet -- Alan) --\n\n (1) (2) (3) (4)\n Apache FastCGI DB2 (server mode) XMLSERVICE\n ------- --------------- --------------------- ----------\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--.\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|\n : |\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|->XMLSERVICE (/tmp/sally1) <--alive until stopped (or idle timemout)\n <start>unique-user-key</start> <--exclusive reservation until stopped\n <use>unique-user-key</use>\n <stop>unique-user-key</stop>\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|->XMLSERVICE (/tmp/sally2) <--alive until stopped (or idle timemout)\n <start>unique-user-key</start> <--exclusive reservation until stopped\n <use>unique-user-key</use>\n <stop>unique-user-key</stop>\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|->XMLSERVICE (/tmp/sally3) <--alive until stopped (or idle timemout)\n <start>unique-user-key</start> <--exclusive reservation until stopped\n <use>unique-user-key</use>\n <stop>unique-user-key</stop>\n : |\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--|\n -->thread--socket->php-cgi--->QSQSRVR(profile sally)--.\n\n3 XMSLERVICE jobs handle work for all sally clients using the site\nHowever, reservation locks exclusive use until reservation is stopped.\n\nExample new Toolkit (hybrid \"private/persistent\" connection with reservation)::\n\n --- unfortunately reservation is 
not available in PHP wrapper yet (Alan) ---\n --- raw xml pseudo code version of what happens follows start/use/stop ---\n -- no time out --\n $ctl .= \" *idle(0)\"\n -- request 1 --\n <?xml version=\"1.0\"?>\n <script>\n <start>unique-user-key</start>\n </script>\n -- request 2 (two minutes later) --\n <?xml version=\"1.0\"?>\n <script>\n <use>unique-user-key</use>\n <cmd exec='rexx'>RTVJOBA USRLIBL(?)</cmd>\n </script>\n -- request 3 (1/2 hour later) --\n <?xml version=\"1.0\"?>\n <script>\n <use>unique-user-key</use>\n <pgm name='ZZCALL'>\n <parm>\n <data type='1A'>a</data>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\n </script>\n -- request n (2 hours later) --\n <?xml version=\"1.0\"?>\n <script>\n <stop>unique-user-key</stop>\n </script>\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEConnect?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n\n"
},
{
"alpha_fraction": 0.5924826860427856,
"alphanum_fraction": 0.6261127591133118,
"avg_line_length": 29.636363983154297,
"blob_id": "d961b5ddf73940067b8642561512166d73f25dc6",
"content_id": "975dd814bb87b6e7d40b84cdd1e6ccacdd7a26ac",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1011,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 33,
"path": "/test/php/xmlservice_ignore_userid.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\nfunction ibm_db2_Status($myfile) {\n $rc = 0;\n $output = `cat $myfile`;\n if (strpos($output,'sion=ibm_db2')>0) $rc += 1;\n if (strpos($output,'i5_ignore_userid')>0) $rc += 1;\n if (strpos($output,'pear test only')>0) $rc += 1;\n return $rc;\n}\n\nfunction ibm_db2_IgnoreOn() {\n global $curr_ibm_db2_file, $save_ibm_db2_file, $here_ibm_db2_file;\n // already set\n $rc = ibm_db2_Status($curr_ibm_db2_file);\n if ($rc < 3) {\n // assure backup\n $diff = trim(`diff $curr_ibm_db2_file $save_ibm_db2_file`);\n if ($diff) `cp $curr_ibm_db2_file $save_ibm_db2_file`; // curr to save\n // set up ignore_userid\n `cp $here_ibm_db2_file $curr_ibm_db2_file`; // here to curr\n }\n echo `cat $curr_ibm_db2_file`;\n}\n\nfunction ibm_db2_IgnoreOff() {\n global $curr_ibm_db2_file, $save_ibm_db2_file, $here_ibm_db2_file;\n // using pear test ini, then return to normal\n $rc = ibm_db2_Status($curr_ibm_db2_file);\n if ($rc > 2) `cp $save_ibm_db2_file $curr_ibm_db2_file`; // save to curr\n echo `cat $curr_ibm_db2_file`;\n}\n\n?>\n"
},
{
"alpha_fraction": 0.552657961845398,
"alphanum_fraction": 0.570712149143219,
"avg_line_length": 30.125,
"blob_id": "9794785b5a612fe205c46efa52e6f943cc4425ce",
"content_id": "7dfbfcfcf8dc94dd90cba848ee3ff617d79685fd",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 997,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 32,
"path": "/test/php/skipifrest.inc",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// see license here\nrequire_once('license.inc');\nrequire_once('authorization.php');\nerror_reporting(0); // **** NO ERRORS please\n$clobIn = \"<?xml version='1.0'?><script><sh>/QOpenSys/usr/bin/uname -a</sh></script>\";\n$clobOut = \"\";\n$parm = \"?db2=$i5restdb\";\n$parm .= \"&uid=$user\";\n$parm .= \"&pwd=$password\";\n$parm .= \"&ipc=$ipc\";\n$parm .= \"&ctl=$ctl\";\n$parm .= \"&xmlin=\".urlencode($clobIn);\n$parm .= \"&xmlout=32768\"; // size expected XML output\n$linkall = \"$i5resturl\".htmlentities($parm);\n$xmlobj = simplexml_load_file($linkall);\nif (!$xmlobj) die('skip');\n$sh = $xmlobj->xpath('/script/sh');\nif (!$sh) die('skip');\n$data = (string)$sh[0];\nif (!$data) die('skip');\nif (isset($_SERVER['PHP_SELF'])) $HANDLER = $_SERVER['PHP_SELF'];\nelse $HANDLER = $argv[0];\nif (strpos($data,\" 4 5 \")>0 && strpos($HANDLER,\"V6\")>0) die('skip');\n$clobIn = \"\";\n$clobOut = \"\";\n$sh = \"\";\n$data = \"\";\n$HANDLER = \"\";\n$xmlobj = \"\";\nerror_reporting(E_ALL); // **** OK ERRORS please\n?>\n\n"
},
{
"alpha_fraction": 0.590938925743103,
"alphanum_fraction": 0.6152330636978149,
"avg_line_length": 26.178571701049805,
"blob_id": "f8dc70101ccbe85a20a7372e7152b95df5ddea25",
"content_id": "6c5ad01dfc8c3b8190d060b72954eb573183396f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1523,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 56,
"path": "/test/php/test_40513_nocall_empty_1a.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: DSUDIMVAL1\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$ctl .= \" *test\";\n$zz0cnt = 3;\n$clobIn = getxml($zz0cnt);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobIn);\nif (!$xmlobj) die(\"Fail XML input\\n\");\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// good\necho \"Success\\n\";\n\nfunction getxml($zz0cnt) {\n$clob = <<<ENDPROC\n<?xml version='1.0' encoding='ISO-8859-1' ?>\n<script>\n<pgm name='JUSTONE'>\n<parm io='in' comment='test mode flag'><data type='1A' var='tmode' /></parm>\n</pgm>\n</script>\nENDPROC;\n$was = array(\"zz0cnt\");\n$now = array(\"$zz0cnt\");\nreturn str_replace($was,$now,$clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.497775673866272,
"alphanum_fraction": 0.5277484059333801,
"avg_line_length": 38.26200866699219,
"blob_id": "d67e875e0ad0f9e5800ad82a1f3cb9a6d379b331",
"content_id": "85356c4c615d6d3671a7975d8e23a2233a5c58bd",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 71932,
"license_type": "permissive",
"max_line_length": 522,
"num_lines": 1832,
"path": "/docs/intro.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE Introduction\n=======================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nXMLSERVICE Introduction\n-----------------------\n\nXMLSERVICE is Open Source RPG, created for web programming simplicity and deployment flexibility, \nany language, any transport, avoiding complexities dealing with big package 'web services'. \nInternally, XMLSERVICE is designed Plain Old XML (POX), which enables simple REST XML protocol, \navoiding complexity of SOAP/WSDL based Web Services. XMLSERVICE is 100% RPG, never requires Java, \nPHP, or any other front end language to serve your IBM i RPG programs to the web. In fact, \nXMLSERVICE is so simple, that you can almost run your entire IBM i machine using only simple \nHTML/XML (demonstrated later section).\n\nXMLSERVICE is a single library of Open Source RPG code that enables XML scripting calls of System i resources.\nXMLSERVICE RPG server library does not require other language products to run on your IBM i, however language teams often provide a client toolkit to greatly simplify XML calls to XMLSERVICE.\n\nXMLSERVICE, as name implies, enables XML services on your IBM i machine. XMLSERVICE library is simply \na collection of Open Source RPG modules that essentially allow your to access anything on your IBM i machine, \nassuming proper profile authority. Simply stated, XMLSERVICE accepts XML document containing actions/parameters \n``(<pgm>,<cmd>,<sh>,<sql>,etc.)``, performs requested operations on IBM i, then sends an XML document of results back to the client. This simple concept has become a powerful tool for scripting language clients on IBM i including Zend Server PHP Toolkit and PowerRuby Toolkit. 
However, PHP and Ruby are not unique to XMLSERVICE; XML is a universal protocol supported by nearly every language, therefore nearly any language can use XMLSERVICE, and most any transport between client/server.\n\nInstallation\n------------\n**Build requirements**\n\nBuilding requires Python 3 and GNU make. These can be installed with yum: yum install python3 make-gnu\n\nYou will also need the ILE RPG compiler installed (5770-WDS option 31) along with the following PTFs::\n\n 7.3: SI62605\n 7.2: SI62604\n 7.1: SI62580\n\n**Building**\n\n::\n\n PATH=/QOpenSys/pkgs/bin:$PATH\n\n git clone https://github.com/IBM/xmlservice.git\n\n cd xmlservice\n\n python3 ./configure\n\n make\n\n**Customizing the Build**\n\nYou can customize the build by passing options to configure::\n\n --library: set the build library\n --debug: set the debug level (DBGVIEW CL parameter)\n\n\nREST interface\n^^^^^^^^^^^^^^\n\nOptional: Set up your Apache to enable XMLSERVICE REST using RPG program XMLCGI.PGM included in the XMLSERVICE installation, then restart your Apache instance.\n\n::\n\n /www/apachedft/conf/httpd.conf:\n ScriptAlias /cgi-bin/ /QSYS.LIB/XMLSERVICE.LIB/\n <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n AllowOverride None\n order allow,deny\n allow from all\n SetHandler cgi-script\n Options +ExecCGI\n </Directory>\n\n stop/start Apache instance:\n ENDTCPSVR SERVER(*HTTP) HTTPSVR(APACHEDFT)\n STRTCPSVR SERVER(*HTTP) HTTPSVR(APACHEDFT)\n\nOperation and transports\n------------------------\n\nXMLSERVICE XML documents can be transported between IBM i and clients over any connection, any language.\n\n\nXMLSERVICE APIs (included)\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe XMLSERVICE library includes language transports for popular REST and DB2 connections, which fulfills the needs of most internet services applications.\n\n* XMLSERVICE/XMLCGI.PGM -- RPG CGI HTTP/REST method GET or POST (traditional web service interface)\n \n * http://myibmi/cgi-bin/xmlcgi.pgm?db2=x@uid=x@pwd=x@ipc=x@ctl=x@xmlin=x@xmlout=x\n\n* 
XMLSERVICE/XMLSTOREDP.SRVPGM -- RPG DB2 stored procedure (IBM i's premier DB2 for i)\n\n * DB2 drivers local/remote with stored procedure IN/OUT capabilities (traditional DB2 interface)\n\n ::\n\n iPLUG4K (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CHAR(4064), OUT XMLOUT CHAR(4064))\n iPLUG32K (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(32000), OUT XMLOUT CLOB(32000))\n iPLUG65K (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(65K), OUT XMLOUT CLOB(65K))\n iPLUG512K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(512K), OUT XMLOUT CLOB(512K))\n iPLUG1M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(1M), OUT XMLOUT CLOB(1M))\n iPLUG5M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(5M), OUT XMLOUT CLOB(5M))\n iPLUG10M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(10M), OUT XMLOUT CLOB(10M))\n iPLUG15M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(15M), OUT XMLOUT CLOB(15M))\n\n * DB2 drivers local/remote without stored procedure IN/OUT capabilities (loop fetch required)\n \n ::\n\n iPLUGR4K (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CHAR(4064))\n iPLUGR32K (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(32000))\n iPLUGR65K (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(65K))\n iPLUGR512K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(512K))\n iPLUGR1M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(1M))\n iPLUGR5M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(5M))\n iPLUGR10M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(10M))\n iPLUGR15M (IN IPC CHAR(1024), IN CTL CHAR(1024), IN XMLIN CLOB(15M))\n \n* XMLSERVICE/XMLSTOREDP.SRVPGM -- optional custom transport (programmers only)\n \n * if included XMLSERVICE transports do not fill your need, please feel free to create your own (sockets, data queues, ftp, etc.). 
Multiple entry APIs exist in XMLSERVICE that you may find useful:\n\n ::\n\n xmlstoredp.srvpgm - *SRVPGM interface for calls\n\n Native stored procedure call target (iPLUG4K - iPLUG15M):\n D iPLUG4K PR 1N extproc(*CL:'iPLUG4K')\n D pIPC 1024A\n D pCtl 1024A\n D pXmlIn *\n D pXmlOut *\n\n RPG call target:\n D runClient PR 1N\n D pIPCSP 1024A\n D pCtl 1024A\n D pIClob *\n D szIClob 10i 0\n D pOClob *\n D szOClob 10i 0\n\n PASE call target (also use RPG when CCSID issues):\n D runASCII PR 1N\n D pIPCSP2 *\n D szIPCSP2 10i 0\n D pCtlSP2 *\n D szCtlSP2 10i 0\n D pIClob2 *\n D szIClob2 10i 0\n D pOClob2 *\n D szOClob2 10i 0\n D ccsidPASE2 10i 0\n D ccsidILE2 10i 0\n\n\nIBM i CCSID 65535\n^^^^^^^^^^^^^^^^^\n\nXMLSERVICE REST and DB2 connections have implicit CCSID conversion between client and server, therefore your XMLIN/XMLOUT XML document will be implicitly CCSID converted by the transport layer, to wit, XMLSERVICE should just work. However, IBM i CCSID 65535 (hex), will destroy the entire XMLSERVICE scheme, and you will witness horrible hangs, junk data, dead processes, etc., so please take action on your IBM i to live with the modern ASCII world (all clients, all new scripting languages, remote/local including PASE).\n\n**Check your IBM i for CCSID convert safety.**\n\nIf you see DSPSYSVAL 65535, you have a big problem with the ASCII world, but you can take corrective IBM i action.\n\n::\n\n DSPSYSVAL SYSVAL(QCCSID)\n Coded character set\n identifier . . . . . 
: 65535 1-65535\n \n\nHere are some corrective IBM i suggestions (CCSID 37 as an example), but remember you have to end current running jobs and restart for the CCSID changes to take effect (including XTOOLKIT jobs):\n\n* change system ccsid\n\n::\n\n CHGSYSVAL SYSVAL(QCCSID) VALUE(37)\n \n* change Apache instances (/www/instance/httpd.conf)\n\n::\n\n # protect FastCGI against bad CCSID (dspsysval qccsid 65535)\n DefaultFsCCSID 37\n CGIJobCCSID 37\n\n* change user profile(s)\n\n::\n\n CHGUSRPRF USRPRF(FRED) CCSID(37)\n \n\nConnection public or private\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nXMLSERVICE parameters CTL and IPC enable two types of connections.\n\n* public connection -- XMLSERVICE stateless, many jobs, every use a fresh start\n \n * ``CTL='*here' , IPC=\"*NA\"``\n\n * \"public\" because any IBM i profile use is a fresh start\n * life scope -- life of a single XML IN/OUT request\n\n* private connection -- XMLSERVICE state full, single job, where a given profile is routed to XTOOLKIT job(s)\n\n + ``CTL='*sbmjob' , IPC=\"/tmp/myjob1\"``\n\n + ``CTL='*sbmjob'`` -- if not running, submit a new XTOOLKIT job\n + ``IPC=\"/tmp/myjob1\"`` -- all requests of IPC route to XTOOLKIT job (\\*sbmjob)\n + \"private\" because ONE IBM i profile owns XTOOLKIT job, any other IBM i profile will fail to attach\n + life scope -- forever, until ended by operator or user program ends XTOOLKIT job (like a 5250)\n\n**When to use XMLSERVICE public or private?**\n\nOf course any discussion that presumes to predict usage is likely to have legitimate user exceptions, but a guideline may be appropriate for use of XMLSERVICE public vs. private connection.\n\n**Web programming style (public, stateless)**\n\nXMLSERVICE public connections are generally used when an IBM i resource can be called once by many different user profiles, \ncurrent data returned, no lasting persistent data needed by the IBM i resource. 
The programmer meta description of \ntransient resource services is \"stateless programming\".\n\nXMLSERVICE public \"stateless\"\n(CTL='\\*here', IPC='\\*NA')\n\n- profile FRED (any public QSQ)\n- profile SALLY (any public QSQ)\n- profile RITA (any public QSQ)\n- profile XAVIER (any public QSQ)\n\nXMLSTOREDP->XMLSERVICE (QSQ)\n\n- QSQ temporary profile use (stateless)\n- QSQ return to pool on script end\n- XMLSERVICE restart every request (web style)\n\n\nAlthough very handy, clean wrkactjob, no hanging locks, etc., public connection is not a high performance use of XMLSERVICE.\n\n* public connection -- XMLSERVICE stateless, many jobs, every use a fresh start\n \n + ``CTL='*here' , IPC=\"*NA\"`` -- profile FRED (RPG company stock service)\n + ``CTL='*here' , IPC=\"*NA\"`` -- profile SALLY (RPG IRS flat tax rate service)\n + ``CTL='*here' , IPC=\"*NA\"`` -- profile RITA (RPG calculate pound to kilo)\n + ``CTL='*here' , IPC=\"*NA\"`` -- profile XAVIER (RPG company stock service)\n\n**Traditional programming style (private, state full)**\n\nXMLSERVICE private connections are generally used when IBM i resource will be called many times by the same user profile, lasting persistent data needed by the IBM i resource (RPG variables, open files, etc.). 
The programmer meta description of required data services is \"state full programming\".\n\nXMLSERVICE private \"state full\"\n(CTL='\\*sbmjob', IPC='/tmp/xxxx')\n\n- profile FRED XTOOLKIT myjob1,myjob2 (private)\n- profile SALLY XTOOLKIT sallyjob1 (private)\n- profile RITA XTOOLKIT nursejob (private)\n- profile XAVIER XTOOLKIT xjob1 - xjob5 (private)\n\nXMLSTOREDP (QSQ)\n\n- QSQ temporary profile use (stateless)\n- QSQ return to pool on script end\n\nXMLSERVICE (XTOOLKIT)\n\n- XTOOLKIT owned by profile (private)\n- XTOOLKIT job never ends (until killed)\n- XTOOLKIT full state programming (5250 style)\n\n\nTraditional RPG programs usually need to track local variables, open files, etc., between requests, both for correct functionality and performance. The XTOOLKIT jobs that result from ``CTL='*sbmjob' , IPC=\"/tmp/xxxx\"`` are similar to 5250 jobs, where a user profile signs on, then uses 5250 job to run other programs (aka XMLSERVICE design), also, like multiple 5250 sessions (PC emulators), many different XTOOLKIT jobs can be used by the same user profile.\n\n* private connection -- XMLSERVICE state full, single job, where a given profile is routed to same XTOOLKIT job\n \n + ``CTL='*sbmjob' , IPC=\"/tmp/myjob1\"`` -- profile FRED XTOOLKIT myjob1 (Fred's use only RPG payroll application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/myjob2\"`` -- profile FRED XTOOLKIT myjob2 (Fred's use only RPG inventory application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/sallyjob1\"`` -- profile SALLY XTOOLKIT sallyjob1 (Sally's use only RPG admissions application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/nursejob\"`` -- profile RITA XTOOLKIT nursejob (Rita's use only RPG nurse scheduling application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/xjob1\"`` -- profile XAVIER XTOOLKIT xjob1 (Xavier's use only RPG payroll application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/xjob2\"`` -- profile XAVIER XTOOLKIT xjob2 (Xavier's use only RPG inventory application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/xjob3\"`` -- profile XAVIER 
XTOOLKIT xjob3 (Xavier's use only RPG admissions application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/xjob4\"`` -- profile XAVIER XTOOLKIT xjob4 (Xavier's use only RPG nurse scheduling application)\n + ``CTL='*sbmjob' , IPC=\"/tmp/xjob5\"`` -- profile XAVIER XTOOLKIT xjob5 (Xavier's use only RPG super hero application)\n + IBM i programmer: Profile Xavier is using many XTOOLKIT jobs (xjob1 - xjob5), many different applications, but Xavier cannot use Fred's, Sally's or Rita's XTOOLKIT jobs (myjob1,myjob2,sallyjob1,nursejob), because Xavier does not own other profile XTOOLKIT jobs. XTOOLKIT jobs should be an easy pattern for RPG programmers familiar with single session 5250 job(s), owned by a profile, one thing at a time (not threaded), long running RPG programs, many IBM i files open, etc.\n \n * However, XTOOLKIT jobs have an interesting characteristic that 5250 emulator jobs cannot match: profile owned XTOOLKIT jobs can be accessed by many different client devices all at the same time, to wit, Xavier can use a laptop and a smart phone to access all jobs at the same time (xjob1 - xjob5), or Xavier can leave his work laptop running (connected), go home, have dinner, and continue working all XTOOLKIT jobs from the family iPad.\n\n * IBM i operator: You may wrkactjob and kill ``*immed`` XTOOLKIT jobs (same as 5250).\n \n * However, a cleaner ipcs administrative solution may be to write a custom XMLSERVICE program ``CTL='*immed' , IPC=\"/tmp/myjob1\"``, to remove \"in the way\" XTOOLKIT jobs (suggestion only).\n\nOpen Source goal\n----------------\n\nXMLSERVICE is constantly growing Open Source, so new functions are added over time. \nThe XMLSERVICE goal is never to impact existing applications with new features; to date, \nXML's natural ability to add keywords and attributes has been very successful in keeping this goal. \n\nXMLSERVICE CMDs\n^^^^^^^^^^^^^^^\n::\n\n <?xml version='1.0'?>\n <script>\n <cmd exec='rexx'>RTVJOBA USRLIBL(?) 
SYSLIBL(?)</cmd>\n <cmd>DLTDTAQ DTAQ(MYLIB/MYDATAQ)</cmd>\n <cmd>CRTDTAQ DTAQ(MYLIB/MYDATAQ) MAXLEN(100) AUT(*EXCLUDE)</cmd>\n </script>\n\nXMLSERVICE PASE\n^^^^^^^^^^^^^^^\n\n::\n\n <?xml version='1.0'?>\n <script>\n <sh rows='on'>/QOpenSys/usr/bin/ls -l /tmp</sh>\n <sh rows='on'>/QOpenSys/usr/bin/system -i 'wrkactjob'</sh>\n </script>\n\nXMLSERVICE DB2\n^^^^^^^^^^^^^^\n\n::\n\n <?xml version='1.0'?>\n <script>\n <sql>\n <options options='noauto' autocommit='off'/>\n <connect conn='myconn' options='noauto'/>\n <query conn='myconn' stmt='myupdate'>UPDATE animal SET id = 9 where ID = 3</query>\n <query conn='myconn' stmt='myselect'>select count(*) from animal where ID = 9</query>\n <fetch stmt='myselect' block='all' desc='off'/>\n <free stmt='myselect'/>\n <commit conn='myconn' action='rollback'/>\n <query conn='myconn' stmt='myselect'>select count(*) from animal where ID = 9</query>\n <fetch stmt='myselect' block='all' desc='off'/>\n <free/>\n </sql>\n </script>\n\n\nXMLSERVICE PGMs, SRVPGMs, APIs\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n PGM:\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZCALL' lib='XMLSERVICE'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\n </script>\n\n SRVPGM:\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='XMLSERVICE' func='ZZVARY'>\n <parm comment='search this name' io='in'>\n <data var='myName' type='10A' varying='on'><![CDATA[<Ranger>]]></data>\n </parm>\n <return>\n <data 
var='myNameis' type='20A' varying='on'><![CDATA[<Mud>]]></data>\n </return>\n </pgm>\n </script>\n\n System APIs:\n <?xml version='1.0'?>\n <script>\n <pgm name='QSNDDTAQ'>\n <parm io='in'>\n <data type='10A'>MYDATAQ</data>\n </parm>\n <parm io='in'>\n <data type='10A'>XMLSERVICE</data>\n </parm>\n <parm io='in'>\n <data type='5p0'>50</data>\n </parm>\n <parm io='in'>\n <data type='100A'>System i data queues forever</data>\n </parm>\n </pgm>\n </script>\n\n\nXMLSERVICE HTML/XML interface\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nUsing your IBM i and XMSERVICE download (see installation), without writing one line of code in any language, we can already check out XMLSERVICE functions using HTML/XML forms. We should note, that by any standard the following trivial XMLSERVICE example is clearly REST web services, but no SOAP, no WSDL, no Java, no PHP, no Ruby, nothing but 2 cents worth of HTML/XML.\n\n\n**XMLSERVICE input** - Plain Old XML input to XMLSERVICE for request ``select * from db2/animals``.\n\n::\n\n <myscript>\n <sql>\n <query>select * from db2/animals</query>\n <fetch block='all' desc='on'></fetch>\n </sql>\n </myscript>\n\n**XMLSERVICE output** - Plain Old XML output from XMLSERVICE with records returned from ``db2/animals``.\n\n::\n\n <myscript>\n <query conn='conn1' stmt='stmt1'>\n <success><![CDATA[+++ success select * from db2/animals]]></success>\n </query>\n <fetch block='all' desc='on' stmt='stmt1'>\n <row>\n <data desc='ID'><![CDATA[0]]></data>\n <data desc='BREED'><![CDATA[cat]]></data>\n <data desc='NAME'><![CDATA[Pook]]></data>\n <data desc='WEIGHT'><![CDATA[3.20]]></data>\n </row>\n </myscript>\n\n\nInstructions for your IBM i machine\n-----------------------------------\n\n**Step 1)** - Add XMLSERVICE to any Apache instance (APACHEDFT)\n\nSet up your Apache to enable XMLSERVICE REST using RPG program XMLCGI.PGM included in XMLSERVICE installation, then restart your Apache instance.\n\n::\n\n /www/apachedft/conf/httpd.conf:\n ScriptAlias /cgi-bin/ 
/QSYS.LIB/XMLSERVICE.LIB/\n <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n AllowOverride None\n order allow,deny\n allow from all\n SetHandler cgi-script\n Options +ExecCGI\n </Directory>\n\n start Apache instance:\n STRTCPSVR SERVER(*HTTP) HTTPSVR(APACHEDFT)\n\n\n**Step 2)** - Ready to use XMLSERVICE for HTML/XML\n\nCut/Paste following HTML/XML form to your laptop Desktop/strsql.html:\n\n* change action target to your actual machine ``action=\"http://myibmi/cgi-bin/xmlcgi.pgm\"``\n* point your favorite browser at the HTML file ``file:///home/adc/Desktop/strsql.html``\n \n * enter database (\\*LOCAL), user (your profile), password (your password)\n * enter a SQL query in HTML strsql command and press button ``STRSQL`` \n\n::\n\n desktop/strsql.html:\n <html>\n <head>\n <script>\n function getVal() {\n xml = \"<?xml version='1.0'?>\";\n xml += \"<myscript>\";\n xml += \"<sql>\";\n xml += \"<query>\";\n xml += document.getElementById('strsql').value;\n xml += \"</query>\";\n xml += \"<fetch block='all' desc='on'>\";\n xml += \"</fetch>\";\n xml += \"</sql>\";\n xml += \"</myscript>\";\n document.getElementById('xmlin').value = xml;\n }\n </script>\n </head>\n <body>\n <h3>STRSQL</h3>\n <form onsubmit=\"getVal();\" name=\"input\" action=\"http://myibmi/cgi-bin/xmlcgi.pgm\" method=\"post\">\n <br><input type=\"text\" name=\"db2\" value=\"*LOCAL\" size=\"40\" > database\n <br><input type=\"text\" name=\"uid\" value=\"MYUID\" size=\"40\" > user\n <br><input type=\"password\" name=\"pwd\" value=\"MYPWD\" size=\"40\" > password\n <input type=\"hidden\" name=\"ipc\" value=\"*NA\">\n <input type=\"hidden\" name=\"ctl\" value=\"*here *cdata\">\n <input type=\"hidden\" name=\"xmlin\" id=\"xmlin\" value=\"na\">\n <input type=\"hidden\" name=\"xmlout\" value=\"500000\">\n <br><input type=\"text\" name=\"strsql\" id=\"strsql\" size=\"40\" /> strsql command (select * from db2/animals)\n </table>\n <br><br><input type=\"submit\" value=\"STRSQL\" />\n </form>\n </body>\n 
</html>\n\n\n**desktop/strsql.html example**\n\nAs the strsql.html name implies, this simple html enables STRSQL running from your laptop to your IBM i. Enter any SQL statement you wish in the html form and XMLSERVICE will run just like STRSQL green screen when you press the ``STRSQL`` button. XMLSERVICE output returned will be XML (of course), so if your browser has issues displaying XML, you may have to view page source.\n\nHTML form strsql.html uses the simple JavaScript function ``getVal()`` with ``document.getElementById('strsql').value``, which reads HTML text input ``<input id='strsql'`` and adds the user SQL request to the XML document in HTML text input ``<input id=\"xmlin\"``. All XMLSERVICE required REST tag elements can be seen in the following HTML form ``http://myibmi/cgi-bin/xmlcgi.pgm?db2=*LOCAL@uid=MYUID@pwd=MYPWD@ipc=*NA@ctl=\"*here *cdata\"@xmlin=(see JavaScript)@xmlout=500000``, but we will cover this in later sections.\n\n**What is happening?**\n\nIf we change ``method=\"post\"`` to ``method=\"get\"``, the fully encoded document will appear on the browser URL line. 
As you can see, when input arrives from our browser (or REST client), XMLSERVICE/XMLCGI.PGM must do much HTTP decoding before actually parsing the XML document and servicing the request.\n\n::\n\n <form onsubmit=\"getVal();\" name=\"input\" action=\"http://myibmi/cgi-bin/xmlcgi.pgm\" method=\"post\">\n -- change --\n <form onsubmit=\"getVal();\" name=\"input\" action=\"http://myibmi/cgi-bin/xmlcgi.pgm\" method=\"get\">\n\n The following cut/paste is one continuous browser line (split for viewing):\n http://myibmi/cgi-bin/xmlcgi.pgm?db2=*LOCAL\n &uid=MYUID\n &pwd=MYPWD\n &ipc=*NA\n &ctl=*here+*cdata\n &xmlin=%3C%3Fxml+version%3D%271.0%27%3F%3E\n %3Cmyscript%3E\n %3Csql%3E\n %3Cquery%3Eselect+*+from+db2%2Fanimals\n %3C%2Fquery%3E\n %3Cfetch+block%3D%27all%27+desc%3D%27on%27%3E\n %3C%2Ffetch%3E\n %3C%2Fsql%3E\n %3C%2Fmyscript%3E\n &xmlout=500000\n &strsql=select+*+from+db2%2Fanimals\n\n\nThe flow:\n\n* We point our browser at ``file:///home/adc/Desktop/strsql.html``, enter a SQL query and press the ``STRSQL`` button\n* IBM i Apache XMLCGI.PGM receives our encoded HTML/XML form ``action=\"http://myibmi/cgi-bin/xmlcgi.pgm\"``\n* XMLCGI.PGM calls XMLSERVICE.PGM (using DB2 stored procedures iPLUGxxx, but you do not need to know this yet).\n* XMLSERVICE.PGM parses XML input and runs the internal DB2 driver ``<sql><query>...</query></sql>``.\n* XMLSERVICE.PGM parses the result set from the DB2 driver into output XML document ``<sql><fetch/></sql>``\n* browser sees XML return of DB2 data\n\nXMLSERVICE is Open Source, so you can examine the internals of XMLCGI.PGM; for now, we simply need to understand that XMLCGI.PGM decodes the HTML/XML document, passes the XML document request to XMLSERVICE.PGM (DB2 request example), and returns XML output to the client (browser). 
If the REST client is not a browser, but a scripting language like PHP or Ruby, the exact same sequence occurs, except additionally most languages offer an XML parser to parse output XML into variables or structures (PHP Toolkit or Ruby Toolkit).\n\nQuick test functions HTML/XML\n-----------------------------\n\nThe XMLSERVICE HTML/XML technique can be used for nearly anything on the IBM i machine: CMD, PGM, SRVPGM, system APIs, PASE utilities, DB2, etc. Feel free to copy the strsql.html form, modify, and try other XMLSERVICE functions ``<myscript>other functions</myscript>``. The HTML/XML technique is a very handy way to test a potential XMLSERVICE program service without writing a line of code, and clearly demonstrates the elegant simplicity embodied by XMLSERVICE.\n\nXMLSERVICE DB2 interface\n^^^^^^^^^^^^^^^^^^^^^^^^\n\nA DB2 connection is not a web service, but many languages support high speed DB2 local/remote requests, so XMLSERVICE includes a stored procedures interface (iPLUG4K - iPLUG15M). The nature of DB2 stored procedures requires a size specified on in/out parameters, therefore the XMLSERVICE library includes various iPLUGxxx sizes to fit your XML document data needs (4K, 32K, 65K, 512K, 1M, 5M, 10M, up to 15M).\n\nWe should note, XMLSERVICE DB2 is much faster than the REST interface, so many language toolkits offer the DB2 connection as the premier service.\n\n* RPG DB2 (no toolkit)\n\n::\n\n myIPC = '/tmp/thebears1';\n myCtl = '*sbmjob';\n // call XMLSERVICE/ZZCALL(...) 
using XML only\n myXmlIn =\n '<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>' + x'0D'\n + '<script>' + x'0D'\n + '<pgm name=\"ZZCALL\" lib=\"XMLSERVICE\">' + x'0D'\n + '<parm><data type=\"1a\" v=\"1\">Y</data></parm>' + x'0D'\n + '<parm><data type=\"1a\" v=\"2\">Z</data></parm>' + x'0D'\n + '<parm><data type=\"7p4\" v=\"3\">001.0001</data></parm>' + x'0D'\n + '<parm><data type=\"12p2\" v=\"4\">0003.04</data></parm>' + x'0D'\n + '<parm>' + x'0D'\n + ' <ds>' + x'0D'\n + ' <data type=\"1a\" v=\"5\">A</data>' + x'0D'\n + ' <data type=\"1a\" v=\"6\">B</data>' + x'0D'\n + ' <data type=\"7p4\" v=\"7\">005.0007</data>' + x'0D'\n + ' <data type=\"12p2\" v=\"8\">0000000006.08</data>' + x'0D'\n + ' </ds>' + x'0D'\n + '</parm>' + x'0D'\n + '</pgm>' + x'0D'\n + '</script>' + x'00';\n myXmlOut = *BLANKS;\n // make call to XMLSERVICE provided stored procedure(s)\n // sizes from iPLUG4k to iPLUG15M (see crtsql xmlservice package)\n Exec Sql call XMLSERVICE/iPLUG4K(:myIPC,:myCtl,:myXmlIn,:myXmlOut);\n\n* PHP DB2 (toolkit)\n\n::\n\n require_once(\"ToolkitService.php\");\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(\n array('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n 'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n 'plug'=>\"32K\")); // max size data i/o (iPLUG4K,32K,65K, 512K,1M,5M,10M,15M)\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARA', 'var1', 'Y');\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARB', 'var2', 'Z');\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'INDEC1', 'var3', '001.0001');\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'INDEC2', 'var4', '0000000003.04');\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARA', 'ds1', 'A');\n $ds[] = $ToolkitServiceObj->AddParameterChar 
('both', 1, 'DSCHARB', 'ds2', 'B');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'DSDEC1', 'ds3', '005.0007');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'DSDEC1', 'ds4', '0000000006.08');\n $param[] = $ToolkitServiceObj->AddDataStruct($ds);\n $result = $ToolkitServiceObj->PgmCall('ZZCALL', $testLib, $param, null, null);\n echo \"good so far ...\\n\";\n var_dump($result);\n\n\n* PHP DB2 (without toolkit)\n\n::\n\n $fast = false;\n $ipc = \"*NA\";\n $ctl = \"*here *cdata\";\n $xmlIn = \"<?xml version='1.0' encoding='ISO-8859-1'?>\n <script>\n <pgm name='ZZCALL' lib='ZENDSVR'>\n <parm><data type='1a'>Y</data></parm>\n <parm><data type='1a'>Z</data></parm>\n <parm><data type='7p4'>001.0001</data></parm>\n <parm><data type='12p2'>0000000003.04</data></parm>\n <parm>\n <ds>\n <data type='1a'>A</data>\n <data type='1a'>B</data>\n <data type='7p4'>005.0007</data>\n <data type='12p2'>0000000006.08</data>\n </ds>\n </parm>\n </pgm>\n </script>\";\n $xmlOut = '';\n if ($fast) $conn = db2_pconnect($db, $user, $pass); // persistent/pooled connection\n else $conn = db2_connect($db, $user, $pass); // full open/close connection\n if (!$conn) die(\"Bad connect: $db, $user\");\n $stmt = db2_prepare($conn, \"call $lib.$plug(?,?,?,?)\"); // Call XMLSERVICE\n // stored procedure interface\n // in/out parameter (xmlOut)\n // sizes: iPLUG4K - iPLUG15M\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN); // ? - /tmp/raw_$user (*sbmjob)\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN); // ? - *here or *sbmjob\n $ret=db2_bind_param($stmt, 3, \"xmlIn\", DB2_PARAM_IN); // ? - XML input script\n $ret=db2_bind_param($stmt, 4, \"xmlOut\", DB2_PARAM_OUT); // ? 
- XML output return\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n var_dump($xmlOut);\n \n\n\n PowerRuby DB2 (toolkit)\n -----------------------\n\n::\n\n require 'active_record'\n require 'xmlservice'\n\n ActiveRecord::Base.establish_connection(\n :adapter => 'ibm_db',\n :database => '*LOCAL',\n :username => 'MYUID',\n :password => 'MYPWD'\n )\n ActiveXMLService::Base.establish_connection(\n :connection => 'ActiveRecord',\n :install => \"XMLSERVICE\",\n :ctl => \"*here *cdata\",\n :ipc => \"*NA\",\n :size => 4096,\n :head => \"<?xml version='1.0'?>\"\n )\n zzcall = XMLService::I_PGM.new(\"zzcall\",\"xmlservice\")\n zzcall << XMLService::I_a.new('inchara',1,'a')\n zzcall << XMLService::I_a.new('incharb',1,'b')\n zzcall << XMLService::I_p.new('indec1',7,4,11.1111)\n zzcall << XMLService::I_p.new('indec2',12,2,222.22)\n zzcall << XMLService::I_DS.new('inds1',1,\n [XMLService::I_a.new('dschara',1,'x'),\n XMLService::I_a.new('dscharb',1,'y'),\n XMLService::I_p.new('dsdec1',7,4,66.6666),\n XMLService::I_p.new('dsdec2',12,2,77777.77)])\n zzcall.xmlservice\n puts zzcall.out_xml\n\n* PowerRuby DB2 (without toolkit)\n\n::\n\n require 'active_record'\n\n ipc = \"*NA\"\n ctl = \"*here *cdata\"\n xmlin = '<?xml version=\"1.0\"?>\n <script>\n <pgm name=\"ZZCALL\" lib=\"XMLSERVICE\">\n <parm io=\"both\">\n <data type=\"1A\">a</data>\n </parm>\n <parm io=\"both\">\n <data type=\"1A\">b</data>\n </parm>\n <parm io=\"both\">\n <data type=\"7p4\">11.1111</data>\n </parm>\n <parm io=\"both\">\n <data type=\"12p2\">222.22</data>\n </parm>\n <parm io=\"both\">\n <ds>\n <data type=\"1A\">x</data>\n <data type=\"1A\">y</data>\n <data type=\"7p4\">66.6666</data>\n <data type=\"12p2\">77777.77</data>\n </ds>\n </parm>\n <return>\n <data type=\"10i0\">0</data>\n </return>\n </pgm>\n </script>'\n xmlout = \"\"\n ActiveRecord::Base.establish_connection(\n :adapter => 'ibm_db',\n :database => '*LOCAL',\n :username => 'MYUID',\n :password => 'MYPWD'\n )\n 
conn = ActiveRecord::Base.connection.connection\n stmt = IBM_DB::prepare(conn, 'CALL XMLSERVICE.iPLUG512K(?,?,?,?)')\n ret = IBM_DB::bind_param(stmt, 1, \"ipc\", IBM_DB::SQL_PARAM_INPUT)\n ret = IBM_DB::bind_param(stmt, 2, \"ctl\", IBM_DB::SQL_PARAM_INPUT)\n ret = IBM_DB::bind_param(stmt, 3, \"xmlin\", IBM_DB::SQL_PARAM_INPUT)\n ret = IBM_DB::bind_param(stmt, 4, \"xmlout\", IBM_DB::SQL_PARAM_OUTPUT)\n ret = IBM_DB::execute(stmt)\n puts xmlout\n\n\n* Java DB2 (no toolkit)\n\n::\n\n import java.io.*;\n import java.util.*;\n import java.sql.*;\n import java.math.*;\n\n public class javaXMLserviceDemoJDBC {\n public static void main(String[] args) {\n Connection conn = null;\n Statement stmt=null;\n CallableStatement cstmt = null ;\n PreparedStatement pstmt = null;\n String url = \"jdbc:db2://localhost\"; // Set URL for data source\n String user = \"MYUID\";\n String password = \"MYPWD\";\n try\n { // Load the DB2(R) Universal JDBC Driver with DriverManager\n Class.forName(\"com.ibm.db2.jdbc.app.DB2Driver\");\n conn = DriverManager.getConnection(url, user, password);\n String inputClob =\n \"<?xml version='1.0'?>\"\n + \" <script>\"\n + \" <pgm name='ZZCALL' lib='XMLSERVICE'>\"\n + \" <parm><data type='1A'>a</data></parm>\"\n + \" <parm><data type='1A'>b</data></parm>\"\n + \" <parm><data type='7p4'>11.1111</data></parm>\"\n + \" <parm><data type='12p2'>222.22</data></parm>\"\n + \" <parm>\"\n + \" <ds>\"\n + \" <data type='1A'>x</data>\"\n + \" <data type='1A'>y</data>\"\n + \" <data type='7p4'>66.6666</data>\"\n + \" <data type='12p2'>77777.77</data>\"\n + \" </ds>\"\n + \" </parm>\"\n + \" <return><data type='10i0'>0</data></return>\"\n + \" </pgm>\"\n + \" </script>\";\n String sql=\"CALL XMLSERVICE.iPLUG512K(?,?,?,?)\";\n cstmt = conn.prepareCall(sql);\n System.out.println(\"Calling with valid name\");\n cstmt.setString(1,\"/tmp/packers01\");\n cstmt.setString(2,\"*sbmjob\");\n cstmt.setString(3,inputClob);\n cstmt.registerOutParameter(4, Types.CLOB);\n 
cstmt.execute();\n String doc = cstmt.getString(4);\n System.out.println(\"****** Documento XML: **********\");\n System.out.println(doc);\n }\n catch (Exception e)\n { System.out.println(\"******* Eccezione !!! *********\");\n e.printStackTrace();\n }\n }\n }\n\n\n* perl DB2 (no toolkit)\n\n::\n\n use DBI;\n use DBD::DB2::Constants;\n use DBD::DB2;\n\n $dbh = DBI->connect(\"dbi:DB2:*LOCAL\")\n or die $DBI::errstr;\n\n $stmt = 'call XMLSERVICE.iPLUG65K(?,?,?,?)';\n $sth = $dbh->prepare($stmt)\n or die \"prepare got error \" . $dbh->err;\n $ipc = \"/tmp/perlme\";\n $sth->bind_param(1, $ipc)\n or die \"bind 1 got error \" . $dbh->err;\n $ctl = \"*sbmjob\";\n $sth->bind_param(2, $ctl)\n or die \"bind 2 got error \" . $dbh->err;\n $xmlin = '<?xml version=\"1.0\"?>\n <script>\n <pgm name=\"ZZCALL\" lib=\"XMLSERVICE\">\n <parm io=\"both\">\n <data type=\"1A\">a</data>\n </parm>\n <parm io=\"both\">\n <data type=\"1A\">b</data>\n </parm>\n <parm io=\"both\">\n <data type=\"7p4\">11.1111</data>\n </parm>\n <parm io=\"both\">\n <data type=\"12p2\">222.22</data>\n </parm>\n <parm io=\"both\">\n <ds>\n <data type=\"1A\">x</data>\n <data type=\"1A\">y</data>\n <data type=\"7p4\">66.6666</data>\n <data type=\"12p2\">77777.77</data>\n </ds>\n </parm>\n <return>\n <data type=\"10i0\">0</data>\n </return>\n </pgm>\n </script>\n ';\n $sth->bind_param(3, $xmlin)\n or die \"bind 3 got error \" . $dbh->err;\n $xmlout = \"\";\n $xmloutlen = 4096;\n $sth->bind_param_inout(4, \\$xmlout, $xmloutlen)\n or die \"bind 4 got error \" . $dbh->err;\n $sth->execute()\n or die \"execute got error\" . $dbh->err;\n\n\n\nXMLSERVICE REST interface\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nXMLSERVICE includes a simple REST interface (XMLCGI.PGM), we demonstrated using the REST service using only HTML/XML in a previous section. Most languages support REST calls, so XMLSERVICE REST interface can be very useful for cloud applications where DB2 drivers are not available. 
XMLSERVICE has REST production clients in most every scripting language you can imagine (PHP, Ruby, perl, python, etc.).\n\n* JavaScript REST (no toolkit)\n\n::\n\n <html>\n <head>\n <script src=\"http://ajax.googleapis.com/ajax/libs/dojo/1.5/dojo/dojo.xd.js\" type=\"text/javascript\"></script>\n <script language=\"javascript\">\n dojo.require(\"dojox.xml.parser\");\n // you will need actual uid/pwd\n // *NONE not enabled by default\n var msgin = \"xmlservice input\";\n var msgout = \"xmlservice output\";\n var xmlhttp = null;\n var url = encodeURI(\"http://\"\n + self.location.hostname\n + \"/cgi-bin/xmlcgi.pgm?\"\n + \"db2=*LOCAL\"\n + \"&uid=MYUID\"\n + \"&pwd=MYPWD\"\n + \"&ipc=/tmp/rangerhtmlonly\"\n + \"&ctl=*sbmjob\"\n + \"&xmlin=\"\n + \"<?xml version='1.0'?>\"\n + \" <myscript>\"\n + \" <pgm name='ZZCALL' lib='XMLSERVICE'>\"\n + \" <parm var='p1'><data type='1A' var='d1'>a</data></parm>\"\n + \" <parm var='p2'><data type='1A' var='d2'>b</data></parm>\"\n + \" <parm var='p3'><data type='7p4' var='d3'>11.1111</data></parm>\"\n + \" <parm var='p4'><data type='12p2' var='d4'>222.22</data></parm>\"\n + \" <parm var='p5'>\"\n + \" <ds var='myds'>\"\n + \" <data type='1A' var='ds1'>x</data>\"\n + \" <data type='1A' var='ds2'>y</data>\"\n + \" <data type='7p4' var='ds3'>66.6666</data>\"\n + \" <data type='12p2' var='ds4'>77777.77</data>\"\n + \" </ds>\"\n + \" </parm>\"\n + \" <return var='rc'><data type='10i0' var='d1'>0</data></return>\"\n + \" </pgm>\"\n + \" </myscript>\"\n + \"&xmlout=32768\");\n function readNode(baseNode,output)\n { // @copy: http://blog.char95.com/post/simple-javascript-xml2array-parser/\n var node = baseNode.firstChild;\n if (output==undefined) var output = {};\n while(node)\n { var nodeData = {};\n if (node.attributes)\n { var nNodes = node.attributes.length;\n while(nNodes--) nodeData['$'+node.attributes[nNodes].nodeName] = node.attributes[nNodes].nodeValue;\n }\n if (output[node.nodeName]==undefined) output[node.nodeName] = new 
Array(nodeData);\n else output[node.nodeName].push(nodeData);\n var id = output[node.nodeName].length-1;\n output[node.nodeName][id] = readNode(node,output[node.nodeName][id]);\n if (node.firstChild) nodeData['#text'] = node.firstChild.nodeValue;\n node = node.nextSibling;\n }\n return output;\n }\n function processXMLSERVICE2()\n { var args =\n { url:url,\n handleAs:\"xml\",\n preventCache:true,\n load:function(data)\n { var xmlArray = readNode(data);\n var table = document.createElement('table');\n var output =\n \"<th>parm</th>\"\n + \"<th>ds</th>\"\n + \"<th>var</th>\"\n + \"<th>value</th>\\n\";\n parms = xmlArray['myscript'][0]['pgm'][0]['parm'];\n for (var i=0;i<parms.length;i++)\n { if (i<parms.length - 1)\n { output += \"<tr>\"\n + \"<td>\" + parms[i].$var + \"</td>\\n\"\n + \"<td>\" + \"(na)\" + \"</td>\\n\"\n + \"<td>\" + parms[i]['data'][0].$var + \"</td>\\n\"\n + \"<td>\" + parms[i]['data'][0]['#text'] + \"</td>\\n\"\n + \"</tr>\";\n }\n else\n { dsvar = parms[i]['ds'][0].$var;\n dsdata = parms[i]['ds'][0]['data'];\n for (var j=0;j<dsdata.length;j++)\n { output += \"<tr>\"\n + \"<td>\" + parms[i].$var + \"</td>\\n\"\n + \"<td>\" + dsvar + \"</td>\\n\"\n + \"<td>\" + dsdata[j].$var + \"</td>\\n\"\n + \"<td>\" + dsdata[j]['#text'] + \"</td>\\n\"\n + \"</tr>\";\n }\n }\n }\n output += \"</table>\\n\";\n table.setAttribute(\"border\",\"1\")\n table.innerHTML = output;\n document.getElementById(\"addtable\").appendChild(table);\n },\n error:function(error)\n { alert(\"Error:\" + error);\n }\n };\n var ajaxCall = dojo.xhrGet(args);\n }\n </script>\n </head>\n <body>\n <p>This page demonstrates calling XMLSERVICE by JavaScript. 
Display source in your browser to see JavaScript used.</p>\n <form>\n <ul>\n <li><a href=\"javascript: processXMLSERVICE2();\">{XMLSERVICE JavaScript table (click me)}</a> - DoJo REST call RPG build table element</li>\n </eul>\n <p>\n <div id=\"addtable\"></div>\n </p>\n </form>\n </body>\n </html>\n \n\n* PHP REST (no toolkit)\n\n::\n\n <?php\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZCALL' lib='XMLSERVICE'>\n <parm><data type='1A'>a</data></parm>\n <parm><data type='1A'>b</data></parm>\n <parm><data type='7p4'>11.1111</data></parm>\n <parm><data type='12p2'>222.22</data></parm>\n <parm>\n <ds>\n <data type='1A'>x</data>\n <data type='1A'>y</data>\n <data type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data>\n </ds>\n </parm>\n <return><data type='10i0'>0</data></return>\n </pgm>\n </script>\n ENDPROC;\n return $clob;\n }\n // make the call\n $i5persistentconnect = false;\n $database = \"*LOCAL\";\n $user = \"MYUID\";\n $password = \"MYPWD\";\n $libxmlservice = \"XMLSERVICE\";\n $ipc = '/tmp/rangerhtmlonly';\n $ctl = '*sbmjob';\n $clobIn = getxml();\n $clobOut = \"\";\n $clobOutMax = \"32000\";\n $i5rest = \"http://174.79.32.155/cgi-bin/xmlcgi.pgm\";\n $postdata = http_build_query(\n array(\n 'db2' => $database,\n 'uid' => $user,\n 'pwd' => $password,\n 'ipc' => $ipc,\n 'ctl' => $ctl,\n 'xmlin' => $clobIn,\n 'xmlout' => $clobOutMax // size expected XML output\n )\n );\n $opts = array('http' =>\n array(\n 'method' => 'POST',\n 'header' => 'Content-type: application/x-www-form-urlencoded',\n 'content' => $postdata\n )\n );\n $context = stream_context_create($opts);\n $clobOut = file_get_contents($i5rest, false, $context);\n ?>\n\n\n* PowerRuby REST (no toolkit)\n\n::\n\n require 'adapters/abstract_adapter'\n require 'net/http'\n require 'uri'\n\n xmlin = '<?xml version=\"1.0\"?>\n <script>\n <pgm name=\"ZZCALL\" lib=\"XMLSERVICE\">\n <parm io=\"both\">\n <data type=\"1A\">a</data>\n </parm>\n <parm 
io=\"both\">\n <data type=\"1A\">b</data>\n </parm>\n <parm io=\"both\">\n <data type=\"7p4\">11.1111</data>\n </parm>\n <parm io=\"both\">\n <data type=\"12p2\">222.22</data>\n </parm>\n <parm io=\"both\">\n <ds>\n <data type=\"1A\">x</data>\n <data type=\"1A\">y</data>\n <data type=\"7p4\">66.6666</data>\n <data type=\"12p2\">77777.77</data>\n </ds>\n </parm>\n <return>\n <data type=\"10i0\">0</data>\n </return>\n </pgm>\n </script>'\n xmlout = \"\"\n\n post_args = {\n :db2 => \"*LOCAL\",\n :uid => \"MYUID\",\n :pwd => \"MYPWD\",\n :ipc => \"*NA\",\n :ctl => \"*here *cdata\",\n :xmlin => xmlin,\n :xmlout => 4096\n }\n uri = URI(\"http://myibmi/cgi-bin/xmlcgi.pgm\")\n res = Net::HTTP.post_form(uri, post_args)\n xmlout = res.body\n puts xmlout\n\n\n* Java REST (no toolkit)\n\n::\n\n import java.net.*;\n import java.io.*;\n public class javaXMLserviceDemoREST {\n public static void main(String[] args)\n { try\n { // The URL class does not itself encode or decode any URL components according to the escaping mechanism defined in RFC2396.\n // It is the responsibility of the caller to encode any fields, which need to be escaped prior to calling URL,\n // and also to decode any escaped fields, that are returned from URL. 
Furthermore, because URL has no knowledge of URL escaping,\n // it does not recognise equivalence between the encoded or decoded form of the same URL\n String inputURL =\n \"http://174.79.32.155/cgi-bin/xmlcgi.pgm?\"\n + java.net.URLEncoder.encode(\n \"db2=*LOCAL\"\n + \"&uid=MYUID\"\n + \"&pwd=MYPWD\"\n + \"&ipc=/tmp/rangerhtmlonly\"\n + \"&ctl=*sbmjob\"\n + \"&xmlin=\"\n + \"<?xml version='1.0'?>\"\n + \" <script>\"\n + \" <pgm name='ZZCALL' lib='XMLSERVICE'>\"\n + \" <parm><data type='1A'>a</data></parm>\"\n + \" <parm><data type='1A'>b</data></parm>\"\n + \" <parm><data type='7p4'>11.1111</data></parm>\"\n + \" <parm><data type='12p2'>222.22</data></parm>\"\n + \" <parm>\"\n + \" <ds>\"\n + \" <data type='1A'>x</data>\"\n + \" <data type='1A'>y</data>\"\n + \" <data type='7p4'>66.6666</data>\"\n + \" <data type='12p2'>77777.77</data>\"\n + \" </ds>\"\n + \" </parm>\"\n + \" <return><data type='10i0'>0</data></return>\"\n + \" </pgm>\"\n + \" </script>\"\n + \"&xmlout=32768\",\n \"ISO-8859-1\");\n // make REST call to XMLSERVICE\n URL restUrl = new URL(inputURL);\n URLConnection conn = restUrl.openConnection();\n // output processing\n System.out.println(\"Content type: \" + conn.getContentType());\n System.out.println(\"Content length: \" + conn.getContentLength());\n BufferedReader strm = new BufferedReader(new InputStreamReader(conn.getInputStream()));\n String inputLine;\n String doc = \"\";\n while ((inputLine = strm.readLine()) != null) {\n doc = doc + inputLine;\n }\n System.out.println(\"****** Documento XML: **********\");\n System.out.println(doc);\n }\n catch (Exception e)\n { System.out.println(\"******* Eccezione !!! *********\");\n e.printStackTrace();\n }\n }\n }\n ``\n\n\nRPG and XMLSERVICE\n------------------\n\nRPG is usually considered server side of XMLSERVICE, a called RPG PGM or SRVPGM (web service), but RPG is well versed in DB2, therefore can also be used as RPG client to XMLSERVICE. 
In fact, client RPG DB2 connection to XMLSERVICE will likely be fastest and easiest choice, especially for your remote IBM i systems (WRKRDBDIRE).\n\nThe RPG client examples will demonstrate RPG CGI using XMLSERVICE via Apache configuration, but the techniques also work as batch or green screen application, after using XMLSERVICE for a while, you may realize just how little code you have to write to do a great deal of work.\n\n* Apache CGI configuration (httpd.conf) \n\n::\n\n ScriptAlias /cgi-bin/ /QSYS.LIB/XMLSERVICE.LIB/\n <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n order allow,deny\n allow from all\n SetHandler cgi-script\n Options +ExecCGI\n </Directory>\n\n\n* RPG client XMLSERVICE (\\*PGM)\n\nRPG program ZZRPGSQL.PGM demonstrates client use of XMLSERVICE calling a \\*PGM (ZZCALL.pgm).\n\n::\n\n XMLSERVICE/ZZCALL (*PGM)\n D INCHARA S 1a\n D INCHARB S 1a\n D INDEC1 S 7p 4\n D INDEC2 S 12p 2\n D INDS1 DS\n D DSCHARA 1a\n D DSCHARB 1a\n D DSDEC1 7p 4\n D DSDEC2 12p 2\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n * main(): Control flow\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n C *Entry PLIST\n C PARM INCHARA\n C PARM INCHARB\n C PARM INDEC1\n C PARM INDEC2\n C PARM INDS1\n\n\nThe simple example is of course a bit contrived, simply extracting strings between ``<data>output</data>`` returned XMLSERVICE output, but serves to demonstrate XMLSERVICE call using ``Exec Sql call XMLSERVICE/iPLUG4K(:myIPC,:myCtl,:myXmlIn,:myXmlOut);``.\n\nZZRPGSQL.PGM calling XMLSERVICE via ``Exec Sql call XMLSERVICE/iPLUG4K``\n\n::\n\n H AlwNull(*UsrCtl)\n H BNDDIR('QC2LE')\n\n * vars\n D myIPC s 1024A inz(*BLANKS)\n D myCtl s 1024A inz(*BLANKS)\n D myXmlIn s 4096A inz(*BLANKS)\n D myXmlOut s 4096A inz(*BLANKS)\n\n D i s 10i 0 inz(0)\n D rn s 10i 0 inz(0)\n\n D data s 65000A inz(*BLANKS)\n D xml1 s 65000A inz(*BLANKS)\n D xml2 s 65000A inz(*BLANKS)\n D xml3 s 65000A inz(*BLANKS)\n D xml4 s 65000A inz(*BLANKS)\n\n D strstr PR * 
ExtProc('strstr')\n D pVal1 * Value options(*string)\n D pVal2 * Value options(*string)\n\n D strlen PR 10I 0 ExtProc('strlen')\n D pVal * Value options(*string)\n\n D writeIFS PR 20I 0 ExtProc('write')\n D fd 10I 0 value\n D buf * value\n D size 10I 0 value\n\n D xmlfind PR 65000A\n D xml * value\n D nbr 10i 0 value\n D xbeg 32A value\n D xbeg1 32A value\n D xend 32A value\n\n D Main PR ExtPgm('ZZRPGSQL')\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n * main(): Control flow\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n D Main PI\n /free\n Monitor;\n\n // -----\n // input\n\n myIPC = '*NA';\n myCtl = '*here';\n myXmlIn =\n '<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>' + x'0D'\n + '<script>' + x'0D'\n + '<pgm name=\"ZZCALL\" lib=\"XMLSERVICE\">' + x'0D'\n + '<parm><data type=\"1a\" var=\"INCHARA\">Y</data></parm>' + x'0D'\n + '<parm><data type=\"1a\" var=\"INCHARB\">Z</data></parm>' + x'0D'\n + '<parm><data type=\"7p4\" var=\"INDEC1\">001.0001</data></parm>' + x'0D'\n + '<parm><data type=\"12p2\" var=\"INDEC2\">0003.04</data></parm>' + x'0D'\n + '<parm>' + x'0D'\n + ' <ds>' + x'0D'\n + ' <data type=\"1a\" var=\"DSCHARA\">A</data>' + x'0D'\n + ' <data type=\"1a\" var=\"DSCHARB\">B</data>' + x'0D'\n + ' <data type=\"7p4\" var=\"DSDEC1\">005.0007</data>' + x'0D'\n + ' <data type=\"12p2\" var=\"DSDEC2\">0000000006.08</data>' + x'0D'\n + ' </ds>' + x'0D'\n + '</parm>' + x'0D'\n + '</pgm>' + x'0D'\n + '</script>' + x'00';\n myXmlOut = *BLANKS;\n\n // -----\n // sql call XMLSERVICE provided stored procedure(iPLUG4k -iPLUG15M)\n Exec Sql call XMLSERVICE/iPLUG4K(:myIPC,:myCtl,:myXmlIn,:myXmlOut);\n\n // -----\n // output (CGI)\n\n // -----\n // send header + end (LFLF)\n data = 'Content-type: text/html' + x'15' + x'15' + x'00';\n rn = writeIFS(1:%addr(data):strlen(%addr(data)));\n // -----\n // send return data\n // HTML table\n data = '<h3>RPG call XMLSERVICE</h3>' + x'0D';\n data = %trim(data) + '<table border=\"1\">' + 
x'0D';\n data = %trim(data) + '<th>Parameter name</th>' + x'0D';\n data = %trim(data) + '<th>Type value</th>' + x'0D';\n data = %trim(data) + '<th>Input value</th>' + x'0D';\n data = %trim(data) + '<th>Output value</th>' + x'0D';\n for i = 1 to 8;\n xml1 = xmlfind(%addr(myXmlIn): i:'var=' :'\"':'\"');\n xml2 = xmlfind(%addr(myXmlIn): i:'type=':'\"':'\"');\n xml3 = xmlfind(%addr(myXmlIn): i:'<data':'>':'</data>');\n xml4 = xmlfind(%addr(myXmlOut):i:'<data':'>':'</data>');\n // HTML table row\n data = %trim(data) + '<tr>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml1) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml2) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml3) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml4) + '</td>' + x'0D';\n data = %trim(data) + '</tr>' + x'0D';\n endfor;\n data = %trim(data) + '</table>' + x'00';\n rn = writeIFS(1:%addr(data):strlen(%addr(data)));\n\n On-error;\n Endmon;\n\n return;\n /end-free\n\n *****************************************************\n * xmlfind\n *****************************************************\n P xmlfind B\n D xmlfind PI 65000A\n D xml * value\n D nbr 10i 0 value\n D xbeg 32A value\n D xbeg1 32A value\n D xend 32A value\n * vars\n D i s 10i 0 inz(0)\n D cbeg s 33A inz(*BLANKS)\n D cbeg1 s 33A inz(*BLANKS)\n D cend s 33A inz(*BLANKS)\n D pbeg s * inz(*NULL)\n D pend s * inz(*NULL)\n D xmlFragment s 65000A inz(*BLANKS)\n /free\n cbeg = %trim(xbeg) + x'00';\n cbeg1 = %trim(xbeg1) + x'00';\n cend = %trim(xend) + x'00';\n pbeg = xml;\n for i = 1 to nbr;\n // <item stuff>123</item>\n // x\n if pbeg <> *NULL;\n pbeg = strstr(pbeg + 1:%addr(cbeg));\n endif;\n endfor;\n // <item stuff>123</item>\n // x\n if pbeg <> *NULL;\n pbeg = strstr(pbeg:%addr(cbeg1));\n endif;\n if pbeg <> *NULL;\n // <item stuff>123</item>\n // x x\n pbeg += 1;\n pend = strstr(pbeg:%addr(cend));\n if pend <> *NULL and pend > pbeg;\n xmlFragment = %str(pbeg:pend - pbeg);\n endif;\n endif;\n return 
xmlFragment;\n /end-free\n P E\n\n\n**What is happening?**\n\nThe flow of ZZRPGSQL.PGM is similar to the HTML/XML example, except that we use the XMLSERVICE-provided stored procedure interface to call XMLSERVICE.\n\n* We point our browser to ``http://lp0364d:10022/cgi-bin/zzrpgsql.pgm``, which is the RPG CGI program ZZRPGSQL.PGM.\n* ZZRPGSQL.PGM uses the XMLSERVICE DB2 interface ``Exec Sql call XMLSERVICE/iPLUG4K(:myIPC,:myCtl,:myXmlIn,:myXmlOut);``, passing the XML input request myXmlIn to XMLSERVICE.\n* XMLSERVICE.PGM parses the XML input and dynamically loads/activates/calls the target ZZCALL.PGM with the included parameters ``<parm>...</parm>``.\n* ZZCALL.PGM runs using its normal in/out parameters (memory).\n* XMLSERVICE.PGM parses the in/out parameters from ZZCALL.PGM into the output XML document.\n* The ZZRPGSQL.PGM variable myXmlOut now contains the XML output document.\n* ZZRPGSQL.PGM parses myXmlOut into an HTML table output (writeIFS).\n* The browser sees the HTML table of ZZCALL data.\n\n* RPG client XMLSERVICE (\*SRVPGM)\n\nRPG program ZZVRYSQL.PGM demonstrates client use of XMLSERVICE calling a \*SRVPGM (ZZSRV.ZZVARY). 
As you read source code for ZZVRYSQL.PGM, you will see it is nearly identical to previous example ZZRPGSQL.PGM.\n\n::\n\n XMLSERVICE/ZZSRV.ZZVARY (*SRVPGM)\n P zzvary B export\n D zzvary PI 20A varying\n D myName 10A varying\n\nXMLSERVICE called ZZSRV.ZZARRY SRVPGM demonstrates advance functions problematic for PCML based web services.\n\n* difficult -- parameter ``10A varying`` - PCML requires multiple XML definitions for length, data\n* impossible -- return ``20A varying`` - PCML has no ability to return complex data (only single integer)\n\n\nZZVRYSQL.PGM calling XMLSERVICE via ``Exec Sql call XMLSERVICE/iPLUG4K``\n\n::\n\n H AlwNull(*UsrCtl)\n H BNDDIR('QC2LE')\n\n * vars\n D myIPC s 1024A inz(*BLANKS)\n D myCtl s 1024A inz(*BLANKS)\n D myXmlIn s 4096A inz(*BLANKS)\n D myXmlOut s 4096A inz(*BLANKS)\n\n D i s 10i 0 inz(0)\n D rn s 10i 0 inz(0)\n\n D data s 65000A inz(*BLANKS)\n D xml1 s 65000A inz(*BLANKS)\n D xml2 s 65000A inz(*BLANKS)\n D xml3 s 65000A inz(*BLANKS)\n D xml4 s 65000A inz(*BLANKS)\n\n D strstr PR * ExtProc('strstr')\n D pVal1 * Value options(*string)\n D pVal2 * Value options(*string)\n\n D strlen PR 10I 0 ExtProc('strlen')\n D pVal * Value options(*string)\n\n D writeIFS PR 20I 0 ExtProc('write')\n D fd 10I 0 value\n D buf * value\n D size 10I 0 value\n\n D xmlfind PR 65000A\n D xml * value\n D nbr 10i 0 value\n D xbeg 32A value\n D xbeg1 32A value\n D xend 32A value\n\n D Main PR ExtPgm('ZZVRYSQL')\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n * main(): Control flow\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n D Main PI\n /free\n Monitor;\n\n // -----\n // input\n // P zzvary B export\n // D zzvary PI 20A varying\n // D myName 10A varying\n myIPC = '*NA';\n myCtl = '*here';\n myXmlIn =\n '<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>' + x'0D'\n + '<script>' + x'0D'\n + '<pgm name=\"ZZSRV\" lib=\"XMLSERVICE\" func=\"ZZVARY\">' + x'0D'\n + '<parm>' + x'0D'\n + '<data type=\"10a\" var=\"myName\" 
varying=\"on\">Ranger</data>' + x'0D'\n + '</parm>' + x'0D'\n + '<return>' + x'0D'\n + '<data var=\"retName\" type=\"20A\" varying=\"on\">Mud</data>' + x'0D'\n + '</return>' + x'0D'\n + '</pgm>' + x'0D'\n + '</script>' + x'00';\n myXmlOut = *BLANKS;\n\n // -----\n // sql call XMLSERVICE provided stored procedure(iPLUG4k -iPLUG15M)\n Exec Sql call XMLSERVICE/iPLUG4K(:myIPC,:myCtl,:myXmlIn,:myXmlOut);\n\n // -----\n // output (CGI)\n\n // -----\n // send header + end (LFLF)\n data = 'Content-type: text/html' + x'15' + x'15' + x'00';\n rn = writeIFS(1:%addr(data):strlen(%addr(data)));\n // -----\n // send return data\n // HTML table\n data = '<h3>RPG call XMLSERVICE</h3>' + x'0D';\n data = %trim(data) + '<table border=\"1\">' + x'0D';\n data = %trim(data) + '<th>Parameter name</th>' + x'0D';\n data = %trim(data) + '<th>Type value</th>' + x'0D';\n data = %trim(data) + '<th>Input value</th>' + x'0D';\n data = %trim(data) + '<th>Output value</th>' + x'0D';\n for i = 1 to 2;\n xml1 = xmlfind(%addr(myXmlIn): i:'var=' :'\"':'\"');\n xml2 = xmlfind(%addr(myXmlIn): i:'type=':'\"':'\"');\n xml3 = xmlfind(%addr(myXmlIn): i:'<data':'>':'</data>');\n xml4 = xmlfind(%addr(myXmlOut):i:'<data':'>':'</data>');\n // HTML table row\n data = %trim(data) + '<tr>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml1) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml2) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml3) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml4) + '</td>' + x'0D';\n data = %trim(data) + '</tr>' + x'0D';\n endfor;\n data = %trim(data) + '</table>' + x'00';\n rn = writeIFS(1:%addr(data):strlen(%addr(data)));\n\n On-error;\n Endmon;\n\n return;\n /end-free\n\n *****************************************************\n * xmlfind\n *****************************************************\n P xmlfind B\n D xmlfind PI 65000A\n D xml * value\n D nbr 10i 0 value\n D xbeg 32A value\n D xbeg1 32A value\n D xend 32A value\n * vars\n D i s 10i 0 
inz(0)\n D cbeg s 33A inz(*BLANKS)\n D cbeg1 s 33A inz(*BLANKS)\n D cend s 33A inz(*BLANKS)\n D pbeg s * inz(*NULL)\n D pend s * inz(*NULL)\n D xmlFragment s 65000A inz(*BLANKS)\n /free\n cbeg = %trim(xbeg) + x'00';\n cbeg1 = %trim(xbeg1) + x'00';\n cend = %trim(xend) + x'00';\n pbeg = xml;\n for i = 1 to nbr;\n // <item stuff>123</item>\n // x\n if pbeg <> *NULL;\n pbeg = strstr(pbeg + 1:%addr(cbeg));\n endif;\n endfor;\n // <item stuff>123</item>\n // x\n if pbeg <> *NULL;\n pbeg = strstr(pbeg:%addr(cbeg1));\n endif;\n if pbeg <> *NULL;\n // <item stuff>123</item>\n // x x\n pbeg += 1;\n pend = strstr(pbeg:%addr(cend));\n if pend <> *NULL and pend > pbeg;\n xmlFragment = %str(pbeg:pend - pbeg);\n endif;\n endif;\n return xmlFragment;\n /end-free\n P E\n\n\nRPG client XMLSERVICE (DataQueue)\n---------------------------------\n\nRPG program ZZQUESQL.PGM demonstrates client use of XMLSERVICE calling a CMDs and System APIs for Data Queue. As you read source code for ZZQUESQL.PGM, you will see it is nearly identical to previous example ZZRPGSQL.PGM.\n\n::\n\n DLTDTAQ DTAQ(XMLSERVICE/MYDATAQ)\n CRTDTAQ DTAQ(XMLSERVICE/MYDATAQ) MAXLEN(100)\n ***************************************\n * Send Data Queue (QSNDDTAQ) API\n ***************************************\n * 1 Data queue name Input Char(10)\n * 2 Library name Input Char(10)\n * 3 Length of data Input Packed(5,0)\n * 4 Data Input Char(*) Input\n ***************************************\n * Receive Data Queue (QRCVDTAQ) API\n ***************************************\n * 1 Data queue name Input Char(10)\n * 2 Library name Input Char(10)\n * 3 Length of data Input Packed(5,0)\n * 4 Data Char(*) Output\n * 5 Wait time Input Packed(5,0)\n\n\nZZQUESQL.PGM calling XMLSERVICE via ``Exec Sql call XMLSERVICE/iPLUG4K`` \n\n::\n\n H AlwNull(*UsrCtl)\n H BNDDIR('QC2LE')\n\n * vars\n D myIPC s 1024A inz(*BLANKS)\n D myCtl s 1024A inz(*BLANKS)\n D myXmlIn s 4096A inz(*BLANKS)\n D myXmlOut s 4096A inz(*BLANKS)\n\n D i s 10i 0 inz(0)\n D 
rn s 10i 0 inz(0)\n\n D data s 65000A inz(*BLANKS)\n D xml1 s 65000A inz(*BLANKS)\n D xml2 s 65000A inz(*BLANKS)\n D xml3 s 65000A inz(*BLANKS)\n D xml4 s 65000A inz(*BLANKS)\n\n D strstr PR * ExtProc('strstr')\n D pVal1 * Value options(*string)\n D pVal2 * Value options(*string)\n\n D strlen PR 10I 0 ExtProc('strlen')\n D pVal * Value options(*string)\n\n D writeIFS PR 20I 0 ExtProc('write')\n D fd 10I 0 value\n D buf * value\n D size 10I 0 value\n\n D xmlfind PR 65000A\n D xml * value\n D nbr 10i 0 value\n D xbeg 32A value\n D xbeg1 32A value\n D xend 32A value\n\n D Main PR ExtPgm('ZZVRYSQL')\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n * main(): Control flow\n *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n D Main PI\n /free\n Monitor;\n\n // -----\n // input\n // ***************************************\n // * Send Data Queue (QSNDDTAQ) API\n // ***************************************\n // * 1 Data queue name Input Char(10)\n // * 2 Library name Input Char(10)\n // * 3 Length of data Input Packed(5,0)\n // * 4 Data Input Char(*) Input\n // ***************************************\n // * Receive Data Queue (QRCVDTAQ) API\n // ***************************************\n // * 1 Data queue name Input Char(10)\n // * 2 Library name Input Char(10)\n // * 3 Length of data Input Packed(5,0)\n // * 4 Data Char(*) Output\n // * 5 Wait time Input Packed(5,0)\n myIPC = '*NA';\n myCtl = '*here';\n myXmlIn =\n '<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>' + x'0D'\n + '<script>' + x'0D'\n + '<cmd error=\"fast\">DLTDTAQ DTAQ(XMLSERVICE/MYDATAQ)</cmd>' + x'0D'\n + '<cmd error=\"fast\">CRTDTAQ DTAQ(XMLSERVICE/MYDATAQ) MAXLEN(100)</cmd>'\n + x'0D'\n + '<pgm name=\"QSNDDTAQ\">' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"10A\" var=\"sndque\">MYDATAQ</data>' + x'0D'\n + ' </parm>' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"10A\" var=\"sndlib\">XMLSERVICE</data>' + x'0D'\n + ' </parm>' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data 
type=\"5p0\" var=\"sndlen\">50</data>' + x'0D'\n + ' </parm>' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"100A\" var=\"snddata\">i data queues</data>' + x'0D'\n + ' </parm>' + x'0D'\n + '</pgm>' + x'0D'\n + '<pgm name=\"QRCVDTAQ\">' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"10A\" var=\"rcvque\">MYDATAQ</data>' + x'0D'\n + ' </parm>' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"10A\" var=\"rcvlib\">XMLSERVICE</data>' + x'0D'\n + ' </parm>' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"5p0\" var=\"rcvlen\">50</data>' + x'0D'\n + ' </parm>' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"100A\" var=\"rcvdata\">bad stuff</data>' + x'0D'\n + ' </parm>' + x'0D'\n + ' <parm>' + x'0D'\n + ' <data type=\"5p0\" var=\"rcvwait\">0</data>' + x'0D'\n + ' </parm>' + x'0D'\n + '</pgm>' + x'0D'\n + '<cmd error=\"fast\">DLTDTAQ DTAQ(XMLSERVICE/MYDATAQ)</cmd>' + x'0D'\n + '</script>' + x'00';\n myXmlOut = *BLANKS;\n\n // -----\n // sql call XMLSERVICE provided stored procedure(iPLUG4k -iPLUG15M)\n Exec Sql call XMLSERVICE/iPLUG32K(:myIPC,:myCtl,:myXmlIn,:myXmlOut);\n\n // -----\n // output (CGI)\n\n // -----\n // send header + end (LFLF)\n data = 'Content-type: text/html' + x'15' + x'15' + x'00';\n rn = writeIFS(1:%addr(data):strlen(%addr(data)));\n // -----\n // send return data\n // HTML table\n data = '<h3>RPG call XMLSERVICE</h3>' + x'0D';\n data = %trim(data) + '<table border=\"1\">' + x'0D';\n data = %trim(data) + '<th>Parameter name</th>' + x'0D';\n data = %trim(data) + '<th>Type value</th>' + x'0D';\n data = %trim(data) + '<th>Input value</th>' + x'0D';\n data = %trim(data) + '<th>Output value</th>' + x'0D';\n for i = 1 to 9;\n xml1 = xmlfind(%addr(myXmlIn): i:'var=' :'\"':'\"');\n xml2 = xmlfind(%addr(myXmlIn): i:'type=':'\"':'\"');\n xml3 = xmlfind(%addr(myXmlIn): i:'<data':'>':'</data>');\n xml4 = xmlfind(%addr(myXmlOut):i:'<data':'>':'</data>');\n // HTML table row\n data = %trim(data) + '<tr>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml1) + 
'</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml2) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml3) + '</td>' + x'0D';\n data = %trim(data) + '<td>' + %trim(xml4) + '</td>' + x'0D';\n data = %trim(data) + '</tr>' + x'0D';\n endfor;\n data = %trim(data) + '</table>' + x'00';\n rn = writeIFS(1:%addr(data):strlen(%addr(data)));\n\n On-error;\n Endmon;\n\n return;\n /end-free\n\n *****************************************************\n * xmlfind\n *****************************************************\n P xmlfind B\n D xmlfind PI 65000A\n D xml * value\n D nbr 10i 0 value\n D xbeg 32A value\n D xbeg1 32A value\n D xend 32A value\n * vars\n D i s 10i 0 inz(0)\n D cbeg s 33A inz(*BLANKS)\n D cbeg1 s 33A inz(*BLANKS)\n D cend s 33A inz(*BLANKS)\n D pbeg s * inz(*NULL)\n D pend s * inz(*NULL)\n D xmlFragment s 65000A inz(*BLANKS)\n /free\n cbeg = %trim(xbeg) + x'00';\n cbeg1 = %trim(xbeg1) + x'00';\n cend = %trim(xend) + x'00';\n pbeg = xml;\n for i = 1 to nbr;\n // <item stuff>123</item>\n // x\n if pbeg <> *NULL;\n pbeg = strstr(pbeg + 1:%addr(cbeg));\n endif;\n endfor;\n // <item stuff>123</item>\n // x\n if pbeg <> *NULL;\n pbeg = strstr(pbeg:%addr(cbeg1));\n endif;\n if pbeg <> *NULL;\n // <item stuff>123</item>\n // x x\n pbeg += 1;\n pend = strstr(pbeg:%addr(cend));\n if pend <> *NULL and pend > pbeg;\n xmlFragment = %str(pbeg:pend - pbeg);\n endif;\n endif;\n return xmlFragment;\n /end-free\n P E\n\n\n\n\n"
},
{
"alpha_fraction": 0.6122646927833557,
"alphanum_fraction": 0.6759169697761536,
"avg_line_length": 40.20399856567383,
"blob_id": "d55d993621b8aa426bb5796524f29c03f9958df6",
"content_id": "35b7e17bb6dd15ea57294b3d4aa69e76b093c42f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 10308,
"license_type": "permissive",
"max_line_length": 490,
"num_lines": 250,
"path": "/docs/ccsids.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\n\nXMLSERVICE/Toolkit CCSID\n========================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nXMLSERVICE should just work \n-------------------------------\nIn general if you are using DB2 connections 1/2 tier and settings for your Apache or user profile are valid (not 65535), you will not need any additional CCSID manipulation/configuration.\n\n..\n If you are unfamiliar with 2-tier DB2 CCSID please try following link before reading this page [[PHP/DB2CCSID | DB2 CCSID]].\n\nCCSID - what do?\n----------------\n\nFew other topics dealing with web programs cause more frustrations over IBM i CCSID settings.\n\nCCSID conflict between web scripts (clients) and IBM i PGMs (server)\n--------------------------------------------------------------------\nAny good conflict needs a basic difference in philosophy causing all the trouble.\n\n* *Client (ASCII)* - web scripts typically wish to run ASCII 819/1208 (PHP), code set(s) laptop\n* *Server (EBCDIC)* - IBM i PGMs generally run EBCDIC (RPG/Cobol/CLP), code set(s) main frame\n\nCCSID rule -- always a conversion client/server\n-----------------------------------------------\nThe basic operational code set differences between client (ASCII) / server (EBCDIC) required a conversion for all character data moving between client/server to be of value on either side. Your CCSID setting options are many, experimentation is usually required, but perhaps this web page helps avoid hours of frustrating tinkering attempting to adjust multitude of software \"products\" where CCSID conversions is possible.\n\nHappens 99 times out of 100 when not working (CCSID 65535) \n----------------------------------------------------------\nYou cannot run IBM i PHP and interact with IBM i DB2, PGM, SRVPGM (xmlservice), when your IBM i machine is default ship setting of 65535. 
Change the default ship Zend Server setting and you will likely be up and running.\n\nI run 65535 xmlservice on my V6R1 machine (among other machines) \n::\n\n DSPSYSVAL SYSVAL(QCCSID)\n Coded character set\n identifier . . . . . : 65535 1-65535\n\n\nChange these files and restart everything (web, db2, xmlservice, tec.) \n\nI. Apache: /www/zendsvr/conf/httpd.conf <-- (ILE Apache side)\n::\n\n DefaultFsCCSID 37 ... or 280 (Italian) ... or so on ...\n CGIJobCCSID 37 ... or 280 (Italian) ... or so on ...\n\nThis often fixes majority of web script issue\n\nII. FastCGI: /www/zendsvr/conf/fastcgi.conf <-- THIS file (PASE side)\n::\n\n CCSID=819 and LANG=C,\n which works nearly anywhere (UNIX default).\n\nNote:\n\n - This file must be ASCII 819, so careful with editor or will not work (EDTF especially).\n - CCSID=1208 can be a problem for PHP extensions, so most configurations use CCSID=819 and take specific script action to encode/decode 1208 data (see xmlservice hex/before/after)\n\nIII. User profile -- DB2 connections\n::\n\n CHGUSRPRF USRPRF(IAM37) CCSID(37)\n\nThis fixes most DB2 CCSID client/server problems both 1-tier and 2-tier\n\n*-- or (skip I. - III.)--*\n\nIV. 
change system ccsid for everything on the machine\n::\n\n CHGSYSVAL SYSVAL(QCCSID) VALUE(37)\n\nNote:\n\n - some legacy applications may not work, should work but..., so check things out\n - *65535* means HEX (or binary), which also means no conversion and 65535 is essentially doomsday for most any application trying to work between PASE and DB2 (most any client and DB2).\n \n ::\n\n client (ASCII 819) + server (EBCDIC 65535) = junk-you-can't-read-or-use = xmlservice fails\n\n\nNow **the rest of the story** 65535 problem is common, so IBM i DB2 folks force a reasonable job CCSID for SOME remote connections (based on primary language generally), but NOT all connections (PASE and others), therefore observationally you may see some applications work (client access products) and others will not (PHP).\n\n1) Specific examples for XMLSERVICE\n\n - Command user ccsid override\n\n ::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n\n $ipc = \"/tmp/packers\"; // *** RIGHT HERE internalKey/IPC\n $ctl = \"*sbmjob\"; // *** run state full \n\n $clob = \"<?xml version='1.0'?>\\n\";\n $clob .= \"<script>\\n\";\n $clob .= \"<cmd exec='rexx' hex='on' before='819/37' after='37/819'>\";\n $clob .= bin2hex(\"RTVJOBA USRLIBL(?) 
SYSLIBL(?)\");\n $clob .= \"</cmd>\\n\";\n $clob .= \"</script>\\n\";\n $clobIn = $clob;\n $clobOut = \"\";\n\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n var_dump($clobOut);\n\n $xmlobj = simplexml_load_string($clobOut);\n\n if (!$xmlobj) die(\"Bad XML output\");\n\n $clobOut = pack(\"H*\",(string)$xmlobj->cmd->hex);\n\n var_dump($clobOut);\n\n OUTPUT:\n > php zzhexccsidcmd.php\n string(527) \"<?xml version='1.0'?>\n <script>\n <cmd exec='rexx' hex='on' before='819/37' after='37/819'><success>+++ success RTVJOBA USRLIBL(?) SYSLIBL(?)</success><hex>3C726F773E0A3C6461746120646573633D275553524C49424C273E5147504C202020202020205154454D5020202020202051444556454C4F5020202051424C445359532020202051424C44535953523C2F646174613E0A3C2F726F773E0A3C726F773E0A3C6461746120646573633D275359534C49424C273E5153595320202020202020515359533220202020202051484C5053595320202020515553525359533C2F646174613E0A3C2F726F773E0A</hex></cmd>\n </script>\"\n string(176) \"<row>\n <data desc='USRLIBL'>QGPL QTEMP QDEVELOP QBLDSYS QBLDSYSR</data>\n </row>\n <row>\n <data desc='SYSLIBL'>QSYS QSYS2 QHLPSYS QUSRSYS</data>\n </row>\n \n\n\n - Program ccsid override\n\n ::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG512K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n\n $ipc = \"/tmp/packers\"; // *** RIGHT HERE internalKey/IPC\n $ctl = \"*sbmjob\"; // *** run state full \n\n $clob = \"<?xml version='1.0'?>\\n\";\n $clob .= \"<script>\\n\";\n $clob .= \"<pgm name='ZZSRV' lib='XMLSERVICE' func='ZZ200'>\\n\";\n $clob .= \"<parm io='both'>\\n\";\n $clob .= \"<data type='200A' 
hex='on' before='819/37' after='37/819'>\";\n $clob .= bin2hex(\"Hi there i am ok on return from xmlservice.\");\n $clob .= \"</data>\\n\";\n $clob .= \"</parm>\\n\";\n $clob .= \"</pgm>\\n\";\n $clob .= \"</script>\\n\";\n $clobIn = $clob;\n $clobOut = \"\";\n\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n var_dump($clobOut);\n\n $xmlobj = simplexml_load_string($clobOut);\n\n if (!$xmlobj) die(\"Bad XML output\");\n\n $clobOut = pack(\"H*\",(string)$xmlobj->pgm->parm->data);\n\n var_dump($clobOut);\n\n\n OUTPUT:\n > php zzhexccsidpgm.php\n string(273) \"<?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='XMLSERVICE' func='ZZ200'>\n <parm io='both'>\n <data type='200A' hex='on' before='819/37' after='37/819'>4869207468657265206920616D206F6B206F6E2072657475726E2066726F6D20786D6C736572766963652E</data>\n </parm>\n </pgm>\n </script>\"\n string(43) \"Hi there i am ok on return from xmlservice.\"\n\n\n\n\n2) Specific examples for New PHP Toolkit\n\n - CCSID override - PHP Toolkit/CW\n \n Simple CCSIDs only require setting the CCSID via QCCSID or in Apache. The overrides directly below are intended for for languages with more complex needs such as Hebrew or Japanese, or when individual pieces of data are encoded differently (such as a combination of 819 and 1208).\n \n The easiest way to try these CCSID settings is with three new settings in toolkit.ini:\n ::\n\n advanced CCSID options. Use all three options together.\n ccsidBefore = '819/37'\n ccsidAfter = '37/819'\n useHex = true\n\n Uncomment the three settings and then adjust the ccsidBefore and ccsidAfter values according to your needs.\n\n Another way to set these global CCSID settings is with the method setToolkitServiceParams(). 
In your code, after connecting with $conn::getInstance(* etc.), set the parameters with this statement:\n ::\n\n $conn->setToolkitServiceParams(array('ccsidBefore'=>'819/37', 'ccsidAfter'=>'37/819', 'useHex'=>true));\n \n This technique works identically to changing INI values, except that this coding technique can be re-done over and over with different settings before each program/command call.\n\n These \"global\" CCSID techniques work with both the new API and the CW, and will convert not only data/commands and command output, but the names of programs, libraries, and functions. You may notice that your data will be converted to hex inside the toolkit and then converted back to readable text by the toolkit.\n\n For more fine-grained control over parameter data--that is, the ability to use a different CCSID conversion for each parameter, if desired--chain several new methods to AddParameterChar() like so: (new API only--not in CW):\n ::\n\n $param[] = $conn->AddParameterChar('both', 10,'CODE', 'CODE', $code)\n ->setParamCcsidBefore('819/37')\n ->setParamCcsidAfter('37/819')\n ->setParamUseHex(true);\n\n\n These parameters can also be passed as AddParameterChar() function parameters directly but it's easier to use the setParam… methods above.\n\n Note:\n\n These advanced CCSID settings do not affect some of the handmade API calls in the CW such as getting object lists. Helping those may be a future enhancement.\n \n\nIf you wish to see how XMLSERVICE implements these overrides, see the following URL, under the heading: \"CCSID user override - xmlservice options (hex/before/after)\".\n\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICECCSID?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n\n\n\n"
},
{
"alpha_fraction": 0.5175751447677612,
"alphanum_fraction": 0.5355747938156128,
"avg_line_length": 46.86991882324219,
"blob_id": "d8ea16ca3dfc6cab0e0e577081ccd2748fbb104b",
"content_id": "0b41bf20fafb4438f9926b4bca431d36d922811f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 11786,
"license_type": "permissive",
"max_line_length": 448,
"num_lines": 246,
"path": "/docs/qtemp.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\n\nXMLSERVICE/Toolkit Sharing QTEMP\n================================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nSharing QTEMP - what do?\n------------------------\nSharing QTEMP ibm_db2/odbc/pdo_ibm/RPG/xmlservice has a few quirks, but can be \"mostly\" done if you understand the jobs in play.\n\nThe big picture ...\n-------------------\n::\n\n xmlservice |--LUW/PASE------|-----------------------IBM i----------------------\n run type |job 1 (1-2 tier)| job 2 (stateless) job 3 (IPC/private)\n -------|----------------|------------------------|-------------------------\n stateless | php-client-----|-->QSQSRVR |\n | | ->ibm_db2,pdo_ibm,odbc |\n | | *iPLUGxxxx-> |\n | | ->XMLSERVICE<>RPG |\n | | ->XMLSERVICE<>DB2 (xml)|\n | | ->QTEMP (yes share) |\n\n IPC/private| php-client-----|--->QSQSRVR |\n | | ->ibm_db2,pdo_ibm,odbc|\n | | *iPLUGxxxx----------|-->XMLSERVICE<>RPG\n | | ->QTEMP (no share) | ->XMLSERVICE<>DB2 (xml)\n | | | ->QTEMP (yes share)\n\n Note: iPLUGxxx stored procedures are included with xmlservice download\n\n* **php-client (job 1)** - can be 1-tier (PASE) / 2-tier (Linux/Unix/Windows) and PHP using extension ibm_db2,pdo_ibm,odbc.\n* **ibm_db2,pdo_ibm,odbc (job 2 ONLY)** - can ONLY \"share\" QTEMP when running stateless XMLSERVICE<>RPG mode, therefore no IPC/private.\n \n * PHP ibm_db2,pdo_ibm,odbc are shown in stateless column (job 2), so that everyone is visually reminded that IBM i DB2 actions do not really occur in php-client (job 1), but actually run in an \"acquired\" DB2 pre-started job (ie. 1-tier/2-tier makes no difference).\n \n * 2-tier - \"true\" IBM i connection/job maybe QSQ (PASE), QWI (LUW), etc., but picture above remains consistent behavior for \"sharing\" QTEMP (job 2).\n * persistent connect - persistent connection (db2_pconnect/odbc_pconnect) or full connection (db2_connect/odbc_connect) makes no difference, big picture remains exactly the same for both connections. 
Of course you will run faster with persistent connections (up to 30-40%), due to QSQ server job already up and waiting, but you cannot rely on persistent connections to reach same QSQ job each script start / browser click.\n\n + exception - PASE ibm_db2 provides an obscure \"in-line\" mode only a few people understand where ibm_db2.ini file set to ibm_db2.i5_ignore_userid=1 will run all DB2 work under the default web profile and stateless in PASE php-client (job 1) ... but only if pure PASE ibm_db2 site (no PASE odbc/pdo_ibm) ... and ... well gee ... you are an expert if you understand i5_ignore_userid mode and you probably don't need this documentation.\n + Note to future self ibm_db2, etc. ... just like xmlservice has idle timeout today (1/2 hour), we really should have a timeout idle for persistent connections across db2.\n\n* **XMLSERVICE<>RPG (job 2 or job 3)** - is toolkit call to RPG program wishing to \"share\" QTEMP.\n \n * Stateless (job 2) - Generally run slower with more CPU challenges, re-started xmlservice running in \"acquired\" QSQ job under stored procedure call iPLUGxxx, but on return iPLUGxxx/xmlservice need to close up shop and lose all caches (shut down, wake up, shut down, wake up, ...).\n \n * YES stateless - ibm_db2/odbc/pdo_ibm (job 2) CAN share QTEMP with XMLSERVICE<>RPG/XMLSERVICE<>DB2 (also job 2).\n \n * IPC/private (job 3) - Generally run faster with less CPU challenges, because xmlservice stays running until explicitly killed or idle timeout (1/2 hour default / user controlled).\n \n * NO IPC/private - ibm_db2/odbc/pdo_ibm (job 2) can NOT share QTEMP with XMLSERVICE<>RPG/XMLSERVICE<>DB2 (job 3).\n\n* **XMLSERVICE<>DB2 (job 2 or job 3)** - is NOT yet available in toolkit wrapper (Alan), but RAW xmlservice interface allows xml based DB2 queries and therefore can \"share\" QTEMP (available today in RAW xmlservice mode).\n\n::\n\n <?xml version='1.0'?>\n <script>\n <sql>\n <query>\n DECLARE GLOBAL TEMPORARY TABLE animalq(\n id integer, 
breed varchar(32), name char(16),\n weight decimal(7,2), height numeric(9,2))\n ON COMMIT PRESERVE ROWS\n </query>\n <prepare>\n insert into\n animalq (id, breed, name, weight, height)\n values (?,?,?,?,?)\n </prepare>\n <execute>\n <parm io='in'>1</parm>\n <parm io='in'>cat</parm>\n <parm io='in'>Pook</parm>\n <parm io='in'>3.2</parm>\n <parm io='in'>9.56</parm>\n </execute>\n <execute>\n <parm io='in'>2</parm>\n <parm io='in'>dog</parm>\n <parm io='in'>Peaches</parm>\n <parm io='in'>12.3</parm>\n <parm io='in'>22.65</parm>\n </execute>\n <prepare>select * from animalq where WEIGHT > 1.0</prepare>\n <execute/>\n <describe desc='col'/>\n <fetch block='all' desc='on'/>\n </sql>\n </script>\n\n\n1) Specific examples for New PHP Toolkit\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n | Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |<- cw-tk-php-x.x.x.zip (*)\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |\n | (optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored procedures |\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n ------------------------------------------\n\n* **Stateless** - connection (job 1 / job 2)\n\n::\n\n // *** stateless occurs when no internalKey (e.g. 
'/tmp/packers') was specified ***\n // *** also when ->setToolkitServiceParams('stateless'=>true)) is called\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n\n\n* **State Full** - connection (job 1 / job 2 / job 3)\n\n::\n\n $internalKey = '/tmp/packers';\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'InternalKey'=>$internalKey, // *** RIGHT HERE internalKey/IPC\n // *** run state full ...\n // use SBMJOB command run in new job\n // PHP can call again, again, again\n // with /tmp/packers and get ...\n // same job every time\n // same library list (*LIBL)\n // same PGMs with open files, etc.\n // ... exactly like 5250 sign-on screen\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n\n* **persistent connections** - later releases PHP Toolkit allow reused/shared connections with other work, including persistent connections, but internalKey (IPC) rules remain the same (above)\n \n::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n\n\n2) Specific examples for XMLSERVICE\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n | Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |<- Zend Server for IBM i or Linux or Windows (*)\n | 
(optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored procedures |\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n ------------------------------------------\n\n* **Stateless** - (job 1 / job 2)\n\n::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ipc = \"\"; // *** RIGHT HERE MISSING internalKey/IPC\n $ctl = \"*here\"; // *** run stateless ...\n // here in any available database job\n // must set *LIBL evey time\n // stateless - MUST do this every single script after connect/getInstance()\n // even if using persistent connections (db2_pconnect, odbc_pconnect)\n $clobIn =\n \"<?xml version='1.0'?>\n <script>\n <cmd>CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)</cmd>\n </script>\";\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n* **State Full** - (job 1 / job 2 / job 3)\n\n::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ipc = \"/tmp/packers\"; // *** RIGHT HERE internalKey/IPC\n $ctl = \"*sbmjob\"; // *** run state full ...\n // use SBMJOB command run in new job\n // PHP can call again, again, again\n // with /tmp/packers and get ...\n // same 
job every time\n // same library list (*LIBL)\n // same PGMs with open files, etc.\n // ... exactly like 5250 sign-on screen\n // state full - MUST do this ONCE ONLY after start/sbmjob of XMLSERVICE job\n // then forget about it (unless you choose to change libl) ...\n $clobIn =\n \"<?xml version='1.0'?>\n <script>\n <cmd>CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)</cmd>\n </script>\";\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n\n\n\n\n.. \n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEQTEMP?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.6197183132171631,
"alphanum_fraction": 0.6197183132171631,
"avg_line_length": 16.625,
"blob_id": "aeadcdf6898aa61dff929942ce3dbda888394d44",
"content_id": "a1617d019d62128114508613c284200cb14b15ff",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 142,
"license_type": "permissive",
"max_line_length": 34,
"num_lines": 8,
"path": "/test/php/skipifcw.inc",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// see license here\nrequire_once('license.inc');\nrequire_once('authorization.php');\nif(!@include(\"CW/cw.php\")) {\n die('skip');\n}\n?>\n\n"
},
{
"alpha_fraction": 0.6595091819763184,
"alphanum_fraction": 0.6658998131752014,
"avg_line_length": 40.60638427734375,
"blob_id": "cef2e9c897af696eac909099ee40ede8f60372d1",
"content_id": "f558e84769ef2a2a1bdc795e795a0d2cce32d496",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3912,
"license_type": "permissive",
"max_line_length": 195,
"num_lines": 94,
"path": "/test/php/test_33466_cwtest_commands.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest commands\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\n\necho h2('Commands');\n\n$msg = 'HELLO';\n$cmdString = \"SNDMSG MSG($msg) TOUSR($user)\";\n$start = microtime(true);\n$commandSuccessful = i5_command($cmdString, array(), array(), $conn);\n$end = microtime(true);\n$elapsed = $end - $start;\necho \"Ran command $cmdString using a single string in $elapsed seconds. Return: \" . OkBad($commandSuccessful) . \"<BR><BR>\";\n\n$badUser = 'jerk';\n$msg = 'HELLO';\n$cmdString = \"SNDMSG MSG($msg) TOUSR($badUser)\";\n$start = microtime(true);\n$commandSuccessful = i5_command($cmdString, array(), array(), $conn);\n$end = microtime(true);\n$elapsed = $end - $start;\necho \"Ran command $cmdString using a single string to BAD user in $elapsed seconds.. Return: \" . OkBad($commandSuccessful). \"<BR>\";\nif (!$commandSuccessful) {\n\techo \"Error returned: \" . printArray(i5_error()) . \"<BR><BR>\";\n}\n\n$cmdString = 'RTVJOBA';\n$input = array();\n// we want variable name ccsid to be created\n$output = array('ccsid' => array('ccsid', 'dec(5 0)'),\n 'dftccsid' => array('defaultCcsid', 'dec(5 0)'),\n 'curuser'=>'currentUser', 'nbr'=>'jobNumber', 'job'=>'jobName', 'user'=>'jobUser',\n 'usrlibl' => 'userLibl');\n$start = microtime(true);\n$commandSuccessful = i5_command($cmdString, $input, $output, $conn);\nif (function_exists('i5_output')) extract(i5_output()); // i5_output() required if called in a function\n$end = microtime(true);\n\n$elapsed = $end - $start;\n\necho \"Ran command $cmdString with an output array in $elapsed seconds. 
Return: \" .\n OkBad($commandSuccessful) .\n \" with CCSID '$ccsid', default CCSID '$defaultCcsid', current user '$currentUser', job name '$jobName', job number '$jobNumber', job user '$jobUser', with user liblist '$userLibl'.<BR><BR>\";\n\n// Note: old toolkit cannot get interactive output of this sort (DSPJOBLOG). This is additional functionality of the new toolkit.\n$cmdString =\"DSPJOBLOG JOB($jobNumber/$jobUser/$jobName)\";\necho \"About to run \" . $cmdString .\".<BR>\";\n$conn->setToolkitServiceParams(array('plug'=>'iPLUG5M')); // bigger to handle large joblog\n$interactiveOutput = $conn->CLInteractiveCommand($cmdString);\n$conn->setToolkitServiceParams(array('plug'=>'iPLUG512K')); // put back to default\necho printArray($interactiveOutput) . \"<BR><BR>\";\n\n$msg = 'HELLO_WITH_INPUTS_ARRAY';\n$cmdString = \"SNDMSG\";\n$inputs = array('MSG'=>$msg, 'TOUSR'=>$user);\n$commandSuccessful = i5_command($cmdString, $inputs);\necho \"Ran command $cmdString with an input array: \" . printArray($inputs) . \"Return: \" . OkBad($commandSuccessful) . \".<BR><BR>\";\n\n$msg = \"MixedCaseNoSpaces\";\n$cmdString = \"SNDMSG\";\n$inputs = array('MSG'=>$msg, 'TOUSR'=>$user);\n$commandSuccessful = i5_command($cmdString, $inputs);\necho \"Ran command $cmdString with an input array: \" . printArray($inputs) . \"Return: \" . OkBad($commandSuccessful) . \".<BR><BR>\";\n\n\n$msg = \"Davey Jones embedded spaces without quotes--caused error in old toolkit\";\n$cmdString = \"SNDMSG\";\n$inputs = array('MSG'=>$msg, 'TOUSR'=>$user);\n$commandSuccessful = i5_command($cmdString, $inputs);\necho \"Ran command $cmdString with an input array: \" . printArray($inputs) . \"Return: \" . OkBad($commandSuccessful) . \".<BR><BR>\";\n\n$msg = \"O'flanagan single quote--caused error in old toolkit\";\n$cmdString = \"SNDMSG\";\n$inputs = array('MSG'=>$msg, 'TOUSR'=>$user);\n$commandSuccessful = i5_command($cmdString, $inputs);\necho \"Ran command $cmdString with an input array: \" . 
printArray($inputs) . \"Return: \" . OkBad($commandSuccessful) . \".<BR><BR>\";\n\necho h2('Error functions');\necho \"Let's test i5_errormsg() and i5_errno()<BR>Get last error message: \" . i5_errormsg();\necho \"<BR>Get last error number: \" . i5_errno(). \"<BR><BR>\";\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5837068557739258,
"alphanum_fraction": 0.6055892705917358,
"avg_line_length": 30.86554527282715,
"blob_id": "c55cefff5827991a9ebb4e728a09049f9532a943",
"content_id": "1b2b545f7b73d7a43840c0e41e6ea1352c9900ab",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3793,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 119,
"path": "/test/php/test_12610_MISC_ibm_db2_io_adopt.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout multi sys api - adopt profile\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n\n$ctl = \"*here\";\n$newProfile = QSYGETPH($adoptuser1,$adoptpass1);\n$oldProfile = QSYGETPH();\nQWTSETP($newProfile);\ndiag();\nQWTSETP($oldProfile);\ndiag();\nQSYRLSPH($newProfile);\nQSYRLSPH($oldProfile);\n\n// good\necho \"Success (adopt)\\n\";\n\n// call IBM i\nfunction callme ($xmlIn) {\n global $procLib, $conn, $ipc, $ctl;\n $stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\n if (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n $clobIn = $xmlIn;\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n return $clobOut;\n}\n// diag check 00000003979133CD36843001\nfunction diag() {\n $xmlIn = \"<?xml version='1.0'?>\\n\";\n $xmlIn .= \"<script>\\n\";\n $xmlIn .= \"<diag/>\\n\";\n $xmlIn .= \"</script>\";\n $clobOut = callme ($xmlIn);\n var_dump($clobOut);\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Fail XML returned\\n\");\n $allparms = $xmlobj->xpath('/script/diag/jobinfo/curuser');\n if (!$allparms) die(\"Fail XML diag missing\\n\");\n var_dump($allparms);\n}\n\n// Get Profile Handle (QSYGETPH) API\nfunction QSYGETPH($user=\"*CURRENT\",$pwd=\"*NOPWD\") {\n $xmlIn = \"<?xml version='1.0'?>\\n\";\n $xmlIn .= \"<script>\\n\";\n $xmlIn .= \"<pgm name='QSYGETPH'>\\n\";\n 
$xmlIn .= \"<parm io='in'><data type='10A'>$user</data></parm>\\n\";\n $xmlIn .= \"<parm io='in'><data type='10A'>$pwd</data></parm>\\n\";\n $xmlIn .= \"<parm io='out'><data type='12b'/></parm>\\n\";\n $xmlIn .= \"<parm io='in'><data type='10i0'>0</data></parm>\\n\";\n if (substr($pwd, 0, 1) != '*') {\n $xmlIn .= \"<parm io='in'><data type='10i0'>\".strlen($pwd).\"</data></parm>\\n\";\n $xmlIn .= \"<parm io='in'><data type='10i0'>-1</data></parm>\\n\";\n }\n $xmlIn .= \"</pgm>\\n\";\n $xmlIn .= \"</script>\";\n // var_dump($xmlIn);\n $clobOut = callme ($xmlIn);\n // var_dump($clobOut);\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Fail XML returned\\n\");\n $allparms = $xmlobj->xpath('/script/pgm/parm');\n if (!$allparms) die(\"Fail XML parm missing\\n\");\n $profile = (string)$allparms[0]->data;\n if (!$profile) die(\"Fail XML profile missing\\n\");\n return $profile;\n}\n// Set Profile Handle (QWTSETP, QsySetToProfileHandle) API\nfunction QWTSETP($profile) {\n $xmlIn = \"<?xml version='1.0'?>\\n\";\n $xmlIn .= \"<script>\\n\";\n $xmlIn .= \"<pgm name='QWTSETP'>\\n\";\n $xmlIn .= \"<parm io='in'><data type='12b'>$profile</data></parm>\\n\";\n $xmlIn .= \"<parm io='in'><data type='10i0'>0</data></parm>\\n\";\n $xmlIn .= \"</pgm>\\n\";\n $xmlIn .= \"</script>\";\n // var_dump($xmlIn);\n $clobOut = callme ($xmlIn);\n // var_dump($clobOut);\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Fail XML returned\\n\");\n}\n// Release Profile Handle (QSYRLSPH, QsyReleaseProfileHandle) API\nfunction QSYRLSPH($profile) {\n $xmlIn = \"<?xml version='1.0'?>\\n\";\n $xmlIn .= \"<script>\\n\";\n $xmlIn .= \"<pgm name='QSYRLSPH'>\\n\";\n $xmlIn .= \"<parm io='in'><data type='12b'>$profile</data></parm>\\n\";\n $xmlIn .= \"</pgm>\\n\";\n $xmlIn .= \"</script>\";\n // var_dump($xmlIn);\n $clobOut = callme ($xmlIn);\n // var_dump($clobOut);\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Fail XML 
returned\\n\");\n}\n\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.29852795600891113,
"alphanum_fraction": 0.3835132420063019,
"avg_line_length": 39.01047134399414,
"blob_id": "42039f15d6320cacccfe09c080751521e6bad70e",
"content_id": "dc2ac42009f4e88b3df88b8df034bbd994a21bff",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 15285,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 382,
"path": "/test/php/test_80602_db2_io_ZZTON_deep_error.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - deep ds bad error\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG1M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n\n// good\necho \"Success\\n\";\n\n// D TONMAX c const(2)\n// D dcTon_t ds qualified based(Template)\n// D so7a 7a\n// D so4p0 4p 0\n// D so8a 8a\n// D so5p0 5p 0\n// D so3a 3a\n// D so6a 6a\n// D so3p0 3p 0\n// D so22a 22a\n// D so2p0 2p 0\n// D so2a 2a\n// D so4p2 4p 2\n// D so1a 1a\n// D so3s0 3s 0\n// D so4s0 4s 0\n// D so9p0 9p 0\n// D so12a 12a\n// D so12p3 12p 3\n// D so15a 15a\n// D so17p7 17p 7\n// D to7a 7a\n// D to4p0 4p 0\n// D to8a 8a\n// D to5p0 5p 0\n// D to3a 3a\n// D to6a 6a\n// D to3p0 3p 0\n// D to22a 22a\n// D to2p0 2p 0\n// D to2a 2a\n// D to4p2 4p 2\n// D to1a 1a\n// D to3s0 3s 0\n// D to4s0 4s 0\n// D to9p0 4p 2\n// D to12a 12a\n// D to12p3 12p 3\n// D to15a 15a\n// D to17p7 17p 7\n// D uo7a 7a\n// D uo4p0 4p 0\n// D uo8a 8a\n// D uo5p0 5p 0\n// D uo3a 3a\n// D uo6a 6a\n// D uo3p0 3p 0\n// D uo22a 22a\n// D uo2p0 2p 0\n// D uo2a 2a\n// D uo4p2 4p 2\n// D uo1a 1a\n// D uo3s0 3s 0\n// D uo4s0 4s 0\n// D uo9p0 4p 2\n// D uo12a 12a\n// D uo12p3 12p 3\n// D uo15a 15a\n// D uo17p7 17p 7\n// D vo7a 7a\n// D vo4p0 4p 0\n// D vo8a 8a\n// D vo5p0 5p 0\n// D vo3a 3a\n// D vo6a 6a\n// D vo3p0 3p 0\n// D vo22a 22a\n// D vo2p0 2p 0\n// D vo2a 2a\n// D vo4p2 4p 2\n// D vo1a 1a\n// D vo3s0 3s 0\n// D vo4s0 4s 0\n// D vo9p0 4p 2\n// D vo12a 12a\n// D vo12p3 12p 3\n// D vo15a 15a\n// D vo17p7 17p 7\n// D wo7a 7a\n// D wo4p0 4p 0\n// D wo8a 8a\n// D wo5p0 5p 0\n// D wo3a 3a\n// D wo6a 6a\n// D wo3p0 3p 0\n// D wo22a 22a\n// D wo2p0 2p 0\n// D wo2a 2a\n// D wo4p2 4p 2\n// D wo1a 1a\n// D wo3s0 3s 0\n// D wo4s0 4s 0\n// D wo9p0 4p 2\n// D wo12a 12a\n// D wo12p3 12p 3\n// D wo15a 15a\n// D wo17p7 17p 7\n// D xo7a 7a\n// D xo4p0 4p 0\n// D xo8a 8a\n// D xo5p0 5p 0\n// D xo3a 3a\n// D xo6a 6a\n// D xo3p0 3p 0\n// D xo22a 22a\n// D xo2p0 2p 0\n// D xo2a 2a\n// D xo4p2 4p 2\n// D xo1a 1a\n// D xo3s0 3s 0\n// D xo4s0 4s 0\n// D xo9p0 4p 2\n// D xo12a 12a\n// D 
xo12p3 12p 3\n// D xo15a 15a\n// D xo17p7 17p 7\n// D yo7a 7a\n// D yo4p0 4p 0\n// D yo8a 8a\n// D yo5p0 5p 0\n// D yo3a 3a\n// D yo6a 6a\n// D yo3p0 3p 0\n// D yo22a 22a\n// D yo2p0 2p 0\n// D yo2a 2a\n// D yo4p2 4p 2\n// D yo1a 1a\n// D yo3s0 3s 0\n// D yo4s0 4s 0\n// D yo9p0 4p 2\n// D yo12a 12a\n// D yo12p3 12p 3\n// D yo15a 15a\n// D yo17p7 17p 7\n// D zo7a 7a\n// D zo4p0 4p 0\n// D zo8a 8a\n// D zo5p0 5p 0\n// D zo3a 3a\n// D zo6a 6a\n// D zo3p0 3p 0\n// D zo22a 22a\n// D zo2p0 2p 0\n// D zo2a 2a\n// D zo4p2 4p 2\n// D zo1a 1a\n// D zo3s0 3s 0\n// D zo4s0 4s 0\n// D zo9p0 4p 2\n// D zo12a 12a\n// D zo12p3 12p 3\n// D zo15a 15a\n// D zo17p7 17p 7\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzton: check parm array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzton B export\n// D zzton PI\n// D nn1p0 1p 0\n// D nn7a 7a\n// D nn8p0 8p 0\n// D nnDS likeds(dcTon_t) dim(TONMAX)\n// D nn9p0 9p 0\n// D nn1a 1a\n// D nn60a 60a\n// D nn35a 35a\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZTONBAD'>\n <parm io='in'><data type='1p0'>1</data></parm>\n <parm io='in'><data type='7a'>7</data></parm>\n <parm io='in'><data type='8p0'>8</data></parm>\n <parm io='both'>\n <ds var='dcTon_t' dim='2'>\n <data var='s01' type='7A'>1</data>\n <data var='s02' type='4p0'>2</data>\n <data var='s03' type='8A'>3</data>\n <data var='s04' type='5p0'>4</data>\n <data var='s05' type='3A'>5</data>\n <data var='s06' type='6A'>6</data>\n <data var='s07' type='3p0'>7</data>\n <data var='s08' type='22A'>8</data>\n <data var='s09' type='2p0'>9</data>\n <data var='s10' type='2A'>10</data>\n <data var='s11' type='4p2'>11</data>\n <data var='s12' type='1A'>x</data>\n <data var='s13' type='3s0'>13</data>\n <data var='s14' type='4s0'>14</data>\n <data var='s15' type='9p0'>15</data>\n <data var='s16' type='12A'>16</data>\n <data var='s17' 
type='12p3'>17</data>\n <data var='s18' type='15A'>18</data>\n <data var='s19' type='17p7'>19</data>\n <data var='t01' type='7A'></data>\n <data var='t02' type='4p0'></data>\n <data var='t03' type='8A'></data>\n <data var='t04' type='5p0'></data>\n <data var='t05' type='3A'></data>\n <data var='t06' type='6A'></data>\n <data var='t07' type='3p0'></data>\n <data var='t08' type='22A'></data>\n <data var='t09' type='2p0'></data>\n <data var='t10' type='2A'></data>\n <data var='t11' type='4p2'></data>\n <data var='t12' type='1A'></data>\n <data var='t13' type='3s0'></data>\n <data var='t14' type='4s0'></data>\n <data var='t15' type='9p0'></data>\n <data var='t16' type='12A'></data>\n <data var='t17' type='12p3'></data>\n <data var='t18' type='15A'></data>\n <data var='t19' type='17p7'></data>\n <data var='u01' type='7A'>1</data>\n <data var='u02' type='4p0'>2</data>\n <data var='u03' type='8A'>3</data>\n <data var='u04' type='5p0'4></data>\n <data var='u05' type='3A'>5</data>\n <data var='u06' type='6A'>6</data>\n <data var='u07' type='3p0'>7</data>\n <data var='u08' type='22A'>8</data>\n <data var='u09' type='2p0'>9</data>\n <data var='u10' type='2A'>10</data>\n <data var='u11' type='4p2'>11</data>\n <data var='u12' type='1A'>x</data>\n <data var='u13' type='3s0'>13</data>\n <data var='u14' type='4s0'>14</data>\n <data var='u15' type='9p0'>15</data>\n <data var='u16' type='12A'>16</data>\n <data var='u17' type='12p3'>17</data>\n <data var='u18' type='15A'>18</data>\n <data var='u19' type='17p7'>19</data>\n <data var='v01' type='7A'></data>\n <data var='v02' type='4p0'></data>\n <data var='v03' type='8A'></data>\n <data var='v04' type='5p0'></data>\n <data var='v05' type='3A'></data>\n <data var='v06' type='6A'></data>\n <data var='v07' type='3p0'></data>\n <data var='v08' type='22A'></data>\n <data var='v09' type='2p0'></data>\n <data var='v10' type='2A'></data>\n <data var='v11' type='4p2'></data>\n <data var='v12' type='1A'></data>\n <data var='v13' 
type='3s0'></data>\n <data var='v14' type='4s0'></data>\n <data var='v15' type='9p0'></data>\n <data var='v16' type='12A'></data>\n <data var='v17' type='12p3'></data>\n <data var='v18' type='15A'></data>\n <data var='v19' type='17p7'></data>\n <data var='w01' type='7A'>1</data>\n <data var='w02' type='4p0'>2</data>\n <data var='w03' type='8A'>3</data>\n <data var='w04' type='5p0'>4</data>\n <data var='w05' type='3A'>5</data>\n <data var='w06' type='6A'>6</data>\n <data var='w07' type='3p0'>7</data>\n <data var='w08' type='22A'>8</data>\n <data var='w09' type='2p0'>9</data>\n <data var='w10' type='2A'>10</data>\n <data var='w11' type='4p2'>11</data>\n <data var='w12' type='1A'>x</data>\n <data var='w13' type='3s0'>13</data>\n <data var='w14' type='4s0'>14</data>\n <data var='w15' type='9p0'>15</data>\n <data var='w16' type='12A'>16</data>\n <data var='w17' type='12p3'>17</data>\n <data var='w18' type='15A'>18</data>\n <data var='w19' type='17p7'>19</data>\n <data var='x01' type='7A'></data>\n <data var='x02' type='4p0'></data>\n <data var='x03' type='8A'></data>\n <data var='x04' type='5p0'></data>\n <data var='x05' type='3A'></data>\n <data var='x06' type='6A'></data>\n <data var='x07' type='3p0'></data>\n <data var='x08' type='22A'></data>\n <data var='x09' type='2p0'></data>\n <data var='x10' type='2A'></data>\n <data var='x11' type='4p2'></data>\n <data var='x12' type='1A'></data>\n <data var='x13' type='3s0'></data>\n <data var='x14' type='4s0'></data>\n <data var='x15' type='9p0'></data>\n <data var='x16' type='12A'></data>\n <data var='x17' type='12p3'></data>\n <data var='x18' type='15A'></data>\n <data var='x19' type='17p7'></data>\n <data var='y01' type='7A'></data>\n <data var='y02' type='4p0'></data>\n <data var='y03' type='8A'></data>\n <data var='y04' type='5p0'></data>\n <data var='y05' type='3A'></data>\n <data var='y06' type='6A'></data>\n <data var='y07' type='3p0'></data>\n <data var='y08' type='22A'></data>\n <data var='y09' type='2p0'></data>\n 
<data var='y10' type='2A'></data>\n <data var='y11' type='4p2'></data>\n <data var='y12' type='1A'></data>\n <data var='y13' type='3s0'></data>\n <data var='y14' type='4s0'></data>\n <data var='y15' type='9p0'></data>\n <data var='y16' type='12A'></data>\n <data var='y17' type='12p3'></data>\n <data var='y18' type='15A'></data>\n <data var='y19' type='17p7'></data>\n <data var='z01' type='7A'>1</data>\n <data var='z02' type='4p0'>2</data>\n <data var='z03' type='8A'>3</data>\n <data var='z04' type='5p0'>4</data>\n <data var='z05' type='3A'>5</data>\n <data var='z06' type='6A'>6</data>\n <data var='z07' type='3p0'>7</data>\n <data var='z08' type='22A'>8</data>\n <data var='z09' type='2p0'>9</data>\n <data var='z10' type='2A'>10</data>\n <data var='z11' type='4p2'>11</data>\n <data var='z12' type='1A'>x</data>\n <data var='z13' type='3s0'>13</data>\n <data var='z14' type='4s0'>14</data>\n <data var='z15' type='9p0'>15</data>\n <data var='z16' type='12A'>16</data>\n <data var='z17' type='12p3'>17</data>\n <data var='z18' type='15A'>18</data>\n <data var='z19' type='17p7'>19</data>\n </ds>\n </parm>\n <parm io='both'><data type='9p0'>9</data></parm>\n <parm io='both'><data type='1a'>1</data></parm>\n <parm io='both'><data type='60a'>60</data></parm>\n <parm io='both'><data type='35a'>35</data></parm>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5805946588516235,
"alphanum_fraction": 0.6087636947631836,
"avg_line_length": 29.783132553100586,
"blob_id": "cc31a7809c7fe328ef22ee9357cce67351816771",
"content_id": "348401a15f8e7883b84f909be5aec2de11e5831d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2556,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 83,
"path": "/test/php/test_15650_ZZSHLOMO_OPM_ibm_db2_io_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM OPM CLP - ZZSHLOMO CLP test\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$mode = $pgm->attributes()->mode;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Missing XML pgm parms ($lib/$name $mode)\");\nif ((string)$parm[0]->data != \"I am 15\") die(\"1) not 'I am 15'\");\nif ((string)$parm[1]->data != 12.2) die(\"2) not '12.2'\");\nif ((string)$parm[2]->data != \"ALAN HOW ARE YOU TEST one of one\") die(\"3) not 'ALAN HOW ARE YOU TEST one of one'\");\n\n// good\necho \"Success ($lib/$name $mode)\\n\";\n\n//PGM PARM(&CHAR1 &NUM2 &RTNVAL)\n// DCL VAR(&CHAR1) TYPE(*CHAR) LEN(15)\n// DCL VAR(&NUM1) TYPE(*DEC) LEN(10)\n// DCL VAR(&NUM2) TYPE(*CHAR) LEN(10)\n// DCL VAR(&RTNVAL) TYPE(*CHAR) LEN(100)\n// CHGVAR VAR(&CHAR1) VALUE('I am 15')\n// CHGVAR VAR(&NUM2) VALUE(12.2)\n// CHGVAR VAR(&RTNVAL) VALUE('ALAN HOW ARE YOU TEST one of one')\n// ENDPGM: +\n// ENDPGM\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm mode='opm' name='ZZSHLOMO' lib='xyzlibxmlservicexyz'>\n<parm io='both'>\n<data type='15A'>frog</data>\n</parm>\n<parm io='both'>\n<data type='10A'>12.2</data>\n</parm>\n<parm io='both'>\n<data type='100A'>dude</data>\n</parm>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5071761012077332,
"alphanum_fraction": 0.5323818325996399,
"avg_line_length": 35.933921813964844,
"blob_id": "f39239192f31d3d88c088e35c62fe4a118fb26bf",
"content_id": "1a0ea978ae197022e528ef783345651f10e9a58c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 25153,
"license_type": "permissive",
"max_line_length": 371,
"num_lines": 681,
"path": "/docs/date-time-timestamp.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\nXMLSERVICE/Toolkit Date, Time, Timestamp\n========================================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nDate, Time, Timestamp - what do?\n--------------------------------\nTime and date are easy, correct size character buffer and pass any format you want. Essentially xmlservice/toolkit works without formal knowledge of date, time, timestamp formats, simply passing another set of characters in memory for a parameter (pass-by-reference mostly), called program most likely throws exception when format incompatible (see with any MI debugger).\n\n\n1) New PHP Toolkit Samples\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nZZDATE (D datfmt(\\*iso))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n require_once('connection.inc');\n // assume /usr/local/zendsvr/share/ToolkitAPI\n require_once(\"ToolkitService.php\");\n // new toolkit\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(\n array('InternalKey'=>$internalKey, // route to same XMLSERVICE job /tmp/myjob1\n 'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzdate: check date parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzdate B export\n // D zzdate PI D\n // D myDate D datfmt(*iso)\n // * vars\n // D retDate s D datfmt(*iso)\n // /free\n // retDate=myDate;\n // myDate=d'2007-09-30';\n // return retDate;\n // /end-free\n // P E\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 10, 'ZZDATE', 'myDate', '2009-05-11');\n $retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 10, 'ZZDATE', 'retDate', '2002-02-02');\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $libxmlservice, $param, $retrn, 
array('func'=>'ZZDATE'));\n // var_dump($result);\n /* in/out param myDate */\n $myDate = \"XMLSERVICE i/o param myDate: \".$result[\"io_param\"][\"myDate\"];\n echo \"$myDate\\n\";\n $expect = '2007-09-30';\n if (strpos($myDate,$expect)<1) die(\"Fail missing $expect\\n\");\n /* return value retDate */\n $retDate = \"XMLSERVICE return retDate: \".$result[\"retvals\"][\"retDate\"];\n echo \"$retDate\\n\";\n $expect = '2009-05-11';\n if (strpos($retDate,$expect)<1) die(\"Fail missing $expect\\n\");\n /* all good */\n echo \"Success\\n\";\n ?>\n\n\nZZDATEUSA (D datfmt(\\*USA))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n require_once('connection.inc');\n // assume /usr/local/zendsvr/share/ToolkitAPI\n require_once(\"ToolkitService.php\");\n // new toolkit\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(\n array('InternalKey'=>$internalKey, // route to same XMLSERVICE job /tmp/myjob1\n 'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzdateUSA: check date parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzdateUSA B export\n // D zzdateUSA PI D datfmt(*USA)\n // D myDate D datfmt(*USA)\n // * vars\n // D retDate s D datfmt(*USA)\n // /free\n // retDate=myDate;\n // myDate=d'2007-09-30';\n // return retDate;\n // /end-free\n // P E\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 10, 'ZZDATEUSA', 'myDate', '05/11/2009');\n $retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 10, 'ZZDATEUSA', 'retDate', '2002-02-02');\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $libxmlservice, $param, $retrn, array('func'=>'ZZDATEUSA'));\n // var_dump($result);\n /* in/out 
param myDate */\n $myDate = \"XMLSERVICE i/o param myDate: \".$result[\"io_param\"][\"myDate\"];\n echo \"$myDate\\n\";\n $expect = '09/30/2007';\n if (strpos($myDate,$expect)<1) die(\"Fail missing $expect\\n\");\n /* return value retDate */\n $retDate = \"XMLSERVICE return retDate: \".$result[\"retvals\"][\"retDate\"];\n echo \"$retDate\\n\";\n $expect = '05/11/2009';\n if (strpos($retDate,$expect)<1) die(\"Fail missing $expect\\n\");\n /* all good */\n echo \"Success\\n\";\n ?>\n\nZZTIME (T timfmt(\\*iso))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n require_once('connection.inc');\n // assume /usr/local/zendsvr/share/ToolkitAPI\n require_once(\"ToolkitService.php\");\n // new toolkit\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(\n array('InternalKey'=>$internalKey, // route to same XMLSERVICE job /tmp/myjob1\n 'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zztime: check time parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zztime B export\n // D zztime PI T\n // D myTime T timfmt(*iso)\n // * vars\n // D retTime s T timfmt(*iso)\n // /free\n // retTime=myTime;\n // myTime=t'12.34.56';\n // return retTime;\n // /end-free\n // P E\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 8, 'ZZTIME', 'myTime', '09.45.29');\n $retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 8, 'ZZTIME', 'retTime', '02.02.02');\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $libxmlservice, $param, $retrn, array('func'=>'ZZTIME'));\n // var_dump($result);\n /* in/out param myDate */\n $myTime = \"XMLSERVICE i/o param myTime: \".$result[\"io_param\"][\"myTime\"];\n echo 
\"$myTime\\n\";\n $expect = '12.34.56';\n if (strpos($myTime,$expect)<1) die(\"Fail missing $expect\\n\");\n /* return value retTime */\n $retTime = \"XMLSERVICE return retTime: \".$result[\"retvals\"][\"retTime\"];\n echo \"$retTime\\n\";\n $expect = '09.45.29';\n if (strpos($retTime,$expect)<1) die(\"Fail missing $expect\\n\");\n /* all good */\n echo \"Success\\n\";\n ?>\n\n\nZZTIME (T timfmt(\\*USA))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n require_once('connection.inc');\n // assume /usr/local/zendsvr/share/ToolkitAPI\n require_once(\"ToolkitService.php\");\n // new toolkit\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(\n array('InternalKey'=>$internalKey, // route to same XMLSERVICE job /tmp/myjob1\n 'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zztimeUSA: check time parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zztimeUSA B export\n // D zztimeUSA PI T timfmt(*USA)\n // D myTime T timfmt(*USA)\n // * vars\n // D retTime s T timfmt(*USA)\n // /free\n // retTime=myTime;\n // myTime=t'12.34.00';\n // return retTime;\n // /end-free\n // P E\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 8, 'ZZTIMEUSA', 'myTime', '09:45 AM');\n $retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 8, 'ZZTIMEUSA', 'retTime', '02:02 PM');\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $libxmlservice, $param, $retrn, array('func'=>'ZZTIMEUSA'));\n // var_dump($result);\n /* in/out param myDate */\n $myTime = \"XMLSERVICE i/o param myTime: \".$result[\"io_param\"][\"myTime\"];\n echo \"$myTime\\n\";\n $expect = '12:34 PM';\n if (strpos($myTime,$expect)<1) die(\"Fail 
missing $expect\\n\");\n /* return value retTime */\n $retTime = \"XMLSERVICE return retTime: \".$result[\"retvals\"][\"retTime\"];\n echo \"$retTime\\n\";\n $expect = '09:45 AM';\n if (strpos($retTime,$expect)<1) die(\"Fail missing $expect\\n\");\n /* all good */\n echo \"Success\\n\";\n ?>\n\nZZSTAMP (Z)\n\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n require_once('connection.inc');\n // assume /usr/local/zendsvr/share/ToolkitAPI\n require_once(\"ToolkitService.php\");\n // new toolkit\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(\n array('InternalKey'=>$internalKey, // route to same XMLSERVICE job /tmp/myjob1\n 'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzstamp: check timestamp parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzstamp B export\n // D zzstamp PI Z\n // D myStamp Z\n // * vars\n // D retStamp s Z\n // /free\n // retStamp=myStamp;\n // myStamp=z'1960-12-31-12.32.23.000000';\n // return retStamp;\n // /end-free\n // P E\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 26, 'ZZSTAMP', 'myStamp', '2011-12-29-12.45.29.000000');\n $retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 26, 'ZZSTAMP', 'retStamp', '2002-02-02-02.02.02.000000');\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $libxmlservice, $param, $retrn, array('func'=>'ZZSTAMP'));\n // var_dump($result);\n /* in/out param myDate */\n $myStamp = \"XMLSERVICE i/o param myStamp: \".$result[\"io_param\"][\"myStamp\"];\n echo \"$myStamp\\n\";\n $expect = '1960-12-31-12.32.23.000000';\n if (strpos($myStamp,$expect)<1) die(\"Fail missing $expect\\n\");\n /* return value retStamp */\n $retStamp = \"XMLSERVICE return 
retStamp: \".$result[\"retvals\"][\"retStamp\"];\n echo \"$retStamp\\n\";\n $expect = '2011-12-29-12.45.29.000000';\n if (strpos($retStamp,$expect)<1) die(\"Fail missing $expect\\n\");\n /* all good */\n echo \"Success\\n\";\n ?>\n\n\n\n\n2) XMLSERVICE Raw XML Samples\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nZZDATE (D datfmt(\\*iso))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n // see connection.inc param details ...\n require_once('connection.inc');\n // call IBM i\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. 
expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n // -----------------\n // output pgm call\n // -----------------\n // only one program this XML script\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm parms\n $parm = $pgm->xpath('parm');\n if (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n $var = $parm[0]->data->attributes()->var;\n $actual = (string)$parm[0]->data;\n $expect='2007-09-30';\n if ($actual != $expect) die(\"parm: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n // pgm data returned\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n $var = $retn[0]->data->attributes()->var;\n $actual = (string)$retn[0]->data;\n $expect='2009-05-11';\n if ($actual != $expect) die(\"return: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n // good\n echo \"Success ($lib/$name.$func)\\n\";\n\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzdate: check date parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzdate B export\n // D zzdate PI D\n // D myDate D datfmt(*iso)\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZDATE'>\n <parm io='both'>\n <data var='myDate' type='10A'>2009-05-11</data>\n </parm>\n <return>\n <data var='myDateRet' type='10A'>nada</data>\n </return>\n </pgm>\n </script>\n ENDPROC;\n return test_lib_replace($clob);\n }\n ?>\n\n\nZZDATEUSA (D datfmt(\\*USA))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n // see connection.inc param details ...\n require_once('connection.inc');\n // call IBM i\n if ($i5persistentconnect) $conn = 
db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n // -----------------\n // output pgm call\n // -----------------\n // only one program this XML script\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm parms\n $parm = $pgm->xpath('parm');\n if (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n $var = $parm[0]->data->attributes()->var;\n $actual = (string)$parm[0]->data;\n $expect='09/30/2007';\n if ($actual != $expect) die(\"parm: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n // pgm data returned\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n $var = $retn[0]->data->attributes()->var;\n $actual = (string)$retn[0]->data;\n $expect='05/11/2009';\n if ($actual != $expect) die(\"return: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n // good\n echo \"Success ($lib/$name.$func)\\n\";\n\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * 
zzdateUSA: check date parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzdateUSA B export\n // D zzdateUSA PI D datfmt(*USA)\n // D myDate D datfmt(*USA)\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZDATEUSA'>\n <parm io='both'>\n <data var='myDate' type='10A'>05/11/2009</data>\n </parm>\n <return>\n <data var='myDateRet' type='10A'>nada</data>\n </return>\n </pgm>\n </script>\n ENDPROC;\n return test_lib_replace($clob);\n }\n ?>\n\nZZTIME (T timfmt(\\*iso))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n \n <?php\n // see connection.inc param details ...\n require_once('connection.inc');\n // call IBM i\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n // -----------------\n // output pgm call\n // -----------------\n // only one program this XML script\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm parms\n $parm = $pgm->xpath('parm');\n if (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n $var = $parm[0]->data->attributes()->var;\n $actual = (string)$parm[0]->data;\n $expect='12.34.56';\n if ($actual != $expect) die(\"parm: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n // pgm data returned\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n $var = $retn[0]->data->attributes()->var;\n $actual = (string)$retn[0]->data;\n $expect='09.45.29';\n if ($actual != $expect) die(\"return: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n // good\n echo \"Success ($lib/$name.$func)\\n\";\n\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zztime: check time parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zztime B export\n // D zztime PI T\n // D myTime T timfmt(*iso)\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZTIME'>\n <parm io='both'>\n <data var='myTime' type='8A'>09.45.29</data>\n </parm>\n <return>\n <data var='myTimeRet' type='8A'>nada</data>\n </return>\n </pgm>\n </script>\n ENDPROC;\n return test_lib_replace($clob);\n }\n ?>\n\nZZTIMEUSA (T timfmt(\\*USA))\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n // see connection.inc param details ...\n require_once('connection.inc');\n // call IBM i\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n // -----------------\n // output pgm call\n // -----------------\n // only one program this XML script\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm parms\n $parm = $pgm->xpath('parm');\n if (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n $var = $parm[0]->data->attributes()->var;\n $actual = (string)$parm[0]->data;\n $expect='12:34 PM';\n if ($actual != $expect) die(\"parm: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n // pgm data returned\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n $var = $retn[0]->data->attributes()->var;\n $actual = (string)$retn[0]->data;\n $expect='09:45 AM';\n if ($actual != $expect) die(\"return: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n // good\n echo \"Success ($lib/$name.$func)\\n\";\n\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zztimeUSA: check time parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zztimeUSA B export\n // D zztimeUSA PI T timfmt(*USA)\n // D myTime T timfmt(*USA)\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZTIMEUSA'>\n <parm io='both'>\n <data type='8A'>09:45 AM</data>\n </parm>\n <return>\n <data type='8A'>nada</data>\n </return>\n </pgm>\n </script>\n ENDPROC;\n return test_lib_replace($clob);\n }\n ?>\n\n\nZZSTAMP (Z)\n\"\"\"\"\"\"\"\"\"\"\"\n\n::\n\n <?php\n // see connection.inc param details ...\n require_once('connection.inc');\n // call IBM i\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n // -----------------\n // output pgm call\n // -----------------\n // only one program this XML script\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm parms\n $parm = $pgm->xpath('parm');\n if (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n $var = $parm[0]->data->attributes()->var;\n $actual = (string)$parm[0]->data;\n $expect='1960-12-31-12.32.23.000000';\n if ($actual != $expect) die(\"parm: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n // pgm data returned\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n $var = $retn[0]->data->attributes()->var;\n $actual = (string)$retn[0]->data;\n $expect='2011-12-29-12.45.29.000000';\n if ($actual != $expect) die(\"return: $var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n // good\n echo \"Success ($lib/$name.$func)\\n\";\n\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzstamp: check timestamp parm\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzstamp B export\n // D zzstamp PI Z\n // D myStamp Z\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZSTAMP'>\n <parm io='both'>\n <data var='myStamp' type='26A'>2011-12-29-12.45.29.000000</data>\n </parm>\n <return>\n <data var='myStampRet' type='26A'>nada</data>\n </return>\n </pgm>\n </script>\n ENDPROC;\n return test_lib_replace($clob);\n }\n ?>\n\n\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEDate?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.5918803215026855,
"alphanum_fraction": 0.6143162250518799,
"avg_line_length": 25.714284896850586,
"blob_id": "80ae2ad7d3ca9c98a5c653384633ed7be207e736",
"content_id": "72f817ae96be9468d03569faff193355f5dc43af",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 936,
"license_type": "permissive",
"max_line_length": 139,
"num_lines": 35,
"path": "/test/php/test_30451_RTVMSG_cw_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - CW Toolkit REXX RTVMSG command\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nrequire_once('CW/cw.php'); // new toolkit compatibility (Alan)\n\n/* connect */\n$conn = i5_connect(\"localhost\", $user, $password);\n// $conId = '42';\n// $conn = i5_pconnect(\"\", $user, $password, array(I5_OPTIONS_PRIVATE_CONNECTION => $conId));\nif (!$conn)\n{ $tab = i5_error();\n die(\"Connect: \".$tab[2].\" \".\"$tab[3], $tab[0]\");\n}\n\n$msgid = \"CPF0010\";\n$msgf = \"QCPFMSG\";\n\nif (!i5_command(\"RTVMSG\", array(\"MSGID\" => $msgid, \"MSGF\" => $msgf), array(\"MSG\" => array(\"msg\", \"char(200)\")), $conn)) die(i5_errormsg());\nif (function_exists('i5_output')) extract(i5_output());\necho \"Description of $msgid: $msg\\n\";\n\nif (strpos($msg,'support')<1) die(\"$msg not contain support\");\n\n// good\necho \"I have ... \\n\";\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.568382740020752,
"alphanum_fraction": 0.6245449781417847,
"avg_line_length": 27.235294342041016,
"blob_id": "395e75558c3e10ca38c39c2ba4b444800590f3ab",
"content_id": "390cadc800a6688a0a076f2ce7d90bc66f5d1882",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1923,
"license_type": "permissive",
"max_line_length": 54,
"num_lines": 68,
"path": "/test/byval/testval2.py",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "import os\nfrom itoolkit import *\nfrom itoolkit.lib.ilibcall import *\n\nitransport = iLibCall()\n\nitool = iToolKit()\nitool.add(iCmd('chglibl', 'CHGLIBL LIBL(XMLSERVICE)'))\n# dcl-proc GetSmart export;\n# dcl-pi *N;\n# p0Format char(10) Value;\n# p0Type char(1) Value;\n# p0Value char(15) Value;\n# p0Channel char(5);\n# p0ChannelDesc char(50);\n# p0MPG char(5);\n# p0MPGDesc char(50);\n# p0Class char(5);\n# p0ClassDesc char(50);\n# p0Line char(5);\n# p0LineDesc char(50);\n# p0Part char(15);\n# p0PartDesc char(30);\n# end-pi;\ns1 = iSrvPgm('getsmart','TESTZSRV','GETSMART')\np1 = iParm('p1',{'by':'val'})\np1.add(iData('p0Format','10a',''))\np2 = iParm('p2',{'by':'val'})\np2.add(iData('p0Type','1a',''))\np3 = iParm('p3',{'by':'val'})\np3.add(iData('p0Value','15a',''))\ns1.add(p1)\ns1.add(p2)\ns1.add(p3)\ns1.addParm(iData('p0Channel','5a',''))\ns1.addParm(iData('p0ChannelDesc','50a',''))\ns1.addParm(iData('p0MPG','5a',''))\ns1.addParm(iData('p0MPGDesc','50a',''))\ns1.addParm(iData('p0Class','5a',''))\ns1.addParm(iData('p0ClassDesc','50a',''))\ns1.addParm(iData('p0Line','5a',''))\ns1.addParm(iData('p0LineDesc','50a',''))\ns1.addParm(iData('p0Part','15a',''))\ns1.addParm(iData('p0PartDesc','30a',''))\n\nitool.add(s1)\n\nitool.call(itransport)\nchglibl = itool.dict_out('chglibl')\ngetsmart = itool.dict_out('getsmart')\nprint(chglibl['success'])\n# print(getsmart)\n# print(itool.xml_in())\n# print(itool.xml_out())\nif 'success' in getsmart:\n print(getsmart['p0Format'])\n print(getsmart['p0Type'])\n print(getsmart['p0Value'])\n print(getsmart['p0Channel'])\n print(getsmart['p0ChannelDesc'])\n print(getsmart['p0MPG'])\n print(getsmart['p0MPGDesc'])\n print(getsmart['p0Class'])\n print(getsmart['p0ClassDesc'])\n print(getsmart['p0Line'])\n print(getsmart['p0LineDesc'])\n print(getsmart['p0Part'])\n print(getsmart['p0PartDesc'])\n\n\n\n"
},
{
"alpha_fraction": 0.5307967066764832,
"alphanum_fraction": 0.5518183708190918,
"avg_line_length": 32.97142791748047,
"blob_id": "211dab6359e27e43ddb36adb637aa7d1ed5be506",
"content_id": "0135ec99c4393bf925a4e35fac351ec837841b8d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4757,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 140,
"path": "/test/php/test_10167_ZZARRAY_odbc_set_array_small.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: ODBC result set SRVPGM - array small data\n--SKIPIF--\n<?php require_once('skipifodbc.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = odbc_pconnect($database,$user,$password);\nelse $conn = odbc_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = odbc_prepare($conn, \"call $procLib.iPLUGR32K(?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".odbc_errormsg());\n$ctl .= \" *hack\";\n$clobIn = getxml();\n$clobOut = \"\";\n// bad behavior odbc extension ...\n// why IBM i result set warning???\nerror_reporting(~E_ALL);\n$ret=odbc_execute($stmt,array($ipc,$ctl,$clobIn));\nif (!$ret) die(\"Bad execute: \".odbc_errormsg());\nerror_reporting(E_ALL);\nwhile(odbc_fetch_row($stmt)) {\n $clobOut .= driverJunkAway(odbc_result($stmt, 1));\n}\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n// -----------------\n// output cmd call\n// -----------------\n$allcmds = $xmlobj->xpath('/script/cmd');\nif (!$allcmds) die(\"Fail XML cmd missing\\n\");\n// -----------------\n// output pgm call\n// -----------------\n$myName1 = 'Ranger'; // expected name\n$myMax1 = 5; // expected max\n$myCount1= 5; // expected count\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Fail XML pgm missing\\n\");\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$func = $pgm->attributes()->func;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n$myName = (string)$parm[0]->data;\n$myMax = (string)$parm[1]->data;\n$myCount= (string)$parm[2]->data;\nif ($myName != $myName1) die(\"Fail myName ($myName not $myName1) ($lib/$name.$func)\\n\");\nif ($myMax != $myMax1) die(\"Fail (myMax $myMax not $myMax1) ($lib/$name.$func)\\n\");\nif ($myCount != $myCount1) die(\"Fail myCount ($myCount not $myCount1) ($lib/$name.$func)\\n\");\n// pgm data structure returned dim(999)\n$retn = $pgm->xpath('return');\nif (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n$dsret = $retn[0]->ds; // DS records returned\nif (count($dsret) != $myCount) {\n die(\"Fail XML pgm not return $myCount DS records ($lib/$name.$func)\\n\");\n}\n$AAAs = \"\";\nfor ($j=0;$j<4095;$j++) $AAAs .= \"A\";\n$i=0; // DS records\nforeach($dsret as $ds) {\n // DS records expected\n $irpg = $i+1;\n $dcMyName = $myName1.$irpg;\n if ($myMax > 10) $dcMyJob = $AAAs;\n else $dcMyJob = \"Test 10\".$irpg;\n $dcMyRank = 10+$irpg;\n $dcMyPay = sprintf(\"%1.2f\", 13.42*$irpg);\n $data1 = array($dcMyName,$dcMyJob,$dcMyRank,$dcMyPay);\n // DS data elements\n for ($j=0;$j<4;$j++) {\n $var = $ds->data[$j]->attributes()->var;\n $actual = (string)$ds->data[$j];\n $expect = $data1[$j];\n if ((string)$actual != $expect) {\n die(\"Fail dcRec_t[$i].$var ($actual != $expect) ($lib/$name.$func)\\n\");\n }\n }\n $i++;\n}\n\n// good\necho \"Success ($lib/$name.$func)\\n\";\n\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarray: check return array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzarray B export\n// D zzarray PI likeds(dcRec_t) dim(ARRAYMAX)\n// D myName 10A\n// D myMax 10i 0\n// D myCount 10i 0\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<cmd comment='addlible'>ADDLIBLE LIB(xyzlibxmlservicexyz) POSITION(*FIRST)</cmd>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZARRAY'>\n <parm comment='search this name'>\n <data var='myName' type='10A'>Ranger</data>\n </parm>\n <parm comment='max allowed return'>\n <data var='myMax' type='10i0'>5</data>\n </parm>\n <parm comment='actual count returned'>\n <data var='myCount' type='10i0' enddo='mycount'>0</data>\n </parm>\n <return>\n <ds var='dcRec_t' dim='999' dou='mycount'>\n <data var='dcMyName' type='10A'>na</data>\n <data var='dcMyJob' type='4096A'>na</data>\n <data var='dcMyRank' type='10i0'>0</data>\n <data var='dcMyPay' type='12p2'>0.0</data>\n </ds>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.6261998414993286,
"alphanum_fraction": 0.6408158540725708,
"avg_line_length": 44.37623596191406,
"blob_id": "770dd74b8a82846fcfe9e87ff31d1f8fe77c2c61",
"content_id": "6c3416a062db843e56de14760bda26d19bc602e9",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 9168,
"license_type": "permissive",
"max_line_length": 364,
"num_lines": 202,
"path": "/docs/faster.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "Toolkit go faster\n=================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nBefore you start ...\n\nThis page is about running faster using persistent and private pooled connections workload balancing techniques. This is not a discussion about using QTEMP or \\*LDA in called RPG programs across browser clicks, that is a different topic entirely.\n\nNew version Zend Server include ZRAY\n------------------------------------\n\nServer set up asks if Developer or Production. If Developer, Z-Ray is on by default, but easy to turn off. If Production, Z-Ray is off by default, but easy to turn on. Z-Ray is recommended to be off in Production for performance and security reasons. It can be set up in secured mode in production to only be used on pages deliberately accessed by a developer.\n\nToolkit connections performance\n-------------------------------\nThere are many ways to workload balance PHP Toolkit connections:\n\n* slower ... public \"stateless\" job - no 'InternalKey' (default)\n\n::\n\n$ToolkitServiceObj->setToolkitServiceParams( array('stateless'=>true));\n\n* faster ... private \"state full \" single job - $user=\"SALLY\" connected to single xmlservice job\n\n::\n\n // php:db2_connect <- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user\" -> XTOOLKIT:XMLSERVICE\n $ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>\"/tmp/$user\"))\n\n* fastest ... private \"state full\" pool jobs - $user=\"SALLY\".rand(1,10) connected to 10 random xmlservice jobs \n\n::\n\n // php:db2_connect :<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user1\" -> XTOOLKIT:XMLSERVICE(1)\n // rand pick server:\n // :<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user10\" -> XTOOLKIT:XMLSERVICE(10)\n $ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>\"/tmp/$user\".rand(1,10))\n\nThe following is a relative performance guideline:\n\n* slower ... public \"stateless\" job ... stateless connection (safe default)\n\n::\n\n <?php\n // job 1 (client) job 2 (server)\n // any php-cgi job attach QSQSRVR call XMSLERVICE\n // ------------------ ------------------------------\n // php:db2_(p)connect <- XML IN/OUT -> QSQSRVR:XMLSERVICE\n\n $extension='ibm_db2';\n try { $ToolkitServiceObj = ToolkitService::getInstance($db, $user, $pass, $extension); }\n catch (Exception $e) { echo $e->getMessage(), \"\\n\"; exit(); }\n $options = array('stateless'=>true,'plugSize'=>'4K');\n $ToolkitServiceObj->setToolkitServiceParams($options);\n $ToolkitServiceObj->disconnect();\n\n* slower ... uses db2_connect / odbc_connect (full open/close of QSQSRVR job)\n\n* slower ... starts/stops xmlservice within QSQSRVR job each script request\n\n* What to watch for ...\n\n not much, this is a simple full start/stop model (slow, but safe mostly)\n\n* faster ... private pool jobs ... state full connection (private)\n\n::\n\n <?php\n // job 1 (client) job 2 (proxy) job 3..13 (10 servers)\n // any php-cgi job QSQSRVR passthru XTOOLKIT(s) ready (always)\n // ------------------ ------------------ --------------------------\n // php:db2_(p)connect:<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user1\" -> XTOOLKIT:XMLSERVICE(1)\n // :<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user2\" -> XTOOLKIT:XMLSERVICE(2)\n // rand pick a server:\n // :<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user10\" -> XTOOLKIT:XMLSERVICE(10)\n\n $extension='ibm_db2';\n try { $ToolkitServiceObj = ToolkitService::getInstance($db, $user, $pass, $extension); }\n catch (Exception $e) { echo $e->getMessage(), \"\\n\"; exit(); }\n $maxpool = 10; // 10 jobs good enough to handle my machine needs\n $internalKey = '/tmp/packers'.rand(1,$maxpool);\n $ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>$internalKey));\n /* Do not use the disconnect() function for \"state full\" connection */\n /* NEVER EVER USE THIS ... $ToolkitServiceObj->disconnect(); */\n /* Why? *immed kill of job, not nice or sync, just kill */\n\n* slower ... uses db2_connect / odbc_connect (full open/close of QSQSRVR job)\n\n* fastest ... starts another xmlservice /tmp/packers1-to-10 beyond QSQSRVR job (XTOOLKIT PGM-XMLSERVICE), and jobs 1-10 use over and over and over ... until killed by $ToolkitServiceObj->disconnect() or by IBM i operator\n\n* What to watch for ...\n \n Security 1: profile FRED can not attach to profile SALLY XMLSERVICE jobs (SALLY owns /tmp/packers1-10, FRED will have to make his own jobs /tmp/bears1-10)\n \n Co-operate with web site: develop procedures start/stop xmlservice when doing system maintenance (kill xmlservice jobs, etc.)\n \n A live job: QTEMP/\\*LDA are re-used, therefore your called applications must be ready to handle/clear\n\n* fastest ... private pool jobs ... add persistent db2 connections (db2_pconnect)\n\n::\n\n <?php\n // job 1 (client) job 2 (proxy) job 3..13 (10 servers)\n // any php-cgi job QSQSRVR passthru XTOOLKIT(s) ready (always)\n // ------------------ ------------------ --------------------------\n // php:db2_(p)connect:<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user1\" -> XTOOLKIT:XMLSERVICE(1)\n // :<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user2\" -> XTOOLKIT:XMLSERVICE(2)\n // rand pick a server:\n // :<- XML IN/OUT -> QSQSRVR:XMLSERVICE <- \"/tmp/$user10\" -> XTOOLKIT:XMLSERVICE(10)\n\n require_once(\"ToolkitService.php\");\n $i5persistentconnect = true;\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) echo \"Bad connect: $conn,$database,$user,perm=$i5persistentconnect\";\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n $maxpool = 10; // 10 jobs good enough to handle my machine needs\n $ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>'/tmp/packers'.rand(1,$maxpool),'plug'=>'iPLUG32K'));\n\n* fastest ... uses db2_pconnect / odbc_pconnect (persistent QSQSRVR job stays alive \"forever\")\n\n* fastest ... starts another xmlservice /tmp/packers1-to-10 beyond QSQSRVR job (XTOOLKIT PGM-XMLSERVICE), and jobs 1-10 use over and over and over ... until killed by $ToolkitServiceObj->disconnect() or by IBM i operator\n\n* What to watch for ...\n \n Security 1: profile FRED can not attach to profile SALLY XMLSERVICE jobs (SALLY owns /tmp/packers1-10, FRED will have to make his own jobs /tmp/bears1-10)\n \n Security 2: profile FRED owns a db2_pconnect(tion), and SALLY owns a db2_pconnect(ion), XMLSERVICE connect InternalKey profile must match (db2_pconnect(\"SALLY\") owns /tmp/packers1-10, db2_pconnect(\"FRED\") owns /tmp/bears1-10)\n \n Co-operate with web site: develop procedures start/stop xmlservice when doing system maintenance (kill xmlservice jobs, etc.)\n \n A live job: QTEMP/\\*LDA are re-used, therefore your called applications must be ready to handle/clear\n\n\nToolkit operations performance\n------------------------------\n\nAlways use PgmCall API for speed including data area, job attributes, etc. (V6+ also call CL and OPM \\*PGM with PgmCall), most command functions will run significantly slower.\n\n* slower ... PASE sh utilities (system wrkactjob, ls, ps, etc.)\n\n::\n\n $ToolkitServiceObj->CLInteractiveCommand\n\n* slightly faster ... CMDS that return data (RTVJOBA, etc.)\n\n::\n\n $ToolkitServiceObj->CLCommandWithOutput\n\n* faster ... CMDS that do not return data (ADDLIBLE, etc.)\n\n::\n\n $ToolkitServiceObj->CLCommand\n \n* fastest ... calling PGMs/SRVPGMs (RPG, CLP, Cobol, System API, etc.)\n\n::\n\n $ToolkitServiceObj->PgmCall\n\nToolkit plug size performance\n-----------------------------\n\nSetting plug size to match your data size can offer increased performance.\n\n* slower ... 15 MB plug size (max)\n\n::\n\n $ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>'/tmp/packers'.rand(1,$maxpool),'plugSize' => '15M'));\n\n* faster ... 512K plug size (default)\n\n::\n\n $ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>'/tmp/packers'.rand(1,$maxpool),'plugSize'=>'512K'));\n\n* fastest ... 4K plug size (min)\n\n::\n\n $ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>'/tmp/packers'.rand(1,$maxpool),'plugSize'=>'4K'));\n\nWhy a plug size at all?\n^^^^^^^^^^^^^^^^^^^^^^^\n DB2 connections are safe reliable transport for XML documents between client (PHP) and server (XMLSERVICE), but DB2 forces you to declare IN/OUT parameter size of any call procedure, XMLSERVICE download includes a few different stored procedure sizes (iPLUG4k .. iPLUG15M), so your script needs to choose the IN/OUT size that fits your data.\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEFaster?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n\n\n"
},
{
"alpha_fraction": 0.5620039701461792,
"alphanum_fraction": 0.5877976417541504,
"avg_line_length": 28.8592586517334,
"blob_id": "87a4599931ecb892a2dad7c14531633c96609332",
"content_id": "cf59b85aa25457f1622bb7e37433d9508609a2cf",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4032,
"license_type": "permissive",
"max_line_length": 104,
"num_lines": 135,
"path": "/test/php/test_12600_MISC_ibm_db2_io_dataqueue.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout multi sys api - QSNDDTAQ/QRCVDTAQ data queues\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n// -----------------\n// output cmd call\n// -----------------\n$allcmds = $xmlobj->xpath('/script/cmd');\nif (!$allcmds) die(\"Fail XML cmd missing\\n\");\nif (count($allcmds) != 3) die(\"Fail XML not return 3 CMD records\\n\");\n// -----------------\n// output pgm call\n// -----------------\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Fail XML pgm missing\\n\");\nif (count($allpgms) != 2) die(\"Fail XML not return 2 PGM records\\n\");\n$expect = \"System i data queues forever\";\nif (strpos($clobOut,$expect)<1) die(\"XML data missing ($expect)\");\n\n// good\necho \"Success (data queue script)\\n\";\n\n\n// work data queues\n// CMD: dltdtaq,crtdtaq\n// PGM: Send Data Queue (QSNDDTAQ) API\n// 1 Data queue name Input Char(10)\n// 2 Library name Input Char(10)\n// 3 Length of data Input Packed(5,0)\n// 4 Data Input Char(*) Input\n// PGM: Receive Data Queue (QRCVDTAQ) API\n// 1 Data queue name Input Char(10)\n// 2 Library name Input Char(10)\n// 3 Length of data Input Packed(5,0)\n// 4 Data Char(*) Output\n// 5 Wait time Input Packed(5,0)\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<comment>\n***************************************\n* recreate the data queue\n***************************************\n</comment>\n<cmd comment='dltdtaq MYDATAQ'>DLTDTAQ DTAQ(xyzlibxmlservicexyz/MYDATAQ)</cmd>\n<cmd comment='crtdtaq MYDATAQ'>CRTDTAQ DTAQ(xyzlibxmlservicexyz/MYDATAQ) MAXLEN(100) AUT(*EXCLUDE)</cmd>\n<comment>\n***************************************\n* Send Data Queue (QSNDDTAQ) API\n***************************************\n* 1 Data queue name Input Char(10)\n* 2 Library name Input Char(10)\n* 3 Length of data Input Packed(5,0)\n* 4 Data Input Char(*) Input\n</comment>\n<pgm name='QSNDDTAQ'>\n <parm io='in'>\n <data type='10A'>MYDATAQ</data>\n </parm>\n <parm io='in'>\n <data type='10A'>xyzlibxmlservicexyz</data>\n </parm>\n <parm io='in'>\n <data type='5p0'>50</data>\n </parm>\n <parm io='in'>\n <data type='100A'>System i data queues forever</data>\n </parm>\n</pgm>\n<comment>\n***************************************\n* Receive Data Queue (QRCVDTAQ) API\n***************************************\n* 1 Data queue name Input Char(10)\n* 2 Library name Input Char(10)\n* 3 Length of data Input Packed(5,0)\n* 4 Data Char(*) Output\n* 5 Wait time Input Packed(5,0)\n</comment>\n<pgm name='QRCVDTAQ'>\n <parm io='in'>\n <data type='10A'>MYDATAQ</data>\n </parm>\n <parm io='in'>\n <data type='10A'>xyzlibxmlservicexyz</data>\n </parm>\n <parm io='in'>\n <data type='5p0'>50</data>\n </parm>\n <parm io='out'>\n <data type='100A'>bad stuff here</data>\n </parm>\n <parm comment='wait' io='in'>\n <data type='5p0'>0</data>\n </parm>\n</pgm>\n<cmd comment='dltdtaq MYDATAQ'>DLTDTAQ DTAQ(xyzlibxmlservicexyz/MYDATAQ)</cmd>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5452777743339539,
"alphanum_fraction": 0.581944465637207,
"avg_line_length": 22.99333381652832,
"blob_id": "d37214ca6e21f312eb2c09ecb383cba49c2a9df9",
"content_id": "29beae2f598f393f5ff3a3189264ce57044d91e7",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3600,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 150,
"path": "/test/php/test_72350_BATCH_ibm_db2_io_wrkactjob.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout multi batch processing\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$ctlstart = $ctl;\nfor ($i=0;$i<6;$i++) {\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG5M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\necho \"=========================\\n\";\nswitch($i) {\n case 0:\n echo \"INPUT BATCH getxml1 (wrkactjob): Submitted XMLSERVICE batch request ...\";\n $ctl = $ctlstart . \" *batch *idle(300)\";\n $clobIn = getxml1();\n break;\n case 1:\n echo \"INPUT BATCH getxml2 (ls /tmp): Submitted XMLSERVICE batch request ...\";\n $ctl = $ctlstart . \" *batch *idle(300)\";\n $clobIn = getxml2();\n break;\n case 2:\n echo \"IN/OUT CALL getxml3 (ZZCALL): Doing something else waiting for batch ...\";\n $ctl = $ctlstart;\n $clobIn = getxml3();\n break;\n case 3:\n case 4:\n echo \"OUTPUT GET BATCH (wrkactjob/ls): Waiting for batch ...\";\n $ctl = $ctlstart . \" *get *wait(10)\";\n $clobIn = \"<?xml version='1.0'?>\";\n break;\n default:\n echo \"OUTPUT BATCH (empty batch): Waiting for batch ...\";\n $ctl = $ctlstart . \" *get *wait(10)\";\n $clobIn = \"<?xml version='1.0'?>\";\n break;\n}\necho \"ctl = $ctl\\n\";\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// ------------------\n// output processing\n// ------------------\nswitch($i) {\n case 0:\n var_dump($clobOut);\n break;\n case 1:\n var_dump($clobOut);\n break;\n case 2:\n if (strpos($clobOut,\"<data type='12p2' var='INDS1.DSDEC2'>\")>0) {\n echo \"Good:\\n\".substr($clobOut,-200).\"\\n\";\n }\n else {\n var_dump($clobOut);\n die(\"FAIL\");\n }\n break;\n case 3:\n case 4:\n if (strpos($clobOut,\"</sh>\")>0) {\n echo \"Good:\\n\".substr($clobOut,-200).\"\\n\";\n }\n else {\n var_dump($clobOut);\n die(\"FAIL\");\n }\n break;\n default:\n if (strlen($clobOut)>300) echo substr($clobOut,-200).\"\\n\";\n else var_dump($clobOut);\n break;\n}\n}\n// good\necho \"Success (PASE sh)\\n\";\n\nfunction getxml1() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sh rows='on'>/QOpenSys/usr/bin/system -i 'wrkactjob'</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n\nfunction getxml2() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sh>/QOpenSys/usr/bin/ls /tmp</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n\nfunction getxml3() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n\n?>\n\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5948320627212524,
"alphanum_fraction": 0.6229974031448364,
"avg_line_length": 30.713115692138672,
"blob_id": "48c316bf90e9a6dda697090554a1636ea80c5437",
"content_id": "334b4d56a3ffe6f365dbda658435073146291868",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3870,
"license_type": "permissive",
"max_line_length": 116,
"num_lines": 122,
"path": "/test/php/test_33461_cwtest_userspace.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest user space\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\n$bigDesc = array(\narray (\"DSName\"=>\"BIGDS\", \"DSParm\"=>array (\narray (\"Name\"=>\"P1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10, \"Count\"=>5),\narray (\"Name\"=>\"P2C\", \"IO\"=>I5_INOUT,\"Type\"=>I5_TYPE_LONG, \"Length\"=>4),\narray (\"Name\"=>\"P2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>1, \"CountRef\"=>\"P2C\" ),\narray (\"DSName\"=>\"PS\", \"Count\"=>2, \"DSParm\"=>array (\narray (\"Name\"=>\"PS1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS3\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10)\n)\n)))\n);\n\n$bigInputValues = array(\n\"BIGDS\"=>array(\n\"P1\"=>array(\"t1\", \"t2\", \"t3\", \"t4\", \"t5\"),\n\"P2C\"=>2,\n\"P2\"=>array(\"a\", \"b\"),\n\"PS\"=>array(\narray(\"PS1\"=>\"test1\", \"PS2\"=>\"test2\", \"PS3\"=>\"test3\"),\narray(\"PS1\"=>\"test3\", \"PS2\"=>\"test4\", \"PS3\"=>\"test5\")\n)\n));\n\n\nif ($doUserSpace) {\necho h2('User spaces');\n\n$userSpaceName = 'DEMOSPACE';\n$userSpaceLib = $demoLib;\n\n$usObj = new UserSpace($conn);\n$usObj->setUSName($userSpaceName, $userSpaceLib);\n\n// toolkit does not have an i5_userspace_delete so delete with a command.\n$ret = i5_command(\"DLTUSRSPC USRSPC($userSpaceLib/$userSpaceName)\");\nif (function_exists('i5_output')) extract(i5_output()); // i5_output() required if called in a function\n\n$status = ($ret) ? 
'successfully' : 'badly';\necho \"deleted user space: $status<BR>\";\n//$us = $usObj->CreateUserSpace('ALANUS', 'ALAN', $InitSize =1024, $Authority = '*ALL', $InitChar=' ' );\n$usProperties = array(I5_NAME=>$userSpaceName, I5_LIBNAME=>$userSpaceLib, I5_INIT_VALUE=>'Y');\necho \"About to create user space.<BR>\";\n$us = i5_userspace_create($usProperties, $conn);\nif (!$us) {\n\tdie(\"Error returned: \" . printArray(i5_error()) . \"<BR><BR>\");\n} else {\n\techo \"Success!<BR><BR>\";\n}\n\n// prepare userspace for a put\n$us = i5_userspace_prepare(\"$userSpaceLib/$userSpaceName\", $bigDesc, $conn);\nif (!$us) {\n\tdie(\"Error returned from user space prepare $userSpaceLib/$userSpaceName: \" . printArray(i5_error()) . \"<BR><BR>\");\n} else {\n\techo \"Success preparing user space.<BR><BR>\";\n}\n\n// do the userspace put\n$success = i5_userspace_put($us, $bigInputValues);\nif (!$success) {\n\tdie(\"Error returned from user space put: \" . printArray(i5_error()) . \"<BR><BR>\");\n} else {\n\techo \"Success putting data into user space.<BR><BR>\";\n}\n\n// do the userspace get\n// removed counfref because doesn't work when getting.\n$bigDesc = array(\narray (\"DSName\"=>\"BIGDS\", \"DSParm\"=>array (\narray (\"Name\"=>\"P1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10, \"Count\"=>5),\narray (\"Name\"=>\"P2C\", \"IO\"=>I5_INOUT,\"Type\"=>I5_TYPE_LONG, \"Length\"=>4),\narray (\"Name\"=>\"P2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>1, \"Count\"=>2),\narray (\"DSName\"=>\"PS\", \"Count\"=>2, \"DSParm\"=>array (\narray (\"Name\"=>\"PS1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS3\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10)\n))\n))\n);\n\n/*\n*/\n\n// prepare userspace for a get\n$us = i5_userspace_prepare(\"$userSpaceLib/$userSpaceName\", $bigDesc, $conn);\nif (!$us) {\n\tdie(\"Error returned from user 
space prepare: \" . printArray(i5_error()) . \"<BR><BR>\");\n} else {\n\techo \"Success preparing user space.<BR><BR>\";\n}\n\n\n$success = i5_userspace_get($us, array(\"BIGDS\"=>\"BIGDS\"));\nif (function_exists('i5_output')) extract(i5_output()); // i5_output() required if called in a function\n\nif (!$success) {\n\tdie(\"Error returned from user space get: \" . i5_error() . \"<BR><BR>\");\n} else {\n\techo \"Success getting data from user space. BIGDS=\" . printArray($BIGDS) . \"<BR><BR>\";\n}\n\n\n} //(user space)\n\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5982752442359924,
"alphanum_fraction": 0.6442687511444092,
"avg_line_length": 26.544553756713867,
"blob_id": "7a47b71a41aad8f32d6111351c0ee9383d1cbd00",
"content_id": "6a6f03caf44d48e40c631cd3e73ab4c2860a2f0f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2783,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 101,
"path": "/test/php/test_70002_PERF_odbc_set.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: ODBC result set PGM - performance loop call\n--SKIPIF--\n<?php require_once('skipifodbc.inc'); ?>\n--FILE--\n<?php\n$i5loop = 5000;\nrequire_once('connection.inc');\n\n// include connect performance\n// (worst connect situation)\n$start_time = microtime();\n$conn = odbc_connect($database,$user,$password);\n\n// odbc on linux one prepare fails\n$stmt = odbc_prepare($conn, \"call $procLib.iPLUGR32K(?,?,?)\");\nfor ($i=0;$i<$i5loop;$i++) {\n // get the xml simulate changed data\n // randomly happening througout php script\n $ctl .= \" *hack\";\n $clobIn = getxml();\n $clobOut = \"\";\n\n // odbc on my linux\n // $stmt = odbc_prepare($conn, \"call $procLib.iPLUGR32K(?,?,?)\");\n\n // bad behavior odbc extension ...\n // on IBM i result set warning???\n // slooows down writing to php.log\n error_reporting(~E_ALL);\n // rebind parms simulate changed data bindings\n // randomly happening througout php script\n $ret=odbc_execute($stmt,array($ipc,$ctl,$clobIn));\n error_reporting(E_ALL);\n while(odbc_fetch_row($stmt)) {\n $clobOut .= driverJunkAway(odbc_result($stmt, 1));\n }\n // remove var dump because screen output will\n // be the greatest timing factor dwarfing other data\n // echo \" IN:\\n\"; var_dump($clobIn);\n // echo \"OUT:\\n\"; var_dump($clobOut);\n if (strpos($clobOut,'4444444444.44')<1) {\n var_dump($clobOut);\n die(\"test failed loop count $i\\n\");\n }\n $ctl = \"*ignore\"; // high performance ignore flags\n}\n$end_time = microtime();\n$wire_time= control_microtime_used($start_time,$end_time)*1000000;\n\n// result times\n$look = round($wire_time/1000000,2);\necho\n sprintf(\"Time (loop=$i5loop) total=%1.2f sec (%1.2f ms per call)\\n\",\n round($wire_time/1000000,2),\n round(($wire_time/$i5loop)/1000,2));\n// less than two minutes (usually around one minute)\nif ($look<120) echo \"ok\\n\";\nelse echo \"fail - too slow\\n\";\n\nfunction control_microtime_used($i5before,$i5after) {\n return 
(substr($i5after,11)-substr($i5before,11))+(substr($i5after,0,9)-substr($i5before,0,9));\n}\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nok\n\n"
},
{
"alpha_fraction": 0.6112334728240967,
"alphanum_fraction": 0.6277533173561096,
"avg_line_length": 25.30434799194336,
"blob_id": "722863b0a0f4e6e7e926b13739cae2557f21e18d",
"content_id": "dc009e4c99d7c38f0a782c322f1a1e5ae5c713c9",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1816,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 69,
"path": "/test/php/test_50800_ibm_db2_io_jvm_sql.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SQL JVM\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -------------\n// call IBM i\n// -------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$ctl = \"*here *sqljava\";\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n\n$rows = $xmlobj->xpath('/script/sql/fetch/row');\nif (!$rows) die(\"Missing XML rows info\");\nforeach($rows as $row) {\n foreach($row->data as $data) {\n echo $data;\n }\n echo \"\\n\";\n}\n\nif (strpos($data, \"db2_classes.jar\") < 1) die(\"missing db2_classes.jar\\n\");\n\n// good\necho \"\\nSuccess\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sql>\n<free/>\n<options options='sqlmode' naming='sql'/>\n<connect conn='mydot' options='sqlmode'/>\n<query conn='mydot'>set schema xmlservice</query>\n<query>select getProperty('java.class.path') from sysibm.sysdummy1</query>\n<fetch block='all' desc='on'/>\n</sql>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6362799406051636,
"alphanum_fraction": 0.6381215453147888,
"avg_line_length": 34.032257080078125,
"blob_id": "d7d01d36c4af05acd71e3a40097e56a78b46cfa7",
"content_id": "5185ef6d6065d2143fde10e219a34952b9d3ca62",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": true,
"language": "Python",
"length_bytes": 1086,
"license_type": "permissive",
"max_line_length": 101,
"num_lines": 31,
"path": "/configure",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\nfrom argparse import ArgumentParser, Action\n\nclass UppercaseAction(Action):\n def __call__(self, parser, namespace, values, option_string=None):\n setattr(namespace, self.dest, values.upper())\n\nparser = ArgumentParser(description='Configure script')\nparser.add_argument('--library', default='XMLSERVICE', action=UppercaseAction, help='Output library')\nparser.add_argument('--debug', default='*ALL', help='Debug option (DBGVIEW)')\nparser.add_argument('--target-release', default='*CURRENT', help='Target release (TGTRLS)')\nargs = parser.parse_args()\n\n# Map command line arguments to replacement variables\n# ie. --foo -> '@FOO@': args.foo\nmappings = { '@' + k.upper() + '@': v for k,v in vars(args).items() }\n\nfiles_to_map = (\n 'src/plugconf.rpgle',\n 'src/xmlstoredp.sql',\n 'Makefile',\n)\n\nfor f in files_to_map:\n with open(f + \".in\", \"r\") as _in, open(f, \"w\") as _out:\n for line in _in:\n for k,v in mappings.items():\n line = line.replace(k, v)\n\n print(line, end='', file=_out)\n"
},
{
"alpha_fraction": 0.5152738690376282,
"alphanum_fraction": 0.5240519642829895,
"avg_line_length": 22.340164184570312,
"blob_id": "48a9f3574d64639d82a00a02cac4b2303c90da83",
"content_id": "fca78692c035f6245bf2ba71cf2f058244a808f4",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 5696,
"license_type": "permissive",
"max_line_length": 164,
"num_lines": 244,
"path": "/test/byval/make.sh",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n# -------\n#\n# > ./make.sh [testsrvx ...]\n#\n# examples:\n# full build\n# > export INIRPGLIB=XMLSERVICE\n# > export INIALLOWNONE=ON\n# > export INIJOBD=QSYS/QSRVJOB\n# > ./make.sh\n#\n# individual module compiles\n# > ./make.sh testsrvx\n#\n### RPG PTF required ###\n# TGTCCSID(37)\n# V7R1 - SI62954\n# V7R2 - SI62956 (SI52690)\n# V7R3 - SI62958\n#\n\n# ini settings\n\nif [[ -z \"$INIRPGLIB\" ]]\nthen\n INIRPGLIB='XMLSERVICE'\nfi\nINIRPGLIB=$(echo $INIRPGLIB | tr [a-z] [A-Z])\necho \"INIRPGLIB=$INIRPGLIB (export INIRPGLIB=XMLSERVICE)\"\n\nif [[ -z \"$INIALLOWNONE\" ]]\nthen\n INIALLOWNONE='ON'\nfi\nINIALLOWNONE=$(echo $INIALLOWNONE | tr [a-z] [A-Z])\necho \"INIALLOWNONE=$INIALLOWNONE (export INIALLOWNONE=ON)\"\n\nif [[ -z \"$INIJOBD\" ]]\nthen\n INIJOBD='QSYS/QSRVJOB'\nfi\nINIJOBD=$(echo $INIJOBD | tr [a-z] [A-Z])\necho \"INIJOBD=$INIJOBD (export INIJOBD=QSYS/QSRVJOB)\"\n\n# -------\nRPGLIB=$INIRPGLIB\nRPGFILES=''\nRPGFILES=''\nRPGMODS=''\nRPGASYNC=''\nRPGDONE=''\nRPGFAIL=''\nMYDIR=$(pwd)\n\n# -------\n# user input\n# -------\nif [[ -z \"$RPGASYNC\" ]]\nthen\n if [ \"$#\" -gt 0 ]\n then\n RPGFILES=\"$1 $2 $3 $4 $5 $6 $7 $8 $9\"\n else\n RPGASYNC='ALL'\n fi\nfi\n\nif [[ $RPGASYNC == \"ALL\" ]]\nthen\n RPGFILELIST=$(ls *.rpgle | grep -v _h)\n for j in $RPGFILELIST; do\n i=$(basename \"$j\" .rpgle)\n RPGFILES=\"$RPGFILES $i\"\n done\nfi\n\n# -------\n# build test include\n# -------\necho \" /if defined(TEST_H)\" > test_h.rpgle\necho \" /eof\" >> test_h.rpgle\necho \" /endif\" >> test_h.rpgle\necho \" /define TEST_H\" >> test_h.rpgle\necho \" dcl-c TEST_LIB const('$RPGLIB');\" >> test_h.rpgle\necho \" dcl-c TEST_IPC const('/tmp/$RPGLIB');\" >> test_h.rpgle\necho \" \" >> test_h.rpgle\n\n# -------\n# build core modules sync (one at time)\n# -------\necho \"building $RPGFILES ...\"\nif [[ -z \"$RPGASYNC\" ]]\nthen\n for i in $RPGFILES; do\n echo '===================================='\n echo \"==> $RPGLIB/$i ...\"\n system \"DLTMOD 
MODULE($RPGLIB/$i)\"\n if [[ $i == *\"sql\"* ]]\n then\n cmd=\"CRTSQLRPGI OBJ($RPGLIB/$i) SRCSTMF('$i.rpgle') OBJTYPE(*MODULE) OUTPUT(*PRINT) DBGVIEW(*SOURCE) REPLACE(*YES) RPGPPOPT(*LVL2) COMPILEOPT('TGTCCSID(37)')\"\n else\n cmd=\"CRTRPGMOD MODULE($RPGLIB/$i) SRCSTMF('$i.rpgle') DBGVIEW(*SOURCE) OUTPUT(*PRINT) REPLACE(*YES) TGTCCSID(37)\"\n fi\n echo \"$cmd\"\n system \"$cmd\" > /dev/null\n if [[ -e \"/qsys.lib/$RPGLIB.lib/$i.module\" ]]\n then\n echo \"==> $RPGLIB/$i.module -- ok\"\n else\n system \"$cmd\"\n echo \"==> $RPGLIB/$i.module -- failed\"\n exit\n fi\n echo '===================================='\n done\n# -------\n# build core modules async (all at once)\n# -------\nelse\n echo \"building async $RPGASYNC... \"\n rm ./ok_*\n rm ./fail_*\n for i in $RPGFILES; do\n if [[ -e \"/qsys.lib/$RPGLIB.lib/$i.module\" ]]\n then\n system \"DLTMOD MODULE($RPGLIB/$i)\"\n fi\n done\n for i in $RPGFILES; do\n echo \"building script $i.sh ... \"\n if [[ $i == *\"sql\"* ]]\n then\n cmd=\"CRTSQLRPGI OBJ($RPGLIB/$i) SRCSTMF('$i.rpgle') OBJTYPE(*MODULE) OUTPUT(*PRINT) DBGVIEW(*SOURCE) REPLACE(*YES) RPGPPOPT(*LVL2) COMPILEOPT('TGTCCSID(37)')\"\n else\n cmd=\"CRTRPGMOD MODULE($RPGLIB/$i) SRCSTMF('$i.rpgle') DBGVIEW(*SOURCE) OUTPUT(*PRINT) REPLACE(*YES) TGTCCSID(37)\"\n fi\n echo \"#!/bin/sh\" > \"$i.sh\"\n echo \"system \\\"$cmd\\\" > /dev/null\" >> \"$i.sh\"\n echo \"if [[ -e '/qsys.lib/$RPGLIB.lib/$i.module' ]]\" >> \"$i.sh\"\n echo \"then\" >> \"$i.sh\"\n echo \" echo '==> $RPGLIB/$i.module -- ok'\" >> \"$i.sh\"\n echo \" echo '==> $RPGLIB/$i.module -- ok' > ok_$i\" >> \"$i.sh\"\n echo \"else\" >> \"$i.sh\"\n echo \" echo '==> $RPGLIB/$i.module -- failed'\" >> \"$i.sh\"\n echo \" system \\\"$cmd\\\" > fail_$i\" >> \"$i.sh\"\n echo \"fi\" >> \"$i.sh\"\n echo \"background run $i.sh (async) ... \"\n ./$i.sh &\n done\n while [[ -z \"$RPGDONE\" ]]; do\n if [[ -z \"$RPGFAIL\" ]]\n then\n echo \"waiting for build ... 
\"\n sleep 5\n else\n break\n fi\n for i in $RPGFILES; do\n if [[ -e \"./fail_$i\" ]]\n then\n echo \"./fail_$i\"\n RPGFAIL=\"./fail_$i\"\n break\n else\n if [[ -e \"./ok_$i\" ]]\n then\n RPGDONE='yes'\n else\n RPGDONE=''\n break\n fi\n fi\n done\n done\nfi\n\n# -------\n# check up on fails\n# -------\nif [[ -z \"$RPGASYNC\" ]]\nthen\n echo \"module build complete\"\nelse\n rm ./ok_*\n if [[ -z \"$RPGFAIL\" ]]\n then\n echo \"module build complete\"\n else\n echo \"error $RPGFAIL\"\n cat \"$RPGFAIL\"\n exit\n fi\nfi\n\n# -------\n# rpg files\n# -------\nRPGFILELIST=$(ls *.rpgle | grep -v _h)\n\n# -------\n# build pgm(s)\n# -------\nfor i in $RPGFILES; do\n if [[ $i == *\"testzsrv\"* ]]\n then\n echo '===================================='\n echo \"==> $RPGLIB/$i ...\"\n RPGMODS=\"$RPGLIB/$i\"\n system \"DLTSRVPGM SRVPGM($RPGLIB/$i)\"\n cmd=\"CRTSRVPGM SRVPGM($RPGLIB/$i) MODULE($RPGMODS) EXPORT(*ALL) ACTGRP(*CALLER)\"\n echo \"$cmd\"\n system \"$cmd\"\n elif [[ $i == *\"test\"* ]]\n then\n echo '===================================='\n echo \"==> $RPGLIB/$i ...\"\n RPGBND=\"$RPGLIB/xmlstoredp\"\n system \"DLTPGM PGM($RPGLIB/$i)\"\n cmd=\"CRTPGM PGM($RPGLIB/$i) MODULE($RPGLIB/$i) BNDSRVPGM($RPGBND)\"\n echo \"$cmd\"\n system \"$cmd\"\n if [[ -e \"/qsys.lib/$RPGLIB.lib/$i.PGM\" ]]\n then\n echo \"==> $RPGLIB/$i -- ok\"\n else\n echo \"==> $RPGLIB/$i -- failed\"\n exit\n fi\n fi\ndone\n\n\n# -------\n# authorization\n# -------\necho '===================================='\necho \"==> $RPGLIB\"\ncmd=\"CHGAUT OBJ('/qsys.lib/$RPGLIB.lib') USER(QTMHHTTP) DTAAUT(*RWX) OBJAUT(*ALL) SUBTREE(*ALL)\"\necho \"$cmd\"\nsystem \"$cmd\"\ncmd=\"CHGAUT OBJ('/qsys.lib/$RPGLIB.lib') USER(QTMHHTP1) DTAAUT(*RWX) OBJAUT(*ALL) SUBTREE(*ALL)\"\necho \"$cmd\"\nsystem \"$cmd\"\n\n"
},
{
"alpha_fraction": 0.6012163162231445,
"alphanum_fraction": 0.6238054037094116,
"avg_line_length": 24.55555534362793,
"blob_id": "e639f159d61b794bd152dc3e6ac1ac9a005ee230",
"content_id": "168a0a87151ae7294adeaed88121e6d2cb704618",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1151,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 45,
"path": "/test/php/test_90999_db2_io_sqlfree.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SQL - free all\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// good\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sql>\n<free conn='all' options='all'/>\n</sql>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6099647879600525,
"alphanum_fraction": 0.6210367679595947,
"avg_line_length": 23.518518447875977,
"blob_id": "e2a0b28c107b497214c72b2a25788d720972b366",
"content_id": "3007d30bf477bef0106f9bff599c414c19eec6f0",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1987,
"license_type": "permissive",
"max_line_length": 103,
"num_lines": 81,
"path": "/test/php/test_33468_cwtest_dtaara.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest dtaara\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\necho h2('Data areas');\n$dtaara = \"$demoLib/ALLEYOOP\";\n$ret = i5_data_area_create($dtaara, 72);\nif ($ret) {\n\techo \"Created data area $dtaara successfully.<BR>\";\n} else {\n\techo \"Could not create data area $dtaara.<BR>\";\n die();\n}\n\n$ret = i5_data_area_delete($dtaara);\nif ($ret) {\n\techo \"Deleted data area $dtaara successfully.<BR>\";\n} else {\n\techo \"Could not delete data area $dtaara.<BR>\";\n die();\n}\n\n$dtaara = 'BETTYBOOP';\n$ret = i5_data_area_create($dtaara, 100);\nif ($ret) {\n\techo \"Created data area $dtaara successfully.<BR>\";\n} else {\n\techo \"Could not create data area $dtaara. Reason: \" . i5_errormsg() . \" (it may already exist)<BR>\";\n}\n\n$dtaara = 'BETTYBOOP';\n$stringToWrite = 'Very nice';\n$ret = i5_data_area_write($dtaara, $stringToWrite, 5, 20);\nif ($ret) {\n\techo \"Wrote '$stringToWrite' to data area $dtaara successfully.<BR>\";\n\n\t// try to read now.\n\t$start = microtime(true);\n\t$readData = i5_data_area_read($dtaara, 3, 40);\n\t$end = microtime(true);\n\t$elapsed = $end - $start;\n\n if ($readData) {\n \techo \"Read a portion of '$readData' from data area $dtaara successfully in $elapsed seconds.<BR>\";\n } else {\n \techo \"Could not read from data area $dtaara. Reason: \" . i5_errormsg() . \"<BR>\";\n die();\n }\n\n\t// try to read now.\n\t$start = microtime(true);\n\t$readData = i5_data_area_read($dtaara); // the whole thing\n\t$end = microtime(true);\n\t$elapsed = $end - $start;\n\n if ($readData) {\n \techo \"Read ALL of '$readData' from data area $dtaara successfully in $elapsed seconds.<BR>\";\n } else {\n \techo \"Could not read from data area $dtaara. Reason: \" . i5_errormsg() . \"<BR>\";\n die();\n }\n\n\n\n} else {\n\techo \"Could not write to data area $dtaara. Reason: \" . i5_errormsg() . 
\"<BR>\";\n die();\n}\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6794871687889099,
"alphanum_fraction": 0.6972386837005615,
"avg_line_length": 35.17856979370117,
"blob_id": "f6bb721b9ac3176de3d99d4dcf5c2fea43d67efc",
"content_id": "246b81a3ddbf60d703d3c074350ec2d79aef417a",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1014,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 28,
"path": "/test/php/test_20401_cmd_toolkit.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit CMDs\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { die($e->getMessage()); }\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// execute command\n$cmd = \"addlible $testLib\";\n$Result = $ToolkitServiceObj->CLCommand($cmd);\n$Rows = $ToolkitServiceObj->CLInteractiveCommand(\"DSPLIBL\");\nif(!$Rows) die($ToolkitServiceObj->getLastError());\nvar_dump($Rows);\nif (strpos(implode(\",\",$Rows),$testLib)<1) die(\"missing $testLib\");\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5474940538406372,
"alphanum_fraction": 0.562291145324707,
"avg_line_length": 40.880001068115234,
"blob_id": "7a0ead3ce0d6b363a1348bfc9e6bf015507d3f82",
"content_id": "a9b675400a7a318af9567fcf8c5d23af87da054f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2095,
"license_type": "permissive",
"max_line_length": 135,
"num_lines": 50,
"path": "/test/php/test_20426_ZZVARY_toolkit_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit vary char\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { die($e->getMessage()); }\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzvary: check return varying\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzvary B export\n// D zzvary PI 20A varying\n// D myName 10A varying\n// * vars\n// D tmp S 20A varying\n// /free\n// tmp = 'my name is ';\n// tmp = tmp + myName;\n// return tmp;\n// /end-free\n// P E\n$param[] = $ToolkitServiceObj->AddParameterChar('both', 10, 'ZZVARY', 'myVary', 'Ranger', 'on'); // 6th parameter--'on'--is for varying\n$retrn[] = $ToolkitServiceObj->AddParameterChar('both', 20, 'ZZVARY', 'retVary', 'Mud', 'on'); // 6th parameter--'on'--is for varying\n$result = $ToolkitServiceObj->PgmCall('ZZSRV', $testLib, $param, $retrn, array('func'=>'ZZVARY'));\n// var_dump($result);\n/* in/out param myDate */\n$myVary = \"XMLSERVICE i/o param myVary: \".$result[\"io_param\"][\"myVary\"];\necho \"$myVary\\n\";\n$expect = 'Ranger';\nif (strpos($myVary,$expect)<1) die(\"Fail missing $expect\\n\");\n/* return value retVary */\n$retVary = \"XMLSERVICE return retVary: \".$result[\"retvals\"][\"retVary\"];\necho \"$retVary\\n\";\n$expect = 'my name is Ranger';\nif (strpos($retVary,$expect)<1) die(\"Fail missing $expect\\n\");\n/* all good */\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6698465943336487,
"alphanum_fraction": 0.6791993975639343,
"avg_line_length": 29.724138259887695,
"blob_id": "3bbad11d54ab840f38dee5d666f5fdaf0933d21f",
"content_id": "50bc0b2eac42c833582a5599326c178861da80cf",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 5346,
"license_type": "permissive",
"max_line_length": 112,
"num_lines": 174,
"path": "/test/php/xxcw_test_setup.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nrequire_once('CW/cw.php'); // don't need if added auto_append in PHP.INI\n\n\n// some items to turn on and off in the test\n$doPcml = true;\n$doUserSpace = true;\n$doDataQueue = true;\n$doPgmCallComplex = true;\n$doPgmCallSimple = true;\n$doObjectList = true;\n$doJobLists = true;\n$doJobLogs = true;\n$doSpooledFiles = true;\n$doAdoptAuthority = true;\n\n// Use configurable demo lib/name from toolkit ini\n$demoLib = trim(getConfigValue('demo', 'demo_library'));\nif (!$demoLib) {\n die('Demo library not set in toolkit.ini.');\n}\n\n// Use configurable encoding from toolkit ini\n// We use encoding in meta tag so characters appear correctly in browser\n$encoding = trim(getConfigValue('system', 'encoding'));\nif (!$encoding) {\n die('Encoding not set in toolkit.ini. Example: ISO-8859-1');\n}\n\n// optional demo values\n$setLibList = trim(getConfigValue('demo', 'initlibl', ''));\n$setCcsid = trim(getConfigValue('demo', 'ccsid', ''));\n$setJobName = trim(getConfigValue('demo', 'jobname', ''));\n$setIdleTimeout = trim(getConfigValue('demo', 'idle_timeout', ''));\n\n// optional demo connection values\nif (!isset($private)) {\n$private = false; // default\n$privateNum = false;\n$persistent = false;\nif ($persistent) {\n\t// private can only happen with persistence\n $private = trim(getConfigValue('demo', 'private', false));\n $privateNum = trim(getConfigValue('demo', 'private_num', '0'));\n} //(persistent)\n}\n\n$scriptTitle = 'Test script for IBM i Compatibility Wrapper (CW)';\n\nfunction OkBad($success = false) {\n\tif ($success) {\n\t\treturn 'Successful';\n\t} else {\n\t\treturn 'Failed with this error: ' . print_r(i5_error(), true);\n\t}\n} //(OkBad)\nfunction printArray($array)\n{\n\treturn '<PRE>' . print_r($array, true) . 
'</PRE>';\n}\n\nfunction h1($headString) {\n\treturn \"<h1>$headString</h1>\";\n}\n\nfunction h2($headString) {\n\treturn \"<h2>$headString</h2>\";\n}\n\necho h1($scriptTitle);\n\necho h2('Version check');\n\n// display CW version or warning message.\n$downloadSite = 'http://www.youngiprofessionals.com/wiki/XMLSERVICE';\n$downloadLink = '<a href=\"' . $downloadSite . '\" target=\"_blank\">' . $downloadSite . '</a>';\nif (function_exists('i5_version')) {\n\techo \"You are running CW version <b>\" . i5_version() . \"</b>.\\n Any updates will be found at $downloadLink.\\n\";\n} else {\n\techo \"This version of CW is out of date.\\nPlease download the latest CW from $downloadLink.\\n\\n\";\n} //(if i5_version function exists)\n\necho h2('Connection');\n\n// choose connection function based on persistence choice\n$connFunction = ($persistent) ? 'i5_pconnect' : 'i5_connect';\n\necho \"About to connect with $connFunction($cwdb, $user, xxxxx)\";\n\n// options (liblist, ccsid, jobname) can be changed by the user in toolkit.ini.\n$options = array();\nif ($setLibList) {\n\t$options[I5_OPTIONS_INITLIBL] = $setLibList;\n\techo \"I5_OPTIONS_INITLIBL = '$setLibList'\\n\";\n}\nif ($setCcsid) {\n\t$options[I5_OPTIONS_RMTCCSID] = $setCcsid;\n\techo \"I5_OPTIONS_RMTCCSID = '$setCcsid'\\n\";\n}\nif ($setJobName) {\n\t$options[I5_OPTIONS_JOBNAME] = $setJobName;\n\techo \"I5_OPTIONS_JOBNAME = '$setJobName'\\n\";\n}\nif ($setIdleTimeout) {\n\t$options[I5_OPTIONS_IDLE_TIMEOUT] = $setIdleTimeout;\n\techo \"I5_OPTIONS_IDLE_TIMEOUT = '$setIdleTimeout'\\n\";\n}\n\nif ($persistent && $private) {\n\t$options[I5_OPTIONS_PRIVATE_CONNECTION] = $privateNum;\n\techo \"I5_OPTIONS_PRIVATE_CONNECTION = '$privateNum'\\n\";\n} // (private and privateNum)\n\necho '\\n';\necho \"setup CW private = $private\\n\";\necho \"setup CW privateNum = $privateNum\\n\";\necho \"setup CW persistent = $persistent\\n\";\n\n/*\n * // Optionally re-use an existing database connection for your transport\n * // If you 
specify a naming mode (i5/sql) in your connection, make sure they match.\n * $namingMode = DB2_I5_NAMING_ON;\n * $existingDb = db2_pconnect('', '','', array('i5_naming' => $namingMode));\n * // Add to existing connection options\n * $options[CW_EXISTING_TRANSPORT_CONN] = $existingDb;\n * $options[CW_EXISTING_TRANSPORT_I5_NAMING] = $namingMode;\n*/\n\n$start = microtime(true);\n\n// about to connect. Can use i5_connect or i5_pconnect.\n$conn = $connFunction($cwdb, $user, $password, $options);\n$end = microtime(true);\n$elapsed = $end - $start;\necho \"Ran $connFunction function, with options, in $elapsed seconds.\\n\";\n\n// if unable to connect, find out why.\nif (!$conn) {\n die('\\nCould not connect. Reason: ' . printArray(i5_error()));\n}\n\necho \"Connection object output: '$conn'\\n\\n\";\n\nif ($private) {\n\t// if a private connection, show what number was used or generated.\n $privateConnNum = i5_get_property(I5_PRIVATE_CONNECTION, $conn);\n echo \"Private conn number from i5_get_property(I5_PRIVATE_CONNECTION, \\$conn): $privateConnNum\\n\\n\";\n\n $isNew = i5_get_property(I5_NEW_CONNECTION, $conn);\n echo \"Is new connection?: $isNew\\n\\n\";\n} //($private)\n\n\n// CONNECTED.\n\n// check that demo library exists\necho \"About to verify that the demo library, '$demoLib', exists.\\n\";\n$list = i5_objects_list('QSYS', $demoLib, '*LIB', $conn);\nif (!$list) {\n\techo 'Error getting object list: ' . printArray(i5_error()) . '\\n\\n';\n} else {\n if ($listItem = i5_objects_list_read($list)) {\n\t echo \"Demo library '$demoLib' exists.\\n\\n\";\n\t} else {\n\t die (\"\\nDemo library '$demoLib' NOT found. Ending.\");\n\t} //(if object was read)\n} //(if !$list)\n\ni5_objects_list_close($list);\n\n// ON TO ACTUAL FUNCTIONALITY\n?>\n"
},
{
"alpha_fraction": 0.6593948006629944,
"alphanum_fraction": 0.6751114130020142,
"avg_line_length": 48.55813980102539,
"blob_id": "1f4d404fe541acc36ec5714357fcec536908176c",
"content_id": "67e05bd2024c310fe4c46b8e6cb1f971424d8302",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 4263,
"license_type": "permissive",
"max_line_length": 481,
"num_lines": 86,
"path": "/docs/ipc.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE IPC\n==============\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\n\nToolkit internalKey, IPC, toolkit persistent connection ...\n-----------------------------------------------------------\n\nWith exception of old CW layer, DB2 connection persistent/full makes NO difference, internalKey (IPC) applies ONLY to XMLSERVICE layer to route back to same xmlservice job every time. This routing technique is referred to as a \"state full\" XMLSERVICE connection.\n\nThe naming of the internalKey (IPC) varies across layers PHP Toolkit, but it means the same thing.\n\n* New PHP Toolkit - refers to internalKey ('InternalKey'=>\"/tmp/packers\")\n \n + Note: works with both persistent and non-persistent connections (db2_pconnect or db2_connect, odbc_pconnect or odbc_connect)\n\n* XMLSERVICE RAW - refers to IPC ($ipc=\"/tmp/packers\")\n \n + Note: works with both persistent and non-persistent connections (db2_pconnect or db2_connect, odbc_pconnect or odbc_connect)\n\n* New PHP Toolkit CW Layer - refers to \"private connection\" and provides APIs to get/set private \"key/nbr\"\n \n + Note: CW Layer requires \"persistent connection\" to achieve \"private connection\", this old technology is just for compatibly with old toolkit concepts\n\nHow IPC/internalKey works?\n--------------------------\n\nThe internalKey (IPC) provided by the user (or PHP wrapper) is simply a unique place in the IFS file system. 
That is to say that one and only one /tmp/packers lives on the machine (LPAR IBM i instance), therefore it is very handy to hash this location into a key (google ftok), that can be used to create unique purpose semaphores (locks) and shared memory (shared data) on the IBM i.\n\nIFS /tmp/path only provides a unique key (hash key), therefore all manner of IPC-2-xmlservice workload/user balancing could be imagined, in fact you could restrict your site to only the IFS paths pre-created in some lower level directory to avoid any unwanted user ability to start an xmlservice job assuming toolkit wrapper plays along beyond using just /tmp.\n\nWhen an active xmlservice session is running on /tmp/packers you can see the semaphores and shared memory using the utility ipcs. Please note authorizations ipcs displays are exactly like any other IFS file, including owner access (RW, read/write) and \\*PUBLIC access (--, none), etc. This is how xmlservice controls authorization front door to a state full XMLSERVICE job allowing only correct/matching profiles to call any specific xmlservice service job (one request at a time).\n\n::\n\n call qp2term\n > ipcs\n SHARED MEMORY:\n T ID KEY MODE OWNER GROUP\n M 2306 0X010404F7 T-RW------- DB2 *NONE <---- /tmp/packers (shared data)\n SEMAPHORES:\n T ID KEY MODE OWNER GROUP\n S 377 0X010404F7 --RW------- DB2 *NONE <---- /tmp/packers (lock one use at time)\n\n\nPHP program decode hex IPC key\n------------------------------\n\nUnfortunately ipcs \"KEY\" column is displayed in hex, so if you want to see what goes with /tmp/packers \nyou will need to run a little program using ``$ctl=\"*session\"``.\n\n::\n\n zzftok2.php:\n <?php\n require_once('connection.inc');\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad 
prepare: \".db2_stmt_errormsg());\n $ctl = \"*session\";\n $ipc = \"/tmp/packers\";\n $clobIn = \"<?xml version='1.0'?>\";\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n var_dump($clobOut);\n ?>\n\n > http://myibmi/zzftok2.php\n string(70) \"<?xml version='1.0'?>\n <session key='010404F7'>/tmp/packers</session>\n\n\n\n.. \n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEIPC?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n\n"
},
{
"alpha_fraction": 0.6133796572685242,
"alphanum_fraction": 0.6333622932434082,
"avg_line_length": 24.55555534362793,
"blob_id": "d98cead8e9f323bafd4f5eb0e49903febf0a8262",
"content_id": "4a13ef91081a529f1b2ca6fb93f9000a11deadb0",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1151,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 45,
"path": "/test/php/test_99988_MISC_db2_set_immed_kill.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: CTL - kill server\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nibm_db2_IgnoreOff(); // remove ibm_db2.i5_ignore_userid=1\n\n$ipc2 = \"/tmp/ipc_cw_\".$user.\"_42\"; // cw tests\n$ipc3 = \"/tmp/Toolkit\"; // cw or toolkit tests\n\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) {\n echo(\"Bad connect: $database,$user\");\n continue;\n}\n\n$allipc = array($ipc,$ipc2,$ipcover,$ipc3);\n\nforeach ($allipc as $ipc) {\n\n$ctl = \"*immed\"; // kill XMLSERVICE NOW\n$clobIn = \"\";\n$sql = \"call $procLib.iPLUGR4K('$ipc','$ctl','$clobIn')\";\n$stmt=db2_exec($conn,$sql);\nif (!$stmt) echo(\"Bad execute ($database,$user): \".db2_stmt_errormsg());\n$ret=db2_free_stmt($stmt);\nif (!$ret) echo(\"Bad free stmt ($database,$user): \".db2_stmt_errormsg());\n\n}\n\nif ($i5persistentconnect) $ret=db2_pclose($conn);\nelse $ret=db2_close($conn);\nif (!$ret) echo(\"Bad close ($database,$user): \".db2_stmt_errormsg());\necho \"i am ...\\n\";\necho \"dead\\n\";\n\n?>\n--EXPECTF--\n%s\ndead\n\n"
},
{
"alpha_fraction": 0.6703296899795532,
"alphanum_fraction": 0.6891679763793945,
"avg_line_length": 32.47368240356445,
"blob_id": "af9b0e8b11bc0abe30ecade67d886812b634266d",
"content_id": "c9d4dc3e2ab3cb8b775f8faea6a3e84b9f820755",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1274,
"license_type": "permissive",
"max_line_length": 174,
"num_lines": 38,
"path": "/test/php/test_50000_ibm_db2_io_jvm_init.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout create JVM test\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\nrequire_once(\"ToolkitService.php\");\n\n// IBM i\n$conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n// force schema\n$r = db2_exec($conn, \"set schema xmlservice\");\n\n// java stored procedure 1\n$r = db2_exec($conn, \"create procedure gc() language java parameter style java external name 'java.lang.System.gc'\");\n$stmt = db2_exec($conn, \"call gc()\");\nif (!$stmt) die(\"Failed: call gc\");\n\n// java UDF\n$r = db2_exec($conn, \"create function getProperty(prop varchar(1024)) returns varchar(1024) language java parameter style java external name 'java.lang.System.getProperty'\");\n$stmt = db2_exec($conn, \"select getProperty('java.class.path') from sysibm.sysdummy1\");\nif (!$stmt) die(\"Failed: select getProperty\");\nwhile ($row = db2_fetch_array($stmt)) { var_dump($row); echo \"\\n\"; }\n\n// java stored procedure 2\n$r = db2_exec($conn, \"create procedure sleeper(millis BIGINT) language java parameter style java external name 'java.lang.Thread.sleep'\");\n$stmt = db2_exec($conn, \"call sleeper(2)\");\nif (!$stmt) die(\"Failed: call sleeper(2)\");\n\n\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n"
},
{
"alpha_fraction": 0.46264514327049255,
"alphanum_fraction": 0.5136294960975647,
"avg_line_length": 33.443477630615234,
"blob_id": "20caa9990b9162cabc7c7064acf98ba22db9c2c1",
"content_id": "b5f71ac2cb5e29390d0e460c51ae02d1179fcc5a",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3962,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 115,
"path": "/test/php/test_98120_ZZCALL_ibm_db2_io_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - call pgm complex data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Missing XML pgm parms ($lib/$name)\");\nif ((string)$parm[0]->data != 'C') die(\"1) INCHARA not 'C'\");\nif ((string)$parm[1]->data != 'D') die(\"2) INCHARB not 'D'\");\nif ((string)$parm[2]->data != '321.1234') die(\"3) INDEC1 not '321.1234'\");\nif ((string)$parm[3]->data != '1234567890.12') die(\"4) INDEC2 not '1234567890.12'\");\n// parm 5 data structure\n$ds = $parm[4]->ds;\nif ((string)$ds->data[0] != 'E') die(\"5) INDS1.DSCHARA not 'E'\");\nif ((string)$ds->data[1] != 'F') die(\"5) INDS1.DSCHARB not 'F'\");\nif ((string)$ds->data[2] != '333.3330') die(\"5) INDS1.DSDEC1 not '333.3330'\");\nif ((string)$ds->data[3] != '4444444444.44') die(\"5) INDS1.DSDEC2 not '4444444444.44'\");\n// pgm return\n$retn = $pgm->xpath('return');\nif (!$retn) die(\"No XML pgm return ($lib/$name)\");\nif ((string)$retn[0]->data != '0') die(\"return not '0'\");\n\n// good\necho \"Success ($lib/$name)\\n\";\n\n// D INCHARA S 1a\n// D INCHARB S 1a\n// D INDEC1 S 7p 4\n// D INDEC2 S 12p 2\n// D INDS1 DS\n// D DSCHARA 1a\n// D DSCHARB 1a\n// D DSDEC1 7p 4\n// D DSDEC2 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// C PARM INCHARA\n// C PARM INCHARB\n// C PARM INDEC1\n// C PARM INDEC2\n// C PARM INDS1\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm 
io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5611631274223328,
"alphanum_fraction": 0.6046122908592224,
"avg_line_length": 26.190908432006836,
"blob_id": "353935a67bec1e1ce2379ed69a1b512e0991cf66",
"content_id": "f830a207d59899e21f1e572a1a8be2661f639369",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2992,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 110,
"path": "/test/php/test_70003_PERF_pdo_ibm_io.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: PDO_IBM inout PGM - performance loop call\n--SKIPIF--\n<?php require_once('skipifpdo.inc'); ?>\n--FILE--\n<?php\n$i5loop = 5000;\nrequire_once('connection.inc');\n\n// include connect performance\n// (worst connect situation)\n$start_time = microtime();\ntry {\n $db = new PDO(\"ibm:\".$database,\n strtoupper($user),\n strtoupper($password),\n array(PDO::ATTR_AUTOCOMMIT=>true));\n} catch( Exception $e ) {\n die(\"bad connect\");\n}\nfor ($i=0;$i<$i5loop;$i++) {\n // get the xml simulate changed data\n // randomly happening througout php script\n $clobIn = getxml();\n $clobOut = \"\";\n try {\n // some bug pdo_ibm\n // redo the prepare\n $stmt = $db->prepare(\"call $procLib.iPLUG4K(?,?,?,?)\");\n } catch( Exception $e ) {\n $err = $db->errorInfo();\n $cod = $db->errorCode();\n die($cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n }\n try {\n // rebind parms simulate changed data bindings\n // randomly happening througout php script\n $r1=$stmt->bindParam(1,$ipc, PDO::PARAM_STR);\n $r2=$stmt->bindParam(2,$ctl, PDO::PARAM_STR);\n $r3=$stmt->bindParam(3,$clobIn, PDO::PARAM_STR);\n $r4=$stmt->bindParam(4,$clobOut, PDO::PARAM_STR|PDO::PARAM_INPUT_OUTPUT);\n $ret = $stmt->execute();\n $clobOut = driverJunkAway($clobOut);\n } catch( Exception $e ) {\n $err = $stmt->errorInfo();\n $cod = $stmt->errorCode();\n die($cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n }\n // remove var dump because screen output will\n // be the greatest timing factor dwarfing other data\n // echo \" IN:\\n\"; var_dump($clobIn);\n // echo \"OUT:\\n\"; var_dump($clobOut);\n if (strpos($clobOut,'4444444444.44')<1) {\n var_dump($clobOut);\n die(\"test failed loop count $i\\n\");\n }\n $ctl = \"*ignore\"; // high performance ignore flags\n}\n$end_time = microtime();\n$wire_time= control_microtime_used($start_time,$end_time)*1000000;\n\n// result times\n$look = round($wire_time/1000000,2);\necho\n sprintf(\"Time (loop=$i5loop) total=%1.2f sec (%1.2f ms per 
call)\\n\",\n round($wire_time/1000000,2),\n round(($wire_time/$i5loop)/1000,2));\n// less than two minutes (usually around one minute)\nif ($look<120) echo \"ok\\n\";\nelse echo \"fail - too slow\\n\";\n\nfunction control_microtime_used($i5before,$i5after) {\n return (substr($i5after,11)-substr($i5before,11))+(substr($i5after,0,9)-substr($i5before,0,9));\n}\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A'>x</data>\n <data type='1A'>y</data>\n <data type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nok\n\n"
},
{
"alpha_fraction": 0.6288022994995117,
"alphanum_fraction": 0.6330798268318176,
"avg_line_length": 32.38888931274414,
"blob_id": "61282e68d9a309cbe28a90899cbe7b6ef3073024",
"content_id": "9605a706c686b8478bd38aba9ce3ba3a2de10762",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 4208,
"license_type": "permissive",
"max_line_length": 220,
"num_lines": 126,
"path": "/Makefile.in",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "LIBRARY=@LIBRARY@\nDBGVIEW=@DEBUG@\nTGTRLS=@TARGET_RELEASE@\n\nCOMMON=\\\n\tplugconf.module \\\n\tplugbug.module \\\n\tplugipc.module \\\n\tplugrun.module \\\n\tplugperf.module \\\n\tplugcach.module \\\n\tplugerr.module \\\n\tplugsql.module \\\n\tplugdb2.module \\\n\tplugpase.module \\\n\tpluglic.module \\\n\tplugsig.module \\\n\tplugconv.module \\\n\tplugxml.module \\\n\tplugile.module\n\nTEST_PGMS=\\\n\tzzbigboy.pgm \\\n\tzzcall.pgm \\\n\tzzcust.pgm \\\n\tzzdeep.pgm \\\n\tzzerich.pgm \\\n\tzzjava.pgm \\\n\tzzjava2.pgm \\\n\tzzmore.pgm \\\n\tzznone.pgm \\\n\tzzshlomo.pgm \\\n\tzzsimp.pgm \\\n\tzzvlad.pgm \\\n\tzzvlad2.pgm \\\n\tzzvlad3.pgm\n\nTEST_SRVPGMS=\\\n\tzzsrv.srvpgm \\\n\tzzsrv6.srvpgm\n\nXMLSTOREDP_MODULES=xmlstoredp.module $(COMMON)\n\n# Ensure that intermediate files created by rules chains don't get\n# automatically deleted\n.PRECIOUS: %.module %.srcpf %.lib\n\nall: build\n\nprod: build-main build-procedure build-cgi prod-clean\n\nbuild: build-main build-procedure build-cgi build-test\n\nbuild-main: $(LIBRARY).lib xmlmain.pgm xmlservice.pgm xmlver.pgm\n\nbuild-procedure: $(LIBRARY).lib xmlstoredp.srvpgm xmlstoredp.sqlinst\n\nbuild-cgi: $(LIBRARY).lib xmlcgi.pgm\n\nbuild-test: $(TEST_PGMS) $(TEST_SRVPGMS)\n\nprod-clean:\n\trm -f *.module *.srcpf\n\trm -rf /qsys.lib/$(LIBRARY).lib/*.FILE\n\trm -rf /qsys.lib/$(LIBRARY).lib/*.MODULE\n\nclean:\n\trm -f *.lib *.pgm *.srvpgm *.module *.sqlinst *.srcpf\n\tsystem -q 'DLTLIB $(LIBRARY)' || :\n\nxmlmain.pgm: xmlmain.module $(COMMON)\n\tsystem -q \"CRTPGM PGM($(LIBRARY)/$(@:%.pgm=%)) MODULE($(^:%.module=$(LIBRARY)/%)) TGTRLS($(TGTRLS))\" && touch $@\n\nxmlservice.pgm: xmlservice.module $(COMMON)\n\tsystem -q \"CRTPGM PGM($(LIBRARY)/$(@:%.pgm=%)) MODULE($(^:%.module=$(LIBRARY)/%)) TGTRLS($(TGTRLS))\" && touch $@\n\nxmlver.pgm: xmlver.module\n\tsystem -q \"CRTPGM PGM($(LIBRARY)/$(@:%.pgm=%)) MODULE($(^:%.module=$(LIBRARY)/%)) TGTRLS($(TGTRLS))\" && touch $@\n\nxmlcgi.pgm: xmlcgi.module $(COMMON)\n\tsystem 
-q \"CRTPGM PGM($(LIBRARY)/$(@:%.pgm=%)) MODULE($(^:%.module=$(LIBRARY)/%)) BNDSRVPGM(QHTTPSVR/QZSRCORE) TGTRLS($(TGTRLS))\" && touch $@\n\nxmlstoredp.srvpgm: qsrvsrc.srcpf $(XMLSTOREDP_MODULES)\n\tsystem -q \"CPYFRMSTMF FROMSTMF('src/$(@:%.srvpgm=%).bnd') TOMBR('/qsys.lib/$(LIBRARY).lib/qsrvsrc.file/$(@:%.srvpgm=%).mbr') MBROPT(*REPLACE)\"\n\tsystem -q \"CRTSRVPGM SRVPGM($(LIBRARY)/$(@:%.srvpgm=%)) MODULE($(XMLSTOREDP_MODULES:%.module=$(LIBRARY)/%)) EXPORT(*SRCFILE) SRCFILE($(LIBRARY)/QSRVSRC) ACTGRP(*CALLER) TGTRLS($(TGTRLS))\" && touch $@\n\n%.pgm: src/%.clp qclsrc.srcpf\n\tsystem -q \"CPYFRMSTMF FROMSTMF('$<') TOMBR('/qsys.lib/$(LIBRARY).lib/qclsrc.file/$*.mbr') MBROPT(*REPLACE)\"\n\tsystem -q \"CRTCLPGM PGM($(LIBRARY)/$*) SRCFILE($(LIBRARY)/QCLSRC) TGTRLS($(TGTRLS))\"\n\ttouch $@\n\n%.pgm: %.module\n\tsystem -q \"CRTPGM PGM($(LIBRARY)/$(@:%.pgm=%)) MODULE($(^:%.module=$(LIBRARY)/%)) TGTRLS($(TGTRLS))\" && touch $@\n\n%.srvpgm: %.module\n\tsystem -q \"CRTSRVPGM SRVPGM($(LIBRARY)/$(@:%.srvpgm=%)) MODULE($(^:%.module=$(LIBRARY)/%)) EXPORT(*ALL) ACTGRP(*CALLER) TGTRLS($(TGTRLS))\" && touch $@\n\n%.sqlinst: src/%.sql\n\tsystem -q \"RUNSQLSTM SRCSTMF('$<')\" && touch $@\n\n%.module: src/%.rpgle\n\tsystem \"CRTRPGMOD MODULE($(LIBRARY)/$*) SRCSTMF('$<') DBGVIEW($(DBGVIEW)) REPLACE(*YES) TGTCCSID(37) TGTRLS($(TGTRLS))\" > $*.log 2> $*.msg && rm $*.log $*.msg || touch $*.failed\n\ttest ! -e $*.msg || cat $*.msg\n\ttest ! -e $*.log || ./parse.sh $*.log\n\tif [ -e $*.failed ]; then rm $*.failed; exit 1; fi\n\ttouch $@\n\n%.module: src/%.sqlrpgle\n\tsystem \"CRTSQLRPGI OBJ($(LIBRARY)/$*) SRCSTMF('$<') OBJTYPE(*MODULE) REPLACE(*YES) RPGPPOPT(*LVL2) COMPILEOPT('INCDIR(''src/'') TGTCCSID(37)') TGTRLS($(TGTRLS))\" > $*.log 2> $*.msg && rm $*.log $*.msg || touch $*.failed\n\ttest ! -e $*.msg || cat $*.msg\n\ttest ! 
-e $*.log || ./parse.sh $*.log\n\tif [ -e $*.failed ]; then rm $*.failed; exit 1; fi\n\ttouch $@\n\n%.module: src/%.c\n\tsystem \"CRTCMOD MODULE($(LIBRARY)/$*) SRCSTMF('$<') DBGVIEW($(DBGVIEW)) REPLACE(*YES) TGTCCSID(37) TGTRLS($(TGTRLS))\" > $*.log 2> $*.msg && rm $*.log $*.msg || touch $*.failed\n\ttest ! -e $*.msg || cat $*.msg\n\ttest ! -e $*.log || cat $*.log\n\tif [ -e $*.failed ]; then rm $*.failed; exit 1; fi\n\ttouch $@\n\n%.lib:\n\t(system -q 'CHKOBJ $* *LIB' || system -q 'CRTLIB $*') && touch $@\n\n%.srcpf: $(LIBRARY).lib\n\t(system -q 'CHKOBJ $(LIBRARY)/$* *FILE' || system -q 'CRTSRCPF $(LIBRARY)/$*') && touch $@\n\n"
},
{
"alpha_fraction": 0.7323805093765259,
"alphanum_fraction": 0.747166097164154,
"avg_line_length": 22.5930233001709,
"blob_id": "be7558a964f12c35c93ef23410d7ae60a349934f",
"content_id": "09b0a709166b7a9c42ef5d853fada79e6f61f06d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2029,
"license_type": "permissive",
"max_line_length": 228,
"num_lines": 86,
"path": "/README.md",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "# XMLSERVICE\n\nXMLSERVICE is a set of procedures written in ILE RPG that allow you to interact with IBM i resources such as programs and commands using a plain XML protocol. XMLSERVICE can be called directly or via high-level language toolkit.\n\n![XMLSERVICE visualization](https://raw.githubusercontent.com/IBM/xmlservice/master/xmlservice.png)\n\n## Example\n\nTODO ...\n\n## Documentation\n\nDocumentation is at https://xmlservice.readthedocs.io/\n\n## Building from Source\n\n### Build requirements\n\nBuilding requires Python 3 and GNU make. These can be installed with `yum`: `yum install python3 make-gnu`\n\nYou will also need the ILE RPG compiler installed (5770-WDS option 31) along with the following PTFs:\n\n- 7.3: SI62605\n- 7.2: SI62604\n- 7.1: SI62580\n\n### Building\n\n```sh\nPATH=/QOpenSys/pkgs/bin:$PATH\n\ngit clone https://github.com/IBM/xmlservice.git\n\ncd xmlservice\n\npython3 ./configure\n\nmake\n```\n\n### Customizing the Build\n\nYou can customize the build by passing options to `configure`:\n\n- `--library`: set the build library\n- `--debug`: set the debug level (DBGVIEW CL parameter)\n\n## Running the Tests\n\nTODO ...\n\n## Language Toolkits\n\n- [.NET](https://github.com/richardschoen/IbmiXmlserviceStd)\n- [Node.js](https://github.com/IBM/nodejs-itoolkit)\n- [PHP](https://github.com/zendtech/IbmiToolkit)\n- [Python](https://github.com/IBM/python-itoolkit)\n- [Ruby](https://bitbucket.org/litmis/ruby-itoolkit)\n- [Swift](https://bitbucket.org/litmis/swift-itoolkit)\n\n## Contributing\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md)\n\n## Releasing a New Version\n\nThis project uses [bumpversion](https://github.com/peritus/bumpversion) to manage its version numbers. 
To update official releases, run the following commands:\n\n```\n# checkout and pull the latest code from master\ngit checkout master\ngit pull\n\n# bump to a release version (a tag and commit are made)\nbumpversion release\n\n# bump to the new dev version (a commit is made)\nbumpversion --no-tag patch\n\n# push the new tag and commits\ngit push origin master --tags\n```\n\n## License\n\n[BSD](LICENSE)\n"
},
{
"alpha_fraction": 0.56582111120224,
"alphanum_fraction": 0.6053404808044434,
"avg_line_length": 23.311687469482422,
"blob_id": "f70a6049e093c631ba1cc547d2e7679f17f243ef",
"content_id": "d678cc989e09ae5779fc7b97e0c9964e9ee15f0c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3745,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 154,
"path": "/test/php/test_71160_reservation_ibm_db2_io.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout multi reservation processing\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\nfor ($i=0;$i<4;$i++) {\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG5M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\necho \"=========================\\n\";\nswitch($i) {\n case 0:\n // $ctlstart = $ctl;\n // $ctl .= \" *debugproc\";\n echo \"INPUT getxml0 (wrkactjob): good exclusive key IPC ...\";\n $clobIn = getxml0();\n break;\n case 1:\n // $ctl = $ctlstart;\n echo \"INPUT getxml1 (ls /tmp): bad wrong key IPC ...\";\n $clobIn = getxml1();\n break;\n case 2:\n echo \"INPUT CALL getxml2 (ZZCALL): good exclusive key IPC stop ...\";\n $clobIn = getxml2();\n break;\n case 3:\n echo \"INPUT getxml3 (ls /tmp): free key IPC ...\";\n $clobIn = getxml3();\n break;\n default:\n die(\"OUTPUT (empty): nothing ...\");\n break;\n}\necho \"ctl = $ctl\\n\";\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// ------------------\n// output processing\n// ------------------\necho \"\\n\";\nswitch($i) {\n case 0:\n echo \"OUTPUT getxml0 (wrkactjob): good exclusive key IPC ...\";\n var_dump($clobOut);\n if (strpos($clobOut,\"Work with Active Jobs\")<1) die(\"Missing Work with Active Jobs\");\n break;\n case 1:\n echo \"OUTPUT getxml1 (ls /tmp): bad wrong key IPC ...\";\n var_dump($clobOut);\n if (strpos($clobOut,\"1301060\")<1) die(\"Missing busy 1301060\");\n break;\n 
case 2:\n echo \"OUTPUT CALL getxml2 (ZZCALL): good exclusive key IPC stop ...\";\n var_dump($clobOut);\n if (strpos($clobOut,\"4444444444.44\")<1) die(\"Missing ZZCALL 4444444444.44\");\n break;\n case 3:\n echo \"OUTPUT getxml3 (ls /tmp): good free key IPC ...\";\n var_dump($clobOut);\n if (strpos($clobOut,\"zsemfile\")<1) die(\"Missing ls /tmp/zsemfile\");\n break;\n default:\n break;\n}\necho \"\\n\";\n}\n// good\necho \"Success\\n\";\n\nfunction getxml0() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<start>myspecialkey</start>\n<sh rows='on'>/QOpenSys/usr/bin/system -i 'wrkactjob'</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n\nfunction getxml1() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<use>mywrongkey</use>\n<sh>/QOpenSys/usr/bin/ls /tmp</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n\nfunction getxml2() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<use>myspecialkey</use>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n<stop>myspecialkey</stop>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n\nfunction getxml3() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sh>/QOpenSys/usr/bin/ls /tmp</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n\n?>\n\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5424210429191589,
"alphanum_fraction": 0.5672701597213745,
"avg_line_length": 31.744186401367188,
"blob_id": "84e38e01e55f369fd24ead8d9682d3ad7f6b5276",
"content_id": "64cb9f1f74ba6b341e0d6e5d963f07fa414eaff9",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2817,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 86,
"path": "/test/php/test_10138_ZZVLAD3_ibm_db2_io-pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - call pgm complex ds\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n// good\necho \"Success ($lib/$name)\\n\";\n\n\n// d MyErrorDs ds qualified based(Template)\n// d ErrorId 8a\n// d Severity 3u 0\n// d Description 80a\n// d ErrorParm ds likeds(MyErrorDs) dim(20)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// C PARM ErrorParm\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZVLAD3' lib='xyzlibxmlservicexyz'>\n<parm io='in' dim='20' comment='init all'>\n<ds var='MyErrorDs' comment='first data'>\n<data type='8a' var='ErrorId'/>\n<data type='3u0' var='Severity'/>\n<data type='80a' var='Description'/>\n</ds>\n</parm>\n<overlay io='both' top='on' comment='set any number messages'>\n<ds var='MyErrorDs' comment='first data'>\n<data type='8a' var='ErrorId'>12345678</data>\n<data type='3u0' var='Severity'>1</data>\n<data type='80a' var='Description'>Toad wrangler</data>\n</ds>\n<ds var='MyErrorDs' comment='right behind first'>\n<data type='8a' var='ErrorId'>87654321</data>\n<data type='3u0' var='Severity'>3</data>\n<data type='80a' var='Description'>Frog wrangler</data>\n</ds>\n</overlay>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.6447761058807373,
"alphanum_fraction": 0.6626865863800049,
"avg_line_length": 19.8125,
"blob_id": "b1bb9353d6f5011598926886324196f09c9de6f2",
"content_id": "7d33a44cd0959aabec59eedaee884e04de52b043",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 335,
"license_type": "permissive",
"max_line_length": 57,
"num_lines": 16,
"path": "/test/php/test_98999_reset_ignore_userid.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_Db2 ignore userid - reset to normal\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\necho \"reset ibm_db2.i5_ignore_userid=1 to normal ... \\n\";\nibm_db2_IgnoreOff();\n// good\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n"
},
{
"alpha_fraction": 0.6078252792358398,
"alphanum_fraction": 0.6260236501693726,
"avg_line_length": 30.385713577270508,
"blob_id": "7c737fd56f7dc26113010cd16eb8a35fc4c16d2b",
"content_id": "85c690a3b2919e2a1f7b79bf401d0be0375db70c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2198,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 70,
"path": "/test/php/test_90503_ibm_db2_io_qtemp_sqlrollback.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
    "text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SQL - qtemp rollback\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -------------\n// call IBM i\n// -------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n// -----------------\n// output row data\n// -----------------\n$format = \"%s %'.15.15s\\n\";\n$row = $xmlobj->xpath('/script/sql/fetch/row');\nprintf ($format, \"pre rollback changed\",(string)$row[0]->data);\nif ((string)$row[0]->data != 1) die(\"pre rollback bad\\n\");\nprintf ($format, \"aft rollback changed\",(string)$row[1]->data);\nif ((string)$row[1]->data != 0) die(\"aft rollback bad\\n\");\n// good\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sql>\n<free/>\n<options options='noauto' autocommit='off'/>\n<connect conn='myconn' options='noauto'/>\n<query conn='myconn' stmt='myupdate'>UPDATE animalq SET id = 9 where ID = 3</query>\n<query conn='myconn' stmt='myselect'>select count(*) from animalq where ID = 9</query>\n<fetch stmt='myselect' block='all' desc='off'/>\n<free stmt='myselect'/>\n<commit conn='myconn' action='rollback'/>\n<query conn='myconn' stmt='myselect'>select count(*) from animalq where ID = 9</query>\n<fetch stmt='myselect' block='all' desc='off'/>\n<free/>\n</sql>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.4404709041118622,
"alphanum_fraction": 0.4980008900165558,
"avg_line_length": 30.25694465637207,
"blob_id": "3f9cf07a9816f9947726567f71f46567c31a4795",
"content_id": "b4981374ba539932cb2c21dbc62c453585689eff",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4502,
"license_type": "permissive",
"max_line_length": 112,
"num_lines": 144,
"path": "/test/php/test_20430_ZZARRAY_toolkit.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
    "text": "--TEST--\nXML i Toolkit: - Zend Toolkit return array full\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { echo $e->getMessage(), \"\\n\"; exit();}\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG5M\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarray: check return array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzarray B export\n// D zzarray PI likeds(dcRec_t) dim(ARRAYMAX)\n// D myName 10A\n// D myMax 10i 0\n// D myCount 10i 0\n$param = array(); // reset array\n$result = array(); // reset array\n$max = 10;\n$param[] = $ToolkitServiceObj->AddParameterChar ('both', 10, \"name\", \"myName\", \"nada\");\n$param[] = $ToolkitServiceObj->AddParameterInt32 ('both', \"max\", \"myMax\", $max);\n$param[] = $ToolkitServiceObj->AddParameterInt32 ('both', \"count\", \"myCount\", 0);\nfor ($h=0;$h<$max;$h++) {\n $dcRec_t[] = $ToolkitServiceObj->AddParameterChar ('both', 10, \"name\", \"dcMyName{$h}\", 'A');\n $dcRec_t[] = $ToolkitServiceObj->AddParameterChar ('both', 4096, \"job\", \"dcMyJob{$h}\", 'B');\n $dcRec_t[] = $ToolkitServiceObj->AddParameterInt32 ('both', \"rank\", \"dcMyRank{$h}\", 1);\n $dcRec_t[] = $ToolkitServiceObj->AddParameterPackDec('both', 12, 2, \"pay\", \"dcMyPay{$h}\", 9.2);\n}\n$result[] = $ToolkitServiceObj->AddDataStruct($dcRec_t);\n$output = $ToolkitServiceObj->PgmCall(\"ZZSRV\", $testLib, $param, $result, array('func'=>\"ZZARRAY\"));\nif (!$output) echo \"Failure \" . $ToolkitServiceObj->getLastError();\nelse var_dump($output);\necho \"Success\\n\";\n?>\n--EXPECTF--\narray(2) {\n [\"io_param\"]=>\n array(3) {\n [\"myName\"]=>\n string(4) \"nada\"\n [\"myMax\"]=>\n string(2) \"10\"\n [\"myCount\"]=>\n string(2) \"10\"\n }\n [\"retvals\"]=>\n array(40) {\n [\"dcMyName0\"]=>\n string(5) \"nada1\"\n [\"dcMyJob0\"]=>\n string(8) \"Test 101\"\n [\"dcMyRank0\"]=>\n string(2) \"11\"\n [\"dcMyPay0\"]=>\n string(5) \"13.42\"\n [\"dcMyName1\"]=>\n string(5) \"nada2\"\n [\"dcMyJob1\"]=>\n string(8) \"Test 102\"\n [\"dcMyRank1\"]=>\n string(2) \"12\"\n [\"dcMyPay1\"]=>\n string(5) \"26.84\"\n [\"dcMyName2\"]=>\n string(5) \"nada3\"\n [\"dcMyJob2\"]=>\n string(8) \"Test 103\"\n [\"dcMyRank2\"]=>\n string(2) \"13\"\n [\"dcMyPay2\"]=>\n string(5) \"40.26\"\n [\"dcMyName3\"]=>\n string(5) \"nada4\"\n [\"dcMyJob3\"]=>\n string(8) \"Test 104\"\n [\"dcMyRank3\"]=>\n string(2) \"14\"\n [\"dcMyPay3\"]=>\n string(5) \"53.68\"\n [\"dcMyName4\"]=>\n string(5) \"nada5\"\n [\"dcMyJob4\"]=>\n string(8) \"Test 105\"\n [\"dcMyRank4\"]=>\n string(2) \"15\"\n [\"dcMyPay4\"]=>\n string(5) \"67.10\"\n [\"dcMyName5\"]=>\n string(5) \"nada6\"\n [\"dcMyJob5\"]=>\n string(8) \"Test 106\"\n [\"dcMyRank5\"]=>\n string(2) \"16\"\n [\"dcMyPay5\"]=>\n string(5) \"80.52\"\n [\"dcMyName6\"]=>\n string(5) \"nada7\"\n [\"dcMyJob6\"]=>\n string(8) \"Test 107\"\n [\"dcMyRank6\"]=>\n string(2) \"17\"\n [\"dcMyPay6\"]=>\n string(5) \"93.94\"\n [\"dcMyName7\"]=>\n string(5) \"nada8\"\n [\"dcMyJob7\"]=>\n string(8) \"Test 108\"\n [\"dcMyRank7\"]=>\n string(2) \"18\"\n [\"dcMyPay7\"]=>\n string(6) \"107.36\"\n [\"dcMyName8\"]=>\n string(5) \"nada9\"\n [\"dcMyJob8\"]=>\n string(8) \"Test 109\"\n [\"dcMyRank8\"]=>\n string(2) \"19\"\n [\"dcMyPay8\"]=>\n string(6) \"120.78\"\n [\"dcMyName9\"]=>\n string(6) \"nada10\"\n [\"dcMyJob9\"]=>\n string(9) \"Test 1010\"\n [\"dcMyRank9\"]=>\n string(2) \"20\"\n [\"dcMyPay9\"]=>\n string(6) \"134.20\"\n }\n}\n%s\n\n"
},
{
"alpha_fraction": 0.5691840052604675,
"alphanum_fraction": 0.6097313761711121,
"avg_line_length": 24.28205108642578,
"blob_id": "533c6248286d8c94e9addbe5e021cbffa800824c",
"content_id": "c61b09c9b1d5764bba83c85b2f106495009b7f38",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1973,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 78,
"path": "/test/php/test_12611_MISC_ibm_db2_io_aftermath.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
    "text": "--TEST--\nXML i Toolkit: IBM_DB2 inout - after adopt profile\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$adoptuser1,$adoptpass1);\nelse $conn = db2_connect($database,$adoptuser1,$adoptpass1);\nif (!$conn) die(\"Fail connect: $database,$adoptuser1\");\n\n// $ctl .= \"*debugproc\";\ndiag();\n\n// good\necho \"Success (aftermath)\\n\";\n\n// call IBM i\nfunction callme ($xmlIn) {\n global $procLib, $conn, $ipc, $ctl;\n $stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\n if (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n $clobIn = $xmlIn;\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n return $clobOut;\n}\n// diag check\nfunction diag() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\n$xmlIn = test_lib_replace($clob);\n$clobOut = callme ($xmlIn);\nvar_dump($clobOut);\n}\n\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5918635129928589,
"alphanum_fraction": 0.6115485429763794,
"avg_line_length": 26.196428298950195,
"blob_id": "faab3a0364bc42825a24a5b70173d32b2508e9c8",
"content_id": "025919bbb973bbe372e43774959b5ccfe6279030",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1524,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 56,
"path": "/test/php/test_10505_MISC_ibm_db2_io_sh_rows_ls_cdata.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SH - ls -l /usr/local/zendsvr/bin\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG10M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$ctl .= \" *cdata\";\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n// -----------------\n// output sh call\n// -----------------\n$sh = $xmlobj->xpath('/script/sh');\nif (!$sh) die(\"Missing XML sh info\");\n\n// good\necho \"Success (PASE sh)\\n\";\n\n// 5250:\n// call qp2term\n// /QOpenSys/usr/bin/ls /tmp\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sh rows='on'>/QOpenSys/usr/bin/ls -l /tmp</sh>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5006301403045654,
"alphanum_fraction": 0.5340264439582825,
"avg_line_length": 33.11827850341797,
"blob_id": "44cca456a87c9159ebc277103202c91bc04d8d25",
"content_id": "37c8d26e8a583965b1677158f587448b1b86b17d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3174,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 93,
"path": "/test/php/test_10161_ZZARRAY_ibm_db2_set_array_bigger.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
    "text": "--TEST--\nXML i Toolkit: IBM_DB2 result set SRVPGM - really big data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUGR1M(?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_execute($stmt,array($ipc,$ctl,$clobIn));\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\nwhile ($row = db2_fetch_array($stmt)){\n $clobOut .= $row[0];\n}\n$clobOut = trim($clobOut);\n// -----------------\n// output processing\n// -----------------\n$myName1 = 'Ranger'; // expected name\n$myMax1 = 212; // expected max\n$myCount1= 212; // expected count\n$size = strlen($clobOut);\necho substr($clobOut,$size-400).\"\\n\";\nif ($size < 917000) die(\"Failed ($size < 917000)\\n\");\nfor ($i=0;$i<$myCount1;$i++) {\n // DS records expected\n $irpg = $i+1;\n $dcMyName = \">\".$myName1.$irpg.\"<\";\n if (strpos($clobOut,$dcMyName)<1) die(\"Fail dcMyName $dcMyName missing\\n\");\n $dcMyRank = \">\".(10+$irpg).\"<\";\n if (strpos($clobOut,$dcMyRank)<1) die(\"Fail dcMyRank $dcMyRank missing\\n\");\n $dcMyPay = \">\".sprintf(\"%1.2f\", 13.42*$irpg).\"<\";\n if (strpos($clobOut,$dcMyPay)<1) die(\"Fail dcMyPay $dcMyPay missing\\n\");\n}\n// good\necho \"Success ($size)\\n\";\n\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarray: check return array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzarray B export\n// D zzarray PI likeds(dcRec_t) dim(ARRAYMAX)\n// D myName 10A\n// D myMax 10i 0\n// D myCount 10i 0\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<cmd comment='addlible'>ADDLIBLE LIB(xyzlibxmlservicexyz) POSITION(*FIRST)</cmd>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZARRAY'>\n <parm comment='search this name'>\n <data var='myName' type='10A'>Ranger</data>\n </parm>\n <parm comment='max allowed return'>\n <data var='myMax' type='10i0'>212</data>\n </parm>\n <parm comment='actual count returned'>\n <data var='myCount' type='10i0' enddo='mycount'>0</data>\n </parm>\n <return>\n <ds var='dcRec_t' dim='999' dou='mycount'>\n <data var='dcMyName' type='10A'>na</data>\n <data var='dcMyJob' type='4096A'>na</data>\n <data var='dcMyRank' type='10i0'>0</data>\n <data var='dcMyPay' type='12p2'>0.0</data>\n </ds>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5734463334083557,
"alphanum_fraction": 0.5926553606987,
"avg_line_length": 25.803030014038086,
"blob_id": "76ed4e9538cc1d3d9f8b875ca1706e0b0b98903c",
"content_id": "b34769f4df007c68697afb5e59896317dad83b76",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1770,
"license_type": "permissive",
"max_line_length": 103,
"num_lines": 66,
"path": "/test/php/test_33464_cwtest_pgmsimple.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
    "text": "--TEST--\nXML i Toolkit: - cwtest pgm simple\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\nif ($doPgmCallSimple) {\n\n\techo h2('Program calls');\n echo 'Program call with simple parameters<BR>';\n\n$progname = \"$demoLib/TESTSTP2\";\n\n$desc = array (\narray (\"Name\"=>\"code\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>\"10\"),\narray (\"Name\"=>\"name\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>\"10\")\n);\n$desc = Array (\n 0 => Array ( 'type' => 0, 'name' => 'code', 'length' => 10, 'io' => 3 ),\n 1 => Array ( 'type' => 0, 'name' => 'name', 'length' => 10, 'io' => 3 ) ) ;\necho \"<b>About to call $progname with two char parameters.</b><BR>\";\n\n$prog = i5_program_prepare($progname, $desc);\nif ($prog === FALSE) {\n\t$errorTab = i5_error();\n\techo \"Program prepare failed <br>\\n\";\n\tvar_dump($errorTab);\n\tdie();\n}\n/* Execute Program */\n$params = array (\"code\"=>\"123\",\"name\"=>\"ABC\");\n$retvals = array(\"code\"=>\"code\",\"name\"=>\"name\");\n$ret = i5_program_call($prog, $params, $retvals) ;\nif (function_exists('i5_output')) extract(i5_output()); // i5_output() required if called in a function\n\nif ($ret === FALSE)\n{\n$errorTab = i5_error();\necho \"FAIL : i5_program_call failure message: \" . $conn->getLastError() . \" with code <br>\";\nvar_dump($errorTab);\ndie();\n}else {\n // success\n echo \"Success! The return values are: <br>\", \"Name: \", $name, \"<br> Code: \", $code, \"<br><BR>\";\n}\n$close_val = i5_program_close ($prog);\nif ($close_val === false )\n{\nprint (\"FAIL : i5_program_close returned false, closing an open prog.<br>\\n\");\n$errorTab = i5_error();\nvar_dump($errorTab);\ndie();\n}\n\n\n} // (simple call)\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5701204538345337,
"alphanum_fraction": 0.5807228684425354,
"avg_line_length": 27.027027130126953,
"blob_id": "662c364cb65e9b66179983a4f34f62e759be7315",
"content_id": "17b68c7d577b77908d7a1ebb36e374aa986e576c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2075,
"license_type": "permissive",
"max_line_length": 214,
"num_lines": 74,
"path": "/test/php/test_33473_cwtest_loop_adopt.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest loop adopt\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\n\nfor ($i=1;$i<100;$i++) {\n\nif ($i % 2) {\n// this user, TKITU1, should exist on the system\n$user = $adoptuser1;\n$testPw = $adoptpass1;\n}\nelse {\n// this user, DB2, should exist on the system\n$user = $adoptuser2;\n$testPw = $adoptpass2;\n}\n\n\nif ($doAdoptAuthority) {\n\n\techo h2(\"\\n\\nAdopt authority loop = $i\");\n\n\t// Note: only works if you've defined $user and $testPw, and created the user profile.\n\techo \"About to adopt authority to user $user<BR>\";\n\t$start = microtime(true);\n\n\t$success = i5_adopt_authority($user, $testPw);\n $end = microtime(true);\n $elapsed = $end - $start;\n\n\n\tif (!$success) {\n\t echo \"Error adopting authority: \" . printArray(i5_error()) . \"<BR>\";\n die();\n } else {\n \techo \"Success adopting authority in $elapsed seconds<BR>\";\n\n \techo \"About to check current user and other variables after adopting authority.<BR>\";\n\n \t$cmdString = 'RTVJOBA';\n $input = array();\n\n $output = array('ccsid' => array('ccsid', 'dec(5 0)'),\n 'dftccsid' => array('defaultCcsid', 'dec(5 0)'),\n 'curuser'=>'currentUser',\n 'nbr'=>'jobNumber',\n 'job'=>'jobName',\n 'user'=>'jobUser',\n 'usrlibl' => 'userLibl');\n $commandSuccessful = i5_command($cmdString, $input, $output, $conn);\n if (function_exists('i5_output')) extract(i5_output()); // i5_output() required if called in a function\n\n echo \"Ran command $cmdString. Return: \" . OkBad($commandSuccessful) .\n \" with original job user '$jobUser', current user '$currentUser', CCSID '$ccsid', default CCSID '$defaultCcsid', job name '$jobName', job number '$jobNumber', with user liblist '$userLibl'.<BR><BR>\";\n\n } //(if $success)\n\n} //(if ($doAdoptAuthority))\n\n} // loop\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6029239892959595,
"alphanum_fraction": 0.6269006133079529,
"avg_line_length": 27.966102600097656,
"blob_id": "be92f0ea3d4ae90a8d0565dab1805e3996135c22",
"content_id": "8adf6c704f8815479882c8ec543fc93c38a68fdc",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1710,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 59,
"path": "/test/php/test_80438_pgm_db2_io_error_off.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 IBM_DB2 pgm check error off\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n\n// check if good\nif (strpos($clobOut,'<report>')>0) die(\"Error contains <report>, but should not error='off'\\n\");\nif (strpos($clobOut,'<error>')<1) die(\"Missing error <error>\\n\");\nif (strpos($clobOut,'<joblog')<1) die(\"Missing error <joblog>\\n\");\n\n// good\necho \"Success\\n\";\n\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0' encoding='ISO-8859-1' ?>\n<script>\n<pgm name='TEST_100' lib='LOUISBAD' error='off'>\n</pgm>\n<pgm name='TEST_100' lib='LOUISBAD' error='off'>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6502732038497925,
"alphanum_fraction": 0.688524603843689,
"avg_line_length": 12,
"blob_id": "1ebe5bf729f85b7639e65764a90ea29c178af744",
"content_id": "3dd536814e268b5f246dbd6e2b17f83cac996f38",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 183,
"license_type": "permissive",
"max_line_length": 39,
"num_lines": 14,
"path": "/test/php/ftpcompile.sh",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "# bash ftpcompile.sh lp0364d adc\nMYPWD=$(<$HOME/.ftprc)\nftp -i -n -v $1 << ftp_end\nuser $2 $MYPWD\n\nquote namefmt 1\n\nbin\ncd /www/zendsvr/htdocs/tests/xmlservice\nmput *\n\nquit\n\nftp_end\n\n"
},
{
"alpha_fraction": 0.6276102066040039,
"alphanum_fraction": 0.6310904622077942,
"avg_line_length": 33.47999954223633,
"blob_id": "b463b75b6849843a5fcdb505f624d85c43b497f4",
"content_id": "c28aae3e0c6cce3097a867f9d05b464ca0c7df17",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 862,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 25,
"path": "/test/php/xmlservice_drivers.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// *** http://ibmi/test.php?driver=ibm_db2 ***\nif (!isset($procDrive)) die('Failure -- authorization.php $procDrive=driver variable is not set.');\nif (isset($_GET['driver'])) $procDrive = $_GET['driver'];\nif (isset($_POST['driver'])) $procDrive = $_POST['driver'];\nswitch($procDrive) {\ncase 'get':\n require_once 'xmlservice_get.php'; // $xmlOut = xmlservice($xmlIn)\n break;\ncase 'post':\n require_once 'xmlservice_post.php'; // $xmlOut = xmlservice($xmlIn)\n break;\ncase 'odbc':\n $ctl .= \" *hack\"; // quirky odbc drivers (result set)\n require_once 'xmlservice_odbc.php'; // $xmlOut = xmlservice($xmlIn)\n break;\ncase 'pdo_ibm':\n require_once 'xmlservice_pdo_ibm.php'; // $xmlOut = xmlservice($xmlIn)\n break;\ncase 'ibm_db2':\ndefault:\n require_once 'xmlservice_ibm_db2.php'; // $xmlOut = xmlservice($xmlIn)\n break;\n}\n?>\n"
},
{
"alpha_fraction": 0.5682663917541504,
"alphanum_fraction": 0.5829613208770752,
"avg_line_length": 43.60580825805664,
"blob_id": "b91b465b6130a9925f93121c37593ed44bf6414f",
"content_id": "d56c3bd0317df0a9429fc12c46ecf9aef0807be0",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 10752,
"license_type": "permissive",
"max_line_length": 227,
"num_lines": 241,
"path": "/docs/library-list.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\n\nXMLSERVICE/Toolkit \\*LIBL\n=========================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nchange XMLSERVICE 1.7.3+\n------------------------\n\nThe default setting for SBMJOB of all XMLSERVICE jobs was changed to INLLIBL(\\*CURRENT) better follow the user profile of QSQSRVR job. If you are having difficulty with private connections libl you may want to try this version.\n\nsetting \\*LIBL - what do i do?\n------------------------------\n\nFew other topics dealing with web programs cause more frustrations then setting IBM i library list (\\*LIBL), but if you follow a few rules it all works.\n\n\\*LIBL conflict between web scripts and IBM i PGMs\n--------------------------------------------------\n\nAny good conflict needs a basic difference in philosophy causing all the trouble.\n\n* '''Stateless''' - web scripts typically wish to run stateless (PHP), with no environmental settings (such as \\*LIBL) set on the server\n* '''State Full''' - IBM PGMs generally run state full (RPG/Cobol/CLP), meaning need \\*LIBL to run at all\n\nRule of toolkit web \\*LIBL ...\n------------------------------\n\nThe following rules apply to all toolkit DB2 connections persistent (db2_pconnect, odbc_pconnect) and non-persistent (db2_connect, odbc_connect).\n\n* '''Stateless''' - PHP scripts using Toolkit without internalKey (IPC) MUST set \\*LIBL on every request before calling PGMs (see specific examples)\n* '''State Full''' - PHP scripts using Toolkit with internalKey (IPC) set \\*LIBL only once for life of job before calling PGMs (see specific examples)\n\n1) Specific examples for New PHP Toolkit\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* '''Stateless''' - set \\*LIBL every single request\n\n::\n\n // *** stateless occurs when no internalKey (e.g. 
'/tmp/packers') was specified ***\n // *** also when ->setToolkitServiceParams('stateless'=>true)) is called\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n\n // stateless - MUST do this every single script after connect/getInstance()\n // even if using persistent connections (db2_pconnect, odbc_pconnect)\n $ToolkitServiceObj->CLCommand(\"CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)\");\n\n // another option might be to call a setup program that sets *LIBL for you.\n\n\n* '''State Full''' - set \\*LIBL once and forget it\n\n::\n\n $internalKey = '/tmp/packers';\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'InternalKey'=>$internalKey, // *** RIGHT HERE internalKey/IPC\n // *** run state full ...\n // use SBMJOB command run in new job\n // PHP can call again, again, again\n // with /tmp/packers and get ...\n // same job every time\n // same library list (*LIBL)\n // same PGMs with open files, etc.\n // ... 
exactly like 5250 sign-on screen\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n\n\n // state full - MUST do this ONCE ONLY after start/sbmjob of XMLSERVICE job\n // then forget about it (unless you choose to change libl) ...\n $ToolkitServiceObj->CLCommand(\"CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)\");\n\n\n* '''persistent connections''' - later releases PHP Toolkit allow reused/shared connections with other work, including persistent connections, but internalKey (IPC) rules remain the same\n\n::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n\n $internalKey = '/tmp/packers';\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'InternalKey'=>$internalKey, // *** RIGHT HERE internalKey/IPC\n // *** run state full ...\n // use SBMJOB command run in new job\n // PHP can call again, again, again\n // with /tmp/packers and get ...\n // same job every time\n // same library list (*LIBL)\n // same PGMs with open files, etc.\n // ... 
exactly like 5250 sign-on screen\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n\n* '''plug size''' - later releases PHP Toolkit 1.2.4+ calculates the correct plug name via 'plugSize', taking db type (ODBC or DB2) into account.\n\n::\n\n if ($userPickFast) $conn = db2_pconnect($database,$user,$password);\n else $conn = odbc_pconnect($database,$user,$password);\n\n try { $ToolkitServiceObj = ToolkitService::getInstance($conn); }\n catch (Exception $e) { die($e->getMessage()); }\n\n $internalKey = '/tmp/packers';\n $ToolkitServiceObj->setToolkitServiceParams(array(\n 'InternalKey'=>$internalKey, // *** RIGHT HERE internalKey/IPC\n // *** run state full ...\n 'plugSize'=>\"32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n\n\n2) Specific examples for XMLSERVICE\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* '''Stateless''' - set \\*LIBL every single request\n\n::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ipc = \"\"; // *** RIGHT HERE MISSING internalKey/IPC\n $ctl = \"*here\"; // *** run stateless ...\n // here in any available database job\n // must set *LIBL evey time\n // stateless - MUST do this every single script after connect/getInstance()\n // even if using persistent connections (db2_pconnect, odbc_pconnect)\n $clobIn =\n \"<?xml version='1.0'?>\n <script>\n <cmd>CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)</cmd>\n </script>\";\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n* '''State Full''' - set 
\\*LIBL once and forget it\n\n::\n\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ipc = \"/tmp/packers\"; // *** RIGHT HERE internalKey/IPC\n $ctl = \"*sbmjob\"; // *** run state full ...\n // use SBMJOB command run in new job\n // PHP can call again, again, again\n // with /tmp/packers and get ...\n // same job every time\n // same library list (*LIBL)\n // same PGMs with open files, etc.\n // ... exactly like 5250 sign-on screen\n // state full - MUST do this ONCE ONLY after start/sbmjob of XMLSERVICE job\n // then forget about it (unless you choose to change libl) ...\n $clobIn =\n \"<?xml version='1.0'?>\n <script>\n <cmd>CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)</cmd>\n </script>\";\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n\n3) Specific examples for New PHP Toolkit CW layer\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* '''Stateless''' - set \\*LIBL every single request, but for CW, specify libraries in reverse order (via ADDLIBLE)\n\n::\n\n $options = array(I5_OPTIONS_INITLIBL => 'WILMAFLIN FREDFLIN' );\n $conn = i5_connect($host, $user, $password, $options);\n // or persistent (which is also stateless): still must specify library each time\n $conn = i5_pconnect($host, $user, $password, $options);\n \n* '''State Full''' - set \\*LIBL once and forget it\n\n::\n\n session_start();\n\n // if we previously saved a connection number in PHP session, use it.\n // otherwise, use 0 (which means create a new connection)\n $conNum = 
(isset($_SESSION['conectionNum']) ? $_SESSION['conectionNum'] : 0);\n\n // I5_OPTIONS_PRIVATE_CONNECTION: connection is private for the session\n // I5_OPTIONS_IDLE_TIMEOUT: After a delay of the specified number of seconds with no activity, the job will end.\n $options = array(I5_OPTIONS_PRIVATE_CONNECTION => $conNum,\n I5_OPTIONS_IDLE_TIMEOUT => \"60\");\n\n // connect as a private connection, starting with a persistent conn\n $conn = i5_pconnect ($host, $user, $password, $options);\n\n if (!$conn) {\n echo \"Something went wrong: <PRE>\" . print_r(i5_error(), true) . \"</PRE>\";\n $_SESSION['conectionNum'] = 0; // reset number\n } else {\n // connected successfully\n\n // if original conNum was 0, let's retrieve the new number.\n if ($conNum == 0) {\n\n // Session variable was 0: Get connection ID and store it in session variable.\n $ret = i5_get_property(I5_PRIVATE_CONNECTION, $conn);\n\n if (!$ret) {\n echo \"Something went wrong: <PRE>\" . print_r(i5_error(), true) . \"</PRE>\";\n } else {\n\n // We have a good new private connection. Store connection ID in session variable\n $_SESSION['conectionNum'] = $ret;\n\n // and set library list, too.\n i5_command(\"CHGLIBL LIBL(FREDFLIN WILMAFLIN) CURLIB(FREDFLIN)\");\n }\n }\n }\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICELibl?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.5806451439857483,
"alphanum_fraction": 0.6190897226333618,
"avg_line_length": 39.41071319580078,
"blob_id": "ca434feeb292896960ae884c0f137e8f7c95fa71",
"content_id": "40121da94734790a2398c3a29ef89efee44f7e44",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2263,
"license_type": "permissive",
"max_line_length": 104,
"num_lines": 56,
"path": "/test/php/test_20403_ZZNADA_ZZCALL_ZZNADA_toolkit_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit PGM with DS\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { die($e->getMessage()); }\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG512K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\necho \"my ipc is $ipc ... \\n\";\nfor ($i=0; $i<3; $i++) {\n $param = array();\n $ds = array();\n switch($i) {\n case 0:\n echo \"Call ZZNADA ... \\n\";\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $testLib, null, null, array('func'=>'ZZNADA'));\n var_dump($result);\n break;\n case 1:\n echo \"Call ZZCALL ... \\n\";\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARA', 'var1', 'Y');\n $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARB', 'var2', 'Z');\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'INDEC1', 'var3', '001.0001');\n $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'INDEC2', 'var4', '0000000003.04');\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARA', 'ds1', 'A');\n $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARB', 'ds2', 'B');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'DSDEC1', 'ds3', '005.0007');\n $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'DSDEC1', 'ds4', '0000000006.08');\n $param[] = $ToolkitServiceObj->AddDataStruct($ds);\n $result = $ToolkitServiceObj->PgmCall('ZZCALL', $testLib, $param, null, null);\n var_dump($result);\n break;\n case 2:\n echo \"Call ZZNADA ... 
\\n\";\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $testLib, null, null, array('func'=>'ZZNADA'));\n var_dump($result);\n break;\n default:\n die(\"loop bad\");\n break;\n }\n if (!$result) die(\"Failure\");\n}\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n"
},
{
"alpha_fraction": 0.5510777831077576,
"alphanum_fraction": 0.6194939017295837,
"avg_line_length": 23.76744270324707,
"blob_id": "c1643a6ed8c8b97e6039c785f38067fa0a022d9b",
"content_id": "54f1421fd9bdb8de4b17ec6a3e85d7c29bdd1f08",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1067,
"license_type": "permissive",
"max_line_length": 54,
"num_lines": 43,
"path": "/test/byval/testval.py",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "import os\nfrom itoolkit import *\nfrom itoolkit.lib.ilibcall import *\n\nitransport = iLibCall()\n\nitool = iToolKit()\nitool.add(iCmd('chglibl', 'CHGLIBL LIBL(XMLSERVICE)'))\n# dcl-pr testval5 int(20);\n# arg1 int(3);\n# arg2 int(5);\n# arg3 int(10);\n# arg4 int(20);\n# end-pr;\ns1 = iSrvPgm('testval5','TESTZSRV','TESTVAL5')\np1 = iParm('p1',{'by':'val'})\np1.add(iData('arg1','3i0','4'))\np2 = iParm('p2',{'by':'val'})\np2.add(iData('arg2','5i0','5'))\np3 = iParm('p3',{'by':'val'})\np3.add(iData('arg3','10i0','6'))\np4 = iParm('p4',{'by':'val'})\np4.add(iData('arg4','20i0','7'))\ns1.add(p1)\ns1.add(p2)\ns1.add(p3)\ns1.add(p4)\ns1.addRet(iData('retval','20i0','0'))\nitool.add(s1)\nprint(itool.xml_in())\nexit()\nitool.call(itransport)\nchglibl = itool.dict_out('chglibl')\ntestval5 = itool.dict_out('testval5')\nprint(itool.xml_out())\nprint(chglibl['success'])\nprint(testval5['success'])\nif 'success' in testval5:\n print(testval5['arg1'])\n print(testval5['arg2'])\n print(testval5['arg3'])\n print(testval5['arg4'])\n print(testval5['retval'])\n\n\n"
},
{
"alpha_fraction": 0.6946308612823486,
"alphanum_fraction": 0.7010738253593445,
"avg_line_length": 51.06993103027344,
"blob_id": "e6925756ecb3c6a83ec0d2a2335deae245e9280d",
"content_id": "fe13741cb592851a372574a724ac52f0ba7f2b1a",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 7454,
"license_type": "permissive",
"max_line_length": 701,
"num_lines": 143,
"path": "/docs/source-layout.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\nXMLSERVICE Layouts\n==================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nThis page is for RPG developers that want to customize XMLSERVICE.\n\n\nXMLSERVICE source files\n-----------------------\n\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n (1)->| Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |\n | (optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored procedures |<-(1)\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |<-(1) xmlservice-rpg-x.x.x.zip\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n -----------------------------------------\n \nThe following is a brief description of the function of the source files included with the XMLSERVICE library Open Source project.\n::\n\n LIB:\n XMLSERVICE\n\n QSQLSRC:\n crtsql - db2 utility script to create stored procedures\n\n QRPGLESRC:\n plugconf_h - compile configuration options\n plugmri_h - translateable messages\n plugcach_h - interface for all cahces\n plugcach - implementation for all caches\n plugbug_h - interface debug (*debug)\n plugbug - implement debug message qsysopr\n plugerr_h - interface errors\n plugerr - implement error collection and report\n plugile_h - ILE ABI interface\n plugile - implement ILE ABI paramter memory layout\n plugipc_h -interface IPC (*nostart)\n plugipc - implement IPC shared memory and semaphore\n pluglic_h - interface license (*license)\n pluglic - implement license report\n plugpase_h - interface PASE functions\n plugpase - implement PASE PGM, SRVPGM, sh\n plugperf_h - interface performance (*rpt)\n plugperf - implement performance collection and report\n plugrun_h - interface run loop (*here *immed *justproc *sbmjob)\n plugrun - implement 
client/server run loop (RPG cycle)\n plugxml_h - interface XML\n plugxml - implement XML parse and run logic\n plugsql_h - interface sql by XML\n plugsql - implement sql by XML (1.5)\n plugdb2_h - interface sql db2 by XML\n plugdb2 - implement sql db2 by XML\n plugsig_h.rpgle - interface timeout signal (1.6.6)\n plugsig.rpgle - implement timeout signal\n plugconv_h.rpgle - interface ccsid conversion (1.6.6)\n plugconv.rpgle - implement ccsid conversion\n xmlcgi - PGM client interface Apache CGI (HTTP REST)\n xmlmain - PGM client interface RPG-2-RPG (experimental)\n xmlservice - PGM server deamons (XMLSERVICE jobs)\n xmlstoredp - SRVPGM client DB2 stored procedures (DB2 connections)\n zzcall - optional PGM for pear tests\n zzsrv - optional SRVPGM for pear tests\n\n\nXMLSERVICE hooks plugconf\n-------------------------\n\nXMLSERVICE allows for you to customize a few key security and output decisions in the RPG module plugconf or \nprovide your own plugconf(x) version. \n\n**XMLSERVICE hooks APIs plugconf_h**\n\nThe hooks are ONLY available with XMLSERVICE version 1.5.5+.\n\nplugconf module must implement the following interfaces prototyped by plugconf_h ... \nfailure to implement these APIs will result in compile failure.\n\nThere are key hooks in plugrun / xmlcgi that allow your to decline service after you snoop the input XML. \nLikewise, there are key hooks that allow you to manipulate the XML output returned to the clients \nif you wish to \"customize\" XML (cool), however it should be noted that if you change the output XML \nyou may break other clients (like Zend PHP wrapper).\n\n\nRunning with \\*NONE from XMLCGI\n-------------------------------\nXMLSERVICE via XMLCGI may be too powerful using only HTML/XML with no user/password (\\*NONE), \nso it is turned OFF by default always. However, if your machine is behind a firewall and you trust your team you can enable \\*NONE ...\n\n#. make your own plugconf(x)\n#. ``D PLUGNONEOK S 1N inz(*ON)``\n#. 
recompile and your machine will be wide open ...\n\nNote: The other setting PLUGDEMOOK is not used is future consideration for demos not from XMLCGI.\n\n\nXMLSERVICE XMLCGI (RPG CGI)\n---------------------------\nA sample XMLCGI interface was included to allow REST calls to XMLSERVICE. This was very handy to check out XML to/from XMLSERVICE before ever committing pen to paper on an actual script like PHP. It also seems to very handy fro one off HTML/XML applications that can reside on your laptop (or other device) and use the built in browser to run IBM i.\n\n\n**BIG DESIGN HINT:**\n\nAlthough common practice in IBM i circles, I choose not to use Basic Authentication and/or \"switch profile\" in RPG CGI programs on the web (httpd.conf ServerId). I much prefer to leave all Apache/CGI related jobs as neutral \"low authority\" like QTMHHTTP (IBM i) or NOBODY (Linux) for clearly understood security. Also, I prefer keeping any security \"switch profile\" activity in the database connection (DB2) for portability of design and multiple client support (1-tier/2-tier). An example of this \"database security\" style of CGI is Apache module XMLCGI included in XMLSERVICE download (main page). 
XMLCGI implements all \"security\" preferences and \"switch profile\" using the DB2 CLI interface in RPG.\n\nBriefly, reasons i like keep the \"profile security\" in the database ...\n* portable design idea in tradition of millions of PHP/MySql web sites (idea of profile is in database)\n* easy switch/sharing between 1-tier and 2-tier applications across common database design (DB2)\n* no \"protocol\" invention required because most languages support database (including PHP, Java, RPG, Cobol, etc.)\n* supporting/sharing applications a wide range of device clients such as PDAs, laptops, servers is much easier as most can handle REST and/or DB2 communications (whatever language is available)\n\n**Do it yourself RPG:**\n\n* **If you are a do it yourself RPG programmer, you may want to download and study XMLCGI RPG source file included as this would allow you to write any RPG front-end (just steal the DB2 CLI stuff in XMLCGI if you want a server with profiles).**\n \n * RPG program XMLCGI is a traditional \"Apache/Unix\" style CGI server (ala PHP/MySql), therefore you do not need \"switch profile\" in the XMLCGI job. 
All/any \"switch profile\" is occurring at the DB2 database layer via DB2 CLI server mode not the Apache level (without using profiles in httpd.conf)\n * WARNING: Unless https, XMLCGI sends unencrypted password around web and/or leaves them in files (unless using \\*NONE), which is a bad idea of course, but this is a demonstration module for testing (without PHP, etc.), you as the RPG programmer are intended to customize to your production world (Open Source RPG code).\n\n\nXMLSERVICE XMLMAIN (unused)\n---------------------------\n\nAlso included, is a simple RPG program XMLMAIN that calls client interface XMLSERVICE in processes inside or outside web context, \nhas not been extensively tested (very little in fact), but should be general idea for a IBM i RPG centric interface \n(if such a thing is even relevant these Internet days).\n\n*If you wish to build a traditional web IBM i \"switch profile\" CGI web service (httpd.conf ServerId), \nyou would need to combine parts of XMLMAIN (direct call XMLSERVICE) and XMLCGI (excluding DB2 CLI).*\n\n\n\n"
},
{
"alpha_fraction": 0.5687029957771301,
"alphanum_fraction": 0.6154834032058716,
"avg_line_length": 32.84471893310547,
"blob_id": "96b48072f23c884e76a383611bb4024b97c0e9b2",
"content_id": "ab46795b908bef5fc00395b5ac8af6d5f509b903",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 5451,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 161,
"path": "/test/php/test_12601_MISC_ibm_db2_io_QSZRTVPR.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout sys api - QSZRTVPR prod info\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n// -----------------\n// output pgm call\n// -----------------\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Fail XML pgm missing\\n\");\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Missing XML pgm parms ($name)\");\n// parm data structure Format PRDR0100\n$ds = $parm[0]->ds;\n$count = 0;\nforeach($ds->data as $data) {\n echo $data->attributes()->comment.\": \".(string)$data.\"\\n\";\n $count++;\n}\nif ($count <> 22) die(\"Fail XML Format PRDR0100\");\n// parm data structure Format ERRC0100\n$ds = $parm[1]->ds;\n$count = 0;\nforeach($ds->data as $data) {\n echo $data->attributes()->comment.\": \".(string)$data.\"\\n\";\n $count++;\n}\nif ($count <> 4) die(\"Fail XML Format ERRC0100\");\n\n// good\necho \"Success ($name)\\n\";\n\n/*\n***************************************\n* QSZRTVPR prod info\n***************************************\n* QSZRTVPR\n* 1 Receiver variable Output Char(*)\n* 2 Length of receiver variable Input Binary(4)\n* 3 Format name Input Char(8)\n* 4 Product information Input Char(*)\n* 5 Error code I/O Char(*)\n*\n* PRDR0100 Format\n* 0 0 BINARY(4) Bytes returned\n* 4 4 BINARY(4) Bytes available\n* 8 8 BINARY(4) Reserved\n* 12 C CHAR(7) Product ID\n* 19 13 CHAR(6) Release level\n* 25 19 CHAR(4) Product option\n* 29 1D CHAR(4) Load ID\n* 33 21 CHAR(10) Load type\n* 43 2B CHAR(10) Symbolic load state\n* 53 35 CHAR(10) Load error indicator\n* 63 3F CHAR(2) Load state\n* 65 41 CHAR(1) Supported flag\n* 66 42 CHAR(2) Registration type\n* 68 44 CHAR(14) Registration value\n* 82 52 CHAR(2) Reserved\n* 84 54 BINARY(4) Offset to additional information\n* 88 58 CHAR(4) Primary language load identifier\n* 92 5C CHAR(6) Minimum target release\n* 98 62 CHAR(6) Minimum VRM of *BASE required\n* 104 68 CHAR(1) Requirements met between base\n* 105 69 CHAR(3) Level\n* 
108 6C CHAR(*) Reserved\n*\n* Format ERRC0100 for the error code\n* 0 0 INPUT BINARY(4) Bytes provided\n* 4 4 OUTPUT BINARY(4) Bytes available\n* 8 8 OUTPUT CHAR(7) Exception ID\n* 15 F OUTPUT CHAR(1) Reserved\n*/\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version=\"1.0\"?>\n<script>\n<pgm name='QSZRTVPR'>\n <parm io=\"out\" comment='Receiver variable'>\n <ds comment='PRDR0100 Format' len='rec1'>\n <data type='10i0' comment='Bytes returned'>0</data>\n <data type='10i0' comment='Bytes available'>0</data>\n <data type='10i0' comment='Reserved'>0</data>\n <data type='7A' comment='Product ID'> </data>\n <data type='6A' comment='Release level'> </data>\n <data type='4A' comment='Product option'> </data>\n <data type='4A' comment='Load ID'> </data>\n <data type='10A' comment='Load type'> </data>\n <data type='10A' comment='Symbolic load state'> </data>\n <data type='10A' comment='Load error indicator'> </data>\n <data type='2A' comment='Load state'> </data>\n <data type='1A' comment='Supported flag'> </data>\n <data type='2A' comment='Registration type'> </data>\n <data type='14A' comment='Registration value'> </data>\n <data type='2A' comment='Reserved'> </data>\n <data type='10i0' comment='Offset to additional information'>0</data>\n <data type='4A' comment='Primary language load identifier'> </data>\n <data type='6A' comment='Minimum target release'> </data>\n <data type='6A' comment='Minimum VRM of *BASE required'> </data>\n <data type='1A' comment='Requirements met between base'> </data>\n <data type='3A' comment='Level'> </data>\n <data type='5A' comment='pad'> </data>\n </ds>\n </parm>\n <parm io=\"in\" comment='Length of receiver variable'>\n <data type='10i0' setlen='rec1'>8</data>\n </parm>\n <parm io=\"in\" comment='Format name'>\n <data type='8A'>PRDR0100</data>\n </parm>\n <parm io=\"in\" comment='Product information'>\n <data type='100A'>*OPSYS *CUR 0033*CODE</data>\n </parm>\n <parm io=\"both\" comment='Error code'>\n <ds comment='Format ERRC0100' 
len='rec2'>\n <data type='10i0' comment='Bytes returned'>0</data>\n <data type='10i0' comment='Bytes available' setlen='rec2'>0</data>\n <data type='7A' comment='Exception ID'> </data>\n <data type='1A' comment='Reserved'> </data>\n </ds>\n </parm>\n</pgm>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n\n"
},
{
"alpha_fraction": 0.548588216304779,
"alphanum_fraction": 0.5632563233375549,
"avg_line_length": 31.452381134033203,
"blob_id": "45a68cf44c4b0845efb3045a794c1cc23cc55514",
"content_id": "527c830bd0a27625052ff1d79fc3a7cf6ec0726c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2727,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 84,
"path": "/test/php/test_10310_ZZOMIT_ibm_db2_io_varying.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - omit\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG32K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$func = $pgm->attributes()->func;\n// pgm parms\n// $parm = $pgm->xpath('parm');\n// if ($parm) die(\"Unexpected XML pgm parms io='in' ($lib/$name.$func)\\n\");\n// pgm data returned\n$retn = $pgm->xpath('return');\nif (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n$var = $retn[0]->data->attributes()->var;\n$actual = (string)$retn[0]->data;\n$expect = 'my name is OMIT Vlad';\nif ($actual != $expect) die(\"$var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n// good\necho \"Success ($lib/$name.$func)\\n\";\n\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzomit: check omit\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzomit B export\n// D zzomit PI 50A varying\n// D myName 10A options(*OMIT)\n// D yourName 10A\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZOMIT'>\n <parm comment='my name' io='omit'>\n <data var='myName' type='10A'>Ranger</data>\n </parm>\n <parm comment='your name' io='in'>\n <data var='yourName' type='10A'>Vlad</data>\n </parm>\n <return>\n <data var='myReturnName' type='50A' varying='on'>Mud</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5126491785049438,
"alphanum_fraction": 0.5713604092597961,
"avg_line_length": 40.880001068115234,
"blob_id": "96d942267d2be6852033b0c758afb823da170a08",
"content_id": "d5c8902ff6916ee8cc17bf254550ca350fa0ec80",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2095,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 50,
"path": "/test/php/test_20425_ZZSTAMP_toolkit_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit timestamp\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { die($e->getMessage()); }\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzstamp: check timestamp parm\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzstamp B export\n// D zzstamp PI Z\n// D myStamp Z\n// * vars\n// D retStamp s Z\n// /free\n// retStamp=myStamp;\n// myStamp=z'1960-12-31-12.32.23.000000';\n// return retStamp;\n// /end-free\n// P E\n$param[] = $ToolkitServiceObj->AddParameterChar ('both', 26, 'ZZSTAMP', 'myStamp', '2011-12-29-12.45.29.000000');\n$retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 26, 'ZZSTAMP', 'retStamp', '2002-02-02-02.02.02.000000');\n$result = $ToolkitServiceObj->PgmCall('ZZSRV', $testLib, $param, $retrn, array('func'=>'ZZSTAMP'));\n// var_dump($result);\n/* in/out param myDate */\n$myStamp = \"XMLSERVICE i/o param myStamp: \".$result[\"io_param\"][\"myStamp\"];\necho \"$myStamp\\n\";\n$expect = '1960-12-31-12.32.23.000000';\nif (strpos($myStamp,$expect)<1) die(\"Fail missing $expect\\n\");\n/* return value retStamp */\n$retStamp = \"XMLSERVICE return retStamp: \".$result[\"retvals\"][\"retStamp\"];\necho \"$retStamp\\n\";\n$expect = '2011-12-29-12.45.29.000000';\nif (strpos($retStamp,$expect)<1) die(\"Fail missing $expect\\n\");\n/* all good */\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.4842061400413513,
"alphanum_fraction": 0.5006234645843506,
"avg_line_length": 30.44444465637207,
"blob_id": "1483067a0fc7ecc4e7737edeb05cdecca08aba0f",
"content_id": "a1f9aa9069c65a1f9f8399f65adc8007f132405e",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4812,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 153,
"path": "/test/php/test_10155_ZZCUST_ibm_db2_io_dou.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - DOU DS records returned\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\nob_start();\necho \"<h1>Easy Peasy XMLSERVICE</h1>\\n\";\necho \"<table border='1'>\\n\";\n// echo \"<th>RPG anyone?</th>\\n\";\necho \"<th>Easy peasy simpleXML</th>\\n\";\n// echo \"<th>Easy peasy text XML</th>\\n\";\necho \"<th>Easy peasy PHP arrays</th>\\n\";\necho \"<th>Easy peasy JSON</th>\\n\";\necho \"<tr>\\n\";\n\n// echo \"<td valign='top'><pre>\\n\";\n// echo file_get_contents(\"./zzcust.rpgle\");\n// echo \"</pre></td>\\n\";\n\n$xmlobj = new SimpleXmlIterator($clobOut);\necho \"<td valign='top'><pre>\\n\";\nvar_dump( $xmlobj );\necho \"</pre></td>\\n\";\n\n// $rawxml = $xmlobj->asXML();\n// echo \"<td valign='top'>\\n\";\n// var_dump( $rawxml );\n// echo \"</td>\\n\";\n\n$phparray = sxiToArray( $xmlobj );\necho \"<td valign='top'><pre>\\n\";\nvar_dump( $phparray );\necho \"</pre></td>\\n\";\n\n$jsonobj = json_encode( $xmlobj );\necho \"<td valign='top'>\\n\";\nvar_dump( $jsonobj );\necho \"</td>\\n\";\n\necho \"</tr>\\n\";\necho 
\"</table>\\n\";\n\n$myout = ob_get_clean();\n\n//---------------\n// check the data\nif (strpos($myout,'\"data\":[\"good item 7\",\"7.000\",\"700.0000\"]') < 1) die(\"Missing data \\n\".$myout);\n\n// good\necho $myout;\necho \"Success\\n\";\n\n// D pcustomer_id s 8p 0\n// D plines s 4p 0\n// D pline_ds DS Dim(10) qualified\n// D item 35a\n// D qty 11p 3\n// D price 14p 4\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// C PARM pcustomer_id\n// C PARM plines\n// C PARM pline_ds\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCUST' lib='xyzlibxmlservicexyz'>\n <parm><data var='pcustomer_id' type='8p0'>12345678</data></parm>\n <parm><data var='plines' type='10i0' enddo='plines'>0</data></parm>\n <parm>\n <ds var='pline_ds' dim='10' dou='plines'>\n <data var='item' type='35A'/>\n <data var='qty' type='11p3'/>\n <data var='price' type='14p4'/>\n </ds>\n </parm>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n\n// -----------------------\n// helper functions\n// -----------------------\nfunction sxiToArray($sxi)\n{ // copy : http://php.net/manual/en/class.simplexmliterator.php\n // author: ratfactor at gmail dot com\n // note : @ADC - added attributes processing\n $a = array();\n for( $sxi->rewind(); $sxi->valid(); $sxi->next() ) {\n if(!array_key_exists($sxi->key(), $a)){\n $a[$sxi->key()] = array();\n }\n if($sxi->hasChildren()){\n // @ADC - added attributes processing\n $attributes = $sxi->current()->attributes();\n $att = array();\n foreach ($attributes as $attributeName => $attributeValue)\n {\n $attribName = strtolower(trim((string)$attributeName));\n $attribVal = trim((string)$attributeValue);\n $att[$attribName] = $attribVal;\n }\n $a[$sxi->key()][] = array('@attr'=>$att);\n // child\n $a[$sxi->key()][] = sxiToArray($sxi->current());\n }\n else{\n // @ADC - added 
attributes processing\n $attributes = $sxi->current()->attributes();\n $att = array();\n foreach ($attributes as $attributeName => $attributeValue)\n {\n $attribName = strtolower(trim((string)$attributeName));\n $attribVal = trim((string)$attributeValue);\n $att[$attribName] = $attribVal;\n }\n $a[$sxi->key()][] = array('@attr'=>$att,'@value'=>strval($sxi->current()));\n }\n }\n return $a;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5298013091087341,
"alphanum_fraction": 0.5387611985206604,
"avg_line_length": 42.50847625732422,
"blob_id": "2873a86d5d43f396c7c6a77b0a42be1f9cde9a32",
"content_id": "e69de3f08b3492d000173fa9edc983d4ec7f98f8",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2567,
"license_type": "permissive",
"max_line_length": 134,
"num_lines": 59,
"path": "/test/php/xmlservice_pdo_ibm.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// PHP driver: xmlservice_pdo_ibm.php\n// Notes:\n// Assumes you have PHP pdo_ibm driver enabled in Zend Server.\n// You may use PHP pdo_ibm driver on IBM i (1-tier)\n// or from Linux/Windows to IBM i (2-tier).\n// For help:\n// http://www.youngiprofessionals.com/wiki/index.php/PHP/DB2\nif (!extension_loaded('pdo_ibm')) {\n die('Error: pdo_ibm extension not loaded, use Zend Server GUI to enable.');\n}\n\n// *** XMLSERVICE call (DB2 driver) ***\n// Note:\n// Connection ($procConn) is global to avoid looping\n// re-open/re-close that errors most drivers\nfunction xmlservice($xml) {\nglobal $i5persistentconnect, $database, $user, $password, $ipc, $ctl, $procConn, $procLib, $procPlug, $procPlugR;\n $xmlIn = $xml;\n $xmlOut = '';\n if (!$procConn) {\n $database = \"ibm:\".$database;\n try {\n if ($i5persistentconnect) $opt = array(PDO::ATTR_PERSISTENT=>true, PDO::ATTR_AUTOCOMMIT=>true); // persistent/pooled connection\n else $opt = array(PDO::ATTR_AUTOCOMMIT=>true); // full open/close connection\n $procConn = new PDO($database, strtoupper($user), strtoupper($password), $opt);\n if (!$procConn) throw new Exception(\"Bad\");\n } catch( Exception $e ) {\n die(\"Bad connect: $database, $user\");\n }\n }\n try {\n $stmt = $procConn->prepare(\"call $procLib.$procPlug(?,?,?,?)\"); // Call XMLSERVICE\n // stored procedure interface\n // in/out parameter (xmlOut)\n // sizes: iPLUG4K - iPLUG15M\n if (!$stmt) throw new Exception('Bad');\n } catch( Exception $e ) {\n $err = $procConn->errorInfo();\n $cod = $procConn->errorCode();\n die(\"Bad prepare: \".$cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n }\n try {\n $r1=$stmt->bindParam(1,$ipc, PDO::PARAM_STR);\n $r2=$stmt->bindParam(2,$ctl, PDO::PARAM_STR);\n $r3=$stmt->bindParam(3,$xmlIn, PDO::PARAM_STR);\n $r4=$stmt->bindParam(4,$xmlOut, PDO::PARAM_STR|PDO::PARAM_INPUT_OUTPUT);\n $ret = $stmt->execute();\n if (!$ret) throw new Exception('Bad');\n } catch( Exception $e ) {\n $err = $stmt->errorInfo();\n $cod = 
$stmt->errorCode();\n die(\"Bad execute: \".$cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n }\n return driverJunkAway($xmlOut); // just in case driver odd\n // ... possible junk end record,\n // record</script>junk\n}\n?>\n"
},
{
"alpha_fraction": 0.4969899654388428,
"alphanum_fraction": 0.527090311050415,
"avg_line_length": 27.740385055541992,
"blob_id": "20acc77641971f26a3f676da907c7a52bafe8549",
"content_id": "d81a63cfa78b514f74b5821d81f0c7570f22bbc5",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2990,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 104,
"path": "/test/php/test_10128_ZZNADA_ZZCALL_ZZNADA_ibm_db2_io_nada.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - RPG no parm and no return\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\necho \"my ipc is $ipc ... \\n\";\nfor ($i=0; $i<3; $i++) {\n switch($i) {\n case 0:\n echo \"Call ZZNADA ... \\n\";\n $clobIn = getxml1(); // XML in\n break;\n case 1:\n echo \"Call ZZCALL ... \\n\";\n $clobIn = getxml2(); // XML in\n break;\n case 2:\n echo \"Call ZZNADA ... \\n\";\n $clobIn = getxml1(); // XML in\n break;\n default:\n die(\"loop bad\");\n break;\n }\n $clobOut = \"\"; // XML out\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // dump XML out\n var_dump($clobOut);\n // XML check\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\\n\");\n // check raw XML no error?\n if (strlen(trim($clobOut))<1 || strpos($clobOut,\"error\")>0) die(\"Fail\\n\");\n}\necho \"Success\\n\";\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zznada: check no parms\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zznada B export\n// D zznada PI\nfunction getxml1() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZNADA'>\n</pgm>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n// 
*+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zznada: check no parms\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zznada B export\n// D zznada PI\nfunction getxml2() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5523595213890076,
"alphanum_fraction": 0.5849438309669495,
"avg_line_length": 31.71323585510254,
"blob_id": "2b779cd34cad5b9ce5c938ebd4d05f80ebf556d9",
"content_id": "a2bd7c68bb7d4c402e9832472743f9c3509fe39c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4450,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 136,
"path": "/test/php/test_15651_69PARMS_OPM.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: OPM 64 parms\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$zz0cnt = 3;\n$clobIn = getxml($zz0cnt);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobIn);\nif (!$xmlobj) die(\"Fail XML input\\n\");\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// good\necho \"Success\\n\";\n\nfunction getxml($zz0cnt) {\n$clob = <<<ENDPROC\n<?xml version='1.0' encoding='ISO-8859-1' ?>\n<script>\n<pgm mode='opm' name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data 
type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data 
type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n<parm io='in'><data type='1A'/></parm>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.4987458288669586,
"alphanum_fraction": 0.5029264092445374,
"avg_line_length": 48.83333206176758,
"blob_id": "c4ca77afced885c6807ca191980e68e8b2ff74b9",
"content_id": "4674f25bf07c49d6e123c383d39a9355291ea6c4",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2392,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 48,
"path": "/test/php/xmlservice_odbc.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// PHP driver: xmlservice_odbc.php\n// Notes:\n// Assumes you have PHP odbc driver enabled in Zend Server.\n// You may use PHP odbc driver on IBM i (1-tier)\n// or from Linux/Windows to IBM i (2-tier).\n// For help:\n// http://www.youngiprofessionals.com/wiki/index.php/PHP/DB2\nif (!extension_loaded('odbc')) {\n die('Error: odbc extension not loaded, use Zend Server GUI to enable.');\n}\n\n// *** XMLSERVICE call (DB2 driver) ***\n// Note:\n// Connection ($procConn) is global to avoid looping\n// re-open/re-close that errors most drivers\nfunction xmlservice($xml) {\nglobal $i5persistentconnect, $database, $user, $password, $ipc, $ctl, $procConn, $procLib, $procPlug, $procPlugR;\n $xmlIn = $xml;\n $xmlOut = '';\n if (!$procConn) {\n if ($i5persistentconnect) $procConn = odbc_pconnect($database, $user, $password); // persistent/pooled connection\n else $procConn = odbc_connect($database, $user, $password); // full open/close connection\n }\n $stmt = odbc_prepare($procConn, \"call $procLib.$procPlugR(?,?,?)\"); // Call XMLSERVICE\n // stored procedure interface\n // result set return (fetch)\n // sizes: iPLUGR4K - iPLUGR15M\n if (!$stmt) die(\"Bad prepare: \".odbc_errormsg());\n $options = array($ipc, // ? - /tmp/raw_$user (*sbmjob)\n $ctl, // ?- *here or *sbmjob\n $xmlIn); // ?- XML input script\n // bad behavior odbc extension\n // ... IBM i result set warning???\n error_reporting(~E_ALL); // bad behavior odbc extension\n // ... IBM i result set warning???\n $ret=odbc_execute($stmt,$options);\n if (!$ret) die(\"Bad execute: \".odbc_errormsg());\n error_reporting(E_ALL);\n while(odbc_fetch_row($stmt)) {\n $xmlOut .= driverJunkAway(odbc_result($stmt, 1)); // bad behavior odbc extension\n // ... possible junk end record,\n // xmlservice provided $ctl='*hack'\n // record</hack>junk\n }\n return $xmlOut;\n}\n?>\n"
},
{
"alpha_fraction": 0.54499751329422,
"alphanum_fraction": 0.566616415977478,
"avg_line_length": 29.58461570739746,
"blob_id": "5e0e36c4fa2bca63da16d3108ebc7bdef0c91735",
"content_id": "c37be964cf6dcc54faff58cc72f14ce92ba42ce7",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1989,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 65,
"path": "/test/php/test_80603_db2_io_ZZVARY_error.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
    "text": "--TEST--\nXML i Toolkit: ibm_db2 SRVPGM - varying cdata error\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG32K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\n\n// good\necho \"Success\\n\";\n\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzvary: check return varying\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzvary B export\n// D zzvary PI 20A varying\n// D myName 10A varying\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0' encoding='ISO-8859-1' ?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZVARY'>\n <parm comment='search this name'>\n <data var='myName' type='10A' varying='on'><![CDATA[<Ranger>]]></data>\n </parm>\n <overlay io='out'>\n <data var='myOver1' type='10s0'>R1</data>\n </overlay>\n <return>\n <data var='myNameis' type='20A' varying='on'><![CDATA[<Mud>]]></data>\n </return>\n</pgm>\n</script>\nENDPROC;\n$test = simplexml_load_string($clob);\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6878980994224548,
"alphanum_fraction": 0.6878980994224548,
"avg_line_length": 23.076923370361328,
"blob_id": "c5fe8350d79ff1bb2ae11efa07c7dbd2453b3055",
"content_id": "c82afe27e0793d736170d3a89a0e3ca70bfa809c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 314,
"license_type": "permissive",
"max_line_length": 40,
"num_lines": 13,
"path": "/test/byval/testdiag.py",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "import os\nfrom itoolkit import *\nfrom itoolkit.lib.ilibcall import *\nitransport = iLibCall()\nitool = iToolKit()\nmyxml = \"<diag/>\"\nitool.add(iXml(myxml))\n# print(itool.xml_in())\nitool.call(itransport)\n# print(itool.xml_out())\ndiag = itool.dict_out()\nif 'version' in diag:\n print (\"version : \"+diag['version'])\n\n"
},
{
"alpha_fraction": 0.2960711121559143,
"alphanum_fraction": 0.3141382336616516,
"avg_line_length": 85.7313461303711,
"blob_id": "118030bef7d9740c2c98b010a67bd9df75a4eb24",
"content_id": "b058bfa727a163f1a1af70a56b838fb7a68cf6f9",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 17437,
"license_type": "permissive",
"max_line_length": 392,
"num_lines": 201,
"path": "/docs/performance.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\n\nXMLSERVICE Performance\n======================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nPerformance at a glance\n-----------------------\n\nThe following tests are full browser simulations (Apache ab tool), not PHP loop tests\n\n* Apache ab web site stress tests are performed from the 5250 command line (call qp2term) or ssh myibmi using PASE (see Apache ab link install instructions)\n\n * ``> ab -t 15 -c 10 http://lp0264d/level/xxtoolkit_new.php``\n \n * You can run stress tests from 2-tier Linux/Windows using Apache ab tool, but i am using Apchae ab from PASE.\n * Apache ab tool is not perfect, but if you use relatively \"sane\" number of browsers like -c 10 it will work.\n * Apache ab test is designed to drive CPU to 100% (a good thing), so don't panic about CPU\n\n+-----------------------+-------------------------------------------------------------------------------------------+\n| Connection | Sample (ab tool) |\n+=======================+===========================================================================================+\n| Apache | ``<H1>Hello world</H1>`` ................................. Requests per second: 767.87 |\n+-----------------------+-------------------------------------------------------------------------------------------+\n| PHP | ``<?php echo \"Hello world\"; ?>`` ......................... Requests per second: 560.05 |\n+-----------------------+-------------------------------------------------------------------------------------------+\n| ibm_db2 | ``$conn = db2_connect($database,$user,$password);`` ....... Requests per second: 64.54 |\n| | |\n| | ``$conn = db2_pconnect($database,$user,$password);`` ...... 
Requests per second: 504.04 |\n| | |\n| | :: |\n| | |\n| | $ipc = \"\"; |\n| | $ctl = \"*justproc\"; // *justproc (no progam call) |\n| | $clobIn = \"<?xml version='1.0'?>\"; |\n| | $clobOut = \"\"; |\n| | $stmt = db2_prepare($conn, \"call XMLSERVICE.iPLUG4K(?,?,?,?)\"); |\n| | $ret = db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN); |\n| | $ret = db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN); |\n| | $ret = db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN); |\n| | $ret = db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT); |\n| | $ret = db2_execute($stmt); |\n| | echo \"success\"; |\n+-----------------------+-------------------------------------------------------------------------------------------+\n|XMLSERVICE | ``$ipc = ''; $ctl = '*here';`` ............................ Requests per second: 69.58 |\n| | |\n| | ``$ipc = '/tmp/packers'; $ctl = '*sbmjob';`` ............... Requests per second: 320.72 |\n| | |\n| | :: |\n| | |\n| | <?php |\n| | require_once('connection2.inc'); |\n| | if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password); |\n| | else $conn = db2_connect($database,$user,$password); |\n| | if (!$conn) echo \"Bad connect: $conn,$database,$user,perm=$i5persistentconnect\"; |\n| | $ipc = '/tmp/packers'; // *here no need ipc |\n| | $ctl = \"*sbmjob\"; // *here for no additional private job |\n| | // $ipc = ''; |\n| | // $ctl = '*here'; |\n| | $clobIn = \"<?xml version='1.0'?> |\n| | <pgm name='ZZCALL' lib='$libxmlservice'> |\n| | <parm io='both'> |\n| | <data type='1A'>a</data> |\n| | </parm> |\n| | <parm io='both'> |\n| | <data type='1A'>b</data> |\n| | </parm> |\n| | <parm io='both'> |\n| | <data type='7p4'>11.1111</data> |\n| | </parm> |\n| | <parm io='both'> |\n| | <data type='12p2'>222.22</data> |\n| | </parm> |\n| | <parm io='both'> |\n| | <ds> |\n| | <data type='1A'>x</data> |\n| | <data type='1A'>y</data> |\n| | <data type='7p4'>66.6666</data> |\n| | <data type='12p2'>77777.77</data> |\n| | </ds> |\n| | </parm> |\n| | 
<return> |\n| | <data type='10i0'>0</data> |\n| | </return> |\n| | </pgm>\"; |\n| | $clobOut = \"\"; |\n| | $stmt = db2_prepare($conn, \"call XMLSERVICE.iPLUG4K(?,?,?,?)\"); |\n| | $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN); |\n| | $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN); |\n| | $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN); |\n| | $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT); |\n| | $ret=db2_execute($stmt); |\n| | // var_dump($clobOut); |\n| | if (strpos($clobOut,\"4444444444.44\")>0) echo \"success\"; |\n| | else echo \"fail\"; |\n| | ?> |\n+-----------------------+-------------------------------------------------------------------------------------------+\n|Toolkit |``setToolkitServiceParams(array('stateless'=>true));`` ..... Requests per second: 62.97 |\n| |:: |\n| | |\n| | <?php |\n| | require_once('connection2.inc'); |\n| | require_once(\"ToolkitService.php\"); |\n| | if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password); |\n| | else $conn = db2_connect($database,$user,$password); |\n| | if (!$conn) echo \"Bad connect: $conn,$database,$user,perm=$i5persistentconnect\"; |\n| | try { $ToolkitServiceObj = ToolkitService::getInstance($conn); } |\n| | catch (Exception $e) { die($e->getMessage()); } |\n| | // $ToolkitServiceObj->setToolkitServiceParams(array('stateless'=>true, |\n| | 'plug'=>'iPLUG32K')); |\n| | ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>'/tmp/packers', |\n| | 'plug'=>'iPLUG32K')); |\n| | $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARA', 'var1', 'Y'); |\n| | $param[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'INCHARB', 'var2', 'Z'); |\n| | $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'INDEC1', 'var3', |\n| | '001.0001'); |\n| | $param[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'INDEC2', 'var4', |\n| | '0000000003.04'); |\n| | $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARA', 'ds1', 
'A'); |\n| | $ds[] = $ToolkitServiceObj->AddParameterChar ('both', 1, 'DSCHARB', 'ds2', 'B'); |\n| | $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 7,4,'DSDEC1', 'ds3', |\n| | '005.0007'); |\n| | $ds[] = $ToolkitServiceObj->AddParameterPackDec('both', 12,2,'DSDEC1', 'ds4', |\n| | '0000000006.08'); |\n| | $param[] = $ToolkitServiceObj->AddDataStruct($ds); |\n| | $clobOut = $ToolkitServiceObj->PgmCall('ZZCALL', $libxmlservice, $param, null, null); |\n| | // var_dump($clobOut); |\n| | $value = \"what is ...\".$clobOut[\"io_param\"][\"ds4\"]; |\n| | if (strpos($value,\"4444444444.44\")>-1) echo \"success\"; |\n| | else echo \"fail\"; |\n| | ?> |\n+-----------------------+-------------------------------------------------------------------------------------------+\n\n\nHow many users?\n---------------\n\nApache ab tool is designed to \"exercise\" CPU/Apache engine as fast as possible, therefore 'ab tool' requests/second multiply by a factor of 5-10x to estimate 'user capacity' assuming humans actually read/scan each browser page (~ 5-10 seconds per page). 
So, on this machine using just ibm_db2 without toolkit, the following is about as fast as any given connection can run using only ONE job.\n::\n\n Drive the CPU to 100% capacity, then estimate actual users supported to drive the same CPU 100%.\n >> ab -t 15 -c 15 http://lp0264d/ibm_db2_fetch.php?pool\n 483 requests/second..............483 users/second (varies widely per application base line)\n 28,980 requests/minute..............600 users/minute (relatively constant user capacity ibm_db2 scripts persistent connections)\n 1,738,800 requests/hour.............36,000 users/hour (relatively constant user capacity ibm_db2 scripts persistent connections)\n 41,731,200 requests/day (24 hours)..864,000 users/day (relatively constant user capacity ibm_db2 scripts persistent connections)\n \nThings to consider:\n\n* Who am i testing with ab tool?\n \n * lighting fast ab tool \"no delay\" push CPU 100% (superman users faster than a speeding bullet ... a machine)\n * normal thinking users \"10 second delay user actions\" to push CPU 100% (average Joe ... 
a human)\n * your application complexity may go far beyond simple ibm_db2_fetch.php above, but most any\n web 'load runner' tool (like 'ab tool') or worse php script 'loop call test'\n requires human action calculation to understand user capacity, so don't believe\n blog people until you actually run your application specific performance test.\n\nIf you want to run Apache ab tool on PASE (IBM i) ...\n\n * ab — Apache ab tool from old Zend Core (simulate browsers)\n\nXMLSERVICE still evolving\n-------------------------\n\nToolkit/XMLSERVICE performance will evolve over time, we understand performance is important and we intend push data through faster (perhaps much faster).\n\n* What runs fast?\n\n * calling PGMs/SRVPGMs (RPG, CLP, Cobol, System API, etc.)\n\n * loading your program is relatively slow so keep process alive (see below)\n * use persistent connections (db2_pconnect/i5_pconnect)\n * use IPC state full XMLSERVICE ('InternalKey'=>'/tmp/packers')\n * turn off PHP debug, logs, traces toolkit all ini files\n\n* What runs slower?\n\n * CMDs, especially CMDS that return data (RTVJOBA, etc.)\n\n * CMDs return data use REXX, which is slow first time\n * CMDs that do NOT return data, will run fairly quickly\n * call CLP using the program interface (not command interface)\n * use IPC state full XMLSERVICE ('InternalKey'=>'/tmp/packers')\n * turn off PHP debug, logs, traces toolkit all ini files\n\n* What runs slowest?\n\n * PASE sh utilities (system wrkactjob, ls, ps, etc.)\n \n * not much can be done because most time due to fork of another job (Unix/PASE style)\n * what little can be done is mostly same all call types\n * use IPC state full XMLSERVICE ('InternalKey'=>'/tmp/packers')\n * turn off PHP debug, logs, traces toolkit all ini files\n\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEConfig?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.5898477435112,
"alphanum_fraction": 0.607106626033783,
"avg_line_length": 26.73239517211914,
"blob_id": "fb8aebfbb68393a01e1dfe17d24e495f5e257907",
"content_id": "4a0587ac58d02d5300c1ca95b4cc8e85fd2c9c85",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1970,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 71,
"path": "/test/php/test_98140_ZZSTEP_ibm_db2_io_step.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - RPG step var incrementing\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\nfor ($i=0; $i<5; $i++) {\n $stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. 
expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm return\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"No XML pgm return ($lib/$name.$func)\");\n $data = (string)$retn[0]->data;\n if ($data == 0) die(\"return not greater than '0'\");\n // good\n echo \"Success ($lib/$name.$func return $data)\\n\";\n}\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZSTEP'>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5001814961433411,
"alphanum_fraction": 0.5284936428070068,
"avg_line_length": 31.785715103149414,
"blob_id": "830a2db9b1b66db024e8ae1684357f328f50bacc",
"content_id": "a1942b7404cca97661996bd2894e9800d93f31a2",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2755,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 84,
"path": "/test/php/test_80601_db2_io_ZZARRAY2_deep_error.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - deep error in data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n\n// good\necho \"Success\\n\";\n\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarrbad: check parameter array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzarrbad B export\n// D zzarrbad PI\n// D myName 10A\n// D myMax 10i 0\n// D myCount 10i 0\n// D findMe likeds(dcRec_t) dim(ARRAYMAX)\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZARRBAD'>\n <parm comment='search this name'>\n <data var='myName' type='10A'>Ranger</data>\n </parm>\n <parm comment='max allowed return'>\n <data var='myMax' type='10i0'>128</data>\n </parm>\n <parm comment='actual count returned'>\n <data var='myCount' type='10i0' enddo='mycount'>0</data>\n </parm>\n <parm comment='array return'>\n <ds var='dcRec_t' dim='999' dou='mycount'>\n <data var='dcMyName' type='10A'>na</data>\n <data var='dcMyJob' type='4096A'>na</data>\n <data var='dcMyRank' type='10i0'>0</data>\n <data var='dcMyPay' type='12p2'>0.0</data>\n </ds>\n </parm>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6466148495674133,
"alphanum_fraction": 0.6804948449134827,
"avg_line_length": 41.05250549316406,
"blob_id": "a2118151fa865862f11d89fb5cfaa5d15e60eb06",
"content_id": "1d61c7a47a6816dd2f80e65baae0de86a9c55ab0",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 17621,
"license_type": "permissive",
"max_line_length": 461,
"num_lines": 419,
"path": "/docs/faq.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE FAQ\n==============\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nTip\n---\nSome webserver products take issue with ``<script>`` in XML documents. If you find chunks missing in your XML, try ``<myscript>``\n::\n\n <script> change <myscript>\n : to :\n </script> this </myscript>\n\n\n\nXMLSERVICE is hanging right out of box (because CCSID 65535)\n------------------------------------------------------------\n::\n\n Q: XMLSERVICE is returning junk, how can i fix?\n Q: XMLSERVICE is not working at all and IPC /tmp/xxx dir not created, how can i fix?\n Q: XMLSERVICE is hanging?\n A: You need to change Apache settings.\n\n The following steps lead to a successful test::\n\n end current running jobs:\n ENDTCPSVR SERVER(*HTTP) HTTPSVR(ZENDSVR)\n ENDPJ SBS(QSYSWRK) PGM(QSQSRVR) OPTION(*IMMED)\n\n check the system CCSID value is 65535:\n DSPSYSVAL SYSVAL(QCCSID)\n Coded character set\n identifier . . . . . : 65535 1-65535\n\n edit configuration:\n /www/zendsvr/conf/fastcgi.conf (optional from editor):\n SetEnv=\"LANG=C\"\n\n /www/zendsvr/conf/httpd.conf (web admin GUI port 2001 - ZENDSVR):\n DefaultFsCCSID 37 ... or 208 (Italian) ... or so on ...\n CGIJobCCSID 37 ... or 208 (Italian) ... 
or so on ...\n\n restart jobs:\n STRPJ SBS(QSYSWRK) PGM(QSQSRVR)\n STRTCPSVR SERVER(*HTTP) HTTPSVR(ZENDSVR)\n\n\n wrkactjob\n If you see ANY XMLSERVICE jobs, kill them before retry.\n\n This will allow XMLSERVICE to work (IPC was junk not /tmp/whatever).\n This should fix \"junk\" from ibm_db2 on i (odbc, pdo_ibm, etc.).\n\n\nphp-cli command line issues (CCSID 65535) ...\n---------------------------------------------\n\nThe easy way to fix this is to simply CHGUSRPRF and specify a valid CCSID like 37, then as you sign-on (ssh, call qp2term) you will not be stuck with nasty 65535 problems.\n\nAnd now i can run my test cases (ssh->tcsh)\n::\n\n >setenv PATH /usr/local/zendsvr/bin:$PATH\n >setenv LIBPATH /usr/local/zendsvr/lib\n >cd /mytests\n >pear run-tests *.phpt\n >php test.php\n\n\n\nXMLSERVICE DBCS (CCSID 5035)\n----------------------------\n\n::\n\n Q: Did anyone try DBCS with XMLSERVICE?\n A: Yes. Japanese 5035 appears to work.\n The following steps lead to a successful test:\n\n end current running jobs:\n ENDTCPSVR SERVER(*HTTP) HTTPSVR(ZENDSVR)\n ENDPJ SBS(QSYSWRK) PGM(QSQSRVR) OPTION(*IMMED)\n\n change the system CCSID value:\n CHGSYSVAL SYSVAL(QCCSID) VALUE(5035)\n\n edit configuration:\n /www/zendsvr/conf/fastcgi.conf (editor:\n SetEnv=\"CCSID=1208\" SetEnv=\"LANG=C\"\n\n /www/zendsvr/conf/httpd.conf (web admin GUI port 2001 - ZENDSVR):\n DefaultFsCCSID 5035\n CGIJobCCSID 5035\n\n restart jobs:\n STRPJ SBS(QSYSWRK) PGM(QSQSRVR)\n STRTCPSVR SERVER(*HTTP) HTTPSVR(ZENDSVR)\n\n Note: If using ODBC will likely have to start/stop prestart jobs as well.\n\n\nXMLSERVICE multi-CCSID or problems?\n-----------------------------------\nUsing default PHP toolkit DB2 clob interface (iPLUGxxx/iPLUGRxxx), ccsid conversion occurs naturally as DB2 client/server and you will not have to code before/after, but method is available if you have a specific concern or you have scripts returning many different languages.\n\nExample::\n\n <data type='200A' before='819/424' 
after='424/819'>bin2hex('Hebrew_ascii_raw_chars')</data>\n <data type='200A' before='819/1098' after='1098/819'>bin2hex('Farsi_ascii_raw_chars')</data>\n <data type='200A' before='819/880' after='880/819'>bin2hex('Russia_ascii_raw_chars')</data>\n <data type='200A' before='819/280' after='280/819'>bin2hex('Italy_ascii_raw_chars')</data>\n <data type='200A' before='819/273' after='273/819'>bin2hex('Germany_ascii_raw_chars')</data>\n <data type='200A' before='819/1088' after='1088/819'>bin2hex('Korea_ascii_raw_chars')</data>\n <data type='200A' before='1208/13488' after='13488/1208'>bin2hex('Japan_ascii_raw_chars')</data>\n <data type='200A' before='1208/13488' after='13488/1208'>bin2hex('China_ascii_raw_chars')</data>\n where:\n before - XMLSERVICE convert CCSID before ILE program call\n after - XMLSERVICE convert CCSID after ILE program call for client return\n bin2hex() - script hex string unaltered ascii image (also returned hex string avoid any conversion)\n pack() - script uses pack('H*',\"xml_hex_back\") function in PHP program for ascii characters\n Note:\n Up to four conversions can take place for the truly diabolical ccsid issues\n <data type='A' before='cc1/cc2/cc3/cc4' after='cc4/cc3/cc2/cc1'>bin2hex('wild_ascii_raw_chars')</data>\n flow:\n -> PHP client bin2hex('wild_ascii_raw_chars')\n -> xmlservice hex2bin back to 'wild_ascii_raw_chars'\n -> xmlservice convert cc1->cc2->cc3->cc4 (before)\n -> xmlservice make ILE call\n -> xmlservice convert cc4->cc3->cc2->cc1 (after)\n -> xmlservice tohex \"xml_hex_back\"\n -> PHP client $chars = pack('H*',\"xml_hex_back\")\n\n\nCan i use CDATA for XML special characters?\n-------------------------------------------\n\nMany common XMLSERVICE tags already support a form of CDATA\n::\n\n <data><![CDATA[<i am tony>]]></data>\n <query><![CDATA[select * from animal where ID < 5 and weight > 10.0]]></query>\n <prepare><![CDATA[select * from animal where ID < ? 
and weight > ?]]></prepare>\n\nBUT there are restrictions for speed of parsing (i think reasonable)\n\n* not allowed to put reserved words in cdata - NO ``<query><![CDATA[</query>]]></query>``\n* no binary data (character data only) - ``<query><![CDATA[binary stuff]]></query>``\n* there may be other restrictions because i don't know everything about CDATA \"abuse\" in XML.\n\n.. \n LIBL, LIBL, LIBL\n ----------------\n\n Information moved to performance page *[[XMLSERVICELibl | %blue%{XMLSERVICE Libl}%%]]*.\n\n prestart XMLSERVICE jobs dramatically improve first-call performance?\n ---------------------------------------------------------------------\n\n Information moved to performance page *[[XMLSERVICEConfig | %blue%{XMLSERVICE Performance}%%]]*.\n\n Where do i find <xmlerrno> messages?\n ------------------------------------\n\n Information moved to error page *[[XMLSERVICEError | %blue%{XMLSERVICE Errors}%%]]*.\n\nHow do i kill/end XMLSERVICE jobs?\n----------------------------------\n\nXMLSERVICE jobs stay active until killed with CTL option \\*immed. You need only know the ipc name (/tmp/fred01, /tmp/sally43, etc.), and have appropriate profile abilities to issue the XML/CTL kill to a running XMLSERVICE job.\n\nExample kill XMLSERVICE for '/tmp/rangerusr'::\n\n $ipc='/tmp/rangerusr';\n $ctlKill=\"*immed\";\n $clobInKill = '<?xml version=\"1.0\"?>';\n $sql = \"call $libxmlservice.iPLUGR4K('$ipc','$ctlKill','$clobInKill')\";\n $ret=db2_exec($conn,$sql);\n\n\nCan i run \"stateless\" XMLSERVICE jobs (\\*here)?\n-----------------------------------------------\n\nXMLSERVICE can also run in the stored procedure (or web) job by using the ctl='\\*here' option. \nWhen you run in \\*here mode you do NOT need an IPC (ipc is ignored). 
XMLSERVICE is just another \nprogram in the job in \\*here mode, so when the job ends so does XMLSERVICE.\n\nExample XMLSERVICE running in stored procedure job::\n\n $ipc='not need ipc it is ignored';\n $ctl=\"*here\";\n $clobIn = '<?xml version=\"1.0\"?>\n <script>do stuff</script>';\n $sql = \"call $libxmlservice.iPLUGR512K('$ipc','$ctl','$clobIn')\";\n $ret=db2_exec($conn,$sql);\n\n**Note**: Performance is generally slower when running in this mode,\nbecause XMLSERVICE has to start from scratch each time. There are varying\ndegrees of overall XMLSERVICE performance using \\*here (\"stateless\")\ndepending on application interface supporting\npersistent/non-persistent/private connections (ie., db2_connect vs. db2_pconnect).\n\n**Warning**: It should be noted XMLSERVICE \"private\" connection\nwith ipc (ipc=/tmp/fred01, ipc=/tmp/fred02, ipc=/tmp/sally43) is most often\nthe best high performance fit for calling \"real\" RPG programs because most\nRPG programs actually do something other than \"hello world\" with\nmultiple files open and lot's of state you do not want to restart each time\ncalled (much better to dedicate a XMLSERVICE job to the task).\nSo, while \\*here is an easy answer for operator style process\nmanagement (older toolkit options), and works ok for small \"calculate this\"\nSRVPGMs, you will likely find \\*here does not work in a practical world\nwhere business tasks are much more complex (ie. 
exactly why XMLSERVICE\nis optimized for using a IPC connection).\n\nWhat is IPC??\n-------------\n\nIPC is traditional computer science abbreviation for InterProcess Communication (IPC), which in the case of XMLSERVICE is simply a unique directory name /tmp/mydir01 used for machine-wide unique 'key/token' to route XML script documents between various clients calling XMLSERVICE job(s)(/tmp/fred01, /tmp/sally02, etc.).\n\nBehind the scenes of XMLSERVICE the unique IPC provided by any client interface (/tmp/fred01, /tmp/sally02, etc.), establishes a shared memory area for XML information/control passing and a semaphore to provide one-at-time processing in XMLSERVICE job(s). All XMLSERVICE InterProcess Communication activities are keyed and routed to appropriate XMLSERVICE jobs by the IPC name in the /tmp directory (/tmp/fred01, /tmp/sally02, etc.).\n\nAs far as security is concerned, all normal IBM i IFS authority rules apply to the IPC access (/tmp/fred01, /tmp/sally02, etc.), \ntherefore assuming no group profile override precedence, only profile fred will be able to access IPC /tmp/fred01 XMLSERVICE job, \nand only sally profile will be able to access /tmp/sally XMLSERVICE job, and so on for all profiles used. Of course, \nhigh authority profiles like \\*SECOFR have more abilities over \\*USER profiles, but general rule is the profile you see \nin wrkactjob \"owns/uses\" the XMLSERVICE job.\n\n\ndeployment centric, not development centric (caches used XMLSERVICE)\n--------------------------------------------------------------------\n\nXMLSERVICE is a deployment/production centric service, not development centric, therefore XMLSERVICE internal caches are used to speed up next call processing. 
XMLSERVICE job(s) caching allows production sites to simply ignore costly operations like XMLSERVICE find/load called modules (your RPG programs), under the assumption that production machine is stable (no recompiles going on).\n\nHowever, if you are doing development work on your machine (recompiles CL/RPG), you will have to end active/running XMLSERVICE job(s) and restart to call your new program.\n\nExample (actual email): My coworker has told me on more than one occasion that I'm calling his \"old\" CL or RPG programs even though he recompiled them. I resolve this by restarting the correct XMLSERVICE jobs, allowing me to call the current versions of CL/RPG programs.\n\n\nWhy not use pcml?\n-----------------\n\npcml falls short in terms of supported RPG types and common attribute conventions (varying) and simply cannot do what this XMLSERVICE custom XML interface is capable of doing. pcml falls even shorter when trying to return complex structures as <return> elements, which is very popular for modern RPG SRVPGMs (XMLSERVICE supports of course). 
Last, pcml is especially short of the \"complete scripting\" mark when calling PASE shells, or CMDs (XMLSERVICE supports).\n\nI leave it to the entrepreneurial user to XSLT map PCML to XMLSERVICE (hint).\n\nDB2 XML SQL not work ctl='\\*here'?\n----------------------------------\n\nDB2 SQL XML does not work in-line stateless (``$ctl='*here'``), but works fine with normal private connections \n(``ipc='/tmp/fred', $ctl='*sbmjob'``).\n\n..\n DB2 Connect (2 tier)\n --------------------\n\n See this wiki link for details [[Tier2/DB2Connect | DB2 Connect]]\n\n ODBC (2 tier)\n --------------\n\n See this link [[Tier2/IAccess | ODBC IAccess]]\n\nWorkaround for \"bad\" drivers leaving junk back of output clob\n-------------------------------------------------------------\nThe world is not perfect 1-2 tier DB2 drivers IBM i, Windows, Linux, etc., so occasionally a \"hack\" is handy.\n\nI always \"scope\" my XML input requests with ``<script>...</script>``, so anything past tailing ``</script>`` is 'junk' (errors return as ``<report>...</report>``).\n\nThe new XMLSERVICE keyword \\*hack adds ``</hack>`` back of every record return result set can be very useful for drivers that do not support stored procedure \nin/out parameters like PHP odbc.\n\n::\n\n function driverJunkAway($xml)\n {\n $clobOut = $xml;\n if (! 
trim($clobOut)) return $clobOut;\n\n // result set has extra data (junk)\n $fixme = '</hack>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos);\n }\n else {\n $fixme = '</script>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos+strlen($fixme));\n }\n // maybe error/performance report\n else {\n $fixme = '</report>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos+strlen($fixme));\n }\n }\n }\n return $clobOut;\n }\n\n\nCan i use curl to test XMLSERVICE?\n----------------------------------\n\nThis works from my Linux machine to IBM i.\n::\n\n curl http://myibmi/cgi-bin/xmlcgi.pgm --data-urlencode db2=*LOCAL --data-urlencode uid=*NONE \n --data-urlencode pwd=*NONE --data-urlencode ipc=/tmp/rangerhtmlonly --data-urlencode ctl=*sbmjob \n --data-urlencode xmlin=\"<?xml version='1.0'?><script><pgm name='ZZCALL' lib='XMLSERVICE'>\n <parm><data type='1A'>a</data></parm><parm><data type='1A'>b</data></parm><parm>\n <data type='7p4'>11.1111</data></parm><parm><data type='12p2'>222.22</data></parm><parm><ds>\n <data type='1A'>x</data><data type='1A'>y</data><data type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data></ds></parm><return><data type='10i0'>0</data>\n </return></pgm></script>\" --data-urlencode xmlout=32768\n\nI need to start over kill everything\n------------------------------------\n\n... doing a lot of machine \"updating\" test cw/xmlservice versions ... many install errors ... state unknown ... 
i recommend following actions assuming only Zend Server running this system (be kind to others) ...\n\n1) end zendsrvr http\n::\n\n ENDTCPSVR SERVER(*HTTP) INSTANCE(ZENDSVR)\n\nwrkactjob wait all php-cgi to go down, kill php-cgi \\*immed by hand if won't die.\n\n2) clear left over IPC attributes for cw/xmlservice test profiles\n::\n\n call qp2term\n > ipcs | grep -i qtm | awk '{print \"ipcrm -\" tolower($1) \" \"$2}' | sh\n > ipcs | grep -i myid1 | awk '{print \"ipcrm -\" tolower($1) \" \"$2}' | sh\n > ipcs | grep -i myid1 | awk '{print \"ipcrm -\" tolower($1) \" \"$2}' | sh\n Where:\n > qtm - clear IPCs for QTMHHTTP\n > myid1 - clear IPCs for profile MYID1\n > myid2 - clear IPCs for profile MYID2\n\n3) carefully reset /tmp\n::\n\n call qp2term\n > cd /tmp\n > pwd\n /tmp --- if you do not see /tmp, stop now rm -R may kill whole system\n > rm -R *\n\n4) recycle DB2 connections (optional)\n::\n\n ENDPJ SBS(QSYSWRK) PGM(QSQSRVR) OPTION(*IMMED)\n STRPJ SBS(QSYSWRK) PGM(QSQSRVR)\n\n5) re-check configuration\n\nChange these files and restart everything (web, db2, xmlservice, tec.)\n::\n\n I. Apache: /www/zendsvr/conf/httpd.conf <-- (ILE Apache side)\n DefaultFsCCSID 37 ... or 280 (Italian) ... or so on ...\n CGIJobCCSID 37 ... or 280 (Italian) ... or so on ...\n\n6) restart http\n::\n\n STRTCPSVR SERVER(*HTTP) HTTPSVR(ZENDSVR)\n\n\n\nPASE php-cgi out of memory (SetEnv=\"LDR_CNTRL=MAXDATA=0x80000000\")\n------------------------------------------------------------------\n\n*This is a very rare condition, but if you find your huge size PHP script runs out of PASE memory ...*\n\nYou can force most any 32-bit PASE program to leave more heap space (including php-cgi), just specify environment variable up to LDR_CNTRL=MAXDATA=0x80000000 (8 * 256MB). 
However, please be aware while you are increasing the heap, you are also limiting the number of 256MB segments available for shared memory (not commonly used by PHP programs anyway).\n\n* [[http://www.ibm.com/developerworks/aix/library/j-nativememory-aix/index.html]] - Thanks for the memory article applies to PASE as well (therefore also PHP)*\n\nIn FastCGI php-cgi just add directive to fastcgi.config.\n::\n\n /www/zendsvr/conf/fastcgi.conf\n Server type=\"application/x-httpd-php\" ... normal stuff ... SetEnv=\"LDR_CNTRL=MAXDATA=0x80000000\"\n\n\nGraphically memory of PASE 32 bit LDR_CNTRL=MAXDATA=0x80000000. Variations of LDR_CNTRL allow you to rearrange even reserved segments and shared libraries, but these uses of LDR_CNTRL are infrequently deployed (except for the most gruesome memory hog Java programs MAXDATA=0xD0000000\\@DSA).\n::\n\n >export LDR_CNTRL=MAXDATA=0x80000000\n >php -v\n memory of PASE program PHP\n 00000000 - 0FFFFFFF - PASE kernel heap (mostly read or no access to user programs)\n 10000000 - 1FFFFFFF - PASE main program text/code (php)\n 20000000 - 2FFFFFFF - PASE stack (programs usually share stack/heap in this one segment 256MB)\n 30000000 - 3FFFFFFF ------\n 40000000 - 4FFFFFFF |\n 50000000 - 5FFFFFFF |\n 60000000 - 6FFFFFFF |--- PASE heap (LDR_CNTRL=MAXDATA=0x80000000)\n 70000000 - 7FFFFFFF | used for new/delete/malloc/free allocations PHP to run scripts\n 80000000 - 8FFFFFFF |\n 90000000 - 9FFFFFFF | *Note: no more segments avail for shared memory\n A0000000 - AFFFFFFF ------\n B0000000 - BFFFFFFF - reserved\n C0000000 - CFFFFFFF - reserved\n D0000000 - DFFFFFFF - machine wide shared library text/code (libc.a, etc.)\n E0000000 - EFFFFFFF - shared memory\n F0000000 - FFFFFFFF - machine wide shared library data\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEFAQ?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n\n"
},
{
"alpha_fraction": 0.5524142384529114,
"alphanum_fraction": 0.5987929105758667,
"avg_line_length": 27.098215103149414,
"blob_id": "38ceeeb2068961213afbdf05c632d50004b7e89f",
"content_id": "dd91e961ab3c8769bea2f561e0697b2e905b8588",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3148,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 112,
"path": "/test/php/test_70009_PERF_loop256_raw_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: inside fetch\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n\n// tests criteria\n$ctl = \"*here\"; /* *here *justproc *fly *debug */\n$ipc = \"\"; /* no internal key */\n$i5fail = 2; /* 2 seconds or less */\n$i5loop = 256; /* 256(+) calls */\n$start_time = microtime();\n\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG32K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n\n$stmt2 = db2_prepare($conn, \"select * from QIWS.QCUSTCDT\");\nif (!$stmt2) die(\"Bad prepare: \".db2_stmt_errormsg());\n\n$count = 0;\n$rpt = \"\";\nwhile ($count < $i5loop) {\n $res = db2_execute($stmt2);\n if (!$res) die(\"Bad execute: \".db2_stmt_errormsg());\n while ($res && $row = db2_fetch_array($stmt2)) {\n $clobIn = getxml($row);\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n $count++;\n // var_dump(implode(\":\",$row));\n if (strpos($clobOut, '4444444444.44') < 1) die (\"BAD call $count\\n\");\n // var_dump($clobOut);\n //$rpt = $clobOut;\n //echo \"$count $count $count\\n\";\n //if ($count == 1) { fly_rpt($rpt); }\n //elseif ($count == 3) fly_rpt($rpt);\n //elseif ($count == 5) fly_rpt($rpt);\n //elseif ($count > 5) break;\n }\n //break;\n}\n$end_time = microtime();\n$wire_time= control_microtime_used($start_time,$end_time)*1000000;\n\n\n$opt = \"(\".trim($ctl).\" \".trim($ipc).\")\";\necho\n sprintf(\"$opt -- Time (loop=$count) total=%1.2f sec (%1.2f 
ms per call)\\n\",\n round($wire_time/1000000,2),\n round(($wire_time/$count)/1000,2));\n\nfunction control_microtime_used($i5before,$i5after) {\n return (substr($i5after,11)-substr($i5before,11))+(substr($i5after,0,9)-substr($i5before,0,9));\n}\n\n// result times\n$look = round($wire_time/1000000,2);\nif ($look>$i5fail) die(\"$count calls fail - too slow > $i5fail seconds\\n\");\necho \"Success\\n\";\n\nfunction getxml($row) {\n/* mode='opm' */\n$clob = \"\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>{$row[2]}</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>{$row[5]}</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>{$row[10]}</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>{$row[9]}</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>{$row[2]}</data>\n <data type='1A' var='INDS1.DSCHARB'>{$row[5]}</data>\n <data type='7p4' var='INDS1.DSDEC1'>{$row[10]}</data>\n <data type='12p2' var='INDS1.DSDEC2'>{$row[9]}</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\n\";\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5369440317153931,
"alphanum_fraction": 0.6151363253593445,
"avg_line_length": 28.967741012573242,
"blob_id": "578cf87db23e8aecaeac267463ae46d37a2bf93a",
"content_id": "3dbfe18dccd3922d82199a75fe0977c0efa5445d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2788,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 93,
"path": "/test/php/test_40501_nocall_nested3.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: NESTED3\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$ctl .= \" *test\";\n$zz0cnt = 3;\n$clobIn = getxml($zz0cnt);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobIn);\nif (!$xmlobj) die(\"Fail XML input\\n\");\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML output\\n\");\n// pm1\nif (substr_count($clobOut,\"aa\") < 1) die (\"aa - missing\\n\");\nif (substr_count($clobOut,\"ab\") < 1) die (\"ab - missing\\n\");\n// pm2\nif (substr_count($clobOut,\"ba\") < 1) die (\"ba - missing\\n\");\nif (substr_count($clobOut,\"bb\") < 1) die (\"bb - missing\\n\");\n$x = \"cdefghijklmno\";\nfor ($i=0;$i<$zz0cnt;$i++) {\n for ($j=0;$j<strlen($x);$j++) {\n $y = \"b\".substr($x,$j,1);\n if (substr_count($clobOut,$y) <> $zz0cnt) die (\"$y(1:$zz0cnt) - missing\\n\");\n }\n}\n\n// good\necho \"Success\\n\";\n\nfunction getxml($zz0cnt) {\n$clob = <<<ENDPROC\n<?xml version='1.0' encoding='ISO-8859-1' ?>\n<script>\n<pgm name='NESTED3'>\n<parm prm='pm10000000aa'>\n<data var='va10000000ab' type='3s0' enddo='v0cnt'>zz0cnt</data>\n</parm>\n<parm prm='pm20000000ba'>\n<ds var='ds21000000bb'>\n <ds var='ds21100000bc' dim='999' dou='v0cnt'>\n <data var='va21100000bd' type='10a'>2012-06-22</data>\n <ds var='ds21110000be'>\n <data var='va21110000bf' type='17a'>FlinFlam</data>\n <data var='va21110000bg' type='12a'>Fredrich</data>\n <ds var='ds21111000bh'>\n <data var='va21111000bj' type='1a'>R</data>\n <data var='va21111000bk' type='9a'>Regan</data>\n </ds>\n <data var='va21110000bl' type='3a'>101</data>\n <data var='va21110000bm' type='12p2'>42.42</data>\n </ds>\n <data var='va21100000bn' type='1a'>N</data>\n <data var='va21110000bo' type='5s0'>42</data>\n </ds>\n</ds>\n</parm>\n</pgm>\n</script>\nENDPROC;\n$was = array(\"zz0cnt\");\n$now = array(\"$zz0cnt\");\nreturn str_replace($was,$now,$clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5094274878501892,
"alphanum_fraction": 0.5375385880470276,
"avg_line_length": 34.132530212402344,
"blob_id": "464f59da102ef19938b8530129466dfdb0ac68ae",
"content_id": "87b8ad3f68f1b2cc3d770983064c2e5f44e798a1",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2917,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 83,
"path": "/test/php/test_80001_db2_io_srvpgm_array_data_error_default.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 srvpgm error bad array element error default\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n\n// check if good\nif (strpos($clobOut,'<report>')<1) die(\"Missing error <report>\\n\");\nif (strpos($clobOut,'<error>')<1) die(\"Missing error <error>\\n\");\nif (strpos($clobOut,\"<joblogscan>\")<1) die(\"Missing <joblogscan>\\n\");\nif (strpos($clobOut,'<joblog')<1) die(\"Missing error <joblog>\\n\");\nif (strpos($clobOut,'baddata')<1) die(\"Missing error baddata\\n\");\n\necho \"Success\\n\";\n\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarray: check return array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzarray B export\n// D zzarray PI likeds(dcRec_t) dim(ARRAYMAX)\n// D myName 10A\n// D myMax 10i 0\n// D myCount 10i 0\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml 
version='1.0'?>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZARRAY'>\n <parm comment='ok data'>\n <data var='myName' type='10A'>1282635</data>\n </parm>\n <parm comment='bad data'>\n <data var='myMax' type='10i0'>baddata</data>\n </parm>\n <parm comment='bad data'>\n <data var='myCount' type='10i0' enddo='mycount'>baddata</data>\n </parm>\n <return>\n <ds var='dcRec_t' dim='10' dou='mycount' comment='bad data'>\n <data var='dcMyName' type='10A'>na</data>\n <data var='dcMyJob' type='4096A'>na</data>\n <data var='dcMyRank' type='10i0'>baddata</data>\n <data var='dcMyPay' type='12p2'>baddata</data>\n </ds>\n </return>\n</pgm>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5819970965385437,
"alphanum_fraction": 0.5982142686843872,
"avg_line_length": 25.507246017456055,
"blob_id": "e61d93a56fc8daeae28e883892d46d4deea70d55",
"content_id": "064c230e3a08401d2b01ed3ae1ea3bf9294a8ce8",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 5488,
"license_type": "permissive",
"max_line_length": 101,
"num_lines": 207,
"path": "/test/php/test_33462_cwtest_dataqueue.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest data queue\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\n\n// data queue\nif ($doDataQueue) {\n\n\techo h2('Data queues');\n\n\t$queueName = 'KEYEDQ';\n\t$keyLen = 10;\n\t$qObj = new DataQueue($conn);\n\techo \"<BR>About to delete data queue $queueName. (Will fail if doesn't exist yet)<BR>\";\n\ttry {\n\t $qObj->DeleteDQ($queueName, $demoLib);\n\t echo \"Success deleting data queue $queueName.\";\n\t} catch (Exception $e) {\n\t\techo(\"Error deleting data queue: \" . $e . \"<BR><BR>\");\n\t}\n\n\techo \"<BR>About to create data queue $queueName.<BR>\";\n\ttry {\n\t $qObj->CreateDataQ($queueName, $demoLib, 128, '*KEYED', $keyLen); // length 10 key\n\t echo \"Success creating data queue $queueName.\";\n\t} catch (Exception $e) {\n\t\tdie(\"Error creating data queue: \" . $e . \"<BR><BR>\");\n\t}\n\n\t// test case adapted from p398 of Zend Server 5.1 manual\n\t$simpleStructure =\n\tarray(\n 'DSName' => 'PS',\n 'DSParm' =>\n array (\n\n array (\n 'type' => 0,\n 'name' => 'PS1',\n 'length' => '10',\n ),\n\n array (\n 'type' => 6,\n 'name' => 'PS2',\n 'length' => '10.4',\n ),\n\n array (\n 'type' => 0,\n 'name' => 'PS3',\n 'length' => '10',\n ),\n )\n);\n\t// prepare\n\t$queue = i5_dtaq_prepare(\"$demoLib/$queueName\", $simpleStructure, $keyLen);\n if (!$queue) {\n \tdie(\"Error preparing data queue.<BR><BR>\");\n }\n\n\n // send\n $key = 'abc';\n\t$data = array('PS1' => 'test1', 'PS2' => 13.1415, 'PS3' => 'test2');\n\n echo \"<BR>About to send simple structure to keyed data queue $queueName with key $key.<BR>\";\n\t$success = i5_dtaq_send($queue, $key, $data);\n\t//\n\tif (!$success) {\n\t\tdie(\"Error returned from data queue send: \" . printArray(i5_error()) . 
\"<BR><BR>\");\n\t} else {\n\t\techo \"Success sending data to data queue.<BR><BR>\";\n\t}\n\n echo \"<BR>About to receive simple structure from keyed data queue $queueName with key $key.<BR>\";\n\t$data = i5_dtaq_receive($queue, 'EQ', $key);\n\n\t// receive\n\tif (!$data) {\n\t\tdie(\"Error returned from simple data queue receive: \" . printArray(i5_error()));\n\t} else {\n\t\techo \"Success getting simple data structure from data queue: \" . printArray($data);\n\t}\n\n\techo '<BR>';\n\n\t// unkeyed queue with complex structure\n\n\t$queueName = 'NEWQ';\n\t$qObj = new DataQueue($conn);\n\techo \"<BR>About to delete data queue $queueName. (Will fail if doesn't exist yet)<BR>\";\n\ttry {\n\t $qObj->DeleteDQ($queueName, $demoLib);\n\t echo \"Success deleting data queue $queueName.\";\n\t} catch (Exception $e) {\n\t\techo(\"Error deleting data queue: \" . $e . \"<BR><BR>\");\n\t}\n\n\techo \"<BR>About to create data queue $queueName.<BR>\";\n\ttry {\n\t $qObj->CreateDataQ($queueName, $demoLib);\n\t echo \"Success creating data queue $queueName.\";\n\t} catch (Exception $e) {\n\t\tdie(\"Error creating data queue: \" . $e . 
\"<BR><BR>\");\n\t}\n\n\n\t\t$bigDesc = array(\narray (\"DSName\"=>\"BIGDS\", \"DSParm\"=>array (\narray (\"Name\"=>\"P1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10, \"Count\"=>5),\narray (\"Name\"=>\"P2C\", \"IO\"=>I5_INOUT,\"Type\"=>I5_TYPE_LONG, \"Length\"=>4),\narray (\"Name\"=>\"P2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>1, \"Count\"=>2),\narray (\"DSName\"=>\"PS\", \"Count\"=>2, \"DSParm\"=>array (\narray (\"Name\"=>\"PS1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS3\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10)\n))\n))\n);\n\n\n\t// prepare\n\t$queue = i5_dtaq_prepare(\"$demoLib/$queueName\", $bigDesc);\n if (!$queue) {\n \tdie(\"Error preparing data queue.<BR><BR>\");\n }\n\t// send\n\n\n // send\n echo \"<BR>About to send big data structure to data queue $queueName.<BR>\";\n\t$success = i5_dtaq_send($queue, '', $bigInputValues);\n\t//\n\tif (!$success) {\n\t\tdie(\"Error returned from data queue send: \" . i5_error() . \"<BR><BR>\");\n\t} else {\n\t\techo \"Success sending data to data queue.<BR><BR>\";\n\t}\n\n\n echo \"<BR>About to receive big data structure from data queue $queueName.<BR>\";\n\t$data = i5_dtaq_receive($queue);//, $operator = null, $key = '', $timeout = 0)\n\n\t// receive\n\tif (!$data) {\n\t\tdie(\"Error returned from data queue receive: \" . printArray(i5_error()));\n\t} else {\n\t\techo \"Success getting data from data queue: \" . 
printArray($data);\n\t}\n\n\techo '<BR>';\n\n\n\t// Now a short-form DQ test\n\n\t// short-form description\n\t$littleDesc = array (\"Name\"=>\"sometext\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>20);\n\t$littleInput = \"Small text input\";\n\n\techo \"<BR>About to send small short-form data structure to data queue $queueName.<BR>\";\n\n\t// prepare\n\t$queue = i5_dtaq_prepare(\"$demoLib/$queueName\", $littleDesc);\n if (!$queue) {\n \tdie(\"Error preparing data queue.<BR><BR>\");\n }\n\n // send\n\t$success = i5_dtaq_send($queue, '', $littleInput);\n\t//\n\tif (!$success) {\n\t\tdie(\"Error returned from data queue send of small input: \" . i5_error() . \"<BR><BR>\");\n\t} else {\n\t\techo \"Success sending the string '$littleInput' to data queue.<BR><BR>\";\n\t}\n\n\n echo \"<BR>About to receive small data structure from data queue $queueName.<BR>\";\n\t$data = i5_dtaq_receive($queue);//, $operator = null, $key = '', $timeout = 0)\n\t// receive\n\tif (!$data) {\n\t\tdie(\"Error returned from data queue receive of small data: \" . i5_error() . \"<BR><BR>\");\n\t} else {\n\t\techo \"Success getting small data from data queue: '$data'<BR><BR>\";\n\t}\n\n\techo '<BR><BR>';\n\n\t// end, short-form DQ test\n\n\n} //(data queue)\n\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5593008995056152,
"alphanum_fraction": 0.5921764373779297,
"avg_line_length": 25.9887638092041,
"blob_id": "245450ac9ccecadd8805fdb10d70ccc5db2676ea",
"content_id": "179a0dfb2e0174d8043454d6dbff69d722c53151",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2403,
"license_type": "permissive",
"max_line_length": 103,
"num_lines": 89,
"path": "/test/php/test_33465_cwtest_pgmcomplex.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest pgm complex\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\n// *** data structure call! ***\n\nif ($doPgmCallComplex) {\n\necho '<BR>Program call with complex parameters<BR>';\n\n$progname = \"$demoLib/RPCTEST\";\n\necho \"<b>About to call $progname with data structure parameters.</b>\";\n\n/*Call a program with parameters that include a DS */\n\n$desc = array (\narray (\"Name\"=>\"P1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10, \"Count\"=>5),\narray (\"Name\"=>\"P2C\", \"IO\"=>I5_INOUT,\"Type\"=>I5_TYPE_LONG, \"Length\"=>4),\narray (\"Name\"=>\"P2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>1, \"CountRef\"=>\"P2C\" ),\narray (\"DSName\"=>\"PS\", \"Count\"=>2, \"DSParm\"=>array (\narray (\"Name\"=>\"PS1\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS2\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10),\narray (\"Name\"=>\"PS3\", \"IO\"=>I5_INOUT, \"Type\"=>I5_TYPE_CHAR, \"Length\"=>10)\n)\n));\n\n$prog = i5_program_prepare($progname, $desc);\nif ($prog === FALSE) {\n\t$errorTab = i5_error();\n\techo \"Program prepare failed <br>\\n\";\n\tvar_dump($errorTab);\n\tdie();\n}\n/* Execute Program */\n\n// The nameless elements in array.\n$params1 = array(\narray(\"PS1\"=>\"test1\", \"PS2\"=>\"test2\", \"PS3\"=>\"test3\"),\narray(\"PS1\"=>\"test3\", \"PS2\"=>\"test4\", \"PS3\"=>\"test5\")\n);\n\n$params2 = Array(\n\"P1\"=>array(\"t1\", \"t2\", \"t3\", \"t4\", \"t5\"),\n\"P2C\"=>2,\n\"P2\"=>array(\"a\", \"b\"),\n\"PS\"=>$params1);\n$retvals = array(\"P1\"=>\"P1\", \"PS\"=>\"PS\", \"P2\"=>\"P2\", \"P2C\"=>\"P2C\");\n\n$ret = i5_program_call($prog, $params2, $retvals) ;\nif (function_exists('i5_output')) extract(i5_output()); // i5_output() required if called in a function\n\nif ($ret === FALSE)\n{\n$errorTab = i5_error();\necho \"FAIL : i5_program_call 
failure message: \" . $conn->getLastError() . \" with code <br>\";\nvar_dump($errorTab);\ndie();\n}else {\n // success\n echo \"<BR><BR>Success! The return values are: <br>\";\n echo \"P1 : \" . printArray($P1) . \"<BR>\";\n echo \"P2C : \" . $P2C . \"<BR>\";\n echo \"P2 : \" . printArray($P2) . \"<BR>\";\n echo \"PS: \" . printArray($PS) . \"<BR>\";\n}\n$close_val = i5_program_close ($prog);\nif ($close_val === false )\n{\nprint (\"FAIL : i5_program_close returned fales, closing an open prog.<br>\\n\");\n$errorTab = i5_error();\nvar_dump($errorTab);\ndie();\n}\n\n} //(pgmcall complex)\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5825358629226685,
"alphanum_fraction": 0.6040669679641724,
"avg_line_length": 20.41025733947754,
"blob_id": "d9897d5eaeb128552c087e678d3a27f70b26ab7b",
"content_id": "529e95306868743c56b0d0b4e17437640755640b",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 836,
"license_type": "permissive",
"max_line_length": 57,
"num_lines": 39,
"path": "/test/php/test_99999_MISC_rest_immed_kill.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: CTL - kill server\n--SKIPIF--\n<?php require_once('skipifrest.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nibm_db2_IgnoreOff(); // remove ibm_db2.i5_ignore_userid=1\n\n$ipc2 = \"/tmp/ipc_cw_\".$user.\"_42\"; // cw tests\n\n$allipc = array($ipc,$ipc2,$ipcover);\n\nforeach ($allipc as $ipc) {\n\n\n// call IBM i\n$ctl = \"*immed\"; // kill XMLSERVICE NOW\n$clobIn = \"<?xml version='1.0'?>\"; // XML in\n$parm = \"?db2=$i5restdb\";\n$parm .= \"&uid=$user\";\n$parm .= \"&pwd=$password\";\n$parm .= \"&ipc=$ipc\";\n$parm .= \"&ctl=$ctl\";\n$parm .= \"&xmlin=\".urlencode($clobIn);\n$parm .= \"&xmlout=32768\"; // size expected XML output\n// execute\n$linkall = \"$i5resturl\".htmlentities($parm);\n$getOut = simplexml_load_file($linkall);\n\n}\n\necho \"i am ...\\n\";\necho \"dead\\n\";\n?>\n--EXPECTF--\n%s\ndead\n\n"
},
{
"alpha_fraction": 0.5888610482215881,
"alphanum_fraction": 0.5976220369338989,
"avg_line_length": 36.16279220581055,
"blob_id": "51701bede76d9234cf55433b16e6bfaf3a4f93c9",
"content_id": "75d13a2a16bb431f0c2f47face2c7824f9600670",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1598,
"license_type": "permissive",
"max_line_length": 113,
"num_lines": 43,
"path": "/test/php/xmlservice_get.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// PHP driver: xmlservice_get.php\n// Notes:\n// Assumes you have XMLSERVICE REST driver enabled\n// HTTP Server (Apache) and a IBM i version match\n// XMLSERVICE/XMLCGI.PGM (you compiled).\n//\n// You may use XMLSERVICE REST driver on IBM i (1-tier)\n// or from Linux/Windows to IBM i (2-tier).\n// For help:\n// http://www.youngiprofessionals.com/wiki/index.php/XMLService\n\n// *** XMLSERVICE call (REST GET + internal RPG DB2 driver) ***\n// Example: /www/zendsvr/conf/httpd.conf\n// ScriptAlias /cgi-bin/ /QSYS.LIB/XMLSERVICE.LIB/\n// <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n// AllowOverride None\n// order allow,deny\n// allow from all\n// SetHandler cgi-script\n// Options +ExecCGI\n// </Directory>\nfunction xmlservice($xml) {\nglobal $i5persistentconnect, $database, $user, $password, $ipc, $ctl, $procConn, $procLib, $procPlug, $procPlugR,\n $i5resturl, $i5restdb, $i5restuser, $i5restpass, $i5restsz;\n $was = array('\"');\n $now = array(\"'\");\n $xmlIn = str_replace($was,$now,$xml);\n $xmlOut = '';\n $parm = \"?db2=$i5restdb\";\n $parm .= \"&uid=$i5restuser\";\n $parm .= \"&pwd=$i5restpass\";\n $parm .= \"&ipc=$ipc\";\n $parm .= \"&ctl=$ctl\";\n $parm .= \"&xmlin=\".urlencode($xmlIn);\n $parm .= \"&xmlout=$i5restsz\"; // size expected XML output\n $linkall = \"$i5resturl\".htmlentities($parm);\n $xmlOut = file_get_contents($linkall);\n return driverJunkAway($xmlOut); // just in case driver odd\n // ... possible junk end record,\n // record</script>junk\n}\n?>\n"
},
{
"alpha_fraction": 0.5133528709411621,
"alphanum_fraction": 0.5223801732063293,
"avg_line_length": 38.20648956298828,
"blob_id": "db73cd3d64d420c2cbb9c932b78dc13d3db50ac9",
"content_id": "8fbda50da837255540f7e5e4a4f162eb69e5f2f5",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 13293,
"license_type": "permissive",
"max_line_length": 675,
"num_lines": 339,
"path": "/docs/examples.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\n\nXMLSERVICE Examples\n===================\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nThe following foils include running examples you can click to help get started as you learn about this Open Source.\n\n* :ref:`part-1`\n* :ref:`part-2`\n\nGoal of Open Source XMLSERVICE RPG library is flexibility, enabling ANY sort of transport local/remote connection with any language available in your enterprise to call resources on your IBM i machine. XMLSERVICE primary transport interface in production today is DB2 connections using included stored procedures (xmlstoredp.srvpgm iPLUGxxxx). XMLSERVICE secondary transport interface used mostly demos is Apache REST connection (xmlcgi.pgm). However, XMLSERVICE allows writing your own custom transport (anything you imagine). XML Service and Toolkit on-going mission is to add new function, improve performance, expand uses, with goal of never impacting existing customers.\n\n\n**Warning**: This is an active IBM i education machine, occasionally examples may not work, try back later.\n\n.. _part-1:\n\nPart 1 - Protect your investment\n--------------------------------\n\n* XML Service Access Native IBM i Objects from any language using XML\n* XML Service completely free/safe commercial use download (Github)\n* XML Service examples HTML/XML (no PHP, no RPG)\n* XML Service examples RPG (no PHP)\n* XML Service vs. 
DB2 Stored Procedures - Why use XML Service at all?\n* XML Service protect your investment summary\n\n\nXML Service - is free\n^^^^^^^^^^^^^^^^^^^^^\n\n* XML Service is completely free/safe commercial use \n \n * BSD license, download, keep any source copy forever (safe)\n\n* XML Service written in RPG open source (you can change it)\n \n * Techniques used IBM i calls are stable, unlikely to change (ever)\n\n* XML Service is supported Open Source\n \n * XML Service fix/add improvement goal is never impact current customers\n\n\nXML Service - XML everything\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* XML input <> XML output\n\n * Any device\n * Any language\n * Any local/remote connection\n * Any IBM i service (PGM, CMD, System API, DB2, PASE)\n\n\n\nXML Service - IBM i Native Access through XML\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* Access Native IBM i Objects from any language using XML\n \n * Local or remote IBM i: IBM i local, IBM i-2-IBM i, Linux/Unix/Windows-2-IBM, etc.\n * Any language access: PHP, Ruby, Java, Perl, RPG, no language at all (HTML/XML)\n * Many Native Object Types: DB2 SQL and Native, Program call, Procedure call, Data Area, Data Queue, Message Queue, Commands, System values, Spool files, PASE utilities\n\n* Local or remote call interfaces to XMLSERVICE (included RPG download)\n \n * Primary: call DB2 stored procedures local or remote (iPLUG4K - iPLUG15M)\n * Secondary: call REST HTTP local or remote (xmlcgi.pgm)\n\n\nXML Service - Moving Parts\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* Any language\n \n * Browser HTML/XML\n * Script PHP, Ruby, JavaScript, etc.\n * Compiled RPG, C, etc.\n\n* Any local/remote connection\n \n * Linux/Unix/Windows/Mac ibm_db2/odbc/REST to IBM i\n * Native IBM i ibm_db2/odbc/REST to IBM i\n\n\nXML Service public \"stateless\" (\\*here)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* XMLSERVICE public \"stateless\" (CTL='\\*here', IPC='\\*NA')\n \n * profile FRED (any public QSQ)\n * profile SALLY (any public QSQ)\n * 
profile RITA (any public QSQ)\n * profile XAVIER (any public QSQ)\n\n* XMLSTOREDP->XMLSERVICE (QSQ)\n \n * QSQ temporary profile use (stateless)\n * QSQ return to pool on script end\n * XMLSERVICE restart every request (web style)\n\n\nXML Service private \"state full\" (\\*sbmjob)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* XMLSERVICE private \"state full\" (CTL='\\*sbmjob', IPC='/tmp/xxxx')\n\n * profile FRED XTOOLKIT myjob1,myjob2 (private)\n * profile SALLY XTOOLKIT sallyjob1 (private)\n * profile RITA XTOOLKIT nursejob (private)\n * profile XAVIER XTOOLKIT xjob1,xjob2,xjob3,xjob4,xjob5 (private)\n\n* XMLSTOREDP (QSQ)\n \n * QSQ temporary profile use (stateless)\n * QSQ return to pool on script end\n\n* XMLSERVICE (XTOOLKIT)\n \n * XTOOLKIT owned by profile (private)\n * XTOOLKIT job never ends (until killed)\n * XTOOLKIT full state programming (5250 style)\n\n\n\nXML Service Configuration\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* Apache REST (xmlcgi.pgm)\n* DB2 stored procedures (xmlstoredp.srvpgm iPLUGxx)\n\n\nXML Service - Example HTML/XML (no PHP, no RPG)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* XML Service HTTP browser direct (xmlcgi.pgm)\n \n * web enable via httpd.conf\n\n ::\n\n <form method='POST' action='/cgi-bin/xmlcgi.pgm'>\n ScriptAlias /cgi-bin/ /QSYS.LIB/XMLSERVICE.LIB/\n <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n order allow,deny\n allow from all\n SetHandler cgi-script\n Options +ExecCGI\n </Directory>\n\n\nXML Service - Example RPG (no PHP)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* XML Service stored procedure interface\n \n (1) RPG DB2 using Exec Sql (iPLUGxxx)\n (2) RPG DB2 using CLI (iPLUGxxx)\n\n* web enable via httpd.conf\n\n::\n\n # demo callpase\n ScriptAlias /demo/ /QSYS.LIB/CALLPASE.LIB/\n <Directory /QSYS.LIB/CALLPASE.LIB/>\n order allow,deny\n allow from all\n SetHandler cgi-script\n Options +ExecCGI\n </Directory>\n \nNote: Exec Sql RPG CGI uses profile QTMHHTP1, however PHP is usually running QTMHHTTP, so you may fail 
authorisation sharing same XMLSERVICE job (1). Therefore, i recommend use RPG CLI technique to run any profile, where XMLSERVICE job(s) PHP/RPG is no problem (2).\n\n\n\nXML Service vs. DB2 Stored Procedures - Why use XML Service at all?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nWhy not write my own stored procedures? Why use XML Service at all?\n\n+------------------------------------------+-------------------------------------------+\n| XML Service | Stored procedure |\n+==========================================+===========================================+\n| Device, protocol, & language portability |\n+------------------------------------------+-------------------------------------------+\n|browser, REST, DB2, RPG... |device/driver/language specific |\n+------------------------------------------+-------------------------------------------+\n| Complex Data Structures |\n+------------------------------------------+-------------------------------------------+\n|trivial |near impossible |\n+------------------------------------------+-------------------------------------------+\n| Call CMDs and collect data |\n+------------------------------------------+-------------------------------------------+\n|trivial |very difficult |\n+------------------------------------------+-------------------------------------------+\n| Call PASE utilities and collect data |\n+------------------------------------------+-------------------------------------------+\n|trivial |very difficult |\n+------------------------------------------+-------------------------------------------+\n| Route same job (IPC/internalKey) |\n+------------------------------------------+-------------------------------------------+\n|trivial | near impossible |\n+------------------------------------------+-------------------------------------------+\n| 30,000 records around 2 seconds |\n+------------------------------------------+-------------------------------------------+\n|fast 
|faster |\n+------------------------------------------+-------------------------------------------+\n\n\nProtect your investment\n^^^^^^^^^^^^^^^^^^^^^^^\n\n* XML Service protects your wallet 100% free download\n* XML Service protects your skills 100% RPG source code\n* XML Service protects your project costs with XML based low budget features\n* XML Service protects your investment applications functions/performance over time\n* XML Service protects your investment across device proliferation\n* XML Service protects your investment across script language proliferation\n* XML Service protects your investment across any transport driver (XML is a string)\n\n\n.. _part-2:\n\nPart 2 - Production use today\n-----------------------------\n\n**Topics**\n\n* PHP Toolkit included with Zend Server for IBM i\n* XML Interface ibm_db2, pdo_ibm, odbc included with Zend Server for IBM i\n* XML Interface and PHP Toolkit - Performance\n* XML Interface and PHP Toolkit - Debugging\n* XML Interface and PHP Toolkit - Active community/support\n\n\nNew PHP Toolkit included with Zend Server for IBM i\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n* Zend Server PHP Toolkit (included)\n \n * PHP CW Layer - old toolkit\n * PHP New Toolkit - OO toolkit\n\n\nXML Interface ibm_db2, pdo_ibm, odbc included with Zend Server for IBM i\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* XML Service stored procedure interface (included)\n \n * DB2 ibm_db2 param in/out (iPLUG 4k-15m)\n * DB2 odbc result set (iPLUGR 4k-15m)\n\n..\n XML Service - Performance Speed Limits\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n +--------------------------------------------------------------------------------------+\n | **Avg IBM i machine speed limits today (Oct 2012)** |\n +--------------------------------------------------------------------------------------+\n |* Avg Apache serves HTML 800/hits second (non-FRCA) |\n |* Avg persistent DB2 driver serves 400/hits second 
(db2_pconnect) |\n |* Avg non-persistent DB2 driver serves 40/hits second (db2_connect) |\n +--------------------------------------------------------------------------------------+\n | **PHP direct XMLSERVICE** |\n +--------------------------------------------------------------------------------------+\n |* Avg PHP direct 200-400 calls/sec |\n |* 30,000 records around 2 seconds |\n +--------------------------------------------------------------------------------------+\n\nXML Interface and PHP Toolkit - Active community/support\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nYou are not alone, seek out help ...\n\n* `Zend Server IBM i forums`_\n\n.. _Zend Server IBM i forums: http://forums.zend.com/viewforum.php?f=67\n\n* `PHP Toolkit forum`_\n\n.. _PHP Toolkit forum: http://forums.zend.com/viewforum.php?f=113\n\n* `Zend Manuals`_\n\n.. _Zend Manuals: http://files.zend.com/help/Zend-Server-IBMi/zend-server.htm#php_toolkit_xml_service_functions.htm\n\n\nDebug technique: It's as easy as 1-2-3-4-5-6-7-8-9 :)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n1. Run the test script that contains control \"\\*debug\" and script will \"hang\" while it waits on #2\n\n ::\n\n $ctl .= \"*debug\";\n\n2. A MSGW inquiry message in DSPMSG QSYSOPR will be generated by the toolkit. Note the job information (number, name, user) provided in the MSGW.\n\n#. STRSRVJOB using that job information as parameters.\n\n#. STRDBG with the program and library you wish to debug.\n\n#. Answer the MSGW. Any answer will do--\"G\" is fine.\n\n#. The RPG program source will appear in debug mode in your terminal, ready to step through, allowing you to inspect variables, etc.\n\n#. When done inspecting and stepping, let the RPG program complete (using function keys indicated on screen).\n\n#. ENDDBG\n\n#. 
ENDSRVJOB\n\nOther debug options ...\n::\n\n Job1 (threaded) Job 2 Job 3 (DB2 userid/password) Job 4 (optional XTOOLKIT job)\n (ctl=*debugcgi) (ctl=*debugproc) (ctl=*debug)\n browser -> Apache ->XMLCGI (Apache CGI child) -> QSQSRVR (XMLSERVICE *here)\n -> QSQSRVR (XMLSERVICE client) -> XTOOLKIT (XMLSERVICE ipc=/tmp/flinstone)\n\n $ctl .= \" *debugcgi\"; // Job 2 - debug XMLCGI to see REST/HTTP data passed by client (when using REST only)\n $ctl .= \" *debugproc\"; // Job 3 - debug XMLSERVICE \"client\" to see DB2 passed data (DB2 interface)\n $ctl .= \" *debug\"; // Job 4 - debug XMLSERVICE \"server\" to see XMLSERVICE calls (DB2 interface)\n // Note: when ctl='*here', both XMLSERVICE \"client\"/\"server\"\n // are in QSQSRVSR job (NO XTOOLKIT job)\n // remote: Attaching with LUW drivers changes QSQSRVR ...\n // CLIENT (Client Access drivers) <==> QZDAxxxx\n // CLIENT (DB2 Connect drivers) <==> QRWxxxx\n\n\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEIntro?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.4480488896369934,
"alphanum_fraction": 0.5115185976028442,
"avg_line_length": 30.04379653930664,
"blob_id": "75b596961a6856c25d95e07f647c68433dce5edf",
"content_id": "ba80ec54f8a07e26c44e91b7e3aff7f1b4c3bb08",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4254,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 137,
"path": "/test/php/test_10190_ZZDEEP_ibm_db2_deep.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - deep nested structures\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG10M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Fail XML pgm parms missing ($lib/$name)\\n\");\n\n$expect = \"Oh my, this is complex\";\nif (!strpos($clobOut,$expect)) die(\"Failed not find: $expect\\n\");\n\n// good\necho \"Success ($lib/$name)\\n\";\n\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZDEEP' lib='xyzlibxmlservicexyz'>\n <parm io='out'>\n <ds var='INDS1'>\n <data var='1-I1' type='10i0'/>\n <data var='1-C2' type='10a'/>\n <data var='1-P1' type='12p2'/>\n <data var='1-Z2' type='12s2'/>\n <ds var='INDS2' dim='2'>\n <data var='2-I1' type='10i0'/>\n <data var='2-C2' type='10a'/>\n <data var='2-P1' type='12p2'/>\n <data var='2-Z2' type='12s2'/>\n <ds var='INDS3' dim='3'>\n <data var='3-I1' type='10i0'/>\n <data var='3-C2' type='10a'/>\n <data var='3-P1' type='12p2'/>\n <data var='3-Z2' type='12s2'/>\n <ds var='INDS4' dim='4'>\n <data var='4-I1' type='10i0'/>\n <data var='4-C2' type='10a'/>\n <data var='4-P1' type='12p2'/>\n <data var='4-Z2' type='12s2'/>\n <ds var='INDS5' dim='5'>\n <data var='5-I1' type='10i0'/>\n <data var='5-C2' type='10a'/>\n <data var='5-P1' type='12p2'/>\n <data var='5-Z2' type='12s2'/>\n <data var='5-R2' type='8f4'/>\n <data var='5-R3' type='4f2'/>\n </ds>\n </ds>\n </ds>\n </ds>\n <ds var='INDS4' dim='4'>\n <data var='4-I1' type='10i0'/>\n <data var='4-C2' type='10a'/>\n <data var='4-P1' type='12p2'/>\n <data var='4-Z2' type='12s2'/>\n <ds var='INDS5' dim='5'>\n <data var='5-I1' type='10i0'/>\n <data var='5-C2' type='10a'/>\n <data var='5-P1' type='12p2'/>\n <data var='5-Z2' type='12s2'/>\n <data 
var='5-R2' type='8f4'/>\n <data var='5-R3' type='4f2'/>\n </ds>\n </ds>\n <ds var='INDS5'>\n <data var='5-I1' type='10i0'/>\n <data var='5-C2' type='10a'/>\n <data var='5-P1' type='12p2'/>\n <data var='5-Z2' type='12s2'/>\n <data var='5-R2' type='8f4'/>\n <data var='5-R3' type='4f2'/>\n </ds>\n <ds var='INDS6'>\n <data var='6-I1' type='10i0'/>\n <data var='6-C2' type='10a'/>\n <data var='6-P1' type='12p2'/>\n <data var='6-Z2' type='12s2'/>\n <data var='6-R2' type='8f4'/>\n <data var='6-R3' type='4f2'/>\n </ds>\n <data var='1-R2' type='8f4'/>\n <data var='1-R3' type='4f2'/>\n <data var='1-C3' type='60a'/>\n <data var='1-Z3' type='12s3'/>\n <data var='1-Z4' type='12s4'/>\n </ds>\n </parm>\n <return>\n <data var='ret' type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.49573948979377747,
"alphanum_fraction": 0.5270845890045166,
"avg_line_length": 29.41666603088379,
"blob_id": "f902fc9a671e7aa5dce7e09e67fe682983b44208",
"content_id": "5a1d98ed61f33e0bc90a282d294a6d2134ae4620",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3286,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 108,
"path": "/test/php/test_10166_ZZARRAY_rest_post_array_bigger.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: REST POST SRVPGM - really big data\n--SKIPIF--\n<?php require_once('skipifrest.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// http POST parms\n$clobIn = getxml();\n$clobOut = \"\";\n$postdata = http_build_query(\n array(\n 'db2' => \"*LOCAL\",\n 'uid' => $user,\n 'pwd' => $password,\n 'ipc' => $ipc,\n 'ctl' => $ctl,\n 'xmlin' => $clobIn,\n 'xmlout' => 1000000 // size expected XML output\n )\n);\n$opts = array('http' =>\n array(\n 'method' => 'POST',\n 'header' => 'Content-type: application/x-www-form-urlencoded',\n 'content' => $postdata\n )\n);\n$context = stream_context_create($opts);\n// execute\n$linkall = $i5resturl;\n$result = file_get_contents($linkall, false, $context);\n// result\nif ($result) {\n $getOut = simplexml_load_string($result);\n $clobOut = $getOut->asXML();\n}\nelse $clobOut = \"\";\n// -----------------\n// output processing\n// -----------------\n$myName1 = 'Ranger'; // expected name\n$myMax1 = 212; // expected max\n$myCount1= 212; // expected count\n$size = strlen($clobOut);\necho substr($clobOut,$size-400).\"\\n\";\nif ($size < 917000) die(\"Failed ($size < 917000)\\n\");\nfor ($i=0;$i<$myCount1;$i++) {\n // DS records expected\n $irpg = $i+1;\n $dcMyName = \">\".$myName1.$irpg.\"<\";\n if (strpos($clobOut,$dcMyName)<1) die(\"Fail dcMyName $dcMyName missing\\n\");\n $dcMyRank = \">\".(10+$irpg).\"<\";\n if (strpos($clobOut,$dcMyRank)<1) die(\"Fail dcMyRank $dcMyRank missing\\n\");\n $dcMyPay = \">\".sprintf(\"%1.2f\", 13.42*$irpg).\"<\";\n if (strpos($clobOut,$dcMyPay)<1) die(\"Fail dcMyPay $dcMyPay missing\\n\");\n}\n// good\necho \"Success ($size)\\n\";\n\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarray: check return array aggregate\n// 
*+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzarray B export\n// D zzarray PI likeds(dcRec_t) dim(ARRAYMAX)\n// D myName 10A\n// D myMax 10i 0\n// D myCount 10i 0\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<cmd comment='addlible'>ADDLIBLE LIB(xyzlibxmlservicexyz) POSITION(*FIRST)</cmd>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZARRAY'>\n <parm comment='search this name'>\n <data var='myName' type='10A'>Ranger</data>\n </parm>\n <parm comment='max allowed return'>\n <data var='myMax' type='10i0'>212</data>\n </parm>\n <parm comment='actual count returned'>\n <data var='myCount' type='10i0' enddo='mycount'>0</data>\n </parm>\n <return>\n <ds var='dcRec_t' dim='999' dou='mycount'>\n <data var='dcMyName' type='10A'>na</data>\n <data var='dcMyJob' type='4096A'>na</data>\n <data var='dcMyRank' type='10i0'>0</data>\n <data var='dcMyPay' type='12p2'>0.0</data>\n </ds>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5854811072349548,
"alphanum_fraction": 0.6383161544799805,
"avg_line_length": 25.44318199157715,
"blob_id": "773af4807160d28e2d6da2a17f08c6ee532fd0e9",
"content_id": "f07f57dd584a125f8186f5e10cf24252e513f45c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2328,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 88,
"path": "/test/php/test_70001_PERF_ibm_db2_set.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 result set PGM - performance loop call\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n$i5loop = 5000;\nrequire_once('connection.inc');\n\n// include connect performance\n// (worst connect situation)\n$start_time = microtime();\n$conn = db2_connect($database,$user,$password);\nfor ($i=0;$i<$i5loop;$i++) {\n // get the xml simulate changed data\n // randomly happening througout php script\n $clobIn = getxml();\n $clobOut = \"\";\n $stmt = db2_prepare($conn, \"call $procLib.iPLUGR4K(?,?,?)\");\n // rebind parms simulate changed data bindings\n // randomly happening througout php script\n $ret=db2_execute($stmt,array($ipc,$ctl,$clobIn));\n while ($row = db2_fetch_array($stmt)){\n $clobOut .= $row[0];\n }\n $clobOut = trim($clobOut);\n // remove var dump because screen output will\n // be the greatest timing factor dwarfing other data\n // echo \" IN:\\n\"; var_dump($clobIn);\n // echo \"OUT:\\n\"; var_dump($clobOut);\n if (strpos($clobOut,'4444444444.44')<1) {\n var_dump($clobOut);\n die(\"test failed loop count $i\\n\");\n }\n $ctl = \"*ignore\"; // high performance ignore flags\n}\n$end_time = microtime();\n$wire_time= control_microtime_used($start_time,$end_time)*1000000;\n\n// result times\n$look = round($wire_time/1000000,2);\necho\n sprintf(\"Time (loop=$i5loop) total=%1.2f sec (%1.2f ms per call)\\n\",\n round($wire_time/1000000,2),\n round(($wire_time/$i5loop)/1000,2));\n// less than two minutes (usually around one minute)\nif ($look<120) echo \"ok\\n\";\nelse echo \"fail - too slow\\n\";\n\nfunction control_microtime_used($i5before,$i5after) {\n return (substr($i5after,11)-substr($i5before,11))+(substr($i5after,0,9)-substr($i5before,0,9));\n}\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A'>b</data>\n </parm>\n <parm io='both'>\n <data 
type='7p4'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A'>x</data>\n <data type='1A'>y</data>\n <data type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nok\n\n"
},
{
"alpha_fraction": 0.5405580997467041,
"alphanum_fraction": 0.589444100856781,
"avg_line_length": 37.516666412353516,
"blob_id": "9fdcd1316f2d1568b191e77821fb985663df3320",
"content_id": "17473ecb89c98a43f84fd5ea6be152ad40b4eea0",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4805,
"license_type": "permissive",
"max_line_length": 126,
"num_lines": 120,
"path": "/test/php/test_17070_NLS_ibm_db2_io.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 - UTF8 NLS test\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n$ctlstart = \"$ctl\";\n\n// connect to IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n// nls conversion attempts\n$hebrew_xml_ccsid = \"<data type='200A' hex='on' before='819/424' after='424/819'>\"; // encoding 1100\n$hebrew_raw_ascii_data ='אֵין בְּעָיָה! עַל לֹא דָּבָר';\n$farsi_xml_ccsid = \"<data type='200A' hex='on' before='819/1098' after='1098/819'>\"; // encoding 1100\n$farsi_raw_ascii_data = 'دستشویی/داروخانه) کجاست؟';\n$russia_xml_ccsid = \"<data type='200A' hex='on' before='819/880' after='880/819'>\"; // encoding 1100\n$russia_raw_ascii_data = 'Беда́ никогда́ не прихо́дит одна';\n$italy_xml_ccsid = \"<data type='200A' hex='on' before='819/280' after='280/819'>\"; // encoding 1100\n$italy_raw_ascii_data = 'risorse perché il loro unico obiettivo è produrre a costi più';\n$german_xml_ccsid = \"<data type='200A' hex='on' before='819/273' after='273/819'>\"; // encoding 1100\n$german_raw_ascii_data = 'Ä Ö Ü ä ö ü hier befindet sich die größte datenbank an deutschen untertiteln';\n$korea_xml_ccsid = \"<data type='200A' hex='on' before='819/1088' after='1088/819'>\"; // encoding 2100\n$korea_raw_ascii_data = '길을 잃어버렸어요';\n$japan_xml_ccsid = \"<data type='200A' hex='on' before='1208/13488' after='13488/1208'>\"; // encoding 7200 (61952, 13488)\n$japan_raw_ascii_data = 'ラドクリフ、マラソン五輪代表に1万m出場にも含み';\n$china_xml_ccsid = \"<data type='200A' hex='on' before='1208/13488' after='13488/1208'>\"; // encoding 7200 (61952, 13488)\n$china_raw_ascii_data = '顆老鼠屎壞了一鍋粥(一颗老鼠屎坏了一锅粥';\n\n$nls = array(\narray(\"Hebrew\",$hebrew_xml_ccsid, $hebrew_raw_ascii_data),\narray(\"Farsi\",$farsi_xml_ccsid, 
$farsi_raw_ascii_data),\narray(\"Russia\",$russia_xml_ccsid, $russia_raw_ascii_data),\narray(\"Italy\",$italy_xml_ccsid, $italy_raw_ascii_data),\narray(\"Germany\",$german_xml_ccsid, $german_raw_ascii_data),\narray(\"Korea\",$korea_xml_ccsid, $korea_raw_ascii_data),\narray(\"Japan\",$japan_xml_ccsid, $japan_raw_ascii_data),\narray(\"China\",$china_xml_ccsid, $china_raw_ascii_data)\n);\n\nforeach($nls as $nlsthis) {\n$title = $nlsthis[0];\n$nls_xml_ccsid = $nlsthis[1];\n$nls_raw_ascii_data = $nlsthis[2];\n\necho \"***********************************************************\\n\";\necho \" $title\\n\";\necho \"***********************************************************\\n\";\n\n// ***********************************************************\n// *** setup (XML input) ***\n$clobIn =\n\"<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='XMLSERVICE' func='ZZ200'>\n<parm io='both'>\n$nls_xml_ccsid\"\n.bin2hex($nls_raw_ascii_data).\n\"</data>\n</parm>\n</pgm>\n</script>\";\n$clobOut = \"\";\n// ***********************************************************\n// *** call IBM i XMLSERVICE (XML input) ***\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG32K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n\n// ***********************************************************\n// *** back from XMLSERVICE call with results (XML output) ***\necho \"=== $nls_xml_ccsid ===\\n\";\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) {\n var_dump($clobOut);\n echo(\"--->Bad XML returned<----\\n\");\n}\nelse echo(\"--->Good XML returned<----\\n\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) {\n var_dump($clobOut);\n die(\"Missing XML pgm 
info\");\n}\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$parm = $pgm->xpath('parm');\nif (!$parm) {\n var_dump($clobOut);\n die(\"Missing XML pgm parms ($lib/$name)\");\n}\n// how do chars look?\n$nls_return_from_xmlservice_data = pack(\"H*\",(string)$parm[0]->data);\necho \"Input...$nls_raw_ascii_data\\n\";\necho \"Return..$nls_return_from_xmlservice_data\\n\";\n$inhex = bin2hex($nls_raw_ascii_data);\n$outhex = bin2hex($nls_return_from_xmlservice_data);\necho \"Input...$inhex\\n\";\necho \"Return..$outhex\\n\";\nif ($inhex != $outhex) die(\"--->Bad HEX returned<----\\n\");\nelse echo(\"--->Good HEX returned<----\\n\");\n// ***********************************************************\n} // end nls loop\n\necho \"Success\\n\";\n\n?>\n\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5149954557418823,
"alphanum_fraction": 0.5443804860115051,
"avg_line_length": 29.55555534362793,
"blob_id": "6d8010b17f393c5affc3fa8cd1af961f1a0686c0",
"content_id": "6c776407f119267fc0c502cf63e4ed7bceb724db",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3301,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 108,
"path": "/test/php/test_10680_ZZMORE_ibm_db2_io.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - More data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Missing XML pgm parms ($lib/$name)\");\n\nif (strpos($clobOut,\"var='INCHARA'>F0F0\")<1) die(\"missing var='INCHARA'>F0F0\");\nif (strpos($clobOut,\"var='INCHARB'>F0F0\")<1) die(\"missing var='INCHARB'>F0F0\");\nif (strpos($clobOut,\"var='INCHARC'>F0F0\")<1) die(\"missing var='INCHARC'>F0F0\");\nif (strpos($clobOut,\"var='INCHARD'>F0F0\")<1) die(\"missing var='INCHARD'>F0F0\");\n\n// good\necho \"I am ... 
\\n\";\necho \"Success ($lib/$name)\\n\";\n\n// D INCHARA S 64a\n// D INCHARB S 32000a\n// D INCHARC S 32000a\n// D INCHARD S 4a\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// C PARM INCHARA\n// C PARM INCHARB\n// C PARM INCHARC\n// C PARM INCHARD\nfunction getxml() {\n global $testLib;\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZMORE' lib='xyzlibxmlservicexyz'>\n <parm co='a' io='both'>\n <data type='64A' var='INCHARA'>xyzINCHARA</data>\n </parm>\n <parm co='b' io='both'>\n <data type='32000A' var='INCHARB'>xyzINCHARB</data>\n </parm>\n <parm co='c' io='both'>\n <data type='32000A' var='INCHARC'>xyzINCHARC</data>\n </parm>\n <parm co='d' io='both'>\n <data type='4A' var='INCHARD'>xyzINCHARD</data>\n </parm>\n</pgm>\n</script>\nENDPROC;\n$data = \"\";\nfor ($i=0;$i<32000;$i++) $data .= 'F0';\n$was = array(\n\"xyzlibxmlservicexyz\",\n\"xyzINCHARA\",\n\"xyzINCHARB\",\n\"xyzINCHARC\",\n\"xyzINCHARD\"\n);\n$now = array(\n\"$testLib\",\nsubstr($data,0,64),\nsubstr($data,0,32000),\nsubstr($data,0,32000),\nsubstr($data,0,4)\n);\n$xml = str_replace($was,$now,$clob);\nreturn $xml;\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.6225626468658447,
"alphanum_fraction": 0.6569173336029053,
"avg_line_length": 36.45217514038086,
"blob_id": "880ef560e12038cf5ab33c7a82f7acb9679951d9",
"content_id": "d46833a37eb8c1f81d239df2ed3762cd9859a3e6",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4308,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 115,
"path": "/test/php/test_52601_MISC_ibm_db2_io_QSZRTVPR_setnext.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout sys api setnext - QSZRTVPR prod info\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call \".$procLib.\".iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\n\nif (strpos($clobOut, \"QSYSNLS\")<1) dies (\"Failed missing QSYSNLS\\n\");\n\necho \"\\nSuccess\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version=\"1.0\"?>\n<script>\n<pgm name='QSZRTVPR'>\n <parm io=\"both\" comment='Receiver variable'>\n <ds comment='PRDR0200 Format' len='rec1'>\n <data type='10i0' comment='Bytes returned'>0</data>\n <data type='10i0' comment='Bytes available' >0</data>\n <data type='10i0' comment='Reserved'>0</data>\n <data type='7A' comment='Product ID'> </data>\n <data type='6A' comment='Release level'> </data>\n <data type='4A' comment='Product option'> </data>\n <data type='4A' comment='Load ID'> </data>\n <data type='10A' comment='Load type'> </data>\n <data type='10A' comment='Symbolic load state'> </data>\n <data type='10A' comment='Load error indicator'> </data>\n <data type='2A' comment='Load state'> </data>\n <data type='1A' comment='Supported flag'> </data>\n <data type='2A' comment='Registration type'> </data>\n <data type='14A' comment='Registration value'> </data>\n <data type='2A' comment='Reserved'> </data>\n <data type='10i0' offset='myOffset' comment='beyond size of PRDR0100'></data>\n <data type='4A' comment='Primary language load identifier'> </data>\n <data type='6A' comment='Minimum target release'> </data>\n <data type='6A' comment='Minimum VRM of *BASE required'> </data>\n <data type='1A' comment='Requirements met between base'> </data>\n <data type='3A' comment='Level'> </data>\n <data type='2048h' comment='leave some space for PRDR0200'/>\n </ds>\n</parm>\n <parm comment='Length of receiver variable'>\n <data type='10i0' setlen='rec1'>0</data>\n </parm>\n <parm comment='Format name'>\n <data type='8A'>PRDR0200</data>\n </parm>\n <parm comment='Product information'>\n <data type='100A'>*OPSYS *CUR 0021*CODE</data>\n </parm>\n <parm io=\"both\" comment='Error code'>\n <ds comment='Format ERRC0100' len='rec2'>\n <data type='10i0' comment='Bytes 
returned'>0</data>\n <data type='10i0' comment='Bytes available' setlen='rec2'>0</data>\n <data type='7A' comment='Exception ID'> </data>\n <data type='1A' comment='Reserved'> </data>\n </ds>\n </parm>\n\n <overlay io=\"out\" top=\"1\" offset='myOffset'>\n <ds>\n <data type='10A' comment='Second language library'></data>\n <data type='2A' comment='Reserved'></data>\n <data type='10i0' enddo='prim' comment='Number of Primary languages'></data>\n <data type='10i0' offset=\"myOffset2\" comment='Offset to library records'></data>\n </ds>\n </overlay>\n\n <overlay io=\"out\" top=\"1\" offset=\"myOffset2\" dim='10' dou='prim' setnext='nextoff'>\n <ds>\n <data type='10i0' next='nextoff' comment='Offset to next library record'></data>\n <data type='10A' comment='Primary library name'></data>\n <data type='10A' comment='Installed library name'></data>\n <data type='10A' comment='Library type'></data>\n <data type='10A' comment='Library authority'></data>\n <data type='10A' comment='Library create authority'></data>\n <data type='10A' comment='Postoperation exit program name'></data>\n <data type='10i0' comment='Number of preoperation exit program names'></data>\n <data type='10A' comment='Preoperation exit program names'></data>\n </ds>\n </overlay>\n\n</pgm>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5817790627479553,
"alphanum_fraction": 0.6228479146957397,
"avg_line_length": 36.416107177734375,
"blob_id": "ef3ebf5266e4f5076a4c6a1f4289df2d05771626",
"content_id": "d2d68356496b4eac81092bafb613a830a46435e4",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 5576,
"license_type": "permissive",
"max_line_length": 136,
"num_lines": 149,
"path": "/test/php/test_71165_reservation_fork_parent_ibm_db2_io.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout fork multi reservation processing\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n$iamdead = false;\n\n// sequence of jobs\necho \"Parent fork 4 sequence children jobs now ...\\n\";\n$out0 = `php test_71166_reservation_child.php2 0`;\n$out1 = `php test_71166_reservation_child.php2 1`;\n$out2 = `php test_71166_reservation_child.php2 2`;\n$out3 = `php test_71166_reservation_child.php2 3`;\ncheckItNow(\"OUTPUT getxml0 (dsplibl): good exclusive key IPC ...\",$out0);\ncheckItNow(\"OUTPUT getxml1 (RTVSYSVAL): good exclusive key IPC ...\",$out1);\ncheckItNow(\"OUTPUT getxml2 (ZZCALL): good exclusive key IPC stop ...\",$out2);\ncheckItNow(\"OUTPUT getxml3 (ls /tmp): good free key IPC ...\",$out3);\n\n// independent jobs\n$max = 8;\nfor ($loop=0;$loop<$max;$loop++) {\n `echo 'test waiting ...' > \"/tmp/test_71166_reservation_0_{$loop}.test\"`;\n `echo 'test waiting ...' > \"/tmp/test_71166_reservation_1_{$loop}.test\"`;\n `echo 'test waiting ...' > \"/tmp/test_71166_reservation_2_{$loop}.test\"`;\n `echo 'test waiting ...' 
> \"/tmp/test_71166_reservation_3_{$loop}.test\"`;\n}\n// fork jobs (&)\nfor ($loop=0;$loop<$max;$loop++) {\n echo \"Parent fork 4 independent children jobs now (loop = $loop) ...\\n\";\n $pid = pcntl_fork();\n if ($pid == -1) {\n die('Failure could not fork');\n }\n // child\n if (!$pid) {\n $out0 = `php test_71166_reservation_child.php2 0 > \"/tmp/test_71166_reservation_0_{$loop}.test\" &`;\n $out1 = `php test_71166_reservation_child.php2 1 > \"/tmp/test_71166_reservation_1_{$loop}.test\" &`;\n $out2 = `php test_71166_reservation_child.php2 2 > \"/tmp/test_71166_reservation_2_{$loop}.test\" &`;\n $out3 = `php test_71166_reservation_child.php2 3 > \"/tmp/test_71166_reservation_3_{$loop}.test\" &`;\n exit(0);\n }\n}\n// collect output\nfor ($loop=0;$loop<$max;$loop++) {\n echo \"Parent collect data 4 independent children jobs now (loop = $loop) ...\\n\";\n $fork0[$loop] = true;\n $fork1[$loop] = true;\n $fork2[$loop] = true;\n $fork3[$loop] = true;\n}\n// retry over time period\n$retry = true;\n$reservation = true;\nfor ($h=0; $retry && $h<40;$h++) {\n usleep(500000); // Sleep for 500 miliseconds;\n // look at all files\n for ($loop=0;$loop<$max;$loop++) {\n if ($fork0[$loop]) $fork0[$loop] = continueLookingFile(\"OUTPUT getxml0 (dsplibl): good exclusive key IPC ...\",\"0\",\"$loop\");\n if ($fork1[$loop]) $fork1[$loop] = continueLookingFile(\"OUTPUT getxml1 (RTVSYSVAL): good exclusive key IPC ...\",\"1\",\"$loop\");\n if ($fork2[$loop]) $fork2[$loop] = continueLookingFile(\"OUTPUT getxml2 (ZZCALL): good exclusive key IPC stop ...\",\"2\",\"$loop\");\n if ($fork3[$loop]) $fork3[$loop] = continueLookingFile(\"OUTPUT getxml3 (ls /tmp): good free key IPC ...\",\"3\",\"$loop\");\n } // loop\n\n // all files reported Success???\n $retry = false;\n for ($loop=0;$loop<$max;$loop++) {\n if ($fork0[$loop] || $fork1[$loop] || $fork2[$loop] || $fork3[$loop]) $retry = true;\n } // loop\n\n // check all <use>myspecialkey</use> complete\n // make sure <stop>myspecialkey</stop> 
occurrs\n // allow getxml3 (ls /tmp): good free key IPC ...\n if ($retry || $reservation) {\n $reservation = false;\n for ($loop=0;$loop<$max;$loop++) {\n if ($fork0[$loop] || $fork1[$loop] || $fork2[$loop]) $reservation = true;\n } // loop\n if ($reservation) {\n $out2 = `php test_71166_reservation_child.php2 2`;\n checkItNow(\"OUTPUT getxml2 (ZZCALL): good exclusive key IPC stop ...\",$out2);\n }\n }\n\n} // h\n\n// bad\nif ($retry) {\n for ($loop=0;$loop<$max;$loop++) {\n if ($fork0[$loop] || $fork1[$loop] || $fork2[$loop] || $fork3[$loop]) $retry = true;\n if ($fork0[$loop]) $fork0[$loop] = continueLookingFile(\"OUTPUT getxml0 (dsplibl): good exclusive key IPC ...\",\"0\",\"$loop\",true);\n if ($fork1[$loop]) $fork1[$loop] = continueLookingFile(\"OUTPUT getxml1 (RTVSYSVAL): good exclusive key IPC ...\",\"1\",\"$loop\",true);\n if ($fork2[$loop]) $fork2[$loop] = continueLookingFile(\"OUTPUT getxml2 (ZZCALL): good exclusive key IPC stop ...\",\"2\",\"$loop\",true);\n if ($fork3[$loop]) $fork3[$loop] = continueLookingFile(\"OUTPUT getxml3 (ls /tmp): good free key IPC ...\",\"3\",\"$loop\",true);\n } // loop\n}\n\n// allow free use RPC next test\n$out2 = `php test_71166_reservation_child.php2 2`;\ncheckItNow(\"OUTPUT getxml2 (ZZCALL): good exclusive key IPC stop ...\",$out2);\n\n// clear\npcntl_wait($status); //Protect against Zombie children\n`rm /tmp/test_71166_reservation_*`;\n\n// bad\nif ($retry) die(\"Failure ($max x 4) independent child forks did not report in Success\\n\");\n\n// good or bad\nif ($iamdead) {\n var_dump($iamdead);\n die(\"Failure\\n\");\n}\necho \"Success\\n\";\n\nfunction continueLookingFile($title, $id, $lp, $dumpError=false) {\n global $iamdead;\n $cmd = \"cat /tmp/test_71166_reservation_{$id}_{$lp}.test\";\n // echo \"$cmd ($id, $lp)\\n\";\n $data = `$cmd`;\n // child serious error\n if (strpos($data,'am dead')>1) {\n echo (\"*** ERROR dead child program ($cmd) $title \\n\");\n if ($dumpError) echo \"$data\\n\";\n $iamdead[] = \"*** 
ERROR dead child program ($cmd) $title\\n\\n$data\\n\";\n return false; // done with child\n }\n // child worked\n if (strpos($data,'Success')<1) {\n echo (\"Still waiting child program ($cmd) $title \\n\");\n if ($dumpError) echo \"$data\\n\";\n return true; // waiting on child\n }\n /// child needs to run\n echo (\"Good child program ($cmd) $title \\n\");\n return false; // done with child\n}\nfunction checkItNow($title, $data) {\n if (strpos($data,'Success')<1) {\n // var_dump($data);\n die(\"Failure program $title \\n\");\n }\n echo (\"Good program $title \\n\");\n}\n?>\n\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.49366018176078796,
"alphanum_fraction": 0.5025359392166138,
"avg_line_length": 29.320512771606445,
"blob_id": "677461262c5870fb1163486d0621f36df41ae606",
"content_id": "8aa38613e428683fbd0b3b9d8dd456ae191f8283",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2366,
"license_type": "permissive",
"max_line_length": 122,
"num_lines": 78,
"path": "/test/php/test_33470_cwtest_spool.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest spool\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\nif ($doSpooledFiles) {\n\necho h2('Spooled Files');\n\n$splUser = 'QTMHHTTP';\necho \"Get up to 5 spooled files for user $splUser<BR>\";\n$list = i5_spool_list(array(I5_USERNAME=>$splUser), $conn);\nif (!$list) {\n\techo 'Error getting spool list: ' . printArray(i5_error()) . '<BR>';\n die();\n} else {\n $spoolCount = 0;\n\t while (($listItem = i5_spool_list_read($list)) && (++$spoolCount <= 5)) {\n\t echo \"<BR>list item: \" . printArray($listItem) . \"<BR>\";\n\t echo '<BR>Output data for this spool file: <BR>';\n\t $data = i5_spool_get_data($listItem['SPLFNAME'],\n\t $listItem['JOBNAME'],\n\t $listItem['USERNAME'],\n\t $listItem['JOBNBR'],\n\t $listItem['SPLFNBR']);\n\t if (!$data) {\n\t \techo '<BR>No spool data. Error info: ' . printArray(i5_error()) . '<BR>';\n\t } else {\n\t \techo \"<PRE>$data</PRE><BR>\";\n\t } //(if data)\n\n\t }\n}\ni5_spool_list_close($list);\n\n\n$outq = 'QGPL/QPRINT';\necho \"<BR>Get up to 5 spooled files for outq $outq (may get permissions message if user's authority is insufficient)<BR>\";\n$list = i5_spool_list(array(I5_OUTQ=>$outq), $conn);\nif (!$list) {\n\techo 'Error getting spool list: ' . printArray(i5_error()) . '<BR>';\n die();\n} else {\n\n $spoolCount = 0;\n\t while (($listItem = i5_spool_list_read($list)) && (++$spoolCount <= 5)) {\n\n\t echo \"<BR>list item: \" . printArray($listItem) . \"<BR>\";\n\t echo '<BR>Output data for this spool file: <BR>';\n\t $data = i5_spool_get_data($listItem['SPLFNAME'],\n\t $listItem['JOBNAME'],\n\t $listItem['USERNAME'],\n\t $listItem['JOBNBR'],\n\t $listItem['SPLFNBR']);\n\t if (!$data) {\n\t \techo '<BR>No spool data. Error info: ' . printArray(i5_error()) . 
'<BR>';\n\t } else {\n\t \techo \"<PRE>$data</PRE><BR>\";\n\t } //(if data)\n\n\t } //(while spool files)\n}\ni5_spool_list_close($list);\n\n\n} //(spooled files)\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5770089030265808,
"alphanum_fraction": 0.5892857313156128,
"avg_line_length": 23.53424644470215,
"blob_id": "81a0cff05576778f6df07ce1a1b25573e485eeba",
"content_id": "393b423eb91976e9934abec7c8668292cbbdfd07",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1792,
"license_type": "permissive",
"max_line_length": 60,
"num_lines": 73,
"path": "/test/php/test_10144_ZZSTEP_rest_get_stephex.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: REST GET SRVPGM - RPG step var incrementing\n--SKIPIF--\n<?php require_once('skipifrest.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nfor ($i=0; $i<5; $i++) {\n // http GET parms\n $clobIn = getxml();\n $clobOut = \"\";\n $parm = \"?db2=$i5restdb\";\n $parm .= \"&uid=$user\";\n $parm .= \"&pwd=$password\";\n $parm .= \"&ipc=$ipc\";\n $parm .= \"&ctl=$ctl\";\n $parm .= \"&xmlin=\".urlencode($clobIn);\n $parm .= \"&xmlout=32768\"; // size expected XML output\n // execute\n $linkall = \"$i5resturl\".htmlentities($parm);\n $getOut = simplexml_load_file($linkall);\n // result\n if ($getOut) $clobOut = $getOut->asXML();\n else $clobOut = \"\";\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm return\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"No XML pgm return ($lib/$name.$func)\");\n $data = (string)$retn[0]->data;\n if (strlen($data) < 8) die(\"return not greater than '0'\");\n // good\n echo \"Success ($lib/$name.$func return $data)\\n\";\n}\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZSTEP'>\n <return>\n <data type='4b'>C6BF1461</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5631067752838135,
"alphanum_fraction": 0.6162937879562378,
"avg_line_length": 27.190475463867188,
"blob_id": "c245c293d61ce42d824bed1c11a24e7751d54a2f",
"content_id": "0cf4c5e518d6850fe5c4bf8d18eca5a4d27d7c10",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2369,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 84,
"path": "/test/php/test_17500_ZZCALL_ibm_db2_io_pgm_records.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - call pgm complex data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n\nif (strpos($clobOut,\":E:F:333.3330:4444444444.44:\")<1) die(\"Missing :E:F:333.3330:4444444444.44:\");\n\n\n// good\necho \"Success ($lib/$name)\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZCALL' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <data type='1A' var='INCHARB'>b</data>\n </parm>\n <parm io='both'>\n <data type='7p4' var='INDEC1'>11.1111</data>\n </parm>\n <parm io='both'>\n <data type='12p2' var='INDEC2'>222.22</data>\n </parm>\n <parm io='both'>\n <ds data=\"records\">\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n <records delimit=':'>:r:o:444.4444:888.88:</records>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.6023985147476196,
"alphanum_fraction": 0.6356088519096375,
"avg_line_length": 27.473684310913086,
"blob_id": "155eac0112c716a53e2677f072d3bd7905e7bf54",
"content_id": "066dcb2bf82cb7310e0f58391eaf362da209d290",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1084,
"license_type": "permissive",
"max_line_length": 55,
"num_lines": 38,
"path": "/test/byval/testref.py",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "import os\nfrom itoolkit import *\nfrom itoolkit.lib.ilibcall import *\n\nitransport = iLibCall()\n\nitool = iToolKit()\nitool.add(iCmd('chglibl', 'CHGLIBL LIBL(XMLSERVICE)'))\nitool.add(\n iSrvPgm('zzarray','ZZSRV','ZZARRAY')\n .addParm(iData('myName','10a','ranger'))\n .addParm(iData('myMax','10i0','8'))\n .addParm(iData('myCount','10i0','',{'enddo':'mycnt'}))\n .addRet(\n iDS('dcRec_t',{'dim':'999','dou':'mycnt'})\n .addData(iData('dcMyName','10a',''))\n .addData(iData('dcMyJob','4096a',''))\n .addData(iData('dcMyRank','10i0',''))\n .addData(iData('dcMyPay','12p2',''))\n )\n )\nitool.call(itransport)\nchglibl = itool.dict_out('chglibl')\nzzarray = itool.dict_out('zzarray')\nprint(chglibl['success'])\nprint(zzarray['success'])\nif 'success' in zzarray:\n print(zzarray['myName'],'ranger')\n print(zzarray['myMax'],'8')\n print(zzarray['myCount'],'8')\n i = 1\n dcRec_t = zzarray['dcRec_t']\n for rec in dcRec_t:\n print(rec['dcMyName'],\"ranger\"+str(i))\n print(rec['dcMyJob'],\"Test 10\"+str(i))\n print(int(rec['dcMyRank']), 10 + i)\n print(float(rec['dcMyPay']),13.42 * i)\n i+=1\n\n\n"
},
{
"alpha_fraction": 0.6124905943870544,
"alphanum_fraction": 0.6313017010688782,
"avg_line_length": 26.102041244506836,
"blob_id": "f581ee8216abce591bf45f2d9a5572f4a8eb9da6",
"content_id": "3ca1c3593eaa9e09e86d819a225bd8c444e6e05e",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1329,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 49,
"path": "/test/php/test_10010_SESSION_ibm_db2_io.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - session data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$ctl = \"*session\";\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n\nif (strpos($clobOut,\"key=\")<1) die(\"missing key=\");\n\n\n// good\necho \"I am ...\\n\";\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5994173288345337,
"alphanum_fraction": 0.6147122979164124,
"avg_line_length": 23.5,
"blob_id": "cca6521a286882047228d0e3b2a80ab7471d1081",
"content_id": "7a87f038e54507877da95c90e9a4e9226c0e3629",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1373,
"license_type": "permissive",
"max_line_length": 130,
"num_lines": 56,
"path": "/test/php/test_33469_cwtest_joblist.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest job list\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\n// job list\n\nif ($doJobLists) {\n\necho h2('Job lists');\n\necho \"About to get up to 5 jobs with jobname ZENDSVR (can also do I5_JOBUSER, I5_USERNAME, I5_JOBNUMBER, and I5_JOBTYPE).<BR>\";\n\n$list = i5_job_list(array(I5_JOBNAME=>'ZENDSVR'));\nif (!$list) {\n\techo 'Error getting job list: ' . printArray(i5_error()) . '<BR>';\n die();\n} else {\n $jobCount = 0;\n while (($listItem = i5_job_list_read($list)) && (++$jobCount <= 5)) {\n\t\t\techo printArray($listItem) . '<BR>';\n\t}\n\techo 'End of list.<BR><BR>';\n}\ni5_job_list_close($list);\n\n\n// Get info about current job\necho \"Getting information about current job.<BR>\";\n$list = i5_job_list();//array(I5_USERNAME=>'*ALL'), $conn);\nif (!$list) {\n\techo 'Error getting job list: ' . printArray(i5_error()) . '<BR>';\n die();\n} else {\n\t // should be only one for current job.\n\t $listItem = i5_job_list_read($list);\n\t echo \"<BR>list item for current job: \" . printArray($listItem) . \"<BR><BR>\";\n\t echo \"Job name: {$listItem[I5_JOB_NAME]} user: {$listItem[I5_JOB_USER_NAME]} job number: {$listItem[I5_JOB_NUMBER]}<BR><BR>\";\n}\ni5_job_list_close($list);\n\n} //(if do job lists)\n\n\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6363636255264282,
"alphanum_fraction": 0.6898396015167236,
"avg_line_length": 12.285714149475098,
"blob_id": "0dadc5e7b780e5bc51adef9c11c6ed23c57d5cb0",
"content_id": "d1ed0ca40e328c38164d207467e1cb1ca92385bc",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 187,
"license_type": "permissive",
"max_line_length": 39,
"num_lines": 14,
"path": "/test/php/ftpfew.sh",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "# bash ftpfew.sh lp0364d adc\nMYPWD=$(<$HOME/.ftprc)\nftp -i -n -v $1 << ftp_end\nuser $2 $MYPWD\n\nquote namefmt 1\n\nbin\ncd /www/zendsvr/htdocs/tests/xmlservice\nmput test_801*\n\nquit\n\nftp_end\n\n"
},
{
"alpha_fraction": 0.5958762764930725,
"alphanum_fraction": 0.6159793734550476,
"avg_line_length": 27.940298080444336,
"blob_id": "eb6fa86273de4ebae3ee77f6f8f2301bd0aafa6d",
"content_id": "5a0df1f65d84a4ba91a751246c7f6ab6173a6150",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1940,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 67,
"path": "/test/php/test_98811_ERROR_ibm_db2_io_warning.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 - check warning error\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG65K(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n// -----------------\n// output error report\n// -----------------\n$mybads = array(\"bad1\", \"bad2\", \"bad3\");\n$mypaths = array(\"version\", \"error\", \"jobinfo\", \"joblog\");\nforeach ($mybads as $bad) {\n foreach ($mypaths as $path) {\n $look = \"/script/$bad/cmd/$path\";\n $allerrors = $xmlobj->xpath($look);\n if (!$allerrors) die(\"Fail XML errors missing $look\\n\");\n else echo \"Good XML $look\\n\";\n }\n}\n\n// good\necho \"Success\\n\";\n\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0' encoding='ISO-8859-1' ?>\n<script>\n<cmd>addlible xmlservice</cmd>\n<bad1><cmd exec='cmd'>addlible xmlservice</cmd></bad1>\n<bad2><cmd exec='system'>addlible xmlservice</cmd></bad2>\n<bad3><cmd exec='rexx'>addlible xmlservice</cmd></bad3>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5267762541770935,
"alphanum_fraction": 0.5392364859580994,
"avg_line_length": 27.786258697509766,
"blob_id": "25256b492483150a5d12f0852bfeba1d78be6862",
"content_id": "c42c8d6c0aff34a3c393aec7674e84f4758ded7d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3772,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 131,
"path": "/test/php/test_10244_ZZPDF_pdo_ibm_set_pdf.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: PDO_IBM result set PDF - binary data\n--SKIPIF--\n<?php require_once('skipifpdo.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// read pdf file into hex string\n$handle = fopen($pdfInFile, \"rb\");\n$hexpdf = strtoupper( bin2hex( fread( $handle, filesize($pdfInFile) ) ) );\nfclose($handle);\n// -----------------\n// make the call\n// -----------------\n// call IBM i\n$database = \"ibm:\".$database;\ntry {\n $db = new PDO($database,\n strtoupper($user),\n strtoupper($password),\n array(PDO::ATTR_AUTOCOMMIT=>true));\n if (!$db) throw new Exception('foo');\n} catch( Exception $e ) {\n die(\"Bad connect: $database,$user\");\n}\ntry {\n $stmt = $db->prepare(\"call $procLib.iPLUGR1M(?,?,?)\");\n if (!$stmt) throw new Exception('bar');\n} catch( Exception $e ) {\n $err = $db->errorInfo();\n $cod = $db->errorCode();\n die(\"Bad prepare: \".$cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n}\ntry {\n $ctl .= \" *hack\";\n $clobIn = getxml($hexpdf);\n $clobOut = \"\";\n $ret = $stmt->execute(array($ipc,$ctl,$clobIn));\n if (!$ret) throw new Exception('yoyo');\n while( $row = $stmt->fetch(PDO::FETCH_NUM,PDO::FETCH_ORI_NEXT) ) {\n $clobOut .= driverJunkAway($row[0]);\n }\n} catch( Exception $e ) {\n $err = $stmt->errorInfo();\n $cod = $stmt->errorCode();\n die(\"Bad execute: \".$cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n}\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$func = $pgm->attributes()->func;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Missing XML pgm parms ($lib/$name.$func)\\n\");\n$var = $parm[0]->data->attributes()->var;\n$hexret = (string)$parm[0]->data;\n$size = 32;\n$max = strlen($hexpdf);\nfor ($i=0;$i<$max;$i+=$size) {\n if ($i + $size > $max) $size = $max - $i;\n $a = substr($hexpdf,$i,$size);\n $b = substr($hexret,$i,$size);\n if ($a<>$b) {\n echo \"offset $i: $a\\n\";\n echo \"offset $i: $b\\n\";\n die(\"Fail XML $var in/out not match ($lib/$name.$func)\\n\");\n }\n}\n// pgm data returned\n$retn = $pgm->xpath('return');\nif (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n$var = $retn[0]->data->attributes()->var;\n$actual = (string)$retn[0]->data;\n$expect = '1';\nif ($actual != $expect) die(\"$var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n// good\n// file_put_contents($pdfOutFile, pack(\"H*\", $hexret));\necho substr($hexret,0,400).\"\\n\";\necho \"Success ($lib/$name.$func)\\n\";\n\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzpdf: check binary\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzpdf B export\n// D zzpdf PI 10i 0\n// D myPDF 65000A\nfunction getxml($hexpdf) {\n$clob1 = <<<ENDPROC1\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZPDF'>\n <parm comment='binary data'>\n <data var='myPDF' type='xxsizeb'>\nENDPROC1;\n$clob3 = <<<ENDPROC3\n </data>\n </parm>\n <return>\n <data var='myRet' type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC3;\n$was = array('xxsize');\n$now 
= array(strlen($hexpdf)/2);\n$clob1 = str_replace($was,$now,$clob1);\n$clob = $clob1;\n$clob .= $hexpdf;\n$clob .= $clob3;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.4920634925365448,
"alphanum_fraction": 0.4920634925365448,
"avg_line_length": 11.399999618530273,
"blob_id": "adaaf5df3210ba100b8dfeb6d4ff7d80d4c754f2",
"content_id": "51831cb19a6303dda63f92759abb5b84bf03b323",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 63,
"license_type": "permissive",
"max_line_length": 33,
"num_lines": 5,
"path": "/test/php/skipifibmi.inc",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\nif (!extension_loaded('ibm_i')) {\n die('skip');\n}\n?>\n\n"
},
{
"alpha_fraction": 0.540898323059082,
"alphanum_fraction": 0.5560283660888672,
"avg_line_length": 50.585365295410156,
"blob_id": "b59b2c7c36499c6357792ec96326c370a9bc59b5",
"content_id": "5683b77803c59a7e6094b3664076067931e2fc8d",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2115,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 41,
"path": "/test/php/xmlservice_ibm_db2.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// PHP driver: xmlservice_ibm_db2.php\n// Notes:\n// Assumes you have PHP ibm_db2 driver enabled in Zend Server.\n// You may use PHP ibm_db2 driver on IBM i (1-tier)\n// or from Linux/Windows to IBM i (2-tier).\n// For help:\n// http://www.youngiprofessionals.com/wiki/index.php/PHP/DB2\nif (!extension_loaded('ibm_db2')) {\n die('Error: ibm_db2 extension not loaded, use Zend Server GUI to enable.');\n}\n\n// *** XMLSERVICE call (DB2 driver) ***\n// Note:\n// Connection ($procConn) is global to avoid looping\n// re-open/re-close that errors most drivers\nfunction xmlservice($xml) {\nglobal $i5persistentconnect, $database, $user, $password, $ipc, $ctl, $procConn, $procLib, $procPlug, $procPlugR;\n $xmlIn = $xml;\n $xmlOut = '';\n if (!$procConn) {\n if ($i5persistentconnect) $procConn = db2_pconnect($database, $user, $password); // persistent/pooled connection\n else $procConn = db2_connect($database, $user, $password); // full open/close connection\n }\n if (!$procConn) die(\"Bad connect: $database, $user\");\n $stmt = db2_prepare($procConn, \"call $procLib.$procPlug(?,?,?,?)\"); // Call XMLSERVICE\n // stored procedure interface\n // in/out parameter (xmlOut)\n // sizes: iPLUG4K - iPLUG15M\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN); // ? - /tmp/raw_$user (*sbmjob)\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN); // ? - *here or *sbmjob\n $ret=db2_bind_param($stmt, 3, \"xmlIn\", DB2_PARAM_IN); // ? - XML input script\n $ret=db2_bind_param($stmt, 4, \"xmlOut\", DB2_PARAM_OUT); // ? - XML output return\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n return driverJunkAway($xmlOut); // just in case driver odd\n // ... possible junk end record,\n // record</script>junk\n}\n?>\n"
},
{
"alpha_fraction": 0.5549936890602112,
"alphanum_fraction": 0.5714285969734192,
"avg_line_length": 32.63829803466797,
"blob_id": "5259019d55f58b5eb071a6a246a10ea7f5bbad86",
"content_id": "642c3d45d1d49d9c261259afcd60809c0960dbb5",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1582,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 47,
"path": "/test/php/test_10100_ZZNADA_ibm_db2_io_nada.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - RPG no parm and no return\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml(); // XML in\n$clobOut = \"\"; // XML out\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// dump XML out\nvar_dump($clobOut);\n// XML check\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\\n\");\n// check raw XML no error?\nif (strlen(trim($clobOut))<1 || strpos($clobOut,\"error\")>0) die(\"Fail\\n\");\necho \"Success\\n\";\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zznada: check no parms\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zznada B export\n// D zznada PI\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZNADA'>\n</pgm>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6499999761581421,
"alphanum_fraction": 0.654347836971283,
"avg_line_length": 18.95652198791504,
"blob_id": "6133485ab267e030be8326330488750f6c3d17a6",
"content_id": "70496f5525e95da879a695ffa1071ee40e2e014f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 460,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 23,
"path": "/test/php/test_33467_cwtest_sysval.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - cwtest system value\n--SKIPIF--\n<?php require_once('skipifcw.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('xxcw_test_setup.php');\n\necho h2('Get system value');\n$start = microtime(true);\n$date = i5_get_system_value('QDATE');\n$end = microtime(true);\n$elapsed = $end - $start;\necho \"QDATE system value: '$date', obtained in $elapsed seconds.<BR>\";\n\n\n// good\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.602465808391571,
"alphanum_fraction": 0.6540369987487793,
"avg_line_length": 54.89075469970703,
"blob_id": "9ee0b540bf8c970257845e610c48a75eb9c002b8",
"content_id": "b8fcf51aab55a95d868d951a8da4a7443a910edf",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 13302,
"license_type": "permissive",
"max_line_length": 318,
"num_lines": 238,
"path": "/docs/errors.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE Errors\n=================\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\n\nCommon errors\n-------------\n\n::\n\n <errnoile>3401</errnoile>\n <errnoilemsg><![CDATA[Permission denied.]]></errnoilemsg>\n <errnoxml>1301011</errnoxml>\n <xmlerrmsg><![CDATA[IPC shmat fail 1]]></xmlerrmsg>\n <xmlhint><![CDATA[/tmp/packers3]]></xmlhint>\n </error>\n\nerrnoile 3401 -- usually means another process with a different user profile is using IPC (/tmp/packers3)\n\n::\n\n <errnoile>3021</errnoile>\n <errnoxml>1301009</errnoxml>\n <xmlerrmsg>IPC getshm fail</xmlerrmsg>\n <xmlhint><![CDATA[/tmp/ ]]></xmlhint>\n\n'Hung' semaphores/shared memory associated with a user never suppose to happen, but have seen in rare occasion. The following commands can be used to remove \"hung\" semaphores/shared memory associated with a user assuming you have appropriate authority to run like SECOFR, etc. (Ranger welcomes you to Unix geek-ville).\n\nExample::\n\n grep -i qtm means ipcrm for (QTM)HHTTP ...\n\n endTCPSVR SERVER(*HTTP) INSTANCE(ZENDSVR) -- suggest end web server\n\n call qp2term\n > ipcs | grep -i qtm | awk '{print \"ipcrm -\" tolower($1) \" \"$2}' -- show action, but NOT do action\n > ipcs | grep -i qtm | awk '{print \"ipcrm -\" tolower($1) \" \"$2}' | sh -- remove semaphores/shared memory\n\n strTCPSVR SERVER(*HTTP) INSTANCE(ZENDSVR) -- suggest start web server\n\n\n::\n\n <errnoxml>1000005</errnoxml>\n <xmlerrmsg>PASE resolve failed</xmlerrmsg>\n <xmlhint><![CDATA[MYPGM]]></xmlhint>\n\nThe program you tried to call, shown here as \"MYPGM\" (in the CDATA tag), was not found. Make sure you specified the library and program correctly, including upper or lower case (usually upper case).\n\n::\n\n <errnoxml>1480002</errnoxml>\n <xmlerrmsg>XMLCGI invalid</xmlerrmsg>\n <xmlhint>*NONE</xmlhint>\n\n\\*NONE requires a special compile of the RPG source and is NOT enabled in production versions of the toolkit by default. 
\nIt is most useful for demos with custom security like this site, if you try on your machine you will likely get 1480002 \nerror XMLCGI. You can find details in plugerr_h.\n\n\nILE errno\n---------\n\nErrno Values for UNIX-Type Functions\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPrograms using the UNIX(R)-type functions may receive error information as errno values.\nThe possible values returned are listed here in ascending errno value sequence.\n\n::\n\n Name \t Value \tText\n EDOM \t 3001 \tA domain error occurred in a math function.\n ERANGE \t 3002 \tA range error occurred.\n ETRUNC \t3003 \tData was truncated on an input, output, or update operation.\n ENOTOPEN 3004 \tFile is not open.\n ENOTREAD 3005 \tFile is not opened for read operations.\n EIO \t 3006 \tInput/output error.\n ENODEV \t 3007 \tNo such device.\n ERECIO \t 3008 \tCannot get single character for files opened for record I/O.\n ENOTWRITE 3009 \tFile is not opened for write operations.\n ESTDIN \t 3010 \tThe stdin stream cannot be opened.\n ESTDOUT 3011 \tThe stdout stream cannot be opened.\n ESTDERR 3012 \tThe stderr stream cannot be opened.\n EBADSEEK 3013 \tThe positioning parameter in fseek is not correct.\n EBADNAME 3014 \tThe object name specified is not correct.\n EBADMODE 3015 \tThe type variable specified on the open function is not correct.\n EBADPOS 3017 \tThe position specifier is not correct.\n ENOPOS \t 3018 \tThere is no record at the specified position.\n ENUMMBRS 3019 \tAttempted to use ftell on multiple members.\n ENUMRECS 3020 \tThe current record position is too long for ftell.\n EINVAL \t 3021 \tThe value specified for the argument is not correct.\n EBADFUNC 3022 \tFunction parameter in the signal function is not set.\n ENOENT \t 3025 \tNo such path or directory.\n ENOREC \t3026 \tRecord is not found.\n EPERM \t 3027 \tThe operation is not permitted.\n EBADDATA 3028 \tMessage data is not valid.\n EBUSY \t 3029 \tResource busy.\n EBADOPT 3040 \tOption specified is not valid.\n 
ENOTUPD 3041 \tFile is not opened for update operations.\n ENOTDLT 3042 \tFile is not opened for delete operations.\n EPAD \t 3043 \tThe number of characters written is shorter than the expected record length.\n EBADKEYLN 3044 \tA length that was not valid was specified for the key.\n EPUTANDGET 3080 \tA read operation should not immediately follow a write operation.\n EGETANDPUT 3081 \tA write operation should not immediately follow a read operation.\n EIOERROR 3101 \tA nonrecoverable I/O error occurred.\n EIORECERR 3102 \tA recoverable I/O error occurred.\n EACCES \t 3401 \tPermission denied.\n ENOTDIR 3403 \tNot a directory.\n ENOSPC \t 3404 \tNo space is available.\n EXDEV \t 3405 \tImproper link.\n EAGAIN \t 3406 \tOperation would have caused the process to be suspended.\n EWOULDBLOCK 3406 \tOperation would have caused the process to be suspended.\n EINTR \t 3407 \tInterrupted function call.\n EFAULT \t 3408 \tThe address used for an argument was not correct.\n ETIME \t 3409 \tOperation timed out.\n ENXIO \t 3415 \tNo such device or address.\n EAPAR \t 3418 \tPossible APAR condition or hardware failure.\n ERECURSE 3419 \tRecursive attempt rejected.\n EADDRINUSE 3420 \tAddress already in use.\n EADDRNOTAVAIL 3421 \tAddress is not available.\n EAFNOSUPPORT 3422 \tThe type of socket is not supported in this protocol family.\n EALREADY 3423 \tOperation is already in progress.\n ECONNABORTED 3424 \tConnection ended abnormally.\n ECONNREFUSED 3425 \tA remote host refused an attempted connect operation.\n ECONNRESET 3426 \tA connection with a remote socket was reset by that socket.\n EDESTADDRREQ 3427 \tOperation requires destination address.\n EHOSTDOWN 3428 \tA remote host is not available.\n EHOSTUNREACH 3429 \tA route to the remote host is not available.\n EINPROGRESS 3430 \tOperation in progress.\n EISCONN 3431 \tA connection has already been established.\n EMSGSIZE 3432 \tMessage size is out of range.\n ENETDOWN 3433 \tThe network currently is not available.\n 
ENETRESET 3434 \tA socket is connected to a host that is no longer available.\n ENETUNREACH 3435 \tCannot reach the destination network.\n ENOBUFS 3436 \tThere is not enough buffer space for the requested operation.\n ENOPROTOOPT 3437 \tThe protocol does not support the specified option.\n ENOTCONN 3438 \tRequested operation requires a connection.\n ENOTSOCK 3439 \tThe specified descriptor does not reference a socket.\n ENOTSUP 3440 \tOperation is not supported.\n EOPNOTSUPP 3440 \tOperation is not supported.\n EPFNOSUPPORT 3441 \tThe socket protocol family is not supported.\n EPROTONOSUPPORT 3442 \tNo protocol of the specified type and domain exists.\n EPROTOTYPE 3443 \tThe socket type or protocols are not compatible.\n ERCVDERR 3444 \tAn error indication was sent by the peer program.\n ESHUTDOWN 3445 \tCannot send data after a shutdown.\n ESOCKTNOSUPPORT 3446 \tThe specified socket type is not supported.\n ETIMEDOUT 3447 \tA remote host did not respond within the timeout period.\n EUNATCH 3448 \tThe protocol required to support the specified address family is not available at this time.\n EBADF \t 3450 \tDescriptor is not valid.\n EMFILE \t 3452 \tToo many open files for this process.\n ENFILE \t 3453 \tToo many open files in the system.\n EPIPE \t 3455 \tBroken pipe.\n ECANCEL 3456 \tOperation cancelled.\n EEXIST \t 3457 \tFile exists.\n EDEADLK 3459 \tResource deadlock avoided.\n ENOMEM \t 3460 \tStorage allocation request failed.\n EOWNERTERM 3462 \tThe synchronization object no longer exists because the owner is no longer running.\n EDESTROYED 3463 \tThe synchronization object was destroyed, or the object no longer exists.\n ETERM \t 3464 \tOperation was terminated.\n ENOENT1 3465 \tNo such file or directory.\n ENOEQFLOG 3466 \tObject is already linked to a dead directory.\n EEMPTYDIR 3467 \tDirectory is empty.\n EMLINK \t 3468 \tMaximum link count for a file was exceeded.\n ESPIPE \t 3469 \tSeek request is not supported for object.\n ENOSYS \t 3470 
\tFunction not implemented.\n EISDIR \t 3471 \tSpecified target is a directory.\n EROFS \t 3472 \tRead-only file system.\n EUNKNOWN 3474 \tUnknown system state.\n EITERBAD 3475 \tIterator is not valid.\n EITERSTE 3476 \tIterator is in wrong state for operation.\n EHRICLSBAD 3477 \tHRI class is not valid.\n EHRICLBAD 3478 \tHRI subclass is not valid.\n EHRITYPBAD 3479\tHRI type is not valid.\n ENOTAPPL 3480 \tData requested is not applicable.\n EHRIREQTYP 3481 \tHRI request type is not valid.\n EHRINAMEBAD 3482 \tHRI resource name is not valid.\n EDAMAGE 3484 \tA damaged object was encountered.\n ELOOP \t 3485 \tA loop exists in the symbolic links.\n ENAMETOOLONG 3486 \tA path name is too long.\n ENOLCK \t 3487 \tNo locks are available.\n ENOTEMPTY 3488 \tDirectory is not empty.\n ENOSYSRSC 3489 \tSystem resources are not available.\n ECONVERT 3490 \tConversion error.\n E2BIG \t 3491 \tArgument list is too long.\n EILSEQ \t 3492 \tConversion stopped due to input character that does not belong to the input codeset.\n ETYPE \t 3493 \tObject type mismatch.\n EBADDIR 3494 \tAttempted to reference a directory that was not found or was destroyed.\n EBADOBJ 3495 \tAttempted to reference an object that was not found, was destroyed, or was damaged.\n EIDXINVAL 3496 \tData space index used as a directory is not valid.\n ESOFTDAMAGE 3497 \tObject has soft damage.\n ENOTENROLL 3498 \tUser is not enrolled in system distribution directory.\n EOFFLINE 3499 \tObject is suspended.\n EROOBJ \t 3500 \tObject is a read-only object.\n EEAHDDSI 3501 \tHard damage on extended attribute data space index.\n EEASDDSI 3502 \tSoft damage on extended attribute data space index.\n EEAHDDS 3503 \tHard damage on extended attribute data space.\n EEASDDS 3504 \tSoft damage on extended attribute data space.\n EEADUPRC 3505 \tDuplicate extended attribute record.\n ELOCKED 3506 \tArea being read from or written to is locked.\n EFBIG \t 3507 \tObject too large.\n EIDRM \t 3509 \tThe semaphore, shared 
memory, or message queue identifier is removed from the system.\n ENOMSG \t 3510 \tThe queue does not contain a message of the desired type and (msgflg logically ANDed with IPC_NOWAIT).\n EFILECVT 3511 \tFile ID conversion of a directory failed.\n EBADFID 3512 \tA file ID could not be assigned when linking an object to a directory.\n ESTALE \t 3513 \tFile handle was rejected by server.\n ESRCH \t 3515 \tNo such process.\n ENOTSIGINIT 3516 \tProcess is not enabled for signals.\n ECHILD \t 3517 \tNo child process.\n EBADH \t 3520 \tHandle is not valid.\n ETOOMANYREFS 3523 \tThe operation would have exceeded the maximum number of references allowed for a descriptor.\n ENOTSAFE 3524 \tFunction is not allowed.\n EOVERFLOW 3525 \tObject is too large to process.\n EJRNDAMAGE 3526 \tJournal is damaged.\n EJRNINACTIVE 3527 \tJournal is inactive.\n EJRNRCVSPC 3528 \tJournal space or system storage error.\n EJRNRMT 3529 \tJournal is remote.\n ENEWJRNRCV 3530 \tNew journal receiver is needed.\n ENEWJRN 3531 \tNew journal is needed.\n EJOURNALED 3532 \tObject already journaled.\n EJRNENTTOOLONG 3533 \tEntry is too large to send.\n EDATALINK 3534 \tObject is a datalink object.\n ENOTAVAIL 3535 \tIASP is not available.\n ENOTTY \t 3536 \tI/O control operation is not appropriate.\n EFBIG2 \t 3540 \tAttempt to write or truncate file past its sort file size limit.\n ETXTBSY 3543 \tText file busy.\n EASPGRPNOTSET 3544 \tASP group not set for thread.\n ERESTART 3545 \tA system call was interrupted and may be restarted.\n ESCANFAILURE 3546 \tAn object has been marked as a scan failure due to processing by an exit program associated with the scan-related integrated file system exit points.\n\n\n\n\n..\n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEError?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n"
},
{
"alpha_fraction": 0.5981506109237671,
"alphanum_fraction": 0.6047556400299072,
"avg_line_length": 34.68867874145508,
"blob_id": "d4682cf8d34dabb32d3e4aa12d50e1f23b5d238d",
"content_id": "ae2c8be591974d5e50d50c5453fc932eefd7de1a",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 3785,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 106,
"path": "/test/php/test_90255_ibm_db2_io_sqlexec_error.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SQL - error query animalnotthere\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n// -------------\n// call IBM i\n// -------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n$allerror = array(\"fast\",\"off\",\"on\");\nforeach ($allerror as $myerror) {\n $stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml($myerror);\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing (XML input info)\n // -----------------\n // dump raw XML (easy test debug)\n // var_dump($clobIn);\n $xmlobj = simplexml_load_string($clobIn);\n if (!$xmlobj) die(\"Bad XML input\");\n $sqls = $xmlobj->xpath('/script/sql/query');\n if (!$sqls) die(\"Missing XML sql input info\");\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n switch($myerror) {\n case \"fast\":\n $sqlcode = $xmlobj->xpath('/script/sql/query/error/sqlcode');\n if (!$sqlcode) die(\"Bad error='fast' prepare missed sqlcode\");\n $sqlstate = $xmlobj->xpath('/script/sql/query/error/sqlstate');\n if (!$sqlstate) die(\"Bad error='fast' prepare missed sqlstate\");\n $error = $xmlobj->xpath('/script/sql/query/error');\n if (!$error) die(\"Bad error='fast' query missed error\");\n $joblog = $xmlobj->xpath('/script/sql/query/joblog');\n if ($joblog) die(\"Bad error='fast' query included joblog\");\n\n $error = $xmlobj->xpath('/script/sql/fetch/error');\n if (!$error) die(\"Bad error='fast' fetch missed error\");\n $joblog = $xmlobj->xpath('/script/sql/fetch/joblog');\n if ($joblog) die(\"Bad error='fast' fetch included joblog\");\n break;\n case \"off\":\n $sqlcode = $xmlobj->xpath('/script/sql/query/error/sqlcode');\n if (!$sqlcode) die(\"Bad error='fast' prepare missed sqlcode\");\n $sqlstate = $xmlobj->xpath('/script/sql/query/error/sqlstate');\n if (!$sqlstate) die(\"Bad error='fast' prepare missed sqlstate\");\n $error = $xmlobj->xpath('/script/sql/query/error');\n if (!$error) die(\"Bad error='off' query missing error\");\n $joblog = $xmlobj->xpath('/script/sql/query/joblog');\n if (!$joblog) die(\"Bad error='off' query missing joblog\");\n\n $error = $xmlobj->xpath('/script/sql/fetch/error');\n if (!$error) die(\"Bad error='off' fetch missing error\");\n $joblog = $xmlobj->xpath('/script/sql/fetch/joblog');\n if (!$joblog) die(\"Bad error='off' fetch missing joblog\");\n break;\n case \"on\":\n $error = $xmlobj->xpath('/report/error');\n if (!$error) die(\"Bad error='on' missing error\");\n $joblog = $xmlobj->xpath('/report/joblog');\n if (!$joblog) die(\"Bad error='on' missing joblog\");\n break;\n default:\n break;\n }\n}\n// good\necho \"Success\\n\";\n\nfunction getxml($now) {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<sql>\n<query error='xyzerror'>select * from animalnotthere</query>\n<fetch block='all' error='xyzerror'/>\n</sql>\n</script>\nENDPROC;\n$clob = str_replace(\"xyzerror\",$now,$clob);\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n"
},
{
"alpha_fraction": 0.5744562149047852,
"alphanum_fraction": 0.5822643637657166,
"avg_line_length": 32.203704833984375,
"blob_id": "20040c99ce15bc8a978b3f85195f3bad4082edb9",
"content_id": "e038c0712a8bdec7e30230596ad773c8859ae59c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1793,
"license_type": "permissive",
"max_line_length": 113,
"num_lines": 54,
"path": "/test/php/xmlservice_post.php",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// PHP driver: xmlservice_rest.php\n// Notes:\n// Assumes you have XMLSERVICE REST driver enabled\n// HTTP Server (Apache) and a IBM i version match\n// XMLSERVICE/XMLCGI.PGM (you compiled).\n//\n// You may use XMLSERVICE REST driver on IBM i (1-tier)\n// or from Linux/Windows to IBM i (2-tier).\n// For help:\n// http://www.youngiprofessionals.com/wiki/index.php/XMLService\n\n// *** XMLSERVICE call (REST + internal RPG DB2 driver) ***\n// Example: /www/zendsvr/conf/httpd.conf\n// ScriptAlias /cgi-bin/ /QSYS.LIB/XMLSERVICE.LIB/\n// <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n// AllowOverride None\n// order allow,deny\n// allow from all\n// SetHandler cgi-script\n// Options +ExecCGI\n// </Directory>\nfunction xmlservice($xml) {\nglobal $i5persistentconnect, $database, $user, $password, $ipc, $ctl, $procConn, $proclib, $procPlug, $procPlugR,\n $i5resturl, $i5restdb, $i5restuser, $i5restpass, $i5restsz;\n $was = array('\"');\n $now = array(\"'\");\n $xmlIn = str_replace($was,$now,$xml);\n $xmlOut = '';\n $postdata = http_build_query(\n array(\n 'db2' => $i5restdb,\n 'uid' => $i5restuser,\n 'pwd' => $i5restpass,\n 'ipc' => $ipc,\n 'ctl' => $ctl,\n 'xmlin' => $xmlIn,\n 'xmlout' => $i5restsz // size expected XML output\n )\n );\n $opts = array('http' =>\n array(\n 'method' => 'POST',\n 'header' => 'Content-type: application/x-www-form-urlencoded',\n 'content' => $postdata\n )\n );\n $context = stream_context_create($opts);\n $xmlOut = file_get_contents($i5resturl, false, $context);\n return driverJunkAway($xmlOut); // just in case driver odd\n // ... possible junk end record,\n // record</script>junk\n}\n?>\n"
},
{
"alpha_fraction": 0.5511810779571533,
"alphanum_fraction": 0.5826771855354309,
"avg_line_length": 13,
"blob_id": "6872d8f7cacba6cf8a05191d6627b5b7c4791d02",
"content_id": "9b6b89aae24d766274e53a01c892085aff7066f9",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 127,
"license_type": "permissive",
"max_line_length": 46,
"num_lines": 9,
"path": "/src/xmlver.c",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n\nconst char* version = \"XML Toolkit 2.0.2-dev\";\n\nint main()\n{\n printf(\"%s\\n\", version);\n return 0;\n}\n\n"
},
{
"alpha_fraction": 0.5284147262573242,
"alphanum_fraction": 0.5538384914398193,
"avg_line_length": 39.099998474121094,
"blob_id": "4da2191f57b4628cd4c032bff73cb6e4f4a4df68",
"content_id": "4129075037b94731ff34a989a959dfd11ac3df52",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2006,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 50,
"path": "/test/php/test_20420_ZZTIME_toolkit_pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: - Zend Toolkit time ISO\n--SKIPIF--\n<?php require_once('skipiftoolkit.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\n// assume /usr/local/zendsvr/share/ToolkitAPI\nrequire_once(\"ToolkitService.php\");\n// new toolkit\ntry { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\ncatch (Exception $e) { die($e->getMessage()); }\n$ToolkitServiceObj->setToolkitServiceParams(\narray('InternalKey'=>$ipc, // route to same XMLSERVICE job /tmp/myjob1\n'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zztime: check time parm\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zztime B export\n// D zztime PI T\n// D myTime T timfmt(*iso)\n// * vars\n// D retTime s T timfmt(*iso)\n// /free\n// retTime=myTime;\n// myTime=t'12.34.56';\n// return retTime;\n// /end-free\n// P E\n$param[] = $ToolkitServiceObj->AddParameterChar ('both', 8, 'ZZTIME', 'myTime', '09.45.29');\n$retrn[] = $ToolkitServiceObj->AddParameterChar ('both', 8, 'ZZTIME', 'retTime', '02.02.02');\n$result = $ToolkitServiceObj->PgmCall('ZZSRV', $testLib, $param, $retrn, array('func'=>'ZZTIME'));\n// var_dump($result);\n/* in/out param myDate */\n$myTime = \"XMLSERVICE i/o param myTime: \".$result[\"io_param\"][\"myTime\"];\necho \"$myTime\\n\";\n$expect = '12.34.56';\nif (strpos($myTime,$expect)<1) die(\"Fail missing $expect\\n\");\n/* return value retTime */\n$retTime = \"XMLSERVICE return retTime: \".$result[\"retvals\"][\"retTime\"];\necho \"$retTime\\n\";\n$expect = '09.45.29';\nif (strpos($retTime,$expect)<1) die(\"Fail missing $expect\\n\");\n/* all good */\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5497412085533142,
"alphanum_fraction": 0.5681425929069519,
"avg_line_length": 28.457626342773438,
"blob_id": "ab234bda474cd463a77ba09adad2a88f42e143b6",
"content_id": "cdc40a50cf01fe91c0ebbad9a3c24177c8b07b08",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1739,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 59,
"path": "/test/php/test_80141_ZZHANG_ibm_db2_io_timeout_qsqsrvr.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - hang qsysopr\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n\n$filename = \"/QOpenSys/usr/bin/system\";\nif (file_exists($filename)===false) die(\"Fail IBM i only missing ($filename)\\n\");\n\n$job1 = \"QSQSRVR\";\n\n// -------------------\n// call IBM i\n// -------------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n$ctl = \"*here *call(15/kill)\";\n\necho driverTime().\" $job1 calling hang program $ctl\\n\";\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG32K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare (1): \".db2_stmt_errormsg());\n$clobIn = getxmlhangme();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) echo(\"Bad execute (1): \".db2_stmt_errormsg());\necho driverTime().\" $job1 timeout back from calling hang program\\n\";\n\n// test result (we make it back)\necho \"Success\\n\";\n\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzhang: bad function hang up\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzhang B export\n// D zzhang PI\nfunction getxmlhangme() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZHANG'>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.6914893388748169,
"alphanum_fraction": 0.6914893388748169,
"avg_line_length": 17.600000381469727,
"blob_id": "8eef907438b64b044c00af7aaa6d489d78bd382c",
"content_id": "4ecd4e54b6909bc1f5bf68c3306944142e71f290",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 94,
"license_type": "permissive",
"max_line_length": 34,
"num_lines": 5,
"path": "/test/php/connection.inc",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n// see license here\nrequire_once('license.inc');\nrequire_once('authorization.php');\n?>\n\n"
},
{
"alpha_fraction": 0.5545976758003235,
"alphanum_fraction": 0.5792282223701477,
"avg_line_length": 29.822784423828125,
"blob_id": "adfa13a5c0c03992029780f07df44c443648e57b",
"content_id": "08262218c012536861fca301efe6901dccd31ece",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2436,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 79,
"path": "/test/php/test_17503_V6_ZZBIGBOY_ibm_db2_io_pgm_records.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - BIG BOY DATA\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n// $ctl .= \" *test\";\n$bigdata = 30000;\n$clobIn = getxml($bigdata);\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobIn);\n// var_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n\nif (strpos($clobOut,\"{$bigdata}.1\")<1) die(\"Missing {$bigdata}.1\");\necho substr($clobOut,0,400).\"\\n\";\necho \"...\\n\";\necho substr($clobOut,-300).\"\\n\";\n\n// good\necho \"Success ($lib/$name)\\n\";\n\nfunction getxml($recs) {\n$clob = \"<?xml version='1.0'?>\\n\";\n$clob .= \"<script>\\n\";\n$clob .= \"<pgm name='ZZBIGBOY' lib='xyzlibxmlservicexyz'>\\n\";\n$clob .= \" <parm io='both'>\\n\";\n$clob .= \" <data type='7s0' var='STEP1'>$recs</data>\\n\";\n$clob .= \" </parm>\\n\";\n$clob .= \" <parm io='both'>\\n\";\n$clob .= \" <ds dim='$recs' data='records'>\\n\";\n$clob .= \" <data type='7A' var='NAME1'>nada1</data>\\n\";\n$clob .= \" <data type='7s1' var='YEARWIN1'>1.0</data>\\n\";\n$clob .= \" </ds>\";\n$clob .= \" <records delimit=':'>\";\nfor ($i=0;$i<$recs;$i++) $clob .= \":nada1:1.0\";\n$clob .= \":\";\n$clob .= \"</records>\\n\";\n$clob .= \" </parm>\\n\";\n$clob .= \" <return>\\n\";\n$clob .= \" <data type='10i0'>0</data>\\n\";\n$clob .= \" </return>\\n\";\n$clob .= \"</pgm>\\n\";\n$clob .= \"</script>\";\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5032064914703369,
"alphanum_fraction": 0.526079535484314,
"avg_line_length": 34.70228958129883,
"blob_id": "e5460f040dbd274e3a322da02e266a3b88e37f98",
"content_id": "1dd2ddb0523c73caddd6a39218213eb65dec2dfa",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 4678,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 131,
"path": "/test/php/test_17501_V6_ZZARRAY_ibm_db2_io_records.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SRVPGM - DS records returned\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$max = 900;\n$clobIn = getxml($max);\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML returned\\n\");\n// -----------------\n// output pgm call\n// -----------------\n$myName1 = 'Ranger'; // expected name\n$myMax1 = $max; // expected max\n$myCount1= $max; // expected count\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Fail XML pgm missing\\n\");\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n$func = $pgm->attributes()->func;\n// pgm parms\n$parm = $pgm->xpath('parm');\nif (!$parm) die(\"Fail XML pgm parms missing ($lib/$name.$func)\\n\");\n$myName = (string)$parm[0]->data;\n$myMax = (string)$parm[1]->data;\n$myCount= (string)$parm[2]->data;\nif ($myName != $myName1) die(\"Fail myName ($myName not $myName1) ($lib/$name.$func)\\n\");\nif ($myMax != $myMax1) die(\"Fail (myMax $myMax not $myMax1) ($lib/$name.$func)\\n\");\nif ($myCount != $myCount1) die(\"Fail myCount ($myCount not $myCount1) ($lib/$name.$func)\\n\");\n// pgm data structure returned dim(999)\n$retn = $pgm->xpath('return');\nif (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n$i = 1;\n$dsret = (string) $retn[0]->records; // DS records returned\nfor ($tok = strtok($dsret, \":\"); $tok !== false; $tok = strtok(\":\")) {\n switch($i++) {\n case 1:\n if (strpos($tok,$i)<1) echo \"$tok \";\n else die(\"Missing $i in $tok\\n\");\n break;\n case 2:\n if (strpos($tok,\"AAA\")>-1) echo substr($tok,0,40).\"... \";\n else die(\"Missing AAA in $tok\\n\");\n break;\n case 3:\n echo \"$tok \";\n break;\n case 4:\n $i = 1;\n echo \"$tok\\n\";\n break;\n default:\n break;\n }\n}\n\n// good\necho \"Success ($lib/$name.$func)\\n\";\n\n// D ARRAYMAX c const(999)\n// D dcRec_t ds qualified based(Template)\n// D dcMyName 10A\n// D dcMyJob 4096A\n// D dcMyRank 10i 0\n// D dcMyPay 12p 2\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * zzarray: check return array aggregate\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// P zzarray B export\n// D zzarray PI likeds(dcRec_t) dim(ARRAYMAX)\n// D myName 10A\n// D myMax 10i 0\n// D myCount 10i 0\nfunction getxml($max) {\n$clob .= \"<?xml version='1.0'?>\\n\";\n$clob .= \"<script>\\n\";\n$clob .= \"<pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZARRAY'>\\n\";\n$clob .= \" <parm comment='search this name'>\\n\";\n$clob .= \" <data var='myName' type='10A'>Ranger</data>\\n\";\n$clob .= \" </parm>\\n\";\n$clob .= \" <parm comment='max allowed return'>\\n\";\n$clob .= \" <data var='myMax' type='10i0'>$max</data>\\n\";\n$clob .= \" </parm>\\n\";\n$clob .= \" <parm comment='actual count returned'>\\n\";\n$clob .= \" <data var='myCount' type='10i0' enddo='mycount'>0</data>\\n\";\n$clob .= \" </parm>\\n\";\n$clob .= \" <return>\\n\";\n$clob .= \" <ds var='dcRec_t' dim='$max' dou='mycount' data='records'>\\n\";\n$clob .= \" <data var='dcMyName' type='10A'>na</data>\\n\";\n$clob .= \" <data var='dcMyJob' type='4096A'>na</data>\\n\";\n$clob .= \" <data var='dcMyRank' type='10i0'>0</data>\\n\";\n$clob .= \" <data var='dcMyPay' type='12p2'>0.0</data>\\n\";\n$clob .= \" </ds>\\n\";\n$clob .= \"<records delimit=':'>\";\nfor ($i=0; $i<$max; $i++) $clob .= \":nada{$i}:nada{$i}:1:1.1\";\n$clob .= \":</records>\";\n$clob .= \" </return>\\n\";\n$clob .= \"</pgm>\\n\";\n$clob .= \"</script>\\n\";\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.6356164216995239,
"alphanum_fraction": 0.6630136966705322,
"avg_line_length": 21.6875,
"blob_id": "868420d8c80794dba3c1a5f6910f80c4488c0bb3",
"content_id": "bc3ec907d9d58898147df2168540e0708b126319",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 365,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 16,
"path": "/test/php/test_98000_setup_ignore_userid.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_Db2 ignore userid - setup special ibm_db2.ini\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\necho \"set up ibm_db2.i5_ignore_userid=1 ... \\n\";\nibm_db2_IgnoreOn(); // ibm_db2.i5_ignore_userid=1\n// good\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n"
},
{
"alpha_fraction": 0.4438885748386383,
"alphanum_fraction": 0.4572741985321045,
"avg_line_length": 37.30051803588867,
"blob_id": "5e1e6a3b85dd6a836572e4a5b31c40180d597445",
"content_id": "62989fbb96d3f9154132d4f12b2d4500a8bf0e26",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 7404,
"license_type": "permissive",
"max_line_length": 140,
"num_lines": 193,
"path": "/docs/varchar.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "\n\nXMLSERVICE/Toolkit Character Varying\n====================================\n\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nCharacter varying - what do?\n----------------------------\n\nCharacter varying is easy, just add character varying attribute. Varying character works for parameters, return and data structures (DS).\n\n1) New PHP Toolkit\n^^^^^^^^^^^^^^^^^^\n\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n | Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |<- cw-tk-php-x.x.x.zip (*)\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |\n | (optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored procedures |\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n ------------------------------------------\n\n\nCharacter varying (S 20A varying)\n---------------------------------\n\n::\n\n <?php\n require_once('connection.inc');\n // assume /usr/local/zendsvr/share/ToolkitAPI\n require_once(\"ToolkitService.php\");\n // new toolkit\n try { $ToolkitServiceObj = ToolkitService::getInstance($database, $user, $password); }\n catch (Exception $e) { die($e->getMessage()); }\n $ToolkitServiceObj->setToolkitServiceParams(\n array('InternalKey'=>$internalKey, // route to same XMLSERVICE job /tmp/myjob1\n 'subsystem'=>\"QGPL/QDFTJOBD\", // subsystem/jobd to start XMLSERVICE (if not running)\n 'plug'=>\"iPLUG32K\")); // max size data i/o (iPLUG4K,32K,65K.512K,1M,5M,10M,15M)\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzvary: check return varying\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzvary B export\n // D zzvary PI 20A varying\n // D myName 10A varying\n // * vars\n // D tmp S 20A varying\n // /free\n // tmp = 'my name is ';\n // tmp = tmp + myName;\n // return tmp;\n // /end-free\n // P E\n $param[] = $ToolkitServiceObj->AddParameterChar('both', 10, 'ZZVARY', 'myVary', 'Ranger', 'on'); // 6th parameter--'on'--is for varying\n $retrn[] = $ToolkitServiceObj->AddParameterChar('both', 20, 'ZZVARY', 'retVary', 'Mud', 'on'); // 6th parameter--'on'--is for varying\n $result = $ToolkitServiceObj->PgmCall('ZZSRV', $libxmlservice, $param, $retrn, array('func'=>'ZZVARY'));\n // var_dump($result);\n /* in/out param myDate */\n $myVary = \"XMLSERVICE i/o param myVary: \".$result[\"io_param\"][\"myVary\"];\n echo \"$myVary\\n\";\n $expect = 'Ranger';\n if (strpos($myVary,$expect)<1) die(\"Fail missing $expect\\n\");\n /* return value retVary */\n $retVary = \"XMLSERVICE return retVary: \".$result[\"retvals\"][\"retVary\"];\n echo \"$retVary\\n\";\n $expect = 'my name is Ranger';\n if (strpos($retVary,$expect)<1) die(\"Fail missing $expect\\n\");\n /* all good */\n echo \"Success\\n\";\n ?>\n\n\n\n2) XMLSERVICE Raw XML\n^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n | Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |<- Zend Server for IBM i or Linux or Windows (*)\n | (optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored procedures |\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n ------------------------------------------\n\n\nCharacter varying (S 20A varying)\n---------------------------------\n\n::\n\n <?php\n // see connection.inc param details ...\n require_once('connection.inc');\n // call IBM i\n if ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\n else $conn = db2_connect($database,$user,$password);\n if (!$conn) die(\"Bad connect: $database,$user\");\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUG32K(?,?,?,?)\");\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $clobIn = getxml();\n $clobOut = \"\";\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n // -----------------\n // output processing\n // -----------------\n // dump raw XML (easy test debug)\n var_dump($clobOut);\n // xml check via simplexml vs. expected results\n $xmlobj = simplexml_load_string($clobOut);\n if (!$xmlobj) die(\"Bad XML returned\");\n $allpgms = $xmlobj->xpath('/script/pgm');\n if (!$allpgms) die(\"Missing XML pgm info\");\n // -----------------\n // output pgm call\n // -----------------\n // only one program this XML script\n $pgm = $allpgms[0];\n $name = $pgm->attributes()->name;\n $lib = $pgm->attributes()->lib;\n $func = $pgm->attributes()->func;\n // pgm parms\n $parm = $pgm->xpath('parm');\n if ($parm) die(\"Unexpected XML pgm parms io='in' ($lib/$name.$func)\\n\");\n // pgm data returned\n $retn = $pgm->xpath('return');\n if (!$retn) die(\"Fail XML pgm return missing ($lib/$name.$func)\\n\");\n $var = $retn[0]->data->attributes()->var;\n $actual = (string)$retn[0]->data;\n $expect = 'my name is Ranger';\n if ($actual != $expect) die(\"$var ($actual not $expect) ($lib/$name.$func)\\n\");\n\n // good\n echo \"Success ($lib/$name.$func)\\n\";\n\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // * zzvary: check return varying\n // *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n // P zzvary B export\n // D zzvary PI 20A varying\n // D myName 10A varying\n // * vars\n // D tmp S 20A varying\n // /free\n // tmp = 'my name is ';\n // tmp = tmp + myName;\n // return tmp;\n // /end-free\n // P E\n function getxml() {\n $clob = <<<ENDPROC\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZSRV' lib='xyzlibxmlservicexyz' func='ZZVARY'>\n <parm comment='search this name' io='in'>\n <data var='myName' type='10A' varying='on'>Ranger</data>\n </parm>\n <return>\n <data var='myReturnName' type='20A' varying='on'>Mud</data>\n </return>\n </pgm>\n </script>\n ENDPROC;\n return test_lib_replace($clob);\n }\n ?>\n\n\n"
},
{
"alpha_fraction": 0.6422505378723145,
"alphanum_fraction": 0.6443736553192139,
"avg_line_length": 48.578948974609375,
"blob_id": "8339d096547535a25404bd506f850cf049986106",
"content_id": "1ee8c40ea4a1c17af48ca3d76c57298cb754c9f6",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1884,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 38,
"path": "/test/php/license.inc",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "<?php\n/*\n *****************************************************\n * Copyright (c) 2010, IBM Corporation\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms,\n * with or without modification, are permitted provided\n * that the following conditions are met:\n * - Redistributions of source code must retain\n * the above copyright notice, this list of conditions\n * and the following disclaimer.\n * - Redistributions in binary form must reproduce the\n * above copyright notice, this list of conditions\n * and the following disclaimer in the documentation\n * and/or other materials provided with the distribution.\n * - Neither the name of the IBM Corporation nor the names\n * of its contributors may be used to endorse or promote\n * products derived from this software without specific\n * prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND\n * CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES,\n * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR\n * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,\n * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE\n * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n *****************************************************\n*/\n?>\n"
},
{
"alpha_fraction": 0.551368236541748,
"alphanum_fraction": 0.5934149026870728,
"avg_line_length": 31.58935546875,
"blob_id": "4915fd31ca8111ee16bafaaf07c0a6f08bbe8627",
"content_id": "fcde9b27516cb393c78bd1c87ba5aa3ff8882f19",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 595767,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 18281,
"path": "/test/php/test_40502_nocall_timo.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: Timo massive data\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// -----------------\n// make the call\n// -----------------\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Fail connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG15M(?,?,?,?)\");\nif (!$stmt) die(\"Fail prepare: \".db2_stmt_errormsg());\n$ctl .= \" *test\";\n$clobIn = getxml();\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobIn);\nif (!$xmlobj) die(\"Fail XML input\\n\");\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Fail execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\n// var_dump($clobOut);\n// xml check via simplexml vs. expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Fail XML output\\n\");\n\nif (strpos($clobOut,\"fred flinstone\") < 1) die(\"missing fred flinstone\\n\");\necho \"Good we see fred flinstone\\n\";\n\n// good\necho \"Success\\n\";\n\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version=\"1.0\" encoding=\"ISO-8859-1\" ?>\n<script>\n<pgm name='RMETUH' lib='*LIBL'>\n<parm io='in' comment='PI_Caller'><data var='PI_Caller' type='18a' /></parm>\n<parm io='in' comment='PI_Action'><data var='PI_Action' type='1a' /></parm>\n<parm io='in' comment='PI_Pyks'><data var='PI_Pyks' type='2a' /></parm>\n<parm io='in' comment='PI_Vano'><data var='PI_Vano' type='15a' /></parm>\n<parm io='in' comment='PI_Asno'><data var='PI_Asno' type='15a' /></parm>\n<parm io='in' comment='PI_Kiekdi'><data var='PI_Kiekdi' type='3a' /></parm>\n<parm io='in' comment='PI_Myyja'><data var='PI_Myyja' type='15a' /></parm>\n<parm io='in' comment='PI_Rows'><data var='PI_Rows' type='5s0'>0</data></parm>\n<parm comment='PI_ProdTable'>\n<ds var='PI_ProdTable' array='on'>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds 
var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' 
type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' 
type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' 
type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' 
/>\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds 
var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' 
type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' 
type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' 
type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' 
/>\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds 
var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' 
type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' 
type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' 
type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' 
/>\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds 
var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' 
type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n<ds var='PI_ProdTable'>\n<data var='PI_Pr_Tuno' type='20a' />\n<data var='PI_Pr_Toim' type='15a' />\n<data var='PI_Pr_Mra' type='9s2'>0</data>\n<data var='PI_Pr_Myks' type='3a' />\n</ds>\n</ds>\n</parm>\n<parm io='out' comment='PO_OutCode'><data var='PO_OutCode' type='1a' /></parm>\n<parm io='out' comment='PO_ErrNbr'><data var='PO_ErrNbr' type='3s0' /></parm>\n<parm comment='PO_Err'><ds var='PO_Err' array='on'>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data 
var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' 
type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds 
var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' 
type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' 
type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds 
var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' 
type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' 
type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds 
var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' 
type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' 
type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds 
var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' 
type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' 
type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds 
var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' 
type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' 
type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds 
var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' 
type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' 
type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds 
var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n<ds var='PO_Err'>\n<data var='Err_Id' type='7a' />\n<data var='Err_Msg' type='132a' />\n<data var='Err_Field' type='30a' />\n<data var='Err_Sev' type='1s0'>0</data>\n<data var='Err_Row' type='5s0'>0</data>\n</ds>\n</ds>\n</parm>\n<parm io='out' comment='PO_Valk'><data var='PO_Valk' 
type='3a' /></parm>\n<parm io='out' comment='PO_Rows'><data var='PO_Rows' type='5s0' /></parm>\n<parm comment='PO_ProdTable'><ds var='PO_ProdTable' array='on'>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' 
type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' 
/>\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data 
var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n<ds var='PO_ProdTable'>\n<data var='PO_Pr_Error' type='1a' />\n<data var='PO_Pr_NoPrice' type='1a' />\n<data var='PO_Pr_Tuno' type='20a' />\n<data var='PO_Pr_Vetn' type='20a' />\n<data var='PO_Pr_Toim' type='15a' />\n<data var='PO_Pr_TuSta' type='3a' />\n<data var='PO_Pr_Vapaa' type='9s2'>0</data>\n<data var='PO_Pr_TuDesc' type='40a' />\n<data var='PO_Pr_Tvalm' type='3a' />\n<ds var='PO_PriceTable' array='on'>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data 
var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n<ds var='PO_PriceTable'>\n<data var='PO_Pr_Hity' type='3a' />\n<data var='PO_Pr_Myhi' type='13a' />\n<data var='PO_Pr_Vthi' type='13a' />\n<data var='PO_Pr_Ale1' type='7a' />\n<data var='PO_Pr_Ale2' type='7a' />\n<data var='PO_Pr_Ale3' type='7a' />\n<data var='PO_Pr_Alvp' type='7a' />\n<data var='PO_Pr_Hikt' type='3a' />\n<data var='PO_Pr_Mhalr' type='11a' />\n<data var='PO_Pr_Myks' type='3a' />\n<data var='PO_Pr_Vano' type='15a' />\n</ds>\n</ds>\n</ds>\n</ds>\n</parm>\n</pgm>\n<comment>fred flinstone</comment>\n</script>\nENDPROC;\nreturn $clob;\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.603960394859314,
"alphanum_fraction": 0.6245874762535095,
"avg_line_length": 33.599998474121094,
"blob_id": "16a1cde064148801b535e071f07f29c94f2247f6",
"content_id": "29a92fa3ac6a3a6ef3fbcbd1ed88e52e1e289955",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1212,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 35,
"path": "/test/php/test_00000_LEGAL_ibm_db2_io_license.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout LIC - license\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG4K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = \"<?xml version='1.0'?>\"; // XML in\n$clobOut = \"\"; // XML out\n$ctl .= \" *license\"; // show license\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// dump XML out\nvar_dump($clobOut);\n// XML check\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\\n\");\n// check raw XML no error?\nif (strpos($clobOut,\"Copyright\")<1) die(\"Fail Copyright ($ctl)\\n\");\necho \"Success\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5003576278686523,
"alphanum_fraction": 0.5339770913124084,
"avg_line_length": 31.5,
"blob_id": "25a22c0ae515aa30ae7cbaf212c123e38c903a93",
"content_id": "d46949dfeed2006aef60d8faf28609559ba0dd90",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 2796,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 86,
"path": "/test/php/test_10131_ZZVLAD_ibm_db2_io-pgm.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout PGM - call pgm complex ds\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\n// call IBM i\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n$stmt = db2_prepare($conn, \"call $procLib.iPLUG512K(?,?,?,?)\");\nif (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n$clobIn = getxml();\n$clobOut = \"\";\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n$ret=db2_execute($stmt);\nif (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n// -----------------\n// output processing\n// -----------------\n// dump raw XML (easy test debug)\nvar_dump($clobOut);\n// xml check via simplexml vs. 
expected results\n$xmlobj = simplexml_load_string($clobOut);\nif (!$xmlobj) die(\"Bad XML returned\");\n$allpgms = $xmlobj->xpath('/script/pgm');\nif (!$allpgms) die(\"Missing XML pgm info\");\n// -----------------\n// output pgm call\n// -----------------\n// only one program this XML script\n$pgm = $allpgms[0];\n$name = $pgm->attributes()->name;\n$lib = $pgm->attributes()->lib;\n// good\necho \"Success ($lib/$name)\\n\";\n\n// D job_t ds qualified based(Template)\n// D dsMyHire D datfmt(*iso)\n// D dsMyLeav D datfmt(*iso)\n// D dsMyJob 64A varying\n// D dsMyPay 12p 2\n//\n// D MyDsArray ds likeds(job_t) dim(3)\n//\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// * main(): Control flow\n// *+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n// C *Entry PLIST\n// C PARM MyDsArray\nfunction getxml() {\n$clob = <<<ENDPROC\n<?xml version='1.0'?>\n<script>\n<pgm name='ZZVLAD' lib='xyzlibxmlservicexyz'>\n <parm io='both'>\n <ds dim='2' var='job_t'>\n <data type='10a' var='begin'>2011-05-11</data>\n <data type='10a' var='end'>2011-07-12</data>\n <data type='64a' varying='on' var='job'>Frog wrangler</data>\n <data type='12p2' var='salary'>7.25</data>\n </ds>\n <ds var='job_t'>\n <data type='10a' var='begin'>2010-01-11</data>\n <data type='10a' var='end'>2010-07-12</data>\n <data type='64a' varying='on' var='job'>Toad wrangler</data>\n <data type='12p2' var='salary'>4.29</data>\n </ds>\n</parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n</pgm>\n</script>\nENDPROC;\nreturn test_lib_replace($clob);\n}\n?>\n--EXPECTF--\n%s\nSuccess (%s)\n\n"
},
{
"alpha_fraction": 0.5272727012634277,
"alphanum_fraction": 0.5438960790634155,
"avg_line_length": 29.07421875,
"blob_id": "281272971e56212954093dbcb98c4600f444ed01",
"content_id": "1353543a2dbbca93b20443f3b127f0fdaea2be9b",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 7700,
"license_type": "permissive",
"max_line_length": 190,
"num_lines": 256,
"path": "/test/php/test_90000_ibm_db2_io_sqlinit.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout SQL- prepare tests\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\n// see connection.inc param details ...\nrequire_once('connection.inc');\nrequire_once('db2sql.inc');\n\n// -----------------\n// schema\n// -----------------\n$sql_crt_schema = \"<query error='off'>$db2_crt_schema</query>\";\n\n// -----------------\n// table animal\n// -----------------\n$sql_drp_animal = \"<query error='off'>$db2_drp_animal</query>\";\n$sql_crt_animal = \"<query>$db2_crt_animal</query>\";\n$sql_prep_animal = \"<prepare>$db2_prep_animal</prepare>\";\n$sql_add_animal = \"<execute><parm io='in'>zzid</parm><parm io='in'>zzbreed</parm><parm io='in'>zzname</parm><parm io='in'>zzweight</parm><parm io='in'>zzheight</parm></execute>\";\n\n// -----------------\n// table animal1 (clob/blob)\n// -----------------\n$sql_drp_animal1 = \"<query error='off'>$db2_drp_animal1</query>\";\n$sql_crt_animal1 = \"<query>$db2_crt_animal1</query>\";\n$sql_prep_animal1 = \"<prepare>$db2_prep_animal1</prepare>\";\n$sql_add_animal1 = \"<execute><parm io='in'>zzid</parm><parm io='in'>zzessay</parm><parm io='in'>zzpicture</parm></execute>\";\n\n// ------------\n// stored procedure 1\n// ------------\n$sql_drp_sp1 = \"<query error='off'>$db2_drp_sp1</query>\";\n$sql_crt_sp1 = \"<query>$db2_crt_sp1</query>\";\n\n// ------------\n// stored procedure 2\n// ------------\n$sql_drp_sp2 = \"<query error='off'>$db2_drp_sp2</query>\";\n$sql_crt_sp2 = \"<query>$db2_crt_sp2</query>\";\n\n// -----------------\n// table animal2 (identity notes)\n// -----------------\n$sql_drp_animal2 = \"<query error='off'>$db2_drp_animal2</query>\";\n$sql_crt_animal2 = \"<query>$db2_crt_animal2</query>\";\n$sql_idrp_animal2 = \"<query>$db2_idrp_animal2</query>\";\n$sql_icrt_animal2 = \"<query>$db2_icrt_animal2</query>\";\n$sql_fkdrp_animal2 = \"<query>$db2_fkdrp_animal2</query>\";\n$sql_fkcrt_animal2 = \"<query>$db2_fkcrt_animal2</query>\";\n\n// -----------------\n// 
table qtemp/animal\n// -----------------\n$sql_drp_animalq = \"<query error='off'>$db2_drp_animalq</query>\";\n$sql_crt_animalq = \"<query>$db2_crt_animalq</query>\";\n$sql_prep_animalq = \"<prepare>$db2_prep_animalq</prepare>\";\n$sql_add_animalq = \"<execute><parm io='in'>zzid</parm><parm io='in'>zzbreed</parm><parm io='in'>zzname</parm><parm io='in'>zzweight</parm><parm io='in'>zzheight</parm></execute>\";\n\n// -----------------\n// free everything\n// -----------------\n$sql_last = \"<free/>\";\n\n// -----------------\n// common\n// -----------------\n$sql_beg = \"<?xml version='1.0'?>\\n<script>\\n<sql>\\n\";\n$sql_end = \"\\n</sql>\\n</script>\";\n\n// ------------\n// call IBM i\n// ------------\nif ($i5persistentconnect) $conn = db2_pconnect($database,$user,$password);\nelse $conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n$sql = \"call $procLib.iPLUG512K(?,?,?,?)\";\n$stmt = db2_prepare($conn, $sql);\ncheck_ret(\"prep $sql\",$stmt);\n\n$ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN, DB2_CHAR);\ncheck_ret(\"bind1\",$ret);\n$ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN, DB2_CHAR);\ncheck_ret(\"bind2\",$ret);\n$ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN, DB2_CHAR);\ncheck_ret(\"bind3\",$ret);\n$ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_INOUT, DB2_CHAR);\ncheck_ret(\"bind4\",$ret);\n\nforeach ($actions as $action=>$sql) {\n echo \"*** action =$action \\n\";\n switch ($action) {\n case \"schema-create\":\n $clobIn = $sql_beg;\n $clobIn .= \"$sql_crt_schema\\n\";\n $clobIn .= $sql_end;\n break;\n case \"cmd_libl\":\n case \"cmd_end\":\n $clobIn = \"<?xml version='1.0'?>\\n<script>\\n<cmd>$chglibl</cmd>\\n<cmd>CHGAUT OBJ('/qsys.lib/{$schematest}.lib') USER(*PUBLIC) DTAAUT(*RWX) OBJAUT(*ALL) SUBTREE(*ALL)</cmd>\\n</script>\";\n break;\n case \"table-animal-drop\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_drp_animal;\n $clobIn .= $sql_end;\n break;\n case 
\"table-animal-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_crt_animal;\n $clobIn .= $sql_end;\n break;\n case \"table-animal-insert\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_prep_animal;\n\t foreach ($animals as $r) {\n $was = array(\"zzid\",\"zzbreed\",\"zzname\",\"zzweight\",\"zzheight\");\n $now = array($r[0],$r[1],$r[2],$r[3],$r[4]);\n $out = str_replace($was,$now,$sql_add_animal);\n $clobIn .= $out;\n\t }\n $clobIn .= $sql_end;\n break;\n case \"table-animal1-drop\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_drp_animal1;\n $clobIn .= $sql_end;\n break;\n case \"table-animal1-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_crt_animal1;\n $clobIn .= $sql_end;\n break;\n case \"table-animal1-insert\":\n $clobIn = $sql_beg;\n $was = array(\"zzuser\");\n $now = array(\"$user\");\n $clobIn .= $sql_prep_animal1;\n\t foreach ($animals as $r) {\n $was = array(\"zzid\",\"zzessay\",\"zzpicture\");\n $now = array($r[0],$essay,$hexpng);\n $out = str_replace($was,$now,$sql_add_animal1);\n $clobIn .= $out;\n\t }\n $clobIn .= $sql_end;\n break;\n case \"proc-sp1-drop\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_drp_sp1;\n $clobIn .= $sql_end;\n break;\n case \"proc-sp1-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_crt_sp1;\n $clobIn .= $sql_end;\n break;\n case \"proc-sp2-drop\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_drp_sp2;\n $clobIn .= $sql_end;\n break;\n case \"proc-sp2-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_crt_sp2;\n $clobIn .= $sql_end;\n break;\n case \"table-animal2-drop\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_drp_animal2;\n $clobIn .= $sql_end;\n break;\n case \"table-animal2-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_crt_animal2;\n $clobIn .= $sql_end;\n break;\n case \"index-index2-drop\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_idrp_animal2;\n $clobIn .= $sql_end;\n break;\n case \"index-index2-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_icrt_animal2;\n $clobIn .= $sql_end;\n break;\n case \"table-fkey-drop\":\n $clobIn = 
$sql_beg;\n $clobIn .= $sql_fkdrp_animal2;\n $clobIn .= $sql_end;\n break;\n case \"table-fkey-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_fkcrt_animal2;\n $clobIn .= $sql_end;\n break;\n case \"table-animalq-drop\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_drp_animalq;\n $clobIn .= $sql_end;\n break;\n case \"table-animalq-create\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_crt_animalq;\n $clobIn .= $sql_end;\n break;\n case \"table-animalq-insert\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_prep_animalq;\n foreach ($animals as $r) {\n $was = array(\"zzid\",\"zzbreed\",\"zzname\",\"zzweight\",\"zzheight\");\n $now = array($r[0],$r[1],$r[2],$r[3],$r[4]);\n $out = str_replace($was,$now,$sql_add_animalq);\n $clobIn .= $out;\n }\n $clobIn .= $sql_end;\n break;\n case \"kill-all-last\":\n $clobIn = $sql_beg;\n $clobIn .= $sql_last;\n $clobIn .= $sql_end;\n break;\n default:\n die(\"bad action $action\");\n }\n $clobOut = \"\";\n\n $ret=db2_execute($stmt);\n check_ret(\"execute $sql\",$ret);\n\n // dump raw XML (easy test debug)\n // var_dump($clobIn);\n var_dump($clobOut);\n // if (strlen($clobOut)==0) die(\"Bad return\\n\");\n}\n\n$ret=db2_free_stmt($stmt);\ncheck_ret(\"free\",$ret);\n\nif ($i5persistentconnect) db2_pclose($conn);\nelse db2_close($conn);\n\nforeach ($actions as $action=>$sql) {\n echo \"*** action =$action\\n\";\n}\n// good\necho \"Success\\n\";\nfunction check_ret($title, $ret) {\n if (!$ret) die(\"*** Bad statement $title \".db2_stmt_errormsg().\"\\n\");\n}\n?>\n--EXPECTF--\n%s\nSuccess\n\n"
},
{
"alpha_fraction": 0.5624740123748779,
"alphanum_fraction": 0.591600775718689,
"avg_line_length": 42.46770095825195,
"blob_id": "af65841779b8d80202fa6f5281ed75687b4fbe1f",
"content_id": "3115bf031d8e03c576804b06486655fac5043c7f",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 33662,
"license_type": "permissive",
"max_line_length": 579,
"num_lines": 774,
"path": "/docs/generic-interface.rst",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "XMLSERVICE Interfaces\n=====================\n`Goto Main Page`_\n\n.. _Goto Main Page: index.html\n\nWho is this page for?\n---------------------\nInstructions designed for IBM i developer exploring XMLSERVICE ...\n\n\nXMLSERVICE introduction\n-----------------------\n\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n (1)->| Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |<-(4) cw-tk-php-x.x.x.zip\n | HTML/XML/REST | b) New PHP Toolkit |<-(3) cw-tk-php-x.x.x.zip\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |<-(2) Zend Server for IBM i or Linux or Windows\n | (optional) | (ibm_db2, odbc) |\n | |--------------------|\n | 2) XMLSERVICE DB2 stored procedures |<-(1)\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |<-(1) xmlservice-rpg-x.x.x.zip\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n -----------------------------------------\n\n(1) XMLSERVICE is designed to allow you flexibility to develop from your laptop and run on your IBM i production with no changes. (RPG download)\n(2) PHP DB2 stored procedure interface can be called form your laptop and run on your IBM i production with no changes. (PHP Zend Server)\n(3) Zend New PHP Toolkit is one way to work with XMLSERVICE that ships with Zend Server for IBM i. (PHP download)\n(4) A PHP old toolkit compatibility layer is also available for old PHP Zend toolkit users. 
(PHP download)\n\n**(3-4) You may update IBM i PHP toolkit via Zend PTF or see main page for update download.**\n\nWhy do you call it XMLSERVICE??\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* same name as RPG source installation library (XMLSERVICE library)\n* name of the job(s)/program(s) seen in wrkactjob (see below)\n* services XML input scripts and returns XML output data (a rose by any other name)\n\n::\n\n wrkactjob\n\n Opt Subsystem/Job User Type CPU % Function Status\n QSYSWRK QSYS SBS .0 DEQW\n XTOOLKIT QTMHHTP1 BCH .0 PGM-XMLSERVICE SEMW\n\nNote: XTOOLKIT naming 1.5+\n\n\nWhy is XMLSERVICE Open Source?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* IBM has many nice folks that believe in Open Source for IBM i, sharing common utilities in IBM i community\n* Business friendly BSD license makes it easy to include in any Open Source project (Zend PHP, your applications, etc.)\n* XML documents over transport neutral design lends itself to \"easy\" inclusion is your proprietary/custom applications (XML is just a big string, you can use it anywhere)\n* Pure RPG source because many IBM i folks know the language, but not the tricks like using PASE for functions\n\nXMLSERVICE highlights\n^^^^^^^^^^^^^^^^^^^^^\n\n* 100% RPG code BSD licensed to be completely free to use (XMLSERVICE library download)\n* allows develop on PC and deploy on either PC (2-tier) or IBM i (1-tier)\n* all communication is XML documents\n* called RPG programs can maintain state, commit boundaries, etc\n* provided IPC mechanism that synchronizes web clients multiple sources\n* traditional DB2 connection security 1-2 tier (when using DB2 stored procedures provided)\n* conversions of XML string text to/from actual program required parameter geometery is handled (packed decimal 12p2, zoned decimal 7s2, integer 10i0/5i0, float 4f2/8f4, hex binary 400b), and character data converted automatically ASCII/EBCDIC (1024A/10a varying='on')\n* arrays of data structures are allowed both as paramters and return, including 
nested data structurers and/or arrays of data structures (try it to believe it)\n* Many limits of other toolkit PGM/SRVPGM callers are not an issue with XMLSERVICE (up to 15MB to date), therefore \"old school\" RPG using occurs can often pass back all data records in one call.\n\nWe have been experimenting with the following configurations.\n::\n\n 1-tier (running on IBM i):\n 1) HTML REST : |PC|browser<----------------->|IBM i|<-ApacheCGI--|\n 2) PHP ibm_db2 : |PC|browser<----------------->|IBM i|<-FastCGI/PHP|\n 3) PHP pdo_ibm : |PC|browser<----------------->|IBM i|<-FastCGI/PHP|\n 4) PHP odbc : |PC|browser<----------------->|IBM i|<-FastCGI/PHP|\n 5) Zend Toolkit: (using one of above transport interfaces) | ...xml ...\n ->XMLSERVICE\n | ...call...\n 2-tier (Linux/Windows to IBM i): | >PGM\n 1) PHP REST : |PC|Apache/PHP<-REST--------->|IBM i|<-ApacheCGI--| >SRVPGM\n 2) PHP ibm_db2 : |PC|Apache/PHP<-DB2 Connect-->|IBM i|<-QRWTSRVR---| >CMD\n 3) PHP pdo_ibm : |PC|Apache/PHP<-DB2 Connect-->|IBM i|<-QRWTSRVR---| >PASE shell\n 4) PHP odbc : |PC|Apache/PHP<-CA ODBC------>|IBM i|<-QRWTSRVR---|\n 5) Zend Toolkit: (using one of above transport interfaces)\n\n**Note:**\n\n- PHP was used to test, but any DB2/REST capable scripting language will work\n- REST interface used for test does not require PHP on IBM i (not fast)\n- ibm_db2 has been the best interface for production use (fast)\n- 2-tier does not require PHP on IBM i\n- odbc and pdo_ibm had trouble with large contiguous data (PDF)\n\n\nXMLSERVICE syntax (XML in, XML out)\n-----------------------------------\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n | Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |\n | (optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored 
procedures |\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |<-(1)\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n -----------------------------------------\n\n(1) XMLSERVICE parses and executes XML input for IBM i commands (CMD), programs (PGM/SRVPGM), and PASE utilites (ls, system, etc.), data results are then returned in XML output. XMLSERVICE processes XML scripts in \"order of appearance\". Every effort is made to make one pass through the XML to keep performance, therefore XML input \"order of appearance\" is important and left up to the XML script writer (see below).\n\n::\n\n Order is important:\n * <parm> followed by <return>\n * <parm> followed by <overlay> (1.2.1 beta)\n * <return> followed by <overlay> (1.2.1 beta)\n * <ds> followed by <data>\n\n Example:\n <script>\n <cmd>do something before PGM1</cmd> <--- action 1 call CMD1\n <pgm name='PGM1'> <--- action 2 call PGM1\n <parm comment='parm 1'><ds><data>stuff</data></ds></parm>\n <parm comment='parm 2'><ds><data>stuff</data></ds></parm>\n <return comment='complex'><ds><data>stuff</data></ds></return>\n </pgm>\n <cmd>do something after PGM1</cmd> <--- action 3 call CMD2\n <pgm name='PGM2'> <--- action 4 call PGM2\n <parm comment='parm 1'><ds><data>stuff</data></ds></parm>\n <overlay comment='optional over parm 1'><ds><data>stuff</data></ds></parm>\n <parm comment='parm 2'><ds><data>stuff</data></ds></parm>\n <return comment='complex'><ds><data>stuff</data></ds></return>\n <overlay comment='optional over return'><ds><data>stuff</data></ds></overlay>\n </pgm>\n <sh>do something PASE after PGM2</sh> <--- action 5 call PASE utility\n </script>\n\n\n(2) XMLSERVICE setting parameter data\n\nSetting parameter data is a simple task that looks much the same as any language, using XML in this case. 
As you can see below you must send the entire XML document/script each time, because XMLSERVICE will not remember \"everything\" about each program (<pgm></pgm>), command (<cmd></cmd>) or PASE utility called (<sh></sh>).\n\n::\n\n 1st XMLSERVICE call ...\n <script>\n <pgm name='PGM1'>\n <parm comment='parm 1'><ds><data>stuff</data></ds></parm>\n <parm comment='parm 2'><ds><data>stuff</data></ds></parm>\n <return comment='complex'><ds><data>stuff</data></ds></return>\n </pgm>\n </script>\n\n 2nd XMLSERVICE call ...\n <script>\n <pgm name='PGM1'>\n <parm comment='parm 1'><ds><data>different stuff</data></ds></parm>\n <parm comment='parm 2'><ds><data>different stuff</data></ds></parm>\n <return comment='complex'><ds><data>different stuff</data></ds></return>\n </pgm>\n </script>\n\n\nWhy send a full XML document description each time you call program/script?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBecause XMLSERVICE is simply an script interpreter like BASIC or PHP, not a true compiler like RPG where \"binary program\" lasts forever, therefore you cannot just \"pass XML data\" without also describing what the data means (type='12p2').\n\nIf you are writing a language wrapper around this support (like Zend PHPToolkit), you can easily \"hide\" the XML document specifics of passing a full XML document (the whole idea). 
However, if you are using the RAW XML interface (this document), you will have to bite your tongue and send the whole document.\n\nNote: However we are working short-cuts like overlay for some XML parameter \"customization\", but we intend stopping short of complete \"programming by XML\" because we do not want to reinvent PHP (or the like).\n\nFull example (2 complete calls):\n\n1) XMLSERVICE call number 1 complete call with data XMLSERVICE will work:\n::\n\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZCALLII'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>a</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>x</data>\n <data type='1A' var='INDS1.DSCHARB'>y</data>\n <data type='7p4' var='INDS1.DSDEC1'>66.6666</data>\n <data type='12p2' var='INDS1.DSDEC2'>77777.77</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\n </script>\n\n\n2) XMLSERVICE call number 2 complete call with \"different\" data XMLSERVICE will work:\n::\n\n <?xml version='1.0'?>\n <script>\n <pgm name='ZZCALLII'>\n <parm io='both'>\n <data type='1A' var='INCHARA'>Z</data>\n </parm>\n <parm io='both'>\n <ds>\n <data type='1A' var='INDS1.DSCHARA'>F</data>\n <data type='1A' var='INDS1.DSCHARB'>T</data>\n <data type='7p4' var='INDS1.DSDEC1'>1.1</data>\n <data type='12p2' var='INDS1.DSDEC2'>4.4</data>\n </ds>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\n </script>\n\n\nXMLSERVICE REST interface (HTML/XML, no PHP)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n -----------------------------------------\n | Browser |\n |---------------------------------------|\n (1)->| Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |\n | (optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored procedures |\n | (iPLUG4K, iPLUG32K, 
..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n -----------------------------------------\n\n (1) HTTP REST xmlcgi.pgm (tests, demos):\n http://myibmi/cgi-bin/xmlcgi.pgm&db2=x@uid=x@pwd=x@ipc=x@ctl=x@xmlin=x@xmlout=x\n\n A example interface is included that does not require scripting language on the IBM i (no PHP). \n The sample program xmlservice/xmlcgi is an RPG program that supports HTTP GET/POST interface. \n This is an easy interface to call XMLSRVICE during testing without the bother of writing a script.\n\nIf you open up the source code of xmlcgi.pgm you will see that is calls the XMLSERVICE stored procedure interface in the next section. This allows for common access methodology including REST clients (like a browser), because no matter if calling from any application/device choice all security design remains within the consistent DB2 database connection(s). Also, because XMLSERVICE has a built in \"one at a time\" lock mechanism (semaphore) all/any clients can access the same XMLSERVICE job at the same time and you the script developer don't have to worry about coordination.\n\n\nREST parameters (xmlcgi.pgm)\n\n * db2 - what database (\\*LOCAL tested)\n * uid - user profile (\\*NONE - no uid version 1.5+)\n * pwd - profile password (\\*NONE - no password version 1.5+)\n * ipc - IPC key name/security route to XMLSERVICE job (/tmp/fred01, etc.)\n * ctl - CTL admin control XMLSERVICE job (see control below)\n * xmlin - XML input document (request)\n * xmlout - expected size of XML output document (response size in bytes)\n\n::\n\n Apache rest configuration (CGI interface)\n Example: Add the following to /www/zendsvr/conf/httpd.conf\n ScriptAlias /cgi-bin/ /QSYS.LIB/XMLSERVICE.LIB/\n <Directory /QSYS.LIB/XMLSERVICE.LIB/>\n AllowOverride None\n order allow,deny\n allow from all\n SetHandler cgi-script\n Options +ExecCGI\n </Directory>\n\n Optional for server side includes (very 
handy for pure html/xml sites):\n # server side includes\n # AddOutputFilter INCLUDES .htm\n # AddOutputFilter INCLUDES .html\n Options +ExecCGI +Includes\n AddType text/html .shtml\n AddOutputFilter INCLUDES .shtml\n AddOutputFilter INCLUDES .html\n\n Example (html interface xmlcgi.pgm):\n <html>\n <body>\n <!-- call XMLSERVICE/ZZCALL(a:b:11.1111:222.22:record) (post or get) -->\n <form name=\"input\" action=\"/cgi-bin/xmlcgi.pgm\" method=\"post\">\n <input type=\"hidden\" name=\"db2\" value=\"*LOCAL\">\n <input type=\"hidden\" name=\"uid\" value=\"*NONE\">\n <input type=\"hidden\" name=\"pwd\" value=\"*NONE\">\n <input type=\"hidden\" name=\"ipc\" value=\"/tmp/rangerhtml\">\n <input type=\"hidden\" name=\"ctl\" value=\"*none\">\n <input type=\"hidden\" name=\"xmlin\"\n value=\"<?xml version='1.0'?>\n <pgm name='ZZCALL' lib='XMLSERVICE'>\n <parm><data type='1A'>a</data></parm>\n <parm><data type='1A'>b</data></parm>\n <parm><data type='7p4'>11.1111</data></parm>\n <parm><data type='12p2'>222.22</data></parm>\n <parm>\n <ds>\n <data type='1A'>x</data>\n <data type='1A'>y</data>\n <data type='7p4'>66.6666</data>\n <data type='12p2'>77777.77</data>\n </ds>\n </parm>\n <return><data type='10i0'>0</data></return>\n </pgm>\">\n <input type=\"hidden\" name=\"xmlout\" value=\"32768\">\n <input type=\"submit\" value=\"Submit\" />\n </form>\n </body>\n </html>\n\n Output:\n <?xml version='1.0'?>\n <pgm name='ZZCALL' lib='XMLSERVICE'>\n <parm><data>C</data></parm>\n <parm><data>D</data></parm>\n <parm><data>321.1234</data></parm>\n <parm><data>1234567890.12</data></parm>\n <parm>\n <ds>\n <data>E</data>\n <data>F</data>\n <data>333.3330</data>\n <data>4444444444.44</data>\n </ds>\n </parm>\n <return><data>0</data></return>\n </pgm>\n\nNote: uid / pwd \\*NONE available XMLSERVICE version 1.5+.\n\n\n\nXMLSERVICE stored procedure interface (production)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n -----------------------------------------\n | Browser |\n 
|---------------------------------------|\n | Download RPG (1) | Download PHP (2) |\n | 1) XMLSERVICE | a) PHP CW Toolkit |\n | HTML/XML/REST | b) New PHP Toolkit |\n | no PHP |--------------------|\n | (xmlcgi.pgm) | c) PHP “Raw XML” |\n | (optional) | (ibm_db2, odbc) |\n | -----------------------------------|\n | 2) XMLSERVICE DB2 stored procedures |<-(2)\n | (iPLUG4K, iPLUG32K, ..., iPLUG15M) |\n | 3) XMLSERVICE (xmlservice.pgm) |\n | call most anything on IBM i ... |\n | (PGM, SRVPGM, PASE, DB2, etc.) |\n ------------------------------------------\n\n (2) DB2 stored procedures (production):\n call iPLUG[R]xxx(ipc, ctl, xmlin, [xmlout])\n\n This DB2 stored procedure interface was designed for production level machines Zend Server on i (1 tier) \n and remote machines over DB2 Connect (2 tier).\n\nWhen you first start using XMLSERVICE generic stored procedures (see below), you may wonder why not just write custom stored procedures for each program i wish to call vs. using XMLSERVICE \"dynamic stored procedures\"?\n\n* XML documents are very easy to send around over any transport, therefore you can upgrade your application capabilities instantley without all the custom stored procedure programming, In a practical sense this means you can use different scripting languages (php, java, perl, ruby, etc.), with various database connections (db2 connect, odbc, etc.) and/or simply HTTP (seen previously)\n* XMLSERVICE supports full state programming via IPC routing (/tmp/fred1, /tmp/sally2, etc.), thereby traditional RPG programming techniques with many open files, transactions, etc., can be employed in called programs/modules lowering the learning curve (and rewrites not needed)\n* XMLSERVICE XML documents are very easy to wrapper with any custom scripting interface, such as the new PHP toolkit interface Zend is working on.\n\n::\n\n DB2 in/out parameters (connections supporting in/out parameters):\n ... 
sizes: 4K, 32K, 65K, 512K, 1M, 5M, 10M up to 15M (see crtsql in download) ...\n iPLUG4K(IN IPC CHAR(1024), IN CTL CHAR(1024),IN CI CHAR(4064), OUT C0 CHAR(4064))\n iPLUG32K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(32000), OUT CO CLOB(32000))\n iPLUG65K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(65K), OUT CO CLOB(65K))\n iPLUG512K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(512K), OUT CO CLOB(512K))\n iPLUG1M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(1M), OUT CO CLOB(1M))\n iPLUG5M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(5M), OUT CO CLOB(5M))\n iPLUG10M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(10M), OUT CO CLOB(10M))\n iPLUG15M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(15M), OUT CO CLOB(15M))\n\n $stmt = db2_prepare($conn, \"call XMLSERVICE.iPLUG4K(?,?,?,?)\");\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n\n DB2 Result set returned (connections not supporting in/out parameters):\n ... 
sizes: 4K, 32K, 65K, 512K, 1M, 5M, 10M up to 15M (see crtsql in download) ...\n CREATE PROCEDURE XMLSERVICE.iPLUGR4K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CHAR(4096))\n iPLUGR32K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CHAR(32000))\n iPLUGR65K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(65K))\n iPLUGR512K(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(512K))\n iPLUGR1M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(1M))\n iPLUGR5M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(5M))\n iPLUGR10M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(10M))\n iPLUGR15M(IN IPC CHAR(1024), IN CTL CHAR(1024), IN CI CLOB(15M))\n\n $stmt = db2_prepare($conn, \"call $libxmlservice.iPLUGR4K(?,?,?)\");\n $ret=db2_execute($stmt,array($ipc,$ctl,$clobIn));\n while ($row = db2_fetch_array($stmt)){\n $clobOut .= trim($row[0]);\n }\n\n\n**Why different sizes??**\n\nThe idea is to simply pick the \"best fit\" max size of XML documents to/from IBM i per call,\nso you don't transport a whole bunch of blanks between your scripting/IBM i applications.\n\n\n\nStored procedure parameters (details follow):\n\n* IN IPC CHAR(1024) - IPC key name/security\n* IN CTL CHAR(1024) - CTL admin control XMLSERVICE job\n* IN CI CLOB(15M) - XML input document (request)\n* OUT CO CLOB(15M) - XML output document (response)\n\n*Note: iPLUGRxxx procedures return a result set that is collected by fetch.*\n\n**1) IN IPC CHAR(1024) - IPC key (InternalKey)**\n\nThe IPC/InternalKey is used to route XML input/output documents to the correct XMLSERVICE job. The IPC is a fully qualified path, commonly in the /tmp directory, where each unique path is the target of a given XMLSERVICE job being used by a given user/password profile (normal profile authority applies).\n\n#. InternalKey must be a fully qualified IFS path; if it does not exist, XMLSERVICE will attempt to create it on first use.\n#. InternalKey object authorization uses the standard IBM i profile mechanism.\n#. 
XMLSERVICE provides a sync IPC lock (/tmp/$anything), so only one user/request is active at a time (like 5250).\n\nExample call PGM ZZCALL, run in Zend subsystem, \*debug wait before call (qsysopr msg)::\n\n $ipc='/tmp/ranger001';\n $ctl = '*sbmjob(ZENDSVR/ZSVR_JOBD) *debug';\n $clobIn =\n \"<?xml version='1.0'?>\n <script>\n <cmd>ADDLIBLE LIB(XMLSERVICE) POSITION(*FIRST)</cmd>\n <pgm name='ZZCALL'>\n <parm>\n <data type='1A'>a</data>\n </parm>\n <return>\n <data type='10i0'>0</data>\n </return>\n </pgm>\n </script>\";\n $clobOut = \"\";\n $stmt = db2_prepare($conn, \"call XMLSERVICE.iPLUG4K(?,?,?,?)\");\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 3, \"clobIn\", DB2_PARAM_IN);\n $ret=db2_bind_param($stmt, 4, \"clobOut\", DB2_PARAM_OUT);\n $ret=db2_execute($stmt);\n\n\n**2) IN CTL CHAR(1024) - CTL admin**\n\nThese control options are used to control the XMLSERVICE jobs. \n\n::\n\n // *immed - end server immed destroy ipc\n // *sbmjob[(lib/jobd)] - sbmjob(PLUGJOBLIB/PLUGJOBD)\n // *nostart - disallow spawn\n // *here - run stateless in stored proc job\n // *session - retrieve session key (IPC)\n // *license - license this code\n :\n\nExample kill XMLSERVICE for '/tmp/rangerusr'::\n\n $ipc='/tmp/rangerusr';\n $ctlKill=\"*immed\";\n $clobInKill = '<?xml version=\"1.0\"?>';\n $sql = \"call $libxmlservice.iPLUGR4K('$ipc','$ctlKill','$clobInKill')\";\n $ret=db2_exec($conn,$sql);\n\n\n**3) IN CI CLOB(4K to 15M) - XML input**\n\nThis is the input XML request document (see XML syntax next section). 
The XML input document will run all actions in the XMLSERVICE job and produce an XML output document.\n\n::\n\n <?xml version=\"1.0\"?>\n <script>\n <cmd comment='addlible'>ADDLIBLE LIB(DB2) POSITION(*FIRST)</cmd>\n <pgm name='ZZSRV' func='ZZARRAY'>\n <parm comment='search this name'>\n <data var='myName' type='10A'>Ranger</data>\n </parm>\n <parm comment='max allowed return'>\n <data var='myMax' type='10i0'>5</data>\n </parm>\n <parm comment='actual count returned'>\n <data var='myCount' type='10i0' enddo='mycount'>0</data>\n </parm>\n <return>\n <ds var='dcRec_t' dim='10' dou='mycount'>\n <data var='dcMyName' type='10A'>na</data>\n <data var='dcMyJob' type='4096A'>na</data>\n <data var='dcMyRank' type='10i0'>0</data>\n <data var='dcMyPay' type='12p2'>0.0</data>\n </ds>\n </return>\n </pgm>\n </script>\n\n\n**4) OUT CO CLOB(15M) - XML output**\n\nThe output XML document will be produced in the same order that the XML input script runs, The output can be collected from the in/out parameter (iPLUGxxx) or from fetch of a result set (iPLUGRxxx).\n\n::\n\n <?xml version=\"1.0\"?>\n <script>\n <cmd comment='addlible'>+++ success ADDLIBLE LIB(XMLSERVICE) POSITIO</cmd>\n <pgm name='ZZSRV' lib='XMLSERVICE' func='ZZARRAY'>\n <parm comment='search this name'>\n <data var='myName'>Ranger</data>\n </parm>\n <parm comment='max allowed return'>\n <data var='myMax'>5</data>\n </parm>\n <parm comment='actual count returned'>\n <data var='myCount'>5</data>\n </parm>\n <return>\n <ds var='dcRec_t'>\n <data var='dcMyName'>Ranger1</data>\n <data var='dcMyJob'>Test 101</data>\n <data var='dcMyRank'>11</data>\n <data var='dcMyPay'>13.42</data>\n </ds>\n <ds var='dcRec_t'>\n <data var='dcMyName'>Ranger2</data>\n <data var='dcMyJob'>Test 102</data>\n <data var='dcMyRank'>12</data>\n <data var='dcMyPay'>26.84</data>\n </ds>\n <ds var='dcRec_t'>\n <data var='dcMyName'>Ranger3</data>\n <data var='dcMyJob'>Test 103</data>\n <data var='dcMyRank'>13</data>\n <data var='dcMyPay'>40.26</data>\n 
</ds>\n <ds var='dcRec_t'>\n <data var='dcMyName'>Ranger4</data>\n <data var='dcMyJob'>Test 104</data>\n <data var='dcMyRank'>14</data>\n <data var='dcMyPay'>53.68</data>\n </ds>\n <ds var='dcRec_t'>\n <data var='dcMyName'>Ranger5</data>\n <data var='dcMyJob'>Test 105</data>\n <data var='dcMyRank'>15</data>\n <data var='dcMyPay'>67.10</data>\n </ds>\n </return>\n </pgm>\n </script>\n\n**Workaround for \"bad\" drivers leaving junk at the end of the output clob**\n\nAlas, the world of 1-2 tier DB2 drivers (IBM i, Windows, Linux, etc.) is not perfect, so occasionally a \"hack\" is handy.\n\nI always \"scope\" my XML input requests with <script>...</script>, so anything past the trailing </script> is 'junk' (errors return as <report>...</report>).\n\nThe new XMLSERVICE keyword \*hack adds </hack> to the end of every record in the returned result set, which can be very useful for drivers that do not support stored procedure in/out parameters, like PHP odbc.\n\n::\n\n function driverJunkAway($xml)\n {\n // trim blanks (oops no)\n $clobOut = $xml;\n if (! 
trim($clobOut)) return $clobOut;\n\n // result set has extra data (junk)\n $fixme = '</hack>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos);\n }\n else {\n $fixme = '</script>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos+strlen($fixme));\n }\n // maybe error/performance report\n else {\n $fixme = '</report>';\n $pos = strpos($clobOut,$fixme);\n if ($pos > -1) {\n $clobOut = substr($clobOut,0,$pos+strlen($fixme));\n }\n }\n }\n return $clobOut;\n }\n\n**Example PHP client use driverJunkAway($xml)**\n\n::\n\n function xmlservice_ibm_db2($xml) {\n global $fast, $db, $user, $pass, $ipc, $ctl, $conn, $lib, $plug, $plugR;\n $xmlIn = $xml;\n $xmlOut = '';\n if (!$conn) {\n if ($fast) $conn = db2_pconnect($db, $user, $pass); // persistent/pooled connection\n else $conn = db2_connect($db, $user, $pass); // full open/close connection\n }\n if (!$conn) die(\"Bad connect: $db, $user\");\n $stmt = db2_prepare($conn, \"call $lib.$plug(?,?,?,?)\"); // Call XMLSERVICE\n // stored procedure interface\n // in/out parameter (xmlOut)\n // sizes: iPLUG4K - iPLUG15M\n if (!$stmt) die(\"Bad prepare: \".db2_stmt_errormsg());\n $ret=db2_bind_param($stmt, 1, \"ipc\", DB2_PARAM_IN); // ? - /tmp/raw_$user (*sbmjob)\n $ret=db2_bind_param($stmt, 2, \"ctl\", DB2_PARAM_IN); // ? - *here or *sbmjob\n $ret=db2_bind_param($stmt, 3, \"xmlIn\", DB2_PARAM_IN); // ? - XML input script\n $ret=db2_bind_param($stmt, 4, \"xmlOut\", DB2_PARAM_OUT); // ? - XML output return\n $ret=db2_execute($stmt);\n if (!$ret) die(\"Bad execute: \".db2_stmt_errormsg());\n return driverJunkAway($xmlOut); // just in case driver odd\n // ... 
possible junk end record,\n // record</script>junk\n }\n\n function xmlservice_odbc($xml) {\n global $fast, $db, $user, $pass, $ipc, $ctl, $conn, $lib, $plug, $plugR;\n $xmlIn = $xml;\n $xmlOut = '';\n if (!$conn) {\n if ($fast) $conn = odbc_pconnect($db, $user, $pass); // persistent/pooled connection\n else $conn = odbc_connect($db, $user, $pass); // full open/close connection\n }\n $stmt = odbc_prepare($conn, \"call $lib.$plugR(?,?,?)\"); // Call XMLSERVICE\n // stored procedure interface\n // result set return (fetch)\n // sizes: iPLUGR4K - iPLUGR15M\n if (!$stmt) die(\"Bad prepare: \".odbc_errormsg());\n $options = array($ipc, // ? - /tmp/raw_$user (*sbmjob)\n $ctl, // ?- *here or *sbmjob\n $xmlIn); // ?- XML input script\n // bad behavior odbc extension\n // ... IBM i result set warning???\n error_reporting(~E_ALL); // bad behavior odbc extension\n // ... IBM i result set warning???\n $ret=odbc_execute($stmt,$options);\n if (!$ret) die(\"Bad execute: \".odbc_errormsg());\n error_reporting(E_ALL);\n while(odbc_fetch_row($stmt)) {\n $xmlOut .= driverJunkAway(odbc_result($stmt, 1)); // bad behavior odbc extension\n // ... 
possible junk end record,\n // xmlservice provided $ctl='*hack'\n // record</hack>junk\n }\n return $xmlOut;\n }\n\n function xmlservice_pdo_ibm($xml) {\n global $fast, $db, $user, $pass, $ipc, $ctl, $conn, $lib, $plug, $plugR;\n $xmlIn = $xml;\n $xmlOut = '';\n if (!$conn) {\n $database = \"ibm:\".$db;\n try {\n if ($fast) $opt = array(PDO::ATTR_PERSISTENT=>true, PDO::ATTR_AUTOCOMMIT=>true); // persistent/pooled connection\n else $opt = array(PDO::ATTR_AUTOCOMMIT=>true); // full open/close connection\n $conn = new PDO($database, strtoupper($user), strtoupper($pass), $opt);\n if (!$conn) throw new Exception(\"Bad\");\n } catch( Exception $e ) {\n die(\"Bad connect: $database, $user\");\n }\n }\n try {\n $stmt = $conn->prepare(\"call $lib.$plug(?,?,?,?)\"); // Call XMLSERVICE\n // stored procedure interface\n // in/out parameter (xmlOut)\n // sizes: iPLUG4K - iPLUG15M\n if (!$stmt) throw new Exception('Bad');\n } catch( Exception $e ) {\n $err = $conn->errorInfo();\n $cod = $conn->errorCode();\n die(\"Bad prepare: \".$cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n }\n try {\n $r1=$stmt->bindParam(1,$ipc, PDO::PARAM_STR);\n $r2=$stmt->bindParam(2,$ctl, PDO::PARAM_STR);\n $r3=$stmt->bindParam(3,$xmlIn, PDO::PARAM_STR);\n $r4=$stmt->bindParam(4,$xmlOut, PDO::PARAM_STR|PDO::PARAM_INPUT_OUTPUT);\n $ret = $stmt->execute();\n if (!$ret) throw new Exception('Bad');\n } catch( Exception $e ) {\n $err = $stmt->errorInfo();\n $cod = $stmt->errorCode();\n die(\"Bad execute: \".$cod.\" \".$err[0].\" \".$err[1].\" \".$err[2]);\n }\n return driverJunkAway($xmlOut); // just in case driver odd\n // ... 
possible junk end record,\n // record</script>junk\n }\n\n\nXMLSERVICE advanced CCSID\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nUsing default PHP toolkit DB2 clob interface (iPLUGxxx/iPLUGRxxx), ccsid conversion occurs naturally as DB2 client/server and you will not have to code before/after, but method is available if you have a specific concern or you have scripts returning many different languages.\n\nExample::\n\n <data type='200A' before='819/424' after='424/819'>bin2hex('Hebrew_ascii_raw_chars')</data>\n <data type='200A' before='819/1098' after='1098/819'>bin2hex('Farsi_ascii_raw_chars')</data>\n <data type='200A' before='819/880' after='880/819'>bin2hex('Russia_ascii_raw_chars')</data>\n <data type='200A' before='819/280' after='280/819'>bin2hex('Italy_ascii_raw_chars')</data>\n <data type='200A' before='819/273' after='273/819'>bin2hex('Germany_ascii_raw_chars')</data>\n <data type='200A' before='819/1088' after='1088/819'>bin2hex('Korea_ascii_raw_chars')</data>\n <data type='200A' before='1208/13488' after='13488/1208'>bin2hex('Japan_ascii_raw_chars')</data>\n <data type='200A' before='1208/13488' after='13488/1208'>bin2hex('China_ascii_raw_chars')</data>\n where:\n before - XMLSERVICE convert CCSID before ILE program call\n after - XMLSERVICE convert CCSID after ILE program call for client return\n bin2hex() - script hex string unaltered ascii image (also returned hex string avoid any conversion)\n pack() - script uses pack('H*',\"xml_hex_back\") function in PHP program for ascii characters\n Note:\n Up to four conversions can take place for the truly diabolical ccsid issues\n <data type='A' before='cc1/cc2/cc3/cc4' after='cc4/cc3/cc2/cc1'>bin2hex('wild_ascii_raw_chars')</data>\n flow:\n -> PHP client bin2hex('wild_ascii_raw_chars')\n -> xmlservice hex2bin back to 'wild_ascii_raw_chars'\n -> xmlservice convert cc1->cc2->cc3->cc4 (before)\n -> xmlservice make ILE call\n -> xmlservice convert cc4->cc3->cc2->cc1 (after)\n -> xmlservice tohex \"xml_hex_back\"\n -> PHP 
client $chars = pack('H*',\"xml_hex_back\")\n\n\n\n\n.. \n [--Author([[http://youngiprofessionals.com/wiki/index.php/XMLSERVICE/XMLSERVICEGeneric?action=expirediff | s ]])--]\n [--Tony \"Ranger\" Cairns - IBM i PHP / PASE--]\n\n\n"
},
{
"alpha_fraction": 0.6713240146636963,
"alphanum_fraction": 0.6959414482116699,
"avg_line_length": 38.5,
"blob_id": "aa9413d9ac9722f6f80a96dd0f27015e5a49c9e3",
"content_id": "f166b259a7cadf8b427f43e7173e9d56b69dc2ab",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 1503,
"license_type": "permissive",
"max_line_length": 212,
"num_lines": 38,
"path": "/test/php/test_50110_ibm_db2_io_jvm_ZZJAVA2.phpt",
"repo_name": "IBM/xmlservice",
"src_encoding": "UTF-8",
"text": "--TEST--\nXML i Toolkit: IBM_DB2 inout JVM test ZZJAVA2 return classpath\n--SKIPIF--\n<?php require_once('skipifdb2.inc'); ?>\n--FILE--\n<?php\nrequire_once('connection.inc');\nrequire_once(\"ToolkitService.php\");\n\n// IBM i\n$conn = db2_connect($database,$user,$password);\nif (!$conn) die(\"Bad connect: $database,$user\");\n\n// normal RPG java: 'customControl'=>'*java' -- set your own classpath\n// SP SQL java: 'customControl'=>'*sqljava' or 'customControl'=>'*dbgjava' -- your classpath ignored\ntry { $ToolkitServiceObj = ToolkitService::getInstance($conn); } catch (Exception $e) { die($e->getMessage()); }\n$options = array('plugSize'=>'4K','customControl'=>'*java','stateless'=>true);\n$ToolkitServiceObj->setToolkitServiceParams($options);\n\n$yours = \"/QIBM/ProdData/OS400/Java400/ext/db2routines_classes.jar:/QIBM/ProdData/OS400/Java400/ext/runtime.zip:/QIBM/ProdData/OS400/Java400/ext/sqlj_classes.jar:/QIBM/ProdData/OS400/Java400/ext/db2_classes.jar\";\n$mine = \"/home/adc\";\n$ours = $mine.':'.$yours;\n$result = $ToolkitServiceObj->CLCommand(\"ADDENVVAR ENVVAR(CLASSPATH) VALUE('{$ours}') REPLACE(*YES)\");\nvar_dump($result);\n\necho \"calling java now ...\\n\";\n$param = array();\n$param[] = $ToolkitServiceObj->AddParameterChar ('both', 4096, 'classpath', 'classpath', '');\n$result = $ToolkitServiceObj->PgmCall('ZZJAVA2', 'XMLSERVICE', $param, null, null);\nvar_export($result);\n\nif (strpos($result[\"io_param\"][\"classpath\"],$mine)<1) die(\"ZZJAVA missing $mine\");\n\necho \"\\nSuccess\\n\";\n?>\n--EXPECTF--\n%s\nSuccess\n\n\n"
}
] | 146 |
slyg/rpi-dsl-monitoring | https://github.com/slyg/rpi-dsl-monitoring | a8a67ccae50d6caf20e1f897ced91087df1cc15c | 7d8c1bc0aa921e1380ea8be36b56d12150159963 | 82ba30eac216db1ed0d1b1e71bcf58d5745ee296 | refs/heads/master | 2022-11-13T13:47:40.081236 | 2020-06-12T17:02:45 | 2020-06-12T17:02:45 | 268,566,047 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6682297587394714,
"alphanum_fraction": 0.6752637624740601,
"avg_line_length": 22.054054260253906,
"blob_id": "1afb20a30b78aedeef3eee39ffb46a2271029bab",
"content_id": "5fc7de0b547e24cade3b056ec93a04f9878e0864",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 853,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 37,
"path": "/playground.py",
"repo_name": "slyg/rpi-dsl-monitoring",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport os\nimport random\nimport signal\nimport sys\nfrom time import sleep\n\nfrom icons import (demon, fantom_blue_md, fantom_blue_tl, fantom_red_tl,\n fantom_red_tr, skull, skull_front,\n skull_front_red, bender, ghost, green_ghost, pumpkin, creeper)\nfrom sense_hat import SenseHat\n\nsense = SenseHat()\nsense.set_rotation(270)\n\n\ndef exit_handler(signal):\n sense.clear()\n sys.exit(0)\n\n\nsignal.signal(signal.SIGTERM, exit_handler)\n\nicons = [demon, fantom_blue_md, fantom_blue_tl, fantom_red_tl,\n fantom_red_tr, skull, skull_front,\n skull_front_red, bender, ghost, green_ghost, pumpkin, creeper]\n\nwhile True:\n try:\n sense.set_pixels(random.choice(icons))\n sleep(1)\n\n except KeyboardInterrupt:\n exit_handler(signal.SIGTERM)\n\nexit_handler(signal.SIGTERM)\n"
},
{
"alpha_fraction": 0.7442307472229004,
"alphanum_fraction": 0.7442307472229004,
"avg_line_length": 19.799999237060547,
"blob_id": "f30d4656294eb3662e253f6282976913a8529246",
"content_id": "3214fef9836904cdf100c34dabd8f15bcee72405",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 520,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 25,
"path": "/Makefile",
"repo_name": "slyg/rpi-dsl-monitoring",
"src_encoding": "UTF-8",
"text": ".service_name = dsl_monitoring.service\n\n.PHONY: sync\nsync:\n\trsync -a ./ [email protected]:git/rpi-dsl-monitoring\n\n.PHONY: install-service\ninstall-service:\n\tsudo cp $(.service_name) /etc/systemd/system/$(.service_name)\n\n.PHONY: start-service\nstart-service:\n\tsudo systemctl start $(.service_name)\n\n.PHONY: stop-service\nstop-service:\n\tsudo systemctl stop $(.service_name)\n\n.PHONY: set-onrestart\nset-onrestart:\n\tsudo systemctl enable $(.service_name)\n\n.PHONY: unset-onrestart\nunset-onrestart:\n\tsudo systemctl disable $(.service_name)\n"
},
{
"alpha_fraction": 0.3638054430484772,
"alphanum_fraction": 0.46409156918525696,
"avg_line_length": 25.172285079956055,
"blob_id": "9be703c2f31afdde8bcfc66b7e3b307b883cda32",
"content_id": "abd8ef1e1cd87cea7128ea587f83a53841d5dd85",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6990,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 267,
"path": "/icons.py",
"repo_name": "slyg/rpi-dsl-monitoring",
"src_encoding": "UTF-8",
"text": "\n\nXX = (0, 0, 0)\nW0 = (255, 255, 255)\nW1 = (200, 200, 200)\nW2 = (100, 100, 100)\n\nP0 = (255, 20, 140)\nP1 = (150, 5, 8)\nP2 = (50, 0, 20)\n\nR0 = (255, 0, 0)\nR1 = (150, 0, 0)\nR2 = (50, 0, 0)\n\nG0 = (0, 255, 0)\nG1 = (0, 150, 0)\nG2 = (0, 50, 0)\n\nB0 = (0, 0, 255)\nB1 = (0, 0, 150)\nB2 = (0, 0, 50)\n\nY0 = (255, 255, 0)\n\nOR = (255,165,0)\n\n\ndef px_choice(pixel_a, pixel_b):\n '''\n Selects pixel b in priority\n '''\n if pixel_b == XX:\n return pixel_a\n else:\n return pixel_b\n\n\ndef compose_img(image_a, image_b):\n return [px_choice(pixel_a, pixel_b) for pixel_a, pixel_b in zip(image_a, image_b)]\n\n\ncrossed_W0 = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n W0, XX, XX, XX, XX, XX, XX, XX,\n XX, W0, XX, XX, XX, XX, XX, XX,\n XX, XX, W0, XX, XX, XX, XX, XX,\n XX, XX, XX, W0, XX, XX, XX, XX,\n XX, XX, XX, XX, W0, XX, XX, XX,\n XX, XX, XX, XX, XX, W0, XX, XX,\n XX, XX, XX, XX, XX, XX, W0, XX,\n]\n\ncrossed_W1 = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n W1, XX, XX, XX, XX, XX, XX, XX,\n XX, W1, XX, XX, XX, XX, XX, XX,\n XX, XX, W1, XX, XX, XX, XX, XX,\n XX, XX, XX, W1, XX, XX, XX, XX,\n XX, XX, XX, XX, W1, XX, XX, XX,\n XX, XX, XX, XX, XX, W1, XX, XX,\n XX, XX, XX, XX, XX, XX, W1, XX,\n]\n\ncrossed_W2 = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n W2, XX, XX, XX, XX, XX, XX, XX,\n XX, W2, XX, XX, XX, XX, XX, XX,\n XX, XX, W2, XX, XX, XX, XX, XX,\n XX, XX, XX, W2, XX, XX, XX, XX,\n XX, XX, XX, XX, W2, XX, XX, XX,\n XX, XX, XX, XX, XX, W2, XX, XX,\n XX, XX, XX, XX, XX, XX, W2, XX,\n]\n\npink_heart = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, P0, P0, XX, P0, P1, XX, XX,\n P0, P0, P0, P1, P1, P1, P1, XX,\n P0, P0, P1, P1, P1, P1, P2, XX,\n XX, P1, P1, P1, P1, P2, XX, XX,\n XX, XX, P1, P1, P2, XX, XX, XX,\n XX, XX, XX, P2, XX, XX, XX, XX,\n]\n\npink_heart_crossed = compose_img(pink_heart, crossed_W1)\n\npink_heart_empty = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, P0, P0, XX, P0, P1, XX, XX,\n P0, XX, 
XX, P1, XX, XX, P1, XX,\n P0, XX, XX, XX, XX, XX, P2, XX,\n XX, P1, XX, XX, XX, P2, XX, XX,\n XX, XX, P1, XX, P2, XX, XX, XX,\n XX, XX, XX, P2, XX, XX, XX, XX,\n]\n\npink_heart_empty_crossed = compose_img(pink_heart_empty, crossed_W1)\n\nred_heart = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, R0, R0, XX, R0, R1, XX, XX,\n R0, R0, R0, R0, R1, R1, R1, XX,\n R0, R0, R0, R1, R1, R1, R2, XX,\n XX, R0, R1, R1, R1, R2, XX, XX,\n XX, XX, R1, R1, R2, XX, XX, XX,\n XX, XX, XX, R2, XX, XX, XX, XX,\n]\n\nred_heart_crossed = compose_img(red_heart, crossed_W2)\n\nred_heart_empty = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, R0, R0, XX, R0, R1, XX, XX,\n R0, XX, XX, R0, XX, XX, R1, XX,\n R0, XX, XX, XX, XX, XX, R2, XX,\n XX, R0, XX, XX, XX, R2, XX, XX,\n XX, XX, R1, XX, R2, XX, XX, XX,\n XX, XX, XX, R2, XX, XX, XX, XX,\n]\n\nred_heart_empty_crossed = compose_img(red_heart_empty, crossed_W2)\n\nfantom_red_tl = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, R0, R0, R0, R1, XX, XX,\n XX, R0, R0, R0, R0, R0, R1, XX,\n R0, R0, XX, W0, R0, XX, W1, R2,\n R0, R0, W0, W0, R0, W1, W1, R2,\n R0, R0, R0, R0, R0, R0, R1, R2,\n R0, R0, R0, R0, R0, R0, R1, R2,\n R0, XX, R0, R0, XX, R0, R1, R2,\n]\n\nfantom_red_tr = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, R0, R0, R0, R1, XX, XX,\n XX, R0, R0, R0, R0, R0, R1, XX,\n R0, R0, W0, XX, R0, W1, XX, R2,\n R0, R0, W0, W0, R0, W1, W1, R2,\n R0, R0, R0, R0, R0, R0, R1, R2,\n R0, R0, R0, R0, R0, R0, R1, R2,\n R0, XX, R0, R0, XX, R0, R1, R2,\n]\n\nfantom_blue_tl = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, B0, B0, B0, B1, XX, XX,\n XX, B0, B0, B0, B0, B0, B1, XX,\n B0, B0, XX, W0, B0, XX, W1, B2,\n B0, B0, W0, W0, B0, W1, W1, B2,\n B0, B0, B0, B0, B0, B0, B1, B2,\n B0, B0, B0, B0, B0, B0, B1, B2,\n B0, XX, B0, B0, XX, B0, B1, B2,\n]\n\nfantom_blue_md = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, B0, B0, B0, B1, XX, XX,\n XX, B0, B0, B0, B0, B0, B1, XX,\n B0, B0, W0, W0, B0, W1, W1, 
B2,\n B0, B0, W0, XX, B0, XX, W1, B2,\n B0, B0, B0, B0, B0, B0, B1, B2,\n B0, B0, B0, B0, B0, B0, B1, B2,\n B0, XX, B0, B0, XX, B0, B1, B2,\n]\n\nskull = [\n XX, W0, W0, W0, W0, W0, W0, XX,\n W0, W0, W0, W0, W0, W0, W0, W0,\n W0, XX, W0, W0, W0, XX, W0, W0,\n W0, R0, W0, W0, W0, R0, W0, W0,\n W0, W0, W0, XX, W0, W0, W0, W0,\n W0, W0, W0, W0, W0, W0, W0, XX,\n XX, W0, XX, W0, XX, W0, XX, XX,\n XX, W2, XX, W2, XX, W2, XX, XX,\n]\n\nskull_front = [\n XX, W2, W0, W0, W0, W0, W2, XX,\n W2, W0, W0, W0, W0, W0, W0, W2,\n W0, W0, W0, W0, W0, W0, W0, W0,\n W2, XX, XX, W0, W0, XX, XX, W2,\n W2, XX, R0, W0, W0, R0, XX, W2,\n W0, W0, W0, XX, XX, W0, W0, W0,\n XX, W0, W0, W0, W0, W0, W0, XX,\n XX, W0, XX, W0, W0, XX, W0, XX,\n]\n\nskull_front_red = [\n XX, R2, R0, R0, R0, R0, R2, XX,\n R2, R0, R0, R0, R0, R0, R0, R2,\n R0, R0, R0, R0, R0, R0, R0, R0,\n R2, XX, XX, R0, R0, XX, XX, R2,\n R2, XX, W0, R0, R0, W0, XX, R2,\n R0, R0, R0, XX, XX, R0, R0, R0,\n XX, R0, R0, R0, R0, R0, R0, XX,\n XX, R0, XX, R0, R0, XX, R0, XX,\n]\n\ndemon = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n R0, XX, XX, XX, XX, XX, XX, R0,\n R0, R0, XX, XX, XX, XX, R0, R0,\n XX, R0, XX, XX, XX, XX, R0, XX,\n XX, XX, XX, XX, XX, XX, XX, XX,\n R0, XX, XX, R2, R2, XX, XX, R0,\n XX, R0, R0, R0, R0, R0, R0, XX,\n XX, XX, R0, XX, XX, R0, XX, XX,\n]\n\nbender = [\n XX, XX, XX, W2, XX, XX, XX, XX,\n XX, XX, XX, W2, XX, XX, XX, XX,\n XX, XX, W2, W2, W2, XX, XX, XX,\n XX, W2, W2, W2, W2, W2, XX, XX,\n XX, W2, Y0, W2, Y0, W2, XX, XX,\n XX, W2, W2, W2, W2, W2, XX, XX,\n XX, XX, Y0, Y0, Y0, XX, XX, XX,\n XX, XX, Y0, Y0, Y0, XX, XX, XX,\n]\n\nghost = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, XX, W1, W1, XX, XX, XX,\n XX, XX, W1, W1, W1, W1, XX, XX,\n XX, XX, R0, W1, R0, W1, XX, XX,\n W1, XX, W1, W1, W1, W1, XX, W1,\n XX, W1, W1, XX, XX, W1, W1, XX,\n XX, XX, W1, W1, W1, W1, XX, XX,\n XX, XX, W1, W1, W1, W1, XX, XX,\n]\n\ngreen_ghost = [\n XX, XX, XX, XX, XX, XX, XX, XX,\n XX, XX, XX, G0, G0, G0, XX, XX,\n XX, XX, G0, R0, G0, 
R0, XX, XX,\n G0, XX, G0, G0, G0, G0, XX, XX,\n XX, G0, G0, G0, XX, G0, XX, G0,\n XX, XX, G0, G0, G0, XX, G0, XX,\n XX, XX, G0, G0, XX, XX, XX, XX,\n XX, G0, G0, XX, XX, XX, XX, XX,\n]\n\npumpkin = [\n XX, XX, XX, XX, G0, XX, XX, XX,\n XX, XX, XX, G0, XX, XX, XX, XX,\n XX, XX, OR, OR, OR, OR, XX, XX,\n XX, OR, W2, OR, OR, W2, OR, XX,\n OR, OR, OR, OR, OR, OR, OR, OR,\n OR, OR, XX, OR, XX, OR, XX, OR,\n XX, OR, OR, XX, OR, XX, OR, XX,\n XX, XX, XX, XX, XX, XX, XX, XX,\n]\n\ncreeper = [\n XX, G2, G1, G2, G2, G2, G2, XX,\n G1, G1, G2, G2, G1, G2, G1, G2,\n G2, XX, XX, G2, G1, XX, XX, G1,\n G1, OR, XX, G1, G2, OR, XX, G1,\n G2, G1, G2, XX, XX, G2, G1, G2,\n G1, G2, XX, XX, XX, XX, G1, G2,\n G1, G2, XX, XX, XX, XX, G2, G1,\n XX, G1, XX, G1, G2, XX, G1, XX,\n]\n"
},
{
"alpha_fraction": 0.5984556078910828,
"alphanum_fraction": 0.6177605986595154,
"avg_line_length": 23.1733341217041,
"blob_id": "f915f7804c54e9872d624b6257c62599f7ec3076",
"content_id": "5cf554ece8c5faef869e91e9338f01a7b0b756a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1818,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 75,
"path": "/dsl_monitoring.py",
"repo_name": "slyg/rpi-dsl-monitoring",
"src_encoding": "UTF-8",
"text": "import os\nimport random\nimport signal\nimport sys\nfrom time import sleep\n\nimport schedule\nfrom colors import blue, cyan, indigo, orange, pink, purple, red, yellow\nfrom icons import (demon, fantom_blue_md, fantom_blue_tl, fantom_red_tl,\n fantom_red_tr, skull, skull_front,\n skull_front_red, bender, ghost, green_ghost, pumpkin, creeper)\nfrom sense_hat import SenseHat\n\nsense = SenseHat()\nsense.low_light = True\nsense.set_rotation(270)\n\nicons = [demon, fantom_blue_md, fantom_blue_tl, fantom_red_tl,\n fantom_red_tr, skull, skull_front,\n skull_front_red, bender, ghost, green_ghost, pumpkin, creeper]\n\ndef exit_handler(signal):\n sense.clear()\n sys.exit(0)\n\n\ndef ping():\n sense.clear()\n\n # Pick random color and line\n y = random.randrange(7)\n (R, G, B) = random.choice(\n [blue, cyan, indigo, orange, pink, purple, red, yellow])\n\n # Display ping start\n for i in range(8):\n sense.set_pixel(i, y, R, G, B)\n sleep(0.05)\n\n # Select a random DNS IP\n ping_ips = [\n \"1.1.1.1\", # Cloudflare\n \"8.8.8.8\", # Google DNS\n \"8.8.4.4\", # Google DNS\n ]\n ping_ip = random.choice(ping_ips)\n\n # Ping DNS IP\n response = os.popen(f\"ping {ping_ip} -c 1\").read()\n\n # Display ping ended\n for i in range(8):\n sense.set_pixel(i, y, 0, 0, 0)\n sleep(0.05)\n\n if \"1 packets transmitted, 1 received, 0% packet loss\" in response:\n sense.clear()\n else:\n sense.clear()\n sense.set_pixels(random.choice(icons))\n\n\n# Exit handling\nsignal.signal(signal.SIGTERM, exit_handler)\n\n# Schedule jobs\nschedule.every(15).seconds.do(ping)\nping()\n\nwhile True:\n try:\n schedule.run_pending()\n sleep(1)\n except KeyboardInterrupt:\n exit_handler(signal.SIGTERM)\n"
},
{
"alpha_fraction": 0.7936508059501648,
"alphanum_fraction": 0.7936508059501648,
"avg_line_length": 24.200000762939453,
"blob_id": "82b204e4b6888c874d904cff5efa154d084a8932",
"content_id": "2138f06972983e6d308061ba36da865406821808",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 126,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 5,
"path": "/README.md",
"repo_name": "slyg/rpi-dsl-monitoring",
"src_encoding": "UTF-8",
"text": "# rpi-dsl-monitoring\n\nService using sense hat to display DSL connection downtimes.\n\nUsed alongside pi-hole on a Raspberry Pi.\n"
}
] | 5 |
artrslpnv/python_review_2 | https://github.com/artrslpnv/python_review_2 | 20806a0d925f083abacff590c72c1d6e2c2288a5 | 0915869a26a4005334cdd999c2a8aa9aa6fec9fa | bad2e862f3cb7d8f73ce7c0de45290de577eae3f | refs/heads/master | 2020-05-19T09:10:10.618044 | 2019-05-14T22:10:15 | 2019-05-14T22:10:15 | 184,940,169 | 2 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6805702447891235,
"alphanum_fraction": 0.6818901896476746,
"avg_line_length": 44.638553619384766,
"blob_id": "756e4398f3b5fe3aa50023eadd02705d0fe6aed2",
"content_id": "11b87d686408a9533b3ae79c968934a6f870fab6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4231,
"license_type": "no_license",
"max_line_length": 161,
"num_lines": 83,
"path": "/handlers.py",
"repo_name": "artrslpnv/python_review_2",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport telebot\nfrom BD import get_match,add_matches\nfrom app import bot\nimport sqlite3\n\nconn = dict()\nnames=dict()\ndates=dict()\nscore=dict()\ngoals=dict()\[email protected]_handler(commands=['start'])\ndef handle_start(message):\n bot.send_message(message.chat.id, 'Привет, я телеграм-бот сообщающий события ,произошедшие в матче,напиши /help')\n\n\[email protected]_handler(commands=['help'])\ndef handle_reg(message):\n bot.send_message(message.chat.id, \"Напиши навзвания двух команд игравших в матче в формате ' Название команды хозяев \"\n \"- Название Команды гостей' \" )\n bot.register_next_step_handler(message, get_name) \n\n\[email protected]_handler(content_types=['text'])\ndef handle_text_message(message):\n bot.send_message(message.chat.id, message.text)\n\n\ndef get_name(message):\n names[message.chat.id] = message.text\n bot.send_message(message.chat.id, 'Когда это было? Дата в формате дд.мм.гггг')\n bot.register_next_step_handler(message, get_date)\ndef set_score(message):\n score[message.chat.id]=message.text\n bot.send_message(message.chat.id,\"Кто забил голы? 
в формате <Фамилия минута, Фамилия минута ...>\")\n bot.register_next_step_handler(message,set_goals)\ndef set_goals(message):\n goals[message.chat.id]= message.text\n add_matches(conn[message.chat.id].cursor(),conn[message.chat.id],names[message.chat.id],dates[message.chat.id],score[message.chat.id],goals[message.chat.id])\n bot.send_message(message.chat.id,\"Спасибо , Пока!\")\ndef get_date(message):\n dates[message.chat.id] = message.text\n conn[message.chat.id]=sqlite3.connect('mthcs.db')\n list=get_match(conn[message.chat.id], names[message.chat.id], dates[message.chat.id])\n if len(list)!=0:\n name,score,goals=list[0]\n answer = 'Матч {Name} закончился со счетом {Score} голы забили {Goals}, Вам понравилась моя работа?'.format(Name=name, Score=score, Goals=goals)\n keyboard = telebot.types.InlineKeyboardMarkup() \n key_yes = telebot.types.InlineKeyboardButton(text='Да', callback_data='yes')\n keyboard.add(key_yes)\n key_no = telebot.types.InlineKeyboardButton(text='Нет', callback_data='no')\n keyboard.add(key_no)\n key_idk = telebot.types.InlineKeyboardButton(text='Не знаю', callback_data='idk')\n keyboard.add(key_idk)\n bot.send_message(message.chat.id, text=answer, reply_markup=keyboard)\n else:\n answer=\"Данных по этому матчу найти не удалось :( , Не хотели бы вы нам помочь?\"\n keyboard = telebot.types.InlineKeyboardMarkup() \n key_yes = telebot.types.InlineKeyboardButton(text='Да', callback_data='Yes')\n keyboard.add(key_yes)\n key_no = telebot.types.InlineKeyboardButton(text='Нет', callback_data='No')\n keyboard.add(key_no)\n key_idk = telebot.types.InlineKeyboardButton(text='Не знаю', callback_data='Idk')\n keyboard.add(key_idk)\n bot.send_message(message.chat.id, text=answer, reply_markup=keyboard)\n\n\[email protected]_query_handler(func=lambda call: True)\ndef callback_worker(call):\n if call.data == \"yes\": \n bot.send_message(call.message.chat.id, 'Спасибо! 
: )')\n elif call.data == \"no\":\n bot.send_message(call.message.chat.id, \"Попрбуйте еще раз или найдите себе другого бота!:(\")\n bot.register_next_step_handler(call.message, get_name)\n elif call.data=='idk':\n bot.send_message(call.message.chat.id, \":|\")\n elif call.data == \"Yes\":\n bot.send_message(call.message.chat.id,'Введите счет этого матча')\n bot.register_next_step_handler(call.message,set_score)\n elif call.data == \"No\":\n bot.send_message(call.message.chat.id,\"Прощайте!\")\n elif call.data== \"Idk\":\n bot.send_message(call.message.chat.id , \":|\")\n"
},
{
"alpha_fraction": 0.4984227120876312,
"alphanum_fraction": 0.5536277890205383,
"avg_line_length": 37.3636360168457,
"blob_id": "1176e29d0261d5aa03b3aaa9198336f1cafd035a",
"content_id": "52f04866008d17c09621c060e4a81a4de0277b6f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1268,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 33,
"path": "/BD.py",
"repo_name": "artrslpnv/python_review_2",
"src_encoding": "UTF-8",
"text": "import sqlite3\ndef create_tables(cur, conn):\n cur.execute('''\n CREATE TABLE if Not EXISTS matches(\n nms VARCHAR(255) ,\n d VARCHAR (200),\n score varchar (200),\n goals varchar (250)\n \n )''')\n conn.commit()\ndef add_matches(cur,conn,fnms,fd,fscore,fgoals ):\n query=\"\"\"\n INSERT INTO matches (nms,\n d ,score ,goals )\n values ( '%(nms)s','%(d)s','%(score)s','%(goals)s')\n \"\"\" %{\"nms\":fnms,\"d\":fd,\"score\":fscore,\"goals\":fgoals}\n cur.execute(query)\n conn.commit()\ndef get_match(conn,fnms,fd):\n query = '''SELECT nms,score,goals\n FROM matches \n WHERE nms = '%(nms)s' and d = '%(d)s' '''% {\"nms\":fnms,\"d\":fd}\n cur = conn.cursor()\n cur.execute(query)\n list=cur.fetchmany(1)\n return list\ndef addind(cur,conn):\n add_matches(cur,conn,'Barcelona - Liverpool', '01.05.2019','3:0','Suarez 26, Messi 75,82')\n add_matches(cur,conn, 'Tottenham - Ajax', '30.04.2019','0:1', 'Van de Beek 15')\n add_matches(cur,conn,'Barcelona - Manchester United' , '16.04.2000' ,'3:0',\" Messi 16,20 Coutinho 61\")\n add_matches(cur,conn,'Manchester United - Barcelona', '10.04.2000','0:1','Show 12 (AG)')\n conn.commit()\n\n\n"
},
{
"alpha_fraction": 0.5461538434028625,
"alphanum_fraction": 0.7153846025466919,
"avg_line_length": 25.100000381469727,
"blob_id": "77cbbc1e3f70aa8381aefc09e31f5ca69ee3413e",
"content_id": "1a40e324c56dee77ab82b9213878565229702217",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 260,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 10,
"path": "/app.py",
"repo_name": "artrslpnv/python_review_2",
"src_encoding": "UTF-8",
"text": "import telebot\nbot=None\ndef init_bot(token):\n global bot\n bot=telebot.TeleBot('823664291:AAEfuC8rMJ4eRke2vSrdUsHy7h67RkPO3cE')\n telebot.apihelper.proxy={\n# 'https': 'http://138.68.166.224:8080'\n'https': 'http://54.37.136.149:3128'\n}\n import handlers"
},
{
"alpha_fraction": 0.6194332242012024,
"alphanum_fraction": 0.6538461446762085,
"avg_line_length": 23.75,
"blob_id": "66aa7fa08c70eec7f6c67f7e87fbd193e2016d5d",
"content_id": "2bdca33fc9245739a3802b54627c26fa6b11f574",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 494,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 20,
"path": "/run.py",
"repo_name": "artrslpnv/python_review_2",
"src_encoding": "UTF-8",
"text": "import argparse\nimport app\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description=\"Simple telegram bot\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument(\"--token-path\", default=\"./token\")\n\n return parser.parse_args()\n\n\ndef main():\n args = parse_args()\n app.init_bot('823664291:AAEfuC8rMJ4eRke2vSrdUsHy7h67RkPO3cE')\n app.bot.polling(none_stop=True, interval=1)\n\n\nif __name__ == \"__main__\":\n main()"
},
{
"alpha_fraction": 0.5723140239715576,
"alphanum_fraction": 0.692148745059967,
"avg_line_length": 47.29999923706055,
"blob_id": "4f95a14fd72bcb59c2ea8e830d20d9c67a4b5f37",
"content_id": "d713e8fcfe1bbb359b30e37d3f9e8f2b7d9fa546",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 484,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 10,
"path": "/adding_script.py",
"repo_name": "artrslpnv/python_review_2",
"src_encoding": "UTF-8",
"text": "from BD import add_matches,get_match\nimport sqlite3\nconn = sqlite3.connect('mthcs.db')\ncur=conn.cursor()\nadd_matches(cur,conn,'Barcelona - Liverpool', '01.05.2019','3:0','Suarez 26, Messi 75,82')\nadd_matches(cur,conn, 'Tottenham - Ajax', '30.04.2019','0:1', 'Van de Beek 15')\nadd_matches(cur,conn,'Barcelona - Manchester United' , '16.04.2000' ,'3:0',\" Messi 16,20 Coutinho 61\")\nadd_matches(cur,conn,'Manchester United - Barcelona', '10.04.2000','0:1','Show 12 (AG)')\n\nconn.commit()\n\n"
}
] | 5 |
CornellDataScience/distributed_gpu_computing | https://github.com/CornellDataScience/distributed_gpu_computing | a36dc7882c1c6409abc062183f99577c49d9e708 | b731e064021e830619e95d5f539a00e28880c6f7 | d69f4d35b78aa33df1bbaf37dcc946a67b86a8b0 | refs/heads/master | 2021-04-12T09:57:27.852634 | 2018-03-24T20:59:47 | 2018-03-24T20:59:47 | 126,542,275 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.567639946937561,
"alphanum_fraction": 0.5922850370407104,
"avg_line_length": 43.177513122558594,
"blob_id": "520bcea13985fa50b0b406e4ed40f8380d022b91",
"content_id": "2326fd512ce074555c552c08f15a5b7a7f066543",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7466,
"license_type": "no_license",
"max_line_length": 165,
"num_lines": 169,
"path": "/baseline_models/FullyConnectedNet.py",
"repo_name": "CornellDataScience/distributed_gpu_computing",
"src_encoding": "UTF-8",
"text": "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('../data/', one_hot=True)\n\nfrom tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file\n\ndef plot_digits(*args):\n # indices of the image of a certain digt\\it\n digit_indices = np.array(list(args))\n num_indices = len(digit_indices)\n fig, axes = plt.subplots(ncols=num_indices, figsize=(num_indices, 1))\n for i in range(num_indices):\n ax = axes.flatten()[i]\n img = mnist.train.images[digit_indices[i]].reshape(28, 28)\n label = np.argmax(mnist.train.labels[digit_indices[i]])\n ax.set_title('Label: {}'.format(label))\n ax.imshow(img, cmap='gray')\n ax.axis('off')\n plt.show()\n\nclass FCN():\n def __init__(self, save_path):\n self.session = tf.Session()\n self.save_path = save_path\n\n self.define_inputs()\n self.define_fc1()\n self.define_fc2()\n self.define_softmax()\n self.define_optimization()\n\n def weight_variable(self, shape):\n w_init = tf.truncated_normal(shape, stddev=0.01)\n return tf.get_variable('w', initializer=w_init)\n\n def bias_variable(self, shape):\n b_init = tf.constant(0.0, shape=shape)\n return tf.get_variable('b', initializer=b_init)\n\n def define_inputs(self):\n with tf.variable_scope('inputs'):\n self.x = tf.placeholder(tf.float32, [None, 784])\n self.y = tf.placeholder(tf.float32, [None, 10])\n self.prob_i = tf.placeholder(tf.float32)\n x_drop = tf.nn.dropout(self.x, keep_prob=self.prob_i)\n\n def define_fc1(self):\n with tf.variable_scope('fc1'):\n W_fc1 = self.weight_variable([784, 1200])\n b_fc1 = self.bias_variable([1200])\n self.h_fc1 = tf.nn.relu(tf.matmul(self.x, W_fc1) + b_fc1)\n self.prob_fc = tf.placeholder(tf.float32)\n h_fc1_drop = tf.nn.dropout(self.h_fc1, self.prob_fc)\n\n def define_fc2(self):\n with tf.variable_scope('fc2'):\n W_fc2 = self.weight_variable([1200, 1200])\n b_fc2 = 
self.bias_variable([1200])\n self.h_fc2 = tf.nn.relu(tf.matmul(self.h_fc1, W_fc2) + b_fc2)\n h_fc2_drop = tf.nn.dropout(self.h_fc2, self.prob_fc)\n\n def define_softmax(self):\n with tf.variable_scope('softmax'):\n W_fc3 = self.weight_variable([1200, 10])\n b_fc3 = self.bias_variable([10])\n self.preds = tf.nn.softmax(tf.matmul(self.h_fc2, W_fc3) + b_fc3)\n\n def define_optimization(self):\n with tf.variable_scope('optimization'):\n # define the loss function\n self.cross_entropy = tf.reduce_mean(-tf.reduce_sum(self.y * tf.log(self.preds), reduction_indices=[1]))\n # define training step and accuracy\n self.train_step = tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9).minimize(self.cross_entropy)\n self.correct = tf.equal(tf.argmax(self.preds, 1), tf.argmax(self.y, 1))\n self.accuracy = tf.reduce_mean(tf.cast(self.correct, tf.float32))\n\n def training_accuracy(self):\n return self.session.run(self.accuracy, feed_dict={self.x: self.x_train, self.y: self.y_train,\n self.prob_i: 1.0, self.prob_fc: 1.0})\n\n def validation_accuracy(self):\n return self.session.run(self.accuracy, feed_dict={self.x: mnist.validation.images,\n self.y: mnist.validation.labels, self.prob_i: 1.0, self.prob_fc: 1.0})\n\n def test_accuracy(self):\n feed_dict={self.x: mnist.test.images,\n self.y: mnist.test.labels, self.prob_i: 1.0, self.prob_fc: 1.0}\n test_accuracy = self.session.run(self.accuracy, feed_dict={self.x: mnist.test.images,\n self.y: mnist.test.labels, self.prob_i: 1.0, self.prob_fc: 1.0})\n return test_accuracy\n\n def train(self, num_iterations):\n with self.session as sess:\n self.saver = tf.train.Saver(tf.global_variables())\n batch_size = 100\n print('Starting training...')\n start_time = time.time()\n best_accuracy = 0\n init = tf.global_variables_initializer()\n self.session.run(init)\n\n for i in range(num_iterations):\n self.x_train, self.y_train = mnist.train.next_batch(batch_size)\n if (i + 1) % 1000 == 0:\n train_accuracy = self.training_accuracy()\n 
print(\"step %d, training accuracy %g\" % (i, train_accuracy))\n #run training step (note dropout hyperparameters)\n sess.run(self.train_step, feed_dict={self.x: self.x_train,\n self.y: self.y_train, self.prob_i: 0.8, self.prob_fc: 0.5})\n\n # validate\n val_accuracy = self.validation_accuracy()\n print(\"val_accuracy %g\" % val_accuracy)\n\n if val_accuracy > best_accuracy:\n self.saver.save(sess, self.save_path)\n best_accuracy = val_accuracy\n print(\"Validation accuracy improved: %g. Saving the network.\" % val_accuracy)\n else:\n self.saver.restore(sess, self.save_path)\n print(\"Validation accuracy was: %g. Previous accuracy: %g. \" % (val_accuracy, best_accuracy) + \"Using old parameters for further optimizations.\")\n\n print(\"Training took %.4f seconds.\" % (time.time() - start_time))\n self.test_accuracy()\n print(\"Best test accuracy: %g\" % best_accuracy)\n\n \"\"\"\n def load_trained_model(self):\n tf.reset_default_graph()\n self.session = tf.InteractiveSession()\n self.saver = tf.train.import_meta_graph('/models/fcn/mnist_fc.meta')\n self.saver.restore(self.session, tf.train.latest_checkpoint('../models/fcn'))\n self.x = tf.get_default_graph().get_tensor_by_name('inputs/Placeholder:0')\n self.preds = tf.get_default_graph().get_tensor_by_name('softmax/Softmax:0')\n p1 = tf.get_default_graph().get_tensor_by_name('inputs/Placeholder_2:0')\n p2 = tf.get_default_graph().get_tensor_by_name('fc1/Placeholder:0')\n \"\"\"\n\n def plot_predictions(self, *args):\n self.session = tf.InteractiveSession()\n self.saver.restore(self.session, self.save_path)\n digit_indices = np.array(list(args))\n num_indices = len(digit_indices)\n fig, axes = plt.subplots(ncols=num_indices, figsize=(num_indices, 1))\n p1 = tf.get_default_graph().get_tensor_by_name('inputs/Placeholder_2:0')\n p2 = tf.get_default_graph().get_tensor_by_name('fc1/Placeholder:0')\n for i, idx in enumerate(digit_indices):\n ax = axes.flatten()[i]\n vec = mnist.test.images[idx]\n img = vec.reshape(28, 
28)\n args = {self.x: np.expand_dims(vec, 0), p1: 1.0, p2: 1.0}\n pred = self.session.run(self.preds, feed_dict=args)\n ax.set_title('Pred: {}'.format(np.argmax(pred)))\n ax.imshow(img, cmap='gray')\n ax.axis('off')\n plt.show()\n\n\nif __name__ == \"__main__\":\n #print_tensors_in_checkpoint_file('models/fcn/mnist_fc', '', '', True)\n #plot_digits(5, 33, 1445, 3434, 9888)\n net = FCN('models/fcn/mnist_fc')\n net.train(100)\n # max 10,000\n net.plot_predictions(5, 33, 1445, 3434, 9888)\n"
},
{
"alpha_fraction": 0.8071603178977966,
"alphanum_fraction": 0.8117710947990417,
"avg_line_length": 126.06896209716797,
"blob_id": "a307f7e34b30ba40c93f67d29bb6c01d1f7733d1",
"content_id": "ee33fd03f760ac0b88d26f5ad7b93d7420b93d8c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3693,
"license_type": "no_license",
"max_line_length": 1128,
"num_lines": 29,
"path": "/README.md",
"repo_name": "CornellDataScience/distributed_gpu_computing",
"src_encoding": "UTF-8",
"text": "# Distributed GPU Computing Team\n[![Cornell Data Science Logo](images/CDS-banner.png)](cornelldata.science)\n\nMembers: Dae Won Kim, Eashaan Kumar, Katarina Jankov\n\n## Project introduction\n\nDeep learning is an exciting field that is applicable to a wide variety of machine learning problems, most notably in areas of computer vision and natural language processing. However, the complexity of neural networks pose a significant challenge in terms of implementation because of the large number of parameters and matrix/vector computations. GPU computing has become the new norm in these computations, but come with difficulties in parallelization, particularly in the multi-node setting. Typically in such a setting, in order to reduce network load and avoid consequent bottlenecks, one must tend toward large batches - as is the approach typically employed by popular batch processing frameworks like hadoop and spark. This, however, comes with a cost to training accuracy. Particularly, the non-convex loss surfaces of neural networks make batch size a very important hyperparameter, and large batches typically lead to convergence to poor local minima. Our objective in this project is to experiment with training relatively large neural network architectures over considerably large clusters using cloud instances. \n\n## Project Scope\n\nWe intend to look into the following: \n- __Tensorflow__\n\nTensorflow is the most popular execution engine for neural networks. While PyTorch is also an option (as is the whole host of platforms like Caffe2, Keras, Theano and Lasagne), tensorflow is typically easier to expand to larger scales and provides a good starting point for learning and baseline formulations. There also seems to be more widespread efforts to distribute tensorflow over multiple machines - partly due to its more static graph construction. 
Our aim is to gain relatively wide and thorough understanding of these efforts and their implications, while potentially expanding these efforts to other platforms. \n\n- [__TensorflowOnSpark__](https://docs.databricks.com/applications/deep-learning/tensorflow.html)\n\nDatabrick's convenient documentation: [link](https://docs.databricks.com/applications/deep-learning/tensorflow.html)\n\nThis is a library that launches multiple instances of tensorflow using Apache Spark. This is a naive approach to parallelization, and is most useful for hyperparameter optimization. A potential use case may be to join these efforts with SigOpt, which provides hyperparameter optimization on Spark via Bayesian optimization. However, this will not be the main area of interest for us, because while it is helpful to gain an understanding on how it works, it is not a topic that requires intensive research nor is it a field we are capable of making significant contributions to. \n\n- [__Horovod__](https://github.com/uber/horovod)\n\n This is a library that is executable on both Keras and Tensorflow, and allows parallelization over multiple GPUs. While Tensorflow also has modules for distributions, Norovod claims that its libraries are much easier to use and achieves better results. We intend to thoroughly understand and experiment with their APIs.\n\n- __Facebook’s paper__: [Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour](https://arxiv.org/abs/1706.02677)\n\nThis paper claims to have trained a ResNet-50 under 1 hour on 256 GPU instances while maintaining comparable results using very large batch sizes. We wish to replicate the results of this paper (at least validate its core assumption that linearly scaling the learning rate and introducing a “warmup” phase with small learning rates can effectively counteract the setbacks in using large batch sizes. \n\n"
}
] | 2 |
ConanKapoor/GithubAPI_Reports | https://github.com/ConanKapoor/GithubAPI_Reports | 2f8a6e8827e0ced78b343efeabfbbbc5beaa29b0 | b634f713e6958c600a7f637ee5700393c57991f5 | efff63f15aa68147082873aa5cc598aa4cd05117 | refs/heads/master | 2021-03-16T06:14:43.070425 | 2017-05-17T21:07:39 | 2017-05-17T21:07:39 | 91,577,691 | 2 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6750590205192566,
"alphanum_fraction": 0.6955153346061707,
"avg_line_length": 40,
"blob_id": "23e09c58a42421b697aa994a40ebc74108856bbf",
"content_id": "f56068806a5c50b635413c0f5fbc05d2cb55825e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1271,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 31,
"path": "/UserData/models.py",
"repo_name": "ConanKapoor/GithubAPI_Reports",
"src_encoding": "UTF-8",
"text": "#Author - Shivam Kapoor\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nfrom django.db import models\n\n#Creating user_data model to store data.\nclass user_data(models.Model):\n Idx = models.PositiveIntegerField(primary_key=True)\n User_name = models.CharField(max_length=200)\n Full_name = models.CharField(max_length=250, blank=True, null=True)\n Location = models.CharField(max_length=200, blank=True, null=True)\n Blog = models.CharField(max_length=200, blank=True, null=True)\n Public_repos = models.PositiveIntegerField()\n Public_gists = models.PositiveIntegerField()\n Email = models.CharField(max_length=250, blank=True, null=True)\n Followers = models.PositiveIntegerField()\n Following = models.PositiveIntegerField()\n Updated_on = models.DateField()\n Image = models.CharField(max_length=500, default='No Picture Available. Weird!')\n\n #Thumbnail function to show thumbails in admin tables.\n def Thumbnail(self):\n return '<img src=\"%s\" width=\"30\" height=\"30\"/>' % self.Image\n Thumbnail.allow_tags = True\n\n class Meta(object):\n verbose_name = '1_User Data'\n verbose_name_plural = '1_User Data'\n\n def __unicode__(self): #Python2 declaration\n return (str(self.Full_name + str(self.Idx)))\n"
},
{
"alpha_fraction": 0.7089715600013733,
"alphanum_fraction": 0.7111597657203674,
"avg_line_length": 32.23636245727539,
"blob_id": "5383b96ff11cf0ad46fd293cbdb3d674133926b0",
"content_id": "7ac7c033036137a29bf5fb0132e96a644ea25fdb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1828,
"license_type": "no_license",
"max_line_length": 225,
"num_lines": 55,
"path": "/UserData/admin.py",
"repo_name": "ConanKapoor/GithubAPI_Reports",
"src_encoding": "UTF-8",
"text": "#Author - Shivam Kapoor\n\n#Importing Essentials\nfrom __future__ import unicode_literals\nfrom django.contrib import admin\nimport datetime\nfrom datetime import date\nfrom .models import user_data\n\n\n#Creating Admin Table Layout.\nclass User_dataAdmin(admin.ModelAdmin):\n list_display = ['Thumbnail', 'Idx','User_name', 'Full_name', 'Location', 'Blog', 'Public_repos', 'Public_gists', 'Email', 'Followers', 'Following', 'Updated_on']\n search_fields = ['Idx','User_name', 'Full_name', 'Location', 'Blog', 'Public_repos', 'Public_gists', 'Email', 'Followers', 'Following', 'Updated_on']\n\n class Meta:\n model = user_data\n\n#Proxy Table 1.\nclass ReportToday(user_data):\n class Meta:\n proxy = True\n\n#Filter query for present day.\nclass Query_Today(User_dataAdmin):\n def get_queryset(self, requests):\n return self.model.objects.filter(Updated_on = datetime.date.today())\n\n#Proxy Table 2.\nclass ReportYear(user_data):\n class Meta:\n proxy = True\n\n#Filter query for year.\nclass Query_Year(User_dataAdmin):\n Updated = datetime.date.today()\n def get_queryset(self, requests):\n return self.model.objects.filter(Updated_on__year__lte= datetime.datetime.today().year)\n\n#Proxy Table 3.\nclass ReportMonth(user_data):\n class Meta:\n proxy = True\n\n#Filter query for month.\nclass Query_Month(User_dataAdmin):\n def get_queryset(self, requests):\n return self.model.objects.filter(Updated_on__month__lte= datetime.datetime.today().month).exclude(Updated_on__month__lt= ((datetime.datetime.today().month)- 1)).filter(Updated_on__year= datetime.datetime.today().year)\n\n\n#Registering Tables for admin page.\nadmin.site.register(user_data, User_dataAdmin)\nadmin.site.register(ReportToday, Query_Today)\nadmin.site.register(ReportYear, Query_Year)\nadmin.site.register(ReportMonth, Query_Month)\n"
},
{
"alpha_fraction": 0.734155535697937,
"alphanum_fraction": 0.7382665276527405,
"avg_line_length": 27.339805603027344,
"blob_id": "ce59ffb82958a9a89e4b5a7ba1d2e35c7c6abf2e",
"content_id": "32d11f29c1a9f1e7723d78dc5630bd0d08df1e86",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2919,
"license_type": "no_license",
"max_line_length": 200,
"num_lines": 103,
"path": "/README.md",
"repo_name": "ConanKapoor/GithubAPI_Reports",
"src_encoding": "UTF-8",
"text": "# Github API - Reports\n\nA simple Django framework based project to -\n\n* Search Github Users using username and retrieve relevant data.\n* Saving the same in admin views.\n* Filtering data with search bar in admin view.\n* Creating reports on the basis of API requests per day/month/year.\n\n## Getting Started\n\nThese instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.\n\n### Prerequisites\n\nYou will need Django framework to run this project smoothly. Go to your terminal and execute the following command or visit [Django](https://www.djangoproject.com/) website.\n\n```\npip install django\n```\n\n### Deployment\n\nA step by step series of examples that tells what you have to do to get this project running -\n\n* Enter the project directory.\n* Make migrations for the project -\n\n```\npython manage.py makemigrations\n```\n\n* Migrate the changes -\n\n```\npython manage.py migrate\n```\n\n* Create SuperUser-\n\n```\npython manage.py createsuperuser\n\nProvide username, email and password and remember it matey!\nDefault added = username - admin | password - admin12345\n```\n\n* Run the server -\n\n```\npython manage.py runserver\n```\n\n* Access http://127.0.0.1:8000/ on your browser to use the application.\n\n### Usage\n\n* Add username in search bar and hit 'Get data' to retrieve user data.<br />\n* Access Admin Panel by clicking 'Admin Panel' button.<br />\n\n![Screenshot](/Screenshots/Welcome_Page.png)\n\n* Authenticate yourself to enter admin table.<br />\n![Screenshot](/Screenshots/Authentication.png)\n\n* Choose '1_User_Data' to access user data table to see all API calls.<br />\n![Screenshot](/Screenshots/User_Data.png)\n\n* Use Search bar to filter data according to fields given in table.<br />\n![Screenshot](/Screenshots/Search_Filter.png)\n\n* Choose 'Report Today' to review report of API calls done on 
present date.<br />\n* Choose 'Report Month' to review report of API calls done in past month.<br />\n* Choose 'Report Year' to review report of API calls done in present year.<br />\n![Screenshot](/Screenshots/User_Data_Today.png)\n\n## Built With\n\n* [Django](https://www.djangoproject.com/) - Python web framework used.\n* [Python](https://www.python.org/) - Python programming language.\n* [Bower](https://bower.io/) - A Package Manager for the web.\n\n## Contributing\n\nIf you want to contribute to this project, for some reason I won't understand, then you can create a pull request. Happy Coding!\n\n## Versioning\n\nVersion 1.something Mehh...\n\n## Authors\n\n* **Shivam Kapoor** - An avid learner who likes to know every tiny detail in working of real life systems. Real enthusiast of cyber security and underlying networking concepts. (Email - [email protected])\n\n## License\n\nToo lazy to decide on a License. zZzZ\n\n## Acknowledgments\n\n* YouTube Videos (Thank god they exist :')'\n* StackOverFlow xD\n* Django Documentation FTW\n"
},
{
"alpha_fraction": 0.461607962846756,
"alphanum_fraction": 0.47831979393959045,
"avg_line_length": 31.558822631835938,
"blob_id": "d43d3f2a4b1e5ea2270be2ae26e258208890e13e",
"content_id": "124a718a824a1f07d3c66aa63da31257fdf176aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2214,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 68,
"path": "/UserData/migrations/0001_initial.py",
"repo_name": "ConanKapoor/GithubAPI_Reports",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.11.1 on 2017-05-17 20:13\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='user_data',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('Idx', models.PositiveIntegerField()),\n ('User_name', models.CharField(max_length=200)),\n ('Full_name', models.CharField(blank=True, max_length=250, null=True)),\n ('Location', models.CharField(max_length=200)),\n ('Blog', models.CharField(max_length=200)),\n ('Public_repos', models.PositiveIntegerField()),\n ('Public_gists', models.PositiveIntegerField()),\n ('Email', models.CharField(blank=True, max_length=250, null=True)),\n ('Followers', models.PositiveIntegerField()),\n ('Following', models.PositiveIntegerField()),\n ('Updated_on', models.DateField()),\n ('Image', models.CharField(default='No Picture Available. Weird!', max_length=500)),\n ],\n options={\n 'verbose_name': '1_User Data',\n 'verbose_name_plural': '1_User Data',\n },\n ),\n migrations.CreateModel(\n name='ReportMonth',\n fields=[\n ],\n options={\n 'proxy': True,\n 'indexes': [],\n },\n bases=('UserData.user_data',),\n ),\n migrations.CreateModel(\n name='ReportToday',\n fields=[\n ],\n options={\n 'proxy': True,\n 'indexes': [],\n },\n bases=('UserData.user_data',),\n ),\n migrations.CreateModel(\n name='ReportYear',\n fields=[\n ],\n options={\n 'proxy': True,\n 'indexes': [],\n },\n bases=('UserData.user_data',),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6327272653579712,
"alphanum_fraction": 0.6399999856948853,
"avg_line_length": 49,
"blob_id": "4289cf8bfbee0ac579286993135297c2f13dcf3e",
"content_id": "00f1ff611224614703031ebb49c3c53a07d0baf6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1650,
"license_type": "no_license",
"max_line_length": 166,
"num_lines": 33,
"path": "/UserData/views.py",
"repo_name": "ConanKapoor/GithubAPI_Reports",
"src_encoding": "UTF-8",
"text": "#Author - Shivam Kapoor\n\n#Importing Essentials\nfrom __future__ import unicode_literals\nfrom django.shortcuts import render\nimport requests , datetime, json, dateutil.parser\nfrom datetime import date\nfrom .models import *\n\n#View to retrieve user data from GithubAPI.\ndef userdata_retrieval(request):\n content = {}\n if request.method == 'POST':\n username = request.POST.get('user')\n\n #Giving request to GithubAPI and converting to JSON format.\n req = requests.get('https://api.github.com/users/'+username)\n content = req.json()\n retrievedList = []\n retrievedList.append(json.loads(req.content))\n content['created_at'] = dateutil.parser.parse(content['created_at'])\n\n #Adding data retrieved to Table - user_data.\n for entry in retrievedList:\n get_previous_entry=user_data.objects.filter(Idx=retrievedList[0]['id'])\n get_previous_entry.delete()\n data_into_table = user_data(Idx=retrievedList[0]['id'], User_name=str(retrievedList[0]['login']),Full_name=str(retrievedList[0]['name']),\n Location=str(retrievedList[0]['location']), Blog=str(retrievedList[0]['blog']), Public_repos=retrievedList[0]['public_repos'],\n Public_gists=retrievedList[0]['public_gists'], Email=str(retrievedList[0]['email']), Followers=retrievedList[0]['followers'],\n Following=retrievedList[0]['following'], Updated_on=date.today(), Image=retrievedList[0]['avatar_url'])\n data_into_table.save()\n\n return render(request, 'UserData/userdata.html', content)\n"
}
] | 5 |
Alma-field/twitcaspy | https://github.com/Alma-field/twitcaspy | 940597af87734021d0c3122583dfbe3124dec485 | 25f3e850f2d5aab8a864bd6b7003468587fa3ea7 | 4cfc73ba72a49837daec9a4283bc910a793c343f | refs/heads/master | 2023-08-25T19:25:02.522871 | 2021-10-05T05:25:31 | 2021-10-05T05:25:31 | 354,839,031 | 0 | 0 | MIT | 2021-04-05T13:11:33 | 2021-10-03T17:22:26 | 2021-10-05T05:25:31 | Python | [
{
"alpha_fraction": 0.6323296427726746,
"alphanum_fraction": 0.640781819820404,
"avg_line_length": 29.532258987426758,
"blob_id": "10004bf3b63e6162c685e21781e4adcfde267ce3",
"content_id": "721b408d1bbd78362f711bb05d47323a871b5c7f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1953,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 62,
"path": "/examples/realtime/lives.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\n# Before running this code, run the following command:\n# このコードを実行する前に、以下のコマンドを実行してください。\n# pip install twitcaspy[realtime]\n\nfrom base64 import b64encode\n# The client id and/or secret can be found on your application's Details page\n# located at select app in https://twitcasting.tv/developer.php\n# (in \"details\" tab)\nCLIENT_ID = ''\nCLIENT_SECRET = ''\n# generate basic token\nBASIC_TOKEN = b64encode(f'{CLIENT_ID}:{CLIENT_SECRET}'.encode('utf-8')).decode('utf-8')\n# create Authorization header\nHEADERS = {'Authorization': f'Basic {BASIC_TOKEN}'}\n# create socket url\nSOCKET_URL = f'wss://{CLIENT_ID}:{CLIENT_SECRET}@realtime.twitcasting.tv/lives'\n\nfrom twitcaspy.parsers import ModelParser\n# set parse keywords\nparser_kwargs = {'movies': ['live', True]}\nparser = ModelParser()\n\nimport json\n\ndef on_message(ws, message):\n try:\n # try parse from json text\n data = json.loads(message)\n except:\n print(type(message))\n print(message)\n return\n if 'hello' in data:\n # first message\n print(f'hello: {data[\"hello\"]}')\n else:\n # parse object\n lives = parser.parse(payload=data, payload_type=parser_kwargs)\n flag = True\n for live in lives.movies:\n if flag:\n flag = False\n else:\n print('-'*25)\n # shoe user name and screen name\n print(f' name: {live.broadcaster.name}({live.broadcaster.screen_id})')\n # show live title and subtitle\n print(f'title: {live.movie.title}({live.movie.subtitle})')\n print('-'*50)\n\nif __name__ == \"__main__\":\n import websocket\n #websocket.enableTrace(True)\n # create websocket instance\n ws = websocket.WebSocketApp(\n SOCKET_URL, header=HEADERS, on_message=on_message)\n # run websocket\n ws.run_forever()\n"
},
{
"alpha_fraction": 0.73617023229599,
"alphanum_fraction": 0.7531914710998535,
"avg_line_length": 22.5,
"blob_id": "b3e582bc87f9c45a54cfca203fa0e2bda0575b27",
"content_id": "ee8dad6927664b0b07f900c6d82c8db2d5542587",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 235,
"license_type": "permissive",
"max_line_length": 49,
"num_lines": 10,
"path": "/twitcaspy/utils.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom datetime import datetime\n\ndef fromtimestamp(unix_timestamp):\n if unix_timestamp is None:\n return None\n return datetime.fromtimestamp(unix_timestamp)\n"
},
{
"alpha_fraction": 0.66595059633255,
"alphanum_fraction": 0.6788399815559387,
"avg_line_length": 29.032258987426758,
"blob_id": "a754ca198ba786077377429988dd0240f7358348",
"content_id": "0aa120acb4e3f1602fddd0841375554e94add051",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 931,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 31,
"path": "/tests/test_auth.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom unittest import TestCase\nfrom nose.tools import ok_\n\nfrom .config import *\nfrom twitcaspy import API, GrantAuthHandler\n\nclass TwitcaspyAuthTests(TestCase):\n\n def testoauth(self):\n if not client_id or not client_secret:\n self.skipTest(\"Missing client id and/or secret\")\n\n auth = GrantAuthHandler(client_id, client_secret)\n\n # test getting access token\n auth_url = auth.get_authorization_url()\n print('Please authorize: ' + auth_url)\n authorization_response = input('Enter the full callback URL\\n')\n auth.fetch_token(authorization_response)\n ok_(auth.oauth.token['access_token'] is not None)\n\n # build api object test using oauth\n api = API(auth)\n result = api.verify_credentials()\n"
},
{
"alpha_fraction": 0.5744680762290955,
"alphanum_fraction": 0.6102514266967773,
"avg_line_length": 26.210525512695312,
"blob_id": "805a5a89c073b890e0dc1f3e5416070380de66b9",
"content_id": "8207288caa62ec71ff3413dafafd8c0d001e589e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1034,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 38,
"path": "/setup.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nimport re\nfrom setuptools import find_packages, setup\n\nwith open(\"README.md\", encoding='utf-8') as file:\n long_description = file.read()\n\ntests_require = [\n \"nose>=1.3.7,<2\",\n \"vcrpy>=4.1.1,<5\",\n \"python-dotenv>=0.19.0,<1\",\n]\n\nsetup(\n long_description=long_description,\n project_urls={\n \"Documentation\": \"https://twitcaspy.alma-field.com/\",\n \"Issue Tracker\": \"https://github.com/Alma-field/twitcaspy/issues\",\n \"Source Code\": \"https://github.com/Alma-field/twitcaspy\",\n },\n packages=find_packages(exclude=[\"tests\", \"webhook\", \"realtime\"]),\n install_requires=[\n \"requests>=2.26.0,<3\",\n \"requests_oauthlib>=1.3.0,<2\",\n ],\n tests_require=tests_require,\n extras_require={\n \"test\": tests_require,\n \"webhook\": [\"flask>=2.0.1,<3\"],\n \"realtime\": [\"websocket-client>=1.2.1,<2\"]\n },\n test_suite=\"nose.collector\",\n keywords=\"twitcasting library\",\n python_requires=\">=3.7\"\n)\n"
},
{
"alpha_fraction": 0.5470588207244873,
"alphanum_fraction": 0.554411768913269,
"avg_line_length": 20.935483932495117,
"blob_id": "f6d1d3f0f5686b898aa5dc7c4318910bb99e6beb",
"content_id": "0e431a0c350fe11ea9442bbeb1490fa9f624f1a6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 680,
"license_type": "permissive",
"max_line_length": 52,
"num_lines": 31,
"path": "/twitcaspy/models/webhook.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nclass WebHook(Model):\n \"\"\"WebHook Object\n\n Attributes\n ----------\n user_id: :class:`str`\n | User ID\n event: :class:`str`\n | Event type to hook\n | `event` must be one of the following:\n | 'livestart' : Live start\n | 'liveend' : Live end\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#webhook-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n webhook = cls(api)\n setattr(webhook, '_json', json)\n for k, v in json.items():\n setattr(webhook, k, v)\n return webhook\n"
},
{
"alpha_fraction": 0.5772217512130737,
"alphanum_fraction": 0.5953407883644104,
"avg_line_length": 27.975000381469727,
"blob_id": "00ff6411f72edb9354ad141ee9170cf52b4d55c8",
"content_id": "d0283ca14726e3b7898668795efeb4306611c1ce",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1159,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 40,
"path": "/twitcaspy/auth/oauth/basic.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom base64 import b64encode\n\nfrom requests.auth import AuthBase\n\nclass OAuth2Basic(AuthBase):\n \"\"\"Basic Authentication\n\n Parameters\n ----------\n client_id: :class:`str`\n |client_id|\n client_secret: :class:`str`\n |client_secret|\n\n Raises\n ------\n TypeError\n If the given client id and/or secret is not a string instance\n \"\"\"\n def __init__(self, client_id, client_secret):\n if not isinstance(client_id, str):\n raise TypeError(\"ClientID must be string, not \"\n + type(client_id).__name__)\n if not isinstance(client_secret, str):\n raise TypeError(\"Client secret must be string, not \"\n + type(client_secret).__name__)\n\n self.token = b64encode(f'{client_id}:{client_secret}'.encode('utf-8'))\n self.token = self.token.decode('utf-8')\n\n def __call__(self, request):\n request.headers['Authorization'] = 'Basic ' + self.token\n return request\n"
},
{
"alpha_fraction": 0.6416385173797607,
"alphanum_fraction": 0.6527236700057983,
"avg_line_length": 37.108909606933594,
"blob_id": "20f468c58f1e8cfbd11c6e04a406662c6bd52087",
"content_id": "b432751d7054b9f357fb01e39764cc89480ad1cb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11581,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 303,
"path": "/tests/test_api.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom nose.tools import ok_, eq_, raises\n\nfrom twitcaspy import API\n\nfrom twitcaspy.errors import TwitcaspyException, Unauthorized, NotFound\n\nfrom .config import tape, TwitcaspyTestCase, user_id, username\n\nmovie_id = '189037369'\n\nclass TwitcaspyAPITests(TwitcaspyTestCase):\n @raises(TwitcaspyException)\n @tape.use_cassette('testraise_authenticationrequired.json')\n def testraise_authenticationrequired(self):\n from twitcaspy import api\n data = api.get_user_info(id=user_id)\n\n @tape.use_cassette('testgetuserinfo.json')\n def testgetuserinfo(self):\n data = self.api.get_user_info(id=user_id)\n ok_(hasattr(data, 'user'))\n eq_(data.user.screen_id, username)\n ok_(hasattr(data, 'supporter_count'))\n ok_(hasattr(data, 'supporting_count'))\n\n @raises(Unauthorized)\n @tape.use_cassette('testverifycredentials.json')\n def testverifycredentials(self):\n data = self.api.verify_credentials()\n\n @tape.use_cassette('testgetlivethumbnailimage.yaml', serializer='yaml')\n def testgetlivethumbnailimage(self):\n data = self.api.get_live_thumbnail_image(id=user_id)\n eq_(data.status_code, 200)\n ok_(len(data.content) > 0)\n\n @tape.use_cassette('testgetmovieinfo.json')\n def testgetmovieinfo(self):\n data = self.api.get_movie_info(movie_id=movie_id)\n ok_(hasattr(data, 'movie'))\n eq_(data.movie.user_id, user_id)\n ok_(hasattr(data, 'broadcaster'))\n eq_(data.broadcaster.id, user_id)\n eq_(data.broadcaster.screen_id, username)\n ok_(hasattr(data, 'tags'))\n ok_(isinstance(data.tags, list))\n ok_(hasattr(data, 'live'))\n ok_(hasattr(data.live, 'movie'))\n ok_(hasattr(data.live, 'broadcaster'))\n ok_(hasattr(data.live, 'tags'))\n\n @tape.use_cassette('testgetmoviesbyuser.yaml', serializer='yaml')\n def testgetmoviesbyuser(self):\n data = self.api.get_movies_by_user(id=user_id)\n 
ok_(hasattr(data, 'total_count'))\n ok_(hasattr(data, 'movies'))\n ok_(len(data.movies) == 20 or len(data.movies) == data.total_count)\n\n @raises(NotFound)\n @tape.use_cassette('testgetcurrentlive_raise404.json')\n def testgetcurrentlive_raise404(self):\n data = self.api.get_current_live(id=user_id)\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsetcurrentlivesubtitle_raise1.json')\n def testsetcurrentlivesubtitle_raise1(self):\n data = self.api.set_current_live_subtitle('')\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsetcurrentlivesubtitle_raise2.json')\n def testsetcurrentlivesubtitle_raise2(self):\n data = self.api.set_current_live_subtitle(\n '123456789012345678')\n\n @tape.use_cassette('testgetcomments.yaml', serializer='yaml')\n def testgetcomments(self):\n data = self.api.get_comments('189037369')\n ok_(hasattr(data, 'movie_id'))\n eq_(data.movie_id, '189037369')\n ok_(hasattr(data, 'all_count'))\n ok_(hasattr(data, 'comments'))\n\n @tape.use_cassette('testpostcomment.json')\n def testpostcomment(self):\n movie_id = '189037369'\n data = self.api.post_comment(\n movie_id=movie_id, comment='モイ!', sns='none')\n ok_(hasattr(data, 'movie_id'))\n eq_(data.movie_id, movie_id)\n ok_(hasattr(data, 'all_count'))\n ok_(hasattr(data, 'comment'))\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testpostcomment_raise.json')\n def testpostcomment_raise(self):\n movie_id = '189037369'\n data = self.api.post_comment(\n movie_id=movie_id, comment='1234567890'*15, sns='none')\n\n @tape.use_cassette('testgetsupportingstatus.json')\n def testgetsupportingstatus(self):\n from twitcaspy.models import User\n target_user_id = 'twitcasting_dev'\n data = self.api.get_supporting_status(\n target_user_id=target_user_id, id=user_id)\n ok_(hasattr(data, 'is_supporting'))\n ok_(hasattr(data, 'supported'))\n ok_(hasattr(data, 'target_user'))\n ok_(isinstance(data.target_user, User))\n eq_(data.target_user.screen_id, target_user_id)\n\n 
@raises(TwitcaspyException)\n @tape.use_cassette('testgetsupportingstatus_raise.json')\n def testgetsupportingstatus_raise(self):\n target_user_id = 'twitcasting_dev'\n data = self.api.get_supporting_status(target_user_id=target_user_id)\n\n @tape.use_cassette('testsupportuser.json')\n def testsupportuser(self):\n target_user_ids = ['twitcasting_jp']\n data = self.api.support_user(target_user_ids=target_user_ids)\n ok_(hasattr(data, 'added_count'))\n eq_(data.added_count, len(target_user_ids))\n\n @tape.use_cassette('testunsupportuser.json')\n def testunsupportuser(self):\n target_user_ids = ['twitcasting_jp']\n data = self.api.unsupport_user(target_user_ids=target_user_ids)\n ok_(hasattr(data, 'removed_count'))\n eq_(data.removed_count, len(target_user_ids))\n\n @tape.use_cassette('testsupportinglist.json')\n def testsupportinglist(self):\n data = self.api.supporting_list(id=user_id)\n ok_(hasattr(data, 'total'))\n ok_(hasattr(data, 'supporting'))\n\n @tape.use_cassette('testsupporterlist.json')\n def testsupporterlist(self):\n data = self.api.supporter_list(id=user_id)\n ok_(hasattr(data, 'total'))\n ok_(hasattr(data, 'supporters'))\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testgetcategories_raise.json')\n def testgetcategories_raise(self):\n data = self.api.get_categories(lang='fr')\n\n @tape.use_cassette('testgetcategories.yaml', serializer='yaml')\n def testgetcategories(self):\n data = self.api.get_categories()\n ok_(hasattr(data, 'categories'))\n\n @tape.use_cassette('testsearchusers.yaml', serializer='yaml')\n def testsearchusers(self):\n data = self.api.search_users(words='ツイキャス 公式')\n ok_(hasattr(data, 'users'))\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsearchusers_raise1.json')\n def testsearchusers_raise1(self):\n data = self.api.search_users(words='ツイキャス 公式', lang='en')\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsearchusers_raise2.json')\n def testsearchusers_raise2(self):\n data = self.api.search_users()\n\n 
@raises(TwitcaspyException)\n @tape.use_cassette('testsearchusers_raise3.json')\n def testsearchusers_raise3(self):\n data = self.api.search_users(words=0)\n\n @tape.use_cassette('testsearchmovies.yaml', serializer='yaml')\n def testsearchmovies(self):\n data = self.api.search_live_movies(type='new')\n ok_(hasattr(data, 'movies'))\n ok_(isinstance(data.movies, list))\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsearchmovies_raise1.json')\n def testsearchmovies_raise1(self):\n #When type are not specified.\n data = self.api.search_live_movies()\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsearchmovies_raise2.json')\n def testsearchmovies_raise2(self):\n #When type is not a `tag`, `word`, `category`, `new` or `recommend`.\n data = self.api.search_live_movies(type='error')\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsearchmovies_raise3.json')\n def testsearchmovies_raise3(self):\n #No context specified when type is tag, word or category.\n data = self.api.search_live_movies(type='word')\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testsearchmovies_raise4.json')\n def testsearchmovies_raise4(self):\n #When lang is not a 'ja'.\n data = self.api.search_live_movies(type='new', lang='en')\n\n def testincomingwebhook(self):\n from json import load\n file_name = 'cassettes/testincomingwebhook.json'\n with open(file_name, \"r\", encoding='utf-8') as file:\n json = load(file)\n self.api.signature = json['signature']\n data = self.api.incoming_webhook(json)\n ok_(hasattr(data, 'signature'))\n ok_(isinstance(data.signature, str))\n ok_(hasattr(data, 'movie'))\n ok_(hasattr(data, 'broadcaster'))\n\n @raises(TwitcaspyException)\n def testincomingwebhook_raise(self):\n from json import load\n file_name = 'cassettes/testincomingwebhook.json'\n with open(file_name, \"r\", encoding='utf-8') as file:\n json = load(file)\n data = self.api.incoming_webhook(json)\n\n @tape.use_cassette('testgetwebhooklist.json')\n def 
testgetwebhooklist(self):\n data = self.api.get_webhook_list()\n ok_(hasattr(data, 'all_count'))\n ok_(isinstance(data.all_count, int))\n ok_(hasattr(data, 'webhooks'))\n ok_(isinstance(data.webhooks, list))\n\n @tape.use_cassette('testgetwebhooklist_idstring.json')\n def testgetwebhooklist_idstring(self):\n data = self.api.get_webhook_list(user_id=username)\n ok_(hasattr(data, 'all_count'))\n ok_(isinstance(data.all_count, int))\n ok_(hasattr(data, 'webhooks'))\n ok_(isinstance(data.webhooks, list))\n\n @tape.use_cassette('testregisterwebhook.json')\n def testregisterwebhook(self):\n events = ['livestart', 'liveend']\n data = self.api.register_webhook(user_id=user_id, events=events)\n ok_(hasattr(data, 'user_id'))\n eq_(data.user_id, user_id)\n ok_(hasattr(data, 'added_events'))\n ok_(isinstance(data.added_events, list))\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testregisterwebhook_raise1.json')\n def testregisterwebhook_raise1(self):\n data = self.api.register_webhook(user_id=username, events='events')\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testregisterwebhook_raise2.json')\n def testregisterwebhook_raise2(self):\n events = ['livestart', 'liveend', 'none']\n data = self.api.register_webhook(user_id=username, events=events)\n\n @tape.use_cassette('testremovewebhook.json')\n def testremovewebhook(self):\n events = ['livestart', 'liveend']\n data = self.api.remove_webhook(user_id=user_id, events=events)\n ok_(hasattr(data, 'user_id'))\n eq_(data.user_id, user_id)\n ok_(hasattr(data, 'deleted_events'))\n ok_(isinstance(data.deleted_events, list))\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testremovewebhook_raise1.json')\n def testremovewebhook_raise1(self):\n data = self.api.remove_webhook(user_id=username, events='events')\n\n @raises(TwitcaspyException)\n @tape.use_cassette('testremovewebhook_raise2.json')\n def testremovewebhook_raise2(self):\n events = ['livestart', 'liveend', 'none']\n data = 
self.api.remove_webhook(user_id=username, events=events)\n\n @tape.use_cassette('testgetrtmpurl.json')\n def testgetrtmpurl(self):\n data = self.api.get_rtmp_url()\n ok_(hasattr(data, 'enabled'))\n ok_(isinstance(data.enabled, bool))\n ok_(hasattr(data, 'url'))\n ok_(isinstance(data.url, (str, None)))\n ok_(hasattr(data, 'stream_key'))\n ok_(isinstance(data.stream_key, (str, None)))\n\n @tape.use_cassette('testgetwebmurl.json')\n def testgetwebmurl(self):\n data = self.api.get_webm_url()\n ok_(hasattr(data, 'enabled'))\n ok_(isinstance(data.enabled, bool))\n ok_(hasattr(data, 'url'))\n ok_(isinstance(data.url, (str, None)))\n"
},
{
"alpha_fraction": 0.35712459683418274,
"alphanum_fraction": 0.35959458351135254,
"avg_line_length": 44.86328125,
"blob_id": "63370ce15ae418f55f6b1cb11ba0db49780870d6",
"content_id": "41b5e2e86c1f202ce602074c855f145bc5324144",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 11741,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 256,
"path": "/docs/api.rst",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": ".. _api_reference:\n\n.. currentmodule:: twitcaspy\n\n.. include:: parameters.rst\n\n*******************************************************\n:class:`twitcaspy.API` --- Twitcasting API v2 Reference\n*******************************************************\n\n.. autoclass:: API\n\n.. table::\n :align: center\n\n +--------------------------------------------------+-----------------------------------------+\n | Twitcasting API v2 Endpoint | :class:`API` Method |\n +==================================================+=========================================+\n | .. centered:: :ref:`User` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /users/:user_id`_ | :meth:`API.get_user_info` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /verify_credentials`_ | :meth:`API.verify_credentials` |\n +--------------------------------------------------+-----------------------------------------+\n | .. centered:: :ref:`Live Thumbnail` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /users/:user_id/live/thumbnail`_ | :meth:`API.get_live_thumbnail_image` |\n +--------------------------------------------------+-----------------------------------------+\n | .. 
centered:: :ref:`Movie` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /movies/:movie_id`_ | :meth:`API.get_movie_info` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /users/:user_id/movies`_ | :meth:`API.get_movies_by_user` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /users/:user_id/current_live`_ | :meth:`API.get_current_live` |\n +--------------------------------------------------+-----------------------------------------+\n | `POST /movies/subtitle`_ | :meth:`API.set_current_live_subtitle` |\n +--------------------------------------------------+-----------------------------------------+\n | `DELETE /movies/subtitle`_ | :meth:`API.unset_current_live_subtitle` |\n +--------------------------------------------------+-----------------------------------------+\n | `POST /movies/hashtag`_ | :meth:`API.set_current_live_hashtag` |\n +--------------------------------------------------+-----------------------------------------+\n | `DELETE /movies/hashtag`_ | :meth:`API.unset_current_live_hashtag` |\n +--------------------------------------------------+-----------------------------------------+\n | .. centered:: :ref:`Comment` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /movies/:movie_id/comments`_ | :meth:`API.get_comments` |\n +--------------------------------------------------+-----------------------------------------+\n | `POST /movies/:movie_id/comments`_ | :meth:`API.post_comments` |\n +--------------------------------------------------+-----------------------------------------+\n | `DELETE /movies/:movie_id/comments/:comment_id`_ | :meth:`API.delete_comment` |\n +--------------------------------------------------+-----------------------------------------+\n | .. 
centered:: :ref:`Gift` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /gifts`_ | :meth:`API.get_gifts` |\n +--------------------------------------------------+-----------------------------------------+\n | .. centered:: :ref:`Supporter` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /users/:user_id/supporting_status`_ | :meth:`API.get_supporting_status` |\n +--------------------------------------------------+-----------------------------------------+\n | `PUT /support`_ | :meth:`API.support_user` |\n +--------------------------------------------------+-----------------------------------------+\n | `PUT /unsupport`_ | :meth:`API.unsupport_user` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /users/:user_id/supporting`_ | :meth:`API.supporting_list` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /users/:user_id/supporters`_ | :meth:`API.supporter_list` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /categories`_ | :meth:`API.get_categories` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /search/users`_ | :meth:`API.search_users` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /search/lives`_ | :meth:`API.search_live_movies` |\n +--------------------------------------------------+-----------------------------------------+\n | | :meth:`API.incoming_webhook` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /webhooks`_ | :meth:`API.get_webhook_list` |\n +--------------------------------------------------+-----------------------------------------+\n | `POST /webhooks`_ | 
:meth:`API.register_webhook` |\n +--------------------------------------------------+-----------------------------------------+\n | `DELETE /webhooks`_ | :meth:`API.remove_webhook` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /rtmp_url`_ | :meth:`API.get_rtmp_url` |\n +--------------------------------------------------+-----------------------------------------+\n | `GET /webm_url`_ | :meth:`API.get_webm_url` |\n +--------------------------------------------------+-----------------------------------------+\n\n.. _GET /users/:user_id: https://apiv2-doc.twitcasting.tv/#get-user-info\n.. _GET /verify_credentials: https://apiv2-doc.twitcasting.tv/#verify-credentials\n.. _GET /users/:user_id/live/thumbnail: https://apiv2-doc.twitcasting.tv/#get-live-thumbnail-image\n.. _GET /movies/:movie_id: https://apiv2-doc.twitcasting.tv/#get-movie-info\n.. _GET /users/:user_id/movies: https://apiv2-doc.twitcasting.tv/#get-movies-by-user\n.. _GET /users/:user_id/current_live: https://apiv2-doc.twitcasting.tv/#get-current-live\n.. _POST /movies/subtitle: https://apiv2-doc.twitcasting.tv/#set-current-live-subtitle\n.. _DELETE /movies/subtitle: https://apiv2-doc.twitcasting.tv/#unset-current-live-subtitle\n.. _POST /movies/hashtag: https://apiv2-doc.twitcasting.tv/#set-current-live-hashtag\n.. _DELETE /movies/hashtag: https://apiv2-doc.twitcasting.tv/#unset-current-live-hashtag\n.. _GET /movies/:movie_id/comments: https://apiv2-doc.twitcasting.tv/#get-comments\n.. _POST /movies/:movie_id/comments: https://apiv2-doc.twitcasting.tv/#post-comments\n.. _DELETE /movies/:movie_id/comments/:comment_id: https://apiv2-doc.twitcasting.tv/#delete-comment\n.. _GET /gifts: https://apiv2-doc.twitcasting.tv/#get-gifts\n.. _GET /users/:user_id/supporting_status: https://apiv2-doc.twitcasting.tv/#get-supporting-status\n.. _PUT /support: https://apiv2-doc.twitcasting.tv/#support-user\n.. 
_PUT /unsupport: https://apiv2-doc.twitcasting.tv/#unsupport-user\n.. _GET /users/:user_id/supporting: https://apiv2-doc.twitcasting.tv/#supporting-list\n.. _GET /users/:user_id/supporters: https://apiv2-doc.twitcasting.tv/#supporter-list\n.. _GET /categories: https://apiv2-doc.twitcasting.tv/#get-categories\n.. _GET /search/users: https://apiv2-doc.twitcasting.tv/#search-users\n.. _GET /search/lives: https://apiv2-doc.twitcasting.tv/#search-live-movies\n.. _GET /webhooks: https://apiv2-doc.twitcasting.tv/#get-webhook-list\n.. _POST /webhooks: https://apiv2-doc.twitcasting.tv/#register-webhook\n.. _DELETE /webhooks: https://apiv2-doc.twitcasting.tv/#remove-webhook\n.. _GET /rtmp_url: https://apiv2-doc.twitcasting.tv/#get-rtmp-url\n.. _GET /webm_url: https://apiv2-doc.twitcasting.tv/#get-webm-url\n\nUser\n----\n\nget_user_info\n=============\n.. automethod:: API.get_user_info\n\nverify_credentials\n==================\n.. automethod:: API.verify_credentials\n\nLive Thumbnail\n--------------\n\nget_live_thumbnail_image\n========================\n.. automethod:: API.get_live_thumbnail_image\n\nMovie\n-----\n\nget_movie_info\n==============\n.. automethod:: API.get_movie_info\n\nget_movies_by_user\n==================\n.. automethod:: API.get_movies_by_user\n\nget_current_live\n================\n.. automethod:: API.get_current_live\n\nset_current_live_subtitle\n=========================\n.. automethod:: API.set_current_live_subtitle\n\nunset_current_live_subtitle\n===========================\n.. automethod:: API.unset_current_live_subtitle\n\nset_current_live_hashtag\n=========================\n.. automethod:: API.set_current_live_hashtag\n\nunset_current_live_hashtag\n===========================\n.. automethod:: API.unset_current_live_hashtag\n\nComment\n-------\n\nget_comments\n============\n.. automethod:: API.get_comments\n\npost_comment\n============\n.. automethod:: API.post_comment\n\ndelete_comment\n==============\n.. 
automethod:: API.delete_comment\n\nGift\n----\n\nget_gifts\n=========\n.. automethod:: API.get_gifts\n\nSupporter\n---------\n\nget_supporting_status\n=====================\n.. automethod:: API.get_supporting_status\n\nsupport_user\n============\n.. automethod:: API.support_user\n\nunsupport_user\n==============\n.. automethod:: API.unsupport_user\n\nsupporting_list\n===============\n.. automethod:: API.supporting_list\n\nsupporter_list\n==============\n.. automethod:: API.supporter_list\n\nCategory\n--------\n\nget_categories\n==============\n.. automethod:: API.get_categories\n\nSearch users and movies\n-----------------------\n\nsearch_users\n============\n.. automethod:: API.search_users\n\nsearch_live_movies\n==================\n.. automethod:: API.search_live_movies\n\nWebHook\n-------\n\nincoming_webhook\n================\n.. automethod:: API.incoming_webhook\n\nget_webhook_list\n================\n.. automethod:: API.get_webhook_list\n\nregister_webhook\n================\n.. automethod:: API.register_webhook\n\nremove_webhook\n==============\n.. automethod:: API.remove_webhook\n\nBroadcasting\n------------\n\nget_rtmp_url\n============\n.. automethod:: API.get_rtmp_url\n\nget_webm_url\n============\n.. automethod:: API.get_webm_url\n"
},
{
"alpha_fraction": 0.6922339200973511,
"alphanum_fraction": 0.7133269309997559,
"avg_line_length": 27.97222137451172,
"blob_id": "e00ee2fbdf2069f845706dd1da6e71e7cd78d90f",
"content_id": "8290d08d6608e4e58a3577d73cdd371ed92d0e5f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1043,
"license_type": "permissive",
"max_line_length": 61,
"num_lines": 36,
"path": "/tests/config.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom os import environ\nfrom unittest import TestCase\n\nimport vcr\n\nfrom twitcaspy.api import API\nfrom twitcaspy.auth import AppAuthHandler\n\nfrom dotenv import load_dotenv\nload_dotenv('./.env', encoding='utf-8')\nuser_id = environ.get('TWITTER_USER_ID', '182224938')\nusername = environ.get('TWITTER_USERNAME', 'twitcasting_jp')\nbearer_token = environ.get('BEARER_TOKEN', '')\nclient_id = environ.get('CLIENT_ID', '')\nclient_secret = environ.get('CLIENT_SECRET', '')\nuse_replay = environ.get('USE_REPLAY', True)\n\ntape = vcr.VCR(\n cassette_library_dir='cassettes',\n filter_headers=['Authorization'],\n serializer='json',\n # Either use existing cassettes, or never use recordings:\n record_mode='none' if use_replay else 'all',\n)\n\nclass TwitcaspyTestCase(TestCase):\n def setUp(self):\n self.auth = AppAuthHandler(client_id, client_secret)\n self.api = API(self.auth)\n"
},
{
"alpha_fraction": 0.6976127028465271,
"alphanum_fraction": 0.702917754650116,
"avg_line_length": 16.534883499145508,
"blob_id": "ab8889cd75318f68ef5deb018428db826f098893",
"content_id": "9d297b23f988d1cba7d3172257f6cde5e48b4650",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 754,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 43,
"path": "/twitcaspy/models/factory.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .app import App\n\nfrom .category import Category\n\nfrom .comment import Comment\n\nfrom .gift import Gift\n\nfrom .live import Live\n\nfrom .movie import Movie\n\nfrom .raw import Raw\n\nfrom .subcategory import SubCategory\n\nfrom .supporter import Supporter\n\nfrom .user import User\n\nfrom .webhook import WebHook\n\nclass ModelFactory:\n \"\"\"\n | Used by parsers for creating instances of models.\n | You may subclass this factory to add your own extended models.\n \"\"\"\n\n app = App\n category = Category\n comment = Comment\n gift = Gift\n live = Live\n movie = Movie\n raw = Raw\n subcategory = SubCategory\n supporter = Supporter\n user = User\n webhook = WebHook\n"
},
{
"alpha_fraction": 0.6566147804260254,
"alphanum_fraction": 0.661478579044342,
"avg_line_length": 31.125,
"blob_id": "f59a980f9632d9aa7afb6d7b5ea63f0cec9981f4",
"content_id": "cf9c1389b641f423d0d37afccaff7445dbc77a66",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1088,
"license_type": "permissive",
"max_line_length": 63,
"num_lines": 32,
"path": "/examples/webhook/server.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\n# Before running this code, run the following command:\n# このコードを実行する前に、以下のコマンドを実行してください。\n# pip install twitcaspy[webhook]\n\nfrom flask import Flask, request, make_response, jsonify, abort\napp = Flask(__name__)\n\nfrom twitcaspy import api, TwitcaspyException\n\[email protected]('/', methods=['GET', 'POST'])\ndef main():\n if request.method == 'POST':\n webhook = api.incoming_webhook(request.json)\n #Show Parse Result\n print(f'signature : {webhook.signature}')\n print(f'user_id : {webhook.broadcaster.id}')\n print(f'title : {webhook.movie.title}')\n return make_response(jsonify({'message':'OK'}))\n\nif __name__ == '__main__':\n import json\n cassettes_file = '../../cassettes/testincomingwebhook.json'\n # load test webhook data\n with open(cassettes_file, \"r\", encoding='utf-8')as file:\n data = json.load(file)\n # set signature to api instance\n api.signature = data['signature']\n app.run(debug=True)\n"
},
{
"alpha_fraction": 0.7667984366416931,
"alphanum_fraction": 0.782608687877655,
"avg_line_length": 22,
"blob_id": "c2d3a5510d30e39600e46076db8080dcdf745561",
"content_id": "869368225b3dbbcced14166e6c3e507020c4d751",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 253,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 11,
"path": "/twitcaspy/auth/__init__.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .app import AppAuthHandler\n\nfrom .grant import GrantAuthHandler\n\nfrom .implicit import ImplicitAuthHandler\n\n__all__ = ['AppAuthHandler', 'GrantAuthHandler', 'ImplicitAuthHandler']\n"
},
{
"alpha_fraction": 0.7259259223937988,
"alphanum_fraction": 0.7259259223937988,
"avg_line_length": 26,
"blob_id": "92eb128dbc0f9e1b02288d6e8074c9e2fe55a43a",
"content_id": "327a243614cbeb593e8a6a05f9a28d2b121258e3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 405,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 15,
"path": "/docs/exceptions.rst",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": ".. _exceptions_reference:\n\n**********\nExceptions\n**********\n\nExceptions are available directly in the :mod:`twitcaspy` module, which means\n:mod:`twitcaspy.errors` itself does not need to be imported. For example,\n:exc:`twitcaspy.errors.TwitcaspyException` is available as\n:exc:`twitcaspy.TwitcaspyException`.\n\n.. automodule:: twitcaspy.errors\n :members:\n :member-order: bysource\n :show-inheritance:\n"
},
{
"alpha_fraction": 0.61328125,
"alphanum_fraction": 0.62890625,
"avg_line_length": 17.285715103149414,
"blob_id": "35861525e4d256b3e6a21eed02226c59513f278d",
"content_id": "e6fac2dcd8709969798e08ea26ed266270380b33",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 256,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 14,
"path": "/twitcaspy/models/raw.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nclass Raw:\n \"\"\"Raw Object\n\n | The data will be returned as it is.\n | This class does not instantiate.\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n return json\n"
},
{
"alpha_fraction": 0.5941320061683655,
"alphanum_fraction": 0.6100244522094727,
"avg_line_length": 24.5625,
"blob_id": "ed0ccee4c2b930d9551774629979465a3d2940f9",
"content_id": "adeeab27b5006f8c9b0e6ad4e7de48dbe1abe959",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 818,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 32,
"path": "/twitcaspy/auth/oauth/bearer.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom requests.auth import AuthBase\n\nclass OAuth2Bearer(AuthBase):\n \"\"\"Bearer Authentication\n\n Parameters\n ----------\n bearer_token: :class:`str`\n |bearer_token|\n\n Raises\n ------\n TypeError\n If the given bearer_token is not a string instance\n \"\"\"\n def __init__(self, bearer_token):\n if not isinstance(bearer_token, str):\n raise TypeError(\"bearer_token must be string, not \"\n + type(bearer_token).__name__)\n\n self.bearer_token = bearer_token\n\n def __call__(self, request):\n request.headers['Authorization'] = f'Bearer {self.bearer_token}'\n return request\n"
},
{
"alpha_fraction": 0.7474150657653809,
"alphanum_fraction": 0.7570162415504456,
"avg_line_length": 27.808509826660156,
"blob_id": "6d0a20461b28e90a3080bec40aa7aa9552d00fae",
"content_id": "6ef3e91309bf909851f3ae07a6419aeade5527ed",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1582,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 47,
"path": "/examples/auth/grant.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom twitcaspy import API, GrantAuthHandler\n\n# The client id and/or secret can be found on your application's Details page\n# located at select app in https://twitcasting.tv/developer.php\n# (in \"details\" tab)\nCLIENT_ID = ''\nCLIENT_SECRET = ''\nCALLBACK_URL = ''\n\nauth = GrantAuthHandler(CLIENT_ID, CLIENT_SECRET, CALLBACK_URL)\n\n# Open webbrowser\nprint(f'URL: {auth.get_authorization_url()}')\nwebbrowser.open(auth.get_authorization_url())\nredirect_uri = input('input redirect_uri > ')\nauth.fetch_token(redirect_uri)\napi = API(auth)\n\n# Get Verify credentials\ncredential = api.verify_credentials()\n# If you uncomment it, the Response body is displayed.\n# (コメントアウトを外すとレスポンス本体が表示されます。)\n#print(credential._json)\n\n# If you uncomment it,\n# the name of the application you are using will be displayed.\n# (コメントアウトを外すと使用しているアプリケーションの名前が表示されます。)\n#print(credential.app.name)\n\n# If the authentication was successful,\n# you should see the name of the account print out\n# (認証に成功している場合、アカウント名が表示されます。)\nprint(credential.user.name)\n\n# Target User ID and screen ID\nuser_id = '182224938'\nscreen_id = 'twitcasting_jp'\n\nuser_info = api.get_user_info(id=user_id)\n# If the authentication was successful, you should\n# see the name of the account print out\n# (認証に成功している場合、アカウント名が表示されます。)\nprint(user_info.user.name)\n"
},
{
"alpha_fraction": 0.6198083162307739,
"alphanum_fraction": 0.6198083162307739,
"avg_line_length": 16.784090042114258,
"blob_id": "6c72f290e823a0e9dd1e032d651347fa298ba4db",
"content_id": "41e5094caadf633ba440ad61cf60b3a1aab84869",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 1565,
"license_type": "permissive",
"max_line_length": 44,
"num_lines": 88,
"path": "/docs/models.rst",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": ".. _models_reference:\n\n.. currentmodule:: twitcaspy\n\n.. include:: parameters.rst\n\n****************\nModels Reference\n****************\n\nModel class\n-----------\n.. autoclass:: twitcaspy.models.Model\n :members:\n :member-order: bysource\n\nResult class\n------------\n.. autoclass:: twitcaspy.models.Result\n :members:\n :member-order: bysource\n :show-inheritance:\n\nRaw class\n---------\n.. autoclass:: twitcaspy.models.Raw\n\nUser class\n----------\n.. autoclass:: twitcaspy.models.User\n :show-inheritance:\n\nApp class\n---------\n.. autoclass:: twitcaspy.models.App\n :show-inheritance:\n\nLive class\n----------\n.. autoclass:: twitcaspy.models.Live\n :show-inheritance:\n\nMovie class\n-----------\n.. autoclass:: twitcaspy.models.Movie\n :show-inheritance:\n\nComment class\n-------------\n.. autoclass:: twitcaspy.models.Comment\n :show-inheritance:\n\nGift class\n----------\n.. autoclass:: twitcaspy.models.Gift\n :show-inheritance:\n\nSupporter class\n---------------\n.. autoclass:: twitcaspy.models.Supporter\n :show-inheritance:\n\nCategory class\n--------------\n.. autoclass:: twitcaspy.models.Category\n :show-inheritance:\n\nSubCategory class\n-----------------\n.. autoclass:: twitcaspy.models.SubCategory\n :show-inheritance:\n\nWebHook class\n-------------\n.. autoclass:: twitcaspy.models.WebHook\n :show-inheritance:\n\nLateLimit class\n---------------\n.. autoclass:: twitcaspy.models.LateLimit\n :members:\n :member-order: bysource\n\nModelFactory class\n------------------\n.. autoclass:: twitcaspy.models.ModelFactory\n :members:\n :member-order: bysource\n"
},
{
"alpha_fraction": 0.5223684310913086,
"alphanum_fraction": 0.5276315808296204,
"avg_line_length": 22.75,
"blob_id": "0730ac292167c7377dd4e55fb44238fbc20ef4b4",
"content_id": "95e88341ff383ae4bb2b766263aa9a02c85a6da1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 760,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 32,
"path": "/twitcaspy/models/model.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nclass Model:\n \"\"\"Model Base Class\n\n Attributes\n ----------\n _api : :class:`~twitcaspy.api.API`\n \"\"\"\n\n def __init__(self, api=None):\n self._api = api\n\n def __getstate__(self):\n # pickle\n pickle = dict(self.__dict__)\n try:\n del pickle['_api'] # do not pickle the API reference\n except KeyError:\n pass\n return pickle\n\n @classmethod\n def parse(cls, api, json):\n \"\"\"Parse a JSON Model into a model instance.\"\"\"\n raise NotImplementedError\n\n def __repr__(self):\n state = [f'{k}={v!r}' for (k, v) in vars(self).items()]\n return f'{self.__class__.__name__}({\", \".join(state)})'\n"
},
{
"alpha_fraction": 0.7200000286102295,
"alphanum_fraction": 0.7379310131072998,
"avg_line_length": 25.851852416992188,
"blob_id": "96be05d2ed2d298617b04aa0592e283ade0deb4a",
"content_id": "2d3b3a0f7efdab493ff5b2fee1945b4c8f75424f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 725,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 27,
"path": "/examples/auth/app.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom twitcaspy import API, AppAuthHandler\n\n# The client id and/or secret can be found on your application's Details page\n# located at select app in https://twitcasting.tv/developer.php\n# (in \"details\" tab)\nCLIENT_ID = ''\nCLIENT_SECRET = ''\n\nauth = AppAuthHandler(CLIENT_ID, CLIENT_SECRET)\napi = API(auth)\n\n# Target User ID and screen ID\nuser_id = '182224938'\nscreen_id = 'twitcasting_jp'\n\n# If the authentication was successful, you should\n# see the name of the account print out\nprint(api.get_user_info(id=user_id).user.name)\n\nresult = api.get_webhook_list()\nprint(result.all_count)\nfor webhook in result.webhooks:\n print(f'{webhook.user_id}: {event}')\n"
},
{
"alpha_fraction": 0.5669565200805664,
"alphanum_fraction": 0.5713043212890625,
"avg_line_length": 25.136363983154297,
"blob_id": "2ef7bf8866e72800ec25a079f31fdf00fe92ac21",
"content_id": "b4d6eb9e17845e95006819fd8c0255541bbf864e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1150,
"license_type": "permissive",
"max_line_length": 67,
"num_lines": 44,
"path": "/twitcaspy/models/comment.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom ..utils import fromtimestamp\n\nfrom .model import Model\n\nfrom .user import User\n\nclass Comment(Model):\n \"\"\"Comment Object\n\n Attributes\n ----------\n id: :class:`str`\n | |comment_id|\n message: :class:`str`\n | comment text\n from_user: :class:`~twitcaspy.models.User`\n | Comment contributor information\n created: :class:`datetime.datetime`\n | Converted created_time to :class:`datetime.datetime` type\n created_time: :class:`int`\n | Unix time stamp of comment posting datetime\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#comment-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n comment = cls(api)\n setattr(comment, '_json', json)\n for k, v in json.items():\n if k == 'created':\n setattr(comment, k, fromtimestamp(v))\n setattr(comment, f'{k}_time', v)\n elif k == 'from_user':\n setattr(comment, k, User.parse(api, v))\n else:\n setattr(comment, k, v)\n return comment\n"
},
{
"alpha_fraction": 0.742432177066803,
"alphanum_fraction": 0.7539144158363342,
"avg_line_length": 50.09333419799805,
"blob_id": "a38322efca3b1d0c8d108fedde7212b6fc206a1e",
"content_id": "301749305bbba030e1940ef817338179f4fefa61",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3847,
"license_type": "permissive",
"max_line_length": 170,
"num_lines": 75,
"path": "/README.md",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# twitcaspy\n[![license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/Alma-field/twitcaspy/blob/master/LICENSE)\n[![test](https://github.com/Alma-field/twitcaspy/actions/workflows/test.yml/badge.svg?branch=master)](https://github.com/Alma-field/twitcaspy/actions/workflows/test.yml)\n[![Deploy](https://github.com/Alma-field/twitcaspy/actions/workflows/deploy.yml/badge.svg)](https://github.com/Alma-field/twitcaspy/actions/workflows/deploy.yml)\n[![Documentation Status](https://readthedocs.org/projects/twitcaspy/badge/?version=latest)](http://twitcaspy.alma-field.com/en/latest/?badge=latest)\n[![GitHub issues open](https://img.shields.io/github/issues/Alma-field/twitcaspy.svg)](https://github.com/Alma-field/twitcaspy/issues?q=is%3Aopen+is%3Aissue)\n[![GitHub issues close](https://img.shields.io/github/issues-closed-raw/Alma-field/twitcaspy.svg)](https://github.com/Alma-field/twitcaspy/issues?q=is%3Aclose+is%3Aissue)\n[![PyPI Version](https://img.shields.io/pypi/v/twitcaspy?label=PyPI)](https://pypi.org/project/twitcaspy/)\n[![Python Versions](https://img.shields.io/pypi/pyversions/twitcaspy?label=Python)](https://pypi.org/project/twitcaspy/)\n\nTwitcatting for Python\nPython 3.7 - 3.9 are supported.\n\n## Other language version\n - [English](https://github.com/Alma-field/twitcaspy/blob/master/README.md)\n - [Japanese](https://github.com/Alma-field/twitcaspy/blob/master/README_JA.md)\n\n## Document\n - [develop version](https://twitcaspy.alma-field.com/en/latest)\n - [latest version(v1.1.0)](https://twitcaspy.alma-field.com/en/stable)\n - [v1.1.0](https://twitcaspy.alma-field.com/en/1.1.0)\n - [v1.0.2](https://twitcaspy.alma-field.com/en/1.0.2)\n - [v1.0.1](https://twitcaspy.alma-field.com/en/1.0.1)\n - [v1.0.0](https://twitcaspy.alma-field.com/en/1.0.0)\n\n## Installation\nThe easiest way to install the latest version from PyPI is by using pip:\n```\npip install twitcaspy\n```\n\nYou can also use Git to clone the repository from 
GitHub to install the latest\ndevelopment version:\n```\ngit clone https://github.com/Alma-field/twitcaspy.git\ncd twitcaspy\npip install .\n```\n\nAlternatively, install directly from the GitHub repository:\n```\npip install git+https://github.com/Alma-field/twitcaspy.git\n```\n\n## Examples\nThis is an execution example in the application scope. \nGet the account name of ***@twitcasting_jp***.\n```python\nfrom twitcaspy import API, AppAuthHandler\nauth = AppAuthHandler(client_id, client_secret)\napi = API(auth)\n\nprint(api.get_user_info(id='twitcasting_jp').user.name)\n# > ツイキャス公式\n```\n\nSee in [examples](https://github.com/Alma-field/twitcaspy/tree/master/examples) for other examples and the entire code.\n### Included example\n - [Authorization](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth)\n - [AppAuthHandler](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth/app.py)\n - [GrantAuthHandler](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth/grant.py)\n - [ImplicitAuthHandler](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth/implicit.py)\n - [Realtime API](https://github.com/Alma-field/twitcaspy/blob/master/examples/realtime)\n - [Webhook](https://github.com/Alma-field/twitcaspy/blob/master/examples/webhook)\n - [Server](https://github.com/Alma-field/twitcaspy/blob/master/examples/webhook/server.py)\n - [Client](https://github.com/Alma-field/twitcaspy/blob/master/examples/webhook/client.py)\n\n## Source\nThis library is based on:\n - [tweepy/tweepy](https://github.com/tweepy/tweepy) - Twitter for Python!\n - [tamago324/PyTwitcasting](https://github.com/tamago324/PyTwitcasting) - PyTwitcasting is a library for API v2 (β) of Twitcasting.\n\n## Links\n - [Twitcasting API Documentation](https://apiv2-doc.twitcasting.tv/)\n - [API ChangeLog](https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md)\n"
},
{
"alpha_fraction": 0.5811518430709839,
"alphanum_fraction": 0.5908175706863403,
"avg_line_length": 30.83333396911621,
"blob_id": "827eb625fa1c43c41f241b67a0331de0b9d9d36c",
"content_id": "5c2970d67545bbd79c72425f8bdcb3cfbd2edd4e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2483,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 78,
"path": "/twitcaspy/models/movie.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom ..utils import fromtimestamp\n\nfrom .model import Model\n\nclass Movie(Model):\n \"\"\"Movie Object\n\n Attributes\n ----------\n id: :class:`str`\n | |movie_id|\n user_id: :class:`str`\n | |id|\n title: :class:`str`\n | Live title\n subtitle: :class:`str`\n | Live subtitle (telop)\n last_owner_comment: :class:`str` or :class:`None`\n | Live streamer's latest comment text\n category: :class:`str` or :class:`None`\n | category id\n link: :class:`str`\n | Link URL to live (or recording)\n is_live: :class:`bool`\n | Whether live streaming now\n is_recorded: :class:`bool`\n | Whether the recording is public\n comment_count: :class:`int`\n | Total number of comments\n large_thumbnail: :class:`str`\n | URL of thumbnail image (large)\n small_thumbnail: :class:`str`\n | URL of thumbnail image (small)\n country: :class:`str`\n | stream area (country code)\n duration: :class:`int`\n | stream time (seconds)\n created: :class:`datetime.datetime`\n | Converted created_time to :class:`datetime.datetime` type\n created_time: :class:`int`\n | Unix time stamp of stream start datetime\n is_collabo: :class:`bool`\n | Whether it is a collaboration stream\n is_protected: :class:`bool`\n | Whether to need the secret word\n max_view_count: :class:`int`\n | Maximum number of simultaneous viewers\n | (0 if streaming now.)\n current_view_count: :class:`int`\n | Current number of simultaneous viewers\n | (0 if not streaming now.)\n total_view_count: :class:`int`\n | Total number of viewers\n hls_url: :class:`str` or :class:`None`\n | URL for HTTP Live Streaming playback\n | `2019-04-17 update <https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md#2019-04-17>`_\n | Changed the URL of the hls_url parameter from `http` to `https` |google_translate_ja_en|\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#movie-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n movie = 
cls(api)\n setattr(movie, '_json', json)\n for k, v in json.items():\n if k == 'created':\n setattr(movie, k, fromtimestamp(v))\n setattr(movie, f'{k}_time', v)\n else:\n setattr(movie, k, v)\n return movie\n"
},
{
"alpha_fraction": 0.6252220273017883,
"alphanum_fraction": 0.6314387321472168,
"avg_line_length": 27.149999618530273,
"blob_id": "137cd4245c3cb441ebebdb11cb516bee19cedbba",
"content_id": "105a6467cd405358a9db8a1d50abc15c787510e8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1126,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 40,
"path": "/twitcaspy/models/latelimit.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom ..utils import fromtimestamp\n\nfrom .model import Model\n\nclass LateLimit:\n \"\"\"Late Limit Object\n\n Attributes\n ----------\n limit: :class:`int`\n | Maximum number of API executions\n remaining: :class:`int`\n | API remaining executable times\n reset: :class:`int`\n | Unix Timestamp at the time when the remaining API execution count is reset\n reset_time: :class:`datetime.datetime`\n | Converted reset to :class:`datetime.datetime` type\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#response-headers\n \"\"\"\n def __init__(self):\n self.limit = -1\n self.remaining = -1\n self.reset = None\n self.reset_time = None\n\n @classmethod\n def parse(cls, headers):\n latelimit = cls()\n latelimit.limit = int(headers['X-RateLimit-Limit'])\n latelimit.remaining = int(headers['X-RateLimit-Remaining'])\n latelimit.reset = int(headers['X-RateLimit-Reset'])\n latelimit.reset_time = fromtimestamp(latelimit.reset)\n return latelimit\n"
},
{
"alpha_fraction": 0.6295180916786194,
"alphanum_fraction": 0.6656626462936401,
"avg_line_length": 19.75,
"blob_id": "fa5aac8ab7cad49ca3a17bbbe5afe489daf9d1bb",
"content_id": "a173c36d6d980f70367c519471c1e025aa8c2747",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 332,
"license_type": "permissive",
"max_line_length": 51,
"num_lines": 16,
"path": "/twitcaspy/parsers/rawparser.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom .parser import Parser\n\nclass RawParser(Parser):\n def __init__(self):\n pass\n\n @classmethod\n def parse(cls, payload, *args, **kwargs):\n return payload\n"
},
{
"alpha_fraction": 0.7038690447807312,
"alphanum_fraction": 0.7038690447807312,
"avg_line_length": 55,
"blob_id": "84592190058c2b54e421b68126f77254205ada83",
"content_id": "6cc74222e0538e2ee64d00bad65fe91221ff4048",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 1440,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 24,
"path": "/docs/parameters.rst",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": ".. API parameters:\n\n.. |client_id| replace:: client_id of this app\n.. |client_secret| replace:: client_secret of this app\n.. |callback| replace:: Specify the same Callback URL set when registering the application.\n.. |csrf_token| replace:: CSRF token\n.. |bearer_token| replace:: Bearer Token of this user.\n\n.. |id| replace:: The ID of the user.\n.. |id_notice| replace:: **If you specify this parameter, screen_id is ignored.**\n.. |screen_id| replace:: The screen name of the user.(e.g.: `@~~`)\n.. |movie_id| replace:: The ID of the movie.\n.. |comment_id| replace:: The ID of the comment.\n.. |id_screenid| replace:: Either an id or screen_id is required for this method.\n.. |attribute| replace:: The attribute of :class:`~twitcaspy.models.Result` is as follows.\n.. |attribute_ja| replace:: 以下は :class:`~twitcaspy.models.Result` の属性です。\n.. |live_attribute| replace:: The attribute of :class:`~twitcaspy.models.Live` is as follows.\n.. |live_attribute_ja| replace:: 以下は :class:`~twitcaspy.models.Live` の属性です。\n.. |latelimit| replace:: **latelimit** : :class:`~twitcaspy.models.LateLimit`\n.. |no_auth| replace:: No authentication required.\n.. |no_auth_ja| replace:: 認証は必要ありません。\n\n.. |google_translate_ja_en| replace:: (This sentence was translated by Google Translate.[ja->en])\n.. |google_translate_en_ja| replace:: (この文章はGoogle翻訳によって翻訳されました。[en->ja])\n"
},
{
"alpha_fraction": 0.5364772081375122,
"alphanum_fraction": 0.5417170524597168,
"avg_line_length": 32.08000183105469,
"blob_id": "f0c24bf63ce2c53f928bab596352e59e1633d487",
"content_id": "35869304735196cd2ca82744721c6a4f51c2d087",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2481,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 75,
"path": "/twitcaspy/parsers/modelparser.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom ..errors import TwitcaspyException\n\nfrom ..models.result import Result\n\nfrom ..models.latelimit import LateLimit\n\nfrom ..models.factory import ModelFactory\n\nfrom .parser import Parser\n\nclass ModelParser(Parser):\n def __init__(self, model_factory=None):\n self.model_factory = model_factory or ModelFactory\n\n def parse(self, payload, *, api=None, payload_type=None):\n if payload_type is None:\n return None\n\n if isinstance(payload_type, str):\n _payload_type = {payload_type: [payload_type, False]}\n elif isinstance(payload_type, list):\n _payload_type = {_item: [_item, False] for _item in payload_type}\n elif isinstance(payload_type, dict):\n _payload_type = payload_type\n else:\n raise TypeError(\"payload_type must be (str, list, dict). not \"\n + type(payload_type).__name__)\n for _item in _payload_type.values():\n _key = _item[0]\n if not hasattr(self.model_factory, _key):\n raise TwitcaspyException(\n f'No model for this payload type: {_key}'\n )\n\n if hasattr(payload, 'json'):\n json = payload.json()\n else:\n json = payload\n\n result = Result(api)\n setattr(result, '_json', json)\n try:\n for _name, _item in _payload_type.items():\n _type, _list = _item\n model = getattr(self.model_factory, _type)\n if _name == 'raw_data':\n parse_data = json\n _name = _type\n else:\n parse_data = json[_name]\n if _list:\n data = [model.parse(api, _json) for _json in parse_data]\n else:\n data = model.parse(api, parse_data)\n setattr(result, _name, data)\n except KeyError:\n raise TwitcaspyException(\n f\"Unable to parse response payload: {json}\"\n )\n\n if not hasattr(payload, 'headers'):\n pass\n elif 'X-RateLimit-Limit' in payload.headers:\n setattr(result, 'late_limit', LateLimit.parse(payload.headers))\n else:\n setattr(result, 'late_limit', LateLimit())\n\n return result\n"
},
{
"alpha_fraction": 0.5706472396850586,
"alphanum_fraction": 0.583409309387207,
"avg_line_length": 27.86842155456543,
"blob_id": "c27c03a4d40a5c648b8952beb5ccba904f47db64",
"content_id": "51ababf3bfd3d02693afadc5b7b52c7907f13965",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1097,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 38,
"path": "/twitcaspy/auth/auth.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nclass AuthHandler:\n \"\"\"AuthHandler Base Class\n\n Parameters\n ----------\n client_id: :class:`str`\n |client_id|\n client_secret: :class:`str`\n |client_secret|\n\n Raises\n ------\n TypeError\n If the given client id and/or secret is not a string instance\n \"\"\"\n OAUTH_HOST = 'apiv2.twitcasting.tv'\n OAUTH_ROOT = '/oauth2/'\n\n def __init__(self, client_id, client_secret):\n if not isinstance(client_id, str):\n raise TypeError(\"ClientID must be string, not \"\n + type(client_id).__name__)\n if not isinstance(client_secret, str):\n raise TypeError(\"Client secret must be string, not \"\n + type(client_secret).__name__)\n\n self.client_id = client_id\n self.client_secret = client_secret\n\n def _get_oauth_url(self, endpoint):\n return 'https://' + self.OAUTH_HOST + self.OAUTH_ROOT + endpoint\n"
},
{
"alpha_fraction": 0.5660377144813538,
"alphanum_fraction": 0.5987791419029236,
"avg_line_length": 31.178571701049805,
"blob_id": "5d03a402f6edb422379488560e7fa3b1b2cfa29b",
"content_id": "ab8b78b48f31c947778e72dc8984edc87153d747",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1802,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 56,
"path": "/twitcaspy/models/user.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nclass User(Model):\n \"\"\"User Object\n\n Attributes\n ----------\n id: :class:`str`\n | |id|\n screen_id: :class:`str`\n | |screen_id|\n | **Note**: screen_id is subject to change by the user.\n name: :class:`str`\n | Human readable user name\n image: :class:`str`\n | URL of user icon\n profile: :class:`str`\n | Profile text\n level: :class:`int`\n | User level\n last_movie_id: :class:`str` or :class:`None`\n | The last live ID by the user\n is_live: :class:`bool`\n | Whether it is currently live streamed\n supporter_count(deprecated): :class:`int`\n | Number of user supporters.\n | `2018-09-03 update <https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md#2018-09-03>`_\n | Returns a fixed value of 0.\n | This parameter is deprecated.\n supporting_count(deprecated): :class:`int`\n | Number supported user by the user.\n | `2018-09-03 update <https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md#2018-09-03>`_\n | Returns a fixed value of 0.\n | This parameter is deprecated.\n created(deprecated): :class:`int`\n | Date and time when this account was created.\n | `2018-08-03 update <https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md#2018-08-03>`_\n | Returns a fixed value of 0.\n | This parameter is deprecated.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#user-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n user = cls(api)\n setattr(user, '_json', json)\n for k, v in json.items():\n setattr(user, k, v)\n return user\n"
},
{
"alpha_fraction": 0.643856942653656,
"alphanum_fraction": 0.6578537821769714,
"avg_line_length": 24.719999313354492,
"blob_id": "dbe1c2335157857027eec5d925686773146f3dac",
"content_id": "60c93a4fe9d6c0ad54f45975c5eb4a19de2156a7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 709,
"license_type": "permissive",
"max_line_length": 63,
"num_lines": 25,
"path": "/examples/webhook/client.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\n# If you actually want to use webhooks,\n# you don't need to write this program.\n# (実際にWebhookを使用する場合は、\n# このプログラムを作成する必要はありません。)\n\nimport requests\nimport json\n\ndef main():\n cassettes_file = '../../cassettes/testincomingwebhook.json'\n # load test webhook data\n with open(cassettes_file, \"r\", encoding='utf-8')as file:\n data = json.load(file)\n response = requests.post(\n 'http://localhost:5000/',\n data=json.dumps(data),\n headers={'Content-Type': 'application/json'})\n print(response.status_code)\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.7240853905677795,
"alphanum_fraction": 0.730182945728302,
"avg_line_length": 16.263158798217773,
"blob_id": "7b6de7c9da89a548b945a45815a6229c29567ed5",
"content_id": "431443cd87040623c9aa7186975efba0e0cbfd8a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 656,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 38,
"path": "/twitcaspy/models/__init__.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nfrom .result import Result\n\nfrom .app import App\n\nfrom .category import Category\n\nfrom .comment import Comment\n\nfrom .gift import Gift\n\nfrom .live import Live\n\nfrom .movie import Movie\n\nfrom .raw import Raw\n\nfrom .subcategory import SubCategory\n\nfrom .supporter import Supporter\n\nfrom .user import User\n\nfrom .webhook import WebHook\n\nfrom .latelimit import LateLimit\n\nfrom .factory import ModelFactory\n\n__all__ = [\n 'App', 'Category', 'Comment', 'ModelFactory', 'Gift', 'LateLimit',\n 'Live', 'Movie', 'Raw', 'SubCategory', 'Supporter', 'User', 'WebHook'\n]\n"
},
{
"alpha_fraction": 0.6988636255264282,
"alphanum_fraction": 0.7443181872367859,
"avg_line_length": 18.55555534362793,
"blob_id": "e3497fedda5507a1cbe487fa76813c1022b231c6",
"content_id": "3b172e4e656239c22f092a9cf5547f20e4982de7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 176,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 9,
"path": "/twitcaspy/auth/oauth/__init__.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .basic import OAuth2Basic\n\nfrom .bearer import OAuth2Bearer\n\n__all__ = ['OAuth2Basic', 'OAuth2Bearer']\n"
},
{
"alpha_fraction": 0.5526742339134216,
"alphanum_fraction": 0.5551053285598755,
"avg_line_length": 21.436363220214844,
"blob_id": "26e1866af1016310c9908d10cf50c764b549d6eb",
"content_id": "b306dd9b9a4cd164966aa8ff4744b635754d50bc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 1234,
"license_type": "permissive",
"max_line_length": 61,
"num_lines": 55,
"path": "/docs/auth.rst",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": ".. _auth_reference:\n\n.. currentmodule:: twitcaspy.auth\n\n.. include:: parameters.rst\n\n*************************************************************\n:class:`AuthHandler` class --- Twitcasting API Authentication\n*************************************************************\n\nAuthentication header\n---------------------\n\nAuthHandler\n===========\n.. autoclass:: twitcaspy.auth.auth.AuthHandler\n :members:\n :member-order: bysource\n\nApplication-only authentication handler\n=======================================\n.. autoclass:: AppAuthHandler\n :members:\n :member-order: bysource\n :show-inheritance:\n\nAuthorization Code Grant handler\n================================\n.. autoclass:: GrantAuthHandler\n :members:\n :member-order: bysource\n :show-inheritance:\n\nImplicit handler\n================\n.. autoclass:: ImplicitAuthHandler\n :members:\n :member-order: bysource\n :show-inheritance:\n\nOAuth2\n------\nBasic Authentication\n====================\n.. autoclass:: twitcaspy.auth.oauth.OAuth2Basic\n :members:\n :member-order: bysource\n :show-inheritance:\n\nBearer Authentication\n=====================\n.. autoclass:: twitcaspy.auth.oauth.OAuth2Bearer\n :members:\n :member-order: bysource\n :show-inheritance:\n"
},
{
"alpha_fraction": 0.5602536797523499,
"alphanum_fraction": 0.5655391216278076,
"avg_line_length": 24.567567825317383,
"blob_id": "f5b9eb5e87e6ba32a189fe3f01762b1703aa4cf0",
"content_id": "6f0eadaa88baedae634dcf8b70099f6834d31d6d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 946,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 37,
"path": "/twitcaspy/models/category.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nfrom .subcategory import SubCategory\n\nclass Category(Model):\n \"\"\"Category Object\n\n Attributes\n ----------\n id: :class:`str`\n | Category ID\n name: :class:`str`\n | Category name\n sub_categories: :class:`list` of :class:`~twitcaspy.models.SubCategory`\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#category-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n category = cls(api)\n setattr(category, '_json', json)\n for k, v in json.items():\n if k == 'sub_categories':\n sub_categories = []\n for subcategory in v:\n sub_categories.append(SubCategory.parse(api, subcategory))\n setattr(category, k, sub_categories)\n else:\n setattr(category, k, v)\n return category\n"
},
{
"alpha_fraction": 0.586995542049408,
"alphanum_fraction": 0.5982062816619873,
"avg_line_length": 31.31884002685547,
"blob_id": "85e17064039ee4ea8c2154b94ff857234b3ace38",
"content_id": "685dab617fd04a2c7f67c431a9757f38862a3817",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2230,
"license_type": "permissive",
"max_line_length": 128,
"num_lines": 69,
"path": "/twitcaspy/models/supporter.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom ..utils import fromtimestamp\n\nfrom .user import User\n\nclass Supporter(User):\n \"\"\"Supporter Object\n\n Attributes\n ----------\n id: :class:`str`\n | |id|\n screen_id: :class:`str`\n | |screen_id|\n | **Note**: screen_id is subject to change by the user.\n name: :class:`str`\n | Human readable user name\n image: :class:`str`\n | URL of user icon\n profile: :class:`str`\n | Profile text\n level: :class:`int`\n | User level\n last_movie_id: :class:`str` or :class:`None`\n | The last live ID by the user\n is_live: :class:`bool`\n | Whether it is currently live streamed\n supported: :class:`int`\n | Unix time stamp of supported datetime\n | `2021-09-29 update <https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md#2021-09-29>`_\n | Added 'unix time stamp <supported> of supported datetime' to SupporterUser object of response |google_translate_ja_en|\n supported_time: :class:`datetime.datetime`\n | Converted supported to :class:`datetime.datetime` type\n supporter_count(deprecated): :class:`int`\n | Number of user supporters.\n | Returns a fixed value of 0.\n | This parameter is deprecated.\n supporting_count(deprecated): :class:`int`\n | Number supported user by the user.\n | Returns a fixed value of 0.\n | This parameter is deprecated.\n point: :class:`int`\n | Item score\n total_point: :class:`int`\n | Cumulative score\n created(deprecated): :class:`int`\n | Date and time when this account was created.\n | Returns a fixed value of 0.\n | This parameter is deprecated.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#supporteruser-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n supporter = cls(api)\n setattr(supporter, '_json', json)\n for k, v in json.items():\n if k == 'supported':\n setattr(supporter, k, v)\n setattr(supporter, f'{k}_time', fromtimestamp(v))\n else:\n setattr(supporter, k, v)\n return supporter\n"
},
{
"alpha_fraction": 0.5200303792953491,
"alphanum_fraction": 0.5246681571006775,
"avg_line_length": 34.10456085205078,
"blob_id": "a9c52abf318ab58b86d4d959777f84dc7f3b1fe5",
"content_id": "341d3fe0f1f7932c78b10e3794ac582d98e3b852",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 50024,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 1425,
"path": "/twitcaspy/api.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nimport functools\nimport logging\nimport json\nfrom platform import python_version\nimport sys\n\nimport requests\n\nfrom . import __version__ as twitcaspy_version\nfrom .errors import (\n BadRequest, Forbidden, HTTPException, NotFound, TooManyRequests,\n TwitcaspyException, TwitcastingServerError, Unauthorized\n)\nfrom .parsers import Parser, ModelParser, RawParser\n\nlog = logging.getLogger(__name__)\n\ndef payload(*payload_list, **payload_kwargs):\n if payload_kwargs is None:\n payload_kwargs = {}\n if isinstance(payload_list, tuple):\n for _key in payload_list:\n payload_kwargs[_key] = [_key, False]\n def decorator(method):\n @functools.wraps(method)\n def wrapper(*args, **kwargs):\n kwargs['payload_type'] = payload_kwargs\n return method(*args, **kwargs)\n wrapper.payload_type = payload_kwargs\n return wrapper\n return decorator\n\nclass API:\n \"\"\"Twitcasting API v2.0 Interface\n\n Parameters\n ----------\n auth: :class:`~twitcaspy.auth.auth.AuthHandler`\n The authentication handler to be used\n host: :class:`str`\n The general REST API host server URL\n parser: :class:`~twitcaspy.parsers.parser.Parser`\n | The Parser instance to use for parsing the response from Twitcasting.\n | defaults to an instance of ModelParser\n user_agent: :class:`str`\n The UserAgent to be used\n signature: :class:`str`\n Key to prove that it is a legitimate request in webhook.\n\n Raises\n ------\n TypeError\n If the given parser is not a Parser instance\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/\n \"\"\"\n\n def __init__(\n self, auth=None, *, host='apiv2.twitcasting.tv',\n parser=None, user_agent=None, signature=None\n ):\n self.auth = auth\n self.host = host\n self.signature = signature\n\n if parser is None:\n parser = ModelParser()\n self.parser = parser\n\n if user_agent is 
None:\n user_agent = (\n f\"Python/{python_version()} \"\n f\"Requests/{requests.__version__} \"\n f\"twitcaspy/{twitcaspy_version}\"\n )\n self.user_agent = user_agent\n\n if not isinstance(self.parser, Parser):\n raise TypeError(\n \"parser should be an instance of Parser, not \" +\n str(type(self.parser))\n )\n\n self.session = requests.Session()\n\n def request(\n self, method, endpoint, *, endpoint_parameters=(), params=None,\n headers=None, json_payload=None, parser=None, payload_type=None,\n post_data=None, require_auth=True, **kwargs\n ):\n # If authentication is required and no credentials\n # are provided, throw an error.\n if require_auth and not self.auth:\n raise TwitcaspyException('Authentication required!')\n\n if headers is None:\n headers = {}\n headers[\"X-Api-Version\"] = '2.0'\n headers[\"User-Agent\"] = self.user_agent\n\n # Build the request URL\n url = 'https://' + self.host + endpoint\n\n if params is None:\n params = {}\n for k, arg in kwargs.items():\n if arg is None:\n continue\n if k not in endpoint_parameters:\n log.warning(f'Unexpected parameter: {k}')\n continue\n params[k] = arg\n log.debug(\"PARAMS: %r\", params)\n\n if parser is None:\n parser = self.parser\n\n try:\n # Execute request\n try:\n response = self.session.request(\n method, url, params=params, headers=headers,\n data=json.dumps(post_data), json=json_payload,\n auth=self.auth.auth\n )\n except Exception as e:\n raise TwitcaspyException(f'Failed to send request: {e}')\\\n .with_traceback(sys.exc_info()[2])\n\n # If an error was returned, throw an exception\n self.last_response = response\n if response.status_code == 400:\n raise BadRequest(response)\n if response.status_code == 401:\n raise Unauthorized(response)\n if response.status_code == 403:\n raise Forbidden(response)\n if response.status_code == 404:\n raise NotFound(response)\n if response.status_code == 429:\n raise TooManyRequests(response)\n if response.status_code >= 500:\n raise 
TwitcastingServerError(response)\n if response.status_code and not 200 <= response.status_code < 300:\n raise HTTPException(response)\n\n result = parser.parse(\n response, api=self, payload_type=payload_type)\n\n return result\n finally:\n self.session.close()\n\n @payload(\n 'user', supporter_count=['raw', False],\n supporting_count=['raw', False])\n def get_user_info(self, *, id=None, screen_id=None, **kwargs):\n \"\"\"get_user_info(*, id=None, screen_id=None)\n\n | Returns information about the specified user.\n | |id_screenid|\n\n Parameters\n ----------\n id: :class:`str`\n |id|\n |id_notice|\n screen_id: :class:`str`\n |screen_id|\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **user** : :class:`~twitcaspy.models.User`\n | **supporter_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Number of user supporters.\n | **supporting_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Number supported user by the user.\n\n Raises\n ------\n TwitcaspyException\n If both id and screen_id are not specified\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-user-info\n \"\"\"\n target_id = id if id is not None else screen_id\n if target_id is None:\n raise TwitcaspyException(\n 'Either an id or screen_id is required for this method.')\n return self.request('GET', f'/users/{target_id}', **kwargs)\n\n @payload(\n 'app', 'user', supporter_count=['raw', False],\n supporting_count=['raw', False])\n def verify_credentials(self, **kwargs):\n \"\"\"verify_credentials()\n\n Returns application and user information about the access_token.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **app** : :class:`~twitcaspy.models.App`\n | **user** : :class:`~twitcaspy.models.User`\n | **supporter_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n | **supporting_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n\n References\n ----------\n 
https://apiv2-doc.twitcasting.tv/#verify-credentials\n \"\"\"\n return self.request('GET', '/verify_credentials', **kwargs)\n\n def get_live_thumbnail_image(self, *, id=None, screen_id=None, **kwargs):\n \"\"\"get_live_thumbnail_image(*, id=None, screen_id=None,\\\n size='small', position='latest')\n\n | Returns live thumbnail the specified user.\n | Returns an offline image if the user is not streaming now.\n | |id_screenid|\n\n Tip\n ---\n |no_auth|\n\n Parameters\n ----------\n id: :class:`str`\n |id|\n |id_notice|\n screen_id: :class:`str`\n |screen_id|\n size(optional): :class:`str`\n | image size\n | 'large' or 'small' can be specified.(default is 'small'.)\n position(optional): :class:`str`\n | 'beginning' or 'latest' can be specified.(default is 'latest'.)\n\n Returns\n -------\n :class:`requests.models.Response`\n | Image data is stored in the content attribute.\n\n Raises\n ------\n TwitcaspyException\n If both id and screen_id are not specified\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-live-thumbnail-image\n \"\"\"\n target_id = id if id is not None else screen_id\n if target_id is None:\n raise TwitcaspyException(\n 'Either an id or screen_id is required for this method.')\n return self.request(\n 'GET', f'/users/{target_id}/live/thumbnail',\n parser=RawParser, require_auth=False,\n endpoint_parameters=('size', 'position'), **kwargs)\n\n @payload(\n 'movie', broadcaster=['user', False],\n tags=['raw', False], raw_data=['live', False])\n def get_movie_info(self, movie_id, **kwargs):\n \"\"\"get_movie_info(movie_id)\n\n Returns information about the specified movie.\n\n Parameters\n ----------\n movie_id: :class:`str`\n |movie_id|\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movie** : :class:`~twitcaspy.models.Movie`\n | **broadcaster** : :class:`~twitcaspy.models.User`\n | **tags** : :class:`~twitcaspy.models.Raw` (:class:`list`)\n | **live** : :class:`~twitcaspy.models.Live`\n 
| |live_attribute|\n | **movie** : :class:`~twitcaspy.models.Movie`\n | **broadcaster** : :class:`~twitcaspy.models.User`\n | **tags** : :class:`~twitcaspy.models.Raw` (:class:`list`)\n | `result.movie` is equivalent to` result.live.movie`.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-movie-info\n \"\"\"\n return self.request('GET', f'/movies/{movie_id}', **kwargs)\n\n @payload(movies=['movie', True], total_count=['raw', False])\n def get_movies_by_user(self, *, id=None, screen_id=None, **kwargs):\n \"\"\"get_movies_by_user(*, id=None, screen_id=None,\\\n offset=0, limit=20, slice_id=None)\n\n | Returns movies of the specified user\n in descending order of creation date and time.\n | |id_screenid|\n\n Parameters\n ----------\n id: :class:`str`\n |id|\n |id_notice|\n screen_id: :class:`str`\n |screen_id|\n offset(optional): :class:`int`\n | Position from the beginning\n | It can be specified in the range of 0 to 1000.(default is 0.)\n limit(optional): :class:`int`\n | Maximum number of acquisitions\n | It can be specified in the range of 1 to 50.(default is 20.)\n | (In some cases,\n it may return less than the specified number of videos.)\n slice_id(optional): :class:`int` or :class:`None`\n | Gets the movie before this slice_id.\n | It can be specified in the range of 1 or more.\n | (Not specified by default.[= :class:`None`])\n | If you specify this parameter, offset is ignored.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **total_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n | **movies** : :class:`List` of :class:`~twitcaspy.models.Movie`\n\n Raises\n ------\n TwitcaspyException\n If both id and screen_id are not specified\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-movies-by-user\n \"\"\"\n target_id = id if id is not None else screen_id\n if target_id is None:\n raise TwitcaspyException(\n 'Either an id or screen_id is required for this method.')\n 
return self.request(\n 'GET', f'/users/{target_id}/movies',\n endpoint_parameters=('offset', 'limit', 'slice_id'), **kwargs)\n\n @payload(\n 'movie', broadcaster=['user', False],\n tags=['raw', False], raw_data=['live', False])\n def get_current_live(self, *, id=None, screen_id=None, **kwargs):\n \"\"\"get_current_live(*, id=None, screen_id=None)\n\n | Returns live information if the user is streaming now.\n | |id_screenid|\n\n Parameters\n ----------\n id: :class:`str`\n |id|\n |id_notice|\n screen_id: :class:`str`\n |screen_id|\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movie** : :class:`~twitcaspy.models.Movie`\n | **broadcaster** : :class:`~twitcaspy.models.User`\n | **tags** : :class:`~twitcaspy.models.Raw` (:class:`list`)\n | **live** : :class:`~twitcaspy.models.Live`\n | |live_attribute|\n | **movie** : :class:`~twitcaspy.models.Movie`\n | **broadcaster** : :class:`~twitcaspy.models.User`\n | **tags** : :class:`~twitcaspy.models.Raw` (:class:`list`)\n | `result.movie` is equivalent to` result.live.movie`.\n\n Raises\n ------\n TwitcaspyException\n If both id and screen_id are not specified\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-current-live\n \"\"\"\n target_id = id if id is not None else screen_id\n if target_id is None:\n raise TwitcaspyException(\n 'Either an id or screen_id is required for this method.')\n return self.request(\n 'GET', f'/users/{target_id}/current_live', **kwargs)\n\n @payload(movie_id=['raw', False], subtitle=['raw', False])\n def set_current_live_subtitle(self, subtitle, *, cut_out=False, **kwargs):\n \"\"\"set_current_live_subtitle(subtitle, *, cut_out=False)\n\n | If the user is broadcasting, set a live telop.\n\n Parameters\n ----------\n subtitle: :class:`str`\n | live telop\n cut_out: :class:`bool`\n | If the subtitle is more than 17 characters, cut out\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | 
**movie_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n |movie_id|\n | **subtitle** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n\n Raises\n ------\n TwitcaspyException:\n When the subtitle is less than one character.\n When the subtitle is more than 17 characters and cut_out is False.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#set-current-live-subtitle\n \"\"\"\n if len(subtitle) < 1:\n raise TwitcaspyException(\n '`subtitle` must be at least one character.')\n if not cut_out and 17 < len(subtitle):\n raise TwitcaspyException(\n 'The subtitle must be 17 characters or less.')\n else:\n post_data = {}\n post_data['subtitle'] = subtitle[:17]\n return self.request(\n 'POST', '/movies/subtitle', post_data=post_data, **kwargs)\n\n @payload(movie_id=['raw', False], subtitle=['raw', False])\n def unset_current_live_subtitle(self, **kwargs):\n \"\"\"unset_current_live_subtitle()\n\n | If the user is broadcasting, unset a live telop.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movie_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n |movie_id|\n | **subtitle** : :class:`~twitcaspy.models.Raw` (:class:`None`)\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#unset-current-live-subtitle\n \"\"\"\n return self.request('DELETE', '/movies/subtitle', **kwargs)\n\n @payload(movie_id=['raw', False], hashtag=['raw', False])\n def set_current_live_hashtag(self, hashtag, *, cut_out=False, **kwargs):\n \"\"\"set_current_live_hashtag(hashtag, *, cut_out=False)\n\n | If the user is broadcasting, set a live hashtag.\n\n Parameters\n ----------\n hashtag: :class:`str`\n live hashtag\n cut_out: :class:`bool`\n | If the hashtag is more than 26 characters, cut out\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movie_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n |movie_id|\n | **hashtag** : :class:`~twitcaspy.models.Raw` 
(:class:`str`)\n\n Raises\n ------\n TwitcaspyException:\n When the hashtag is less than one character./\n When the hashtag is more than 26 characters and cut_out is False.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#set-current-live-hashtag\n \"\"\"\n if len(hashtag) < 1:\n raise TwitcaspyException(\n '`hashtag` must be at least one character.')\n if not cut_out and 26 < len(hashtag):\n raise TwitcaspyException(\n '`hashtag` must be 26 characters or less.')\n else:\n post_data = {}\n post_data['hashtag'] = hashtag[:26]\n return self.request(\n 'POST', '/movies/hashtag', post_data=post_data, **kwargs)\n\n @payload(movie_id=['raw', False], hashtag=['raw', False])\n def unset_current_live_hashtag(self, **kwargs):\n \"\"\"unset_current_live_hashtag()\n\n | If the user is broadcasting, unset a live hashtag.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movie_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n |movie_id|\n | **hashtag** : :class:`~twitcaspy.models.Raw` (:class:`None`)\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#unset-current-live-hashtag\n \"\"\"\n return self.request('DELETE', '/movies/hashtag', **kwargs)\n\n @payload(\n movie_id=['raw', False],\n all_count=['raw', False],\n comments=['comment', True])\n def get_comments(self, movie_id, **kwargs):\n \"\"\"get_comments(movie_id, *, offset=0, limit=20, slice_id=None)\n\n | Returns comments of the specified movie\n in descending order of creation date and time.\n\n Parameters\n ----------\n movie_id: :class:`str`\n |movie_id|\n offset(optional): :class:`int`\n | Position from the beginning\n | It can be specified in the range of 0 or more.(default is 0.)\n limit(optional): :class:`int`\n | Maximum number of acquisitions\n | It can be specified in the range of 1 to 50.(default is 10.)\n | (In some cases,\n it may return less than the specified number of videos.)\n slice_id(optional): :class:`int` or :class:`None`\n 
| Gets the comment after this slice_id.\n | It can be specified in the range of 1 or more.\n | (Not specified by default.[= :class:`None`])\n | If you specify this parameter, offset is ignored.\n | `2018-08-28 update <https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md#2018-08-28>`_\n | The minimum value that can be specified for slice_id is now 1.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movie_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n |movie_id|\n | **all_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Total number of comments\n | **comments** : :class:`List` of :class:`~twitcaspy.models.Comment`\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-comments\n \"\"\"\n return self.request(\n 'GET', f'/movies/{movie_id}/comments',\n endpoint_parameters=('offset', 'limit', 'slice_id'), **kwargs)\n\n @payload('comment', movie_id=['raw', False], all_count=['raw', False])\n def post_comment(self, movie_id, comment, **kwargs):\n \"\"\"post_comment(movie_id, comment, *, sns='none')\n\n | Post a comment.\n | It can be executed only on a user-by-user basis.\n\n Parameters\n ----------\n movie_id: :class:`str`\n |movie_id|\n comment: :class:`str`\n | Comment text to post.\n | Must be 1 to 140 characters.\n sns: :class:`str`\n | Simultaneous posting to SNS.\n | (Valid only when the user is linked with Twitter or Facebook.)\n | 'reply' : Post in a format that replies to the streamer.\n | 'normal' : Regular post.\n | 'none' : No SNS posts.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movie_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n |movie_id|\n | **all_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Total number of comments\n | **comment** : :class:`~twitcaspy.models.Comment`\n\n Raises\n ------\n TwitcaspyException:\n When comment is not 1-140 characters.\n\n References\n ----------\n 
https://apiv2-doc.twitcasting.tv/#post-comment\n \"\"\"\n if not 1 <= len(comment) <= 140:\n raise TwitcaspyException(\n '`comment` must be in the range 1-140 characters.')\n else:\n post_data = {'comment': comment}\n if 'sns' in kwargs:\n if kwargs['sns'] in ['none', 'normal', 'reply']:\n post_data['sns'] = kwargs['sns']\n else:\n post_data['sns'] = 'none'\n return self.request(\n 'POST', f'/movies/{movie_id}/comments',\n post_data=post_data, **kwargs)\n\n @payload(comment_id=['raw', False])\n def delete_comment(self, movie_id, comment_id, **kwargs):\n \"\"\"delete_comment(movie_id, comment_id)\n\n | Delete the comment.\n | It can be executed only on a user-by-user basis.\n | As a general rule, the comments that can be deleted are limited to\n those that the poster is the same as the user associated\n with the access token.\n | However, if you use the access token of the user who owns the movie,\n you can delete the comments posted by other users.\n\n Parameters\n ----------\n movie_id: :class:`str`\n |movie_id|\n comment_id: :class:`str`\n |comment_id|\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **comment_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n ID of the deleted comment.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#delete-comment\n \"\"\"\n return self.request(\n 'DELETE', f'/movies/{movie_id}/comments/{comment_id}', **kwargs)\n\n @payload(slice_id=['raw', False], gifts=['gift', True])\n def get_gifts(self, **kwargs):\n \"\"\"get_gifts(*, slice_id=-1)\n\n | Acquire the item sent by the user associated\n with the access token in the last 10 seconds.\n\n Parameters\n ----------\n slice_id(optional): :class:`int`\n | Gets the items sent after this item send ID.\n | It can be specified in the range of -1 or more.(default is -1.)\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **slice_id** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n 
Slice_id to be specified the next time you call the API.\n | **gifts** : :class:`list` of :class:`~twitcaspy.models.Gift`\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-gifts\n \"\"\"\n return self.request(\n 'GET', '/gifts',\n endpoint_parameters=('slice_id'), **kwargs)\n\n @payload(\n is_supporting=['raw', False],\n supported=['raw', False],\n target_user=['user', False])\n def get_supporting_status(\n self, target_user_id, *, id=None, screen_id=None, **kwargs):\n \"\"\"get_supporting_status(target_user_id, *, id=None, screen_id=None)\n\n | Gets the status of whether a user is a supporter of another user.\n | |id_screenid|\n\n Parameters\n ----------\n id: :class:`str`\n |id|\n |id_notice|\n screen_id: :class:`str`\n |screen_id|\n target_user_id: :class:`str`\n | target user id or screen_id\n\n Warnings\n --------\n Note that unlike :class:`~twitcaspy.models.Supporter`,\n there is no supported_time attribute.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **is_supporting** : :class:`~twitcaspy.models.Raw` (:class:`bool`)\n The status of whether (id/screen_id) supported target_user_id.\n | **supported** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Unix time stamp of supported datetime\n | **target_user** : :class:`~twitcaspy.models.User`\n Target user information\n\n Raises\n ------\n TwitcaspyException\n If both id and screen_id are not specified\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-supporting-status\n \"\"\"\n target_id = id if id is not None else screen_id\n if target_id is None:\n raise TwitcaspyException(\n 'Either an id or screen_id is required for this method.')\n kwargs['target_user_id'] = target_user_id\n return self.request(\n 'GET', f'/users/{target_id}/supporting_status',\n endpoint_parameters=('target_user_id'), **kwargs)\n\n @payload(added_count=['raw', False])\n def support_user(self, target_user_ids=None, **kwargs):\n 
\"\"\"support_user(target_user_ids=None)\n\n | Become a supporter of the specified user.\n\n Parameters\n ----------\n target_user_ids: :class:`list` or :class:`tuple`\n | An array of target user id or screen_id\n | The number of elements in the array must be 20 or less.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **added_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Number of registered supporters.\n\n Raises\n ------\n TwitcaspyException\n When target_user_ids is not a :class:`list` or :class:`tuple`\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#support-user\n \"\"\"\n if not isinstance(target_user_ids, (list, tuple)):\n raise TwitcaspyException(\"target_user_ids must be list or tuple, not \"\n + type(target_user_ids).__name__)\n post_data = {'target_user_ids': target_user_ids}\n return self.request(\n 'PUT', '/support', post_data=post_data, **kwargs)\n\n @payload(removed_count=['raw', False])\n def unsupport_user(self, target_user_ids=None, **kwargs):\n \"\"\"unsupport_user(target_user_ids=None)\n\n | Release the supporter status of the specified user.\n\n Parameters\n ----------\n target_user_ids: :class:`list` or :class:`tuple`\n | An array of target user id or screen_id\n | The number of elements in the array must be 20 or less.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **removed_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Number of cases where supporters were released.\n\n Raises\n ------\n TwitcaspyException\n When target_user_ids is not a :class:`list` or :class:`tuple`\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#support-user\n \"\"\"\n if not isinstance(target_user_ids, (list, tuple)):\n raise TwitcaspyException(\"target_user_ids must be list or tuple, not \"\n + type(target_user_ids).__name__)\n post_data = {'target_user_ids': target_user_ids}\n return self.request(\n 'PUT', '/unsupport', 
post_data=post_data, **kwargs)\n\n @payload(total=['raw', False], supporting=['supporter', True])\n def supporting_list(self, *, id=None, screen_id=None, **kwargs):\n \"\"\"supporting_list(*, id=None, screen_id=None, offset=0, limit=20)\n\n | Get a list of users supported by the specified user.\n | |id_screenid|\n\n Parameters\n ----------\n id: :class:`str`\n |id|\n |id_notice|\n screen_id: :class:`str`\n |screen_id|\n offset(optional): :class:`int`\n | Position from the beginning\n | It can be specified in the range of 0 or more.(default is 0.)\n limit(optional): :class:`int`\n | Maximum number of acquisitions\n | It can be specified in the range of 1 to 20.(default is 20.)\n | (In some cases,\n it may return less than the specified number of support users.)\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **total** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Total number of records.\n (It may differ from the actual number that can be obtained)\n | **supporting** : :class:`list` of :class:`~twitcaspy.models.Supporter`\n\n Raises\n ------\n TwitcaspyException\n If both id and screen_id are not specified\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#supporting-list\n \"\"\"\n target_id = id if id is not None else screen_id\n if target_id is None:\n raise TwitcaspyException(\n 'Either an id or screen_id is required for this method.')\n return self.request(\n 'GET', f'/users/{target_id}/supporting',\n endpoint_parameters=('offset', 'limit'), **kwargs)\n\n @payload(total=['raw', False], supporters=['supporter', True])\n def supporter_list(self, sort='ranking', *, id=None, screen_id=None, **kwargs):\n \"\"\"supporter_list(sort='ranking', *, id=None, screen_id=None,\\\n offset=0, limit=20)\n\n | Get a list of users who support the specified user.\n | |id_screenid|\n\n Parameters\n ----------\n sort: :class:`str`\n | Sort order\n | `sort` must be one of the following:\n | 'new' : New arrival order\n | 
'ranking' : Contribution order\n id: :class:`str`\n |id|\n |id_notice|\n screen_id: :class:`str`\n |screen_id|\n offset(optional): :class:`int`\n | Position from the beginning\n | It can be specified in the range of 0 or more.(default is 0.)\n limit(optional): :class:`int`\n | Maximum number of acquisitions\n | It can be specified in the range of 1 to 20.(default is 20.)\n | (In some cases,\n it may return less than the specified number of support users.)\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **total** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Total number of records.\n (It may differ from the actual number that can be obtained)\n | **supporters** : :class:`list` of :class:`~twitcaspy.models.Supporter`\n\n Raises\n ------\n TwitcaspyException\n If both id and screen_id are not specified.\n TwitcaspyException\n When sort is not a 'new' or 'ranking'.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#supporter-list\n \"\"\"\n target_id = id if id is not None else screen_id\n if target_id is None:\n raise TwitcaspyException(\n 'Either an id or screen_id is required for this method.')\n if sort == 'new' or sort == 'ranking':\n kwargs['sort'] = sort\n else:\n raise TwitcaspyException(\"sort must be 'new' or 'ranking', not \"\n + sort)\n return self.request(\n 'GET', f'/users/{target_id}/supporters',\n endpoint_parameters=('sort', 'offset', 'limit'), **kwargs)\n\n @payload(categories=['category', True])\n def get_categories(self, **kwargs):\n \"\"\"get_categories(lang='ja')\n\n | Get only the categories being streamed.\n\n Parameters\n ----------\n lang: :class:`str`\n | Language to search\n | `lang` must be one of the following:\n | 'ja' : Japanese\n | 'en' : English\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **categories** : :class:`list` of :class:`~twitcaspy.models.Category`\n\n Raises\n ------\n TwitcaspyException\n When lang is not a 'ja' 
or 'en'.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-categories\n \"\"\"\n\n if 'lang' in kwargs:\n if not (kwargs['lang'] == 'ja' or kwargs['lang'] == 'en'):\n raise TwitcaspyException(\"lang must be 'ja' or 'en', not \"\n + kwargs['lang'])\n else:\n kwargs['lang'] = 'ja'\n return self.request(\n 'GET', '/categories', endpoint_parameters=('lang'), **kwargs)\n\n @payload(users=['user', True])\n def search_users(self, **kwargs):\n \"\"\"search_users(words, *, limit=10, lang='ja')\n\n | Search for users.\n\n Parameters\n ----------\n words: :class:`str` or :class:`list` or :class:`tuple`\n | Multiple words are ANDed by separating them with space.\n lang: :class:`str`\n | Language setting of the user to be searched.\n | Currently only \"ja\" is supported.\n | 'ja' : Japanese\n limit(optional): :class:`int`\n | Maximum number of acquisitions\n | It can be specified in the range of 1 to 50.(default is 10.)\n | (In some cases,\n it may return less than the specified number of support users.)\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **users** : :class:`list` of :class:`~twitcaspy.models.User`\n\n Raises\n ------\n TwitcaspyException\n When lang is not a 'ja'.\n TwitcaspyException\n When words are not specified\n TwitcaspyException\n When words is not a :class:`str`, :class:`list` or :class:`tuple`\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#search-users\n \"\"\"\n if 'words' in kwargs:\n if isinstance(kwargs['words'], str):\n kwargs['words'] = kwargs['words'].split(' ')\n elif isinstance(kwargs['words'], (list, tuple)):\n kwargs['words'] = kwargs['words']\n else:\n raise TwitcaspyException(\"words must be str, list or tuple, not \"\n + type(kwargs['words']).__name__)\n else:\n raise TwitcaspyException(\"You must specify `words`.\")\n kwargs['words'] = ' '.join(kwargs['words'])\n if 'lang' in kwargs:\n if not kwargs['lang'] == 'ja':\n raise TwitcaspyException(\"lang must be 
'ja', not \"\n + kwargs['lang'])\n else:\n kwargs['lang'] = 'ja'\n return self.request(\n 'GET', '/search/users',\n endpoint_parameters=('words', 'lang', 'limit'), **kwargs)\n\n @payload(movies=['live', True])\n def search_live_movies(self, **kwargs):\n \"\"\"search_live_movies(*, type='word', content)\n\n | Search for live concerts being streamed.\n\n Parameters\n ----------\n limit(optional): :class:`int`\n | Maximum number of acquisitions\n | It can be specified in the range of 1 to 100.(default is 10.)\n | (In some cases,\n it may return less than the specified number of support users.)\n type: :class:`str`\n | Search type\n | `type` must be one of the following:\n | 'tag' : Tag search\n | 'word' : Word search\n | 'category' : Subcategory ID match search\n | 'new' : New Search\n | 'recommend' : recommend search\n context: :class:`int`, :class:`str`, :class:`list` or :class:`tuple`\n | The type of context is as follows:\n | When type is tag or word : :class:`str`, :class:`list` or :class:`tuple`\n | When type is category : :class:`int`\n | Not required when type is new or recommend.\n lang: :class:`str`\n | Language setting of the user to be searched.\n | Currently only \"ja\" is supported.\n | 'ja' : Japanese\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **movies** : :class:`list` of :class:`~twitcaspy.models.Live`\n\n Raises\n ------\n TwitcaspyException\n When type are not specified.\n TwitcaspyException\n When type is not a `tag`, `word`, `category`, `new` or `recommend`.\n TwitcaspyException\n No context specified when type is tag, word or category.\n TwitcaspyException\n When lang is not a 'ja'.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#search-live-movies\n \"\"\"\n #Is type specified\n if 'type' in kwargs:\n #Whether type is tag, word or category\n if kwargs['type'] in ['tag', 'word', 'category']:\n #Is context specified\n if 'context' in kwargs:\n if kwargs['type'] == 'category':\n 
#When the type of type is not int\n if not isinstance(kwargs['context'], int):\n raise TwitcaspyException(\"context must be int, not \"\n + type(kwargs['context']).__name__)\n else:\n #Is the context type str\n if isinstance(kwargs['context'], str):\n pass\n #Is the context type list or tuple\n elif isinstance(kwargs['context'], (list, tuple)):\n kwargs['context'] = ' '.join(kwargs['context'])\n #When the type of type is not str, list or tuple\n else:\n raise TwitcaspyException(\"context must be str, list or tuple, not \"\n + type(kwargs['context']).__name__)\n else:\n raise TwitcaspyException(\"You must specify `context`.\")\n kwargs['context'] = kwargs['context'].split(' ')\n #Whether type is new or recommend\n elif kwargs['type'] in ['new', 'recommend']:\n pass\n else:\n raise TwitcaspyException(\"type must be `tag`, `word`, `category`, `new` or `recommend`, not \"\n + type(kwargs['type']).__name__)\n else:\n raise TwitcaspyException(\"You must specify `type`.\")\n if 'lang' in kwargs:\n if not kwargs['lang'] == 'ja':\n raise TwitcaspyException(\"lang must be 'ja', not \"\n + kwargs['lang'])\n else:\n kwargs['lang'] = 'ja'\n return self.request(\n 'GET', '/search/lives',\n endpoint_parameters=('limit', 'type', 'context', 'lang'), **kwargs)\n\n @payload('movie', signature=['raw', False], broadcaster=['user', False])\n def incoming_webhook(self, data, secure=True, **kwargs):\n \"\"\"incoming_webhook(data, secure=True)\n\n | Parses notifications to the specified WebHook URL.\n\n Hint\n ----\n By using the WebHook API, it is possible to notify the distribution\n start / end event of a specific distributor to the WebHook URL\n specified in advance. 
|google_translate_ja_en|\n\n Tip\n ---\n |no_auth|\n\n Note\n ----\n Method : POST\n\n Parameters\n ----------\n data: :class:`dict`\n | WebHook Payload\n secure: :class:`bool`\n | Enable signature verification function setting.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | **signature** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n | **movie** : :class:`~twitcaspy.models.Movie`\n | **broadcaster** : :class:`~twitcaspy.models.User`\n\n Raises\n ------\n TwitcaspyException\n When `secure` is true and `signature` do not match.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#incoming-webhook\n \"\"\"\n webhook = self.parser.parse(\n data, api=self, payload_type=kwargs['payload_type'])\n if secure and self.signature != webhook.signature:\n raise TwitcaspyException('Invalid signature')\n return webhook\n\n @payload(all_count=['raw', False], webhooks=['webhook', True])\n def get_webhook_list(self, **kwargs):\n \"\"\"get_webhook_list(limit=50, offset=0, user_id)\n\n | Get the list of WebHooks associated with the application.\n\n Tip\n ---\n | It can only be executed on an Application-only authentication.\n\n Parameters\n ----------\n limit(optional): :class:`int`\n | Maximum number of acquisitions\n | It can be specified in the range of 1 to 50.(default is 50.)\n | (In some cases,\n it may return less than the specified number of webhooks.)\n offset(optional): :class:`int`\n | Position from the beginning\n | It can be specified in the range of 0 or more.(default is 0.)\n user_id(optional): :class:`str`\n | Target user id\n\n Hint\n ----\n | The limit and offset parameters are valid\n only if user_id is not specified.\n\n Note\n ----\n | For user_id, you can specify a numeric id (e.g.: 182224938) or\n a character string (e.g.: twitcasting_jp).\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **all_count** : :class:`~twitcaspy.models.Raw` (:class:`int`)\n Number of registered 
WebHooks\n | **webhooks** : :class:`list` of :class:`~twitcaspy.models.WebHook`\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-webhook-list\n \"\"\"\n return self.request(\n 'GET', '/webhooks',\n endpoint_parameters=('limit', 'offset', 'user_id'), **kwargs)\n\n @payload(user_id=['raw', False], added_events=['raw', True])\n def register_webhook(self, user_id, events, **kwargs):\n \"\"\"register_webhook(user_id, event)\n\n | Register a new WebHook.\n\n Tip\n ---\n | It can only be executed on an Application-only authentication.\n\n Parameters\n ----------\n user_id: :class:`str`\n | Target user id\n events: :class:`list` or :class:`tuple`\n | Event type to hook\n | The content of the events must be :class:`str`.\n | The content of the events must be one of the following:\n | 'livestart' : Live start\n | 'liveend' : Live end\n\n Note\n ----\n | For user_id, you can specify a numeric id (e.g.: 182224938) or\n a character string (e.g.: twitcasting_jp).\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **user_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n User ID\n | **added_events** : :class:`list` of :class:`str`\n Registered event type\n\n Raises\n ------\n TwitcaspyException\n When events is not a :class:`list` or :class:`tuple`\n TwitcaspyException\n When events is not a `livestart`, `liveend`.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-webhook-list\n \"\"\"\n if not isinstance(events, (list, tuple)):\n raise TwitcaspyException(\"events must be list or tuple, not \"\n + type(events).__name__)\n events = list(set(events))\n for event in events:\n if event not in ['livestart', 'liveend']:\n raise TwitcaspyException(\"events must be `livestart` or `liveend`, not \"\n + event)\n post_data = {'user_id': user_id, 'events': events}\n return self.request(\n 'POST', '/webhooks', post_data=post_data, **kwargs)\n\n @payload(user_id=['raw', False], deleted_events=['raw', True])\n 
def remove_webhook(self, user_id, events, **kwargs):\n \"\"\"remove_webhook(user_id, event)\n\n | Remove WebHook.\n\n Tip\n ---\n | It can only be executed on an Application-only authentication.\n\n Parameters\n ----------\n user_id: :class:`str`\n | Target user id\n events: :class:`list` or :class:`tuple`\n | Event type to hook\n | The content of the events must be :class:`str`.\n | The content of the events must be one of the following:\n | 'livestart' : Live start\n | 'liveend' : Live end\n\n Note\n ----\n | For user_id, you can specify a numeric id (e.g.: 182224938) or\n a character string (e.g.: twitcasting_jp).\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **user_id** : :class:`~twitcaspy.models.Raw` (:class:`str`)\n User ID\n | **deleted_events** : :class:`list` of :class:`str`\n Event type to delete hook\n\n Raises\n ------\n TwitcaspyException\n When events is not a :class:`list` or :class:`tuple`\n TwitcaspyException\n When events is not a `livestart`, `liveend`.\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#remove-webhook\n \"\"\"\n if not isinstance(events, (list, tuple)):\n raise TwitcaspyException(\"events must be list or tuple, not \"\n + type(events).__name__)\n events = list(set(events))\n for event in events:\n if event not in ['livestart', 'liveend']:\n raise TwitcaspyException(\"events must be `livestart` or `liveend`, not \"\n + event)\n kwargs['user_id'] = user_id\n kwargs['events[]'] = events#'&'.join([f'events[]={event}' for event in events])\n return self.request(\n 'DELETE', '/webhooks',\n endpoint_parameters=('user_id', 'events[]'), **kwargs)\n\n @payload(\n enabled=['raw', False], url=['raw', False], stream_key=['raw', False])\n def get_rtmp_url(self, **kwargs):\n \"\"\"get_rtmp_url()\n\n | Obtain the URL (RTMP) for stream of the user associated with the access token.\n\n Tip\n ---\n | It can only be executed on an non-Application-only authentication.\n\n Returns\n 
-------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **enabled** : :class:`~twitcaspy.models.Raw` (:class:`bool`)\n Whether RTMP stream is enabled\n | **url** : :class:`str` or :class:`None`\n URL for RTMP stream\n | **stream_key** : :class:`str` or :class:`None`\n RTMP stream key\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-rtmp-url\n \"\"\"\n return self.request('GET', '/rtmp_url', **kwargs)\n\n @payload(\n enabled=['raw', False], url=['raw', False])\n def get_webm_url(self, **kwargs):\n \"\"\"get_webm_url()\n\n | Obtain the URL (WebM, WebSocket) for stream of the user associated with the access token.\n\n Tip\n ---\n | It can only be executed on a non-Application-only authentication.\n\n Returns\n -------\n :class:`~twitcaspy.models.Result`\n | |attribute|\n | |latelimit|\n | **enabled** : :class:`~twitcaspy.models.Raw` (:class:`bool`)\n Whether WebM stream is enabled\n | **url** : :class:`str` or :class:`None`\n URL for WebM stream\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#get-webm-url\n \"\"\"\n return self.request('GET', '/webm_url', **kwargs)\n"
},
{
"alpha_fraction": 0.5390946269035339,
"alphanum_fraction": 0.5445815920829773,
"avg_line_length": 21.78125,
"blob_id": "809e38212ede4df79044e3ea9e5384b63052cf87",
"content_id": "7041e56e7f70a64404b4bb805cb53a6db607cc31",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 729,
"license_type": "permissive",
"max_line_length": 53,
"num_lines": 32,
"path": "/twitcaspy/models/live.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nfrom .movie import Movie\n\nfrom .user import User\n\nclass Live(Model):\n \"\"\"Live Object\n\n Attributes\n ----------\n movie: :class:`~twitcaspy.models.Movie`\n broadcaster: :class:`~twitcaspy.models.User`\n tags: :class:`list`\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n live = cls(api)\n setattr(live, '_json', json)\n for k, v in json.items():\n if k == 'movie':\n setattr(live, k, Movie.parse(api, v))\n elif k == 'broadcaster':\n setattr(live, k, User.parse(api, v))\n else:\n setattr(live, k, v)\n return live\n"
},
{
"alpha_fraction": 0.6377952694892883,
"alphanum_fraction": 0.6535432934761047,
"avg_line_length": 18.538461685180664,
"blob_id": "1ce20e3a09554518dfa8396f010a7b40ab69f804",
"content_id": "05b848762377980c4be690c76d8f1433e2f83f7d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 254,
"license_type": "permissive",
"max_line_length": 46,
"num_lines": 13,
"path": "/twitcaspy/parsers/jsonparser.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .parser import Parser\n\nclass JsonParser(Parser):\n def __init__(self):\n pass\n\n @classmethod\n def parse(self, payload, *args, **kwargs):\n return payload.json()\n"
},
{
"alpha_fraction": 0.7298507690429688,
"alphanum_fraction": 0.7402985095977783,
"avg_line_length": 24.769229888916016,
"blob_id": "b1938f7e0a193f524d8c8da7924cd750a8cca6e3",
"content_id": "2ffc4587d1ed881eb7a6d4f944c10feaa94ed1f1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 670,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 26,
"path": "/twitcaspy/__init__.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\n\"\"\"\nTwitcaspy : Twitcasting API library\n\"\"\"\n__version__ = '1.1.0'\n__author__ = 'Alma-field'\n__license__ = 'MIT'\n\nfrom twitcaspy.api import API\nfrom twitcaspy.auth import (\n GrantAuthHandler, ImplicitAuthHandler, AppAuthHandler\n)\nfrom twitcaspy.errors import (\n BadRequest, Forbidden, HTTPException, NotFound, TooManyRequests,\n TwitcaspyException, TwitcastingServerError, Unauthorized\n)\nfrom twitcaspy.models import (\n App, Category, Comment, ModelFactory, Gift, LateLimit,\n Live, Movie, Raw, SubCategory, Supporter, User, WebHook\n)\n\n# Global, unauthenticated instance of API\napi = API()\n"
},
{
"alpha_fraction": 0.6293103694915771,
"alphanum_fraction": 0.6465517282485962,
"avg_line_length": 16.846153259277344,
"blob_id": "9962cdf8a19cdd663bbfa30c8d1d2bebe7b5d9b2",
"content_id": "99b7a4e578aedc8bd82b4b95fa106246aa18065a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 232,
"license_type": "permissive",
"max_line_length": 52,
"num_lines": 13,
"path": "/twitcaspy/models/result.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nclass Result(Model):\n \"\"\"API Result Object\n\n Attributes\n ----------\n latelimit : :class:`~twitcaspy.models.LateLimit`\n \"\"\"\n"
},
{
"alpha_fraction": 0.7832512259483337,
"alphanum_fraction": 0.802955687046051,
"avg_line_length": 17.454545974731445,
"blob_id": "2991845cf4fd403bedda81dc5d15ae88a93c3d80",
"content_id": "3c9e6752cc64888670f15edcb2ddf39d91c6652b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 203,
"license_type": "permissive",
"max_line_length": 36,
"num_lines": 11,
"path": "/twitcaspy/parsers/__init__.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .jsonparser import JsonParser\n\nfrom .modelparser import ModelParser\n\nfrom .parser import Parser\n\nfrom .rawparser import RawParser\n"
},
{
"alpha_fraction": 0.6173912882804871,
"alphanum_fraction": 0.6391304135322571,
"avg_line_length": 22,
"blob_id": "22725a8a9ed29c7e7129068c2ac9b6567b5860eb",
"content_id": "b28b3da62fcf1d7ad1f84230a298ab9d0ed79266",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 690,
"license_type": "permissive",
"max_line_length": 57,
"num_lines": 30,
"path": "/twitcaspy/auth/app.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom .auth import AuthHandler\n\nfrom .oauth import OAuth2Basic\n\nclass AppAuthHandler(AuthHandler):\n \"\"\"\n Application-only authentication handler\n\n Parameters\n ----------\n client_id: :class:`str`\n |client_id|\n client_secret: :class:`str`\n |client_secret|\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#access-token\n \"\"\"\n\n def __init__(self, client_id, client_secret):\n super().__init__(client_id, client_secret)\n self.auth = OAuth2Basic(client_id, client_secret)\n"
},
{
"alpha_fraction": 0.5600335597991943,
"alphanum_fraction": 0.5642317533493042,
"avg_line_length": 24.340425491333008,
"blob_id": "e44f267fcd5279fe35630d1998ecc23a8e52db21",
"content_id": "83cf32ed36264e82e6f173d890808e6e3734edbc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1191,
"license_type": "permissive",
"max_line_length": 62,
"num_lines": 47,
"path": "/twitcaspy/models/gift.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nclass Gift(Model):\n \"\"\"Gift Object\n\n Attributes\n ----------\n id: :class:`str`\n | Item transmission ID\n message: :class:`str`\n | Message body when sending an item.\n item_image: :class:`str`\n | Item image URL\n item_sub_image: :class:`str` or :class:`None`\n | If there is an image selected when sending the item,\n the URL of the image.\n item_id: :class:`str`\n | Item ID\n item_mp: :class:`str`\n | Item MP\n item_name: :class:`str`\n | Item name\n user_image: :class:`str`\n | URL of user icon\n user_screen_id: :class:`str`\n | User's screen_id at the time the item was submitted.\n user_screen_name: :class:`str`\n | Human readable screen_id\n user_name: :class:`str`\n | Human readable user name\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#gift-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n gift = cls(api)\n setattr(gift, '_json', json)\n for k, v in json.items():\n setattr(gift, k, v)\n return gift\n"
},
{
"alpha_fraction": 0.7412368059158325,
"alphanum_fraction": 0.7537760138511658,
"avg_line_length": 46.41891860961914,
"blob_id": "ca5035b3c6983d16d79f2535ce5b6be121c3db5c",
"content_id": "aa2b6493b269aed9b534c57b189b4a677b0bca65",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4000,
"license_type": "permissive",
"max_line_length": 170,
"num_lines": 74,
"path": "/README_JA.md",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# twitcaspy\n[![license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/Alma-field/twitcaspy/blob/master/LICENSE)\n[![test](https://github.com/Alma-field/twitcaspy/actions/workflows/test.yml/badge.svg?branch=master)](https://github.com/Alma-field/twitcaspy/actions/workflows/test.yml)\n[![Deploy](https://github.com/Alma-field/twitcaspy/actions/workflows/deploy.yml/badge.svg)](https://github.com/Alma-field/twitcaspy/actions/workflows/deploy.yml)\n[![Documentation Status](https://readthedocs.org/projects/twitcaspy/badge/?version=latest)](http://twitcaspy.alma-field.com/ja/latest/?badge=latest)\n[![GitHub issues open](https://img.shields.io/github/issues/Alma-field/twitcaspy.svg)](https://github.com/Alma-field/twitcaspy/issues?q=is%3Aopen+is%3Aissue)\n[![GitHub issues close](https://img.shields.io/github/issues-closed-raw/Alma-field/twitcaspy.svg)](https://github.com/Alma-field/twitcaspy/issues?q=is%3Aclose+is%3Aissue)\n[![PyPI Version](https://img.shields.io/pypi/v/twitcaspy?label=PyPI)](https://pypi.org/project/twitcaspy/)\n[![Python Versions](https://img.shields.io/pypi/pyversions/twitcaspy?label=Python)](https://pypi.org/project/twitcaspy/)\n\nPython用Twitcattingクライアントライブラリ\nPython 3.7 - 3.9 がサポートされています。\n\n## Other language version/他言語版\n - [English/英語](README.md)\n - [Japanese/日本語](README_JA.md)\n\n## ドキュメント\n - [開発版](https://twitcaspy.alma-field.com/ja/latest)\n - [最新版 (v1.1.0)](https://twitcaspy.alma-field.com/ja/stable)\n - [v1.1.0](https://twitcaspy.alma-field.com/ja/1.1.0)\n - [v1.0.2](https://twitcaspy.alma-field.com/ja/1.0.2)\n - [v1.0.1](https://twitcaspy.alma-field.com/ja/1.0.1)\n - [v1.0.0](https://twitcaspy.alma-field.com/ja/1.0.0)\n\n## インストール\nPyPIから最新バージョンはpipを用いてインストールできます。\n```\npip install twitcaspy\n```\n\nGitHubからリポジトリのクローンを作成することで、最新の開発バージョンをインストールすることもできます。\n```\ngit clone https://github.com/Alma-field/twitcaspy.git\ncd twitcaspy\npip install .\n```\n\nまたは、GitHubリポジトリから直接インストールします。\n```\npip install 
git+https://github.com/Alma-field/twitcaspy.git\n```\n\n## 例\nアプリケーションスコープでの実行例です。 \n***@twitcasting_jp*** のアカウント名を取得します。\n```python\nfrom twitcaspy import API, AppAuthHandler\nauth = AppAuthHandler(client_id, client_secret)\napi = API(auth)\n\nprint(api.get_user_info(id='twitcasting_jp').user.name)\n# > ツイキャス公式\n```\n\nその他の例やコード全体は[examples](https://github.com/Alma-field/twitcaspy/tree/master/examples)内のコードをご覧ください。\n### 含まれている例\n - [Authorization](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth)\n - [AppAuthHandler](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth/app.py)\n - [GrantAuthHandler](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth/grant.py)\n - [ImplicitAuthHandler](https://github.com/Alma-field/twitcaspy/tree/master/examples/auth/implicit.py)\n - [Realtime API](https://github.com/Alma-field/twitcaspy/blob/master/examples/realtime)\n - [Webhook](https://github.com/Alma-field/twitcaspy/blob/master/examples/webhook)\n - [Server](https://github.com/Alma-field/twitcaspy/blob/master/examples/webhook/server.py)\n - [Client](https://github.com/Alma-field/twitcaspy/blob/master/examples/webhook/client.py)\n\n## 出典\nこのライブラリは以下を参考にしています:\n - [tweepy/tweepy](https://github.com/tweepy/tweepy) - Twitter for Python!\n - [tamago324/PyTwitcasting](https://github.com/tamago324/PyTwitcasting) - PyTwitcasting is a library for API v2 (β) of Twitcasting.\n\n## リンク\n - [Twitcasting API Documentation](https://apiv2-doc.twitcasting.tv/)\n - [API ChangeLog](https://github.com/twitcasting/PublicApiV2/blob/master/CHANGELOG.md)\n"
},
{
"alpha_fraction": 0.5562599301338196,
"alphanum_fraction": 0.5641838312149048,
"avg_line_length": 20.03333282470703,
"blob_id": "72016ebe79fbdd73ff4b3fa5379aee68979bf8eb",
"content_id": "dcd6d2c428c55d3a5608aa7674d279279b2a22a4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 631,
"license_type": "permissive",
"max_line_length": 48,
"num_lines": 30,
"path": "/twitcaspy/models/app.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nclass App(Model):\n \"\"\"Application Object\n\n Attributes\n ----------\n client_id: :class:`str`\n | Application client ID\n name: :class:`str`\n | Application name\n owner_user_id: :class:`str`\n | Application developer user ID\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#app-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n app = cls(api)\n setattr(app, '_json', json)\n for k, v in json.items():\n setattr(app, k, v)\n return app\n"
},
{
"alpha_fraction": 0.6158798336982727,
"alphanum_fraction": 0.6437768340110779,
"avg_line_length": 19.2608699798584,
"blob_id": "a281974cb273154b646f389e60d109a9bf6f2750",
"content_id": "0211b7a330946c7fcf0cd77ba3edd2a5a7124fcc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 466,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 23,
"path": "/docs/index.rst",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": ".. twitcaspy documentation master file, created by\n sphinx-quickstart on Thu Jun 17 12:41:16 2021.\n You can adapt this file completely to your liking, but it should at least\n contain the root `toctree` directive.\n\nTwitcaspy Documentation\n=======================\n\n.. toctree::\n :maxdepth: 2\n :caption: Contents:\n\n install.rst\n auth.rst\n api.rst\n models.rst\n exceptions.rst\n\nIndices and tables\n==================\n\n* :ref:`genindex`\n* :ref:`search`\n"
},
{
"alpha_fraction": 0.5790273547172546,
"alphanum_fraction": 0.5866261124610901,
"avg_line_length": 20.933332443237305,
"blob_id": "c4f38a6264152b1134042aa26f1b2e0c727b18e0",
"content_id": "a8a13c2142870c159499b5aed83f3c19cf91e93a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 658,
"license_type": "permissive",
"max_line_length": 57,
"num_lines": 30,
"path": "/twitcaspy/models/subcategory.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nfrom .model import Model\n\nclass SubCategory(Model):\n \"\"\"SubCategory Object\n\n Attributes\n ----------\n id: :class:`str`\n | SubCategory ID\n name: :class:`str`\n | SubCategory name\n count: :class:`int`\n | Number of subcategory streams\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#sub-category-object\n \"\"\"\n\n @classmethod\n def parse(cls, api, json):\n subcategory = cls(api)\n setattr(subcategory, '_json', json)\n for k, v in json.items():\n setattr(subcategory, k, v)\n return subcategory\n"
},
{
"alpha_fraction": 0.5771992802619934,
"alphanum_fraction": 0.5852782726287842,
"avg_line_length": 25.843374252319336,
"blob_id": "656b60f1713fcaf10c42865252ed3f4079e0a52c",
"content_id": "aaf5dfdf5bb4fb36c2b4ca4aed748ad9d9a5bc69",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2228,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 83,
"path": "/twitcaspy/auth/grant.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n#\n# based on tweepy(https://github.com/tweepy/tweepy)\n# Copyright (c) 2009-2021 Joshua Roesslein\n\nfrom requests_oauthlib import OAuth2Session\n\nfrom ..errors import TwitcaspyException\n\nfrom .auth import AuthHandler\n\nfrom .oauth import OAuth2Bearer\n\nclass GrantAuthHandler(AuthHandler):\n \"\"\"\n Authorization Code Grant handler\n\n Parameters\n ----------\n client_id: :class:`str`\n |client_id|\n client_secret: :class:`str`\n |client_secret|\n callback: :class:`str`\n |callback|\n state: :class:`str`\n |csrf_token|\n\n References\n ----------\n https://apiv2-doc.twitcasting.tv/#authorization-code-grant\n \"\"\"\n\n def __init__(self, client_id, client_secret, callback=None, *, state=None):\n super().__init__(client_id, client_secret)\n self.callback = callback\n self.state = state\n self.auth = None\n\n self.oauth = OAuth2Session(\n client_id,\n redirect_uri=callback,\n state=state\n )\n\n def get_authorization_url(self):\n \"\"\"Get the authorization URL to redirect the user\"\"\"\n try:\n url = self._get_oauth_url('authorize')\n authorization_url, self.state = self.oauth.authorization_url(url)\n return authorization_url\n except Exception as e:\n raise TwitcaspyException(e)\n\n def fetch_token(self, authorization_response):\n try:\n token = self.oauth.fetch_token(\n self._get_oauth_url('access_token'),\n authorization_response=authorization_response,\n include_client_id=self.client_id,\n client_secret=self.client_secret\n )\n self.auth = OAuth2Bearer(self.oauth.token['access_token'])\n except Exception as e:\n raise TwitcaspyException(e)\n\n def set_access_token(self, bearer_token):\n \"\"\"set_access_token(bearer_token)\n\n | Set bearer_token.\n\n Parameters\n ----------\n bearer_token: :class:`str`\n bearer_token to use\n\n Returns\n -------\n :class:`None`\n \"\"\"\n self.auth = OAuth2Bearer(bearer_token)\n"
},
{
"alpha_fraction": 0.6246246099472046,
"alphanum_fraction": 0.6366366147994995,
"avg_line_length": 26.75,
"blob_id": "21d4480fcd102787e76ca2e310cb4f1566016a65",
"content_id": "c6f544986bae80d0438baad1d9333509f4394aff",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 333,
"license_type": "permissive",
"max_line_length": 57,
"num_lines": 12,
"path": "/twitcaspy/parsers/parser.py",
"repo_name": "Alma-field/twitcaspy",
"src_encoding": "UTF-8",
"text": "# Twitcaspy\n# Copyright 2021 Alma-field\n# See LICENSE for details.\n\nclass Parser:\n def parse(self, payload, *args, **kwargs):\n \"\"\"\n Parse the response payload and return the result.\n Returns a tuple that contains the result data\n (or None if not present).\n \"\"\"\n raise NotImplementedError\n"
}
] | 48 |
valent-in-to/justiciacordoba-scraper | https://github.com/valent-in-to/justiciacordoba-scraper | 3157bdbd786fd6ae94c138e8add8e98272ca000a | 09d45d77b8262a8193929d11b13be7138a9e8b5e | f105b241cad1b48da356b5a94501ef16a815ceae | refs/heads/main | 2023-06-14T20:44:24.912125 | 2021-07-06T23:54:16 | 2021-07-06T23:54:16 | 382,722,768 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6454293727874756,
"alphanum_fraction": 0.6518218517303467,
"avg_line_length": 30.476511001586914,
"blob_id": "f3dcbfcb8e6255006b2ea7342b68c3a47a739273",
"content_id": "0ebfef8bf962c43dcfb68d255fa6a2e5c6288fa5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4702,
"license_type": "no_license",
"max_line_length": 148,
"num_lines": 149,
"path": "/scrapy-coco.py",
"repo_name": "valent-in-to/justiciacordoba-scraper",
"src_encoding": "UTF-8",
    "text": "# for scrapping \nimport time\nfrom datetime import date\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import Select\nimport pandas as pd\nimport json\n\n# for database\nfrom sqlalchemy import create_engine, Column, Integer, String, Boolean, DateTime\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import sessionmaker, relationship\n\nBase = declarative_base()\n\nclass Day(Base):\n    __tablename__= \"days\"\n    \n    mes = Column('mes', Integer)\n    dia = Column('dia', Integer)\n    inhabil = Column('inhabil', Boolean)\n    razon = Column('razon', String(10000))\n    created_at = Column('created_at', DateTime, default=date.today())\n    id = Column('id', Integer, primary_key=True)\n\n\nengine = create_engine('mysql://user:password!@localhost:3306/mysql', echo=True)\nBase.metadata.create_all(bind=engine)\nSession = sessionmaker(bind=engine)\n\nsession = Session()\n\n\n# Setting up webdriver\n\npath = \"./chromedriver/chromedriver\"\nchrome_options = Options()\nchrome_options.add_argument(\"--disable-extensions\")\nchrome_options.add_argument(\"--disable-gpu\")\nchrome_options.add_argument(\"--no-sandbox\")\nchrome_options.add_argument(\"--headless\")\n\ndriver = webdriver.Chrome(path, options=chrome_options)\ndriver.get(\"https://www.justiciacordoba.gob.ar/justiciacordoba/servicios/DiasInhabiles.aspx\")\n\n# Scraping\n# Trabajando la lista de meses y el boton de buscar\nmeses_dd = Select(driver.find_element_by_xpath('.//*[@id=\"ddlMeses\"]'))\nbuscar_btn = driver.find_element_by_xpath('.//*[@id=\"btnBuscar\"]')\n\nlista_tabla_cruda = []\nlista_tabla_limpia = []\nlista_completa = []\n\n# Iterando para ejecutar el scraping en todos los meses del año\nfor i in range(1,13):\n    meses_dd.select_by_value(str(i))\n    buscar_btn.click()\n\n    time.sleep(1.5)\n\n    # Obtener tabla de días hábiles e inhabiles en 2 objs. de selenium\n    ## obtener tabla con razones de los feriados\n\n    tabla_cruda = driver.find_element_by_xpath('.//*[@id=\"datepicker\"]/div')\n    inhabiles = tabla_cruda.find_elements_by_css_selector('[title=inhabil]')\n\n    # Obtener PANDAS dataframe y eliminar sábados y domingos, TRASNFORMARLO EN LISTA UNIDIMENSIONAL SIN NESTED LISTS\n    tabla_limpia = pd.read_html(driver.page_source)[0].drop('Sá', axis=1).drop('Do', axis=1).fillna(value=0).values.flatten().tolist()\n\n    tabla_confinde = pd.read_html(driver.page_source)[0].fillna(value=0).values.flatten().tolist()\n    lista_findes = []\n    for j in tabla_confinde:\n        if int(j) == 0:\n            pass\n        else:\n            if j not in tabla_limpia:\n                lista_findes.append(int(j))\n    \n\n\n    # Obtener Pandas dataframe con tabla de razones de los días inháblies, y transformarla en diccionario con fechas como keys y razones como values\n    try: # xq si no se rompe en los meses sin feriados\n        tabla_razones = pd.read_html(driver.page_source)[1].values.flatten().tolist()\n        dict_razones = dict(zip(tabla_razones[::2], tabla_razones[1::2]))\n        final_razones = []\n\n        # este loop es para obtener solo el dd como key y el dd/mm/aaaa + razon como valor\n        index = 0\n        newdict = {}\n        for key, value in dict_razones.items():\n            newdict[int(key[:2])] = key + ' ' + value\n            index += 1\n        \n        dict_razones = newdict\n        \n    except:\n        pass\n\n    # Transformo lista de str devuelta x pandas en ints. Remuevo acá los nan xq no sé usar el method dropna en el dataframe\n    lista_dias_semana = []\n    for j in tabla_limpia:\n        if int(j) == 0:\n            pass\n        else:\n            lista_dias_semana.append(int(j))\n    #except:\n        #pass\n    \n\n    lista_dias_inhabiles = []\n    for index, dia in enumerate(inhabiles):\n\n        if int(dia.text) not in lista_dias_semana:\n            pass\n        else:\n            lista_dias_inhabiles.append(int(dia.text))\n\n    lista_completa.append({\n        \"mes\": i,\n        \"fines_de_semana\": lista_findes,\n        \"dias_semana\": lista_dias_semana,\n        \"dias_inhabiles\": lista_dias_inhabiles,\n        \"razones\": dict_razones\n    }) \n    \n# print(json.dumps(lista_completa))\n\n# transformo al objeto final\n\nfinal_list = []\nsession.execute('delete from days')\nsession.commit()\nfor month in lista_completa:\n\n    for day in month[\"dias_semana\"]:\n        if day in month[\"dias_inhabiles\"]:\n            razon = month[\"razones\"][day]\n\n            db_day = Day(mes=month[\"mes\"],dia=day ,inhabil=True, razon=razon)\n        else:\n            db_day = Day(mes=month[\"mes\"],dia=day, inhabil=False)\n        session.add(db_day)\n\nsession.commit()\nsession.close()\nprint(\"success\")\ndriver.quit()\n\n\n\n"
},
{
"alpha_fraction": 0.47058823704719543,
"alphanum_fraction": 0.7058823704719543,
"avg_line_length": 16,
"blob_id": "985b1a1d606da59f86a7cf9c1b3e492602ee6956",
"content_id": "74c1bdad3ff14e6e3dc7b7781bd45f8189ea8bd1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 51,
"license_type": "no_license",
"max_line_length": 18,
"num_lines": 3,
"path": "/requirements.txt",
"repo_name": "valent-in-to/justiciacordoba-scraper",
"src_encoding": "UTF-8",
"text": "selenium==3.141.0\npandas==1.2.4\nSQLAlchemy==1.4.20\n"
},
{
"alpha_fraction": 0.7611241340637207,
"alphanum_fraction": 0.7704917788505554,
"avg_line_length": 212.5,
"blob_id": "b8c2277bae31b2fe7759c9e6c211ea200e03309e",
"content_id": "75a87e8e3c5992903248e884814e0190c6088d3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 439,
"license_type": "no_license",
"max_line_length": 414,
"num_lines": 2,
"path": "/README.md",
"repo_name": "valent-in-to/justiciacordoba-scraper",
"src_encoding": "UTF-8",
"text": "# Scraper \nDescarga la información del calendario de dias hábiles oficial del poder judicial de Córdoba y otorga un objeto conteniendo un objeto por cada mes del año (por ahora sólo del 2021), y éste objeto \"mes\" tiene a su vez un objeto \"día\" por cada día de la semana (excluye fines de semana). Cada dia tiene una propiedad \"inhabil\", y, si esta es verdadera, una propiedad \"razón\" con la razón de porque el día es inhábil. "
}
] | 3 |
deblearn/dsne_single_shot | https://github.com/deblearn/dsne_single_shot | e4d562734bbc40ff6a68972e57ee79473c0b8fce | 135c8493a7cd5b45fe5352892042fedbf114b89a | e9c92ae0efebc370475b60d8a8988f4b4de5c093 | refs/heads/master | 2021-07-17T03:52:24.982937 | 2017-10-20T22:46:37 | 2017-10-20T22:46:37 | 107,640,437 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5834951400756836,
"alphanum_fraction": 0.6000000238418579,
"avg_line_length": 23.5238094329834,
"blob_id": "d2f548a9af8cc02a47115c0eeaa268528f3e1744",
"content_id": "0628078eeab4917f3b9b4bb4927e2d2d953d8d14",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1030,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 42,
"path": "/remote.py",
"repo_name": "deblearn/dsne_single_shot",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Oct 19 20:48:24 2017\n\n@author: gazula\n\"\"\"\n\nimport numpy as np\nfrom tsneFunctions import normalize_columns, tsne\n\n\ndef remote_site(args, computation_phase):\n shared_X = np.loadtxt(args[\"shared_X\"])\n # sharedLabel = np.loadtxt(args.run[\"shared_Label\"])\n no_dims = args[\"no_dims\"]\n initial_dims = args[\"initial_dims\"]\n perplexity = args[\"perplexity\"]\n\n shared_X = normalize_columns(shared_X)\n (sharedRows, sharedColumns) = shared_X.shape\n\n init_Y = np.random.randn(sharedRows, no_dims)\n\n # shared data computation in tsne\n shared_Y = tsne(\n shared_X,\n init_Y,\n sharedRows,\n no_dims,\n initial_dims,\n perplexity,\n computation_phase=computation_phase)\n\n with open(\"Y_values.txt\", \"w\") as f:\n for i in range(0, len(shared_Y)):\n f.write(str(shared_Y[i][0]) + '\\t')\n f.write(str(shared_Y[i][1]) + '\\n')\n\n args[\"shared_Y\"] = \"Y_values.txt\"\n\n return args\n"
},
{
"alpha_fraction": 0.6117708086967468,
"alphanum_fraction": 0.6319050192832947,
"avg_line_length": 30.24193572998047,
"blob_id": "1261185eed85142bad0d99758a694a8ae9f6bf2f",
"content_id": "3f76334b9e53613797b26a1a6099569756c736dc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1937,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 62,
"path": "/local.py",
"repo_name": "deblearn/dsne_single_shot",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Oct 19 20:49:04 2017\n\n@author: gazula\n\"\"\"\nimport numpy as np\nimport argparse\nimport json\nfrom tsneFunctions import normalize_columns, tsne\n\n\ndef local_site(args, computation_phase):\n\n shared_X = np.loadtxt(args[\"shared_X\"])\n shared_Y = np.loadtxt(args[\"shared_Y\"])\n no_dims = args[\"no_dims\"]\n initial_dims = args[\"initial_dims\"]\n perplexity = args[\"perplexity\"]\n sharedRows, sharedColumns = shared_X.shape\n\n # load high dimensional site 1 data\n parser = argparse.ArgumentParser(\n description='''read in coinstac args for local computation''')\n parser.add_argument('--run', type=json.loads, help='grab coinstac args')\n localSite1_Data = ''' {\n \"site1_Data\": \"Site_1_Mnist_X.txt\",\n \"site1_Label\": \"Site_1_Label.txt\"\n } '''\n site1args = parser.parse_args(['--run', localSite1_Data])\n Site1Data = np.loadtxt(site1args.run[\"site1_Data\"])\n (site1Rows, site1Columns) = Site1Data.shape\n\n # create combinded list by local and remote data\n combined_X = np.concatenate((shared_X, Site1Data), axis=0)\n combined_X = normalize_columns(combined_X)\n\n # create low dimensional position\n combined_Y = np.random.randn(combined_X.shape[0], no_dims)\n combined_Y[:shared_Y.shape[0], :] = shared_Y\n\n Y_plot = tsne(\n combined_X,\n combined_Y,\n sharedRows,\n no_dims=no_dims,\n initial_dims=initial_dims,\n perplexity=perplexity,\n computation_phase=computation_phase)\n\n # save local site data into file\n with open(\"local_site1.txt\", \"w\") as f1:\n for i in range(sharedRows, len(Y_plot)):\n f1.write(str(Y_plot[i][0]) + '\\t')\n f1.write(str(Y_plot[i][1]) + '\\n')\n\n # pass data to remote in json format\n localJson = ''' {\"local\": \"local_site1.txt\"} '''\n localY = parser.parse_args(['--run', localJson])\n\n return (localY.run)\n"
},
{
"alpha_fraction": 0.6086956262588501,
"alphanum_fraction": 0.6309921741485596,
"avg_line_length": 25.382352828979492,
"blob_id": "014f0c20221c16f4443e684379cc6d6145d65eba",
"content_id": "72d90efae72fb0aa61cb292321365f5117e9a9fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 897,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 34,
"path": "/dsne_single_shot.py",
"repo_name": "deblearn/dsne_single_shot",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Oct 18 08:59:22 2017\n\n@author: dsaha\n\"\"\"\nimport numpy as np\nimport json\nimport argparse\n\nfrom remote import remote_site\nfrom local import local_site\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(\n description='''read in coinstac args for remote computation''')\n parser.add_argument('--run', type=json.loads, help='grab coinstac args')\n\n sharedData = ''' {\n \"shared_X\": \"Shared_Mnist_X.txt\",\n \"shared_Label\": \"Shared_Label.txt\",\n \"no_dims\": 2,\n \"initial_dims\": 50,\n \"perplexity\" : 20.0\n } '''\n\n args = parser.parse_args(['--run', sharedData])\n\n remote_output = remote_site(args.run, computation_phase='remote')\n local_output = local_site(remote_output, computation_phase='local')\n\n #Receive local site data\n LY = np.loadtxt(local_output[\"local\"])\n"
}
] | 3 |
TrumpNat1on/Nathan-B.-Space-Invaders | https://github.com/TrumpNat1on/Nathan-B.-Space-Invaders | 21ef1156f8170f04cb7a6b2d6304527300145fad | a614270cbfa4a493222c1d81e1608ebb37cfe850 | ddc4dd2b39e4b4bd7466a78f1fb85e7ac8e81aef | refs/heads/master | 2018-06-28T12:42:38.629580 | 2018-06-01T15:45:54 | 2018-06-01T15:45:54 | 133,668,774 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5230534076690674,
"alphanum_fraction": 0.5488549470901489,
"avg_line_length": 26.854625701904297,
"blob_id": "1cc20a7ac574c8cb66c7cffbce4cadea9e34789f",
"content_id": "7ca05f1806e5247d9c50096c7fc9f983929db2f1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13100,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 454,
"path": "/SpaceInvadersVI.py",
"repo_name": "TrumpNat1on/Nathan-B.-Space-Invaders",
"src_encoding": "UTF-8",
    "text": "# Imports\r\nimport sys\r\nimport os\r\nimport pygame\r\nimport random\r\nfrom decimal import Decimal\r\n\r\nif getattr(sys, 'frozen', False):\r\n    current_path = sys._MEIPASS\r\nelse:\r\n    current_path = os.path.dirname(__file__)\r\n\r\nhigh_score_file = \"high_score.txt\"\r\nexists = os.path.exists(high_score_file)\r\n\r\n# Initialize game engine\r\npygame.init()\r\n\r\nnew_stage = True\r\n# Window\r\nWIDTH = 1280\r\nHEIGHT = 1000\r\nSIZE = (WIDTH, HEIGHT)\r\nTITLE = \"Space War\"\r\nscreen = pygame.display.set_mode(SIZE)\r\npygame.display.set_caption(TITLE)\r\n\r\n\r\n# Timer\r\nclock = pygame.time.Clock()\r\nrefresh_rate = 60\r\n\r\n# Colors\r\nRED = (255, 0, 0)\r\nWHITE = (255, 255, 255)\r\nBLACK = (0, 0, 0)\r\nYELLOW = (255, 255, 0)\r\nGREEN = (100, 255, 100)\r\n\r\n# Fonts\r\nFONT_SM = pygame.font.Font(current_path + \"/fonts/pdark.ttf\", 24)\r\nFONT_MD = pygame.font.Font(current_path + \"/fonts/pdark.ttf\", 32)\r\nFONT_LG = pygame.font.Font(current_path + \"/fonts/pdark.ttf\", 64)\r\nFONT_XL = pygame.font.Font(current_path + \"/fonts/pdark.ttf\", 96)\r\n\r\n# Images\r\nfalcon_img = pygame.image.load(current_path + '/images/falcon.png')\r\nXWing_img = pygame.image.load(current_path + '/images/X-wing.png')\r\nlaser_img = pygame.image.load(current_path + '/images/laserRed.png')\r\nmob_img = pygame.image.load(current_path + '/images/Tiefighter.png')\r\nbomb_img = pygame.image.load(current_path + '/images/bombGreen.png')\r\nufo_img = pygame.image.load(current_path + '/images/ufo.png')\r\nbackground_img = pygame.image.load(current_path + '/images/background.png')\r\nTieBomber_img = pygame.image.load(current_path + '/images/tie_bomber.png')\r\nVTieFighter_img = pygame.image.load(current_path + '/images/v_tie_fighter.png')\r\n\r\n# Sound Effects\r\nEXPLOSION = pygame.mixer.Sound(current_path + '/sounds/Explosion3.wav')\r\nSHOT_sound = pygame.mixer.Sound(current_path + '/sounds/Explosion5.wav')\r\nBombDrop_sound = pygame.mixer.Sound(current_path + '/sounds/Pew-Pew.wav')\r\n\r\nsongs = [current_path + \"/sounds/Cantina_Band.ogg\",\r\n         current_path + \"/sounds/Parade_Of_The_Ewoks.ogg\",\r\n         current_path + \"/sounds/Yoda_And_The_Younglings.ogg\"]\r\n\r\n# Stages\r\nSTART = 0\r\nPLAYING = 1\r\nEND = 2\r\n\r\nship_dead = False\r\nmobs_dead = False\r\nclass Shots():\r\n    global shots_taken, shots_hit\r\n    shots_taken = 0\r\n\r\n# Game classes\r\nclass Ship(pygame.sprite.Sprite):\r\n    def __init__(self, x, y, image):\r\n        super().__init__()\r\n\r\n        self.image = image\r\n        self.rect = self.image.get_rect()\r\n        self.rect.x = x\r\n        self.rect.y = y\r\n        \r\n        self.speed = 10\r\n        self.shield = 5\r\n\r\n    def move_left(self):\r\n        self.rect.x -= self.speed\r\n    \r\n    def move_right(self):\r\n        self.rect.x += self.speed\r\n\r\n    def shoot(self):\r\n        laser = Laser(laser_img)\r\n        laser.rect.centerx = self.rect.centerx\r\n        laser.rect.centery = self.rect.top\r\n        lasers.add(laser)\r\n\r\n    def update(self, bombs):\r\n        hit_list = pygame.sprite.spritecollide(self, bombs, True, pygame.sprite.collide_mask)\r\n\r\n        for hit in hit_list:\r\n            # play hit sound\r\n            self.shield -= 1\r\n\r\n        if self.rect.right < 0:\r\n            self.rect.left = WIDTH \r\n\r\n        elif self.rect.left > WIDTH:\r\n            self.rect.right = 0\r\n\r\n        hit_list = pygame.sprite.spritecollide(self, mobs, False, pygame.sprite.collide_mask)\r\n        if len(hit_list) > 0:\r\n            self.shield = 0\r\n            stage = END\r\n        \r\n        if self.shield == 0:\r\n            EXPLOSION.play()\r\n            self.kill()\r\n            stage = END\r\n        \r\nclass Laser(pygame.sprite.Sprite):\r\n    \r\n    def __init__(self, image):\r\n        super().__init__()\r\n\r\n        self.image = image\r\n        self.rect = self.image.get_rect()\r\n        \r\n        self.speed = 5\r\n\r\n    def update(self):\r\n        self.rect.y -= self.speed\r\n\r\n        if self.rect.bottom < 0:\r\n            self.kill()\r\n\r\nclass UFO(pygame.sprite.Sprite):\r\n    def __init__(self, x, y, image):\r\n        super().__init__()\r\n\r\n        self.image = image\r\n        self.mask = pygame.mask.from_surface(self.image)\r\n        self.rect = self.image.get_rect()\r\n        self.rect.x = x\r\n        self.rect.y = y\r\n        self.moving_right = True\r\n        self.speed = 40\r\n\r\n    def move(self):\r\n        for u in UFOs:\r\n            if self.moving_right:\r\n                u.rect.x += self.speed\r\n                if u.rect.right >= WIDTH:\r\n                    self.kill()\r\n\r\n    def update(self, lasers, player):\r\n        self.move()\r\n        hit_list = pygame.sprite.spritecollide(self, lasers, True, pygame.sprite.collide_mask)\r\n\r\n        for hit in hit_list:\r\n            EXPLOSION.play()\r\n            player.score += 100\r\n            self.kill()\r\n\r\n    \r\nclass Mob(pygame.sprite.Sprite):\r\n    def __init__(self, x, y, image):\r\n        super().__init__()\r\n\r\n        self.image = image\r\n        self.mask = pygame.mask.from_surface(self.image)\r\n        self.rect = self.image.get_rect()\r\n        self.rect.x = x\r\n        self.rect.y = y\r\n        self.shield = 3\r\n\r\n    def drop_bomb(self):\r\n        bomb = Bomb(bomb_img)\r\n        bomb.rect.centerx = self.rect.centerx\r\n        bomb.rect.centery = self.rect.bottom\r\n        bombs.add(bomb)\r\n    \r\n    def update(self, lasers, player):\r\n        hit_list = pygame.sprite.spritecollide(self, lasers, True, pygame.sprite.collide_mask)\r\n        \r\n        for hit in hit_list:\r\n            # play hit sound\r\n            self.shield -= 1\r\n\r\n            if self.shield == 0:\r\n                EXPLOSION.play()\r\n                player.score += 5\r\n                self.kill()\r\n        \r\n        if self.rect.top > HEIGHT:\r\n            self.kill()\r\n\r\n        if len(hit_list) == 0:\r\n            mobs_dead = True\r\n            done = True\r\n            stage = END\r\n\r\nclass Bomb(pygame.sprite.Sprite):\r\n    \r\n    def __init__(self, image):\r\n        super().__init__()\r\n\r\n        self.image = image\r\n        self.rect = self.image.get_rect()\r\n        \r\n        self.speed = 3\r\n\r\n    def update(self):\r\n        self.rect.y += self.speed\r\n\r\n        if self.rect.bottom < 0:\r\n            self.kill()\r\n        \r\nclass Fleet:\r\n\r\n    def __init__(self, mobs):\r\n        self.mobs = mobs\r\n        self.moving_right = True\r\n        self.speed = 8\r\n        self.bomb_rate = 60\r\n\r\n    def move(self):\r\n        reverse = False\r\n        \r\n        for m in mobs:\r\n            if self.moving_right:\r\n                m.rect.x += self.speed\r\n                if m.rect.right >= WIDTH:\r\n                    reverse = True\r\n            else:\r\n                m.rect.x -= self.speed\r\n                if m.rect.left <=0:\r\n                    reverse = True\r\n\r\n        if reverse == True:\r\n            self.moving_right = not self.moving_right\r\n            for m in mobs:\r\n                m.rect.y += 32\r\n        \r\n\r\n    def choose_bomber(self):\r\n        rand = random.randrange(0, self.bomb_rate)\r\n        all_mobs = mobs.sprites()\r\n        \r\n        if len(all_mobs) > 0 and rand == 0:\r\n            return random.choice(all_mobs)\r\n        else:\r\n            return None\r\n    \r\n    def update(self):\r\n        self.move()\r\n\r\n        bomber = self.choose_bomber()\r\n        if bomber != None:\r\n            bomber.drop_bomb()\r\n\r\ndef setup():\r\n    global player, lasers, mobs, UFOs, bombs, fleet, stage, ship, ufo, high_score, accuracy, shots_taken\r\n\r\n    ufo_position = random.randint(-2000, -200)\r\n    # Make game objects\r\n    ship = Ship(614.4, 853.3333, falcon_img)\r\n    ufo = UFO(ufo_position, 10, VTieFighter_img)\r\n    mob1 = Mob(80, 106.6666, TieBomber_img)\r\n    mob2 = Mob(272, 106.6666, TieBomber_img)\r\n    mob3 = Mob(464, 106.6666, TieBomber_img)\r\n    mob4 = Mob(656, 106.6666, TieBomber_img)\r\n    mob5 = Mob(848, 106.6666, TieBomber_img)\r\n    mob6 = Mob(1040, 106.6666, TieBomber_img)\r\n    mob7 = Mob(128, 298.3333, mob_img)\r\n    mob8 = Mob(288, 298.3333, mob_img)\r\n    mob9 = Mob(448, 298.3333, mob_img)\r\n    mob10 = Mob(608, 293.3333, mob_img)\r\n    mob11 = Mob(768, 293.3333, mob_img)\r\n    mob12 = Mob(928, 293.3333, mob_img)\r\n    mob13= Mob(1088, 293.3333, mob_img)\r\n\r\n    if exists:\r\n        with open('high_score.txt') as high_score_file:\r\n            high_score = int(high_score_file.read())\r\n    elif not exists:\r\n        print(\"not exists\")\r\n        '''with open(high_score_file, 'w') as f:\r\n            f.write(\"0\")\r\n            print(\"not exists\")'''\r\n    \r\n    # Make sprite groups\r\n    player = pygame.sprite.GroupSingle()\r\n    player.add(ship)\r\n    player.score = 0\r\n\r\n    lasers = pygame.sprite.Group()\r\n\r\n    mobs = pygame.sprite.Group()\r\n    mobs.add(mob1, mob2, mob3, mob4, mob5, mob6, mob7, mob8, mob9, mob10, mob11, mob12, mob13)\r\n\r\n    UFOs = pygame.sprite.Group()\r\n    UFOs.add(ufo)\r\n\r\n\r\n    bombs = pygame.sprite.Group()\r\n\r\n    fleet = Fleet(mobs)\r\n\r\n    stage = START\r\n\r\n    \r\n    accuracy = 0\r\n\r\n    shots_taken = 0\r\n\r\n\r\n# Game helper functions\r\ndef show_title_screen():\r\n    title_text = FONT_XL.render(\"Space Wars\", 1, WHITE)\r\n    title_text_rect = title_text.get_rect(center=(WIDTH/2, 400))\r\n    screen.blit(title_text, title_text_rect)\r\n\r\ndef show_subtitle_screen():\r\n    subtitle_text = FONT_MD.render(\"A Star Wars Story\", 1, WHITE)\r\n    subtitle_text_rect = subtitle_text.get_rect(center=(WIDTH/2, 480))\r\n    screen.blit(subtitle_text, subtitle_text_rect)\r\n\r\ndef show_stats():\r\n    score_text = FONT_MD.render(\"Score \" + str(player.score), 1, WHITE)\r\n    shield_text = FONT_MD.render(\"Shield \" + str(ship.shield), 1, WHITE)\r\n    if exists:\r\n        high_score_text = FONT_MD.render(\"High Score \" + str(high_score), 1, WHITE)\r\n        screen.blit(high_score_text, (32, 64))\r\n    screen.blit(score_text, (32, 32))\r\n    screen.blit(shield_text, (32, 96))\r\n\r\ndef show_accuracy():\r\n    accuracy_text = FONT_MD.render(\"Accuracy \" + str(accuracy) + \"%\", 1, WHITE)\r\n    accuracy_text_rect = accuracy_text.get_rect(center=(WIDTH/2, 680))\r\n    screen.blit(accuracy_text, accuracy_text_rect)\r\n\r\ndef show_kills():\r\n    kills_text = FONT_XL.render(\"Kills \" + \"13\", 1, WHITE)\r\n    kills_text_rect = kills_text.get_rect(center=(WIDTH/2, 60))\r\n    screen.blit(kills_text, kills_text_rect)\r\n\r\ndef show_restart():\r\n    restart_text = FONT_MD.render(\"Press space to restart\", 1, WHITE)\r\n    restart_text_rect = restart_text.get_rect(center=(WIDTH/2, HEIGHT/2))\r\n    screen.blit(restart_text, restart_text_rect)\r\n    \r\ndef choose_music():\r\n    song = random.choice(songs)\r\n    pygame.mixer.music.load(song) \r\n    pygame.mixer.music.play(2)\r\n    \r\n# Game loop\r\nsetup()\r\nchoose_music()\r\n\r\ndone = False\r\n\r\nwhile not done:\r\n    # Event processing (React to key presses, mouse clicks, etc.)\r\n    for event in pygame.event.get():\r\n        if event.type == pygame.QUIT:\r\n            done = True\r\n        elif event.type == pygame.KEYDOWN:\r\n            if stage == START:\r\n                if event.key == pygame.K_SPACE or event.key == pygame.K_a or event.key == pygame.K_d:\r\n                    stage = PLAYING\r\n                    choose_music()\r\n            elif stage == PLAYING:\r\n                if event.key == pygame.K_SPACE:\r\n                    SHOT_sound.play()\r\n                    shots_taken += 1\r\n                    ship.shoot()\r\n            elif stage == END:\r\n                if event.key == pygame.K_SPACE:\r\n                    setup()\r\n    \r\n    if stage == PLAYING:\r\n        pressed = pygame.key.get_pressed()\r\n        if pressed[pygame.K_a]:\r\n            ship.move_left()\r\n        elif pressed[pygame.K_d]:\r\n            ship.move_right()\r\n    \r\n    # Game logic (Check for collisions, update points, etc.)\r\n    if stage == PLAYING:\r\n        player.update(bombs)\r\n        lasers.update() \r\n        mobs.update(lasers, player)\r\n        UFOs.update(lasers, player)\r\n        bombs.update()\r\n        fleet.update()\r\n        if ship.shield == 0:\r\n            new_stage = True\r\n            stage = END\r\n            choose_music()\r\n        elif len(mobs) == 0:\r\n            new_stage = True\r\n            stage = END\r\n            choose_music()\r\n    \r\n    \r\n    # Drawing code (Describe the picture. It isn't actually drawn yet.)\r\n    screen.fill(BLACK)\r\n    screen.blit(background_img, (0,0))\r\n    screen.blit(ufo_img, (200,150))\r\n    lasers.draw(screen)\r\n    player.draw(screen)\r\n    bombs.draw(screen)\r\n    mobs.draw(screen)\r\n    UFOs.draw(screen)\r\n    show_stats()\r\n\r\n    if stage == START:\r\n        show_title_screen()\r\n        show_subtitle_screen()\r\n\r\n    if stage == END:\r\n        print(shots_taken)\r\n        mobs_left = len(mobs)\r\n        shots_hit = 3 * (13 - len(mobs))\r\n        if shots_taken > 0:\r\n            x = ((shots_hit / shots_taken) * 100)\r\n            accuracy = round(x,2)\r\n        else:\r\n            accuracy = 0\r\n        show_kills()\r\n        show_accuracy()\r\n        show_restart()\r\n        if exists:\r\n            if player.score > high_score:\r\n                writehighscore = open(\"high_score.txt\", \"w\")\r\n                writehighscore.write(str(player.score))\r\n    \r\n    \r\n    # Update screen (Actually draw the picture in the window.)\r\n    pygame.display.flip()\r\n\r\n\r\n    # Limit refresh rate of game loop \r\n    clock.tick(refresh_rate)\r\n\r\n\r\n# Close window and quit\r\npygame.quit()\r\n"
},
{
"alpha_fraction": 0.7328194975852966,
"alphanum_fraction": 0.7405266761779785,
"avg_line_length": 38.871795654296875,
"blob_id": "1068f0482ec6ed54d918011d9043844a62522a0c",
"content_id": "a70cffb4677a32f004343794d2418ef37e8cbc05",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1557,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 39,
"path": "/README.md",
"repo_name": "TrumpNat1on/Nathan-B.-Space-Invaders",
"src_encoding": "UTF-8",
"text": "# Space Invaders\n\n##### .exe file\n[file](https://github.com/TrumpNat1on/SpaceWar/releases/tag/1.0.0)\n\n## About\nSave the galaxy from the evil sith & their tiefighters and bombers! \nIn this game you will survive long enough to destroy all the enemy mobs(tie-fighters, tie-bombers, and vader's own ship) \nIn order to do this you must survive long enough to destroy them by not losing your shield(it takes 3 hits to kill you) \nYou are the last hope! Save the galaxy and bring victory to the republic!\n\n## Objective\nKill all the mobs \nSurvive until you kill all the mobs \nShoot precisely and accurately in order to have a good accuracy score\n\n## Controls\nA --> left \nD --> right \nspace --> shoot \n\n## ScreenShots\n![alt-text](https://raw.github.com/TrumpNat1on/SpaceWar/master/images/StartScreen.PNG \"Start Screen\")\n![alt-text](https://raw.github.com/TrumpNat1on/SpaceWar/master/images/MidScreen.PNG \"Mid Screen\")\n![alt-text](https://raw.github.com/TrumpNat1on/SpaceWar/master/images/EndScreen.PNG \"End Screen\")\n\n## Characters\n\n### Good Guy\n#### Millennium Falcon\n![alt-text](https://raw.github.com/TrumpNat1on/SpaceWar/master/images/falcon.png \"Millennium Falcon\") \n\n### Bad Guys\n#### Tie-Fighter\n![alt-text](https://raw.github.com/TrumpNat1on/SpaceWar/master/images/Tiefighter.png \"Tie-Fighter\") \n#### Tie-Bomber\n![alt-text](https://raw.github.com/TrumpNat1on/SpaceWar/master/images/tie_bomber.png \"Tie-Bomber\") \n#### Vader's-Tie-Fighter\n![alt-text](https://raw.github.com/TrumpNat1on/SpaceWar/master/images/v_tie_fighter.png \"Vader's-Tie-Fighter\") \n"
}
] | 2 |
ivanurban/tradecore_task | https://github.com/ivanurban/tradecore_task | e8341eed574299912998886bb48130eefbd8fe39 | b3428b16fdb02bdaeeca985170c339038bc1db8b | 943f1082789d03b85e2bb988984f1f4b19d4a4a5 | refs/heads/main | 2023-08-27T19:59:56.956327 | 2021-11-11T11:38:08 | 2021-11-11T11:38:08 | 426,757,496 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6968135237693787,
"alphanum_fraction": 0.7010309100151062,
"avg_line_length": 34.6779670715332,
"blob_id": "9c02f65a2b9f20791b164de135b1ac9008e45eed",
"content_id": "ebf9c00956ca287182ddecc84dcda00331111153",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2134,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 59,
"path": "/README.md",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
    "text": "# tradecore_task\n###This a django rest application, that enables user to:\n - Register, Login\n - Create, Update, Delete Posts\n - Like or dislike posts(only one like or dislike per post). Trying to like or dislike poste twice results in removing like or dislike\n \n \n###Application uses dj-rest-auth third party application for authentication\n\n    Endpoints:\n\n    - api/v1/dj-rest-auth/login/\n\n    - api/v1/dj-rest-auth/logout/\n\n    - api/v1/dj-rest-auth/password/reset/\n\n    - api/v1/dj-rest-auth/password/reset/confirm\n\n    For user registration additional third part app in nedded, django-allauth,\n\n    Endpoint:\n\n    - api/v1/dj-rest-auth/registration/\n\n    This is how settings.py should look:\n    #third-party apps\n    'rest_framework',\n    'rest_framework.authtoken', #generates the tokens on the server\n    'allauth',\n    'allauth.account',\n    'allauth.socialaccount', #if using social-network accounts\n    'dj_rest_auth', #log in log out, password reset API endpoints\n    'dj_rest_auth.registration'\n\n###List of all posts, Adding a new post:\n - Endpoint:\n    - api/v1/\n\n###Post Update/delete:\n - Endpoint:\n    - api/v1/{id ofa post}\n    \n###Liking/Disliking posts\n - Endpoints:\n    - api/v1/like{id of a post}\n    - api/v1/dislike{id of a post}\n    \n###When user is registered, Abstract API (https://www.abstractapi.com/)is used o find the location of user by IP,\n    and to check if there is a national holiday on the day of the user registration, these information are saved \n    in a database in CustomUser.\n\n    For communication with Abstract API, third party app Celery is used. Celery is a distributed task queue that can process vast amounts of messages. Using\n    Celery, not only can you create asynchronous tasks easily and let them be executed by workers as soon as possible. \n    In addition Celery is set to use automatically retrying failed celery tasks,in this app exponential backoff is used to avoid overwhelming the service.\n\n    Celery also needs message broker, I used RabbitMQ, but Redis is fine also.\n    \n###There are a few testsin posts/tests.py file\n    \n\n\n    \n\n\n\n\n\n\n\n\n\n\n\n    \n\n\n    \n    \n"
},
{
"alpha_fraction": 0.7188106179237366,
"alphanum_fraction": 0.7226890921592712,
"avg_line_length": 33.022220611572266,
"blob_id": "154ba87c16103b703eecc3ee617065dde599a9cb",
"content_id": "f4c0e9014ce521f3a9587d019818d726af88f01e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1547,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 45,
"path": "/posts/models.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\nfrom django.contrib.auth.models import User\n\nfrom django.contrib.auth.models import AbstractUser\n\n\n# Create your models here.\n\n#CustomUser class serves as user profile, inherited from class AbstractUser\n# added fields for location a nd registration_local_holiday\nclass CustomUser(AbstractUser):\n \n location = models.CharField(max_length=200,null=True, blank=True ) \n registration_local_holiday = models.CharField(max_length=200,null=True, blank=True)\n\n\n#user can have many posts\nclass Post(models.Model):\n \n author = models.ForeignKey(CustomUser, on_delete=models.CASCADE)\n post_body = models.TextField()\n pub_date = models.DateTimeField(auto_now_add=True)\n update_date = models.DateTimeField(auto_now=True)\n \n def __str__(self):\n return self.post_body\n\n def get_likes_count(self):\n return PostLikes.objects.filter(post_like=self).count()\n\n def get_dislikes_count(self):\n return PostDislikes.objects.filter(post_dislike=self).count()\n\n\nclass PostLikes(models.Model):\n user = models.ForeignKey(CustomUser, on_delete=models.CASCADE, null=True ) \n post_like = models.ForeignKey(Post, related_name='liked', on_delete=models.CASCADE, null=True )\n\n def __str__(self):\n return f'user:{self.user} liked {self.post_like}'\n\nclass PostDislikes(models.Model):\n user = models.ForeignKey(CustomUser, on_delete=models.CASCADE, null=True)\n post_dislike = models.ForeignKey(Post, related_name='Disliked', on_delete=models.CASCADE, null=True)\n\n \n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.7600979208946228,
"alphanum_fraction": 0.7613219022750854,
"avg_line_length": 31.719999313354492,
"blob_id": "6c7df58276e0a6f3a7819787f2b902f007ab9d12",
"content_id": "064b4691c6752b6e444977e2b67f93c751b3255d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 817,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 25,
"path": "/soc_net/celery.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from __future__ import absolute_import\nimport os\n\nfrom celery import Celery\n\n# set the default Django settings module for the 'celery' program.\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'soc_net.settings')\n\n#createj an instance of the application with app = Celery(''\napp = Celery('soc_net')\n\n##load any custom configuration from your project settings using the config_from_object()\napp.config_from_object('django.conf:settings', namespace='CELERY')\n\n\n# telling Celery to auto-discover asynchronous tasks for applications. \n# Celery will look for a tasks.py file in each application\n# directory of applications added to INSTALLED_APPS in order to load\n# asynchronous tasks defined in it\napp.autodiscover_tasks()\n\n\n# @app.task(bind=True)\n# def debug_task(self):\n# print('Request: {0!r}'.format(self.request))"
},
{
"alpha_fraction": 0.6650077700614929,
"alphanum_fraction": 0.6678071618080139,
"avg_line_length": 29.428571701049805,
"blob_id": "0b0d40f0e443106ac651887a85e04fb471aefcb3",
"content_id": "be7f90663c41752f51429f2a44e3cdafb169519f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3215,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 105,
"path": "/posts/serializers.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from rest_framework import serializers\n\nfrom allauth.account.adapter import get_adapter\n\nfrom .models import Post, CustomUser, PostLikes,PostDislikes\n\nfrom dj_rest_auth.registration.serializers import RegisterSerializer\n\nfrom django.contrib.auth import get_user_model\n\n#importing celery tasks \nfrom .tasks import get_location, get_local_holiday, user_location\n\n\n#imported methods from tasks.py(using celery for async calls), calling the delay() method of the tasks to execute it asynchronously.\n#The tasks will be added to the queue and will be executed by a worker as soon as possible.\nget_location.delay()\nuser_location.delay()\nget_local_holiday.delay()\n\n\nclass CustomRegisterSerializer(RegisterSerializer):\n\n location = serializers.CharField(max_length=200, default=user_location)\n registration_local_holiday = serializers.CharField(max_length=200, default=get_local_holiday)\n\n class Meta:\n model=CustomUser\n fields = ('id','username','email','password','location','registration_local_holiday', )\n\n\n # override get_cleaned_data of RegisterSerializer\n def get_cleaned_data(self):\n return {\n 'username': self.validated_data.get('username', ''),\n 'password1': self.validated_data.get('password1', ''),\n 'password2': self.validated_data.get('password2', ''),\n 'email': self.validated_data.get('email', ''),\n 'location': self.validated_data.get('location'),\n 'registration_local_holiday': self.validated_data.get('registration_local_holiday'),\n \n }\n\n # override save method of RegisterSerializer\n def save(self, request):\n adapter = get_adapter()\n user = adapter.new_user(request)\n self.cleaned_data = self.get_cleaned_data()\n user.location = self.cleaned_data.get('location')\n user.registration_local_holiday = self.cleaned_data.get('registration_local_holiday')\n \n user.save()\n adapter.save_user(request, user, self)\n return user\n\n\n#Get user details\nclass CustomUserDetailsSerializer(serializers.ModelSerializer):\n\n class Meta:\n 
model = CustomUser\n fields = (\n 'id',\n 'username',\n 'email',\n 'location',\n 'registration_local_holiday',\n )\n read_only_fields = ('location',)\n\n\n\nclass PostLikeSerializer(serializers.ModelSerializer):\n\n\n class Meta:\n model = PostLikes\n fields = ( 'post_like',)\n\n\nclass PostDislikeSerializer(serializers.ModelSerializer):\n\n\n class Meta:\n model = PostDislikes\n fields = ( 'post_dislike',)\n\n\nclass PostSerializer(serializers.ModelSerializer):\n\n dislikes = serializers.IntegerField(source='get_dislikes_count',read_only=True)\n likes = serializers.IntegerField(source='get_likes_count',read_only=True)\n #liked = PostLikeSerializer(many=True, read_only=True)\n class Meta:\n model = Post\n fields = ('id','author', 'post_body','pub_date','likes', 'dislikes')\n\n\n\n\n\nclass UserSerializer(serializers.ModelSerializer):\n class Meta:\n model= get_user_model()\n fields=('id','username','email','location','registration_local_holiday')\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6456086039543152,
"alphanum_fraction": 0.6456086039543152,
"avg_line_length": 17,
"blob_id": "28ccfc48cd19e2d0561ffc3d6fda4d8534b306c3",
"content_id": "804657dc30f1503c757e36644d4754d15693d501",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 649,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 36,
"path": "/posts/urls.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from django.urls import path\n\nfrom .views import PostList, PostDetail, UserList, UserDetail\nfrom django.contrib.auth import views \n\nfrom . import views\n\n\n\nfrom rest_framework.routers import DefaultRouter\n\n\nurlpatterns = [\n\n #user list and user details\n path('users/<int:pk>',UserDetail.as_view()),\n path('users/',UserList.as_view()),\n \n #post list, post detail, \n path('',PostList.as_view()),\n path('<int:pk>/',PostDetail.as_view()),\n \n \n #like/dilike posts\n path('like/<int:pk>',views.post_like_view,name = 'post_likes'),\n path('dislike/<int:pk>',views.post_dislike_view,name = 'post_likes'),\n\n \n\n\n\n\n\n\n \n]\n\n"
},
{
"alpha_fraction": 0.686956524848938,
"alphanum_fraction": 0.7020289897918701,
"avg_line_length": 33.52000045776367,
"blob_id": "5862f86647eff3430b208cf62ddc90e50f384bd3",
"content_id": "165953474992263df620205944cf61a47bed93d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1725,
"license_type": "no_license",
"max_line_length": 160,
"num_lines": 50,
"path": "/posts/tasks.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from celery.decorators import task\n\nimport requests\nimport json\nimport datetime\n\n\n#Automatically Retrying Failed Celery Tasks \n#If Celery task needs to send a request to a third-party service,\n# using exponential backoff to avoid overwhelming the service.\n@task(bind=True, autoretry_for=(Exception,), retry_backoff=True, retry_kwargs={'max_retries': 5})\ndef get_location(self):\n\n \"\"\"Get the location of the user by users ip\"\"\"\n\n response = requests.get(\"https://ipgeolocation.abstractapi.com/v1/?api_key=ab24d7de15824ba1bccfb014c057e769\")\n \n\n response_json=json.loads(response.content)\n city=response_json.get(\"city\")\n country=response_json.get(\"country\")\n country_code = response_json.get('country_code')\n \n return [city,country,country_code]\n\n@task(bind=True, autoretry_for=(Exception,), retry_backoff=True, retry_kwargs={'max_retries': 5})\ndef user_location(self):\n \"\"\"City and country for the field location when user registers\"\"\"\n city, country, code, = get_location()\n return city, country\n\n\n@task(bind=True, autoretry_for=(Exception,), retry_backoff=True, retry_kwargs={'max_retries': 5})\ndef get_local_holiday(self):\n \"\"\"Look if there is a holiday in the location on the day\n that user is registered\"\"\"\n\n year = datetime.date.today().year\n month = datetime.date.today().month\n day = datetime.date.today().day\n city, country, code = get_location()\n \n response = requests.get(f\"https://holidays.abstractapi.com/v1/?api_key=b758785aff304d969b59ead5b46b61d3&country={code}&year={year}&month={month}&day={day}\")\n \n if not response:\n return \"no holiday\"\n else:\n response_json=json.loads(response.content)\n holiday = response_json[0]\n return holiday[\"name\"]"
},
{
"alpha_fraction": 0.7756654024124146,
"alphanum_fraction": 0.7756654024124146,
"avg_line_length": 18.538461685180664,
"blob_id": "4786dc0d5e049085c4da0d4d3cc99fda89372364",
"content_id": "3d7343a32a123b66b22b200e3d53d3e644c3a395",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 263,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 13,
"path": "/posts/admin.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\nfrom .models import Post, CustomUser, PostLikes, PostDislikes\n\n# Register your models here.\n\nadmin.site.register(Post)\n\nadmin.site.register(CustomUser)\n\nadmin.site.register(PostLikes)\n\nadmin.site.register(PostDislikes)\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.708258867263794,
"alphanum_fraction": 0.7114765644073486,
"avg_line_length": 25.133333206176758,
"blob_id": "583fbf113c2fc998336bc255ddc17b5cdf46aeb9",
"content_id": "86ae9f79b8fe0976569624a5f28cbe33bb5f266a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2797,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 105,
"path": "/posts/views.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth.models import User\nfrom django.shortcuts import render, get_object_or_404\n\nfrom rest_framework import generics, permissions, viewsets, mixins, status\n\nfrom rest_framework.response import Response\n\nfrom rest_framework.views import APIView\n\nfrom rest_framework.decorators import api_view\n\n\nfrom django.db.models import Q\n\nfrom django.contrib.auth import get_user_model\n\nfrom .models import Post, CustomUser, PostLikes, PostDislikes\n\nfrom django.http import JsonResponse\n\nfrom .serializers import PostSerializer,CustomUserDetailsSerializer, \\\nCustomRegisterSerializer, UserSerializer, PostLikeSerializer, PostDislikeSerializer\n\nfrom .permissions import IsAuthorOrReadOnly\n\n\n\n# Create your views here.\n\n\nclass PostList(generics.ListCreateAPIView):\n\n \n queryset = Post.objects.all()\n serializer_class= PostSerializer\n\nclass PostDetail(generics.RetrieveUpdateDestroyAPIView):\n\n permission_classes = (IsAuthorOrReadOnly,)\n queryset = Post.objects.all()\n serializer_class = PostSerializer\n\n\n\nclass UserList(generics.ListCreateAPIView):\n\n \n queryset = get_user_model().objects.all()\n serializer_class= UserSerializer\n\nclass UserDetail(generics.RetrieveUpdateDestroyAPIView):\n\n \n queryset = get_user_model().objects.all()\n serializer_class= UserSerializer\n\n\n\n\n\n\n#Using function based views for likes and dislikes\n#Checking of user already liked or disliked the post, if it did\n#then remove users like or dislike, else like or dislike\n@api_view(['GET', 'POST'])\ndef post_like_view(request, pk):\n \n\n post = Post.objects.get(pk=pk) \n\n if request.method == 'GET':\n serializer = PostSerializer(post)\n return Response(serializer.data)\n\n elif request.method == 'POST':\n user=CustomUser.objects.get(username=request.user)\n try:\n PostLikes.objects.get(user=user, post_like=post).delete()\n except PostLikes.DoesNotExist:\n PostDislikes.objects.create(user=user, post_dislike=post)\n\n serializer = 
PostSerializer(post)\n return Response(serializer.data,status=status.HTTP_201_CREATED)\n\n\n@api_view(['GET', 'POST'])\ndef post_dislike_view(request, pk):\n \n\n post = Post.objects.get(pk=pk) \n\n if request.method == 'GET':\n serializer = PostSerializer(post)\n return Response(serializer.data)\n\n elif request.method == 'POST':\n user=CustomUser.objects.get(username=request.user)\n try:\n PostDislikes.objects.get(user=user,post_dislike=post).delete()\n except PostDislikes.DoesNotExist:\n\n PostDislikes.objects.create(user=user, post_dislike=post)\n \n serializer = PostSerializer(post)\n return Response(serializer.data,status=status.HTTP_201_CREATED)\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n "
},
{
"alpha_fraction": 0.5523408651351929,
"alphanum_fraction": 0.5681220293045044,
"avg_line_length": 37.75510025024414,
"blob_id": "69f972638da28147e0fefa6cf2daa372cbe4f10c",
"content_id": "412f214776856f0a69d9d4feec49e452e4a92536",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1901,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 49,
"path": "/posts/tests.py",
"repo_name": "ivanurban/tradecore_task",
"src_encoding": "UTF-8",
"text": "from django.test import TestCase\n\nfrom .models import Post,PostLikes,PostDislikes\n\n#from django.contrib.auth.models import User\n\nfrom django.contrib.auth import get_user_model\nUser = get_user_model()\n\n# Create your tests here.\n\nclass PostTestCase(TestCase):\n \n #create a User\n def setUp(self):\n \n testuser1 = User.objects.create_user(username=\"testuser1\", email=\"[email protected]\", password=\"test@123\")\n testuser1.save()\n\n #create a post\n testpost = Post.objects.create(author=testuser1, post_body=\"A prime is a positive integer X that has exactly \\\n two distinct divisors: 1 and X. The first few prime \\\n integers are 2, 3, 5, 7, 11 and 13.\")\n\n testpost.save() \n\n \n def test_post_content(self):\n post = Post.objects.get(id=1)\n author = f'{post.author}'\n post_body = f'{post.post_body}'\n self.assertEqual(author,'testuser1')\n self.assertEqual(post_body,\"A prime is a positive integer X that has exactly \\\n two distinct divisors: 1 and X. The first few prime \\\n integers are 2, 3, 5, 7, 11 and 13.\") \n\n\n def test_like_post(self):\n post = Post.objects.get(id=1)\n user=User.objects.get(username=\"testuser1\")\n PostLikes.objects.create(user=user, post_like=post)\n self.assertEqual(PostLikes.objects.filter(post_like=post).count(),1)\n\n\n def test_dislike_post(self):\n post = Post.objects.get(id=1)\n user=User.objects.get(username=\"testuser1\")\n PostDislikes.objects.create(user=user, post_dislike=post)\n self.assertEqual(PostDislikes.objects.filter(post_dislike=post).count(),1)\n\n\n"
}
] | 9 |
kristpykreme/Facial-Recognition-Attendance-System | https://github.com/kristpykreme/Facial-Recognition-Attendance-System | 7defa2d0154bc1462444f52e9c4275d340b5fcc1 | 44fa7d346e3e06ce5b309c6b0fc765f8f8998924 | 9c92550e029cfe81f915e26332c421755bc29e9a | refs/heads/main | 2023-03-21T12:22:58.804891 | 2021-03-17T21:36:13 | 2021-03-17T21:36:13 | 348,852,505 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.792682945728302,
"alphanum_fraction": 0.7987805008888245,
"avg_line_length": 122,
"blob_id": "9787dc0fee5f40a48738436fd9519fad768a31f0",
"content_id": "c7e79dbd64975cbdd86249fd9dffebdc7f478a2b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 492,
"license_type": "no_license",
"max_line_length": 236,
"num_lines": 4,
"path": "/README.md",
"repo_name": "kristpykreme/Facial-Recognition-Attendance-System",
"src_encoding": "UTF-8",
"text": "# Facial-Recognition-Attendance-System\nPersonal project for a friend; AI facial recognition attendance GUI using Python libraries tkinter for the app and opencv for the trainer.\nCreate a file name 'dataset' to store the pictures and 'trainer' to store the trained dataset in the same directory. In the demo, I combined all .py files into an .exe file. I also stored the pictures used for the app in the 'lib' file.\nDemo: https://drive.google.com/file/d/1BdjixPVNbnM5VNk2ZArEHtmsNSkLDHSr/view\n"
},
{
"alpha_fraction": 0.47731542587280273,
"alphanum_fraction": 0.5044295191764832,
"avg_line_length": 26.87596893310547,
"blob_id": "a18e8e736cac234ea82d3f8f478adeff1bf9fc49",
"content_id": "79a416e48fe3372081afe948d47213c6021a3150",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3725,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 129,
"path": "/Taking_Attendance.py",
"repo_name": "kristpykreme/Facial-Recognition-Attendance-System",
"src_encoding": "UTF-8",
"text": "''''\r\nBased on original code by Anirban Kar: https://github.com/thecodacus/Face-Recognition \r\n\r\nDeveloped by Marcelo Rovai - MJRoBot.org @ 21Feb18 \r\n\r\n'''\r\n### -------- import for CAM -------- ###\r\nimport cv2\r\nimport numpy as np\r\nimport os\r\n\r\n### -------- import for EXCEl -------- ###\r\nfrom openpyxl import Workbook\r\nfrom openpyxl import load_workbook\r\nimport os.path\r\n\r\n\r\n### -------- EXCEL SETUP -------- ###\r\n\r\ndef Taking_Attendance(date):\r\n# check for existing attendence list, create one if does not exist\r\n if os.path.isfile('Attendance.xlsx'):\r\n book = load_workbook('Attendance.xlsx')\r\n print(\"\\nLoading existing attendance sheet...\")\r\n else:\r\n book = Workbook()\r\n print(\"\\nCreating new attendance sheet...\")\r\n\r\n sheet = book.create_sheet()\r\n sheet.title = '{}'.format(date)\r\n\r\n # format namelist \r\n\r\n name_list = (\r\n ['Index', 'Name', 'Attendance'],\r\n [1, 'Brendon'],\r\n [2, 'Raymond'],\r\n [3, 'Tzi Seong'],\r\n )\r\n\r\n for rows in name_list: \r\n\r\n sheet.append(rows)\r\n\r\n ### -------- CAM SETUP -------- ###\r\n \r\n recognizer = cv2.face.LBPHFaceRecognizer_create()\r\n recognizer.read('trainer/trainer.yml')\r\n cascadePath = \"haarcascade_frontalface_default.xml\"\r\n faceCascade = cv2.CascadeClassifier(cascadePath);\r\n\r\n font = cv2.FONT_HERSHEY_SIMPLEX\r\n\r\n id = 0\r\n\r\n # namelist for cam\r\n names = ['None','Brendon (1)','Raymond (2)','Tzi Seong (3)']\r\n\r\n # Initialize and start realtime video capture\r\n cam = cv2.VideoCapture(0)\r\n cam.set(3, 640) # set video widht\r\n cam.set(4, 480) # set video height\r\n\r\n # Define min window size to be recognized as a face\r\n minW = 0.1*cam.get(3)\r\n minH = 0.1*cam.get(4)\r\n\r\n while True:\r\n\r\n ret, img =cam.read()\r\n img = cv2.flip(img, 1)\r\n\r\n gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\r\n\r\n faces = faceCascade.detectMultiScale( \r\n gray,\r\n scaleFactor = 1.2,\r\n minNeighbors = 5,\r\n minSize = 
(int(minW), int(minH)),\r\n )\r\n\r\n for(x,y,w,h) in faces:\r\n\r\n cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)\r\n\r\n id, confidence = recognizer.predict(gray[y:y+h,x:x+w])\r\n\r\n # Check if confidence is less then 100 ==> \"0\" is perfect match \r\n if (confidence < 100):\r\n sheet.cell(row = id + 1, column = 3).value = 1\r\n id = names[id]\r\n confidence = \" {0}%\".format(round(100 - confidence))\r\n else:\r\n id = \"unknown\"\r\n confidence = \" {0}%\".format(round(100 - confidence))\r\n \r\n cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)\r\n cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1) \r\n \r\n cv2.imshow('camera',img)\r\n\r\n k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video\r\n if k == 27:\r\n break\r\n\r\n ### -------- CHECK WHO IS ABSENT -------- ###\r\n \r\n absent_list = []\r\n\r\n for x in range(2, len(name_list)+1):\r\n present = sheet['C{}'.format(x)]\r\n if present.value != 1:\r\n name = sheet['B{}'.format(x)]\r\n absent_list.append(name.value)\r\n \r\n ### -------- CLEANUP CAM -------- ###\r\n print(\"\\n---------------------------------\")\r\n print(\"\\n---------Exiting Program---------\")\r\n cam.release()\r\n cv2.destroyAllWindows()\r\n\r\n ### -------- CLEANUP EXCEl -------- ###\r\n book.save('Attendance.xlsx')\r\n book.close()\r\n\r\n print(\"\\nAbsentee(s): {} \".format(len(absent_list)))\r\n for y in absent_list:\r\n print(y)\r\n print(\"\\nAttendance for {} saved.\".format(date))\r\n"
}
] | 2 |
kak-bo-che/wildapricotapi | https://github.com/kak-bo-che/wildapricotapi | 605f2034c559f766db4dd7b3ea66bc109dc72cb6 | a5e4f2d374b7051388d1adb24ee33cbd55ae8541 | e08f3d53cbd21c5e900092ed38094756e96f6300 | refs/heads/master | 2018-12-21T16:29:07.129706 | 2018-09-29T19:12:11 | 2018-09-29T19:12:11 | 104,963,839 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.6181818246841431,
"alphanum_fraction": 0.6285714507102966,
"avg_line_length": 26.571428298950195,
"blob_id": "cb42caec4711a6b96596ddfe2e0ce38d4fed5333",
"content_id": "2b39678a13c0ca9ad5238b4bb76416089070f446",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 385,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 14,
"path": "/setup.py",
"repo_name": "kak-bo-che/wildapricotapi",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom distutils.core import setup\n\nsetup(name='wildapricotapi',\n version='1.1.3',\n description='Simple Wrapper to the Wild Apricot API v2',\n author='D Smirnov',\n author_email='[email protected]',\n maintainer='Troy Ross',\n maintainer_email='[email protected]',\n url='https://github.com/kak-bo-che/wildapricotapi',\n packages=['wildapricotapi'],\n )"
},
{
"alpha_fraction": 0.684684693813324,
"alphanum_fraction": 0.6872586607933044,
"avg_line_length": 32.826087951660156,
"blob_id": "2f6721e04b14a2e5e12653ed3460c76b3fb47613",
"content_id": "0d78fad5536718792a8743ecc6f7dfa1c108b122",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 777,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 23,
"path": "/README.md",
"repo_name": "kak-bo-che/wildapricotapi",
"src_encoding": "UTF-8",
"text": "# WildApricot API\nExample Code taken from wildapricot github, and modified to work for me.\n\n[Wild Apricot Samples](https://github.com/WildApricot/ApiSamples)\n\n## Example Usage:\n```python\ndef getContactById(id):\n # Create API and authenticate by api key\n api = WaApiClient(api_key=os.environ['WA_API_KEY'])\n api.authenticate_with_apikey()\n # Get all accounts\n account = self.api.execute_request(\"/v2/accounts\")[0]\n\n # Get Contacts URL from API Response\n contactsUrl = next(res for res in account['Resources'] if res['Name'] == 'Contacts')['Url']\n\n # Request Client Info\n params = {'$async': 'false'}\n request_url = contactsUrl + str(id) + '?' + urllib.parse.urlencode(params)\n response = api.execute_request(request_url)\n print(response)\n```"
},
{
"alpha_fraction": 0.8813559412956238,
"alphanum_fraction": 0.8813559412956238,
"avg_line_length": 58,
"blob_id": "badcb0e932e37e7f2684010acd23f1959289245d",
"content_id": "5fb2a5b0d1d2eba3df58e069848eb6eb6c3cb9ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 59,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 1,
"path": "/wildapricotapi/__init__.py",
"repo_name": "kak-bo-che/wildapricotapi",
"src_encoding": "UTF-8",
"text": "from wildapricotapi.WaApi import WaApiClient, ApiException\n"
}
] | 3 |
bazsimarkus/tropomi-emission-source | https://github.com/bazsimarkus/tropomi-emission-source | 3ee26e6d26b55e166506da7b5cfc8a7bc8067803 | 7a6df9d476e2cb30ae25edf51347c42df9706811 | c62df5abacc69bed0b4c1f6991d883fda72654f4 | refs/heads/main | 2023-04-15T08:20:47.950630 | 2022-11-10T14:41:53 | 2022-11-10T14:41:53 | 564,346,713 | 3 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5997008681297302,
"alphanum_fraction": 0.6111664772033691,
"avg_line_length": 44.59090805053711,
"blob_id": "70c6fa3b0f13b8e1d842277121649b21ae5f4f14",
"content_id": "756df4303312642954c47838257e62b68cf4917f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2006,
"license_type": "no_license",
"max_line_length": 164,
"num_lines": 44,
"path": "/Volcano.py",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "# -----------------------------------------------------------\n# Volcano.py\n# Contains the Volcano class, which contains all relevant information to a volcanic scenario (name, date, location, filename)\n# -----------------------------------------------------------\n\nimport os\n\nimport pandas as pd\n\nfrom CONFIG_PARAMETERS import ASSETS_DIRECTORY\n\npd.set_option('display.max_columns', None)\npd.set_option('display.width', None)\npd.set_option('display.max_colwidth', None)\n\nvolcanolist_df = pd.read_csv('volcano_coordinates.csv')\n\n\nclass Volcano:\n \"\"\"Volcano class implementation\"\"\"\n def __init__(self, name, time_begin):\n self.name = name\n self.time_begin = time_begin\n\n self.year = int(time_begin.split('-')[0])\n self.month = int(time_begin.split('-')[1])\n self.day = int(time_begin.split('-')[2])\n\n self.fullname = str(volcanolist_df.loc[volcanolist_df['name'] == name]['fullname'].to_string(index=False))\n self.latitude = float(volcanolist_df.loc[volcanolist_df['name'] == name]['lat'])\n self.longitude = float(volcanolist_df.loc[volcanolist_df['name'] == name]['lon'])\n self.elevation = int(volcanolist_df.loc[volcanolist_df['name'] == name]['elevation'])\n self.id = float(volcanolist_df.loc[volcanolist_df['name'] == name]['id'])\n\n # Connect to the db file\n tropomi_files = pd.read_csv('CONFIG_TROPOMI_file_archive.csv')\n\n # Get path to file in local database\n dataset_root = ASSETS_DIRECTORY\n print(dataset_root)\n self.filename = dataset_root + tropomi_files.loc[(tropomi_files['name'] == name) & (tropomi_files['date'] == time_begin)]['filepath'].to_string(index=False)\n self.hour = int(str(os.path.basename(self.filename)).replace(\"S5P_NRTI_L2__SO2____\",\"\")[9:11])\n self.minute = int(str(os.path.basename(self.filename)).replace(\"S5P_NRTI_L2__SO2____\",\"\")[11:13])\n self.second = int(str(os.path.basename(self.filename)).replace(\"S5P_NRTI_L2__SO2____\",\"\")[13:15])\n"
},
{
"alpha_fraction": 0.5436483025550842,
"alphanum_fraction": 0.5512231588363647,
"avg_line_length": 47.512046813964844,
"blob_id": "cf9ac5237df1b5545198461d3f90393c5a28b7ed",
"content_id": "7bc492fa81cced1970fbddc48d313818e9279d1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8053,
"license_type": "no_license",
"max_line_length": 181,
"num_lines": 166,
"path": "/calculate_metrics.py",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "# -----------------------------------------------------------\n# calculate_metrics.py\n# Contains the function for relevant metric calculation to evaluate the algorithms\n# -----------------------------------------------------------\n\nimport math\n\nimport numpy as np\n\nfrom CONFIG_PARAMETERS import molar_mass_SO2, pixel_area_m2\nfrom Volcano import *\n\n\ndef get_metrics_response(sem_inst, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, method, trajectories=None):\n \"\"\"Calculate relevant metrics to evaluate the algorithms. Takes in the result of an algorithm function in segmentation_algorithms.py, returns an object containing the results\"\"\"\n\n data = {\"data\": []}\n\n if sem_inst == 0:\n volcanoes_to_check = [current_volcano.id]\n else:\n volcanoes_to_check = [current_volcano.id]\n # volcanoes_to_check = np.unique(mask_filled)\n\n for current_volcano_id in volcanoes_to_check:\n if current_volcano_id != 0:\n current_volcano_data = volcanolist_df.loc[volcanolist_df['id'] == int(current_volcano_id)]\n\n current_mask_filled = np.where(mask_filled == current_volcano_id, current_volcano_id, 0)\n current_mask_filled_binary = np.where(mask_filled == current_volcano_id, 1, 0)\n center_volcano_mask = np.where(mask_filled == current_volcano_id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n so2_mass = mass_filled.sum() / 1000000\n\n current_gt_data = gt_data\n so2_pixels_on_image = int(np.count_nonzero(mask == 1))\n\n if current_gt_data is not None:\n # Mask GT for current volcano\n current_gt_data = np.where(current_gt_data == current_volcano_id, 1, 0) # >> assign 1 for flag values above 0\n ground_truth_available = 1\n\n true_positive_pixels = int(np.sum(np.logical_and(current_mask_filled_binary == 1, current_gt_data == 1)))\n true_negative_pixels = int(np.sum(np.logical_and(current_mask_filled_binary == 0, current_gt_data == 0))) - 
int(np.count_nonzero(mask == 0))\n false_positive_pixels = int(np.sum(np.logical_and(current_mask_filled_binary == 1, current_gt_data == 0)))\n false_negative_pixels = int(np.sum(np.logical_and(current_mask_filled_binary == 0, current_gt_data == 1)))\n\n ground_truth_pixels_for_current_volcano = int(current_gt_data.sum())\n\n # CALCULATION OF SEGMENTATION METRICS\n # Intersection over Union\n\n if (true_positive_pixels + false_positive_pixels + false_negative_pixels) != 0:\n intersection_over_union = true_positive_pixels / (true_positive_pixels + false_positive_pixels + false_negative_pixels)\n else:\n intersection_over_union = None\n\n # Area Fit Index\n\n if (true_positive_pixels + false_positive_pixels) != 0:\n area_fit_index = (false_positive_pixels - false_negative_pixels) / (true_positive_pixels + false_positive_pixels)\n else:\n area_fit_index = None\n\n # Oversegmentation, undersegmentation\n\n if (true_positive_pixels + false_positive_pixels) != 0:\n oversegmentation = 1 - true_positive_pixels / (true_positive_pixels + false_positive_pixels)\n else:\n oversegmentation = None\n\n if (true_positive_pixels + false_negative_pixels) != 0:\n undersegmentation = 1 - true_positive_pixels / (true_positive_pixels + false_negative_pixels)\n else:\n undersegmentation = None\n\n # Root Mean Square\n\n if oversegmentation is not None and undersegmentation is not None:\n root_mean_square = math.sqrt((oversegmentation * oversegmentation + undersegmentation * undersegmentation) / 2)\n else:\n root_mean_square = None\n\n # Precision, recall, F1, accuracy, specificity\n\n if (true_positive_pixels + false_positive_pixels) != 0:\n precision = true_positive_pixels / (true_positive_pixels + false_positive_pixels)\n else:\n precision = None\n\n if (true_positive_pixels + false_negative_pixels) != 0:\n recall = true_positive_pixels / (true_positive_pixels + false_negative_pixels)\n else:\n recall = None\n\n if precision is not None and recall is not None:\n f1_score = (2 * 
precision * recall) / (precision + recall)\n else:\n f1_score = None\n\n if (true_positive_pixels + true_negative_pixels + false_positive_pixels + false_negative_pixels) != 0:\n accuracy = (true_positive_pixels + true_negative_pixels) / (true_positive_pixels + true_negative_pixels + false_positive_pixels + false_negative_pixels)\n else:\n accuracy = None\n\n if (true_negative_pixels + false_positive_pixels) != 0:\n specificity = true_negative_pixels / (true_negative_pixels + false_positive_pixels)\n else:\n specificity = None\n\n else:\n ground_truth_available = 0\n intersection_over_union = None\n oversegmentation = None\n undersegmentation = None\n area_fit_index = None\n root_mean_square = None\n\n true_positive_pixels = None\n true_negative_pixels = None\n false_positive_pixels = None\n false_negative_pixels = None\n\n ground_truth_pixels_for_current_volcano = None\n\n precision = None\n recall = None\n f1_score = None\n accuracy = None\n specificity = None\n\n response = {\n \"name\": str(current_volcano_data['name'].to_string(index=False)),\n \"fullname\": str(current_volcano_data['fullname'].to_string(index=False)),\n \"datetime\": str(current_volcano.time_begin) + \"T\" + str(current_volcano.hour) + \":\" + str(current_volcano.minute) + \":\" + str(current_volcano.second) + \"Z\",\n \"latitude\": float(current_volcano_data['lat']),\n \"longitude\": float(current_volcano_data['lon']),\n \"elevation\": int(current_volcano_data['elevation']),\n \"method\": method,\n \"id\": int(current_volcano_id),\n \"filename\": os.path.basename(current_volcano.filename),\n \"so2_mass_value\": so2_mass,\n \"so2_mass_unit\": \"tons\",\n \"ground_truth_available\": ground_truth_available,\n \"intersection_over_union\": intersection_over_union,\n \"oversegmentation\": oversegmentation,\n \"undersegmentation\": undersegmentation,\n \"area_fit_index\": area_fit_index,\n \"root_mean_square\": root_mean_square,\n \"true_positive_pixels\": true_positive_pixels,\n 
\"true_negative_pixels\": true_negative_pixels,\n \"false_positive_pixels\": false_positive_pixels,\n \"false_negative_pixels\": false_negative_pixels,\n \"precision\": precision,\n \"recall\": recall,\n \"f1_score\": f1_score,\n \"accuracy\": accuracy,\n \"specificity\": specificity,\n \"ground_truth_pixels_for_current_volcano\": ground_truth_pixels_for_current_volcano,\n \"so2_pixels_on_image\": so2_pixels_on_image\n }\n\n data['data'].append(response)\n\n return data\n"
},
{
"alpha_fraction": 0.6204870939254761,
"alphanum_fraction": 0.634294867515564,
"avg_line_length": 56.656028747558594,
"blob_id": "1512996879127393c6144bb742f54b80f4378fb0",
"content_id": "8b49ff16012ef2bedf4b25d5b40eb0e357eec253",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 32518,
"license_type": "no_license",
"max_line_length": 321,
"num_lines": 564,
"path": "/segmentation_algorithms.py",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "# -----------------------------------------------------------\n# segmentation_algorithms.py\n# Contains all algorithms and functions to open, handle, and classify TROPOMI SO2 products\n# -----------------------------------------------------------\n\nimport math\n\nimport geopy.distance\nimport netCDF4\nimport numpy as np\nfrom scipy import spatial\nfrom skimage.morphology import flood_fill\nfrom sklearn.cluster import DBSCAN\n\nfrom CONFIG_PARAMETERS import molar_mass_SO2\nfrom Volcano import *\nfrom trajectory_algorithms import get_forward_trajectory, get_reverse_trajectory\n\n\ndef read_satellite_file(current_volcano):\n \"\"\"Takes in a Volcano class, returns the arrays for SO2 density (PBL), lons, lats, and the SO2 detection mask\"\"\"\n\n print(\"Current volcano: \" + current_volcano.name + \" - \" + str(current_volcano.time_begin) + 'T' + str(current_volcano.hour) + ':' + str(current_volcano.minute) + ':' + str(current_volcano.second) + 'Z')\n print(current_volcano.filename)\n\n # OPEN FILE\n fh = netCDF4.Dataset(current_volcano.filename, 'r', format=\"NETCDF4\")\n\n global pixel_area_m2\n pixel_area_m2 = float(str(fh.__dict__['spatial_resolution'])[:-3].split('x')[0]) * float(str(fh.__dict__['spatial_resolution'])[:-3].split('x')[1]) * 1000000\n\n # GET DATA\n # - lat/lon\n lons = fh.groups['PRODUCT'].variables['longitude'][:][0, :, :]\n lats = fh.groups['PRODUCT'].variables['latitude'][:][0, :, :]\n\n # - SO2 maps\n SO2_PBL = fh.groups['PRODUCT'].variables['sulfurdioxide_total_vertical_column'][0, :, :]\n\n # - SO2 detection flag: 'sulfurdioxide_detection_flag'\n flag = fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['DETAILED_RESULTS'].variables['sulfurdioxide_detection_flag'][0, :, :]\n mask = np.where(flag > 0, 1, 0) # Assign 1 for flag values above 0\n\n SO2_1km = fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['DETAILED_RESULTS'].variables['sulfurdioxide_total_vertical_column_1km'][0, :, :]\n SO2_7km = 
fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['DETAILED_RESULTS'].variables['sulfurdioxide_total_vertical_column_7km'][0, :, :]\n SO2_15km = fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['DETAILED_RESULTS'].variables['sulfurdioxide_total_vertical_column_15km'][0, :, :]\n\n return SO2_PBL, lons, lats, mask\n\n\ndef get_ground_truth(current_volcano, mask_shape):\n \"\"\"Takes in a Volcano class, and the size of the detection mask (to fill up the rest with zeros), and returns the ground truth array\"\"\"\n\n # Ground truth from CSV\n gt_filename = ASSETS_DIRECTORY + \"/ground_truth/\" + str(os.path.basename(current_volcano.filename)) + \".csv\"\n if os.path.exists(gt_filename):\n gt_data = np.genfromtxt(gt_filename, delimiter=',')\n gt_data = np.nan_to_num(gt_data)\n gt_shape = np.shape(gt_data)\n padded_array = np.zeros(mask_shape)\n padded_array[:gt_shape[0], :gt_shape[1]] = gt_data\n return padded_array\n else:\n return None\n\n\ndef get_lat_lon_index(latitude, longitude, lat_lon_combo):\n \"\"\"Takes in decimal degrees latitude/longitude, and the combined lats/lons array, returns the index of the given arbitrary latitude/longitude in the combined array\"\"\"\n\n new_array = np.abs(lat_lon_combo - [latitude, longitude])\n flat_array = new_array.reshape(-1, 2)\n origin_coordinates = (0, 0)\n distance, index = spatial.KDTree(flat_array).query(origin_coordinates)\n latindex, lonindex = [int(index / 450), int(int(index) % 450)]\n\n return latindex, lonindex\n\n\ndef get_dbscan_label_map(SO2_PBL, mask):\n \"\"\"Run the DBSCAN clustering algorithm on an array. 
Takes in the arrays for SO2 density (PBL), the SO2 detection mask, and returns an array of the detection mask, labelled with the generated cluster labels\"\"\"\n\n input_mask = np.array(mask.nonzero()).T # nonzero - return the indices of nonzero elements\n\n weight_matrix = []\n for current_point in input_mask:\n weight_matrix.append(SO2_PBL[current_point[0], current_point[1]]*1000)\n\n db = DBSCAN(eps=4, min_samples=3).fit(input_mask, sample_weight=weight_matrix)\n core_samples_mask = np.zeros_like(db.labels_, dtype=bool)\n core_samples_mask[db.core_sample_indices_] = True\n labels = db.labels_\n labels = labels + 1 # To eliminate the label \"0\", the minimum label should be 1, because we later filter with np.where\n\n # Number of clusters in labels, ignoring noise if present.\n n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)\n n_noise_ = list(labels).count(-1)\n\n #print(\"Estimated number of clusters: %d\" % n_clusters_)\n #print(\"Estimated number of noise points: %d\" % n_noise_)\n\n mask_filled = np.where(mask > 0, 1, 0) # Assign 1 for flag values above 0\n\n label_index = 0\n\n for current_index in input_mask:\n mask_filled[current_index[0]][current_index[1]] = labels[label_index]\n label_index = label_index + 1\n\n return mask_filled\n\n\ndef get_cluster_center_of_mass(SO2_PBL, mask_labelled, label):\n \"\"\"Return the center of mass for a chosen cluster. 
Takes in the arrays for SO2 density (PBL), the DBSCAN-labeled SO2 detection mask, and the label of the chosen cluster, and returns the coordinates of the center of mass in the SO2 density array\"\"\"\n\n SO2_overzero = SO2_PBL\n SO2_overzero[SO2_overzero < 0] = 0\n SO2_overzero = np.power(SO2_overzero * 1000, 4)\n\n label_mask = np.where(mask_labelled == label, 1, 0)\n masses = np.where(label_mask > 0, SO2_overzero, 0)\n\n center_of_mass = np.around((masses * np.mgrid[0:masses.shape[0], 0:masses.shape[1]]).sum(1).sum(1) / masses.sum()).astype(int)\n\n return center_of_mass\n\n\ndef run_radius_100km(current_volcano): # Automatically associate every SO2 detection within a 100km radius (radius can be set)\n \"\"\"Run the Radius search classifier (BC-1 in Markus et al. 2022 Frontiers). Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n\n mask_circle = np.zeros(mask.shape)\n if mask.sum() > 0:\n radius_km = 100\n\n for current_lat_number in range(mask.shape[0]): # Iterate through the mask and draw the circle with the specified radius\n for current_lon_number in range(mask.shape[1]):\n temp_diff = geopy.distance.geodesic((current_volcano.latitude, current_volcano.longitude), (lats[current_lat_number][current_lon_number], lons[current_lat_number][current_lon_number])).km\n if temp_diff < radius_km:\n mask_circle[current_lat_number, current_lon_number] = 1\n\n mask_circle = np.multiply(mask_circle, mask) # Logical AND the circle with the original detection mask\n\n mask_filled = mask_circle\n mask_filled = np.where(mask_filled > 0, current_volcano.id, 0)\n center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 0, mask_filled, 
gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, \"radius_100km\"\n\n\ndef run_floodfill(current_volcano):\n \"\"\"Run the Flood-fill classifier (BC-2 in Markus et al. 2022 Frontiers). Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n\n mask_to_fill = mask\n if mask.sum() > 0:\n lat_lon_combo = np.dstack((lats, lons))\n latindex, lonindex = get_lat_lon_index(current_volcano.latitude, current_volcano.longitude, lat_lon_combo)\n\n if mask[latindex, lonindex] != 0:\n mask_to_fill = flood_fill(mask, (latindex, lonindex), 2, connectivity=2)\n else: # Search for the nearest nonzero element\n search_area = 3\n idx = np.argwhere(mask)\n idx = idx[~(idx == [latindex, lonindex]).all(1)]\n nearest_index = idx[((idx - [latindex, lonindex]) ** 2).sum(1).argmin()]\n\n if abs(nearest_index[0] - latindex) < search_area and abs(nearest_index[1] - lonindex) < search_area:\n mask_to_fill = flood_fill(mask, (nearest_index[0], nearest_index[1]), 2, connectivity=2)\n\n mask_filled = np.where(mask_to_fill > 1, 1, 0) # Assign 1 for flag values above 0\n mask_filled = np.where(mask_filled > 0, current_volcano.id, 0)\n center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 0, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, \"floodfill\"\n\n\ndef run_floodfill_withwind(current_volcano):\n \"\"\"Run the Flood-fill classifier with wind (BC-3 in Markus et al. 2022 Frontiers). 
Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n\n mask_to_fill = mask\n trajectories = None\n if mask.sum() > 0:\n lat_lon_combo = np.dstack((lats, lons))\n latindex, lonindex = get_lat_lon_index(current_volcano.latitude, current_volcano.longitude, lat_lon_combo)\n\n if mask[latindex, lonindex] != 0:\n mask_to_fill = flood_fill(mask, (latindex, lonindex), 2, connectivity=2)\n else: # Search for the nearest nonzero element\n search_area = 3\n idx = np.argwhere(mask)\n idx = idx[~(idx == [latindex, lonindex]).all(1)]\n nearest_index = idx[((idx - [latindex, lonindex]) ** 2).sum(1).argmin()]\n\n if abs(nearest_index[0] - latindex) < search_area and abs(nearest_index[1] - lonindex) < search_area:\n mask_to_fill = flood_fill(mask, (nearest_index[0], nearest_index[1]), 2, connectivity=2)\n\n trajectories = [get_forward_trajectory(current_volcano)]\n\n for index, current_latitude in enumerate(trajectories[0][1]):\n current_latindex, current_lonindex = get_lat_lon_index(current_latitude, trajectories[0][0][index], lat_lon_combo)\n if mask[current_latindex, current_lonindex] == 1:\n mask_to_fill = flood_fill(mask_to_fill, (current_latindex, current_lonindex), 2, connectivity=2)\n else:\n search_area = 3\n idx = np.argwhere(mask)\n idx = idx[~(idx == [current_latindex, current_lonindex]).all(1)]\n nearest_index = idx[((idx - [current_latindex, current_lonindex]) ** 2).sum(1).argmin()]\n\n if abs(nearest_index[0] - current_latindex) < search_area and abs(nearest_index[1] - current_lonindex) < search_area:\n mask_to_fill = flood_fill(mask_to_fill, (nearest_index[0], nearest_index[1]), 2, connectivity=2)\n\n mask_filled = np.where(mask_to_fill > 1, 1, 0) # Assign 1 for flag values above 0\n mask_filled = np.where(mask_filled > 0, current_volcano.id, 0)\n 
center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 0, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, \"floodfill_withwind\", trajectories\n\n\ndef run_dbscan(current_volcano):\n \"\"\"Run the DBSCAN classifier (BC-4 in Markus et al. 2022 Frontiers). Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n\n mask_filled = np.zeros(mask.shape)\n if mask.sum() >= 4: # Minimum number of samples in DBSCAN, under this the algorithm cannot run\n lat_lon_combo = np.dstack((lats, lons))\n latindex, lonindex = get_lat_lon_index(current_volcano.latitude, current_volcano.longitude, lat_lon_combo)\n\n mask_labelled = get_dbscan_label_map(SO2_PBL, mask)\n\n if mask_labelled[latindex, lonindex] != 0:\n value_to_fill = mask_labelled[latindex, lonindex]\n else: # Search for the nearest nonzero element\n search_area = 3\n idx = np.argwhere(mask_labelled)\n idx = idx[~(idx == [latindex, lonindex]).all(1)]\n nearest_index = idx[((idx - [latindex, lonindex]) ** 2).sum(1).argmin()]\n\n if abs(nearest_index[0] - latindex) < search_area and abs(nearest_index[1] - lonindex) < search_area:\n value_to_fill = mask_labelled[nearest_index[0], nearest_index[1]]\n else:\n value_to_fill = -1\n\n mask_filled = np.where(mask_labelled == value_to_fill, 1, 0) # Assign 1 for flag values above 0\n\n mask_filled = np.where(mask_filled > 0, current_volcano.id, 0)\n center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 0, mask_filled, gt_data, current_volcano, lons, lats, mask, 
SO2_PBL, mass_filled, \"dbscan\"\n\n\ndef run_dbscan_withwind(current_volcano):\n \"\"\"Run the DBSCAN classifier with wind (BC-5 in Markus et al. 2022 Frontiers). Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n\n mask_filled = np.zeros(mask.shape)\n trajectories = None\n if mask.sum() >= 4: # Minimum number of samples in DBSCAN, under this the algorithm cannot run\n lat_lon_combo = np.dstack((lats, lons))\n latindex, lonindex = get_lat_lon_index(current_volcano.latitude, current_volcano.longitude, lat_lon_combo)\n\n mask_labelled = get_dbscan_label_map(SO2_PBL, mask)\n\n if mask_labelled[latindex, lonindex] != 0:\n value_to_fill = mask_labelled[latindex, lonindex]\n else: # Search for the nearest nonzero element\n search_area = 3\n idx = np.argwhere(mask_labelled)\n idx = idx[~(idx == [latindex, lonindex]).all(1)]\n nearest_index = idx[((idx - [latindex, lonindex]) ** 2).sum(1).argmin()]\n\n if abs(nearest_index[0] - latindex) < search_area and abs(nearest_index[1] - lonindex) < search_area:\n value_to_fill = mask_labelled[nearest_index[0], nearest_index[1]]\n else:\n value_to_fill = -1\n\n trajectories = [get_forward_trajectory(current_volcano)]\n\n for index, current_latitude in enumerate(trajectories[0][1]):\n current_latindex, current_lonindex = get_lat_lon_index(current_latitude, trajectories[0][0][index], lat_lon_combo)\n\n if mask[current_latindex, current_lonindex] == 1:\n mask_labelled = np.where(mask_labelled == mask_labelled[current_latindex, current_lonindex], value_to_fill, mask_labelled)\n else:\n search_area = 3\n idx = np.argwhere(mask)\n idx = idx[~(idx == [current_latindex, current_lonindex]).all(1)]\n nearest_index = idx[((idx - [current_latindex, current_lonindex]) ** 2).sum(1).argmin()]\n\n if abs(nearest_index[0] - current_latindex) < 
search_area and abs(nearest_index[1] - current_lonindex) < search_area:\n mask_labelled = np.where(mask_labelled == mask_labelled[nearest_index[0], nearest_index[1]], value_to_fill,mask_labelled)\n\n mask_filled = np.where(mask_labelled == value_to_fill, 1, 0) # Assign 1 for flag values above 0\n\n mask_filled = np.where(mask_filled > 0, current_volcano.id, 0)\n center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 0, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, \"dbscan_withwind\", trajectories\n\n\ndef run_reverse_trajectory_dbscan(current_volcano): # Start reverse trajectory from center of mass point of cluster, if the trajectory is within range of the volcano, we associate the cluster\n \"\"\"Run the Reverse Trajectory DBSCAN classifier (BC-6 in Markus et al. 2022 Frontiers). Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n\n mask_to_fill = np.zeros(mask.shape)\n trajectories = None\n if mask.sum() >= 4: # Minimum number of samples in DBSCAN, under this the algorithm cannot run\n mask_labelled = get_dbscan_label_map(SO2_PBL, mask)\n\n trajectories = []\n\n for unique_label in np.unique(mask_labelled):\n if unique_label != 0:\n cluster_center_of_mass = get_cluster_center_of_mass(SO2_PBL, mask_labelled, unique_label)\n height_of_cluster = current_volcano.elevation\n current_trajectory = get_reverse_trajectory(lats[cluster_center_of_mass[0], cluster_center_of_mass[1]], lons[cluster_center_of_mass[0], cluster_center_of_mass[1]], current_volcano.year, current_volcano.month, current_volcano.day, current_volcano.hour, height_of_cluster)\n trajectories.append(current_trajectory)\n\n for 
index, current_latitude in enumerate(current_trajectory[1]):\n current_longitude = current_trajectory[0][index]\n coordinate_distance = geopy.distance.geodesic((current_volcano.latitude, current_volcano.longitude),(current_latitude, current_longitude)).km\n\n if coordinate_distance < 50: # If the trajectory is nearer than 50km, we associate the plume\n label_mask = np.where(mask_labelled == unique_label, 1, 0)\n mask_to_fill = np.logical_or(mask_to_fill, label_mask)\n break # If we find a volcano, we don't have to iterate further\n\n mask_filled = mask_to_fill\n mask_filled = np.where(mask_filled > 0, current_volcano.id, 0)\n center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 0, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, \"reverse_trajectory_dbscan\", trajectories\n\n\ndef run_multi_dbscan(current_volcano):\n \"\"\"Run the Multi-class DBSCAN classifier (MC-1 in Markus et al. 2022 Frontiers). 
Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n mask_to_fill = np.zeros(mask.shape)\n if mask.sum() >= 4: # Minimum number of samples in DBSCAN, under this the algorithm cannot run\n volcano_centers = []\n for index, row in volcanolist_df.iterrows():\n volcano_centers.append([float(row['lat']), float(row['lon']), int(row['id'])])\n\n mask_labelled = get_dbscan_label_map(SO2_PBL, mask)\n mask_to_fill = mask_labelled\n\n # Build the list of clusters and their nearest clusters/volcanoes on the image\n clusters_on_image = []\n if len(np.unique(mask_labelled) > 1) and len(volcano_centers) > 0:\n for unique_label in np.unique(mask_labelled):\n cluster_response = {\n 'unique_label': None,\n 'center_of_mass_lon': None,\n 'center_of_mass_lat': None,\n 'nearest_cluster_label': None,\n 'nearest_cluster_distance': None,\n 'nearest_clusters': [],\n 'nearest_volcano_id': None,\n 'nearest_volcano_distance': None\n }\n\n nearest_volcano_id = None\n nearest_volcano_distance = None\n\n if unique_label != 0:\n ccmx, ccmy = get_cluster_center_of_mass(SO2_PBL, mask_labelled, unique_label)\n ccmlat, ccmlon = lats[ccmx, ccmy], lons[ccmx, ccmy]\n\n for new_unique_label in np.unique(mask_labelled):\n if new_unique_label != 0 and new_unique_label != unique_label:\n nccmx, nccmy = get_cluster_center_of_mass(SO2_PBL, mask_labelled, new_unique_label)\n nccmlat, nccmlon = lats[nccmx, nccmy], lons[nccmx, nccmy]\n\n distance_to_new_cluster = geopy.distance.geodesic((ccmlat, ccmlon), (nccmlat, nccmlon)).km\n cluster_response['nearest_clusters'].append([new_unique_label, distance_to_new_cluster])\n\n cluster_response['nearest_clusters'].sort(key=lambda x: x[1])\n\n nearest_cluster_label = None\n nearest_cluster_distance = None\n\n if len(cluster_response['nearest_clusters']) > 0:\n 
nearest_cluster_label = cluster_response['nearest_clusters'][0][0]\n nearest_cluster_distance = cluster_response['nearest_clusters'][0][1]\n\n for current_volcano_center in volcano_centers:\n if nearest_volcano_id is None:\n nearest_volcano_id = current_volcano_center[2]\n nearest_volcano_distance = geopy.distance.geodesic((ccmlat, ccmlon), (current_volcano_center[0], current_volcano_center[1])).km\n else:\n distance_to_new_volcano = geopy.distance.geodesic((ccmlat, ccmlon), (current_volcano_center[0], current_volcano_center[1])).km\n if distance_to_new_volcano < nearest_volcano_distance:\n nearest_volcano_id = current_volcano_center[2]\n nearest_volcano_distance = distance_to_new_volcano\n\n cluster_response['unique_label'] = unique_label\n cluster_response['center_of_mass_lat'] = ccmlat\n cluster_response['center_of_mass_lon'] = ccmlon\n cluster_response['nearest_cluster_label'] = nearest_cluster_label\n cluster_response['nearest_cluster_distance'] = nearest_cluster_distance\n cluster_response['nearest_volcano_id'] = nearest_volcano_id\n cluster_response['nearest_volcano_distance'] = nearest_volcano_distance\n\n clusters_on_image.append(cluster_response)\n\n # Beginning of the algorithm\n\n tolerance_km = 200 # Tolerance parameter for the algorithm\n\n associations = []\n closest_cluster_to_volcano = sorted(clusters_on_image, key=lambda x: x['nearest_volcano_distance'])[0]\n\n previous_cluster = None\n current_cluster = closest_cluster_to_volcano\n source_volcano_id = closest_cluster_to_volcano['nearest_volcano_id'] # The original source volcano ID, this value gets replaced, when it's not the nearest volcano anymore\n\n shortest_index = 1 # We keep track in this variable, how many times we have switched source volcanoes, aka if we are jumping to the second, third, fourth... 
shortest volcano-cluster distance\n while True: # We start growing the associations list, and stop if we have associated every cluster\n if current_cluster['nearest_volcano_id'] == source_volcano_id:\n if current_cluster['unique_label'] not in [i[0] for i in associations]:\n associations.append([current_cluster['unique_label'], current_cluster['nearest_volcano_id']])\n else:\n for current_near_cluster in current_cluster['nearest_clusters']: # As the nearest_clusters list is already sorted, we can just iterate through it and stop when we find the first unassociated cluster\n if current_near_cluster[0] not in [i[0] for i in associations]:\n previous_cluster = current_cluster\n current_cluster = list(filter(lambda t: t['unique_label'] == current_near_cluster[0], clusters_on_image))[0]\n break\n else:\n if current_cluster['nearest_volcano_distance'] > tolerance_km and current_cluster['nearest_volcano_distance'] > geopy.distance.geodesic((current_cluster['center_of_mass_lat'], current_cluster['center_of_mass_lon']), (previous_cluster['center_of_mass_lat'], previous_cluster['center_of_mass_lon'])).km:\n associations.append([current_cluster['unique_label'], source_volcano_id])\n else:\n shortest_index = shortest_index + 1 # Increase the shortest_index for the future changes\n closest_cluster_to_volcano = sorted(clusters_on_image, key=lambda x: x['nearest_volcano_distance'])[shortest_index - 1] # Get the cluster with the n-th nearest volcano-cluster distance\n\n previous_cluster = current_cluster\n current_cluster = closest_cluster_to_volcano # Change current cluster to the new cluster\n source_volcano_id = closest_cluster_to_volcano['nearest_volcano_id'] # Change the source volcano to the new nearest volcano\n\n if len(associations) == (len(np.unique(mask_labelled)) - 1): # If we have associated every cluster, we stop the while loop\n break\n\n # Remove clusters that are far away from any volcano or cluster from the association list, using the tolerance\n for 
current_associated_cluster in clusters_on_image:\n if current_associated_cluster['nearest_volcano_distance'] is not None and current_associated_cluster['nearest_cluster_distance'] is not None:\n if current_associated_cluster['nearest_volcano_distance'] > tolerance_km and current_associated_cluster['nearest_cluster_distance'] > tolerance_km * 2:\n for associations_to_remove in associations:\n if associations_to_remove[0] == current_associated_cluster['unique_label']:\n associations_to_remove[1] = -1\n\n for current_association in associations:\n mask_to_fill[mask_to_fill == current_association[0]] = current_association[1]\n\n #print(associations)\n\n mask_filled = mask_to_fill\n center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 1, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, \"multi_dbscan\"\n\n\ndef run_multi_reverse_trajectory_dbscan(current_volcano): # Start reverse trajectory from center of mass point of cluster, if the trajectory is within range of the volcano, we associate the cluster\n \"\"\"Run the Multi-class Reverse Trajectory DBSCAN classifier (MC-2 in Markus et al. 2022 Frontiers). 
Takes in a Volcano class, and returns all relevant information, including the filled detection mask with the volcano ID-s\"\"\"\n\n SO2_PBL, lons, lats, mask = read_satellite_file(current_volcano)\n gt_data = get_ground_truth(current_volcano, mask.shape)\n\n mask_to_fill = np.zeros(mask.shape)\n trajectories = None\n if mask.sum() >= 4: # Minimum number of samples in DBSCAN, under this the algorithm cannot run\n image_center_point = [current_volcano.latitude, current_volcano.longitude]\n volcano_centers = []\n\n for index, row in volcanolist_df.iterrows():\n if abs(image_center_point[0] - float(row['lat'])) < 20 and abs(image_center_point[1] - float(row['lon'])) < 20: # Only append volcanoes that are at most +/- 20 degrees from the center point\n volcano_centers.append([float(row['lat']), float(row['lon']), int(row['id'])])\n\n mask_labelled = get_dbscan_label_map(SO2_PBL, mask)\n mask_to_fill = mask_labelled\n\n associations = []\n\n if len(np.unique(mask_labelled) > 1) and len(volcano_centers) > 0:\n trajectories = []\n for unique_label in np.unique(mask_labelled):\n if unique_label != 0:\n cluster_center_of_mass = get_cluster_center_of_mass(SO2_PBL, mask_labelled, unique_label)\n\n nearest_volcano_id = None\n nearest_volcano_distance = None\n\n for current_volcano_center in volcano_centers:\n if nearest_volcano_id is None:\n nearest_volcano_id = current_volcano_center[2]\n nearest_volcano_distance = geopy.distance.geodesic((lats[cluster_center_of_mass[0], cluster_center_of_mass[1]], lons[cluster_center_of_mass[0], cluster_center_of_mass[1]]), (current_volcano_center[0], current_volcano_center[1])).km\n else:\n distance_to_new_volcano = geopy.distance.geodesic((lats[cluster_center_of_mass[0], cluster_center_of_mass[1]], lons[cluster_center_of_mass[0], cluster_center_of_mass[1]]), (current_volcano_center[0], current_volcano_center[1])).km\n if distance_to_new_volcano < nearest_volcano_distance:\n nearest_volcano_id = current_volcano_center[2]\n 
nearest_volcano_distance = distance_to_new_volcano\n\n height_of_cluster = int(volcanolist_df.loc[volcanolist_df['id'] == nearest_volcano_id]['elevation'])\n\n current_trajectory = get_reverse_trajectory(lats[cluster_center_of_mass[0], cluster_center_of_mass[1]], lons[cluster_center_of_mass[0], cluster_center_of_mass[1]], current_volcano.year, current_volcano.month, current_volcano.day, current_volcano.hour, height_of_cluster)\n trajectories.append(current_trajectory)\n\n nearest_volcano_id = None\n nearest_volcano_distance = None\n\n for index, current_latitude in enumerate(current_trajectory[1]):\n current_longitude = current_trajectory[0][index]\n for current_volcano_center in volcano_centers:\n if nearest_volcano_id is None:\n nearest_volcano_id = current_volcano_center[2]\n nearest_volcano_distance = geopy.distance.geodesic((current_latitude, current_longitude), (current_volcano_center[0], current_volcano_center[1])).km\n else:\n distance_to_new_volcano = geopy.distance.geodesic((current_latitude, current_longitude), (current_volcano_center[0], current_volcano_center[1])).km\n if distance_to_new_volcano < nearest_volcano_distance:\n nearest_volcano_id = current_volcano_center[2]\n nearest_volcano_distance = distance_to_new_volcano\n\n if nearest_volcano_distance is not None and nearest_volcano_distance < 200:\n associations.append([unique_label, nearest_volcano_id])\n else:\n associations.append([unique_label, -1])\n\n for current_association in associations:\n mask_to_fill[mask_to_fill == current_association[0]] = current_association[1]\n\n #print(associations)\n\n mask_filled = mask_to_fill\n center_volcano_mask = np.where(mask_filled == current_volcano.id, 1, 0)\n mass_filled = np.multiply(SO2_PBL, center_volcano_mask)\n mass_filled = mass_filled * molar_mass_SO2 * pixel_area_m2\n\n return 1, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, \"multi_reverse_trajectory_dbscan\", trajectories\n"
},
{
"alpha_fraction": 0.595863938331604,
"alphanum_fraction": 0.6160773634910583,
"avg_line_length": 44.56230926513672,
"blob_id": "6570fa8d7c94c30258e4970b042a359f6ad3ddf9",
"content_id": "d109c6efc33cc1d7a3c5183b4502666d29d94667",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 14990,
"license_type": "no_license",
"max_line_length": 252,
"num_lines": 329,
"path": "/plotting_functions.py",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "# -----------------------------------------------------------\n# plotting_functions.py\n# Contains all functions to plot the results of function in segmentation_algorithms.py\n# -----------------------------------------------------------\n\nimport matplotlib.colors\nimport matplotlib.pyplot as plt\nimport netCDF4\nimport numpy as np\nfrom matplotlib.ticker import FixedLocator\nfrom mpl_toolkits.basemap import Basemap\n\nimport segmentation_algorithms\nfrom Volcano import *\n\n\ndef plot_segmentation_result(sem_inst, mask_filled, gt_data, current_volcano, lons, lats, mask, SO2_PBL, mass_filled, method, trajectories=None):\n \"\"\"Plot the result of an algorithm. Takes in the result of an algorithm function in segmentation_algorithms.py, returns the plot figure\"\"\"\n\n PRINT_MODE = 0 # set to 1 if plotting for document, set to 0 if viewed on PC\n zoom_span = 6 # Zooming parameter on plot (the larger, the more zoomed out)\n\n # PLOT\n # Set plot DPI high for Spyder plot\n if PRINT_MODE == 1:\n plt.rcParams['figure.dpi'] = 500\n fig = plt.figure(figsize=(4, 2))\n else:\n plt.rcParams['figure.dpi'] = 200\n fig = plt.figure(figsize=(16, 9))\n\n m = Basemap(llcrnrlon=current_volcano.longitude - zoom_span / 2,\n llcrnrlat=current_volcano.latitude - zoom_span / 2,\n urcrnrlon=current_volcano.longitude + zoom_span / 2,\n urcrnrlat=current_volcano.latitude + zoom_span / 2, resolution='l', projection='cyl',\n lat_ts=40, lat_0=current_volcano.latitude, lon_0=current_volcano.longitude)\n\n m.drawparallels(np.arange(-85., 85., 2.), labels=[1, 0, 0, 0], fontsize=10)\n m.drawmeridians(np.arange(-180., 181., 2.), labels=[0, 0, 0, 1], fontsize=10)\n try:\n m.drawcoastlines()\n m.drawstates()\n m.drawcountries()\n except:\n print(\"No basemap lines drawn\")\n\n # - SO2 map\n vmin, vmax = 0, 0.01\n vcd2plot, vcd2plot_str = SO2_PBL, 'SO2_PBL'\n cs = m.pcolormesh(lons, lats, vcd2plot, latlon=True, vmin=vmin, vmax=vmax, cmap='CMRmap_r', shading='auto')\n\n # - SO2 detection 
mask\n colorbar_labels = []\n\n if sem_inst == 0:\n fillcmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", [\"w\", \"g\"], 2)\n cmask_filled = m.pcolormesh(lons, lats, mask_filled.astype(np.float32), latlon=True, vmin=0, vmax=1, cmap=fillcmap, alpha=.5, shading='auto')\n else:\n # Crop the clusters that are not on the plot, to eliminate non-present colorbar labels\n for iterator_row in range(len(mask_filled)):\n for iterator_column in range(len(mask_filled[0])):\n if lons[iterator_row, iterator_column] > current_volcano.longitude + zoom_span / 2 or lons[iterator_row, iterator_column] < current_volcano.longitude - zoom_span / 2:\n mask_filled[iterator_row, iterator_column] = 0\n if lats[iterator_row, iterator_column] > current_volcano.latitude + zoom_span / 2 or lats[iterator_row, iterator_column] < current_volcano.latitude - zoom_span / 2:\n mask_filled[iterator_row, iterator_column] = 0\n\n # Count the remaining unique elements after cropping the image\n mask_filled = mask_filled.astype(int)\n number_of_remained_unique_elements_after_crop = len(np.unique(mask_filled))\n mask_filled = mask_filled.astype(np.float32)\n\n for element in np.unique(mask_filled[~np.isnan(mask_filled)]):\n current_id = int(element)\n if current_id != 0:\n current_label = int(current_id)\n colorbar_labels.append(current_label)\n if len(colorbar_labels) == 0:\n colorbar_labels.append(0)\n mycmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", [\"w\", \"w\"])\n cmask_filled = m.pcolormesh(lons, lats, mask_filled, latlon=True, cmap=mycmap, alpha=.8, shading='auto')\n else:\n mask_filled[mask_filled == 0] = np.nan # to make the zeros transparent in pcolormesh\n mycmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", [\"C0\", \"yellow\", \"C1\", \"C3\", \"C2\"], number_of_remained_unique_elements_after_crop - 1)\n cmask_filled = m.pcolormesh(lons, lats, mask_filled, latlon=True, cmap=mycmap, alpha=.8, shading='auto')\n\n # - SO2 detection mask\n cmask = 
m.pcolormesh(lons, lats, mask.astype(np.float32), latlon=True, vmin=0, vmax=1, cmap=plt.cm.get_cmap('Greys', 2), alpha=.25, shading='auto')\n\n    other_volcanoes = []\n    # - cosmetics\n    for index, row in volcanolist_df.iterrows():\n        plt.text(row['lon'], row['lat'], str(row['fullname']) + \" (\" + str(row['id']) + \")\", clip_on=True, fontsize=6)\n        plt.plot(row['lon'], row['lat'], '^b', zorder=2)\n\n        if (abs(float(row['lon']) - float(current_volcano.longitude)) < zoom_span / 2) and (abs(\n                float(row['lat']) - float(current_volcano.latitude)) < zoom_span / 2) and str(\n                row['name']) != current_volcano.name:\n            other_volcanoes.append(row)\n\n    if trajectories is not None:\n        for current_trajectory in trajectories:\n            plt.plot(current_trajectory[0], current_trajectory[1], color='C9', label='Plume trajectory')\n\n    if PRINT_MODE == 1:\n        handles, labels = plt.gca().get_legend_handles_labels()\n        by_label = dict(zip(labels, handles))\n        plt.legend(by_label.values(), by_label.keys(), prop={'size': 4}, loc='upper right')\n    else:\n        handles, labels = plt.gca().get_legend_handles_labels()\n        by_label = dict(zip(labels, handles))\n        plt.legend(by_label.values(), by_label.keys(), prop={'size': 10}, loc='upper right')\n\n    if PRINT_MODE == 0:\n        title = '$SO_{2}$ vertical column density - ' + str(current_volcano.time_begin) + 'T' + str(current_volcano.hour) + ':' + str(current_volcano.minute) + ':' + str(current_volcano.second) + 'Z - ' + str(current_volcano.name) + ' - ' + str(method)\n        textstr = \"Total filled $SO_{2}$ mass of center volcano (\" + current_volcano.name + \"): \" + \"{:.2f}\".format(mass_filled.sum() / 1000000) + \" tons\"\n        plt.gcf().text(0.13, 0.06, textstr, fontsize=14)\n        plt.title(title)\n\n    if PRINT_MODE != 1:\n        cba = plt.colorbar(cs)\n        cba.set_label('$SO_{2}$ mole/$m^{2}$')\n\n    if sem_inst == 0:\n        cbb = plt.colorbar(cmask_filled, ticks=[0, 1])\n        cbb.set_label('Filled mask')\n    else:\n        cbb = plt.colorbar(cmask_filled, ticks=FixedLocator(colorbar_labels))\n        
cbb.set_label('Associated volcano ID-s')\n\n cbc = plt.colorbar(cmask, ticks=[0, 1])\n cbc.set_label('$SO_{2}$ detection mask')\n\n # plt.show()\n\n return fig\n\n\ndef plot_ground_truth(current_volcano):\n \"\"\"Plot the ground truth for a product. Takes in a Volcano class, returns the plot figure\"\"\"\n\n # OPEN FILE\n print(current_volcano.filename)\n fh = netCDF4.Dataset(current_volcano.filename, 'r', format=\"NETCDF4\")\n\n PRINT_MODE = 0 # set to 1 if plotting for document, set to 0 if viewed on PC\n zoom_span = 6\n\n # PLOT\n # Set plot DPI high for Spyder plot\n if PRINT_MODE == 1:\n plt.rcParams['figure.dpi'] = 500\n fig = plt.figure(figsize=(4, 2))\n else:\n plt.rcParams['figure.dpi'] = 200\n fig = plt.figure(figsize=(16, 9))\n\n # GET DATA\n # - lat/lon\n lons = fh.groups['PRODUCT'].variables['longitude'][:][0, :, :]\n lats = fh.groups['PRODUCT'].variables['latitude'][:][0, :, :]\n # - other\n acqdate = fh.time_reference\n viewing_azimuth_angle = (\n fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['GEOLOCATIONS'].variables['viewing_azimuth_angle'][0, :, :])\n surface_altitude = (\n fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['INPUT_DATA'].variables['surface_altitude'][0, :, :])\n cloud_fraction = (\n fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['INPUT_DATA'].variables['cloud_fraction_crb'][0, :, :])\n cloud_height = (\n fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['INPUT_DATA'].variables['cloud_height_crb'][0, :, :])\n\n # PLOT\n m = Basemap(llcrnrlon=current_volcano.longitude - zoom_span / 2, llcrnrlat=current_volcano.latitude - zoom_span / 2,\n urcrnrlon=current_volcano.longitude + zoom_span / 2, urcrnrlat=current_volcano.latitude + zoom_span / 2,\n resolution='l', projection='cyl', lat_ts=40, lat_0=current_volcano.latitude,\n lon_0=current_volcano.longitude)\n\n gt_filename = ASSETS_DIRECTORY + \"/ground_truth/\" + str(os.path.basename(current_volcano.filename)) + \".csv\"\n a = np.zeros(shape=lons.shape)\n if 
os.path.exists(gt_filename):\n a = np.genfromtxt(gt_filename, delimiter=',')\n\n u, ind = np.unique(a, return_inverse=True)\n b = ind.reshape((a.shape))\n\n labels = []\n print(np.unique(a))\n for element in np.unique(a):\n if not np.isnan(element):\n current_id = int(element)\n else:\n current_id = -1\n # print(current_id)\n if current_id == -1:\n current_label = str(current_id) + \" - Unidentified\"\n # print(current_label)\n elif current_id == 0:\n current_label = str(current_id) + \" - No SO2\"\n # print(current_label)\n else:\n current_label = str(current_id) + \" - \" + str(\n volcanolist_df.loc[volcanolist_df['id'] == current_id]['fullname'].values[0])\n # print(current_label)\n\n labels.append(current_label)\n\n # - SO2 detection mask\n m.pcolormesh(lons, lats, b, latlon=True, cmap=plt.cm.get_cmap('viridis', len(labels)), alpha=.5, shading='auto')\n\n for index, row in volcanolist_df.iterrows():\n plt.text(row['lon'], row['lat'], str(row['fullname']) + \" (\" + str(row['id']) + \")\", clip_on=True, fontsize=6)\n plt.plot(row['lon'], row['lat'], '^b')\n\n m.drawparallels(np.arange(-85., 85., 2.), labels=[1, 0, 0, 0], fontsize=10)\n m.drawmeridians(np.arange(-180., 181., 2.), labels=[0, 0, 0, 1], fontsize=10)\n try:\n m.drawcoastlines()\n m.drawstates()\n m.drawcountries()\n except:\n print(\"No basemap lines drawn\")\n\n cb = plt.colorbar(ticks=np.arange(len(u)))\n cb.ax.set_yticklabels(labels)\n if PRINT_MODE == 0:\n plt.title(current_volcano.time_begin + \" - \" + gt_filename)\n\n # plt.show()\n\n return fig\n\n\ndef plot_dbscan_clusters(current_volcano):\n \"\"\"Plot the result of DBSCAN cluster generation. 
Takes in a Volcano class, returns the plot figure\"\"\"\n\n print(current_volcano.filename)\n # OPEN FILE\n fh = netCDF4.Dataset(current_volcano.filename, 'r', format=\"NETCDF4\")\n\n PRINT_MODE = 0 # set to 1 if plotting for document, set to 0 if viewed on PC\n zoom_span = 6\n\n # PLOT\n # Set plot DPI high for Spyder plot\n if PRINT_MODE == 1:\n plt.rcParams['figure.dpi'] = 500\n fig = plt.figure(figsize=(4, 2))\n else:\n plt.rcParams['figure.dpi'] = 200\n fig = plt.figure(figsize=(16, 9))\n\n # GET DATA\n # - lat/lon\n lons = fh.groups['PRODUCT'].variables['longitude'][:][0, :, :]\n lats = fh.groups['PRODUCT'].variables['latitude'][:][0, :, :]\n\n # - SO2 maps\n SO2_PBL = fh.groups['PRODUCT'].variables['sulfurdioxide_total_vertical_column'][0, :, :]\n\n # - SO2 detection flag: 'sulfurdioxide_detection_flag'\n flag = fh.groups['PRODUCT'].groups['SUPPORT_DATA'].groups['DETAILED_RESULTS'].variables['sulfurdioxide_detection_flag'][0, :, :]\n mask = np.where(flag > 0, 1, 0) # >> assign 1 for flag values above 0\n\n mask_filled = np.zeros(mask.shape)\n if mask.sum() >= 4:\n mask_filled = segmentation_algorithms.get_dbscan_label_map(SO2_PBL, mask)\n\n m = Basemap(llcrnrlon=current_volcano.longitude - zoom_span / 2,\n llcrnrlat=current_volcano.latitude - zoom_span / 2,\n urcrnrlon=current_volcano.longitude + zoom_span / 2,\n urcrnrlat=current_volcano.latitude + zoom_span / 2, resolution='l', projection='cyl',\n lat_ts=40, lat_0=current_volcano.latitude, lon_0=current_volcano.longitude)\n\n # - SO2 map\n vmin, vmax = 0, 0.01\n vcd2plot, vcd2plot_str = SO2_PBL, 'SO2_PBL'\n cs = m.pcolormesh(lons, lats, vcd2plot, latlon=True, vmin=vmin, vmax=vmax, cmap='CMRmap_r', alpha=.2, shading='auto')\n\n # Crop the clusters that are not on the plot, to eliminate non-present colorbar labels\n for iterator_row in range(len(mask_filled)):\n for iterator_column in range(len(mask_filled[0])):\n if lons[iterator_row, iterator_column] > current_volcano.longitude + zoom_span / 2 or lons[\n 
iterator_row, iterator_column] < current_volcano.longitude - zoom_span / 2:\n mask_filled[iterator_row, iterator_column] = 0\n if lats[iterator_row, iterator_column] > current_volcano.latitude + zoom_span / 2 or lats[\n iterator_row, iterator_column] < current_volcano.latitude - zoom_span / 2:\n mask_filled[iterator_row, iterator_column] = 0\n\n # Count the remaining unique elements after cropping the image\n mask_filled = mask_filled.astype(int)\n number_of_remained_unique_elements_after_crop = len(np.unique(mask_filled))\n mask_filled = mask_filled.astype(np.float32)\n\n colorbar_labels = []\n\n for element in np.unique(mask_filled[~np.isnan(mask_filled)]):\n current_id = int(element)\n if current_id != 0:\n current_label = int(current_id)\n colorbar_labels.append(current_label)\n if len(colorbar_labels) == 0:\n colorbar_labels.append(0)\n mycmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", [\"w\", \"w\"])\n cmask_filled = m.pcolormesh(lons, lats, mask_filled, latlon=True, cmap=mycmap, alpha=.8, shading='auto')\n else:\n mask_filled[mask_filled == 0] = np.nan # to make the zeros transparent in pcolormesh\n mycmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", [\"C0\", \"yellow\", \"C1\", \"C3\", \"C2\"], number_of_remained_unique_elements_after_crop - 1)\n cmask_filled = m.pcolormesh(lons, lats, mask_filled, latlon=True, cmap=mycmap, alpha=.8, shading='auto')\n\n # - SO2 detection mask\n cmask = m.pcolormesh(lons, lats, mask.astype(np.float32), latlon=True, vmin=0, vmax=1, cmap=plt.cm.get_cmap('Greys', 2), alpha=.1, shading='auto')\n\n cbb = plt.colorbar(cmask_filled, ticks=FixedLocator(colorbar_labels))\n cbb.set_label('Cluster labels')\n\n for other_volcanoindex, other_row in volcanolist_df.iterrows():\n plt.text(other_row['lon'], other_row['lat'], str(other_row['fullname']) + \" (\" + str(other_row['id']) + \")\", clip_on=True, fontsize=6)\n plt.plot(other_row['lon'], other_row['lat'], '^b')\n\n m.drawparallels(np.arange(-85., 85., 
2.), labels=[1, 0, 0, 0], fontsize=10)\n m.drawmeridians(np.arange(-180., 181., 2.), labels=[0, 0, 0, 1], fontsize=10)\n try:\n m.drawcoastlines()\n m.drawstates()\n m.drawcountries()\n except:\n print(\"No basemap lines drawn\")\n\n return fig\n"
},
{
"alpha_fraction": 0.6034142971038818,
"alphanum_fraction": 0.6066973209381104,
"avg_line_length": 40.72602844238281,
"blob_id": "0402fc7dc1c3e848467bf8f4408f39067ab3e50d",
"content_id": "549442e49d318d8ea651a756b40f01a8adadfb7e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3046,
"license_type": "no_license",
"max_line_length": 473,
"num_lines": 73,
"path": "/main.py",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "# -----------------------------------------------------------\n# main.py\n# Runs the result generation using the predefined parameters in the CONFIG files\n# -----------------------------------------------------------\n\nimport csv\nimport datetime\nimport gc\nimport json\nimport os\nimport warnings\n\nimport segmentation_algorithms\nfrom Volcano import *\nfrom calculate_metrics import get_metrics_response\nfrom plotting_functions import plot_segmentation_result\n\nwarnings.filterwarnings(\"ignore\")\n\n# Datetime object for the distinct log file name\nnow = datetime.datetime.now()\ndt_string = now.strftime(\"%Y%m%dT%H%M%S\")\n\nvolcano_list_df = pd.read_csv('CONFIG_INPUT_LIST.csv')\n\nOUTPUT_CSV_OR_JSON = 1\nfnames = ['name','fullname','id','datetime','latitude','longitude','elevation','method','filename','so2_mass_value','so2_mass_unit','so2_pixels_on_image','ground_truth_available','ground_truth_pixels_for_current_volcano','true_positive_pixels','false_positive_pixels','true_negative_pixels','false_negative_pixels','intersection_over_union','oversegmentation','undersegmentation','area_fit_index','root_mean_square','accuracy','precision','recall','f1_score','specificity']\n\nif OUTPUT_CSV_OR_JSON == 1:\n output_filename = 'output/result_' + dt_string + '.csv'\n with open(output_filename, 'a') as filedata:\n writer = csv.writer(filedata, delimiter=',')\n writer.writerow(fnames)\n\n\nfor index, row in volcano_list_df.iterrows():\n try:\n current_volcano = Volcano(row['name'], row['date'])\n\n function_to_use = getattr(segmentation_algorithms, 'run_' + str(row['method']))\n\n result = function_to_use(current_volcano)\n\n response = get_metrics_response(*result)\n\n if OUTPUT_CSV_OR_JSON == 0:\n output_filename = 'output/result_' + dt_string + '.json'\n if not os.path.exists(output_filename):\n data = {\"data\": []}\n with open(output_filename, 'w') as outputfile:\n json.dump(data, outputfile)\n\n with open(output_filename, 'r+') as outputfile:\n # First we 
load existing data into a dict.\n file_data = json.load(outputfile)\n # Join new_data with file_data inside emp_details\n file_data[\"data\"].append(response['data'])\n # Sets file's current position at offset.\n outputfile.seek(0)\n # convert back to json.\n json.dump(file_data, outputfile, indent=4)\n else:\n output_filename = 'output/result_' + dt_string + '.csv'\n with open(output_filename, 'a') as filedata:\n writer = csv.DictWriter(filedata, delimiter=',', fieldnames=fnames)\n writer.writerow(response['data'][0])\n\n fig = plot_segmentation_result(*result)\n fig.savefig('plots/' + str(row['name']) + '_' + str(row['date']) + '_' + str(row['method']) + '_' + dt_string + '.png', bbox_inches='tight')\n\n gc.collect()\n except Exception as e:\n print(\"Exception: \" + str(e))\n"
},
{
"alpha_fraction": 0.487109899520874,
"alphanum_fraction": 0.514246940612793,
"avg_line_length": 31.086956024169922,
"blob_id": "8e9858dec71758afb02aefa2eb98afd6b0502f1e",
"content_id": "48234c8dafae70e5b988e1d9207ce9e7edff33d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 737,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 23,
"path": "/CONFIG_PARAMETERS.py",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
        "text": "# -----------------------------------------------------------------------------------------------\n# SET THIS TO 1 IF AN EXTERNAL HARD DRIVE CALLED \"Main Volume\" IS MOUNTED\n# SET THIS TO 0 IF THE ASSETS ARE PLACED IN THE REPOSITORY ROOT IN A FOLDER NAMED \"assets\"\n\nEXTERNAL_DRIVE = 0\n\n# -----------------------------------------------------------------------------------------------\n\n# DON'T EDIT AFTER THIS\n\nmolar_mass_SO2 = 64.0638\npixel_area_m2 = 19250000\n\nimport getpass\nimport os\n\nif EXTERNAL_DRIVE == 1:\n    username = str(getpass.getuser())\n    ASSETS_DIRECTORY = \"/media/\" + username + \"/Main Volume\"\nelse:\n    dirname = os.path.dirname(__file__)\n    ASSETS_DIRECTORY = os.path.join(dirname, 'assets')\n    print(ASSETS_DIRECTORY)"
},
{
"alpha_fraction": 0.7092163562774658,
"alphanum_fraction": 0.7577624320983887,
"avg_line_length": 42.870269775390625,
"blob_id": "37c0c74614aefcccdfe425e3446855728175eef6",
"content_id": "7563c660cdd7402df31d9f7e7f5f78bb56cd5eaa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 8116,
"license_type": "no_license",
"max_line_length": 528,
"num_lines": 185,
"path": "/README.md",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "# tropomi-emission-source | Automatic Retrieval of Volcanic SO2 Emission Source from TROPOMI Products\n\nWe have developed fast and accurate SO2 plume classifier and segmentation algorithms using classic clustering, segmentation and image processing techniques. These algorithms, applied to measurements from the TROPOMI instrument onboard the Sentinel-5 Precursor platform, can help in the accurate measurement of the mass of SO2 plumes originating from various volcanoes.\n\nThis repository contains a simple script and an API which can be used to retrieve volcanic SO2 emission source and the mass of SO2 plumes from TROPOMI products.\n\n![index-image](docs/index-image.png)\n\n## Getting started\n\nIn this short tutorial, we are setting up the script, and we are getting the mass of some SO2 plumes.\n\n### Step 1\n\nClone this repository:\n\n```bash\ngit clone https://github.com/bazsimarkus/tropomi-emission-source.git\n```\n\n### Step 2\n\nCopy and paste the required files into the following directories, depending on which function you want to use.\n\n```\ntropomi-emission-source/assets/DATA/\ntropomi-emission-source/assets/gdas/\ntropomi-emission-source/assets/ground_truth/\ntropomi-emission-source/assets/hysplit/\ntropomi-emission-source/assets/trajectories/\n```\n\n- DATA - the TROPOMI NRTI products have to be placed here - example filename for such a file: `S5P_NRTI_L2__SO2____20210815T184135_20210815T184635_19896_02_020201_20210815T193716.nc`\n- gdas - the GDAS meteorological files have to be placed here, as provided on the NOAA FTP server - ftp://ftp.arl.noaa.gov/pub/archives/gdas1 - no password, Port 21 - example filename: `gdas1.aug21.w3`\n- ground_truth - this folder contains the ground truth for the files in the DATA folder, in .csv format - They have to have the same file name as the .nc file, extended with \".csv\" - For example: `S5P_NRTI_L2__SO2____20210815T184135_20210815T184635_19896_02_020201_20210815T193716.nc.csv` One cell represents one pixel 
in the product, and they have to have the same size as the product. In the cell there should be the ID of the volcano from the volcano_coordinates.csv file, and if there is no pixel, it should be left empty.\n- hysplit - the compiled HYSPLIT distribution for Linux has to be placed here\n- trajectories - this folder is used by HYSPLIT, and it can be left empty. The generated trajectories are saved here.\n\nIf you are aiming for the minimum working example, and only want to use the algorithms without wind models, it is enough to just copy and paste the TROPOMI products into the DATA folder. The other folders can be left empty, but in this case the algorithms with wind cannot be used (if you try to use them, the script will fail).\n\n### Step 3\n\nInstall the dependencies based on the **requirements.txt** file. It may be needed to install these additional dependencies first:\n\n```bash\nsudo apt-get update \nsudo apt-get install libgfortran5\nsudo apt-get install libgeos++-dev\nsudo apt-get install libgeos-dev\nsudo apt-get install libproj-dev\nsudo apt-get install python3 python3-dev build-essential libssl-dev libffi-dev libxml2-dev libxslt1-dev zlib1g-dev python-pip\nsudo apt-get install ffmpeg libsm6 libxext6 -y\npip3 install proj\npip3 install geos\npip3 install basemap\npip3 install \"uvicorn[standard]\"\n```\n\n### Step 4\n\nDefine the links between the files and the volcanoes and dates in the **CONFIG_TROPOMI_file_archive.csv** file. With this in the script you only have to provide the volcano and the date, and the corresponding file will be opened automatically. 
This step is essential, as one image can contain multiple volcanoes.\nThe first column contains the name of the volcano, the second column contains the date in YYYY-MM-DD format, and the third column contains the relative path of the product in the assets folder.\n\nExample content of the **CONFIG_TROPOMI_file_archive.csv** file:\n\n```\nname,date,filepath\nsangay,2021-08-15,/DATA/S5P_NRTI_L2__SO2____20210815T184135_20210815T184635_19896_02_020201_20210815T193716.nc\netna,2021-06-17,/DATA/S5P_NRTI_L2__SO2____20210617T121026_20210617T121526_19055_01_020104_20210617T130549.nc\n```\n\n\n### Step 5\n\nDefine the volcanoes to analyze, and the used methods in the **CONFIG_INPUT_LIST.csv** file:\nThe first column contains the name of the volcano, the second column contains the date in YYYY-MM-DD format, and the third column contains the name of the desired algorithm to determine the SO2 mass.\nPay attention to the correct names and the date format, otherwise the script might fail.\n\nExample content of the **CONFIG_INPUT_LIST.csv** file:\n\n```\nname,date,method\nsangay,2021-08-15,dbscan_withwind\netna,2021-06-17,floodfill_withwind\n```\n\nAvailable algorithms to choose from in the third column:\n\n```\nradius_100km\nfloodfill\nfloodfill_withwind\ndbscan\ndbscan_withwind\nreverse_trajectory_dbscan\nmulti_dbscan\nmulti_reverse_trajectory_dbscan\n```\n\n### Step 6\n\nIt may happen that the PySPLIT library source code contains the HYSPLIT Windows path hardcoded. If you want to use the algorithms with wind under Linux, you might have to edit the **trajectory_generator.py** file of PySPLIT at the top of the file to contain the following (notice the path of the executable). 
For this you might have to copy and paste the whole library next to the **main.py** file in a folder called \"pysplit\", so that you can edit its files.\n\n```python\nfrom CONFIG_PARAMETERS import ASSETS_DIRECTORY\n\ndef generate_bulktraj(basename, hysplit_working, output_dir, meteo_dir, years,\n                      months, hours, altitudes, coordinates, run,\n                      meteoyr_2digits=True, outputyr_2digits=False,\n                      monthslice=slice(0, 32, 1), meteo_bookends=([4, 5], [1]),\n                      get_reverse=False, get_clipped=False,\n                      hysplit=ASSETS_DIRECTORY + \"/hysplit/exec/hyts_std\"):\n```\n\n### Step 7\n\nAfter these steps, you can finally run the script:\n\n```bash\ncd tropomi-emission-source\npython3 main.py\n```\n\nThe script produces two kinds of outputs while running:\n\n- a graphical plot of the scenario with the filling mask overlay in PNG format, in the *plots* folder\n- a CSV/JSON response of the segmentation with the total SO2 mass, and the segmentation metrics (if available), in the *output* folder\n\nThese output objects can be queried one-by-one, too, without a need for a pre-defined list, with the help of the API. More on that in the chapter \"Using the API\".\n\n## Using the API\n\nFollow the \"Getting started\" steps until Step 4.\n\nAfter setting up everything, we can start the server for the API:\n\n```bash\ncd tropomi-emission-source\npython3 -m uvicorn app:app --reload\n```\n\nAfter that, it should look like this:\n\n```\nINFO: Will watch for changes in these directories: ['/home/user/tropomi-emission-source']\nINFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)\nINFO: Started reloader process [36006] using watchgod\nINFO: Started server process [36008]\nINFO: Waiting for application startup.\nINFO: Application startup complete.\n```\n\nOpen up a web browser with the mentioned web address: http://127.0.0.1:8000/docs\n\nWith the Swagger UI the different API endpoints can be explored. Just click the \"Try it out\" button on the endpoint. 
Some examples can be found below.\n\n### API Endpoint examples\n\nThere are always three input parameters, the name (sabancaya), the date (2021-01-04), and the method (dbscan_withwind). Pay attention to the correct format, otherwise the script fails.\n\n#### /return_so2_mass_of_volcano\n\nCalling this endpoint:\n\n```\ncurl -X 'GET' \\\n 'http://127.0.0.1:8000/return_so2_mass_of_volcano?volcano_name=sangay&volcano_date=2021-08-15&method=floodfill' \\\n -H 'accept: application/json'\n```\n\nReturns the mass of the SO2 plumes originating from the selected volcano.\n\n#### /plot_so2_mass_of_volcano\n\n*Return plots (as a .png image)*\n\nCalling this endpoint from a web browser:\n\n```\ncurl -X 'GET' \\\n 'http://127.0.0.1:8000/plot_so2_mass_of_volcano?volcano_name=sangay&volcano_date=2021-08-15&method=dbscan' \\\n -H 'accept: */*'\n```\n\nReturns a .png image plot of the SO2 plumes originating from the selected volcano.\n"
},
{
"alpha_fraction": 0.6138175129890442,
"alphanum_fraction": 0.6182462573051453,
"avg_line_length": 35.03191375732422,
"blob_id": "50a351b8a01e2ac12eb15e70e35f4985b6eb2594",
"content_id": "3c802ef68154e506482601ceab77a091e812536b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3387,
"license_type": "no_license",
"max_line_length": 168,
"num_lines": 94,
"path": "/trajectory_algorithms.py",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "# -----------------------------------------------------------\n# trajectory_algorithms.py\n# Contains all functions to generate forward and reverse trajectories for the wind-based algorithms in segmentation_algorithms.py\n# -----------------------------------------------------------\n\nimport pysplit\nimport os\nfrom CONFIG_PARAMETERS import ASSETS_DIRECTORY\n\n\ndef get_forward_trajectory(current_volcano):\n \"\"\"Generate a forward trajectory. Takes in a Volcano class, returns the lat/lon points of the trajectory\"\"\"\n\n basename = 'volcano'\n\n working_dir = ASSETS_DIRECTORY + '/hysplit/working'\n storage_dir = ASSETS_DIRECTORY + '/trajectories/' + basename + \"/\"\n meteo_dir = ASSETS_DIRECTORY + '/gdas'\n\n years = [current_volcano.year]\n months = [current_volcano.month]\n day = current_volcano.day\n hours = [current_volcano.hour - 1]\n altitudes = [current_volcano.elevation]\n location = (current_volcano.latitude, current_volcano.longitude)\n\n runtime = 12\n\n pysplit.generate_bulktraj(basename, working_dir, storage_dir, meteo_dir,\n years, months, hours, altitudes, location, runtime,\n monthslice=slice(day - 1, day, 1), get_reverse=False,\n get_clipped=True)\n list_of_files = [storage_dir + name for name in os.listdir(storage_dir) if\n os.path.isfile(os.path.join(storage_dir, name))]\n\n latest_file = max(list_of_files, key=os.path.getmtime)\n hysplit_filename = str(os.path.basename(latest_file))\n\n # print(hysplit_filename)\n\n test_data = pysplit.load_hysplitfile(storage_dir + hysplit_filename)\n xtraj = []\n ytraj = []\n\n for current_point in test_data[1]:\n lat_coord = current_point[0]\n lon_coord = current_point[1]\n xtraj.append(lat_coord)\n ytraj.append(lon_coord)\n\n return xtraj, ytraj\n\n\ndef get_reverse_trajectory(latitude, longitude, year, month, day, hour, elevation):\n \"\"\"Generate a reverse trajectory. 
Takes in latitude, longitude, year, month, day, hour, and elevation to start from, returns the lat/lon points of the trajectory\"\"\"\n\n basename = 'volcano'\n\n working_dir = ASSETS_DIRECTORY + '/hysplit/working'\n storage_dir = ASSETS_DIRECTORY + '/trajectories/' + basename + \"/\"\n meteo_dir = ASSETS_DIRECTORY + '/gdas'\n\n years = [year]\n months = [month]\n day = day\n hours = [hour]\n altitudes = [elevation]\n location = (latitude, longitude)\n\n runtime = -12\n\n pysplit.generate_bulktraj(basename, working_dir, storage_dir, meteo_dir,\n years, months, hours, altitudes, location, runtime,\n monthslice=slice(day - 1, day, 1), get_reverse=False,\n get_clipped=True)\n list_of_files = [storage_dir + name for name in os.listdir(storage_dir) if\n os.path.isfile(os.path.join(storage_dir, name))]\n\n latest_file = max(list_of_files, key=os.path.getmtime)\n hysplit_filename = str(os.path.basename(latest_file))\n\n # print(hysplit_filename)\n\n test_data = pysplit.load_hysplitfile(storage_dir + hysplit_filename)\n xtraj = []\n ytraj = []\n\n for current_point in test_data[1]:\n lat_coord = current_point[0]\n lon_coord = current_point[1]\n xtraj.append(lat_coord)\n ytraj.append(lon_coord)\n\n return xtraj, ytraj\n"
},
{
"alpha_fraction": 0.4745098054409027,
"alphanum_fraction": 0.6901960968971252,
"avg_line_length": 16.066667556762695,
"blob_id": "14fa56d2e7796cf19ae9bdbeae03c98f9ddf93a7",
"content_id": "df78b9644a5abaecbc602f745228881b5e6d1d42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 255,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 15,
"path": "/requirements.txt",
"repo_name": "bazsimarkus/tropomi-emission-source",
"src_encoding": "UTF-8",
"text": "numpy~=1.23.4\nCartopy~=0.21.0\nmatplotlib~=3.5.3\nnetCDF4~=1.6.1\nopencv-python~=4.6.0.66\nstarlette~=0.20.4\nfastapi~=0.85.1\npandas~=1.5.1\ngeopandas~=0.11.1\nShapely~=1.8.5.post1\nscipy~=1.9.3\ngeopy~=2.2.0\nscikit-image~=0.19.3\nscikit-learn~=1.1.2\nPySPLIT~=0.3.6"
}
] | 9 |
cernyjan/vivofitLaps | https://github.com/cernyjan/vivofitLaps | a999cf0bc8999494882def933e5388c717c8daef | e2a95250dc6afb95aa6c04f6440ab96c483203ba | 1d8b558b669f1b6965014f432fe8c89a99940c19 | refs/heads/master | 2022-12-16T15:57:21.604475 | 2020-09-22T17:56:26 | 2020-09-22T17:56:26 | 297,105,771 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6946107745170593,
"alphanum_fraction": 0.703592836856842,
"avg_line_length": 21.266666412353516,
"blob_id": "acb1a5b89608e6e0dce1e15e511d236929119aab",
"content_id": "5e7a8c06133451cd0af9272c6f78cb4d22d08bb0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 334,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 15,
"path": "/README.md",
"repo_name": "cernyjan/vivofitLaps",
"src_encoding": "UTF-8",
        "text": "# vivofitLaps\nPython tool for generating laps of approximately 1 km from a \\*.tcx file exported from Garmin Connect.\n\n## Prerequisites\npython3\n\n## Demo\n### Usage\n```\nPS D:\\_work\\vivofitLaps> python .\\vivofitLaps.py .\\vivofit4_sample.tcx\n```\n### Garmin Connect - all in one lap\n![GC](gc.png)\n### vivofitLaps - per lap\n![vivofitLaps](vivofitLaps.png)\n"
},
{
"alpha_fraction": 0.5473793745040894,
"alphanum_fraction": 0.573585569858551,
"avg_line_length": 37.41999816894531,
"blob_id": "dedfc19a42f7f937f505b5cff0ab9623effd6b96",
"content_id": "d6fbcf9296eaccc7f6ceaa649817ec95c44482cc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5762,
"license_type": "permissive",
"max_line_length": 170,
"num_lines": 150,
"path": "/vivofitLaps.py",
"repo_name": "cernyjan/vivofitLaps",
"src_encoding": "UTF-8",
"text": "import sys\nimport xml.etree.ElementTree as ET\nimport datetime\nfrom datetime import datetime\nimport math\n\n\ndef into_float(value):\n try:\n return float(value)\n except ValueError:\n sys.exit(\"Unexpected input format, exiting...\")\n\ndef into_int(value):\n try:\n return int(value)\n except ValueError:\n sys.exit(\"Unexpected input format, exiting...\")\n\ndef get_date_format(value):\n date = value.split('T')[0].split('-')\n time = value.split('T')[1].split('Z')[0].split(':')\n head, _, _ = time[2].partition('.')\n time[2] = head\n return datetime(into_int(date[0]), into_int(date[1]), into_int(date[2]), into_int(time[0]), into_int(time[1]), into_int(time[2]))\n\ndef seconds_into_time(seconds): \n seconds = seconds % (24 * 3600) \n hour = seconds // 3600\n seconds %= 3600\n minutes = seconds // 60\n seconds %= 60 \n return \"{:.0f}:{:02.0f}:{:02.0f}\".format(hour, minutes, seconds)\n\ndef meters_into_kilometers(meters): \n return math.floor((meters / 1000.0) * 10 ** 3) / 10 ** 3\n\n\nclass Lap:\n def __init__(self, id, time, distance = 1.0):\n self.id = id\n self.time = time\n self.distance = distance\n\n\nclass Activity:\n def __init__(self, activity_type='Running'):\n self.id = ''\n self.activity_type = activity_type\n self.total_time_seconds = 0\n self.total_distance_meters = 0\n self.total_completed_laps = 0\n self.track = None\n self.laps = []\n self.device_name = ''\n self.device_version = ''\n\n def set_total_laps(self):\n self.total_completed_laps = int(self.total_distance_meters // 1000)\n\n def render_info(self):\n print(self.id)\n print(self.device_name)\n print(self.device_version)\n print(self.total_time_seconds) \n print(self.total_distance_meters)\n print(self.total_completed_laps)\n\n def get_lap_time(self, start_datetime, finish_datetime):\n return (finish_datetime - start_datetime)\n\n def get_rest_distance(self, distance_meters):\n return math.floor(((self.total_distance_meters / 1000.0) - distance_meters) * 10 ** 3) / 10 ** 3\n\n 
def get_rest_distance_approx(self, prev_laps_count):\n return math.floor(((self.total_distance_meters / 1000.0) - prev_laps_count) * 10 ** 3) / 10 ** 3\n\n def set_laps(self):\n lap_number = 1\n if self.total_distance_meters < 1000:\n self.laps.append(Lap(\"{}.\".format(lap_number), seconds_into_time(self.total_time_seconds), meters_into_kilometers(self.total_distance_meters)))\n else:\n if self.track == None:\n sys.exit(\"No valid input data (missing track in .tcx file), exiting...\")\n else:\n start_datetime = get_date_format(activity.id)\n distance_meters = 0.0\n #finish_distance = distance_meters\n trackpoint_number = 1\n trackpoint_count = len(self.track)\n for trackpoint in self.track:\n distance_meters = into_float(trackpoint[1].text)\n if distance_meters // 1000 >= lap_number:\n finish_datetime = get_date_format(trackpoint[0].text)\n #finish_distance = distance_meters / 1000.0\n self.laps.append(Lap(\"{}.\".format(lap_number), self.get_lap_time(start_datetime, finish_datetime)))\n start_datetime = finish_datetime\n lap_number = lap_number + 1\n elif trackpoint_number == trackpoint_count:\n finish_datetime = get_date_format(trackpoint[0].text)\n #self.laps.append(Lap(\"{}.\".format(lap_number), self.get_lap_time(start_datetime, finish_datetime), self.get_rest_distance(finish_distance)))\n self.laps.append(Lap(\"{}.\".format(lap_number), self.get_lap_time(start_datetime, finish_datetime), self.get_rest_distance_approx(lap_number - 1)))\n trackpoint_number = trackpoint_number + 1\n\n def render_into_table(self):\n print('\\n')\n laps = []\n times = []\n distances = []\n laps.append(\"{} v{} ({})\".format(self.device_name, self.device_version, self.activity_type))\n times.append(\"{} (total)\".format(seconds_into_time(self.total_time_seconds)))\n distances.append(\"{:.03f} (total)\".format(meters_into_kilometers(self.total_distance_meters)))\n for lap in self.laps:\n laps.append(lap.id)\n times.append(lap.time)\n 
distances.append(\"{:.03f}\".format(lap.distance)) \n\n titles = ['Laps', 'Times [h:m:s]', 'Distances [km]']\n data = [titles] + list(zip(laps, times, distances))\n\n for i, d in enumerate(data):\n line = '|'.join(str(x).ljust(30) for x in d)\n print(line)\n if i == 0:\n print('-' * len(line))\n \n \nif __name__ == '__main__':\n file_path = \"\"\n if len(sys.argv) > 1:\n file_path = sys.argv[1]\n else:\n file_path = input(\"tcx file path: \")\n\n tree = ET.parse(file_path)\n root = tree.getroot()\n\n activity = Activity(root[0][0].attrib['Sport'])\n activity.id = root[0][0][0].text\n activity.device_name = root[0][0][2][0].text\n activity.device_version = \"{}.{}.{}.{}\".format(root[0][0][2][3][0].text, root[0][0][2][3][1].text, root[0][0][2][3][2].text,root[0][0][2][3][3].text)\n activity.total_time_seconds = into_float(root[0][0][1][0].text)\n activity.total_distance_meters = into_float(root[0][0][1][1].text)\n activity.set_total_laps()\n activity.track = root[0][0][1][6]\n activity.set_laps()\n \n activity.render_into_table()\n\n input(\"\\n\\nPress enter to exit\\n\")"
}
] | 2 |
victoriasovereigne/smnlp-hw3 | https://github.com/victoriasovereigne/smnlp-hw3 | 824339834a864f2b63e54dd843afc25d0df1a137 | 0f1c701c7d5d3c91ae2de14cf7f8efb1080577d9 | d71f8f16aded421c4117e2ed2d124b1a353f0ca0 | refs/heads/master | 2020-03-24T21:47:08.745743 | 2018-07-31T18:10:53 | 2018-07-31T18:10:53 | 143,049,617 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.508086621761322,
"alphanum_fraction": 0.5182659029960632,
"avg_line_length": 41.34791946411133,
"blob_id": "e8bcee2444ffba7cb19f1aa724b87199641d392b",
"content_id": "ddfc6fcd3f2309cd9a3c6c51856d79e9fac76cac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 19353,
"license_type": "no_license",
"max_line_length": 173,
"num_lines": 457,
"path": "/models.py",
"repo_name": "victoriasovereigne/smnlp-hw3",
"src_encoding": "UTF-8",
"text": "# models.py\n\nimport tensorflow as tf\nimport numpy as np\nimport random\nfrom sentiment_data import *\n\n\n# Returns a new numpy array with the data from np_arr padded to be of length length. If length is less than the\n# length of the base array, truncates instead.\ndef pad_to_length(np_arr, length):\n result = np.zeros(length)\n result[0:np_arr.shape[0]] = np_arr\n return result\n\n\n# Train a feedforward neural network on the given training examples, using dev_exs for development and returning\n# predictions on the *blind* test_exs (all test_exs have label 0 as a dummy placeholder value). Returned predictions\n# should be SentimentExample objects with predicted labels and the same sentences as input (but these won't be\n# read for evaluation anyway)\n\n# train_exs is a list of SentimentExample objects\n# word_vectors is a list of WordEmbeddings objects\ndef train_ffnn(train_exs, dev_exs, test_exs, word_vectors):\n # 59 is the max sentence length in the corpus, so let's set this to 60\n seq_max_len = 60\n # To get you started off, we'll pad the training input to 60 words to make it a square matrix.\n train_mat = np.asarray([pad_to_length(np.array(ex.indexed_words), seq_max_len) for ex in train_exs])\n # Also store the sequence lengths -- this could be useful for training LSTMs\n train_seq_lens = np.array([len(ex.indexed_words) for ex in train_exs])\n # Labels\n train_labels_arr = np.array([ex.label for ex in train_exs])\n\n # development and test data\n dev_mat = np.asarray([pad_to_length(np.array(ex.indexed_words), seq_max_len) for ex in dev_exs])\n dev_seq_lens = np.array([len(ex.indexed_words) for ex in dev_exs])\n dev_labels_arr = np.array([ex.label for ex in dev_exs])\n\n test_mat = np.asarray([pad_to_length(np.array(ex.indexed_words), seq_max_len) for ex in test_exs])\n test_seq_lens = np.array([len(ex.indexed_words) for ex in test_exs])\n test_labels_arr = np.array([ex.label for ex in test_exs])\n\n word_indexer = word_vectors.word_indexer # 
word indexer\n vectors = word_vectors.vectors # the word embeddings \n\n # =========================================================================\n # Construct the training data\n # =========================================================================\n train_xs = []\n\n for sent_num in xrange(len(train_exs)):\n tmp = np.zeros(shape=(len(vectors[0])))\n\n for word_num in xrange(train_seq_lens[sent_num]):\n index = int(train_mat[sent_num, word_num])\n embeddings = vectors[index]\n tmp += embeddings\n\n tmp /= train_seq_lens[sent_num] # divide by sentence length like in slide\n train_xs.append(tmp)\n \n train_xs = np.array(train_xs)\n train_ys = train_labels_arr\n \n # =========================================================================\n # Construct the development data\n # =========================================================================\n dev_xs = []\n\n for sent_num in xrange(len(dev_exs)):\n tmp = np.zeros(shape=(len(vectors[0])))\n\n for word_num in xrange(dev_seq_lens[sent_num]):\n index = int(dev_mat[sent_num, word_num])\n embeddings = vectors[index]\n tmp += embeddings\n\n tmp /= dev_seq_lens[sent_num] # divide by sentence length like in slide\n dev_xs.append(tmp)\n \n dev_xs = np.array(dev_xs)\n dev_ys = dev_labels_arr\n\n # =========================================================================\n # Construct the test data\n # =========================================================================\n test_xs = []\n\n for sent_num in xrange(len(test_exs)):\n tmp = np.zeros(shape=(len(vectors[0])))\n\n for word_num in xrange(test_seq_lens[sent_num]):\n index = int(test_mat[sent_num, word_num])\n embeddings = vectors[index]\n tmp += embeddings\n\n tmp /= test_seq_lens[sent_num] # divide by sentence length like in slide\n test_xs.append(tmp)\n \n test_xs = np.array(test_xs)\n test_ys = test_labels_arr\n\n # =========================================================================\n # Start the FFNN\n # 
=========================================================================\n feat_vec_size = len(vectors[0])\n hidden_size = 10 \n num_classes = 2\n\n fx = tf.placeholder(tf.float32, feat_vec_size) # vector length 50\n # fx = tf.placeholder(tf.float32, [batch_size, feat_vec_size])\n\n # 10 x 50 matrix\n V = tf.get_variable(\"V\", [hidden_size, feat_vec_size], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n \n # 10 x 1 matrix\n z = tf.sigmoid(tf.tensordot(V, fx, 1))\n W1 = tf.get_variable(\"W1\", [hidden_size, hidden_size], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n # W2 = tf.get_variable(\"W2\", [hidden_size, hidden_size], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n W3 = tf.get_variable(\"W3\", [num_classes, hidden_size], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n\n # h1 = tf.nn.softmax(tf.tensordot(W1, z, 1))\n h1 = tf.nn.sigmoid(tf.tensordot(W1, z, 1))\n # keep_prob = tf.placeholder(tf.float32) \n # h2 = tf.nn.softmax(tf.tensordot(W2, h1, 1))\n probs = tf.nn.softmax(tf.tensordot(W3, h1, 1))\n # dropout = tf.nn.dropout(h2, keep_prob)\n # probs = tf.nn.softmax(tf.tensordot(W3, dropout, 1))\n\n # probs = tf.nn.softmax(tf.tensordot(W, z, 1))\n label = tf.placeholder(tf.int32, 1)\n\n one_best = tf.argmax(probs)\n label_onehot = tf.reshape(tf.one_hot(label, num_classes), shape=[num_classes])\n loss = tf.negative(tf.log(tf.tensordot(probs, label_onehot, 1)))\n\n decay_steps = 1000\n learning_rate_decay_factor = 0.99\n global_step = tf.contrib.framework.get_or_create_global_step()\n initial_learning_rate = 0.1\n\n lr = tf.train.exponential_decay(initial_learning_rate, \n global_step,\n decay_steps,\n learning_rate_decay_factor,\n staircase=True)\n \n # Tensorboard thingy\n tf.summary.scalar('learning_rate', lr)\n tf.summary.scalar('loss', loss)\n\n # opt = tf.train.AdamOptimizer(lr)\n opt = tf.train.GradientDescentOptimizer(lr)\n grads = opt.compute_gradients(loss)\n apply_gradient_op = 
opt.apply_gradients(grads, global_step=global_step)\n\n with tf.control_dependencies([apply_gradient_op]):\n train_op = tf.no_op(name='train')\n\n # Training and testing\n init = tf.global_variables_initializer()\n num_epochs = 200\n merged = tf.summary.merge_all()\n batch_num = 0\n\n\n # compute the loss\n with tf.Session() as sess:\n train_writer = tf.summary.FileWriter('logs/', sess.graph)\n tf.set_random_seed(0)\n sess.run(init)\n\n step_idx = 0\n\n for i in range(0, num_epochs):\n loss_this_iter = 0\n\n for ex_idx in xrange(0, len(train_seq_lens)):\n [_, loss_this_instance, summary, probs_this_instance, pred_this_instance, z_this_instance] = sess.run([apply_gradient_op, loss, merged, probs, one_best, z], \n feed_dict={fx:train_xs[ex_idx],\n label:np.array([train_ys[ex_idx]]),\n # keep_prob:0.75\n })\n\n train_writer.add_summary(summary, step_idx)\n step_idx += 1\n loss_this_iter += loss_this_instance\n print \"Loss for iteration \" + repr(i) + \": \" + repr(loss_this_iter)\n\n\n # =========================================================================\n # Evaluate on the train set\n # =========================================================================\n train_correct = 0\n predicted = []\n\n for ex_idx in xrange(0, len(train_seq_lens)):\n [probs_this_instance, pred_this_instance, z_this_instance] = sess.run([probs, one_best, z],\n feed_dict={fx: train_xs[ex_idx],\n # keep_prob:1.0\n })\n\n pred_sentiment = SentimentExample(train_exs[ex_idx].indexed_words, pred_this_instance)\n predicted.append(pred_sentiment)\n if (train_ys[ex_idx] == pred_this_instance):\n train_correct += 1\n\n print repr(train_correct) + \"/\" + repr(len(train_labels_arr)) + \" correct after training\"\n print float(train_correct) / len(train_labels_arr)\n\n # =========================================================================\n # Evaluate on the dev set\n # =========================================================================\n dev_correct = 0\n predicted_dev = []\n\n for 
ex_idx in xrange(0, len(dev_seq_lens)):\n [probs_this_instance, pred_this_instance, z_this_instance] = sess.run([probs, one_best, z],\n feed_dict={fx: dev_xs[ex_idx],\n # keep_prob:1.0\n })\n \n pred_sentiment = SentimentExample(dev_exs[ex_idx].indexed_words, pred_this_instance)\n predicted_dev.append(pred_sentiment)\n if (dev_ys[ex_idx] == pred_this_instance):\n dev_correct += 1\n\n print repr(dev_correct) + \"/\" + repr(len(dev_labels_arr)) + \" correct after development\"\n print float(dev_correct) / len(dev_labels_arr)\n\n # =========================================================================\n # Predict on the test set\n # =========================================================================\n predicted_test = []\n\n for ex_idx in xrange(0, len(test_seq_lens)):\n [probs_this_instance, pred_this_instance, z_this_instance] = sess.run([probs, one_best, z],\n feed_dict={fx: test_xs[ex_idx],\n # keep_prob:1.0\n })\n \n pred_sentiment = SentimentExample(test_exs[ex_idx].indexed_words, pred_this_instance)\n predicted_test.append(pred_sentiment)\n\n return predicted_test\n\n\n# Analogous to train_ffnn, but trains your fancier model.\n# Ref: https://www.oreilly.com/learning/perform-sentiment-analysis-with-lstms-using-tensorflow\ndef train_fancy(train_exs, dev_exs, test_exs, word_vectors):\n seq_max_len = 60\n train_mat = np.asarray([pad_to_length(np.array(ex.indexed_words), seq_max_len) for ex in train_exs])\n train_seq_lens = np.array([len(ex.indexed_words) for ex in train_exs])\n train_labels_arr = np.array([ex.label for ex in train_exs])\n\n word_indexer = word_vectors.word_indexer # word indexer\n vectors = word_vectors.vectors # the word embeddings \n\n # =========================================================================\n # Construct the training data\n # ========================================================================= \n train_xs = np.array(train_mat)\n train_ys = [] # train_labels_arr\n\n for element in train_labels_arr:\n ans = [0, 
0]\n ans[element] = 1\n train_ys.append(ans)\n\n train_ys = np.array(train_ys)\n\n # =========================================================================\n # Construct the dev data\n # =========================================================================\n dev_mat = np.asarray([pad_to_length(np.array(ex.indexed_words), seq_max_len) for ex in dev_exs])\n dev_seq_lens = np.array([len(ex.indexed_words) for ex in dev_exs])\n dev_labels_arr = np.array([ex.label for ex in dev_exs])\n\n dev_xs = np.array(dev_mat)\n dev_ys = [] # dev_labels_arr\n\n for element in dev_labels_arr:\n ans = [0, 0]\n ans[element] = 1\n dev_ys.append(ans)\n\n dev_ys = np.array(dev_ys)\n\n # =========================================================================\n # Construct the test data\n # =========================================================================\n test_mat = np.asarray([pad_to_length(np.array(ex.indexed_words), seq_max_len) for ex in test_exs])\n test_seq_lens = np.array([len(ex.indexed_words) for ex in test_exs])\n test_labels_arr = np.array([ex.label for ex in test_exs])\n\n test_xs = np.array(test_mat)\n test_ys = [] # test_labels_arr\n\n for element in test_labels_arr:\n ans = [0, 0]\n ans[element] = 1\n test_ys.append(ans)\n\n test_ys = np.array(test_ys)\n\n # =========================================================================\n # Create batches\n # =========================================================================\n batch_size = 10\n num_classes = 2\n lstm_size = 64 #128 #64\n feat_vec_size = len(vectors[0])\n\n tf.reset_default_graph()\n labels = tf.placeholder(tf.float32, [None, num_classes])\n \n input_data = tf.placeholder(tf.int32, [None, seq_max_len])\n data = tf.nn.embedding_lookup(vectors.astype('float32'), input_data)\n\n mode = 'basic' #bidirectional\n\n # basic LSTM \n if mode == 'basic':\n print \"basic\"\n lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n # lstm_cell = tf.contrib.rnn.DropoutWrapper(cell=lstm_cell, 
output_keep_prob=0.75)\n \n value, _ = tf.nn.dynamic_rnn(lstm_cell, data, dtype=tf.float32)\n\n # weight = tf.Variable(tf.truncated_normal([lstm_size, num_classes]))\n weight = tf.get_variable(\"W\", [lstm_size, num_classes], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))\n # bias = tf.get_variable(\"b\", [num_classes], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n \n value = tf.transpose(value, [1,0,2])\n last = tf.gather(value, int(value.get_shape()[0]) - 1)\n prediction = (tf.tensordot(last, weight, 1) + bias)\n\n # https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/bidirectional_rnn.py\n elif mode == 'bidirectional':\n print \"bidirectional\"\n lstm_fw_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n # lstm_fw_cell = tf.contrib.rnn.DropoutWrapper(cell=lstm_fw_cell, output_keep_prob=0.75)\n\n lstm_bw_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n # lstm_bw_cell = tf.contrib.rnn.DropoutWrapper(cell=lstm_bw_cell, output_keep_prob=0.75)\n \n data = tf.unstack(data, seq_max_len, 1)\n value, _, _ = tf.nn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, data, dtype=tf.float32)\n\n weight = tf.get_variable(\"W\", [2*lstm_size, num_classes], initializer=tf.contrib.layers.xavier_initializer(seed=0))\n bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))\n\n prediction = tf.matmul(value[-1], weight) + bias \n\n correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n # one_best = tf.argmax(prediction)\n\n decay_steps = 1000\n learning_rate_decay_factor = 0.99\n global_step = tf.contrib.framework.get_or_create_global_step()\n initial_learning_rate = 0.001\n\n lr = tf.train.exponential_decay(initial_learning_rate, \n global_step,\n decay_steps,\n learning_rate_decay_factor,\n staircase=True)\n\n loss = 
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=labels))\n optimizer = tf.train.AdamOptimizer(lr)\n # optimizer = tf.train.GradientDescentOptimizer(lr)\n grads = optimizer.compute_gradients(loss)\n apply_gradient_op = optimizer.apply_gradients(grads, global_step=global_step)\n\n with tf.control_dependencies([apply_gradient_op]):\n train_op = tf.no_op(name='train')\n \n init = tf.global_variables_initializer()\n num_epochs = 100\n\n with tf.Session() as sess:\n sess.run(init)\n\n for i in range(num_epochs):\n loss_this_iter = 0\n batch_num = 0\n\n while batch_num < len(train_seq_lens):\n train_xs_batch = train_xs[batch_num:batch_num+batch_size]\n train_ys_batch = train_ys[batch_num:batch_num+batch_size]\n\n [_, loss_this_instance] = sess.run([train_op, loss],\n feed_dict={input_data:train_xs_batch, \n labels:train_ys_batch})\n\n loss_this_iter += loss_this_instance\n batch_num += batch_size\n\n print \"Loss for iteration \" + repr(i) + \": \" + repr(loss_this_iter)\n\n if loss_this_iter < 0.1:\n break\n\n # evaluation\n correct = sess.run(accuracy, {input_data: train_xs, labels:train_ys})\n print \"train --> correct:\", correct\n\n correct = sess.run(accuracy, {input_data: dev_xs, labels:dev_ys})\n print \"dev --> correct:\", correct\n\n # dev evaluation\n batch_num = 0\n index = 0\n predicted_dev = []\n total_correct = 0\n\n while batch_num < len(dev_seq_lens):\n dev_xs_batch = dev_xs[batch_num:batch_num+batch_size]\n pred = sess.run(prediction, feed_dict={input_data:dev_xs})\n\n for values in pred:\n if index < len(dev_seq_lens):\n true_y = dev_labels_arr[index]\n pred_y = np.argmax(values)\n pred_sentiment = SentimentExample(dev_exs[index].indexed_words, pred_y)\n predicted_dev.append(pred_sentiment)\n\n if true_y == pred_y:\n total_correct += 1\n index += 1\n batch_num += batch_size\n\n print total_correct / float(len(dev_seq_lens))\n \n # test evaluation\n batch_num = 0\n index = 0\n predicted_test = []\n total_correct = 0\n\n 
while batch_num < len(test_seq_lens):\n test_xs_batch = test_xs[batch_num:batch_num+batch_size]\n pred = sess.run(prediction, feed_dict={input_data:test_xs})\n\n for values in pred:\n if index < len(test_seq_lens):\n pred_y = np.argmax(values)\n pred_sentiment = SentimentExample(test_exs[index].indexed_words, pred_y)\n predicted_test.append(pred_sentiment)\n index += 1\n batch_num += batch_size\n\n return predicted_test\n\n # python /v/filer5b/v20q001/vlestari/.local/lib/python2.7/site-packages/tensorboard/main.py\n"
}
] | 1 |
hemalathak10/Cloud_Computing | https://github.com/hemalathak10/Cloud_Computing | 2820815f2206056446a20d989d5d8dd72b0bb681 | ef6c1135d4a00d0807d7764eec4a7ba7260a1bd8 | 44e7a29e33d109015d8332595e11e35f277072f2 | refs/heads/master | 2021-09-14T04:03:35.617364 | 2018-05-08T04:20:03 | 2018-05-08T04:20:03 | 100,320,516 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5048220157623291,
"alphanum_fraction": 0.50920569896698,
"avg_line_length": 35.33121109008789,
"blob_id": "8876dbc910736a15b8486c027d8e968f83cdbc4f",
"content_id": "e3118b2e5897b6cdfc66f296b0ce2c3e77750059",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5703,
"license_type": "no_license",
"max_line_length": 160,
"num_lines": 157,
"path": "/1-IBMBluemix/server.py",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "#################################################################################\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tFilename\t:\tserver.py\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tAuthor\t\t:\tHemalatha Krishnan\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tDescription\t:\tThe scripts connects to the IBM Bluemix for upload and\t\t#\n#\t\t\t\t\tdownload actions with encryption and decryption\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tCourse No\t:\t6331\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tLab No\t\t:\t1\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\t\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#################################################################################\n\n#importing the required modules from the python library\nimport swiftclient\nimport keystoneclient\nimport os\nfrom cryptography.fernet import Fernet\nimport mimetypes\n\n#key for encryption and decryption\nkey = 'mPugypH1sWC82zJQ3_He9DmCbheSd29zLDMoUhAQPZo='\nencryptKey = Fernet(key)\n\n#defining the variables required for the connection to the IBM Bluemix\npassword='password'\nauth_url='https://identity.open.softlayer.com/v3/'\nauth_version=3\nprojId='projid'\nuserId='userid'\nregion='region'\ncontainer_name='ContainerForFiles'\n\n#Connecting to the IBM Bluemix server\nprint 'Connecting to the IBM Bluemix server......'\nconn_obj = swiftclient.Connection(key=password, authurl=auth_url, auth_version='3', os_options={\"project_id\": projId, \"user_id\": userId, \"region_name\": region})\nif not conn_obj:\n\tprint 'Connection to the IBM Bluemix failed!!'\nprint 'Connection to IBM Bluemix......'\n\n#creating a container on the IBM Bluemix server to store files\nconn_obj.put_container(container_name)\nprint 'Container created on the 
cloud'\n\n#################################################################################\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tFunctionname\t:\tupload\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tAuthor\t\t\t:\tHemalatha Krishnan\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tDescription\t\t:\tThe function uploads a folder to the cloud after\t\t# \n#\t\t\t\t\t\tencryption \t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#################################################################################\n\t\ndef upload(path):\n\t#rootDir = 'F:\\Python\\Flask_Examples\\IBMBluemix-Assignment1\\upload_folder'\n\tfor dirName, subdirList, fileList in os.walk(path):\n\t\tfor fname in fileList:\n\t\t\tfile_location = (dirName+ \"\\\\\" + fname)\n\t\t\twith open(file_location, 'rb') as example_file:\t\t\t\t\n\t\t\t\tconn_obj.put_object(container_name,fname,contents=encryptKey.encrypt(example_file.read()),content_type='jpg')\n\t\t\t\t#conn_obj.put_object(container_name,fname,contents=example_file.read(),content_type='jpg')\n\t\t\t\tprint fname + ' file uploaded successfully!!'\n\t\t\t\texample_file.close()\n\tprint '\\n';\n\n#################################################################################\n#\t\t\t\thema\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tFunctionname\t:\tupload_file\t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tAuthor\t\t\t:\tHemalatha Krishnan\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tDescription\t\t:\tThe function uploads a file to the cloud after\t\t\t# \n#\t\t\t\t\t\tencryption \t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#################################################################################\n\t\t\t\t\ndef upload_file(path,fname):\n\t#rootDir = 'F:\\Python\\Flask_Examples\\IBMBluemix-Assignment1\\upload_folder'\n\tfor dirName, 
subdirList, fileList in os.walk(path):\n\t\tfile_location = (dirName+ \"\\\\\" + fname)\n\t\tif os.path.isfile(file_location):\n\t\t\twith open(file_location, 'rb') as example_file:\t\t\t\t\n\t\t\t\tconn_obj.put_object(container_name,fname,contents=encryptKey.encrypt(example_file.read()),content_type='text')\n\t\t\t\t#conn_obj.put_object(container_name,fname,contents=example_file.read(),content_type='jpg')\n\t\t\t\tprint fname + ' file uploaded successfully!!'\n\t\t\t\texample_file.close()\n\t\telse:\n\t\t\tprint fname + ' not present in the path ' + path + '\\n'\n\t\t\t\n#################################################################################\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tFunctionname\t:\tdownload\t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tAuthor\t\t\t:\tHemalatha Krishnan\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tDescription\t\t:\tThe function downloads a file to the cloud\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#################################################################################\n\ndef download(filename):\n\tfile_list = [];\n\tfor container in conn_obj.get_account()[1]:\n\t\tfor data in conn_obj.get_container(container['name'])[1]:\n\t\t\tfile_list.append(data['name'])\n\tprint file_list;\n\tif filename in file_list:\t\t\n\t\tfile_obj = conn_obj.get_object(container_name, filename)\n\t\tnew_file_name = 'C:\\Users\\H.E.M.A\\Downloads' + \"\\\\\" + filename\n\t\twith open(new_file_name,'wb') as my_copy_file:\n\t\t\tmy_copy_file.write(encryptKey.decrypt(file_obj[1]))\n\t\t\t#my_copy_file.write(file_obj[1])\n\t\t\tprint filename + ' file downloaded successfully!!\\n'\n\telse:\n\t\tprint filename + ' not present on the 
cloud\\n'\n\t\t\n#################################################################################\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tFunctionname\t:\tmain\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tAuthor\t\t\t:\tHemalatha Krishnan\t\t\t\t\t\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#\tDescription\t\t:\tThe function takes input from the user\t\t\t\t\t#\n#\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t#\n#################################################################################\n\t\t\ndef main():\n\twhile(1):\n\t\tprint 'Select among the below options:'\n\t\tprint '1: Upload'\n\t\tprint '2: Upload file'\n\t\tprint '3: Download'\n\t\tprint '4: Sync'\n\t\tprint '5: Exit'\n\t\toption = input('Enter your choice: ')\n\t\tif option == 1:\n\t\t\tpath = raw_input('Enter the file path: ')\n\t\t\tupload(path);\n\t\telif option == 2:\n\t\t\tpath = raw_input('Enter the file path: ')\n\t\t\tname = raw_input('Enter the file name: ')\n\t\t\tupload_file(path,name)\n\t\telif option == 3:\n\t\t\tfilename = raw_input('Enter the file name: ')\n\t\t\tdownload(filename)\n\t\telif option == 4:\n\t\t\tpath = raw_input('Enter the file path: ')\n\t\t\tupload(path);\n\t\telif option == 5:\n\t\t\treturn\n\t\t\nif __name__ == '__main__':\n\tmain()"
},
{
"alpha_fraction": 0.49512365460395813,
"alphanum_fraction": 0.5017415285110474,
"avg_line_length": 41.22793960571289,
"blob_id": "1c0b42dc962400889357ce0b68486d74e3d309fa",
"content_id": "9853db52ec2ae4cc2d9f7df813dd6b89f4478523",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5742,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 136,
"path": "/2-AWS-S3/flaskapp.py",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "from flask import Flask, render_template, request, jsonify, make_response\nimport boto\nimport boto.s3.connection\nimport boto.rds\nfrom boto.s3.key import Key\nimport datetime\nimport hashlib\nimport base64\n\napp = Flask(__name__)\n\naccess_key = 'accesskey'\nsecret_access_key = 'secretaccesskey'\nbucket_name = 'my_bucket_list'\n\ns3 = boto.connect_s3(access_key,secret_access_key)\nbucket = s3.create_bucket(bucket_name, location=boto.s3.connection.Location.DEFAULT)\n\[email protected]('/')\ndef login():\n filename = 'user.txt'\n key = bucket.new_key(filename)\n key.key = filename\n key.set_contents_from_string(base64.b64encode('vijay:vijay@123\\nuser:user@123'))\n return render_template('login.html')\n\[email protected]('/send', methods=['GET','POST'])\ndef send():\n if request.method == 'POST':\n username = request.form['username'];\n password = request.form['password'];\n for mylist in bucket.list():\n if 'user.txt' == mylist.key :\n mylist.get_contents_to_filename('/home/ubuntu/flaskapp/downloaded_file')\n with open('/home/ubuntu/flaskapp/downloaded_file','r') as fp:\n downloaded_contents = base64.b64decode(fp.read())\n print 'filecontents: ',downloaded_contents\n fp.close()\n data1 = downloaded_contents.split('\\n')\n print data1\n for data in data1:\n details = data.split(':')\n print details,username,password\n if username == details[0] and password == details[1]:\n print 'login successful'\n return render_template('index.html', user=username)\n return render_template('login.html', msg='failure')\n\n\[email protected]('/home', methods=['GET','POST'])\ndef home():\n return render_template('index.html')\n\[email protected]('/upload', methods=['GET','POST'])\ndef upload():\n return render_template('upload.html')\n\[email protected]('/uploadToCloud', methods=['GET','POST'])\ndef uploadToCloud():\n if request.method == 'POST':\n fname = request.files['file']\n contents = fname.read()\n encoded_contents = base64.b64encode(contents);\n key = 
bucket.new_key(fname.filename)\n    key.key = fname.filename\n    key.set_contents_from_string(encoded_contents)\n    b_list = bucket.list()\n    files = getfilelist()\n    if fname.filename in files:\n        print 'file uploaded successfully'\n        return render_template('upload.html',msg='success')\n    else:\n        print 'file upload failed'\n        return render_template('upload.html',msg='failed')\n    return render_template('upload.html',msg='')\n\[email protected]('/download', methods=['GET','POST'])\ndef download():\n    filelist = getfilelist()\n    return render_template('download.html',files=filelist)\n\[email protected]('/downloadFromCloud', methods=['GET','POST'])\ndef downloadFromCloud():\n    if request.method == 'POST':\n        filename = request.form['filename']\n        for mylist in bucket.list():\n            if filename == mylist.key:\n                print 'Name: ',mylist.key\n                mylist.get_contents_to_filename('/home/ubuntu/flaskapp/downloaded_file')\n                with open('/home/ubuntu/flaskapp/downloaded_file','r') as fp:\n                    downloaded_contents = base64.b64decode(fp.read())\n                fp.close()\n                response = make_response(downloaded_contents)\n                response.headers['Content-disposition'] = \"attachment; filename=%s\"%filename\n                return response\n    return render_template('download.html',files=getfilelist(), msg='')\n\[email protected]('/remoteList', methods=['GET','POST'])\ndef remoteList():\n    files = getfilelist()\n    return render_template('remoteList.html',filelist=files)\n\[email protected]('/delete', methods=['GET','POST'])\ndef delete():\n    files = getfilelist()\n    return render_template('delete.html',filelist=files)\n\[email protected]('/deleteFromCloud', methods=['GET','POST'])\ndef deleteFromCloud():\n    if request.method == 'POST':\n        fname = request.form['filename']\n        print fname\n        for mylist in bucket.list():\n            if fname == mylist.key:\n                bucket.delete_key(mylist.key)\n                break\n        flag = 0\n        for mylist in bucket.list():\n            if fname == mylist.key:\n                flag = 1\n        if flag == 0:\n            print 'File deleted successfully'\n        else:\n            print 'File deletion failed'\n        return 
render_template('delete.html',msg='success')\n\ndef getfilelist():\n files = []\n for mylist in bucket.list():\n if 'user.txt' != mylist.key:\n files.append(mylist.key)\n print files\n return files\n\nif __name__ == '__main__':\n app.run()"
},
{
"alpha_fraction": 0.6596176028251648,
"alphanum_fraction": 0.6809096336364746,
"avg_line_length": 38.68390655517578,
"blob_id": "efaa4f9fad57012b034c574f67c6d6187e6b6628",
"content_id": "85624ceaa9c305d0915c318f18a25e5914ab2c97",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6904,
"license_type": "no_license",
"max_line_length": 164,
"num_lines": 174,
"path": "/6-AZURE/flaskapp.py",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "from flask import Flask, render_template, request, make_response\nfrom azure.storage.blob import BlockBlobService\nfrom azure.storage.blob import ContentSettings\nfrom azure.storage.blob import PublicAccess\nfrom pymongo import MongoClient\nimport pyodbc\nimport MySQLdb\nimport time\nimport pymongo\nimport base64\nimport os\n\napp = Flask(__name__)\n\naccountname = 'name'\naccountkey = 'key'\ncontainername = 'container1'\nblobservice = BlockBlobService(account_name=accountname,account_key=accountkey)\nif blobservice:\n\tprint 'success'\nblobservice.create_container(containername,public_access=PublicAccess.Container)\n\nhost = 'hostname'\nuser = 'user'\npassword = 'password'\ndbname = 'mydb'\ndb = MySQLdb.connect(host=host,user=user,passwd=password,db=dbname)\ncur = db.cursor()\n\nclient = MongoClient('mongoclient')\ndb = client['mongodb-1']\ndata1 = db.data\n\[email protected]('/')\ndef hello_world():\n\treturn render_template('index.html')\n\t\t\[email protected]('/uploadcsv', methods=['GET','POST'])\ndef uploadcsv():\n\tcur.execute('drop table mytable1;')\n\tcur.execute('drop table mytable;')\n\tcur.execute('create table mytable(foodname varchar(35),foodtype varchar(15),col1 integer,col2 integer,col3 integer,qcount integer, primary key (foodname));')\n\tcur.execute('commit;')\n\tcur.execute('create table mytable1(ingredient varchar(15),qcount integer,foodname varchar(35),foreign key(foodname) references mytable(foodname));')\n\tcur.execute('commit;')\n\tpath = '/home/user/flaskapp/csvfiles/'\n\tfor file in os.listdir(path):\n\t\tfilename = path + file\n\t\tprint filename\n\t\tfp = open(filename,'r')\n\t\tcontents = fp.readlines()\n\t\tfp.close()\n\t\tfoodtype = contents[2].split(',')\n\t\tfirst = contents[0].split(',')\n\t\tprint 'insert into mytable values(\"' + file.split('.')[0] + '\",\"' + foodtype[0] + '\",' + first[0] + ',' + first[1] + ',' + first[2].split('\\r')[0] + ',0);'\n\t\tcur.execute('insert into mytable values(\"' + file.split('.')[0] + 
'\",\"' + foodtype[0] + '\",' + first[0] + ',' + first[1] + ',' + first[2].split('\\r')[0] + ',0);')\n\t\tcur.execute('commit;')\t\t\n\t\tdata = contents[1].split(',')\n\t\tprint 'insert into mytable1 values(\"' + data[0] + ',0,\"' + file.split('.')[0] + '\");'\n\t\tfor i in range(len(data)):\n\t\t\tcur.execute('insert into mytable1 values(\"' + data[i].split('\\r')[0] + '\",0,\"' + file.split('.')[0] + '\");')\n\t\t\tcur.execute('commit;')\n\treturn render_template('index.html',msg='success')\n\t\[email protected]('/query1', methods=['GET','POST'])\ndef query1():\n\tcur.execute('select count(*) from mytable;')\n\tresult = cur.fetchall()\n\trows = cur.rowcount\n\tone = result[0]\n\tcols = len(one)\n\tprint one\n\tprint 'rowcount: ',rows\n\tprint 'colcount: ',cols\n\treturn render_template('index.html',res1=result,row=rows,col=cols,header='COUNT')\n\t\[email protected]('/query2',methods=['GET','POST'])\ndef query2():\n\tparameter1 = request.form['parameter1']\n\t#print 'select count(*),' + parameter1 + ',foodname from mytable1 group by ' + parameter1 + ' order by count(*) desc;'\n\t#cur.execute('select count(*),' + parameter1 + ',foodname from mytable1 group by ' + parameter1 + ' order by count(*) desc;')\n\tprint 'select count(*),ingredient,foodname from mytable1 where ingredient=\"' + parameter1 +'\";'\n\tcur.execute('select foodname from mytable1 where ingredient=\"' + parameter1 +'\";')\n\tresult = cur.fetchall()\n\tcount = []\n\tfield = []\n\tfood = []\n\tfor i in range(cur.rowcount):\n\t\tdocument = data1.find_one({\"name\": result[i][0]+'.jpg'})\n\t\tprint result[i][0]+'.jpg'\n\t\tif document:\n\t\t\tfood.append(document[\"content\"])\n\t\telse:\n\t\t\tprint 'inside else'\n\t\t\tfood.append(\"None\")\n\trow=cur.rowcount\n\tcur.execute('select * from mytable where foodname=\"'+ parameter1 +'\" or foodtype=\"' + parameter1 +'\";')\n\tr1 = cur.fetchall()\n\tif cur.rowcount > 0:\n\t\tprint 'update mytable set qcount=qcount+1 where foodname=\"' + parameter1 + '\" 
or foodtype=\"' + parameter1 + '\";'\n\t\tcur.execute('update mytable set qcount=qcount+1 where foodname=\"' + parameter1 + '\" or foodtype=\"' + parameter1 + '\";')\n\t\tcur.execute('commit;')\n\tcur.execute('select * from mytable1 where ingredient=\"'+ parameter1 + '\";')\n\tr2 = cur.fetchall()\n\tif cur.rowcount > 0:\t\t\n\t\tprint 'update mytable1 set qcount=qcount+1 where ingredient=\"' + parameter1 + '\";'\n\t\tcur.execute('update mytable1 set qcount=qcount+1 where ingredient=\"' + parameter1 + '\";')\n\t\tcur.execute('commit;')\t\n\treturn render_template('index.html',msg2=row,imagecontent=food)\n\t\[email protected]('/query3',methods=['GET','POST'])\ndef query3():\n\tparameter1 = request.form['parameter1']\n\tparameter2 = request.form['parameter2']\n\tprint parameter1\t\n\tcur.execute('select * from mytable where foodname=\"'+ parameter1 +'\" or foodtype=\"' + parameter1 +'\";')\n\tr1 = cur.fetchall()\n\tif cur.rowcount > 0:\n\t\tprint 'update mytable set qcount=qcount+1 where foodname=\"' + parameter1 + '\" or foodtype=\"' + parameter1 + '\";'\n\t\tcur.execute('update mytable set qcount=qcount+1 where foodname=\"' + parameter1 + '\" or foodtype=\"' + parameter1 + '\";')\n\t\tcur.execute('commit;')\n\tcur.execute('select * from mytable1 where ingredient=\"'+ parameter1 + '\";')\n\tr2 = cur.fetchall()\n\tif cur.rowcount > 0:\t\t\n\t\tprint 'update mytable1 set qcount=qcount+1 where ingredient=\"' + parameter1 + '\";'\n\t\tcur.execute('update mytable1 set qcount=qcount+1 where ingredient=\"' + parameter1 + '\";')\n\t\tcur.execute('commit;')\n\treturn render_template('index.html')\n\[email protected]('/query4',methods=['GET','POST'])\ndef query4():\n\tpath = '/home/user/flaskapp/images/'\n\tfiles = os.listdir(path)\n\timages = []\n\tfor file in files:\n\t\tdocument = data1.find_one({\"name\": file})\n\t\tif document:\n\t\t\timages.append(document['content'])\n\treturn render_template('index.html',display=images,row=len(images),files=files)\n\t\[email 
protected]('/query5',methods=['GET','POST'])\ndef query5():\n\tparameter1 = request.form['parameter1']\n\trange = parameter1.split('-')\n\tprint 'select foodname from mytable where col1>=' + range[0] + ' and col1<=' + range[1] + ';'\n\tcur.execute('select foodname from mytable where col1>=' + range[0] + ' and col1<=' + range[1] + ';')\n\tresult = cur.fetchall()\n\tprint result\n\timages = []\n\tnames = []\n\tfor file in result:\t\t\n\t\tdocument = data1.find_one({\"name\": file[0]+'.jpg'})\n\t\tif document:\n\t\t\timages.append(document['content'])\n\t\t\tnames.append(file[0])\n\treturn render_template('index.html',image1=images,files=names,rows=cur.rowcount)\n\t\[email protected]('/query6',methods=['GET','POST'])\ndef query6():\n\tprint 'select distinct qcount,ingredient from mytable1 order by qcount desc;'\n\tcur.execute('select distinct qcount,ingredient from mytable1 order by qcount desc;')\n\tresult = cur.fetchall()\n\treturn render_template('index.html',res6=result,rows=5)\n\t\[email protected]('/query7',methods=['GET','POST'])\ndef query7():\n\tparameter1 = request.form['parameter1']\n\tprint 'select foodname from mytable where col2<' + parameter1 + ';'\n\tcur.execute('select foodname from mytable where col2<' + parameter1 + ';')\n\tresult = cur.fetchall()\n\treturn render_template('index.html',res7=result,rows=cur.rowcount)\n\t\nif __name__ == '__main__':\n app.run()"
},
{
"alpha_fraction": 0.3606295883655548,
"alphanum_fraction": 0.37557119131088257,
"avg_line_length": 35.08900451660156,
"blob_id": "cd564735254618fa6dd1f84702b9a8e38f92666e",
"content_id": "ad42443eebe24d537990f7d4d5019fb55240c878",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 13787,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 382,
"path": "/3-AWS-SQL/templates/index.html",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "<html>\n\t<head>\n\t\t<!-- Latest compiled and minified CSS -->\n\t\t<link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\" integrity=\"sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u\" crossorigin=\"anonymous\">\n\t\t<style>\n\t\t\tbody {\n\t\t\t\tbackground-color: rgb(232, 150, 241);\n\t\t\t}\n\n\t\t\ttable, th, td {\n \t\t\t\tborder: 1px solid black;\n \t\t\t\tborder-collapse: collapse;\n\t\t\t}\n\t\t\tth, td {\n \t\t\t \tpadding: 15px;\n\t\t\t}\n\t\t</style>\n\t</head>\n\t<body>\n\t\t<h2>Name: Hemalatha Krishnan </h2>\n\t\t<h2>UTA ID: 1001430934</h2>\n\t\t<img src='https://s3.amazonaws.com/bucket_list/upload.jpg' height=250 width=150>\n\t\t<br></br>\n\t\t<form method=\"POST\" action=\"/uploadcsv\">\n\t\t\t<div class=\"form-group\">\n\t\t\t\t<h4>Please click to upload CSV file to AWS ... !!!\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"UPLOAD CSV\">\n\t\t\t\t</h4>\n\t\t\t\t{% if msg: %}\n\t\t\t\t\t<h4>CSV File loaded successfully to the database ... !!!</h4>\n\t\t\t\t{% endif %}\n\t\t\t</div>\n\t\t</form>\n\t\t<form method=\"POST\" action=\"/query7\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"UPLOAD DATA\">\n\t\t\t\t</h4>\n\t\t\t\t{% if msg1 == 'success': %}\n\t\t\t\t\t<h4>Data File loaded successfully to AWS ... !!!</h4>\n\t\t\t\t{% elif msg1 == 'failure': %}\n\t\t\t\t\t<h4>Data File loading failed ... 
!!!</h4>\n\t\t\t\t{% endif %}\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res7: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res7[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query1\">\n\t\t\t<div class=\"form-group\">\n\t\t\t\t<br></br>\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"SHOW TOTAL NO OF ENTRIES\">\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res1: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n\t\t\t\t</tr>\n\t\t\t\t<tr>\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <td>{{ res1[i][j] }}</td>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query2\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter age: \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter1\">  \n\t\t\t\t<h4>Enter sex: \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter2\">  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE QUERY 2\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res2: %}\n\t\t\t\t{% if c: %}\n\t\t\t\t\t<tr>\n\t\t\t\t\t\t<th>Count</th>\n\t\t\t\t\t</tr>\n\t\t\t\t\t<tr>\n\t\t\t\t\t\t<td>{{ c[0][0] }}</td>\n\t\t\t\t\t</tr>\n\t\t\t\t{% endif %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n\t\t\t\t</tr>\n {% for i in range(0,row): %}\n\t\t\t\t\t<tr>\n {% for j in range(0,col) %}\n <td>{{ res2[i][j] }}</td>\n {% endfor %}\n \t</tr>\n {% endfor %}\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query3\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter Column Name : \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" 
name=\"parameter\">  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE RANDOM QUERY 500 TIMES MEMCACHE\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if time: %}\n <tr>\n\t\t\t\t\t<th>Execution Time for 500 random queries in memcache</th>\n\t\t\t\t</tr>\n\t\t\t\t<tr>\n \t <td>{{ time }} secs</td>\n \t</tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query4\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter Column Name:  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE RANDOM QUERIES 500 TIMES SQL\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if time1: %}\n <tr>\n\t\t\t\t\t<th>Execution Time for 500 random queries in SQL</th>\n\t\t\t\t</tr>\n\t\t\t\t<tr>\n \t <td>{{ time1 }} secs</td>\n \t</tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query5\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter Column Name:  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter1\">  \t\t\n\t\t\t\t<h4>Enter Parameter Value:  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter2\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE SAME QUERY 500 TIMES MEMCACHE\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if time2: %}\n <tr>\n\t\t\t\t\t<th>Execution Time for 500 queries in memcache</th>\n\t\t\t\t</tr>\n\t\t\t\t<tr>\n \t <td>{{ time2 }} secs</td>\n \t</tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query6\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter Column Name:  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter1\">  \t\t\n\t\t\t\t<h4>Enter Parameter Value:  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter2\">  \t\t\n\t\t\t\t<input 
class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE SAME QUERY 500 TIMES SQL\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if time3: %}\n <tr>\n\t\t\t\t\t<th>Execution Time for 500 queries in SQL</th>\n\t\t\t\t</tr>\n\t\t\t\t<tr>\n \t <td>{{ time3 }} secs</td>\n \t</tr>\n {% endif %}\n </table>\n\t\t<!--form method=\"POST\" action=\"/query8\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE QUERY 8\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res8: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res8[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query9\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE QUERY 9\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res9: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res9[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query10\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE 
QUERY 10\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res10: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res10[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query11\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE QUERY 11\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res11: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res11[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query12\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE QUERY 12\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res12: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res12[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query13\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" 
value=\"EXECUTE QUERY 13\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res13: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res13[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query14\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE QUERY 14\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res14: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res14[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table>\n\t\t<form method=\"POST\" action=\"/query15\">\n\t\t\t<div class=\"form-group\">\t\t\t\t\n\t\t\t\t<br></br>\n\t\t\t\t<h4>Enter the parameter name to be fetched from the database  \n\t\t\t\t<input class=\"btn btn-primary\" type=\"text\" name=\"parameter\">  \t\t\n\t\t\t\t<input class=\"btn btn-primary\" type=\"submit\" value=\"EXECUTE QUERY 15\">\n\t\t\t\t</h4>\n\t\t\t</div>\n\t\t</form>\n\t\t<table>\n {% if res15: %}\n <tr>\n\t\t\t\t\t{% if header: %}\n\t\t\t\t\t\t<th>{{ header }}</th>\n\t\t\t\t\t{% endif %}\n \t{% for i in range(0,row): %}\n {% for j in range(0,col) %}\n <th>{{ res15[i][j] }}</th>\n {% endfor %}\n \t{% endfor %}\n </tr>\n {% endif %}\n </table-->\n\t</body>\n</html>\n\n"
},
{
"alpha_fraction": 0.6828749179840088,
"alphanum_fraction": 0.7054126262664795,
"avg_line_length": 32.94578170776367,
"blob_id": "9189f2f2503886a684a7fb285a9b1d1987edb899",
"content_id": "95e21cb14763d65c2508c684777a8b2afcb11730",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5635,
"license_type": "no_license",
"max_line_length": 400,
"num_lines": 166,
"path": "/3-AWS-SQL/flaskapp.py",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "from flask import Flask, render_template, request, jsonify, make_response\nimport boto\nimport boto.s3.connection\nimport boto.rds\nfrom boto.s3.key import Key\nimport datetime\nimport hashlib\nimport base64\nimport mimetypes\nimport MySQLdb\nimport csv\nimport memcache\nimport random\nimport time\n\napp = Flask(__name__)\n\naccesskey = 'accesskey'\nsecretaccesskey = 'secretccesskey'\nbucket_name = 'mybucket'\n\nhost = 'hostname'\nuser = 'user'\npassword = 'password'\ndbname = 'mydb'\nfarerange = []\n\ns3 = boto.connect_s3(accesskey,secretaccesskey)\nbucket = s3.create_bucket(bucket_name, location=boto.s3.connection.Location.DEFAULT)\n\ndb = MySQLdb.connect(host=host,user=user,passwd=password,db=dbname)\ncur = db.cursor()\n\nmemhost = 'mymemcache.uacsj9.cfg.use2.cache.amazonaws.com:11211'\n\nmc = memcache.Client([memhost], debug=0)\n\[email protected]('/')\ndef login():\n\treturn render_template('index.html')\n\[email protected]('/uploadcsv', methods=['GET','POST'])\ndef uploadcsv():\n\tcur.execute('drop table mytable;')\n\tcur.execute('create table mytable(mytime timestamp,latitude double,longitude double,depth double,mag double,magType varchar(4),nst integer,gap double,dmin double,rms double,net varchar(3),id varchar(11),updated timestamp,place varchar(50),typess varchar(20),horizontalError double,depthError double,magError double,magNst double,statuss varchar(20),locationSource varchar(3),magSource varchar(3));')\n\t#cur.execute('create table mytable(pclass integer,survived integer,name varchar(30),sex varchar(10),age double,ticket\tvarchar(15),fare double,cabin varchar(15),home varchar(30));')\n\tcur.execute('create index index1 on mytable(longitude) using btree;')\n\tcur.execute('load data local infile \\'/home/ubuntu/flaskapp/all_month.csv\\' into table mytable fields terminated by \\',\\' optionally enclosed by \\'\"\\' lines terminated by \\'\\n\\' ignore 1 lines;')\n\tcur.execute('commit;')\n\treturn 
render_template('index.html',msg='success')\n\[email protected]('/query1', methods=['GET','POST'])\ndef query1():\n\tcur.execute('select count(*) from mytable;')\n\tresult = cur.fetchall()\n\trows = cur.rowcount\n\tone = result[0]\n\tcols = len(one)\n\tprint one\n\tprint 'rowcount: ',rows\n\tprint 'colcount: ',cols\n\treturn render_template('index.html',res1=result,row=rows,col=cols,header='COUNT')\n\[email protected]('/query2', methods=['GET','POST'])\ndef query2():\n\tparameter1 = request.form['parameter1']\n\tparameter2 = request.form['parameter2']\n\tcur.execute('select count(*) from mytable where age = %s and sex = %s;',(parameter1,parameter2))\n\tcount = cur.fetchall()\n\tcur.execute('select name from mytable where age = %s and sex = %s;',(parameter1,parameter2))\n\tresult = cur.fetchall()\n\trows = cur.rowcount\n\tone = result[0]\n\tcols = len(one)\n\tprint one\n\tprint 'rowcount: ',rows\n\tprint 'colcount: ',cols\n\tquery = 'select name from mytable where age = ' + parameter1 + ' and sex = ' + parameter2 + ';'\n\tquery = query.replace(' ',':')\n\tmc.set(query,result)\n\tprint 'memcache: ',mc.get(query)\n\treturn render_template('index.html',res2=result,row=rows,col=cols,header='name',c=count)\n\[email protected]('/query3', methods=['GET','POST'])\ndef query3():\n\tparameter = request.form['parameter']\n\tstarttime = time.time()\n\tfor i in range(0,500):\n\t\trn = random.uniform(-62.5965,84.659)\n\t\tmykey = parameter + ':' + str(rn)\n\t\tdatafrommc = mc.get(mykey)\n\t\tif not datafrommc:\n\t\t\tquery = 'select * from mytable where ' + parameter + ' = ' + str(rn) + ';'\n\t\t\tcur.execute(query)\n\t\t\tif cur.rowcount > 0:\n\t\t\t\tprint 'inside if'\n\t\t\t\tmc.set(mykey,cur.fetchall())\n\tendtime = time.time()\n\texecutiontime = endtime - starttime\n\tprint 'time: ',executiontime\n\treturn render_template('index.html',time=executiontime)\t\n\[email protected]('/query4', methods=['GET','POST'])\ndef query4():\n\tparameter = 
request.form['parameter']\n\tstarttime = time.time()\n\tfor i in range(0,500):\n\t\trn = random.uniform(-62.5965,84.659)\n\t\tquery = 'select * from mytable where ' + parameter + ' = ' + str(rn) + ';'\n\t\tcur.execute(query)\n\tendtime = time.time()\n\texecutiontime = endtime - starttime\n\tprint 'time: ',executiontime\n\treturn render_template('index.html',time1=executiontime)\t\n\[email protected]('/query5', methods=['GET','POST'])\ndef query5():\n\tparameter1 = request.form['parameter1']\n\tparameter2 = request.form['parameter2']\n\tstarttime = time.time()\n\tfor i in range(0,500):\n\t\tmykey = parameter1 + ':' + parameter2\n\t\tdatafrommc = mc.get(mykey)\n\t\tif not datafrommc:\n\t\t\tquery = 'select * from mytable where ' + parameter1 + ' = ' + parameter2 + ';'\n\t\t\tcur.execute(query)\n\t\t\tif cur.rowcount > 0:\n\t\t\t\tprint 'inside if'\n\t\t\t\tmc.set(mykey,cur.fetchall())\n\tendtime = time.time()\n\texecutiontime = endtime - starttime\n\tprint 'time: ',executiontime\n\treturn render_template('index.html',time2=executiontime)\t\n\[email protected]('/query6', methods=['GET','POST'])\ndef query6():\n\tparameter1 = request.form['parameter1']\n\tparameter2 = request.form['parameter2']\n\tstarttime = time.time()\n\tfor i in range(0,500):\n\t\tquery = 'select * from mytable where ' + parameter1 + ' = ' + parameter2 + ';'\n\t\tcur.execute(query)\n\tendtime = time.time()\n\texecutiontime = endtime - starttime\n\tprint 'time: ',executiontime\n\treturn render_template('index.html',time3=executiontime)\n\[email protected]('/query7', methods=['GET','POST'])\ndef query7():\n\tfname = 'all_month.csv'\n\tfp = open(fname,'r')\n\tkey = bucket.new_key(fname)\n\tkey.key = fname\n\tkey.set_contents_from_string(fp.read())\n\tfp.close()\n\tfor mylist in bucket.list():\n\t\tif fname == mylist.key:\n\t\t\tprint 'file uploaded successfully'\n\t\t\treturn render_template('index.html',msg1='success')\n\t\telse:\n\t\t\tprint 'file upload failed'\n\t\t\treturn 
render_template('index.html',msg1='failure')\n\treturn render_template('index.html')\n\nif __name__ == '__main__':\n\tapp.run()\n"
},
{
"alpha_fraction": 0.6930403113365173,
"alphanum_fraction": 0.7081807255744934,
"avg_line_length": 35.57143020629883,
"blob_id": "b6c09c7df4befdc276bcf0aca6735f63b5a634f2",
"content_id": "ebd66124c6a33f7f84d59abd2a714206479bdc61",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4095,
"license_type": "no_license",
"max_line_length": 319,
"num_lines": 112,
"path": "/4-AWS-RENDER_CHART/flaskapp.py",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "from flask import Flask, render_template, request\nimport MySQLdb\nimport numpy as np\nfrom sklearn.cluster import KMeans\nimport matplotlib.pyplot as plt\nimport boto\nimport boto.s3\nimport sys\nfrom boto.s3.key import Key\nimport json\nimport pygal\nimport time\n\napp = Flask(__name__)\n\naccesskey = 'accesskey'\nsecretaccesskey = 'secretaccesskey'\nbucket_name = 'mybucket'\nhost = 'hostname'\nuser = 'user'\npassword = 'password'\ndbname = 'mydb'\n\ndb = MySQLdb.connect(host=host,user=user,passwd=password,db=dbname)\ncur = db.cursor()\ns3 = boto.connect_s3(accesskey,secretaccesskey)\nbucket = s3.create_bucket(bucket_name, location=boto.s3.connection.Location.DEFAULT)\n\[email protected]('/', methods=['GET','POST'])\ndef mainpage():\n\treturn render_template('index.html')\n\[email protected]('/uploadcsv', methods=['GET','POST'])\ndef uploadcsv():\n\tcur.execute('drop table mytable;')\n\tcur.execute('create table mytable(Gender\tvarchar(10),GivenName varchar(20),Surname varchar(20),StreetAddress varchar(20),City varchar(15),State varchar(20),EmailAddress varchar(30),Username varchar(15),TelephoneNumber varchar(15),Age double,BloodType varchar(5),Centimeters integer,Latitude double,Longitude double);')\n\tcur.execute('create index index1 on mytable(Surname) using btree;')\n\tcur.execute('create index index2 on mytable(Centimeters) using btree;')\n\tcur.execute('create index index3 on mytable(State) using btree;')\n\tcur.execute('create index index4 on mytable(Age) using btree;')\n\tcur.execute('load data local infile \\'/home/ubuntu/flaskapp/data.csv\\' into table mytable fields terminated by \\',\\' optionally enclosed by \\'\"\\' lines terminated by \\'\\n\\' ignore 1 lines;')\n\tcur.execute('commit;')\n\treturn render_template('index.html',msg='success')\n\[email protected]('/query1', methods=['GET','POST'])\ndef query1():\n\tcur.execute('select count(*) from mytable;')\n\tresult = cur.fetchall()\n\trows = cur.rowcount\n\tone = result[0]\n\tcols = 
len(one)\n\tprint one\n\tprint 'rowcount: ',rows\n\tprint 'colcount: ',cols\n\treturn render_template('index.html',res1=result,row=rows,col=cols,header='COUNT')\n\t\[email protected]('/query2', methods=['GET','POST'])\ndef query2():\n\tparameter1 = request.form['parameter1']\n\tparameter2 = request.form['parameter2']\n\tparameter3 = request.form['parameter3']\n\tcharttype = request.form['chart']\n\tprint charttype\n\tcur.execute('select ' + parameter1 + ',' + parameter2 + ' from mytable limit 1000;')\n\tresult = cur.fetchall()\n\tfinal = []\n\tfor data in result:\n\t\tfinal.append([data[0],data[1]])\n\tX = np.array(final)\n\tstarttime = time.time()\n\tclt = KMeans(n_clusters=int(parameter3),random_state=0).fit(X)\n\tcentroids = clt.cluster_centers_\n\tlabels = clt.labels_\n\tu,c = np.unique(labels,return_counts=True)\n\td = dict(zip(u,c))\n\tif charttype == 'RENDER PIE CHART' or charttype == 'RENDER BAR CHART':\n\t\tcontent = 'xaxis,yaxis\\n'\n\t\tfor k,v in d.iteritems():\n\t\t\tcontent = content + str(k)+','+str(v)+'\\n'\n\t\tkey = bucket.new_key('piechart.csv')\n\t\tkey.key = 'piechart.csv'\n\t\tkey.set_contents_from_string(content)\n\t\tbucket.set_acl('public-read',key)\n\t\tif charttype == 'RENDER PIE CHART':\n\t\t\tendtime = time.time()\n\t\t\texecutiontime = endtime - starttime\n\t\t\treturn render_template('piechart.html',t=executiontime)\n\t\telse:\n\t\t\tendtime = time.time()\n\t\t\texecutiontime = endtime - starttime\n\t\t\treturn render_template('barchart.html',t=executiontime)\n\telif charttype == 'RENDER SCATTER CHART':\n\t\txy_chart = pygal.XY(stroke=False)\n\t\txy_chart.title = 'SCATTER PLOT'\n\t\tdatalist = {}\n\t\tfor i in range(len(result)):\n\t\t\tif labels[i] in datalist.keys():\n\t\t\t\tdatalist[labels[i]].append((result[i][0],result[i][1]))\n\t\t\telse:\n\t\t\t\tdatalist[labels[i]] = [(result[i][0],result[i][1])]\n\t\tfor i in range(int(parameter3)):\n\t\t\tdatapoints = len(datalist[i])\n\t\t\txy_chart.add('cluster '+str(i+1)+' 
('+str(datapoints)+')', datalist[i])\n\t\tfor i in range(int(parameter3)):\t\t\t\n\t\t\txy_chart.add('centroid '+str(i+1), [(centroids[i][0],centroids[i][1])])\n\t\tchart = xy_chart.render(is_unicode=True)\n\t\tendtime = time.time()\n\t\texecutiontime = endtime - starttime\n\t\treturn render_template('scatterchart.html',chart=chart,t=executiontime)\n\t\nif __name__ == '__main__':\n app.run()"
},
{
"alpha_fraction": 0.8125,
"alphanum_fraction": 0.8125,
"avg_line_length": 31,
"blob_id": "614d644ad89c7444b7ccfdd4419407994fb7950f",
"content_id": "99da52075ed2d87b2d4a42088a6c8bb9c7db2588",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 64,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 2,
"path": "/README.md",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "# Cloud_Computing\nCourse work as part of Cloud Computing course\n"
},
{
"alpha_fraction": 0.7185747027397156,
"alphanum_fraction": 0.735730767250061,
"avg_line_length": 42.29999923706055,
"blob_id": "664eb973dc82660a4dd6eb422ef1583767eade52",
"content_id": "6b1f5891471eaa0b18d9b0dd114aa9770633cc11",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3031,
"license_type": "no_license",
"max_line_length": 398,
"num_lines": 70,
"path": "/5-AWS-HADOOP/flaskapp.py",
"repo_name": "hemalathak10/Cloud_Computing",
"src_encoding": "UTF-8",
"text": "from flask import Flask, render_template, request\nimport MySQLdb\nimport numpy as np\nimport boto\nimport boto.s3\nimport sys\nfrom boto.s3.key import Key\nimport time\nfrom subprocess import call\nimport commands\n\napp = Flask(__name__)\n\naccesskey = 'accesskey'\nsecretaccesskey = 'secretaccesskey'\nbucket_name = 'mybucket'\nhost = 'hostname'\nuser = 'user'\npassword = 'password'\ndbname = 'mydb'\n\ndb = MySQLdb.connect(host=host,user=user,passwd=password,db=dbname)\ncur = db.cursor()\ns3 = boto.connect_s3(accesskey,secretaccesskey)\nbucket = s3.create_bucket(bucket_name, location=boto.s3.connection.Location.DEFAULT)\n\[email protected]('/', methods=['GET','POST'])\ndef mainpage():\n\treturn render_template('index.html')\n\[email protected]('/uploadcsv', methods=['GET','POST'])\ndef uploadcsv():\n\tcur.execute('drop table mytable;')\n\tcur.execute('create table mytable(Gender\tvarchar(10),GivenName varchar(20),Surname varchar(20),StreetAddress varchar(20),City varchar(15),State varchar(20),EmailAddress varchar(30),Username varchar(15),TelephoneNumber varchar(15),Age double,BloodType varchar(5),Centimeters integer,Latitude double,Longitude double);')\n\tcur.execute('load data local infile \\'/tmp/gutenberg/data.csv\\' into table mytable fields terminated by \\',\\' optionally enclosed by \\'\"\\' lines terminated by \\'\\n\\' ignore 1 lines;')\n\tcur.execute('commit;')\n\treturn render_template('index.html',msg='success')\n\[email protected]('/query1', methods=['GET','POST'])\ndef query1():\n\tcur.execute('select count(*) from mytable;')\n\tresult = cur.fetchall()\n\trows = cur.rowcount\n\tone = result[0]\n\tcols = len(one)\n\tprint one\n\tprint 'rowcount: ',rows\n\tprint 'colcount: ',cols\n\treturn render_template('index.html',res1=result,row=rows,col=cols,header='COUNT')\n\[email protected]('/query2', methods=['GET','POST'])\ndef query2():\n\tprint 'executing!!'\n\tprint commands.getoutput('sudo -S -u hduser rm /tmp/user/part-00000')\n\tprint 
commands.getoutput('sudo -S -u hduser /usr/local/hadoop/bin/hadoop dfs -rmr /user/hduser/*')\n\tprint commands.getoutput('sudo -S -u hduser /usr/local/hadoop/bin/hadoop dfs -copyFromLocal /tmp/user /user/hduser/user')\n\tcommand_output = commands.getoutput('sudo -S -u hduser /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.6.1.jar -file /home/hduser/flaskapp/mapper.py -mapper \"python /home/hduser/flaskapp/mapper.py\" -file /home/hduser/flaskapp/reduce.py -reducer \"python /home/hduser/flaskapp/reduce.py\" -input /user/hduser/user/* -output /user/hduser/user/user-output')\n\tprint command_output\n\tif 'streaming.StreamJob: Output directory: /user/hduser/user/user-output' in command_output:\n\t\tprint 'Success !!!'\n\t\tprint commands.getoutput('sudo -S -u hduser /usr/local/hadoop/bin/hadoop dfs -copyToLocal /user/hduser/user/user-output/part-00000 /tmp/user')\n\t\tfp = open('/tmp/user/part-00000','r')\n\t\tcontents = fp.readlines()\n\t\tfp.close()\n\t\treturn render_template('index.html',msg2='success',content=contents)\n\telse:\n\t\treturn render_template('index.html',msg2='failure')\n\t\nif __name__ == '__main__':\n app.run()\n"
}
] | 8 |
joevko/projects | https://github.com/joevko/projects | e61f921b74958fe96a91257dfdfbd30ea61b80e4 | 8a78554b0fa20ac47f99dc8c88f7ccf01897f874 | b6638d095176d6de51b2ffc85ed4a0b853e35ce6 | refs/heads/master | 2023-01-07T01:46:06.574386 | 2020-11-02T20:47:46 | 2020-11-02T20:47:46 | 284,535,439 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7259842753410339,
"alphanum_fraction": 0.7275590300559998,
"avg_line_length": 25.45833396911621,
"blob_id": "377509e26645eb76fc04a17c651b4406dfd5387b",
"content_id": "2534b3546afa241d6289478c5b4d2cab775dfd92",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 636,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 24,
"path": "/in1010/oblig6/Telegrafist.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "public class Telegrafist extends Thread{\n\n\tprivate Kanal kanal;\n\tprivate MonitorKryptert monitor;\n\n\tpublic Telegrafist(Kanal kanal, MonitorKryptert monitor) {\n\t\tthis.kanal = kanal;\n\t\tthis.monitor = monitor;\n\t}\n\n\t//Lytter på kanal, oppretter melding med sekvensnummer og kanal id og setter melding inn i monitor kryptert\n\tpublic void run() {\n\t\tthis.monitor.incAntallTelegrafister();\n\t\tString melding = null;\n\t\tint sekvensnummer = 0;\n\n\t\t\twhile ((melding = this.kanal.lytt()) != null) {\n\t\t\t\tthis.monitor.settInnMelding(\n\t\t\t\t\tnew Melding(melding, this.kanal.hentId(), sekvensnummer++));\n\t\t\t}\n\n\t\tthis.monitor.decAntallTelegrafister();\n\t}\n}\n"
},
{
"alpha_fraction": 0.7087576389312744,
"alphanum_fraction": 0.7433808445930481,
"avg_line_length": 16.85454559326172,
"blob_id": "a73677e11feaa444f2fa7cda3922e0dcd2ae4bba",
"content_id": "b7a3c0ed79aacde9d484530579f56fa319b3f9b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 982,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 55,
"path": "/in3110/assignment6/README.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# Assignment 6\n\nThis is Assignment 6 in IN3110.\n\n## 6.1 Handling the data\n\nPython script that reads data from diabetes.csv using pandas.\n\nRun it like so:\n\n```bash\npython data.py\n```\n\n## 6.2 Fitting a machine learning model\n\nThis part of the tasks contains a function fit which fits the data using sklearn.\n\nRun it like this:\n\n```bash\npython fitting.py\n```\n\n## 6.3 Visualizing the classifier\n\nVisualization of the classifier with coloured background which represents which\nclass the classifier predicts for that area.\n\nTo run:\n\n ```bash\n python visualize.py\n ```\n\n## 6.4 Visualization through a web app\n\nFlask app which uses code from visualize.py to generate a plot and display it\non a webpage.\nOBS: To use 6.4 the folders static and templates must be in the same directory as\nweb_visualization.py\n\nTo run:\n\n```bash\npython web_visualization.py\n\nNext, copy the address and paste in your browser.\n```\nAddress example:\nhttp://127.0.0.1:5001/\n\n## 6.5 & 6.6\n\nUnfortunately not done.\n"
},
{
"alpha_fraction": 0.7019704580307007,
"alphanum_fraction": 0.7019704580307007,
"avg_line_length": 35.818180084228516,
"blob_id": "facb7dc8514a936b5c8d335ae7cba10d191156b7",
"content_id": "f01d4ab9d8e90583889abadcdc4a02b87550db7e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 406,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 11,
"path": "/in1010/oblig4/HvitResept.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "abstract public class HvitResept extends Resept{\n public HvitResept(Legemiddel legemiddel, Lege utskrivendeLege, Pasient pasient, int reit){\n super(legemiddel, utskrivendeLege, pasient, reit);\n }\n\n @Override\n public String toString(){\n String output = legemiddel.hentNavn() + \", \" + legemiddel.hentId() + \", \" + utskrivendeLege.hentLegeNavn() + \", \" + pasient.hentId();\n return output;\n }\n}\n\n"
},
{
"alpha_fraction": 0.5860465168952942,
"alphanum_fraction": 0.5906976461410522,
"avg_line_length": 13.357142448425293,
"blob_id": "8924be03c8aafcce739cd282e223134279704fb1",
"content_id": "bbe737f725a8f41b6ccf92179362e01c07dcca9c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 430,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 28,
"path": "/in3110/assignment5/5.2/demo.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import sys\r\nimport re\r\n\r\n# A fuction\r\ndef a_function(var_one, var_two): # A comment after fucntion\r\n\r\n\tret_String = {}\r\n\r\n\tfor string in string_list:\r\n\t\tif string == \"first\": #comment after check\r\n\t\t\tprint(\"first done\")\r\n\t\t\tbreak;\r\n\t\tret_String = string\r\n\r\n\treturn ret_String\r\n\r\nclass A:\r\n\r\n\tdef a_loop_func():\r\n\t\twhile i < 42:\r\n\t\t\ttry:\r\n\t\t\t\tpass\r\n\t\t\texcept Exception as e: \r\n\t\t\t\traise\r\n\t\t\telse:\r\n\t\t\t\tpass\r\n\t\t\tfinally:\r\n\t\t\t\tpass\r\n"
},
{
"alpha_fraction": 0.5086132884025574,
"alphanum_fraction": 0.6149870753288269,
"avg_line_length": 29.552631378173828,
"blob_id": "b31e1105d40c809460e10b8a95279dfd6b97ba0f",
"content_id": "6961d24bc1012e312ec0472f3ccb503d459b6c07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2326,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 76,
"path": "/in2110/oblig2b/in2110-oblig-2b-precode (1).py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import spacy\nfrom spacy import displacy\n\nfrom in2110.oblig2b import plot_learning_curve\nfrom in2110.conllu import ConlluDoc\n\ndef accuracy(true, pred):\n true_positives = 0.0\n total = 0.0\n\n for i in range(len(true)):\n for j in range(len(true[i])):\n total += 1.0\n if true[i][j] == pred[i][j]:\n true_positives += 1.0\n\n return true_positives/total\n\ndef attachment_score(true, pred):\n words_with_correct_head = 0\n words = 0\n words_w_cor_head_dep = 0\n\n for i in range(len(true)):\n for j in range(len(true[i])):\n words += 1\n a = true[i][j].head\n b = pred[i][j].head\n if a.text == b.text:\n words_with_correct_head += 1\n if true[i][j].dep_ == pred[i][j].dep_:\n words_w_cor_head_dep += 1\n las = words_with_correct_head/words\n uas = words_w_cor_head_dep/words\n\n return uas, las\n\nnb = spacy.load(\"my-model/model-best\")\n\n#POS tagging\nconllu_dev = ConlluDoc.from_file(\"no_bokmaal-ud-dev.conllu\")\ndev_docs = conllu_dev.to_spacy(nb) #gold standard, doc objects\ndev_docs_unlabeled = conllu_dev.to_spacy(nb, keep_labels=False) #train data, spacy objects\n\ntrue = []\npred = []\nfor i in range(len(dev_docs)):\n nb.tagger(dev_docs_unlabeled[i])\n t = [token.tag_ for token in dev_docs[i]]\n true.append(t)\n p = [token.tag_ for token in dev_docs_unlabeled[i]]\n pred.append(p)\n\nacc = accuracy(true, pred) #0.9674997937804174\n\n#Parsing\nantall_setninger = 0\nfor i in range(len(dev_docs)):\n antall_setninger += 1\n nb.parser(dev_docs_unlabeled[i])\n\natt = attachment_score(dev_docs, dev_docs_unlabeled) #UAS = 0,7721136132420467 LAS = 0,8679919711842503\n\ndoc = nb(\"Å være i familie involverer ikke nødvendigvis ubetinget kjærlighet.\")\ndisplacy.serve(doc, style=\"dep\", options={\"fine_grained\": True})\n\n\"\"\"\na = [0.967, 0.916, 0.892, 0.852, 0.780, 0.692, 0.539]\na.reverse()\nb = [0.772, 0.615, 0.537, 0.426, 0.270, 0.148, 0.052]\nb.reverse()\nc = [0.868, 0.746, 0.686, 0.605, 0.492, 0.398, 0.194]\nc.reverse()\nplot_learning_curve([0.015625, 
0.03125, 0.0625, 0.125, 0.25, 0.5, 1.0], a, \"Accuracy\")\nplot_learning_curve([0.015625, 0.03125, 0.0625, 0.125, 0.25, 0.5, 1.0], b, \"UAS\")\nplot_learning_curve([0.015625, 0.03125, 0.0625, 0.125, 0.25, 0.5, 1.0], c, \"LAS\")\"\"\"\n"
},
{
"alpha_fraction": 0.6401734352111816,
"alphanum_fraction": 0.6401734352111816,
"avg_line_length": 16.743589401245117,
"blob_id": "d6298a236f189441cbf70a1830c8ca9104544ed9",
"content_id": "b79f2ba5b1dcf3c69979d0eb6c56afe9c87fb4d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 692,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 39,
"path": "/in1010/oblig4/Pasient.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "public class Pasient {\n String navn;\n String foedselsnr;\n int id;\n static int teller;\n Stabel<Resept> resepter;\n\n public Pasient(String navn, String foedselsnr){\n this.navn = navn;\n this.foedselsnr = foedselsnr;\n resepter = new Stabel<Resept>();\n id = teller;\n teller++;\n }\n\n public void leggTilResept(Resept r){\n resepter.leggPaa(r);\n }\n\n public Stabel<Resept> hentResepter(){\n return resepter;\n }\n public int hentId(){\n return id;\n }\n\n public String hentNavn(){\n return navn;\n }\n public String foedselsnr(){\n return foedselsnr;\n }\n\n @Override\n public String toString(){\n String output = navn + \", \" + foedselsnr;\n return output;\n }\n}\n"
},
{
"alpha_fraction": 0.7768924236297607,
"alphanum_fraction": 0.7768924236297607,
"avg_line_length": 49.20000076293945,
"blob_id": "5926bed98f0b0cd7120c476aaf7ced719d06ab14",
"content_id": "a0d6354da166a81745a752fe8e88f66823a5b18e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 251,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 5,
"path": "/in3110/assignment3/README.txt",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "To run tests for complex.py program, first open file test_complex.py.\nThen, at the and of the script you will see commented out calls for the\ntest function. Just simply remove the hash to run the tests one by one.\n\nI run the test program with pytest!\n"
},
{
"alpha_fraction": 0.6203299164772034,
"alphanum_fraction": 0.625687301158905,
"avg_line_length": 33.70098114013672,
"blob_id": "054fae991067170d0ef14ad27346223b525f6e10",
"content_id": "3eb939abf83b1d9c1c8c9b8397d1571799f5ea8c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7093,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 204,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/2b/oblig2b_løsning.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# Copyright 2017 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\n\"\"\"The file contains two implementations:\n1) A method to compute BLEU scores\n\n2) A retrieval-based chatbot based on TF-IDF. \n\"\"\"\n\n\nimport collections\nimport math\nimport numpy as np\n\n\ndef get_sentences(text_file):\n \"\"\"Given a text file with one (tokenised) sentence per line, returns a list \n of sentences , where each sentence is itself represented as a list of tokens.\n The tokens are all converted into lowercase.\n \"\"\"\n \n sentences = []\n fd = open(text_file)\n for sentence_line in fd:\n \n # We convert everything to lowercase\n sentence_line = sentence_line.rstrip(\"\\n\").lower()\n \n # We also replace special ' characters to '\n sentence_line = sentence_line.replace(\"'\", \"'\")\n \n sentences.append(sentence_line.split())\n fd.close()\n \n return sentences\n \n\ndef _get_ngrams(tokens, ngram_order):\n \"\"\"\n Extracts all n-grams counts of a given order from an input sequence of tokens.\n \"\"\"\n ngrams = collections.Counter()\n for i in range(0, len(tokens) - ngram_order + 1):\n ngram = tuple(tokens[i:i+ngram_order])\n ngrams[ngram] += 1\n return ngrams\n\n\ndef compute_precision(reference_file, output_file, ngram_order):\n \"\"\"\n Computes the precision score for a given N-gram order. 
The first file contains the \n reference translations, while the second file contains the translations actually\n produced by the system. ngram_order is 1 to compute the precision over unigrams, \n 2 for the precision over bigrams, and so forth. \n \"\"\"\n \n ref_sentences = get_sentences(reference_file)\n output_sentences = get_sentences(output_file)\n \n ngrams_overlaps = 0\n ngrams_in_output = 0\n \n for i, (ref_sentence, output_sentence) in enumerate(zip(ref_sentences, output_sentences)):\n ref_ngrams = _get_ngrams(ref_sentence, ngram_order)\n output_ngrams = _get_ngrams(output_sentence, ngram_order)\n \n overlap = ref_ngrams & output_ngrams\n ngrams_overlaps += sum(overlap.values())\n ngrams_in_output += sum(output_ngrams.values())\n \n precision = ngrams_overlaps / ngrams_in_output\n print(ngrams_overlaps, ngrams_in_output, precision)\n return precision\n\n\ndef compute_brevity_penalty(reference_file, output_file):\n \"\"\"Computes the brevity penalty.\"\"\"\n \n ref_sentences = get_sentences(reference_file)\n output_sentences = get_sentences(output_file)\n \n nb_ref_tokens = sum([len(sentence) for sentence in ref_sentences])\n nb_output_tokens = sum([len(sentence) for sentence in output_sentences])\n \n penalty = min(1, nb_output_tokens/nb_ref_tokens)\n return penalty\n\n \ndef compute_bleu(reference_file, output_file, max_order=4):\n \"\"\"\n Given a reference file, an output file from the translation system, and a \n maximum order for the N-grams, computes the BLEU score for the translations \n in the output file.\n \"\"\"\n \n precision_product = 1\n for i in range(1, max_order+1):\n precision_product *= compute_precision(reference_file, output_file, i) \n \n brevity_penalty = compute_brevity_penalty(reference_file, output_file)\n \n bleu = brevity_penalty * math.pow(precision_product, 1/max_order)\n return bleu\n\n\n\nclass RetrievalChatbot:\n \"\"\"Retrieval-based chatbot using TF-IDF vectors\"\"\"\n \n def __init__(self, dialogue_file):\n \"\"\"Given 
a corpus of dialoge utterances (one per line), computes the\n document frequencies and TF-IDF vectors for each utterance\"\"\"\n \n # We store all utterances (as lists of lowercased tokens)\n self.utterances = []\n fd = open(dialogue_file)\n for line in fd:\n utterance = self._tokenise(line.rstrip(\"\\n\"))\n self.utterances.append(utterance)\n fd.close()\n \n self.doc_freqs = self._compute_doc_frequencies()\n self.tf_idfs = [self.get_tf_idf(utterance) for utterance in self.utterances]\n\n \n def _tokenise(self, utterance):\n \"\"\"Convert an utterance to lowercase and tokenise it by splitting on space\"\"\"\n return utterance.strip().lower().split()\n\n \n def _compute_doc_frequencies(self):\n \"\"\"Compute the document frequencies (necessary for IDF)\"\"\"\n \n doc_freqs = {}\n for utterance in self.utterances:\n for word in set(utterance):\n doc_freqs[word] = doc_freqs.get(word, 0) + 1\n return doc_freqs\n\n \n def get_tf_idf(self, utterance):\n \"\"\"Compute the TF-IDF vector of an utterance. The vector can be represented \n as a dictionary mapping words to TF-IDF scores.\"\"\"\n \n tf_idf_vals = {}\n word_counts = {word:utterance.count(word) for word in utterance}\n for word, count in word_counts.items():\n idf = math.log(len(self.utterances)/(self.doc_freqs.get(word,0) + 1))\n tf_idf_vals[word] = count * idf\n return tf_idf_vals\n\n \n def _get_norm(self, tf_idf):\n \"\"\"Compute the vector norm\"\"\"\n \n return math.sqrt(sum([v**2 for v in tf_idf.values()]))\n\n \n def get_response(self, query):\n \"\"\"\n Finds out the utterance in the corpus that is closed to the query\n (based on cosine similarity with TF-IDF vectors) and returns the \n utterance following it. 
\n \"\"\"\n\n # If the query is a string, we first tokenise it\n if type(query)==str:\n query = self._tokenise(query)\n \n tf_idf_query = self.get_tf_idf(query)\n cosines = np.zeros(len(self.utterances))\n for i in range(len(cosines)):\n if set(self.utterances[i]) & set(query):\n cosines[i] = self._compute_cosine(tf_idf_query, self.tf_idfs[i])\n \n most_similar = np.argmax(cosines)\n # print(\"most similar\", self.utterances[most_similar])\n \n if most_similar < len(self.utterances)-1:\n return \" \".join(self.utterances[most_similar+1])\n \n \n \n def _compute_cosine(self, tf_idf1, tf_idf2):\n \"\"\"Computes the cosine similarity between two vectors\"\"\"\n \n dotproduct = 0\n for word, tf_idf_val in tf_idf1.items():\n if word in tf_idf2:\n dotproduct += tf_idf_val*tf_idf2[word]\n \n return dotproduct / (self._get_norm(tf_idf1) * self._get_norm(tf_idf2))\n \n\n \n"
},
{
"alpha_fraction": 0.4790167808532715,
"alphanum_fraction": 0.6498801112174988,
"avg_line_length": 31.076923370361328,
"blob_id": "dc493be44ce3667c77516758fbf0912f261ecf64",
"content_id": "2c797a76f213f8047241dbbc261d90ae747dce07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1668,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 52,
"path": "/in2110/in2110-lab-master/in2110-lab-master/ekstra/try.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "Python 3.6.4 (default, Jan 25 2018, 13:11:40)\n[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import nltk\n>>> dataset = nltk.corpus.brown.tagged_words()\ns>>> dataset\n[('The', 'AT'), ('Fulton', 'NP-TL'), ...]\n>>> len(dataset)\n1161192\n>>> from nltk.probability import ConditionalFreqDist\n>>> cfd = ConditionalFreqDist()\n>>> for token,tag in dataset:\n... cfd[tag][token] += 1\n...\n>>> cfd\n<ConditionalFreqDist with 472 conditions>\n>>> cfd['AT']\nFreqDist({'the': 62288, 'a': 21824, 'The': 6725, 'an': 3518, 'no': 1575, 'A': 1119, 'every': 434, 'No': 230, 'An': 188, 'Every': 56, ...})\n>>> from nltk.probability import MLEProbDist\n>>> from nltk.probability import ConditionalProbDist\n>>> cpd = ConditionalProbDist(cfd, MLEProbDist)\n>>> cpd\n<ConditionalProbDist with 472 conditions>\n>>> cpd['AT']\n<MLEProbDist based on 97959 samples>\n>>> cpd['AT'].prob('the')\n0.6358578589001521\n>>> cpd['AT'].logprob('the')\n-0.6532237966397387\n>>> cpd['AT'].logprob('The')\n-3.8645718736556733\n>>> cpd['AT'].prob('The')\n0.06865117038761115\n>>> cpd['NP-TL'].prob('Fulton')\n0.002488181139586962\n>>> cpd['AT'].prob('The')*cpd['NP-TL'].prob('Fulton')\n0.000170816547369025\n>>> import math\n>>> 2**(cpd['AT'].logprob('The')+cpd['NP-TL'].logprob('Fulton'))\n0.00017081654736902482\n>>> cpd['AT'].prob('The')*cpd['NP-TL'].prob('Fulton')\n0.000170816547369025\n>>> 2**(cpd['AT'].logprob('The')+cpd['NP-TL'].logprob('Fulton'))\n0.00017081654736902482\n>>> cpd['AT'].prob('google')\n0.0\n>>> cpd['AT'].prob('The')*cpd['NP-TL'].prob('google')\n0.0\n>>> smoothing = 1e-20\n1e-20\n>>> cpd['AT'].prob('The')*smoothing\n6.865117038761114e-22\n"
},
{
"alpha_fraction": 0.6790996789932251,
"alphanum_fraction": 0.6845659017562866,
"avg_line_length": 32.440860748291016,
"blob_id": "fa5e27675f7322632d6869fb57957b590c824eb5",
"content_id": "78f762d5d3106284a19ac703affddceea86e8bd3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3121,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 93,
"path": "/in2110/in2110-lab-master/in2110-lab-master/ekstra/task3.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport pandas as pd\nimport sklearn\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\nfrom matplotlib import pyplot as plt\n\n\ndf = pd.read_csv(\"insekter_fra_Ecuador.csv\", index_col=0)\ntrain_df, test_df = sklearn.model_selection.train_test_split(df, shuffle=False)\n\nX = train_df.loc[:, 'Number of tokens':'Number of links']\ny = train_df.loc[:, 'Ecuadors insekter']\nX_test = test_df.loc[:, 'Number of tokens':'Number of links']\ny_test = test_df.loc[:, 'Ecuadors insekter']\n\n\nlogReg = LogisticRegression(solver=\"lbfgs\", multi_class='ovr')\n\nfitted = logReg.fit(X, y)\nlogReg.coef_\nlogReg.intercept_\n\nscore = logReg.score(X_test, y_test)\nprint(score)\n\nsushi = X.loc[y == True]\ntaco = X.loc[y == False]\nplt.scatter(sushi.loc[:, 'Number of tokens'], sushi.loc[:, 'Number of links'], color='g')\nplt.scatter(taco.loc[:, 'Number of tokens'], taco.loc[:, 'Number of links'], color='r')\n\n# getting the x co-ordinates of the decision boundary\nplot_x = np.array([min(X.loc[:, 'Number of tokens']) - 2, max(X.loc[:, 'Number of tokens']) + 2])\n# getting corresponding y co-ordinates of the decision boundary\ncoef = logReg.coef_[0]\nintr = logReg.intercept_\nprint(coef, intr)\nplot_y = (-1/coef[1]) * (coef[0] * plot_x + intr[0])\nplt.plot(plot_x, plot_y)\nplt.show()\n\n\n#ax = plt.gca()\n#ax.autoscale(False)\n#x_vals = np.array(ax.get_xlim())\n#y_vals = -(x_vals * sushi.loc[:, 'Number of tokens'] + taco.loc[:, 'Number of tokens'])/ sushi.loc[:, 'Number of links']\n#plt.plot(x_vals, y_vals, '--', c=\"red\")\n#plt.show()\n\n# plt.scatter(x_np[:,0], x_np[:,1], c=y_np.reshape(-1),cmap=mpl.colors.ListedColormap(colors))\n# ax = plt.gca()\n# ax.autoscale(False)\n# x_vals = np.array(ax.get_xlim())\n# y_vals = -(x_vals * w_guess[0] + b_guess[0])/w_guess[1]\n# plt.plot(x_vals, y_vals, '--', c=\"red\")\n\n\n\n# read_data = pd.read_csv(\"insekter_fra_Ecuador.csv\", 
sep=',', index_col=0)\n# list_of_rows = [list(row) for row in read_data.values]\n# column_names = list(read_data.columns.values.tolist())\n#distances = np.array(list_of_rows)\n\n#print(list_of_rows)\n#print(train_df)\n\n# for t in train_df:\n# print(t)\n# print(\"----------------------------\")\n# for t in test_df:\n# print(t)\n\n\n# logReg = LogisticRegression(solver=\"lbfgs\")\n# fitted = logReg.fit(x, y)\n# print(fitted)\n\n\n\n\n# Oppgave (A)\n# Tren en logistisk regresjonsmodell på train_df (uten regularisering) som predikerer om wiki-siden handler\n# om et insekt fra Ecuador basert på to numeriske trekk (features), nemlig antall tokens og antall lenker på siden.\n# Hvilke accuracy oppnår du på testsettet test_df?\n\n\n\n# Oppgave (B) Forklar hvorfor verdien du oppnådde for accuracy er så lav, basert på det du vet om logistisk regresjon. For\n# å støtte din forklaring, tegn en scatter plot (du kan bruke funksjonen scatterplot i Seaborn1) hvor de to\n# aksene står for de to numeriske trekkene, og fargen (hue) representerer outputklassen (insekt fra Ecuador eller\n# ikke). Tegn deretter beslutningslinjen som er assosiert med den logistiske regresjonsmodellen du nettopp har\n# trent.\n"
},
{
"alpha_fraction": 0.7292545437812805,
"alphanum_fraction": 0.7334739565849304,
"avg_line_length": 27.440000534057617,
"blob_id": "b7beabb390758c4e9caba3e66298a1fe5324ce12",
"content_id": "5387371f4e4fbd33a11f1ed6ce63fbf6d63449ac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1426,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 50,
"path": "/in1010/oblig6/MonitorKryptert.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import java.util.*;\n\npublic class MonitorKryptert {\n\n\tprivate LinkedList<Melding> meldinger;\n\n\t//Variabel for å holde orden på hvor mange telegrafister som jobber med monitoren.\n\tprivate int antallTelegrafister;\n\n\tpublic MonitorKryptert() {\n\t\tthis.meldinger = new LinkedList<Melding>();\n\t\tthis.antallTelegrafister = -1;\n\t}\n\n\t//Setter inn melding i monitoren og gir beskjed til kryptografene at\n\t//en ny melding har blitt lagt til\n\tpublic synchronized void settInnMelding(Melding melding) {\n\t\tthis.meldinger.push(melding);\n\t\tnotifyAll();\n\t}\n\n\tpublic synchronized void incAntallTelegrafister() {\n\t\tif (this.antallTelegrafister == -1) {\n\t\t\tthis.antallTelegrafister = 1;\n\t\t} else {\n\t\t\tthis.antallTelegrafister++;\n\t\t}\n\t}\n\n\tpublic synchronized void decAntallTelegrafister() {\n\t\tthis.antallTelegrafister--;\n\t}\n\n\t//Kryptografene henter en melding fra monitoren eller venter\n\t//hvis det ikke er noen meldinger i monitoren og telegrafistene ikke er ferdige.\n\t//Når alle telegrafistene er ferdige og det ikke er flere meldinger i monitoren\n\t//så returneres null\n\tpublic synchronized Melding hentMelding() {\n\t\twhile (this.meldinger.isEmpty() && (this.antallTelegrafister == -1 || this.antallTelegrafister > 0)) {\n\t\t\ttry {\n\t\t\t\twait();\n\t\t\t} catch (InterruptedException e) {}\n\t\t}\n\t\tif (this.antallTelegrafister == 0 && this.meldinger.isEmpty()) {\n\t\t\treturn null;\n\t\t}\n\t\tMelding melding = this.meldinger.pollLast();\n\t\treturn melding;\n\t}\n}\n"
},
{
"alpha_fraction": 0.5096839666366577,
"alphanum_fraction": 0.5124022960662842,
"avg_line_length": 30.700000762939453,
"blob_id": "c1b954bc2a98fe492207d274e138d6694c38a686",
"content_id": "88771759c508b2cc5590b456590d9283503cf90e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2943,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 90,
"path": "/in3110/assignment5/5.4/grep.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import re\r\nimport argparse\r\nimport highlighter\r\n\r\n\r\ndef main():\r\n \"\"\"\r\n ArgumentParser for the command line interface.\r\n Then calling scanner function to color the matching syntaxes.\r\n -f: filename to scan\r\n -r: regex to detect\r\n -t: highlight found syntaxes\r\n \"\"\"\r\n\r\n parser = argparse.ArgumentParser(description=\"Grep Utility\")\r\n parser.add_argument(\"-f\", \"--file\",\r\n help=\"file\")\r\n parser.add_argument(\"-r\", \"--regex\",\r\n required=True,\r\n nargs=\"+\",\r\n dest=\"regex_list\",\r\n help=\"regex\",\r\n type=str)\r\n parser.add_argument(\"-t\", \"--highlight\",\r\n dest='highlight',\r\n action='store_true',\r\n help=\"highlight\")\r\n results = parser.parse_args()\r\n\r\n with open(results.file) as file:\r\n file_content = file.readlines()\r\n\r\n coloured_source_file = scanner(file_content,\r\n results.regex_list,\r\n results.highlight)\r\n\r\n for i in coloured_source_file:\r\n print(i[:-1])\r\n\r\n\r\ndef scanner(text, syntax_file, highlight):\r\n \"\"\"\r\n Looping through source file(text) to find keywords to color.\r\n Looping through syntax file to get info on which words to color and then\r\n using the previously written color_print function to change the colors of\r\n the found keywords. 
Used the built-in func replace() to replace the found\r\n keywords with coloured keywords.\r\n\r\n Args:\r\n text (str): text to style.\r\n syntax_file(dict): where key is regex and value a name for it.\r\n highlight (dict): where key is corresponding name from the syntax file\r\n and value is a bash color.\r\n Returns:\r\n output (list): list with added attributes\r\n \"\"\"\r\n\r\n # highlight = True;\r\n output = []\r\n\r\n for line in text:\r\n found_syntaxes = []\r\n coloured_line = line\r\n # print(line)\r\n key_colour = 0\r\n coloured = False\r\n for key in syntax_file:\r\n key_colour += 1\r\n # print(key)\r\n\r\n found_syntaxes = re.findall(key, line)\r\n if len(found_syntaxes) > 0:\r\n # We have found one or more syntaxes!\r\n for i in found_syntaxes:\r\n if highlight:\r\n # Color the found syntaxes if the user needs highlights\r\n if key_colour > 5:\r\n key_colour = 0\r\n new_text = highlighter.color_print(\r\n i, int(31 + key_colour))\r\n coloured_line = coloured_line.replace(i, new_text)\r\n coloured = True\r\n if coloured:\r\n output.append(coloured_line)\r\n\r\n return output\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n"
},
{
"alpha_fraction": 0.6699029207229614,
"alphanum_fraction": 0.6739482283592224,
"avg_line_length": 28.783132553100586,
"blob_id": "e8c5e5ab5ad6c1c93e6b37644e8f8ce824934e55",
"content_id": "1f027495e7742fcb90e51f2a963a9c9f7eeb10be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 2475,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 83,
"path": "/in1010/oblig1/Regneklynge (1).java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import java.util.ArrayList;\nimport java.util.Scanner;\nimport java.io.*; //imports all the classes that are defined in java.io package to your file\n\npublic class Regneklynge {\n\n private File tekstfil;\n int noderPerRack;\n private ArrayList<Rack> rackListe;\n\n public Regneklynge(String filnavn) {\n tekstfil = new File(filnavn);\n Scanner innlesing = null;\n try { //kode som kan feile\n innlesing = new Scanner(tekstfil);\n } catch(FileNotFoundException e) {\n System.out.println(\"Kan ikke åpne denne filen.\"); // printer ut beskjeden hvis det ikke gaar aa aapne filen\n System.exit(0);\n }\n\n\nif (innlesing.hasNext()){\n String line = innlesing.nextLine();\n this.noderPerRack = Integer.parseInt(line);\n}\n rackListe = new ArrayList<Rack>();\n Rack rack = new Rack();\n rackListe.add(rack);\n\n\n while (innlesing.hasNext()) {\n String linje = innlesing.nextLine();\n String [] splitted = linje.split(\" \");\n int antNoder = Integer.parseInt(splitted[0]);\n int minne = Integer.parseInt(splitted [1]);\n int prosessorer = Integer.parseInt(splitted[2]);\n\n for (int i=0; i<antNoder; i++){\n Node node = new Node(minne, prosessorer);\n this.settInnNode(node);\n }\n }\n}\n//plasserer en node inn i et rack med ledig plass, eller i et nytt rackk\n public void settInnNode(Node node) {\n //finner siste element i racklisten\n Rack sisteElement = rackListe.get(rackListe.size()-1);\n\n //nå sjekker jeg om sisteElement i rackListe inneholder færre noder enn noderPerRack\n // Hvis True- legges noden inn i racket.\n //Hvis False- lages det et bytt rack\n if(sisteElement.getAntNoder() < noderPerRack) {\n sisteElement.settInn(node);\n }else{\n Rack rack = new Rack();\n rackListe.add(rack); //legger til nytt racket i rackListe\n rack.settInn(node);//legger til noden i racket\n }\n }\n //Beregner totalt antall prosessorer i hele regneklyngen\n\n public int antProsessorer(){\n int teller = 0;\n for(int i = 0; i<rackListe.size();i++) {\n teller += 
rackListe.get(i).antProsessorer();\n }\n return teller;\n }\n\n //beregner antall noder i regneklyngen men minne over angitt grense\n\n public int noderMedNokMinne(int paakrevdMinne) {\n int teller = 0;\n for(int i = 0; i<rackListe.size(); i++) {\n teller += rackListe.get(i).noderMedNokMinne(paakrevdMinne);\n }\n return teller;\n }\n //Henter antall racks i regneklyngen\n public int antRacks() {\n return rackListe.size();\n }\n}\n"
},
{
"alpha_fraction": 0.6091954112052917,
"alphanum_fraction": 0.6524356603622437,
"avg_line_length": 28.95081901550293,
"blob_id": "f803746ae3d85ababf4ac719fe89f3fb77a82675",
"content_id": "d439f096dc820fea32869f875359132ce1ae10a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1827,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 61,
"path": "/in1010/oblig2/TestResepter.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "public class TestResepter{\n public static void main(String[] args) {\n PreparatA aa = new PreparatA(\"aa\", 500, 20, 7);\n PreparatB bb = new PreparatB(\"bb\", 300, 40, 8);\n PreparatC cc = new PreparatC(\"cc\", 100, 10);\n\n Lege lege1 = new Lege(\"Hans The Doctor\");\n\n // Oppretter resepter:\n BlaaResept res1 = new BlaaResept(aa, lege1, 123, 1);\n MResept mres1 = new MResept(bb, lege1, 456, 10);\n PResept pres1 = new PResept(cc, lege1, 789, 3);\n\n\n\n // Sjekker om unik legemiddel id fungerer\n System.out.println(\"Henter legemiddel ID 1: \" + res1.hentId());\n System.out.println(\"Henter legemiddel ID 1: \" + res1.hentId());\n System.out.println(\"Henter legemiddel ID 2: \" + mres1.hentId());\n System.out.println(\"Henter legemiddel ID 3: \" + pres1.hentId());\n\n //\n System.out.println(res1.hentLegemiddel());\n System.out.println(mres1.hentLegemiddel());\n System.out.println(pres1.hentLegemiddel());\n\n System.out.println(res1.hentLege().hentLegeNavn());\n\n // Sjekker om unik pasientId fungerer\n System.out.println(\"Henter pasientId 1: \" + res1.hentPasientId());\n System.out.println(\"Henter pasientId 1: \" + res1.hentPasientId());\n System.out.println(\"Henter pasientId 2: \" + mres1.hentPasientId());\n System.out.println(\"Reit Test: \");\n System.out.println(res1.hentReit());\n res1.bruk();\n System.out.println(res1.hentReit());\n res1.bruk();\n\n // Tester farge\n System.out.println(\"Farge Test: \");\n System.out.println(res1.farge());\n System.out.println(mres1.farge());\n System.out.println(pres1.farge());\n\n // System.out.println(lege1.skrivResept(aa,231,3));\n // System.out.println(lege1.skrivResept(aa, 2334, 7));\n\n //exception\n try{\n lege1.skrivResept(aa, 2334, 7);\n\n }\n catch (UlovligUtskrift e){\n System.out.println(e);\n }\n\n\n\n\n }\n}\n"
},
{
"alpha_fraction": 0.502267599105835,
"alphanum_fraction": 0.5158730149269104,
"avg_line_length": 27.45161247253418,
"blob_id": "5d4260915f9389b20526ff37204a7d59a5c4484f",
"content_id": "b9661b31445ab5b9bdb25287bf41ffe8c57e3fbb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1764,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 62,
"path": "/in3110/assignment3/wc.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#encoding utf-8\nimport sys\nimport glob\nimport os\n\n\n#directory = r\"/Users/joannakorzeniowska/INF3331-joannaek/assignment3.*\"\ndef readFile():\n wordCount = 0\n charCount = 0\n lineCount = 0\n with open(sys.argv[1], 'r') as f:\n for line in f:\n lineCount += 1\n charCount += len(line)\n for word in line.split():\n wordCount += 1\n print(\"a: \",lineCount, \", b: \", wordCount, \", c: \", charCount, \" ,fn: \",f)\n\ndef readAllFiles():\n wordCount = 0\n charCount = 0\n lineCount = 0\n for filename in glob.glob('*'):\n print(str(filename))\n with open(filename, 'r') as f:\n for line in f:\n lineCount += 1\n charCount += len(line)\n for word in line.split():\n wordCount += 1\n print(\"a: \",lineCount, \", b: \", wordCount, \", c: \", charCount)\n\ndef readPyFiles():\n wordCount = 0\n charCount = 0\n lineCount = 0\n for filename in glob.glob('*.py'):\n print(str(filename))\n with open(filename, 'r') as f:\n for line in f:\n lineCount += 1\n charCount += len(line)\n for word in line.split():\n wordCount += 1\n print(\"a: \",lineCount, \", b: \", wordCount, \", c: \", charCount)\n\nif __name__ == \"__main__\":\n if(sys.argv[1] == \"*\"):\n readAllFiles()\n elif(sys.argv[1] == \"*.py\"):\n readPyFiles()\n else:\n readFile()\n\n\n\n#'''tried to print out the file name, could not really find a solution..\n#I think I was somehow on the right path to solve it... ;) '''\n# for path in glob.glob(directory):\n# dn, fn = os.path.split(path)\n# print( \"a: \",lineCount, \", b: \", wordCount, \", c: \", charCount, \" ,fn: \",fn )\n"
},
{
"alpha_fraction": 0.48868778347969055,
"alphanum_fraction": 0.5316742062568665,
"avg_line_length": 17.94285774230957,
"blob_id": "d408f9c6a75516d27b4d02581df9723926434f58",
"content_id": "eaba2ab2f1228ea14bc479c086493db12baf220b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1326,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 70,
"path": "/in3110/assignment3/test_complex.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "from complex import Complex\n\ndef test_add_two_complex():\n x = Complex(1, 1)\n y = Complex(4, 9)\n z = x + y\n assert z == Complex(5, 10)\n #z.re == 5 and z.im == 9\n\n\ndef test_sub_two_complex():\n x = Complex(1, 1)\n y = Complex(2, 4)\n z = x - y\n assert z == Complex(-1, -3)\n #z.re == -6 and z.im == 4\n\ndef test_mul_two_complex():\n x = Complex(2,3)\n y = Complex(4,2)\n z = x * y\n assert z == Complex(2, 16)\n #z.re == 2 and z.im == 16\n\ndef test_conj_complex():\n c = Complex(1,3).conjugate()\n print(\"test \", c.re, \" \", c.im)\n assert c == Complex(1, -3)\n #c.re == 1 and c.im == -3\n\ndef test_mod_complex():\n m = Complex(6,-8).modulus()\n assert m == 10\n\n\ndef test_eq_two_complex():\n x = Complex(2,3)\n y = Complex(2,3)\n assert x == y\n\ndef test_radd():\n x = 1\n y = Complex(3, 5)\n z = x + y\n assert z == Complex(4, 5)\n\ndef test_rmul():\n x = 2\n y = Complex(4, 6)\n z = x * y\n assert z == Complex(8, 12)\n\ndef test_rsub():\n x = 1\n y = Complex(2, 3)\n z = x - y\n assert z == Complex(-1, -3)\n\n\n'''Unlock the test functions below to test the complex.py program.'''\n\n#test_add_two_complex()\n#test_sub_two_complex()\n#test_mul_two_complex()\n#test_conj_complex()\n#test_mod_complex()\n#test_eq_two_complex()\n#test_radd()\n#test_rmul()\n#test_rsub()\n"
},
{
"alpha_fraction": 0.6428571343421936,
"alphanum_fraction": 0.6428571343421936,
"avg_line_length": 3,
"blob_id": "2f5f001495d13412489e192a400f95d08b347dc6",
"content_id": "51fadca0f1a40e52918976e8597b417bdbd997dc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 28,
"license_type": "no_license",
"max_line_length": 3,
"num_lines": 6,
"path": "/in3110/assignment5/5.5/demo2.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "CCC\r\nBBB\r\nCCC\r\nDDD\r\nEEE\r\nEEE"
},
{
"alpha_fraction": 0.5707656741142273,
"alphanum_fraction": 0.5846867561340332,
"avg_line_length": 22.94444465637207,
"blob_id": "2b0234ee95a2545560fbe7a87195e8325028a525",
"content_id": "689e36d44e9a621cfa654de4cf1d864caed23bb8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 431,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 18,
"path": "/in3110/assignment2/climb4.sh",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# This is a program that allows the user to climb up the directory tree.\n# commands \"climb\" and \"climb 1\" take you one directory back.\n# climb up the directory tree by calling the climb function followed by\n# a number\nfunction climb {\n local a=$1\n local FILEUP=\"../\"\n if [[ \"$a\" -eq \"0\" || \"$a\" -eq \"1\" ]]; then\n cd \"$FILEUP\"\n else\n for (( i=0; i < $1; i++ ))\n \tdo\n \t\tUP+=../\n \tdone\n \tcd \"$UP\"\n fi\n}\n"
},
{
"alpha_fraction": 0.6678635478019714,
"alphanum_fraction": 0.6732495427131653,
"avg_line_length": 26.850000381469727,
"blob_id": "76ac18105e41878d6331b3e06963a1c2f1108cd4",
"content_id": "001cba6c96030c14fc2be246af7b6408f75474c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 558,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 20,
"path": "/in1010/oblig4/BlaaResept.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "public class BlaaResept extends Resept {\n public BlaaResept(Legemiddel legemiddel, Lege utskrivendeLege, Pasient pasient, int reit) {\n super(legemiddel, utskrivendeLege, pasient, reit);\n super.beloep = super.beloep * 0.25;\n }\n public String farge(){\n return \"Blå resept\";\n }\n\n public double prisAaBetale(){\n return super.beloep;\n }\n\n @Override\n public String toString(){\n String output = legemiddel.hentNavn() + \", \" + legemiddel.hentId() + \", \" + utskrivendeLege.hentLegeNavn() + \", \" + pasient.hentId();\n return output;\n }\n \n}\n"
},
{
"alpha_fraction": 0.6494662165641785,
"alphanum_fraction": 0.6743772029876709,
"avg_line_length": 39.07143020629883,
"blob_id": "c21453fd2946306f4f1bafda119e9491c052de07",
"content_id": "132a16d29cc2c2207414ff58f280c782b826380b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 562,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 14,
"path": "/in1010/oblig1/Hovedprogram.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "\npublic class Hovedprogram {\n public static void main(String[] args) {\n Regneklynge abel = new Regneklynge(\"regneklynge.txt\");\n System.out.println(\"--- Regneklynge Abel ---\" + \"\\n\");\n System.out.println(\"Antall rack:\" + abel.antRacks());\n System.out.println(\"Antall prosessorer:\" + abel.antProsessorer() + \"\\n\");\n System.out.println(\"Noder med minst 32:\" + abel.noderMedNokMinne(32));\n System.out.println(\"Noder med minst 64:\" + abel.noderMedNokMinne(64));\n System.out.println(\"Noder med minst 128:\" + abel.noderMedNokMinne(128));\n\n\n\n }\n}\n"
},
{
"alpha_fraction": 0.6746987700462341,
"alphanum_fraction": 0.8192771077156067,
"avg_line_length": 40.5,
"blob_id": "38d1bd9cbe17acd2c109f7e795b6f140e9d17743",
"content_id": "a0aeb0404d55576519cb4448731f815748de4e99",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 84,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 2,
"path": "/in2110/in2110-lab-master/in2110-lab-master/README.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# in2110-lab\nUkesoppgaver og andre ressurser til gruppetimene i IN2110 våren 2020.\n"
},
{
"alpha_fraction": 0.4982432723045349,
"alphanum_fraction": 0.513133704662323,
"avg_line_length": 30.792552947998047,
"blob_id": "1b1d43970f50fbf4732f7dc8f6b07a315041c08b",
"content_id": "b9794e28814920b4ef065ec78ec6b5dbe9b05231",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5980,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 188,
"path": "/in2110/oblig2a/in2110-oblig-2a-precode (1).py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# encoding: utf-8\nimport nltk\nfrom nltk.probability import ConditionalFreqDist, ConditionalProbDist, MLEProbDist\n\nTRAIN_DATA = \"norne.txt\"\n#DEV_DATA = \"no_bokmaal-ud-dev.tt\"\n\ndef read_dataset(filename):\n \"\"\"Read dataset from file and return a list of sentences, each\n sentence a list of words, and a list of the corresponding pos\n tags.\n \"\"\"\n\n filen = open(filename, encoding=\"utf8\")\n setningene = filen.read().split(\"\\n\")\n # i = 0\n # for s in setningene:\n # print(\"__________________________\")\n # print(i)\n # print(s)\n # print(\"__________________________\")\n # i = i+1\n\n sentences = []\n labels = []\n\n for s in setningene:\n if s != \"\":\n ordene_tagene = s.split(\" \")\n o = []\n t = []\n for o_t in ordene_tagene:\n l = o_t.split(\"([^_ ]+)\")\n o.append(l[0])\n #t.append(l[1])\n sentences.append(o)\n #labels.append(t)\n\n\n print(sentences)\n\n\n#train_sents, train_labels = read_dataset(TRAIN_DATA)\n#dev_sents, dev_labels = read_dataset(DEV_DATA)\n#test_sent, test_labels = read_dataset(\"eisner.tt\")\n\n# def bigrams(sequence):\n# \"\"\"Return a sequence of bigrams from input sequence.\"\"\"\n#\n# bgs = []\n#\n# s = [\"<s>\"] + sequence\n#\n# i = 0\n# j = 1\n#\n# while j != len(s):\n# bgs.append([s[i]] + [s[j]])\n# i += 1\n# j += 1\n#\n# return bgs\n#\n# class SmoothProbDist(MLEProbDist):\n# \"\"\"Probability distribution with simple smoothing.\"\"\"\n#\n# def prob(self, key):\n# \"\"\"Return probability for key. 
Add smoothing to zero values.\"\"\"\n#\n# value = super(SmoothProbDist, self).prob(key)\n#\n# if value == 0:\n# return 1e-20\n# else:\n# return value\n#\n# class PosTagger(object):\n# \"\"\"Pos tagger.\"\"\"\n#\n# def __init__(self):\n# self.transition = {}\n# self.emission = {}\n# self.tagger = []\n#\n# def transform(self, sentences):\n#\n# labels = []\n#\n# for s in sentences:\n# l = self.viterbi(s)\n# labels.append(l)\n#\n# return labels\n#\n# def viterbi(self, ordene):\n#\n# #initialiserer V\n# viterbi = [{}]\n# for st in self.tagger:\n# viterbi[0][st] = {\"prob\" : self.transition[\"<s>\"].prob(st) * self.emission[st].prob(ordene[0]), \"prev\" : None}\n#\n# #rekursion steget, finner de hoyeste probabilitene\n# for t in range(1, len(ordene)):\n# viterbi.append({})\n# for st in self.tagger:\n# max_tr_prob = viterbi[t-1][self.tagger[0]][\"prob\"]*self.transition[self.tagger[0]].prob(st)\n# prev_st_selected = self.tagger[0]\n# for prev_st in self.tagger[1:]:\n# tr_prob = viterbi[t-1][prev_st][\"prob\"]*self.transition[prev_st].prob(st)\n# if tr_prob > max_tr_prob:\n# max_tr_prob = tr_prob\n# prev_st_selected = prev_st\n# max_prob = max_tr_prob * self.emission[st].prob(ordene[t])\n# viterbi[t][st] = {\"prob\" : max_prob, \"prev\": prev_st_selected}\n#\n#\n# path = []\n# max_prob = max(value[\"prob\"] for value in viterbi[-1].values())\n# previous = None\n#\n# #finner det første (siste) tagget\n# for tag, data in viterbi[-1].items():\n# if data[\"prob\"] == max_prob:\n# path.append(tag)\n# previous = tag\n#\n# #gaar bakover for aa lagre den mest sannsynlig sekvensen\n# for t in range(len(viterbi) - 2, -1, -1):\n# path.insert(0, viterbi[t + 1][previous][\"prev\"])\n# previous = viterbi[t + 1][previous][\"prev\"]\n#\n# return path\n#\n# def fit(self, sentences, labels):\n# \"\"\"Fit pos tagger to training data.\"\"\"\n#\n# cfd_e = ConditionalFreqDist()\n# i = 0\n# while i < len(sentences):\n# j = 0\n# while j < len(sentences[i]):\n# 
cfd_e[labels[i][j]][sentences[i][j]] += 1\n# if labels[i][j] not in self.tagger:\n# self.tagger.append(labels[i][j])\n# j += 1\n# i += 1\n#\n# self.emission = ConditionalProbDist(cfd_e, SmoothProbDist)\n#\n# cfd_t = ConditionalFreqDist()\n# for s in labels:\n# t_b = bigrams(s)\n# for b in t_b:\n# cfd_t[b[0]][b[1]] += 1\n#\n# self.transition = ConditionalProbDist(cfd_t, MLEProbDist)\n#\n# pt = PosTagger()\n# pt.fit(train_sents, train_labels)\n#\n# def accuracy(true, pred):\n# \"\"\"Return accuracy score for predictions.\"\"\"\n#\n# true_positives = 0.0\n# total = 0.0\n#\n# for i in range(len(true)):\n# for j in range(len(true[i])):\n# total += 1.0\n# if true[i][j] == pred[i][j]:\n# true_positives += 1.0\n#\n# return true_positives/total\n#\n# \"\"\"\n# Accuracy with MLEProbDist: 0.545794495312\n# Accuracy with SmoothProbDist: 0.911545547032\n# \"\"\"\n# print (\">>> train_sents[8634]\\n\", train_sents[8634])\n# print (\">>> train_labels[8634]\\n\", train_labels[8634])\n# print( \">>> bigrams(['vi','vil','ikke','ha','det','normale','.'])\\n\", bigrams(['vi','vil','ikke','ha','det','normale','.']))\n# print (\">>> pt.transition['NOUN'].prob('VERB')\\n\", pt.transition[\"NOUN\"].prob(\"VERB\"))\n# print (\">>> pt.emission['NOUN'].prob('hest')\\n\", pt.emission[\"NOUN\"].prob(\"hest\"))\n# print (\">>> pt.transform([['vi','vil','ikke','ha','det','normale','.']])\\n\", pt.transform([['vi','vil','ikke','ha','det','normale','.']]))\n# print (\">>> accuracy(train_labels, pt.transform(train_sents))\\n\", accuracy(train_labels, pt.transform(train_sents)))\n# print (\">>> pt.emission['VERB'].prob('tæsje')\\n\", pt.emission['VERB'].prob(\"tæsje\"))\n# print (\">>> accuracy(dev_labels, pt.transform(dev_sents))\\n\", accuracy(dev_labels, pt.transform(dev_sents)))\nread_dataset(TRAIN_DATA)\n"
},
{
"alpha_fraction": 0.6222501397132874,
"alphanum_fraction": 0.6241357922554016,
"avg_line_length": 31.804122924804688,
"blob_id": "cfda639e55e95d5c4affb4db1cf504724647c7f7",
"content_id": "f2f08507b907e6ef28edb8f01543cce81e9f8bae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3184,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 97,
"path": "/in1010/oblig7/Main.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import javafx.application.Application;\nimport javafx.fxml.FXMLLoader;\nimport javafx.scene.Parent;\nimport javafx.scene.Scene;\nimport javafx.scene.control.Label;\nimport javafx.stage.FileChooser;\nimport javafx.stage.Stage;\n\nimport java.io.File;\nimport java.io.FileNotFoundException;\nimport java.util.HashMap;\nimport java.util.LinkedList;\n\npublic class Main extends Application {\n Stage primaryStage;\n Labyrint labyrint;\n LabyrintGrid labyrintGrid;\n HashMap<String, RuteRect> hviteRuter;\n LinkedList<String> losninger;\n int losningIndex = 0;\n Label label;\n\n @Override\n public void start(Stage primaryStage) throws Exception {\n this.hviteRuter = new HashMap<String, RuteRect>();\n this.labyrintGrid = new LabyrintGrid();\n primaryStage.setTitle(\"Hello World\");\n primaryStage.show();\n this.primaryStage = primaryStage;\n File file = this.chooseFile();\n this.labyrint = lagLabyrint(file);\n this.primaryStage.setScene(new Scene(\n this.labyrintGrid,\n this.labyrint.getKolonner() * Settings.RECT_HEIGHT_WIDTH + 120,\n this.labyrint.getRader() * Settings.RECT_HEIGHT_WIDTH)\n );\n this.label = new Label();\n this.drawGrid();\n }\n\n public void drawSolution () {\n for (RuteRect r: this.hviteRuter.values()) {\n r.setFill(Settings.getWalkableColor());\n r.setFillColors(Settings.getWalkableColor(), Settings.getWalkableSecondaryColor());\n }\n String path = this.losninger.get(this.losningIndex);\n for (String p : path.split(\" -> \")) {\n RuteRect r = this.hviteRuter.get(p);\n r.setFillColors(Settings.getPathColor(), Settings.getPathSecondaryColor());\n r.setFill(Settings.getPathColor());\n }\n this.label.setText(\"Løsninger: \" + this.losninger.size());\n }\n\n public void finnLosning (Rute rute) {\n this.losningIndex = 0;\n this.losninger = this.labyrint.finnUtveiFra(rute);\n if (this.losninger == null) {\n this.label.setText(\"Ingen løsninger\");\n } else {\n this.drawSolution();\n }\n }\n\n public RuteRect createRuteRect (Rute rute) {\n if (rute 
instanceof SortRute) {\n return RuteRect.SortRute();\n } else {\n RuteRect ruteRect = RuteRect.HvitRute(rute, this);\n this.hviteRuter.put(rute.getCoords(), ruteRect);\n return ruteRect;\n }\n }\n\n public void drawGrid () {\n Rute[][] ruter = this.labyrint.getRuter();\n for (Rute[] rr : ruter) {\n for (Rute r : rr) {\n this.labyrintGrid.add(this.createRuteRect(r), r.getKolonne(), r.getRad());\n }\n }\n this.labyrintGrid.add(this.label, this.labyrint.getRader(), 0);\n }\n\n private Labyrint lagLabyrint(File file) throws FileNotFoundException {\n return Labyrint.lesFraFil(file);\n }\n\n private File chooseFile() {\n FileChooser fileChooser = new FileChooser();\n return fileChooser.showOpenDialog(this.primaryStage);\n }\n\n public static void main(String[] args) {\n launch(args);\n }\n}\n"
},
{
"alpha_fraction": 0.6410991549491882,
"alphanum_fraction": 0.653285562992096,
"avg_line_length": 26.532894134521484,
"blob_id": "109bbcc1a2dc2c0e7d3e326b67fc4d58e6ee2a04",
"content_id": "38a1fde1736f54314dff8c21925700f2467c3fb0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4193,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 152,
"path": "/in2110/oblig1a/in2110-oblig-1a-precode (1).py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# coding=utf-8\n# IN2110 Oblig 1 pre-kode\nimport nltk\nfrom nltk import word_tokenize\n# Klasser og funksjoner fra scikit-learn som vi skal bruke i obligen\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.metrics import accuracy_score\n\nfrom udtk import Model\n# Norec dataset\nfrom in2110.corpora import norec\n\n# Hjelpefunksjoner for visualisering\nfrom in2110.oblig1 import scatter_plot\n\ndef categories(labels):\n rest = 0\n games = 0\n lit = 0\n for item in labels:\n if item == \"restaurants\":\n rest = rest +1\n elif item == \"games\":\n games = games+1\n else:\n lit = lit+1\n\n print(\"Total: \", len(labels))\n print(\"Restaurants: \", rest, \"\\nGames: \", games, \"\\nLiterature: \", lit)\n\n print(\"Restaurants: \", rest/len(labels)*100, \"%\")\n print(\"Games: \", games/len(labels)*100, \"%\")\n print(\"Literature: \", lit/len(labels)*100, \"%\")\n\ndef prepare_data(documents):\n \"\"\"Tar inn en iterator (kan være en liste) over dokumenter fra norec\n og returnerer to lister:\n\n - data : En liste over dokument-tekstene.\n - labels : En liste over hvilken kategori dokumentet tilhører.\n\n Begge listene skal være like lange og for dokumentet i data[i]\n skal vi kunne finne kategorien i labels[i].\n \"\"\"\n\n # Din kode her\n\n data = []\n labels = []\n for item in documents:\n cat = item.metadata.get(\"category\")\n if cat == \"games\" or cat == \"restaurants\" or cat == \"literature\":\n data.append(item.text)\n labels.append(cat)\n\n\n return data, labels\n\ndef tokenize(text):\n doc = [item.lower() for item in word_tokenize(text)]\n return doc\n\ndef tokenize_pluss_info(text):\n \"\"\"Tar inn en streng med tekst og returnerer en liste med tokens.\"\"\"\n\n # Å splitte på mellomrom er fattigmanns tokenisering. 
Endre til noe\n # bedre!\n split_funksjon = text.split()\n print(\".split() Tokens: \", len(split_funksjon), \" types: \", len(set(split_funksjon)))\n\n doc2 = word_tokenize(text)\n print(\"word_tokenize() Tokens: \", len(doc2), \"types: \", len(set(doc2)))\n\n doc3 = [item.lower() for item in word_tokenize(text)]\n print(\"lower() Tokens: \", len(doc3), \"types: \", len(set(doc3)))\n\n m = Model(\"norwegian-bokmaal\")\n doc4 = m.tokenize(text)\n print(\"m.tokenize() Tokens: \", len(doc4), \"types: \", len(set(doc4)))\n\nclass Vectorizer():\n def __init__(self):\n \"\"\"Konstruktør som tar inn antall klasser som argument.\"\"\"\n\n self.count_vectorizer = CountVectorizer(max_features=5000, tokenizer=tokenize, lowercase=False)\n self.tfidf = TfidfTransformer()\n\n def train(self, data):\n \"\"\"Tilpass vektorisereren til data. Returner de vektoriserte\n treningsdataene med og uten tfidf-vekting.\n\n \"\"\"\n\n # Din kode her\n\n # Tips: Bruk fit_transform() for å spare kjøretid.\n\n vec = self.count_vectorizer.fit_transform(data)\n vec_tfidf = self.tfidf.fit_transform(vec)\n\n return vec, vec_tfidf\n\n def vectorize(self, data):\n \"\"\"Vektoriser dokumentene i data. 
Returner vektorer med og uten\n tfidf-vekting.\n\n \"\"\"\n # Din kode her\n vec = self.count_vectorizer.transform(data)\n vec_tfidf = self.tfidf.transform(vec)\n\n return vec, vec_tfidf\n\ndef create_knn_classifier(vec, labels, k):\n \"\"\"Lag en k-NN-klassifikator, tren den med vec og labels, og returner\n den.\n\ntokenize_pluss_info(train_data[0])\n \"\"\"\n\n clf = KNeighborsClassifier(k)\n clf.fit(vec, labels)\n\n return clf\n\n# Treningsdata\ntrain_data, train_labels = prepare_data(norec.train_set())\n\n# Valideringsdata\ndev_data, dev_labels = prepare_data(norec.dev_set())\n\n# Testdata\ntest_data, test_labels = prepare_data(norec.test_set())\n\n# Din kode her\n\nvec = Vectorizer()\nmat, matTF = vec.train(train_data)\n\nscatter_plot(mat, train_labels)\nscatter_plot(matTF, train_labels)\n\nmat2, matTF2 = vec.vectorize(dev_data)\n\nmat3, matTF2 = vec.vectorize(test_data)\n\nclassifier = create_knn_classifier(matTF, train_labels, 10)\n\nknn = classifier.predict(mat3)\n\nprint(accuracy_score(test_labels, knn))\n"
},
{
"alpha_fraction": 0.6433756947517395,
"alphanum_fraction": 0.647912859916687,
"avg_line_length": 49.86153793334961,
"blob_id": "31ff3d6a34fa3f7c32200187de719bcff140c3a0",
"content_id": "5b58cf4bff6059f39785a84742d73aefc94b279c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9969,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 195,
"path": "/in2110/in2110-lab-master/in2110-lab-master/ekstra/logistisk_regresjon.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import urllib.request\nimport pandas, re, random\nimport numpy as np\nimport sklearn.linear_model, sklearn.metrics, sklearn.model_selection\nimport scipy.sparse\n\nordfiler = {\"norsk\":\"https://github.com/open-dict-data/ipa-dict/blob/master/data/nb.txt?raw=true\",\n \"arabisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ar.txt?raw=true\",\n \"finsk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/fi.txt?raw=true\",\n \"patwa\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/jam.txt?raw=true\",\n \"farsi\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/fa.txt?raw=true\",\n \"tysk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/de.txt?raw=true\",\n \"engelsk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/en_UK.txt?raw=true\",\n \"rumensk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ro.txt?raw=true\",\n \"khmer\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/km.txt?raw=true\",\n \"fransk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/fr_FR.txt?raw=true\",\n \"japansk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ja.txt?raw=true\",\n \"spansk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/es_ES.txt?raw=true\",\n \"svensk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/sv.txt?raw?true\",\n \"koreansk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ko.txt?raw?true\",\n \"swahilisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/sw.txt?raw?true\",\n \"vietnamesisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/vi_C.txt?raw?true\",\n \"mandarin\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/zh_hans.txt?raw?true\",\n 
\"malayisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ma.txt?raw?true\",\n \"kantonesisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/yue.txt?raw?true\",\n \"islandsk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/is.txt?raw=true\"}\n\ndef extract_wordlist(max_nb_words_per_language=2500):\n \"\"\"\n Laster ned fra Github en rekke ordlister med ord og deres phonetiske transkripsjoner i flere språk.\n Ordlistene er deretter satt sammen i en pandas DataFrame, og delt i en treningsett og en testsett.\n \"\"\"\n \n wordlist = []\n for lang, wordfile in ordfiler.items():\n print(\"Nedlasting av ordisten for\", lang, end=\"... \")\n data = urllib.request.urlopen(wordfile)\n wordlist_for_language = []\n for linje in data:\n linje = linje.decode(\"utf8\").rstrip(\"\\n\")\n word, transcription = linje.split(\"\\t\")\n\n # vi tar den første transkripsjon (hvis det finnes flere) \n # og fjerner slashtegnene ved start og slutten\n match = re.match(\"/(.+?)/\", transcription)\n if not match:\n continue\n transcription = match.group(1) \n wordlist_for_language.append({\"ord\":word, \"IPA\":transcription, \"språk\":lang})\n data.close()\n random.shuffle(wordlist_for_language)\n wordlist += wordlist_for_language[:max_nb_words_per_language]\n print(\"ferdig!\")\n\n # Nå bygger vi en DataFrame med alle ordene\n wordlist = pandas.DataFrame.from_records(wordlist)\n \n # Og vi blander sammen ordene i tilfeldig rekkefølge\n wordlist = wordlist.sample(frac=1,random_state=73)\n \n # Lage et treningssett og en testsett (med 10% av data)\n wordlist_train, wordlist_test = sklearn.model_selection.train_test_split(wordlist, test_size=0.1)\n print(\"Treningsett: %i eksempler, testsett: %i eksempler\"%(len(wordlist_train), len(wordlist_test)))\n \n return wordlist_train, wordlist_test\n\n\nclass LanguageIdentifier:\n \"\"\"Logistisk regresjonsmodell som tar IPA transkripsjoner av ord som input, \n og predikerer 
hvilke språkene disse ordene hører til.\"\"\"\n \n def __init__(self, max_ngram_order=1):\n \"\"\"Initialisere modellen basert på en maksimum N-gram ordre\"\"\" \n \n self.model = sklearn.linear_model.SGDClassifier(loss=\"log\", n_jobs=8, penalty=\"none\")\n self.symbols = [] \n self.languages =[]\n self.max_ngram_order = max_ngram_order \n\n \n def train(self, transcriptions, languages):\n \"\"\"Gitt en rekke med IPA transkripsjoner og en rekke med språknavn, trene\n den logistisk regresjonsmodellen. De to rekkene må ha samme lendgen\"\"\"\n \n self.symbols = self._extract_unique_symbols(transcriptions)\n \n feats = self._extract_feats(transcriptions)\n \n # Vi ekstrahere en liste over alle mulige språknavn \n self.languages = sorted(set(languages))\n # Vi konvertere liste over språk til heltall \n outputs = [self.languages.index(lang) for lang in languages]\n \n print(\"Starte trening\", end=\"... \")\n self.model.fit(feats, outputs)\n print(\"ferdig\")\n return self\n \n def _extract_unique_symbols(self, transcriptions):\n \"\"\"Gitt en rekke med IPA fonetiske transkripsjoner, ektrahere en liste med alle delsekvenser av IPA symboler \n (med lengde fra 1 til max_ngram_order) som finnes i transkripsjonene.\"\"\"\n\n # Vi må først finne ut hvilke delsekvenser skal brukes som features\n print(\"Starte preprocessering\", end=\"... \")\n symbols = {}\n for transcription in transcriptions:\n for k in range(self.max_ngram_order+1):\n for i in range(0, len(transcription)-k+1):\n ngram = transcription[i:i+k]\n symbols[ngram] = symbols.get(ngram,0) + 1\n print(\"ferdig\")\n \n symbols = sorted([p for p,c in symbols.items()])\n return symbols\n\n def _extract_feats(self, transcriptions):\n \"\"\"Gitt en rekke med IP transkripsjoner, ekstrahere en matrise av størrelse |T|x|F|,\n hvor |T| er antall transkripsjoner, og |F| er antall features brukt i modellen.\"\"\"\n \n # print(\"Starte featureekstrahering\", end=\"... 
\")\n feats_rows = []\n feats_cols = []\n symbol_indices = {p:i for i, p in enumerate(self.symbols)}\n for i, transcription in enumerate(transcriptions):\n for k in range(self.max_ngram_order+1):\n for j in range(0, len(transcription)-k+1):\n ngram = transcription[j:j+k]\n if ngram in symbol_indices:\n feats_rows.append(i)\n feats_cols.append(symbol_indices[ngram])\n\n # Vi bruker en sparse matrix for å gjøre læringsprosessen raskere\n feats = scipy.sparse.csr_matrix((np.ones(len(feats_rows), dtype=bool), (feats_rows, feats_cols)), \n shape=(len(transcriptions), len(self.symbols)), dtype=bool) \n \n return feats\n \n def predict(self, transcriptions):\n \"\"\"Gitt en rekke med IPA transkripsjoner, finne ut det mest sansynnlige språket\n for hver transkripsjon. Rekken som returneres må ha samme lengden som inputrekken\"\"\"\n \n feats = self._extract_feats(transcriptions)\n outputs = self.model.predict(feats)\n outputs = [self.languages[i] for i in outputs]\n return outputs\n \n def evaluate(self, transcriptions, languages):\n \"\"\"Gitt en rekke med IPA transkripsjoner og en rekke med språknavn, evaluere hvor godt\n modellen fungerer ved å beregne:\n 1) accuracy\n 2) precision, recall og F1 for hvert språk\n 3) micro- og macro-averaged F1.\n \"\"\"\n predictions = self.predict(transcriptions)\n accuracy = sklearn.metrics.accuracy_score(languages, predictions)\n print(\"Global accuracy:\", accuracy)\n scores_per_language = sklearn.metrics.precision_recall_fscore_support(languages, predictions, average=None)\n scores_per_language = pandas.DataFrame(scores_per_language).T\n scores_per_language.index = self.languages\n scores_per_language.columns = [\"precision\", \"recall\", \"f1\", \"support\"]\n print(\"Scores per language:\\n\", scores_per_language[[\"precision\", \"recall\", \"f1\"]])\n \n micro_f1 = sklearn.metrics.f1_score(languages, predictions, average=\"micro\")\n macro_f1 = sklearn.metrics.f1_score(languages, predictions, average=\"macro\")\n print(\"Micro 
F1: %.8f, Macro F1: %.3f\"%(micro_f1, macro_f1)) \n print(\"Micro precision\", sklearn.metrics.precision_score(languages,predictions, average=\"micro\"))\n print(\"Micro recall\", sklearn.metrics.recall_score(languages, predictions, average=\"micro\"))\n\n \n#######################\n# Brukseksempel:\n#######################\nif __name__ == \"__main__\":\n\n # Vi laster ned dataene (vi trenger kun å gjøre det én gang)\n train_data, test_data = extract_wordlist()\n \n # Vi teller antall ord per språk\n print(\"Statistikk over språkene i treningsett:\")\n print(train_data.språk.value_counts())\n print(\"Første 30 ord:\")\n print(train_data[:30])\n\n # Vi bygge og trene modellen\n model = LanguageIdentifier()\n transcriptions = train_data.IPA.values\n languages = train_data.språk.values\n model.train(transcriptions, languages)\n\n # Vi kan nå test modellen på nye data\n predicted_langs = model.predict([\"konstituˈθjon\", \"ɡrʉnlɔʋ\", \"stjourtnar̥skrauːɪn\", \"bʊndɛsvɛɾfaszʊŋ\"])\n print(\"Mest sansynnlige språk for ordene:\", predicted_langs)\n\n # Til slutt kan vi evaluere hvor godt modellen fungerer på testsett\n model.evaluate(test_data.IPA.values, test_data.språk.values)\n"
},
{
"alpha_fraction": 0.6053008437156677,
"alphanum_fraction": 0.6217765212059021,
"avg_line_length": 23.491228103637695,
"blob_id": "e56657e034194d72d00ce78297ed240899bbe6aa",
"content_id": "96c5aa60c6fd55c7fd5616cd6caa075b1bf3bdec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1396,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 57,
"path": "/in2110/in2110-lab-master/in2110-lab-master/gruppetimer/02-oblig-v2019/in2110-oblig-1b-precode.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "from in2110.oblig1b import visualize_word_vectors\nfrom in2110.corpora import aviskorpus_10_nn\n\ndef preprocess(sentences):\n \"\"\"Return list of preprocessed tokens.\"\"\"\n\n return []\n\ndef context_window(sent, pos, size):\n \"\"\"Return context window for word at pos of given size.\"\"\"\n\n return []\n\nclass WordVectorizer(object):\n \"\"\"Word vectorizer with sklearn-compatible interface.\"\"\"\n\n def __init__(self, max_features, window_size, normalize=False):\n self.max_features = max_features\n self.window_size = window_size\n self.normalize = normalize\n self.matrix = None\n self.is_normalized = False\n\n def fit(self, sentences):\n \"\"\"Fit vectorizer to sentences.\"\"\"\n\n pass\n\n def transform(self, words):\n \"\"\"Return vectors for each word in words.\"\"\"\n\n return [self.matrix[w] for w in words]\n\n def vector_norm(self, word):\n \"\"\"Compute vector norm for word.\"\"\"\n\n return 0\n\n def normalize_vectors(self):\n \"\"\"Normalize vectors.\"\"\"\n\n pass\n\n def euclidean_distance(self, w1, w2):\n \"\"\"Compute euclidean distance between w1 and w2.\"\"\"\n\n return 0\n\n def cosine_similarity(self, w1, w2):\n \"\"\"Compute cosine similarity between w1 and w2.\"\"\"\n\n return 0\n\n def nearest_neighbors(self, w, k=5):\n \"\"\"Return list of the k nearest neighbors to w.\"\"\"\n\n return []\n"
},
{
"alpha_fraction": 0.5137614607810974,
"alphanum_fraction": 0.538226306438446,
"avg_line_length": 15.769230842590332,
"blob_id": "9a27772ec489e57da9bb4cb3d35da72f9721c4fa",
"content_id": "69985aa84b5eeea4c7658de61a25975fef9c7f86",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 654,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 39,
"path": "/in3110/assignment5/5.3/demo.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "public class AddTwoNumbers {\n\n public static void main(String[] args) {\n //adding two number\n int num1 = 5, num2 = 15, sum;\n sum = num1 + num2;\n\n System.out.println(\"Sum of these numbers: \"+sum);\n }\n}\n\nclass ForLoopExample {\n\n public static void main(String[] args) {\n for (int i = 10; i > 1; i--){\n System.out.println(\"The value of i is: \"+i);\n }\n }\n}\n\nclass IfStatementExample {\n\n int timea = 20;\n\n if (timea < 18) {\n System.out.println(\"Good day.\");\n } else {\n System.out.println(\"Good evening.\");\n }\n}\n\n\n\n int i = 0;\n do {\n System.out.println(i);\n i++;\n }\n while (i < 5);\n"
},
{
"alpha_fraction": 0.6601915955543518,
"alphanum_fraction": 0.6636508703231812,
"avg_line_length": 47.49032211303711,
"blob_id": "c2c1e64e65932ad4ab1aad90246dc96cb0ddc2df",
"content_id": "5e3dd7b152e3eb7c4657a31f38e208ba757b5379",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7565,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 155,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/1b/logistisk_regresjon.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport urllib.request\nimport pandas, re, random\nimport numpy as np\nimport sklearn.linear_model, sklearn.metrics, sklearn.model_selection\n\nORDFILER = {\"norsk\":\"https://github.com/open-dict-data/ipa-dict/blob/master/data/nb.txt?raw=true\",\n \"arabisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ar.txt?raw=true\",\n \"finsk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/fi.txt?raw=true\",\n \"patwa\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/jam.txt?raw=true\",\n \"farsi\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/fa.txt?raw=true\",\n \"tysk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/de.txt?raw=true\",\n \"engelsk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/en_UK.txt?raw=true\",\n \"rumensk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ro.txt?raw=true\",\n \"khmer\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/km.txt?raw=true\",\n \"fransk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/fr_FR.txt?raw=true\",\n \"japansk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ja.txt?raw=true\",\n \"spansk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/es_ES.txt?raw=true\",\n \"svensk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/sv.txt?raw?true\",\n \"koreansk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ko.txt?raw?true\",\n \"swahilisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/sw.txt?raw?true\",\n \"vietnamesisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/vi_C.txt?raw?true\",\n \"mandarin\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/zh_hans.txt?raw?true\",\n 
\"malayisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/ma.txt?raw?true\",\n \"kantonesisk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/yue.txt?raw?true\",\n \"islandsk\":\"https://raw.githubusercontent.com/open-dict-data/ipa-dict/master/data/is.txt?raw=true\"}\n\nclass LanguageIdentifier:\n \"\"\"Logistisk regresjonsmodell som tar IPA transkripsjoner av ord som input, \n og predikerer hvilke språkene disse ordene hører til.\"\"\"\n \n def __init__(self):\n \"\"\"Initialiser modellen\"\"\" \n # selve regresjonsmodellen (som brukes all CPU-er på maskinen for trening)\n self.model = sklearn.linear_model.LogisticRegression(solver=\"lbfgs\", multi_class='ovr', n_jobs=-1)\n\n # Hvis den går for treigt kan dere også bruke:\n # self.model = sklearn.linear_model.SGDClassifier(loss=\"log\", n_jobs=-1) \n \n def train(self, transcriptions, languages):\n \"\"\"Gitt en rekke med IPA transkripsjoner og en rekke med språknavn, tren\n den logistisk regresjonsmodellen. De to rekkene må ha samme lendgen\"\"\"\n \n raise NotImplementedException()\n \n def predict(self, transcriptions):\n \"\"\"Gitt en rekke med IPA transkripsjoner, finn ut det mest sansynnlige språket\n for hver transkripsjon. 
Rekken som returneres må ha samme lengden som rekken i input\"\"\"\n \n raise NotImplementedException()\n\n \n def _extract_unique_symbols(self, transcriptions, min_nb_occurrences=10):\n \"\"\"Gitt en rekke med IPA fonetiske transkripsjoner, ektraher en liste med alle IPA \n symboler som finnes i transkripsjonene og forekommer minst min_nb_occurrences.\"\"\"\n raise NotImplementedException()\n \n def _extract_feats(self, transcriptions):\n \"\"\"Gitt en rekke med IPA transkripsjoner, ekstraher en matrise av størrelse |T|x|F|,\n hvor |T| er antall transkripsjoner, og |F| er antall features brukt i modellen.\"\"\"\n \n raise NotImplementedException()\n\n def evaluate(self, transcriptions, languages): \n \"\"\"Gitt en rekke med IPA transkripsjoner og en rekke med språknavn, evaluer hvor godt\n modellen fungerer ved å beregne:\n 1) accuracy\n 2) precision, recall og F1 for hvert språk\n 3) micro- og macro-averaged F1.\n \"\"\"\n \n # See API fra sklearn.metrics for å finne ut hvordan dette kan gjøres! \n raise NotImplementedException()\n \n\n\n \ndef extract_wordlist(max_nb_words_per_language=20000):\n \"\"\"\n Laster ned fra Github ordlister med ord og deres phonetiske transkripsjoner i flere språk.\n Ordlistene er deretter satt sammen i en pandas DataFrame, og delt i en treningsett og en testsett.\n \"\"\"\n \n full_wordlist = []\n for lang, wordfile in ORDFILER.items():\n \n print(\"Nedlasting av ordisten for\", lang, end=\"... 
\")\n data = urllib.request.urlopen(wordfile)\n \n wordlist_for_language = []\n for linje in data:\n linje = linje.decode(\"utf8\").rstrip(\"\\n\")\n word, transcription = linje.split(\"\\t\")\n \n # Noen transkripsjoner har feil tegn for \"primary stress\"\n transcription = transcription.replace(\"\\'\", \"ˈ\")\n \n # vi tar den første transkripsjon (hvis det finnes flere) \n # og fjerner slashtegnene ved start og slutten\n match = re.match(\"/(.+?)/\", transcription)\n if not match:\n continue\n transcription = match.group(1) \n wordlist_for_language.append({\"ord\":word, \"IPA\":transcription, \"språk\":lang})\n data.close()\n \n # Vi blander sammen ordene, og reduserer mengder hvis listen er for lang\n random.shuffle(wordlist_for_language)\n wordlist_for_language = wordlist_for_language[:max_nb_words_per_language]\n \n full_wordlist += wordlist_for_language\n print(\"ferdig!\")\n\n # Nå bygger vi en DataFrame med alle ordene\n full_wordlist = pandas.DataFrame.from_records(full_wordlist)\n \n # Og vi blander sammen ordene i tilfeldig rekkefølge\n full_wordlist = full_wordlist.sample(frac=1)\n \n # Lage et treningssett og en testsett (med 10% av data)\n wordlist_train, wordlist_test = sklearn.model_selection.train_test_split(full_wordlist, test_size=0.1)\n print(\"Treningsett: %i eksempler, testsett: %i eksempler\"%(len(wordlist_train), len(wordlist_test)))\n \n return wordlist_train, wordlist_test\n\n \n \n \n#######################\n# Brukseksempel:\n#######################\nif __name__ == \"__main__\":\n\n # Vi laster ned dataene (vi trenger kun å gjøre det én gang)\n train_data, test_data = extract_wordlist()\n \n # Vi teller antall ord per språk\n print(\"Statistikk over språkene i treningsett:\")\n print(train_data.språk.value_counts())\n print(\"Første 30 ord:\")\n print(train_data[:30])\n\n # Vi bygge og trene modellen\n model = LanguageIdentifier()\n transcriptions = train_data.IPA.values\n languages = train_data.språk.values\n 
model.train(transcriptions, languages)\n\n # Vi kan nå test modellen på nye data\n predicted_langs = model.predict([\"konstituˈθjon\", \"ɡrʉnlɔʋ\", \"stjourtnar̥skrauːɪn\", \"bʊndɛsvɛɾfaszʊŋ\"])\n print(\"Mest sansynnlige språk for ordene:\", predicted_langs)\n\n # Til slutt kan vi evaluere hvor godt modellen fungerer på testsett\n model.evaluate(test_data.IPA.values, test_data.språk.values)\n"
},
{
"alpha_fraction": 0.5765720009803772,
"alphanum_fraction": 0.5801216959953308,
"avg_line_length": 28,
"blob_id": "f33c28457db5f6849477ca9ffd3bdee7bdb4c577",
"content_id": "9f93f7ed0f7a9d16bd9c5ad84b275264203a7f77",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1972,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 68,
"path": "/in3110/assignment5/5.5/diff.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import argparse\n\n\ndef superdiff(input, modified):\n    \"\"\"\n    This function compares two files. It checks whether each line of the input\n    file is included in the modified file. If yes, it prefixes the line with 0;\n    if not, with -. Next, we go through the remaining lines of the modified\n    file and add them (with a + sign) to the output list.\n\n    Args:\n        input: lines of the first (original) file\n        modified: lines of the second (modified) file\n\n    Returns:\n        output(array): list of lines with either +, - or 0 at the beginning\n    \"\"\"\n    output = list(input)\n\n    for i in input:\n        if i in modified:\n            # adds 0 at the beginning of the line\n            output[input.index(i)] = \"0 \" + i\n            modified[modified.index(i)] = True\n        else:\n            # if the line is not there, add -\n            output[input.index(i)] = \"- \" + i\n\n    for y in modified:\n        if y is not True:\n            # adds the line to output with a + sign\n            output.insert(modified.index(y), \"+ \" + y)\n    return output\n\n\ndef strip_newline(list):\n    \"\"\"\n    Gets rid of \"\\\\n\" at the end of each element in the list, since\n    reading the file leaves it there.\n    \"\"\"\n    array = []\n    for elem in list:\n        array.append(elem.rstrip())\n    return array\n\n\ndef main():\n    # just an argparse to have a command line interface\n    parser = argparse.ArgumentParser(description=\"SuperDiff\")\n    # it is required to have exactly two arguments\n    parser.add_argument(\"-f\", \"--files\",\n                        required=True,\n                        nargs=2,\n                        dest=\"files_list\",\n                        help=\"choose two files for comparison\")\n    results = parser.parse_args()\n\n    open_if = open(results.files_list[0], \"r\")\n    open_of = open(results.files_list[1], \"r\")\n\n    # applying superdiff and printing it\n    output = superdiff(strip_newline(open_if), strip_newline(open_of))\n    for i in output:\n        print(i)\n\n\nif __name__ == \"__main__\":\n    main()\n"
},
{
"alpha_fraction": 0.5391908884048462,
"alphanum_fraction": 0.5600505471229553,
"avg_line_length": 34.155555725097656,
"blob_id": "f651f101ba30385ead2c056fd6b35d2e65a859db",
"content_id": "492f703d5520acc595f0e35fc4a3ac0bb87e7c3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1582,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 45,
"path": "/in3110/assignment4/blur/blur_3.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport numba\n# importing stuff\nimport numpy as np\nimport cv2\nimport time\nfrom check_fileformat import format_test, failed_format_test_print\n\n@numba.jit\ndef blur_jit(src, image):\n    '''Added jit. The rest stays exactly the same as before.'''\n\n    dst = np.zeros(image.shape)\n    for h in range(1, dst.shape[0]-1):\n        for w in range(1, dst.shape[1]-1):\n            for c in range(0, dst.shape[2]):\n                dst[h,w,c] = (src[h,w,c] + src[h-1,w,c] + src[h+1,w,c]\n                + src[h,w-1,c] + src[h,w+1,c]\n                + src[h-1,w-1,c] + src[h-1,w+1,c]\n                + src[h+1,w-1,c] + src[h+1,w+1,c]) / 9\n    return dst\n\n# The numba_blur works great as well; the copying function has been removed and\n# the jit-compiled method made returnable so that we can use it for other functions.\ndef numba_blur(input_img, output_img):\n    if format_test(input_img) and not isinstance(input_img, np.ndarray):\n        failed_format_test_print(input_img)\n        return\n    else:\n        # reading in the image\n        image = cv2.imread(input_img)\n\n        src = np.pad(image, ((1,1),(1,1),(0,0)), mode='edge')\n        src = src.astype(\"uint32\")\n        # start timing\n        start = time.time()\n        dst = blur_jit(src, image)\n        end = time.time()\n        dst = dst.astype(\"uint8\")\n        cv2.imwrite(output_img, dst)\n\n        # end the time count and calculate elapsed_time\n        elapsed_time = end - start\n        print(\"numba time:\", elapsed_time)\n        return dst\n"
},
{
"alpha_fraction": 0.5867714285850525,
"alphanum_fraction": 0.5939751267433167,
"avg_line_length": 22.136363983154297,
"blob_id": "9f0cb19a3519f7a33be93565b6162cc5ec55902f",
"content_id": "69caf3b2c0c90348f9eb7071ee01aa3be91979a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1530,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 66,
"path": "/in1010/oblig3/SortertLenkeliste.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "public class SortertLenkeliste <T extends Comparable<T>> extends Lenkeliste<T>{\n public SortertLenkeliste(){\n super();\n }\n //klassen kommer ikke gjennom alle testene(24 av 31 - ok)\n //jeg synes at problemet ligger her et sted... dessverre klarte jeg ikke å løse det.\n\n @Override\n public void leggTil(T x){\n Node n = new Node(x); // noden som skal legges til\n Node p = start; // hjelpe-noden som vi bruker til å iterere gjennom listen\n if (this.stoerrelse == 0) {\n super.leggTil(x);\n } else {\n while (p.neste != null) {\n if (n.element.compareTo(p.neste.element) < 0) {\n break;\n } else {\n p = p.neste;\n }\n }\n\n n.neste = p.neste;\n p.neste = n;\n stoerrelse++;\n\n }\n\n }\n\n @Override\n public T fjern() {\n if (stoerrelse == 0){\n throw new UgyldigListeIndeks(-1);\n }\n Node p = start;\n while (p.neste.neste != null){\n p = p.neste;\n }\n Node f = p.neste;\n p.neste = null;\n stoerrelse--;\n return f.element;\n }\n\n @Override\n public void sett(int pos, T x){\n throw new UnsupportedOperationException();\n }\n @Override\n public void leggTil(int pos, T x){\n throw new UnsupportedOperationException();\n }\n}\n\nclass TestMe {\n public static void main(String[] args) {\n SortertLenkeliste<String> l = new SortertLenkeliste<>();\n l.leggTil(\"CCC\");\n l.leggTil(\"BBB\");\n l.leggTil(\"AAA\");\n System.out.println(l.hent(0));\n System.out.println(l.hent(1));\n System.out.println(l.hent(2));\n }\n}\n"
},
{
"alpha_fraction": 0.667129635810852,
"alphanum_fraction": 0.6736111044883728,
"avg_line_length": 22.736263275146484,
"blob_id": "b891fb3668412ece5f055a8d7d87f1b2912540d3",
"content_id": "c356607c77b2873f858a8ce839f5646234970108",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2168,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 91,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/1a/in2110-oblig-1a-precode.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# IN2110 Oblig 1 pre-kode\n\n# Klasser og funksjoner fra scikit-learn som vi skal bruke i obligen\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.metrics import accuracy_score\n\n# Norec dataset\nfrom in2110.corpora import norec\n\n# Hjelpefunksjoner for visualisering\nfrom in2110.oblig1 import scatter_plot\n\ndef prepare_data(documents):\n \"\"\"Tar inn en iterator (kan være en liste) over dokumenter fra norec\n og returnerer to lister:\n\n - data : En liste over dokument-tekstene.\n - labels : En liste over hvilken kategori dokumentet tilhører.\n\n Begge listene skal være like lange og for dokumentet i data[i]\n skal vi kunne finne kategorien i labels[i].\n \"\"\"\n\n # Din kode her\n\n return data, labels\n\ndef tokenize(text):\n \"\"\"Tar inn en streng med tekst og returnerer en liste med tokens.\"\"\"\n\n # Å splitte på mellomrom er fattigmanns tokenisering. Endre til noe\n # bedre!\n\n return text.split()\n\nclass Vectorizer:\n def __init__(self):\n \"\"\"Konstruktør som tar inn antall klasser som argument.\"\"\"\n\n self.vectorizer = None\n self.tfidf = None\n\n def vec_train(self, data):\n \"\"\"Tilpass vektorisereren til treningsdata. Returner de vektoriserte\n treningsdataene med og uten tfidf-vekting.\n\n \"\"\"\n\n vec = None\n vec_tfidf = None\n\n # Din kode her\n\n # Tips: Bruk fit_transform() for å spare kjøretid.\n\n return vec, vec_tfidf\n\n def vec_test(self, data):\n \"\"\"Vektoriser dokumentene i nye data. 
Returner vektorer med og uten\n tfidf-vekting.\n\n \"\"\"\n\n vec = None\n vec_tfidf = None\n\n # Din kode her\n\n return vec, vec_tfidf\n\ndef create_knn_classifier(vec, labels, k):\n \"\"\"Lag en k-NN-klassifikator, tren den med vec og labels, og returner\n den.\n\n \"\"\"\n\n clf = None\n\n return clf\n\n# Treningsdata\ntrain_data, train_labels = prepare_data(norec.train_set())\n\n# Valideringsdata\ndev_data, dev_labels = prepare_data(norec.dev_set())\n\n# Testdata\ntest_data, test_labels = prepare_data(norec.test_set())\n\n# Din kode her\n"
},
{
"alpha_fraction": 0.754807710647583,
"alphanum_fraction": 0.7644230723381042,
"avg_line_length": 33.66666793823242,
"blob_id": "1d83185d720bb8b9546a599db0d4cb2fcc8e4915",
"content_id": "fc3918278800d204425d66d3d04eab390feb2521",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 624,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 18,
"path": "/in3110/assignment4/README.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "Assignment 4 is about blurring an image.\nFile blur.py uses argparse to provide a command-line user interface.\nFile blur_1.py blurs an image with pure Python.\nFile blur_2.py blurs an image with numpy.\nFile blur_3.py blurs an image with numba.\n\nInstall the blur package (named blur) like this:\npip3 install . --user\n\nRun it e.g. like this:\npython blur.py beatles.jpg blurred.jpg -b regular_blur\nHere the input image is the beatles.jpg picture; it is then saved as 'blurred.jpg',\nwith the flag \"-b\" (blur) set to regular_blur - which is blur_1.py.\n\nTo see the command-line help:\npython blur.py --help\n\nUnfortunately test_blur.py is not finished.\n"
},
{
"alpha_fraction": 0.6812412142753601,
"alphanum_fraction": 0.6925246715545654,
"avg_line_length": 25.259260177612305,
"blob_id": "ad48081f5d404acc473e957889a961d33e61a9b6",
"content_id": "b58c88a7ebec43e0f04315ec5766a4ea0d2296a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 709,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 27,
"path": "/in3110/assignment4/blur/blur_image.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport argparse\nimport blur_1\nimport blur_2\nimport blur_3\nimport cv2\n\ndef blur_image(input_filename, output_filename=None):\n    '''This function blurs an image using blur_1, given input_filename. It\n    takes one or two arguments; output_filename is optional. If the output_filename\n    argument is not provided, the blurred image is only returned, not saved to disk.\n\n    Args:\n        input_filename (str): path to the image to blur.\n        output_filename (str): optional path to save the blurred image to.\n\n    Returns:\n        numpy array \"image\"\n    '''\n\n    image = cv2.imread(input_filename)\n    blurred = blur_1.blur_matrix(image)\n\n    if output_filename is not None:\n        cv2.imwrite(output_filename, blurred)\n\n    return blurred\n"
},
{
"alpha_fraction": 0.5304268598556519,
"alphanum_fraction": 0.5356494188308716,
"avg_line_length": 31.382352828979492,
"blob_id": "8d51a9538f95615a2089b6f2cf5155e343f47eca",
"content_id": "5874b3ef0e390c22be63eb69b92f39457de3f226",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4404,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 136,
"path": "/in3110/assignment3/complex.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import math\nclass Complex:\n\n\n def __init__(self, re, im):\n '''Constructor of the Complex class. '''\n self.re = re\n self.im = im\n\n\n # Assignment 3.3\n\n def conjugate(self):\n '''The conjugate function changes reverts signs of a complex number expression.\n If-statement to evaluate the sign of individual parts of the complex expression'''\n a = self.re\n b = self.im\n\n if b < 0:\n b = b * (-1)\n print(\"b1 \", b)\n elif b > 0:\n b = b * (-1)\n print(\"b2 \", b)\n print (a,b)\n return Complex(a,b)\n\n\n def modulus(self):\n '''This function calculates the modulus of a complex number.'''\n x = self.re * self.re + self.im * self.im\n absx = abs(x)\n mod = math.sqrt(absx)\n return mod\n\n\n def __add__(self, other):\n '''This function adds to numbers. Using if-statement to check what kind of\n values we are about to add. 3 possibilities: i.e int, complex, or Complex'''\n #if isinstance(other, int) or isinstance(other, float) :\n if isinstance(other, (int, float)):\n x = self.re + other\n y = self.im\n return Complex(x,y)\n elif isinstance(other, complex):\n x = self.re + other.real\n y = self.im + other.imag\n return Complex(x,y)\n elif isinstance(other, Complex):\n x = self.re + other.re\n y = self.im + other.im\n return Complex(x,y)\n\n\n\n def __sub__(self, other):\n '''This function behaves similarly to __add__ function.\n This is subtraction.'''\n #if isinstance(other, int) or isinstance(other, float):\n if isinstance(other, (int, float)):\n x = self.re - other\n y = self.im\n return Complex(x,y)\n elif isinstance(other, complex):\n x = self.re - other.real\n y = self.im - other.imag\n return Complex(x,y)\n elif isinstance(other, Complex):\n x = self.re - other.re\n y = self.im - other.im\n return Complex(x,y)\n\n\n\n def __mul__(self, other):\n '''This function multiplies 2 numbers. If statement to check what type of number\n we want to multiply. 
Also, 3 possibilities as in the functions above.'''\n #if isinstance(other, int) or isinstance(other, int):\n if isinstance(other, (int, float)):\n x = self.re * other\n y = self.im * other\n return Complex(x,y)\n elif isinstance(other, complex):\n x = (self.re * other.real) + ((self.im * other.imag) * (-1))\n y = (self.re * other.imag) + (self.im * other.real)\n return Complex(x,y)\n elif isinstance(other, Complex):\n x = (self.re * other.re) + ((self.im * other.im) * (-1))\n y = (self.re * other.im) + (self.im * other.re)\n return Complex(x,y)\n\n #return Complex(x,y)\n\n\n def __eq__(self, other):\n '''Checks if 2 numbers are equal'''\n #if isinstance(other, int) or isinstance(other, int):\n if isinstance(other, (int, float)):\n return self.re == other and self.im == 0\n elif isinstance(other, complex):\n return self.re == other.real and self.im == other.imag\n elif isinstance(other, Complex):\n return self.re == other.re and self.im == other.im\n\n\n\n # Assignment 3.4 REVERSED FUNCTIONS FOR __add__, __sub__, __mul__.\n\n def __radd__(self, other):\n '''Addition is commutative, so we have to allow reversed addition\n f.ex: Complex(5,7) + 3j and 3j + Complex(5,7)'''\n return self + other\n\n def __rsub__(self, other):\n '''Same idea as in __radd__. This function is called when the first\n arg in __sub__ is not a Complex number'''\n #print(\"in rsub, self= \" + str(self) + \", other=\" + str(other))\n #z = other.__sub__(self)\n #x = other - self.re\n #y = -self.im\n #return Complex(x,y)\n return (-self) + other\n ''''''\n def __rmul__(self, other):\n return self * other\n\n\n # Optional, possibly useful methods\n\n # Allows you to write `-a`\n def __neg__(self):\n return Complex(-self.re, -self.im)\n\n # Make the `complex` function turn this into Python's version of a complex number\n def __complex__(self):\n pass\n"
},
{
"alpha_fraction": 0.6521739363670349,
"alphanum_fraction": 0.6521739363670349,
"avg_line_length": 3,
"blob_id": "b6de21a35fad64f09fa77155de94bedc91b50636",
"content_id": "275785e91559dc4808b388e6a557470eff021c51",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 23,
"license_type": "no_license",
"max_line_length": 3,
"num_lines": 5,
"path": "/in3110/assignment5/5.5/demo1.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "AAA\r\nBBb\r\nCCC\r\nDDD\r\nEEE"
},
{
"alpha_fraction": 0.6168582439422607,
"alphanum_fraction": 0.6245210766792297,
"avg_line_length": 22.727272033691406,
"blob_id": "8f134503b3a8c9db282f18d55f1519b1fb5ae425",
"content_id": "e4d0dae102ce53d97632730a6f03d1f13399151d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1044,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 44,
"path": "/in2110/in2110-lab-master/in2110-lab-master/ekstra/task4.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# encoding: utf-8\nimport nltk\nfrom nltk.probability import ConditionalFreqDist, ConditionalProbDist, MLEProbDist\n\nwith open(\"norne.txt\", encoding=\"utf8\") as f:\n mylist = [nltk.tag.str2tuple(t, sep='_') for t in f.read().split()]\n\n# string = \" \".join([\"_\".join(t) for t in mylist])\n# with open('sushi-txt', 'x') as f:\n# f.write(string)\n\nprint(mylist)\n#\n# word = []\n# tag = []\n\n# print(\"-----LENGHT-----\", len(mylist))\n#\n# cfd = ConditionalFreqDist()\n#\n# w = []\n# t = []\n# for token,tag in mylist:\n# cfd[tag][token] += 1\n#\n# cpd = ConditionalProbDist(cfd, MLEProbDist)\n# print(cpd['AT'].prob('the'))\n\n#self.transition = ConditionalProbDist(cfd_t, MLEProbDist)\n# filen = open(\"norne.txt\", encoding=\"utf8\")\n# setningene = filen.read().split(\"\\n\")\n\n# filen = open(\"norne.txt\", encoding=\"utf8\")\n# for line in filen:\n# print(line)\n#setningene = filen.read().split(\"\\n\")\n\n# abc = []\n# for t in filen.split():\n# abc.append(nltk.tag.str2tuple(t))\n# print(t)\n\n# sushi = [nltk.tag.str2tuple(t) for t in filen.split()]\n# print(sushi)\n"
},
{
"alpha_fraction": 0.6617646813392639,
"alphanum_fraction": 0.6911764740943909,
"avg_line_length": 14.11111068725586,
"blob_id": "4e68f7a8de21ef33f0ef230d58c0712d79c7b7c4",
"content_id": "c1e1791f04bc79ba27b63180340cd105a6ec763d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 136,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 9,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/2a/in2110-oblig-2a-precode.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import spacy\nfrom in2110.conllu import ConlluDoc\n\n\ndef attachment_score(true, pred):\n las = None\n uas = None\n\n return uas, las\n"
},
{
"alpha_fraction": 0.7006121277809143,
"alphanum_fraction": 0.7111853361129761,
"avg_line_length": 21.462499618530273,
"blob_id": "2eef16b27db0be652793be093c24ec921cc6b45c",
"content_id": "675655229b67c77437337fea22755b3b4234dea9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1797,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 80,
"path": "/in3110/assignment6/data.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport matplotlib.pyplot as plt\nimport pylab as pl\n\ndef plot_data(featureOne, featureTwo):\n\n\t\"\"\"\n\tThis function plots two features against diabetes data\n\n\tArgs:\n\t\tfeatureOne(['']): first feature\n\t\tfeatureTwo(['']): second featue\n\tReturns:\n\t\tNone\n\t\"\"\"\n\n\tplt.scatter(list(pos[featureOne]), list(pos[featureTwo]), color = \"red\" )\n\tplt.scatter(list(neg[featureOne]), list(neg[featureTwo]), color = \"green\" )\n\tplt.xlabel(featureOne)\n\tplt.ylabel(featureTwo)\n\tplt.show()\n\n\ndef get_training_set():\n\t\"\"\"\n\tThis function gets training set\n\n\tArgs:\n\t\tNone\n\tReturns:\n\t\ttrain(Dataframe)\n\t\"\"\"\n\treturn train\n\n\ndef get_validation_set():\n \t\"\"\"\n\tThis function gets validation set\n\n\tArgs:\n\t\tNone\n\tReturns:\n\t\tvalidation(Dataframe)\n\t\"\"\"\n \treturn validation\n\n\n# Reading data from cvs file\ndata = pd.read_csv(\"diabetes.csv\", header=0)\n\n# Drop rows with NaN values\nremoveNanValues = data.dropna(axis=0, how='any', thresh=None, inplace=False)\n\n#Drop the first row and make copy\ncopy_removeNanValues = removeNanValues.copy()\ncopy_removeNanValues = copy_removeNanValues.drop(copy_removeNanValues.columns[0], axis='columns')\n\n#replace all neg & pos values with 0 and 1\nd = {'neg':0,'pos':1}\ndf = copy_removeNanValues.replace(d)\n#spliting data into pos and neg diabetes\nmask = df['diabetes'] == 1;\npos = df[mask]\nneg = df[~mask]\n#getting same distrbution between pos and neg diabetes into final trainig and validation set\ntrain = pos.sample(frac=0.8)\nvalidation = pos.drop((train.index))\nneg2 = neg.sample(frac=0.8)\n\n#true training set with 80%\ntrain = train.append(neg2, ignore_index = True)\n#true validation set with 20%\nvalidation = validation.append(neg.drop(neg2.index), ignore_index = True)\n\n\nif __name__ == '__main__':\n\n plot_data('age', 'pressure')\n get_training_set()\n get_validation_set()\n"
},
{
"alpha_fraction": 0.568411648273468,
"alphanum_fraction": 0.5826888680458069,
"avg_line_length": 31.640777587890625,
"blob_id": "5e252b72e0cd7de653178a4b518d1ae915d92d5d",
"content_id": "bf97d0a494f6a11c88ff6adf647cfbd1968e44d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3362,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 103,
"path": "/in2110/in2110-lab-master/in2110-lab-master/gruppetimer/02-oblig-v2019/in2110-oblig-1b-solution.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
    "text": "import math\nimport heapq\n\nfrom collections import Counter, defaultdict\n\nfrom in2110.oblig1b import visualize_word_vectors\n\nfrom in2110.corpora import aviskorpus_10_nn\n\ndef preprocess(sentences):\n    return list(([t.lower() for t in sent.split()] for sent in sentences))\n\ndef context_window(sent, pos, size):\n    return sent[max(pos-size, 0):pos] + sent[pos+1:pos+1+size]\n\nclass WordVectorizer(object):\n    def __init__(self, max_features, window_size, normalize=False):\n        self.max_features = max_features\n        self.window_size = window_size\n        self.normalize = normalize\n        self.matrix = defaultdict(Counter)\n        self.is_normalized = False\n\n    def fit(self, sentences):\n        # Count word frequencies in order to use only the most\n        # frequent words as features.\n        freqs = Counter(t for s in sentences for t in s)\n\n        # Create vocabulary form the most frequent words\n        self.vocab = set(w for w,c in freqs.most_common(self.max_features))\n\n        # Iterate over sentences to build co-occurence matrix\n        for sent in sentences:\n            for i in range(len(sent)):\n                t = sent[i]\n                if t in self.vocab:\n                    context = context_window(sent, i, self.window_size)\n\n                    for ct in context:\n                        if ct in self.vocab:\n                            self.matrix[t][ct] += 1\n\n        if self.normalize:\n            self.normalize_vectors()\n\n    def transform(self, words):\n        vecs = []\n        for w in words:\n            assert w in self.matrix\n            vecs.append(self.matrix[w])\n        return vecs\n\n    def vector_norm(self, word):\n        vec = self.matrix[word]\n        return math.sqrt(sum(v**2 for v in vec.values()))\n\n    def normalize_vectors(self):\n        for w, vec in self.matrix.items():\n            norm = self.vector_norm(w)\n\n            for key in vec:\n                vec[key] /= norm\n\n        self.is_normalized = True\n\n    def euclidean_distance(self, w1, w2):\n        vec1, vec2 = self.transform((w1, w2))\n\n        # Save computation by only computing values for features that\n        # are active in one of the vectors.\n        union = vec1.keys() | vec2.keys()\n\n        return math.sqrt(sum((vec1[key] - vec2[key])**2\n                             for key in union))\n\n    def cosine_similarity(self, w1, w2):\n        vec1, vec2 = self.transform((w1, w2))\n\n        # Save computatoin by only computing values for features that\n        # are active in both vectors\n        intersect = set(vec1.keys()).intersection(vec2.keys())\n\n        dot_product = sum(vec1[key]*vec2[key] for key in intersect)\n\n        # Return dot product for normalized vectors\n        if self.is_normalized:\n            return dot_product\n        else:\n            return dot_product / (self.vector_norm(w1)*self.vector_norm(w2))\n\n    def nearest_neighbors(self, w, k=5):\n        if w in self.vocab:\n            # Use heapq to efficienctly find the top k neighbors.\n            return heapq.nlargest(k,\n                                  ((other, self.cosine_similarity(w, other))\n                                   for other in self.matrix if other != w),\n                                  key=lambda x : x[1])\n\nprint(\"Loading corpus...\")\nsentences = preprocess(aviskorpus_10_nn.sentences())\n\nvec = WordVectorizer(5000, 5)\nvec.fit(sentences)\n"
},
{
"alpha_fraction": 0.6862457394599915,
"alphanum_fraction": 0.6940773129463196,
"avg_line_length": 27.60869598388672,
"blob_id": "5d6b5fb6417c95ce806323e1eaa1cbbe2aa831c4",
"content_id": "de1d653a7047e2df0dceb4db1cc416ef320704fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2043,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 69,
"path": "/in3110/assignment6/fitting.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
    "text": "import sklearn\r\nsklearn.__version__\r\nimport matplotlib.pyplot as plt\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\r\nimport data\r\nfrom sklearn.metrics import accuracy_score\r\n\r\n\r\ndef fit(training_set_X, features, model):\r\n\t\"\"\"\r\n\tThis function fits training sets based on the training set, it's features and the selected model\r\n\r\n\tArgs:\r\n\t\ttraining_set_X(Dataframe): Training set\r\n\t\tfeatures(['']): features to fit trainig set with\r\n\t\tmodel(int): 1 for K Neighbors Classifier, 2 for Quadratic Discriminant Analysis\r\n\r\n\tReturns:\r\n\t\tmodel.fit()(fitted clasifier): fitted clasifier for either knn or qda\r\n\t\"\"\"\r\n\r\n\tknn = KNeighborsClassifier()\r\n\tqda = QuadraticDiscriminantAnalysis()\r\n\r\n\ttarget = training_set_X.pop('diabetes')\r\n\ttrain = training_set_X[features]\r\n\r\n\tif model == 1 :\r\n\t\treturn knn.fit(train, target)\r\n\telif model == 2 :\r\n\t\treturn qda.fit(train, target)\r\n\r\n\r\ndef Test():\r\n\t\"\"\"\r\n\t\tThis function runs two models and checks their accuracy\r\n\r\n\t\tArgs:\r\n\t\t\tNone\r\n\t\tReturns:\r\n\t\t\tNone\r\n\t\"\"\"\r\n\r\n\ttraining_set = data.get_training_set().copy()\r\n\tvalidation_set = data.get_validation_set().copy()\r\n\tfeatures = ['age','pressure', 'triceps','mass']\r\n\tfitted = fit(training_set, features, 1)\r\n\tpred_fitted = fitted.predict(validation_set[features])\r\n\tfit_score = accuracy_score(validation_set['diabetes'], pred_fitted)\r\n\r\n\tprint(\"KNeighborsClassifier with features: age, pressure, triceps, mass\" )\r\n\tprint(\"Score: \"\"{0:.2f}\".format(fit_score*100) + \"%\")\r\n\r\n\ttraining_set = data.get_training_set().copy()\r\n\tvalidation_set = data.get_validation_set().copy()\r\n\tfeatures = ['age','pressure', 'triceps','mass']\r\n\tfitted = fit(training_set, features, 2)\r\n\tpred_fitted = fitted.predict(validation_set[features])\r\n\tfit_score = accuracy_score(validation_set['diabetes'], pred_fitted)\r\n\r\n\tprint(\"QuadraticDiscriminantAnalysis with features: age, pressure, triceps, mass\" )\r\n\tprint(\"Score: \"\"{0:.2f}\".format(fit_score*100) + \"%\")\r\n\r\n\r\nif __name__ == '__main__':\r\n\r\n\r\n\tTest()\r\n"
},
{
"alpha_fraction": 0.7228915691375732,
"alphanum_fraction": 0.7228915691375732,
"avg_line_length": 40.5,
"blob_id": "5173b73591722b3832b97deb6bcd5131e8f411f2",
"content_id": "8f8fb0487240dfc7a625ed4a661c9e9071a590b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 83,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 2,
"path": "/README.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# *Still working on updating my repos*\n### A bunch of projects from my degree @UiO\n"
},
{
"alpha_fraction": 0.5809956789016724,
"alphanum_fraction": 0.5855445861816406,
"avg_line_length": 38.10396194458008,
"blob_id": "ebc612ecf12f4628931e47eee9d40eae8b96ed06",
"content_id": "7cc6dc12b17852a9bd75d6f73fac2fb5ca0d3564",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7932,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 202,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/1b/ner.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
    "text": "import re\n\nclass NamedEntityRecogniser:\n    \"\"\"Gjenkjenning av navngitte enheter ved bruk av HMM\"\"\"\n    \n    def __init__(self):\n        \"\"\"Intialiserer alle variablene som er nødvendig for å representere og \n        estimere sekvensmodellen (en Hidden Markov Model) som brukes til å \n        gjenkjenne de navngitte enhetene\"\"\"\n        \n        # alle labellene som forekommer i treningsettet\n        self.labels = set()\n\n        # alle token som forekommer i treningsettet\n        self.vocab = set()\n\n        # hvor mange ganger en label (f.eks. B-ORG) forekommer i treningsettet\n        self.label_counts = {} \n\n        # hvor mange transisjoner fra label_1 til label2 forekommer i treningsettet\n        self.transition_counts = {}\n        \n        # hvor mange emisjoner fra label til token forekommer i treningsettet\n        # (Merk at vi legger et spesielt symbol for ord som aldri forekommer\n        # i treningsettet, men kan forekomme i testsettet)\n        self.emission_counts = {(\"O\", \"<UNK>\"):1}\n        \n        # Sansynnlighet P(label_2 | label_1)\n        self.transition_probs = {}\n        \n        # Sansynnlighet P(token | label)\n        self.emission_probs = {}\n        \n        \n    def fit(self, tagged_text):\n        \"\"\"Estimerer tallene og sansynnlighetene for HMM, basert på (tokenisert)\n        tekst hvor navngitte enhetene er markert med XML tags (se norne.txt)\"\"\"\n        \n        # Ekstrahere setninger og navngitte enheter markert i hver setning\n        sentences, all_spans = preprocess(tagged_text)\n        \n        for sentence, spans in zip(sentences, all_spans):\n            \n            # Ekstrahere labelsekvenser, med BIO (også kalt IOB) marking\n            label_sequence = get_BIO_sequence(spans, len(sentence))\n            \n            # Oppdatere tallene \n            self._add_counts(sentence, label_sequence)\n        \n        # Beregne sansynnlighetene (transition og emission) ut fra tallene\n        self._fill_probs()\n        \n        \n    def _add_counts(self, sentence, label_sequence):\n        \"\"\"Oppdaterer variablene self.vocab, self.labels, self.label_counts, \n        self.transition_counts og self.emission_counts, basert på setningen og \n        sekvenslabellen assosiert med dem. \n        Merk at setningen og label_sequence har samme lengde.\"\"\"\n        \n        raise NotImplementedException()\n        \n    def _fill_probs(self, alpha_smoothing=1E-6):\n        \"\"\"Beregne sannsynlihetsfordelinger self.transition_probs og\n        self.emission_probs basert på tallene som er samlet inn i \n        self.label_counts, self.transition_counts og self.emission_counts.\n        \n        Når det gjeler self.emission_probs bør vi legge Laplace smoothing, med en\n        verdi for alpha som er alpha_smoothing.\"\"\"\n        \n        raise NotImplementedException()\n        \n        \n    def _viterbi(self, sentence):\n        \"\"\"Kjører Viterbi-algoritmen på setningen (liste over tokens), og\n        returnerer to outputs: \n        1) en labelsekvens (som har samme lengde som setningen)\n        2) sansynnlighet for hele sekvensen \"\"\"\n\n        # De 2 datastrukturer fra Viterbi algoritmen, som dere må fylle ut\n        lattice = [{label:None for label in self.labels}\n                   for _ in range(len(sentence))]\n        backpointers = [{label:None for label in self.labels}\n                        for _ in range(len(sentence))]\n\n        # Fylle ut lattice og backpointers for setningen\n        for i, token in enumerate(sentence):\n            for label in self.labels:\n                raise NotImplementedException()\n\n        # Finne ut det mest sannsynlig merkelapp for det siste ordet\n        best_final_label = max(lattice[-1].keys(), key=lambda x: lattice[-1][x])\n        best_final_prob = lattice[-1][best_final_label]\n\n        # Ekstrahere hele sekvensen ved å følge de \"backpointers\"\n        best_path = [best_final_label]\n        for i in range(i,0,-1):\n            best_path.insert(0, backpointers[i][best_path[0]])\n\n        # Returnerer den mest sannsynlige sekvensen (og dets sannsynlighet)\n        return best_path, best_final_prob\n    \n    \n    \n    def label(self, text):\n        \"\"\"Gitt en tokenisert tekst, finner ut navngitte enheter og markere disse\n        med XML tags. \"\"\"\n        sentences, _ = preprocess(text)\n        spans = []\n        for sentence in sentences:\n            sentence = [token if token in self.vocab else \"<UNK>\" for token in sentence]\n            label_sequence, _ = self._viterbi(sentence)\n            spans.append(get_spans(label_sequence))\n        \n        return postprocess(sentences, spans)\n    \n\n    \ndef get_BIO_sequence(spans, sentence_length):\n    \"\"\"Gitt en liste over \"spans\", representert som tuples (start, end, tag),\n    og en setningslengde, produserer en sekvens med BIO (også kalt IOB) labeller\n    for setningen. \n    Eksempel: hvis spans=[(1,3,'ORG')] og sentence_length=6 bør resultatet være\n    ['O', 'B-ORG', 'I-ORG', 'O', 'O', 'O']\"\"\"\n    \n    raise NotImplementedException()\n    \n\ndef get_spans(label_sequence):\n    \"\"\"Gitt en labelsekvens med BIO markering, returner en lister over \"spans\" med \n    navngitte enheter. Metoden er altså den motsatte av get_BIO_sequence\"\"\"\n    \n    spans = [] \n    i = 0\n    while i < len(label_sequence):\n        label = label_sequence[i]\n        if label.startswith(\"B-\"):\n            start = i\n            label = label[2:]\n            end = start + 1\n            while end < len(label_sequence) and label_sequence[end].startswith(\"I-%s\"%label):\n                end += 1\n            spans.append((start, end, label))\n            i = end\n        else:\n            i += 1\n    return spans\n\n\ndef preprocess(tagged_text):\n    \"\"\"Tar en tokenisert tekst med XML tags (som f.eks. <ORG>Stortinget</ORG>) og\n    returnerer en liste over setninger (som selv er lister over tokens), sammen med\n    en liste av samme lengde som inneholder de markerte navngitte enhetene. \"\"\"\n    \n    sentences = []\n    spans = []\n    \n    for i, line in enumerate(tagged_text.split(\"\\n\")):\n\n        tokens = []\n        spans_in_sentence = []\n        \n        for j, token in enumerate(line.split(\" \")):\n            \n            # Hvis token starter med en XML tag\n            start_match = re.match(\"<(\\w+?)>\", token)\n            if start_match:\n                new_span = (j, None, start_match.group(1))\n                spans_in_sentence.append(new_span)\n                token = token[start_match.end(0):]\n            \n            # Hvis token slutter med en XML tag\n            end_match = re.match(\"(.+)</(\\w+?)>$\", token)\n            if end_match:\n                if not spans_in_sentence or spans_in_sentence[-1][1]!=None:\n                    raise RuntimeError(\"Closing tag without corresponding open tag\")\n                start, _ , tag = spans_in_sentence[-1]\n                if tag != end_match.group(2):\n                    raise RuntimeError(\"Closing tag does not correspond to open tag\")\n                token = token[:end_match.end(1)]\n                spans_in_sentence[-1] = (start, j+1, tag)\n            \n            tokens.append(token)\n        \n        sentences.append(tokens)\n        spans.append(spans_in_sentence)\n    \n    return sentences, spans\n\n\ndef postprocess(sentences, spans):\n    \"\"\"Gitt en liste over setninger og en tilsvarende liste over \"spans\" med\n    navngitte enheter, produserer en tekst med XML markering.\"\"\"\n    \n    tagged_sentences = []\n    for i, sentence in enumerate(sentences):\n        new_sentence = list(sentence)\n        for start, end, tag in spans[i]:\n            new_sentence[start] = \"<%s>%s\"%(tag, new_sentence[start])\n            new_sentence[end-1] = \"%s</%s>\"%(new_sentence[end-1], tag)\n        tagged_sentences.append(\" \".join(new_sentence))\n    \n    return \"\\n\".join(tagged_sentences)\n\n    \n    \n"
},
{
"alpha_fraction": 0.5203831791877747,
"alphanum_fraction": 0.5311862826347351,
"avg_line_length": 27.358381271362305,
"blob_id": "3b9449be6dfdb2f3f6ca5c237e36a1b37d9f0e8e",
"content_id": "5e1ecdf612de03ae9e4694800910fc8a368d8d74",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4907,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 173,
"path": "/in2110/oblig1b/in2110-oblig-1b-precode (1).py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
    "text": "from in2110.oblig1b import visualize_word_vectors\nfrom in2110.corpora import aviskorpus_10_nn\nimport re\nimport math\n\ndef preprocess(sentences):\n    \"\"\"Return list of preprocessed tokens.\"\"\"\n\n    korpus = []\n    for item in sentences:\n        sentence = item.split(\" \")\n        korpus.append(sentence)\n\n    return korpus\n\ndef preprocess_downcase(sentences):\n    korpus = []\n    for item in sentences:\n        sentence = [i.lower() for i in item.split(\" \")]\n        korpus.append(sentence)\n\n    return korpus\n\ndef preprocess_special_symbols(sentences):\n    korpus = []\n    for item in sentences:\n        sentence = [i for i in item.split(\" \") if re.match(r\"[a-zA-Z]+\", i)]\n        korpus.append(sentence)\n\n    return korpus\n\ndef preprocess_all_in(sentences):\n    korpus = []\n    for item in sentences:\n        sentence = [i.lower() for i in item.split(\" \") if re.match(r\"[a-z]+\", i)]\n        korpus.append(sentence)\n\n    return korpus\n\ndef context_window(sent, pos, size):\n    \"\"\"Return context window for word at pos of given size.\"\"\"\n    start_a = pos+1\n    end_a = pos+size+1\n    start_b = pos-size\n\n    if start_b < 0:\n        cont_window = sent[0:pos] + sent[start_a:end_a]\n    else:\n        cont_window = sent[start_b:pos] + sent[start_a:end_a]\n    return cont_window\n\nclass WordVectorizer(object):\n    \"\"\"Word vectorizer with sklearn-compatible interface.\"\"\"\n\n    def __init__(self, max_features, window_size, normalize=False):\n        self.max_features = max_features\n        self.window_size = window_size\n        self.normalize = normalize\n        self.matrix = None\n        self.is_normalized = False\n\n    def fit(self, sentences):\n        \"\"\"Fit vectorizer to sentences.\"\"\"\n        #finner de mest frekvente\n        dict = {}\n        for item in sentences:\n            for i in item:\n                if i in dict.keys():\n                    dict[i] = dict[i] + 1\n                else:\n                    dict[i] = 1\n\n        sorted_d = sorted((value, key) for (key, value) in dict.items())\n        sorted_d.reverse()\n\n        d = sorted_d[0:self.max_features]\n        s = set([key for (value, key) in d])\n        \n        #lager matrisen\n        self.matrix = {el:{} for el in s}\n        \n        #utfyller matrisen\n        for senten in sentences:\n            i = 0\n            for word in senten:\n                if word in s:\n                    c_w = context_window(senten, i, self.window_size)\n                    for w in c_w:\n                        if w in s:\n                            if w in self.matrix[word].keys():\n                                self.matrix[word][w] = self.matrix[word][w] + 1\n                            else:\n                                self.matrix[word][w] = 1\n                i = i + 1\n\n        if self.normalize:\n            self.normalize_vectors()\n            self.is_normalized = True\n\n    def transform(self, words):\n        \"\"\"Return vectors for each word in words.\"\"\"\n\n        return [self.matrix[w] for w in words]\n\n    def vector_norm(self, word):\n        \"\"\"Compute vector norm for word.\"\"\"\n        vector = self.matrix[word]\n        norm_vector = 0\n        for w in vector:\n            norm_vector = norm_vector + vector[w]**2\n        \n        return math.sqrt(norm_vector)\n\n    def normalize_vectors(self):\n        \"\"\"Normalize vectors.\"\"\"\n\n        for key, value in self.matrix.items():\n            a = self.vector_norm(key)\n            for k, v in value.items():\n                self.matrix[key][k] = v/a\n\n    def euclidean_distance(self, w1, w2):\n        \"\"\"Compute euclidean distance between w1 and w2.\"\"\"\n\n        p = self.matrix[w1]\n        q = self.matrix[w2]\n        \n        eu_dist = 0\n\n        for key, value in p.items():\n            if key in q.keys():\n                eu_dist = eu_dist + (value-q[key])**2\n            else:\n                eu_dist = eu_dist + (value-0)**2\n\n        for key, value in q.items():\n            if key not in p.keys():\n                eu_dist = eu_dist + (0-value)**2\n\n        return math.sqrt(eu_dist) \n    \n    def cosine_similarity(self, w1, w2):\n        \"\"\"Compute cosine similarity between w1 and w2.\"\"\"\n\n        p = self.matrix[w1]\n        q = self.matrix[w2]\n\n        sim = 0\n        \n        for key, value in p.items():\n            if key in q.keys():\n                sim = sim + (value*q[key])\n\n        if self.is_normalized:\n            return sim\n        else:\n            return sim/((self.vector_norm(w1))*(self.vector_norm(w2)))\n\n    def nearest_neighbors(self, w, k):\n        \"\"\"Return list of the k nearest neighbors to w.\"\"\"\n\n        l = sorted([self.cosine_similarity(key, w), key] for key, value in self.matrix[w].items())\n        l.reverse()\n        return l[0:k]\n\nsentences = list(aviskorpus_10_nn.sentences())\np = preprocess_special_symbols(sentences)\nww = WordVectorizer(5000, 5)\nww.fit(p)\n\n\n\nvisualize_word_vectors(ww, [\"nynorsk\", \"bokmål\", \"aktuelt\", \"akseptert\"])\n"
},
{
"alpha_fraction": 0.6849502921104431,
"alphanum_fraction": 0.693520724773407,
"avg_line_length": 26.048076629638672,
"blob_id": "1ba39ab4e2279853dafb1fb2a23b0d39f47a46dc",
"content_id": "6b1dd57f649095d7761cb6be87b74f744c33d79e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2917,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 104,
"path": "/in3110/assignment6/web_visualization.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
    "text": "import sys\r\nsys.path.insert(1,'../')\r\nfrom flask import Flask\r\nfrom flask import render_template, request, send_from_directory, current_app\r\nimport os\r\nimport matplotlib.pyplot as plt\r\nimport matplotlib\r\nfrom visualize import plot\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\r\n\r\napp = Flask(__name__)\r\n\r\[email protected](\"/\", methods = ['POST', 'GET'])\r\ndef plot_visulization():\r\n\r\n\t\"\"\"\r\n\tThis function calls web_visualization.html when requested. \r\n\r\n\tUser can set two features and one classifier in order to calculate a new graph.\r\n\r\n\tArgs:\r\n\t\tNone\r\n\tReturns\r\n\t\tflask.render_template(): rendering of the page\r\n\t\"\"\"\r\n\t# If user accesses through the form/POST:\r\n\tif request.method == 'POST':\r\n\t\treq = request.form\r\n\t\tfeatures=['glucose','mass']\r\n\t\tif 'first_feature' in request.form:\r\n\t\t\tfirst = request.form.get('first_feature')\r\n\t\t\tfeatures[0]=first\r\n\t\t\t\r\n\t\tif 'second_feature' in request.form:\r\n\t\t\tsecond = req.get('second_feature')\r\n\r\n\t\t\tfeatures[1]=second\r\n\t\t\r\n\t\tif 'classifier' in request.form:\r\n\t\t\tclassifier = request.form.get('classifier')\r\n\r\n\t\tif classifier is ('knn'):\r\n\t\t\t\tselected_classifier = KNeighborsClassifier()\r\n\t\telse:\r\n\t\t\tselected_classifier = QuadraticDiscriminantAnalysis()\r\n\t# If the page is not accessed through the form:\r\n\telse:\r\n\t\tclassifier ='knn'\r\n\t\tselected_classifier = KNeighborsClassifier()\r\n\t\tfeatures = ['glucose','mass']\r\n\r\n\t# Sets plot name\r\n\tplot_name='static/images/'+str(features[0])+str(features[1])+str(classifier)+'.png'\r\n\t# Plots the data\r\n\tpl = plot(features, selected_classifier)\r\n\t# Saves the figure\r\n\tpl.savefig(plot_name)\r\n\r\n\t# Renders the page\r\n\treturn render_template('web_visualization.html', url=plot_name, classifier=classifier, first=features[0], second=features[1])\r\n\r\n# Help pages\r\[email protected](\"/help\")\r\ndef help():\r\n    return render_template('help.html')\r\n\r\[email protected](\"/help/visualize\")\r\ndef visualize():\r\n    return render_template('help/visualize.html')\r\n\r\[email protected](\"/help/fitting\")\r\ndef fitting():\r\n    return render_template('help/fitting.html')\r\n\r\[email protected](\"/help/data\")\r\ndef data():\r\n    return render_template('help/data.html')\r\n\r\n# A better-looking pydoc I created with a module called \"pdoc\"\r\[email protected](\"/help2/index.html\")\r\ndef help2_index():\r\n    return current_app.send_static_file('help2/index.html')\r\n\r\[email protected](\"/help2/data.html\")\r\ndef data2():\r\n    return current_app.send_static_file('help2/data.html')\r\n\r\[email protected](\"/help2/visualize.html\")\r\ndef visualize2():\r\n    return current_app.send_static_file('help2/visualize.html')\r\n\r\[email protected](\"/help2/fitting.html\")\r\ndef fitting2():\r\n    return current_app.send_static_file('help2/fitting.html')\r\n\r\[email protected](\"/help2/web_visualization.html\")\r\ndef web_visualization():\r\n    return current_app.send_static_file('help2/web_visualization.html')\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\tapp.run(debug=True,port=5001)\r\n"
},
{
"alpha_fraction": 0.4955448806285858,
"alphanum_fraction": 0.5078821182250977,
"avg_line_length": 20.776119232177734,
"blob_id": "cf76f088d044937f07160fcf0d2ed80120da1a0f",
"content_id": "57a5d14cd542c24bf34a04ae5d3e08a4229ba25c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1459,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 67,
"path": "/in3110/assignment2/logfile.sh",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# time tracker\n# commands:\n# * track start \"label\"\n# * track stop\n# * track status\n# * log\n\nexport LOGFILE1=\"logfile.txt\"\nexport LOGFILE2=\"log.txt\"\nongoing=0\nlabel=\"\"\n\n\nfunction track() {\n task=$1\n #label=$2\n if [ $task == \"start\" ];\n\tthen\n if [ $ongoing -eq 0 ];\n then\n START_TIME=$SECONDS;\n label=$2\n ongoing=1\n echo \"START \" $(date) >> $LOGFILE1\n echo \"LABEL \" $label >> $LOGFILE1\n echo \"START \" $(date)\n echo \"LABEL \" $label\n\n else\n echo \"Already running.\"\n fi\n elif [ $task == \"stop\" ];\n then\n if [ $ongoing -eq 1 ];\n then\n ELAPSED_TIME=$(($SECONDS - $START_TIME))\n ongoing=0\n echo \"END \" $(date) >> $LOGFILE1\n echo \"END \" $(date)\n TZ=UTC0 printf '%s: %(%H:%M:%S)T\\n' $label \"$ELAPSED_TIME\" >> $LOGFILE2\n else\n echo \"Task is not running. Nothing to stop.\"\n fi\n elif [ $task == \"status\" ];\n then\n if [ $ongoing -eq 1 ];\n then\n echo LABEL $label is currently running.\n else\n echo \"No tasks running.\"\n fi\n fi\n}\n# the elapsed time is saved in log txt file\n# use command log to check all the logged times for the tasks\nfunction log() {\nif [ ! -f \"$LOGFILE2\" ]\nthen\n echo \"No tasks recorded\"\nelse\nwhile IFS= read -r line\ndo\n echo \"$line\"\ndone < \"$LOGFILE2\"\nfi\n}\n"
},
{
"alpha_fraction": 0.5557894706726074,
"alphanum_fraction": 0.5831578969955444,
"avg_line_length": 37.513511657714844,
"blob_id": "557a69616be1544e18cfba7f5a55ea7ee5738107",
"content_id": "bab00f8af334ae3aa61179a09fbbde1892bd3e03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1425,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 37,
"path": "/in3110/assignment4/blur/blur_2.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport numpy as np\nimport cv2\nimport time\nfrom check_fileformat import format_test, failed_format_test_print\n\ndef numpy_blur(input_img, output_img):\n ''' Needs to arguments- similiar as before i.e. one input image\n and one output image '''\n if format_test(input_img) and not isinstance(imput_img, np.ndarray):\n failed_format_test_print(input_img)\n return\n else:\n #measuring runtime\n start = time.time()\n\n #reading in image\n image = cv2.imread(input_img)\n src = np.pad(image, [[1, 1],[1, 1],[0, 0]], mode='edge')\n src = src.astype(\"uint32\")\n #The method you have made works great, and it does so with the speed\n # we want, great job! Still you can do it slightly more readable.\n #When you have initialized src.astype(\"uint32\") you can initialize dst without\n # copying src, also you can calculate the average, normal blurring, in one sentence\n #like this\n dst = (src[:-2,:-2,:] + src[:-2,1:-1,:] + src[:-2, 2:,:] +\n src[1:-1,:-2,:] + src[1: -1,1:-1,:] + src[1:-1,2:,:] +\n src[2:,:-2,:] + src[2:,1:-1,:] + src[2:,2:, :])/9\n\n dst = dst.astype(\"uint8\")\n\n cv2.imwrite(output_img, dst)\n\n #ending the time count and calculating elapsed_time\n end = time.time()\n elapsed_time = end - start\n print (\"numpy time: \", elapsed_time)\n"
},
{
"alpha_fraction": 0.6811665892601013,
"alphanum_fraction": 0.6811665892601013,
"avg_line_length": 25.63157844543457,
"blob_id": "8e6cf9388967905c9802399e6c735d3880e9a17c",
"content_id": "74084a00b896c0f23e40d1d85cdbe33d9662089d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 2027,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 76,
"path": "/in1010/oblig5/Rute.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
    "text": "import java.util.*;\n\npublic abstract class Rute {\n\n\tprivate int\t\t\trad;\n\tprivate int\t\t\tkolonne;\n\n\tprivate Labyrint \tlabyrint;\n\n\tprivate Rute\t\tnord;\n\tprivate Rute\t\tsor;\n\tprivate Rute\t\tost;\n\tprivate Rute\t\tvest;\n\n\tpublic Rute(int rad, int kolonne) {\n\t\tthis.rad = rad;\n\t\tthis.kolonne = kolonne;\n\t}\n\n\tpublic void setLabyrint (Labyrint labyrint) {\n\t\tthis.labyrint = labyrint;\n\t}\n\t//Setter nabo rute til nord for denne ruten. \n\tpublic void setNord(Rute nRute) {\n\t\tthis.nord = nRute;\n\t}\n\t//Setter nabo rute til sor for denne ruten. \n\tpublic void setSor(Rute sRute) {\n\t\tthis.sor = sRute;\n\t}\n\t//Setter nabo rute til ost for denne ruten. \n\tpublic void setOst(Rute oRute) {\n\t\tthis.ost = oRute;\n\t}\n\t//Setter nabo rute til vest for denne ruten. \n\tpublic void setVest(Rute vRute) {\n\t\tthis.vest = vRute;\n\t}\n\n\tpublic int getRad() {\n\t\treturn this.rad;\n\t}\n\n\tpublic int getKolonne() {\n\t\treturn this.kolonne;\n\t}\n\n\tpublic abstract LinkedList<String> gaa(Rute caller);\n\n\t//Sjekker om rute har nord rute og at den ikke kom fra den, og går ditt eller ikke rekursivt.\n\tprotected LinkedList<String> gaaNord(Rute caller) {\n\t\treturn this.nord != null && this.nord != caller ? this.nord.gaa(this) : null;\n\t}\n\t//Sjekker om rute har sor rute og at den ikke kom fra den og går ditt eller ikke..\n\tprotected LinkedList<String> gaaSor(Rute caller) {\n\t\treturn this.sor != null && this.sor != caller ? this.sor.gaa(this) : null;\n\t}\n\t//Sjekker om rute har ost rute og at den ikke kom fra den og går ditt eller ikke..\n\tprotected LinkedList<String> gaaOst(Rute caller) {\n\t\treturn this.ost != null && this.ost != caller ? this.ost.gaa(this) : null;\n\t}\n\t//Sjekker om rute har vest rute og at den ikke kom fra den og går ditt eller ikke..\n\tprotected LinkedList<String> gaaVest(Rute caller) {\n\t\treturn this.vest != null && this.vest != caller ? this.vest.gaa(this) : null;\n\t}\n\n\tpublic abstract char tilTegn();\n\n\tpublic String getCoords() {\n\t\treturn \"(\" + this.kolonne + \", \" + this.rad + \")\";\n\t};\n\n\tpublic String toString() {\n\t\treturn \" \" + this.tilTegn() + \" \";\n\t}\n}"
},
{
"alpha_fraction": 0.7723971009254456,
"alphanum_fraction": 0.7869249582290649,
"avg_line_length": 117,
"blob_id": "c22981cf27af03751efc95a83ee58708864b34d7",
"content_id": "00ac4bdca5f61b260e113c3f7a7c747ee4ca8e5d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 840,
"license_type": "no_license",
"max_line_length": 477,
"num_lines": 7,
"path": "/in3110/README.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# IN3110 Problemløsning med høynivå-språk\n### https://www.uio.no/studier/emner/matnat/ifi/IN3110/index.html\n## Om emnet:\n>Emnet gir en innføring i mer avanserte sider ved script- og programmeringsspråket Python, bl.a. objektorientert programmering, regulære uttrykk, interaksjon >med operativsystemet, plattform-uavhengig kode, effektiv design av programsystemer med tidskritiske operasjoner, utvidelser i kompilerte språk som C/C++, >data-analyse og web-programmering. Emnet gir også en grunnleggende innføring i script-språket Bash, testing og dokumentering av kode, og \n>versjonskontrollsystem git. Spesiell vekt legges på praktisk problemløsning med et fokus på interessante og studierelevante oppgaver.\n\n[Du kan finne mer info her](https://www.uio.no/studier/emner/matnat/ifi/IN3110/index.html)\n"
},
{
"alpha_fraction": 0.6612244844436646,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 24.789474487304688,
"blob_id": "2bad2f23b44c5d74f62e88d67714e0ea4e257958",
"content_id": "0ab90c8bb0c583950535a70971ed5ee0edcbe025",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1470,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 57,
"path": "/in1010/oblig6/Operasjonsleder.java",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import java.util.ArrayList;\nimport java.util.Collections;\nimport java.io.File;\nimport java.io.PrintWriter;\nimport java.io.BufferedWriter;\nimport java.io.FileWriter;\nimport java.io.IOException;\n\npublic class Operasjonsleder extends Thread {\n\tprivate MonitorDekryptert monitor;\n\tprivate String filnavn;\n\n\tpublic Operasjonsleder(MonitorDekryptert monitor, String filnavn) {\n\t\tthis.monitor = monitor;\n\t\tthis.filnavn = filnavn;\n\t}\n\n\tpublic void skrivTilFil(String data, int kanalID) {\n\t\ttry {\n\t\t\tString filename = hentFilnavn(kanalID);\n\t\t\tPrintWriter pw = new PrintWriter(new File(filename), \"utf-8\");\n\t\t\tpw.print(data);\n\t\t\tpw.close();\n\t\t} catch(Exception e) {\n\n\t\t}\n\t}\n\n\t//Sorterer meldingene og skriver til fil\n\tpublic void run() {\n\t\tArrayList<Melding> meldinger = this.monitor.hentMeldinger();\n\t\tCollections.sort(meldinger);\n\t\tString filename = null;\n\t\tint kanalID = -1;\n\t\tPrintWriter pw = null;\n\t\tString data = \"\";\n\t\t\tfor (int i = 0; i < meldinger.size();i++) {\n\t\t\t\tif (kanalID == -1) {\n\t\t\t\t\tkanalID = meldinger.get(i).getKanalId();\n\t\t\t\t}\n\t\t\t\tdata += meldinger.get(i).toString() + \"\\n\\n\";\n\t\t\t\tif (meldinger.size() > i + 1 && meldinger.get(i+1).getKanalId() != kanalID) {\n\t\t\t\t\tthis.skrivTilFil(data, kanalID);\n\t\t\t\t\tdata = \"\";\n\t\t\t\t\tkanalID = meldinger.get(i+1).getKanalId();\n\t\t\t\t}\n\t\t\t\telse if (meldinger.size() == i + 1) {\n\t\t\t\t\tthis.skrivTilFil(data, kanalID);\n\t\t\t\t}\n\t\t\t}\n\t}\n\n\tprivate String hentFilnavn(int kanalId) {\n\t\treturn filnavn + \"-kanal-\" + kanalId + \".txt\";\n\t}\n\n}\n"
},
{
"alpha_fraction": 0.5844686627388,
"alphanum_fraction": 0.5923024415969849,
"avg_line_length": 28.656564712524414,
"blob_id": "df249741fdc9a51973bc32970d07b1deac37fa99",
"content_id": "a74411139e4baddc2e64880a7d098a4fb389a8c5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2940,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 99,
"path": "/in3110/assignment5/5.2/highlighter.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import sys\nimport re\n\n\ndef main():\n # Read files and starts the program\n syntax_dict = create_dictionary(open(sys.argv[1], \"r\", encoding=\"utf8\"))\n theme_dict = create_dictionary(open(sys.argv[2], \"r\", encoding=\"utf8\"))\n source_file = open(sys.argv[3], \"r\", encoding=\"utf8\")\n\n coloured_source_file = scanner(source_file, syntax_dict, theme_dict)\n\n # Prints the coloured file to the terminal\n for i in coloured_source_file:\n print(i[:-1])\n\n\ndef color_print(text, style):\n \"\"\"\n This functions styles the given source text with a style of a choice.\n\n Args:\n text (str): text to style\n style (int): int of a choice that corresponds to a desired style\n\n Returns:\n styled (str): text with added attributes\n\n \"\"\"\n # print(\" asd \" + text)\n formatting = \"\\033[{}m\".format(style)\n # The “\\033[0m” sequence removes all attributes (formatting and colors)\n reset = \"\\033[0m\"\n styled = formatting + text + reset\n # print(styled[:-1])\n return styled\n\n\ndef scanner(text, syntax_file, theme_file):\n \"\"\"\n Looping through source file(text) to find keywords to color.\n Looping through syntax file to get info on which words to color and then\n using the previously written color_print function to change the colors of\n the found keywords. 
Used the built-in func replace() to replace the found\n keywords with coloured keywords.\n\n Args:\n text (str): text to style.\n syntax_file(dict): where key is regex and value a name for it.\n theme_file (dict): where key is corresponding name from the syntax file\n and value is a bash color.\n Returnes:\n output (list): list with added attributes\n \"\"\"\n output = []\n\n for line in text:\n coloured_line = line\n for key in syntax_file:\n # print(key)\n found_syntaxes = re.findall(key[1:-1], line)\n if len(found_syntaxes) > 0:\n # We have found one or more syntaxes!\n for i in found_syntaxes:\n # Colour all the found syntaxes\n # print(i)\n new_text = color_print(\n i, theme_file.get(syntax_file.get(key))[2:])\n coloured_line = coloured_line.replace(i, new_text)\n # print(coloured_line)\n # We are done with this line; adding the line to the output\n output.append(coloured_line)\n\n return output\n\n\ndef create_dictionary(file):\n \"\"\"\n Creates a dictionary out of an input file.\n\n Args:\n file: just a file\n\n Returns:\n file_dict: input file changed into a dictionary\n \"\"\"\n file_dict = {}\n with file as f:\n for line in f:\n key, val = line.strip().split(': ')\n file_dict[key.strip()] = val.strip()\n\n # print (file_dict)\n file.close()\n return file_dict\n\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.7115570306777954,
"alphanum_fraction": 0.730850875377655,
"avg_line_length": 49.813724517822266,
"blob_id": "cc209d4935639193f2be8cbdaaa85488d0adef08",
"content_id": "70ab9f256179c74307de2d746e8e030a7957e952",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 5232,
"license_type": "no_license",
"max_line_length": 440,
"num_lines": 102,
"path": "/in2110/in2110-lab-master/in2110-lab-master/gruppetimer/02-oblig-v2019/readme.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# Oblig V2019: Ordvektorer\nDenne obligen fra 2019 går mer i dybden på hvordan man implementerer vektorromrepresentasjoner. Løsningsforslaget gir blant annet eksempler på hvordan man kan beregne euclidian distance, cosine similarity og implementere KNN. \n\n## Oppgave 1\nDataene som benyttes i obligen er et ferdig tokenisert aviskorpus som skal prosesseres videre i oppgave 1a.\nFor å representere ord som vektorer benytter vi et \"kontekstvindu\", det vil si at vi ser på et visst antall ord før og etter ordet. Dette implementeres i 1b. \n\n\n### 1c: Telling\nFordi vektorene vil inneholde et stort antall nuller, implementerer vi co-occurence-matrisen som en \"sparse matrix\" i form av en dictionary av dictionaries. Fit og transform fungerer her noe annerledes enn de gjør for dokumentfevktorer:\n<br><br><b>Dokumentvektorer</b><br>\n<i>Fit:</i> Oppretter vokabularet - velger hvilke ord som skal være i matrisene basert på hva som finnes i treningsdataene.<br>\n<i>Transform:</i> Tar inn ett eller flere dokumenter, teller ordfrekvenser og returnerer dokumentene som vektorer. Den returnerer en document-term matrix som viser frekvensene ordene i vokabularet har i hvert dokument.\n\n<b>Ordvektorer</b><br>\n<i>Fit:</i> oppretter matrise (co-occurence-matrix) og fyller inn frekvenser for ord i treningsdataene<br>\n<i>Transform:</i> Tar inn ett eller flere ord og returnerer dem som vektorer ved å hente de ut av matrisen. Her gir det altså ikke mening å bruke ord som modellen ikke har \"sett\".\n\n\n## Oppgave 2\n\n### 2a: Vektorlengde\n```python\ndef vector_norm(self, word):\n vec = self.matrix[word]\n return math.sqrt(sum(v**2 for v in vec.values()))\n```\nFor å finne lengen på en vektor finner vi kvadratroten av summen til kvadratet av alle dimensjonene.\nDet vil si at vi bruker pythagoras læresetning. For å forstå hvordan den kan brukes i flere enn to dimensjoner kan man tenke seg hvordan det foregår i en tredimensjonal boks. 
Man regner da først ut lengden på diagonalen i grunnflaten. Ved å anvende pythagoras læresetning rekursivt kan vi da finne lengden på den tredimensjonale vektoren. Vi regner med andre ord ut lengden av hypotenusen til en ny trekant som står diagonalt inne i boksen:\n\n![3d norm](3d_norm.png)\n\nSe også:\n[https://www.uio.no/studier/emner/matnat/ifi/IN2110/v19/foiler/05_ordvektorer.pdf](https://www.uio.no/studier/emner/matnat/ifi/IN2110/v19/foiler/05_ordvektorer.pdf)\n\n\n### 2b Avstand (Euclidean distance)\n```python\ndef euclidean_distance(self, w1, w2):\n vec1, vec2 = self.transform((w1, w2))\n \n # Save computation by only computing values for features that\n # are active in one of the vectors.\n union = vec1.keys() | vec2.keys()\n \n return math.sqrt(sum((vec1[key] - vec2[key])**2\n for key in union))\n```\nHer regner vi på en måte ut lengden på en vektor mellom \"endene\" av de to ordvektorene.<br>\nSe også: [https://www.uio.no/studier/emner/matnat/ifi/IN2110/v19/foiler/05_ordvektorer.pdf](https://www.uio.no/studier/emner/matnat/ifi/IN2110/v19/foiler/05_ordvektorer.pdf)\n\n\n### 2c Likhet (cosine similarity)\n```python\ndef cosine_similarity(self, w1, w2):\n \n vec1, vec2 = self.transform((w1, w2))\n \n # Save computatoin by only computing values for features that\n # are active in both vectors\n intersect = set(vec1.keys()).intersection(vec2.keys())\n\n dot_product = sum(vec1[key]*vec2[key] for key in intersect)\n\n # Return dot product for normalized vectors\n if self.is_normalized:\n return dot_product\n else:\n return dot_product / (self.vector_norm(w1)*self.vector_norm(w2))\n```\nFor å finne cosine similarity deler vi prikkproduktet med produktet av lengden på vektorene. 
Prikkproduktet kan forstås som lengden på en ny vektor langs den ene av vektorene som danner en rettvinklet trekant med den andre vektoren.<br>\nSe også: [https://www.uio.no/studier/emner/matnat/ifi/IN2110/v19/foiler/05_ordvektorer.pdf](https://www.uio.no/studier/emner/matnat/ifi/IN2110/v19/foiler/05_ordvektorer.pdf)\n\n### 2d Normalisering\n```python\ndef normalize_vectors(self):\n for w, vec in self.matrix.items():\n norm = self.vector_norm(w)\n\n for key in vec:\n vec[key] /= norm\n\n self.is_normalized = True\n```\nVi normaliserer ved å dele ordrekvensene på vektorlengden. Verdiene vil da ligge mellom 0 og 1.\nMed normaliserte vektorer blir cosine similarity det samme som prikkproduktet. For å visualisere det:<br>\n![cos](cos.png)\n<br>Her kan man se at lengden på r nærmer seg lengden av q hvis vinkelen mellom p og q blir mindre. Siden vi har å gjøre med normaliserte vektorer (enhetsvektorer), vil denne lengden maksimaøt være 1. Hvis vinkelen mellom p og q er 90 grader, vil lengden på r bli 0.\n\n### 3 Naboskap (knn)\n```python\ndef nearest_neighbors(self, w, k = 5):\n if w in self.vocab:\n others = []\n \n for other in self.matrix:\n if other != w:\n others.append((other, self.cosine_similarity(w, other))) \n \n return sorted(others, key = lambda x : -x[1])[:k]\n```\nHer returnerer vi de k ordene med størst cosine similarity til ordet w. Vi passer også på å ikke regne med likheten ordet har med seg selv.\n"
},
{
"alpha_fraction": 0.6465346813201904,
"alphanum_fraction": 0.6633663177490234,
"avg_line_length": 26.545454025268555,
"blob_id": "30ffe3d9a1aaf0c648194508875953be493afcb4",
"content_id": "92946ceef79f906574b03af1cbcdf423fa83f545",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3034,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 110,
"path": "/in2110/in2110-lab-master/in2110-lab-master/ekstra/oppg4a.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nimport nltk\nimport sys\nfrom nltk.corpus import brown\nfrom nltk.probability import ConditionalFreqDist, ConditionalProbDist, MLEProbDist\n\n#########\n# Oppgave 1:\n\n#conditional frequency distribution\n#tar inn en liste av par, der første elementet blir betingelsen.\n\n#en dictionary der hver tagg er en nøkkel og hver verdi er en dictionary på formen {ord:frekvens}. Feks {NN:{bil:2,tre:3,gutt:4,..}, VB:{skyte:1,løpe:3,..}}\n\n\n# # #######\n# # # emission probabilities:\n# # # P(w_i | t_i)\n\nwith open(\"norne.txt\", encoding=\"utf8\") as f:\n mylist = [nltk.tag.str2tuple(t, sep='_') for t in f.read().split()]\n\n#brown_ord_tagger = brown.tagged_words()\nbrown_tagger_ord = [ (tag, word) for (word, tag) in mylist ]\n\n\n# # frekvensdistribusjon\ncfd_tagger_ord = nltk.ConditionalFreqDist(brown_tagger_ord)\n\n# max_nn = cfd_tagger_ord['NN'].most_common()[0]\n# f_nn = cfd_tagger_ord['NN']['linguist']\n# max_jj = cfd_tagger_ord['JJ'].most_common()[0]\n#\n# print(\"Oppgave 1:\")\n# print(\"1.2 (a) Mest frekvente substantivet:\", max_nn)\n# print(\"1.2 (b) Antall forekomster av 'linguist':\", f_nn)\n# print(\"1.2 (c) Mest frekvente adjektivet:\", max_jj)\n\n\n# # sannsynlighetsdistribusjon\ncpd_tagger_ord = nltk.ConditionalProbDist(cfd_tagger_ord, nltk.MLEProbDist)\n#print(\"Mest sannsynlige adjektivet:\", cpd_tagger_ord['JJ'].max())\n#print(\"Sannsynligheten for new gitt JJ:\", cpd_tagger_ord['JJ'].prob(\"new\"))\n\n\n# # ########\n# # # transition probabilities:\n# # # P(t_i | t_{i-1})\n\n# # bigram over tagger\nbrown_tagger = [tag for (word, tag) in mylist]\nbrown_tagg_par = nltk.bigrams(brown_tagger)\n\n\n# # frekvensdistribusjon over bigrammene\ncfd_tagger= nltk.ConditionalFreqDist(brown_tagg_par)\nprint(\"BLAAAAAAAA\",cfd_tagger['NOUN'])\n\n#print(cfd_tagger.conditions())\nmax_etter_nn = cfd_tagger['NOUN'].most_common()[0]\nf_dtnn = cfd_tagger['ADJ']['NOUN']\n\nprint(\"1.5 (a) Taggen som oftest etterfolger NN er:\", 
max_etter_nn)\nprint(\"1.5 (b) Antall forekomster av bigrammet 'DT NN' er:\", f_dtnn)\n\n# # sannsynlighetsdistribusjon over taggbigram\ncpd_tagger = nltk.ConditionalProbDist(cfd_tagger, nltk.MLEProbDist)\n\nprint(\"1.6 Sannsynligheten for NOUN gitt ADJ er:\", cpd_tagger[\"ADJ\"].prob(\"NOUN\"))\n\n# # ############\n# # Oppgave 2\n\n# # # #emission probabilities\nw1a = cpd_tagger_ord['NOUN'].prob(\"taler\")\nw2a = cpd_tagger_ord['VERB'].prob(\"taler\")\nw3a = cpd_tagger_ord['ADJ'].prob(\"flere\")\n\nprint(\"Oppgave 4b):\")\nprint(\"P(taler|NOUN):\", w1a)\nprint(\"P(taler|VERB):\", w2a)\nprint(\"P(flere|ADJ):\", w3a)\n\n# w1b = cpd_tagger_ord['PPO'].prob(\"her\")\n# w2b = cpd_tagger_ord['VB'].prob(\"duck\")\n#\n# print(\"P(her|PPO):\", w1b)\n# print(\"P(duck|VB):\", w2b)\n\n\n# # # #transition probabilities\nt1a = cpd_tagger[\"ADJ\"].prob(\"NOUN\")\nt2a = cpd_tagger[\"ADJ\"].prob(\"VERB\")\n\nprint(\"P(NOUN|ADJ):\", t1a)\nprint(\"P(VERB|ADJ):\", t2a)\n\n# t1b = cpd_tagger[\"VBD\"].prob(\"PPO\")\n# t2b = cpd_tagger[\"PPO\"].prob(\"VB\")\n#\n# print(\"P(PPO|VBD):\", t1b)\n# print(\"P(VB|PPO):\", t2b)\n\n#Oppgave 4c)\nfirst = w1a * t1a\nsecond = w2a * t2a\n\nprint(\" SVARET First: \", first)\nprint(\"Second: \", second)\n"
},
{
"alpha_fraction": 0.5553580522537231,
"alphanum_fraction": 0.5616309642791748,
"avg_line_length": 40.16379165649414,
"blob_id": "dd869307a048bda692492dbf8a6dda5a086a179d",
"content_id": "8f63a64574b7a08842fe9f738eb7546a3732e0bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9580,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 232,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/1b/ner_løsning.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import re\n\nclass NamedEntityRecogniser:\n \"\"\"Gjenkjenning av navngitte enheter ved bruk av HMM\"\"\"\n \n def __init__(self):\n \"\"\"Intialiserer alle variablene som er nødvendig for å representere og \n estimere sekvensmodellen (en Hidden Markov Model) som brukes til å \n gjenkjenne de navngitte enhetene\"\"\"\n \n # alle labellene som forekommer i treningsettet\n self.labels = set()\n\n # alle token som forekommer i treningsettet\n self.vocab = set()\n\n # hvor mange ganger en label (f.eks. B-ORG) forekommer i treningsettet\n self.label_counts = {} \n\n # hvor mange overgang fra label_1 til label2 forekommer i treningsettet\n self.transition_counts = {}\n \n # hvor mange \"utslipp\" fra label til token forekommer i treningsettet\n # (Merk at vi legger et spesielt symbol for ord som aldri forekommer\n # i treningsettet, men kan forekomme i testsettet)\n self.emission_counts = {(\"O\", \"<UNK>\"):1}\n \n # Sansynnlighet P(label_2 | label_1)\n self.transition_probs = {}\n \n # Sansynnlighet P(token | label)\n self.emission_probs = {}\n \n \n def fit(self, tagged_text):\n \"\"\"Estimerer tallene og sansynnlighetene for HMM, basert på (tokenisert)\n tekst hvor navngitte enhetene er markert med XML tags (se norne.txt)\"\"\"\n \n # Ekstrahere setninger og navngitte enheter markert i hver setning\n sentences, all_spans = preprocess(tagged_text)\n \n for sentence, spans in zip(sentences, all_spans):\n \n # Ekstrahere labelsekvenser, med BIO (også kalt IOB) marking\n label_sequence = get_BIO_sequence(spans, len(sentence))\n \n # Oppdatere tallene \n self._add_counts(sentence, label_sequence)\n \n # Beregne sansynnlighetene (transition og emission) ut fra tallene\n self._fill_probs()\n \n \n def _add_counts(self, sentence, label_sequence):\n \"\"\"Oppdaterer variablene self.vocab, self.labels, self.label_counts, \n self.transition_counts og self.emission_counts, basert på setningen og \n sekvenslabellen assosiert med dem. 
\n Merk at setningen og label_sequence har samme lengde.\"\"\"\n \n self.vocab.update(sentence)\n self.labels.update(label_sequence)\n \n for label1, label2 in zip([\"<s>\"] + label_sequence, label_sequence + [\"</s>\"]):\n self.label_counts[label1] = self.label_counts.get(label1, 0) + 1\n self.transition_counts[(label1, label2)] = self.transition_counts.get((label1, label2), 0) +1\n \n for label, token in zip(label_sequence, sentence):\n self.emission_counts[(label, token)] = self.emission_counts.get((label, token), 0) +1\n \n \n def _fill_probs(self, alpha_smoothing=1E-6):\n \"\"\"Beregne sannsynlihetsfordelinger self.transition_probs og\n self.emission_probs basert på tallene som er samlet inn i \n self.label_counts, self.transition_counts og self.emission_counts.\n \n Når det gjeler self.emission_probs bør vi legge Laplace smoothing, med en\n verdi for alpha som er alpha_smoothing.\"\"\"\n \n for (start, end), count in self.transition_counts.items():\n label_count = self.label_counts[start]\n self.transition_probs[(start, end)] = count / label_count\n\n # Her bruker vi Laplace smoothing\n for label in self.labels:\n for token in self.vocab:\n self.emission_counts[(label, token)] = self.emission_counts.get((label, token), 0) + alpha_smoothing\n\n for (label, token), count in self.emission_counts.items():\n label_count = self.label_counts[label]\n self.emission_probs[(label, token)] = count / (label_count + alpha_smoothing*len(self.vocab))\n \n \n def _viterbi(self, sentence):\n \"\"\"Kjører Viterbi-algoritmen på setningen (liste over tokens), og\n returnerer to outputs: \n 1) en labelsekvens (som har samme lengde som setningen)\n 2) sansynnlighet for hele sekvensen \"\"\"\n\n lattice = [{} for _ in range(len(sentence))]\n backpointers = [{} for _ in range(len(sentence))]\n \n for i, token in enumerate(sentence):\n for label in self.labels:\n if i == 0:\n lattice[i][label] = (self.transition_probs.get((\"<s>\", label), 0) \n * self.emission_probs.get((label, 
token), 0))\n else:\n path_probs = {prev_label:self.transition_probs.get((prev_label, label), 0) \n * self.emission_probs.get((label, token), 0) \n * lattice[i-1][prev_label] \n for prev_label in lattice[i-1]}\n \n best_path = max(path_probs.keys(), key=lambda x: path_probs[x])\n best_prob = path_probs[best_path]\n if best_prob > 0.0:\n lattice[i][label] = best_prob\n backpointers[i][label] = best_path\n \n best_final_label = max(lattice[-1].keys(), key=lambda x: lattice[-1][x])\n best_final_prob = lattice[-1][best_final_label]\n \n best_path = [best_final_label]\n for i in range(i,0,-1):\n best_path.insert(0, backpointers[i][best_path[0]])\n \n return best_path, best_final_prob\n \n \n def label(self, text):\n \"\"\"Gitt en tokenisert tekst, finner ut navngitte enheter og markere disse\n med XML tags. \"\"\"\n sentences, _ = preprocess(text)\n spans = []\n for sentence in sentences:\n sentence = [token if token in self.vocab else \"<UNK>\" for token in sentence]\n label_sequence, _ = self._viterbi(sentence)\n spans.append(get_spans(label_sequence))\n \n return postprocess(sentences, spans)\n \n\n \ndef get_BIO_sequence(spans, sentence_length):\n \"\"\"Gitt en liste over \"spans\", representert som tuples (start, end, tag),\n og en setningslengde, produserer en sekvens med BIO (også kalt IOB) labeller\n for setningen. \n Eksempel: hvis spans=[(1,3,'ORG')] og sentence_length=6 bør resultatet være\n ['O', 'B-ORG', 'I-ORG', 'O', 'O', 'O']\"\"\"\n \n sequence = [\"O\"]*sentence_length\n for start, end, tag in spans:\n sequence[start] = \"B-%s\"%tag\n for i in range(start+1, end):\n sequence[i] = \"I-%s\"%tag\n return sequence\n \n\ndef get_spans(label_sequence):\n \"\"\"Gitt en labelsekvens med BIO markering, returner en lister over \"spans\" med \n navngitte enheter. 
Metoden er altså den motsatte av get_BIO_sequence\"\"\"\n \n spans = [] \n i = 0\n while i < len(label_sequence):\n label = label_sequence[i]\n if label.startswith(\"B-\"):\n start = i\n label = label[2:]\n end = start + 1\n while end < len(label_sequence) and label_sequence[end].startswith(\"I-%s\"%label):\n end += 1\n spans.append((start, end, label))\n i = end\n else:\n i += 1\n return spans\n\n\ndef preprocess(tagged_text):\n \"\"\"Tar en tokenisert tekst med XML tags (som f.eks. <ORG>Stortinget</ORG>) og\n returnerer en liste over setninger (som selv er lister over tokens), sammen med\n en liste av samme lengde som inneholder de markerte navngitte enhetene. \"\"\"\n \n sentences = []\n spans = []\n \n for i, line in enumerate(tagged_text.split(\"\\n\")):\n\n tokens = []\n spans_in_sentence = []\n \n for j, token in enumerate(line.split(\" \")):\n \n # Hvis token starter med en XML tag\n start_match = re.match(\"<(\\w+?)>\", token)\n if start_match:\n new_span = (j, None, start_match.group(1))\n spans_in_sentence.append(new_span)\n token = token[start_match.end(0):]\n \n # Hvis token slutter med en XML tag\n end_match = re.match(\"(.+)</(\\w+?)>$\", token)\n if end_match:\n if not spans_in_sentence or spans_in_sentence[-1][1]!=None:\n raise RuntimeError(\"Closing tag without corresponding open tag\")\n start, _ , tag = spans_in_sentence[-1]\n if tag != end_match.group(2):\n raise RuntimeError(\"Closing tag does not correspond to open tag\")\n token = token[:end_match.end(1)]\n spans_in_sentence[-1] = (start, j+1, tag)\n \n tokens.append(token)\n \n sentences.append(tokens)\n spans.append(spans_in_sentence)\n \n return sentences, spans\n\n\ndef postprocess(sentences, spans):\n \"\"\"Gitt en liste over setninger og en tilsvarende liste over \"spans\" med\n navngitte enheter, produserer en tekst med XML markering.\"\"\"\n \n tagged_sentences = []\n for i, sentence in enumerate(sentences):\n new_sentence = list(sentence)\n for start, end, tag in spans[i]:\n 
new_sentence[start] = \"<%s>%s\"%(tag, new_sentence[start])\n new_sentence[end-1] = \"%s</%s>\"%(new_sentence[end-1], tag)\n tagged_sentences.append(\" \".join(new_sentence))\n \n return \"\\n\".join(tagged_sentences)\n\n \n \n"
},
{
"alpha_fraction": 0.5758082270622253,
"alphanum_fraction": 0.5942028760910034,
"avg_line_length": 35.61224365234375,
"blob_id": "edc7da0da59f9ab65ef4e05f04b937b198f46ef9",
"content_id": "539d7d7cf178013b350b937a0d1b58364124d411",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1794,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 49,
"path": "/in3110/assignment4/blur/blur_1.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n#importing stuff\nimport numpy as np\nimport cv2\nimport time\n#imported separated method for checking input-format\nfrom check_fileformat import format_test, failed_format_test_print\n\ndef blur_matrix(image):\n '''Takes an image as input; then returns blurred image without padding'''\n\n src = np.pad(image, [[1, 1],[1, 1],[0, 0]], mode='edge')\n src = src.astype(\"uint32\")\n dst = np.zeros(image.shape)\n\n for h in range(1, dst.shape[0]-1):\n for w in range(1, dst.shape[1]-1):\n for c in range(0, dst.shape[2]):\n dst [h,w,c] = (src [h,w,c] + src[h -1,w,c] + src[h+1,w,c]\n + src[h,w -1,c] + src[h,w+1,c]\n + src[h -1,w -1,c] + src[h -1,w+1,c]\n + src[h+1,w -1,c] + src[h+1,w+1,c]) /9\n dst = dst.astype(\"uint8\")\n return dst\n\n#The method regular blue(input,inpyt) can take a jpg image, run blur_matrix, make an outfil and time it.\n#The method seems to be working great and splitting different tasks to different methods makes the\n# code more readable, awesome job!\n\n'''Take an image as input and a <string>.jpg as output. '''\ndef regular_blur(input_img, output_img):\n #checks inputformat\n if format_test(input_img) and not isinstance(imput_img, np.ndarray):\n failed_format_test_print(input_img)\n return\n else:\n #startint calculating the time\n start = time.time()\n\n #reading in image\n image = cv2.imread(input_img)\n blurred_image = blur_matrix(image)\n\n cv2.imwrite(output_img, blurred_image)\n\n #ending the time count and calculating elapsed_time\n end = time.time()\n elapsed_time = end - start\n print (\"python time: \", elapsed_time)\n"
},
{
"alpha_fraction": 0.690507709980011,
"alphanum_fraction": 0.6966887712478638,
"avg_line_length": 24.438201904296875,
"blob_id": "029b6f115717be7277e31524f457bad3a4fb123c",
"content_id": "8cb2bf2bcb41b00e6bd41f8482fdae1d657867dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2266,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 89,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/2a/in2110-oblig-2a-solution.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import spacy\nfrom in2110.conllu import ConlluDoc\n\nMODEL=\"models/model-best\"\nUD_TRAIN=\"no_bokmaal-ud-train.conllu\"\nUD_DEV=\"no_bokmaal-ud-dev.conllu\"\nNN_DEV=\"no_nynorsk-ud-dev.conllu\"\nNNLIA_DEV=\"no_nynorsklia-ud-dev.conllu\"\n\n\n\ndef attachment_score(true, pred):\n \"\"\"\nIterates over sentences and tokens in each sentence and compares their head index (head.i) \nto compute UAS and in addition the deprel to compute LAS.\nAddition: spaCy introduces its own root label (ROOT) which does not match gold, lowercasing fixes this \n\n \"\"\"\n\n las = 0\n uas = 0\n total = 0\n\n for s1, s2 in zip(true, pred):\n for true_token,pred_token in zip(s1, s2):\n if true_token.head.i == pred_token.head.i:\n uas += 1\n if true_token.dep_.lower() == pred_token.dep_.lower():\n las += 1\n\n total += 1\n\n return uas/total, las/total\n\n\ndef parse(model, docs):\n \"\"\"\nIterates over sentences in the document and parses each sentence using model\n \"\"\"\n for doc in docs:\n model.parser(doc)\n\n\n# Load conllu files\nconllu_dev = ConlluDoc.from_file(UD_DEV)\nconllu_nn_dev = ConlluDoc.from_file(NN_DEV)\nconllu_nnlia_dev = ConlluDoc.from_file(NNLIA_DEV)\n\n\n# Load spacy model for Norwegian\nnb = spacy.load(MODEL)\n\n\n# Convert conllu data to spacy format\ndev_docs_labeled = conllu_dev.to_spacy(nb)\ndev_docs_unlabeled = conllu_dev.to_spacy(nb, keep_labels=False)\n\n\n# Load nynorsk\ndev_docs_nn_labeled = conllu_nn_dev.to_spacy(nb)\ndev_docs_nn_unlabeled = conllu_nn_dev.to_spacy(nb, keep_labels=False)\n\n# Load NynorskLIA\ndev_docs_nnlia_labeled = conllu_nnlia_dev.to_spacy(nb)\ndev_docs_nnlia_unlabeled = conllu_nnlia_dev.to_spacy(nb, keep_labels=False)\n\n# Parse bokmål\nparse(nb, dev_docs_unlabeled)\n\n# Parse nynorsk\nparse(nb, dev_docs_nn_unlabeled)\n\n# Parse nynorskLIA\nparse(nb, dev_docs_nnlia_unlabeled)\n\n# Evaluate\nuas, las = attachment_score(dev_docs_labeled, dev_docs_unlabeled)\nnnuas, nnlas = attachment_score(dev_docs_nn_labeled, 
dev_docs_nn_unlabeled)\nnnliauas, nnlialas = attachment_score(dev_docs_nnlia_labeled, dev_docs_nnlia_unlabeled)\n\n\nprint(\"UAS:\",uas)\nprint(\"LAS:\",las)\n\nprint(\"UAS-nynorsk:\",nnuas)\nprint(\"LAS-nynorsk:\",nnlas)\n\nprint(\"UAS-nynorskLIA:\",nnliauas)\nprint(\"LAS-nynorskLIA:\",nnlialas)\n\n"
},
{
"alpha_fraction": 0.6573770642280579,
"alphanum_fraction": 0.6819671988487244,
"avg_line_length": 28.047618865966797,
"blob_id": "26167f0ec46a46bbbc858162c3213aa97cb6de0e",
"content_id": "5c45a3c94f7f3ca4489c9650156a2cf3241f8fb6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 610,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 21,
"path": "/in3110/assignment4/test_blur.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport numpy as np\nimport cv2\nimport random\nfrom blur.blur_3 import numba_blur\n\ndef test_one():\n #random seed makes it so that random array is equal every time\n seed = random.seed(32)\n random_image = np.random.randint(0, 256, size=(100, 100, 3))\n #get highest value from array\n result = np.amax(random_image)\n #blur to make an item in array the average of its neighbours\n blurred_array = numba_blur(random_image)\n #highest value should now be lower than before\n excp = np.amax(blurred_array)\n assert excp < result\n\n\nif __name__ == '__main__':\n test_one()\n"
},
{
"alpha_fraction": 0.5865724086761475,
"alphanum_fraction": 0.5867137908935547,
"avg_line_length": 41.62650680541992,
"blob_id": "32464f95ec7720ffd39e3a28800acb271f915ad4",
"content_id": "35c7ac567e937fea7a5153a3af4a3ffe86ed0147",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7075,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 166,
"path": "/in2110/in2110-lab-master/in2110-lab-master/ekstra/fsm.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "import re\n\nclass FiniteStateManager:\n \"\"\"Simple finite-state automaton for dialogue management:\n - each state has a unique identifier and is associated with a system response\n - each edge represents a transition between states, and is associated with a regular expression.\n \n \"\"\"\n \n def __init__(self):\n self.start_state = None\n self.final_states = []\n self.states = {}\n self.edges = {}\n \n def add_state(self, state_id, system_output, is_start=False, is_final=False):\n \"\"\"Adds a new state with a unique string identifier and a system output (as a string).\n The flags is_start/is_final indicates whether the state is a start or final state\"\"\"\n \n if state_id in self.states:\n raise RuntimeError(state_id + \" is already present in the automaton\")\n \n self.states[state_id] = system_output\n if is_start:\n self.start_state = state_id\n if is_final:\n self.final_states.append(state_id)\n \n def add_edge(self, from_state_id, to_state_id, regexp):\n \"\"\"Adds a new edge between two states (which must be present in the automaton). 
Each\n edge is associated with a regular expression (as a string) that specifies the pattern\n that must be matched (on the user input) in order to traverse the edge.\n \n If the regexp is None, the edge is assumed to be empty and will be automatically traversed.\"\"\"\n \n if from_state_id not in self.states:\n raise RuntimeError(from_state_id + \" not in list of possible states\")\n if to_state_id not in self.states:\n raise RuntimeError(to_state_id + \" not in list of possible states\")\n \n self.edges[from_state_id] = self.edges.get(from_state_id, []) + [(to_state_id, regexp)]\n \n def start_dialogue(self):\n \"\"\"Starts a new dialogue\"\"\"\n \n if self.start_state is None:\n raise RuntimeError(\"No start state has been defined\")\n elif len(self.final_states)==0:\n raise RuntimeError(\"No final state(s) has been defined\")\n \n return Interaction(self)\n \n \n def to_smcat(self):\n \"\"\"Return the finite-state automaton in the SMCAT text format such that it can be \n easily rendered on https://state-machine-cat.js.org/\"\"\"\n \n state_lines = []\n for state, action in self.states.items():\n state_lines.append(\"\\\"[%s]\\\": \\\" System: '%s' \\\"\"%(state, action))\n \n smcat = \",\\n\".join(state_lines) + \";\\n\\n\"\n \n if self.start_state:\n smcat += \"initial => \\\"[%s]\\\";\\n\"%self.start_state\n\n edge_lines = []\n for from_state_id, edges_from_state in self.edges.items():\n for to_state_id, regexp in edges_from_state:\n edge_val = \": \\\"User: '%s'\\\"\"%regexp if regexp is not None else \"\"\n edge_lines.append(\"\\\"[%s]\\\" => \\\"[%s]\\\" %s;\"%(from_state_id, to_state_id, edge_val))\n smcat += \"\\n\".join(edge_lines) + \"\\n\"\n \n for final_state in self.final_states:\n smcat += \"\\\"[%s]\\\" => final;\\n\"%final_state\n \n return smcat\n \n \n \nclass Interaction:\n \"\"\"Interaction based on a finite-state automaton. 
The object comprises the automaton itself\n along with a reference to the current dialogue state.\n \n The finite-state automaton is traversed until a final state is reached. In a given state,\n we look at all edges going out of the current state, and check whether the pattern \n (regular expression) is matched on the user input. If that is the case, we traverse the \n edge, update the current dialogue state, and produce the corresponding system output.\n \n Note that the patterns are evaluated in the order of insertion of the edges. This means\n that generic \"catch-all\" patterns should be inserted last.\n \"\"\"\n \n def __init__(self, fsm):\n self.fsm = fsm\n self.current_state = self.fsm.start_state\n \n # We produce a system response\n print(\"System: %s\"%self.fsm.states[self.current_state])\n # We traverse subsequent empty edges\n self._traverse_empty()\n \n # We continue the dialogue until we reach a final state\n while self.current_state not in self.fsm.final_states:\n \n # We wait for the user to provide an input\n user_input = input(\"User:\")\n \n # And we react to it by moving through the automaton\n self._react(user_input)\n \n \n def _react(self, user_input):\n \"\"\"React to a given user input by looking at possible edges to traverse (based on\n whether their patterns are matched or not). 
If a pattern is matched, traverse the edge,\n update the current dialogue state and produce the response.\"\"\"\n \n # We loop on each edge (in order)\n for to_state_id, regexp in self.fsm.edges.get(self.current_state,[]):\n \n # We check whether the pattern is satisfied or not\n if re.search(regexp, user_input, flags=re.I):\n \n # If yes, we move to the next state\n self.current_state = to_state_id\n \n # We print the associated system response\n system_output = self.fsm.states[self.current_state]\n print(\"System: %s\"%system_output)\n \n # And we traverse subsequent empty edges\n self._traverse_empty()\n \n return\n \n raise RuntimeError(\"Could not find valid transition from state %s\"%self.current_state)\n \n \n def _traverse_empty(self):\n \"\"\"Traverse empty edges (that is, edges associated with a None regexp).\"\"\"\n \n for to_state_id, regexp in self.fsm.edges.get(self.current_state,[]):\n if regexp is None:\n self.current_state = to_state_id\n system_output = self.fsm.states[self.current_state]\n print(\"System: %s\"%system_output)\n self._traverse_empty()\n \n\n \n# Simple example (adapted from the one in the course slides)\nif __name__ == \"__main__\": \n \n manager = FiniteStateManager()\n manager.add_state(\"start\", \"What do you want?\", is_start=True)\n manager.add_state(\"offer_apples\", \"OK, here are your apples!\")\n manager.add_state(\"offer_oranges\", \"OK, her are your oranges!\")\n manager.add_state(\"not_understood\", \"Sorry, I did not understand\")\n manager.add_state(\"end\", \"Goodbye!\", is_final=True)\n\n manager.add_edge(\"start\", \"offer_apples\", r\"\\bapples?\\b\")\n manager.add_edge(\"start\", \"offer_oranges\", r\"\\boranges?\\b\")\n manager.add_edge(\"start\", \"not_understood\", r\".+\")\n manager.add_edge(\"not_understood\", \"start\", None)\n manager.add_edge(\"offer_apples\", \"end\", None)\n manager.add_edge(\"offer_oranges\", \"end\", None)"
},
{
"alpha_fraction": 0.7056856155395508,
"alphanum_fraction": 0.7083612084388733,
"avg_line_length": 42.97058868408203,
"blob_id": "bd20d645a93064141edd3df4e65c3a7153da50d2",
"content_id": "c4e74f053cac33dee26512bb9332a8956c23fd3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1495,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 34,
"path": "/in3110/assignment4/blur/blur.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport argparse\nfrom blur.blur_1 import regular_blur\nfrom blur.blur_2 import numpy_blur\nfrom blur.blur_3 import numba_blur\n#import cv2\n'''Take a <your_image>.jpg and a <your_requested_outputfile>.jpg as input'''\n'''Takes also a selected method by typing -p, -np or -n as a third argument'''\n#I didnt quit get the argument choices, regular_blur, numpy_blur, numba_blur to work\n#so I implementet this jusing a add_mutually_exclusive_group from the argparse library\n#Writing from terminal it is crucial to NOT use \" \" around input arguments.\nparser = argparse.ArgumentParser(description = \"User interface for image blur\")\nparser.add_argument(\"input_img\", help = \"Input image\")\nparser.add_argument(\"output_img\", help = \"Output image\")\n#mutually exclusice group, lets user have access\n#to only one method at a time\ngroup = parser.add_mutually_exclusive_group()\ngroup.add_argument(\"-p\", \"--python\", help=\"Blurring image using python\",\\\n action=\"store_true\")\ngroup.add_argument(\"-np\", \"--numpy\", help=\"Blurring image using numpy\",\\\n action=\"store_true\")\ngroup.add_argument(\"-n\", \"--numba\", help=\"Blurring image using numba\",\\\n action=\"store_true\")\nargs = parser.parse_args()\n\n\nif args.python:\n regular_blur(args.input_img, args.output_img)\nelif args.numpy:\n numpy_blur(args.input_img, args.output_img)\nelif args.numba:\n numba_blur(args.input_img, args.output_img)\nelse:\n print(\"Did not choose a method to use\")\n"
},
{
"alpha_fraction": 0.7425357103347778,
"alphanum_fraction": 0.7581133842468262,
"avg_line_length": 27.875,
"blob_id": "5d85565c4b13b3de54509f0d1ee51f8c1009a7a7",
"content_id": "550389b3dd24eb6507fe06956f5bd52303bba49b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2354,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 80,
"path": "/in2110/in2110-lab-master/in2110-lab-master/gruppetimer/01-Introduksjon/README.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# Lab 1 – Oppvarming\n\n## Utviklingsmiljø\n\nDet ligger et ferdig utviklingsmiljø på IFI-klyngen. Dette kan brukes på IFI sine Linux-terminaler, eller via ssh på \nlogin.ifi.uio.no. Det kan også installeres på egen maskin, men dette er kun anbefalt for de som har erfaring med Linux fra før.\nKoden for å sette opp eget miljø ligger her: https://github.uio.no/IN2110/in2110-shared\n\nFor å starte miljøet, kjør:\n\n```\n$ ~henraskj/IN2110/in2110-shared/in2110-shell\n```\n\nNå er du inne i IN2110-miljøet og kan starte Python:\n\n```\n$ python\n```\n\nHer har du tilgang til en rekke Python-moduler for NLP og datasett på både Norsk og engelsk.\n\n## Oppgave 1 – Tokenisering\n> *NB: Om du kjører utviklingsmiljøet på egen maskin og får en error kan du kjøre denne kommandoen for å installere punkt i NLTK:*\n>\n> ```\n> $ python -m nltk.downloader punkt\n> ```\n\nFør vi kan prosessere tekst er det viktig at den er pre-prossesert. Det finnes mange grader av pre-prossesering, men vi vil\nstort sett alltid ønske å tokenisere teksten. Start med å hente ut et dokument fra NoReC (sentimentdatasett utviklet her på IFI\nav språkgruppen):\n\n```python\nfrom in2110.corpora import norec\n\ndoc = norec.get_document(\"105812.txt\")\n```\n\n### a) Splitte på mellomrom\n\nDen enkleste formen for tokenisering er å splitte på mellomrom. Tokeniser `doc` med denne metoden. Ser du noen problemer?\n\n### b) NLTK\n\nNLTK kommer med en egen tokenizer. Denne er utviklet for engelsk, men kan også brukes på norsk. Eksempel:\n\n```python\nfrom nltk import word_tokenize\n\nword_tokenize(doc)\n```\n\nTokeniser `doc` med `word_tokenize()`. Hvor bra fungerer den for norsk?\n\n### c) UDTK\n\nUDPipe er et verktøy for å trene modeller for tokenisering, lemmatisering, POS-tagging og dependensparsing. Grensesnittet\ner dessverre ikke veldig brukervenlig, men vi har laget en enkel tokeniserer basert på UDPipe som dere kan bruke. Denne er\ntrent på norske versjonen av Universal Dependencies. 
Eksempel:\n\n```python\nfrom udtk import Model\nm = Model(\"norwegian-bokmaal\")\n\nm.tokenize(doc)\n```\n\nHvordan fungerer denne?\n\n### d) Setningssegmentering\n\nNoen ganger vil vi også dele opp teksten i setninger. For dette finnes det en pakke i NLTK som har en modell for norsk:\n\n```python\nimport nltk.data\nsegmenter = nltk.data.load(\"tokenizers/punkt/norwegian.pickle\")\n\nsegmenter.tokenize(doc)\n```\n\n"
},
{
"alpha_fraction": 0.6538952589035034,
"alphanum_fraction": 0.6596423983573914,
"avg_line_length": 26.234783172607422,
"blob_id": "17c54774e3ac883f98dba34f03572b1878c37153",
"content_id": "9ad306690c8e1e39790aac77e5e4886775964d46",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3132,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 115,
"path": "/in2110/in2110-lab-master/in2110-lab-master/obliger/1a/in2110-oblig-1a-solution.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "from collections import Counter\n\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.metrics import accuracy_score\n\nfrom in2110.corpora import norec\nfrom in2110.oblig1 import scatter_plot\nfrom nltk import word_tokenize\n\nfrom sklearn.preprocessing import StandardScaler\n\n\ndef prepare_data(documents):\n data = []\n labels = []\n\n for doc in documents:\n if doc.metadata['category'] in (\"restaurants\", \"games\", \"literature\"):\n data.append(doc.text)\n labels.append(doc.metadata['category'])\n\n return data, labels\n\n\ndef tokenize(doc):\n return [t.lower() for t in word_tokenize(doc)]\n\n\nclass Vectorizer:\n def __init__(self):\n self.vectorizer = CountVectorizer(tokenizer=tokenize, max_features=5000)\n self.tfidf = TfidfTransformer()\n\n def vec_train(self, data):\n vec = self.vectorizer.fit_transform(data)\n vec_tfidf = self.tfidf.fit_transform(vec)\n\n return vec, vec_tfidf\n\n def vec_test(self, data):\n vec = self.vectorizer.transform(data)\n\n return vec, self.tfidf.transform(vec)\n\n\ndef create_knn_classifier(vec, labels, k):\n clf = KNeighborsClassifier(k)\n clf.fit(vec, labels)\n\n return clf\n\n\n# Innlesing av data\nprint(\"Reading data\")\ntrain_data, train_labels = prepare_data(norec.train_set())\ndev_data, dev_labels = prepare_data(norec.dev_set())\ntest_data, test_labels = prepare_data(norec.test_set())\n\n# Testing av tokenisering\nfor label, fn in ((\"split\", str.split),\n (\"tokenize\", word_tokenize),\n (\"tokenize+downcase\", lambda s: [t.lower()\n for t in word_tokenize(s)])):\n data, _ = prepare_data(norec.train_set())\n\n tokens = [t for s in data\n for t in fn(s)]\n types = set(tokens)\n\n print(\"Tokens and types for {}\".format(label))\n print(\"Tokens: {}\".format(len(tokens)))\n print(\"Types: {}\".format(len(types)))\n\nprint()\n\n# Statistikk\ncategories = Counter(train_labels)\n\nprint(\"Category distribution:\")\nfor 
category, count in categories.items():\n print(\"{}: {}\".format(category, count))\n\nprint()\n\n# Vektorisering\nvectorizer = Vectorizer()\n\nprint(\"Vectorizing training data\")\nvec, vec_tfidf = vectorizer.vec_train(train_data)\n\nprint(\"Plotting\")\nscatter_plot(vec, train_labels)\nscatter_plot(vec_tfidf, train_labels)\n\nprint(\"Vecorizing dev data\")\ndev_vec, dev_vec_tfidf = vectorizer.vec_test(dev_data)\n\nfor k in range(1, 16):\n clf = create_knn_classifier(vec, train_labels, k)\n\n acc = accuracy_score(dev_labels, clf.predict(dev_vec))\n\n clf = create_knn_classifier(vec_tfidf, train_labels, k)\n acc_tfidf = accuracy_score(dev_labels, clf.predict(dev_vec_tfidf))\n\n print(\"k={}, raw: {}, tf-idf: {}\".format(k, acc, acc_tfidf))\n\nclf = create_knn_classifier(vec_tfidf, train_labels, 12)\n\nprint(\"Vecorizing test data\")\ntest_vec, test_vec_tfidf = vectorizer.vec_test(test_data)\n\nprint(\"Test accuracy: {}\".format(accuracy_score(test_labels,\n clf.predict(test_vec_tfidf))))\n"
},
{
"alpha_fraction": 0.7389330267906189,
"alphanum_fraction": 0.757661759853363,
"avg_line_length": 23.47222137451172,
"blob_id": "f3191ecb94a791a68c7208991645f1275da5470b",
"content_id": "e8f981b20f142df8a42aae19351fbd3c6f138b55",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1762,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 72,
"path": "/in3110/assignment5/README.md",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "# Assignment 5\n\nThis is Assignment 5 in IN3110.\nThe task is divided into folders 5.1-5.6. Each folder contains all needed files\nto run each of the subtasks (included highlighter.py).\n\n## 5.1 Syntax highlighting\n\nThe first part of the assignment is about colouring keywords found in a sent file\nbased on given syntax and theme files.\n\nYou can run it like so:\n\n```bash\npython3 highlighter.py naython.syntax naython.theme hello.ny\n```\n\n## 5.2 Python syntax\n\nThis task is about writing regular expressions for Python.\n\nRun it like this:\n\n```bash\npython3 highlighter.py python.syntax python.theme demo.py\n```\nor the second theme for python:\n\n```bash\npython3 highlighter.py python.syntax python2.theme demo.ny\n```\n## 5.3 Syntax for Java\n\nThis task is about writing regular expressions for Java.\n\n ```bash\n python3 highlighter.py favorite_language.syntax favorite_language.theme demo.java\n ```\n Code in the demo.java files is copied examples from the internet.\n## 5.4 Grep\n\nGrep takes a filename along with arbitrary number of regualar expressions,\nand prints out all the lines where the regexes where found.\n\nUse -f flag before a file name and -r flag before your regular expressions.\nAdd --highlight flag to add colours to matching regexes.\n\nExample:\n\n```bash\npython3 grep.py -f demo.py -r print string --highlight\n```\n## 5.5 Superdiff\n\nThis task is about comparing two files. It adds 0 if a line remains unchanged, + if a line is\nnew, and - if something has been removed.\n\nRun:\n\n```bash\npython3 diff.py -f demo1.py demo2.py\n```\n\n## 5.6 Colouring Superdiff\nColouring the output from previous task. Additions are green, deletions are red,\nand unchanged lines have just a default color.\n\nRun:\n\n```bash\npython3 highlighter.py diff.syntax diff.theme diff_output.txt\n```\n"
},
{
"alpha_fraction": 0.6175548434257507,
"alphanum_fraction": 0.6295715570449829,
"avg_line_length": 31,
"blob_id": "063530cfe728cda61219b71a9379f159ba4b46f5",
"content_id": "d32c3e2a99ee1ff9d1ce107da31ce61706eb214d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1914,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 58,
"path": "/in3110/assignment6/visualize.py",
"repo_name": "joevko/projects",
"src_encoding": "UTF-8",
"text": "from sklearn import neighbors, datasets, linear_model\r\nimport pylab as pl\r\nimport numpy as np\r\nfrom matplotlib.colors import ListedColormap\r\nfrom data import get_training_set\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\r\n\r\ncmap_light = ListedColormap(['#90ee90', '#ffcccb', '#ffcccb'])\r\ncmap_bold = ListedColormap(['green', 'red', 'red'])\r\n\r\n\r\ndef plot(features, model):\r\n \"\"\"\r\n This function plots data for 2 features for a certain model/clasifier\r\n\r\n Args:\r\n features(['']): features to plot for\r\n model(classifier): a clasisfier to fit for\r\n Returns:\r\n pl(plot): returns plot\r\n\r\n \"\"\"\r\n #Getting training set, setting it up and fitting it\r\n training_set = get_training_set()\r\n # Change: removed the pop because it only worked half of the time I used web_visualize\r\n target = training_set['diabetes']\r\n training_set = training_set[features]\r\n model.fit(training_set, target)\r\n\r\n #Setting up range for the plots\r\n x_min, x_max = training_set.iloc[:, 0].min() - .1, training_set.iloc[:, 0].max() + .1\r\n y_min, y_max = training_set.iloc[:, 1].min() - .1, training_set.iloc[:, 1].max() + .1\r\n xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),\r\n np.linspace(y_min, y_max, 100))\r\n Z = model.predict(np.c_[xx.ravel(), yy.ravel()])\r\n\r\n # Put the result into a color plot\r\n Z = Z.reshape(xx.shape)\r\n pl.figure()\r\n pl.pcolormesh(xx, yy, Z, cmap=cmap_light)\r\n\r\n # Plot also the training points\r\n pl.scatter(training_set.iloc[:, 0], training_set.iloc[:, 1], c=target, cmap=cmap_bold)\r\n pl.xlabel(features[0])\r\n pl.ylabel(features[1])\r\n pl.axis('tight')\r\n\r\n \r\n return pl\r\n\r\n\r\nif __name__ == '__main__':\r\n\r\n knn = KNeighborsClassifier()\r\n features = ['age','pressure']\r\n pl = plot(features, knn)\r\n pl.show()\r\n"
}
] | 63 |
Lidong-2021/HTTPSEVER | https://github.com/Lidong-2021/HTTPSEVER | 2bb1c6037190cbe6a47d118e68ff031b0dee3f3d | 0f689e54ba8a302b89ee8894480b91018be46cd0 | 3e3004f53834be999cc4664a4cf9f022061c43c0 | refs/heads/master | 2023-04-12T11:30:57.780576 | 2021-04-18T13:29:15 | 2021-04-18T13:29:15 | 359,147,850 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.44897958636283875,
"alphanum_fraction": 0.5714285969734192,
"avg_line_length": 11.25,
"blob_id": "cba472e2d43dc14b030287632f8ee42bb39d8639",
"content_id": "53b13f3321b22e833f32c02186a37e5a63239b67",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 106,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 8,
"path": "/WebFrame/settings.py",
"repo_name": "Lidong-2021/HTTPSEVER",
"src_encoding": "UTF-8",
"text": "\"\"\"\nWEB FRAME 配置文件\n\"\"\"\n# frame ip ='0.0.0.0'\nframe_ip = '0.0.0.0'\nframe_port = 8080\n\nDEBUG = True\n"
},
{
"alpha_fraction": 0.503311276435852,
"alphanum_fraction": 0.6225165724754333,
"avg_line_length": 10.615385055541992,
"blob_id": "ca2be7e1448528b8eebb957f7bce1d5f29af430d",
"content_id": "ec43f864c075c32b86d3483b5e242a9d016f3cbd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 177,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 13,
"path": "/httpserver/config.py",
"repo_name": "Lidong-2021/HTTPSEVER",
"src_encoding": "UTF-8",
"text": "\"\"\"\nhttpserver 的配置文件\n\"\"\"\n# http Server ip\n\nHOST = '0.0.0.0'\nPOST = 8000\n# 是否调试模式\nDEBUG = True\n\n# web frame 地址\nframe_ip = '127.0.0.1'\nframe_port = 8080\n"
}
] | 2 |
hayashi-leo/django-locallibrary | https://github.com/hayashi-leo/django-locallibrary | 31096de052be4781ac03a7c3b491723eebc923d4 | b804774c080f029984ffe51012b61e40ab0ee778 | d0ea0bf6f02eac1d76e44024cb1ca7e4670e20f6 | refs/heads/master | 2020-03-28T13:38:36.980567 | 2018-09-18T10:29:04 | 2018-09-18T10:29:04 | 98,708,127 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7018265128135681,
"alphanum_fraction": 0.7022830843925476,
"avg_line_length": 32.69230651855469,
"blob_id": "2f4a2e17943021510d6af4ea04236d80792c878a",
"content_id": "a0f1c20f889f748507556ed1fa85212c98461566",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2190,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 65,
"path": "/locallibrary/catalog/admin.py",
"repo_name": "hayashi-leo/django-locallibrary",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.contrib import admin\n\n# Register your models here.\n\n# lin,leo - if you register your models under admin,\n# you can use admin portal to add/edit/remove model instances\n\nfrom . models import Author, Genre, Book, BookInstance\n\n\n### this is the simplest way to use admin to configure/edit your model's data\n\n## myModels = [Author, Genre, Book, BookInstance]\n\n## for model in myModels:\n## admin.site.register(model)\n\n### lin,leo - you can also extend the way admin presents your model data\n### like so,\n\n\n# extending ModelAdmin for Author model\nclass AuthorAdmin(admin.ModelAdmin):\n # pass ## uncomment to use default admin presentation style\n list_display = ('last_name', 'first_name', 'date_of_birth', 'date_of_death')\n ### the 'fields' attribute list just those files that are to be display on the admin form,\n ### in order. Fields are display vertically by default, but will display horizontally if you\n ### further ground them in a tuple (as shown below).\n fields = ['first_name', 'last_name', ('date_of_birth', 'date_of_death')]\n\n# now register Admin new model\nadmin.site.register(Author, AuthorAdmin)\n\n# extending ModelAdmin for BookInstance model\n## we use here a decorator to register BookInstance and BookInstanceAdmin\[email protected](BookInstance)\nclass BookInstanceAdmin(admin.ModelAdmin):\n list_filter = ('status', 'due_back')\n\n ## this creates a section view of BookInstance details\n fieldsets = (\n (None, {\n 'fields':('book', 'imprint', 'id')\n }),\n ('Availability Section',{\n 'fields':('status', 'due_back')\n }),\n )\n### now register BookInstance new model\n#admin.site.register(BookInstance, BookInstanceAdmin) # commented out!, use decorator instead\n\n\n# extending ModelAdmin for Book model\n### Associates records at the same time.\nclass BooksInstanceInline(admin.TabularInline):\n model = BookInstance\n\[email protected](Book)\nclass 
BookAdmin(admin.ModelAdmin):\n # pass ## uncomment to use default admin presentation style\n list_display = ('title', 'author', 'display_genre')\n inlines = [BooksInstanceInline]\n"
}
] | 1 |
hwk06023/Python_SchoolProject | https://github.com/hwk06023/Python_SchoolProject | 384eed596a2aa609959566043278d43736a27ede | 23c270d7045d7c388dc14a54f8e5577ec50e6b57 | f2b8690482117852813cfa5cdd8b99fb9b15c21f | refs/heads/master | 2020-09-22T11:59:33.362806 | 2019-12-03T05:18:31 | 2019-12-03T05:18:31 | 225,184,377 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5852156281471252,
"alphanum_fraction": 0.6529774069786072,
"avg_line_length": 31.53333282470703,
"blob_id": "80a6fee7f21e352be237d14906ddcdeb9c6afb65",
"content_id": "a307f917a1e5578828e4ecf6683b8d51b7660b49",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 487,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 15,
"path": "/crawler.py",
"repo_name": "hwk06023/Python_SchoolProject",
"src_encoding": "UTF-8",
"text": "import requests as req\nfrom bs4 import BeautifulSoup\nimport re\n\ndef crawler(url):\n header = {'User-Agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0 Safari/605.1.15'}\n\n page = req.get(url, headers = header)\n soup = BeautifulSoup(page.text, 'html.parser')\n divs = soup.findAll(\"div\", {\"class\": \"document_243457_2250 xe_content\"})\n\n divs = str(divs)\n divs = re.sub('<.+?>', '', divs, 0).strip()\n\n return divs"
},
{
"alpha_fraction": 0.7386363744735718,
"alphanum_fraction": 0.7689393758773804,
"avg_line_length": 28.44444465637207,
"blob_id": "fb89cc16a5f030f797c66cf920d1db4e57d46828",
"content_id": "576c379a36070c5d2230cad42eadf7a1ab12a6df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 314,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 9,
"path": "/wcloud_main.py",
"repo_name": "hwk06023/Python_SchoolProject",
"src_encoding": "UTF-8",
"text": "from crawler import crawler\nfrom word_clouding import wordclouding\n\n# url을 수정할 때 crawler.py에서 수동으로 div class 또는 id를 바꾸십쇼\nurl = 'http://www.dbhs.co.kr/zbxe/index.php?mid=a27&document_srl=243457'\ncrawl_data = crawler(url)\nprint('데이터 수집 완료')\n\nwordclouding(crawl_data)"
},
{
"alpha_fraction": 0.6389526724815369,
"alphanum_fraction": 0.6642168164253235,
"avg_line_length": 28.821918487548828,
"blob_id": "07ca614d525647a35e539c7da2081bb2d69670a1",
"content_id": "c33db2b1e78aaafb65739a6aa367918ca47def29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2237,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 73,
"path": "/novel_main.py",
"repo_name": "hwk06023/Python_SchoolProject",
"src_encoding": "UTF-8",
"text": "from read_novel import read_data, summarize\nfrom novel_train import vectorize\n\nfrom keras.callbacks import LambdaCallback\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM\nfrom keras.optimizers import RMSprop\nfrom keras.utils.data_utils import get_file\n\nimport numpy as np\nimport random\nimport sys\n\nnovel_1 = './data/Novel_1.txt'\nnovel_2 = './data/Novel_2.txt'\nnovel_3 = './data/Novel_3.txt'\n\n# 소설 데이터를 읽어와 정규식을 통해 소설의 내용을 정제하여 가져옵니다.\ntext = read_data(novel_2)\n\nchars, char_indices, indices_char = summarize(text)\n\nx, y, sentences = vectorize(char_indices, text, chars)\n\nprint('Build model...')\nmodel = Sequential()\nmodel.add(LSTM(1024, input_shape=(40, len(chars))))\nmodel.add(Dense(len(chars), activation='softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.001))\n\ndef sample(preds, temperature=1.0):\n # helper function to sample an index from a probability array\n preds = np.asarray(preds).astype('float64')\n preds = np.log(preds) / temperature\n exp_preds = np.exp(preds)\n preds = exp_preds / np.sum(exp_preds)\n probas = np.random.multinomial(1, preds, 1)\n return np.argmax(probas)\n\n\ndef on_epoch_end(epoch, _):\n print('\\n----- Generating text after Epoch: %d' % epoch)\n\n start_index = random.randint(0, len(text) - 40 - 1)\n# for diversity in [0.2, 0.5, 1.0, 1.2]:\n# print('----- diversity:', diversity)\n\n generated = ''\n sentence = text[start_index: start_index + 40]\n generated += sentence\n print('----- Generating with seed: \"' + sentence + '\"')\n sys.stdout.write(generated)\n\n for i in range(400):\n x_pred = np.zeros((1, 40, len(chars)))\n for t, char in enumerate(sentence):\n x_pred[0, t, char_indices[char]] = 1.\n\n preds = model.predict(x_pred, verbose=0)[0]\n next_index = sample(preds, 0.5)\n next_char = indices_char[next_index]\n\n generated += next_char\n sentence = sentence[1:] + next_char\n\n sys.stdout.write(next_char)\n sys.stdout.flush()\n 
print()\n\nprint_callback = LambdaCallback(on_epoch_end=on_epoch_end)\n\nmodel.fit(x, y, batch_size=128, epochs=60, callbacks=[print_callback])\n"
},
{
"alpha_fraction": 0.6683804392814636,
"alphanum_fraction": 0.6683804392814636,
"avg_line_length": 15.956521987915039,
"blob_id": "c1f4c6a0e7224f089eba6b7fed86fea541af67fa",
"content_id": "e7d5b8fa82fcd380d594e953b68f4df489928c8a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 703,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 23,
"path": "/README.md",
"repo_name": "hwk06023/Python_SchoolProject",
"src_encoding": "UTF-8",
"text": "# Python_SchoolProject\n\n대회 끝나고 왔더니 수행이 내일까지라고?? <br/>\n\n큰일났다. 심지어 지금도 대회 기간이라 시간이 많이 없다. <br/>\n\n해커톤 하듯이 뚞딲 만들어보도록 하자. <br/>\n\n\n## Wordcloud - 니체\n\nwcloud_main을 실행시키면 니체 명언을 블로그에서 크롤링 해와서, Wordcloud로 만들어줍니다.\n<br/>\n\n## Novel - 소설\n\nnovel_main을 실행시키면 소설 데이터를 읽어와 LSTM으로 학습 한 모델을 활용하여, 소설을 생성합니다.\n<br/>\n\n> 소설 부분은 [kairess](https://github.com/kairess)님의 코드를 참조했습니다.\n\n\n#### 다행히도 발표까지 잘 마무리되었다. (성공!)"
},
{
"alpha_fraction": 0.5750528573989868,
"alphanum_fraction": 0.5813953280448914,
"avg_line_length": 23.947368621826172,
"blob_id": "5554bec599cfbd71d45a5fc2e8e4b4ef14237b5f",
"content_id": "37bb1db966f4992f7b2cd76c6ece82368b76634f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 473,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 19,
"path": "/read_novel.py",
"repo_name": "hwk06023/Python_SchoolProject",
"src_encoding": "UTF-8",
"text": "import io\nimport re\n\ndef read_data(path):\n with io.open(path, encoding='cp949') as f:\n text = f.read().lower()\n\n text = re.sub(r'<.*>', '', text)\n text = re.sub(r'\\n', ' ', text)\n text = re.sub(r' +', ' ', text)\n\n return text\n\ndef summarize(text):\n chars = sorted(list(set(text)))\n char_indices = dict((c, i) for i, c in enumerate(chars))\n indices_char = dict((i, c) for i, c in enumerate(chars))\n\n return chars, char_indices, indices_char"
},
{
"alpha_fraction": 0.6510319113731384,
"alphanum_fraction": 0.6622889041900635,
"avg_line_length": 25.024391174316406,
"blob_id": "7c790be27c47bbae3bf0fee511748c72172adfd3",
"content_id": "56d56f34464894dde98929006e1bec8ba58c0d6b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1066,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 41,
"path": "/word_clouding.py",
"repo_name": "hwk06023/Python_SchoolProject",
"src_encoding": "UTF-8",
"text": "from konlpy.tag import Twitter\nimport nltk\nfrom nltk import FreqDist\nfrom nltk.tokenize import word_tokenize\n\nfrom wordcloud import WordCloud\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom PIL import Image\n\ndef wordclouding(text):\n twitter = Twitter()\n tokens = twitter.pos(text, stem=True)\n \n tokens_noun = []\n for token in tokens:\n if token[1] == \"Noun\":\n tokens_noun.append(token[0])\n\n texts_noun = \" \".join(tokens_noun)\n\n tokens = word_tokenize(texts_noun)\n freqtxt = pd.Series(dict(FreqDist(tokens))).sort_values(ascending=False)\n \n mask = np.array(Image.open(\"./image/icon.png\"))\n\n wcloud = WordCloud(\n font_path='/Library/Fonts/NanumSquareRegular.ttf',\n mask=mask,\n width = 500,\n height = 500,\n background_color='white'\n )\n wcloud = wcloud.generate_from_frequencies(freqtxt)\n\n fig = plt.figure(figsize=(10, 10))\n plt.imshow(wcloud, interpolation=\"bilinear\")\n plt.show()\n fig.savefig('./image/wordcloud_without_axisoff.png')"
}
] | 6 |
osonwanne/akilunaphotography | https://github.com/osonwanne/akilunaphotography | d312adbafc463c30e17e3ca4f26e33fd9f167e5a | 8e797f660e3bdc124f97bfeaef161a41ddf96b08 | 04b9cae621111d3aba5fe9fa1b7ce72c223fa07a | refs/heads/master | 2021-01-21T14:43:26.029067 | 2017-09-04T23:06:19 | 2017-09-04T23:06:19 | 95,328,822 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5646036863327026,
"alphanum_fraction": 0.5646036863327026,
"avg_line_length": 24.61111068725586,
"blob_id": "9e7eea32656866fb1d375455b1cddf3704633d16",
"content_id": "9481c2a3e3f09bd3adb17cc8cf18bd18be1fddd0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 921,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 36,
"path": "/contact/views.py",
"repo_name": "osonwanne/akilunaphotography",
"src_encoding": "UTF-8",
"text": "from django.core.mail import send_mail\nfrom django.shortcuts import redirect, render\n\n# Create your views here.\nfrom contact.forms import ContactForm\n\ndef contact(request):\n form_class = ContactForm\n\n # new logic!\n if request.method == 'POST':\n form = form_class(request.POST, request.FILES)\n\n if form.is_valid():\n\n contact_name = request.POST.get(\n 'Name'\n , '')\n contact_email = request.POST.get(\n 'Email'\n , '')\n form_content = request.POST.get('Message', '')\n\n form.save()\n\n send_mail('AkilunaPhotography.com - new contact: %s' % contact_name, form_content, contact_email,\n ['[email protected]'])\n\n return redirect('success')\n\n return render(request, 'contact.html', {\n 'form': form_class,\n })\n\ndef success(request):\n return render(request, 'contact_form_sent.html')"
},
{
"alpha_fraction": 0.6902654767036438,
"alphanum_fraction": 0.8053097128868103,
"avg_line_length": 15.285714149475098,
"blob_id": "dc807ff4c8a502994cf1bae3b81eae4296f501dc",
"content_id": "8eeb7136dc178105fbb21c45142261830e40f9cf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 113,
"license_type": "permissive",
"max_line_length": 23,
"num_lines": 7,
"path": "/requirements.txt",
"repo_name": "osonwanne/akilunaphotography",
"src_encoding": "UTF-8",
"text": "Django>=1.11,<1.12\nwagtail>=1.10,<1.11\ndjango-bootstrap-themes\ndjango-bootstrap3\ndjango-taggit\nunidecode\nsendgrid"
},
{
"alpha_fraction": 0.7678795456886292,
"alphanum_fraction": 0.7854453921318054,
"avg_line_length": 24.74193572998047,
"blob_id": "b11367ed135755a15ae6c56e289630dadf3fc485",
"content_id": "e753da3fecbfdd8bc4c3497da5bc48cf23439c17",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 797,
"license_type": "permissive",
"max_line_length": 143,
"num_lines": 31,
"path": "/README.md",
"repo_name": "osonwanne/akilunaphotography",
"src_encoding": "UTF-8",
"text": "# Akiluna Photography\n\nAkiluna Photography - portraits and landscape photography. http://akilunaphotography.com\n\n[![Akiluna Photography](https://s3-us-west-2.amazonaws.com/akilunaphotography/AkilunaPhotography.PNG \"Website\")](http://akilunaphotography.com)\n\n**Set-up and installation:**\n\n1. git clone https://github.com/osonwanne/akilunaphotography.git\n\n2. cd akilunaphotography\n\n3. virtualenv .\n\n4. source bin/activate\n\n5. pip install -r requirements.txt\n\n(Optional - resolves ImportErrors on Ubuntu)\n* sudo apt-get build-dep python-imaging\n* sudo apt-get install libjpeg62 libjpeg62-dev\n* pip install Pillow\n* easy_install django-treebeard \n* sudo apt-get install python-unicodecsv\n* pip install glue\n\n6. python manage.py migrate\n\n7. python manage.py collectstatic\n\n8. python manage.py runserver"
}
] | 3 |
feargswalsh92/HW1Python | https://github.com/feargswalsh92/HW1Python | c954da9d8c34449fd093d16fe26e90ccb0e02bb2 | dada020c169adc3388538aafe46927575c09d9a9 | ff8926a5555f4479a8cf46b108897fb05582a558 | refs/heads/master | 2016-09-13T14:20:34.785726 | 2016-07-06T14:00:39 | 2016-07-06T14:00:39 | 61,267,256 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7930821776390076,
"alphanum_fraction": 0.7974057793617249,
"avg_line_length": 54.75862121582031,
"blob_id": "c365b4d0cf61e91ad5ae5bdd7b2c0b6168e687b1",
"content_id": "0deeb9c4c6d987150d99c6531e83337ff8656668",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1619,
"license_type": "no_license",
"max_line_length": 192,
"num_lines": 29,
"path": "/README.md",
"repo_name": "feargswalsh92/HW1Python",
"src_encoding": "UTF-8",
"text": "# HW1Python\n\nIntroduction\n\nDIVVY is the bikesharing scheme in Chicago. It is an incredible resource,that I use daily. However, whether you have a year membership or you are only\nrenting out for a day trip the same rules apply as regards the amount of time(thirty mins)that you can cycle uninterrupted without DIVVY charging you extra. This can often\nlead to frantic searches for bike stations to dock your bike for a second so you can go off on your merry way for another half an hour. I have had this problem personally and I \nhave talked to fellow divvyers and they say that it can be stressful too.\n\nData input\n\nThe two data files that I will be using for this project are the Chicago Bike Routes and their coordinate geometry and the locations of all of the Divvy Stations in the city.\n\nObjectives\n\nObjective 1\n\nTo input all of the latitudes and longtitudes from the chicago bike routes into a list for each route and find the distance from the first to the last point on each route,\nif that distance were to take longer than twenty five minutes to cycle based on the average chicagoans bicycling speed( approx 12mph) , and then sort each route based on whether it would\ntake more or less than this time to travel.\n\nObjective 2\n\nBased on the data obtained in objective one, Objective two aims to find all of the divvy stations in a 0.5 mile radius of the point the cyclist would be passing on the route after twenty five \nminutes cycling.\n\nPotential uses for this project\n\nThe data would be beneficial for cyclists who like to plan their trips in advance so they can get to their destination as fast as possible.\n\n\n"
},
{
"alpha_fraction": 0.5970394611358643,
"alphanum_fraction": 0.6118420958518982,
"avg_line_length": 23.239999771118164,
"blob_id": "7d05d1fc58b3834f147c036eecc6d077697e4550",
"content_id": "05d687ad0de6d6b0d138436b6f4c25f77e51fb57",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 608,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 25,
"path": "/test_CSVreader.py",
"repo_name": "feargswalsh92/HW1Python",
"src_encoding": "UTF-8",
"text": "from unittest import TestCase\nimport unittest\nimport tkinter\nimport tkinter.ttk\nimport ttk\n\n# from CSVreader import CSVreader\nimport re\nimport csv\n\n\nclass TestCVSreader(unittest.TestCase):\n def setUp(self):\n f = open('CDOT_Bike_Routes_2014_1216.csv', newline='')\n bikeroutes = csv.reader(f)\n\n tkinter.ttk.widget = []\n for row in bikeroutes:\n widget = widget.append(re.findall(r'\\d+', (row[1])))\n\n def test_CSVreader(self):\n self.assertTrue(s for s in widget if s.isdigit())\n\n if __name__ == '__main__':\n unittest.main()\n\n\n"
},
{
"alpha_fraction": 0.5170454382896423,
"alphanum_fraction": 0.5478895902633667,
"avg_line_length": 28.85365867614746,
"blob_id": "c3bee02a6adb3b760f96a0eb4b0e83431d579033",
"content_id": "c2db4deab1c6012ccfbabb111998743bd5d5a4ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1232,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 41,
"path": "/CSVreader.py",
"repo_name": "feargswalsh92/HW1Python",
"src_encoding": "UTF-8",
"text": "\nimport math\nfrom math import sin, cos, sqrt, atan2, radians\n\nclass CSVreader:\n # global thing\n import csv\n import re\n import itertools\n\n f = open('CDOT_Bike_Routes_2014_1216.csv', newline='')\n bikeroutes = csv.reader(f)\n header = bikeroutes.__next__()\n\n latandlongindex = header.index(\"the_geom\")\n\n def createcoordslist(latandlongindex, bikeroutes):\n\n import re\n coordlist = []\n for row in bikeroutes:\n latandlong = row[latandlongindex]\n coordlist.append(re.findall(\"\\d+\\.\\d+\", latandlong))\n\n #lat_long = [tuple(row[3:5]) for row in coordlist # print(lat_long)ist[1:]]\n\n\n def distbetweenPoints(lat, lng,lat_long):\n\n for (lat, lng) in lat_long:\n R = 6373.0\n radlat1 = radians(float(lat[0]))\n radlong1 = radians(float(lng[0]))\n radlat2 = radians(float(lat[1]))\n radlong2 = radians(float(lng[1]))\n dlon = (-radlong2)- (-radlong1)\n dlat = radlat2 - radlat1\n\n a = sin(dlat / 2) ** 2 + cos(radlat1) * cos(radlat2) * sin(dlon / 2) ** 2\n c = 2 * atan2(sqrt(a), sqrt(1 - a))\n distance = R * c\n print(distance)\n \n\n\n"
}
] | 3 |
ajay094/Django_Project | https://github.com/ajay094/Django_Project | 744aa1ecedd1a140c12b2bdeb56a4c6fdcfde601 | 7c29a9a2c4a5b14de76b32b37eaf8ae5047ad332 | f0601c96bc6b2d8a473c9f2a66b613542b1af602 | refs/heads/master | 2023-02-13T10:31:20.223590 | 2021-01-07T15:48:54 | 2021-01-07T15:48:54 | 327,646,323 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7475042343139648,
"alphanum_fraction": 0.7477477192878723,
"avg_line_length": 34.713043212890625,
"blob_id": "43d2062f5e1687ccc5035dec4e439c02e8496790",
"content_id": "f452e3eca5e141462bb1da183d27fa80f6a15493",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4107,
"license_type": "no_license",
"max_line_length": 200,
"num_lines": 115,
"path": "/foodzone/views.py",
"repo_name": "ajay094/Django_Project",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render,redirect\nfrom django.http import HttpResponse\nfrom foodzone.forms import Userregister\nfrom fooddelivery import settings\nfrom django.core.mail import send_mail\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib import messages\nfrom foodzone.models import Product\nfrom foodzone import forms,models\n\n# Create your views here.\n\ndef home(request):\n context = {'products':Product.objects.all()}\n return render(request,'html/home.html',context)\n\n@login_required\ndef myorder(request):\n\treturn render(request,'html/myorders.html')\n\ndef about(request):\n\treturn render(request,'html/about.html')\n\ndef contact(request):\n\tsub = forms.ContactusForm()\n\tif request.method == 'POST':\n\t\tsub = forms.ContactusForm(request.POST)\n\t\tif sub.is_valid():\n\t\t\temail = sub.cleaned_data['Email']\n\t\t\tname=sub.cleaned_data['Name']\n\t\t\tmessage = sub.cleaned_data['Message']\n\t\t\tsend_mail(str(name)+' || '+str(email),message, settings.EMAIL_HOST_USER, settings.EMAIL_RECEIVING_USER, fail_silently = False)\n\t\t\treturn render(request, 'ecom/contactussuccess.html')\n\treturn render(request,'html/contact.html')\n\n@login_required\ndef userdashboard(request):\t\n\treturn render(request,'html/admindashboard.html')\n\ndef register(request):\n\tif request.method == \"POST\":\n\t\ta = Userregister(request.POST)\n\t\tif a.is_valid():\n\t\t\tp = a.save(commit=False)\n\t\t\trc = p.email\n\t\t\tsb = \"Welcome to fooddelivery\"\n\t\t\tmsg = \"Dear {},You have successfully created your Onlinegrocery account.Congratulations and welcome to a whole new world of grocery shopping.Your details: password:{}\".format(p.username,p.password)\n\t\t\tsd = settings.EMAIL_HOST_USER\n\t\t\tsnt = send_mail(sb,msg,sd,[rc])\n\t\t\tif snt == 1:\n\t\t\t\tp.save()\n\t\t\t\tmessages.success(request,\"Please check your {} for login creadentials\".format(rc))\n\t\t\t\treturn redirect(\"/lg\")\n\ta = Userregister()\n\treturn 
render(request,'html/register.html',{'b':a})\n\ndef addtocart(request,pk):\n\tproducts = models.Product.objects.all()\n\n@login_required\ndef admindashboard(request):\n\tproductcount=models.Product.objects.all().count()\n\tordercount=models.Orders.objects.all().count()\n\torders = models.Orders.objects.all()\n\tordered_products = []\n\tordered_buys = []\n\tfor order in orders:\n\t\tordered_product = models.Product.objects.all().filter(id = order.product.id)\n\t\tordered_buys = models.Customer.objects.all().filter(id = order.customer.id)\n\t\tordered_products.append(ordered_product)\n\t\tordered_buys.append(ordered_buys)\n\n\tmydict = {\n\t'productcount':productcount,\n\t'ordercount':ordercount,\n\t'data':zip(ordered_products,ordered_buys,orders)\n\t}\n\treturn render(request,'html/admindashboard.html',context=mydict)\n\n@login_required\ndef products(request):\n productss = models.Product.objects.all()\n return render(request, 'html/productscatalogue.html', {'pro':productss})\n\n@login_required\ndef addproducts(request):\n\ta = forms.ProductForm()\n\tif request.method == \"POST\":\n\t\ta = forms.ProductForm(request.POST,request.FILES)\n\t\tif a.is_valid():\n\t\t\ta.save()\n\t\t\tmessages.success(request,\"{} you have successfully added a new product details in productscatalogue\".format(request.user.username))\n\t\t\treturn redirect('/prod')\n\treturn render(request,'html/addproducts.html',{'k':a})\n\n@login_required\ndef updateproducts(request,pk):\n\tproduct = models.Product.objects.get(id=pk)\n\tproductForm = forms.ProductForm(instance = product)\n\tif request.method == \"POST\":\n\t\tproductForm = forms.ProductForm(request.POST,request.FILES,instance=product)\n\t\tif productForm.is_valid():\n\t\t\tproductForm.save()\n\t\t\tmessages.warning(request,\"{} you have successfully updated the product details in productscatalogue\".format(request.user.username))\n\t\treturn redirect('/prod')\t\n\treturn 
render(request,'html/updateproduct.html',{'productForm':productForm})\n\n@login_required\ndef deleteproducts(request,pk):\n\tproduct = models.Product.objects.filter(id=pk)\n\tif request.method == \"POST\":\n\t\tproduct.delete()\n\t\tmessages.warning(request,\"{} you have successfully deleted the product from productscatalogue\".format(request.user.username))\n\t\treturn redirect('/prod')\n\treturn render(request,'html/deleteproduct.html')\n"
},
{
"alpha_fraction": 0.6628198623657227,
"alphanum_fraction": 0.6653286218643188,
"avg_line_length": 27.028169631958008,
"blob_id": "9fd9c39b0ed7dd44b9fe0b5f860e714f7792f61f",
"content_id": "082a15ff76ca69271bddce7ed8fb2a8bad03f1d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1993,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 71,
"path": "/foodzone/forms.py",
"repo_name": "ajay094/Django_Project",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth.models import User\nfrom django.contrib.auth.forms import UserCreationForm\nfrom django import forms\nfrom foodzone import models\n\nclass Userregister(UserCreationForm):\n\tpassword1 = forms.CharField(widget=forms.PasswordInput(attrs={\"class\":\"form-control\",\"placeholder\":\"Enter You Password\"}))\n\tpassword2 = forms.CharField(widget=forms.PasswordInput(attrs={\"class\":\"form-control\",\"placeholder\":\"Enter You Confirm Password\"}))\n\tclass Meta:\n\t\tmodel = User\n\t\tfields = [\"first_name\",\"last_name\",\"email\",\"username\"]\n\t\twidgets = {\n\t\t\"first_name\": forms.TextInput(attrs = {\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"placeholder\":\"Enter Your First Name\",\n\t\t\t\"required\":True,\n\t\t\t}),\n\t\t\"last_name\": forms.TextInput(attrs = {\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"placeholder\":\"Enter Your Last Name\",\n\t\t\t}),\n\t\t\"email\": forms.EmailInput(attrs = {\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"placeholder\":\"Enter Your Email\",\n\t\t\t\"required\":True,\n\t\t\t}),\n\t\t\"username\": forms.TextInput(attrs = {\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"placeholder\":\"Enter Your User Name\",\n\t\t\t\"required\":True,\n\t\t\t}),\n\t\t}\n\nclass ProductForm(forms.ModelForm):\n\tclass Meta:\n\t\tmodel = models.Product\n\t\tfields = ['productname','price','description','product_image']\n\t\twidgets = {\n\t\t\"productname\":forms.TextInput(attrs={\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"placeholder\":\"Enter Your Product Name\",\n\t\t\t\"required\":True,\n\t\t\t}),\n\t\t\"price\":forms.NumberInput(attrs={\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"placeholder\":\"Enter Your Product Price\",\n\t\t\t\"required\":True,\n\t\t\t}),\n\t\t\"description\":forms.Textarea(attrs={\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"rows\":5,\n\t\t\t\"cols\":10,\n\t\t\t\"placeholder\":\"Enter Your Product 
Discription\",\n\t\t\t\"required\":True,\n\t\t\t}),\n\t\t\"category\":forms.Select(attrs={\n\t\t\t\"class\":\"form-control\",\n\t\t\t}),\n\t\t}\n\nclass ContactusForm(forms.Form):\n\tclass Meta:\n\t\tmodel = models.Product \n\t\tfields = ['name','email','message']\n\t\twidgets = {\n\t\t\"name\":forms.TextInput(attrs={\n\t\t\t\"class\":\"form-control\",\n\t\t\t\"placeholder\":\"Enter Your Name\",\n\t\t\t\"required\":True,\n\t\t\t}),\n\t\t}\n\n\n "
},
{
"alpha_fraction": 0.5631510615348816,
"alphanum_fraction": 0.58203125,
"avg_line_length": 41.66666793823242,
"blob_id": "3fe5982253e588941089538f19b5565df948a87f",
"content_id": "14ce57e1a3a15eab4c8f0dfde7677f43f45822c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1536,
"license_type": "no_license",
"max_line_length": 215,
"num_lines": 36,
"path": "/foodzone/migrations/0001_initial.py",
"repo_name": "ajay094/Django_Project",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.5 on 2021-01-07 04:51\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='Product',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('productname', models.CharField(max_length=100)),\n ('product_image', models.ImageField(blank=True, default='brand.png', null=True, upload_to='product_image/')),\n ('price', models.PositiveIntegerField()),\n ('description', models.CharField(max_length=40)),\n ],\n ),\n migrations.CreateModel(\n name='Orders',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('email', models.CharField(max_length=50, null=True)),\n ('address', models.CharField(max_length=300)),\n ('mobile', models.CharField(max_length=10)),\n ('status', models.CharField(choices=[('Pending', 'Pending'), ('Order Confirmed', 'Order Confirmed'), ('Out for Delivery', 'Out for Delivery'), ('Delivered', 'Delivered')], max_length=50, null=True)),\n ('product', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='foodzone.Product')),\n ],\n ),\n ]\n"
}
] | 3 |
GeneOpt/RockPhysics | https://github.com/GeneOpt/RockPhysics | 89a2ea7879c9634a6c91a888b351a186594995c9 | 2340dce1699f85861f79a2973df78b4f650c2103 | 7e04eb245bb3bdcb101c0ebfedcf502640231d52 | refs/heads/master | 2020-03-27T04:02:56.220655 | 2018-02-12T01:19:11 | 2018-02-12T01:19:11 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7975460290908813,
"alphanum_fraction": 0.7975460290908813,
"avg_line_length": 26.16666603088379,
"blob_id": "3192bc154d3fc36a30e19404dc74e370a7f1b5cd",
"content_id": "1cc3272d2073f0497124433854f4f7e410e1d0ef",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 163,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 6,
"path": "/README.md",
"repo_name": "GeneOpt/RockPhysics",
"src_encoding": "UTF-8",
"text": "# RockPhysics\nPython tools for storing rock physics data and for doing common calculations\n\nBased in part on the Stanford MATLAB (tm) rock physics library\n\nAlan Jackson\n"
},
{
"alpha_fraction": 0.44574692845344543,
"alphanum_fraction": 0.4574691951274872,
"avg_line_length": 30.10280418395996,
"blob_id": "fbb7adf4edb411aa19ab3fba3cf904d6a58c7374",
"content_id": "9d2485d1202f438289d4f32d498a1e14ae6ff079",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3327,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 107,
"path": "/ManageRPDB.py",
"repo_name": "GeneOpt/RockPhysics",
"src_encoding": "UTF-8",
"text": "#============================================================================\n# FILE NAME: ManageRPDB.py\n#----------------------------------------------------------------------------\n# PURPOSE:\n# Tools for creating and managing a Rock Properties Data Base\n#\n# Requirements:\n# 1. Read standard csv files for database creation\n# 2. Create hdf5 files for different databases\n# 3. Provide utilities for reading the database in various useful ways\n#\n#----------------------------------------------------------------------------\n# HISTORY:\n#\n# Date Author Notes\n# -------- ---------------- -----------------------------------------------\n# 08Mar15 A. K. Jackson Initial Implementation.\n#============================================================================\n\n# To do\n\nimport csv\nimport numpy as np\n\n#\n# define an object and fill it from a CSV file\n# regressions (regressions file)\n#\ndef openRegcsv(infile):\n # initialize some stuff\n# index = []\n# name = []\n# func = []\n# form = []\n# units = []\n# source = []\n# priority = []\n# a = []\n# b = []\n# c = []\n# r = []\n # open the csv file\n try:\n f = open(infile)\n except IOError:\n print(\"IO error opening \",infile)\n else:\n # Parse row into lists that will end up becoming numpy arrays\n pass\n return csv.reader(f)\n \n #\n# define an object and fill it from a CSV file\n# Minerals (minerals file)\n#\ndef openMincsv(infile):\n # initialize some stuff\n index = []\n name = []\n Rho = []\n Rhoerr = []\n VpK = []\n VpKerr = []\n VsMu = []\n VsMuerr = []\n eps = []\n epserr = []\n delta = []\n deltaerr = []\n gamma = []\n gammaerr = []\n priority = []\n source = []\n units = []\n # open the csv file\n try:\n f = open(infile)\n except IOError:\n print(\"IO error opening \",infile)\n else:\n # Parse row into lists that will end up becoming numpy arrays\n for row in csv.reader(f):\n # skip blank lines and comments\n if len(row) == 0 or len(row[0].strip()) == 0 or row[0][0] == \"#\":\n 
print(\"--1--\")\n else:\n print(\"--2--\")\n index.append(row[0].strip())\n name.append(row[1].strip())\n Rho.append(float(row[2].strip()))\n Rhoerr.append(float(row[3].strip()))\n VpK.append(float(row[4].strip()))\n VpKerr.append(float(row[5].strip()))\n VsMu.append(float(row[6].strip()))\n VsMuerr.append(float(row[7].strip()))\n eps.append(float(row[8].strip()))\n epserr.append(float(row[9].strip()))\n delta.append(float(row[10].strip()))\n deltaerr.append(float(row[11].strip()))\n gamma.append(float(row[12].strip()))\n gammaerr.append(float(row[13].strip()))\n priority.append(int(row[14].strip()))\n source.append(row[15].strip())\n units.append(row[16].strip())\n values = np.array((Rho, Rhoerr, VpK, VpKerr, VsMu, VsMuerr, eps, epserr, delta, deltaerr, gamma, gammaerr))\n print(values) \n return (index, name, priority, source, units, values)"
},
{
"alpha_fraction": 0.4021163880825043,
"alphanum_fraction": 0.4195011258125305,
"avg_line_length": 29.76744270324707,
"blob_id": "f76386ab7ee36ab7071e42aae0b48b4706779990",
"content_id": "4686800c7f34d2880afaee5c436f6cf6c209b182",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1323,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 43,
"path": "/TestInputs.py",
"repo_name": "GeneOpt/RockPhysics",
"src_encoding": "UTF-8",
"text": "#============================================================================\n# FILE NAME: TestInputs.csv\n#----------------------------------------------------------------------------\n# PURPOSE:\n# Test that RockValues.csv and RockReg.csv contain sensible values\n#\n# Requirements:\n# 1. Test values against sensible ranges \n# 1 < Rho < 3 (gm/cc)\n# 1500 < Vp < 8000 (m/s)\n# 300 < Vs < 6000 (m/s)\n# 2. Generate crossplots for comparison\n# Vp vs. Vs\n# Vp vs. Rho\n# Vp vs. Phi\n# Vs vs. Phi\n#\n#----------------------------------------------------------------------------\n# HISTORY:\n#\n# Date Author Notes\n# -------- ---------------- -----------------------------------------------\n# 08Mar15 A. K. Jackson Initial Implementation.\n#============================================================================\n\n# To do\n\n# import routines that will actually do all the work\n\n#import RockPhysics as RP\n#import RockPhysicsPlots as RPPlot\nimport ManageRPDB as RPDB\n\nminerals_csv = \"RockValues.csv\"\nregressions_csv = \"RockReg.csv\"\n\ntry:\n (index, name, priority, source, units, values) = RPDB.openMincsv(minerals_csv)\nexcept IOError:\n print(\"opencsv returns IOError\")\nelse:\n # do stuff\n pass\n"
}
] | 3 |
nedjmeddinebourqfq/holland | https://github.com/nedjmeddinebourqfq/holland | 58904c9b8f6bc30893d3acfa541b022246c29f2e | 80d95cd1a301aa82d5e75944e6b0575a4b0ec60c | 153b6a59165769c3261928240c587575cc81d786 | refs/heads/master | 2023-03-26T12:47:11.732791 | 2021-03-25T11:39:30 | 2021-03-25T11:39:30 | 351,389,131 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.6222222447395325,
"avg_line_length": 21.5,
"blob_id": "2cdcc1ca8ed11e0991f9270a61bcaae6ff779f08",
"content_id": "4e6c67d87b7107527e6e5ea1e7202b89b120a227",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 90,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 4,
"path": "/nedjmo.py",
"repo_name": "nedjmeddinebourqfq/holland",
"src_encoding": "UTF-8",
"text": "num = 15\nnum = 16\nsum = num1+num2\nprint(\"sum of {0} and {1} is 2\" .format(num1,num2,sum))\n"
}
] | 1 |
fantasylc/haiyuan | https://github.com/fantasylc/haiyuan | 8a299bd62e2a525a8f38869dea5cf7741b9c8480 | c245a3559ed913211f27158b75ca47d805f5fa58 | bfa7a8dcd54ca49ba243093f5e652471b8bf2471 | refs/heads/master | 2021-06-18T21:04:48.689848 | 2017-06-29T18:49:06 | 2017-06-29T18:49:06 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.539752185344696,
"alphanum_fraction": 0.5482705235481262,
"avg_line_length": 42.0444450378418,
"blob_id": "aa99a41e7e9870b081d2f9e92b270b311fc33fcb",
"content_id": "b93448833405aee02ecfd46d7563003be422c2ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4020,
"license_type": "no_license",
"max_line_length": 182,
"num_lines": 90,
"path": "/shangcheng/migrations/0014_auto_20160604_2213.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-06-04 14:13\nfrom __future__ import unicode_literals\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ('shangcheng', '0013_product_unit'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='AboutUs',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('content', models.TextField(default='', verbose_name='关于我们')),\n ],\n options={\n 'verbose_name': '关于我们',\n 'verbose_name_plural': '关于我们',\n },\n ),\n migrations.CreateModel(\n name='Cartitem',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('quantity', models.IntegerField(default=1, verbose_name='数量')),\n ('sum_price', models.FloatField(default=0.0, verbose_name='小计')),\n ('date_added', models.DateTimeField(auto_now_add=True)),\n ],\n options={\n 'verbose_name': '购物车头目',\n 'verbose_name_plural': '购物车头目',\n 'ordering': ['date_added'],\n },\n ),\n migrations.CreateModel(\n name='Order',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('uid', models.CharField(default='', max_length=80, verbose_name='Token')),\n ('date_add', models.DateTimeField(auto_now_add=True, verbose_name='创建时间')),\n ('total_money', models.FloatField(default=0.0, verbose_name='总价')),\n ('user', models.ForeignKey(default=None, on_delete=django.db.models.deletion.CASCADE, related_name='orders', to=settings.AUTH_USER_MODEL, verbose_name='拥有者')),\n ],\n options={\n 'verbose_name': '订单',\n 'verbose_name_plural': '订单',\n },\n ),\n migrations.CreateModel(\n name='TopProductCategory',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n 
('name', models.CharField(max_length=20, verbose_name='顶级分类名')),\n ('index', models.IntegerField(default=1, verbose_name='排序')),\n ],\n options={\n 'verbose_name': '商品顶级分类',\n 'verbose_name_plural': '商品顶级分类',\n 'ordering': ['index', 'name'],\n },\n ),\n migrations.AlterField(\n model_name='product',\n name='status',\n field=models.CharField(choices=[('1', '上线'), ('0', '采购中')], default='', max_length=5, verbose_name='状态'),\n ),\n migrations.AlterField(\n model_name='productcategory',\n name='parent',\n field=models.ForeignKey(default=None, on_delete=django.db.models.deletion.CASCADE, related_name='childs', to='shangcheng.TopProductCategory', verbose_name='顶级分类'),\n ),\n migrations.AddField(\n model_name='cartitem',\n name='order',\n field=models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='items', to='shangcheng.Order', verbose_name='条目'),\n ),\n migrations.AddField(\n model_name='cartitem',\n name='product',\n field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='car', to='shangcheng.Product', verbose_name='商品'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6132513284683228,
"alphanum_fraction": 0.6254538297653198,
"avg_line_length": 32.36700439453125,
"blob_id": "26f3dc1d6fa66109d804c24e8c8643bdb3c8bfc8",
"content_id": "44222007e8769b81648f695e91e4d67f79560e3e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10296,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 297,
"path": "/shangcheng/models.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\nimport os\nfrom django.db import models\nfrom django.conf import settings\nimport uuid\nfrom django.utils.deconstruct import deconstructible\nSTATUS = {\n '0':'采购中',\n '1':'上线',\n}\n\n\n@deconstructible\nclass PathAndRename(object):\n\n def __init__(self, sub_path):\n self.path = sub_path\n\n def __call__(self, instance, filename):\n ext = filename.split('.')[-1]\n\n filename = '{}.{}'.format(uuid.uuid4().hex,ext)\n\n return os.path.join(self.path, filename)\n\n@deconstructible\nclass ThumPathAndRename(object):\n\n def __init__(self, sub_path):\n self.path = sub_path\n\n def __call__(self, instance, filename):\n ext = filename.split('.')[-1]\n\n filename = '{}.{}'.format(uuid.uuid4().hex+'_thum',ext)\n\n return os.path.join(self.path, filename)\n\n\n\n\n\nclass string_with_title(str):\n def __new__(cls, value, title):\n instance = str.__new__(cls, value)\n instance._title = title\n return instance\n\n def title(self):\n return self._title\n\n __copy__ = lambda self: self\n __deepcopy__ = lambda self, memodict: self\n\n\nclass AboutUs(models.Model):\n content = models.TextField(default='',verbose_name='关于我们')\n\n class Meta:\n verbose_name_plural = verbose_name='关于我们'\n\n def __str__(self):\n return str(self.id)\n\nclass TopProductCategory(models.Model):\n name = models.CharField(max_length=20,verbose_name='顶级分类名')\n index = models.IntegerField(default=1,verbose_name='排序')\n\n class Meta:\n verbose_name = verbose_name_plural = '商品顶级分类'\n ordering = ['index','name']\n\n def __str__(self):\n return self.name\n\nclass ProductCategory(models.Model):\n uid = models.CharField(max_length=80,default='',verbose_name='Token')\n name = models.CharField(max_length=15,verbose_name='产品分类名')\n index = models.IntegerField(default=1,verbose_name='排序')\n parent = models.ForeignKey(TopProductCategory,default=None,related_name='childs',verbose_name='顶级分类')\n\n\n def save(self, *args,**kwargs):\n self.uid = 
uuid.uuid5(uuid.NAMESPACE_DNS,str(self.pk))\n super(ProductCategory,self).save(*args,**kwargs)\n\n class Meta:\n verbose_name = verbose_name_plural = '商品分类'\n ordering = ['index','name']\n\n\n\n def __str__(self):\n\n return self.name\n\nclass Brand(models.Model):\n name = models.CharField(max_length=30,default='', verbose_name='品牌名称')\n index = models.IntegerField(default=1,verbose_name='排列顺序')\n\n class Meta:\n verbose_name = '品牌'\n verbose_name_plural = verbose_name\n ordering = ['index',]\n\n def __str__(self):\n return self.name\n\n\n\nclass Product(models.Model):\n #id = models.AutoField(primary_key=True)\n uid = models.CharField(max_length=80,default='',verbose_name='Token')\n category = models.ForeignKey(ProductCategory,related_name='products',verbose_name='产品分类')\n brand = models.ForeignKey(Brand,verbose_name='品牌',blank=True,null=True)\n name = models.CharField(max_length=50,default='',verbose_name='商品名')\n desc = models.CharField(max_length=100,default='',verbose_name='商品简介')\n detail = models.TextField(default='',verbose_name='商品详情')\n unit = models.CharField(default='',max_length=10,verbose_name='单位')\n color = models.CharField(max_length=10,default='',null=True,blank=True,verbose_name='颜色')\n sales = models.IntegerField(default=0,verbose_name='销量')\n status = models.CharField(max_length=5,default='',choices=STATUS.items(),verbose_name='状态')\n add_time = models.DateTimeField(auto_now_add=True)\n is_tuijian = models.BooleanField(default=False,verbose_name='是否首页推荐')\n index = models.IntegerField(default=1,verbose_name='排序')\n\n\n\n # version = models.ManyToManyField(Version,verbose_name='版本')\n number = models.IntegerField(default=0,verbose_name='库存')\n img_show = models.ImageField(upload_to=PathAndRename(\"product/show/\"),\n null=True,blank=True,editable=True,verbose_name='展示图片地址')\n img_detail_1 = models.ImageField(upload_to=PathAndRename(\"product/detail/\"),\n null=True,blank=True,editable=True,verbose_name='详情图片地址1')\n img_detail_2 = 
models.ImageField(upload_to=PathAndRename(\"product/detail/\"),\n null=True,blank=True,editable=True,verbose_name='详情图片地址2')\n img_detail_3= models.ImageField(upload_to=PathAndRename(\"product/detail/\"),\n null=True,blank=True,editable=True,verbose_name='详情图片地址3')\n thum_width = models.PositiveIntegerField(default=50,verbose_name='缩略图宽度')\n thum_height = models.PositiveIntegerField(default=50,verbose_name='缩略图高度')\n img_thum = models.ImageField(upload_to=ThumPathAndRename(\"product/thum/\"),width_field='thum_width',\n height_field='thum_height',\n null=True,blank=True,editable=True,verbose_name='缩略图')\n\n class Meta:\n verbose_name = '商品'\n verbose_name_plural = verbose_name\n ordering = ['id']\n\n\n def img_show_tag(self):\n if self.img_show:\n return '<img src=\"/media/%s\" width=100,height=100/>' %(self.img_show)\n\n def img_d1_tag(self):\n if self.img_detail_1:\n return '<img src=\"/media/%s\" width=100,height=100/>' %(self.img_detail_1)\n\n def img_d2_tag(self):\n if self.img_detail_2:\n return '<img src=\"/media/%s\" width=100,height=100/>' %(self.img_detail_2)\n\n def img_d3_tag(self):\n if self.img_detail_3:\n return '<img src=\"/media/%s\" width=100,height=100/>' %(self.img_detail_3)\n\n def img_thum_tag(self):\n if self.img_thum:\n return '<img src=\"/media/%s\" width=100,height=100/>' %(self.img_thum)\n\n img_show_tag.short_description = '列表展示图'\n img_show_tag.allow_tags = True\n img_d1_tag.short_description = '详情图1'\n img_d1_tag.allow_tags = True\n img_d2_tag.short_description = '详情图2'\n img_d2_tag.allow_tags = True\n img_d3_tag.short_description = '详情图3'\n img_d3_tag.allow_tags = True\n img_thum_tag.short_description = '缩略图'\n img_thum_tag.allow_tags = True\n\n\n def save(self,*args,**kwargs):\n self.uid = uuid.uuid5(uuid.NAMESPACE_DNS,str(self.pk))\n if self.id is not None:\n current = Product.objects.get(id=self.id)\n if self.img_show != current.img_show:\n # Delete old image and thumbnail\n current.img_show.delete(save=False)\n # Set save=False 
because it's saving now.\n if self.img_detail_1 != current.img_detail_1:\n current.img_detail_1.delete(save=False)\n if self.img_detail_2 != current.img_detail_2:\n current.img_detail_2.delete(save=False)\n if self.img_detail_3 != current.img_detail_3:\n current.img_detail_3.delete(save=False)\n if self.img_thum != current.img_thum:\n current.img_thum.delete(save=False)\n super(Product,self).save(*args,**kwargs)\n\n\n def __str__(self):\n return self.name\n\n\n\nclass Version(models.Model):\n product = models.ForeignKey(Product,related_name='versions',verbose_name='商品')\n name = models.CharField(max_length=30,default='',verbose_name='商品版本')\n old_price = models.FloatField(default=0.0,verbose_name='原价')\n discount = models.FloatField(default=1,verbose_name='折扣')\n now_price = models.FloatField(default=0,verbose_name='现价')\n\n class Meta:\n verbose_name_plural = verbose_name = '商品版本'\n\n def __str__(self):\n return self.name\n\n\n#订单类\nclass Order(models.Model):\n uid = models.CharField(max_length=80,default='',verbose_name='Token')\n user = models.ForeignKey(settings.AUTH_USER_MODEL,related_name='orders',default=None,verbose_name='拥有者')\n date_add = models.DateTimeField(auto_now_add=True,verbose_name='创建时间')\n total_money = models.FloatField(default=0.0,verbose_name='总价')\n\n def save(self,*args,**kwargs):\n super(Order,self).save(*args,**kwargs)\n self.uid = uuid.uuid5(uuid.NAMESPACE_DNS,str(self.pk))\n super(Order,self).save(*args,**kwargs)\n\n class Meta:\n verbose_name_plural = verbose_name='订单'\n\n def __str__(self):\n return str(self.id)\n\n\n\n\n\nclass Cartitem(models.Model):\n product = models.ForeignKey(Product,related_name='car',verbose_name='商品')\n quantity = models.IntegerField(default=1,verbose_name='数量')\n sum_price = models.FloatField(default=0.0,verbose_name='小计')\n date_added = models.DateTimeField(auto_now_add=True)\n order = models.ForeignKey(Order,default=None,blank=True,null=True,\n related_name='items', verbose_name='条目')\n\n class Meta:\n 
verbose_name_plural = verbose_name = '购物车头目'\n ordering = ['date_added']\n\n def save(self,*args,**kwargs):\n self.sum_price = self.product.versions.all()[0].now_price*self.quantity\n super(Cartitem,self).save(*args,**kwargs)\n\n\n def add_quantity(self,quantity):\n self.quantity = self.quantity+int(quantity)\n self.save()\n\n\n\n def __str__(self):\n return '%s--%d'.format(self.product.name,self.quantity)\n#购物车\nclass Cart(object):\n def __init__(self):\n self.items = []\n self._total_price = 0.0\n\n def add(self,cartitem):\n for item in self.items:\n if item.product.id == cartitem.product.id:\n item.quantity+=cartitem.quantity\n item.sum_price+=cartitem.sum_price\n return\n else:\n print('addcar',cartitem)\n self.items.append(cartitem)\n return\n self.items.append(cartitem)\n\n @property\n def total_price(self):\n money = 0.0\n for item in self.items:\n money = money+item.sum_price\n return money\n\n @total_price.setter\n def total_price(self, value):\n\n self._total_price = value\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5109717845916748,
"alphanum_fraction": 0.5673981308937073,
"avg_line_length": 24.520000457763672,
"blob_id": "18ee8a3837c9ff91ea8ece7a00f98318ea021a3c",
"content_id": "cb2b7dcaf3e9303b713a9123a064d69dcec68f55",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 642,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 25,
"path": "/account/migrations/0007_auto_20160518_2117.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-18 13:17\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('account', '0006_auto_20160518_1733'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='user',\n name='yuanxi',\n field=models.CharField(default='', max_length=30),\n ),\n migrations.AlterField(\n model_name='user',\n name='role',\n field=models.CharField(default='', max_length=10, verbose_name='身份'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5196629166603088,
"alphanum_fraction": 0.5716292262077332,
"avg_line_length": 27.479999542236328,
"blob_id": "128362f643825a5e1985c0669362677a4e2ade88",
"content_id": "176fcbc6b405905f81309ae68d5aca1c47d37f08",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 726,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 25,
"path": "/shangcheng/migrations/0015_auto_20160604_2256.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-06-04 14:56\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0014_auto_20160604_2213'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='productcategory',\n name='uid',\n field=models.CharField(default='', max_length=80, verbose_name='Token'),\n ),\n migrations.AlterField(\n model_name='product',\n name='status',\n field=models.CharField(choices=[('0', '采购中'), ('1', '上线')], default='', max_length=5, verbose_name='状态'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6643598675727844,
"alphanum_fraction": 0.6660899519920349,
"avg_line_length": 33.05882263183594,
"blob_id": "010cb7f9eca6e87e4bbbc00aca4a865e5e74c1c5",
"content_id": "87a59fbc213e8a81a245d19ed364b47e73f0551a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 578,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 17,
"path": "/account/urls.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\nfrom django.conf.urls import patterns,url\nfrom django.conf import settings\nfrom . import views\n\nurlpatterns = [\n url(r'^login/$',views.user_login),\n url(r'^register/$',views.register),\n url(r'^logout/$',views.user_logout),\n url(r'^confirm/(?P<active_code>[\\w\\-]+)/$',views.active_user),\n url(r'^userinfo/$',views.userinfo),\n url(r'^changepasswd/$',views.changepasswd),\n url(r'^forgetpassword/$',views.forgetpassword),\n url(r'^resetpassword/(?P<active_code>[\\w\\-]+)/$',views.resetpassword),\n url(r'^confirmreset/$',views.confirmreset),\n\n]"
},
{
"alpha_fraction": 0.5929648280143738,
"alphanum_fraction": 0.6193467378616333,
"avg_line_length": 37.82926940917969,
"blob_id": "e2c6342200db04c3865b4dca3bf696957318e238",
"content_id": "4125529191998a1477a8de29cf690a342b269308",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1646,
"license_type": "no_license",
"max_line_length": 176,
"num_lines": 41,
"path": "/shangcheng/migrations/0010_auto_20160520_1709.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-20 09:09\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\nimport shangcheng.models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0009_auto_20160520_1626'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='product',\n name='img_thum',\n field=models.ImageField(blank=True, height_field=50, null=True, upload_to=shangcheng.models.ThumPathAndRename('product/thum/'), verbose_name='缩略图', width_field=50),\n ),\n migrations.AlterField(\n model_name='product',\n name='img_detail_1',\n field=models.ImageField(blank=True, null=True, upload_to=shangcheng.models.PathAndRename('product/'), verbose_name='详情图片地址1'),\n ),\n migrations.AlterField(\n model_name='product',\n name='img_detail_2',\n field=models.ImageField(blank=True, null=True, upload_to=shangcheng.models.PathAndRename('product/'), verbose_name='详情图片地址2'),\n ),\n migrations.AlterField(\n model_name='product',\n name='img_detail_3',\n field=models.ImageField(blank=True, null=True, upload_to=shangcheng.models.PathAndRename('product/'), verbose_name='详情图片地址3'),\n ),\n migrations.AlterField(\n model_name='product',\n name='img_show',\n field=models.ImageField(blank=True, null=True, upload_to=shangcheng.models.PathAndRename('product/'), verbose_name='展示图片地址'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5304877758026123,
"alphanum_fraction": 0.5995935201644897,
"avg_line_length": 23.600000381469727,
"blob_id": "5f680502b6d57743177009ef9befa50f78904e31",
"content_id": "ec2bf10baf0f29872482fd494ce42951047b3e78",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 498,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 20,
"path": "/account/migrations/0003_auto_20160518_0013.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-17 16:13\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('account', '0002_auto_20160518_0012'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='user',\n name='phone',\n field=models.CharField(default='', max_length=20, unique=True, verbose_name='手机号'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6365453004837036,
"alphanum_fraction": 0.6372113823890686,
"avg_line_length": 34.472442626953125,
"blob_id": "8e3cd6210d283ca7061d0c722fc1b4098df5445c",
"content_id": "627a6ff4cbe6ab7e9273fc01be225e4b78addc7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4622,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 127,
"path": "/shangcheng/views.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\nfrom django.shortcuts import render,render_to_response\nfrom .models import Product,AboutUs\nfrom django.http import HttpResponse\nfrom django.contrib.auth.decorators import login_required\n# Create your views here.\nfrom .models import *\nfrom django.db.models import Q\nfrom urllib.parse import urlparse\nimport json\ndef index(request):\n pd_tuijians = Product.objects.filter(is_tuijian = True).all()\n topfenleis = TopProductCategory.objects.all()\n # for product in pd_tuijians:\n # product.versions = product.versions.all()\n return render(request,'index.html',locals())\n #return render_to_response('index.html',{'pd_tuijians':pd_tuijians})\n\ndef aboutus(request):\n about = AboutUs.objects.all().last()\n return render(request,'aboutus.html',locals())\n\ndef product(request,uid=None):\n product = Product.objects.get(uid=uid)\n return render(request,'store/product-detail.html',locals())\n\n@login_required()\ndef view_cart(request):\n if request.user.is_authenticated():\n redirect_to = request.GET['next']\n\n cart = request.session.get(request.user.id,None)\n return render(request,'store/viewcart.html',locals())\n\n\n\n#添加购物车\ndef add_cart(request):\n if request.user.is_authenticated():\n pd_id = request.POST.get('pd_id',None)\n pd_quantity = int(request.POST.get('pd_quantity',None))\n\n try:\n product = Product.objects.get(pk=pd_id)\n sum_price = product.versions.all()[0].now_price*pd_quantity\n cartitem = Cartitem(product=product,quantity=pd_quantity,sum_price=sum_price)\n\n except Product.DoesNotExist:\n return HttpResponse(json.dumps({'status':'error','message':'您购买的商品不存在'}))\n\n cart = request.session.get(request.user.id,None)\n if not cart:\n cart = Cart()\n cart.add(cartitem)\n print('car:',cart.items)\n request.session[request.user.id] = cart\n return HttpResponse(json.dumps({'status':'success','message':'添加购物车成功'}))\n else:\n print('cart',cart)\n print('pdname',cartitem.product.name)\n cart.add(cartitem)\n print('car:',cart.items)\n 
request.session[request.user.id] = cart\n return HttpResponse(json.dumps({'status':'success','message':'添加购物车成功2!'}))\n\n else:\n return HttpResponse(json.dumps({'status':'error','message':'您需要先登陆!'}))\n\ndef clear_cart(request):\n if request.user.is_authenticated():\n cart = Cart()\n request.session[request.user.id] = cart\n return HttpResponse(json.dumps({'status':'success','message':'清除购物车成功!'}))\n\n else:\n return HttpResponse(json.dumps({'status':'error','message':'请先登陆!'}))\n\n\ndef del_cart(request):\n if request.user.is_authenticated():\n index = request.POST.get('index',None)\n if index:\n index = int(index)\n cart = request.session.get(request.user.id,None)\n del cart.items[index]\n request.session[request.user.id] = cart\n return HttpResponse(json.dumps({'status':'success','message':'删除成功!'}))\n return HttpResponse(json.dumps({'status':'error','message':'cuola!'}))\n\n else:\n return HttpResponse(json.dumps({'status':'error','message':'请先登陆!'}))\n\n@login_required()\ndef submitorder(request):\n if request.method == 'POST':\n cart = request.session.get(request.user.id,None)\n if cart.items:\n order = Order(user=request.user)\n order.save()\n for item in cart.items:\n item.order = order\n item.save()\n\n order.total_money = cart.total_price\n order.save()\n cart = Cart()\n request.session[request.user.id] = cart\n return HttpResponse(json.dumps({'status':'success','message':'chenggongla'}))\n\n\n return render(request,'store/submitorder.html',locals())\n\n@login_required()\ndef myorders(request):\n orders = request.user.orders.all()\n return render(request,'store/myorders.html',locals())\n\ndef productcategory(request,uid = ''):\n category = ProductCategory.objects.get(uid=uid)\n products = category.products.all()\n return render(request,'store/categorys.html',locals())\n\ndef search(request):\n if request.method == 'GET':\n word = request.GET.get('word','')\n products = Product.objects.only('name','desc').filter(Q(name__icontains=word) | 
Q(desc__icontains=word))\n return render(request,'store/searchs.html',locals())"
},
{
"alpha_fraction": 0.5206422209739685,
"alphanum_fraction": 0.5619266033172607,
"avg_line_length": 28.066667556762695,
"blob_id": "f9541e4dc2095d4e7ab28a11e6f5759cdebd02c7",
"content_id": "ea19ba23eaed91511feab2b5465c796cb7ccf233",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 902,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 30,
"path": "/shangcheng/migrations/0008_auto_20160520_1523.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-20 07:23\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0007_auto_20160520_1505'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='product',\n name='index',\n field=models.IntegerField(default=1, verbose_name='排序'),\n ),\n migrations.AddField(\n model_name='product',\n name='is_tuijian',\n field=models.BooleanField(default=False, verbose_name='是否首页推荐'),\n ),\n migrations.AlterField(\n model_name='product',\n name='status',\n field=models.CharField(choices=[('1', '上线'), ('0', '采购中')], default='', max_length=5, verbose_name='状态'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5884353518486023,
"alphanum_fraction": 0.6428571343421936,
"avg_line_length": 27,
"blob_id": "93e321d8a8d655175f780e78ad692cc99b4e16a9",
"content_id": "e0d798960d28755c1c2e7db0559be0e97507012a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 592,
"license_type": "no_license",
"max_line_length": 150,
"num_lines": 21,
"path": "/shangcheng/migrations/0004_auto_20160520_0039.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-19 16:39\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0003_auto_20160520_0013'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='version',\n name='product',\n field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='versions', to='shangcheng.Product', verbose_name='商品'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5330188870429993,
"alphanum_fraction": 0.5676100850105286,
"avg_line_length": 25.5,
"blob_id": "1fccb866530801a4335dd2f4cb9958b599426d0b",
"content_id": "738e5003491cccf5d051684a5d56be478fd5f94f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 650,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 24,
"path": "/account/migrations/0002_auto_20160518_0012.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-17 16:12\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('account', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterModelOptions(\n name='user',\n options={'ordering': ['phone'], 'verbose_name': '用户', 'verbose_name_plural': '用户'},\n ),\n migrations.AddField(\n model_name='user',\n name='phone',\n field=models.CharField(default='', max_length=20, verbose_name='手机号'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5308641791343689,
"alphanum_fraction": 0.5788751840591431,
"avg_line_length": 28.15999984741211,
"blob_id": "ebf1c6c2a72b2f19eb691127515e968f918319f7",
"content_id": "a60b3432fa67aae63a951219a34f7c22535d1591",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 743,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 25,
"path": "/shangcheng/migrations/0012_auto_20160520_1749.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-20 09:49\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0011_auto_20160520_1742'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='product',\n name='id',\n field=models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),\n ),\n migrations.AlterField(\n model_name='product',\n name='status',\n field=models.CharField(choices=[('0', '采购中'), ('1', '上线')], default='', max_length=5, verbose_name='状态'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5350772738456726,
"alphanum_fraction": 0.5945303440093994,
"avg_line_length": 29.035715103149414,
"blob_id": "f89b45c9417dc33e95e01bb097c2ee1888dab06a",
"content_id": "ecb19098111147a2e66715fd172b4ac975226d41",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 855,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 28,
"path": "/shangcheng/migrations/0007_auto_20160520_1505.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-20 07:05\nfrom __future__ import unicode_literals\n\nimport datetime\nfrom django.db import migrations, models\nfrom django.utils.timezone import utc\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0006_auto_20160520_1432'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='product',\n name='add_time',\n field=models.DateTimeField(auto_now_add=True, default=datetime.datetime(2016, 5, 20, 7, 5, 9, 22429, tzinfo=utc)),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name='product',\n name='status',\n field=models.CharField(choices=[('0', '采购中'), ('1', '上线')], default='', max_length=5, verbose_name='状态'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5136896967887878,
"alphanum_fraction": 0.5149934887886047,
"avg_line_length": 26.39285659790039,
"blob_id": "73e234f4b94f1a72bdf73417f8d641fda37a23ff",
"content_id": "982475d6a83c827c4820ad5f4c9b0110574b1546",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 793,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 28,
"path": "/account/auth.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\n\nfrom .models import User\n\nclass MyBackend(object):\n '自定义用户认证,实现学号登陆'\n def authenticate(self,phone=None, password=None,active_code=None):\n try:\n user = User.objects.get(phone=phone)\n if user:\n if user.check_password(password):\n return user\n elif user.active_code==active_code:\n return user\n else:\n return None\n except User.DoesNotExist:\n return None\n\n def get_user(self,user_id):\n try:\n user = User.objects.get(pk=user_id)\n if user:\n return user\n return None\n except User.DoesNotExist:\n return None\n"
},
{
"alpha_fraction": 0.5425457954406738,
"alphanum_fraction": 0.552283763885498,
"avg_line_length": 48.574710845947266,
"blob_id": "26f3c32eefd542d1e194cb5fc04a02f48d8ceef5",
"content_id": "4d638f31240c1f46f87d685bfcb7734ba1cfcf56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4517,
"license_type": "no_license",
"max_line_length": 182,
"num_lines": 87,
"path": "/shangcheng/migrations/0001_initial.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-19 15:44\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='Brand',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(default='', max_length=30, verbose_name='品牌名称')),\n ('index', models.IntegerField(default=1, verbose_name='排列顺序')),\n ],\n options={\n 'verbose_name': '品牌',\n 'verbose_name_plural': '品牌',\n 'ordering': ['index'],\n },\n ),\n migrations.CreateModel(\n name='Product',\n fields=[\n ('id', models.AutoField(primary_key=True, serialize=False)),\n ('token', models.CharField(default='', max_length=50, verbose_name='Token')),\n ('name', models.CharField(default='', max_length=50, verbose_name='商品名')),\n ('desc', models.CharField(default='', max_length=100, verbose_name='商品简介')),\n ('detail', models.TextField(default='', verbose_name='商品详情')),\n ('color', models.CharField(blank=True, default='', max_length=10, null=True, verbose_name='颜色')),\n ('sales', models.IntegerField(default=0, verbose_name='销量')),\n ('number', models.IntegerField(default=0, verbose_name='库存')),\n ('img_show', models.ImageField(default='product/default.jpg', upload_to='product/show/', verbose_name='展示图片地址')),\n ('img_detail_1', models.ImageField(default='product/default.jpg', upload_to='product/detail/', verbose_name='详情图片地址')),\n ('img_detail_2', models.ImageField(default='product/default.jpg', upload_to='product/detail/', verbose_name='详情图片地址')),\n ('img_detail_3', models.ImageField(default='product/default.jpg', upload_to='product/detail/', verbose_name='详情图片地址')),\n ('brand', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='shangcheng.Brand', verbose_name='品牌')),\n ],\n 
options={\n 'verbose_name': '商品',\n 'verbose_name_plural': '商品',\n 'ordering': ['id'],\n },\n ),\n migrations.CreateModel(\n name='ProductCategory',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(max_length=15, verbose_name='产品分类名')),\n ('index', models.IntegerField(default=1, verbose_name='排序')),\n ('parent', models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.CASCADE, to='shangcheng.ProductCategory', verbose_name='上级分类')),\n ],\n options={\n 'verbose_name': '商品分类',\n 'verbose_name_plural': '商品分类',\n 'ordering': ['index', 'name'],\n },\n ),\n migrations.CreateModel(\n name='Version',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(default='', max_length=30, verbose_name='商品版本')),\n ('old_prize', models.FloatField(default=0.0, verbose_name='原价')),\n ('discount', models.FloatField(default=1, verbose_name='折扣')),\n ('now_prize', models.FloatField(default=0, verbose_name='现价')),\n ('product', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='shangcheng.Product', verbose_name='商品')),\n ],\n options={\n 'verbose_name': '商品版本',\n 'verbose_name_plural': '商品版本',\n },\n ),\n migrations.AddField(\n model_name='product',\n name='category',\n field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='shangcheng.ProductCategory', verbose_name='产品分类'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6412971615791321,
"alphanum_fraction": 0.6492389440536499,
"avg_line_length": 34.13953399658203,
"blob_id": "4270b8c195368e6218c5dcf219160d1e7bd0044e",
"content_id": "b3d6eca7ec48ee6f9e89a930d62b865d400b8db8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1511,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 43,
"path": "/shangcheng/admin.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\nfrom django.contrib import admin\nfrom .models import *\n\nclass VersionAdminInline(admin.TabularInline):\n model = Version\n extra = 0\n\nclass CartitemAdminInline(admin.TabularInline):\n model = Cartitem\n extra = 0\n\nclass ProductAdmin(admin.ModelAdmin):\n inlines = (VersionAdminInline,)\n list_display = ('name','category','number',)\n fieldsets = (\n ('None',{'fields':('id','category','name','brand','unit',\n 'desc','detail','sales','status','is_tuijian','number','img_show','img_show_tag',\n 'img_detail_1','img_d1_tag','img_detail_2','img_d2_tag',\n 'img_detail_3','img_d3_tag','img_thum','img_thum_tag')}),\n )\n readonly_fields = ('id','img_show_tag','img_d1_tag','img_d2_tag','img_d3_tag','img_thum_tag')\n\nclass OrderAdmin(admin.ModelAdmin):\n inlines = (CartitemAdminInline,)\n list_display = ('id','user','date_add')\n fields = ('id','user','total_money','date_add')\n readonly_fields = ('id','date_add',)\n\nclass ProductCategoryAdmin(admin.ModelAdmin):\n list_display = ('name','parent')\n fields = ('id','name','index','parent')\n readonly_fields = ('id',)\n\nadmin.site.register(AboutUs)\nadmin.site.register(TopProductCategory)\nadmin.site.register(ProductCategory,ProductCategoryAdmin)\nadmin.site.register(Version)\nadmin.site.register(Brand)\nadmin.site.register(Cartitem)\nadmin.site.register(Order,OrderAdmin)\nadmin.site.register(Product,ProductAdmin)\n"
},
{
"alpha_fraction": 0.3909962475299835,
"alphanum_fraction": 0.3930804431438446,
"avg_line_length": 33.264286041259766,
"blob_id": "5447eee56ac361fbab660027a4a9e94b4c59e3f1",
"content_id": "9fb4e554b18262907a267cf2d9d5f8188fdbfdd5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 4798,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 140,
"path": "/static/js/shangcheng/store.js",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "/**\n * Created by liuchao on 16-5-20.\n */\n$(document).ready(function(){\n\n $(\".img-thum\").mouseover(function(){\n\n $(\".product-img-main img\").attr('src',$(this).children('img').attr('src'));\n });\n\n $(\"#product-add\").click(function(){\n $(\"#pd-quantity\").val(String(Number($(\"#pd-quantity\").val())+1))\n });\n\n $(\"#product-sub\").click(function(){\n if($(\"#pd-quantity\").val()>1){\n $(\"#pd-quantity\").val(String(Number($(\"#pd-quantity\").val())-1))\n }\n });\n\n $(\"#add-to-cart\").click(function(){\n var pd_id = $(\"#pd-id\").val();\n var pd_quantity = $(\"#pd-quantity\").val();\n if(pd_id && pd_quantity){\n $.ajax({\n type:'POST',\n url:'/add_cart/',\n data:{'pd_id':pd_id,'pd_quantity':pd_quantity},\n dataType:'json',\n beforeSend:function(xhr){\n xhr.setRequestHeader(\"X-CSRFToken\", $.cookie('csrftoken'));\n },\n success:function(data,textStatus){\n var status = data['status'];\n var message = data['message'];\n if(status == 'error'){\n alert(message);\n }\n else if(status == 'success'){\n alert(message);\n location.reload();\n }\n },\n error:function(XMLHttpRequest, textStatus, errorThrown){\n alert(XMLHttpRequest.responseText);\n }\n })\n }\n });\n\n $(\"#cart-clear\").click(function(){\n $.ajax({\n type:'POST',\n url:'/clearcart/',\n dataType:'json',\n beforeSend:function(xhr){\n xhr.setRequestHeader(\"X-CSRFToken\", $.cookie('csrftoken'));\n },\n success:function(data,textStatus){\n var status = data['status'];\n var message = data['message'];\n if(status == 'error'){\n alert(message)\n }\n else if(status == 'success'){\n location.reload();\n }\n },\n error:function(XMLHttpRequest, textStatus, errorThrown){\n alert(XMLHttpRequest.responseText);\n }\n })\n });\n\n $(\".cart-del\").click(function(){\n var index = $(this).attr(\"data-index\");\n if(index){\n $.ajax({\n type:'POST',\n url:'/delcart/',\n data:{\"index\":index},\n dataType:'json',\n beforeSend:function(xhr){\n xhr.setRequestHeader(\"X-CSRFToken\", 
$.cookie('csrftoken'));\n },\n success:function(data,textStatus){\n var status = data['status'];\n var message = data['message'];\n if(status == 'error'){\n alert(message)\n }\n else if(status == 'success'){\n location.reload();\n }\n },\n error:function(XMLHttpRequest, textStatus, errorThrown){\n alert(XMLHttpRequest.responseText);\n }\n })\n }\n\n });\n\n $(\"#cart-submit\").click(function(){\n $.ajax({\n type:'POST',\n url:'/submitorder/',\n dataType:'json',\n beforeSend:function(xhr){\n xhr.setRequestHeader(\"X-CSRFToken\", $.cookie('csrftoken'));\n },\n success:function(data,textStatus){\n var status = data['status'];\n var message = data['message'];\n if(status == 'error'){\n alert(message)\n }\n else if(status == 'success'){\n\n location.replace('/submitorder/');\n }\n },\n error:function(XMLHttpRequest, textStatus, errorThrown){\n alert(XMLHttpRequest.responseText);\n }\n })\n });\n\n $('.order-item-main-b').each(function(){\n $(this).css('height',$(this).prev().height()+8);\n });\n\n $('.order-item-main-c').each(function(){\n $(this).css('height',$(this).prev().height()+8);\n });\n\n $('.order-item-main-d').each(function(){\n $(this).css('height',$(this).prev().height());\n });\n});\n\n"
},
{
"alpha_fraction": 0.5973413586616516,
"alphanum_fraction": 0.6003430485725403,
"avg_line_length": 38.744319915771484,
"blob_id": "74f25e449cd683aa53ed35258d7afa3f0b453d7d",
"content_id": "7e5b4d8566d31f1eb940a08d4da05390b7dd6566",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7342,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 176,
"path": "/account/views.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\nfrom django.shortcuts import render\n\n# Create your views here.\n\nfrom django.shortcuts import render\nfrom django.contrib.auth import authenticate,login,logout,update_session_auth_hash\nfrom django.http import HttpResponse,HttpResponseRedirect,HttpResponseNotFound\nimport json\nfrom .models import User\nimport uuid\nfrom .utils import sendconfirmemail\nfrom django.contrib import messages\nimport time\nfrom django.contrib.auth.decorators import login_required\nfrom .config import ROLE,YUANXI\nfrom django.contrib.auth.forms import PasswordChangeForm\n\ndef user_login(request):\n if request.method == \"POST\":\n\n try:\n phone = request.POST.get('phone','')\n password = request.POST.get('password','')\n user = authenticate(phone=phone,password=password)\n if user:\n if user.is_active:\n login(request,user)\n info = {'status':'success','message':'登录成功'}\n return HttpResponse(json.dumps(info))\n else:\n return HttpResponse(json.dumps({'status':'failure','message':'邮箱还没有验证,请验证你的邮箱!'}))\n\n else:\n info = {'status':'failure','message':'手机号或密码错误'}\n return HttpResponse(json.dumps(info))\n\n except Exception as e:\n return HttpResponse(json.dumps({'status':'failure','message':'登录异常,请再试一次!'}))\n\n return render(request,'account/login2.html',locals())\n\n\ndef user_logout(request):\n try:\n logout(request)\n return HttpResponse('注销成功!')\n except Exception as e:\n return HttpResponse('注销异常')\n\n\ndef register(request):\n if request.method == 'POST':\n errors = []\n phone = request.POST.get('phone','')\n email = request.POST.get('email','')\n password = request.POST.get('password','')\n active_code = str(uuid.uuid5(uuid.NAMESPACE_DNS,phone+str(time.time())))\n user = User.objects.filter(phone = phone).first()\n if user:\n if user.is_active:\n return HttpResponse(json.dumps({'status':'failure','message':'手机号已注册,请直接登陆或用其他手机号注册!'}))\n #return HttpResponse(user)\n else:\n return HttpResponse(json.dumps({'status':'failure','message':'手机号已经注册,请验证邮箱!'}))\n 
try:\n sendconfirmemail(email=email,active_code=active_code,other='confirm')\n\n\n except Exception as e:\n return HttpResponse('邮件发送出错,注册失败!',status=500)\n\n User.objects.create_user(phone,email,password=password,active_code=active_code)\n return HttpResponse(json.dumps({'status':'success','message':'验证邮件已发送给你,请查收验证邮箱!'}))\n\ndef active_user(request,active_code=None):\n try:\n user = User.objects.filter(active_code=active_code).first()\n if user:\n user.is_active=True\n user.save()\n user = authenticate(phone=user.phone,active_code=active_code)\n if user.is_active:\n login(request,user)\n messages.success(request,\"你的邮箱已经验证成功!\")\n user.active_code = None\n user.save()\n return HttpResponseRedirect('/')\n\n return HttpResponseNotFound('<h1>Page not found</h1>')\n except:\n return HttpResponseNotFound('<h2>没有查找到用户!</h2>')\n\n@login_required()\ndef userinfo(request):\n if request.method == 'POST':\n user = request.user\n user.email = request.POST.get('email','').replace('-','\\-').replace(\"'\",\"\")\n user.nickname = request.POST.get('nickname','').replace('-','\\-').replace(\"'\",\"\")\n user.address = request.POST.get('address','').replace('-','\\-').replace(\"'\",\"\")\n user.yuanxi = request.POST.get('yuanxi','').replace('-','\\-').replace(\"'\",\"\")\n user.shenfen = request.POST.get('shenfen','').replace('-','\\-').replace(\"'\",\"\")\n user.xuehao = request.POST.get('xuehao','').replace('-','\\-').replace(\"'\",\"\")\n user.realname = request.POST.get('realname','').replace('-','\\-').replace(\"'\",\"\")\n user.save()\n return HttpResponse(json.dumps({'status':'success','message':''}))\n\n passwdresetform = PasswordChangeForm(user=request.user)\n roles = ROLE\n yuanxis = YUANXI\n return render(request,'account/userinfo.html',locals())\n\n@login_required()\ndef changepasswd(request):\n if request.method == 'POST':\n error=[]\n form = PasswordChangeForm(user=request.user,data=request.POST)\n if form.is_valid():\n form.save()\n 
update_session_auth_hash(request,form.user)\n return HttpResponse(json.dumps({'status':'success','message':'修改成功'}),content_type='application/json')\n\n else:\n for k,v in form.error_messages.items():\n error.append(str(v))\n return HttpResponse(json.dumps({'status':'failure','message':error}),content_type='application/json')\n\ndef forgetpassword(request):\n if request.method == 'POST':\n email = request.POST.get('email',None)\n if email:\n try:\n user = User.objects.get(email=email)\n if user:\n active_code = str(uuid.uuid5(uuid.NAMESPACE_DNS,user.phone+str(time.time())))\n try:\n sendconfirmemail(email=email,active_code=active_code,other='resetpassword')\n except:\n return HttpResponse(json.dumps({'status':'failure','message':'系统错误!'}))\n user.active_code = active_code\n user.save()\n return HttpResponse(json.dumps({'status':'success','message':'邮件发送成功,请查收!'}))\n except User.DoesNotExist:\n return HttpResponse(json.dumps({'status':'failure','message':'此邮箱之前并未注册,请更换邮箱!'}))\n\n\n if request.method == 'GET':\n return render(request,'account/forgetpassword.html',locals())\n\n else:\n return HttpResponseNotFound('<h1>Page not found</h1>')\n\ndef resetpassword(request,active_code=None):\n\n return render(request,'account/resetpassword.html',locals())\n\ndef confirmreset(request):\n if request.method == 'POST':\n password1 = request.POST.get('password1',None)\n password2 = request.POST.get('password2',None)\n active_code = request.POST.get('active_code',None)\n if not (password1 and password2 and active_code):\n return HttpResponse(json.dumps({'status':'failure','message':'选项不能为空!'}))\n try:\n user = User.objects.get(active_code=active_code)\n user.set_password(password2)\n user.save()\n user = authenticate(phone=user.phone,active_code=active_code)\n if user.is_active:\n login(request,user)\n return HttpResponse(json.dumps({'status':'success','message':'重置密码成功!'}))\n except User.DoesNotExist:\n return 
HttpResponse(json.dumps({'status':'failure','message':'用户不存在!!'}))\n\n else:\n return HttpResponseNotFound('<h1>Page not found</h1>')\n\n"
},
{
"alpha_fraction": 0.35598376393318176,
"alphanum_fraction": 0.35598376393318176,
"avg_line_length": 34.25,
"blob_id": "89fa28df47aecd42447c44eba5d52cc4bf033ef3",
"content_id": "547aa7fe9d9291bd5653865dd1209684a094a49d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 998,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 28,
"path": "/shangcheng/templates/includes/nav_fenlei.html",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "<div class='sub-col'>\n <div class=\"category\">\n <div id=\"spfenlei\">\n 所有商品分类\n </div>\n <div id='hy_cate'>\n <ul id=\"hy_category\" class=\"item\">\n {% for topfenlei in topfenleis %}\n <li>{{ topfenlei.name }}</li>\n {% endfor %}\n </ul>\n <div>\n <div id=\"hy_popCategory\" class=\"pop-category\">\n {% for top in topfenleis %}\n <div class='sub-item' style='display:none;'>\n {% for child in top.childs.all %}\n <span>\n <span> </span>\n <a href=\"/productcategory/{{ child.uid }}/\">{{ child.name }}</a>\n </span>\n {% endfor %}\n </div>\n {% endfor %}\n </div>\n </div>\n </div>\n </div>\n </div>"
},
{
"alpha_fraction": 0.5062656402587891,
"alphanum_fraction": 0.5396825671195984,
"avg_line_length": 28.924999237060547,
"blob_id": "a76809198598fb6c1e8c49844f62f66eef601809",
"content_id": "bbeb893361720309fdea6ba63b06d3204ab96097",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1233,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 40,
"path": "/account/migrations/0006_auto_20160518_1733.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-18 09:33\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('account', '0005_auto_20160518_1232'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='user',\n old_name='name',\n new_name='nickname',\n ),\n migrations.AddField(\n model_name='user',\n name='address',\n field=models.CharField(default='', max_length=100, verbose_name='收货地址'),\n ),\n migrations.AddField(\n model_name='user',\n name='huiyuan',\n field=models.IntegerField(default=0, verbose_name='会员等级'),\n ),\n migrations.AddField(\n model_name='user',\n name='realname',\n field=models.CharField(default='', max_length=10, verbose_name='真实姓名'),\n ),\n migrations.AddField(\n model_name='user',\n name='role',\n field=models.CharField(choices=[('student', '学生'), ('teacher', '老师')], default='', max_length=10, verbose_name='身份'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.7122507095336914,
"alphanum_fraction": 0.7279202342033386,
"avg_line_length": 40.17647171020508,
"blob_id": "02d900aba9449accbd9a44654954cd63870398f3",
"content_id": "ac03ba332cd12dea93ceeb0e06fefa60b5e2d7a3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 714,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 17,
"path": "/account/utils.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\n\nfrom django.conf import settings\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template import Context,loader\n\ndef sendconfirmemail(email=None,active_code=None,other=None):\n title = '欢迎注册知否!'\n subject,from_email,to = title,settings.EMAIL_HOST_USER,email\n active_address = 'http://127.0.0.1:8000/account/'+other+'/'+active_code+'/'\n html = loader.get_template('account/email/confirm_email.html')\n context = {'active_address':active_address}\n html_content = html.render(Context(context))\n msg = EmailMultiAlternatives(subject,html_content,from_email,(to,))\n msg.attach_alternative(html_content,'text/html')\n msg.send()\n\n\n"
},
{
"alpha_fraction": 0.5736246109008789,
"alphanum_fraction": 0.6019417643547058,
"avg_line_length": 33.33333206176758,
"blob_id": "5b3cd496fad539f559779042c91beaf60502ce06",
"content_id": "4cc78e8caab060643c8168aee023127f2c3efc09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1284,
"license_type": "no_license",
"max_line_length": 142,
"num_lines": 36,
"path": "/shangcheng/migrations/0009_auto_20160520_1626.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-20 08:26\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\nimport shangcheng.models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0008_auto_20160520_1523'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='product',\n name='img_detail_1',\n field=models.ImageField(blank=True, null=True, upload_to='product/detail/', verbose_name='详情图片地址'),\n ),\n migrations.AlterField(\n model_name='product',\n name='img_detail_2',\n field=models.ImageField(blank=True, null=True, upload_to='product/detail/', verbose_name='详情图片地址'),\n ),\n migrations.AlterField(\n model_name='product',\n name='img_detail_3',\n field=models.ImageField(blank=True, null=True, upload_to='product/detail/', verbose_name='详情图片地址'),\n ),\n migrations.AlterField(\n model_name='product',\n name='img_show',\n field=models.ImageField(blank=True, null=True, upload_to=shangcheng.models.PathAndRename('product/show/'), verbose_name='展示图片地址'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6464272737503052,
"alphanum_fraction": 0.6549426317214966,
"avg_line_length": 27.421052932739258,
"blob_id": "c3c9c62ba4e41f62e16d32651830d1d6c309a26f",
"content_id": "797c5c1ac3c84cad52c43782c4d1154f070dbe39",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2803,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 95,
"path": "/account/models.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\n\n# Create your models here.\nfrom django.db import models\nfrom django.contrib.auth.models import AbstractBaseUser\nfrom django.contrib.auth.models import BaseUserManager\n# Create your models here.\nfrom .config import ROLE,YUANXI\n\n\nclass MyUserManager(BaseUserManager):\n def create_user(self,phone,email,password=None,**kwargs):\n if not phone:\n raise ValueError('注册必须使用学号!')\n user = self.model(phone=phone)\n user.email = email\n user.set_password(password)\n\n if kwargs:\n if kwargs.get('active_code',None):\n user.active_code=kwargs['active_code']\n if kwargs.get('nickname',None):\n user.username = kwargs['nickname']\n\n user.save(using=self._db)\n\n\n return user\n\n def create_superuser(self,phone,email,password,**kwargs):\n user = self.create_user(phone,email,password=password)\n user.is_admin = True\n #user.is_staff=True\n user.is_active=True\n user.is_superuser = True\n\n user.save(using=self._db)\n return user\n\n\n\n\nclass User(AbstractBaseUser):\n\n phone = models.CharField(max_length=20,default='',verbose_name='手机号',unique=True)\n email = models.EmailField(verbose_name='邮箱',max_length=255)\n xuehao = models.CharField(max_length=20,default='',verbose_name='学号')\n nickname = models.CharField(max_length=30, default='改个昵称吧', verbose_name='昵称')\n realname = models.CharField(max_length=10,default='',verbose_name='真实姓名')\n avatar = models.ImageField(upload_to='avatars', verbose_name='头像')\n addtime = models.DateTimeField(auto_now_add=True, verbose_name='注册时间')\n is_active = models.BooleanField(default=False,verbose_name='是否激活')\n active_code = models.CharField(max_length=200,default='',verbose_name='激活码')\n is_admin = models.BooleanField(default=False)\n\n yuanxi = models.CharField(default='',max_length=30)\n role = models.CharField(default='',max_length=10,verbose_name='身份')\n huiyuan = models.IntegerField(default=0,verbose_name='会员等级')\n address = 
models.CharField(max_length=100,default='',verbose_name='收货地址')\n\n\n\n USERNAME_FIELD = 'phone'\n REQUIRED_FIELDS = ['email']\n\n\n\n objects = MyUserManager()\n\n def get_full_name(self):\n return self.phone\n\n def get_short_name(self):\n return self.nickname\n\n def has_perm(self, perm, obj=None):\n return True\n\n def has_module_perms(self, app_label):\n return True\n\n @property\n def is_staff(self):\n return self.is_admin\n\n\n\n class Meta:\n verbose_name_plural = verbose_name = '用户'\n ordering = ['phone']\n\n\n def __str__(self):\n return self.phone\n\n"
},
{
"alpha_fraction": 0.6096345782279968,
"alphanum_fraction": 0.6112957000732422,
"avg_line_length": 32.5,
"blob_id": "d7df5eacb48c7c0247b9f6a3ae04387e7f8267a3",
"content_id": "139feaef1704f47e6d02bdd91acd75bdec5c6ee4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 602,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 18,
"path": "/shangcheng/urls.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\n\nfrom django.conf.urls import url,patterns\nfrom . import views\nurlpatterns = [\n url(r'^$',views.index),\n url(r'^product/(?P<uid>[\\w\\-]+)/$',views.product),\n url(r'^aboutus/$',views.aboutus),\n url(r'^add_cart/$',views.add_cart),\n url(r'^viewcart/$',views.view_cart),\n url(r'^clearcart/$',views.clear_cart),\n url(r'^delcart/$',views.del_cart),\n url(r'^submitorder/$',views.submitorder),\n url(r'^myorders/$',views.myorders),\n url(r'^productcategory/(?P<uid>[\\w\\-]+)/$',views.productcategory),\n url(r'^search/$',views.search),\n ]"
},
{
"alpha_fraction": 0.4982817769050598,
"alphanum_fraction": 0.5532646179199219,
"avg_line_length": 22.280000686645508,
"blob_id": "0718b6698980c405c1413e7dfe5d486d1cb01b79",
"content_id": "16f73becb5e073ee4dbe4c7bf3066668850dcaa0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 582,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 25,
"path": "/shangcheng/migrations/0005_auto_20160520_0045.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.6 on 2016-05-19 16:45\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shangcheng', '0004_auto_20160520_0039'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='version',\n old_name='now_prize',\n new_name='now_price',\n ),\n migrations.RenameField(\n model_name='version',\n old_name='old_prize',\n new_name='old_price',\n ),\n ]\n"
},
{
"alpha_fraction": 0.8163265585899353,
"alphanum_fraction": 0.8163265585899353,
"avg_line_length": 23.5,
"blob_id": "e3d55b15d16ceb8abacf6ae5fd72b668bc87224e",
"content_id": "ec21c4519186781493a586c2cea57b4a30a88f02",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 176,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 4,
"path": "/README.md",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "# haiyuan\na simple electronic mall based on django\n\n基于django的一个简易电子商城,支持商品展示,搜索,分类,下单,购物车,订单管理等功能\n"
},
{
"alpha_fraction": 0.7684210538864136,
"alphanum_fraction": 0.7684210538864136,
"avg_line_length": 18,
"blob_id": "76158ac5933cad9dadf5db7ccd25958fb83156e0",
"content_id": "ac858435e8a6a872f4726e95774bb6b73810a8fe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 95,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 5,
"path": "/shangcheng/apps.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "from django.apps import AppConfig\n\n\nclass ShangchengConfig(AppConfig):\n name = 'shangcheng'\n"
},
{
"alpha_fraction": 0.6246376633644104,
"alphanum_fraction": 0.6289855241775513,
"avg_line_length": 25.576923370361328,
"blob_id": "02721758b798cd349d654ff3876717f31c0326f7",
"content_id": "5a95c35ac4b6b5b757966d0641750f61325f4533",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 694,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 26,
"path": "/account/forms.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\n\n\nfrom django import forms\n\nfrom .models import User\n\nfrom django.forms import TextInput\n\n# class MyTextInput(TextInput):\n# def __init__(self,*args,**kwargs):\n# self.attrs = None\n# super(MyTextInput,self).__init__(*args,**kwargs)\n#\n# class UserInfoForm(forms.ModelForm):\n#\n# def __init__(self,*args,**kwargs):\n# self.request = kwargs.pop('request',None)\n# super(UserInfoForm,self).__init__(*args,**kwargs)\n# self.fields['nikename'].attrs.value = self.request.user.nikename\n#\n#\n# nickname = forms.CharField(label='昵称',max_length=20,widget=MyTextInput(\n# attrs={'class':'form-control',}\n# ))"
},
{
"alpha_fraction": 0.39410480856895447,
"alphanum_fraction": 0.39519649744033813,
"avg_line_length": 20.325580596923828,
"blob_id": "f722b223b76a10a2155b1ceafafc06a9aeea4ede",
"content_id": "1397c3de6ff850151b50864841e241afe18ac28b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1574,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 43,
"path": "/account/config.py",
"repo_name": "fantasylc/haiyuan",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n__author__ = 'liuchao'\n\nROLE = ('老师','学生',)\n\nYUANXI = ('海洋环境学院',\n '信息科学与工程学院',\n '化学化工学院',\n '海洋地球科学学院',\n '海洋生命学院',\n '水产学院',\n '食品科学与工程学院',\n '医药学院',\n '工程学院',\n '环境科学与工程学院',\n '管理学院',\n '经济学院',\n '外国语学院',\n '文学与新闻传播学院',\n '法政学院',\n '数学科学学院',\n '材料科学与工程研究院',\n '基础教育中心',\n '高等职业技术学院',\n '继续教育学院',\n '国际教育学院',\n '基础教学中心',\n '船舶中心',\n '网络中心',\n '海洋发展研究院',\n '国际教育交流中心',\n '基础实验教学中心',\n '研究生教育中心',\n '国家海洋药物工程技术研究中心',\n '联合国教科文组织中国海洋生物工程中心',\n '国家海洋生命科学与技术人才培养基地',\n '物理海洋教育部重点实验室',\n '海洋药物教育部重点实验室',\n '海水养殖教育部重点实验室',\n '海洋环境与生态教育部重点实验室',\n '海洋遥感教育部重点实验室',\n '海洋化学理论与工程技术教育部重点实验室',\n '海底科学与探测技术教育部重点实验室',)"
}
] | 29 |
jlandonedwards/AutoPlaylistContinuation | https://github.com/jlandonedwards/AutoPlaylistContinuation | 447dad4c28d34ce050a96f2f10f49a196da45c15 | 4da868d146a0f44ead1e5cd02663dfce965f2666 | 07558f0ca8e5f3e853d6af787302e3ba77307eff | refs/heads/main | 2023-04-28T08:08:51.780440 | 2021-05-11T03:10:17 | 2021-05-11T03:10:17 | 346,497,497 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5175504088401794,
"alphanum_fraction": 0.5234183073043823,
"avg_line_length": 38.720340728759766,
"blob_id": "4b117b728e9dd17f770180c487ed771bfa65d2c5",
"content_id": "48c56ad8aceae508be240d98753a3cd2e10aa3fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9373,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 236,
"path": "/main_train.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "import os\nimport tensorflow as tf\n# from tensorflow.python.client import device_lib\nfrom data_reader import *\nfrom DAEs import *\nimport datetime\nimport tensorflow.compat.v1 as v1\nimport random\nimport metrics\nfrom get_title_model import get_model\n\n# Environtment/Machine specific\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n\n# Verify GPU\n# print(device_lib.list_local_devices())\n\ndef log_write(dir, test, log):\n test = test+'.txt'\n p = os.path.join(dir, test)\n with open(p, \"a\") as f:\n f.write(log)\n f.write('\\n')\n print(log)\n\ndef show_result(rprecision, ndcg, rsc):\n return \"rprecision: %f ndcg: %f rsc: %f\" % (rprecision, ndcg, rsc)\n\ndef show_result_rprec(rprecision):\n return \"rprecision: %f\" % (rprecision)\n\ndef eval(reader_test, conf, sess, model, model_title):\n # evaluation\n ssq = 0\n total_rprecision = 0\n hit_by_cls_mat = []\n cand_cls_dist_mat = []\n encoder_grad_sqrsum_by_cls = []\n decoder_grad_sqrsum_by_cls = []\n\n total_hidden_sqrsum = 0\n # total_ndcg = 0\n # total_rsc = 0\n test_size = len(reader_test.playlists)\n\n while True:\n predicted_matrix = None\n x_positions, test_seed, test_answer, titles, x_ones = reader_test.next_batch_test()\n if conf.mode in ['pretrain', 'dae']:\n predicted_matrix = sess.run(model.y_pred, feed_dict={model.x_positions: x_positions,\n model.x_ones: x_ones,\n model.keep_prob: 1.0, model.input_keep_prob: 1.0})\n elif conf.mode == 'title':\n len_titles = len(titles)\n if len_titles < conf.batch:\n zeros = [-1] * conf.strmaxlen\n titles = titles + [zeros] * (conf.batch - len_titles)\n predicted_matrix = sess.run(model.y_pred, feed_dict={model.x_positions: x_positions,\n model.x_ones: x_ones,\n model_title.titles: titles,\n model.keep_prob: 1.0, model_title.keep_prob: 1.0,\n model.input_keep_prob: 1.0,\n model.titles_use: [[1]] * conf.batch})\n \n # total_hidden_sqrsum += np.sum(hidden_sqrsum)\n\n # ssq += np.sum(grad[0]**2)\n 
predicted_matrix = predicted_matrix[:, :conf.n_tracks]\n \n for i in range(len(test_seed)):\n rprecision, hit_by_cls, cand_cls_dist = metrics.single_eval(predicted_matrix[i], test_seed[i], test_answer[i],\n titles[i], conf.class_divpnt)\n total_rprecision += rprecision\n hit_by_cls_mat.append(hit_by_cls)\n cand_cls_dist_mat.append(cand_cls_dist)\n # print(hit_by_cls_mat)\n # total_ndcg += ndcg\n # total_rsc += rsc\n if reader_test.test_idx == 0:\n break\n\n total_rprecision /= test_size\n # total_hidden_sqrsum /= test_size\n\n # hit_by_cls_mat = np.matrix(hit_by_cls_mat)\n # hr_by_cls = hit_by_cls_mat.mean(axis=0).tolist()[0]\n\n # cand_cls_dist_mat = np.matrix(cand_cls_dist_mat)\n # cand_cls_dist = cand_cls_dist_mat.mean(axis=0).tolist()[0]\n\n # encoder_grad_sqrsum_by_cls = np.matrix(encoder_grad_sqrsum_by_cls)\n # encoder_grad_sqrsum_by_cls = encoder_grad_sqrsum_by_cls.mean(axis=0).tolist()[0]\n\n # decoder_grad_sqrsum_by_cls = np.matrix(decoder_grad_sqrsum_by_cls)\n # decoder_grad_sqrsum_by_cls = decoder_grad_sqrsum_by_cls.mean(axis=0).tolist()[0]\n\n # total_ndcg /= test_size\n # total_rsc /= test_size\n # print(ssq)\n \n # return total_rprecision, hr_by_cls, cand_cls_dist, total_hidden_sqrsum, \\\n # encoder_grad_sqrsum_by_cls, decoder_grad_sqrsum_by_cls\n\n return total_rprecision\n\ndef run(conf, only_testmode):\n if -1 in conf.firstN:\n reader = data_reader(data_dir=conf.data_dir, filename='train', batch_size=conf.batch)\n else:\n reader = data_reader_firstN(data_dir=conf.data_dir, filename='train',\n batch_size=conf.batch, from_to=conf.firstN)\n \n conf.class_divpnt = reader.class_divpnt\n conf.n_tracks = reader.num_tracks\n conf.n_input = reader.num_items\n conf.n_output = reader.num_items\n conf.charsize = reader.num_char\n conf.strmaxlen = reader.max_title_len\n\n kp_range = conf.input_kp\n test_seed = conf.test_seed\n update_seed = conf.update_seed\n\n readers_test = {}\n for seed in test_seed:\n readers_test[seed] = 
data_reader_test(data_dir=conf.data_dir, filename=seed,\n batch_size=conf.batch, test_num=conf.testsize)\n\n print(conf.n_input)\n\n model_title = None\n if conf.mode == 'pretrain':\n info = '[pretrain mode]'\n model = DAE_tied(conf)\n elif conf.mode == 'dae':\n if only_testmode:\n conf.initval = conf.save\n info = '[dae mode]'\n model = DAE(conf)\n elif conf.mode == 'title':\n info = '[title mode]'\n model_title = get_model(conf)\n model = DAE_title(conf, model_title.output)\n\n info += ' start at ' + str(datetime.datetime.now())\n log_write(conf, '*'*10)\n log_write(conf, info)\n\n model.fit()\n sess = v1.Session()\n sess.run(model.init_op)\n saver = v1.train.Saver()\n\n epoch = 0\n iter = 0\n loss = 0.0\n max_eval = 0.0\n\n # if test mode is specified, just test the result and no training session.\n if only_testmode:\n log_write(conf, '<<only test mode>>')\n if conf.mode == 'title':\n saver.restore(sess, conf.save)\n\n for seed_num, reader_test in readers_test.items():\n log_write(conf, \"seed num: \" + seed_num)\n # rprec, ndcg, rsc = eval(reader_test, conf, sess, model, model_title)\n # r = show_result(rprec, ndcg, rsc)\n rprec = eval(reader_test, conf, sess, model, model_title)\n r = show_result_rprec(rprec)\n log_write(conf, r)\n return\n \n while True:\n\n start_idx = reader.train_idx\n trk_positions, art_positions, y_positions, titles, trk_val, art_val = reader.next_batch()\n end_idx = reader.train_idx\n\n input_kp = random.uniform(kp_range[0], kp_range[-1])\n\n if conf.mode in ['pretrain', 'dae']:\n rand_int = np.random.randint(2)\n if rand_int == 0:\n _, l = sess.run([model.optimizer, model.cost],\n feed_dict={model.x_positions: trk_positions, model.x_ones: trk_val,\n model.y_positions: y_positions, model.y_ones: np.ones(len(y_positions)),\n model.keep_prob: conf.kp, model.input_keep_prob: input_kp})\n\n elif rand_int == 1:\n _, l = sess.run([model.optimizer, model.cost],\n feed_dict={model.x_positions: art_positions, model.x_ones: art_val,\n 
model.y_positions: y_positions, model.y_ones: np.ones(len(y_positions)),\n model.keep_prob: conf.kp, model.input_keep_prob: input_kp})\n elif conf.mode == 'title':\n _, l = sess.run([model.optimizer, model.cost],\n feed_dict={model.x_positions: y_positions, model.x_ones: np.ones(len(y_positions)),\n model.y_positions: y_positions, model.y_ones: np.ones(len(y_positions)),\n model_title.titles: titles,\n model.keep_prob: conf.kp, model_title.keep_prob: conf.title_kp,\n model.input_keep_prob: input_kp,\n model.titles_use: [[1]] * conf.batch})\n\n loss += l\n iter += 1\n\n if start_idx > end_idx or end_idx == 0:\n\n epoch += 1\n loss = loss / iter\n if epoch >= 0:\n log_write(conf, \"epoch \"+str(epoch))\n log_write(conf, \"training loss: \"+str(loss))\n cur_eval = 0\n for seed_num, reader_test in readers_test.items():\n log_write(conf, \"seed num: \"+seed_num)\n # rprec, ndcg, rsc = eval(reader_test, conf, sess, model, model_title)\n # r = show_result(rprec, ndcg, rsc)\n rprec = eval(reader_test, conf, sess, model, model_title)\n r = show_result_rprec(rprec)\n log_write(conf, r)\n if seed_num in update_seed:\n cur_eval += rprec\n\n if cur_eval >= max_eval:\n if conf.mode in ['pretrain', 'dae']:\n model.save_model(sess)\n elif conf.mode == 'title':\n saver.save(sess, conf.save)\n max_eval = cur_eval\n log_write(conf, \"The highest score is updated. Parameters are saved\")\n loss = 0\n iter = 0\n if epoch == conf.epochs:\n break"
},
{
"alpha_fraction": 0.5657788515090942,
"alphanum_fraction": 0.5867635011672974,
"avg_line_length": 40.47190856933594,
"blob_id": "f831c0f385f7ba33054736418cfa00b6d27e0b31",
"content_id": "9375d6f9156edbf52c7cc05b593d6801ec16d5cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3717,
"license_type": "no_license",
"max_line_length": 170,
"num_lines": 89,
"path": "/toy_experiment_env/Metrics.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\"\nCreated on Tuesday April 6th 2021\n\n@author: landon,ylalwani\n\"\"\"\n\nimport tensorflow as tf\n\nclass Metrics():\n def __init__(self,n_ids,n_track_ids,):\n self.n_ids = n_ids\n self.n_track_ids = n_track_ids \n self.train_batch_metrics = []\n self.epochs_train_metrics = []\n self.val_batch_metrics = []\n self.val_sets_metrics = []\n self.epochs_val_metrics = []\n \n \n def update_metrics(self,mode='train',metrics=None):\n \n if mode == 'epoch':\n epoch_train = tf.reduce_mean(tf.stack(self.train_batch_metrics,0),0)\n self.epochs_train_metrics.append(epoch_train)\n self.train_batch_idx = 0\n val_sets_metrics = tf.stack(self.val_sets_metrics,0)\n self.epochs_val_metrics.append(val_sets_metrics)\n epoch_val = tf.reduce_mean(val_sets_metrics,0)\n self.train_batch_metrics = []\n self.val_sets_metrics = []\n return epoch_train,epoch_val\n \n elif mode == 'val_set':\n self.val_sets_metrics.append(tf.reduce_mean(tf.stack(self.val_batch_metrics,0),0))\n self.val_batch_metrics = []\n \n else:\n \n if mode == 'train_batch':\n self.train_batch_metrics.append(metrics)\n \n \n elif mode == 'val_batch':\n self.val_batch_metrics.append(metrics)\n \n \n \n def calculate_metrics(self,rec_tracks,rec_artists,y_tracks,y_artists):\n \n # R-precision\n correct_tracks = tf.sets.intersection(rec_tracks,y_tracks.to_sparse())\n correct_artists = tf.sets.intersection(rec_artists,y_artists.to_sparse())\n r_precision = self.r_precision(correct_tracks,y_tracks,correct_artists,y_artists)\n \n # Normalized Discounted Cumulative Gain\n idxs_3d = tf.where(tf.equal(tf.expand_dims(tf.sparse.to_dense(correct_tracks,default_value=-1),2),tf.expand_dims(rec_tracks,1)))\n ndcg = tf.cond(tf.greater(tf.size(idxs_3d), tf.constant(0,dtype=tf.int32)),lambda: self.NDCG(idxs_3d,y_tracks.shape[0]),lambda : tf.constant(0,dtype=tf.float32))\n\n # Reccomended Song Clicks\n \n rec_clicks = 
tf.reduce_mean(tf.cast((tf.reduce_min(tf.RaggedTensor.from_value_rowids(idxs_3d[:,2],idxs_3d[:,0]).to_tensor(511),1)-1) //10,tf.float32),0)\n \n return r_precision,ndcg,tf.cast(rec_clicks,tf.float32)\n \n \n \n def r_precision(self,correct_tracks,y_tracks,correct_artists=None,y_artists=None):\n \n n_correct_tracks = tf.cast(tf.sets.size(correct_tracks),tf.float32)\n n_correct_artists = tf.cast(tf.sets.size(correct_artists),tf.float32)\n n_total_targets = tf.cast(tf.sets.size(y_tracks.to_sparse()),tf.float32)\n return tf.math.reduce_mean((n_correct_tracks + .25*n_correct_artists)/n_total_targets)\n \n \n def NDCG(self,idxs_3d,nrows):\n log2 = tf.math.log(2.0)\n obs_pos = tf.cast(tf.RaggedTensor.from_value_rowids(idxs_3d[:,2],idxs_3d[:,0],nrows=nrows),tf.float32)\n log2_pos = tf.math.log(obs_pos + 2) / log2\n dcg_inv = tf.math.reduce_sum(log2_pos,1)\n best_pos = tf.cast(tf.RaggedTensor.from_value_rowids(idxs_3d[:,1],idxs_3d[:,0],nrows=nrows),tf.float32)\n log2_best_pos = tf.math.log(best_pos + 2) / log2\n idcg_inv = tf.math.reduce_sum(log2_best_pos,1)\n mask = tf.not_equal(dcg_inv,0)\n dcg_inv = tf.boolean_mask(dcg_inv,mask)\n idcg_inv = tf.boolean_mask(idcg_inv,mask)\n return tf.math.reduce_mean(idcg_inv/dcg_inv)\n \n \n \n \n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.7598425149917603,
"alphanum_fraction": 0.774114191532135,
"avg_line_length": 39.619998931884766,
"blob_id": "3d507130057c9c3f7ec1449f2277003d994df480",
"content_id": "3d430226c4988d1e1e374be074ad556432959a5c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2032,
"license_type": "no_license",
"max_line_length": 259,
"num_lines": 50,
"path": "/README.md",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "# Automated Playlist Continuation\n\n**Team Members:** Landon Edwards , Yash Lalwani\n\n## Introduction\n\nThis repository contains a Python implementation of our music recommendation system for the [Spotify Million Playlist Dataset Challenge](https://www.aicrowd.com/challenges/spotify-million-playlist-dataset-challenge).\n\n## Development Environment\n\nPython Anaconda v4.10.1\n\ntf-nightly-gpu v2.6.0.dev20210415 (to be updated when tensorflow 2.6 is released) \n\nCUDA v11.3\n\nCUDNN v8.2.0\n\nGeForce RTX 3090\n\n## Dataset\n\nSpotify has produced the Million Playlist Dataset . This data can be obtained by signing up for the [aicrowd competition](https://www.aicrowd.com/challenges/spotify-million-playlist-dataset-challenge),and download the zips contained within the Resources tab.\n\n## Directory Structure\n\nThe scripts read inputs by default based on the following organization of files.\n\n1. Unzip the file spotify_million_playlist_dataset.zip and move all the files located within **spotify_million_playlist_dataset/data/** to the folder **data/**\n2. 
Unzip the the file spotify_million_playlist_dataset_challenge.zip and move the file **spotify_million_playlist_dataset_challenge/challenge_set.json** to the folder **challenge_data/**.\n\n## Data Preprocessing\n\nRun the script `DataPreProcess.py` to produce the directory preprpocessed_data which is required for use in following scripts.\n\nRun the script `trainingValidationSplit.py` to produce training and validation sets for use in training.\n\n## Executing\n\nA model can be trained and saved by running `train.py`\n\n*Arguments of* `train.py`:\n\n--mode : One of **train** (Begin New training session), **resume** (Resume a previous checkpoint), **load** (Load a previously trained model and begin new training session) .\n\n--model_name : A unique identifier to create a folder in which to save checkpoints and training results under.\n\n--models_dir : directory path of where to save model.\n\n--resume_dir : If selected mode is resume, specify location of directory where model checkpoint is stored\n\n"
},
{
"alpha_fraction": 0.6193941235542297,
"alphanum_fraction": 0.6386991143226624,
"avg_line_length": 33.38974380493164,
"blob_id": "5dc0f860c57b525f8d7274325ff91231a7fac0d6",
"content_id": "316ed6162c0eccca8eef0dd72510a197b497671a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6734,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 195,
"path": "/toy_experiment_env/trainingValidationSplit.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Mar 16 19:09:40 2021\n\n@author: landon\n\"\"\"\n\nimport tensorflow as tf\nimport numpy as np\nimport json\nimport argparse\nimport os\nimport time\n\nHAS_TITLE = [1,1,0,1,1,1,1,1,1]\nN_TRACKS = [0,5,5,10,25,25,100,100,1]\nIS_RANDOM = [0,0,0,0,0,1,0,1,0]\n\nTENSOR_SPEC = tf.RaggedTensorSpec(tf.TensorShape([4, None]), tf.int32, 1, tf.int64)\n\n\n\ndef delete_tensor_by_indices(tensor,indices,n_tracks):\n idxs = tf.reshape(indices,(-1,1))\n mask = ~tf.scatter_nd(indices=idxs,updates=tf.ones_like(indices,dtype=tf.bool),shape=[n_tracks])\n return tf.boolean_mask(tensor,mask)\n\[email protected]_not_convert\ndef map_func(x):\n return {\n 'track_ids':x[0],\n 'title_ids':x[1],\n 'n_tracks':x[2][0],\n }\[email protected]_not_convert\ndef challenge_map_func(x):\n return (x[0],x[1],x[2],x[3])\n \[email protected]_not_convert\ndef get_artists(track_ids):\n artist_ids = tid_2_aid[track_ids]\n mask = ~tf.equal(artist_ids,-1)\n return tf.boolean_mask(artist_ids,mask)\n\[email protected]_not_convert\ndef no_title(x):\n x['title_ids'] = tf.constant([],dtype=tf.int32)\n return x\n\[email protected]_not_convert\ndef no_track(x):\n x['input_track_ids'] = tf.constant([],dtype=tf.int32)\n x['input_artist_ids'] = tf.constant([],dtype=tf.int32)\n x['target_track_ids'] = x['track_ids']\n x['target_artist_ids'] = get_artists(x['track_ids'])\n del x['track_ids']\n return x\n\[email protected]_not_convert\ndef random_id_corrupt(x,n_inputs):\n n_tracks = x['n_tracks']\n idxs = tf.random.shuffle(tf.range(n_tracks))[:n_inputs]\n x['input_track_ids'] = tf.gather(x['track_ids'],idxs)\n x['input_artist_ids'] = get_artists(x['input_track_ids'])\n x['target_track_ids'] = delete_tensor_by_indices(x['track_ids'],idxs,n_tracks)\n x['target_artist_ids'] = get_artists(x['target_track_ids'])\n del x['track_ids']\n return x\n\[email protected]_not_convert\ndef seq_id_corrput(x,n_inputs):\n x['input_track_ids'] = 
x['track_ids'][0:n_inputs]\n x['input_artist_ids'] = get_artists(x['input_track_ids'])\n x['target_track_ids'] = x['track_ids'][n_inputs:]\n x['target_artist_ids'] = get_artists(x['target_track_ids'])\n del x['track_ids']\n return x\n \[email protected]_not_convert \ndef le(x,n_input_tracks):\n return x['n_tracks'] <= n_input_tracks\n\[email protected]_not_convert \ndef gt(x,n_input_tracks):\n return x['n_tracks'] > n_input_tracks\n\[email protected]_not_convert \ndef val_to_list(x):\n return (x[\"input_track_ids\"],x['input_artist_ids'],x[\"title_ids\"],x[\"target_track_ids\"],x['target_artist_ids'])\n\[email protected]_not_convert \ndef train_to_ragged(x):\n tensors = [x[\"track_ids\"],x[\"title_ids\"],[x[\"n_tracks\"]]]\n values = tf.concat(tensors, axis=0)\n lens = tf.stack([tf.shape(t, out_type=tf.int64)[0] for t in tensors])\n return tf.RaggedTensor.from_row_lengths(values, lens)\n \n\ndef create_val_set(tf_dataset,n_playlists,n_input_tracks,title=1,rand=0):\n\n if n_input_tracks > 0:\n sample_excl = tf_dataset.filter(lambda x: le(x,n_input_tracks))\n sample_cand = tf_dataset.filter(lambda x: gt(x,n_input_tracks)).shuffle(n_playlists,seed=2021,reshuffle_each_iteration=False)\n selections = sample_cand.take(1000)\n unselected = sample_cand.skip(1000)\n dataset = sample_excl.concatenate(unselected)\n else:\n sample_cand = tf_dataset.shuffle(1000,reshuffle_each_iteration=False)\n selections = sample_cand.take(1000)\n dataset = sample_cand.skip(1000)\n \n if not title:\n selections = selections.map(no_title)\n if n_input_tracks > 0:\n if rand:\n selections = selections.map(lambda x: random_id_corrupt(x,n_input_tracks))\n else:\n selections = selections.map(lambda x: seq_id_corrput(x,n_input_tracks))\n else:\n selections = selections.map(no_track)\n \n selections = selections.map(val_to_list)\n\n return selections,dataset,n_playlists - 1000\n\n\n \n \nif __name__ == '__main__':\n start_time = time.time()\n args = argparse.ArgumentParser(description=\"args\")\n 
args.add_argument('--data_dir', type=str, default='./toy_preprocessed', help=\"directory where preprocessed data is stored\")\n args.add_argument('--utils_dir', type=str, default='./utils', help=\"directory where to write training sets to\")\n args = args.parse_args()\n \n if not os.path.isdir(\"./val\"):\n os.mkdir(\"./val\")\n \n \n if not os.path.isdir(\"./train\"):\n os.mkdir(\"./train\")\n \n if not os.path.isdir(\"./challenge\"):\n os.mkdir(\"./challenge\")\n \n tf.random.set_seed(2021)\n np.random.seed(2021)\n with open(args.data_dir + \"/\" + \"data\") as file:\n data = json.load(file)\n file.close() \n playlists = data['playlists'] \n del data\n with open(args.utils_dir + \"/\" + \"tid_2_aid\") as file:\n tid_2_aid = tf.constant(json.load(file))\n file.close() \n tid_2_aid = tf.lookup.StaticHashTable(tf.lookup.KeyValueTensorInitializer(tid_2_aid[:,0],tid_2_aid[:,1]),default_value=-1)\n n_playlists = len(playlists)\n \n dataset = tf.data.Dataset.from_tensor_slices(tf.ragged.constant(playlists))\n dataset = dataset.map(map_func)\n first = 1\n\n for has_title,n_track,is_random in zip(HAS_TITLE,N_TRACKS,IS_RANDOM):\n val_set,train_set,n_playlists = create_val_set(dataset,n_playlists,n_track,has_title,is_random)\n if first:\n validation_sets = val_set\n first = 0\n else:\n validation_sets = validation_sets.concatenate(dataset=val_set)\n \n \n # Dataset where each element is it own validation set\n tf.data.experimental.save(validation_sets, \"./val\")\n \n # Full Uncorrupted Training Dataset\n train_set = train_set.map(train_to_ragged)\n tf.data.experimental.save(train_set, \"./train\")\n \n # Load Challenge Set\n with open(args.data_dir + \"/challenge_data\") as file:\n data = json.load(file)\n file.close() \n challenge_sets = tf.data.Dataset.from_tensor_slices(tf.ragged.constant(data['challenge_playlists']))\n challenge_sets = challenge_sets.map(challenge_map_func)\n del data\n tf.data.experimental.save(challenge_sets, \"./challenge\")\n \n 
print(\"---completed in %s seconds ---\" % round((time.time() - start_time),2))\n \n'''\nTo Do:\n Figure out how to feed Validation sets 1 to 9 sequentially into model.fit(val=validation_data)\n So that it gets called and reports the meterics at the end of each epoch\n''' \n \n \n \n \n "
},
{
"alpha_fraction": 0.6470428705215454,
"alphanum_fraction": 0.6592512130737305,
"avg_line_length": 41.70588302612305,
"blob_id": "8672a33f73a70d8850e7b910146aebf804abc2d4",
"content_id": "55e246b7a032d04d3bbf289b520de125c5547993",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3686,
"license_type": "no_license",
"max_line_length": 176,
"num_lines": 85,
"path": "/experiment_env/train.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Apr 7 22:13:10 2021\n\n@author: landon\n\"\"\"\n\n\nimport tensorflow as tf\nimport tensorflow_addons as tfa\nimport numpy as np\nimport tensorflow.keras as keras\nfrom DataLoader import DataLoader\nfrom Model import Model\nimport argparse\nimport time\nimport sys\nimport json\nimport os\n\ntf.random.set_seed(2021)\nnp.random.seed(2021)\n\n\n\nif __name__=='__main__':\n args = argparse.ArgumentParser(description=\"args\")\n args.add_argument('--models_dir', type=str, default='./trained_models', help=\"directory where to save model checkpoints\")\n args.add_argument('--load_dir', type=str, default='./trained_models/final_sub/post_train', help=\"directory where to save model checkpoints\")\n args.add_argument('--resume_dir', type=str, default='./trained_models/final_sub/resume/best_RP', help=\"directory where to save model checkpoints to resumed to in training\")\n args.add_argument('--model_name', type=str, default='final_sub', help=\"Unique Name to Save Model\")\n args.add_argument('--mode', type=str, default='resume', help=\"whether to train,resume training, or load model and start new training session\")\n args = args.parse_args()\n \n if args.mode not in [\"train\",\"resume\",\"load\"]:\n print(\"Please specify --mode arg as one of 'train' ,'resume', or 'load'\")\n sys.exit()\n \n if not os.path.isdir(args.models_dir):\n os.mkdir(args.models_dir)\n \n resume = 0\n resume_path=\"\"\n # All the hyperparamters that will be passed in by config object\n n_epochs = 30\n train_batch_size = 128\n val_batch_size = 100\n n_val_batches = 1000 // val_batch_size\n with open(\"./utils/data_properties\") as file:\n data_prop = json.load(file)\n file.close()\n n_ids = data_prop[\"n_tracks_artists\"]\n n_track_ids = data_prop[\"n_tracks\"]\n n_cids = data_prop[\"n_chars\"]\n dataset = DataLoader('./utils')\n training_set = dataset.get_traing_set(train_batch_size,2020)\n n_train_batches = 
len(training_set)\n validation_sets = dataset.get_validation_sets(val_batch_size)\n opt = keras.optimizers.Adam(learning_rate=0.005)\n model = Model(n_ids,n_track_ids,n_cids)\n model.optimizer = opt\n save_train_path = args.models_dir +\"/\" + args.model_name +\"/\" +\"resume/\"\n \n \n if args.mode == 'load':\n checkpoint = tf.train.Checkpoint(model=model)\n load_manager = tf.train.CheckpointManager(checkpoint, args.load_dir, max_to_keep=3)\n checkpoint.restore(load_manager.latest_checkpoint)\n model.Metrics.epochs_train_metrics = np.load(args.load_dir + \"/train_metrics.npy\").tolist()\n model.Metrics.epochs_val_metrics = np.load(args.load_dir + \"/val_metrics.npy\").tolist()\n elif args.mode == 'resume':\n resume_path = args.resume_dir\n resume=1\n \n \n \n model.train(training_set,validation_sets,n_epochs,train_batch_size,val_batch_size,save_train_path,resume_path,resume)\n # Save the Model Upon Completeing Traing\n checkpoint = tf.train.Checkpoint(model=model,epochs_train_metrics=tf.Variable(model.Metrics.epochs_train_metrics),\n epoch_val_metrics=tf.Variable(model.Metrics.epochs_val_metrics))\n save_manager = tf.train.CheckpointManager(checkpoint, args.models_dir +\"/\" + args.model_name +\"/\" +\"post_train\" , max_to_keep=3)\n save_manager.save()\n np.save(args.models_dir +\"/\" + args.model_name +\"/\" +\"post_train\" + \"/train_metrics\",model.Metrics.epochs_train_metrics)\n np.save(args.models_dir +\"/\" + args.model_name +\"/\" +\"post_train\" + \"/val_metrics\",model.Metrics.epochs_val_metrics)\n \n \n \n \n \n \n \n \n "
},
{
"alpha_fraction": 0.6584947109222412,
"alphanum_fraction": 0.6663398146629333,
"avg_line_length": 42.7282600402832,
"blob_id": "fa4f79357796dcea513d0fae8ba701daf1d81c67",
"content_id": "8101ed6728ef81679ae08381a23bb38bace53478",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4079,
"license_type": "no_license",
"max_line_length": 171,
"num_lines": 92,
"path": "/toy_experiment_env/train.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Apr 7 22:13:10 2021\n\n@author: landon\n\"\"\"\n\nimport logging\nlogging.getLogger(\"tensorflow\").setLevel(logging.ERROR)\nimport tensorflow as tf\ntf.autograph.set_verbosity(0)\nimport numpy as np\nimport tensorflow.keras as keras\nfrom DataLoader import DataLoader\nfrom Model import Model\nimport argparse\nimport time\nimport sys\nimport time\nimport json\n\n\ngpus = tf.config.list_physical_devices('GPU')\nif gpus:\n try:\n # Currently, memory growth needs to be the same across GPUs\n for gpu in gpus:\n tf.config.experimental.set_memory_growth(gpu, True)\n logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n print(len(gpus), \"Physical GPUs,\", len(logical_gpus), \"Logical GPUs\")\n except RuntimeError as e:\n # Memory growth must be set before GPUs have been initialized\n print(e)\ntf.get_logger().setLevel('INFO')\n\nif __name__=='__main__':\n args = argparse.ArgumentParser(description=\"args\")\n args.add_argument('--models_dir', type=str, default='./trained_models', help=\"directory where to save model checkpoints\")\n args.add_argument('--load_dir', type=str, default='./trained_models/junk/post_train', help=\"directory where to save model checkpoints\")\n args.add_argument('--resume_dir', type=str, default='./trained_models/junk/resume/best_RP', help=\"directory where to save model checkpoints to resumed to in training\")\n args.add_argument('--model_name', type=str, default='junk', help=\"Unique Name to Save Model\")\n args.add_argument('--mode', type=str, default='train', help=\"whether to train,resume training, or load model and start new training session\")\n args = args.parse_args()\n \n if args.mode not in [\"train\",\"resume\",\"load\"]:\n print(\"Please specify --mode arg as one of 'train' ,'resume', or 'load'\")\n sys.exit()\n \n resume = 0\n resume_path=\"\"\n # All the hyperparamters that will be passed in by config object\n n_epochs = 1\n 
train_batch_size = 128\n val_batch_size = 200\n n_val_batches = 1000 // val_batch_size\n with open(\"./utils/data_properties\") as file:\n data_prop = json.load(file)\n file.close()\n n_ids = data_prop[\"n_tracks_artists\"]\n n_track_ids = data_prop[\"n_tracks\"]\n n_cids = data_prop[\"n_chars\"]\n dataset = DataLoader('./utils')\n training_set = dataset.get_traing_set(train_batch_size,123)\n n_train_batches = len(training_set)\n validation_sets = dataset.get_validation_sets(val_batch_size)\n opt = keras.optimizers.Adam()\n model = Model(n_ids,n_track_ids,n_cids)\n model.optimizer = opt\n save_train_path = args.models_dir +\"/\" + args.model_name +\"/\" +\"resume/\"\n \n \n if args.mode == 'load':\n checkpoint = tf.train.Checkpoint(model=model)\n load_manager = tf.train.CheckpointManager(checkpoint, args.load_dir, max_to_keep=3)\n checkpoint.restore(load_manager.latest_checkpoint)\n model.Metrics.epochs_train_metrics = np.load(args.load_dir + \"/train_metrics.npy\").tolist()\n model.Metrics.epochs_val_metrics = np.load(args.load_dir + \"/val_metrics.npy\").tolist()\n elif args.mode == 'resume':\n resume_path = args.resume_dir\n resume=1\n \n \n \n model.train(training_set,validation_sets,n_epochs,train_batch_size,val_batch_size,save_train_path,resume_path,resume)\n # Save the Model Upon Completeing Traing\n checkpoint = tf.train.Checkpoint(model=model,epochs_train_metrics=tf.Variable(model.Metrics.epochs_train_metrics),\n epoch_val_metrics=tf.Variable(model.Metrics.epochs_val_metrics))\n save_manager = tf.train.CheckpointManager(checkpoint, args.models_dir +\"/\" + args.model_name +\"/\" +\"post_train\" , max_to_keep=3)\n save_manager.save()\n np.save(args.models_dir +\"/\" + args.model_name +\"/\" +\"post_train\" + \"/train_metrics\",model.Metrics.epochs_train_metrics)\n np.save(args.models_dir +\"/\" + args.model_name +\"/\" +\"post_train\" + \"/val_metrics\",model.Metrics.epochs_val_metrics)\n \n \n \n \n \n \n \n \n "
},
{
"alpha_fraction": 0.6356797218322754,
"alphanum_fraction": 0.6458892822265625,
"avg_line_length": 32.12963104248047,
"blob_id": "e37c753f646361dc4f1af5e9be4cd399c7f75457",
"content_id": "47a093812bde884a00a8029a6a63318f034ef343",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1861,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 54,
"path": "/experiment_env/challenge_predict.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Apr 7 22:13:10 2021\n\n@author: landon\n\"\"\"\n\nimport logging\n\nimport tensorflow as tf\nimport tensorflow.keras as keras\nfrom DataLoader import DataLoader\nfrom Model import Model\nimport argparse\nimport json\nimport os\n\n\n\n\nif __name__=='__main__':\n args = argparse.ArgumentParser(description=\"args\")\n args.add_argument('--load_dir', type=str, default='./trained_models/final_sub/resume/best_RP', help=\"directory where to load a trained model\")\n args.add_argument('--save_name', type=str, default='sub.csv', help=\"name of submission file\")\n args.add_argument('--team_name', type=str, default='JEYL', help=\"challange team name\")\n args.add_argument('--email', type=str, default='[email protected]', help=\"team contact email\")\n args = args.parse_args()\n \n if not os.path.isdir(\"./submissions\"):\n os.mkdir(\"./submissions\")\n save_path = \"./submissions/\" + args.save_name\n \n with open(\"./utils/data_properties\") as file:\n data_prop = json.load(file)\n file.close()\n n_ids = data_prop[\"n_tracks_artists\"]\n n_track_ids = data_prop[\"n_tracks\"]\n n_cids = data_prop[\"n_chars\"]\n \n \n opt = keras.optimizers.Adam()\n model = Model(n_ids,n_track_ids,n_cids)\n model.optimizer = opt\n \n checkpoint = tf.train.Checkpoint(model=model,curr_epoch=tf.Variable(0), best_val_rp = tf.Variable(0.0))\n load_manager = tf.train.CheckpointManager(checkpoint, args.load_dir,max_to_keep=3)\n checkpoint.restore(load_manager.latest_checkpoint)\n \n dataset = DataLoader('./utils')\n challenge_batch_size = 50\n challenge_sets = dataset.get_challenge_sets(challenge_batch_size)\n \n model.generate_challenge_submissions(challenge_sets,save_path,args.team_name,args.email)\n \n\n \n \n \n \n \n \n \n \n \n \n "
},
{
"alpha_fraction": 0.590345561504364,
"alphanum_fraction": 0.6075856685638428,
"avg_line_length": 43.40065002441406,
"blob_id": "d7524b3c99868e03027b51d4f521e673b0adc37b",
"content_id": "1c73ccf859805eb4548aaeef31611945e3175714",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13631,
"license_type": "no_license",
"max_line_length": 148,
"num_lines": 307,
"path": "/experiment_env/Model.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Apr 5 14:13:50 2021\n\n@author: landon\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport math\nfrom tensorflow import keras\nfrom tensorflow.keras import backend as K\nfrom Metrics import Metrics\nimport time\nimport json\nimport csv\nimport tensorflow_addons as tfa\n\n\n# class MultiHeadAttention(tf.keras.layers.Layer):\n# def __init__(self,head_size,num_heads):\n# super(MultiHeadAttention, self).__init__()\n# self.ff1 = keras.layers.Dense(units=head_size*num_heads,activation=tfa.activations.mish)\n# self.ff2 = keras.layers.Dense(units=head_size*num_heads,activation=tfa.activations.mish)\n# self.mha = tfa.layers.MultiHeadAttention(head_size, num_heads)\n \n# def call(self,query,key):\n# key = self.ff1(key)\n# query = self.ff2(query)\n# return self.mha([query,key])\n\n\nimport tensorflow as tf\nimport numpy as np\nimport math\nfrom tensorflow import keras\nfrom tensorflow.keras import backend as K\nfrom Metrics import Metrics\nimport time\nimport json\nimport csv\n\nclass Embedding(tf.keras.layers.Layer):\n def __init__(self,n_ids,embedding_dim=32):\n super(Embedding, self).__init__()\n self.n_ids = n_ids\n self.embedding_dim = embedding_dim\n \n def build(self,inputs):\n self.w = tf.Variable(\n tf.random.truncated_normal(\n shape=[self.n_ids, self.embedding_dim],\n stddev=1.0/math.sqrt(self.embedding_dim)),\n name = \"var_w\",\n trainable=True) \n \n def call(self, inputs):\n x = tf.nn.embedding_lookup(self.w,inputs)\n x = keras.backend.sum(x,1)\n return x\n \nclass CharacterCNN(tf.keras.layers.Layer):\n def __init__(self,n_cids,n_ids,embedding_dim=32):\n super(CharacterCNN, self).__init__()\n self.n_ids = n_ids\n self.emb = Embedding(n_cids,embedding_dim)\n self.ff1 = keras.layers.Dense(1014,activation='relu')\n self.conv1 = tf.keras.layers.Conv1D(256, 7, activation='relu')\n self.maxpool1 = tf.keras.layers.MaxPooling1D(pool_size=3)\n self.conv2 = 
tf.keras.layers.Conv1D(256, 7, activation='relu')\n self.maxpool2 = tf.keras.layers.MaxPooling1D(pool_size=3)\n self.conv3 = tf.keras.layers.Conv1D(256, 3, activation='relu')\n self.conv4 = tf.keras.layers.Conv1D(256, 3, activation='relu')\n self.conv5 = tf.keras.layers.Conv1D(256, 3, activation='relu')\n self.conv6 = tf.keras.layers.Conv1D(256, 3, activation='relu')\n self.maxpool3 = tf.keras.layers.MaxPooling1D(pool_size=3)\n self.flatten = tf.keras.layers.Flatten()\n self.ff2 = keras.layers.Dense(1024,activation='relu')\n self.dropout1 = keras.layers.Dropout(0.5)\n self.ff3 = keras.layers.Dense(1024,activation='relu')\n self.dropout2 = keras.layers.Dropout(0.5)\n self.ff4 = keras.layers.Dense(32,activation='relu')\n self.bn1 = keras.layers.BatchNormalization()\n self.bn2 = keras.layers.BatchNormalization()\n self.bn3 = keras.layers.BatchNormalization()\n self.bn4 = keras.layers.BatchNormalization()\n \n def call(self, ids,training=False):\n x = self.emb(ids) \n x = self.ff1(x)\n x = tf.reshape(x, (x.shape[0], x.shape[1], 1))\n x = self.conv1(x)\n x = self.maxpool1(x)\n x = self.conv2(x)\n x = self.maxpool2(x)\n x= self.bn1(x)\n x = self.conv3(x)\n x = self.conv4(x)\n x= self.bn2(x)\n x = self.conv5(x)\n x = self.conv6(x)\n x = self.maxpool3(x)\n x = self.flatten(x)\n x= self.bn3(x)\n x = self.ff2(x)\n x = self.dropout1(x)\n x = self.ff3(x)\n x = self.dropout2(x)\n x = self.ff4(x)\n x = self.bn4(x)\n return x\n \n \n \n \n\nclass DAE(tf.keras.layers.Layer):\n def __init__(self,n_ids,embedding_dim=64):\n super(DAE, self).__init__()\n self.b0 = tf.Variable(tf.random.normal(shape=[embedding_dim],dtype=tf.float32),trainable=True)\n self.b1 = tf.Variable(tf.random.normal(shape=[n_ids],dtype=tf.float32),trainable=True)\n self.emb = Embedding(n_ids,embedding_dim)\n self.ff1 = keras.layers.Dense(32,activation='relu')\n self.bn = keras.layers.BatchNormalization()\n \n def call(self, ids,training=False):\n x = self.emb(ids) + self.b0\n x = keras.activations.relu(x)\n x = x 
@ K.transpose(self.emb.w) + self.b1\n y_pred = self.ff1(x)\n y_pred = self.bn(y_pred)\n #y_pred = keras.activations.softmax(x,axis=1)\n #y_pred = keras.activations.sigmoid(x)\n return y_pred\n \n \n#loss_tracker = keras.metrics.Mean(name=\"loss\")\n\nclass Model(tf.keras.Model):\n def __init__(self,n_ids,n_track_ids,n_cids):\n super(Model, self).__init__()\n self.n_ids = n_ids\n self.n_track_ids = n_track_ids\n self.Metrics = Metrics(n_ids,n_track_ids)\n self.DAE = DAE(n_ids)\n self.charCNN = CharacterCNN(n_cids,n_ids)\n self.ff = keras.layers.Dense(n_ids,activation='relu')\n\n \n def call(self,ids,cids,training=False):\n y_pred_DAE = self.DAE(ids)\n y_pred_CNN = self.charCNN(cids)\n y_pred = self.ff(tf.concat([y_pred_DAE,y_pred_CNN],1))\n y_pred = keras.activations.softmax(y_pred,axis=1)\n return y_pred\n \n \n def loss(self,y_tracks,y_artists,y_pred):\n y_pred_tracks = y_pred[:,0:self.n_track_ids]\n y_pred_artists = y_pred[:,self.n_track_ids:]\n y_tracks = tf.cast(y_tracks,tf.float32).to_tensor(default_value=0,shape=(y_tracks.shape[0],self.n_track_ids))\n y_artists = tf.cast(y_artists,tf.float32).to_tensor(default_value=0,shape=(y_tracks.shape[0],self.n_ids-self.n_track_ids))\n l = self.cross_entropy(y_tracks,y_pred_tracks) + .5* self.cross_entropy(y_artists,y_pred_artists)\n reg = tf.linalg.norm(tf.concat([tf.reshape(w,[-1]) for w in self.trainable_weights],0))\n \n return l + 0.01* reg\n \n def cross_entropy(self,y_true,y_pred):\n return tf.reduce_mean(-tf.reduce_sum(y_true*tf.math.log(y_pred+1e-10) + .55*(1-y_true)*tf.math.log(1 -y_pred+1e-10),axis=1),axis=0)\n \n def get_reccomendations(self,x_tracks,y_pred):\n cand_ids = self._zero_by_ids(y_pred,x_tracks)\n cand_track = cand_ids[:,0:self.n_track_ids]\n cand_artist = cand_ids[:,self.n_track_ids:]\n\n _,rec_tracks = tf.math.top_k(cand_track,k=500)\n test = tf.sets.intersection(tf.cast(rec_tracks,tf.int32),x_tracks.to_sparse())\n missed = tf.sets.size(test)\n total_missed = tf.reduce_sum(missed,0)\n if 
total_missed > 0:\n print(\"debug\")\n _,rec_artists = tf.math.top_k(cand_artist,k=500)\n rec_artists += self.n_track_ids\n return rec_tracks,rec_artists\n \n def _zero_by_ids(self,tensor,ids):\n ids_2d = tf.stack([tf.ones_like(ids)*tf.expand_dims(tf.range(tensor.shape[0]),1),ids],2)\n ones = tf.ones_like(ids,dtype=tf.float32) *-1\n return tf.tensor_scatter_nd_update(tensor,ids_2d.flat_values,ones.flat_values)\n \n @tf.function\n def train_step(self, data):\n x_tracks,x_artists,x_titles,y_tracks,y_artists = data\n with tf.GradientTape() as tape:\n y_pred = self(tf.concat([x_tracks,x_artists],axis=1),x_titles,training=True) # Forward pass\n # Compute our own loss\n loss = self.loss(y_tracks,y_artists,y_pred)\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n rec_tracks,rec_artists = self.get_reccomendations(x_tracks,y_pred)\n r_precision,ndcg,rec_clicks = self.Metrics.calculate_metrics(rec_tracks,rec_artists,y_tracks,y_artists)\n return loss,r_precision,ndcg,rec_clicks\n \n #@tf.function\n def val_step(self,data):\n x_tracks,x_artists,x_titles,y_tracks,y_artists = data\n y_pred = self(tf.concat([x_tracks,x_artists],axis=1),x_titles, training=False)\n loss = self.loss(y_tracks,y_artists,y_pred)\n rec_tracks,rec_artists = self.get_reccomendations(x_tracks,y_pred)\n r_precision,ndcg,rec_clicks = self.Metrics.calculate_metrics(rec_tracks,rec_artists,y_tracks,y_artists)\n return loss,r_precision,ndcg,rec_clicks\n \n def train(self,training_set,validation_sets,\n n_epochs,train_batch_size,val_batch_size,\n save_train_path,resume_path,resume=0):\n self.train_batch_size = train_batch_size\n self.val_batch_size = val_batch_size\n n_batches = len(training_set)\n n_val_batches = 1000//val_batch_size\n training_set = iter(training_set.repeat(n_epochs))\n \n self.checkpoint = 
tf.train.Checkpoint(model=self,training_set=training_set,curr_epoch=tf.Variable(0), best_val_rp = tf.Variable(0.0))\n \n if resume: \n self.resume_manager = tf.train.CheckpointManager(self.checkpoint, resume_path , max_to_keep=1)\n self.checkpoint.restore(self.resume_manager.latest_checkpoint)\n self.Metrics.epochs_train_metrics = np.load(resume_path + \"/train_metrics.npy\").tolist()\n self.Metrics.epochs_val_metrics = np.load(resume_path + \"/val_metrics.npy\").tolist()\n \n self.most_recent_manager = tf.train.CheckpointManager(self.checkpoint, save_train_path + \"/most_recent\" , max_to_keep=1)\n self.best_RP_manager = tf.train.CheckpointManager(self.checkpoint, save_train_path + \"/best_RP\" , max_to_keep=1)\n curr_epoch = self.checkpoint.curr_epoch.numpy()\n best_val_rp = self.checkpoint.best_val_rp.numpy()\n \n \n pb_train_metrics_names = ['batch_loss','batch_R-Prec']\n \n progress_bar = tf.keras.utils.Progbar(self.train_batch_size*n_batches, stateful_metrics= pb_train_metrics_names, width=50,unit_name=\"batch\")\n \n for epoch in range(curr_epoch,n_epochs):\n print(\"\\nepoch {}/{}\".format(epoch+1,n_epochs))\n start_time = time.time()\n for batch_step in range(n_batches):\n batch = next(training_set)\n loss,r_precision,ndcg,rec_clicks = self.train_step(batch)\n progress_bar.update((batch_step+1)*train_batch_size,list(zip(pb_train_metrics_names,[loss,np.round(r_precision,3)])))\n self.Metrics.update_metrics(\"train_batch\",tf.stack([loss,r_precision,ndcg,rec_clicks],0))\n \n count = 0\n set_count = 0\n for batch in validation_sets:\n loss,r_precision,ndcg,rec_clicks = self.val_step(batch)\n self.Metrics.update_metrics(\"val_batch\",tf.stack([loss,r_precision,ndcg,rec_clicks],0))\n count += 1\n if count == n_val_batches:\n self.Metrics.update_metrics(\"val_set\",tf.stack([loss,r_precision,ndcg,rec_clicks],0))\n count = 0\n set_count +=1\n \n metrics_train,metrics_val = self.Metrics.update_metrics(\"epoch\")\n \n \n loss,r_precision,ndcg,rec_clicks = 
metrics_train\n print(\"\\nAVG Train:\\n loss:{0:g}\\n R-precison:{1:g}\\n NDCG:{2:g}\\n Rec-Clicks:{3:g}\".format(loss,r_precision,ndcg,rec_clicks))\n loss,r_precision,ndcg,rec_clicks = metrics_val\n print(\"AVG Val:\\n loss:{0:g}\\n R-precison:{1:g}\\n NDCG:{2:g}\\n Rec-Clicks:{3:g}\\n\".format(loss,r_precision,ndcg,rec_clicks))\n \n \n \n self.checkpoint.curr_epoch.assign_add(1)\n if r_precision > best_val_rp:\n best_val_rp = r_precision\n self.checkpoint.best_val_rp.assign(best_val_rp)\n self.best_RP_manager.save()\n np.save(save_train_path + \"/best_RP/train_metrics\",self.Metrics.epochs_train_metrics)\n np.save(save_train_path + \"/best_RP/val_metrics\",self.Metrics.epochs_val_metrics)\n # Save most recent checkpoint\n self.most_recent_manager.save()\n np.save(save_train_path + \"/most_recent/train_metrics\",self.Metrics.epochs_train_metrics)\n np.save(save_train_path + \"/most_recent/val_metrics\",self.Metrics.epochs_val_metrics)\n \n print(\"-----Epoch {0:} completed in {1:.2f} minutes-----\".format(epoch,(time.time() - start_time)/60))\n \n def generate_challenge_submissions(self,challenge_sets,save_path,team_name,email):\n with open('./utils/tid_2_uri') as file:\n tid_2_uri = tf.constant(list(json.load(file).items()))\n file.close()\n \n tid_2_uri = tf.lookup.StaticHashTable(tf.lookup.KeyValueTensorInitializer(\n tf.strings.to_number(tid_2_uri[:,0],out_type=tf.int32),tid_2_uri[:,1]),\"-\")\n submissions= []\n for x_tracks,x_artists,x_titles,(pid) in challenge_sets:\n y_pred = self(tf.concat([x_tracks,x_artists],axis=1),x_titles,training=False)\n \n rec_tracks,_ = self.get_reccomendations(x_tracks,y_pred)\n uris = \"spotify:track:\" + tid_2_uri[rec_tracks]\n uris = tf.concat([tf.strings.as_string(pid),uris],1)\n submissions += uris.numpy().astype(str).tolist()\n \n with open(save_path, 'w', newline = '') as outFile:\n wr = csv.writer(outFile, quoting = csv.QUOTE_NONE)\n wr.writerow(['team_info',team_name,email])\n wr.writerows(submissions)\n 
outFile.close()\n print(\"---- Submission saved to {0:} ------\".format(save_path))\n"
},
{
"alpha_fraction": 0.5474137663841248,
"alphanum_fraction": 0.5649744868278503,
"avg_line_length": 45.407405853271484,
"blob_id": "1cfd1b41099db9cd6cdf5abf4449812977162c81",
"content_id": "63cdf84d925c1fd95b52c861f59fe2c18fe1f614",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6264,
"license_type": "no_license",
"max_line_length": 164,
"num_lines": 135,
"path": "/experiment_env/DataLoader.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Apr 6 00:13:53 2021\n\n@author: landon\n\"\"\"\n\nimport tensorflow as tf\nimport numpy as np\nimport json\n \nclass DataLoader():\n \n def __init__(self,util_dir):\n self.util_dir = util_dir \n \n def get_traing_set(self,batch_size,seed):\n training_set = tf.data.experimental.load(\"./train\",tf.RaggedTensorSpec(tf.TensorShape([3, None]), tf.int32, 1, tf.int64))\n \n with open('./utils/tid_2_aid') as file:\n tid_2_aid = tf.constant(json.load(file))\n file.close() \n \n self.tid_2_aid = tf.lookup.StaticHashTable(tf.lookup.KeyValueTensorInitializer(tid_2_aid[:,0],tid_2_aid[:,1]),default_value=-1)\n del tid_2_aid\n tf.random.set_seed(seed)\n np.random.seed(seed)\n return training_set.map(lambda x: self.corrupt(x)).shuffle(1000,seed,True).apply(tf.data.experimental.dense_to_ragged_batch(batch_size,drop_remainder=True))\n \n def get_validation_sets(self,batch_size):\n validation_sets = tf.data.experimental.load(\"./val\",\n (tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None),\n tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None),\n tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None),\n tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None),\n tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None)))\n return validation_sets.apply(tf.data.experimental.dense_to_ragged_batch(batch_size))\n \n def get_challenge_sets(self,batch_size):\n challenge_data = tf.data.experimental.load(\"./challenge\",(tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None),\n tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None),\n tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None),\n tf.TensorSpec(shape=(None,), dtype=tf.int32, name=None)))\n return challenge_data.apply(tf.data.experimental.dense_to_ragged_batch(batch_size))\n\n @tf.autograph.experimental.do_not_convert\n def corrupt(self,x):\n p1 = np.random.uniform(size=1)\n if p1 > 0.5: \n corrupt_track = True\n else: \n corrupt_track = False 
\n input_tracks,input_artists,target_tracks,target_artists = tf.cond(tf.greater(x[2][0],25),\n lambda: self.gt_n(x,corrupt_track),\n lambda: self.le_n(x,corrupt_track))\n return (input_tracks,input_artists,x[1],target_tracks,target_artists)\n \n @tf.autograph.experimental.do_not_convert\n def gt_n(self,x,corrupt_track):\n p2 = np.random.uniform(size=1)\n if p2 > 0.5:\n keep_rate = tf.random.uniform(minval=.4,maxval=.7,shape=())\n n_inputs = tf.cast(tf.cast(x[2][0],tf.float32) * keep_rate,tf.int32)\n return self.random_id_corrupt(x,corrupt_track,n_inputs)\n \n else:\n return self.le_n(x,corrupt_track)\n \n\n @tf.autograph.experimental.do_not_convert\n def le_n(self,x,corrupt_track):\n keep_rate = tf.random.uniform(minval=.7,maxval=.9,shape=())\n n_inputs = tf.cast(tf.cast(x[2][0],tf.float32) * keep_rate,tf.int32)\n return self.seq_id_corrput(x,corrupt_track,n_inputs)\n \n @tf.autograph.experimental.do_not_convert\n def delete_tensor_by_indices(self,tensor,indices,n_tracks):\n idxs = tf.reshape(indices,(-1,1))\n mask = ~tf.scatter_nd(indices=idxs,updates=tf.ones_like(indices,dtype=tf.bool),shape=[n_tracks])\n return tf.boolean_mask(tensor,mask)\n\n @tf.autograph.experimental.do_not_convert\n def random_id_corrupt(self,x,corrupt_track,n_inputs,):\n n_tracks = x[2][0]\n if corrupt_track:\n feat = x[0]\n else: \n feat = self.tid_2_aid[x[0]]\n idxs = tf.random.shuffle(tf.range(n_tracks))[:n_inputs]\n input_ids = tf.gather(feat,idxs)\n removed_elements = self.delete_tensor_by_indices(feat,idxs,n_tracks)\n if corrupt_track :\n input_tracks = input_ids\n input_artists = tf.constant([],dtype=tf.int32)\n target_tracks = removed_elements\n target_artists = self.get_artists(target_tracks)\n else:\n input_tracks = tf.constant([],dtype=tf.int32)\n input_mask = ~tf.equal(input_ids,-1)\n input_artists = tf.boolean_mask(input_ids,input_mask)\n target_tracks = x[0]\n target_mask = ~tf.equal(removed_elements,-1)\n target_artists = tf.boolean_mask(removed_elements,target_mask)\n return 
input_tracks,input_artists,target_tracks,target_artists\n \n @tf.autograph.experimental.do_not_convert\n def seq_id_corrput(self,x,corrupt_track,n_inputs):\n n_tracks = x[2][0]\n if corrupt_track:\n feat = x[0]\n else: \n feat = self.tid_2_aid[x[0]]\n \n input_ids = feat[0:n_inputs]\n removed_elements = feat[n_inputs:]\n if corrupt_track :\n input_tracks = input_ids\n input_artists = tf.constant([],dtype=tf.int32)\n target_tracks = removed_elements\n target_artists = self.get_artists(target_tracks)\n else:\n input_tracks = tf.constant([],dtype=tf.int32)\n input_mask = ~tf.equal(input_ids,-1)\n input_artists = tf.boolean_mask(input_ids,input_mask)\n target_tracks = x[0]\n target_mask = ~tf.equal(removed_elements,-1)\n target_artists = tf.boolean_mask(removed_elements,target_mask)\n return input_tracks,input_artists,target_tracks,target_artists\n \n @tf.autograph.experimental.do_not_convert\n def get_artists(self,track_ids):\n artist_ids = self.tid_2_aid[track_ids]\n mask = ~tf.equal(artist_ids,-1)\n return tf.boolean_mask(artist_ids,mask)"
},
{
"alpha_fraction": 0.5643835663795471,
"alphanum_fraction": 0.5813698768615723,
"avg_line_length": 25.08571434020996,
"blob_id": "0131e9723253911053392bed0d04d0c6ca472e35",
"content_id": "e31544a2d3508308f32401f3bd7f0cca8d828dca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1825,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 70,
"path": "/metrics.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "import math\nimport numpy as np\n\n\ndef get_class(class_divpnt, idx):\n for c in class_divpnt:\n if idx <= c:\n return class_divpnt.index(c)\n return len(class_divpnt)\n\n\ndef get_class_dist(cls_list, num_cls):\n cls_dist = [1e-9] * num_cls\n for i in cls_list:\n if i is not -1:\n cls_dist[i]+=1\n return cls_dist\n\n\ndef get_r_precision(answer, cand, answer_cls, class_divpnt):\n num_cls = len(class_divpnt) + 1\n hr_by_cls = [0] * num_cls\n cls_dist = get_class_dist(answer_cls, num_cls)\n\n set_answer = set(answer)\n r = len(set_answer&set(cand[:len(answer)])) / len(answer)\n return r\n\ndef get_ndcg(answer, cand):\n cand_len = len(cand) \n idcg=1\n idcg_idx=2\n dcg=0\n if cand[0] in answer: dcg=1\n \n for i in range(1,cand_len):\n if cand[i] in answer: \n dcg += (1/math.log(i+1,2))\n idcg += (1/math.log(idcg_idx,2))\n idcg_idx+=1\n \n return dcg/idcg\n\ndef get_rsc(answer, cand):\n cand_len = len(cand)\n for i in range(cand_len):\n if cand[i] in answer:\n return i//10\n return 51\n\ndef get_metrics(answer,cand, answer_cls, num_cls):\n r_precision, hr_by_cls, cand_cls_dist = get_r_precision(answer,cand, answer_cls, num_cls)\n # ndcg = get_ndcg(answer,cand)\n # rsc = get_rsc(answer,cand)\n \n return r_precision, hr_by_cls, cand_cls_dist\n\ndef single_eval(scores, seed, answer, answer_cls, num_cls):\n cand = np.argsort(-1*scores)\n cand = cand.tolist()\n #print(\"sort:\",np.sort(-1*scores)[:10])\n #print(\"cand:\",cand[:10])\n for i in seed:\n try:\n cand.remove(i)\n except:\n pass\n cand = cand[:500]\n rprecision, hr_by_cls, cand_cls_dist = get_metrics(answer,cand, answer_cls, num_cls)\n return rprecision, hr_by_cls, cand_cls_dist"
},
{
"alpha_fraction": 0.601141631603241,
"alphanum_fraction": 0.6114876866340637,
"avg_line_length": 35.88815689086914,
"blob_id": "8734de86ec37dbfb29e7f6e3f7bfb7138749c195",
"content_id": "deb069736c2acdfab07ce1fac74eb1551fcbe87c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5606,
"license_type": "no_license",
"max_line_length": 159,
"num_lines": 152,
"path": "/toy_experiment_env/CreateSubmission.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "##\n# NOTE: File is now defunct as functionality has been moved to Model.py\n##\n\nimport tensorflow as tf\nimport json\nimport csv\nimport time\n\n# TODO optimize to speed up/parallelize\n# - Can use Tensor of strings with index as keys to replace tid_2_uri_dict\n# - Can use Tensor dict\ndef get_challenge_submission(pids, rec_tracks, tid_2_uri_dict):\n \n def convert_tid_2_uri(tracks_tensor):\n return(tf.map_fn(fn = lambda x: tid_2_uri_dict[str(x.numpy())], elems = tracks_tensor, fn_output_signature = tf.string))\n \n submission_tracks = tf.map_fn(fn = lambda x: convert_tid_2_uri(x), elems = rec_tracks, fn_output_signature = tf.TensorSpec(rec_tracks[0].shape, tf.string))\n\n submissions = list(map(lambda x: [pids[x]] + [\"spotify:track:\" + a.decode(\"utf-8\") for a in list(submission_tracks.numpy()[x])], range(0,len(pids))))\n\n return submissions\n\ndef process_challenge_batch_for_prediction(current_batch):\n # tracks\n list_of_track_tensors = [row[0] for row in current_batch]\n vals1 = tf.concat(list_of_track_tensors, axis = 0)\n lens1 = tf.stack([tf.shape(t, out_type = tf.int64)[0] for t in list_of_track_tensors])\n x_tracks = tf.RaggedTensor.from_row_lengths(vals1, lens1)\n del list_of_track_tensors\n\n # artists\n list_of_artist_tensors = [row[1] for row in current_batch]\n vals2 = tf.concat(list_of_artist_tensors, axis = 0)\n lens2 = tf.stack([tf.shape(t, out_type = tf.int64)[0] for t in list_of_artist_tensors])\n x_artists = tf.RaggedTensor.from_row_lengths(vals2, lens2)\n del list_of_artist_tensors\n\n list_of_pids = [row[3].numpy()[0] for row in current_batch]\n\n return x_tracks, x_artists, list_of_pids\n\ndef write_submission_to_file(submissions, path_to_file):\n \n with open(path_to_file, 'w', newline = '') as outFile:\n wr = csv.writer(outFile, quoting = csv.QUOTE_NONE)\n wr.writerow(['team_info'] + ['my awesome team name'] + ['[email protected]'])\n wr.writerows(submissions)\n outFile.close()\n\n \nif __name__ == \"__main__\":\n from 
wip_DAE import *\n from DataLoader import *\n from DataPreprocess import *\n\n start = time.time()\n \n #####################\n # SETUP\n #####################\n \n # data = DataPreprocess('./toy_preprocessed')\n # data.process_train_val_data('./toy_data', 2, 2)\n # data.process_challenge_data('./challenge_data')\n # Run script 'trainingValidationSplit.py'\n \n BATCH_SIZE = 50\n EPOCH = 1\n \n dataset = DataLoader('./toy_preprocessed/id_dicts')\n training_set = dataset.get_traing_set('./toy_train',BATCH_SIZE,123)\n validation_sets = dataset.get_validation_sets('./toy_val')\n challenge_sets = dataset.get_challenge_sets('./toy_preprocessed/challenge_data')\n\n model = DAE(BATCH_SIZE)\n opt = keras.optimizers.Adam()\n \n #####################\n # TRAIN MODEL\n #####################\n \n # print(\"Initial Training\")\n \n # count = 0\n \n # for epoch in range(EPOCH):\n # for x_tracks,x_artists,y_tracks,y_artists in training_set:\n # with tf.GradientTape() as tape:\n # y_pred = model(tf.concat([x_tracks,x_artists],axis=1), training=False) # Forward pass\n # # Compute our own loss\n # loss = model.loss(y_tracks,y_artists,y_pred)\n # # Compute gradients\n # trainable_vars = model.trainable_variables\n # gradients = tape.gradient(loss, trainable_vars)\n\n # # Update weights\n # opt.apply_gradients(zip(gradients, trainable_vars))\n \n # rec_tracks,rec_artists = model.get_reccomendations(x_tracks,y_tracks,y_artists,y_pred)\n # r_precision,ndcg,rec_clicks = model.Metrics.collect_metrics(1,loss,rec_tracks,rec_artists,y_tracks,y_artists)\n # print(\"[Batch #{0}],loss:{1:.2f},R-precison:{2:.2f},NDCG:{3:.2f},Rec-Clicks:{4:.2f}\".format(count,loss,r_precision,ndcg,rec_clicks))\n # count +=1\n # model.Metrics.collect_metrics(0)\n \n # model.save_weights(\"weights\",save_format=\"tf\")\n # print(\"Done Initial training\")\n \n #####################\n # LOAD TRAINED MODEL AND GENERATE SUBMISSION\n #####################\n \n reconstructed_model = DAE(BATCH_SIZE)\n 
reconstructed_model.load_weights(\"weights\")\n\n submissions = []\n ctr = 0\n \n with open('./toy_preprocessed/challenge_data') as cfile:\n cdata = json.load(cfile)\n cfile.close()\n tid_2_uri_dict = cdata['tid_2_uri']\n del(cdata)\n \n for current_batch in challenge_sets:\n \n # TODO skipping first batch for now since tracks are nonexistent\n print(f\"Batch number = {ctr}\")\n if (ctr == 0):\n ctr = 1\n continue\n\n ctr = ctr + 1\n\n x_tracks, x_artists, pids = process_challenge_batch_for_prediction(current_batch = current_batch)\n \n # Since x_artists is supposedly empty, seems no need to concatenate x_tracks with x_artists\n y_pred = model(x_tracks, training=False)\n\n rec_tracks,rec_artists = model.get_reccomendations(x_tracks = x_tracks, y_tracks = None, y_artists = None, y_pred = y_pred) \n \n # Takes about 5 minutes (see get_challenge_submission() definition)\n submissions = submissions + get_challenge_submission(pids, rec_tracks, tid_2_uri_dict)\n \n \n print(\"Submissions generated, outputting to file\")\n\n submission_file_name = 'submission.csv'\n write_submission_to_file(submissions, submission_file_name)\n\n end = time.time()\n print(f\"Done in = {end - start} seconds\")"
},
{
"alpha_fraction": 0.5499787330627441,
"alphanum_fraction": 0.5613568425178528,
"avg_line_length": 36.650604248046875,
"blob_id": "e8641059122a9b80e0ad3c4ef1a042ff7f2831ff",
"content_id": "5c3e7d017cd34acc6f28c66c01399dea3d9d2447",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9404,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 249,
"path": "/toy_experiment_env/DataPreprocess.py",
"repo_name": "jlandonedwards/AutoPlaylistContinuation",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Mar 10 16:09:22 2021\n@author: landon\nCode was derived by following some of the implmentation by Hojin Yang:\nhttps://github.com/hojinYang/spotify_recSys_challenge_2018/blob/master/utils/spotify_reader.py\n\n\"\"\"\nimport time\nimport sys\nimport json\nimport collections\nimport re\nimport os\nimport argparse\n\nVARIOUS_ARTISTS_URI = '0LyfQWJT6nXafLPZqxe9Of'\nchars = list('''abcdefghijklmnopqrstuvwxyz/<>+-1234567890''')\nchar2id = {ch: i for i, ch in enumerate(chars)}\n\ndef string_normalize(title):\n t = title.lower()\n t = re.sub(r\"[.,#!$%\\^\\*;:{}=\\_`~()@]\", ' ', t)\n t = re.sub(r'\\s+', ' ', t).strip()\n return t\n\nMAX_TITLE_LEN = 25\ndef title2ids(title):\n ids = []\n for char in title:\n cid = char2id.get(char,-1)\n if cid != -1:\n ids.append(cid)\n if len(ids) == MAX_TITLE_LEN : break\n\n return ids\n\n\nclass DataPreprocess:\n \"\"\"\n Convert original provided data into structured inputs for model (WIP)\n save_dir: STRING\n Path where processed data files will be is saved\n \"\"\"\n def __init__(self,save_dir):\n self.save_dir = save_dir\n self.tid_2_aid = None\n \n def process_train_val_data(self, data_dir,min_track=2, min_artist=2):\n \"\"\"\n \n Parameters\n ----------\n in_path : STRING\n Path to directory containing all original playlist data\n \n min_track: INT\n minimum # of appreance of a track over all playlists\n min_artist: INT\n minimum # of appearnces of an artist over all playlists\n\n Returns\n -------\n None.\n\n \"\"\"\n \n self.playlists_titles = list()\n self.playlists_tracks = list()\n self.t_uri_2_a_uri = dict()\n self.track_counts = collections.Counter()\n self.artist_counts = collections.Counter()\n self.playlist_len_counts = collections.Counter()\n \n for file in os.listdir(data_dir):\n f = open(os.sep.join((data_dir,file)))\n file_data = f.read() \n f.close()\n mpd_slice = json.loads(file_data)\n for playlist in mpd_slice['playlists']:\n 
self.process_playlist(playlist)\n \n # Map tracks all unqiue tracks uris to interger ids\n o_track_counts = collections.OrderedDict(self.track_counts.most_common())\n tracks,track_counts,t_uri2id = self.create_ids(o_track_counts, min_track, 0)\n del o_track_counts\n # Map all unqiue artists uris to ids\n del self.artist_counts[VARIOUS_ARTISTS_URI]\n o_artist_counts = collections.OrderedDict(self.artist_counts.most_common())\n artists,artist_counts,a_uri2id = self.create_ids(o_artist_counts, min_artist, len(t_uri2id))\n del o_artist_counts\n \n tid_2_aid = []\n for t_uri in t_uri2id.keys():\n a_uri = self.t_uri_2_a_uri[t_uri]\n a_id = a_uri2id.get(a_uri,-1)\n if a_id == -1: continue\n tid_2_aid.append((t_uri2id[t_uri] , a_id))\n \n with open(args.utils_dir + '/tid_2_aid', 'w') as file:\n json.dump(tid_2_aid,file,indent=\"\\t\")\n file.close()\n \n self.t_uri2id = t_uri2id\n self.tid_2_aid = dict(tid_2_aid)\n \n \n # Convert artists and tracks uris to coressponding ids and titles to seperate set of character ids\n playlists = []\n for t_uris, title in zip(self.playlists_tracks, self.playlists_titles):\n tid = self.playlist_uri2id(t_uris,t_uri2id)\n if len(tid) == 0:\n continue\n cid = title2ids(title)\n self.playlist_len_counts[str(len(tid))] +=1\n playlists.append([tid,cid,[len(tid)]])\n \n data = dict()\n data_properties = dict()\n data_properties['max_title_len'] = MAX_TITLE_LEN\n data_properties['n_chars'] = len(char2id)\n data_properties['n_tracks'] = len(t_uri2id)\n data_properties['n_artists'] = len(a_uri2id) \n data_properties['n_tracks_artists'] = len(t_uri2id) + len(a_uri2id)\n data_properties['n_playlists'] = len(playlists)\n data['playlists'] = playlists\n #data['playlists_counts'] = self.playlist_len_counts.most_common()\n \n with open(self.save_dir+'/'+'data', 'w') as file:\n json.dump(data,file,indent=\"\\t\")\n file.close()\n \n with open(args.utils_dir + '/data_properties', 'w') as file:\n json.dump(data_properties,file,indent=\"\\t\")\n 
file.close()\n \n del data\n \n print(\"num playlists: %d \\nnum tracks>=min_count: %d \\nnum artists>=min_count: %d \\nnum track and artists ids: %d\" %\n (len(playlists), len(t_uri2id),len(a_uri2id),data_properties['n_tracks_artists']))\n #print(self.playlist_len_counts.most_common())\n \n def process_challenge_data(self,challenge_dir):\n if self.tid_2_aid is None:\n print(\"Run process_train_val_data() before processing challenge data\")\n return\n \n self.playlists_titles = list()\n self.playlists_tracks = list()\n self.playlists_artists = list()\n self.count = 0\n self.id_count = 0\n challenge_playlists = []\n for file in os.listdir(challenge_dir):\n f = open(os.sep.join((challenge_dir,file)))\n file_data = f.read() \n f.close()\n mpd_slice = json.loads(file_data)\n for playlist in mpd_slice['playlists']:\n challenge_playlists.append(self.process_challenge_playlist(playlist))\n \n data = {'challenge_playlists':challenge_playlists}\n tid_2_uri = {v: k for k, v in self.t_uri2id.items()}\n with open(self.save_dir+'/'+'challenge_data', 'w') as file:\n json.dump(data,file,indent=\"\\t\")\n file.close()\n \n with open(args.utils_dir + '/tid_2_uri', 'w') as file:\n json.dump(tid_2_uri,file,indent=\"\\t\")\n file.close()\n \n def process_playlist(self,playlist):\n title = string_normalize(playlist['name'])\n self.playlists_titles.append(title)\n tracks = []\n for track in playlist['tracks']:\n t_uri = track['track_uri'].split(':')[2]\n a_uri = track['artist_uri'].split(':')[2]\n if t_uri not in self.t_uri_2_a_uri:\n self.t_uri_2_a_uri[t_uri] = a_uri\n tracks.append(t_uri)\n self.track_counts[t_uri] += 1\n self.artist_counts[a_uri] += 1 \n self.playlists_tracks.append(tracks)\n \n def process_challenge_playlist(self,playlist):\n c_ids = []\n if 'name' in playlist:\n c_ids = title2ids(string_normalize(playlist['name']))\n t_ids = []\n a_ids = []\n pid = playlist['pid']\n for track in playlist['tracks']:\n self.count += 1\n t_uri = track['track_uri'].split(':')[2]\n t_id 
= self.t_uri2id.get(t_uri,-1)\n if t_id != -1:\n self.id_count += 1\n t_ids.append(t_id)\n a_id = self.tid_2_aid.get(t_id,-1)\n if a_id != -1:\n a_ids.append(a_id) \n return (t_ids,a_ids,c_ids,[pid]) \n \n @staticmethod \n def create_ids(o_dict,min_count,start_id):\n uri_list = list(o_dict.keys())\n valid_uri_list = uri_list[:]\n count_list = list(o_dict.values())\n if min_count > 1:\n rm_from = count_list.index(min_count-1)\n del count_list[rm_from:]\n del valid_uri_list[rm_from:]\n uri2id = dict(zip(valid_uri_list, range(start_id, start_id + len(valid_uri_list))))\n return valid_uri_list, count_list, uri2id\n \n @staticmethod\n def playlist_uri2id(playlist_uris,uri2id):\n ids = []\n for uri in playlist_uris:\n i_d = uri2id.get(uri,-1)\n if i_d == -1: continue\n ids.append(i_d)\n return ids\n \nif __name__ == '__main__':\n start_time = time.time()\n args = argparse.ArgumentParser(description=\"args\")\n args.add_argument('--data_dir', type=str, default='./toy_data', help=\"directory where mpd slices are stored\")\n args.add_argument('--challenge_data_dir', type=str, default='./challenge_data', help=\"directory where challenge mpd slices are stored\")\n args.add_argument('--save_dir', type=str, default='./toy_preprocessed', help=\"directory where to store outputed data file\")\n args.add_argument('--utils_dir', type=str, default='./utils', help=\"directory where to store outputed data file\")\n args.add_argument('--min_track', type=int, default=2, help='minimum count of tracks')\n args.add_argument('--min_artist', type=int, default=2, help='minimum count of artists')\n args = args.parse_args()\n \n if not os.path.isdir(args.save_dir):\n os.mkdir(args.save_dir)\n if not os.path.isdir(\"./utils\"):\n os.mkdir(\"./utils\")\n \n data = DataPreprocess(args.save_dir)\n data.process_train_val_data(args.data_dir,args.min_track,args.min_artist)\n data.process_challenge_data(args.challenge_data_dir)\n \n \n \n print(\"---completed in %s seconds ---\" % round((time.time() - 
start_time),2))\n \n\n\n\n \n \n "
}
] | 12 |
snscrawler/ChickenAnalysis | https://github.com/snscrawler/ChickenAnalysis | 59db146ee6392ce8abf78f6aa4801c2201f19f27 | 07f66f36fefa53a125b8021a04ac1f1896f71b86 | 6de91fe33baaa69fa5b2e135cbcd0442766aa627 | refs/heads/master | 2021-01-25T06:29:53.862765 | 2017-06-07T01:37:35 | 2017-06-07T01:37:35 | 93,579,679 | 1 | 3 | null | null | null | null | null | [
{
"alpha_fraction": 0.5259299874305725,
"alphanum_fraction": 0.5584570169448853,
"avg_line_length": 31.484663009643555,
"blob_id": "a9b8c410ffc844472bd090f4df3fdd2a0f85b8f5",
"content_id": "6a1d4e35ea4e74f75d89ef3f56cb5b8297dabe75",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11532,
"license_type": "no_license",
"max_line_length": 355,
"num_lines": 326,
"path": "/Chicken.py",
"repo_name": "snscrawler/ChickenAnalysis",
"src_encoding": "UTF-8",
"text": "import urllib.request\r\nfrom bs4 import BeautifulSoup\r\nimport pandas as pd\r\nimport datetime\r\nfrom itertools import count\r\nimport xml.etree.ElementTree as ET\r\n\r\ndef get_request_url(url, enc='utf-8'):\r\n \r\n req = urllib.request.Request(url)\r\n\r\n try: \r\n response = urllib.request.urlopen(req)\r\n if response.getcode() == 200:\r\n try:\r\n rcv = response.read()\r\n ret = rcv.decode(enc)\r\n except UnicodeDecodeError:\r\n ret = rcv.decode(enc, 'replace') \r\n \r\n return ret\r\n \r\n except Exception as e:\r\n print(e)\r\n print(\"[%s] Error for URL : %s\" % (datetime.datetime.now(), url))\r\n return None\r\n\r\n\r\n'''\r\n#BBQ HTML 구조\r\n<tbody>\r\n <tr>\r\n <td class=\"pdL25\">강일지구점</td>\r\n <td>서울특별시 강동구 아리수로93길 27 202(강일동,,2강일타워2층202호~203호)</td>\r\n <td class=\"alignC\">02-429-0669</td>\r\n <td class=\"alignL\"><img src='/images/shop/ico_cafe.png' title='프리미엄카페'> <img src='/images/shop/ico_parking.png' alt='주차가능' title='주차가능'> <img src='/images/shop/ico_family.png' alt='패밀리룸' title='패밀리룸'> <img src='/images/shop/ico_wifi.png' alt='와이파이' title='와이파이'> <img src='/images/shop/ico_gg.png' alt='단체주문' title='단체주문'></td>\r\n <td class=\"alignC\"><a href=\"/shop/shop_view.asp?CHAINID=3203\" class=\"f12bG btn8\">매장 상세정보</a></td>\r\n </tr>\r\n ... 
(이하 생략)\r\n</tbody>\r\n'''\r\ndef getBBQAddress(result):\r\n \r\n BBQ_URL = 'https://www.bbq.co.kr/shop/shop_ajax.asp?page=1&pagesize=2000&gu=&si='\r\n print(BBQ_URL)\r\n\r\n rcv_data = get_request_url(BBQ_URL)\r\n soupData = BeautifulSoup(rcv_data, 'html.parser')\r\n\r\n tbody = soupData.find('tbody')\r\n \r\n tr_tag = []\r\n\r\n for store_tr in tbody.findAll('tr'):\r\n tr_tag = list(store_tr.strings)\r\n store_name = tr_tag[1]\r\n store_address = tr_tag[3]\r\n store_sido_gu = store_address.split()[:2]\r\n \r\n result.append([store_name] + store_sido_gu + [store_address])\r\n \r\n return\r\n\r\n'''\r\n#페리카나 HTML 구조\r\n<table class=\"table mt20\">\r\n<tbody>\r\n\t<tr>\r\n\t <td class=\"t_center\">가양동점</td>\r\n\t <td>서울특별시 강서구 강서로74길 12 (가양동)</td>\r\n\t <td class=\"t_center\">\r\n\t 02-3663-3700</td>\r\n\t <td class=\"t_center\"><a href=\"#none\" class=\"button h22 btn_gray\" onclick=\"store_view('126.84170552834682','37.56748111916124','가양동점','02-3663-3700','서울특별시 강서구 강서로74길 12 (가양동)' );\">상세정보</a></td>\r\n\t</tr>\r\n</tbody>\r\n</table>\r\n'''\r\ndef getPelicanaAddress(result):\r\n \r\n for page_idx in count():\r\n \r\n Pelicana_URL = 'http://www.pelicana.co.kr/store/stroe_search.html?&branch_name=&gu=&si=&page=%s' % str(page_idx + 1)\r\n print (\"[Pericana Page] : [%s]\" % (str(page_idx + 1)))\r\n\r\n rcv_data = get_request_url(Pelicana_URL)\r\n soupData = BeautifulSoup(rcv_data, 'html.parser')\r\n \r\n store_table = soupData.find('table', attrs={'class':'table mt20'})\r\n tbody = store_table.find('tbody')\r\n bEnd = True\r\n for store_tr in tbody.findAll('tr'):\r\n bEnd = False\r\n tr_tag = list(store_tr.strings)\r\n store_name = tr_tag[1]\r\n store_address = tr_tag[3]\r\n store_sido_gu = store_address.split()[:2]\r\n\r\n result.append([store_name] + store_sido_gu + [store_address])\r\n\r\n if (bEnd == True):\r\n return\r\n\r\n return\r\n\r\n'''\r\n#굽네치킨 XML 형식\r\n<lists>\r\n <item seq=\"a\">\r\n <aname1>경기가평군가평점</aname1>\r\n <aname2>경기</aname2>\r\n 
<aname3>가평군</aname3>\r\n <aname4>경기 가평군 가평읍 석봉로</aname4>\r\n <aname5>경기도가평군가평읍석봉로230</aname5>\r\n <aname6>031</aname6>\r\n <aname7>031-581-9982</aname7>\r\n <aname8>287</aname8>\r\n </item>\r\n</lists>\r\n'''\r\ndef getNeneAdddress(result):\r\n\r\n Nene_URL = 'http://nenechicken.com/subpage/where_list.asp?target_step2=%s&proc_type=step1&target_step1=%s' % (urllib.parse.quote('전체'), urllib.parse.quote('전체'))\r\n \r\n rcv_data = get_request_url(Nene_URL)\r\n\r\n root = ET.fromstring(rcv_data)\r\n \r\n for element in root.findall('item'):\r\n store_name = element.findtext('aname1')\r\n store_sido = element.findtext('aname2')\r\n store_gungu = element.findtext('aname3')\r\n store_address = element.findtext('aname5')\r\n \r\n result.append([store_name] + [store_sido] + [store_gungu] + [store_address])\r\n\r\n return\r\n\r\n'''\r\n#교촌치킨 HTML 구조\r\n<div class=\"shopSchList\">\r\n\t<!-- 매장 리스트 -->\r\n\t<ul class=\"list\">\r\n\t\t<li>\r\n\t\t\t<a href=\"javascript:mapchange('서울 강동구 고덕동 650-1','고덕1호','541');\">\r\n\t\t\t\t<dl>\r\n\t\t\t\t\t<dt>고덕1호</dt>\r\n\t\t\t\t\t<dd>\r\n\t\t\t\t\t\t서울 강동구 고덕동 650-1<br />\r\n\t\t\t\t\t\t(서울특별시 강동구 고덕로61길 116)<br />\r\n\t\t\t\t\t\t02 -481-9503~4\r\n\t\t\t\t\t</dd>\r\n\t\t\t\t</dl>\r\n\t\t\t</a>\r\n\t\t\t<p class=\"goView\" onclick=\"return location.href='/shop/domestic_sch.asp?shop_id=541&sido1=1&sido2=2'\"><img src=\"../images/shop/bg_btn_shop_on.gif\" alt=\"상세\" /></p>\r\n\t\t</li>\r\n\t</ul>\r\n\t<!-- 지도 -->\r\n\t<div class=\"mapBox\" id=\"itfsMap\">\r\n\t</div>\r\n</div>\r\n'''\r\ndef getKyochonAddress(sido1, result):\r\n \r\n for sido2 in count():\r\n Kyochon_URL = 'http://www.kyochon.com/shop/domestic.asp?txtsearch=&sido1=%s&sido2=%s' % (str(sido1), str(sido2+1))\r\n print (Kyochon_URL)\r\n\r\n try:\r\n rcv_data = get_request_url(Kyochon_URL)\r\n soupData = BeautifulSoup(rcv_data, 'html.parser')\r\n \r\n ul_tag= soupData.find('ul', attrs={'class': 'list'})\r\n\r\n for store_data in ul_tag.findAll('a', href=True):\r\n store_name = 
store_data.find('dt').get_text()\r\n store_address = store_data.find('dd').get_text().strip().split('\\r')[0]\r\n store_sido_gu = store_address.split()[:2]\r\n result.append([store_name] + store_sido_gu + [store_address])\r\n except:\r\n break\r\n\r\n return\r\n\r\n'''\r\n#처갓집 양념치킨 HTML 구조\r\n<table width=\"430\" border=\"0\" cellpadding=\"0\" cellspacing=\"1\" bgcolor=\"#E8E8E8\">\r\n<tr> \r\n <td height=\"2\" colspan=\"3\" bgcolor=\"#70C5C2\"></td>\r\n</tr>\r\n<tr align=\"center\" bgcolor=\"#DDEFEE\"> \r\n <td width='80'><b>체인명</b></td>\r\n <td><b>주소</b></td>\r\n <td width='100'><b>전화번호</b></td>\r\n</tr>\r\n<tr align=\"center\" bgcolor=\"#FFFFFF\"> \r\n <td>강화남산점<br></td>\r\n <td align='left'>인천시 강화군 강화읍 충렬사로 57</td>\r\n <td>032-933-2201<br/></td>\r\n</tr>\r\n</table>\r\n'''\r\ndef CheogajipAddress(result):\r\n \r\n for page_idx in count():\r\n\r\n Cheogajip_URL = 'http://www.cheogajip.co.kr/establish02_02.html?&search=&keyword=&page=%s' % str(page_idx+1)\r\n \r\n print (Cheogajip_URL)\r\n response = urllib.request.urlopen(Cheogajip_URL)\r\n soupData = BeautifulSoup(response.read().decode('CP949'), 'html.parser')\r\n\r\n store_trs = soupData.findAll('tr', attrs={'align': 'center', 'bgcolor':'#FFFFFF'})\r\n\r\n if (store_trs):\r\n for store_tr in store_trs:\r\n tr_tag = list(store_tr.strings)\r\n if (tr_tag[1].count('[휴점]') == 0):\r\n store_name = tr_tag[1]\r\n store_address = tr_tag[3]\r\n store_sido_gu = store_address.split()[:2]\r\n result.append([store_name] + store_sido_gu + [store_address])\r\n else:\r\n break\r\n\r\n return\r\n'''\r\n#굽네치킨 HTML 형식\r\n<tbody id=\"store_list\">\r\n <tr class=\"on lows\" idx=\"788\" onclick=\"store.viewdt('788','37.2755111612','127.070853941');\" id=\"788\">\r\n <td>흥덕지구점<span><!--031-651-9294--></span></td>\r\n <td class=\"store_phone\">\r\n <a href=\"javascript:void(0);\" onclick=\"store.teldt('031-212-9293');\">031-212-9293</a>\r\n </td>\r\n <td class=\"t_left\">\r\n \t\t<a href=\"javascript:void(0);\">경기도 용인시 기흥구 흥덕1로 79번길 9, 105호</a>\r\n \t\t<p>\r\n \t\t<i class=\"online \">온라인</i>\r\n \t\t<i class=\"coupon \">e-쿠폰</i>\r\n\t\t\t<!--<i class=\"cesco on\">세스코</i>-->\r\n\t\t\t<i class=\"card_dis \">카드할인</i>\r\n \t\t</p>\r\n\t</td>\r\n</tr>\r\n'''\r\nfrom selenium import webdriver\r\nimport time\r\ndef GoobneAddress(result):\r\n\r\n Goobne_URL = 'http://www.goobne.co.kr/store/search_store.jsp'\r\n \r\n wd = webdriver.Chrome('d:/Program Files/Python/WebDriver/chromedriver.exe')\r\n wd.get(Goobne_URL)\r\n time.sleep(10)\r\n\r\n for page_idx in count():\r\n \r\n wd.execute_script(\"store.getList('%s')\" % str(page_idx + 1))\r\n print (\"PageIndex [%s] Called\" % (str(page_idx + 1)))\r\n\r\n time.sleep(5)\r\n \r\n rcv_data = wd.page_source\r\n \r\n soupData = BeautifulSoup(rcv_data, 'html.parser')\r\n \r\n for store_list in soupData.findAll('tbody', attrs={'id': 'store_list'}):\r\n for store_tr in store_list:\r\n tr_tag = list(store_tr.strings)\r\n if (tr_tag[0] == '등록된 데이터가 없습니다.'):\r\n return result\r\n \r\n store_name = tr_tag[1]\r\n if (tr_tag[3] == ''):\r\n store_address = tr_tag[5]\r\n else:\r\n store_address = tr_tag[6]\r\n store_sido_gu = store_address.split()[:2]\r\n\r\n result.append([store_name] + store_sido_gu + [store_address])\r\n\r\n return\r\n\r\ndef main():\r\n\r\n result = []\r\n\r\n print('BBQ ADDRESS CRAWLING START')\r\n getBBQAddress(result)\r\n bbq_table = pd.DataFrame(result, columns=('store', 'sido', 'gungu', 'store_address'))\r\n bbq_table.to_csv(\"d:/temp/chicken_data/bbq.csv\", encoding=\"cp949\", mode='w', index=True)\r\n del result[:]\r\n\r\n print('PERICANA ADDRESS CRAWLING START')\r\n getPelicanaAddress(result)\r\n pericana_table = pd.DataFrame(result, columns=('store', 'sido', 'gungu', 'store_address'))\r\n pericana_table.to_csv(\"d:/temp/chicken_data/pericana.csv\", encoding=\"cp949\", mode='w', index=True)\r\n del result[:]\r\n\r\n print('NENE ADDRESS CRAWLING START')\r\n getNeneAdddress(result)\r\n nene_table = pd.DataFrame(result, columns=('store', 'sido', 'gungu', 'store_address'))\r\n nene_table.to_csv(\"d:/temp/chicken_data/nene.csv\", encoding=\"cp949\", mode='w', index=True)\r\n del result[:]\r\n\r\n print('KYOCHON ADDRESS CRAWLING START')\r\n for sido1 in range(1, 18):\r\n getKyochonAddress(sido1, result)\r\n kyochon_table = pd.DataFrame(result, columns=('store', 'sido', 'gungu', 'store_address'))\r\n kyochon_table.to_csv(\"d:/temp/chicken_data/kyochon.csv\", encoding=\"cp949\", mode='w', index=True)\r\n del result[:]\r\n\r\n print('CHEOGAJIP ADDRESS CRAWLING START')\r\n CheogajipAddress(result)\r\n cheogajip_table = pd.DataFrame(result, columns=('store', 'sido', 'gungu', 'store_address'))\r\n cheogajip_table.to_csv(\"d:/temp/chicken_data/cheogajip.csv\", encoding=\"cp949\", mode='w', index=True)\r\n del result[:]\r\n\r\n print('GOOBNE ADDRESS CRAWLING START')\r\n GoobneAddress(result)\r\n goobne_table = pd.DataFrame(result, columns=('store', 'sido', 'gungu', 'store_address'))\r\n goobne_table.to_csv(\"d:/temp/chicken_data/goobne.csv\", encoding=\"cp949\", mode='w', index=True)\r\n\r\n print('FINISHED')\r\n \r\nif __name__ == '__main__':\r\n main()"
},
{
"alpha_fraction": 0.7419354915618896,
"alphanum_fraction": 0.774193525314331,
"avg_line_length": 14.5,
"blob_id": "82c589c2f489f83095531cc5a7a47421ee27edcd",
"content_id": "d4f59ad364751e9e80d25bf2081983803c4c3db2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 47,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 2,
"path": "/README.md",
"repo_name": "snscrawler/ChickenAnalysis",
"src_encoding": "UTF-8",
"text": "# ChickenAnalysis\n국내 5대 치킨집 분석\n"
}
] | 2 |
kailash16/Freshworks-Backend-Assignment | https://github.com/kailash16/Freshworks-Backend-Assignment | dd859a0948bff74485e994253c754b127986b163 | ec74e9a01be266893a50d043e9ae12eb6f6b28b2 | a555e85c91ec9d6d85218a33e9e5a46340e1a6a3 | refs/heads/main | 2023-01-27T19:31:33.338450 | 2020-12-10T06:16:03 | 2020-12-10T06:16:03 | 320,177,623 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7631579041481018,
"alphanum_fraction": 0.7631579041481018,
"avg_line_length": 18,
"blob_id": "742ead8fa834c63b2f123a994e20ab0b83ac7e05",
"content_id": "e1fdc17e2cc8e4afba77df8d7833a7b54d76876d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 76,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 4,
"path": "/README.md",
"repo_name": "kailash16/Freshworks-Backend-Assignment",
"src_encoding": "UTF-8",
"text": "# Freshworks-Backend-Assignment\n\nopen the Start.py file \n-> Run the program\n"
},
{
"alpha_fraction": 0.4560818374156952,
"alphanum_fraction": 0.46497663855552673,
"avg_line_length": 37.01818084716797,
"blob_id": "caf26906f9dfd0c678f4f0f822d153ba745d40f0",
"content_id": "d434bbbd8063aa342989e6beb6cde2f9e59d3920",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4497,
"license_type": "no_license",
"max_line_length": 162,
"num_lines": 110,
"path": "/sourcecode.py",
"repo_name": "kailash16/Freshworks-Backend-Assignment",
"src_encoding": "UTF-8",
      "text": "import os\r\nclass main:\r\n def __init__(self):\r\n #d={}\r\n pass;\r\n \r\n\r\n\r\n\r\n\r\n \r\n def create(self):#for create operation\r\n d={}#'d' is the dictionary in which we store data\r\n key=input(\"ENTER THE KEY(MAXIMUM LENGTH OF 32 AND MUST BE A STRING): \");#user can enter the key value\r\n filename=key;\r\n if(len(key)>32):#constraints for input key_name capped at 32char\r\n print(\"Memory limit exceeded !\")#error message when key constraints not satisfy\r\n print(\"Please try again\\n\")\r\n print(\"----------------------------\\n\")\r\n create();# It will start the session again\r\n elif(key.isalpha())==False:#constraints for input key_name should be string\r\n print(\"Invalind key_name!! key_name must contain only alphabets and no special characters or numbers\")#error message when key constraints not satisfy\r\n print(\"Please try again\\n\")\r\n print(\"----------------------------\\n\")\r\n create();#It will start the session again\r\n \r\n value=input(\"ENTER THE VALUE(MAXIMUM LENGTH 16):\");\r\n \r\n if(len(d)<(1024*10245*1024) and len(value)>(16*1024*1024)):#constraints for file size less than 1GB and Jasonobject value less than 16KB\r\n print(\"Memory limit exceed !\\n\")#error message when value and file size constraints not satisfy\r\n print(\"Create key once again/\")\r\n print(\"----------------------------\\n\")\r\n create();#It will start the session again\r\n else:\r\n \r\n try:\r\n f = open(filename, 'x') #It will create a file\r\n print(\"Key is created\")\r\n f.close()\r\n except:\r\n print(\"Key already exits\")# It will show error when the file already exits\r\n exit();\r\n try:\r\n d[key]=value;\r\n f=open(filename,'wt')\r\n f.write(str(d))#It will write the data in the file\r\n f.close()\r\n except:\r\n print(\"Unable to store data\")#It will show error when unable to store the data\r\n\r\n \r\n def parse(self,d): #This function is used read data from a text file as a string and convert that data into a dictionary and return the dictionary\r\n \r\n dictionary = dict()\r\n # Removes curly braces and splits the pairs into a list\r\n pairs = d.strip('{}').split(', ')\r\n for i in pairs:\r\n pair = i.split(': ')\r\n # Other symbols from the key-value pair should be stripped.\r\n dictionary[pair[0].strip('\\'\\'\\\"\\\"')] = pair[1].strip('\\'\\'\\\"\\\"')\r\n \r\n return dictionary\r\n\r\n def read(self):#for read operation\r\n flag=0;\r\n d={}#'d' is the dictionary in which we store data\r\n key=input(\"ENTER THE KEY NAME: \");\r\n filename=key;\r\n \r\n try:\r\n file = open(filename, 'rt')#for read the textfile\r\n \r\n lines = file.read().split('\\n')\r\n \r\n for l in lines:\r\n if l != '':\r\n dictionary = self.parse(l)\r\n #print(dictionary)\r\n for keys, value in dictionary.items():\r\n if keys==key:\r\n print(dictionary[keys])#It will show the value for the respective key that users gave as a input\r\n flag=1; #if the key is found assign flag to 1;\r\n break;\r\n \r\n if(flag==0):#if the key is not found then the key is invalid\r\n print(\"Key is invalid\");\r\n \r\n file.close()\r\n except:#if the file is not exits\r\n print(\"Key not found! \")\r\n\r\n \r\n def delete(self):#for delete operation\r\n key=input(\"ENTER THE KEY NAME: \")\r\n filename=key\r\n \r\n \r\n try:\r\n \r\n \r\n os.remove(filename)\r\n print(\"Key is deleted\")\r\n \r\n \r\n \r\n \r\n \r\n \r\n except Exception as e:# if the file is not found then show file not exits\r\n print(\"Key not exits!\")\r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n \r\n\r\n\r\n \r\n \r\n \r\n \r\n \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.5899705290794373,
"alphanum_fraction": 0.5988200306892395,
"avg_line_length": 31.899999618530273,
"blob_id": "72b70631f32a287835bcc617de4aaaabc69b7cfb",
"content_id": "813ee5a929b0afedf43760c8e20256a091310ef5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 678,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 20,
"path": "/start.py",
"repo_name": "kailash16/Freshworks-Backend-Assignment",
"src_encoding": "UTF-8",
"text": "from sourcecode import main #import the sourcecode.py and class main\r\ndef initial():\r\n x=main()#create the object \r\n print(''' 1. Create file and store data\r\n 2. Read data\r\n 3. Delete data''')\r\n choice=int(input(\"ENTER THE CHOICE...\"))\r\n if(choice==1):\r\n \r\n x.create()#call the create method\r\n elif (choice==2):\r\n x.read();#call the read method\r\n elif(choice==3):\r\n x.delete();#call the delete method\r\n else:\r\n print(\"Invalid choice\\n\")\r\n print(\"Please try again\\n\")\r\n initial()#if the option is wrong then call the function again\r\n \r\ninitial()#call the initial fucntion to start the process\r\n"
}
] | 3 |
lxzhu94/andes_tony | https://github.com/lxzhu94/andes_tony | 25d24e8bf5b0f9d36fe97e29f6bd01a7a3b595ec | 56193cbd20aacb7798d1edca616755cfb358a1c2 | f20b771527c3e86ec0d1fb673ed96de7a5ce2fae | refs/heads/master | 2018-02-18T06:09:32.091971 | 2017-03-22T03:15:54 | 2017-03-22T03:15:54 | 63,757,192 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5276615619659424,
"alphanum_fraction": 0.5406787395477295,
"avg_line_length": 36.094825744628906,
"blob_id": "23feb2d7113608235af38dba7e90de071eb1c809",
"content_id": "6ed88086d91a2fdc1202fa3638aa66cfa7db98dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 4314,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 116,
"path": "/scripts/share/coupleRank.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
      "text": "/**\n * Created by Tony on 2016/7/15.\n */\nfunction remAdjust(defaultFontSize, defaultScreenHeight, defaultScreenWidth) {\n var htmlNode = document.getElementsByTagName('html')[0];\n\n function resize() {\n var screenWidth = document.body.offsetWidth;\n var screenHeight = window.screen.height;console.log(screenHeight+\"+\"+screenWidth);\n if(screenWidth / defaultScreenWidth * defaultFontSize>28)\n htmlNode.style.fontSize=28+'px';\n else\n htmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n }\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nremAdjust(20,568,320);\nfunction p(num){\n return num>=10?num.toString():('0'+num);\n}\nvar CR={\n messageUrl:'http://cootek-walkie-talkie.oss-cn-hangzhou.aliyuncs.com/external/couple_20160713/data/',\n isReceived:false,\n timeout:null,\n rankDetailDom:$('.rankDetailDom'),\n getDate:function(offset){\n var date=new Date();\n date.setDate(date.getDate()+offset);\n return date;\n },\n getDateFormat:function(offset){\n var date=new Date();\n date.setDate(date.getDate()+offset);\n var year=date.getFullYear().toString();\n return year+p(date.getMonth()+1)+p(date.getDate());\n },\n getData:function(day_offset){\n var url,data;\n if(day_offset<=-2) {\n $('#i').attr('src', CR.messageUrl + 'default.html');\n clearInterval(CR.timeout);\n }\n else\n $('#i').attr('src',CR.messageUrl+CR.getDateFormat(day_offset)+'.html');\n $('#i').get(0).onload=function(){\n window.frames[0].postMessage('',CR.messageUrl);\n };\n if(day_offset>-2){\n var timeout=setTimeout(function(){\n if(!CR.isReceived)\n CR.getData(--day_offset);\n },1000);\n }\n if(window.addEventListener)\n window.addEventListener('message',CR.messageCallback,false);\n else if(window.attachEvent)\n window.attachEvent('onmessage',CR.messageCallback);\n },\n messageCallback:function(e){\n clearTimeout(CR.timeout);\n if(window.removeEventListener)\n window.removeEventListener('message',CR.messageCallback,false);\n else\n window.detachEvent('onmessage',CR.messageCallback);\n CR.isReceived=true;\n var data= e.data;\n if(typeof(e.data )==='string')\n data=JSON.parse(e.data);\n var phone,index;\n if(Andes.handler)\n phone=Andes.getLoginPhone();\n else\n phone='123';\n function getSortFun(order, sortBy) {\n var ordAlpah = (order == 'asc') ? '>' : '<';\n var sortFun = new Function('a', 'b', 'return a.' + sortBy + ordAlpah + 'b.' + sortBy + '?1:-1');\n return sortFun;\n }\n data.sort(getSortFun('desc','grade'));\n var isInRank=false;\n for(var i in data) {\n if (data[i].mPhone == phone || data[i].wPhone == phone) {\n $('.nickname1').text(data[i].mNickname);\n $('.nickname2').text(data[i].wNickname);\n $('.my-grade>span').text(data[i].grade);\n $('.my-rank>span').text(parseInt(i)+1);\n isInRank=true;\n }\n if (i <= 2) {\n i=parseInt(i);\n $('#rankDetail' + (i + 1)).find('.nickname').text(data[i].mNickname + '&' + data[i].wNickname);\n $('#rankDetail'+(i+1)).find('.grade').text(data[i].grade);\n }\n else{\n var newDom=CR.rankDetailDom;\n newDom.find('.nickname').text(data[i].mNickname + '&' + data[i].wNickname);\n newDom.find('.grade').text(data[i].grade);\n newDom.find('.rank').text(parseInt(i)+1);\n $('.rank-list').append(newDom.clone());\n }\n }\n if(!isInRank){\n $('.nickname1').text(\"未设置\");\n $('.nickname2').text(\"未设置\");\n $('.my-grade>span').text('-');\n $('.my-rank>span').text('-');\n }\n }\n};\nfunction init(){\n $('#i').attr('src',CR.messageUrl+CR.getDateFormat(0)+'.html');\n CR.getData(0);\n}"
},
{
"alpha_fraction": 0.5524861812591553,
"alphanum_fraction": 0.5849447250366211,
"avg_line_length": 33.90361404418945,
"blob_id": "ae74c54fc734417a975a2e0f87ea654860b11b35",
"content_id": "64ba6e45952438b0bac7a397073148216d827082",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2976,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 83,
"path": "/scripts/andes/widget_guide_swipe.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
      "text": "'use strict';\n\n/**\n * Created by zhulixiao on 2017/2/20.\n */\nmyLib.remAdjust(20, 320);\ndocument.setTitle = function (t) {\n document.title = t;\n var i = document.createElement('iframe');\n i.src = '//m.baidu.com/favicon.ico';\n i.style.display = 'none';\n i.onload = function () {\n setTimeout(function () {\n i.remove();\n }, 9);\n };\n document.body.appendChild(i);\n};\nvar img = new Image();\nvar mySwiper = {};\nimg.onload = function () {\n mySwiper = new Swiper('.swiper-container', {\n direction: 'horizontal',\n autoplay: 6000,\n autoplayDisableOnInteraction: false,\n pagination: '.swiper-pagination',\n longSwipesRatio: 0.3,\n onTransitionStart: function onTransitionStart(swiper) {\n mySwiper.slides[mySwiper.activeIndex].children[0].src = mySwiper.slides[mySwiper.activeIndex].children[0].getAttribute('src');\n }\n });\n var src = '/andesres/image/andes/widget_guide/';\n if (myLib.GetQueryString('language') === 'en') {\n $('.swiper-slide')[0].children[0].src = src + '';\n $('.swiper-slide')[0].children[0].src = src + 'en/添加插件1.gif';\n setTimeout(function () {\n $('.swiper-slide')[1].children[0].src = src + 'en/添加插件2.gif';\n $('.swiper-slide')[2].children[0].src = src + 'en/添加插件3.gif';\n $('#page2 img').attr('src', '/andesres/image/andes/widget_guide/en/操作说明.gif');\n }, 200);\n } else {\n $('.swiper-slide')[0].children[0].src = src + '';\n $('.swiper-slide')[0].children[0].src = src + '/添加插件1.gif';\n setTimeout(function () {\n $('.swiper-slide')[1].children[0].src = src + '/添加插件2.gif';\n $('.swiper-slide')[2].children[0].src = src + '/添加插件3.gif';\n $('#page2 img').attr('src', '/andesres/image/andes/widget_guide/操作说明.gif');\n }, 200);\n }\n};\nif (myLib.GetQueryString('language') === 'en') {\n img.src = '/andesres/image/andes/widget_guide/en/添加插件1.gif';\n} else {\n img.src = '/andesres/image/andes/widget_guide/添加插件1.gif';\n}\nwindow.onload = function () {\n FastClick.attach(document.body);\n if (myLib.GetQueryString('language') !== 'en') document.setTitle('\\u9501\\u5C4F\\u5BF9\\u8BB2\\u5C0F\\u63D2\\u4EF6');else {\n document.setTitle('Widget Guide');\n $('#btn1').text('Add Widget');\n $('#btn2').text('Instructions');\n }\n var current_id = 'btn1';\n $('.btn1,.btn2').click(function () {\n var id = $(this).attr('id');\n if (current_id === id) return;\n current_id = id;\n $('.btn1,.btn2').removeClass('clicked');\n $('#' + id).addClass('clicked');\n if (id === 'btn1') {\n $('#page1').show();\n // $('#page2').hide();\n // $('#page2 img').get(0).src = '';\n mySwiper.slideTo(0, 0, function () {});\n mySwiper.slides[0].children[0].src = mySwiper.slides[0].children[0].getAttribute('src');\n } else {\n $('#page1').hide();\n // $('#page2').show();\n $('#page2 img').get(0).src = $('#page2 img').attr('src');\n }\n });\n $('#content').css('opacity', 1);\n};"
},
{
"alpha_fraction": 0.5193046927452087,
"alphanum_fraction": 0.5217715501785278,
"avg_line_length": 38.1707878112793,
"blob_id": "3b8d43c9940caa1c4483200c279b049e563667da",
"content_id": "f6413527d5cc5e21c75b989a21d6f87a9764d671",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 17633,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 445,
"path": "/scripts/andes/sound_wall.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
      "text": "var SW = {\n soundContainer: $('.sound-list-box'),\n soundTopContainer: $('.sound-list-top-box'),\n playAudio: $('#play_audio')[0],\n progress: $('.sound-progress'),\n soundItemTpl: $('#sound_item_tpl'),\n soundTopItemTpl: $('#sound_top_item_tpl'),\n soundListItemTpl: $('#sound_top_item_tpl'),\n soundTotalIndex: -1,\n rotateInterval: null,\n isInShare: false,\n isInSharePlayed: false,\n title : '嘘,一般人我不告诉他~',\n content : '快来这里听让你脸红心跳的匿名声音故事……BiBi声音墙',\n url : 'http://andes.cootekservice.com/andes/sound_wall.html?current=0&num=20',\n urlSMS : 'http://url.cn/go7cHJ',\n urlCOPY : 'http://url.cn/2F2NiU4',\n imageUrl : 'http://andes.cootekservice.com/andesres/image/andes/andes_icon.png',\n usagePath : 'PAGE_SOUND_WALL/',\n usagePathItem : 'PAGE_SOUND_WALL/ITEM/',\n init: function() {\n var self = this;\n if (!Andes.handler) {\n SW.isInShare = true;\n $('.download-div').removeClass('display-none');\n $('.andes-btn').fastClick(SW.downloadClick);\n $('.banner').css('margin-top', '7rem');\n $('.wechat-download-intruct').fastClick(function() {\n $(this).addClass('display-none');\n });\n }\n self.initData();\n $('.sound-item').fastClick(SW.itemClick);\n $('.play-btn').fastClick(SW.playClick);\n $('.sound-item-like').fastClick(SW.likeClick);\n $('.sound-like-div').fastClick(SW.likeClick);\n $('.play-like').fastClick(SW.likeClick);\n AudioPlayer.init(SW.playAudio, SW.audioCallback);\n\n $('.show-cancel').fastClick(SW.uploadClick);\n $('.show-thumb').fastClick(SW.uploadThumbClick);\n \n $('.ios-div').fastClick(function() {\n $('.ios-div').addClass('display-none');\n });\n $('.ios-show-thumb').fastClick(function() {\n $('.ios-div').addClass('display-none');\n });\n\n Andes.enableHeadbarShare('SW.headShare');\n Andes.enableHeadbarUpload('SW.headUpload');\n if (WechatShare.isWeixin()) {\n WechatShare.config(SHARE.appId, SHARE.timestamp, SHARE.noncestr, SHARE.signature, SW.title, SW.content, location.href.split('#')[0], SW.imageUrl);\n }\n },\n downloadClick: function() {\n window.location.href = 'http://andes.cootekservice.com/andes/download.html?from=sound_wall';\n },\n uploadClick: function() {\n $('.upload-div').addClass('display-none');\n },\n uploadThumbClick: function() {\n Andes.usageRecord(SW.usagePath, 'BUTTON_UPLOAD_THUMB', 'CLICK');\n $('.upload-div').addClass('display-none');\n },\n headShare: function() {\n Andes.showShare(SW.title, SW.content, SW.url, SW.urlSMS, SW.urlCOPY, SW.imageUrl, 'WEBPAGE_SOUNDWALL', '');\n },\n headUpload: function() {\n Andes.usageRecord(SW.usagePath, 'BUTTON_UPLOAD', 'CLICK');\n $('.show-top-text').text('上传功能正在玩命开发中,很快您的BiBi故事就可以与大家分享了,敬请期待。');\n $('.upload-div').removeClass('display-none');\n },\n initData: function() {\n if(null == INFO || null == INFO.sounds || 0 == INFO.sounds.length) {\n return;\n }\n var topItemIndex = 0;\n var listItemIndex = 0;\n SW.soundTotalIndex = INFO.sounds.length;\n for(var i=0; i<INFO.sounds.length; i++) {\n var sound = INFO.sounds[i];\n console.log(sound.layout);\n if('list' == sound.layout) {\n var tplId = 'sound_item_' + listItemIndex;\n var soundTemplate = SW.soundItemTpl.clone();\n soundTemplate.attr('data-id', sound.id);\n soundTemplate.attr('data-title', sound.title); \n soundTemplate.attr('data-sub-title', sound.sub_title); \n soundTemplate.attr('data-layout', sound.layout); \n soundTemplate.attr('data-audio', sound.audio_url);\n soundTemplate.attr('data-icon', sound.icon_url);\n soundTemplate.attr('data-like', sound.like_total);\n soundTemplate.attr('data-user-like', sound.user_like);\n soundTemplate.attr('id', tplId);\n \n soundTemplate.find('.sound-name').text(sound.title);\n soundTemplate.find('.sound-name-sub').text(sound.sub_title);\n if ('' == sound.icon_url) {\n soundTemplate.find('.sound-cover').addClass('cover-default');\n } else {\n soundTemplate.find('.sound-cover').addClass('cover-custom');\n soundTemplate.find('.cover-custom').css('background-image', 'url(' + sound.icon_url +')');\n soundTemplate.find('.cover-custom').css('background-repeat', 'no-repeat');\n soundTemplate.find('.cover-custom').css('background-size', 'cover');\n }\n soundTemplate.find('.sound-like').text(sound.like_total);\n soundTemplate.find('.sound-like-div').attr('data-item-id', tplId);\n soundTemplate.find('.sound-item-like').attr('data-item-id', tplId);\n var soundLikeIcon = soundTemplate.find('.sound-item-like');\n if(sound.user_like) {\n //todo sound like color change\n soundLikeIcon.addClass('user-like');\n }\n soundTemplate.removeClass('display-none');\n SW.soundContainer.append(soundTemplate);\n listItemIndex += 1;\n } else if ('square' == sound.layout) {\n console.log('in square');\n var soundListItem = null;\n var soundItem = null;\n if(0 == topItemIndex % 2 ) {\n soundListItem = SW.soundListItemTpl.clone();\n soundListItem.attr('id', 'sound_top_item_tp_' + topItemIndex);\n SW.soundTopContainer.append(soundListItem);\n soundItem = soundListItem.find('.sound-item-left');\n } else {\n soundListItem = $('#sound_top_item_tp_' + (topItemIndex -1));\n soundItem = soundListItem.find('.sound-item-right');\n }\n \n var tplId = 'sound_top_item_' + topItemIndex;\n soundItem.attr('data-id', sound.id);\n soundItem.attr('data-title', sound.title);\n soundItem.attr('data-sub-title', sound.sub_title);\n soundItem.attr('data-layout', sound.layout);\n soundItem.attr('data-audio', sound.audio_url);\n soundItem.attr('data-icon', sound.icon_url);\n soundItem.attr('data-like', sound.like_total);\n soundItem.attr('data-user-like', sound.user_like);\n soundItem.attr('data-is-new', sound.is_new);\n soundItem.attr('data-is-hot', sound.is_hot);\n soundItem.attr('id', tplId);\n\n soundItem.find('.sound-name').text(sound.title);\n soundItem.find('.sound-like').text(sound.like_total);\n soundItem.find('.sound-item-like').attr('data-item-id', tplId);\n if(sound.user_like) {\n soundItem.find('.sound-item-like').addClass('user-like');\n }\n if ('' != sound.icon_url) {\n soundItem.find('.sound-item-square').css('background-image', 'url(' + sound.icon_url + ')');\n soundItem.find('.sound-item-square').css('background-repeat', 'no-repeat');\n soundItem.find('.sound-item-square').css('background-size', 'cover');\n }\n if(sound.is_hot) {\n soundItem.find('.hot').removeClass('display-none');\n } else if(sound.is_new) {\n soundItem.find('.new').removeClass('display-none');\n }\n\n soundItem.removeClass('display-none');\n soundListItem.removeClass('display-none');\n topItemIndex += 1;\n }\n }\n\n },\n playClick: function() {\n console.log(AudioPlayer.audioStatus);\n if('playing' == AudioPlayer.audioStatus) {\n AudioPlayer.pause();\n } else {\n AudioPlayer.play();\n }\n },\n itemClick: function() {\n var item = $(this);\n if (SW.isInShare) {\n if (SW.isInSharePlayed) {\n $('.show-top-text').text('想要听更多的声音故事,请下载BiBi吧!');\n $('.show-cancel').fastClick(function() {\n $('.upload-div').addClass('display-none');\n $('.show-thumb').text('点赞'); \n });\n $('.show-thumb').text('去下载'); \n $('.show-thumb').fastClick(function(){\n $('.upload-div').addClass('display-none');\n SW.downloadClick();\n });\n $('.upload-div').removeClass('display-none');\n return;\n } else {\n SW.isInSharePlayed = true;\n }\n }\n var itemId = item.attr('id');\n var audio_url = item.attr('data-audio');\n var icon_url = item.attr('data-icon');\n var user_like = item.attr('data-user-like');\n var title = item.attr('data-title');\n console.log(itemId);\n console.log(audio_url);\n console.log(icon_url);\n console.log(user_like);\n console.log(title);\n\n //usage\n usageValue = {\n 'ACTION': 'ITEM_CLICK',\n 'TITLE': title\n }\n Andes.usageRecord(SW.usagePathItem, itemId, usageValue);\n \n var play_audio_url = $('#play_audio').attr('src');\n console.log(play_audio_url);\n\n if (play_audio_url == audio_url) {\n if('playing' == AudioPlayer.audioStatus) {\n AudioPlayer.pause();\n } else {\n AudioPlayer.play();\n }\n return;\n }\n\n var dataItemId = $('.sound-play-box').attr('data-item-id');\n console.log('dataItemId:' + dataItemId);\n\n if ('' != dataItemId) {\n var lastItem = $('#' + dataItemId);\n lastItem.find('.sound-play-btn-icon').text('c');\n }\n\n $('.bottom-div').removeClass('display-none');\n $('.float-bottom').removeClass('display-none');\n\n $('.play-sound-name').text(title);\n if ('' == icon_url) {\n $('.play-sound-cover').addClass('cover-default');\n } else {\n $('.play-sound-cover').css('background-image', 'url(' + icon_url +')');\n $('.play-sound-cover').addClass('cover-custom-small');\n }\n console.log('user_like:' + user_like);\n if (user_like && 'true' == user_like) {\n $('.play-like').addClass('user-like');\n } else {\n $('.play-like').removeClass('user-like');\n }\n\n $('.sound-play-box').attr('data-item-id', itemId);\n $('.play-like').attr('data-item-id', itemId);\n \n //play\n $('#play_audio').attr('src', audio_url);\n $('.play-btn').attr('data-item-id', itemId);\n AudioPlayer.play();\n },\n audioCallback: function(ret) {\n if ('loadstart' == ret) {\n SW.progress.css('width', '0px');\n $('.sound-loading').removeClass('display-none');\n }\n if ('canplay' == ret) {\n $('.sound-loading').addClass('display-none');\n }\n if ('timeupdate' == ret) {\n var duration = AudioPlayer.audio.duration;\n if (duration > 0) {\n var width = (AudioPlayer.audio.currentTime/duration)*100 + '%';\n SW.progress.css('width', width);\n }\n }\n if ('play' == ret) {\n var itemId = $('.play-btn').attr('data-item-id');\n var item = $('#' + itemId);\n item.find('.sound-play-btn-icon').text('d');\n clearInterval(SW.rotateInterval);\n SW.rotateInterval = SW.rotate('play_sound_cover', 60);\n }\n if ('playing' == ret) {\n $('.play-btn').text('d');\n }\n if ('pause' == ret) {\n var itemId = $('.play-btn').attr('data-item-id');\n var item = $('#' + itemId);\n item.find('.sound-play-btn-icon').text('c');\n $('.play-btn').text('c');\n clearInterval(SW.rotateInterval);\n }\n if ('ended' == ret) {\n var itemId = $('.play-btn').attr('data-item-id');\n var title = $('.play-sound-name').text();\n values = {\n 'ACTION': 'ITEM_PLAY_TO_END',\n 'TITLE': title\n }\n Andes.usageRecord(SW.usagePathItem, itemId, values);\n clearInterval(SW.rotateInterval);\n }\n },\n rotate : function(id, interval) {\n var ele = document.getElementById(id);\n var deg = 0;\n\n function _rotate() {\n if (deg < 360) {\n deg += 15;\n } else {\n deg = 15;\n }\n ele.style.webkitTransform = 'rotate(' + deg + 'deg)';\n }\n var _interval = (interval) ? interval : 20;\n return setInterval(_rotate, _interval);\n },\n likeClick: function() {\n if (SW.isInShare) {\n console.log('in share, cannot like');\n return;\n }\n var item = $(this);\n var soundItemId = item.attr('data-item-id');\n var soundItem = $('#' + soundItemId);\n if (soundItem.find('.sound-item-like').hasClass('user-like')) {\n Andes.showToast('已经赞过了哦~去听听其他的BiBi故事吧~');\n return;\n }\n\n var title = soundItem.attr('data-title');\n var values = {\n 'ACTION': 'ITEM_LIKE_CLICK',\n 'TITLE': title\n }\n Andes.usageRecord(SW.usagePathItem, soundItemId, values);\n\n var soundId = soundItem.attr('data-id');\n var likeTotal = soundItem.attr('data-like');\n var url = Andes.netService + '/andes/sound_like?_token=' + Andes.getToken() + '&id=' + soundId + '&like=1';\n\n $.ajax({\n url: url,\n type: 'get',\n success: function(ret) {\n console.log(JSON.stringify(ret));\n if(ret && 2000 == ret.result_code) {\n soundItem.find('.sound-like').text(parseInt(likeTotal) + 1);\n soundItem.find('.sound-item-like').addClass('user-like');\n soundItem.attr('data-user-like', 'true');\n if($('.play-like').attr('data-item-id') == soundItemId) {\n $('.play-like').addClass('user-like');\n }\n }\n },\n error: function(ret) {\n console.log(JSON.stringify(ret));\n }\n });\n }\n};\n\nvar AudioPlayer = {\n audio: null,\n audioStatus: null,\n callback: null,\n init: function(audio, callback) {\n var self = this;\n self.audio = audio;\n self.callback = callback;\n self.audio.addEventListener('loadeddata', self.audioLoaded, false);\n self.audio.addEventListener('pause', self.audioPause, false);\n self.audio.addEventListener('play', self.audioPlay, false);\n self.audio.addEventListener('playing', self.audioPlaying, false);\n self.audio.addEventListener('ended', self.audioEnded, false);\n self.audio.addEventListener('progress', self.audioLoadProgress, false);\n self.audio.addEventListener('loadstart', self.audioLoadStart, false);\n self.audio.addEventListener('timeupdate', self.audioTimeUpdate, false);\n self.audio.addEventListener('canplay', self.audioCanPlay, false);\n },\n audioCanPlay: function() {\n console.log('audio can play');\n AudioPlayer.callback('canplay');\n },\n audioTimeUpdate: function() {\n console.log('audio time update');\n AudioPlayer.callback('timeupdate');\n },\n audioLoadStart: function() {\n console.log('audio load start');\n AudioPlayer.audioStatus = 'loadstart';\n AudioPlayer.callback('loadstart');\n },\n audioLoadProgress: function() {\n console.log('audio load progress');\n AudioPlayer.callback('loadprogress');\n },\n audioLoaded: function() {\n console.log('audio loaded');\n AudioPlayer.audioStatus = 'loaded';\n AudioPlayer.callback('loaded');\n },\n audioPause: function() {\n console.log('audio pause');\n AudioPlayer.audioStatus = 'pause';\n AudioPlayer.callback('pause');\n },\n audioPlay: function() {\n console.log('audio play');\n AudioPlayer.audioStatus = 'play';\n AudioPlayer.callback('play');\n },\n audioPlaying: function() {\n console.log('audio playing');\n AudioPlayer.audioStatus = 'playing';\n AudioPlayer.callback('playing');\n },\n audioEnded: function() {\n console.log('audio ended');\n AudioPlayer.audioStatus = 'ended';\n AudioPlayer.callback('ended');\n },\n play: function() {\n if(null != AudioPlayer.audio) {\n AudioPlayer.audio.play();\n }\n },\n pause: function() {\n if(null != AudioPlayer.audio) {\n AudioPlayer.audio.pause();\n }\n },\n stop: function() {\n if(null != AudioPlayer.audio) {\n AudioPlayer.audio.stop();\n }\n }\n};\n\nfunction init() {\n window.onload = function() {\n FastClick.attach(document.body);\n }\n SW.init();\n}\n"
},
{
"alpha_fraction": 0.6909887194633484,
"alphanum_fraction": 0.6939073204994202,
"avg_line_length": 29.631284713745117,
"blob_id": "732731f0d2b58f26a616df55493234cdd6a11296",
"content_id": "e5bcdfb264c2741570d47c12c49db7712f5429a1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 5610,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 179,
"path": "/scripts/bestvoice/bestvoice_mobile.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
      "text": "function remAdjust(defaultFontSize, defaultScreenWidth) {\n\tvar htmlNode = document.getElementsByTagName('html')[0];\n\tfunction resize() {\n\t\tvar screenWidth = document.body.offsetWidth;\n\t\thtmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n\t}\n\tdocument.addEventListener('DOMContentLoaded', function() {\n\t\tresize();\n\t});\n\twindow.addEventListener('resize', resize);\n}\nremAdjust(10, 300);\n\nvar $listItemArr = document.getElementsByClassName('list-item');\nvar $voiceArr = document.getElementsByClassName('voice');\n\n\nvar $rulesContainer = document.getElementById('rulesContainer');\nvar $detail = document.getElementById('detail');\nvar $detailClose = document.getElementById('detailClose');\nvar $ruleBanner = document.getElementById('ruleBanner');\nvar $voteBanner = document.getElementById('voteBanner');\nvar $downloadTips = document.getElementById('downloadTipsContainer');\nvar $listenMyVoice = document.getElementById('listenMyVoice');\n\nvar showTips = function () {\n\tconsole.log('show tips');\n\t$downloadTips.style.display = 'block';\n\t$detailClose.style.display = 'none';\n};\nvar hideTips = function () {\n\tconsole.log('hide tips');\n\t$downloadTips.style.display = 'none';\n\t$detailClose.style.display = 'block';\n};\n\nvar audio=new Audio();\nvar soundControl = function (url) {\n\tif (!soundPlaying&&url) {\n\t\tsoundPlaying = true;\n\t\t$listenMyVoice.classList.add('playing');\n\t\tif(current_audio!=url)\n\t\t\taudio.src=url;\n\t\taudio.play();\n\t\tcurrent_audio=url;\n\t\taudio.addEventListener('ended',function(){\n\t\t\tsoundPlaying=true;\n\t\t\tsoundControl();\n\t\t})\n\t} else {\n\t\tsoundPlaying=false;\n\t\tif(url&&url!=current_audio){\n\t\t\tsoundControl(url);\n\t\t\treturn;\n\t\t}\n\t\t$listenMyVoice.classList.remove('playing');\n\t\taudio.pause();\n\t}\n};\nvar netService='http://andes.cootekservice.com';\nvar list_item=$('#list-item-dom').find('.list-item');\nvar current_uid=0;\nvar current_dom=null;\nvar current_audio='';\nvar soundPlaying = false;\nvar shareTitle='BiBi寻找最美声音大赛,投票火热进行中!';\nvar shareContent=\"( ~っ~)唔…这么好玩的东西我轻易不分享的…\";\nvar shareUrl=netService + '/andes/best_voice_m.html';\nvar shareImageUrl=netService + '/andesres/image/andes/andes_share_icon.png';\nvar token='ce8f447d-d917-4aab-b0a3-984df87a75f5';\nfunction getList(){\n\t$.ajax({\n\t\ttype:\"GET\",\n\t\tdataType:\"json\",\n\t\turl:\"http://andes.cootekservice.com/andes/bestvoice/list?_v=1&_token=\"+token+\"&_ts=\"+Date.parse(new Date())/1000,\n\t\tsuccess: getListCallback\n\t});\n}\n\nfunction getListCallback(data){\n\tvar arr=data.result.list;\n\tfor (var i in arr) {\n\t\tvar uid=arr[i].best_voice_id-1;\n\t\tvoice_list[uid].totel_like=arr[i].total_votes;\n\t\tlist_item.find('.user-tag>span').text(voice_list[uid].user_tag);\n\t\tlist_item.find('.portrait').attr('src',voice_list[uid].photo[0]);\n\t\tlist_item.find('.like-num').text(voice_list[uid].totel_like);\n\t\tlist_item.data('uid',uid);\n\t\t$('.list-container').append(list_item.clone());\n\t}\n\t$('.list-item').fastClick( function () {\n\t\t// 绑定每一个item的事件\n\t\tcurrent_uid=$(this).data('uid');\n\t\tcurrent_dom=$(this);\n\t\t$('.user-pic-ver').attr('src',voice_list[current_uid].photo[0]);\n\t\t$('#detail').find('.user-tag').text(voice_list[current_uid].user_tag);\n\t\t$('#detail').find('.like-num').text(voice_list[current_uid].totel_like);\n\t\t$('.user-pic-ver').get(0).onload=function(){\n\t\t\t$detail.classList.add('normal');\n\t\t\t$detail.style.display='block';\n\t\t\t$ruleBanner.style.display = 'none';\n\t\t\t$voteBanner.style.display = 'block';\n\t\t\tdocument.body.style.overflow = 'hidden';\n\t\t}\n\t});\n\t$('.tips-close').fastClick(function(){\n\t\t$normalTipsContainer.style.display = 'none';\n\t\t$inviteTipsContainer.style.display = 'none';\n\t\t$shareTipsContainer.style.display = 'none';\n\t\t$detailClose.style.display = 'block';\n\t});\n\t$('.voice').fastClick(function(){\n\t\tvar uid=$(this).parent().data('uid');\n\t\tsoundControl(voice_list[uid].audio);\n\t})\n}\nfunction preventDefault(e){\n\te.preventDefault();\n}\nfunction init(){\n\twindow.onload = function() {\n\t\tFastClick.attach(document.body);\n\t\t$rulesContainer.addEventListener('touchmove',preventDefault);\n\t\t$detail.addEventListener('touchmove',preventDefault);\n\t};\n\tgetList();\n\t//if (WechatShare.isWeixin()) {\n\t//\tWechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, shareTitle, shareContent, shareUrl, shareImageUrl);\n\t//}\n\t$('#rule').fastClick(function () {\n\t\t//document.body.style.overflow = 'hidden';\n\t\t$rulesContainer.style.display = 'block';\n\t});\n\t$('#ruleClose').fastClick(function () {\n\t\t//document.body.style.overflow = 'auto';\n\t\t$rulesContainer.style.display = 'none';\n\t});\n\n\t$('#detailClose').fastClick(function (event) {\n\t\tevent.stopPropagation();\n\t\tdocument.body.classList.remove('outStopScroll');\n\t\t//$outerContainer.style.overflow = 'auto';\n\t\tsoundControl();\n\t\tdocument.body.style.overflow = 'auto';\n\t\t$detail.style.display='none';\n\t\t$ruleBanner.style.display = 'block';\n\t\t$voteBanner.style.display = 'none';\n\t});\n\t$('#botBannerShare').fastClick(function () {\n\t\t// TODO 底部固定的share按钮的点击事件, 需要处理分享, 数据点\n\t});\n\n\t$('#voteMe').fastClick(function () {\n\t\tshowTips();\n\t});\n\t$('#canvassMe').fastClick(function () {\n\t\tshowTips();\n\t});\n\t$('#tipsClose').fastClick(function (event) {\n\t\tevent.stopPropagation();\n\t\tevent.preventDefault();\n\t\tconsole.log('click');\n\t\thideTips();\n\t});\n\n\t$('#downloadBiBi').fastClick(function () {\n\t\tlocation.href='http://andes.cootekservice.com/andes/new_download.html?from=bestvoice';\n\t});\n\n\t$('#detail').fastClick(function () {\n\t\tshowTips();\n\t});\n\n\t$('#listenMyVoice').fastClick(function (event) {\n\t\tevent.stopPropagation();\n\t\tsoundControl(voice_list[current_uid].audio);\n\t});\n}\ninit();"
},
{
"alpha_fraction": 0.5701871514320374,
"alphanum_fraction": 0.5964794754981995,
"avg_line_length": 39.44144058227539,
"blob_id": "789e0c4a0aa23678c35a2337a34c3a0685e97a50",
"content_id": "b459cfb6ff15cf22336fa6ecd8e34680f0593350",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 4596,
"license_type": "no_license",
"max_line_length": 194,
"num_lines": 111,
"path": "/scripts/share/likerCollectBefore.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/5/27.\n */\nfunction remAdjust(defaultFontSize, defaultScreenHeight, defaultScreenWidth) {\n var htmlNode = document.getElementsByTagName('html')[0];\n\n function resize() {\n var screenWidth = document.body.offsetWidth;\n var screenHeight = window.screen.height;console.log(screenHeight+\"+\"+screenWidth);\n htmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n }\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nremAdjust(20,568,320);\n\nfunction base64_encode(str){\n var c1, c2, c3;\n var base64EncodeChars = \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\";\n var i = 0, len= str.length, string = '';\n\n while (i < len){\n c1 = str.charCodeAt(i++) & 0xff;\n if (i == len){\n string += base64EncodeChars.charAt(c1 >> 2);\n string += base64EncodeChars.charAt((c1 & 0x3) << 4);\n string += \"==\";\n break;\n }\n c2 = str.charCodeAt(i++);\n if (i == len){\n string += base64EncodeChars.charAt(c1 >> 2);\n string += base64EncodeChars.charAt(((c1 & 0x3) << 4) | ((c2 & 0xF0) >> 4));\n string += base64EncodeChars.charAt((c2 & 0xF) << 2);\n string += \"=\";\n break;\n }\n c3 = str.charCodeAt(i++);\n string += base64EncodeChars.charAt(c1 >> 2);\n string += base64EncodeChars.charAt(((c1 & 0x3) << 4) | ((c2 & 0xF0) >> 4));\n string += base64EncodeChars.charAt(((c2 & 0xF) << 2) | ((c3 & 0xC0) >> 6));\n string += base64EncodeChars.charAt(c3 & 0x3F)\n }\n return string\n}\n\nfunction GetQueryString(name)\n{\n var reg = new RegExp(\"(^|&)\"+ name +\"=([^&]*)(&|$)\");\n var r = window.location.search.substr(1).match(reg);\n if(r!=null)return unescape(r[2]); return null;\n}\nvar zanBefore = {\n shareTitle: '我想玩这个,快来帮我一下',\n shareContent: 'BiBi的语音表情超好玩,快帮我解锁,来和我一起玩儿!',\n useShortUrl: 1,\n shareUrl: Andes.netService + '/andes/liker_collect.html',\n shareAndroidUrl: Andes.netService + 
'/andes/liker_collect.html?os=android',\n shareIOSUrl: Andes.netService + '/andes/liker_collect_.html?os=ios',\n shareSMSAndroidUrl: '',\n shareSMSIOSUrl: '',\n shareCopyAndroidUrl: '',\n shareCopyIOSUrl: '',\n contentImageUrl: Andes.netService + '/andesres/image/andes/andes_icon.png',\n shareImageUrl: Andes.netService + '/andesres/image/andes/andes_share_icon.png',\n bgImageUrl: Andes.netService + '/andesres/image/andes/andes_icon.png',\n init: function() {\n var self = this;\n var uid=GetQueryString('uid');\n $.ajax({\n type:\"POST\",\n data:JSON.stringify({'uid':uid}),\n async:false,\n dataType:\"json\",\n url:'http://183.136.223.43:30007'+\"/andes/collectlike/liker_info\",\n success:function(data){\n var liker_list=data.result;\n $('.num-tip>span').text(liker_list.length);\n $('.white-bg').hide();\n }\n });\n $('.back').fastClick(function(){\n Andes.finish();\n });\n $('.zan-btn').fastClick(function() {\n _hmt.push(['_trackEvent', 'APP内集赞分享页面', '点赞按钮', '召唤好友 解锁表情']);\n if (!isIOS) {\n Andes.showShare(self.shareTitle, self.shareContent, zanBefore.makeWechatUrl('android'), self.shareSMSAndroidUrl, self.shareCopyAndroidUrl, self.shareImageUrl, 'ZAN_SHARE', null);\n } else {\n Andes.showShareOnlyWechat(self.shareTitle, self.shareContent, self.useShortUrl, self.contentImageUrl, self.makeWechatUrl('iphone'), self.shareImageUrl, 'ZAN_SHARE', null);\n }\n });\n },\n makeWechatUrl:function(device){\n var uri=zanBefore.shareUrl+\"?os=\"+device+\"&uid=\"+GetQueryString('uid')+\"&appid=\"+INFO.appId;\n var target=\"1#1#\"+uri;\n target=base64_encode(target);\n redirect_url = 'http://touchlife.cootekservice.com/callback/wechat?target='+target;\n return 'https://open.weixin.qq.com/connect/oauth2/authorize?appid='+INFO.appId+'&redirect_uri='+redirect_url+'&response_type=code&scope=snsapi_userinfo#wechat_redirect';\n }\n\n};\n(function(){\n window.onload = function() {\n FastClick.attach(document.body);\n 
//$(\"body\").get(0).addEventListener('touchmove',function(event){event.preventDefault()});\n };\n zanBefore.init();\n})();"
},
{
"alpha_fraction": 0.5134483575820923,
"alphanum_fraction": 0.5246130228042603,
"avg_line_length": 46.7757568359375,
"blob_id": "dcff6d11f1c5ab6970591f0c0d9e1ee02024effb",
"content_id": "530dcc57b9284e446d61021b7683d558f5f410fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 8242,
"license_type": "no_license",
"max_line_length": 383,
"num_lines": 165,
"path": "/scripts/share/share.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "'use strict';\n\n/**\n * Created by Tony on 2016/5/16.\n */\nfunction remAdjust(defaultFontSize, defaultScreenHeight, defaultScreenWidth) {\n var htmlNode = document.getElementsByTagName('html')[0];\n\n function resize() {\n var screenWidth = document.body.offsetWidth;\n var screenHeight = document.body.offsetHeight;\n if (screenWidth / screenHeight < 0.6) htmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';else htmlNode.style.fontSize = screenHeight / defaultScreenHeight * defaultFontSize * 1.1 + 'px';\n }\n\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nremAdjust(20, 568, 320);\nvar netService = 'http://andes.cootekservice.com';\nvar share = new Vue({\n delimiters: ['<%', '%>'],\n el: '#content',\n mounted: function mounted() {\n if (WechatShare.isWeixin() && location.href.indexOf('appid') < 0) {\n location.href = this.makeWechatUrl();\n return;\n }\n if (typeof user_info != 'undefined') this.username = user_info.nickname;\n $(\"body\").get(0).addEventListener('touchmove', function (event) {\n event.preventDefault();\n });\n this.getInviteInfo(this.redeem);\n $('#content').show();\n },\n\n data: {\n username: '',\n invite_info: {\n type: '',\n user: {\n name: '',\n head_image_url: ''\n }\n },\n group_info: {\n group_name: '',\n members: []\n },\n redeem: myLib.GetQueryString('redeem') || myLib.GetQueryString('code'),\n is_ios: myLib.getHostDevice() == 'iphone',\n url: netService + '/andes/oauth_share.html',\n state: {\n isInit: 0,\n showBrowserTip: 0,\n showInvitePanel: 0,\n isShowDownload: 0,\n isLongTouchCode: 0\n },\n touch_start_time: null,\n test_num_f: 2, //好友分享文案数量\n test_num_g: 1, //群分享文案数量\n share_title_f: ['电话要被颠覆了,你还不知道?', '我要开始BiBi了,你不看一下吗?'],\n share_desc_f: ['邀请你成为我的BiBi好友,体验纯净高效的新沟通方式!', '邀请你成为我的BiBi好友,体验纯净高效的新沟通方式!'],\n share_title_g: [['邀请你加入BiBi群【', '】,速来围观!']],\n share_desc_g: ['我正在这儿和大家BiBi,同时支持20人的群对讲通话,想嗨你就来!'],\n 
share_title_l: [['邀请你加入BiBi直播【', '】,速来围观!']],\n share_image: netService + '/andesres/image/andes/andes_share_icon.png'\n },\n methods: {\n getInviteInfo: function getInviteInfo(redeem) {\n var _this = this;\n\n $.ajax({\n type: \"POST\",\n data: JSON.stringify({ invite_code: this.redeem }),\n dataType: \"json\",\n url: netService + \"/pushtalk/query_code\",\n success: function success(data) {\n if (data.result_code != 2000) {\n share.invite_type = 'expire';\n }\n share.invite_info = data.result;\n var img_url = share.invite_info.user.head_image_url;\n if (img_url.indexOf('local://') > -1) img_url = '/andesres/image/andes/' + img_url.substr(8) + '.png';else if (img_url.indexOf('netfile') > -1) {\n img_url = img_url.substr(8);\n }\n share.invite_info.user.head_image_url = img_url;\n var current_num_f = parseInt(Math.random() * _this.test_num_f); //当前好友分享文案编号\n var current_num_g = parseInt(Math.random() * _this.test_num_g); //当前群分享文案编号\n if (share.invite_info.type == 'user') {\n if (WechatShare.isWeixin()) WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, share.share_title_f[current_num_f], share.share_desc_f[current_num_f], share.makeUrl(), share.share_image, share.makeWechatUrl(current_num_f, 'f'));\n } else if (share.invite_info.type == 'group') {\n if (WechatShare.isWeixin()) WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, share.share_title_g[current_num_g], share.share_desc_g[current_num_g], share.makeUrl(), share.share_image, share.makeWechatUrl(current_num_g, 'g'));\n $.ajax({\n type: \"POST\",\n data: JSON.stringify({ group_id: share.invite_info.gid }),\n dataType: \"json\",\n url: netService + \"/pushtalk/query_group\",\n success: function success(data) {\n share.group_info = data.result;\n share.group_info.head_image = new Array(3);\n for (var i = 0; i < share.group_info.members.length; i++) {\n if (i >= 3) break;\n var _img_url = share.group_info.members[i].head_image_url;\n if 
(_img_url.indexOf('netfile') > -1) _img_url = _img_url.substr(8);\n share.group_info.members[i].head_image_url = _img_url;\n }\n }\n });\n }\n }\n });\n },\n joinClick: function joinClick() {\n if (myLib.getHostApp() != 'other') {\n //若寄主应用为微信、微博、qq空间浏览器\n this.state.showBrowserTip = 1;\n return;\n }\n var scheme = \"bibi://openbibi?code=\" + this.redeem;\n if (myLib.isIOS10()) location.href = scheme;else if (!this.is_ios) {\n var ifr = document.createElement('iframe');\n ifr.src = scheme;\n ifr.style.display = 'none';\n document.body.appendChild(ifr);\n }\n this.state.showInvitePanel = 1;\n },\n closePanelClick: function closePanelClick() {\n this.state.showInvitePanel = 0;\n this.state.isShowDownload = 1;\n },\n downloadClick: function downloadClick() {\n if (this.is_ios) location.href = 'https://itunes.apple.com/app/apple-store/id1090811540?pt=618432&ct=sharepage&mt=8';else location.href = 'http://cootek-walkie-talkie.cdn.cootekservice.com/android/andes/version/andes-0A00A0.apk';\n },\n codeTouchStart: function codeTouchStart() {\n this.touch_start_time = Date.now();\n },\n codeTouchEnd: function codeTouchEnd() {\n var _this2 = this;\n\n setTimeout(function () {\n return _this2.state.isLongTouchCode = !_this2.state.isLongTouchCode ? 
Date.now() - _this2.touch_start_time > 400 : 1;\n }, 1000);\n },\n makeUrl: function makeUrl() {\n return this.url + \"?redeem=\" + this.redeem;\n },\n\n makeWechatUrl: function makeWechatUrl(num, type) {\n var redeem = this.redeem;\n var uri;\n if (type == 'f') uri = this.url + \"?redeem=\" + redeem + \"&test_f=\" + num + \"&appid=\" + INFO.appId;else if (type == 'g') uri = this.url + \"?redeem=\" + redeem + \"&test_g=\" + num + \"&appid=\" + INFO.appId;else if (type == 'l') uri = this.url + \"?redeem=\" + redeem + \"&test_l=\" + num + \"&appid=\" + INFO.appId;else uri = this.url + \"?redeem=\" + redeem + \"&appid=\" + INFO.appId;\n var target = \"1#1#\" + uri;\n target = myLib.base64_encode(target);\n var redirect_url = 'http://touchlife.cootekservice.com/callback/wechat?target=' + target;\n return 'https://open.weixin.qq.com/connect/oauth2/authorize?appid=' + INFO.appId + '&redirect_uri=' + redirect_url + '&response_type=code&scope=snsapi_userinfo#wechat_redirect';\n }\n }\n});\nwindow.onload = function () {\n FastClick.attach(document.body);\n};"
},
{
"alpha_fraction": 0.6760674118995667,
"alphanum_fraction": 0.6875927448272705,
"avg_line_length": 33.92073059082031,
"blob_id": "7b069c7e85132dfd92f390c76b482a75b332406f",
"content_id": "f66fd0c4d41218466feafa4221e9adc600631094",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 12364,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 328,
"path": "/scripts/bestvoice/bestvoice.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "function remAdjust(defaultFontSize, defaultScreenWidth) {\n\tvar htmlNode = document.getElementsByTagName('html')[0];\n\n\tfunction resize() {\n\t\tvar screenWidth = document.body.offsetWidth;\n\t\thtmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n\t}\n\n\tdocument.addEventListener('DOMContentLoaded', function () {\n\t\tresize();\n\t});\n\twindow.addEventListener('resize', resize);\n}\nremAdjust(10, 300);\n\nvar $rulesContainer = document.getElementById('rulesContainer');\nvar $shareTipsContainer = document.getElementById('shareTipsContainer');\nvar $inviteTipsContainer = document.getElementById('inviteTipsContainer');\nvar $inviteTipsText = document.getElementById('inviteTipsText');\nvar $normalTipsContainer = document.getElementById('normalTipsContainer');\nvar $normalTipsText = document.getElementById('normalTipsText');\nvar $detailClose = document.getElementById('detailClose');\nvar $ruleBanner = document.getElementById('ruleBanner');\nvar $voteBanner = document.getElementById('voteBanner');\nvar $listenMyVoice = document.getElementById('listenMyVoice');\n\nvar $detail = $('#detail');\nvar $voteAgain = $('.vote-again');\nvar $voteMe = $('.vote-me');\n\nvar showTips = function (dom, textDom, tipsText) {\n\tdom.addEventListener('touchmove',function(e){\n\t\te.preventDefault();\n\t\te.stopPropagation();\n\t});\n\tdom.style.display = 'block';\n\t$detailClose.style.display = 'none';\n\tif(textDom && tipsText){\n\t\ttextDom.innerHTML = tipsText;\n\t}\n};\nvar netService=\"http://docker-ws2.cootekservice.com\";\nvar list_item=$('#list-item-dom').find('.list-item');\nvar current_uid=0;\nvar current_dom=null;\nvar current_audio='';\nvar soundPlaying = false;\nvar audio=new Audio();\nvar shareTitle='BiBi寻找最美声音大赛,投票火热进行中!';\nvar shareContent=\"( ~っ~)唔…这么好玩的东西我轻易不分享的…\";\nvar useShortUrl=1;\nvar shareUrl=Andes.netService + '/andes/best_voice_m.html?os=android';\nvar shareIOSUrl=Andes.netService + 
'/andes/best_voice_m.html?os=ios';\nvar shareImageUrl=Andes.netService + '/andesres/image/andes/bestvoice/bestvoice_share.png';\nvar friendNum=0;\nvar leftVotes=0;\nvar token='350a6bff-c629-4ba9-8ef8-d9515f88a157';\nfunction getList(){\n\t$.ajax({\n\t\ttype:\"GET\",\n\t\tdataType:\"json\",\n\t\turl:Andes.netService+\"/andes/bestvoice/list?_v=1&_token=\"+Andes.getToken()+\"&_ts=\"+Date.parse(new Date())/1000,\n\t\tsuccess: getListCallback,\n\t\terror:getList\n\t});\n}\n\nfunction getListCallback(data){\n\tvote_again_num=data.result.other_vote_times;\n\tvar is_share_list=data.result.shared_voice_ids;\n\tif(is_share_list[0]==0)\n\t\tis_share_list=[];\n\tfor(var i in is_share_list)\n\t\tvoice_list[parseInt(is_share_list[i])-1].isShare=true;\n\tif(friendNum<3)\n\t\tleftVotes=0;\n\telse if(friendNum<10)\n\t\tleftVotes=vote_again_num>=1?0:1;\n\telse\n\t\tleftVotes=vote_again_num>=20?0:20-vote_again_num;\n\t$('.vote-again span').text(\"再投一票(*\"+leftVotes+\")\");\n\tvar arr=data.result.list;\n\tfor (var i in arr) {\n\t\tvar uid=arr[i].best_voice_id-1;\n\t\tvoice_list[uid].totel_like=arr[i].total_votes;\n\t\tvoice_list[uid].isVoted=arr[i].today_vote_times>0;\n\t\tlist_item.find('.user-tag>span').text(voice_list[uid].user_tag);\n\t\tlist_item.find('.portrait').attr('src',voice_list[uid].photo[0]);\n\t\tlist_item.find('.like-num').text(voice_list[uid].totel_like);\n\t\tlist_item.data('uid',uid);\n\t\tif(voice_list[uid].isShare)\n\t\t\tlist_item.find('.portrait').removeClass('blur-filter');\n\t\telse\n\t\t\tlist_item.find('.portrait').addClass('blur-filter');\n\t\t$('.list-container').append(list_item.clone());\n\t}\n\t$('.list-item').fastClick( function () {\n\t\t// 绑定每一个item的事件\n\t\tcurrent_uid=$(this).data('uid');\n\t\tcurrent_dom=$(this);\n\t\tif(voice_list[current_uid].isShare) {\n\t\t\t$('.detail-tips').hide();\n\t\t\t$('#slider img').fastClick(toggleImgFully);\n\t\t\t$('#slider .swipe-wrap').removeClass('blur-filter');\n\t\t}\n\t\telse{\n\t\t\t$('#slider 
.swipe-wrap').addClass('blur-filter');\n\t\t\t$('#slider img').fastClick(function(){showTips($shareTipsContainer);});\n\t\t\t$('.detail-tips').show();\n\t\t\t$('#tipsShare').fastClick(tipsShareClick);\n\t\t\t$('.swipe').find('.portrait').addClass('blur-filter');\n\t\t\tunshowImgFully();\n\t\t}\n\t\tif(!voice_list[current_uid].isVoted) {\n\t\t\t$voteAgain.addClass('btn-disabled');\n\t\t\t$voteAgain.fastClick(null);\n\t\t\t$voteMe.removeClass('btn-disabled');\n\t\t\t$voteMe.fastClick(voteMeClick);\n\t\t}\n\t\telse{\n\t\t\t$voteAgain.removeClass('btn-disabled');\n\t\t\t$voteAgain.fastClick(voteAgainClick);\n\t\t\t$voteMe.fastClick(null);\n\t\t\t$voteMe.addClass('btn-disabled');\n\t\t}\n\t\t$('.swipe-wrap').find('img').eq(0).attr('src',voice_list[current_uid].photo[2]);\n\t\t$('.swipe-wrap').find('img').eq(1).attr('src',voice_list[current_uid].photo[0]);\n\t\t$('.swipe-wrap').find('img').eq(2).attr('src',voice_list[current_uid].photo[1]);\n\t\t$('.swipe-wrap').find('img').eq(3).attr('src',voice_list[current_uid].photo[2]);\n\t\t$('.swipe-wrap').find('img').eq(4).attr('src',voice_list[current_uid].photo[0]);\n\t\t$detail.find('.user-tag').text(voice_list[current_uid].user_tag);\n\t\t$detail.find('.like-num').text(voice_list[current_uid].totel_like);\n\t\tsetTimeout(function(){\n\t\t\t$ruleBanner.style.display = 'none';\n\t\t\t$voteBanner.style.display = 'block';\n\t\t\tdocument.body.style.overflow = 'hidden';\n\t\t\t$('.swipe-prev,.swipe-next,#detailClose').show();\n\t\t\t$detail.get(0).classList.add('normal');\n\t\t},200);\n\t});\n\t$('.tips-close').fastClick(function(){\n\t\t$normalTipsContainer.style.display = 'none';\n\t\t$inviteTipsContainer.style.display = 'none';\n\t\t$shareTipsContainer.style.display = 'none';\n\t\t$detailClose.style.display = 'block';\n\t});\n\t$('.voice').fastClick(function(){\n\t\tvar uid=$(this).parent().data('uid');\n\t\tsoundControl(voice_list[uid].audio);\n\t});\n}\nvar soundControl = function (url) {\n\tif (!soundPlaying&&url) 
{\n\t\tsoundPlaying = true;\n\t\t$listenMyVoice.classList.add('playing');\n\t\tif(current_audio!=url)\n\t\t\taudio.src=url;\n\t\taudio.play();\n\t\tcurrent_audio=url;\n\t\taudio.addEventListener('ended',function(){\n\t\t\tsoundPlaying=true;\n\t\t\tsoundControl();\n\t\t})\n\t} else {\n\t\tsoundPlaying=false;\n\t\tif(url&&url!=current_audio){\n\t\t\tsoundControl(url);\n\t\t\treturn;\n\t\t}\n\t\t$listenMyVoice.classList.remove('playing');\n\t\taudio.pause();\n\t}\n};\nfunction saveShare(){\n\t$.ajax({\n\t\ttype:\"GET\",\n\t\tdataType:\"json\",\n\t\turl:Andes.netService+\"/andes/bestvoice/view_share?voice_id=\"+(current_uid+1)+\"&_v=1&_token=\"+Andes.getToken()+\"&_ts=\"+Date.parse(new Date())/1000,\n\t\tsuccess:function(){\n\t\t\tvoice_list[current_uid].isShare=true;\n\t\t\t$('#slider .swipe-wrap').removeClass('blur-filter');\n\t\t\t$('#slider img').fastClick(toggleImgFully);\n\t\t\tcurrent_dom.find('.portrait').removeClass('blur-filter');\n\t\t\t$shareTipsContainer.style.display = 'none';\n\t\t\t$detailClose.style.display = 'block';\n\t\t\t$('.detail-tips').hide();\n\t\t}\n\t});\n}\nfunction voteMeClick() {\n\tshowTips($normalTipsContainer, $normalTipsText, '活动已经结束啦~不能再投票了哦~');\n\treturn;\n\t$voteMe.fastClick(function(){});\n\t$voteMe.addClass('btn-disabled');\n\t$.ajax({\n\t\ttype:\"GET\",\n\t\tdataType:\"json\",\n\t\turl:Andes.netService+\"/andes/bestvoice/vote?voice_id=\"+(current_uid+1)+\"&vote_type=daily&_v=1&_token=\"+Andes.getToken()+\"&_ts=\"+Date.parse(new Date())/1000,\n\t\tsuccess:function(){\n\t\t\tvoice_list[current_uid].isVoted=true;\n\t\t\tvoice_list[current_uid].totel_like++;\n\t\t\t$('#detail .like-num').text(voice_list[current_uid].totel_like);\n\t\t\tcurrent_dom.find('.like-num').text(voice_list[current_uid].totel_like);\n\t\t\t$voteAgain.removeClass('btn-disabled').fastClick(voteAgainClick);\n\t\t\tif (friendNum < 3) {\n\t\t\t\tshowTips($inviteTipsContainer, $inviteTipsText, '(ಥ_ಥ) 
拥有3个及以上BiBi好友,获得1次“再投一票”机会;而拥有10个及以上BiBi好友,居然有20次机会!一般人我不告诉他…');\n\t\t\t} else if (friendNum <= 9) {\n\t\t\t\tif(vote_again_num>=1)\n\t\t\t\t\tshowTips($inviteTipsContainer, $inviteTipsText, '(✪ω✪)啊哦,“再投一票”机会用完咯!偷偷告诉你,拥有 10 个及以上 BiBi 好友可以再获得 19 次“再投一票”的机会呢~')\n\t\t\t\telse\n\t\t\t\t\tshowTips($inviteTipsContainer, $inviteTipsText, '(≧∀≦)真棒!你已经获得了1次“再投一票”机会;如果拥有10个及以上BiBi好友,会有20次机会噢!加油,就差一点点了…');\n\t\t\t} else {\n\t\t\t\tif(vote_again_num>=20)\n\t\t\t\t\tshowTips($normalTipsContainer, $normalTipsText, '(.Q.)你已经用完了所有“再投一票”机会,拜托别再戳我了,会痒…');\n\t\t\t\telse\n\t\t\t\t\tshowTips($normalTipsContainer, $normalTipsText, '(❁´▽`❁)太不可思议了!好友力MAX的你,已经拥有10个及以上BiBi好友,成功解锁了20次“再投一票”机会呢!But…省着点儿用…');\n\t\t\t}\n\t\t}\n\t});\n}\nfunction voteAgainClick(){\n\tshowTips($normalTipsContainer, $normalTipsText, '活动已经结束啦~不能再投票了哦~');\n\treturn;\n\tif(friendNum<3){\n\t\tshowTips($inviteTipsContainer, $inviteTipsText, '(ಥ_ಥ) 拥有3个及以上BiBi好友,获得1次“再投一票”机会;而拥有10个及以上BiBi好友,居然有20次机会!一般人我不告诉他…');\n\t\treturn;\n\t}\n\tif (friendNum <= 9&&vote_again_num>=1) {\n\t\tshowTips($inviteTipsContainer, $inviteTipsText, '(✪ω✪)啊哦,“再投一票”机会用完咯!偷偷告诉你,拥有 10 个及以上 BiBi 好友可以再获得 19 次“再投一票”的机会呢~')\n\t\treturn;\n\t}\n\tif(friendNum >= 10&&vote_again_num>=20){\n\t\tshowTips($normalTipsContainer, $normalTipsText, '(.Q.)你已经用完了所有“再投一票”机会,拜托别再戳我了,会痒…');\n\t\treturn;\n\t}\n\t$voteAgain.fastClick(function(){});\n\t$.ajax({\n\t\ttype:\"GET\",\n\t\tdataType:\"json\",\n\t\turl:Andes.netService+\"/andes/bestvoice/vote?voice_id=\"+(current_uid+1)+\"&vote_type=other&_v=1&_token=\"+Andes.getToken()+\"&_ts=\"+Date.parse(new Date())/1000,\n\t\tsuccess:function(){\n\t\t\tAndes.showToast('投票成功~');\n\t\t\t$('.vote-again span').text(\"再投一票(*\"+(--leftVotes)+\")\");\n\t\t\tvote_again_num++;\n\t\t\tvoice_list[current_uid].totel_like++;\n\t\t\t$('#detail 
.like-num').text(voice_list[current_uid].totel_like);\n\t\t\tcurrent_dom.find('.like-num').text(voice_list[current_uid].totel_like);\n\t\t\t$detail.fastClick(toggleImgFully);\n\t\t\t$voteAgain.fastClick(voteAgainClick);\n\t\t}\n\t});\n}\nfunction tipsShareClick(){\n\tif (!is_ios) {\n\t\tAndes.showShare(shareTitle, shareContent, shareUrl, '', shareUrl, shareImageUrl, 'BEST_VOICE', 'saveShare');\n\t} else {\n\t\tAndes.showShareWithoutClip(shareTitle, shareContent, useShortUrl, '', shareIOSUrl, shareImageUrl, 'BEST_VOICE', saveShare);\n\t}\n}\nfunction toggleImgFully(){\n\t$('.user-detail-area .user-tag,.user-detail-area .user-like,.voice-button').toggle();\n}\nfunction unshowImgFully(){\n\t$('.user-detail-area .user-tag,.user-detail-area .user-like,.voice-button').show();\n}\nfunction init(){\n\tif(Andes.getToken()==''){\n\t\tAndes.getToken=function(){\n\t\t\treturn \"ce8f447d-d917-4aab-b0a3-984df87a75f5\";\n\t\t}\n\t}\n\t$('.swiper-wrapper img').css('height',document.body.clientWidth*0.88*4/3+'px');//解决部分手机图片比例失调\n\t$detail.get(0).addEventListener('touchmove',function(e){\n\t\te.preventDefault();\n\t});\n\twindow.onload = function() {\n\t\tFastClick.attach(document.body);\n\t};\n\tif (!is_ios) {\n\t\tfriendNum=Andes.getFriendCount();\n\t\tgetList();\n\t} else {\n\t\tAndes.getFriendCount(function(data){\n\t\t\tfriendNum=parseInt(data);\n\t\t\tgetList();\n\t\t});\n\t}\n\t$('#rule').fastClick( function () {\n\t\tdocument.body.style.overflow = 'hidden';\n\t\t$rulesContainer.style.display = 'block';\n\t});\n\t$('#ruleClose').fastClick( function () {\n\t\tdocument.body.style.overflow = 'auto';\n\t\t$rulesContainer.style.display = 'none';\n\t});\n\n\t$('#detailClose').fastClick(function () {\n\t\tsoundControl();\n\t\t$('.swipe-prev,.swipe-next,#detailClose').hide();\n\t\tdocument.body.style.overflow = 'auto';\n\t\t$detail.get(0).classList.remove('normal');\n\t\t$ruleBanner.style.display = 'block';\n\t\t$voteBanner.style.display = 
'none';\n\t\tunshowImgFully();\n\t});\n\n\t$('#botBannerShare').fastClick(function(){\n\t\tif (!is_ios) {\n\t\t\tAndes.showShare(shareTitle, shareContent, shareUrl, '', shareUrl, shareImageUrl, 'BEST_VOICE', null);\n\t\t} else {\n\t\t\tAndes.showShareWithResult(shareTitle, shareContent, useShortUrl, '', shareIOSUrl, shareImageUrl, 'BEST_VOICE', null);\n\t\t}\n\t});\n\t$('#tipsShare').fastClick(tipsShareClick);\n\t$('#slider img,.detail-tips').fastClick( function () {\n\t\tshowTips($shareTipsContainer);\n\t});\n\n\t$('#inviteFriends').fastClick(function () {\n\t\tAndes.finish();\n\t});\n\n\t$('#listenMyVoice').fastClick(function (event) {\n\t\tevent.stopPropagation();\n\t\tconsole.log(soundPlaying);\n\t\tsoundControl(voice_list[current_uid].audio);\n\t});\n}"
},
{
"alpha_fraction": 0.47099971771240234,
"alphanum_fraction": 0.4917905330657959,
"avg_line_length": 34.74305725097656,
"blob_id": "2bf9532b568d872526a12ff76c0af1b0f45b598d",
"content_id": "756dcfd56c4f3149262aab60e00fa1d5bb0b322b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 10469,
"license_type": "no_license",
"max_line_length": 207,
"num_lines": 288,
"path": "/scripts/live/danmu.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/8/18.\n */\nvar Andes={};\nAndes.getToken=function(){\n return \"a2d384bc-107c-46fb-bc24-bdbb2ef31239\";\n};\nAndes.netService='http://andes.cootekservice.com';\nvar fworks=new Fireworks();\nVue.config.delimiters = ['<%', '%>'];\nVue.transition('fade',{\n type:'animation',\n enterClass:'fadeIn',\n leaveClass:'fadeOut'\n});\nVue.transition('zanTipBounce',{\n type:'animation',\n enterClass:'bounceInLeft',\n leaveClass:'bounceOutRight'\n});\nvar vue=new Vue({\n init:function(){\n myLib.remAdjustPC(20, 720);\n document.body.style.display='block';\n fworks.hideCanvas();\n $(\"#bg-switch\").bootstrapSwitch();\n $('#bg-switch').on('switchChange.bootstrapSwitch', function (event, state) {\n if(state==false) {//关灯\n vue.bgStyle='black';\n }\n else{\n vue.bgStyle='white';\n }\n\n });\n },\n el: 'body',\n data: {\n //initial\n roomId:'',\n cm:{},\n isActiveTab:1,\n bgStyle:'white',\n list:[],\n borderStyleList:[4,2,0,3,1],\n caidanList:['','/andesres/image/live/danmu/大笑.gif','/andesres/image/live/danmu/爱心.gif'],\n chooseGif:0,\n showGif:0,\n caidanInterval:500,//小彩蛋间隔数\n bigCaidanInterval:2000,//大彩蛋间隔数\n commentFade:0,\n isShowSysTip:0,\n showLogin:1,\n showComment:0,\n commentTimeout:null,\n\n showVote:0,\n total_num:1,\n vote_detail:{\n \"is_owner\":false,\n \"status\":\"\",\n \"title\":\"\",\n \"is_voted\":0,\n \"max_vote\":0,\n \"candidate\":[]\n },\n get_vote_url:Andes.netService+'/vote/list?_token='+Andes.getToken(),\n\n zanTipBounce:0,//是否出现2000赞提示\n zanStyleNum:1,//2000赞彩蛋样式\n zanNum:0,\n zanParseInt:2000,\n heartList:[\n [\n [{color:'blue',left:'2rem'},{color:'green',left:'4rem'}],\n [{color:'red',left:'3rem'},{color:'blue',left:'4rem'}],\n [{color:'green',left:'4rem'},{color:'red',left:'2rem'}],\n [{src:'/andesres/image/live/danmu/present/薯条.png',left:'2.4rem'},{src:'/andesres/image/live/danmu/present/钻戒.png',left:'2rem'}]\n ],\n [\n [{color:'blue',left:'2rem'},{color:'red',left:'3rem'},{color:'yellow',left:'4rem'}],\n 
[{color:'green',left:'4rem'}],\n [{color:'red',left:'3rem'},{color:'blue',left:'4rem'}],\n [{src:'/andesres/image/live/danmu/present/悍马.png',left:'2.4rem'}]\n ],\n [\n [{color:'blue',left:'4rem'}],\n [{color:'yellow',left:'4rem'},{color:'blue',left:'3rem'},{color:'red',left:'2rem'}],\n [{color:'blue',left:'2rem'},{color:'green',left:'4rem'}],\n [{src:'/andesres/image/live/danmu/present/气球.png',left:'3rem'},{src:'/andesres/image/live/danmu/present/跑车.png',left:'2.2rem'},{src:'/andesres/image/live/danmu/present/棒棒糖1.png',left:'2rem'}]\n ],\n [\n [{color:'red',left:'3rem'},{color:'yellow',left:'2rem'},{color:'blue',left:'4rem'}],\n [{color:'yellow',left:'4rem'},{color:'blue',left:'3rem'},{color:'red',left:'2rem'}],\n [{color:'blue',left:'2rem'},{color:'red',left:'3rem'},{color:'green',left:'4rem'}]\n ],\n [\n [{color:'blue',left:'2rem'},{color:'red',left:'3rem'},{color:'green',left:'4rem'}],\n [{color:'red',left:'3rem'},{color:'green',left:'2rem'},{color:'blue',left:'4rem'}],\n [{color:'green',left:'4rem'},{color:'blue',left:'3rem'},{color:'red',left:'2rem'}]\n ]\n ],\n heartDetail:[],\n //config\n sysTip:'下载BiBi,在直播间评论、参与投票',\n getCommentUrl:'http://andes.cootekservice.com/live/get_comment?_ts=' + Date.parse(new Date()) / 1000,\n refreshTime:1.5,\n scrollTime:0.7\n },\n methods:{\n loginClick:function(){\n $.ajax({\n type:\"POST\",\n url:vue.getCommentUrl,\n dataType:\"json\",\n data:JSON.stringify({live_id:vue.roomId}),\n success:function(data){\n if(data.result_code!=2000) {\n alert('房间号不存在');\n vue.roomId=null;\n return;\n }\n vue.zanNum=data.result.like_num;\n vue.showLogin=0;\n vue.cm=new Cm();\n vue.cm.init();\n }\n });\n },\n tabClick:function(tabName){\n if(tabName=='comment') {\n if(vue.isActiveTab==1)\n return;\n clearInterval(vue.vote_interval);\n vue.cm.finish();\n vue.cm.init();\n vue.isActiveTab = 1;\n vue.showVote=0;\n vue.showComment=1;\n }\n else{\n if(vue.isActiveTab==0)\n return;\n vue.getVoteDetail();\n 
vue.vote_interval=setInterval(function(){vue.getVoteDetail()},8000);\n vue.cm.finish();\n vue.isActiveTab = 0;\n vue.showComment=0;\n vue.showVote=1;\n }\n },\n getVoteDetail: function () {\n $.ajax({//TODO\n type: \"POST\",\n url: vue.get_vote_url,\n dataType: \"json\",\n data:JSON.stringify({live_id:vue.roomId}),\n success: function (data) {\n if (data.result_code != 2000) {\n return;\n }\n if(data.result.vote[0]) {\n vue.vote_detail = data.result.vote[0];\n vue.vote_detail.candidate = data.result.vote[0].candidate.sort(myLib.getSortFun('desc','result'));\n if(vue.vote_detail.title=='Fudan Idol Final Vot')\n vue.vote_detail.title='Fudan Idol Final Vote';\n vue.total_num=1;\n for (var i in vue.vote_detail.candidate)\n vue.total_num += vue.vote_detail.candidate[i].result;\n if(vue.total_num>1)\n vue.total_num--;\n }\n }\n });\n }\n }\n});\nfunction Cm(){//滚动评论类\n var duration=0;\n var sysTipDuraion=30;\n var timestamp=0;\n var count=0;\n var interval;\n var commentTimeout;\n var againTime=0;\n var getComment = function(){\n $.ajax({\n url:vue.getCommentUrl,\n dataType:\"json\",\n type:\"POST\",\n data:JSON.stringify({live_id:vue.roomId,ts:timestamp}),\n success:function(data){\n var comment=data.result.comment;\n caidan(data);//显示点赞彩蛋\n vue.zanNum=data.result.like_num;\n if(comment.length>0)\n timestamp=comment[comment.length-1].ts+1;\n scrollComment(comment);\n againTime=0;\n },\n error:function() {\n againTime++;\n if(againTime>5) {\n alert('网络不好');\n againTime=0;\n return;\n }\n getComment();\n }\n });\n };\n var scrollComment=function(comment){//一次请求多次滚动评论\n var cur_item=comment.shift();\n if(!cur_item) {\n clearInterval(interval);\n commentTimeout=setTimeout(function(){\n getComment();\n },vue.refreshTime*1000);\n return;\n }\n cur_item.borderStyle=parseInt(Math.random()*5-0.01)+1;\n if(window.atob)\n cur_item.text=myLib.base64Decode(cur_item.text);\n else\n cur_item.text=myLib.utf8to16(myLib.base64_decode(cur_item.text));\n 
if(duration!=0&&duration%sysTipDuraion==0){\n if(vue.list.length>=5)\n vue.list.$remove(vue.list[0]);\n vue.list.push({nick_name:null,text:null,borderStyle:6});\n duration=0;\n }\n if(vue.list.length>=5)\n vue.list.$remove(vue.list[0]);\n var img = new Image();\n img.src='http://cootek-walkie-talkie.cdn.cootekservice.com/head/'+cur_item.user_id+'.png';\n img.onerror=function(){\n cur_item.user_id=null;\n vue.list.push(cur_item);\n };\n img.onload=function(){\n vue.list.push(cur_item);\n };\n duration++;\n scrollComment(comment);\n };\n var zanTipShow=function(){\n fworks.showCanvas();\n setTimeout(function(){fworks.createFworks(0.4,0.4);},150);\n setTimeout(function(){fworks.createFworks(0.5,0.5);},0);\n setTimeout(function(){fworks.createFworks(0.6,0.3);},300);\n vue.zanTipBounce=1;\n setTimeout(function(){\n vue.zanTipBounce=0;\n fworks.hideCanvas();\n },2000);\n };\n var p=0;\n var caidan=function(data){\n var heartDetail=new Object(myLib.deepClone(vue.heartList[parseInt(Math.random()*5)]));\n data.result.like_num+=++p;\n if(vue.zanNum<data.result.like_num)\n vue.heartDetail=heartDetail;\n delete heartDetail;\n if(parseInt(vue.zanNum/vue.bigCaidanInterval)<parseInt(data.result.like_num/vue.bigCaidanInterval)){\n vue.showGif=0;\n vue.zanParseInt=parseInt(data.result.like_num/vue.bigCaidanInterval)*vue.bigCaidanInterval;\n vue.zanStyleNum=parseInt(Math.random()*3+1);\n zanTipShow();\n }\n else if(parseInt(vue.zanNum/vue.caidanInterval)<parseInt(data.result.like_num/vue.caidanInterval)){\n vue.chooseGif=vue.chooseGif==1?2:1;\n vue.showGif=1;\n setTimeout(function(){vue.showGif=0},vue.chooseGif==1?5200:3100);\n }\n };\n this.init=function(){\n vue.isShowSysTip=0;\n vue.showComment=1;\n getComment();\n };\n this.finish=function(){\n vue.chooseGif=0;\n vue.zanTipBounce=0;\n fworks.hideCanvas();\n clearTimeout(commentTimeout);\n clearTimeout(interval);\n };\n}"
},
{
"alpha_fraction": 0.4987654387950897,
"alphanum_fraction": 0.5149520039558411,
"avg_line_length": 33.72380828857422,
"blob_id": "41dab94969e7ed86906274724e62ed64ed2cb0f8",
"content_id": "9e1ba25875637ce14eaa95d473e62ab5207c469e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 3649,
"license_type": "no_license",
"max_line_length": 284,
"num_lines": 105,
"path": "/scripts/andes/tutorial.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/7/27.\n */\nvar swipe={\n num:7,\n img:['/andesres/image/andes/tutorial/封面.jpg','/andesres/image/andes/tutorial/1.jpg','/andesres/image/andes/tutorial/2.jpg','/andesres/image/andes/tutorial/3.jpg','/andesres/image/andes/tutorial/4.jpg','/andesres/image/andes/tutorial/5.jpg','/andesres/image/andes/tutorial/6.jpg'],\n swipeDom:$('.swipe'),\n clientHeight:$('.swipe').height(),\n index:0,\n init:function(){\n document.body.addEventListener('touchmove',function(e){\n e.preventDefault();\n });\n swipe.initImg();\n $('.back-icon').click(function(){\n Andes.finish();\n });\n },\n initImg:function(){\n var image=[];\n for (var i in this.img) {\n var tmp = new Image();\n tmp.src = this.img[i];\n image.push(tmp);\n this.swipeDom.find('img').eq(i).attr('src',this.img[i]);\n }\n function img_load(){\n var t=setInterval(function(){\n var isComplete=true;\n for (var i in swipe.img) {\n if(!image[i].complete)\n return;\n }\n clearInterval(t);\n swipe.touchSwipe();\n swipe.arrow();\n },100);\n }\n img_load();\n },\n transform3d:function(dom,offset){\n dom.css({'-webkit-transform':'translate3d(0,'+offset+'px,0)',\n '-moz-transform':'translate3d(0,'+offset+'px,0)',\n '-ms-transform':'translate3d(0,'+offset+'px,0)',\n '-o-transform':'translate3d(0,'+offset+'px,0)',\n 'transform':'translate3d(0,'+offset+'px,0)'\n });\n },\n checkLock:function(){\n },\n touchSwipe:function(){\n var startY,offsetY,originY= 0,_lock=false;\n swipe.swipeDom.on('touchstart',function(e){\n if(_lock)\n return;\n startY=e.touches[0].pageY;\n });\n swipe.swipeDom.on('touchmove',function(e){\n e.preventDefault();\n if(_lock)\n return;\n offsetY=e.touches[0].pageY-startY;\n if(offsetY>0&&swipe.index==0||offsetY<0&&swipe.index==swipe.num-1)\n offsetY=0;\n swipe.transform3d(swipe.swipeDom,originY+offsetY)\n });\n swipe.swipeDom.on('touchend',function(e){\n if(offsetY==0||_lock)\n return;\n _lock=true;\n swipe.swipeDom.addClass('img-transition');\n 
if(Math.abs(offsetY)>0.1*swipe.clientHeight) {\n swipe.transform3d(swipe.swipeDom, offsetY < 0 ? originY -= swipe.clientHeight : originY += swipe.clientHeight);\n offsetY < 0 ? swipe.index++ : swipe.index--;\n if(swipe.index==swipe.num-1)\n $('.arrow').hide();\n else\n $('.arrow').show();\n }\n else\n swipe.transform3d(swipe.swipeDom,originY);\n setTimeout(function(){\n _lock=false;\n swipe.swipeDom.removeClass('img-transition');\n },400);\n });\n },\n arrow:function(){\n var arrowInterval=setInterval(function(){\n $('.arrow').toggleClass('arrow-disappear');\n },700);\n }\n};\nfunction remAdjust() {\n function resize() {\n swipe.clientHeight=$('.swipe').height();\n swipe.transform3d(swipe.swipeDom,-1*swipe.index*swipe.clientHeight);\n }\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nremAdjust();\nswipe.init();"
},
{
"alpha_fraction": 0.5684001445770264,
"alphanum_fraction": 0.5860484838485718,
"avg_line_length": 38.08571243286133,
"blob_id": "9f19c547fad5b27e47f95bf263a0c1dacf74a300",
"content_id": "f8cb88cc77a24d49b86a336f8a7df0bc171dbdcc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 9704,
"license_type": "no_license",
"max_line_length": 159,
"num_lines": 245,
"path": "/es6/share/star_chat.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2017/1/9.\n */\nmyLib.remAdjust(20,320);\nvar netService='';\nif(location.href.indexOf('https')>-1)\n netService='https://andes.cootekservice.com';\nelse\n netService='http://andes.cootekservice.com';\nvar share_title='',\n share_desc='',\n share_image='http://andes.cootekservice.com/andesres/image/andes/andes_share_icon.jpg';\nvar vue=new Vue({\n delimiters: ['<%', '%>'],\n el: '#content',\n mounted(){\n document.setTitle = (t) => {\n document.title = t;\n var i = document.createElement('iframe');\n i.src = '//m.baidu.com/favicon.ico';\n i.style.display = 'none';\n i.onload = function() {\n setTimeout(function(){\n i.remove();\n }, 9)\n };\n document.body.appendChild(i);\n };\n //if(this.is_history_page)\n // Andes.stopStarChatAudio('vue.stopStarChatAudio');\n $('#content').show();\n setTimeout(() => {\n this.init();\n },200);\n },\n data: {\n rem_rate: 320 / document.body.clientWidth,\n is_history_page: location.href.indexOf('_history')>-1,\n template:myLib.GetQueryString('template'),\n uid:myLib.GetQueryString('uid'),\n ts:myLib.GetQueryString('ts'),\n dialog_list: data[myLib.GetQueryString('template')],\n star_name: data[myLib.GetQueryString('template')][0].sub_title,\n nickname:'',\n chat_list: [],\n audio_list: [],\n answer: [],\n audio_index: 0,\n current_audio: new Audio(),\n showBrowserTip:0,\n hostNotOther: myLib.getHostApp() !='other',\n playIcon: 5,\n canClick: 0,\n is_ios: myLib.getHostDevice() === 'iphone'\n },\n methods:{\n init(){\n $.ajax({\n url: netService + '/activity/star_access/user_audio_list?_ts=' + Date.parse(new Date()) / 1000,\n dataType: 'json',\n data: JSON.stringify({template:this.template,uid:this.uid,ts:this.ts}),\n type: 'post',\n success: (data) => {\n this.answer = data.result.audio_lst;\n //this.answer = answer;\n this.nickname = data.result.nick_name;\n setTimeout(() => {\n if(this.is_history_page)\n document.setTitle(`${this.star_name}访问记录`);\n else\n 
document.setTitle(`${this.nickname}正在接受${this.star_name}访问`);\n }, 100);\n share_title = `快来围观,${this.nickname}正在接受${this.star_name}的访问`;\n share_desc = `快来看${this.nickname}和${this.star_name}的访谈现场`;\n share_image = `http://andes.cootekservice.com/andesres/image/andes/star_chat/${this.template}/${this.dialog_list[0].contact_image}`;\n if (!this.is_history_page && WechatShare.isWeixin())\n WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, share_title, share_desc, location.href, share_image);\n this.addAudioList(0, 0, 0);\n }\n })\n },\n addAudioList(dialog_index, audio_index, answer_index){\n if(dialog_index >= this.dialog_list.length){\n let autoPlayAudio1 = () => {\n wx.ready(() => {\n this.audioPlay();\n });\n };\n if(this.is_history_page || navigator.userAgent.indexOf('Safari')>-1) {\n let handler = (e) => {\n e.stopPropagation();\n document.body.removeEventListener('touchstart', handler);\n this.audioPlay();\n setTimeout(() => {\n this.canClick = 1;\n },300)\n };\n document.body.addEventListener('touchstart', handler);\n }\n else\n WechatShare.isWeixin() ? 
autoPlayAudio1() : this.audioPlay();\n if(this.is_history_page)\n this.initChatListHistory(data[this.template]);\n else\n this.initChatList(data[this.template]);\n return;\n }\n if(this.dialog_list[dialog_index].audio_file.indexOf('question') < 0 || answer_index >= this.answer.length) {\n this.audio_list.push({url: `/andesres/image/andes/star_chat/${this.template}/${this.dialog_list[dialog_index].audio_file}`});\n this.audio_list[audio_index].show_duration=this.dialog_list[dialog_index].show_duration;\n this.addAudioList(dialog_index+1, audio_index+1, answer_index);\n }\n else{\n this.audio_list.push({url: `/andesres/image/andes/star_chat/${this.template}/${this.dialog_list[dialog_index].audio_file}`});\n this.audio_list[audio_index].show_duration=this.dialog_list[dialog_index].show_duration;\n this.audio_list.push({url: this.answer[answer_index].url});\n let audio = new Audio();\n audio.src = this.audio_list[audio_index+1].url;\n if(this.audio_list[audio_index+1].url.indexOf('.aud') <= -1){\n this.audio_list[audio_index + 1].show_duration = myLib.GetQueryString('t' + (answer_index + 1)) ? 
myLib.GetQueryString('t' + (answer_index + 1)) : 1;\n if(this.audio_list[audio_index + 1].show_duration > 30)\n this.audio_list[audio_index + 1].show_duration = 30;\n this.addAudioList(dialog_index+1, audio_index+2, answer_index+1);\n return;\n }\n audio.onloadedmetadata = () => {\n this.audio_list[audio_index + 1].show_duration = parseInt(audio.duration);\n this.addAudioList(dialog_index + 1, audio_index + 2, answer_index + 1);\n };\n audio.onerror = () => {\n this.audio_list[audio_index + 1].show_duration = 1;\n this.addAudioList(dialog_index + 1, audio_index + 2, answer_index + 1);\n };\n }\n },\n audioPlay(audio_index = 0){\n console.log(audio_index);\n if(audio_index === 0)\n this.playIcon = 4; //换成暂停icon\n if(audio_index>=this.audio_list.length) {\n console.log('audioPlay end');\n this.playIcon = 5;\n this.audio_index = 0;\n this.current_audio = new Audio();\n return;\n }\n if(this.audio_index !== audio_index || !this.current_audio.src) {\n this.audio_index = audio_index;\n if(this.audio_list[audio_index].url.indexOf('.aud')>-1){\n this.audioPlay(audio_index+1);\n return;\n }\n this.current_audio.src = this.audio_list[audio_index].url;\n this.current_audio.onended = () => {\n this.current_audio.onended = () => {};\n setTimeout(() => this.audioPlay(audio_index+1),0);\n }\n }\n this.current_audio.onerror = () => {\n this.current_audio.onerror = () => {};\n setTimeout(() => this.audioPlay(audio_index+1),0);\n };\n console.log('audio play');\n this.current_audio.play();\n },\n audioPlayClick(){\n if(!this.canClick && navigator.userAgent.indexOf('Safari')>-1)\n return;\n console.log('audio stop:'+!this.current_audio.paused || this.current_audio.readyState<3);\n console.log('audio readyState:'+this.current_audio.readyState);\n if(!this.current_audio.paused || this.current_audio.readyState<2) {\n this.playIcon = 5;//换成播放icon\n this.current_audio.pause();\n }\n else {\n this.playIcon = 4;//换成暂停icon\n this.audioPlay(this.audio_index);\n }\n },\n stopStarChatAudio(){\n 
this.playIcon = 5;//换成播放icon\n this.current_audio.pause();\n setTimeout(() => this.current_audio.pause(), 1000);\n this.current_audio.onloadedmetadata = () => {\n this.current_audio.pause();\n this.current_audio.onloadedmetadata = {};\n };\n },\n initChatList(arr){\n let addItem = (index, answer_index) => {\n if(index>=this.dialog_list.length)\n return;\n //setTimeout(() => {document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;},200);\n if(this.dialog_list[index].audio_file.indexOf('question') > -1 && answer_index < this.answer.length) {\n this.chat_list.push(this.dialog_list[index]);\n setTimeout(() => this.chat_list.push(this.audio_list[index + 1 + answer_index]),1000);\n //setTimeout(() => {document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;},1500);\n setTimeout(() => addItem(index + 1,answer_index + 1),2000);\n }\n else {\n this.chat_list.push(this.dialog_list[index]);\n setTimeout(() => addItem(index + 1,answer_index),1000);\n }\n };\n addItem(0,0);\n },\n initChatListHistory(arr){\n let addItem = (index, answer_index) => {\n if(index>=this.dialog_list.length)\n return;\n //document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;\n if(this.dialog_list[index].audio_file.indexOf('question') > -1 && answer_index < this.answer.length) {\n this.chat_list.push(this.dialog_list[index]);\n this.chat_list.push(this.audio_list[index + 1 + answer_index]);\n //document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;\n addItem(index + 1,answer_index + 1);\n }\n else {\n this.chat_list.push(this.dialog_list[index]);\n addItem(index + 1,answer_index);\n }\n };\n addItem(0,0);\n },\n joinClick(){\n console.log('joinClick');\n if (myLib.getHostApp() !='other') {\n //若寄主应用为微信、微博、qq空间浏览器\n this.showBrowserTip=1;\n return;\n }\n this.stopStarChatAudio();\n if (this.is_ios)\n location.href = 'https://itunes.apple.com/app/apple-store/id1090811540?pt=618432&ct=startalk_share&mt=8';\n else\n location.href = 
'http://cootek-file.oss-cn-hangzhou.aliyuncs.com/AndesDailyBuild/Andes_20170213110044_interview_0A00A1.apk';\n },\n openAppClick(){\n location.href = 'bibi://openbibi?page=star_chat'\n }\n }\n});\nwindow.onload=function(){\n FastClick.attach(document.body);\n};\n"
},
{
"alpha_fraction": 0.40687987208366394,
"alphanum_fraction": 0.41682344675064087,
"avg_line_length": 30.2605037689209,
"blob_id": "daf3635fd71481f28f008b30d52989046c06c759",
"content_id": "20b836ea38b322388ddbcbc28c8ffb5bd5da080c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 3751,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 119,
"path": "/scripts/main.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "//加上这句话微信就好使了\\\n//document.addEventListener('touchmove', function (e) {\n// e.preventDefault();\n//}, false);\n$(document).ready(function () {\n //控制滚动\n //ie-hack\n setTimeout(function () {\n $('.bg_1 .word').fadeIn(function(){\n $('.bg_1 .mobile').fadeIn();\n });\n },500);\n if(!!window.ActiveXObject || \"ActiveXObject\" in window) {\n $('.part').addClass('ie-hack')\n }\n (function () {\n var scroll = {\n 'scrollFlag':true,\n 'index': 1\n };\n var index = 0;\n var part = $('.part');\n var $pageRightLi = $('.page-right > li');\n var length = part.length;\n var $downloadButton = $('#downButton');\n $pageRightLi.on('click', function () {\n console.log($(this).index());\n var liIndex = $(this).index();\n part.eq(index).removeClass('part-in').addClass('part-out');\n if (index < liIndex) {\n index = liIndex - 1;\n down()\n } else {\n index = liIndex + 1;\n up();\n }\n\n });\n $downloadButton.on('click', function () {\n down();\n });\n $(document).on('mousewheel', function (event) {\n if (scroll.scrollFlag == false) {\n return;\n }\n scroll.scrollFlag = false;\n setTimeout(function () {\n scroll.scrollFlag = true;\n }, 1000);\n if (event.deltaY == -1) {\n down();\n } else if (event.deltaY == 1) {\n up();\n }\n\n });\n\n $(document).keydown(function(event){\n var keycode = event.which;\n if (scroll.scrollFlag == false) {\n return;\n }\n scroll.scrollFlag = false;\n setTimeout(function () {\n scroll.scrollFlag = true;\n }, 1000);\n if (keycode == 40) {\n down();\n } else if (keycode == 38) {\n up();\n }\n });\n $('.container').on('swipeup', function () {\n down();\n });\n $('.container').on('swipedown', function () {\n up();\n });\n function down() {\n $('.word,.footer,.mobile').hide();\n if (index == length - 1) {\n return;\n }\n if (index == length - 2) {\n $downloadButton.hide();\n } else {\n $downloadButton.show();\n }\n part.eq(index).removeClass('part-in').addClass('part-out');\n part.eq(index + 1).removeClass('part-out').addClass('part-in');\n 
$pageRightLi.eq(index + 1).addClass('active').siblings().removeClass('active');\n index++;\n showDetail(index);\n }\n function up() {\n $('.word,.footer,.mobile').hide();\n if (index == 0) {\n return;\n }\n $downloadButton.show();\n part.eq(index).removeClass('part-in').addClass('part-out');\n part.eq(index - 1).removeClass('part-out').addClass('part-in');\n $pageRightLi.eq(index - 1).addClass('active').siblings().removeClass('active');\n index--;\n showDetail(index);\n }\n function showDetail(index){\n setTimeout(function () {\n $('.bg_'+(index+1)+' .word').fadeIn(function(){\n $('.bg_'+(index+1)+' .mobile').fadeIn(function(){\n if(index==5)\n $('.footer').fadeIn();\n })\n });\n },1300)\n }\n })();\n\n});\n\n"
},
{
"alpha_fraction": 0.6556927561759949,
"alphanum_fraction": 0.6582990288734436,
"avg_line_length": 49.97901916503906,
"blob_id": "a3d96c4016db4ddae0c850f0609cbe8b7dc371da",
"content_id": "2c33ebdaf449342a908f7c5dfe32808d91a38135",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7290,
"license_type": "no_license",
"max_line_length": 135,
"num_lines": 143,
"path": "/andesres/image/andes/assistant/iOS/service.py",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom baseclient.python.tornado_proto_gclient import Application\nimport os\nimport json\nimport baseclient\nimport tornado.httpserver\nimport tornado.ioloop\nimport tornado.web\nfrom Crypto.Cipher import AES\nimport baseclient.python.andes_pb2 as proto_andes\nimport http.public_var as public_var\n\nimport http.andes.andes as Andes\nimport http.pushtalk.pushtalk as PushTalk\nimport http.wechat.wechat as Wechat\nimport http.wechat_test.wechat_test as WechatTest\nimport http.tool.tool as Tool\nimport http.invite.invite as Invite\nimport http.upload.upload as Upload\nimport http.msg.msg as Msg\nimport http.group.group as Group\nimport http.activity.activity as Activity\nimport logging\nimport environment\n\nclass App(Application):\n def __init__(self, hostid, out_addr, out_port, service_name, template_path):\n self._hostid = hostid\n handlers = [\n (r'/andes/share.html', Andes.ShareHandler),\n (r'/andes/new_share.html', Andes.NewShareHandler),\n (r'/andes/oauth_share.html', Andes.OAuthShareHandler),\n (r'/andes/share_before.html', Andes.ShareBeforeHandler),\n (r'/andes/liker_collect.html', Andes.LikerCollectHandler),\n (r'/andes/liker_collect_before.html', Andes.LikerCollectBeforeHandler),\n (r'/andes/assistant.html', Andes.AssistantHandler),\n (r'/andes/couple_rank.html', Andes.CoupleRankHandler),\n (r'/andes/download.html', Andes.DownloadHandler),\n (r'/andes/new_download.html', Andes.NewDownloadHandler),\n (r'/andes/sound_wall.html', Andes.SoundWallHandler),\n (r'/andes/sound_like', Andes.SoundLikeHandler),\n (r'/andes/invite_card.html', Invite.InviteCardHandler),\n (r'/andes/invite_choose.html', Invite.InviteChooseHandler),\n (r'/andes/invite_record.html', Invite.InviteRecordHandler),\n (r'/andes/iperson.html', Invite.InvitePersonHandler),\n (r'/andes/idownload.html', Invite.InviteDownloadHandler),\n (r'/andes/statistic', Andes.StatisticHandler),\n (r'/andes/group/football.html', Group.FootballHandler),\n 
(r'/andes/group/tvshow.html', Group.TVShowHandler),\n (r'/andes/group/search', Group.GroupSearchHandler),\n (r'/andes/group/join', Group.GroupJoinHandler),\n (r'/andes/group/exit', Group.GroupExitHandler),\n (r'/andes/group/cancel_booking', Group.GroupCancelBookingHandler),\n (r'/andes/group/share.html', Group.GroupShareHandler),\n (r'/andes/bestvoice/index.html', Activity.BestVoiceIndexHandler),\n (r'/andes/bestvoice/vote', Activity.BestVoiceVoteHandler),\n (r'/andes/bestvoice/view_share', Activity.BestVoiceViewShareHandler),\n (r'/andes/bestvoice/share.html', Activity.BestVoiceShareHandler),\n (r'/wechat/oauth_callback', Wechat.WechatOauthCallbackHandler),\n (r'/wechat/test', WechatTest.PageWechatShopHandler),\n (r'/pushtalk/invite_search', Invite.InviteSearchHandler),\n (r'/pushtalk/gen_code', PushTalk.GenCodeHandler),\n (r'/pushtalk/query_code', PushTalk.QueryCodeHandler),\n (r'/pushtalk/query_group', PushTalk.QueryGroupHandler),\n (r'/pushtalk/upload_head', PushTalk.UploadHeadHandler),\n (r'/pushtalk/user_exist', PushTalk.PushTalkUserExistHandler),\n (r'/pushtalk/user_id_exist', PushTalk.PushTalkUserExistHandler),\n (r'/pushtalk/user_exist2', PushTalk.PushTalkUserExist2Handler),\n (r'/pushtalk/register', PushTalk.PushTalkRegisterHandler),\n (r'/pushtalk/search', PushTalk.PushTalkSearchHandler),\n (r'/pushtalk/user_info', PushTalk.PushTalkUserInfoHandler),\n (r'/pushtalk/upload_bibi_stat', PushTalk.PushTalkUploadBiBiStatHandler),\n (r'/pushtalk/getRole', PushTalk.EchoGetPersonalRoleHandler),\n (r'/pushtalk/setRole', PushTalk.EchoSetPersonalRoleHandler),\n (r'/pushtalk/getRandomSubTags', PushTalk.EchoGetRandomSubTagsHandler),\n (r'/pushtalk/getTags', PushTalk.EchoGetTagsHandler),\n (r'/pushtalk/matchSubTag', PushTalk.EchoMatchSubTagHandler),\n (r'/pushtalk/uploadBugReport', Msg.UploadBugReportHandler),\n (r'/tool/group/console', Tool.GroupConsoleHandler),\n (r'/tool/group/list', Tool.GroupListHandler),\n (r'/tool/group/add', Tool.GroupAddHandler),\n 
(r'/tool/group/del', Tool.GroupDelHandler),\n (r'/tool/group/update_group', Tool.GroupUpdateGroupHandler),\n (r'/tool/group/update_group_item', Tool.GroupUpdateGroupItemHandler),\n (r'/tool/ad', Tool.LoginHandler),\n (r'/tool/dologin', Tool.DoLoginHandler),\n (r'/tool/uz', Tool.UserHandler),\n (r'/tool/uz/qr', Tool.UserQueryHandler),\n (r'/tool/echo', Tool.EchoHandler),\n (r'/tool/echo/update', Tool.EchoUpdateHandler),\n (r'/tool/echo/del', Tool.EchoDeleteHandler),\n (r'/tool/echo/insert', Tool.EchoInsertHandler),\n (r'/tool/echo/refresh', Tool.EchoRefreshHandler),\n (r'/tool/echo/appUpdate', Tool.AppUpdateHandler),\n (r'/upload/voice_card', Upload.UploadVoiceCardHandler),\n (r'/(favicon.ico)', tornado.web.StaticFileHandler, {\"path\": template_path}),\n ]\n settings = {\n 'out_addr' : out_addr,\n 'out_port': out_port,\n 'service_id' : service_name,\n 'host_id' : str(self._hostid),\n 'template_path' : template_path,\n 'cookie_secret': 'pLRK2sFqWV_AF4ik3l=zJb'\n }\n Application.__init__(self, handlers, **settings)\n self.service.proto = proto_andes\n\ndef start_http(hostid, service_name, **config):\n global io_loop\n global application\n\n bibi_log_handler = logging.handlers.TimedRotatingFileHandler(\n os.path.join(config['bibi_call_log_path'], 'bibi_call_stat.log'), when='midnight', encoding='utf-8')\n bibi_formatter = logging.Formatter('[%(levelname)1.1s %(asctime)s, %(name)s %(module)s:%(lineno)d] %(message)s', '%y%m%d %H:%M:%S')\n bibi_log_handler.setFormatter(bibi_formatter)\n\n public_var.bibi_call_log = logging.getLogger('bibi_call')\n public_var.bibi_call_log.addHandler(bibi_log_handler)\n public_var.bibi_call_log.setLevel(logging.INFO)\n public_var.voice_card_file_path = config['voice_card_file_path']\n\n baseclient.python.tornado_proto_gclient.install()\n io_loop = tornado.ioloop.IOLoop.instance()\n application = App(hostid, config['addr'], config['port'], service_name, config['template_path'] )\n application.req_id = 0;\n if 'timeout' in config:\n 
application.timeout = config['timeout']\n else:\n application.timeout = 5000 #default 5000ms\n http_server = tornado.httpserver.HTTPServer(application, io_loop = io_loop, xheaders=True)\n http_server.listen(config['start_hostids'][hostid])\n\n # Put public_var to public_var.py from config file.\n public_var.timestamp_exp_time = config.get('timestamp_exp_time', 60 * 5)\n aes_key = config['access_token_aes_key']\n public_var.access_token_cipher = AES.new(aes_key)\n\n io_loop.start()\n\ndef stop_http(signum, frame):\n application.stop()\n"
},
{
"alpha_fraction": 0.5180235505104065,
"alphanum_fraction": 0.5335529446601868,
"avg_line_length": 37.5,
"blob_id": "f084f4e41cd4016e370ef74d0838ef7934a9ca6d",
"content_id": "6fc5015f43c896c7fa68d5c8767fab7357edada4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 10985,
"license_type": "no_license",
"max_line_length": 177,
"num_lines": 276,
"path": "/scripts/share/likerCollect.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/6/21.\n */\nfunction remAdjust(defaultFontSize, defaultScreenHeight, defaultScreenWidth) {\n var htmlNode = document.getElementsByTagName('html')[0];\n\n function resize() {\n var screenWidth = document.body.offsetWidth;\n var screenHeight = window.screen.height;console.log(screenHeight+\"+\"+screenWidth);\n htmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n }\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nfunction GetQueryString(name)\n{\n var reg = new RegExp(\"(^|&)\"+ name +\"=([^&]*)(&|$)\");\n var r = window.location.search.substr(1).match(reg);\n if(r!=null)return unescape(r[2]); return null;\n}\nfunction base64_encode(str){\n var c1, c2, c3;\n var base64EncodeChars = \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\";\n var i = 0, len= str.length, string = '';\n\n while (i < len){\n c1 = str.charCodeAt(i++) & 0xff;\n if (i == len){\n string += base64EncodeChars.charAt(c1 >> 2);\n string += base64EncodeChars.charAt((c1 & 0x3) << 4);\n string += \"==\";\n break;\n }\n c2 = str.charCodeAt(i++);\n if (i == len){\n string += base64EncodeChars.charAt(c1 >> 2);\n string += base64EncodeChars.charAt(((c1 & 0x3) << 4) | ((c2 & 0xF0) >> 4));\n string += base64EncodeChars.charAt((c2 & 0xF) << 2);\n string += \"=\";\n break;\n }\n c3 = str.charCodeAt(i++);\n string += base64EncodeChars.charAt(c1 >> 2);\n string += base64EncodeChars.charAt(((c1 & 0x3) << 4) | ((c2 & 0xF0) >> 4));\n string += base64EncodeChars.charAt(((c2 & 0xF) << 2) | ((c3 & 0xC0) >> 6));\n string += base64EncodeChars.charAt(c3 & 0x3F)\n }\n return string\n}\n\nfunction encodeUTF8(str){\n var temp = \"\",rs = \"\";\n for( var i=0 , len = str.length; i < len; i++ ){\n temp = str.charCodeAt(i).toString(16);\n rs += \"\\\\u\"+ new Array(5-temp.length).join(\"0\") + temp;\n }\n return rs;\n}\nfunction decodeUTF8(str){\n return 
str.replace(/(\\\\u)(\\w{4}|\\w{2})/gi, function($0,$1,$2){\n return String.fromCharCode(parseInt($2,16));\n });\n}\n\nfunction getHostDevice(){\n var result = {\n android: window.navigator.userAgent.indexOf('Android') > -1 || window.navigator.userAgent.indexOf('Linux') > -1,\n iphone: window.navigator.userAgent.indexOf('iPhone') > -1\n };\n if (result.android) return 'android';\n if (result.iphone) return 'iphone';\n return \"other\";\n}\nremAdjust(20,568,320);\nvar zan = {\n audio:[],\n iphone_link:'http://a.app.qq.com/o/simple.jsp?pkgname=com.cootek.walkietalkie',\n android_link:'http://a.app.qq.com/o/simple.jsp?pkgname=com.cootek.walkietalkie',\n isZan:false,\n isAutoPlay:true,\n timeout:null,\n length:0,\n share_image: Andes.netService+'/andesres/image/andes/andes_icon.png',\n share_title:'我想玩这个,快来帮我一下!',\n share_desc:'BiBi的语音表情超好玩,快帮我解锁,来和我一起玩儿!',\n people_words:[\"手够快吧,果然是真爱!\",\"我这么善良,你知道吗?\",\"快送我一朵小红花!\",\"点赞功德无量!\",\"送人一赞,手留余香\",\"听说好人都来玩BiBi了\",\"语音表情现在很火啊,你还不知道?\"],\n word_num:7,\n url:Andes.netService+\"/andes/liker_collect.html\",\n getUid:function(){\n return GetQueryString('uid');\n },\n getZanInfo:function(){\n $.ajax({\n type:\"POST\",\n data:JSON.stringify({'uid':zan.getUid()}),\n async:false,\n dataType:\"json\",\n url:\"http://andes.cootekservice.com/andes/collectlike/liker_info\",\n success:zan.getZanInfoCallback\n });\n },\n getZanInfoCallback:function(data){\n if(data.result_code!=2000)\n return;\n var liker_list=data.result;\n zan.length=liker_list.length;\n if(liker_list.length>4) {\n ;//$('.scroll-tip').removeClass('hide');\n //$('.people-list').addClass('people-list-scroll');\n }\n // window.onscroll = function(){\n // var offset=$('.arrow').position().top-window.innerHeight;\n // if(document.body.scrollTop>=offset+40) {\n // setTimeout(function(){\n // $(\".people-detail\").removeClass('hide');\n // $('.scroll-tip').hide();\n // },300)\n // }\n // };\n //}\n $('.num-tip>span').text(liker_list.length);\n liker_list.reverse();\n 
for(index in liker_list){\n $('.people-list').append(\n \"<div class='people-detail'>\" +\n \"<div class='detail-inside'>\" +\n \"<div class='photo block fleft'><img src='\" + liker_list[index].headimgurl + \"'></div>\" +\n \"<p class='nickname'>\" + decodeUTF8(liker_list[index].nickname) + \"</p>\" +\n \"<p class='intro-word'>\"+zan.people_words[Math.floor(Math.random()*zan.word_num)]+\"</p>\" +\n \"</div>\" +\n \"</div>\");\n if(liker_list[index].openid==user_info.openid){\n zan.isZan=true;\n $('.zan-btn').text('我也要玩');\n $('.zan-btn').fastClick(zan.downloadClick);\n }\n }\n if(!zan.isZan) {\n $('.zan-btn').text('赞一下');\n $('.zan-btn').fastClick(zan.zanClick);\n }\n $('.white-bg').hide();\n $('body').css(\"height\",\"inherit\");\n $('body').css(\"overflow\",\"inherit\");\n },\n audioPlayClick:function(){\n clearTimeout(zan.timeout);\n $('body').unbind('touchstart');\n var id=$(this).data('id');\n var self=this;\n zan.isAutoPlay=false;\n if($(this).find('audio').get(0).paused) //点击后播放\n zan.audioPlay(id);\n else\n zan.audioStop();\n },\n audioPlay:function(id){\n zan.audioStop();\n $('audio').eq(id).get(0).play();\n if($('audio').eq(id).get(0).paused)\n return;\n $('.audio').find('span').text('2');\n $('.audio').find('span').removeClass('audio-play');\n $('.audio').find('span').addClass('audio-stop');\n $('.audio').find('span').eq(id).text('3');\n $('.audio').find('span').eq(id).removeClass('audio-stop');\n $('.audio').find('span').eq(id).addClass('audio-play');\n },\n audioStop:function(){\n $('audio').get(0).pause();\n $('audio').get(1).pause();\n $('audio').get(2).pause();\n $('.audio').find('span').text('2');\n $('.audio').find('span').removeClass('audio-play');\n $('.audio').find('span').addClass('audio-stop');\n },\n audioAutoPlayLoop:function(){\n zan.audioPlay(0);\n var id=0;\n $('audio').bind('ended', function () {\n zan.audioStop(id);\n if(zan.isAutoPlay){\n if(id>1)\n id=-1;\n zan.audioPlay(++id);\n }\n });\n },\n zanClick:function(){\n $.ajax({\n 
type:\"POST\",\n async:\"false\",\n data:JSON.stringify({ 'uid': zan.getUid(),\n 'openid': user_info.openid,\n 'nickname': encodeUTF8(user_info.nickname),\n 'headimgurl': user_info.headimgurl\n }),\n dataType:\"json\",\n url:\"http://andes.cootekservice.com/andes/collectlike/upload_liker_info\",\n success:zan.zanClickCallback\n });\n },\n zanClickCallback:function(data){\n if(data.result_code!=2000)\n return;\n _hmt.push(['_trackEvent', '集赞分享页面', '点赞按钮', '赞一下']);\n zan.isZan=true;\n $('.num-tip>span').text(zan.length+1);\n $('.people-list').prepend(\"<div class='people-detail'>\" +\n \"<div class='detail-inside'>\" +\n \"<div class='photo block fleft'><img src='\" + user_info.headimgurl + \"'></div>\" +\n \"<p class='nickname'>\" + decodeUTF8(user_info.nickname) + \"</p>\" +\n \"<p class='intro-word'>\"+zan.people_words[Math.floor(Math.random()*zan.word_num)]+\"</p>\" +\n \"</div>\" +\n \"</div>\");\n if($('.people-list').find('hide').size()>0)\n $('.people-list').find('.people-detail').eq(4).addClass('hide');\n $('.download-panel,.black-opacity').show();\n $('body').css('overflow','hidden');\n $('.zan-btn').text('我也要玩');\n $('.zan-btn').attr('id','zan-download');\n $('.zan-btn').fastClick(zan.downloadClick);\n },\n closeClick:function(){\n $('.download-panel,.black-opacity').hide();\n $('body').css('overflow','initial');\n },\n downloadClick:function(){\n if(getHostDevice()=='iphone')\n window.open(zan.iphone_link);\n else\n window.open(zan.android_link);\n if($(this).attr('id')=='center-download')\n _hmt.push(['_trackEvent', '集赞分享页面', '下载按钮', '弹出框下载']);\n if($(this).attr('id')=='top-download')\n _hmt.push(['_trackEvent', '集赞分享页面', '下载按钮', '顶部栏下载']);\n if($(this).attr('id')=='zan-download')\n _hmt.push(['_trackEvent', '集赞分享页面', '下载按钮', '我也要玩']);\n },\n makeWechatUrl:function(){\n var uri=zan.url+\"?uid=\"+zan.getUid()+\"&appid=\"+INFO.appId;\n var target=\"1#1#\"+uri;\n target=base64_encode(target);\n redirect_url = 
'http://touchlife.cootekservice.com/callback/wechat?target='+target;\n return 'https://open.weixin.qq.com/connect/oauth2/authorize?appid='+INFO.appId+'&redirect_uri='+redirect_url+'&response_type=code&scope=snsapi_userinfo#wechat_redirect';\n }\n};\n(function(){\n //if(WechatShare.isWeixin()&&location.href.indexOf('appid')<0){\n // location.href=zan.makeWechatUrl();\n // return;\n //}\n window.onload = function() {\n FastClick.attach(document.body);\n //if (WechatShare.isWeixin()) {\n // WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, zan.share_title, zan.share_desc, zan.makeWechatUrl(), zan.share_image);\n //}\n //zan.audioAutoPlayLoop();\n window.audio1 = document.createElement('audio');\n audio1.src = 'http://cootek-walkie-talkie.oss-cn-hangzhou.aliyuncs.com/html/music/h5_zhilinjiejie.mp3';\n audio1.play();\n\n if($('audio').get(0).paused){\n $('body').bind('touchstart',function(){\n zan.timeout=setTimeout(zan.audioAutoPlayLoop,200);\n $('body').unbind('touchstart');\n })\n }\n };\n zan.getZanInfo();\n $('.audio').fastClick(zan.audioPlayClick);\n $('.close').fastClick(zan.closeClick);\n $('.download-btn').fastClick(zan.downloadClick);\n})();"
},
{
"alpha_fraction": 0.48529550433158875,
"alphanum_fraction": 0.5137086510658264,
"avg_line_length": 43.2617073059082,
"blob_id": "dcafa728d6a27a66cd1e4518d11114aaedab6bcb",
"content_id": "c8e3901c3a48a53ba9e52f5e7aa5a92a10d176ea",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 32827,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 726,
"path": "/scripts/andes/assistant.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/7/4.\n */\nfunction remAdjust(defaultFontSize, defaultScreenHeight, defaultScreenWidth) {\n var htmlNode = document.getElementsByTagName('html')[0];\n\n function resize() {\n var screenWidth = document.body.offsetWidth;\n var screenHeight = window.screen.height;\n if(screenWidth / defaultScreenWidth * defaultFontSize>28)\n htmlNode.style.fontSize=28+'px';\n else\n htmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n }\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nremAdjust(20,568,320);\nfunction getHostDevice(){\n result = {\n android: window.navigator.userAgent.indexOf('Android') > -1 || window.navigator.userAgent.indexOf('Linux') > -1,\n iphone: window.navigator.userAgent.indexOf('iPhone') > -1\n };\n if (result.android) return 'android';\n if (result.iphone) return 'iphone';\n return \"other\";\n}\nvar assistant = {\n playUrl:'',\n bgUrl:'http://cootek-walkie-talkie.oss-cn-hangzhou.aliyuncs.com/html/music/bibihelper/start.mp3',\n timeoutArray:[],\n intervalArray:[],\n bgTimeout:null,\n messageInterval:null,\n talkInterval:null,\n sayBorderInterval:null,\n pushedInterval:null,\n connectInterval:null,\n connectSpinTimeout:null,\n connectedInterval:null,\n recordInterval:null,\n recordingInterval:null,\n coverInterval:null,\n friendTabInterval:null,\n findTabInterval:null,\n voiceFaceTabInterval:null,\n groupTabInterval:null,\n wordImgPosition:$('html').get(0).clientHeight+200,\n htmlHeight:$('html').get(0).clientHeight,\n init:function(){\n $('.que-panel>div').fastClick(this.helpClick);\n $('.back-icon').fastClick(function () {Andes.finish();});\n $('.back-btn').fastClick(this.backClick);\n if(getHostDevice()=='iphone')\n assistant.audioPlay('#bg-audio',function(){});\n },\n preventDefault:function(e){\n e.preventDefault();\n },\n clearTimer:function(){\n var i=0;\n for(i in assistant.timeoutArray)\n 
clearTimeout(assistant.timeoutArray[i]);\n for(i in assistant.intervalArray)\n clearInterval(assistant.intervalArray[i]);\n assistant.intervalArray=[];\n assistant.timeoutArray=[];\n },\n helpClick:function(){\n if(!window.navigator.onLine) {\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n return;\n }\n var id = $(this).attr('id');\n $('body').get(0).addEventListener('touchmove',assistant.preventDefault);\n if(getHostDevice()=='android') {\n assistant.clearTimer();\n switch (id) {\n case 'talk':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'Android', '如何对讲']);\n assistant.audioPlay('#talk-audio', assistant.talkAnimation);\n break;\n case 'message':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'Android', '语音留言']);\n assistant.audioPlay('#message-audio', assistant.messageAnimation)\n break;\n case 'addFriend':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'Android', '添加好友']);\n assistant.audioPlay('#addFriend-audio', assistant.friendTabAnimation)\n break;\n case 'findFriend':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'Android', '寻找好友']);\n assistant.audioPlay('#findFriend-audio', assistant.findTabAnimation)\n break;\n case 'voiceFace':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'Android', '语音表情']);\n assistant.audioPlay('#voiceFace-audio', assistant.voiceFaceTabAnimation)\n break;\n case 'group':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'Android', '关闭群聊']);\n assistant.audioPlay('#group-audio', assistant.groupTabAnimation)\n break;\n case 'suggestion':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'Android', '反馈']);\n assistant.audioPlay('#suggestion-audio', assistant.meTabAnimation)\n break;\n }\n }\n else{\n switch(id){\n case 'talk':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'iphone', '如何对讲']);\n assistant.audioPlay('#talk-audio',assistant.talkAnimation);\n break;\n case 'message':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'iphone', '语音留言']);\n assistant.audioPlay('#message-audio',assistant.messageAnimation);\n break;\n case 'addFriend':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'iphone', '添加好友']);\n 
assistant.audioPlay('#addFriend-audio',assistant.friendTabAnimationIOS);\n break;\n case 'findFriend':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'iphone', '寻找好友']);\n assistant.audioPlay('#findFriend-audio',assistant.findTabAnimationIOS);\n break;\n case 'voiceFace':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'iphone', '语音表情']);\n assistant.audioPlay('#voiceFace-audio',assistant.voiceFaceTabAnimationIOS);\n break;\n case 'group':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'iphone', '关闭群聊']);\n assistant.audioPlay('#group-audio',assistant.groupTabAnimationIOS);\n break;\n case 'suggestion':\n _hmt.push(['_trackEvent', 'BiBi小助手', 'iphone', '反馈']);\n assistant.audioPlay('#suggestion-audio',assistant.meTabAnimationIOS);\n break;\n }\n }\n },\n backClick:function(){\n $('body').get(0).removeEventListener('touchmove',assistant.preventDefault);\n assistant.clearTimer();\n assistant.fingerPointReset();\n assistant.connectSpinReset();\n assistant.duijiangOpen();\n assistant.audioStop();\n $('#centerBlack').css('height','30rem');//find-tab\n $('.word-img').css('top',assistant.wordImgPosition);\n $('.word-img').css({\n '-webkit-transform': 'none',\n '-moz-transform': 'none',\n '-ms-transform': 'none',\n '-o-transform': 'none',\n 'transform': 'none'\n });\n $('.back-btn').hide();\n $('.tab-panel>div .white-div,.tab-panel>div').hide();\n $('.black-tab').hide();\n $('#qunliao').hide();\n $('.record-tip').hide();\n $('.talk-panel>.push-circle,.talk-panel,.zhibo-img,.black-opacity,.tab-panel').hide();\n $('.tab-panel img.hide').hide();\n },\n backClickIOS:function(){\n $('body').get(0).removeEventListener('touchmove',assistant.preventDefault);\n assistant.clearTimer();\n assistant.fingerPointReset();\n assistant.connectSpinReset();\n assistant.duijiangOpen();\n assistant.audioStop();\n $('#centerBlack').css('height','30rem');//find-tab\n $('.word-img').css('top',assistant.wordImgPosition);\n $('.word-img').css({\n '-webkit-transform': 'none',\n '-moz-transform': 'none',\n '-ms-transform': 
'none',\n '-o-transform': 'none',\n 'transform': 'none'\n });\n $('.back-btn').hide();\n $('.tab-panel>div .white-div,.tab-panel>div').hide();\n $('.black-tab').hide();\n $('#qunliao').hide();\n $('.record-tip').hide();\n $('.talk-panel>.push-circle,.talk-panel,.zhibo-img,.black-opacity,.tab-panel').hide();\n $('.tab-panel img.hide').hide();\n },\n talkAnimation:function(){\n $('.push-circle-tosay .microphone-word').text('点击发起对讲');\n $('.red-point').show();\n assistant.duijiangOpen();\n $('.talk-panel-title').text('BiBi豆').attr('style','');\n $('.talk-panel,.black-opacity,.push-circle-tosay').show();\n $('.back-btn').show();\n var t=0;\n assistant.talkInterval=setInterval(function(){\n if(t==2800)assistant.fingerPointToCircle();//2s move\n if(t==6000)assistant.circlePushed();\n if(t==6200)assistant.circleConnect();\n if(t==7800)assistant.circleConnected();\n if(t==11000)assistant.circleSay();\n if(t==16200){assistant.circleConnected();assistant.fingerPointToRightSpace();}\n if(t==18200)assistant.circleListen();\n if(t==23600){assistant.circleConnected();assistant.fingerPointToCover();}\n if(t==26000)$('.zhibo-img').show();\n if(t>=35000)\n assistant.backClick();\n t+=200;\n },200);\n assistant.intervalArray.push(assistant.talkInterval);\n },\n messageAnimation:function(){\n assistant.duijiangClose();\n $('.red-point').hide();\n assistant.circleRecord();\n $('.back-btn').show();\n var t=0;\n assistant.messageInterval=setInterval(function(){\n if(t==200)assistant.fingerPointToCloseTalk();\n if(t==4000)assistant.fingerPointToCircle();//2s move\n if(t==7000)assistant.circlePushed();\n if(t==7200)assistant.circleRecording();\n if(t==10000)assistant.circleRecord();\n if(t==12000)assistant.fingerPointToHistory();\n if(t==14600)$('.record-tip').show();\n if(t>=21000)\n assistant.backClick();\n t+=200;\n },200);\n assistant.intervalArray.push(assistant.messageInterval);\n },\n friendTabAnimation:function(){\n var img=new Image();\n 
img.src='/andesres/image/andes/assistant/问题3.1.png';\n img.onload=function(){\n img.onload=null;\n $('.friend-tab').css('background-image','url('+img.src+')');\n $('.tab-panel,.friend-tab').show();\n var t=0;\n var offset=3000;\n var h1=assistant.htmlHeight*0.24-assistant.wordImgPosition;\n var h2=assistant.htmlHeight*0.55-assistant.wordImgPosition;\n assistant.friendTabInterval=setInterval(function(){\n t+=500;\n if(t==4000-offset)$('.friend-tab>div.black-tab,.back-btn,#friend-tab1').show();\n //if(t==5000-offset)$('#add-friend-img').css('top','24%');\n if(t==5000-offset)$('#add-friend-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if(t==7000-offset)$('#friend-tab2').show();\n if(t==8000-offset)$('#choose-friend-img').css({\n '-webkit-transform': 'translate3d(0,'+h2+'px,0)',\n '-moz-transform': 'translate3d(0,'+h2+'px,0)',\n '-ms-transform': 'translate3d(0,'+h2+'px,0)',\n '-o-transform': 'translate3d(0,'+h2+'px,0)',\n 'transform': 'translate3d(0,'+h2+'px,0)'\n });\n if(t>=24000)\n assistant.backClick();\n },500);\n assistant.intervalArray.push(assistant.friendTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n findTabAnimation:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/问题4.1.png';\n img.onload=function() {\n img.onload=null;\n $('.find-tab').css('background-image','url('+img.src+')');\n $('.tab-panel,.find-tab').show();\n var t = 0;\n var offset = 2000;\n var h1=assistant.htmlHeight*0.80-assistant.wordImgPosition;\n var h2=assistant.htmlHeight*0.08-assistant.wordImgPosition;\n var h3=assistant.htmlHeight*0.49-assistant.wordImgPosition;\n assistant.findTabInterval = setInterval(function () {\n t += 500;\n if (t == 3000 - 
offset)$('.find-tab>div.black-tab,.back-btn,#find-tab1').show();\n if (t == 4000 - offset)$('#find-tab-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if (t == 6000 - offset)$('#find-tab2').show();\n if (t == 7000 - offset)$('#find-friend-img').css({\n '-webkit-transform': 'translate3d(0,'+h2+'px,0)',\n '-moz-transform': 'translate3d(0,'+h2+'px,0)',\n '-ms-transform': 'translate3d(0,'+h2+'px,0)',\n '-o-transform': 'translate3d(0,'+h2+'px,0)',\n 'transform': 'translate3d(0,'+h2+'px,0)'\n });\n //if (t == 8000 - offset)$('#watch-show-img').css({\n // '-webkit-transform': 'translate3d(0,'+h3+'px,0)',\n // '-moz-transform': 'translate3d(0,'+h3+'px,0)',\n // '-ms-transform': 'translate3d(0,'+h3+'px,0)',\n // '-o-transform': 'translate3d(0,'+h3+'px,0)',\n // 'transform': 'translate3d(0,'+h3+'px,0)'\n //});\n if (t >= 12000)\n assistant.backClick();\n }, 500);\n assistant.intervalArray.push(assistant.findTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n voiceFaceTabAnimation:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/问题5.1.png';\n img.onload=function() {\n img.onload=null;\n $('.voiceFace-tab').css('background-image','url('+img.src+')');\n $('.tab-panel,.voiceFace-tab').show();\n var t = 0;\n var offset = 3000;\n var h1=assistant.htmlHeight*0.82-assistant.wordImgPosition;\n assistant.voiceFaceTabInterval = setInterval(function () {\n t += 1000;\n if (t == 4000 - offset)$('.voiceFace-tab>div.black-tab,.back-btn,#voiceFace-tab1').show();\n if (t == 5000 - offset)$('#more-face-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 
'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if (t >= 14000)\n assistant.backClick();\n }, 1000);\n assistant.intervalArray.push(assistant.voiceFaceTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n groupTabAnimation:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/问题6.1.png';\n img.onload=function() {\n img.onload = null;\n $('.group-tab').css('background-image','url('+img.src+')');\n assistant.circleConnected();\n assistant.duijiangOpen();\n $('.red-point').hide();\n $('.talk-panel,.black-opacity,.center-icon>span,.back-btn').show();\n var t = 0;\n var offset = 2000;\n var h1=assistant.htmlHeight*0.63-assistant.wordImgPosition;\n assistant.groupTabInterval = setInterval(function () {\n t += 500;\n if (t == 4000)assistant.fingerPointToCloseTalk();//2s move\n if (t == 7000)assistant.duijiangClose();\n if (t == 10000)assistant.fingerPointReset();//2s move\n if (t == 13000) {\n $('.talk-panel,.black-opacity,.back-btn').hide();\n $('.tab-panel,.group-tab').show();\n }\n if (t == 16000 - offset)$('.group-tab>div.black-tab,.group-tab .white-div,.back-btn').show();\n if (t == 16500 - offset)$('#delete-group-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if (t >= 19000)\n assistant.backClick();\n }, 500);\n assistant.intervalArray.push(assistant.groupTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n meTabAnimation:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/问题7.1.png';\n img.onload=function() {\n img.onload = null;\n $('.me-tab').css('background-image','url('+img.src+')');\n $('.tab-panel,.me-tab').show();\n var t = 0;\n var h1=assistant.htmlHeight*0.58-assistant.wordImgPosition;\n 
assistant.groupTabInterval = setInterval(function () {\n t += 1000;\n if (t == 3000 - 2000)$('.me-tab>div.black-tab,.white-div,.back-btn').show();\n if (t == 4000 - 2000)$('#suggestion-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if (t == 19000)\n assistant.backClick();\n }, 1000);\n assistant.intervalArray.push(assistant.groupTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n friendTabAnimationIOS:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/iOS/问题3.1.png';\n img.onload=function() {\n img.onload = null;\n $('.tab-panel,.friend-tab').show();\n $('.friend-tab').css('background-image', 'url(' + img.src + ')');\n $('.tab-panel,.friend-tab').show();\n var t = 0;\n var offset = 3000;\n var h1=assistant.htmlHeight*0.33-assistant.wordImgPosition;\n var h2=assistant.htmlHeight*0.68-assistant.wordImgPosition;\n assistant.friendTabInterval = setInterval(function () {\n t += 500;\n if (t == 4000 - offset)$('.friend-tab>div.black-tab,.back-btn,#friend-tab1').show();\n if(t==5000-offset)$('#add-friend-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if(t==7000-offset)$('#friend-tab2').show();\n if(t==8000-offset)$('#choose-friend-img').css({\n '-webkit-transform': 'translate3d(0,'+h2+'px,0)',\n '-moz-transform': 'translate3d(0,'+h2+'px,0)',\n '-ms-transform': 'translate3d(0,'+h2+'px,0)',\n '-o-transform': 'translate3d(0,'+h2+'px,0)',\n 'transform': 'translate3d(0,'+h2+'px,0)'\n });\n if (t >= 24000)\n assistant.backClick();\n }, 500);\n 
assistant.intervalArray.push(assistant.friendTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n findTabAnimationIOS:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/iOS/问题4.1.png';\n img.onload=function() {\n img.onload = null;\n $('.find-tab').css('background-image', 'url(' + img.src + ')');\n $('.tab-panel,.find-tab').show();\n var t = 0;\n var offset = 2000;\n var h1=assistant.htmlHeight*0.82-assistant.wordImgPosition;\n var h2=assistant.htmlHeight*0.07-assistant.wordImgPosition;\n var h3=assistant.htmlHeight*0.45-assistant.wordImgPosition;\n assistant.findTabInterval = setInterval(function () {\n t += 500;\n if (t == 3000 - offset)$('.find-tab>div.black-tab,.back-btn,#find-tab1').show();\n if (t == 4000 - offset)$('#find-tab-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if (t == 6000 - offset)$('#find-tab2').show();\n if (t == 7000 - offset)$('#find-friend-img').css({\n '-webkit-transform': 'translate3d(0,'+h2+'px,0)',\n '-moz-transform': 'translate3d(0,'+h2+'px,0)',\n '-ms-transform': 'translate3d(0,'+h2+'px,0)',\n '-o-transform': 'translate3d(0,'+h2+'px,0)',\n 'transform': 'translate3d(0,'+h2+'px,0)'\n });\n //if (t == 8000 - offset)$('#watch-show-img').css({\n // '-webkit-transform': 'translate3d(0,'+h3+'px,0)',\n // '-moz-transform': 'translate3d(0,'+h3+'px,0)',\n // '-ms-transform': 'translate3d(0,'+h3+'px,0)',\n // '-o-transform': 'translate3d(0,'+h3+'px,0)',\n // 'transform': 'translate3d(0,'+h3+'px,0)'\n //});\n if (t >= 12000)\n assistant.backClick();\n }, 500);\n assistant.intervalArray.push(assistant.findTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n voiceFaceTabAnimationIOS:function(){\n var img=new Image();\n 
img.src='/andesres/image/andes/assistant/iOS/问题5.1.png';\n img.onload=function() {\n img.onload = null;\n $('.voiceFace-tab').css('background-image', 'url(' + img.src + ')');\n $('.tab-panel,.voiceFace-tab').show();\n var t = 0;\n var offset = 3000;\n var h1=assistant.htmlHeight*0.83-assistant.wordImgPosition;\n assistant.voiceFaceTabInterval = setInterval(function () {\n t += 1000;\n if (t == 4000 - offset)$('.voiceFace-tab>div.black-tab,.back-btn,#voiceFace-tab1').show();\n if (t == 5000 - offset)$('#more-face-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if (t >= 14000)\n assistant.backClick();\n }, 1000);\n assistant.intervalArray.push(assistant.voiceFaceTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n groupTabAnimationIOS:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/iOS/问题6.1.png';\n img.onload=function() {\n img.onload = null;\n $('.group-tab').css('background-image', 'url(' + img.src + ')');\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n assistant.circleConnected();\n assistant.duijiangOpen();\n $('.red-point').hide();\n $('.talk-panel,.black-opacity,.center-icon>span,.back-btn').show();\n var t=0;\n var offset=2000;\n var h1=assistant.htmlHeight*0.69-assistant.wordImgPosition;\n assistant.groupTabInterval=setInterval(function(){\n t+=500;\n if(t==4000)assistant.fingerPointToCloseTalk();//2s move\n if(t==7000)assistant.duijiangClose();\n if(t==10000)assistant.fingerPointReset();//2s move\n if(t==13000){$('.talk-panel,.black-opacity,.back-btn').hide();$('.tab-panel,.group-tab').show();}\n if(t==16000-offset)$('.group-tab>div.black-tab,.group-tab .white-div,.back-btn').show();\n if (t == 16500 - offset)$('#delete-group-img').css({\n 
'-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if(t>=19000)\n assistant.backClick();\n },500);\n\n assistant.intervalArray.push(assistant.groupTabInterval);\n },\n meTabAnimationIOS:function(){\n var img=new Image();\n img.src='/andesres/image/andes/assistant/iOS/问题7.1.png';\n img.onload=function() {\n img.onload = null;\n $('.me-tab').css('background-image', 'url(' + img.src + ')');\n $('.tab-panel,.me-tab').show();\n var t = 0;\n var h1=assistant.htmlHeight*0.63-assistant.wordImgPosition;\n assistant.groupTabInterval = setInterval(function () {\n t += 1000;\n if (t == 3000 - 2000)$('.me-tab>div.black-tab,.white-div,.back-btn').show();\n if (t == 4000 - 2000)$('#suggestion-img').css({\n '-webkit-transform': 'translate3d(0,'+h1+'px,0)',\n '-moz-transform': 'translate3d(0,'+h1+'px,0)',\n '-ms-transform': 'translate3d(0,'+h1+'px,0)',\n '-o-transform': 'translate3d(0,'+h1+'px,0)',\n 'transform': 'translate3d(0,'+h1+'px,0)'\n });\n if (t == 19000)\n assistant.backClick();\n }, 1000);\n assistant.intervalArray.push(assistant.groupTabInterval);\n };\n img.onerror=function(){\n Andes.showToast('当前网络不可用,请联网后再重新打开');\n };\n },\n duijiangOpen:function(){\n // $('.duijiang').removeClass('duijiang-close').addClass('duijiang-open');\n $('#red').hide();\n $('#green').show();\n $('.duijiang-word').text('对讲(开)');\n },\n duijiangClose:function(){\n $('#green').hide();\n $('#red').show();\n $('.duijiang-word').text('对讲(关)');\n },\n fingerPointReset:function(){\n $('.finger-point').attr('class','finger-point circle-radius');\n },\n circlePushed:function(){\n $('.talk-panel>.push-circle,.push-circle-connect-notspin').hide();\n $('.talk-panel-title').text('BiBi豆').attr('style','');\n $('.push-circle-pushed').show();\n },\n circleConnect:function(){\n 
$('.talk-panel>.push-circle,.push-circle-connect-notspin').hide();\n $('.push-circle-connect-notspin').show();\n $('.push-circle-connect').show();\n $('.talk-panel-title').text('正在呼叫').css('color','#ff8f8a');\n assistant.connectSpinTimeout=setTimeout(function(){$('.push-circle-connect').addClass('connect-spin');},50);\n assistant.timeoutArray.push(assistant.connectSpinTimeout);\n },\n connectSpinReset:function(){\n $('.push-circle-connect').removeClass('connect-spin');\n },\n circleConnected:function(){\n $('.talk-panel>.push-circle,.push-circle-connect-notspin').hide();\n $('.push-circle-tosay>.microphone-word').text('按住说话');\n $('.talk-panel-title').text('BiBi豆').attr('style','');\n $('.push-circle-tosay').show();\n },\n circleSay:function(){\n $('.talk-panel>.push-circle,.push-circle-connect-notspin').hide();\n $('.talk-panel-title').text('我正在说').css('color','#7fd43e');\n $('.push-circle-say').show();\n var haveBorder=false;\n assistant.sayBorderInterval=setInterval(function(){\n if(haveBorder){\n $('.push-circle-say-border').removeClass('say-border-bigger');\n $('.push-circle-say-border').addClass('say-border-origin');\n }\n else{\n $('.push-circle-say-border').removeClass('say-border-origin');\n $('.push-circle-say-border').addClass('say-border-bigger');\n }\n haveBorder=!haveBorder;\n },450);\n assistant.intervalArray.push(assistant.sayBorderInterval);\n },\n circleListen:function(){\n $('.talk-panel>.push-circle,.push-circle-connect-notspin').hide();\n $('.talk-panel-title').text('对方正在说').css('color','#7fd43e');\n $('.push-circle-listen-border').show();\n var haveBorder=false;\n },\n circleRecord:function(){\n $('.talk-panel>.push-circle,.push-circle-connect-notspin').hide();\n $('.talk-panel-title').text('BiBi豆').css('color','black');\n $('.talk-panel,.black-opacity,.push-circle-record').show();\n },\n circleRecording:function(){\n $('.talk-panel>.push-circle,.push-circle-connect-notspin').hide();\n $('.push-circle-tosay>.microphone-word').text('按住说话');\n 
$('.talk-panel-title').text('正在录音,松手发送').css('color','#ff8f8a');\n $('.push-circle-recording').show();\n },\n fingerPointToCircle:function(){\n assistant.fingerPointReset();\n $('.finger-point').addClass('finger-point-toCircle');\n },\n fingerPointToCover:function(){\n assistant.fingerPointReset();\n $('.finger-point').addClass('finger-point-toCover');\n },\n fingerPointToCloseTalk:function(){\n assistant.fingerPointReset();\n $('.finger-point').addClass('finger-point-toClose');\n },\n fingerPointToRightSpace:function(){\n assistant.fingerPointReset();\n $('.finger-point').addClass('finger-point-toRightSpace');\n },\n fingerPointToHistory:function(){\n assistant.fingerPointReset();\n $('.finger-point').addClass('finger-point-toHistory');\n },\n audioPlay:function(id,callback){\n assistant.audioStop();\n var src=$(id).data('src');\n assistant.playUrl=src;\n if(Andes.handler) {\n Andes.startPlayUrl(src,false,false,'');\n callback();\n }\n else{\n $(id).attr('src',src);\n $(id).get(0).onloadedmetadata=function(){\n $(id).get(0).play();\n callback();\n }\n }\n },\n audioStop:function(){\n if(Andes.handler)\n Andes.stopPlayUrl(assistant.playUrl);\n $('audio').attr('src','');\n }\n};\n(function(){\n window.onload = function(){\n FastClick.attach(document.body);\n $('.word-img').css('top',assistant.wordImgPosition);\n if(getHostDevice()=='android') {\n assistant.bgTimeout=setTimeout(function () {\n assistant.audioPlay('#bg-audio', function(){});\n }, 300);\n assistant.timeoutArray.push(assistant.bgTimeout);\n }\n };\n window.onblur=function(){\n //assistant.backClick();\n };\n})();\nfunction init(){\n assistant.init();\n}"
},
{
"alpha_fraction": 0.45684340596199036,
"alphanum_fraction": 0.47102344036102295,
"avg_line_length": 27.473684310913086,
"blob_id": "3eb156734c4176b6d39501a7ceeb0f8c80890344",
"content_id": "e46450f86a9a8252cc77fbaffbf4a33f5869030f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1622,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 57,
"path": "/scripts/share/face_share.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "'use strict';\n\n/**\n * Created by Tony on 2016/11/8.\n */\nvar vue = new Vue({\n el: '#content',\n mounted: function mounted() {},\n data: {\n message: [{\n name: 'c',\n num: 3\n }, {\n name: 'b',\n num: 2\n }, {\n name: 'a',\n num: 4\n }],\n show: 0,\n sortItem: 'name'\n },\n methods: {\n beforeEnter: function beforeEnter(el) {\n el.style.opacity = 0;\n console.log('beforeEnter');\n },\n enter: function enter(el, done) {\n el.style.right = -el.offsetWidth + 'px';\n el.style.opacity = 1;\n el.style.top = Math.random() * document.body.clientHeight * 0.8 + 'px';\n el.style.opacity = 1;\n el.style.transition = 'transform ' + (Math.random() * 6 + 6) + 's linear';\n console.log('enter');\n },\n leave: function leave(el, done) {\n var move = document.body.clientWidth + el.offsetWidth;\n el.style.transform = 'translate3d(-' + move + 'px,0,0)';\n console.log('leave');\n },\n afterLeave: function afterLeave(el) {\n el.style.transition = null;\n console.log('afterLeave');\n },\n add: function add() {\n this.message.push(this.show++);\n },\n remove: function remove() {\n this.message.splice(this.message.length - 1, 1);\n }\n },\n computed: {\n messageSort: function messageSort() {\n return this.message.sort(myLib.getSortFun('desc', this.sortItem));\n }\n }\n});"
},
{
"alpha_fraction": 0.4510718584060669,
"alphanum_fraction": 0.4909777045249939,
"avg_line_length": 23.589155197143555,
"blob_id": "b4ffc39946d4d0126c3f093672ad8d32984bf7d5",
"content_id": "255023055869d5a3151ca9522adaabfeb599d8b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 52508,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 1918,
"path": "/scripts/share/star_chat_data.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\n/**\n * Created by Tony on 2017/1/10.\n */\nvar data = {\n star_access_default: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question1.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"你懒吗?\",\n \"id\": \"1\",\n \"show_duration\": \"510\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张艺兴\",\n \"audio_file\": \"follow1.mp3\",\n \"contact_image\": \"zhangyixing.jpg\",\n \"show_text\": \"今天的天气好晴朗\",\n \"id\": \"2\",\n \"show_duration\": \"3400\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question2.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"你是会做菜的人吗?\",\n \"id\": \"3\",\n \"show_duration\": \"1300\"\n }, {\n \"show_image\": \"emoticon_guzhang.gif\",\n \"show_type\": \"3d\",\n \"sub_title\": \"张艺兴\",\n \"audio_file\": \"follow2.aud\",\n \"contact_image\": \"zhangyixing.jpg\",\n \"show_text\": \"鼓掌\",\n \"id\": \"4\",\n \"show_duration\": \"1190\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question3.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"嘴上自称屌丝,心里根本不觉得自己是,请问这正常吗?\",\n \"id\": \"5\",\n \"show_duration\": \"3670\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"薛之谦\",\n \"audio_file\": \"follow3.mp3\",\n \"contact_image\": \"xuezhiqian.jpg\",\n \"show_text\": \"就是这么有尿性\",\n \"id\": \"6\",\n \"show_duration\": \"2170\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question4.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"根本就不想请客,却又争着买单,这正常吗?\",\n \"id\": \"7\",\n \"show_duration\": \"4030\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"薛之谦\",\n \"audio_file\": \"follow4.mp3\",\n \"contact_image\": \"xuezhiqian.jpg\",\n \"show_text\": \"简单点,说话的方式简单点\",\n \"id\": \"8\",\n 
\"show_duration\": \"6560\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question5.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"真的很想要嫁进豪门呐,这正常吗?\",\n \"id\": \"9\",\n \"show_duration\": \"2930\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小岳岳\",\n \"audio_file\": \"follow5.mp3\",\n \"contact_image\": \"xiaoyueyue.jpg\",\n \"show_text\": \" 我的天哪!\",\n \"id\": \"10\",\n \"show_duration\": \"1620\"\n }],\n star_access_default_new: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-1.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"让你最有成就感的事情是什么?\",\n \"id\": \"1\",\n \"show_duration\": \"2466\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陈汉典\",\n \"audio_file\": \"follow50-1.mp3\",\n \"contact_image\": \"chenhandian.jpg\",\n \"show_text\": \"Why~~?\",\n \"id\": \"2\",\n \"show_duration\": \"805\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-2.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"生命中的最后一天,想做一件什么事情?\",\n \"id\": \"3\",\n \"show_duration\": \"2813\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孟非\",\n \"audio_file\": \"follow50-2.mp3\",\n \"contact_image\": \"mengfei.jpg\",\n \"show_text\": \"我想谈一场,轰轰烈烈的恋爱。\",\n \"id\": \"4\",\n \"show_duration\": \"1853\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"自己是偶像派还是实力派?\",\n \"id\": \"5\",\n \"show_duration\": \"1723\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow50-3.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"真的假的~~~\",\n \"id\": \"4\",\n \"show_duration\": \"1931\"\n }, {\n \"show_image\": \"\",\n 
\"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-4.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"最近看了什么电影?\",\n \"id\": \"5\",\n \"show_duration\": \"1050\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"刘烨\",\n \"audio_file\": \"follow50-4.mp3\",\n \"contact_image\": \"liuye.jpg\",\n \"show_text\": \"他挺有文化的\",\n \"id\": \"6\",\n \"show_duration\": \"1250\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-5.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"最想合作的演员?\",\n \"id\": \"7\",\n \"show_duration\": \"1158\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"follow50-5.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"看中的是他的人品好~\",\n \"id\": \"10\",\n \"show_duration\": \"1944\"\n }],\n star_access_net_1: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question1-1.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"你的女神是谁?\",\n \"id\": \"1\",\n \"show_duration\": \"790\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"志玲姐姐\",\n \"audio_file\": \"follow1-1.mp3\",\n \"contact_image\": \"zhilinjiejie.jpg\",\n \"show_text\": \"加油加油加油\",\n \"id\": \"2\",\n \"show_duration\": \"1470\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question1-2.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"在你孤独的时候,你排解孤独的是什么方法?\",\n \"id\": \"3\",\n \"show_duration\": \"2600\"\n }, {\n \"show_image\": \"emoticon_zan.gif\",\n \"show_type\": \"3d\",\n \"sub_title\": \"Papi酱\",\n \"audio_file\": \"follow1-2.aud\",\n \"contact_image\": \"papijiang.jpg\",\n \"show_text\": \"太有才了\",\n \"id\": \"4\",\n \"show_duration\": \"2000\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": 
\"question1-3.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"面对家庭和事业必须选择的话,你会选择什么?\",\n \"id\": \"5\",\n \"show_duration\": \"5040\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张大大\",\n \"audio_file\": \"follow1-3.mp3\",\n \"contact_image\": \"zhangdada.jpg\",\n \"show_text\": \"没错\",\n \"id\": \"6\",\n \"show_duration\": \"480\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question1-4.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"第一次拉女孩的手什么时候?\",\n \"id\": \"7\",\n \"show_duration\": \"1500\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陈赫\",\n \"audio_file\": \"follow1-4.mp3\",\n \"contact_image\": \"chenhe.jpg\",\n \"show_text\": \"接下来啊,要搞大事情了😱\",\n \"id\": \"8\",\n \"show_duration\": \"1980\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question1-5.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"你失恋的时候你会痛苦嘛?\",\n \"id\": \"9\",\n \"show_duration\": \"1390\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鹿晗\",\n \"audio_file\": \"follow1-5.mp3\",\n \"contact_image\": \"luhan.jpg\",\n \"show_text\": \"Baby, don't cry.\",\n \"id\": \"10\",\n \"show_duration\": \"690\"\n }],\n star_access_net_2: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question2-1.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"你喜欢什么样的女孩儿?\",\n \"id\": \"1\",\n \"show_duration\": \"1220\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陈乔恩\",\n \"audio_file\": \"follow2-1.mp3\",\n \"contact_image\": \"chenqiaoen.jpg\",\n \"show_text\": \"你si不si傻\",\n \"id\": \"2\",\n \"show_duration\": \"1210\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question2-2.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n 
\"show_text\": \"第一次拉女孩手什么时候?\",\n \"id\": \"3\",\n \"show_duration\": \"1500\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow2-2.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"你咋那么欠打呢?\",\n \"id\": \"4\",\n \"show_duration\": \"1440\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question2-3.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"父母逼着你去相过亲吗?\",\n \"id\": \"5\",\n \"show_duration\": \"1600\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow2-3.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"恭喜你啊再见。\",\n \"id\": \"6\",\n \"show_duration\": \"1120\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question2-4.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"面对家庭和事业必须选择的话,你会选择什么?\",\n \"id\": \"5\",\n \"show_duration\": \"5040\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"Papi酱\",\n \"audio_file\": \"follow2-4.mp3\",\n \"contact_image\": \"papijiang.jpg\",\n \"show_text\": \"哈哈哈哈哈哈哈\",\n \"id\": \"8\",\n \"show_duration\": \"600\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星老师\",\n \"audio_file\": \"question2-5.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"你失恋的时候你会痛苦嘛?\",\n \"id\": \"9\",\n \"show_duration\": \"1390\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小岳岳\",\n \"audio_file\": \"follow2-5.mp3\",\n \"contact_image\": \"xiaoyueyue.jpg\",\n \"show_text\": \" 我的天哪!\",\n \"id\": \"10\",\n \"show_duration\": \"1620\"\n }],\n star_access_net_21: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question21-1.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"你是一个爱美的人吗?\",\n \"id\": \"1\",\n \"show_duration\": 
\"1877\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"follow21-1.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"姑娘们,节操!\",\n \"id\": \"2\",\n \"show_duration\": \"1037\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question21-2.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"你在男人面前不会发嗲吧?\",\n \"id\": \"3\",\n \"show_duration\": \"1632\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陈乔恩\",\n \"audio_file\": \"follow21-2.mp3\",\n \"contact_image\": \"chenqiaoen.jpg\",\n \"show_text\": \"你si不si傻\",\n \"id\": \"4\",\n \"show_duration\": \"1206\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question21-3.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"您哭过吗?\",\n \"id\": \"5\",\n \"show_duration\": \"800\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"尉迟琳嘉\",\n \"audio_file\": \"follow21-3.mp3\",\n \"contact_image\": \"yuchilinjia.jpg\",\n \"show_text\": \"怎么这么严重呢……\",\n \"id\": \"6\",\n \"show_duration\": \"1069\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question21-4.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"现在在爱情状态中么?\",\n \"id\": \"7\",\n \"show_duration\": \"1771\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"朱碧石\",\n \"audio_file\": \"follow21-4.mp3\",\n \"contact_image\": \"zhubishi.jpg\",\n \"show_text\": \"我不相信!\",\n \"id\": \"8\",\n \"show_duration\": \"777\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question21-5.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"如果找男朋友的话,需要哪一个特质?\",\n \"id\": \"9\",\n \"show_duration\": \"2623\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"黄渤\",\n \"audio_file\": 
\"follow21-5.mp3\",\n \"contact_image\": \"huangbo.jpg\",\n \"show_text\": \"不可能吧!\",\n \"id\": \"10\",\n \"show_duration\": \"796\"\n }],\n star_access_net_22: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question22-1.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"出生在哪里?\",\n \"id\": \"1\",\n \"show_duration\": \"979\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"甜馨\",\n \"audio_file\": \"follow22-1.mp3\",\n \"contact_image\": \"tianxin.jpg\",\n \"show_text\": \"别问我到哪去~\",\n \"id\": \"2\",\n \"show_duration\": \"3661\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question22-2.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"大S跟你谁漂亮?\",\n \"id\": \"3\",\n \"show_duration\": \"1111\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"甜馨\",\n \"audio_file\": \"follow22-2.mp3\",\n \"contact_image\": \"tianxin.jpg\",\n \"show_text\": \"我们白着呐~\",\n \"id\": \"4\",\n \"show_duration\": \"1883\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"马东\",\n \"audio_file\": \"question22-3.mp3\",\n \"contact_image\": \"madong.jpg\",\n \"show_text\": \"在你心里,真正的男神是什么样的?\",\n \"id\": \"5\",\n \"show_duration\": \"2189\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小岳岳\",\n \"audio_file\": \"follow22-3.mp3\",\n \"contact_image\": \"xiaoyueyue.jpg\",\n \"show_text\": \"马上就要给你生孩子啦!\",\n \"id\": \"4\",\n \"show_duration\": \"1461\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question22-4.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"一见钟情呢还是日久生情?\",\n \"id\": \"7\",\n \"show_duration\": \"1927\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"周星驰\",\n \"audio_file\": \"follow22-4.mp3\",\n \"contact_image\": \"zhouxingchi.jpg\",\n \"show_text\": 
\"我说我这个人很专一的,老婆我只要一个就够了!\",\n \"id\": \"8\",\n \"show_duration\": \"2859\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question22-5.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"你怕不怕你妈妈?\",\n \"id\": \"9\",\n \"show_duration\": \"1252\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"周星驰\",\n \"audio_file\": \"follow22-5.mp3\",\n \"contact_image\": \"zhouxingchi.jpg\",\n \"show_text\": \"做错事还顶嘴!都几十岁的人了,要不然早骂哭你了!\",\n \"id\": \"10\",\n \"show_duration\": \"5006\"\n }],\n star_access_net_23: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question23-1.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"偷拍过好看的异性,这正不正常?\",\n \"id\": \"1\",\n \"show_duration\": \"2266\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"Papi酱\",\n \"audio_file\": \"follow23-1.mp3\",\n \"contact_image\": \"papijiang.jpg\",\n \"show_text\": \"臭流氓!\",\n \"id\": \"2\",\n \"show_duration\": \"860\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question23-2.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"当有豪车路过,总想看司机长啥样子,这正常吗?\",\n \"id\": \"3\",\n \"show_duration\": \"4635\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡明\",\n \"audio_file\": \"follow23-2.mp3\",\n \"contact_image\": \"caiming.jpg\",\n \"show_text\": \"你人是微缩的,心还是猥琐的~\",\n \"id\": \"4\",\n \"show_duration\": \"2681\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question23-3.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"根本就不想请客,却又争着买单,这正常吗?\",\n \"id\": \"7\",\n \"show_duration\": \"4030\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡明\",\n \"audio_file\": \"follow23-3.mp3\",\n \"contact_image\": \"caiming.jpg\",\n \"show_text\": \"真是十年磨一贱~人!\",\n 
\"id\": \"4\",\n \"show_duration\": \"2475\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question23-4.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"羡慕别人的父母比自己的好,这正常吗?\",\n \"id\": \"7\",\n \"show_duration\": \"3646\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孙红雷\",\n \"audio_file\": \"follow23-4.mp3\",\n \"contact_image\": \"sunhonglei.jpg\",\n \"show_text\": \"你有病啊!\",\n \"id\": \"8\",\n \"show_duration\": \"965\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question23-5.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"真的很想要嫁进豪门呐,这正常吗?\",\n \"id\": \"9\",\n \"show_duration\": \"2930\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孙红雷\",\n \"audio_file\": \"follow23-5.mp3\",\n \"contact_image\": \"sunhonglei.jpg\",\n \"show_text\": \"这~就~是~命~!\",\n \"id\": \"10\",\n \"show_duration\": \"1991\"\n }],\n star_access_net_41: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question41-1.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"你爱运动吗?\",\n \"id\": \"1\",\n \"show_duration\": \"930\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow41-1.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"怎么啦我觉得很正常啊\",\n \"id\": \"2\",\n \"show_duration\": \"1920\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question41-2.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"你觉得谁的颜值在你之上?\",\n \"id\": \"3\",\n \"show_duration\": \"1790\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孙杨\",\n \"audio_file\": \"follow41-2.mp3\",\n \"contact_image\": \"sunyang.jpg\",\n \"show_text\": \"看破不说破\",\n \"id\": \"4\",\n \"show_duration\": \"1020\"\n }, {\n \"show_image\": \"\",\n 
\"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question41-3.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"必须P图才发朋友圈正常吗?\",\n \"id\": \"5\",\n \"show_duration\": \"2780\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孟非\",\n \"audio_file\": \"follow41-3.mp3\",\n \"contact_image\": \"mengfei.jpg\",\n \"show_text\": \"重新做人\",\n \"id\": \"6\",\n \"show_duration\": \"666\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question41-4.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"你其实最在意的是什么?\",\n \"id\": \"7\",\n \"show_duration\": \"1250\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow41-4.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"这又没有什么,我们又没有怎样~\",\n \"id\": \"8\",\n \"show_duration\": \"2270\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question41-5.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"伴侣不在家真的开心,正常吗?\",\n \"id\": \"9\",\n \"show_duration\": \"3160\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S和一群人\",\n \"audio_file\": \"follow41-5.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"啊哈哈哈(乌鸦飞过)\",\n \"id\": \"10\",\n \"show_duration\": \"2550\"\n }],\n star_access_net_42: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question42-1.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"最反感朋友圈别人发什么?\",\n \"id\": \"1\",\n \"show_duration\": \"2810\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡明\",\n \"audio_file\": \"follow42-1.mp3\",\n \"contact_image\": \"caiming.jpg\",\n \"show_text\": \"恶心他妈夸恶心-好恶心\",\n \"id\": \"2\",\n \"show_duration\": \"2950\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"李静\",\n \"audio_file\": 
\"question42-2.mp3\",\n \"contact_image\": \"lijing.jpg\",\n \"show_text\": \"用几个成语形容一下自己?\",\n \"id\": \"3\",\n \"show_duration\": \"1990\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢娜\",\n \"audio_file\": \"follow42-2.mp3\",\n \"contact_image\": \"xiena.jpg\",\n \"show_text\": \"终~于~来~了~\",\n \"id\": \"4\",\n \"show_duration\": \"2610\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question42-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"你小时候的兴趣是什么?\",\n \"id\": \"5\",\n \"show_duration\": \"1780\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"follow42-3.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"果然是行家\",\n \"id\": \"4\",\n \"show_duration\": \"2960\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question42-4.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"最爱喝什么?\",\n \"id\": \"7\",\n \"show_duration\": \"590\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡明\",\n \"audio_file\": \"follow42-4.mp3\",\n \"contact_image\": \"caiming.jpg\",\n \"show_text\": \"我勒个去~\",\n \"id\": \"8\",\n \"show_duration\": \"2270\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question42-5.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"你知道雷锋是谁吗?\",\n \"id\": \"9\",\n \"show_duration\": \"1350\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"刘烨\",\n \"audio_file\": \"follow42-5.mp3\",\n \"contact_image\": \"liuye.jpg\",\n \"show_text\": \"他挺有文化的\",\n \"id\": \"10\",\n \"show_duration\": \"1490\"\n }],\n star_access_net_43: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question43-1.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"你小时候的兴趣是什么?\",\n \"id\": \"1\",\n 
\"show_duration\": \"1773\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"Papi酱\",\n \"audio_file\": \"follow43-1.mp3\",\n \"contact_image\": \"papijiang.jpg\",\n \"show_text\": \"是不是有毛病啊\",\n \"id\": \"2\",\n \"show_duration\": \"1100\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question43-2.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"理想的生活状态是什么样?\",\n \"id\": \"3\",\n \"show_duration\": \"1827\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孟非\",\n \"audio_file\": \"follow43-2.mp3\",\n \"contact_image\": \"mengfei.jpg\",\n \"show_text\": \"我想谈一场轰轰烈烈的恋爱\",\n \"id\": \"4\",\n \"show_duration\": \"2130\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question43-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"对你影响最大的人是谁?\",\n \"id\": \"5\",\n \"show_duration\": \"1810\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow43-3.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"对我来说,都~是~帅~哥~\",\n \"id\": \"4\",\n \"show_duration\": \"3141\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question43-4.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"如果可以实现一个愿望,会是什么愿望?\",\n \"id\": \"7\",\n \"show_duration\": \"2246\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow43-4.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"一定会哒\",\n \"id\": \"8\",\n \"show_duration\": \"1379\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question43-5.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"赚了钱之后打算怎么花?\",\n \"id\": \"9\",\n \"show_duration\": \"1648\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": 
\"志玲姐姐\",\n \"audio_file\": \"follow43-5.aud\",\n \"contact_image\": \"zhilinjiejie.jpg\",\n \"show_text\": \"加油加油\",\n \"id\": \"10\",\n \"show_duration\": \"1100\"\n }],\n star_access_net_44: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"question44-1.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"笨人在你的标准里,怎么样可以称作笨?\",\n \"id\": \"1\",\n \"show_duration\": \"2892\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow44-1.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"你很小心眼诶~\",\n \"id\": \"2\",\n \"show_duration\": \"1744\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"question44-2.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"你是不是最近有赚钱?\",\n \"id\": \"3\",\n \"show_duration\": \"1098\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow44-2.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"你咋那么欠打呢\",\n \"id\": \"4\",\n \"show_duration\": \"1433\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"马东\",\n \"audio_file\": \"question44-3.mp3\",\n \"contact_image\": \"madong.jpg\",\n \"show_text\": \"你愿意跟王思聪换爹吗?\",\n \"id\": \"5\",\n \"show_duration\": \"2761\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"follow44-3.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"看中的是他的人品好~\",\n \"id\": \"4\",\n \"show_duration\": \"1944\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question44-4.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"嘴上自称屌丝,心里根本不觉得自己是,请问这正常吗?\",\n \"id\": \"5\",\n \"show_duration\": \"3670\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"薛之谦\",\n \"audio_file\": \"follow44-4.mp3\",\n \"contact_image\": 
\"xuezhiqian.jpg\",\n \"show_text\": \"就是这么有尿性\",\n \"id\": \"6\",\n \"show_duration\": \"2170\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question44-5.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"如果可以实现一个愿望,会是什么愿望?\",\n \"id\": \"7\",\n \"show_duration\": \"2246\"\n }, {\n \"show_image\": \"emoticon_haha.gif\",\n \"show_type\": \"3d\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"follow44-5.aud\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"哈哈\",\n \"id\": \"10\",\n \"show_duration\": \"1100\"\n }],\n star_access_net_45: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question45-1.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"出生在哪里?\",\n \"id\": \"1\",\n \"show_duration\": \"979\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小岳岳\",\n \"audio_file\": \"follow45-1.mp3\",\n \"contact_image\": \"xiaoyueyue.jpg\",\n \"show_text\": \"骗你是太监!\",\n \"id\": \"2\",\n \"show_duration\": \"753\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question45-2.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"几岁时候去求学?\",\n \"id\": \"3\",\n \"show_duration\": \"2506\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张艺兴\",\n \"audio_file\": \"follow45-2.mp3\",\n \"contact_image\": \"zhangyixing.jpg\",\n \"show_text\": \"我们很努力啊,我们努力努力再努力啊\",\n \"id\": \"4\",\n \"show_duration\": \"3462\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question45-3.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"你怕不怕你妈妈?\",\n \"id\": \"5\",\n \"show_duration\": \"1252\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张艺兴\",\n \"audio_file\": \"follow45-3.mp3\",\n \"contact_image\": \"zhangyixing.jpg\",\n \"show_text\": \"哇……忒黑了!\",\n 
\"id\": \"4\",\n \"show_duration\": \"1350\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question45-4.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"在生活中遇到其他可怕的人,要怎么发泄你的愤怒?\",\n \"id\": \"5\",\n \"show_duration\": \"3133\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"志玲姐姐\",\n \"audio_file\": \"follow45-4.mp3\",\n \"contact_image\": \"zhilinjiejie.jpg\",\n \"show_text\": \"加油加油加油~~\",\n \"id\": \"6\",\n \"show_duration\": \"1462\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"question45-5.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"你应该很喜欢现在的自己吧?\",\n \"id\": \"7\",\n \"show_duration\": \"2200\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"朱碧石\",\n \"audio_file\": \"follow45-5.mp3\",\n \"contact_image\": \"zhubishi.jpg\",\n \"show_text\": \"Hello我美不美啊~!\",\n \"id\": \"10\",\n \"show_duration\": \"1654\"\n }],\n star_access_net_46: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"question46-1.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"你是不是最近有赚钱?\",\n \"id\": \"1\",\n \"show_duration\": \"1098\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow46-1.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"反正我看谁都有鬼!\",\n \"id\": \"2\",\n \"show_duration\": \"1219\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"question46-2.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"有去法国旅游对不对?\",\n \"id\": \"3\",\n \"show_duration\": \"1523\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow46-2.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"恭喜你啊再见\",\n \"id\": \"4\",\n \"show_duration\": \"1315\"\n }, {\n 
\"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"question46-3.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"笨人在你的标准里怎么样可以称作笨?\",\n \"id\": \"5\",\n \"show_duration\": \"2892\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陈汉典\",\n \"audio_file\": \"follow46-3.mp3\",\n \"contact_image\": \"chenhandian.jpg\",\n \"show_text\": \"有我有我,今天有我!\",\n \"id\": \"4\",\n \"show_duration\": \"1966\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"question46-4.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"哪一种举动被你看在眼里,觉得不妥当吗?\",\n \"id\": \"5\",\n \"show_duration\": \"3037\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"王嘉尔\",\n \"audio_file\": \"follow46-4.mp3\",\n \"contact_image\": \"wangjiaer.jpg\",\n \"show_text\": \"可以说话慢一点吗\",\n \"id\": \"6\",\n \"show_duration\": \"1254\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"question46-5.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"我的个性会是你喜欢的样子吗?\",\n \"id\": \"7\",\n \"show_duration\": \"2183\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow46-5.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"我真的不敢相信你找我来诶~\",\n \"id\": \"10\",\n \"show_duration\": \"2010\"\n }],\n star_access_net_47: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question47-1.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"最近忙什么?\",\n \"id\": \"1\",\n \"show_duration\": \"1308\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow47-1.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"感觉身体被掏空\",\n \"id\": \"2\",\n \"show_duration\": \"1704\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": 
\"何炅\",\n \"audio_file\": \"question47-2.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"你懒吗?\",\n \"id\": \"3\",\n \"show_duration\": \"504\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"尉迟琳嘉\",\n \"audio_file\": \"follow47-2.mp3\",\n \"contact_image\": \"yuchilinjia.jpg\",\n \"show_text\": \"有没有愧疚感!\",\n \"id\": \"4\",\n \"show_duration\": \"1066\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question47-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"赚了钱之后你打算怎么花?\",\n \"id\": \"5\",\n \"show_duration\": \"1648\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"follow47-3.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"姑娘们,节操!\",\n \"id\": \"4\",\n \"show_duration\": \"1037\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question47-4.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"大家有吃过什么特别恐怖的黑暗料理吗?\",\n \"id\": \"5\",\n \"show_duration\": \"2590\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow47-4.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"恭喜你啊再见\",\n \"id\": \"6\",\n \"show_duration\": \"1315\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question47-5.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"那如果有机会可以让你测一下爸爸妈妈有没有骗你,你会用吗?\",\n \"id\": \"7\",\n \"show_duration\": \"3699\"\n }, {\n \"show_image\": \"emoticon_daku.gif\",\n \"show_type\": \"3d\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow47-5.aud\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"大哭\",\n \"id\": \"10\",\n \"show_duration\": \"1000\"\n }],\n star_access_net_48: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question48-1.mp3\",\n 
\"contact_image\": \"luyu.jpg\",\n \"show_text\": \"对你影响最大的人是谁?\",\n \"id\": \"1\",\n \"show_duration\": \"1810\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陈汉典\",\n \"audio_file\": \"follow48-1.mp3\",\n \"contact_image\": \"chenhandian.jpg\",\n \"show_text\": \"有我有我,今天有我!\",\n \"id\": \"2\",\n \"show_duration\": \"1966\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question48-2.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"理想的生活状态是什么样?\",\n \"id\": \"3\",\n \"show_duration\": \"1827\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"follow48-2.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"懂不懂~~ It's my show!\",\n \"id\": \"4\",\n \"show_duration\": \"2811\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question48-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"对十年以前的那个自己说一句话?\",\n \"id\": \"5\",\n \"show_duration\": \"2275\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张大大\",\n \"audio_file\": \"follow48-3.mp3\",\n \"contact_image\": \"zhangdada.jpg\",\n \"show_text\": \"没错!\",\n \"id\": \"4\",\n \"show_duration\": \"475\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"杨澜\",\n \"audio_file\": \"question48-4.mp3\",\n \"contact_image\": \"yanglan.jpg\",\n \"show_text\": \"15年前都在忙什么,那时候对于未来的想法是什么?\",\n \"id\": \"5\",\n \"show_duration\": \"3871\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孟非\",\n \"audio_file\": \"follow48-4.mp3\",\n \"contact_image\": \"mengfei.jpg\",\n \"show_text\": \"重新做人!\",\n \"id\": \"6\",\n \"show_duration\": \"666\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"杨澜\",\n \"audio_file\": \"question48-5.mp3\",\n \"contact_image\": \"yanglan.jpg\",\n \"show_text\": \"你有什么特别的方式表白吗?\",\n \"id\": \"7\",\n 
\"show_duration\": \"2339\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow48-5.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"太帅了\",\n \"id\": \"10\",\n \"show_duration\": \"723\"\n }],\n star_access_net_49: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question49-1.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"男人在别人面前落泪,你会觉得他?\",\n \"id\": \"1\",\n \"show_duration\": \"3494\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"follow49-1.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"怎么会呢……\",\n \"id\": \"2\",\n \"show_duration\": \"1090\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question49-2.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"收到前任的婚礼邀请,愿意参加,这正常吗?\",\n \"id\": \"3\",\n \"show_duration\": \"4227\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张大大\",\n \"audio_file\": \"follow49-2.mp3\",\n \"contact_image\": \"zhangdada.jpg\",\n \"show_text\": \"嗨咖你干嘛突然变得感伤……\",\n \"id\": \"4\",\n \"show_duration\": \"1818\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question49-3.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"如果真的可以白头偕老,你希望?\",\n \"id\": \"5\",\n \"show_duration\": \"2443\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow49-3.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"一定会哒\",\n \"id\": \"4\",\n \"show_duration\": \"1212\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question49-4.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"什么时候要Baby?\",\n \"id\": \"5\",\n \"show_duration\": \"965\"\n }, {\n \"show_image\": \"\",\n 
\"show_type\": \"\",\n \"sub_title\": \"小S\",\n \"audio_file\": \"follow49-4.mp3\",\n \"contact_image\": \"xiaos.jpg\",\n \"show_text\": \"不要一直乱插话又无聊好不好\",\n \"id\": \"6\",\n \"show_duration\": \"1723\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"何炅\",\n \"audio_file\": \"question49-5.mp3\",\n \"contact_image\": \"hejiong.jpg\",\n \"show_text\": \"你是怕死的人吗?\",\n \"id\": \"7\",\n \"show_duration\": \"908\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow49-5.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"怎么了我觉得很正常啊~\",\n \"id\": \"10\",\n \"show_duration\": \"1920\"\n }],\n star_access_net_50: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-1.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"让你最有成就感的事情是什么?\",\n \"id\": \"1\",\n \"show_duration\": \"2466\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陈汉典\",\n \"audio_file\": \"follow50-1.mp3\",\n \"contact_image\": \"chenhandian.jpg\",\n \"show_text\": \"Why~~?\",\n \"id\": \"2\",\n \"show_duration\": \"805\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-2.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"生命中的最后一天,想做一件什么事情?\",\n \"id\": \"3\",\n \"show_duration\": \"2813\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孟非\",\n \"audio_file\": \"follow50-2.mp3\",\n \"contact_image\": \"mengfei.jpg\",\n \"show_text\": \"我想谈一场,轰轰烈烈的恋爱。\",\n \"id\": \"4\",\n \"show_duration\": \"1853\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"自己是偶像派还是实力派?\",\n \"id\": \"5\",\n \"show_duration\": \"1723\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": 
\"follow50-3.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"真的假的~~~\",\n \"id\": \"4\",\n \"show_duration\": \"1931\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-4.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"最近看了什么电影?\",\n \"id\": \"5\",\n \"show_duration\": \"1050\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"刘烨\",\n \"audio_file\": \"follow50-4.mp3\",\n \"contact_image\": \"liuye.jpg\",\n \"show_text\": \"他挺有文化的\",\n \"id\": \"6\",\n \"show_duration\": \"1250\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question50-5.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"最想合作的演员?\",\n \"id\": \"7\",\n \"show_duration\": \"1158\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"follow50-5.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"看中的是他的人品好~\",\n \"id\": \"10\",\n \"show_duration\": \"1944\"\n }],\n star_access_net_51: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question51-1.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"你的偶像是谁?\",\n \"id\": \"1\",\n \"show_duration\": \"907\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"follow51-1.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": \"不喜欢的明星就取消关注!\",\n \"id\": \"2\",\n \"show_duration\": \"2500\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question51-2.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"您经历的最害怕的事情是什么?\",\n \"id\": \"3\",\n \"show_duration\": \"2359\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡康永\",\n \"audio_file\": \"follow51-2.mp3\",\n \"contact_image\": \"caikangyong.jpg\",\n \"show_text\": \"因为我手上有很多你的把柄啊~\",\n \"id\": 
\"4\",\n \"show_duration\": \"1843\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question51-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"您认为幸福是什么?\",\n \"id\": \"5\",\n \"show_duration\": \"1410\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"志玲姐姐\",\n \"audio_file\": \"follow51-3.mp3\",\n \"contact_image\": \"zhilinjiejie.jpg\",\n \"show_text\": \"希望我们的友情,可以一直维持下去~\",\n \"id\": \"4\",\n \"show_duration\": \"3450\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question51-4.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"对十年以前的那个自己说一句话?\",\n \"id\": \"7\",\n \"show_duration\": \"2275\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"周星驰\",\n \"audio_file\": \"follow51-4.mp3\",\n \"contact_image\": \"zhouxingchi.jpg\",\n \"show_text\": \"我对你的景仰,犹如滔滔江水绵延不绝~\",\n \"id\": \"8\",\n \"show_duration\": \"4296\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question51-5.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"生命中的最后一天,想做一件什么事情?\",\n \"id\": \"9\",\n \"show_duration\": \"2837\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"周星驰\",\n \"audio_file\": \"follow51-5.mp3\",\n \"contact_image\": \"zhouxingchi.jpg\",\n \"show_text\": \"别人笑我太疯癫,我笑他人看不穿\",\n \"id\": \"10\",\n \"show_duration\": \"5000\"\n }],\n star_access_net_52: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question52-1.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"除了工作以外您的爱好是什么?\",\n \"id\": \"1\",\n \"show_duration\": \"2050\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"甜馨\",\n \"audio_file\": \"follow52-1.mp3\",\n \"contact_image\": \"tianxin.jpg\",\n \"show_text\": \"我饿啦!\",\n \"id\": \"2\",\n \"show_duration\": \"1181\"\n }, {\n 
\"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question52-2.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"当你压力很大的时候,怎么样排解你的压力?\",\n \"id\": \"3\",\n \"show_duration\": \"3660\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"郭德纲\",\n \"audio_file\": \"follow52-2.mp3\",\n \"contact_image\": \"guodegang.jpg\",\n \"show_text\": \"对自己要好一些\",\n \"id\": \"4\",\n \"show_duration\": \"1789\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question52-3.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"对你影响最大的人是谁?\",\n \"id\": \"5\",\n \"show_duration\": \"1800\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"王迅\",\n \"audio_file\": \"follow52-3.mp3\",\n \"contact_image\": \"wangxun.jpg\",\n \"show_text\": \"我的妈呀\",\n \"id\": \"4\",\n \"show_duration\": \"1134\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question52-4.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"理想的生活状态是什么样?\",\n \"id\": \"7\",\n \"show_duration\": \"1827\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"王迅\",\n \"audio_file\": \"follow52-4.mp3\",\n \"contact_image\": \"wangxun.jpg\",\n \"show_text\": \"瞎说什么实话!\",\n \"id\": \"8\",\n \"show_duration\": \"875\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"鲁豫\",\n \"audio_file\": \"question52-5.mp3\",\n \"contact_image\": \"luyu.jpg\",\n \"show_text\": \"将来会让小孩儿进演艺圈吗?\",\n \"id\": \"9\",\n \"show_duration\": \"1642\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"黄渤\",\n \"audio_file\": \"follow52-5.mp3\",\n \"contact_image\": \"huangbo.jpg\",\n \"show_text\": \"好玩死啦\",\n \"id\": \"10\",\n \"show_duration\": \"930\"\n }],\n star_access_net_53: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孟非\",\n \"audio_file\": 
\"question53-1.mp3\",\n \"contact_image\": \"mengfei.jpg\",\n \"show_text\": \"你叫什么名字,来自什么地方?\",\n \"id\": \"1\",\n \"show_duration\": \"1521\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"Papi酱\",\n \"audio_file\": \"follow52-1.mp3\",\n \"contact_image\": \"papijiang.jpg\",\n \"show_text\": \"我是Papi酱~\",\n \"id\": \"2\",\n \"show_duration\": \"612\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"孟非\",\n \"audio_file\": \"question53-2.mp3\",\n \"contact_image\": \"mengfei.jpg\",\n \"show_text\": \"你是什么品种?\",\n \"id\": \"3\",\n \"show_duration\": \"852\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡明\",\n \"audio_file\": \"follow53-2.mp3\",\n \"contact_image\": \"caiming.jpg\",\n \"show_text\": \"都是千年的狐狸,你给我玩什么聊斋呀!\",\n \"id\": \"4\",\n \"show_duration\": \"3131\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"尉迟琳嘉\",\n \"audio_file\": \"question53-3.mp3\",\n \"contact_image\": \"yuchilinjia.jpg\",\n \"show_text\": \"你多大年纪了?\",\n \"id\": \"5\",\n \"show_duration\": \"840\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow53-3.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"真的假的~\",\n \"id\": \"4\",\n \"show_duration\": \"1931\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张大大\",\n \"audio_file\": \"question53-4.mp3\",\n \"contact_image\": \"zhangdada.jpg\",\n \"show_text\": \"你有看过橄榄球吗?还有芭蕾舞?\",\n \"id\": \"7\",\n \"show_duration\": \"3837\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"蔡明\",\n \"audio_file\": \"follow53-4.mp3\",\n \"contact_image\": \"caiming.jpg\",\n \"show_text\": \"我不信!\",\n \"id\": \"8\",\n \"show_duration\": \"850\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"金星\",\n \"audio_file\": \"question53-5.mp3\",\n \"contact_image\": \"jinxing.jpg\",\n \"show_text\": 
\"理性与感性,这两个东西在你身上哪个比例占得多一点点?\",\n \"id\": \"9\",\n \"show_duration\": \"3380\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"薛之谦\",\n \"audio_file\": \"follow53-5.mp3\",\n \"contact_image\": \"xuezhiqian.jpg\",\n \"show_text\": \"你开心就好啦\",\n \"id\": \"10\",\n \"show_duration\": \"855\"\n }],\n star_access_net_54: [{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"question54-1.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"现在女生长得好不好看不重要,重要的是要会P对嘛?\",\n \"id\": \"1\",\n \"show_duration\": \"3290\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"张大大\",\n \"audio_file\": \"follow54-1.mp3\",\n \"contact_image\": \"zhangdada.jpg\",\n \"show_text\": \"没错\",\n \"id\": \"2\",\n \"show_duration\": \"475\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"question54-2.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"你觉得自己的眼光准确吗?\",\n \"id\": \"3\",\n \"show_duration\": \"1790\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"follow54-2.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"啊~果然是行家!\",\n \"id\": \"4\",\n \"show_duration\": \"2122\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"李静\",\n \"audio_file\": \"question54-3.mp3\",\n \"contact_image\": \"lijing.jpg\",\n \"show_text\": \"关键词形容一下自己的性格?\",\n \"id\": \"5\",\n \"show_duration\": \"2212\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow54-3.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"你咋那么欠打呢?\",\n \"id\": \"4\",\n \"show_duration\": \"1433\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"李静\",\n \"audio_file\": \"question54-4.mp3\",\n \"contact_image\": \"lijing.jpg\",\n \"show_text\": \"经常送东西给别人是吗?\",\n \"id\": \"7\",\n \"show_duration\": \"1950\"\n }, 
{\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢依霖\",\n \"audio_file\": \"follow54-4.mp3\",\n \"contact_image\": \"xieyilin.jpg\",\n \"show_text\": \"怎么啦我觉得很正常啊~\",\n \"id\": \"8\",\n \"show_duration\": \"1920\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"陶子\",\n \"audio_file\": \"question54-5.mp3\",\n \"contact_image\": \"taozi.jpg\",\n \"show_text\": \"有没有带来什么其他的才艺?\",\n \"id\": \"9\",\n \"show_duration\": \"1629\"\n }, {\n \"show_image\": \"\",\n \"show_type\": \"\",\n \"sub_title\": \"谢娜\",\n \"audio_file\": \"follow54-5.mp3\",\n \"contact_image\": \"xiena.jpg\",\n \"show_text\": \"终于来了\",\n \"id\": \"10\",\n \"show_duration\": \"2610\"\n }]\n};"
},
{
"alpha_fraction": 0.4491291046142578,
"alphanum_fraction": 0.4597102403640747,
"avg_line_length": 36.69325256347656,
"blob_id": "9d2628844b9988a6c7b46c4d6d27414f5524564a",
"content_id": "526dcb4c6d744b1b22bfc3d1b5e59504f84a8afd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 6293,
"license_type": "no_license",
"max_line_length": 217,
"num_lines": 163,
"path": "/scripts/share/live_share.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/8/18.\n */\nvar netService='http://andes.cootekservice.com';\nVue.config.delimiters = ['<%', '%>'];\nvar vue=new Vue({\n init:function(){\n myLib.remAdjust(20, 320);\n if(myLib.getHostDevice()!='iphone')\n $(\"#browser-tip>img\").attr('src', '/andesres/image/andes/tip2.png');\n document.body.style.display='block';\n },\n el: 'body',\n data: {\n live_title:'',\n live_img:\"\",\n showBrowserTip:0,\n live_id:'',\n list:[],\n displayNum:9,\n getCommentUrl:'http://andes.cootekservice.com/live/get_comment',\n refreshTime:4,\n scrollTime:0.7,\n share_title:[['邀请你加入BiBi直播【','】,速来围观!']],\n share_desc:'我正在这儿和大家BiBi,同时支持20人的群对讲通话,想嗨你就来!',\n share_image:myLib.netService+'/andesres/image/andes/andes_share_icon.png',\n url:myLib.netService+\"/andes/live_share.html?redeem=\"+myLib.GetQueryString('redeem'),\n //comment data\n defaultComment:[\n {\n nick_name:\"小新MM\",\n text:\"哈哈哈,上墙啦\"\n },\n {\n nick_name:\"寻北Girl*\",\n text:\"试试试试\"\n },\n {\n nick_name:\"阿金\",\n text:\"开始啦开始啦开始啦!\"\n },\n {\n nick_name:\"Neeya\",\n text:\"哎哟,不错呀\"\n },\n {\n nick_name:\"MADED\",\n text:\"HOHO\"\n }\n ],\n commentTimeout:null,\n duration:0,\n timestamp:0,\n count:0,\n comment:[],\n interval:null,\n isChinese:true\n },\n methods:{\n downloadClick:function(){\n if(myLib.getHostApp()!='other')\n vue.showBrowserTip=1;\n else{\n var scheme= \"bibi://openbibi?code=\"+myLib.GetQueryString('redeem');\n if(myLib.isIOS9())\n location.href=scheme;\n else {\n var ifr = document.createElement('iframe');\n ifr.src = scheme;\n ifr.style.display = 'none';\n document.body.appendChild(ifr);\n }\n var t=setTimeout(function(){\n location.href='http://andes.cootekservice.com/andes/download_before.html?from=live_share';\n },1100);\n }\n },\n getLiveInfo:function(redeem){\n $.ajax({\n type:\"POST\",\n data:JSON.stringify({'invite_code':redeem}),\n dataType:\"json\",\n url:netService+\"/pushtalk/query_code\",\n success:function(data){\n if(data.result_code!=2000)\n return;\n 
vue.live_id=data.result.lid;\n vue.live_img=\"url('http://cootek-walkie-talkie.cdn.cootekservice.com/live/\"+vue.live_id+\".png')\";\n $.ajax({\n type:\"POST\",\n data:JSON.stringify({'live_id':vue.live_id}),\n dataType:\"json\",\n url:netService+\"/live/detail\",\n success:function(data){\n vue.live_title=data.result.title;\n //if (WechatShare.isWeixin())\n // WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, vue.share_title[0][0]+data.result.title+vue.share_title[0][1], vue.share_desc, vue.url, vue.share_image);\n }\n });\n vue.getComment(vue.live_id);\n }\n })\n },\n getComment:function(live_id){\n $.ajax({\n url:netService+'/live/get_comment',\n dataType:\"json\",\n type:\"POST\",\n data:JSON.stringify({live_id:79,ts:vue.timestamp}),\n success:function(data){\n vue.comment=data.result.comment;\n vue.count=0;\n if(vue.isChinese&&vue.comment.length<=0) {\n vue.comment = new Object(myLib.deepClone(vue.defaultComment));\n }\n if(vue.comment.length==0||vue.comment[vue.comment.length-1].ts<=vue.timestamp) {\n vue.commentTimeout = setTimeout(function () {\n vue.getComment(live_id);\n }, vue.refreshTime * 1000);\n return;\n }\n for(var i in vue.comment){\n if(vue.comment[i].ts>vue.timestamp){\n vue.timestamp=vue.comment[vue.comment.length-1].ts;\n vue.comment.splice(0,i);\n break;\n }\n }\n vue.scrollComment();\n vue.interval=setInterval(vue.scrollComment,vue.scrollTime*1000);\n }\n });\n },\n scrollComment:function(){\n if(vue.count>=vue.comment.length) {\n vue.isChinese=false;\n clearInterval(vue.interval);\n vue.commentTimeout=setTimeout(function(){\n vue.getComment(vue.live_id);\n },vue.refreshTime*1000);\n return;\n }\n if(!vue.isChinese) {\n if (window.atob)\n vue.comment[vue.count].text = myLib.base64Decode(vue.comment[vue.count].text);\n else\n vue.comment[vue.count].text = myLib.utf8to16(myLib.base64_decode(vue.comment[vue.count].text));\n }\n if(vue.list.length>=vue.displayNum)\n vue.list.$remove(vue.list[0]);\n 
vue.list.push(vue.comment[vue.count]);\n vue.count++;\n vue.duration+=vue.scrollTime;\n }\n }\n});\nwindow.onload = function() {\n FastClick.attach(document.body);\n document.body.addEventListener('touchmove',function(e){\n e.preventDefault();\n });\n};\nvue.getLiveInfo(myLib.GetQueryString('redeem'));"
},
{
"alpha_fraction": 0.47054409980773926,
"alphanum_fraction": 0.4938086271286011,
"avg_line_length": 32.32500076293945,
"blob_id": "3d63a439d8d85a640bfb75ae76813ecb5e425c02",
"content_id": "a962462c671a8c5014610b7e432cb2834a2094fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2727,
"license_type": "no_license",
"max_line_length": 172,
"num_lines": 80,
"path": "/scripts/live/create_vote.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/9/13.\n */\nAndes.getToken=function(){\n return \"a93418b5-2ce2-4c99-9a39-47c3601a7d7b\";\n};\nAndes.netService='http://183.136.223.43:30007';\nmyLib.remAdjust(20, 320);\nVue.config.delimiters = ['<%', '%>'];\nvar vue=new Vue({\n init:function(){\n document.getElementById('content').style.display='block';\n },\n el: 'body',\n data: {\n vote_data:{\n live_id:myLib.GetQueryString('live_id'),\n title:'',\n candidate:[{name:''},{name:''}],\n max_vote:1\n },\n isSingle:true,\n submitVoteUrl:Andes.netService+\"/vote/new\"\n },\n methods: {\n canSubmit:function(){\n return !this.arrayHasEmpty(this.vote_data.candidate)&&this.vote_data.title&&this.vote_data.candidate.length>=this.vote_data.max_vote&&this.vote_data.max_vote>0;\n },\n submit:function(){\n if(!this.canSubmit())\n return;\n vue.vote_data.max_vote=parseInt(vue.vote_data.max_vote);\n $.ajax({//TODO\n type:\"POST\",\n url:vue.submitVoteUrl+'?_token='+Andes.getToken(),\n dataType:\"json\",\n data:JSON.stringify(vue.vote_data),\n success:function(data){\n if(data.result_code!=2000) {\n Andes.showToast('创建失败!');\n return;\n }\n Andes.showToast('创建成功');\n Andes.finish();\n }\n });\n },\n addChoice:function(){\n if(!this.vote_data.candidate[this.vote_data.candidate.length-1].name)\n return;\n this.vote_data.candidate.push({name:''});\n },\n deleteChoice:function(item){\n if(this.vote_data.candidate.length>2)\n this.vote_data.candidate.$remove(item);\n },\n chooseClick:function(choose_single){\n this.isSingle=choose_single;\n if(this.isSingle)\n this.vote_data.max_vote=1;\n },\n arrayHasEmpty:function(arr){\n for(var i in arr){\n if(arr[i].name=='')\n return true;\n }\n return false;\n },\n restrictChar:function(item,i){\n while(this.len(item.name)>12){\n this.vote_data.candidate.$set(i,{name:item.name.substr(0,item.name.length-1)});\n item=this.vote_data.candidate[i];\n }\n },\n len:function(s){//获取字符串的字节长度\n s=String(s);\n return s.length+(s.match(/[^\\x00-\\xff]/g) 
||\"\").length;//加上匹配到的全角字符长度\n }\n }\n});"
},
{
"alpha_fraction": 0.5109127163887024,
"alphanum_fraction": 0.5249432921409607,
"avg_line_length": 33.09178924560547,
"blob_id": "1ea43a64d18336e867ac67385bd087c69e03ed62",
"content_id": "d0982cb5ec4aa6faa4964545cd4fd196f6c13306",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 7166,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 207,
"path": "/scripts/andes/soundTransform.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/7/28.\n */\nfunction remAdjust(defaultFontSize, defaultScreenHeight, defaultScreenWidth) {\n var htmlNode = document.getElementsByTagName('html')[0];\n\n function resize() {\n var screenWidth = document.body.offsetWidth;\n var screenHeight = window.screen.height;\n if(screenWidth / defaultScreenWidth * defaultFontSize>28)\n htmlNode.style.fontSize=28+'px';\n else\n htmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n }\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nAndes.getToken = function () {\n return \"a2d384bc-107c-46fb-bc24-bdbb2ef31239\";//正式服\n};\nremAdjust(20,568,320);\nvar st={\n oSoundAud:'',\n currentIndex:6,\n soundDuration:15,\n soundArray:['','','','','','',''],\n currentSound:'origin_audio',\n processDom:$('.process-circle'),\n processInterval:null,\n rotateInterval:null,\n isPlaying:false,\n init:function(){\n $('.type').fastClick(this.audioClick);\n $('.btn-reRecord').fastClick(this.reRecordClick);\n $('.btn-save').fastClick(this.saveClick);\n $('.btn-reRecord').on('touchstart',function(){$(this).addClass('grey-bg');});\n $('.btn-reRecord').on('touchend',function(){$(this).removeClass('grey-bg');});\n $('.btn-save').on('touchstart',function(){$(this).addClass('grey-bg');});\n $('.btn-save').on('touchend',function(){$(this).removeClass('grey-bg');});\n if(is_ios)\n Andes.getSourceAud(st.setOAud);\n else\n Andes.getSourceAud('st.setOAud');\n },\n setOAud:function(data){\n st.oSoundAud=data;\n st.currentIndex=6;\n st.soundArray=['','','','','','',''];\n st.soundArray[st.currentIndex]=st.oSoundAud;\n st.audioClick('origin_audio');\n },\n audioClick:function(audioId){\n clearInterval(st.processInterval);\n if(st.currentSound)\n $('#'+st.currentSound+' .time').hide();\n //判断是否需要停止播放\n if(st.currentSound==($(this).attr('id')||audioId)&&st.isPlaying) {\n st.audioSelected();\n }\n 
else{//播放新音频或重新播放当前音频\n st.currentSound= $(this).attr('id')||audioId;\n Andes.stopPlayAud();\n //获取变声音频\n if(st.currentSound!='origin_audio'){\n if(!st.soundArray[$(this).data('index')]||$(this).data('index')=='3') {\n st.audioLoading($(this));\n st.getAudioTransformData($(this).data('index'));\n }\n else{\n st.currentIndex=$(this).data('index');\n st.audioReset();\n st.audioPlay();\n }\n return;\n }\n st.currentIndex=6;\n st.audioReset();\n st.audioPlay();\n }\n },\n reRecordClick:function(){\n $(this).removeClass('grey-bg');\n Andes.recordRetry();\n },\n saveClick:function(){\n $(this).removeClass('grey-bg');\n Andes.recordSave(st.soundArray[st.currentIndex]);\n },\n audioLoading:function(dom){\n $('.prevent-touch,.prevent-click').show();\n dom.find('.loading').show();\n var deg=0;\n st.rotateInterval=setInterval(function(){\n deg+=10;\n dom.find('.loading').css({\n '-webkit-transform':'rotate('+deg+'deg)',\n '-moz-transform':'rotate('+deg+'deg)',\n '-ms-transform':'rotate('+deg+'deg)',\n '-o-transform':'rotate('+deg+'deg)',\n 'transform':'rotate('+deg+'deg)'\n });\n },50);\n },\n audioSelected:function(){\n Andes.stopPlayAud();\n st.processCircle(1000);\n st.isPlaying=false;\n },\n audioReset:function(){\n clearInterval(st.rotateInterval);\n $('.loading').attr('style','').hide();\n $('.prevent-click,.prevent-touch').hide();\n st.processDom.hide();\n $('.time').text('00:00');\n st.processDom.find('.left,.right').attr('style','');\n },\n audioPlayCallback:function(data){\n st.soundDuration=parseInt(data);\n st.isPlaying=true;\n $('#'+st.currentSound+' .type-img').prepend(st.processDom);\n st.processCircle(0);\n st.processDom.show();\n $('.prevent-touch').hide();\n $('#'+st.currentSound).find('.time').show();\n var t= 0,per= 0,interval_time= 10;\n st.processInterval=setInterval(function(){\n t++;\n //走秒\n if(t%100==0) {\n var sec = t/100;\n var duration = sec < 10 ? 
'0' + sec : sec;\n $('#' + st.currentSound + ' .time').text('00:' + duration);\n }\n st.processCircle(10/st.soundDuration*t);\n //播放结束\n if(t/100>=st.soundDuration) {\n setTimeout(function(){\n st.audioClick(st.currentSound);\n },100);\n clearInterval(st.processInterval);\n }\n },interval_time);\n },\n audioPlay:function(){\n if(is_ios)\n Andes.playAud(st.soundArray[st.currentIndex],st.audioPlayCallback);\n else\n Andes.playAud(st.soundArray[st.currentIndex],'st.audioPlayCallback');\n },\n getAudioTransformData:function(index){\n $('.btn-save').fastClick(function(){});\n $.ajax({\n type:'POST',\n url:'http://andes.cootekservice.com'+'/pushtalk/voice_change?_token='+Andes.getToken()+ \"&_ts=\" + Date.parse(new Date()) / 1000,\n dataType:'json',\n data:JSON.stringify({\n aud_base64:st.oSoundAud,\n type:index\n }),\n success:function(data){\n st.audioReset();\n $('.btn-save').fastClick(st.saveClick);\n if (data.result_code != 2000){\n st.currentSound='origin_sound';\n st.currentIndex=6;\n Andes.showToast('变声失败');\n return;\n }\n st.soundArray[index]=data.result.aud_base64;\n st.currentIndex=index;\n st.audioPlay();\n },\n error:function(){\n st.audioReset();\n Andes.showToast('网络异常');\n }\n });\n },\n processCircle:function(value){\n radialIndicator.value(value);\n }\n\n};\nfunction init(){\n $('body').on('touchmove',function(e){e.preventDefault();});\n window.onload = function(){\n FastClick.attach(document.body);\n st.init();\n }\n}\n//圆环进度条初始化\nvar barColor=is_ios?'#4ed198':'#4ed198';\nvar radialIndicator = radialIndicator('#indicatorContainer', {\n barColor: barColor,\n barBgColor: 'white',\n barWidth: 10,\n initValue: 0,\n displayNumber: false,\n percentage: false,\n frameNum:500,\n minValue:0,\n maxValue:1000\n});\nradialIndicator.option('barWidth', 3);\n$('#indicatorContainer>canvas').css('width', '2.9rem');"
},
{
"alpha_fraction": 0.4407890737056732,
"alphanum_fraction": 0.46182727813720703,
"avg_line_length": 44.10483932495117,
"blob_id": "c5972e1602b527b3d1023f4d1a200b10dcb69447",
"content_id": "8daff767da0ec50eac02586c5cbf47eda88cf7c5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 17249,
"license_type": "no_license",
"max_line_length": 301,
"num_lines": 372,
"path": "/es6/share/new_share.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/5/16.\n */\nfunction remAdjust(defaultFontSize, defaultScreenHeight, defaultScreenWidth) {\n var htmlNode = document.getElementsByTagName('html')[0];\n\n function resize() {\n var screenWidth = document.body.offsetWidth;\n var screenHeight = document.body.offsetHeight;\n if(screenWidth/screenHeight<0.6)\n htmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n else\n htmlNode.style.fontSize = screenHeight / defaultScreenHeight * defaultFontSize*1.1 + 'px';\n }\n\n document.addEventListener('DOMContentLoaded', function () {\n resize();\n });\n window.addEventListener('resize', resize);\n}\nremAdjust(20, 667,375);\nvar netService='http://andes.cootekservice.com';\nvar android_download = 'https://itunes.apple.com/app/apple-store/id1090811540?pt=618432&ct=sharepage&mt=8';\nvar ios_download = myLib.GetQueryString('bundleid') && myLib.GetQueryString('bundleid').indexOf('international') > -1 ? 'https://itunes.apple.com/us/app/bibi-international/id1204152982?l=zh&ls=1&mt=8' : 'https://itunes.apple.com/app/apple-store/id1090811540?pt=618432&ct=sharepage&mt=8';\nvar share=new Vue({\n delimiters: ['<%', '%>'],\n el: '#content',\n mounted(){\n //if(WechatShare.isWeixin()&&location.href.indexOf('appid')<0) {\n // location.href = this.makeWechatUrl();\n // return;\n //}\n if(typeof user_info!='undefined')\n this.username=user_info.nickname;\n $(\"body\").get(0).addEventListener('touchmove',function(event){event.preventDefault()});\n if (this.redeem == 'bibiangel') {\n this.invite_info = {\n type: 'user',\n user: {\n name: 'BiBi小天使',\n head_image_url: '/andesres/image/andes/share/bibi_angel.jpg'\n }\n };\n var current_num_f = parseInt(Math.random() * this.test_num_f); //当前好友分享文案编号\n if (WechatShare.isWeixin())\n WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, this.share_title_f[current_num_f], this.share_desc_f[current_num_f], this.makeUrl(current_num_f, 'f'), 
this.share_image);\n this.jump_text_left_list=this.jump_text_left_list.reverse();\n this.animationDisplay();\n } else this.getInviteInfo();\n $('#content').show();\n this.jump_text_left_list=this.jump_text_left_list.reverse();\n },\n data: {\n username:'',\n invite_info:{\n type:'',\n user:{\n name:'',\n head_image_url:''\n }\n },\n group_info:{\n group_name:'',\n members:[]\n },\n jump_text_list:[],\n jump_text_left_list:[\n {\n gif:'/andesres/image/andes/share/jump_text/没事就要瞎BiBi.gif',\n head_img:'http://cootek-walkie-talkie.cdn.cootekservice.com/head/6045112588781459405.png',\n height:'3rem',\n style:{\n top : Math.random()*30 + 'px',\n opacity:0,\n width:'10.5rem'\n }\n },\n {\n gif:'/andesres/image/andes/share/jump_text/听说这里有3D表情.gif',\n head_img:'',\n height:'7rem',\n style:{\n top : Math.random()*30+140 + 'px',\n opacity:0,\n width:'14rem'\n }\n },\n {\n gif:'/andesres/image/andes/share/jump_text/臣妾等你很久了.gif',\n head_img:'http://cootek-walkie-talkie.cdn.cootekservice.com/head/6045112588781459405.png',\n height:'6rem',\n style:{\n top : Math.random()*30+80 + 'px',\n opacity:0,\n width:'15rem'\n }\n },\n {\n gif:'/andesres/image/andes/share/jump_text/厉害了我的弹跳文.gif',\n head_img:'',\n height:'4rem',\n style:{\n top : Math.random()*30+180 + 'px',\n opacity:0,\n width:'13.5rem'\n }\n },\n {\n gif:'/andesres/image/andes/share/jump_text/人间精品.gif',\n head_img:'http://cootek-walkie-talkie.cdn.cootekservice.com/head/6045112588781459405.png',\n height:'4rem',\n style:{\n top : Math.random()*30+60 + 'px',\n opacity:0,\n width:'15rem'\n }\n },\n {\n gif:'/andesres/image/andes/share/jump_text/撩撩再聊聊.gif',\n head_img:'',\n height:'3rem',\n style:{\n top : Math.random()*30+160 + 'px',\n opacity:0,\n width:'14rem'\n }\n },\n {\n gif:'/andesres/image/andes/share/jump_text/我爱学习求满分.gif',\n head_img:'',\n height:'5.8rem',\n style:{\n top : Math.random()*30+50 + 'px',\n opacity:0,\n width:'16rem'\n }\n }\n ],\n 
gif:['/andesres/image/andes/share/jump_text/么么哒.gif','/andesres/image/andes/share/jump_text/胸大.gif','/andesres/image/andes/share/jump_text/吻我.gif'],\n redeem: myLib.GetQueryString('redeem')||myLib.GetQueryString('code'),\n is_ios:myLib.getHostDevice()=='iphone',\n is_kiw: window.navigator.userAgent.indexOf('kiw') > -1 || window.navigator.userAgent.indexOf('HUAWEIFRD') > -1,\n url: 'http://andes.cootekservice.com' + '/andes/oauth_share.html',\n state:{\n isInit:0,\n showBrowserTip:0,\n showInviteDiv:0,\n isShowDownload:0,\n isLongTouchCode:0,\n showGif:[0,0,0]\n },\n touch_start_time:null,\n test_num_f: 2,//好友分享文案数量\n test_num_g: 1,//群分享文案数量\n share_title_f: ['BiBi好友申请', '我要开始BiBi了,你不看一下吗?'],\n share_desc_f: ['有人加我嘛?求撩~没事就要瞎BiBi,等你来玩不一样的花式聊天~', '邀请你成为我的BiBi好友,体验纯净高效的新沟通方式!'],\n share_title_g: [['邀请你加入BiBi群【', '】,速来围观!']],\n share_desc_g: ['我正在这儿和大家BiBi,同时支持20人的群对讲通话,想嗨你就来!'],\n share_image: netService + '/andesres/image/andes/andes_share_icon.jpg'\n },\n methods:{\n getInviteInfo(){\n $.ajax({\n type:\"POST\",\n data:JSON.stringify({invite_code:this.redeem}),\n dataType:\"json\",\n url:netService+\"/pushtalk/query_code\",\n success:(data) => {\n if(data.result_code!=2000) {\n this.invite_info.type='expire';\n return;\n }\n this.invite_info=data.result;\n let img_url = this.invite_info.user.head_image_url;\n if (img_url.indexOf('local://') > -1)\n img_url='/andesres/image/andes/' + img_url.substr(8) + '.png';\n else if(img_url.indexOf('netfile')>-1){\n img_url=img_url.substr(8);\n }\n this.invite_info.user.head_image_url=img_url;\n var current_num_f=parseInt(Math.random()*this.test_num_f);//当前好友分享文案编号\n var current_num_g=parseInt(Math.random()*this.test_num_g);//当前群分享文案编号\n if(this.invite_info.type=='user'){\n this.animationDisplay();\n if (WechatShare.isWeixin())\n WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, this.share_title_f[current_num_f], this.share_desc_f[current_num_f], this.makeUrl(current_num_f,'f'), this.share_image);\n }\n 
else if(this.invite_info.type=='group'){\n $.ajax({\n type:\"POST\",\n data:JSON.stringify({group_id:this.invite_info.gid}),\n dataType:\"json\",\n url:netService+\"/pushtalk/query_group\",\n success:(data) => {\n if(data.result_code!=2000) {\n this.invite_info.invite_type='expire';\n }\n this.group_info=data.result;\n this.group_info.head_image=new Array(3);\n let obj={};\n let isSplice=0\n for(let i=0;i<this.group_info.members.length;i++){\n let img_url=this.group_info.members[i].head_image_url;\n if(img_url.indexOf('netfile')>-1)\n img_url=img_url.substr(8);\n if(img_url.indexOf('local')>-1)\n img_url = '/andesres/image/andes/share/' + img_url.substr(8) + '.png';\n this.group_info.members[i].head_image_url=img_url;\n console.log(this.group_info.members[i].head_image_url+' '+i)\n if(this.group_info.members[i].user_id==this.invite_info.user.user_id&&!isSplice) {\n obj=new Object(myLib.deepClone(this.group_info.members[i]));\n this.group_info.members.splice(i,1);\n this.group_info.members.push(obj);\n isSplice=1;\n i--;\n }\n }\n if(!isSplice)\n this.group_info.members.push(this.invite_info.user);\n this.group_info.group_name=this.group_info.group_name?':'+this.group_info.group_name:'\"群聊('+this.group_info.members.length+'人)\"';\n if (WechatShare.isWeixin())\n WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, this.share_title_g[current_num_g][0]+this.group_info.group_name+this.share_title_g[current_num_g][1], this.share_desc_g[current_num_g], this.makeUrl(current_num_g,'g'), this.share_image);\n this.animationDisplay();\n }\n });\n }\n }\n });\n },\n joinClick(isOpen){\n if (myLib.getHostApp() !='other') {\n //若寄主应用为微信、微博、qq空间浏览器\n this.state.showBrowserTip=1;\n return;\n }\n if (this.redeem == 'bibiangel'||this.invite_info.type=='expire') {\n if (this.is_ios)\n location.href = ios_download;\n else\n location.href = android_download;\n return;\n }\n var scheme= myLib.GetQueryString('bundleid') && 
myLib.GetQueryString('bundleid').indexOf('international') > -1 ? \"bibii://openbibi?code=\"+this.redeem : \"bibi://openbibi?code=\"+this.redeem;\n if(myLib.isIOS10())\n location.href=scheme;\n else if(this.is_ios&&isOpen)\n location.href=scheme;\n else if(!this.is_ios){\n var ifr = document.createElement('iframe');\n ifr.src = scheme;\n ifr.style.display = 'none';\n document.body.appendChild(ifr);\n }\n this.state.showInviteDiv=1;\n },\n closePanelClick(){\n this.state.showInvitePanel=0;\n this.state.isShowDownload=1;\n },\n downloadClick(){\n if(this.is_ios)\n location.href=ios_download;\n else\n location.href=android_download;\n },\n codeTouchStart(){\n setTimeout(() => this.state.isLongTouchCode=1,2500);\n },\n codeTouchEnd(){\n setTimeout(() => this.state.isLongTouchCode=!this.state.isLongTouchCode?(Date.now()-this.touch_start_time)>400:1,1000);\n },\n makeUrl(num,type){\n var uri,redeem=this.redeem;\n if(type=='f')\n uri=this.url+\"?redeem=\"+redeem+\"&test_f=\"+num;\n else if(type=='g')\n uri=this.url+\"?redeem=\"+redeem+\"&test_g=\"+num;\n else if(type=='l')\n uri=this.url+\"?redeem=\"+redeem+\"&test_l=\"+num;\n else\n uri=this.url+\"?redeem=\"+redeem;\n if(myLib.GetQueryString('bundleid'))\n uri += '&bundleid=' + myLib.GetQueryString('bundleid');\n return uri;\n },\n makeWechatUrl:function(num,type){\n var redeem=this.redeem;\n var uri;\n if(type=='f')\n uri=this.url+\"?redeem=\"+redeem+\"&test_f=\"+num+\"&appid=\"+INFO.appId;\n else if(type=='g')\n uri=this.url+\"?redeem=\"+redeem+\"&test_g=\"+num+\"&appid=\"+INFO.appId;\n else if(type=='l')\n uri=this.url+\"?redeem=\"+redeem+\"&test_l=\"+num+\"&appid=\"+INFO.appId;\n else\n uri=this.url+\"?redeem=\"+redeem+\"&appid=\"+INFO.appId;\n var target=\"1#1#\"+uri;\n target=myLib.base64_encode(target);\n var redirect_url = 'http://touchlife.cootekservice.com/callback/wechat?target='+target;\n return 
'https://open.weixin.qq.com/connect/oauth2/authorize?appid='+INFO.appId+'&redirect_uri='+redirect_url+'&response_type=code&scope=snsapi_userinfo#wechat_redirect';\n },\n leave(el, done) {\n let move=document.body.clientWidth+el.offsetWidth+20;\n setTimeout(()=>el.remove(),10000);\n el.style.transition=`transform ${Math.random()*4+7}s linear,opacity 1.5s ease-out`;\n el.style.webkitTransition=`-webkit-transform ${Math.random()*4+7}s linear,opacity 1.5s ease-out`;\n el.style.opacity='1';\n el.style.transform=`translate3d(-${move}px,0,0)`;\n el.style.webkitTransform=`translate3d(-${move}px,0,0)`;\n },\n afterLeave(el) {\n el.style.transition=null;\n },\n animationDisplay(is_repeat){\n $('.jump-text-item').children().remove();\n this.jump_text_list = new Object(myLib.deepClone(this.jump_text_left_list));\n let aud1 = new Audio(), aud2 = new Audio();\n aud1.src = '/andesres/image/andes/share/jump_text/么么哒.mp3';\n aud2.src = '/andesres/image/andes/share/jump_text/不要这么害羞嘛.mp3';\n let t=-1500;\n for(let i = 0;i < 10;i++) {\n if(i==0&&!is_repeat) {\n this.state.showGif = [0, 0, 1];\n this.gif[2] = this.gif[2];\n }\n if(i<3)\n setTimeout(()=>{\n if(i==1)\n this.state.showGif=[0,0,0];\n this.jump_text_list.pop()\n }, t+=2500);\n else if(i==3) {\n setTimeout(()=> {\n this.state.showGif = [1, 0, 0];\n this.gif[0] = this.gif[0];\n if(!is_repeat)\n aud1.play();\n }, t += 5000);\n }\n else if(i<6)\n setTimeout(()=>{\n if(i==5)\n this.state.showGif=[0,0,0];\n this.jump_text_list.pop()\n }, t+=2500);\n else if(i==6)\n setTimeout(()=>{\n this.state.showGif=[0,1,0];\n this.gif[1] = this.gif[1];\n if (!is_repeat)\n aud2.play();\n }, t+=5000);\n else if(i<9)\n setTimeout(()=>{\n if(i==8)\n this.state.showGif=[0,0,0];\n this.jump_text_list.pop()\n }, t+=2000);\n else {\n setTimeout(()=>{\n this.state.showGif = [0, 0, 1];\n this.gif[2] = this.gif[2];\n },t+=5000);\n setTimeout(()=>this.animationDisplay(1),t+=6000);\n }\n }\n }\n }\n});\nwindow.onload=function(){\n 
FastClick.attach(document.body);\n};\n"
},
{
"alpha_fraction": 0.5020400285720825,
"alphanum_fraction": 0.514319658279419,
"avg_line_length": 32.84048080444336,
"blob_id": "abbf12fc735e73534cc41b677330233af7f1369d",
"content_id": "9ff8b7c3381d105b15b8b9aee97e64d34a3d6efa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 26367,
"license_type": "no_license",
"max_line_length": 627,
"num_lines": 746,
"path": "/es6/tool/face_admin.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "/**\n * Created by Tony on 2016/12/26.\n */\nconst netService = \"http://183.136.223.43:30007\";\nconst apiUrl = {\n getFaceTopic: '/tool/voice/subject/list',\n getFaceTopicName: '/tool/voice/subject/name/list',\n modifyFaceTopic: '/tool/voice/subject/update',\n deleteFaceTopic: '/tool/voice/subject/del',\n createFaceTopic: '/tool/voice/subject/new',\n changeTopicStatus: '/tool/voice/subject/status',\n changeTopicRecommend: '/tool/voice/subject/recommended/status',\n topTopic: '/tool/voice/subject/stick',\n getSubFace: '/tool/voice/subject/sub/expression/list',\n modifySubFace: '/tool/voice/subject/sub/expression/update',\n createSubFace: '/tool/voice/subject/sub/expression/new',\n deleteSubFace: '/tool/voice/subject/sub/expression/del',\n changeSubFaceStatus: '/tool/voice/subject/sub/expression/status',\n uploadImage: '/tool/voice/expression/pic/upload',\n uploadAudio: '/tool/voice/expression/sound/upload',\n uploadZip: '/tool/voice/expression/zip/upload',\n testDeploy: '/tool/voice/expression/test/deploy',\n clickFormalDeploy: '/tool/voice/expression/last/version',\n formalDeploy: '/tool/voice/expression/official/deploy',\n getLog: '/tool/voice/subject/history'\n};\nDate.prototype.Format = function (fmt) { //author: meizz\n var o = {\n \"M+\": this.getMonth() + 1, //月份\n \"d+\": this.getDate(), //日\n \"h+\": this.getHours(), //小时\n \"m+\": this.getMinutes(), //分\n \"s+\": this.getSeconds(), //秒\n \"q+\": Math.floor((this.getMonth() + 3) / 3), //季度\n \"S\": this.getMilliseconds() //毫秒\n };\n if (/(y+)/.test(fmt)) fmt = fmt.replace(RegExp.$1, (this.getFullYear() + \"\").substr(4 - RegExp.$1.length));\n for (var k in o)\n if (new RegExp(\"(\" + k + \")\").test(fmt)) fmt = fmt.replace(RegExp.$1, (RegExp.$1.length == 1) ? 
(o[k]) : ((\"00\" + o[k]).substr((\"\" + o[k]).length)));\n return fmt;\n};\nvar page = {\n face_topic: 'faceTopic',\n sub_face: 'subFace',\n upload: 'upload',\n log: 'log'\n};\nvar vue ={\n faceTopic : new Vue({\n delimiters: ['<%', '%>'],\n\n mounted: function () {\n //this.init();\n },\n\n el: '.face-topic',\n\n data: {\n Date: new Date(),\n face_topic_list: [],\n empty_item:{id: '', name: '', subtitle: '个表情', size: '', status: 1, is_recommended: 0, flag: 'new', tag: '', pre_id: 'new_pre', file_name: '', image_path: '', bg_path_AND: '', bg_path_IOS: '', zip_file: '', item_url: '', description: '',},\n current_item:{},\n pre_item:{},\n current_id:'',\n state:{\n isShowSelf:0,\n isShowForm:0,\n },\n submitUrl:'',\n searchQuery:''\n },\n\n methods: {\n init(){\n console.log('faceTopic init');\n $.ajax({\n type:'get',\n dataType:'json',\n url:netService + apiUrl.getFaceTopic,\n //async:false,\n success:(data) => {\n console.log(data);\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n // let data = {result:{subject_list:[{id:'123',name:'啦啦啦',flag:'new',new_change:1,status:1,is_recommended:0,modified_time:'1482821116', file_name: '', image_path: '', bg_path_AND: '', bg_path_IOS: '', zip_file: '', item_url: ''},{id:'116',name:'呼呼呼',flag:'hot',new_change:0,status:0,is_recommended:1,modified_time:'1412821116', file_name: '', image_path: '', bg_path_AND: '', bg_path_IOS: '', zip_file: '', item_url: ''},{id:'149',name:'嘻嘻嘻',flag:'new',new_change:1,status:1,is_recommended:0,modified_time:'1482821116', file_name: '', image_path: '', bg_path_AND: '', bg_path_IOS: '', zip_file: '', item_url: ''}]}};\n this.face_topic_list=data.result.subject_list;\n this.state.isShowSelf=1;\n }\n });\n },\n statusChange(type, item){\n let url = '', id = '', data={};\n if (!confirm('请确认要修改测试环境和正式环境的配置文件再修改状态,确定要修改吗?'))\n return;\n if (type === 'status') {\n url = apiUrl.changeTopicStatus;\n data=JSON.stringify({id:item.id , status: Number(!item.status)});\n }\n if (type === 
'is_recommended') {\n url = apiUrl.changeTopicRecommend;\n data=JSON.stringify({id:item.id , status: Number(!item.is_recommended)});\n }\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+url,\n data:data,\n //async:false,\n success:(data) => {\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n item[type]=!item[type];\n }\n });\n },\n modifyTopic(item, index){\n this.current_item = myLib.deepClone(item);\n this.current_index = index;\n this.current_item.before_id=this.current_item.id;\n this.submitUrl=apiUrl.modifyFaceTopic;\n this.state.isShowForm=1;\n },\n createTopic(){\n this.current_item = myLib.deepClone(this.empty_item);\n this.submitUrl=apiUrl.createFaceTopic;\n this.state.isShowForm=1;\n },\n topTopic(item, index){\n if(!confirm('确定要将'+item.name+'移动到最上方吗?'))\n return;\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+apiUrl.topTopic,\n data:JSON.stringify({id: item.id}),\n //async:false,\n success:(data) => {\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n let obj = myLib.deepClone(item);\n this.face_topic_list.splice(index,1);\n this.face_topic_list.splice(0,0,obj);\n }\n });\n },\n submitTopic(){\n if(this.current_item.id == '' || this.current_item.name == ''|| this.current_item.flag == ''|| this.current_item.file_name == ''|| this.current_item.image_path == ''|| this.current_item.bg_path_AND == ''|| this.current_item.bg_path_IOS == ''|| this.current_item.item_url == ''){\n alert('请输入所有必填字段');\n return;\n }\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+this.submitUrl,\n data:JSON.stringify(this.current_item),\n //async:false,\n success:(data) => {\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n if(this.submitUrl.indexOf('update')>-1){\n Vue.set(this.face_topic_list, this.current_index, this.current_item);\n alert('更新成功');\n }\n else{\n this.face_topic_list.splice(0,0,this.current_item);\n alert('添加成功');\n }\n this.state.isShowForm=0;\n }\n });\n },\n 
deleteTopic(index){\n if (!confirm('如若想删除专题请先将专题的启动状态修改为关闭,并且部署到了测试环境和正式环境上,否则会有冗余的配置项无法被删除,确定要删除吗?'))\n return;\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+apiUrl.deleteFaceTopic,\n data:JSON.stringify({id: this.face_topic_list[index].id}),\n //async:false,\n success:(data) => {\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n this.face_topic_list.splice(index, 1);\n }\n });\n },\n cancelSubmitTopic(){\n this.state.isShowForm=0;\n },\n autoAddId(){\n this.current_item.file_name=this.current_item.file_name?this.current_item.file_name:this.current_item.id+'.wav';\n this.current_item.image_path=this.current_item.image_path?this.current_item.image_path:this.current_item.id+'.jpg';\n this.current_item.bg_path_AND=this.current_item.bg_path_AND?this.current_item.bg_path_AND:this.current_item.id+'_AND.jpg';\n this.current_item.bg_path_IOS=this.current_item.bg_path_IOS?this.current_item.bg_path_IOS:this.current_item.id+'_iOS.jpg';\n this.current_item.zip_file=this.current_item.zip_file?this.current_item.zip_file:this.current_item.id+'.zip';\n this.current_item.item_url=this.current_item.item_url?this.current_item.item_url:this.current_item.id+'.json';\n }\n },\n\n computed: {\n filteredFaceTopicList(){\n return this.face_topic_list.filter((item)=> {\n return item.id.indexOf(this.searchQuery) !== -1 || item.name.indexOf(this.searchQuery) !== -1\n })\n }\n }\n }),\n\n subFace : new Vue({\n delimiters: ['<%', '%>'],\n\n mounted: function () {\n //this.init();\n },\n\n el: '.sub-face',\n\n data: {\n topic_name_list:[],\n sub_face_list: [],\n empty_item:{voice_subject_id:'', subject_id:'' ,id: '', name: '', show_image: '', show_type: 'text',status: 1,new_change: 0, tag: '', file_name: '', show_duration: 0, show_text_color:'#000000',},\n current_item:{voice_subject_id:'', subject_id:''},\n pre_item:{},\n current_id:'',\n state:{\n isShowSelf:0,\n isShowForm:0,\n },\n submitUrl:'',\n searchQuery:'',\n needSubjectId: 1\n },\n\n methods: {\n 
init(){\n console.log('subFace init');\n $.ajax({\n type:'get',\n dataType:'json',\n url:netService+apiUrl.getFaceTopicName,\n //async:false,\n success:(data) => {\n console.log(data);\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n //let data = {result:{name_list:[{id:'123',name:'啦啦啦'},{id:'116',name:'呼呼呼'}]}};\n this.topic_name_list=data.result.name_list;\n $.ajax({\n type:'post',\n dataType:'json',\n data:JSON.stringify({subject_id:''}),\n url:netService+apiUrl.getSubFace,\n //async:false,\n success:(data) => {\n console.log(data);\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n this.current_item.subject_id = this.topic_name_list[0].id;\n this.sub_face_list=data.result.sub_expression_list;\n this.state.isShowSelf=1;\n }\n });\n }\n });\n //data = {result:{sub_expression_list :[{subject_id:'123', id: '123_12', name: '房间我IE佛', show_image: '', show_type: 'text',status: 1,new_change: 0, tag: '', file_name: '', show_duration: '', show_text_color:''},{subject_id:'116', id: '116_15', name: '分为', show_image: '', show_type: 'text',status: 0,new_change: 1, tag: '', file_name: '', show_duration: '', show_text_color:'',}]}};\n },\n statusChange(type, item){\n let url = '', id = '', data={};\n if (!confirm('请确认要修改测试环境和正式环境的配置文件再修改状态,确定要修改吗?'))\n return;\n if (type === 'status') {\n url = apiUrl.changeSubFaceStatus;\n data=JSON.stringify({id:item.id , status: Number(!item.status)});\n }\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+url,\n data:data,\n //async:false,\n success:(data) => {\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n item[type]=!item[type];\n }\n });\n },\n modifySubFace(item, index){\n this.needSubjectId = item.id.indexOf('_')>-1;\n this.current_item = myLib.deepClone(item);\n if(this.current_item.id.indexOf('_')>-1)\n this.current_item.sub_id = this.current_item.id.substr(this.current_item.subject_id.length+1,this.current_item.id.length);\n else\n this.current_item.sub_id = 
this.current_item.id;\n this.current_index = index;\n this.current_item.before_id=this.current_item.id;\n this.submitUrl=apiUrl.modifySubFace;\n this.state.isShowForm=1;\n },\n createSubFace(){\n this.needSubjectId=1;\n this.empty_item.subject_id = this.current_item.subject_id;\n this.current_item = myLib.deepClone(this.empty_item);\n this.current_item.sub_id = '';\n this.submitUrl=apiUrl.createSubFace;\n this.state.isShowForm=1;\n },\n submitSubFace(){\n if(this.current_item.id == '' || this.current_item.name == ''|| this.current_item.image_path == ''|| this.current_item.file_name == ''){\n alert('请输入所有必填字段');\n return;\n }\n this.current_item.id = this.current_item.subject_id + '_' + this.current_item.sub_id;\n this.current_item.voice_subject_id=this.current_item.subject_id;\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+this.submitUrl,\n data:JSON.stringify(this.current_item),\n async:false,\n success:(data) => {\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n if(this.submitUrl.indexOf('update')>-1){\n Vue.set(this.sub_face_list, this.current_index, this.current_item);\n alert('更新成功');\n }\n else{\n this.sub_face_list.splice(0,0,this.current_item);\n alert('添加成功');\n }\n this.state.isShowForm=0;\n }\n });\n },\n cancelSubmitSubFace(){\n this.state.isShowForm=0;\n },\n deleteSubFace(index){\n if (!confirm('确认要删除吗?'))\n return;\n $.ajax({\n type: 'post',\n dataType: 'json',\n url: netService + apiUrl.deleteSubFace,\n data: JSON.stringify({id:this.sub_face_list[index].id}),\n async: false,\n success: (data) => {\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n this.sub_face_list.splice(index, 1);\n }\n });\n },\n autoAddId(){\n if(!this.current_item.id)\n this.current_item.id = this.current_item.subject_id + '_' + this.current_item.sub_id;\n if(this.current_item.id.indexOf('_')>-1)\n this.current_item.sub_id = this.current_item.id.substr(this.current_item.subject_id.length + 1, this.current_item.id.length);\n 
else\n this.current_item.sub_id = this.current_item.id;\n this.current_item.file_name=this.current_item.file_name?this.current_item.file_name:this.current_item.id+'.wav';\n if(this.current_item.show_type !== '3d')\n this.current_item.show_image=this.current_item.show_image?this.current_item.show_image:this.current_item.id+'.png';\n else\n this.current_item.show_image=this.current_item.show_image?this.current_item.show_image:this.current_item.id+'.gif';\n this.current_item.image_path=this.current_item.image_path?this.current_item.image_path:this.current_item.id+'.png';\n }\n },\n\n computed: {\n filteredSubFaceList(){\n return this.sub_face_list.filter((item)=> {\n return (!this.current_item.subject_id || item.subject_id == this.current_item.subject_id) && (item.id.indexOf(this.searchQuery) !== -1 || item.name.indexOf(this.searchQuery) !== -1)\n })\n }\n }\n }),\n\n upload : new Vue({\n delimiters: ['<%', '%>'],\n\n mounted: function () {\n\n },\n\n el: '.upload',\n\n data: {\n Date: new Date(),\n imageList:[],\n audioList:[],\n zipList:[],\n state: {\n isShowSelf: 0,\n isShowLoadingTip: 0,\n testFinished: 0,\n isShowFormalDeploy: 0\n },\n last_version_info: {\n version_num:0,\n last_deploy_time:''\n },\n deploy_info: {\n modified_log:'',\n timing_deploy_time:''\n },\n lack_filename_list:[]\n },\n\n methods: {\n init(){\n this.state.isShowSelf=1;\n },\n imageUpload(){\n this.state.isShowLoadingTip = 1;\n let imageFiles = document.getElementById('imageFiles').files;\n let imageFilesLength = imageFiles.length;\n let imageList = [];\n let reader = new FileReader();\n //将文件以Data URL形式读入页面\n let readImageFile = (index) => {\n if(index>=imageFilesLength) {//图片上传完成\n console.log(imageList);\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+apiUrl.uploadImage,\n data:JSON.stringify({pics: imageList}),\n async:false,\n success:(data) => {\n this.state.isShowLoadingTip = 0;\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n alert('上传成功');\n },\n 
error:() => {\n this.state.isShowLoadingTip = 0;\n alert('服务器连接失败');\n }\n });\n return;\n }\n if(!/image\\/\\w+/.test(imageFiles[index].type)){\n alert(\"有文件不是图片,请重新上传\");\n this.state.isShowLoadingTip = 0;\n return;\n }\n reader.readAsDataURL(imageFiles[index]);\n reader.onload = function (e) {\n imageList.push({pic_content: this.result.split(',')[1], pic_name: imageFiles[index].name});\n readImageFile(index+1)\n };\n reader.onerror = () => {\n this.state.isShowLoadingTip = 0;\n alert('图片读取失败,请重新上传');\n }\n };\n readImageFile(0);\n },\n audioUpload(){\n this.state.isShowLoadingTip = 1;\n let audioFiles = document.getElementById('audioFiles').files, audio = document.getElementById('audioFiles');\n let audioFilesLength = audioFiles.length;\n let audioList = [];\n let reader = new FileReader();\n //将文件以Data URL形式读入页面\n let readAudioFile = (index) => {\n if(index>=audioFilesLength) {//音频上传完成\n console.log(audioList);\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+apiUrl.uploadAudio,\n data:JSON.stringify({sounds:audioList}),\n async:false,\n success:(data) => {\n this.state.isShowLoadingTip = 0;\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n alert('上传成功');\n },\n error:() => {\n this.state.isShowLoadingTip = 0;\n alert('服务器连接失败');\n }\n });\n return;\n }\n if(!/audio\\/\\w+/.test(audioFiles[index].type)){\n alert(\"有文件不是音频,请重新上传\");\n this.state.isShowLoadingTip = 0;\n return;\n }\n reader.readAsDataURL(audioFiles[index]);\n reader.onload = function (e) {\n let audio = new Audio();\n audio.src=this.result;\n audio.onloadedmetadata = () => {console.log('audio success')\n audioList.push({sound_content: this.result.split(',')[1], sound_name: audioFiles[index].name, show_duration: parseInt(audio.duration*1000)});\n audio.onerror = () => {};\n readAudioFile(index+1);\n };\n audio.onerror = () => {\n alert('音频加载获取时间出错,请重新上传');\n }\n };\n reader.onerror = () => {\n alert('音频读取失败,请重新上传');\n }\n };\n readAudioFile(0);\n },\n zipUpload(){\n 
this.state.isShowLoadingTip = 1;\n let zipFiles = document.getElementById('zipFiles').files;\n let zipFilesLength = zipFiles.length;\n let zipList = [];\n let reader = new FileReader();\n //将文件以Data URL形式读入页面\n let readZipFile = (index) => {\n if(index>=zipFilesLength) {//Zip上传完成\n console.log(zipList);\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+apiUrl.uploadZip,\n data:JSON.stringify({zips:zipList}),\n async:false,\n success:(data) => {\n this.state.isShowLoadingTip = 0;\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n alert('上传成功');\n },\n error:() => {\n this.state.isShowLoadingTip = 0;\n alert('服务器连接失败');\n }\n });\n return;\n }\n if(zipFiles[index].type.indexOf('zip')<0){\n alert(\"有文件不是zip文件,请重新上传\");\n this.state.isShowLoadingTip = 0;\n return;\n }\n reader.readAsDataURL(zipFiles[index]);\n reader.onload = function (e) {\n zipList.push({zip_content: this.result.split(',')[1], zip_name: zipFiles[index].name});\n readZipFile(index+1)\n };\n reader.onerror = () => {\n this.state.isShowLoadingTip = 0;\n alert('zip读取失败,请重新上传');\n }\n };\n readZipFile(0);\n },\n testDeploy(){\n if (!confirm('确认提交本次改动到测试环境?'))\n return;\n $.ajax({\n type:'get',\n dataType:'json',\n url:netService+apiUrl.testDeploy,\n success:(data) => {\n if(data.result_code !==2000) {\n alert('测试环境上线失败');\n return;\n }\n if(data.result.lack_filename_list.length>0) {\n //let data = {result:{lack_filename_list:['a.mp3','b.jpg']}};\n alert('缺少文件:' + JSON.stringify(data.result.lack_filename_list));\n this.lack_filename_list = data.result.lack_filename_list;\n }\n else\n alert('测试环境上线成功');\n }\n });\n },\n confirmFinishTestDeploy(){\n document.getElementById('formal_deploy_button').disabled='';\n },\n clickFormalDeploy(){\n $.ajax({\n type:'get',\n dataType:'json',\n url:netService+apiUrl.clickFormalDeploy,\n success:(data) => {\n if(data.result_code !== 2000) {\n alert('上一次版本信息返回失败');\n return;\n }\n //let data = 
{result:{last_version:{version_num:17,last_deploy_time:'1482801116'}}};\n this.last_version_info = data.result.last_version;\n this.deploy_info={modified_log:'', timing_deploy_time:''};\n this.state.isShowFormalDeploy = 1;\n }\n });\n },\n formalDeploy(){\n let time_string=this.deploy_info.timing_deploy_time?this.deploy_info.timing_deploy_time.replace('T',' '):'现在';\n if(!confirm('确认要在' + time_string + '上线正式环境吗?'))\n return;\n this.deploy_info.timing_deploy_time=this.deploy_info.timing_deploy_time?Date.parse(this.deploy_info.timing_deploy_time+\":00\")/1000:0;\n console.log(JSON.stringify(this.deploy_info));\n $.ajax({\n type:'post',\n dataType:'json',\n url:netService+apiUrl.formalDeploy,\n data:JSON.stringify(this.deploy_info),\n success:(data) => {\n if(data.result_code !== 2000) {\n alert('正式环境部署失败');\n return;\n }\n alert('正式环境部署成功');\n }\n });\n vue.log.log_list.push(this.deploy_info);\n },\n cancelFormalDeploy(){\n this.state.isShowFormalDeploy = 0;\n }\n }\n }),\n\n log : new Vue({\n delimiters: ['<%', '%>'],\n\n mounted: function () {\n //this.init();\n },\n\n el: '.log',\n\n data: {\n Date: new Date(),\n log_list:[],\n state:{\n isShowSelf:0,\n isShowForm:0,\n },\n searchQuery:''\n },\n\n methods: {\n init(){\n console.log('log init');\n $.ajax({\n type:'post',\n dataType:'json',\n data:JSON.stringify({search_string:''}),\n url:netService + apiUrl.getLog,\n //async:false,\n success:(data) => {\n console.log(data);\n if(data.result_code != 2000){\n alert('服务器接口出错');\n return;\n }\n this.log_list=data.result.history_list;\n this.state.isShowSelf=1;\n }\n });\n //let data = {result:{history_list:[{id:'17',modified_log:'啦啦啦',timing_deploy_time:'1482221116'}]}};\n }\n },\n\n computed: {\n filteredLogList(){\n return this.log_list.filter((item)=> {\n return item.modified_log.indexOf(this.searchQuery) !== -1\n })\n }\n }\n })\n};\n\nvar NavBar = new Vue({\n delimiters: ['<%', '%>'],\n\n mounted: function () {\n this.showPage(location.href.split('#/')[1]);\n 
},\n\n el: '.nav-bar',\n\n data: {\n page_route: {\n face_topic: 0,\n sub_face: 0,\n upload: 0,\n //topic_data: 0,\n //default_data: 0,\n log: 0\n },\n default_page: 'face_topic',\n current_page: ''\n },\n\n methods: {\n showPage(pageName){\n if(typeof this.page_route[pageName] === 'undefined') {\n pageName = this.default_page;\n location.href = location.href.split('#/')[0]+'#/'+this.default_page;\n }\n if(this.current_page) {\n vue[page[this.current_page]].state.isShowSelf = 0;\n this.page_route[this.current_page] = 0;\n }\n this.current_page=pageName;\n this.page_route[pageName] = 1;\n vue[page[pageName]].init();\n document.body.style.display = 'block';\n }\n }\n});\n"
},
{
"alpha_fraction": 0.595127284526825,
"alphanum_fraction": 0.5956442952156067,
"avg_line_length": 35.840476989746094,
"blob_id": "be36b3df3b3c7bb5f48447001191e429552cf72f",
"content_id": "8aa8cd71f117055e9d7e6f5a7597b7b8b0bd5a01",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 15520,
"license_type": "no_license",
"max_line_length": 136,
"num_lines": 420,
"path": "/andesres/image/andes/assistant/iOS/tool.py",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport struct\nfrom baseclient.python.tornado_proto_gclient import JinjaBaseHandler\nimport http.wrapper as wrapper\nimport http.public_var as public_var\nimport baseclient.python.module as module\nimport baseclient.python.andes_pb2 as proto_andes\nimport util.normalizer as normalizer\nimport logging\nfrom pyDes import *\nimport gzip\nimport urllib\nimport json\nimport string\nimport random\nimport time\nimport hashlib\n\n\nFIND_USER_TIMEOUT = 5000\n\nclass ToolBaseHandler(wrapper.BasicHandler):\n def check_user(self):\n if not self.current_user:\n self.redirect('/tool/ad')\n return False\n else:\n return True\n\n def get_current_user(self):\n return self.get_secure_cookie(\"andes_2\")\n\nclass UserHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'html'\n\n def get(self):\n if not self.check_user():\n return\n template_name = 'tools_user.html.tpl'\n\n template_vars = {\n 'title': u'用户查询',\n }\n\n self.make_return(public_var.RESULT_CODE_SUC, template_vars, 'Ok', template_name)\n\nclass AppUpdateHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'html'\n\n def get(self):\n if not self.check_user():\n return\n template_name = 'app_update.html.tpl'\n\n template_vars = {\n 'title': u'版本更新',\n }\n\n self.make_return(public_var.RESULT_CODE_SUC, template_vars, 'Ok', template_name)\n\nclass UserQueryHandler(ToolBaseHandler):\n def prepare(self):\n self.content_type = 'html'\n\n def post(self):\n if not self.check_user():\n return\n body = self.json_load_body()\n logging.debug(body)\n req = proto_andes.UserIdPhoneChangeRequest()\n req.type = body['type']\n req.is_encrypt = body['encrypt']\n req.content = body['content']\n if \"'\" in req.content or '=' in req.content or 'and' in req.content or 'or' in req.content:\n self.make_return(public_var.RESULT_CODE_SUC, '', '')\n return\n res = self.service_call(module.MODULE_ANDES_USER, req, 
timeout=FIND_USER_TIMEOUT) \n assert type(res) == proto_andes.UserIdPhoneChangeResponse\n if res.status == res.SUCCESS:\n result = self.load_result(res)\n self.make_return(public_var.RESULT_CODE_SUC, result, '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\n def load_result(self, pkg):\n content = []\n for r in pkg.results:\n content.append(r)\n return {\n 'list': content\n }\n\nclass GroupConsoleHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'html'\n\n def get(self):\n if not self.check_user():\n return\n template_name = 'gconsole.html.tpl'\n template_vars = {\n 'title': u'公开群后台',\n }\n self.make_return(public_var.RESULT_CODE_SUC, template_vars, 'Ok', template_name)\n\nclass GroupListHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def get(self):\n if not self.check_user():\n return\n show_type = self.get_argument('show_type', None)\n if show_type is None:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n req = proto_andes.PublicGroupOPAction()\n req.action = req.LIST\n req.show_type = show_type\n res = self.service_call(module.MODULE_PUBLIC_GROUP, req)\n assert type(res) == proto_andes.PublicGroupOPResponse\n if res.status == res.SUCCESS:\n result = self.load_result(res)\n self.make_return(public_var.RESULT_CODE_SUC, result, '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\n def load_result(self, res):\n groups = []\n for group in res.public_groups:\n group_dict = {\n 'group_id': group.public_group_id,\n 'name': group.name,\n 'show_time': group.show_time,\n 'end_time': group.end_time,\n 'show_type': group.show_type,\n 'total_audiance': group.total_audiance,\n 'is_hot': group.is_hot,\n 'show_status': 
group.show_status\n }\n items = []\n for item in group.items:\n items.append({\n 'item_id': item.item_id,\n 'name': item.name,\n 'logo': item.logo,\n 'total_audiance': item.total_audiance\n })\n group_dict['items'] = items\n groups.append(group_dict)\n return groups\n \n\nclass GroupAddHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def post(self):\n if not self.check_user():\n return\n body = self.json_load_body()\n logging.debug(body)\n group = proto_andes.PublicGroup()\n group.name = body['name']\n group.show_time = body['show_time']\n group.end_time = body['end_time']\n group.show_type = body['show_type']\n group.total_audiance = int(body['total_audiance'])\n group.show_type = body['show_type']\n group.is_hot = body['is_hot']\n group.show_status = body['show_status']\n items = []\n for b_item in body['items']: \n item = proto_andes.PublicGroupItem()\n item.name = b_item['name']\n item.logo = b_item['logo']\n item.total_audiance = int(b_item['total_audiance'])\n items.append(item)\n group.items.extend(items)\n req = proto_andes.PublicGroupOPAction()\n req.action = req.ADD\n req.public_group.MergeFrom(group)\n res = self.service_call(module.MODULE_PUBLIC_GROUP, req)\n assert type(res) == proto_andes.PublicGroupOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\nclass GroupDelHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def get(self):\n if not self.check_user():\n return\n group_id = self.get_argument('group_id', None)\n if group_id is None:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n req = proto_andes.PublicGroupOPAction()\n req.action = req.DEL\n req.public_group_id = group_id\n res = 
self.service_call(module.MODULE_PUBLIC_GROUP, req)\n assert type(res) == proto_andes.PublicGroupOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\n\nclass GroupUpdateGroupHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def post(self):\n if not self.check_user():\n return\n body = self.json_load_body()\n logging.debug(body)\n group = proto_andes.PublicGroup()\n group.public_group_id = body['group_id']\n group.name = body['name']\n group.show_time = body['show_time']\n group.end_time = body['end_time']\n group.show_type = body['show_type']\n group.total_audiance = int(body['total_audiance'])\n group.is_hot = body['is_hot']\n group.show_status = body['show_status']\n req = proto_andes.PublicGroupOPAction()\n req.action = req.UPDATE_GROUP \n req.public_group.MergeFrom(group)\n res = self.service_call(module.MODULE_PUBLIC_GROUP, req)\n assert type(res) == proto_andes.PublicGroupOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\n\nclass GroupUpdateGroupItemHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def post(self):\n if not self.check_user():\n return\n body = self.json_load_body()\n logging.debug(body)\n item = proto_andes.PublicGroupItem()\n item.item_id = body['item_id']\n item.name = body['name']\n item.logo = body['logo']\n item.total_audiance = int(body['total_audiance'])\n req = proto_andes.PublicGroupOPAction()\n req.action = req.UPDATE_GROUP_ITEM\n req.public_group_item.MergeFrom(item)\n res = self.service_call(module.MODULE_PUBLIC_GROUP, req)\n assert type(res) == 
proto_andes.PublicGroupOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\nclass EchoUpdateHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def post(self):\n if not self.check_user():\n return\n body = self.json_load_body()\n req = proto_andes.EchoTagOPAction()\n req.action = req.UPDATE\n tag = proto_andes.EchoTag()\n tag.tag_id = body['tag_id']\n tag.tag_name = body['tag_name']\n tag.parent_id = body['parent_id']\n tag.enable = body['enable']\n req.echo_tag.CopyFrom(tag)\n res = self.service_call(module.MODULE_ECHO, req)\n assert type(res) == proto_andes.EchoTagOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\nclass EchoDeleteHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def post(self):\n if not self.check_user():\n return\n body = self.json_load_body()\n req = proto_andes.EchoTagOPAction()\n req.action = req.DEL\n tag = proto_andes.EchoTag()\n tag.tag_id = body['tag_id']\n tag.parent_id = body['parent_id']\n req.echo_tag.CopyFrom(tag)\n res = self.service_call(module.MODULE_ECHO, req)\n assert type(res) == proto_andes.EchoTagOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\nclass EchoInsertHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def post(self):\n if not self.check_user():\n return\n body = self.json_load_body()\n main_tag = proto_andes.EchoTag()\n 
main_tag.tag_id = body['main_tag']['tag_id']\n main_tag.tag_name = body['main_tag']['tag_name']\n main_tag.parent_id = body['main_tag']['parent_id']\n main_tag.enable = body['main_tag']['enable']\n sub_tags = []\n for sub_item in body['sub_tags']:\n sub_tag = proto_andes.EchoTag()\n sub_tag.tag_id = sub_item['tag_id']\n sub_tag.tag_name = sub_item['tag_name']\n sub_tag.parent_id = body['main_tag']['tag_id']\n sub_tag.enable = sub_item['enable']\n sub_tags.append(sub_tag)\n req = proto_andes.EchoTagOPAction()\n req.action = req.INSERT\n req.echo_tag.CopyFrom(main_tag)\n req.sub_tags.extend(sub_tags)\n res = self.service_call(module.MODULE_ECHO, req)\n assert type(res) == proto_andes.EchoTagOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\nclass EchoRefreshHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'html'\n\n def get(self):\n if not self.check_user():\n return\n req = proto_andes.EchoTagOPAction()\n req.action = req.REFRESH\n res = self.service_call(module.MODULE_ECHO, req)\n assert type(res) == proto_andes.EchoTagOPResponse\n if res.status == res.SUCCESS:\n self.make_return(public_var.RESULT_CODE_SUC, 'ok', '')\n else:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n\nclass EchoHandler(ToolBaseHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'html'\n\n def get(self):\n if not self.check_user():\n return\n req = proto_andes.EchoTagOPAction()\n req.action = req.LIST\n res = self.service_call(module.MODULE_ECHO, req)\n logging.debug(res)\n assert type(res) == proto_andes.EchoTagOPResponse\n result = {}\n if res.status == res.SUCCESS:\n result = self.on_load_result(res)\n template_name = 'echo_admin.html.tpl'\n template_vars = {\n 'title': 
u'管理员后台', \n 'info': json.dumps(result)\n }\n self.make_return(public_var.RESULT_CODE_SUC, template_vars, 'Ok', template_name)\n\n def on_load_result(self, res):\n result = []\n for tag in res.echo_tags:\n result.append({\n 'name': tag.tag_name,\n 'id': tag.tag_id,\n 'enable': tag.enable,\n 'parent_id': tag.parent_id if tag.parent_id is not None else ''\n })\n return result\n\nclass LoginHandler(wrapper.BasicHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'html'\n\n def get(self):\n template_name = 'login.html.tpl'\n template_vars = {\n 'title': u'管理员后台', \n 'IS_LOGIN': True\n }\n self.make_return(public_var.RESULT_CODE_SUC, template_vars, 'Ok', template_name)\n\n\nclass DoLoginHandler(wrapper.BasicHandler, JinjaBaseHandler):\n def prepare(self):\n self.content_type = 'json'\n\n def post(self):\n body = self.json_load_body()\n user_name = body['name']\n pwd = body['pwd']\n if user_name is None or pwd is None:\n self.make_return(public_var.RESULT_CODE_SERVER_ERROR, '', 'internal server error', http_status=public_var.HTTP_SERVER_ERROR)\n result = {}\n if user_name == 'andes' and pwd == '35RYezct2kQF0oBh':\n self.set_secure_cookie(\"andes_2\", user_name)\n result['login'] = 'ok'\n else:\n result['login'] = 'fail'\n self.make_return(public_var.RESULT_CODE_SUC, result, '')\n\n"
},
{
"alpha_fraction": 0.5616790652275085,
"alphanum_fraction": 0.5904330611228943,
"avg_line_length": 42.011451721191406,
"blob_id": "a790bd758e15cc20fa72410125d3befbadf2800a",
"content_id": "f07d6a1bea9939f1aebc0f3539de6a22bb16338b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 11334,
"license_type": "no_license",
"max_line_length": 269,
"num_lines": 262,
"path": "/scripts/share/star_chat.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "'use strict';\n\nvar _typeof = typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\" ? function (obj) { return typeof obj; } : function (obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; };\n\n/**\n * Created by Tony on 2017/1/9.\n */\nmyLib.remAdjust(20, 320);\nvar netService = '';\nif (location.href.indexOf('https') > -1) netService = 'https://andes.cootekservice.com';else netService = 'http://andes.cootekservice.com';\nvar share_title = '',\n share_desc = '',\n share_image = 'http://andes.cootekservice.com/andesres/image/andes/andes_share_icon.jpg';\nvar vue = new Vue({\n delimiters: ['<%', '%>'],\n el: '#content',\n mounted: function mounted() {\n var _this = this;\n\n document.setTitle = function (t) {\n document.title = t;\n var i = document.createElement('iframe');\n i.src = '//m.baidu.com/favicon.ico';\n i.style.display = 'none';\n i.onload = function () {\n setTimeout(function () {\n i.remove();\n }, 9);\n };\n document.body.appendChild(i);\n };\n //if(this.is_history_page)\n // Andes.stopStarChatAudio('vue.stopStarChatAudio');\n $('#content').show();\n setTimeout(function () {\n _this.init();\n }, 200);\n },\n\n data: {\n rem_rate: 320 / document.body.clientWidth,\n is_history_page: location.href.indexOf('_history') > -1,\n template: myLib.GetQueryString('template'),\n uid: myLib.GetQueryString('uid'),\n ts: myLib.GetQueryString('ts'),\n dialog_list: data[myLib.GetQueryString('template')],\n star_name: data[myLib.GetQueryString('template')][0].sub_title,\n nickname: '',\n chat_list: [],\n audio_list: [],\n answer: [],\n audio_index: 0,\n current_audio: new Audio(),\n showBrowserTip: 0,\n hostNotOther: myLib.getHostApp() != 'other',\n playIcon: 5,\n canClick: 0,\n is_ios: myLib.getHostDevice() === 'iphone'\n },\n methods: {\n init: function init() {\n var _this2 = this;\n\n $.ajax({\n url: netService + 
'/activity/star_access/user_audio_list?_ts=' + Date.parse(new Date()) / 1000,\n dataType: 'json',\n data: JSON.stringify({ template: this.template, uid: this.uid, ts: this.ts }),\n type: 'post',\n success: function success(data) {\n _this2.answer = data.result.audio_lst;\n //this.answer = answer;\n _this2.nickname = data.result.nick_name;\n setTimeout(function () {\n if (_this2.is_history_page) document.setTitle(_this2.star_name + '\\u8BBF\\u95EE\\u8BB0\\u5F55');else document.setTitle(_this2.nickname + '\\u6B63\\u5728\\u63A5\\u53D7' + _this2.star_name + '\\u8BBF\\u95EE');\n }, 100);\n share_title = '\\u5FEB\\u6765\\u56F4\\u89C2\\uFF0C' + _this2.nickname + '\\u6B63\\u5728\\u63A5\\u53D7' + _this2.star_name + '\\u7684\\u8BBF\\u95EE';\n share_desc = '\\u5FEB\\u6765\\u770B' + _this2.nickname + '\\u548C' + _this2.star_name + '\\u7684\\u8BBF\\u8C08\\u73B0\\u573A';\n share_image = 'http://andes.cootekservice.com/andesres/image/andes/star_chat/' + _this2.template + '/' + _this2.dialog_list[0].contact_image;\n if (!_this2.is_history_page && WechatShare.isWeixin()) WechatShare.config(INFO.appId, INFO.timestamp, INFO.noncestr, INFO.signature, share_title, share_desc, location.href, share_image);\n _this2.addAudioList(0, 0, 0);\n }\n });\n },\n addAudioList: function addAudioList(dialog_index, audio_index, answer_index) {\n var _this3 = this;\n\n if (dialog_index >= this.dialog_list.length) {\n var autoPlayAudio1 = function autoPlayAudio1() {\n wx.ready(function () {\n _this3.audioPlay();\n });\n };\n if (this.is_history_page || navigator.userAgent.indexOf('Safari') > -1) {\n (function () {\n var handler = function handler(e) {\n e.stopPropagation();\n document.body.removeEventListener('touchstart', handler);\n _this3.audioPlay();\n setTimeout(function () {\n _this3.canClick = 1;\n }, 300);\n };\n document.body.addEventListener('touchstart', handler);\n })();\n } else WechatShare.isWeixin() ? 
autoPlayAudio1() : this.audioPlay();\n if (this.is_history_page) this.initChatListHistory(data[this.template]);else this.initChatList(data[this.template]);\n return;\n }\n if (this.dialog_list[dialog_index].audio_file.indexOf('question') < 0 || answer_index >= this.answer.length) {\n this.audio_list.push({ url: '/andesres/image/andes/star_chat/' + this.template + '/' + this.dialog_list[dialog_index].audio_file });\n this.audio_list[audio_index].show_duration = this.dialog_list[dialog_index].show_duration;\n this.addAudioList(dialog_index + 1, audio_index + 1, answer_index);\n } else {\n var _ret2 = function () {\n _this3.audio_list.push({ url: '/andesres/image/andes/star_chat/' + _this3.template + '/' + _this3.dialog_list[dialog_index].audio_file });\n _this3.audio_list[audio_index].show_duration = _this3.dialog_list[dialog_index].show_duration;\n _this3.audio_list.push({ url: _this3.answer[answer_index].url });\n var audio = new Audio();\n audio.src = _this3.audio_list[audio_index + 1].url;\n if (_this3.audio_list[audio_index + 1].url.indexOf('.aud') <= -1) {\n _this3.audio_list[audio_index + 1].show_duration = myLib.GetQueryString('t' + (answer_index + 1)) ? myLib.GetQueryString('t' + (answer_index + 1)) : 1;\n if (_this3.audio_list[audio_index + 1].show_duration > 30) _this3.audio_list[audio_index + 1].show_duration = 30;\n _this3.addAudioList(dialog_index + 1, audio_index + 2, answer_index + 1);\n return {\n v: void 0\n };\n }\n audio.onloadedmetadata = function () {\n _this3.audio_list[audio_index + 1].show_duration = parseInt(audio.duration);\n _this3.addAudioList(dialog_index + 1, audio_index + 2, answer_index + 1);\n };\n audio.onerror = function () {\n _this3.audio_list[audio_index + 1].show_duration = 1;\n _this3.addAudioList(dialog_index + 1, audio_index + 2, answer_index + 1);\n };\n }();\n\n if ((typeof _ret2 === 'undefined' ? 
'undefined' : _typeof(_ret2)) === \"object\") return _ret2.v;\n }\n },\n audioPlay: function audioPlay() {\n var _this4 = this;\n\n var audio_index = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 0;\n\n console.log(audio_index);\n if (audio_index === 0) this.playIcon = 4; //换成暂停icon\n if (audio_index >= this.audio_list.length) {\n console.log('audioPlay end');\n this.playIcon = 5;\n this.audio_index = 0;\n this.current_audio = new Audio();\n return;\n }\n if (this.audio_index !== audio_index || !this.current_audio.src) {\n this.audio_index = audio_index;\n if (this.audio_list[audio_index].url.indexOf('.aud') > -1) {\n this.audioPlay(audio_index + 1);\n return;\n }\n this.current_audio.src = this.audio_list[audio_index].url;\n this.current_audio.onended = function () {\n _this4.current_audio.onended = function () {};\n setTimeout(function () {\n return _this4.audioPlay(audio_index + 1);\n }, 0);\n };\n }\n this.current_audio.onerror = function () {\n _this4.current_audio.onerror = function () {};\n setTimeout(function () {\n return _this4.audioPlay(audio_index + 1);\n }, 0);\n };\n console.log('audio play');\n this.current_audio.play();\n },\n audioPlayClick: function audioPlayClick() {\n if (!this.canClick && navigator.userAgent.indexOf('Safari') > -1) return;\n console.log('audio stop:' + !this.current_audio.paused || this.current_audio.readyState < 3);\n console.log('audio readyState:' + this.current_audio.readyState);\n if (!this.current_audio.paused || this.current_audio.readyState < 2) {\n this.playIcon = 5; //换成播放icon\n this.current_audio.pause();\n } else {\n this.playIcon = 4; //换成暂停icon\n this.audioPlay(this.audio_index);\n }\n },\n stopStarChatAudio: function stopStarChatAudio() {\n var _this5 = this;\n\n this.playIcon = 5; //换成播放icon\n this.current_audio.pause();\n setTimeout(function () {\n return _this5.current_audio.pause();\n }, 1000);\n this.current_audio.onloadedmetadata = function () {\n _this5.current_audio.pause();\n 
_this5.current_audio.onloadedmetadata = {};\n };\n },\n initChatList: function initChatList(arr) {\n var _this6 = this;\n\n var addItem = function addItem(index, answer_index) {\n if (index >= _this6.dialog_list.length) return;\n //setTimeout(() => {document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;},200);\n if (_this6.dialog_list[index].audio_file.indexOf('question') > -1 && answer_index < _this6.answer.length) {\n _this6.chat_list.push(_this6.dialog_list[index]);\n setTimeout(function () {\n return _this6.chat_list.push(_this6.audio_list[index + 1 + answer_index]);\n }, 1000);\n //setTimeout(() => {document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;},1500);\n setTimeout(function () {\n return addItem(index + 1, answer_index + 1);\n }, 2000);\n } else {\n _this6.chat_list.push(_this6.dialog_list[index]);\n setTimeout(function () {\n return addItem(index + 1, answer_index);\n }, 1000);\n }\n };\n addItem(0, 0);\n },\n initChatListHistory: function initChatListHistory(arr) {\n var _this7 = this;\n\n var addItem = function addItem(index, answer_index) {\n if (index >= _this7.dialog_list.length) return;\n //document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;\n if (_this7.dialog_list[index].audio_file.indexOf('question') > -1 && answer_index < _this7.answer.length) {\n _this7.chat_list.push(_this7.dialog_list[index]);\n _this7.chat_list.push(_this7.audio_list[index + 1 + answer_index]);\n //document.getElementsByClassName('chat-panel')[0].scrollBottom = 0;\n addItem(index + 1, answer_index + 1);\n } else {\n _this7.chat_list.push(_this7.dialog_list[index]);\n addItem(index + 1, answer_index);\n }\n };\n addItem(0, 0);\n },\n joinClick: function joinClick() {\n console.log('joinClick');\n if (myLib.getHostApp() != 'other') {\n //若寄主应用为微信、微博、qq空间浏览器\n this.showBrowserTip = 1;\n return;\n }\n this.stopStarChatAudio();\n if (this.is_ios) location.href = 
'https://itunes.apple.com/app/apple-store/id1090811540?pt=618432&ct=startalk_share&mt=8';else location.href = 'http://cootek-file.oss-cn-hangzhou.aliyuncs.com/AndesDailyBuild/Andes_20170213110044_interview_0A00A1.apk';\n },\n openAppClick: function openAppClick() {\n location.href = 'bibi://openbibi?page=star_chat';\n }\n }\n});\nwindow.onload = function () {\n FastClick.attach(document.body);\n};"
},
{
"alpha_fraction": 0.6802610158920288,
"alphanum_fraction": 0.6916802525520325,
"avg_line_length": 30.05063247680664,
"blob_id": "26a716d42fc1b5cbfcfa07397c154a08e0c9e14d",
"content_id": "34da88bdfe183d8ad711c1cdfa0674d11b5b81be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2452,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 79,
"path": "/scripts/bestvoice/bestvoice_pc.js",
"repo_name": "lxzhu94/andes_tony",
"src_encoding": "UTF-8",
"text": "function remAdjust(defaultFontSize, defaultScreenWidth) {\n\tvar htmlNode = document.getElementsByTagName('html')[0];\n\tfunction resize() {\n\t\tvar screenWidth = document.body.offsetWidth;\n\t\thtmlNode.style.fontSize = screenWidth / defaultScreenWidth * defaultFontSize + 'px';\n\t}\n\tdocument.addEventListener('DOMContentLoaded', function() {\n\t\tresize();\n\t});\n\twindow.addEventListener('resize', resize);\n}\nremAdjust(10, 640);\nvar $rulesContainer = document.getElementById('rulesContainer');\nvar $detail = document.getElementById('detail');\nvar $detailClose = document.getElementById('detailClose');\nvar $ruleBanner = document.getElementById('ruleBanner');\nvar $voteBanner = document.getElementById('voteBanner');\nvar $listenMyVoice = $('.voice-button');\nvar $voiceArr = document.getElementsByClassName('voice-listen');\nvar soundPlaying = false;\nvar audio=new Audio();\nvar soundControl = function (url) {\n\tif (!soundPlaying&&url) {\n\t\tsoundPlaying = true;\n\t\t$listenMyVoice.addClass('playing');\n\t\tif(current_audio!=url)\n\t\t\taudio.src=url;\n\t\taudio.play();\n\t\tcurrent_audio=url;\n\t\taudio.addEventListener('ended',function(){\n\t\t\tsoundPlaying=true;\n\t\t\tsoundControl();\n\t\t})\n\t} else {\n\t\tsoundPlaying=false;\n\t\tif(url&&url!=current_audio){\n\t\t\tsoundControl(url);\n\t\t\treturn;\n\t\t}\n\t\t$listenMyVoice.removeClass('playing');\n\t\taudio.pause();\n\t}\n};\nvar netService='http://docker-ws2.cootekservice.com'\nvar list_item=$('#list-item-dom').find('.list-item');\nvar current_uid=0;\nvar current_audio='';\nvar token='350a6bff-c629-4ba9-8ef8-d9515f88a157';\n\nfunction getList(){\n\t$.ajax({\n\t\ttype:\"GET\",\n\t\tdataType:\"json\",\n\t\turl:\"http://183.136.223.43:30007/andes/bestvoice/list?_v=1&_token=\"+token+\"&_ts=\"+Date.parse(new Date())/1000,\n\t\tsuccess: getListCallback\n\t});\n}\n\nfunction getListCallback(data) {\n\tvar arr = data.result.list;\n\tfor (var i in arr) {\n\t\tvar uid = arr[i].best_voice_id - 
1;\n\t\tvoice_list[uid].totel_like = arr[i].total_votes;\n\t\tlist_item.find('.user-tag>span').text(voice_list[uid].user_tag);\n\t\tlist_item.find('.user-pic').attr('src', voice_list[uid].photo[0]);\n\t\tlist_item.find('.like-num').text(voice_list[uid].totel_like);\n\t\tlist_item.data('uid', uid);\n\t\t$('.competitor-list').append(list_item.clone());\n\t}\n\t$('.voice-listen').fastClick(function(){\n\t\tcurrent_uid=$(this).parent().parent().data('uid');\n\t\tsoundControl(voice_list[current_uid].audio);\n\t});\n\t$('.download').fastClick(function(){\n\t\t$('.download-img').show();\n\t});\n\t$('.download-img').fastClick(function(){$('.download-img').hide()});\n}\ngetList();"
}
] | 24 |
111Seven/Python | https://github.com/111Seven/Python | 159f4bc03e8f0e1a74cced910eaa93abeca6d848 | efea8d705d2a359c518af46676bdccf61eba5f4b | ccfe5fc1483fbffd227bd3885f510b6940547291 | refs/heads/master | 2023-09-06T03:54:23.108622 | 2021-10-22T05:35:07 | 2021-10-22T05:35:07 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.8533333539962769,
"alphanum_fraction": 0.8533333539962769,
"avg_line_length": 74,
"blob_id": "80580dd64ca6092ef049a180c1019be19b97d485",
"content_id": "1eae967d284c1923f4b620cf3e3df6cfff7b0b66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 75,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 1,
"path": "/Pandas/readme.md",
"repo_name": "111Seven/Python",
"src_encoding": "UTF-8",
"text": "This folder explores the potential of pandas applied to different problems\n"
},
{
"alpha_fraction": 0.7557252049446106,
"alphanum_fraction": 0.7557252049446106,
"avg_line_length": 17.428571701049805,
"blob_id": "1dc8e782603fa847177b94e39c0244f1a24b4291",
"content_id": "514f69d2ef4658363b74a3d92c39eaba09607166",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 131,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 7,
"path": "/README.md",
"repo_name": "111Seven/Python",
"src_encoding": "UTF-8",
"text": "# Python\n\nThis module explores the capabilites of python and its modules.\n\nSome of the modules we will explore:\n- Numpy\n- Pandas\n\n\n"
},
{
"alpha_fraction": 0.7707736492156982,
"alphanum_fraction": 0.7707736492156982,
"avg_line_length": 52.61538314819336,
"blob_id": "85875c1d5426d2c98ffec28c32b56598fa2e958d",
"content_id": "a61dbfe17f2b6736faedcb429678e6cfe04042ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 698,
"license_type": "no_license",
"max_line_length": 385,
"num_lines": 13,
"path": "/Numpy/readme.md",
"repo_name": "111Seven/Python",
"src_encoding": "UTF-8",
"text": "# Numpy\nIn here we explore the potential of numpy applied to linear algebra. We will explore all the properties related to vectors and matrices. But we will stay focussed on exploring the abilities of numpy applied to linear algebra. I will think that you have some basic understanding of numpy. This series of notebooks will help you to organize your understanding on the abilities of numpy. \n## Topics:\n### Array Creation\n### Vector Operations\n### Dot Product\n### Cross Product\n### Matrix creation\n### Matrix Opertaions\n### Vectorization\n### Broadcasting\n\n**Note:** I will mostly follow the documentation of numpy. If you have any doubts or need further explanation refer to https://numpy.org/.\n\n"
},
{
"alpha_fraction": 0.5284552574157715,
"alphanum_fraction": 0.6097561120986938,
"avg_line_length": 14.5,
"blob_id": "f6e025e58fcae169887c27cd844c9b08b80318ad",
"content_id": "dd9430f0bf91e487fbd6ca375da0d210763c9ce0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 123,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 8,
"path": "/Matplotlib/plot.py",
"repo_name": "111Seven/Python",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\n\n#Create data to plot\na = [1, 2, 3, 4, 5]\nb = [1, 2, 3, 4, 5]\n\n#Plot the data\nplt.plot(a,b)"
}
] | 4 |
kleintk/CookieClickerKlonProzessManager | https://github.com/kleintk/CookieClickerKlonProzessManager | 5ff99af1da7dae4736a9f52a1eef804be4dcd3d5 | f4aa17f6b00fb87133991823968e021364d8d6bd | 7da94c6837c156cb5769b53af0d7a90fb2f61840 | refs/heads/master | 2021-01-19T11:37:27.632054 | 2014-04-29T14:29:51 | 2014-04-29T14:29:51 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.807947039604187,
"alphanum_fraction": 0.8145695328712463,
"avg_line_length": 20.714284896850586,
"blob_id": "58dff9b5a6016e5d96aa16cca3adc57348c3f5ce",
"content_id": "588b59c4df65cde2ab152de9bc5b28f383fd2ac8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 152,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 7,
"path": "/README.txt",
"repo_name": "kleintk/CookieClickerKlonProzessManager",
"src_encoding": "ISO-8859-1",
"text": "CookieClickerKlonProzessManager\n\nBeschreibung:\nEinfacher CookieClicker-Klon ohne Kekse aber mit Prozessen.\n- Abhängigkeiten:\n o Python3\n o PySide"
},
{
"alpha_fraction": 0.7120636105537415,
"alphanum_fraction": 0.7420220971107483,
"avg_line_length": 75.8713150024414,
"blob_id": "f8dc42363b459357bc024b4a2e412f2bd5a20a76",
"content_id": "59af7e826bfdd96e6264aa6035cdaa7cca40f69e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 28681,
"license_type": "no_license",
"max_line_length": 155,
"num_lines": 373,
"path": "/hauptfenster.py",
"repo_name": "kleintk/CookieClickerKlonProzessManager",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n# Form implementation generated from reading ui file '.\\hauptfenster.ui'\n#\n# Created: Thu Mar 27 20:44:04 2014\n# by: pyside-uic 0.2.15 running on PySide 1.2.1\n#\n# WARNING! All changes made in this file will be lost!\n\nfrom PySide import QtCore, QtGui\n\nclass Ui_MainWindow(object):\n def setupUi(self, MainWindow):\n MainWindow.setObjectName(\"MainWindow\")\n MainWindow.resize(1020, 620)\n icon = QtGui.QIcon()\n icon.addPixmap(QtGui.QPixmap(\":/bilder/processor.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n MainWindow.setWindowIcon(icon)\n self.centralwidget = QtGui.QWidget(MainWindow)\n self.centralwidget.setObjectName(\"centralwidget\")\n self.label = QtGui.QLabel(self.centralwidget)\n self.label.setGeometry(QtCore.QRect(30, 20, 81, 31))\n self.label.setStyleSheet(\"color: rgb(85, 170, 0);\\n\"\n\"font-size: 20px;\\n\"\n\"font-weight: 800;\")\n self.label.setObjectName(\"label\")\n self.pushButtonProzess = QtGui.QPushButton(self.centralwidget)\n self.pushButtonProzess.setGeometry(QtCore.QRect(60, 140, 111, 101))\n self.pushButtonProzess.setStyleSheet(\"\")\n self.pushButtonProzess.setObjectName(\"pushButtonProzess\")\n self.groupBox = QtGui.QGroupBox(self.centralwidget)\n self.groupBox.setGeometry(QtCore.QRect(310, 80, 321, 431))\n self.groupBox.setObjectName(\"groupBox\")\n self.groupBoxAssistenten = QtGui.QGroupBox(self.groupBox)\n self.groupBoxAssistenten.setGeometry(QtCore.QRect(20, 30, 281, 81))\n self.groupBoxAssistenten.setObjectName(\"groupBoxAssistenten\")\n self.label_22 = QtGui.QLabel(self.groupBoxAssistenten)\n self.label_22.setGeometry(QtCore.QRect(15, 20, 51, 20))\n self.label_22.setObjectName(\"label_22\")\n self.labelAnzahlAssitenten = QtGui.QLabel(self.groupBoxAssistenten)\n self.labelAnzahlAssitenten.setGeometry(QtCore.QRect(15, 50, 41, 20))\n self.labelAnzahlAssitenten.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n 
self.labelAnzahlAssitenten.setObjectName(\"labelAnzahlAssitenten\")\n self.label_24 = QtGui.QLabel(self.groupBoxAssistenten)\n self.label_24.setGeometry(QtCore.QRect(75, 20, 101, 20))\n self.label_24.setObjectName(\"label_24\")\n self.label_25 = QtGui.QLabel(self.groupBoxAssistenten)\n self.label_25.setGeometry(QtCore.QRect(195, 20, 81, 20))\n self.label_25.setObjectName(\"label_25\")\n self.labelAssistentenProzesseProSekunde = QtGui.QLabel(self.groupBoxAssistenten)\n self.labelAssistentenProzesseProSekunde.setGeometry(QtCore.QRect(95, 50, 71, 20))\n self.labelAssistentenProzesseProSekunde.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelAssistentenProzesseProSekunde.setObjectName(\"labelAssistentenProzesseProSekunde\")\n self.labelAssistentenProzesseGesammt = QtGui.QLabel(self.groupBoxAssistenten)\n self.labelAssistentenProzesseGesammt.setGeometry(QtCore.QRect(195, 50, 71, 20))\n self.labelAssistentenProzesseGesammt.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelAssistentenProzesseGesammt.setObjectName(\"labelAssistentenProzesseGesammt\")\n self.groupBoxDOS = QtGui.QGroupBox(self.groupBox)\n self.groupBoxDOS.setGeometry(QtCore.QRect(20, 130, 281, 81))\n self.groupBoxDOS.setObjectName(\"groupBoxDOS\")\n self.label_28 = QtGui.QLabel(self.groupBoxDOS)\n self.label_28.setGeometry(QtCore.QRect(15, 20, 51, 20))\n self.label_28.setObjectName(\"label_28\")\n self.labelAnzahlDOS = QtGui.QLabel(self.groupBoxDOS)\n self.labelAnzahlDOS.setGeometry(QtCore.QRect(15, 50, 41, 20))\n self.labelAnzahlDOS.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelAnzahlDOS.setObjectName(\"labelAnzahlDOS\")\n self.label_30 = QtGui.QLabel(self.groupBoxDOS)\n self.label_30.setGeometry(QtCore.QRect(75, 20, 101, 20))\n self.label_30.setObjectName(\"label_30\")\n self.label_31 = QtGui.QLabel(self.groupBoxDOS)\n self.label_31.setGeometry(QtCore.QRect(195, 20, 81, 
20))\n self.label_31.setObjectName(\"label_31\")\n self.labelDOSProzesseProSekunde = QtGui.QLabel(self.groupBoxDOS)\n self.labelDOSProzesseProSekunde.setGeometry(QtCore.QRect(95, 50, 71, 20))\n self.labelDOSProzesseProSekunde.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelDOSProzesseProSekunde.setObjectName(\"labelDOSProzesseProSekunde\")\n self.labelDOSProzesseGesammt = QtGui.QLabel(self.groupBoxDOS)\n self.labelDOSProzesseGesammt.setGeometry(QtCore.QRect(195, 50, 71, 20))\n self.labelDOSProzesseGesammt.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelDOSProzesseGesammt.setObjectName(\"labelDOSProzesseGesammt\")\n self.groupBoxServer = QtGui.QGroupBox(self.groupBox)\n self.groupBoxServer.setGeometry(QtCore.QRect(20, 230, 281, 81))\n self.groupBoxServer.setObjectName(\"groupBoxServer\")\n self.label_34 = QtGui.QLabel(self.groupBoxServer)\n self.label_34.setGeometry(QtCore.QRect(15, 20, 51, 20))\n self.label_34.setObjectName(\"label_34\")\n self.labelAnzahlServer = QtGui.QLabel(self.groupBoxServer)\n self.labelAnzahlServer.setGeometry(QtCore.QRect(15, 50, 41, 20))\n self.labelAnzahlServer.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelAnzahlServer.setObjectName(\"labelAnzahlServer\")\n self.label_36 = QtGui.QLabel(self.groupBoxServer)\n self.label_36.setGeometry(QtCore.QRect(75, 20, 101, 20))\n self.label_36.setObjectName(\"label_36\")\n self.label_37 = QtGui.QLabel(self.groupBoxServer)\n self.label_37.setGeometry(QtCore.QRect(195, 20, 81, 20))\n self.label_37.setObjectName(\"label_37\")\n self.labelServerProzesseProSekunde = QtGui.QLabel(self.groupBoxServer)\n self.labelServerProzesseProSekunde.setGeometry(QtCore.QRect(95, 50, 71, 20))\n self.labelServerProzesseProSekunde.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n 
self.labelServerProzesseProSekunde.setObjectName(\"labelServerProzesseProSekunde\")\n self.labelServerProzesseGesammt = QtGui.QLabel(self.groupBoxServer)\n self.labelServerProzesseGesammt.setGeometry(QtCore.QRect(195, 50, 71, 20))\n self.labelServerProzesseGesammt.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelServerProzesseGesammt.setObjectName(\"labelServerProzesseGesammt\")\n self.groupBoxMatrix = QtGui.QGroupBox(self.groupBox)\n self.groupBoxMatrix.setGeometry(QtCore.QRect(20, 330, 281, 81))\n self.groupBoxMatrix.setObjectName(\"groupBoxMatrix\")\n self.label_40 = QtGui.QLabel(self.groupBoxMatrix)\n self.label_40.setGeometry(QtCore.QRect(15, 20, 51, 20))\n self.label_40.setObjectName(\"label_40\")\n self.labelAnzahlMatrix = QtGui.QLabel(self.groupBoxMatrix)\n self.labelAnzahlMatrix.setGeometry(QtCore.QRect(15, 50, 41, 20))\n self.labelAnzahlMatrix.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelAnzahlMatrix.setObjectName(\"labelAnzahlMatrix\")\n self.label_42 = QtGui.QLabel(self.groupBoxMatrix)\n self.label_42.setGeometry(QtCore.QRect(75, 20, 101, 20))\n self.label_42.setObjectName(\"label_42\")\n self.label_43 = QtGui.QLabel(self.groupBoxMatrix)\n self.label_43.setGeometry(QtCore.QRect(195, 20, 81, 20))\n self.label_43.setObjectName(\"label_43\")\n self.labelMatrixProzesseProSekunde = QtGui.QLabel(self.groupBoxMatrix)\n self.labelMatrixProzesseProSekunde.setGeometry(QtCore.QRect(95, 50, 71, 20))\n self.labelMatrixProzesseProSekunde.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelMatrixProzesseProSekunde.setObjectName(\"labelMatrixProzesseProSekunde\")\n self.labelMatrixProzesseGesammt = QtGui.QLabel(self.groupBoxMatrix)\n self.labelMatrixProzesseGesammt.setGeometry(QtCore.QRect(195, 50, 71, 20))\n self.labelMatrixProzesseGesammt.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n 
self.labelMatrixProzesseGesammt.setObjectName(\"labelMatrixProzesseGesammt\")\n self.groupBox_2 = QtGui.QGroupBox(self.centralwidget)\n self.groupBox_2.setGeometry(QtCore.QRect(660, 80, 321, 431))\n self.groupBox_2.setObjectName(\"groupBox_2\")\n self.frameAssistent = QtGui.QFrame(self.groupBox_2)\n self.frameAssistent.setGeometry(QtCore.QRect(10, 30, 291, 81))\n self.frameAssistent.setFrameShape(QtGui.QFrame.StyledPanel)\n self.frameAssistent.setFrameShadow(QtGui.QFrame.Raised)\n self.frameAssistent.setObjectName(\"frameAssistent\")\n self.pushButtonAssistentHinzufuegen = QtGui.QPushButton(self.frameAssistent)\n self.pushButtonAssistentHinzufuegen.setGeometry(QtCore.QRect(10, 30, 131, 41))\n icon1 = QtGui.QIcon()\n icon1.addPixmap(QtGui.QPixmap(\":/bilder/assistent.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.pushButtonAssistentHinzufuegen.setIcon(icon1)\n self.pushButtonAssistentHinzufuegen.setIconSize(QtCore.QSize(32, 32))\n self.pushButtonAssistentHinzufuegen.setObjectName(\"pushButtonAssistentHinzufuegen\")\n self.label_4 = QtGui.QLabel(self.frameAssistent)\n self.label_4.setGeometry(QtCore.QRect(10, 10, 71, 16))\n self.label_4.setObjectName(\"label_4\")\n self.labelPreisAssistent = QtGui.QLabel(self.frameAssistent)\n self.labelPreisAssistent.setGeometry(QtCore.QRect(200, 10, 81, 20))\n self.labelPreisAssistent.setObjectName(\"labelPreisAssistent\")\n self.labelProduktionAssistent = QtGui.QLabel(self.frameAssistent)\n self.labelProduktionAssistent.setGeometry(QtCore.QRect(200, 50, 71, 16))\n self.labelProduktionAssistent.setObjectName(\"labelProduktionAssistent\")\n self.frameDOS = QtGui.QFrame(self.groupBox_2)\n self.frameDOS.setGeometry(QtCore.QRect(10, 120, 291, 81))\n self.frameDOS.setFrameShape(QtGui.QFrame.StyledPanel)\n self.frameDOS.setFrameShadow(QtGui.QFrame.Raised)\n self.frameDOS.setObjectName(\"frameDOS\")\n self.pushButtonDOSHinzufuegen = QtGui.QPushButton(self.frameDOS)\n self.pushButtonDOSHinzufuegen.setGeometry(QtCore.QRect(10, 30, 131, 
41))\n icon2 = QtGui.QIcon()\n icon2.addPixmap(QtGui.QPixmap(\":/bilder/dos.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.pushButtonDOSHinzufuegen.setIcon(icon2)\n self.pushButtonDOSHinzufuegen.setIconSize(QtCore.QSize(32, 32))\n self.pushButtonDOSHinzufuegen.setObjectName(\"pushButtonDOSHinzufuegen\")\n self.label_7 = QtGui.QLabel(self.frameDOS)\n self.label_7.setGeometry(QtCore.QRect(10, 10, 71, 16))\n self.label_7.setObjectName(\"label_7\")\n self.labelPreisDOS = QtGui.QLabel(self.frameDOS)\n self.labelPreisDOS.setGeometry(QtCore.QRect(200, 10, 81, 20))\n self.labelPreisDOS.setObjectName(\"labelPreisDOS\")\n self.labelProduktionDOS = QtGui.QLabel(self.frameDOS)\n self.labelProduktionDOS.setGeometry(QtCore.QRect(200, 50, 71, 16))\n self.labelProduktionDOS.setObjectName(\"labelProduktionDOS\")\n self.frameServer = QtGui.QFrame(self.groupBox_2)\n self.frameServer.setGeometry(QtCore.QRect(10, 220, 291, 81))\n self.frameServer.setFrameShape(QtGui.QFrame.StyledPanel)\n self.frameServer.setFrameShadow(QtGui.QFrame.Raised)\n self.frameServer.setObjectName(\"frameServer\")\n self.pushButtonServerHinzufuegen = QtGui.QPushButton(self.frameServer)\n self.pushButtonServerHinzufuegen.setGeometry(QtCore.QRect(10, 30, 131, 41))\n icon3 = QtGui.QIcon()\n icon3.addPixmap(QtGui.QPixmap(\":/bilder/server.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.pushButtonServerHinzufuegen.setIcon(icon3)\n self.pushButtonServerHinzufuegen.setIconSize(QtCore.QSize(32, 32))\n self.pushButtonServerHinzufuegen.setObjectName(\"pushButtonServerHinzufuegen\")\n self.label_10 = QtGui.QLabel(self.frameServer)\n self.label_10.setGeometry(QtCore.QRect(10, 10, 71, 16))\n self.label_10.setObjectName(\"label_10\")\n self.labelPreisServer = QtGui.QLabel(self.frameServer)\n self.labelPreisServer.setGeometry(QtCore.QRect(200, 10, 81, 20))\n self.labelPreisServer.setObjectName(\"labelPreisServer\")\n self.labelProduktionServer = QtGui.QLabel(self.frameServer)\n 
self.labelProduktionServer.setGeometry(QtCore.QRect(200, 50, 81, 16))\n self.labelProduktionServer.setObjectName(\"labelProduktionServer\")\n self.frameMatrix = QtGui.QFrame(self.groupBox_2)\n self.frameMatrix.setGeometry(QtCore.QRect(10, 320, 291, 81))\n self.frameMatrix.setFrameShape(QtGui.QFrame.StyledPanel)\n self.frameMatrix.setFrameShadow(QtGui.QFrame.Raised)\n self.frameMatrix.setObjectName(\"frameMatrix\")\n self.pushButtonMatrixHinzufuegen = QtGui.QPushButton(self.frameMatrix)\n self.pushButtonMatrixHinzufuegen.setGeometry(QtCore.QRect(10, 30, 131, 41))\n icon4 = QtGui.QIcon()\n icon4.addPixmap(QtGui.QPixmap(\":/bilder/matrix.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.pushButtonMatrixHinzufuegen.setIcon(icon4)\n self.pushButtonMatrixHinzufuegen.setIconSize(QtCore.QSize(32, 32))\n self.pushButtonMatrixHinzufuegen.setObjectName(\"pushButtonMatrixHinzufuegen\")\n self.label_19 = QtGui.QLabel(self.frameMatrix)\n self.label_19.setGeometry(QtCore.QRect(10, 10, 71, 16))\n self.label_19.setObjectName(\"label_19\")\n self.labelPreisMatrix = QtGui.QLabel(self.frameMatrix)\n self.labelPreisMatrix.setGeometry(QtCore.QRect(200, 10, 81, 20))\n self.labelPreisMatrix.setObjectName(\"labelPreisMatrix\")\n self.labelProduktionMatrix = QtGui.QLabel(self.frameMatrix)\n self.labelProduktionMatrix.setGeometry(QtCore.QRect(190, 50, 91, 20))\n self.labelProduktionMatrix.setObjectName(\"labelProduktionMatrix\")\n self.label_2 = QtGui.QLabel(self.centralwidget)\n self.label_2.setGeometry(QtCore.QRect(30, 70, 111, 16))\n self.label_2.setObjectName(\"label_2\")\n self.labelProzesseProSekunde = QtGui.QLabel(self.centralwidget)\n self.labelProzesseProSekunde.setGeometry(QtCore.QRect(150, 70, 101, 20))\n self.labelProzesseProSekunde.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelProzesseProSekunde.setObjectName(\"labelProzesseProSekunde\")\n self.labelScore = QtGui.QLabel(self.centralwidget)\n 
self.labelScore.setGeometry(QtCore.QRect(130, 30, 121, 20))\n font = QtGui.QFont()\n font.setPointSize(14)\n self.labelScore.setFont(font)\n self.labelScore.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.labelScore.setObjectName(\"labelScore\")\n MainWindow.setCentralWidget(self.centralwidget)\n self.menubar = QtGui.QMenuBar(MainWindow)\n self.menubar.setGeometry(QtCore.QRect(0, 0, 1020, 21))\n self.menubar.setObjectName(\"menubar\")\n self.menuDatei = QtGui.QMenu(self.menubar)\n self.menuDatei.setObjectName(\"menuDatei\")\n self.menuInfo = QtGui.QMenu(self.menubar)\n self.menuInfo.setObjectName(\"menuInfo\")\n self.menuHilfe = QtGui.QMenu(self.menuInfo)\n self.menuHilfe.setObjectName(\"menuHilfe\")\n MainWindow.setMenuBar(self.menubar)\n self.statusbar = QtGui.QStatusBar(MainWindow)\n self.statusbar.setObjectName(\"statusbar\")\n MainWindow.setStatusBar(self.statusbar)\n self.toolBar = QtGui.QToolBar(MainWindow)\n self.toolBar.setToolButtonStyle(QtCore.Qt.ToolButtonIconOnly)\n self.toolBar.setObjectName(\"toolBar\")\n MainWindow.addToolBar(QtCore.Qt.TopToolBarArea, self.toolBar)\n self.actionNeues_Spiel = QtGui.QAction(MainWindow)\n icon5 = QtGui.QIcon()\n icon5.addPixmap(QtGui.QPixmap(\":/icons/start.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.actionNeues_Spiel.setIcon(icon5)\n self.actionNeues_Spiel.setObjectName(\"actionNeues_Spiel\")\n self.actionSpiel_speichern = QtGui.QAction(MainWindow)\n icon6 = QtGui.QIcon()\n icon6.addPixmap(QtGui.QPixmap(\":/icons/speichern.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.actionSpiel_speichern.setIcon(icon6)\n self.actionSpiel_speichern.setObjectName(\"actionSpiel_speichern\")\n self.actionSpiel_laden = QtGui.QAction(MainWindow)\n icon7 = QtGui.QIcon()\n icon7.addPixmap(QtGui.QPixmap(\":/icons/laden.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.actionSpiel_laden.setIcon(icon7)\n self.actionSpiel_laden.setObjectName(\"actionSpiel_laden\")\n self.actionBeenden = 
QtGui.QAction(MainWindow)\n icon8 = QtGui.QIcon()\n icon8.addPixmap(QtGui.QPixmap(\":/icons/beenden.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.actionBeenden.setIcon(icon8)\n self.actionBeenden.setObjectName(\"actionBeenden\")\n self.actionAnleitung = QtGui.QAction(MainWindow)\n self.actionAnleitung.setObjectName(\"actionAnleitung\")\n self.actionOnline_Dokumentation = QtGui.QAction(MainWindow)\n self.actionOnline_Dokumentation.setObjectName(\"actionOnline_Dokumentation\")\n self.actionUeber = QtGui.QAction(MainWindow)\n icon9 = QtGui.QIcon()\n icon9.addPixmap(QtGui.QPixmap(\":/icons/ueber.png\"), QtGui.QIcon.Normal, QtGui.QIcon.Off)\n self.actionUeber.setIcon(icon9)\n self.actionUeber.setObjectName(\"actionUeber\")\n self.menuDatei.addAction(self.actionNeues_Spiel)\n self.menuDatei.addSeparator()\n self.menuDatei.addAction(self.actionSpiel_speichern)\n self.menuDatei.addAction(self.actionSpiel_laden)\n self.menuDatei.addSeparator()\n self.menuDatei.addAction(self.actionBeenden)\n self.menuHilfe.addAction(self.actionAnleitung)\n self.menuHilfe.addAction(self.actionOnline_Dokumentation)\n self.menuInfo.addAction(self.menuHilfe.menuAction())\n self.menuInfo.addAction(self.actionUeber)\n self.menubar.addAction(self.menuDatei.menuAction())\n self.menubar.addAction(self.menuInfo.menuAction())\n self.toolBar.addAction(self.actionNeues_Spiel)\n self.toolBar.addSeparator()\n self.toolBar.addAction(self.actionSpiel_speichern)\n self.toolBar.addAction(self.actionSpiel_laden)\n\n self.retranslateUi(MainWindow)\n QtCore.QMetaObject.connectSlotsByName(MainWindow)\n\n def retranslateUi(self, MainWindow):\n MainWindow.setWindowTitle(QtGui.QApplication.translate(\"MainWindow\", \"MainWindow\", None, QtGui.QApplication.UnicodeUTF8))\n self.label.setText(QtGui.QApplication.translate(\"MainWindow\", \"Score:\", None, QtGui.QApplication.UnicodeUTF8))\n self.pushButtonProzess.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozess starten\", None, 
QtGui.QApplication.UnicodeUTF8))\n self.groupBox.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Besitztümer\", None, QtGui.QApplication.UnicodeUTF8))\n self.groupBoxAssistenten.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Assistenten:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_22.setText(QtGui.QApplication.translate(\"MainWindow\", \"Anzahl:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelAnzahlAssitenten.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_24.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse / Sekunde:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_25.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse ges.:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelAssistentenProzesseProSekunde.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelAssistentenProzesseGesammt.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.groupBoxDOS.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"DOS:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_28.setText(QtGui.QApplication.translate(\"MainWindow\", \"Anzahl:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelAnzahlDOS.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_30.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse / Sekunde:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_31.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse ges.:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelDOSProzesseProSekunde.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelDOSProzesseGesammt.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n 
self.groupBoxServer.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Server:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_34.setText(QtGui.QApplication.translate(\"MainWindow\", \"Anzahl:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelAnzahlServer.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_36.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse / Sekunde:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_37.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse ges.:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelServerProzesseProSekunde.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelServerProzesseGesammt.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.groupBoxMatrix.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Matrix:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_40.setText(QtGui.QApplication.translate(\"MainWindow\", \"Anzahl:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelAnzahlMatrix.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_42.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse / Sekunde:\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_43.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse ges.:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelMatrixProzesseProSekunde.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelMatrixProzesseGesammt.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.groupBox_2.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Baumenü\", None, QtGui.QApplication.UnicodeUTF8))\n 
self.pushButtonAssistentHinzufuegen.setText(QtGui.QApplication.translate(\"MainWindow\", \"hinzufügen\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_4.setText(QtGui.QApplication.translate(\"MainWindow\", \"Assistent\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelPreisAssistent.setText(QtGui.QApplication.translate(\"MainWindow\", \"Preis: $ 100\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelProduktionAssistent.setText(QtGui.QApplication.translate(\"MainWindow\", \"+1 / Sekunde\", None, QtGui.QApplication.UnicodeUTF8))\n self.pushButtonDOSHinzufuegen.setText(QtGui.QApplication.translate(\"MainWindow\", \"hinzufügen\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_7.setText(QtGui.QApplication.translate(\"MainWindow\", \"DOS\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelPreisDOS.setText(QtGui.QApplication.translate(\"MainWindow\", \"Preis: $ 500\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelProduktionDOS.setText(QtGui.QApplication.translate(\"MainWindow\", \"+30 / Sekunde\", None, QtGui.QApplication.UnicodeUTF8))\n self.pushButtonServerHinzufuegen.setText(QtGui.QApplication.translate(\"MainWindow\", \"hinzufügen\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_10.setText(QtGui.QApplication.translate(\"MainWindow\", \"Server\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelPreisServer.setText(QtGui.QApplication.translate(\"MainWindow\", \"Preis: $ 5000\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelProduktionServer.setText(QtGui.QApplication.translate(\"MainWindow\", \"+200 / Sekunde\", None, QtGui.QApplication.UnicodeUTF8))\n self.pushButtonMatrixHinzufuegen.setText(QtGui.QApplication.translate(\"MainWindow\", \"hinzufügen\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_19.setText(QtGui.QApplication.translate(\"MainWindow\", \"Matrix\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelPreisMatrix.setText(QtGui.QApplication.translate(\"MainWindow\", \"Preis: $ 25000\", None, 
QtGui.QApplication.UnicodeUTF8))\n self.labelProduktionMatrix.setText(QtGui.QApplication.translate(\"MainWindow\", \"+2500 / Sekunde\", None, QtGui.QApplication.UnicodeUTF8))\n self.label_2.setText(QtGui.QApplication.translate(\"MainWindow\", \"Prozesse / Sekunde:\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelProzesseProSekunde.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.labelScore.setText(QtGui.QApplication.translate(\"MainWindow\", \"0\", None, QtGui.QApplication.UnicodeUTF8))\n self.menuDatei.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Datei\", None, QtGui.QApplication.UnicodeUTF8))\n self.menuInfo.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Info\", None, QtGui.QApplication.UnicodeUTF8))\n self.menuHilfe.setTitle(QtGui.QApplication.translate(\"MainWindow\", \"Hilfe\", None, QtGui.QApplication.UnicodeUTF8))\n self.toolBar.setWindowTitle(QtGui.QApplication.translate(\"MainWindow\", \"toolBar\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionNeues_Spiel.setText(QtGui.QApplication.translate(\"MainWindow\", \"Neues Spiel\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionNeues_Spiel.setToolTip(QtGui.QApplication.translate(\"MainWindow\", \"Neues Spiel\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionNeues_Spiel.setStatusTip(QtGui.QApplication.translate(\"MainWindow\", \"Neues Spiel starten.\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionNeues_Spiel.setShortcut(QtGui.QApplication.translate(\"MainWindow\", \"Ctrl+N\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionSpiel_speichern.setText(QtGui.QApplication.translate(\"MainWindow\", \"Spiel speichern\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionSpiel_speichern.setStatusTip(QtGui.QApplication.translate(\"MainWindow\", \"Spiel speichern.\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionSpiel_speichern.setShortcut(QtGui.QApplication.translate(\"MainWindow\", \"Ctrl+S\", None, 
QtGui.QApplication.UnicodeUTF8))\n self.actionSpiel_laden.setText(QtGui.QApplication.translate(\"MainWindow\", \"Spiel laden\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionSpiel_laden.setStatusTip(QtGui.QApplication.translate(\"MainWindow\", \"Spiel laden.\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionSpiel_laden.setShortcut(QtGui.QApplication.translate(\"MainWindow\", \"Ctrl+L\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionBeenden.setText(QtGui.QApplication.translate(\"MainWindow\", \"Beenden\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionBeenden.setStatusTip(QtGui.QApplication.translate(\"MainWindow\", \"Programm beenden.\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionBeenden.setShortcut(QtGui.QApplication.translate(\"MainWindow\", \"Ctrl+Q\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionAnleitung.setText(QtGui.QApplication.translate(\"MainWindow\", \"Anleitung\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionOnline_Dokumentation.setText(QtGui.QApplication.translate(\"MainWindow\", \"Online-Dokumentation\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionUeber.setText(QtGui.QApplication.translate(\"MainWindow\", \"Über\", None, QtGui.QApplication.UnicodeUTF8))\n self.actionUeber.setStatusTip(QtGui.QApplication.translate(\"MainWindow\", \"Informationen über das Programm.\", None, QtGui.QApplication.UnicodeUTF8))\n\nimport icons_rc\n"
},
{
"alpha_fraction": 0.6046745181083679,
"alphanum_fraction": 0.6142609119415283,
"avg_line_length": 35.3089714050293,
"blob_id": "4b528c7c84d32e6cc011be4bddc6b37ad1555cc7",
"content_id": "088d539111462792c37ea8a08be55f573be1373a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10959,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 301,
"path": "/program.py",
"repo_name": "kleintk/CookieClickerKlonProzessManager",
"src_encoding": "UTF-8",
"text": "\n# -*- coding: utf-8 -*-\n'''\nCreated on 27.03.2014\n\n@author: Phate\n'''\n\n\n\n\nfrom PySide.QtCore import *\nfrom PySide.QtGui import *\nimport sys\nimport time\nimport pickle\n\nimport hauptfenster\n\n\nclass MainWindow(QMainWindow, hauptfenster.Ui_MainWindow):\n \n # SCORE Werte\n SCORE = 600\n SCOREassistenten = 0\n SCOREdos = 0\n SCOREserver = 0\n SCOREmatrix = 0\n # Baumenue KONSTANTEN\n PREISassistent = 200\n PRODUKTIONassistent = 1\n PREISdos = 500\n PRODUKTIONdos = 4\n PREISserver = 3000\n PRODUKTIONserver = 10\n PREISmatrix = 10000\n PRODUKTIONmatrix = 40\n # Klick KONSTANTE\n PRODUKTIONklick = 1\n \n BESITZTUEMER = {\"assistent\": 0, \"dos\": 0, \"server\": 0, \"matrix\": 0}\n \n def __init__(self, parent=None):\n super(MainWindow, self).__init__(parent)\n self.setupUi(self)\n \n self.setWindowTitle(\"Prozess Manager\")\n # alle Besitztümer verstecken\n #self.groupBoxAssistenten.hide()\n #self.groupBoxDOS.hide()\n #self.groupBoxServer.hide()\n #self.groupBoxMatrix.hide()\n \n # alle Baumöglichekiten verstecken \n #self.frameAssistent.hide()\n #self.frameDOS.hide()\n #self.frameServer.hide()\n #self.frameMatrix.hide()\n \n # Kosten und Produktion im Interface anpassen\n self.labelPreisAssistent.setText(\"Preis : $ \"+str(self.PREISassistent))\n self.labelProduktionAssistent.setText(\"+\"+str(self.PRODUKTIONassistent)+\" / Sekunde\")\n self.labelPreisDOS.setText(\"Preis : $ \"+str(self.PREISdos))\n self.labelProduktionDOS.setText(\"+\"+str(self.PRODUKTIONdos)+\" / Sekunde\")\n self.labelPreisServer.setText(\"Preis : $ \"+str(self.PREISserver))\n self.labelProduktionServer.setText(\"+\"+str(self.PRODUKTIONserver)+\" / Sekunde\")\n self.labelPreisMatrix.setText(\"Preis : $ \"+str(self.PREISmatrix))\n self.labelProduktionMatrix.setText(\"+\"+str(self.PRODUKTIONmatrix)+\" / Sekunde\")\n \n # Buttons verknuepfen\n self.pushButtonAssistentHinzufuegen.clicked.connect(self.assistentHinzufuegen)\n 
self.pushButtonDOSHinzufuegen.clicked.connect(self.dosHinzufuegen)\n self.pushButtonServerHinzufuegen.clicked.connect(self.serverHinzufuegen)\n self.pushButtonMatrixHinzufuegen.clicked.connect(self.matrixHinzufuegen)\n \n self.pushButtonProzess.clicked.connect(self.prozessButtonGeklickt)\n \n # startet den Thread der konstant ein showSignal emitet\n self.schuftet = scoreThread()\n self.schuftet.start() \n self.schuftet.showSignal.connect(self.scoreAusgabe)\n \n # startet den Thread der konstant die automatischen einnahmen berechnet\n self.schuftetAuch = besitztuemerProduktionsThread()\n self.schuftetAuch.start()\n \n # buttons des menues triggern\n self.actionBeenden.triggered.connect(self.programmBeenden)\n self.actionUeber.triggered.connect(self.ueberKlicken)\n self.actionSpiel_speichern.triggered.connect(self.speichern)\n self.actionSpiel_laden.triggered.connect(self.laden)\n \n \n # gibt konstant die punkte am scoreboard aus\n def scoreAusgabe(self):\n self.labelScore.setText(str(self.SCORE))\n self.labelAssistentenProzesseGesammt.setText(str(self.SCOREassistenten))\n self.labelDOSProzesseGesammt.setText(str(self.SCOREdos))\n self.labelServerProzesseGesammt.setText(str(self.SCOREserver))\n self.labelMatrixProzesseGesammt.setText(str(self.SCOREmatrix))\n \n def prozessButtonGeklickt(self):\n self.SCORE +=self.PRODUKTIONklick\n \n \n def assistentHinzufuegen(self):\n if self.SCORE >= self.PREISassistent:\n self.SCORE -= self.PREISassistent\n self.BESITZTUEMER[\"assistent\"] += 1\n zahl = int(self.labelAnzahlAssitenten.text())\n self.labelAnzahlAssitenten.setText(str(zahl+1))\n \n zahl = int(self.labelAssistentenProzesseProSekunde.text())\n self.labelAssistentenProzesseProSekunde.setText(str(zahl + self.PRODUKTIONassistent))\n zahl = int(self.labelProzesseProSekunde.text())\n self.labelProzesseProSekunde.setText(str(zahl + self.PRODUKTIONassistent))\n \n self.PREISassistent = round(self.PREISassistent * 1.15)\n self.labelPreisAssistent.setText(\"Preis : $ 
\"+str(self.PREISassistent))\n else:\n self.statusbar.showMessage(\"Zu wenig Prozesse für den Kauf.\", 2000)\n \n \n def dosHinzufuegen(self):\n if self.SCORE >= self.PREISdos:\n self.SCORE -= self.PREISdos\n self.BESITZTUEMER[\"dos\"] += 1\n zahl = int(self.labelAnzahlDOS.text())\n self.labelAnzahlDOS.setText(str(zahl+1))\n \n zahl = int(self.labelDOSProzesseProSekunde.text())\n self.labelDOSProzesseProSekunde.setText(str(zahl + self.PRODUKTIONdos))\n zahl = int(self.labelProzesseProSekunde.text())\n self.labelProzesseProSekunde.setText(str(zahl + self.PRODUKTIONdos))\n \n self.PREISdos = round(self.PREISdos * 1.15)\n self.labelPreisDOS.setText(\"Preis : $ \"+str(self.PREISdos))\n else:\n self.statusbar.showMessage(\"Zu wenig Prozesse für den Kauf.\", 2000)\n\n def serverHinzufuegen(self):\n if self.SCORE >= self.PREISserver:\n self.SCORE -= self.PREISserver\n self.BESITZTUEMER[\"server\"] += 1\n zahl = int(self.labelAnzahlServer.text())\n self.labelAnzahlServer.setText(str(zahl+1))\n \n zahl = int(self.labelServerProzesseProSekunde.text())\n self.labelServerProzesseProSekunde.setText(str(zahl + self.PRODUKTIONserver))\n zahl = int(self.labelProzesseProSekunde.text())\n self.labelProzesseProSekunde.setText(str(zahl + self.PRODUKTIONserver))\n \n self.PREISserver = round(self.PREISserver * 1.15)\n self.labelPreisServer.setText(\"Preis : $ \"+str(self.PREISserver))\n else:\n self.statusbar.showMessage(\"Zu wenig Prozesse für den Kauf.\", 2000)\n\n def matrixHinzufuegen(self):\n if self.SCORE >= self.PREISmatrix:\n self.SCORE -= self.PREISmatrix\n self.BESITZTUEMER[\"matrix\"] += 1\n zahl = int(self.labelAnzahlMatrix.text())\n self.labelAnzahlMatrix.setText(str(zahl+1))\n \n zahl = int(self.labelMatrixProzesseProSekunde.text())\n self.labelMatrixProzesseProSekunde.setText(str(zahl + self.PRODUKTIONmatrix))\n zahl = int(self.labelProzesseProSekunde.text())\n self.labelProzesseProSekunde.setText(str(zahl + self.PRODUKTIONmatrix))\n \n self.PREISmatrix = 
round(self.PREISmatrix * 1.15)\n self.labelPreisMatrix.setText(\"Preis : $ \"+str(self.PREISmatrix))\n else:\n self.statusbar.showMessage(\"Zu wenig Prozesse für den Kauf.\", 2000)\n\n \n def programmBeenden(self):\n self.schuftet.terminate()\n self.schuftetAuch.terminate()\n sys.exit()\n \n def ueberKlicken(self):\n QMessageBox.information(self, \"Prozess Manager!\", \"Coded by Tkinter/Phate\")\n \n def speichern(self):\n try:\n fobj = open(\"savegame.sav\", \"wb\")\n liste = []\n liste.append(self.SCORE) # 0\n liste.append(self.SCOREassistenten)\n liste.append(self.SCOREdos)\n liste.append(self.SCOREserver)\n liste.append(self.SCOREmatrix)\n liste.append(self.PREISassistent)\n liste.append(self.PRODUKTIONassistent)\n liste.append(self.PREISdos)\n liste.append(self.PRODUKTIONdos)\n liste.append(self.PREISserver)\n liste.append(self.PRODUKTIONserver)\n liste.append(self.PREISmatrix)\n liste.append(self.PRODUKTIONmatrix)\n liste.append(self.PRODUKTIONklick)\n liste.append(self.BESITZTUEMER) # 14 \n pickle.dump(liste, fobj)\n QMessageBox.information(self, \"Prozess Manager\", \"Gespeichert.\")\n fobj.close()\n except:\n QMessageBox.warning(self, \"Prozess Manager\", \"Fehler beim Speichern.\")\n \n def laden(self):\n try:\n fobj = open(\"savegame.sav\", \"rb\")\n liste = pickle.load(fobj)\n fobj.close()\n self.SCORE = liste[0]\n self.SCOREassistenten = liste[1]\n self.SCOREdos = liste[2]\n self.SCOREserver = liste[3]\n self.SCOREmatrix = liste[4]\n self.PREISassistent = liste[5]\n self.PRODUKTIONassistent = liste[6]\n self.PREISdos = liste[7]\n self.PRODUKTIONdos = liste[8]\n self.PREISserver = liste[9]\n self.PRODUKTIONserver = liste[10]\n self.PREISmatrix = liste[11]\n self.PRODUKTIONmatrix = liste[12]\n self.PRODUKTIONklick = liste[13]\n self.BESITZTUEMER = liste[14]\n \n #self.labelAnzahlAssitenten.setText(self.BESITZTUEMER[\"assistenten\"])\n except:\n QMessageBox.warning(self, \"Prozess Manager\", \"Fehler beim Laden.\")\n \n \n\nclass scoreThread(QThread):\n 
\n showSignal = Signal()\n \n def __init__(self, parent=None):\n super(scoreThread, self).__init__(parent)\n \n def run(self): # override der run-methode des QThread\n while True:\n time.sleep(0.1)\n self.showSignal.emit()\n \n\nclass besitztuemerProduktionsThread(QThread):\n \n \n def __init__(self, parent=None):\n super(besitztuemerProduktionsThread, self).__init__(parent)\n \n def run(self): # override der run-methode des QThread\n while True:\n time.sleep(1)\n zusatzSCORE = 0\n localDICT = form.BESITZTUEMER\n \n anzahlAssisten = localDICT[\"assistent\"]\n anzahlDOS = localDICT[\"dos\"]\n anzahlServer = localDICT[\"server\"]\n anzahlMatrix = localDICT[\"matrix\"]\n \n for i in range(anzahlAssisten):\n zusatzSCORE += form.PRODUKTIONassistent\n form.SCOREassistenten +=form.PRODUKTIONassistent\n \n for i in range(anzahlDOS):\n zusatzSCORE += form.PRODUKTIONdos\n form.SCOREdos +=form.PRODUKTIONdos\n\n for i in range(anzahlServer):\n zusatzSCORE += form.PRODUKTIONserver\n form.SCOREserver +=form.PRODUKTIONserver\n \n for i in range(anzahlMatrix):\n zusatzSCORE += form.PRODUKTIONmatrix\n form.SCOREmatrix +=form.PRODUKTIONmatrix\n\n form.SCORE += zusatzSCORE\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \napp = QApplication(sys.argv)\nform = MainWindow()\nform.show()\napp.exec_()\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
}
] | 3 |
kc1997/NaukariApp | https://github.com/kc1997/NaukariApp | 48b5d61069df4987b0442cf28fa301276ee4de11 | 706b3dd1f4099e9fbb6c0e97440df05478c4ed98 | 0fad6d067f5ff9c50e1202116ed547dca8ed5ac1 | refs/heads/master | 2020-05-31T18:25:53.696060 | 2019-06-05T17:03:05 | 2019-06-05T17:03:05 | 190,434,085 | 0 | 0 | null | 2019-06-05T16:54:46 | 2019-06-05T17:03:08 | 2020-06-05T21:14:24 | HTML | [
{
"alpha_fraction": 0.7438016533851624,
"alphanum_fraction": 0.7438016533851624,
"avg_line_length": 17.83333396911621,
"blob_id": "aa6abf57df6542dcc754ea8829eaa61e5b9d4b42",
"content_id": "1a570a06d012b6b290763719bd68fcea514fdbc5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 121,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 6,
"path": "/admin.py",
"repo_name": "kc1997/NaukariApp",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\r\nfrom .models import *\r\n\r\n\r\nadmin.site.register(Register)\r\nadmin.site.register(Jobs)\r\n\r\n"
},
{
"alpha_fraction": 0.6736401915550232,
"alphanum_fraction": 0.6736401915550232,
"avg_line_length": 23.263158798217773,
"blob_id": "b86f7ebdfbf38e05a7d7c520f1595dc9728ca410",
"content_id": "3fccf68cedd24be597ff187abe6d6bee0dc5a0ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 478,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 19,
"path": "/urls.py",
"repo_name": "kc1997/NaukariApp",
"src_encoding": "UTF-8",
"text": "from django.urls import path\r\nfrom . import views\r\nfrom django.conf import settings\r\nfrom django.conf.urls.static import static\r\n\r\napp_name = 'Naukriapp'\r\n\r\nurlpatterns = [\r\n path('', views.home, name='home'),\r\n path('register/', views.register, name='register'),\r\n path('jobs/', views.job, name='job'),\r\n path('contact/', views.contact, name='contact')\r\n\r\n\r\n\r\n]\r\n\r\nif settings.DEBUG:\r\n urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)"
}
] | 2 |
bmustiata/termcolor_util | https://github.com/bmustiata/termcolor_util | e5fec786ee4a0d23a30bc9b3e015b97c783a79f5 | 957c8abc02763620629d222a74705d48b4faefe2 | b91e93a3fbfee130c4e5607bd2b5a856e96a4a8f | refs/heads/master | 2021-10-22T14:23:13.600980 | 2021-10-12T21:40:19 | 2021-10-12T21:40:19 | 140,620,070 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6052631735801697,
"alphanum_fraction": 0.6052631735801697,
"avg_line_length": 17.627450942993164,
"blob_id": "0319dc7555b4f5fa694e630906605aac98d51994",
"content_id": "7f8df164d8282a8af2d93c9ae89e149868209eb1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1900,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 102,
"path": "/termcolor_util/__init__.py",
"repo_name": "bmustiata/termcolor_util",
"src_encoding": "UTF-8",
"text": "from termcolor import colored\nimport sys\n\n\ndef yellow(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"yellow\", attrs=attrs)\n\n\ndef green(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"green\", attrs=attrs)\n\n\ndef blue(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"blue\", attrs=attrs)\n\n\ndef red(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"red\", attrs=attrs)\n\n\ndef gray(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"grey\", attrs=attrs)\n\n\ndef cyan(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"cyan\", attrs=attrs)\n\n\ndef magenta(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"magenta\", attrs=attrs)\n\n\ndef white(text: str, bold=False, underline=False) -> str:\n attrs = []\n\n if bold:\n attrs.append(\"bold\")\n\n if underline:\n attrs.append(\"underline\")\n\n return colored(text, color=\"white\", attrs=attrs)\n\n\ndef eprint(*args) -> None:\n print(*args, file=sys.stderr)\n"
},
{
"alpha_fraction": 0.6247086524963379,
"alphanum_fraction": 0.6247086524963379,
"avg_line_length": 22.83333396911621,
"blob_id": "3d5d4557dcf0af9ce207ec7514cfcff15668a098",
"content_id": "a2b4ad63dadbb57b421ebe2aeab3b81a0d8a5bc8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 858,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 36,
"path": "/README.rst",
"repo_name": "bmustiata/termcolor_util",
"src_encoding": "UTF-8",
"text": "``termcolor_util`` is a set of functions on top of termcolor for every\nsingle color.\n\nInstallation\n============\n\n.. code:: sh\n\n    pip install termcolor_util\n\nFunctions\n=========\n\n.. code:: python\n\n    def yellow(text: str, bold=False, underline=False) -> str: ...\n\n    def green(text: str, bold=False, underline=False) -> str: ...\n\n    def blue(text: str, bold=False, underline=False) -> str: ...\n\n    def red(text: str, bold=False, underline=False) -> str: ...\n\n    def gray(text: str, bold=False, underline=False) -> str: ...\n\n    def cyan(text: str, bold=False, underline=False) -> str: ...\n\n    def magenta(text: str, bold=False, underline=False) -> str: ...\n\n    def white(text: str, bold=False, underline=False) -> str: ...\n\nBesides colors, there is a function for printing directly to stderr.\n\n.. code:: python\n\n    def eprint(*args) -> None: ...\n"
},
{
"alpha_fraction": 0.5635964870452881,
"alphanum_fraction": 0.5701754093170166,
"avg_line_length": 37,
"blob_id": "698596ce9e9830c11cf00cdf05cb1a248917f5aa",
"content_id": "cbdcb76f76574c002745543da88c434dbf05d470",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 456,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 12,
"path": "/termcolor_util/__main__.py",
"repo_name": "bmustiata/termcolor_util",
"src_encoding": "UTF-8",
"text": "from termcolor_util import yellow, green, blue, red, gray, cyan, magenta, white\n\n\nif __name__ == '__main__':\n print(cyan(\"termcolor_util\"),\n cyan(\"v1.0.2\", bold=True))\n\n for color in [yellow, green, blue, red, gray, cyan, magenta, white]:\n print(color(color.__name__),\n color(color.__name__, underline=True),\n color(color.__name__, bold=True),\n color(color.__name__, bold=True, underline=True))\n"
},
{
"alpha_fraction": 0.6327543258666992,
"alphanum_fraction": 0.6327543258666992,
"avg_line_length": 25.866666793823242,
"blob_id": "c5d883ae6a8ddc9916aa33b1bda3c1af389e1acc",
"content_id": "35c8db01625802f42b728ec465c90adbd4ef69b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 806,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 30,
"path": "/README.md",
"repo_name": "bmustiata/termcolor_util",
"src_encoding": "UTF-8",
"text": "`termcolor_util` is a set of functions on top of termcolor for every\nsingle color.\n\nInstallation\n============\n\n    pip install termcolor_util\n\nFunctions\n=========\n\n    def yellow(text: str, bold=False, underline=False) -> str: ...\n\n    def green(text: str, bold=False, underline=False) -> str: ...\n\n    def blue(text: str, bold=False, underline=False) -> str: ...\n\n    def red(text: str, bold=False, underline=False) -> str: ...\n\n    def gray(text: str, bold=False, underline=False) -> str: ...\n\n    def cyan(text: str, bold=False, underline=False) -> str: ...\n\n    def magenta(text: str, bold=False, underline=False) -> str: ...\n\n    def white(text: str, bold=False, underline=False) -> str: ...\n\nBesides colors, there is a function for printing directly to stderr.\n\n    def eprint(*args) -> None: ...\n"
},
{
"alpha_fraction": 0.43393275141716003,
"alphanum_fraction": 0.43393275141716003,
"avg_line_length": 32.657894134521484,
"blob_id": "48fcba0939afa7354e28b0a6ae610393b3aa10de",
"content_id": "fe876f48f9642fd9521386f0d270cc1bfcb1a78c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "AsciiDoc",
"length_bytes": 1279,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 38,
"path": "/README.adoc",
"repo_name": "bmustiata/termcolor_util",
"src_encoding": "UTF-8",
"text": "= termcolor_util\n\n`termcolor_util` is a set of functions on top of termcolor for every single color.\n\n== Installation\n\n[source,sh]\n-----------------------------------------------------------------------------\npip install termcolor_util\n-----------------------------------------------------------------------------\n\n== Functions\n\n[source,python]\n-----------------------------------------------------------------------------\ndef yellow(text: str, bold=False, underline=False) -> str: ...\n\ndef green(text: str, bold=False, underline=False) -> str: ...\n\ndef blue(text: str, bold=False, underline=False) -> str: ...\n\ndef red(text: str, bold=False, underline=False) -> str: ...\n\ndef gray(text: str, bold=False, underline=False) -> str: ...\n\ndef cyan(text: str, bold=False, underline=False) -> str: ...\n\ndef magenta(text: str, bold=False, underline=False) -> str: ...\n\ndef white(text: str, bold=False, underline=False) -> str: ...\n-----------------------------------------------------------------------------\n\nBeside colors, there is a function for directly printing on the stderr.\n\n[source,python]\n-----------------------------------------------------------------------------\ndef eprint(*args) -> None: ...\n-----------------------------------------------------------------------------\n"
}
] | 5 |
joaoh9/simplex | https://github.com/joaoh9/simplex | 4352550f11ab72824d6c474459a4fcc2ca342bc7 | bf7b18108008d8a1c31ff77047fef12a5a9685e3 | dc42b3aba6440c739ee333a69575149c968604aa | refs/heads/master | 2020-07-27T00:06:33.935698 | 2019-09-16T13:23:58 | 2019-09-16T13:23:58 | 208,805,689 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5860806107521057,
"alphanum_fraction": 0.6007326245307922,
"avg_line_length": 13.368420600891113,
"blob_id": "31a2d40f41705cb117e08141d9d2cd8a3c3c1b9b",
"content_id": "4ac677015945612c4321e621a9bce5ec66505ab9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 273,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 19,
"path": "/init.py",
"repo_name": "joaoh9/simplex",
"src_encoding": "UTF-8",
"text": "import sys\n\nfrom simplex import Tableaux\n\ninput_file = open(sys.argv[1], \"r\")\n\nT = Tableaux(input_file.readline())\n\nlines = input_file.readlines()\ninput_file.close()\n\nfor i, line in enumerate(lines):\n    if i == 0:\n        T.getC(line)\n    else:\n        T.getA_andB(line)\n\nT.print_tableaux()\n"
},
{
"alpha_fraction": 0.48256880044937134,
"alphanum_fraction": 0.48990824818611145,
"avg_line_length": 24.34883689880371,
"blob_id": "eeb45c876ed631dcb1f8cd779a8e20a25c85b270",
"content_id": "cbb5ecc1e85335b4c0a596cdc0d2d673fd4623b6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1090,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 43,
"path": "/simplex.py",
"repo_name": "joaoh9/simplex",
"src_encoding": "UTF-8",
"text": "class Identity:\n def __init__(self, N):\n self.id = []\n\n for i in range(N):\n self.id.append([])\n for j in range(N):\n if i == j:\n self.id[i].append(1)\n else:\n self.id[i].append(0)\n\n\nclass VERO:\n def __init__(self, size):\n self.upper = [0] * size\n self.identity = Identity(size)\n\n\nclass Tableaux:\n def __init__(self, line):\n NM = self.string_to_int_list(line)\n self.N = NM[0]\n self.M = NM[1]\n self.A = []\n self.B = []\n self.VERO = VERO(self.N)\n\n def getC(self, vector):\n self.C = self.string_to_int_list(vector)\n\n def getA_andB(self, line):\n row = self.string_to_int_list(line)\n self.A.append(row[:-1])\n self.B.append(row[-1])\n\n def string_to_int_list(self, string):\n return list(map(int, string.split(\" \")))\n\n def print_tableaux(self):\n print(self.VERO.upper, self.C, 0)\n for i in range(self.N):\n print(self.VERO.identity.id[i], self.A[i], self.B[i])\n"
}
] | 2 |
prajaktapatil97/backend_crud_application | https://github.com/prajaktapatil97/backend_crud_application | 785083be5e369edc517261c5a949b8ac42cc4a19 | 4a189b32e95370d22848441fac2f055223025568 | 022de83db18052fa17469651e29f49be02f96185 | refs/heads/master | 2023-06-12T02:43:18.565298 | 2021-07-09T13:57:59 | 2021-07-09T13:57:59 | 384,452,284 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5157116651535034,
"alphanum_fraction": 0.5619223713874817,
"avg_line_length": 22.521739959716797,
"blob_id": "153d9735dad3a0cf7e1831c44aaa672914c24766",
"content_id": "89bc40826c7535f029e6d8aeaa93c1ba0c39c0fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 541,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 23,
"path": "/crudexample/employee/migrations/0002_auto_20210709_1341.py",
"repo_name": "prajaktapatil97/backend_crud_application",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.5 on 2021-07-09 08:11\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('employee', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='usertable',\n name='user_id',\n field=models.CharField(max_length=256),\n ),\n migrations.AlterField(\n model_name='usertable',\n name='user_name',\n field=models.CharField(max_length=256),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5164557099342346,
"alphanum_fraction": 0.594936728477478,
"avg_line_length": 20.94444465637207,
"blob_id": "43e0534b721b748a3f010f22100db77108309237",
"content_id": "ff6226f5339aeec06b9a99054dca92cc13fd2e9d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 395,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 18,
"path": "/crudexample/employee/migrations/0003_usertable_is_deleted.py",
"repo_name": "prajaktapatil97/backend_crud_application",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.5 on 2021-07-09 09:00\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('employee', '0002_auto_20210709_1341'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='usertable',\n name='is_deleted',\n field=models.BooleanField(default=False),\n ),\n ]\n"
},
{
"alpha_fraction": 0.551194965839386,
"alphanum_fraction": 0.5547170042991638,
"avg_line_length": 31.300813674926758,
"blob_id": "e5ac7301cff62a430b5efaee2d6d69487f263e3c",
"content_id": "5a27c17600d9c1da709f0b9216d0e4d660679c54",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3975,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 123,
"path": "/crudexample/employee/views.py",
"repo_name": "prajaktapatil97/backend_crud_application",
"src_encoding": "UTF-8",
"text": "from django.views.decorators.csrf import csrf_exempt\nfrom django.http import JsonResponse\nfrom django.core import serializers\nimport json, time\n\nfrom .models import userTable\n\n\n@csrf_exempt\ndef add_user(request):\n    try:\n        if not isinstance(request, dict):\n            user_obj_dict = json.loads(request.body)\n        else:\n            user_obj_dict = request\n        if all(key in user_obj_dict and user_obj_dict[key] for key in ['user_name', 'email_id']):\n            user_dict = userTable()\n            user_dict.user_id = \"UI\" + str(int(time.time_ns() * 10))\n            user_dict.user_name = user_obj_dict['user_name']\n            user_dict.user_email = user_obj_dict['email_id']\n            user_dict.save()\n            return_object = {\n                \"status\": '0',\n                \"message\": 'User added Successfully'\n            }\n        else:\n            return_object = {\n                \"status\": '1',\n                \"message\": 'Fail'\n            }\n    except Exception as error:\n        print(error)\n        return_object = {\n            \"status\": '1',\n            \"message\": 'Fail'\n        }\n    return JsonResponse(return_object, safe=False)\n\n\n@csrf_exempt\ndef update_user(request):\n    try:\n        if not isinstance(request, dict):\n            user_obj_dict = json.loads(request.body)\n        else:\n            user_obj_dict = request\n        if all(key in user_obj_dict and user_obj_dict[key] for key in ['user_name', 'email_id']):\n            user_dict = userTable.objects.get(user_id=user_obj_dict['user_id'])\n            user_dict.user_name = user_obj_dict['user_name']\n            user_dict.user_email = user_obj_dict['email_id']\n            user_dict.save()\n            return_object = {\n                \"status\": '0',\n                \"message\": 'User updated Successfully'\n            }\n        else:\n            return_object = {\n                \"status\": '1',\n                \"message\": 'Fail'\n            }\n    except Exception as error:\n        print(error)\n        return_object = {\n            \"status\": '1',\n            \"message\": 'Fail'\n        }\n    return JsonResponse(return_object, safe=False)\n\n\n@csrf_exempt\ndef delete_user(request):\n    try:\n        user_data = json.loads(request.body)\n        status = userTable.objects.filter(user_id=user_data['user_id']).update(is_deleted=True)\n        if status:\n            return_object = {\n                \"status\": '0',\n                \"message\": 'User deleted Successfully'\n            }\n        else:\n            return_object = {\n                \"status\": '1',\n                \"message\": 'User Data not found'\n            }\n    except Exception as error:\n        print(\"error in delete_user \", error)\n        return_object = {\n            \"status\": '1',\n            \"message\": 'Fail to delete user'\n        }\n    return JsonResponse(return_object, safe=False)\n\n\n@csrf_exempt\ndef get_user_details(request):\n    try:\n        records = userTable.objects.filter(is_deleted=False)\n        data = serializers.serialize(\"json\", records)\n        result = [record['fields'] for record in json.loads(data)]\n        returnObject = {\n            \"status\": '0',\n            \"message\": 'User Data retrieved successfully',\n            \"result\": result\n        }\n    except Exception:\n        returnObject = {\n            \"status\": '1',\n            \"message\": 'Fail to retrieve user data'\n        }\n    return JsonResponse(returnObject, safe=False)\n"
},
{
"alpha_fraction": 0.6777777671813965,
"alphanum_fraction": 0.6944444179534912,
"avg_line_length": 31.81818199157715,
"blob_id": "be03cf1585a32fb0629df1cd5a2a7b4464b086cf",
"content_id": "fa82a8a41b4960d36fbe73b42b002c96627ef2a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 360,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 11,
"path": "/crudexample/employee/models.py",
"repo_name": "prajaktapatil97/backend_crud_application",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n\n# Create your models here.\nclass userTable(models.Model):\n    user_id = models.CharField(max_length=256)\n    user_name = models.CharField(max_length=256)\n    user_email = models.EmailField()\n    is_deleted = models.BooleanField(default=False)\n\n    class Meta:\n        db_table = \"userTable\""
},
{
"alpha_fraction": 0.7096773982048035,
"alphanum_fraction": 0.7096773982048035,
"avg_line_length": 27.18181800842285,
"blob_id": "23825f42fb0df3933368111025c1b5e6a4ea5a9c",
"content_id": "d29345ac4dda06f8d421b4f0c1efddcfd624b1c0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 310,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 11,
"path": "/crudexample/employee/urls.py",
"repo_name": "prajaktapatil97/backend_crud_application",
"src_encoding": "UTF-8",
"text": "\nfrom django.contrib import admin\nfrom django.urls import path\nfrom django.conf.urls import include\nfrom . import views\n\nurlpatterns = [\n path('add-user', views.add_user),\n path('update-user', views.update_user),\n path('delete-user', views.delete_user),\n path('get-user', views.get_user_details),\n]"
}
] | 5 |
SoulXHades/ClassesRelationSearcher | https://github.com/SoulXHades/ClassesRelationSearcher | f1c5a8018216ba4698c252868a7e6b70024bf707 | c5907754509dabd66b1eaf691f15fe59ff2946c1 | 3cc08bce24da74f4e34c7fbc0b952b4a80b5b91b | refs/heads/master | 2020-05-04T15:29:03.367889 | 2019-04-03T18:09:13 | 2019-04-03T18:09:13 | 179,242,791 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7523105144500732,
"alphanum_fraction": 0.7615526914596558,
"avg_line_length": 22.565217971801758,
"blob_id": "920389832bf6b5b11146f4e44eed99aa65789c26",
"content_id": "b3c2a1cce777be3a9ce15830026afd9bcfbc1e2a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 541,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 23,
"path": "/ClassesRelationSearcher.py",
"repo_name": "SoulXHades/ClassesRelationSearcher",
"src_encoding": "UTF-8",
"text": "fd = open(input(\"Input class list file: \"), \"r\")\nlistOfClasses = [line.strip() for line in fd.readlines()]\n\nfd1 = open(input(\"Input programming file: \"), \"r\")\nlistofCodes = [line.strip() for line in fd1.readlines()]\n\nlistOfClassesExistsInCode = []\n\nfor rowsOfCode in listofCodes:\n\tfor eachclass in listOfClasses:\n\t\tif eachclass and eachclass in rowsOfCode:\n\t\t\tif eachclass not in listOfClassesExistsInCode:\n\t\t\t\tlistOfClassesExistsInCode.append(eachclass)\n\nfor eachClassesExistsInCode in listOfClassesExistsInCode:\n\tprint(eachClassesExistsInCode)\n\nfd.close()\nfd1.close()"
},
{
"alpha_fraction": 0.773118257522583,
"alphanum_fraction": 0.7763440608978271,
"avg_line_length": 70.53845977783203,
"blob_id": "b115d8b7b52bb55ac55f6cd4ddfb80fb429f098d",
"content_id": "cdcb29851cc518ecab5cba463672db93778c41aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 930,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 13,
"path": "/README.md",
"repo_name": "SoulXHades/ClassesRelationSearcher",
"src_encoding": "UTF-8",
"text": "# ClassesRelationSearcher\nA Python script that searches for class names in the current class's code.\nIf a name exists, it means there is a relationship (dependency, association, etc.) with that class.\n\nIt was built by me to ease the need to search for the existence of other classes in a class file when I was doing class diagrams.\n\nCurrently, it searches using the keyword 'in'. Thus, if you have words that are substrings of the classes you are looking for,\nthey might appear as false positives. No pure word-for-word comparison is used, as there might be cases like Android Development's\nIntent where the classes are declared as 'Intent intent = new Intent(<class name>.this, <class name1>.class);'.\nSo use of the 'in' keyword allows versatility.\n\nThe point of this script is to ease the need to use CTRL+F to search for many classes, such as 15 or even more.\nHence, with this script, the number of classes you need to search for will be smaller.\n"
}
] | 2 |
rocktyy/fsd-events | https://github.com/rocktyy/fsd-events | f67f1bec1ccf0c452febbb8716eb540331ba3e78 | 89e7423f05ede80f78a86bb3eef6b3669a3891a0 | 3bbfe7f8331e5369d925383ac54481c892786e1f | refs/heads/main | 2023-07-06T06:42:04.072648 | 2021-07-27T22:39:15 | 2021-07-27T22:39:15 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6864334344863892,
"alphanum_fraction": 0.7246160507202148,
"avg_line_length": 44.07692337036133,
"blob_id": "a1989e4c9ef566510f2087ccd133c4c0fac72366",
"content_id": "081b0b4b8b0b4dfb26aee8ed030212f190fb39fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4688,
"license_type": "no_license",
"max_line_length": 175,
"num_lines": 104,
"path": "/README.md",
"repo_name": "rocktyy/fsd-events",
"src_encoding": "UTF-8",
"text": "# fsd-events\n\nCompilation of events occurring in Tesla FSD videos.\n\nFSD9 so far seems to be an improvement over FSD8.2. Since the release of FSD9, drivers on average intervene once every 3-4 minutes, compared to once every 2-3 minutes for 8.2.\n\n## Status\n\nversion | videos | drivers | length | events | interventions | time between events | time between interventions\n--- | --- | --- | --- | --- | --- | --- | --- \nOverall | 105 | 16 | 1 day, 2:00:37 | 1010 | 657 | 1m32s | 2m22s\n7.7 | 1 | 1 | 0:18:19 | 28 | 24 | 39s | 45s\n7.8 | 1 | 1 | 0:31:24 | 44 | 32 | 42s | 58s\n7.9 | 1 | 1 | 0:10:00 | 8 | 7 | 1m15s | 1m25s\n8.1 | 8 | 3 | 1:42:11 | 66 | 40 | 1m32s | 2m33s\n8.2 | 38 | 11 | 9:54:55 | 382 | 258 | 1m33s | 2m18s\n9.0 | 56 | 15 | 13:23:48 | 482 | 296 | 1m40s | 2m42s\n\n## Methodology\n\n### Video Selection\n\nCriteria to include videos:\n- Mostly 1x, uncut footage\n- Driver shares their thoughts, observations and reactions\n - Driver shares when they press the accelerator\n - No music\n- Central display clearly visible, especially speed gauge and FSD indicator\n\n### Codes\n\nWe record three kinds of events: disengagements (D), interventions (I) and any kind of\ninappropriate or wrong behavior (W).\n\nNote that when we report statistics on interventions, we include both disengagements (D) and interventions (I).\n\nWhenever possible, we further break down these events as follows.\n\n#### Disengagements\n\nCode | Description\n--- | ---\nD-C | car disengages FSD\nD-D | driver disengages FSD (by braking or turning the steering wheel)\n\n#### Interventions\n\nCode | Description\n--- | ---\nI-FAST | driver increases speed using throttle on RHS of steering wheel\nI-GAS | driver taps accelerator or uses stalk to tell car to proceed\nI-SLOW | driver reduces speed using throttle on RHS of steering wheel\nI-TURN | driver forces lane change by using turn signal\n\n#### Wrong behavior\n\nCode | Description\n--- | ---\nW-2FAST | car drives too quickly\nW-AVOID | 
car causes other driver or pedestrian to maneuver to avoid collision\nW-BLINK | car activates spurious blinker\nW-BUS | car drives into bus-only lane\nW-CLI | car changes lane in intersection\nW-CWWAY | car drives into oncoming lane\nW-DPL | car drives in parking lane\nW-DRUNK | car acts drunk\nW-MS | car misses stop\nW-MT | car misses turn\nW-MUST | car drives straight through in a turn-only lane\nW-PWWAY | car partially drives into oncoming lane\nW-RED | car crosses on red\nW-SA | car stops abruptly\nW-SWL | car crosses solid white line\nW-TRFL | car turns right from left lane\nW-TWL | car drives into wrong lane (of correct direction)\nW-YSLOW | car slows down unnecessarily\nW-YSTOP | car stops unnecessarily\n\n### Drivers\n\n#### Match Requirements\n\nUser name | Youtube | Twitter\n--- | --- | ---\nAI Addict | [Channel](https://www.youtube.com/channel/UCnSt1nfVXyTyMbKhk-IaTJw/about) | -\nBrandonee916 | [Brandonee916](https://www.youtube.com/c/Brandonee916/about) | [@brandonee916](https://twitter.com/brandonee916)\nChuck Cook | [Channel](https://www.youtube.com/channel/UCwdbsDtaMAh6QXvcbp08YzQ/about) | [@Chazman](https://twitter.com/chazman)\nDave Mac | [DMacTech](https://www.youtube.com/c/DMacTech/about) | [@CGDaveMac](https://twitter.com/CGDaveMac)\nDirty Tesla | [DirtyTesla](https://www.youtube.com/c/DirtyTesla/about) | [@DirtyTesla](https://twitter.com/DirtyTesla)\nFrenchie | [Channel](https://www.youtube.com/channel/UCt8fkjhFgywzLGLVIz7Z7-g/about) | [@FrenchieEAP](https://twitter.com/FrenchieEAP)\nJames Locke | [pilotjc78](https://www.youtube.com/user/pilotjc78) | [@arctechinc](https://twitter.com/arctechinc)\nHyperChange | [HyperChangeTV](https://www.youtube.com/c/HyperChangeTV/about) | [@HyperChangeTV](https://twitter.com/hyperchangetv)\nKim Paquette | [bimbels](https://www.youtube.com/user/bimbels/about) | [@kimpaquette](https://twitter.com/kimpaquette)\nNicholas Howard | [NicholasHoward](https://www.youtube.com/c/NicholasHoward/about) | -\noisiaa | 
[oisiaa](https://www.youtube.com/user/oisiaa/about) | -\nTesLatino | [TesLatino](https://www.youtube.com/c/TesLatino/about) | [@TesLatino](https://twitter.com/TesLatino)\nTesla Owners Silicon Valley | [TeslaOwnersSiliconValley](https://www.youtube.com/c/TeslaOwnersSiliconValley/about) | [@teslaownersSV](https://twitter.com/teslaownersSV)\nTrevor Mahlmann | [TrevorMahlmann](https://www.youtube.com/c/TrevorMahlmann/about) | [@TrevorMahlmann](https://twitter.com/TrevorMahlmann)\n\n#### Don't Match Requirements\n\n- [AI DRIVR](https://www.youtube.com/c/AIDRIVR/about): driver does not share their thoughts live but after the fact; videos are divided into small segments\n- [CMePrint](https://www.youtube.com/c/CMePrint/about): driver is silent\n- [Whole Mars Catalog](https://www.youtube.com/c/WholeMarsCatalog/about): driver is silent\n"
},
{
"alpha_fraction": 0.5858585834503174,
"alphanum_fraction": 0.6060606241226196,
"avg_line_length": 44,
"blob_id": "7475495d156e45cda16f9a49b3c8715d5bdf0366",
"content_id": "e1b024522a01fe8a781d82b4ca5d73194b530b5b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 495,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 11,
"path": "/Makefile",
"repo_name": "rocktyy/fsd-events",
"src_encoding": "UTF-8",
"text": ".PHONY: redo-readme\nredo-readme: do-report\n\tcat README.md | awk '{print} (NF==2 && $$1 == \"##\" && $$2 == \"Status\") {exit}' > /tmp/header.md\n\tcat README.md | awk 'BEGIN{ok=0} (NF==2 && $$1 == \"##\" && $$2 == \"Methodology\") {ok=1} ok==1 {print}' > /tmp/footer.md\n\ttail -9 /tmp/report.md > /tmp/middle.md\n\techo > /tmp/empty.md\n\tcat /tmp/header.md /tmp/empty.md /tmp/middle.md /tmp/empty.md /tmp/footer.md > README.md\n\n.PHONY: do-report\ndo-report:\n\t./scripts/report.py videos/*.json > /tmp/report.md\n"
},
{
"alpha_fraction": 0.5935251712799072,
"alphanum_fraction": 0.6015188097953796,
"avg_line_length": 37.79069900512695,
"blob_id": "f3b37311a7d579584306056581189c04c5bcfae9",
"content_id": "fadb1ea8c26d91ba04a647ab37801e9a6bb04876",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5004,
"license_type": "no_license",
"max_line_length": 197,
"num_lines": 129,
"path": "/scripts/report.py",
"repo_name": "rocktyy/fsd-events",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\n\"\"\"\nReport basic statistics on FSD videos.\n\nUsage: ./scripts/report.py videos/*.json\n\"\"\"\n\nimport json\nimport re\nimport sys\nfrom collections import defaultdict\nfrom datetime import timedelta\n\n\nclass Stats:\n def __init__(self, video_length=timedelta(), events=0, interventions=0, drivers=[], videos=0):\n self.video_length = video_length\n self.events = events\n self.interventions = interventions\n self.drivers = set(drivers)\n self.videos = videos\n def __add__(self, other):\n return Stats(\n self.video_length + other.video_length,\n self.events + other.events,\n self.interventions + other.interventions,\n self.drivers.union(other.drivers),\n self.videos + other.videos,\n )\n\ndef pretty_time_interval(n, ti):\n if n == 0:\n return \"NA\"\n h, m, s = 0, 0, int(ti.total_seconds() / n)\n ret = []\n if s >= 3600:\n h = s // 3600\n s = s % 3600\n ret.append(f\"{h}h\")\n if s >= 60:\n m = s // 60\n s = s % 60\n ret.append(f\"{m}m\")\n ret.append(f\"{s}s\")\n return \"\".join(ret)\n \n\ndef get_per_hour(stats):\n hours = stats.video_length.total_seconds() / 3600\n events_per_hour = int(stats.events / hours)\n interventions_per_hour = int(stats.interventions / hours)\n return (events_per_hour, interventions_per_hour)\n\n\ndef print_stats(stats, fsd_version=None, fout=sys.stdout):\n title = \"videos\" if not fsd_version else f\"{fsd_version} videos\"\n events_per_hour, interventions_per_hour = get_per_hour(stats)\n fout.write(f\"Watched {stats.videos} {title} produced by {len(stats.drivers)} drivers and running for {stats.video_length}:\\n\")\n fout.write(f\"- Observed {stats.events} events, i.e. about {events_per_hour} events per hour\\n\")\n fout.write(f\"- Of those events, {stats.interventions} are interventions, i.e. 
about {interventions_per_hour} interventions per hour\\n\")\n\n\ndef print_stats_table(stats_list, fsd_versions, fout=sys.stdout):\n fout.write(\"version | videos | drivers | length | events | interventions | time between events | time between interventions\\n\")\n fout.write(\"--- | --- | --- | --- | --- | --- | --- | --- \\n\")\n for stats, fsd_version in zip(stats_list, fsd_versions):\n time_between_events = pretty_time_interval(stats.events, stats.video_length)\n time_between_interventions = pretty_time_interval(stats.interventions, stats.video_length)\n fout.write(f\"{fsd_version} | {stats.videos} | {len(stats.drivers)} | {stats.video_length} | {stats.events} | {stats.interventions} | {time_between_events} | {time_between_interventions}\\n\")\n\n\n# Return (major, minor, version_string?)\nfsd_version_re = re.compile(\"(\\\\d+)[.](\\\\d+)(.+)?\")\ndef get_fsd_version(s):\n m = fsd_version_re.match(s)\n major = int(m.group(1))\n minor = int(m.group(2))\n version_string = None if not m.group(3) else m.group(3).strip().replace('(', '').replace(')', '')\n return (major, minor, version_string)\n\n\ndef get_video_length(s):\n components = s.split(\":\")\n minutes, seconds = int(components[0]), int(components[1])\n return timedelta(minutes=minutes, seconds=seconds)\n\n\ndef get_interventions(events):\n def has_intervention_code(event):\n intervention_codes = set([\"D\", \"D-C\", \"D-D\", \"I-GAS\", \"I-FAST\", \"I-SLOW\", \"I-TURN\",])\n return len(intervention_codes.intersection(set(event[\"codes\"]))) > 0\n return list(filter(has_intervention_code, events))\n\n\nif __name__ == '__main__':\n event_files = sys.argv[1:]\n total_stats = Stats()\n fsd_version_to_stats = defaultdict(Stats)\n for event_file in event_files:\n with open(event_file) as fin:\n try:\n d = json.load(fin)\n fsd_version = get_fsd_version(d[\"metadata\"][\"fsd-version\"])\n fsd_version = str(fsd_version[0]) + \".\" + str(fsd_version[1])\n events = d[\"events\"]\n interventions = 
get_interventions(events)\n driver = d[\"metadata\"][\"user\"]\n stats = Stats(\n get_video_length(d[\"metadata\"][\"video-length\"]),\n len(events),\n len(interventions),\n [driver],\n 1,\n )\n print(f\"{event_file}: {driver} {fsd_version} {stats.video_length} {stats.events} {stats.interventions}\")\n total_stats += stats\n fsd_version_to_stats[fsd_version] += stats\n except Exception as e:\n print(f\"Failed to load: {event_file}: {e}\")\n print_stats(total_stats)\n sorted_fsd_versions = sorted(fsd_version_to_stats.keys())\n for fsd_version in sorted_fsd_versions:\n stats = fsd_version_to_stats[fsd_version]\n print_stats(stats, fsd_version)\n print_stats_table(\n [total_stats] + [fsd_version_to_stats[fsd_version] for fsd_version in sorted_fsd_versions],\n [\"Overall\"] + sorted_fsd_versions,\n )\n"
}
] | 3 |
veeru2015/airflow | https://github.com/veeru2015/airflow | 387d5ec21258562ee3accfc689394697437744a6 | 46bb7ce760c8f38e294fba4edeeccbbaf7c49419 | 991dcaa81fe7c4f9751708c32307607fa7f1f97a | refs/heads/master | 2020-06-18T13:00:40.326702 | 2019-07-11T02:59:15 | 2019-07-11T02:59:15 | 196,310,429 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6592920422554016,
"alphanum_fraction": 0.6605562567710876,
"avg_line_length": 20.6849308013916,
"blob_id": "14fafd03807d360ae69fae856ceb990aef8e0a42",
"content_id": "e6587ed8d8bdbaf1604545582f6af259ce3c2ce6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1582,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 73,
"path": "/dags/load_data_pandas_all.py",
"repo_name": "veeru2015/airflow",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\"\"\"\n\nimport pandas as pd\n\nimport airflow\nimport requests\n\nfrom airflow.models import DAG\n\nfrom airflow.hooks.postgres_hook import PostgresHook\nfrom airflow.operators.python_operator import PythonOperator\nfrom airflow.models import Variable\nfrom airflow.operators.postgres_operator import PostgresOperator\n\n\nargs = {\n \"owner\": \"airflow\",\n \"start_date\": airflow.utils.dates.days_ago(2)\n}\n\ndag = DAG(\n dag_id=\"load_data_into_pandas_all\",\n default_args=args,\n schedule_interval=\"*/1 * * * *\"\n)\n\ndata_sources = Variable.get(\"data_sources\")\n\nfor conn_id in data_sources.split(\",\"):\n task = PostgresOperator(\n task_id=\"collect_from_\" + conn_id,\n provide_context=True,\n sql=\"COPY batches TO \" + conn_id + \"_data.csv DELIMITER ',' CSV HEADER;\",\n dag=dag\n )\n\ndef process_data_operator(ds, **kwargs):\n open(\"collected_data_all.csv\", \"w\")\n\nprocess_data = PythonOperator(\n task_id=\"process_data\",\n python_callable=process_data_operator,\n provide_context=True,\n dag=dag\n)\n\ntask >> process_data\n\n\n# def process_data_operator(ds, **kwargs):\n# pghook = PostgresHook(postgres_conn_id=\"postgres_AO_Test\")\n# sql = \"select * from batches;\"\n# df = pd.read_sql(sql, pghook.get_conn())\n# df.to_csv(\"sometestcsv.csv\")\n# return None\n\n# process_data = PythonOperator(\n# task_id=\"process_data\",\n# python_callable=process_data_operator,\n# provide_context=True,\n# dag=dag\n# )\n\n\n# [START save_data_operator]\n\n# [END collect_data_operator]\n\n# process_data >> save_data\n\nif __name__ == \"__main__\":\n dag.cli()"
},
{
"alpha_fraction": 0.691752552986145,
"alphanum_fraction": 0.692783534526825,
"avg_line_length": 19.659574508666992,
"blob_id": "ffaff54caf3749c8262934f7149b3bc76f32f49c",
"content_id": "36fbfb61e54abeebe035ab14f035c3aacfb2f010",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 970,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 47,
"path": "/dags/load_data_pandas.py",
"repo_name": "veeru2015/airflow",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\"\"\"\n\nimport pandas as pd\n\nimport airflow\nimport requests\nfrom airflow.models import DAG\nfrom airflow.hooks.postgres_hook import PostgresHook\nfrom airflow.operators.python_operator import PythonOperator\n# from airflow.operators.postgres_operator import PostgresOperator\n\nargs = {\n \"owner\": \"airflow\",\n \"start_date\": airflow.utils.dates.days_ago(2)\n}\n\ndag = DAG(\n dag_id=\"load_data_into_pandas\",\n default_args=args,\n schedule_interval= None\n)\n\n\ndef collect_data_operator(ds, **kwargs):\n pghook = PostgresHook(postgres_conn_id=\"postgres_AO_Test\")\n sql = \"select * from batches;\"\n df = pd.read_sql(sql, pghook.get_conn())\n df.to_csv(\"sometestcsv.csv\")\n return None\n\ncollect_data = PythonOperator(\n task_id=\"collect_data\",\n python_callable=collect_data_operator,\n provide_context=True,\n dag=dag\n)\n\n\n# [START save_data_operator]\n\n# [END collect_data_operator]\n\n# process_data >> save_data\n\nif __name__ == \"__main__\":\n dag.cli()"
},
{
"alpha_fraction": 0.6464826464653015,
"alphanum_fraction": 0.6482635736465454,
"avg_line_length": 18.36206817626953,
"blob_id": "07d8af895b60921e581ad64a708f939ea842b842",
"content_id": "526e557b06ae5f96758d1723abec02eceb91ebed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1123,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 58,
"path": "/dags/load_data_2.py",
"repo_name": "veeru2015/airflow",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\"\"\"\n\n\nimport airflow\n\nfrom airflow.models import DAG\n\n# from airflow.hooks.postgres_hook import PostgresHook\nfrom airflow.operators.python_operator import PythonOperator\nfrom airflow.models import Variable\nfrom airflow.operators.postgres_operator import PostgresOperator\n\n\nargs = {\n \"owner\": \"airflow\",\n \"start_date\": airflow.utils.dates.days_ago(2)\n}\n\ndag = DAG(\n dag_id=\"load_data_2\",\n default_args=args,\n schedule_interval=None\n)\n\ndata_sources = Variable.get(\"data_sources\")\n\n\n\ndef process_data_operator(ds, **kwargs):\n pass\n\n\nprocess_data = PythonOperator(\n task_id=\"process_data\",\n python_callable=process_data_operator,\n provide_context=True,\n dag=dag\n)\n\n\nfor conn_id in data_sources.split(\",\"):\n conn_id = conn_id.strip()\n if not conn_id:\n continue\n filename = \"/tmp/\" + conn_id + \"_data.csv\"\n task = PostgresOperator(\n task_id=\"collect_from_\" + conn_id,\n sql=\"COPY batches TO '\" + filename + \"' DELIMITER ',' CSV HEADER;\",\n postgres_conn_id=conn_id,\n dag=dag\n )\n\n task >> process_data\n\n\nif __name__ == \"__main__\":\n dag.cli()\n"
},
{
"alpha_fraction": 0.5893030762672424,
"alphanum_fraction": 0.5918962955474854,
"avg_line_length": 23.687999725341797,
"blob_id": "343502cd27c1565980dda0817124bce8e481b143",
"content_id": "57fcc787ce02e3fa5b579eb314dddcaa30bd99a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3085,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 125,
"path": "/dags/load_data_into_csv.py",
"repo_name": "veeru2015/airflow",
"src_encoding": "UTF-8",
"text": "\"\"\"\n\"\"\"\n\nimport csv\nimport pickle\nfrom pprint import pprint\n\nimport airflow\nimport requests\nfrom airflow.models import DAG\nfrom airflow.hooks.postgres_hook import PostgresHook\nfrom airflow.operators.python_operator import PythonOperator\n# from airflow.operators.postgres_operator import PostgresOperator\nfrom bs4 import BeautifulSoup\n\n\nargs = {\n \"owner\": \"airflow\",\n \"start_date\": airflow.utils.dates.days_ago(2)\n}\n\ndag = DAG(\n dag_id=\"load_data_into_csv\",\n default_args=args,\n schedule_interval=\"*/3 * * * *\"\n)\n\n\n# [START collect_data_operator]\ndef collect_data_operator(ds, **kwargs):\n # http://mfd.gov.np/\n data = requests.get(\"http://mfd.gov.np/\")\n # print(data, dir(data), data.text)\n soup = BeautifulSoup(data.text)\n div = soup.find(\"div\", attrs={\"class\": \"weather-data-table\"})\n with open(\"collected_data.html\", \"w\") as fp:\n fp.write(str(div))\n return None\n\n\ncollect_data = PythonOperator(\n task_id=\"collect_data\",\n provide_context=True,\n python_callable=collect_data_operator,\n dag=dag\n)\n# [END collect_data_operator]\n\n\n# [START process_data_operator]\ndef process_data_operator(ds, **kwargs):\n rows = []\n with open(\"collected_data.html\", \"r\") as fp:\n soup = BeautifulSoup(fp.read())\n for tr in soup.find(\"table\").find_all(\"tr\"):\n tds = tr.find_all(\"td\")\n if not tds or len(tds) < 4:\n continue\n _data = {\n \"station\": tds[0].string,\n \"maximum\": tds[1].string,\n \"minimum\": tds[2].string,\n \"rainfall\": tds[3].string\n }\n rows.append(_data)\n with open(\"collected_data.csv\", \"w\") as fp:\n csvfile = csv.DictWriter(fp, fieldnames=[\n \"station\", \"maximum\", \"minimum\", \"rainfall\"\n ])\n csvfile.writeheader()\n csvfile.writerows(rows)\n return None\n\n\nprocess_data = PythonOperator(\n task_id=\"process_data\",\n provide_context=True,\n python_callable=process_data_operator,\n dag=dag\n)\n# [END collect_data_operator]\n\n\ncollect_data >> process_data\n\n\n# 
save_data = PostgresOperator(\n# sql=\"\"\n# )\n\n\n# [START save_data_operator]\ndef save_data_operator(ds, **kwargs):\n pghook = PostgresHook(postgres_conn_id=\"postgres_default\")\n sql = \"\"\"INSERT INTO \n weather(station, maximum, minimum, rainfall)\n VALUES(%s, %s, %s, %s);\"\"\"\n with open(\"collected_data.csv\", \"r\") as fp:\n csvfile = csv.DictReader(fp, fieldnames=[\n \"station\", \"maximum\", \"minimum\", \"rainfall\"\n ])\n line = next(csvfile)\n for line in csvfile:\n print(line)\n pghook.run(sql, parameters=(\n line[\"station\"],\n line[\"maximum\"],\n line[\"minimum\"],\n line[\"rainfall\"]\n ))\n return None\n\n\nsave_data = PythonOperator(\n task_id=\"save_data\",\n provide_context=True,\n python_callable=save_data_operator,\n dag=dag\n)\n# [END collect_data_operator]\n\nprocess_data >> save_data\n\nif __name__ == \"__main__\":\n dag.cli()"
}
] | 4 |
elementofprgress/MeltyElement_Sights | https://github.com/elementofprgress/MeltyElement_Sights | 23e300e9dfa1dfd32fb9d6ead5800d072f468d72 | d32af68ba894195648895af0967651bd3ba1ac17 | 5e80515e1953bf291c489963f566464e794b346d | refs/heads/master | 2020-03-22T06:21:01.617467 | 2018-07-03T23:49:48 | 2018-07-03T23:49:48 | 139,627,261 | 2 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7144686579704285,
"alphanum_fraction": 0.7221511006355286,
"avg_line_length": 44.94117736816406,
"blob_id": "77b7f51f8c6fc4726dd6ad1643fead4c5c02ff35",
"content_id": "6bd37bd0ed4d4aeccde10ad0414c8834b5c769e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2343,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 51,
"path": "/buildRelease.py",
"repo_name": "elementofprgress/MeltyElement_Sights",
"src_encoding": "UTF-8",
"text": "import py_compile, zipfile, os\nimport subprocess\n\nimport shutil\n\n\ndef move(src, dest):\n shutil.copy(src, dest)\n\ndef prepareDir(targetPath):\n if os.path.exists(targetPath):\n return True\n dirName = os.path.dirname(targetPath)\n if not os.path.exists(dirName): os.makedirs(dirName)\n\n\nWOTVersion = \"1.0.2.1\"\nModVersion = \"v1.0.0b\"\nGUIFlashVer = \"0.2.5\"\n\nreleaseDir = \"release/WoT_\" + WOTVersion\nif not os.path.exists(releaseDir):\n os.makedirs(releaseDir)\n\nreleaseFile = \"release/WoT_\" + WOTVersion + \"/mod_MeltyElementSights_\" + ModVersion + \".7z\"\nif os.path.exists(releaseFile):\n os.remove(releaseFile)\n\npy_compile.compile(\"source/python/mod_ArcadeBattleFlash.py\")\npy_compile.compile(\"source/python/mod_ArtyBattleFlash.py\")\npy_compile.compile(\"source/python/mod_SniperBattleFlash.py\")\n\nprepareDir(releaseDir + \"/res_mods/\" + WOTVersion + \"/scripts/client/gui/mods/\")\nmove(\"source/python/mod_ArcadeBattleFlash.pyc\", releaseDir + \"/res_mods/\" + WOTVersion + \"/scripts/client/gui/mods/\" + \"mod_ArcadeBattleFlash.pyc\")\nmove(\"source/python/mod_ArtyBattleFlash.pyc\", releaseDir + \"/res_mods/\" + WOTVersion + \"/scripts/client/gui/mods/\" + \"mod_ArtyBattleFlash.pyc\")\nmove(\"source/python/mod_SniperBattleFlash.pyc\", releaseDir + \"/res_mods/\" + WOTVersion + \"/scripts/client/gui/mods/\" + \"mod_SniperBattleFlash.pyc\")\n\nprepareDir(releaseDir + \"/res_mods/\" + WOTVersion + \"/gui/flash/\")\nmove(\"source/flash/ArcadeBattleFlash/ArcadeBattleFlash.swf\", releaseDir + \"/res_mods/\" + WOTVersion + \"/gui/flash/\" + \"ArcadeBattleFlash.swf\")\nmove(\"source/flash/ArtyBattleFlash/ArtyBattleFlash.swf\", releaseDir + \"/res_mods/\" + WOTVersion + \"/gui/flash/\" + \"ArtyBattleFlash.swf\")\nmove(\"source/flash/SniperBattleFlash/SniperBattleFlash.swf\", releaseDir + \"/res_mods/\" + WOTVersion + \"/gui/flash/\" + \"SniperBattleFlash.swf\")\n\nmove(\"source/config/ElementCrosshairSettings.xml\", releaseDir + 
\"/res_mods/\" + WOTVersion + \"/gui/flash/\" + \"ElementCrosshairSettings.xml\")\nsubprocess.call(['7z', 'a', releaseDir + '/Melty_Element_Fonts.7z', \"source/fonts/*.*\"])\n\nprepareDir(releaseDir + \"/mods/\" + WOTVersion + \"/\")\nmove(\"GUIFlash/gambiter.guiflash_0.2.5.wotmod\", releaseDir + \"/mods/\" + WOTVersion + \"/gambiter.guiflash_\" + GUIFlashVer + \".wotmod\")\n\nsubprocess.call(['7z', 'a', releaseFile, releaseDir])\n\nos.remove(releaseDir + '/Melty_Element_Fonts.7z')\n"
}
] | 1 |