Dataset schema (one row per repository snapshot; dtype and observed range or cardinality per column):

| column | dtype | observed values |
|---|---|---|
| repo_name | string | lengths 5–114 |
| repo_url | string | lengths 24–133 |
| snapshot_id | string | lengths 40–40 |
| revision_id | string | lengths 40–40 |
| directory_id | string | lengths 40–40 |
| branch_name | string | 209 classes |
| visit_date | timestamp[ns] | |
| revision_date | timestamp[ns] | |
| committer_date | timestamp[ns] | |
| github_id | int64 | 9.83k–683M, ⌀ (nulls present) |
| star_events_count | int64 | 0–22.6k |
| fork_events_count | int64 | 0–4.15k |
| gha_license_id | string | 17 classes |
| gha_created_at | timestamp[ns] | |
| gha_updated_at | timestamp[ns] | |
| gha_pushed_at | timestamp[ns] | |
| gha_language | string | 115 classes |
| files | list | lengths 1–13.2k |
| num_files | int64 | 1–13.2k |

repo_name | repo_url | snapshot_id | revision_id | directory_id | branch_name | visit_date | revision_date | committer_date | github_id | star_events_count | fork_events_count | gha_license_id | gha_created_at | gha_updated_at | gha_pushed_at | gha_language | files | num_files
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
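As a quick illustration of how these columns fit together, the sketch below builds two miniature records in plain Python that mirror the `files` metadata of rows shown on this page (the per-file `language` and `length_bytes` values are copied from those rows; all other fields are trimmed away), then derives `num_files` and a per-language byte total. This is an illustrative sketch of the record shape, not an official loader for the dataset.

```python
# Miniature, hand-built records following the column schema above.
# Only the shape of the records matters here; each `files` entry is one
# per-file metadata dict, as in the full dataset rows below.
rows = [
    {
        "repo_name": "siddsapte12/AWSfileuploadWebApp",
        "files": [
            {"language": "Markdown", "length_bytes": 3946},
            {"language": "Python", "length_bytes": 9466},
            {"language": "Python", "length_bytes": 757},
        ],
    },
    {
        "repo_name": "piyushmittal20/TextUtils",
        "files": [
            {"language": "Python", "length_bytes": 1877},
            {"language": "Markdown", "length_bytes": 382},
        ],
    },
]

# num_files is simply the length of the files list.
for row in rows:
    row["num_files"] = len(row["files"])


def bytes_by_language(row, language):
    """Total length_bytes of one repo's files written in `language`."""
    return sum(f["length_bytes"] for f in row["files"] if f["language"] == language)


python_bytes = {r["repo_name"]: bytes_by_language(r, "Python") for r in rows}
print(python_bytes)
```

With the real dataset, the same two derivations apply unchanged to each full record.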
siddsapte12/AWSfileuploadWebApp | https://github.com/siddsapte12/AWSfileuploadWebApp | abfa9381ed6f83a82d7196dcadba55978f746beb | c3ee7646e5109446ae5c93174c01f8fe633935fe | c66fb66a3c226a3c8fd800171d9adc353ce3c1b4 | refs/heads/master | 2022-04-01T18:53:33.872786 | 2020-01-21T11:56:19 | 2020-01-21T11:56:19 | 235,325,054 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.7914764285087585,
"alphanum_fraction": 0.7955352663993835,
"avg_line_length": 63.6065559387207,
"blob_id": "ea2bdaac000678aa07e8e0912b551415e09211eb",
"content_id": "ad2ad45e6753020fe31d8af1e85b209c9b9a5eb8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3946,
"license_type": "no_license",
"max_line_length": 504,
"num_lines": 61,
"path": "/README.md",
"repo_name": "siddsapte12/AWSfileuploadWebApp",
"src_encoding": "UTF-8",
"text": "# Serverless File Upload Web Application\n\nOur project objective is to make the build a MVP (minimal viable project) for a front end subsystem for an overall native cloud based application, which will allow file multiple file uploading and digital transformation services to registered clients. The overall application will include converting scanned documents to text and subsequently extracting the text from the images to be stored in text file. The entire service is paid for and will be completely hosted on AWS using serverless architecture.\n\n## Requirements\n\nThis projects require you to have an AWS account and knowledge of python, HTML, Javascript, CSS, SQL and AJAX.\n\n### Assumptions\n\n*\tMany businesses may have an existing application on server for this purpose, so we assumed that if such an application were to be moved to cloud with a serverless architecture, what steps would need to be taken and how will this transformation take place. We first built in place a Python Flask Web application and built our cloud solution around this. The Flask application has registration, login , plan and file upload modules with MySQL database for maintaining user information.\n*\tIn this solution, we have assumed Username will be unique\n*\tEach registered user must have a subscription plan selected, and paid for, which will allocate the count of images each user can upload with that plan. \n*\tFor every subscribed user a separate folder will be maintained for the uploaded files\n*\tFiles will only be JPEG format not more than 4MB in size\n\n### Architecture\n\nAt a birds’ eye view, the architecture has the flow from Registration then Login, Selection of a subscription plan and uploading files. In the background, once file upload is successful, Textract is run automatically to extract text from each image document and saved in a text file. 
\nThe AWS services used are:\n\n*\tAmazon S3, with three buckets: one hosting the static webpages built with HTML/CSS, Javascript and AJAX, one maintaining user directories, and one storing the extracted text files\n*\tAWS API Gateway to handle requests and responses for the webpages\n*\tAmazon RDS MySQL to maintain the users table in the webapp database\n*\tAWS Lambda functions, one triggered by API Gateway requests and the other for the Textract service\n*\tAWS Textract to extract text from images and store it in text files\n*\tAmazon CloudWatch to manage logs after deployment, for tracking progress and debugging\n\nThe architecture of our serverless web application is depicted below:\n\n\n\n*\tWe use Amazon S3 to host the web app. All the HTML pages are hosted in a single S3 bucket, aws-itc6480.\n*\tThe user registers and then logs in with a user-id and password. After login the user is directed to the plan page, where a plan is selected (which sets the number of images that can be uploaded) and paid for. The user then proceeds to the upload page, where files can be uploaded.\n*\tAPI Gateway passes requests and responses between the webpage and the backend, and Lambda performs the work behind each page.\n*\tUser information such as the email-id, user-id, password, selected plan, and the number of uploads remaining under that plan is stored in the users table.\n*\tThe uploaded images are saved in the S3 bucket image-webapp, with a new folder created for each user.\n*\tOnce a file lands in S3, the getText Lambda is triggered; it extracts the text from the image and stores it as a text file in another S3 bucket, textractawsitc6480.\n\n### Pages\n\nRegister page:\n\n\n\nLogin page:\n\n\n\nPayment page:\n\n\n\nUpload page:\n\n\n\n## Contributing\nPull requests are welcome. 
For major changes, please open an issue first to discuss what you would like to change.\n\nPlease make sure to update tests as appropriate.\n\n"
},
{
"alpha_fraction": 0.5463765263557434,
"alphanum_fraction": 0.5541939735412598,
"avg_line_length": 31.86805534362793,
"blob_id": "99e299774ecbfdf6d55f7acf72e5bc14e4526e42",
"content_id": "019feeb5b0759d199d2b418442d24a3ec55819d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9466,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 288,
"path": "/lambda functions/TestLambda_lambda_function.py",
"repo_name": "siddsapte12/AWSfileuploadWebApp",
"src_encoding": "UTF-8",
"text": "import json\nimport sys\nimport logging\nimport rds_config\nfrom rds_config import checkDBconn \nimport pymysql\nimport collections\nimport boto3\nimport base64\ndef lambda_handler(event, context):\n \n print(event)\n \n if((event['path'] == \"/login\") and (event['httpMethod'] == \"POST\")):\n userObject = json.loads(event['body'])\n print(userObject['Username'])\n print(userObject['Password'])\n return loginUser(userObject['Username'],userObject['Password'])\n elif((event['path'] == \"/register\") and (event['httpMethod'] == \"POST\")):\n userObject = json.loads(event['body'])\n return register(userObject)\n elif((event['path'] == \"/plan\") and (event['httpMethod'] == \"POST\")):\n userObject = json.loads(event['body'])\n print(userObject['username'])\n return planSelect(userObject)\n elif((event['path']==\"/upload\") and (event['httpMethod']==\"POST\")):\n return upload(event,context)\n \n \ndef checkUserFromDatabase(username):\n ## Query database with username and password\n conn = checkDBconn()\n cursor = conn.cursor()\n #user = event['body'].Username\n print(cursor)\n val = cursor.execute(\"SELECT user_name from users where user_name = '\"+username+\"'\") \n cursor.close()\n conn.close()\n print(val)\n if val == 1:\n return True; \n \n else:\n return False;\n \n\n\ndef register(userObject):\n \"\"\"\n This function inserts content into mysql RDS instance\n \"\"\"\n conn = checkDBconn()\n cursor = conn.cursor()\n \n \n item_count = 0\n username=userObject['username']\n email=userObject['email']\n password=userObject['password']\n \n print(\"username\", username+email+password)\n status = checkUserFromDatabase(username)\n print(status)\n if(status):\n d = collections.OrderedDict()\n d['statusMessage'] = \"User Exists\"\n d['statusCode'] = 500\n return {\n 'statusCode': 500,\n 'headers': {\n 'Content-Type': 'application/json',\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type\",\n 
\"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\"\n },\n 'body': json.dumps(d)\n }\n else:\n val = cursor.execute(\"insert into users (user_name, email, pwd) values( '\"+username+\"','\"+email+\"','\"+password+\"')\")\n print(\"affected rows = {}\".format(cursor.rowcount))\n \n # Get the primary key value of the last inserted row\n print(\"Primary key id of the last inserted row:\")\n print(cursor.lastrowid)\n\n print(\"val::\",val)\n \n cursor.close()\n conn.close()\n if val == 1:\n d = collections.OrderedDict()\n d['statusMessage'] = \"Success\"\n d['statusCode'] = 200\n \n return {\n 'statusCode': 200,\n 'headers': {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\",\n 'Content-Type': 'application/json',\n \"Set-Cookie\":\"succes\",\n \"set-Cookie\":\"succeeded\"\n },\n 'body': json.dumps(d)\n }\n \n else:\n d = collections.OrderedDict()\n d['statusMessage'] = \"User Exists\"\n d['statusCode'] = 500\n return {\n 'statusCode': 500,\n 'headers': {\n 'Content-Type': 'application/json',\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\"\n },\n 'body': json.dumps(d)\n }\n \ndef loginUser(username, password):\n conn = checkDBconn()\n cursor = conn.cursor()\n #user = event['body'].Username\n val = cursor.execute(\"SELECT user_name from users where user_name = '\"+username+\"' and pwd = '\"+password+\"'\") \n sqlStmt = \"SELECT userid,user_name,email,plan_select from users where user_name = '\"+username+\"'\"\n rows = cursor.execute(sqlStmt)\n usr = cursor.fetchall() \n print(usr)\n cursor.close()\n conn.close()\n \n if val == 1:\n \n objects_list = []\n for row in usr:\n print(row)\n d = collections.OrderedDict()\n d['userid'] = row[0]\n d['user_name'] = row[1]\n d['email'] = row[2]\n d['plan_select'] = row[3]\n objects_list.append(d)\n j = 
json.dumps(objects_list)\n\n \n return {\n 'statusCode': 200,\n 'headers': {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\",\n 'Content-Type': 'application/json',\n \"Set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\",\n \"set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\"\n },\n 'body': json.dumps(objects_list)\n }\n \n else:\n return {\n 'statusCode': 401,\n 'headers': {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\"\n },\n 'body': json.dumps(\"Invalid login credentials\")\n }\n \ndef planSelect(userObject):\n conn = checkDBconn()\n cursor = conn.cursor()\n \n cardname = userObject['cardname']\n amount = userObject['amount']\n plan = userObject['plan']\n image = userObject['image']\n username = userObject['username']\n sqlStmt = \"SELECT userid,user_name,email,plan_select,up_plan_id from users where user_name = '\"+username+\"'\"\n rows = cursor.execute(sqlStmt)\n usr = cursor.fetchall() \n print(usr)\n \n if rows == 1:\n sqlStmt=\"UPDATE users SET up_plan_id='\"+plan+\"',plan_select=1,up_remainder='\"+image+\"'WHERE user_name = '\"+username+\"'\";\n cursor.execute(sqlStmt)\n cursor.close()\n conn.close()\n return {\n 'statusCode': 200,\n 'headers': {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type,filename\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\",\n 'Content-Type': 'application/json',\n \"Set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\",\n \"set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\"\n },\n 'body': json.dumps(\"updated\")\n }\n else:\n cursor.close()\n conn.close()\n return {\n 'statusCode': 401,\n 'headers': {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\"\n },\n 'body': 
json.dumps(\"not updated\")\n }\n\ndef upload(event, context):\n conn = checkDBconn()\n cursor = conn.cursor()\n \n BUCKET_NAME = '<bucketname>'\n print(event['content'])\n file_content = base64.b64decode(event['content'])\n list_of_events = event['params'].split(\"/\")\n username = list_of_events[0]\n print(username)\n sqlStmt = \"SELECT up_remainder from users where user_name = '\"+username+\"'\"\n rows = cursor.execute(sqlStmt)\n usr = cursor.fetchall()\n count = int(usr[0][0])-1\n print(str(count))\n if count > 0 :\n sqlStmt=\"UPDATE users SET up_remainder='\"+str(count)+\"'WHERE user_name = '\"+username+\"'\";\n cursor.execute(sqlStmt)\n cursor.close()\n conn.close()\n print(usr)\n file_path = event['params']\n s3 = boto3.client('s3')\n try:\n s3_response = s3.put_object(Bucket=BUCKET_NAME, Key=file_path, Body=file_content)\n except Exception as e:\n raise IOError(e)\n textract(event,context)\n return {\n 'statusCode': 200,\n 'headers': {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type,X-Filename\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\",\n 'Content-Type': 'application/json',\n \"Set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\",\n \"set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\"\n },\n 'body': {\n 'file_path': file_path,\n 'count' : count\n }\n }\n else:\n return {\n 'statusCode': 400,\n 'headers': {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \"Content-Type,X-Filename\",\n \"Access-Control-Allow-Methods\": \"OPTIONS,POST,GET\",\n 'Content-Type': 'application/json',\n \"Set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\",\n \"set-Cookie\":\"HttpOnly;Secure;SameSite=Strict\"\n },\n 'body': {\n 'error':'user not present or you dont have image count left'\n }\n }\ndef textract(event,context):\n \n BUCKET_NAME = '<bucketname>'\n print(event['content'])\n file_content = base64.b64decode(event['content'])\n list_of_events = event['params'].split(\"/\")\n username = 
list_of_events[0]\n filename = list_of_events[1]\n file_path = username+filename\n s3 = boto3.client('s3')\n try:\n s3_response = s3.put_object(Bucket=BUCKET_NAME, Key=file_path, Body=file_content, ContentType='image/jpeg')\n except Exception as e:\n raise IOError(e)\n"
},
{
"alpha_fraction": 0.6657859683036804,
"alphanum_fraction": 0.6684280037879944,
"avg_line_length": 31.913043975830078,
"blob_id": "95e772b4a0a9ed63e077393a238bdd32f68a4a58",
"content_id": "3553790a267ff25c4c65ecdfdbc7db86a971b99c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 757,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 23,
"path": "/lambda functions/TestLambda_rds_config.py",
"repo_name": "siddsapte12/AWSfileuploadWebApp",
"src_encoding": "UTF-8",
"text": "import logging\nimport pymysql\nimport json\nimport sys\ndef checkDBconn():\n #config file containing credentials for RDS MySQL instance\n db_username = \"<username>\"\n db_password = \"<password>\"\n db_name = \"<db_name>\"\n rds_host = \"<rds_host>\"\n ## Write code for DB Connection\n\n logger = logging.getLogger()\n logger.setLevel(logging.INFO)\n \n try:\n conn = pymysql.connect(rds_host, user=db_username, passwd=db_password, db=db_name, connect_timeout=15, autocommit = True)\n return conn\n except pymysql.MySQLError as e:\n logger.error(\"ERROR: Unexpected error: Could not connect to MySQL instance.\")\n logger.error(e)\n sys.exit()\n logger.info(\"SUCCESS: Connection to RDS MySQL instance succeeded\")\n"
}
] | 3 |
sumantmann/incubateHack | https://github.com/sumantmann/incubateHack | c9080967e8fbea6a929d125ddee7e16e63a7291a | b83804aef58ca940d711b0bd36b8cd72ea0f66c7 | 8b5d9bfeb3e9b043d69efdff9c72ff5e43c010b5 | refs/heads/master | 2021-01-15T15:25:36.259208 | 2016-08-29T04:39:28 | 2016-08-29T04:39:28 | 65,076,370 | 1 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.5667074918746948,
"alphanum_fraction": 0.5691554546356201,
"avg_line_length": 19.94871711730957,
"blob_id": "02d51f4e75d1b71155005a4e88bf40c2ce38ecfb",
"content_id": "547f41ff676525ac56fffbfdd4f4d7f0c0733619",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 817,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 39,
"path": "/client_side/templates/sign_up_page2.html",
"repo_name": "sumantmann/incubateHack",
"src_encoding": "UTF-8",
"text": "{% extends 'base.html' %}\n{% load staticfiles %}\n\n\n{% block body %}\n<!-- <div class=\"chips\"></div> -->\n<div class=\"container\">\n <h5>Please select your interest.</h5>\n <div class=\"chips chips-initial\"></div>\n</div>\n\n <!-- <div class=\"chips chips-placeholder\"></div> -->\n\n<div class=\"bottom-button\">\n <a class=\"waves-effect waves-light btn bottom-next-button\" href=\"{% url 'profile_edit_page' %}\">next</a>\n</div>\n{% endblock body %}\n\n{% block scripts %}\n {{ block.super }}\n <script>\n\n $('.chips').material_chip();\n $('.chips-initial').material_chip({\n data: [{\n tag: 'Apple',\n }, {\n tag: 'Microsoft',\n }, {\n tag: 'Google',\n }],\n });\n $('.chips-placeholder').material_chip({\n placeholder: 'Enter a tag',\n secondaryPlaceholder: '+Tag',\n });\n\n </script>\n{% endblock scripts %}\n"
},
{
"alpha_fraction": 0.526303768157959,
"alphanum_fraction": 0.5343092679977417,
"avg_line_length": 43.161617279052734,
"blob_id": "a801478cf4007f36328be942f75098a8e685c2d0",
"content_id": "de3154f13bff1a91c7d6d1e092c8a27b942798eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 4372,
"license_type": "no_license",
"max_line_length": 153,
"num_lines": 99,
"path": "/client_side/templates/base.html",
"repo_name": "sumantmann/incubateHack",
"src_encoding": "UTF-8",
"text": "{% load static from staticfiles %}\n<!DOCTYPE html>\n<html>\n <head>\n <title>{% block title %} {% endblock title %}</title>\n \t{% block commonmeta %}\n \t\t<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n \t\t<meta http-equiv=\"X-UA-Compatible\" content=\"IE=Edge\">\n \t\t<meta charset=\"UTF-8\">\n \t{% endblock commonmeta %}\n {% block stylesheet %}\n <link rel=\"stylesheet\" href=\"{% static 'materialize/css/materialize.min.css' %}\">\n <link rel=\"stylesheet\" href=\"{% static 'css/base.css' %}\" />\n <!-- <link href=\"https://fonts.googleapis.com/icon?family=Material+Icons\" rel=\"stylesheet\"> -->\n\n\n {% endblock stylesheet %}\n </head>\n <body>\n {% block header %}\n <header>\n <div class=\"container\">\n <div class=\"container-sub-header navbar-fixed\">\n <a href=\"\" data-activates=\"slide-out\" class=\"button-collapse\"><img src=\"{% static 'images/menu.png' %}\" alt=\"\"></a>\n <a class=\"btn-floating \" style=\"float:right;height:30px; width:30px; \" ><img src=\"{% static 'images/message.png' %}\" style=\"height: 20px;\n width: 20px;margin-top: 5px;\" alt=\"\"></a>\n <a class=\"btn-floating \" style=\"float:right; height:30px; width:30px; margin-right:10px;\" ><img src=\"{% static 'images/noti.png' %}\" alt=\"\"></a>\n </div>\n <div class=\"clearfix\"></div>\n </div>\n <ul id=\"slide-out\" class=\"side-nav\">\n <li><div class=\"userView\">\n <!-- <img class=\"background\" src=\"images/office.jpg\"> -->\n <a href=\"{% url 'profile_edit_page' %}\"><img class=\"circle\" src=\"{% static 'images/profile_demo.png' %}\"></a>\n <a href=\"{% url 'profile_edit_page' %}\"><span class=\"white-text name\">Sunny Arora</span></a>\n <a href=\"{% url 'profile_edit_page' %}\"><span class=\"white-text email\">[email protected]</span></a>\n </div></li>\n <li><a href=\"{% url 'community_view' %}\">Community</a></li>\n <li><div class=\"divider\">Magazine</div></li>\n <li><a class=\"waves-effect\">Events</a></li>\n <li><a 
class=\"waves-effect\" href=\"#!\">Webinars</a></li>\n <li><a class=\"waves-effect\" href=\"{% url 'all_blog_page' %}\">Bloggers</a></li>\n <li><a href=\"\">Settings</a></li>\n <li><a href=\"{% url 'all_notes' %}\">Tasks</a></li>\n </ul>\n\n </header>\n <div class=\"second-header\">\n <div class=\"container\">\n <div class=\"row\">\n <div class=\"col s12\">\n <ul class=\"tabs\">\n {% if active_all_blog %}\n <li class=\"tab col s3\"><a target=\"_self\" class=\"active\" href=\"{% url 'all_blog_page' %}\">My Feed</a></li>\n {% else %}\n <li class=\"tab col s3\"><a target=\"_self\" href=\"{% url 'all_blog_page' %}\">My Feed</a></li>\n {% endif %}\n {% if active_events %}\n <li class=\"tab col s3\"><a target=\"_self\" class=\"active\" href=\"#test2\">Events</a></li>\n {% else %}\n <li class=\"tab col s3\"><a target=\"_self\" href=\"#test2\">Events</a></li>\n {% endif %}\n {% if active_webinars %}\n <li class=\"tab col s3 \"><a target=\"_self\" class=\"active\" href=\"#test3\">Webinars</a></li>\n {% else %}\n <li class=\"tab col s3 \"><a target=\"_self\" href=\"#test3\">Webinars</a></li>\n {% endif %}\n {% if active_community %}\n <li class=\"tab col s3\"><a target=\"_self\" class=\"active\" href=\"{% url 'community_view' %}\">Community</a></li>\n {% else %}\n <li class=\"tab col s3\"><a target=\"_self\" href=\"{% url 'community_view' %}\">Community</a></li>\n {% endif %}\n \n </ul>\n </div>\n </div>\n </div>\n </div>\n {% endblock header %}\n {% block body %}\n\n {% endblock body %}\n {% block scripts %}\n <script type=\"text/javascript\" src=\"{% static 'js/jquery-2.1.3.js' %}\"></script>\n <script type=\"text/javascript\" src=\"{% static 'materialize/js/materialize.min.js' %}\"></script>\n <script type=\"text/javascript\" src=\"{% static 'js/main.js' %}\"></script>\n <script>\n $(\".button-collapse\").sideNav();\n $('.second-header').stickThis();\n $(document).ready(function(){\n $('ul.tabs').tabs();\n });\n\n // Initialize collapsible (uncomment the line below if you 
use the dropdown variation)\n // $('.collapsible').collapsible();\n </script>\n {% endblock scripts %}\n </body>\n</html>\n"
},
{
"alpha_fraction": 0.4883720874786377,
"alphanum_fraction": 0.6744186282157898,
"avg_line_length": 13.333333015441895,
"blob_id": "174ab09d57896980fcdb2675b4481a4dc726f417",
"content_id": "6173ac28d712ba58e546f492a398e25c2daecf5d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 43,
"license_type": "no_license",
"max_line_length": 16,
"num_lines": 3,
"path": "/requirements.txt",
"repo_name": "sumantmann/incubateHack",
"src_encoding": "UTF-8",
"text": "Django==1.9\nPillow==3.3.0\nreportlab==3.3.0\n"
},
{
"alpha_fraction": 0.6739469766616821,
"alphanum_fraction": 0.6801872253417969,
"avg_line_length": 53.94285583496094,
"blob_id": "85dcb6a1132bdcc3486749d38f7082572b01645e",
"content_id": "31303f1e635adde20cbe3b29a9badc4e03b00838",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1923,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 35,
"path": "/incubateHack/urls.py",
"repo_name": "sumantmann/incubateHack",
"src_encoding": "UTF-8",
"text": "\"\"\"incubateHack URL Configuration\n\nThe `urlpatterns` list routes URLs to views. For more information please see:\n https://docs.djangoproject.com/en/1.9/topics/http/urls/\nExamples:\nFunction views\n 1. Add an import: from my_app import views\n 2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')\nClass-based views\n 1. Add an import: from other_app.views import Home\n 2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')\nIncluding another URLconf\n 1. Add an import: from blog import urls as blog_urls\n 2. Import the include() function: from django.conf.urls import url, include\n 3. Add a URL to urlpatterns: url(r'^blog/', include(blog_urls))\n\"\"\"\nfrom django.conf.urls import url\nfrom django.contrib import admin\n\nurlpatterns = [\n url(r'^admin/', admin.site.urls),\n url(r'^$', 'general.views.home_page', name='home_page'),\n url(r'^sign-up-page2/$', 'general.views.sign_up_page2', name=\"sign_up_page2\"),\n url(r'^profile-edit/$', 'general.views.profile_edit_page', name='profile_edit_page'),\n url(r'^profile-detail-page/$', 'general.views.profile_detail_page', name='profile_detail_page'),\n url(r'^all-blog/$', 'general.views.all_blog_page', name='all_blog_page'),\n url(r'^blog-detail-page/$', 'general.views.blog_detail_page', name='blog_detail_page'),\n url(r'^profile-detail-all-blog/$', 'general.views.profile_view_all_blog', name='profile_view_all_blog'),\n url(r'^act-now-page/$', 'general.views.act_now_page', name='act_now_page'),\n url(r'^community/$', 'general.views.community_view', name='community_view'),\n url(r'^community-single-page/$', 'general.views.community_single_page', name='community_single_page'),\n url(r'^all-notes/$', 'general.views.all_notes', name='all_notes'),\n url(r'^events/$', 'general.views.events_view', name='events_view'),\n url(r'^webinar/$', 'general.views.webinar_view', name='webinar_view')\n]\n"
},
{
"alpha_fraction": 0.7206076383590698,
"alphanum_fraction": 0.7219286561012268,
"avg_line_length": 28.115385055541992,
"blob_id": "3422e6ebd30a7a3802aece3e2bf2efec3bffb5ae",
"content_id": "7cf5f072f3625fb1362cda857bd142acdd111012",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1514,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 52,
"path": "/general/views.py",
"repo_name": "sumantmann/incubateHack",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\n# from reportlab.pdfgen import canvas\nfrom django.http import HttpResponse\n# Create your views here.\n\ndef home_page(request):\n return render(request, 'home_page.html')\n\ndef sign_up_page2(request):\n return render(request, 'sign_up_page2.html')\n\ndef profile_edit_page(request):\n return render(request, 'profile_edit.html')\n\ndef all_blog_page(request):\n active_all_blog = True\n context = {'active_all_blog':active_all_blog}\n return render(request, 'all_blog_page.html', context)\n\ndef blog_detail_page(request):\n return render(request, 'blog_detail_page.html')\n\ndef profile_detail_page(request):\n return render(request, 'profile_detail_page.html')\n\ndef profile_view_all_blog(request):\n return render(request, 'all_blog_page.html')\n\ndef act_now_page(request):\n return render(request, 'act_now_page.html')\n\ndef community_view(request):\n active_community = True\n context = {'active_community':active_community}\n return render(request, 'community_view.html', context)\n\ndef webinar_view(request):\n active_webinar = True\n context = {'active_webinar':active_webinar}\n return render(request, 'webinar_view.html', context)\n\ndef events_view(request):\n active_events = True\n context = {'active_events':active_webinar}\n return render(requset, 'events_view.html', context)\n\n\ndef community_single_page(request):\n return render(request, 'community_single_page.html')\n\ndef all_notes(request):\n return render(request, 'all_notes.html')\n"
}
] | 5 |
piyushmittal20/TextUtils | https://github.com/piyushmittal20/TextUtils | 5d848a61aa2a8b1c41b0e1bcea6cc522f08b7761 | 908b4a96ce21a50f199b5365973737d0977a445c | e3d9b0d71cf4c38bd3f56db375953c3827c34624 | refs/heads/master | 2023-02-19T22:03:00.442347 | 2021-01-24T09:09:09 | 2021-01-24T09:09:09 | 332,405,546 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5508790612220764,
"alphanum_fraction": 0.5572722554206848,
"avg_line_length": 27.876922607421875,
"blob_id": "c9083fecc756886930a59289fce847c7464e72ab",
"content_id": "fc33373768306b331a34a216e1047912cbb11725",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1877,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 65,
"path": "/textUtils/views.py",
"repo_name": "piyushmittal20/TextUtils",
"src_encoding": "UTF-8",
"text": "from django.http import HttpResponse\nfrom django.shortcuts import render\n\n\ndef index(request):\n # params = {'name': 'piyush'}\n return render(request, \"index.html\")\n\n\ndef contact(request):\n return render(request, \"contact.html\")\n\n\ndef analyze(request):\n text = request.GET.get(\"text\", \"default\")\n\n removepunc = request.GET.get(\"removepunc\", \"off\")\n uppercase = request.GET.get(\"uppercase\", \"off\")\n newlineremover = request.GET.get(\"newlineremover\", \"off\")\n numremover = request.GET.get(\"numremover\", \"off\")\n\n if uppercase == \"on\":\n analyzed = \"\"\n for char in text:\n analyzed = analyzed + char.upper()\n params = {\"analyzed_text\": analyzed}\n text = analyzed\n\n if removepunc == \"on\":\n analyzed = \"\"\n punctuations = \"\"\"!()-[]{};:'\",<>./?@#$%^&*_~\"\"\"\n for char in text:\n if char not in punctuations:\n analyzed = analyzed + char\n params = {\"analyzed_text\": analyzed}\n text = analyzed\n\n if newlineremover == \"on\":\n analyzed = \"\"\n for char in text:\n if char != \"\\n\" and char != \"\\r\":\n analyzed = analyzed + char\n params = {\"analyzed_text\": analyzed}\n text = analyzed\n\n if numremover == \"on\":\n analyzed = \"\"\n numbers = \"0123456789\"\n for char in text:\n if char not in numbers:\n analyzed = analyzed + char\n params = {\"analyzed_text\": analyzed}\n text = analyzed\n\n if (\n removepunc != \"on\"\n and uppercase != \"on\"\n and newlineremover != \"on\"\n and numremover != \"on\"\n ):\n return HttpResponse(\n \"<h4>Please Select at least on command or if you don't want to make your text beautiful, so why would you come on textutils.com</h4>\"\n )\n\n return render(request, \"analyze.html\", params)\n"
},
{
"alpha_fraction": 0.6910994648933411,
"alphanum_fraction": 0.7879580855369568,
"avg_line_length": 21.47058868408203,
"blob_id": "1097085fa6e7371e3676fab1490858ba1e93b6d3",
"content_id": "96125d2837de8d90664eb057ea19d5f8d6c2ba21",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 382,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 17,
"path": "/README.md",
"repo_name": "piyushmittal20/TextUtils",
"src_encoding": "UTF-8",
"text": "# TextUtils\nThis is an Django Website and built with templates and Django.\n\n##Preview\n\n\n\n## Features\nThis website makes your text beautiful and you can do many things here, like:-\n\nRemove Puncuation.\n\nUppercasing letters.\n\nNew line removing.\n\nRemoving numbers from text.\n"
}
] | 2 |
red-frog/nlp | https://github.com/red-frog/nlp | 022c6e74d92b66eb7d9711e0dd0e3284d4b89cd6 | fee5cd61d15e25f9ec695d973aea9d2183226314 | 8e4ceb19c7f8cd4870a4dcc711a72a3f56ac48ad | refs/heads/master | 2020-04-14T10:18:42.980605 | 2019-01-03T05:46:12 | 2019-01-03T05:46:12 | 163,783,400 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5799999833106995,
"alphanum_fraction": 0.5799999833106995,
"avg_line_length": 10.149999618530273,
"blob_id": "5abb91da51919ae31f060a104e6d42581b152f79",
"content_id": "e8dc551e398c4e0b349b4f73a4316c2932a82daa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 922,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 40,
"path": "/聊天机器人模型/聊天机器人模型.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 对话产生方式\n\n#### 基于检索技术的模型\n\n - 有明确的问答对数据库\n - 使用语句匹配的形式查找答案\n - 答案相对固定, 且很少出现语法错误\n - 不会出现新的语句\n \n#### 基于生成式模型\n - 不依赖预先设定的问答库\n - 通常基于机器翻译技术\n - 需要大量的语料进行训练\n- Encoder-Decoder模式\n- 机器翻译\n- 输入的是问题, 翻译的是回答\n\n#### 混合模式\n\n- 兼具检索模式和生成方式\n- 目前最常用的解决方案\n- 检索模式产生候选数据集\n- 生成模式产生最终答案\n\n\n#### 聊天机器人模型\n\n- 语音识别(科大讯飞)\n- 自然语言理解(NLU)\n- 对话管理(DM)\n- 自然语言生成(NLG)\n- 语音合成(TTS, 科大讯飞)\n\n#### 聊天机器人训练\n\n- 开始\n- 加载训练语料\n- 加载完成\n - 是--->结束\n - 否--->对话训练--->存储训练结果\n "
},
{
"alpha_fraction": 0.46251994371414185,
"alphanum_fraction": 0.4779372811317444,
"avg_line_length": 28.865079879760742,
"blob_id": "8a2d0ec1cede7b16f65ae2ca8b3d7f8be340ca6b",
"content_id": "bfaec88c5f7694081d843625da8f8ad8a5dab54b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4114,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 126,
"path": "/chatbot/extract_conv.py",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n\n\n\"\"\"\nfunction: prepare data, 去除特殊符号, 以及特殊字符替换\n\"\"\"\nimport re\n\nimport pickle\nfrom tqdm import tqdm # 进度条\n\n\ndef make_split(line):\n # 进行切分合并\n # print(re.match(r'.*([,…?!\\.,!?])$', ''.join(line)))\n if re.match(r'.*([,…?!\\.,!?])$', ''.join(line)):\n return []\n return [', ']\n\n\ndef good_line(line):\n \"\"\"\n 判断是不是有用的句子\n :param line:\n :return:\n \"\"\"\n new_line = re.findall(r'[a-zA-Z0-9]', ''.join(line))\n if len(re.findall(r'[a-zA-Z0-9]', ''.join(line))) > 2:\n # 如果出现字母或者数字, 将其替换为空字符串, 并加入句子, 如果句子长度大于2, 则不是一个好的句子\n return False\n return True\n\n\ndef regular(sen):\n sen = re.sub(r'\\.{3, 100}', '...', sen) # 句子中连着出现.最少3次, 最多100次, 将其替换为...\n sen = re.sub(r'...{2, 100}', '...', sen) # 处理......\n sen = re.sub(r'[,]{1, 100}', ',', sen) # 处理中,\n sen = re.sub(r'[\\.]{1,100}', '。', sen) # 处理.\n sen = re.sub(r'[\\?]{1,100}', '?', sen) # 处理?\n sen = re.sub(r'[!]{1,100}', '!', sen) # 处理!\n return sen\n\n\ndef main(limit=20, x_limit=3, y_limit=6):\n from word_sequence import WordSequence\n print('extract lines') # 开始解压文件, 处理好的数据集\n groups, group = list(), list()\n with open('dgk_shooter_min.conv', 'r', errors='ignore', encoding='utf-8') as fp:\n for line in tqdm(fp): # tqdm 进度条\n if line.startswith('M'):\n line = line.replace('\\n', '')\n if '/' in line:\n line = line[2:].split('/')\n else:\n line = list(line[2:])\n line = line[:-1]\n group.append(list(regular(''.join(line))))\n else:\n if group:\n groups.append(group)\n group = list()\n if group:\n groups.append(group)\n group = []\n print('extract group')\n\n # 定义输入和输出, 问答对的处理\n x_data = list()\n y_data = list()\n for group in tqdm(groups):\n for i, line in enumerate(group): # 获取三行数据, 并赋值\n last_line = None\n if i > 0:\n last_line = group[i-1]\n if not good_line(line=last_line):\n last_line = None\n next_line = None\n if i < len(group) - 1:\n next_line = group[i+1]\n if not good_line(line=next_line):\n next_line = None\n next_next_line = None\n 
if i < len(group) - 2:\n next_next_line = group[i+2]\n if not good_line(line=next_next_line):\n next_next_line = None\n if next_line:\n x_data.append(line)\n y_data.append(next_line)\n if last_line and next_line:\n x_data.append(last_line+make_split(last_line) + line)\n y_data.append(next_line)\n if next_line and next_next_line:\n x_data.append(line)\n y_data.append(next_line + make_split(next_line) + next_next_line)\n print(len(x_data), len(y_data))\n\n # 设置问和答\n\n for ask, answer in zip(x_data[:20], y_data[:20]): # 只取前20个字符\n print(''.join(ask))\n print(''.join(answer))\n print('-'*20)\n\n data = list(zip(x_data, y_data)) # 将数据打包\n data = [\n (x, y)\n for x, y in data\n if len(x) < limit and len(y) < limit and len(y) >= y_limit and len(x) >= x_limit\n ]\n x_data, y_data = zip(*data)\n print('fit word_sequence')\n ws_input = WordSequence()\n ws_input.fit(x_data+y_data)\n print('dump')\n pickle.dump((x_data, y_data),\n open('chatbot.pkl', 'wb'))\n pickle.dump(ws_input, open('ws.pkl', 'wb'))\n\n print('done')\n\n\nif __name__ == '__main__':\n # print(good_line(line='你好你发多少发放的safasdgf'))\n main()"
},
{
"alpha_fraction": 0.6645161509513855,
"alphanum_fraction": 0.6645161509513855,
"avg_line_length": 10,
"blob_id": "e178858a94edca387ad259b83b25035979fcee1c",
"content_id": "52a435e0d394622bcec1e860766db6ff7d5f4d6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 179,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 14,
"path": "/NLP基础/朴素贝叶斯案例.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 环境介绍\n\n- python\n- jieba\n- sklearn\n- scipy(高级数学工具)\n\n\n## 安装\n- pip install jiebe --upgrade\n\n- pip install sklearn --upgrade\n\n- pip install scipy --upgrade\n\n"
},
{
"alpha_fraction": 0.6291208863258362,
"alphanum_fraction": 0.6373626589775085,
"avg_line_length": 16.380952835083008,
"blob_id": "c0c4ab216140fb8b6835beceb4f1f99bff7a282f",
"content_id": "44b95121906a9539d0fd2f7cf84a1313294c237b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 710,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 21,
"path": "/NLP基础/词向量与word2vec.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "# 词向量与word2vec\n\n## 词向量\n\n- 又叫词嵌入, 是自然语言处理中的一组语言建模和特征学习技术的统称, 其中来自词汇表的单词或短语被映射到实数的向量\n\n\n## Word2vec\n\n- 为一群用来产生词向量的相关模型, 这些模型为浅而双层的神经网络, 用来训练以重新构建语言学之词文本\n\n- 两种模型\n - CBOW\n - 由输入层, 映射层, 输出层共同构成\n - 二叉树结构\n - 这种二叉数结构应用到Word2vec中被称为Hierarchical Softmax\n \n - Skip-gram\n - 与CBOW模型正好是相反的\n - 也是由输入层, 映射层, 输出层构成\n - 也是一种二叉数结构"
},
{
"alpha_fraction": 0.6157734990119934,
"alphanum_fraction": 0.6167846322059631,
"avg_line_length": 15.762711524963379,
"blob_id": "fd04028bd9b13d81e5177ee26489d059c0c41391",
"content_id": "40fae17046710e7d7a677bab70763353b65796c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2118,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 59,
"path": "/NLP基础/马尔可夫过程.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "# 马尔可夫过程\n\n- 马尔可夫过程是一类随机过程\n\n- 在已知目前状态(现在)的条件下, 它未来的演变(将来)不依赖于它以往的演变(过去). 主要研究一个系统的状况及其转移的理论.它是通过对不同状态的初始概率以及状态之间的转换概率的研究, 来确定状态的变化趋势, 从而达到对预测未来的目的.\n\n\n- 举例: 病毒传染/液体中离子运功\n\n- 马尔可夫链是指具有马尔可夫性质的离散事件随机过程, 即时间和状态参数都是离散的马尔可夫过程, 是最简单的马尔可夫过程.\n\n\n# 隐马尔可夫模型(Hidden Markov Model, HMM)\n\n- 应用场景\n - 中文分词\n - 机器翻译\n - 语音识别\n - 通信中的译码\n\n- 统计分析模型\n\n- 结构最简单的动态贝叶斯网, 著名有向图模型, 主要用于时序数据建模(语音识别, 自然语言处理等)\n\n- 隐马尔可夫模型是马尔可夫链的一种, 它的状态不能直接观察到, 但能通过观测向量序列观察到,每个观测向量都是通过某些概率密度分布表现为各种状态, 每一个观测向量是由一个具有相应概率密度分布的状态序列产生.\n - 下一个时刻状态仅由当前时刻决定\n - 由5个要素组成, 其中两个状态集合(N, M), 三个概率矩阵(A, B, π):\n - N, 表示模型中的状态数, 状态之间可以相互转移\n - M, 表示每个状态不同的观察符号, 即输出字符的个数\n - A, 状态转移概率分布\n - B, 观察符号在各个状态下的概率分布(发射概率)\n\n - π, 表示初始状态分布\n\n\n \n```\n输入: HMMS的五元组(N, M, A, B, π)\n输出: 一个观察符号的序列, 这个序列的每个元素都是M中的元素\n```\n\n\n```\n\n过程:\n - 训练语料\n - 初始概率矩阵\n - 转移概率矩阵\n - 发射概率矩阵\n - 分词且标注词性的句子(NE识别)\n - 汉字序列\n - 词性序列\n - 状态序列\n - 规则修正\n - NE标注\n - 标注了NE标记的句子(标注转换)\n \n \n```\n"
},
{
"alpha_fraction": 0.5161290168762207,
"alphanum_fraction": 0.5376344323158264,
"avg_line_length": 6.153846263885498,
"blob_id": "af220891c68f7d5d2a2ceff343f5646a69c0171f",
"content_id": "fb640ac6946b21651ebba05d6bc5d20b06901ae6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 688,
"license_type": "no_license",
"max_line_length": 16,
"num_lines": 52,
"path": "/NLP聊天机器人实战-数据处理/数据处理.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 1.环境介绍\n\n- sys(系统库)\n\n- pickle(类型转换)\n\n- re(正则)\n\n- tqdm\n\n\n## 2.语料收集\n\n- 聊天记录\n\n- 电影对话\n\n- 台词片段\n\n\n## 3.语料清洗\n\n- 要清洗的内容\n - 多余空格\n - 不正规的符号\n - 多余的英文, 字符\n- 清洗的方法\n\n - 正则化\n - 切分\n - 好坏语句判断\n \n \n## 4.句子向量编码化\n\n- 原始文本不能直接训练\n- 将句子转换成向量\n- 将向量转换成句子\n\n## 5. 语料问答对的构建\n\n- 问答对的处理和拆分\n\n## 6. 语料模型的保存\n\n- 使用pickle来保存模型\n- 生成pkl格式\n- 利用pkl格式进行语料的训练\n\n## 7. 深度模型\n\n## 8. 打包成restful\n"
},
{
"alpha_fraction": 0.580777108669281,
"alphanum_fraction": 0.580777108669281,
"avg_line_length": 12.971428871154785,
"blob_id": "d6bc668890cbf2b93f14cfcec0d8963141f00396",
"content_id": "133d12f10503664f247e70753005cf83167ad69f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1021,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 35,
"path": "/NLP基础/语料的获取和处理.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 语料的获取和处理\n\n#### 什么是语料和语料库\n\n- 语料\n - 语言材料, 语料是语言学研究的内容, 语料是构成语料库的基本单元\n\n- 语料库\n - 存放的是在语言实际使用中真实出现过的语言材料\n - 以电子计算机为载体承载语言材料的基础资源\n - 真实语料需要进行加工(分析和处理), 才能成为有用的资源\n \n#### 语料库的种类\n \n- 异质的\n- 同质的: 一系列的, 分类的, 能够\n- 系统的: 聊天机器人的对话语料\n- 专用的: 金融聊天语料, 医疗聊天语料\n\n#### 如何获取和处理语料\n\n- 语料获取途径\n - 爬虫\n - 开放性语料数据集\n - 中科院自动化所的中英文新闻语料库\n - 搜狗的中文新闻语料库\n - 人工生成的机器阅读理解数据集(Microsoft)\n - 一个开放问题与回答的挑战数据集(Microsoft)\n - 自由平台\n \n- 语料处理\n\n - 处理语料\n - 格式化文本\n - 特征工程\n"
},
{
"alpha_fraction": 0.5566037893295288,
"alphanum_fraction": 0.5566037893295288,
"avg_line_length": 14.166666984558105,
"blob_id": "9d0b4ece7174485d3ea610bf89806f3a0bab71a9",
"content_id": "12ec941eb4e177a1f4ac2bb48c130250db30fb60",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1224,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 42,
"path": "/NLP基础/贝叶斯分类.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "# 贝叶斯分类\n - 贝叶斯分类算法, 利用概率统计知识进行分类的算法\n -朴素贝叶斯是基于贝叶斯定理和特征条件独立假设的分类方法.最为广泛的是两种分类模型\n - 决策树模型(DTM)\n - 朴素贝叶斯模型(NBM)\n \n - 能运用到大型数据库, 方法简单, 分类准确, 速度快\n \n \n- 贝叶斯算法\n \n \n```\nP(c|x) = P(c)P(x|c)/P(x)=P(x,c)/P(x)\n\n \n c表示随机事件发生的一种情况. x表示的就是证据(evidence)/状况(condition), 泛指与随机时间相关的因素\n \n P(c|x): 在x的条件下, 随机事件出现c情况的概率(后验概率)\n P(c): (不考虑相关因素)随机事件出现c情况的概率(先验概率)\n P(x|c): 在已知事件出现c情况的条件下, 条件x 出现的概率(后验概率)\n P(x): x出现的概率(先验概率)\n \n```\n\n- 朴素贝叶斯算法\n\n - 基于一个简单的假定:给定目标值时属性之间相互条件独立\n \n - \"网上查\"\n\n - 常用场景\n - 文本分类\n - 垃圾邮件过滤\n - 多分类实时预测\n - 拼写纠错\n \n```\ndemo:\n 我司可代开普通增值税发票, 税点优惠, 欢迎来电咨询.\n\n```"
},
{
"alpha_fraction": 0.5716878175735474,
"alphanum_fraction": 0.5716878175735474,
"avg_line_length": 13.15384578704834,
"blob_id": "6267230a5a80ea913b145b4d72a78e40f32126f7",
"content_id": "9d79ef5fb93c01c38a2ffe836abc64480fc2fa5e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 753,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 39,
"path": "/NLP基础/Android系统介绍.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## Andriod系统是什么\n\n- 基于linux的自由开源操作系统\n- 分层架构\n - 应用程序层\n - 应用程序框架层(Application Framework)--安卓开发者要掌握的层\n - Activity Manager()\n - Window Manager\n - Content Providers\n - View System\n - Notification Manager\n - Package Manager\n - Telephone Manager\n - Resource Manager(资源管理)\n - Location Manager(位置管理) \n - XMPP(通信)\n - 系统运行库层\n - linux内核层\n \n \n## Android 开发流程\n\n- 开始\n- 编辑布局文件\n- 为每个控件添加实现\n - 业务逻辑的实现\n \n- 运行调试\n\n- 打包部署\n\n\n## Android 环境搭建\n\n- JDK\n\n- Android SDK\n\n- Android Studio"
},
{
"alpha_fraction": 0.7846534848213196,
"alphanum_fraction": 0.7871286869049072,
"avg_line_length": 15.199999809265137,
"blob_id": "3d9c544f3deb1a411306e035bbb1d80632910539",
"content_id": "abc0336be3b3ef23c17575871ae0430e2c221cc6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 872,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 25,
"path": "/聊天机器人模型/Seq2seq模型.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## Seq2seq模型\n\n- 是一个Encoder-Decoder结构的网络, 输入是一个序列, 输出也是一个序列\n\n- Encoder中将一个可变长度的信号序列变为固定长度的向量表达, Decoder将这个固定长度的向量变成可变长度的目标的信号序列\n\n- 最重要地方在于输入序列和输出序列的长度是可变的\n\n- 可以用于翻译, 聊天机器人, 句法分析, 文本摘要等\n\n\n#### Encoder过程(编码过程)\n\n- 取得输入的文本, 进行embedding\n\n- 传入到LSTM中进行训练\n- 记录状态, 并输出当前cell的结果\n- 依次循环, 得到最终结果\n\n#### Decoder过程(解码过程)\n\n- 在encoder最后一个时间步长的隐藏层之后输入到decoder的第一个cell里\n- 通过激活函数得到候选的文本\n- 筛选出可能性最大的文本作为下一个时间步长的输入\n- 依次循环, 得到目标"
},
{
"alpha_fraction": 0.6068376302719116,
"alphanum_fraction": 0.6239316463470459,
"avg_line_length": 12,
"blob_id": "aaac5e4cfb3c19ce5bd3ceaab69e2b8d7ff2bd7a",
"content_id": "7a32e6abf2203e1c56adab02ea209f9046c5568e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 253,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 9,
"path": "/NLP基础/文本处理方法.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "# 文本处理方法\n\n- 数据清洗(去掉无意义的标签, url, 符号)\n\n- 分词, 大小写转换, 添加句首句尾, 词性标注\n\n- 统计词频, 抽取文本特征, 特征选择, 计算特征权重, 归一化\n\n- 划分训练集, 测试集(7:3) "
},
{
"alpha_fraction": 0.37308868765830994,
"alphanum_fraction": 0.38837921619415283,
"avg_line_length": 12.666666984558105,
"blob_id": "1f7e318fb89c0d22e59cdf30ebeb4bdd814ee477",
"content_id": "6eb4f30c50d189749e45d35987cffaf4c55f40f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 529,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 24,
"path": "/NLP聊天机器人实战-数据处理/语料处理小结.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 语料处理总结\n\n1. 收集语料\n\n2. 语料处理\n - 切分, 标点符号等\n - 你好?我是frog..., 返回: 你好我是frog\n - 判断是不是个好句子\n \n - 正则处理\n - 你好........., 转换为你好...\n\n3. 主函数\n\n - 句子训练(把句子转换成向量)\n - 定义标签\n - 进行转换\n - to_index\n - to_word\n - 长度处理(加1进行补位)\n - 模型打包()\n \n4. 结果\n - 为模型训练做了个数据清洗"
},
{
"alpha_fraction": 0.5639097690582275,
"alphanum_fraction": 0.621553897857666,
"avg_line_length": 15.666666984558105,
"blob_id": "0554fefc0806f1fa1c3ceeb7bb4efde3d6a950a4",
"content_id": "46ec05666d8207add1d13ac0f78a21b10aa819f8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 537,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 24,
"path": "/chatbot/params.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "# 设置参数调优说明\n\n- bidirectional: true\n - 是否用双向网络\n- use_residual: false\n - 是否使用参差网络\n- use_dropout: false\n - \n- time_major: false\n - \n- cell_type: lstm\n - 有lstm或gru\n- depth: 2\n - 可以多层(深度与越大,速度越慢, 精度越高)\n- attention_type: Bahdanau\n - \n- hidden_units: 128\n - 一般128已经够用了\n- optimizer: adam\n - 其他优化器都可以尝试\n- learning_rate: 0.001\n - 一般学习率都是0.01\n- embedding_size: 300\n - 300-450就够了"
},
{
"alpha_fraction": 0.5448275804519653,
"alphanum_fraction": 0.5448275804519653,
"avg_line_length": 10.230769157409668,
"blob_id": "76567c86eed1b16e0d33910bc7dde7f7db113ada",
"content_id": "e3566f7d0639191c7e92ae45b408929b45063a54",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 283,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 13,
"path": "/NLP基础/长短期记忆网络.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## LSTM(RNN 特殊变种)\n\n- 说明以及用途\n - 合适处理和预测时间序列中间隔和延迟相对较长的重要事件\n - 聊天机器人, 长文本翻译, 自动写歌, 自动写诗\n \n \n- 遗忘门\n- 输入门\n- 输出门\n- GRU\n - 更新门\n - 重置门"
},
{
"alpha_fraction": 0.5239960551261902,
"alphanum_fraction": 0.5249755382537842,
"avg_line_length": 19.039215087890625,
"blob_id": "ec774d44fe1efa3df47f701263f9ac75a008ed39",
"content_id": "a15c44e05d6d3ba5d1485273a02bd651b4e25d10",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1895,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 51,
"path": "/NLP基础/nlp基础.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "# NLP\n\n### 什么是NLP\n \n- 自然语言处理(natural language processing), 探讨如何处理及运用自然语言;自然语言认知则是指让电脑\"懂\"人类的语言\n \n- 主要范畴:文本分析, 信息检索, 词性标注, 问答系统等\n\n### NLP技术\n\n- 词法分析\n\n - 分词技术\n - 词性标注\n - 通过语义分析系统进行标注(国家科学院研究成果)\n - 解释说明: 又称为词类标注或者简称标注, 是指为分析结果中的每个单词标注一个正确的词性的程序, 即确定每个词是名词, 动词, 形容词或者其他词性的过程\n - 命名实体识别(NER, Named Entity Recognition)\n - 解释说明: 又称为\"专名识别\", 是指识别文本中具有特定意义的实体, 主要包括人名, 地名, 机构名, 专有名词等.\n - 是信息提取, 问答系统, 句法分析, 机器翻译, 面向Semantic Web的元数据标注等应用领域的重要基础工具, 在自然语言处理技术走向实用化的过程中占有重要地位\n - 任务是识别出待处理文本中的三大类(实体类, 时间类和数字类), 七小类(人名, 机构名, 地名, 时间, 日期, 货币和百分比) 命名实体.\n - 怎么进行\n - 实体边界识别(分词)\n - 确定实体类别\n - 英文实体\n - 中文实体\n - 如何进行命名实体识别\n - 基于规则和词典的方法\n - 基于统计的方法(常用)\n - 隐马尔可夫模型(HMM)\n - 较大熵(ME)\n - 支持向量机(SVM)\n - 条件随机场(CRF)\n - 词义消歧\n\n- 句法分析\n\n- 语义分析\n\n \n\n### RNN和LSTM\n\n### NLP语言模型\n\n### 语料库的获取和建立\n\n### Word2Vec\n\n### 语义理解和情感分析\n\n### 文本处理方法"
},
{
"alpha_fraction": 0.8496732115745544,
"alphanum_fraction": 0.8496732115745544,
"avg_line_length": 24.58333396911621,
"blob_id": "5075e102117aa8a9cb1037f2b28b52b1cbc8452f",
"content_id": "fedcbbc8e7a0e33fbaf678500d411ab9be72b652",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 808,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 12,
"path": "/聊天机器人模型/Seq2seq模型(注意力机制).md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 注意力机制\n\n- 是在序列到序列模型中用于注意力编码器状态的最常用方法, 它同时还可用于回顾序列模型的过去状态\n\n- 不仅能用来处理编码器或前面的隐藏层,它同样还能用来获得其他特征的分布, 例如阅读理解任务中作为文本的词向量.\n\n## 为什么需要注意力机制\n\n- 减小处理高维输入数据的计算负担, 通过结构化的选取输入的子集, 降低数据维度.\n- 让任务处理系统更专注于找到输入数据中显著的与当前输出相关的有用信息, 从而提高输出的质量\n\n- Attention模型的最终目的是帮助类似编解码器这样的框架, 更好的学到多种内容模态之间的相互关系, 从而更好的表示这些信息, 克服其无法解释从而很难设计的缺陷"
},
{
"alpha_fraction": 0.5272727012634277,
"alphanum_fraction": 0.5424242615699768,
"avg_line_length": 11.576923370361328,
"blob_id": "ae9601c09f543c8792cad87ba8753274772c7f78",
"content_id": "7e2b47b606403f41923aa1b2cecbc581def198ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1204,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 52,
"path": "/NLP基础/语言模型.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 语言模型\n\n- 主要功能\n - 计算一个词语序列构成一个句子的概率(或者说计算一个词语序列的联合概率)\n - 判断一句话出现概率高不高, 符不符合我们的表达习惯, 是否通顺, 是否正确\n \n- 主要模型---> 统称为概率语言模型\n - Unigram models(一元文法统计模型)\n - N-gram语言模型(N元模型)\n \n### 概率语言模型\n\n- 预测字符串概率\n\n- 动机\n\n- 如何计算\n \n#### 一元文法同级模型\n\n```\np(s)=p(w1)*p(w2)*p(w3)*p(w4)*...*p(wn)\n\n这个式子成立的条件是有一个假设, 就是条件无关假设, 我们认为每个词都是条件无关的.\n\ndemo:\n 今天的天气\n p(今天的天气) = p(今)*p(天)*p(的)*p(天)*p(气)\n\n```\n\n\n#### 二元文法统计模型\n\n- 二元语言模型也能比一元模型能够更好的get到两个词语之间的关系信息\n\n```\n\np(s) = p(w1|</s>)*p(w2|w1)*p(w3|w2)*...*p(</s>|wn)\n\ndemo:\n 我喝水\n p(<s>我喝水</s>)=p(我||</s>)*p(喝|我)*p(水|喝)*p(<s>|水)\n \n\n```\n\n#### N元模型\n\n- 并不是N越大效果越好\n\n- n>3时基本就无法处理了, 参数空间太大, 另外它不能表示词与词之间的关联性\n\n\n "
},
{
"alpha_fraction": 0.52173912525177,
"alphanum_fraction": 0.5268115997314453,
"avg_line_length": 17.039474487304688,
"blob_id": "49593dd47103c0731cbc334951573932bbbd0dad",
"content_id": "2cac5c45270a0e7c05649e8fbc69732b830d8f52",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2126,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 76,
"path": "/NLP基础/基础导学.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "\n# 什么是Tensorflow\n\n - google开发\n - 用于语音识别或图像识别\n - 将复杂的数据结构传输至人工智能神经网中进行分析和处理\n\n - 支持CNN(卷积神经网络) RNN和LSTM算法\n\n# Tensorflow 系统框架\n\n - 前端:c++, python, java, others----------用来构建计算图\n - Exec System:\n - Distributed Runtime\n - Kernel implemants\n - Network layer(RPC. RDMA), Device layer(CPU, GPU)\n\n# Tensorflow 基本要素\n - 张量(Tensor): 维度(阶). 张量的阶, 是张量维度的一个数量的描述\n\n|示例|描述|\n|----|----| \n|x=3\t|零阶张量(纯量)|\n|v = [1.1, 2.2, 3.3]\t|一阶张量|\n|t = [[], [], []]\t|二阶张量(矩阵)|\n|m = [[[]], [[]]]\t|三阶三张量(立方体)|\n \n \n - 图(Graph)\n\n - 代表模型的数据流(由ops和tensor组成), op是操作(节点), tensor是数据流(边)\n \n - 会话(Session):管理一个模型从开始到结尾\n \n \n \n```\n\nimport tensorflow as tf \nHello = tf.constant(\"Hello tensorflow\")\nsess = tf.Session()\nprint(sess.run(Hello)) \n \n \n```\n\n\n## TensorFlow基本原理及模型训练\n\n#### 开始\n#### 定义数据集\n - 处理数据\n#### 定义模型\n - 输入是什么, 输出是什么\n - 图像:CNN, 池化; RNN或文本处理:RNN等\n \n#### 编写程序并训练模型\n - 训练数据和测试数据互斥\n\n#### 模型测试 \n\n***训练集要尽可能的大, 有一定量的统一性***\n***训练集和测试集要尽可能一致***\n***调参数, 训练多少轮, 太多不一定好***\n***参数调优***\n\n\n#### 输出文件类型\n\n- 在TensorFlow里, 保存模型的格式有两种:\n - ckpt: 训练模型后的保存, 这里面会保存所有的训练参数吗文件相对来说比较大, \n 可以用来进行模型的恢复和加载,\n - checkpoint\n - data\n - index\n - mete \n - pb: 用于模型最后的线上部署, 这里面的线上部署指的是TensorFlow Servinginking模型的发布, 一般发布成grpc形式的接口\n "
},
{
"alpha_fraction": 0.7054794430732727,
"alphanum_fraction": 0.7191780805587769,
"avg_line_length": 12.363636016845703,
"blob_id": "c75f7f1a00acc42b12fc41afa3b214097f10b69f",
"content_id": "959c9d841d9de8efe94978a18fcbba9be9fcba8b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 146,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 11,
"path": "/README.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "# nlp\n\n## Quick start\n\n- cd chatbot\n- python3 extract_conv.py\n- python3 train_anti.py\n\n## optimization\n\n- You can modify the params in params.json"
},
{
"alpha_fraction": 0.4524590075016022,
"alphanum_fraction": 0.49180328845977783,
"avg_line_length": 19.399999618530273,
"blob_id": "bee838477484d62d8d67edc92a5c7feb569efa8b",
"content_id": "92d26c4360670cd422cef05686f56dd7eb922a26",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 305,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 15,
"path": "/chatbot/split_file.py",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n\n\nfb = open('xiaohuangji.conv', 'a')\n\nwith open('xiaohuangji50w_fenciA.conv', 'r') as f:\n j = 0\n for i in f.readlines():\n print(i.strip())\n fb.write(i.strip())\n fb.write('\\n')\n j += 1\n if j == 200000:\n break"
},
{
"alpha_fraction": 0.49606940150260925,
"alphanum_fraction": 0.5131471753120422,
"avg_line_length": 29.229507446289062,
"blob_id": "6b4ca5481e530268b09c14938316acfdaebd3128",
"content_id": "46df7e468745b9a579eb124a01393dddd9ad26a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3987,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 122,
"path": "/chatbot/train.py",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\nimport datetime\nimport sys\nimport random\nimport numpy as np\nimport pickle\nimport tensorflow as tf\nfrom tqdm import tqdm\n\n\ndef test(params):\n from sequence_to_sequence import SequenceToSequence\n from data_utils import batch_flow_bucket as batch_flow\n from thread_generator import ThreadGenerator\n\n x_data, y_data = pickle.load(open('chatbot.pkl', 'rb'))\n ws = pickle.load(open('ws.pkl', 'rb'))\n\n # 训练\n \"\"\"\n 1. n_epoch:训练轮次数\n 2. 理论上训练轮次数越大, 那么训练精度越高\n 3. 如果轮次特别大,比如1000, 那么可能发生过拟合, 但是是否过拟合也和训练数据有关\n 4. n_epoch越大, 训练时间越长\n 5. 用P5000的GPU训练40轮,训练3天,训练2轮, 大概一个半小时,\n 如果使用CPU, 速度会特别慢, 可能一轮就要几个小时\n \"\"\"\n n_epoch = 2\n batch_size = 128\n steps = int(len(x_data) / batch_size) + 1\n config = tf.ConfigProto(\n allow_soft_placement=True,\n log_device_placement=False\n )\n\n save_path = './model/s2ss_chatbot.ckpt'\n\n tf.reset_default_graph() # 重置默认的图\n with tf.Graph().as_default():\n random.seed(0),\n np.random.seed(0),\n tf.set_random_seed(0)\n print('{} start train model'.format(datetime.datetime.now()))\n with tf.Session(config=config) as sess:\n # 定义模型\n model = SequenceToSequence(\n input_vocab_size=len(ws),\n target_vocab_size=len(ws),\n batch_size=batch_size,\n **params\n )\n print('{} init model success'.format(datetime.datetime.now()))\n init = tf.global_variables_initializer() # 初始化\n sess.run(init)\n print('{} start more thread to train'.format(datetime.datetime.now()))\n flow = ThreadGenerator(\n batch_flow([x_data, y_data], ws, batch_size, add_end=[False, True]),\n queue_maxsize=20\n )\n\n for epoch in range(1, n_epoch+1):\n costs = list()\n bar = tqdm(range(steps), total=steps,\n desc='epoch {}, loss=0.000000'.format(epoch))\n for _ in bar:\n x, xl, y, yl = next(flow)\n # [1, 2], [3, 4]\n # [3, 4], [1, 2]\n x = np.flip(x, axis=1) # 进行翻转, 第一个参数, 翻转什么数据, 第二个参数, 翻转几次\n cost, lr = model.train(sess, x, xl, y, yl, return_lr=True)\n costs.append(cost)\n bar.set_description('epoch {} loss={:.6f} 
lr={:.6f}'.format(\n epoch,\n np.mean(costs),\n lr\n )) # 保留6位小数\n\n model.save(sess=sess, save_path=save_path)\n\n # 测试\n tf.reset_default_graph()\n model_pred = SequenceToSequence(\n input_vocab_size=len(ws),\n target_vocab_size=len(ws),\n batch_size=1,\n mode='decode',\n beam_width=12,\n parallel_iterations=1,\n **params\n )\n\n init = tf.global_variables_initializer()\n\n with tf.Session(config=config) as sess:\n sess.run(init)\n model_pred.load(sess, save_path) # 从save_path reload\n\n bar = batch_flow([x_data, y_data], ws, 1, add_end=False)\n t = 0\n for x, xl, y, yl in bar:\n x = np.flip(x, axis=1)\n pred = model_pred.predict(\n sess,\n np.array(x),\n np.array(xl)\n )\n print(ws.inverse_transform(x[0]))\n print(ws.inverse_transform(y[0]))\n print(ws.inverse_transform(pred[0]))\n t += 1\n if t >= 3:\n break\n\n\ndef main():\n import json\n test(json.load(open('params.json')))\n\n\nif __name__ == '__main__':\n main()\n\n"
},
{
"alpha_fraction": 0.6598513126373291,
"alphanum_fraction": 0.6677509546279907,
"avg_line_length": 25.580245971679688,
"blob_id": "06368ffd44def65270ae20e3e17c08dd2cefa5f4",
"content_id": "14a2a24c0b5f1c1e312d81ea833997c72e3df37d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2376,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 81,
"path": "/python_case/nb_test.py",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n\n\n# 1. 搜集数据\n# 2. 处理数据\n\nimport os\nimport jieba\nfrom sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.externals import joblib\nimport time\n\n\ndef pre_process(path):\n \"\"\"\n 预处理, 切词\n :param path: 文件路径\n :return:\n \"\"\"\n text_with_space = \"\"\n with open(path, 'r', encoding=\"utf-8\") as f:\n textfile = f.read()\n textcut = jieba.cut(textfile)\n for word in textcut:\n text_with_space += word + \" \"\n return text_with_space\n\n\ndef load_train_data_set(path, classtag):\n allfiles = os.listdir(path)\n processed_text_set = list()\n all_class_tags = list()\n for file_name in allfiles:\n print(file_name)\n path_name = path+'/'+file_name\n processed_text_set.append(pre_process(path_name))\n all_class_tags.append(classtag)\n return processed_text_set, all_class_tags # 返回数据集和标签号\n\n\nprocessed_textdata1, class1 = load_train_data_set(path=\"./dataset/train/hotel\", classtag=\"宾馆\")\nprocessed_textdata2, class2 = load_train_data_set(path=\"./dataset/train/travel\", classtag=\"旅游\")\n\ntrain_data = processed_textdata1 + processed_textdata2 # 对训练数据进行整合\n\nclasstags_list = class1 + class2 # 标签整合\n\ncount_vector = CountVectorizer() # 统计词向量\nvecot_matrix = count_vector.fit_transform(train_data) # 转换成词频\n\n# TFIDF(度量模型, 文本主题模型) 特征工程统计\ntrain_tfidf = TfidfTransformer(use_idf=False).fit_transform(vecot_matrix)\n\n# 朴素贝叶斯进行训练\n\nclf = MultinomialNB().fit(train_tfidf, classtags_list)\n\n# 对训练后结果进行测试\n\npath = \"./dataset/test/travel\"\nallfiles = os.listdir(path)\nhotel = 0\ntravel = 0\nfor file_name in allfiles:\n path_name = path + \"/\" + file_name\n new_count_vector = count_vector.transform([pre_process(path_name)]) # 预处理\n new_tfidf = TfidfTransformer(use_idf=False).fit_transform(new_count_vector) # 提取特征\n # 进行预测\n predict_result = clf.predict(new_tfidf)\n print(str(predict_result)+file_name)\n\n # 统计正确率\n if predict_result == 
'宾馆':\n hotel += 1\n if predict_result == '旅游':\n travel += 1\n\nprint('宾馆' + str(hotel))\nprint('旅游' + str(travel))"
},
{
"alpha_fraction": 0.4962962865829468,
"alphanum_fraction": 0.5218855142593384,
"avg_line_length": 25.070175170898438,
"blob_id": "3721c588a33ced4653e8ac88fac5c7744f85c949",
"content_id": "5c61a052ed5422960795a6793f907e77377ee66d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1497,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 57,
"path": "/chatbot/fake_data.py",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n\nimport random\nimport numpy as np\nfrom word_sequence import WordSequence\n\n\ndef generator(max_len=10, size=1000, same_len=False, seed=0):\n \"\"\"生成虚假数据\"\"\"\n\n dictionary = dict(a=1, b=2, c=3, d=4, aa=1, bb=2, cc=3, dd=4, aaa=1)\n if seed is not None:\n random.seed(seed)\n input_list = sorted(list(dictionary.keys()))\n x_data = list()\n y_data = list()\n\n for x in range(size):\n a_len = int(random.random() * max_len) + 1\n x = list()\n y = list()\n for _ in range(a_len):\n word = input_list[int(random.random() * len(input_list))]\n x.append(word)\n y.append(word)\n if not same_len:\n if y[-1] == '2':\n y.append('2')\n elif y[-1] == '3':\n y.append('3')\n y.append('4')\n x_data.append(x)\n y_data.append(y)\n ws_input = WordSequence()\n ws_input.fit(x_data)\n\n ws_target = WordSequence()\n ws_target.fit(y_data)\n return x_data, y_data, ws_input, ws_target\n\n\ndef test():\n x_data, y_data, ws_input, ws_target = generator()\n print(len(x_data))\n assert len(x_data) == 1000\n print(len(y_data))\n assert len(y_data) == 1000\n print(np.max([len(x) for x in x_data]))\n assert np.max([len(x) for x in x_data]) == 10\n print(len(ws_input))\n assert len(ws_input) == 14\n print(len(ws_target))\n\n\nif __name__ == '__main__':\n test()"
},
{
"alpha_fraction": 0.4874371886253357,
"alphanum_fraction": 0.4942896366119385,
"avg_line_length": 16.909835815429688,
"blob_id": "dbd0d66301cbb14f7e8dac34fc46ca0a15ad791e",
"content_id": "5ac915657c19a328f31add4ca2c8f916740401c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3979,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 122,
"path": "/NLP基础/循环神经网络.md",
"repo_name": "red-frog/nlp",
"src_encoding": "UTF-8",
"text": "## 常用神经网络\n\n#### CNN(卷积神经网络), 一种前馈神经网络, 包括卷积层和池化层\n - 图像相关(图像上色(老照片图片上色))\n - 灰度图像处理, 图像识别, 图像分类\n \n#### RNN(循环神经网络), 一种节点定向连接成环的人工神经网络. 内部状态可以展示动态时序行为\n\n- 说明以及用途\n - 处理文本相关\n - 语音相关\n - 比如:在线翻译, 聊天机器人\n \n##### BP神经网络\n - 输入层\n - 隐藏层(3层左右比较好), 朴素贝叶斯\n - 输出层\n - 流程\n - 正向传播(输入-->隐含-->输出)\n - 网络初始化\n - 隐藏层的输出\n - 输出层的输出\n - 误差计算\n - 反向传播(修改权重, 使误差最小(求导或降维), 使用梯度?什么的算法)\n - 隐藏层到输出层\n - 输入层到隐藏层\n - 偏置更新\n - 隐藏层到输出层\n - 输入层到隐藏层\n \n - 特点\n - 逐层信息传递到最后输出\n - 沿着一条直线计算, 知道最后一层,求出计算结果\n - 包含输入.隐藏和输出层, 目的是实现从输入到输出的映射\n - 一般包含多层, 层与层之间全连接\n\n------------------\n-----------------------\n\n\n##### 循环神经网络\n- 特别之处是循环递归和递归参数(W), 相比较于BP神经网络\n- 3大特性\n - 记忆特性\n - 接收2个输入(当前时刻的输入, 上一时刻的输出)\n - 参数共享\n\n- 与经典网络的对比\n - 环和w参数\n \n- 常见\n - one to one:\n - 同样维度的输入, 同样维度输出(分类问题)\n - demo:话的好坏\n - one to many\n - 图片的描述, 音乐的生成\n - many to one\n - 句子的处理, 输入, 多分类\n - 多个类别的样本, 对应一个输出\n - many to many\n - 输入和输出相同维度\n - 命名实体识别\n - 输入和输出不同维度\n - 翻译\n \n\n##### 双向循环神经网络\n\n\n- 每个时刻有两个隐藏层\n- 一个从左到右, 一个从右到左\n- 向前和向后传播参数独立\n- 沿着时间事件进行展开(相同的W参数, 每次w会累计相乘)\n - W<1:梯度消失\n - W>1:梯度爆炸\n \n \n------------------------\n-----------------------\n\n##### 梯度消失和梯度爆炸的解决\n\n- 选择合适激活参数\n\n - Relu 函数, Relu函数的导数一直为1\n\n- 选择合适参数初始化方法\n - ***[L]右上角, 幂***\n - W[L]=np.random.randn(shape[L])*0.01--->不建议(梯度消失)\n - w[l]是第L层的权重参数, shape是第L层权重参数矩阵的形状\n - W[L]=np.random.randn(shape[L])*np.sqrt(1/n[L-1])-------->建议(缓解梯度消失问题)\n - W[L]是第L层的权重参数, shape是第L层权重参数矩阵的形状, n[L-1]是L-1层神经元数\n- 使用权重参数正则化\n- 使用BatchNormalization\n - 通过规范化操作将输出信号x规范化到均值为0, 方差为1保证网络稳定性\n - 加大神经网络训练速度\n - 提高训练稳定性\n - 缓解梯度爆炸和消散问题\n- 使用残差结构(几千层, 几百层神经网络)\n - 极大的提高了神经网络的深度\n - 很大程度上解决了梯度消散的问题\n - 允许训练很深层的网络\n - 可以看作解决梯度消散这个问题最重要,最有效的解决方式\n- 使用梯度裁剪\n\n```\nif ||g|| > v \ng <---gv/||g||\n\n其中v是梯度范数的上界, g用来更新参数的梯度\n\n``` \n \n---------------------\n--------------------\n\n \n#### LSTM(长短旗记忆网络---RNN的一种变体结构, 在其基础上增加了时间记忆),长短期记忆网络\n \n- 说明以及用途\n - 合适处理和预测时间序列中间隔和延迟相对较长的重要事件\n - 聊天机器人, 长文本翻译, 自动写歌, 自动写诗\n "
}
] | 24 |
BibhushaSapkota/programming1class | https://github.com/BibhushaSapkota/programming1class | bbb179d7be98e94497fb8eb6137e338608681834 | d1bf9560d88dcf926f7b5fff6c7863f78a1e7ae5 | d325e5091a06261940abe5f08941f15096d0dae4 | refs/heads/master | 2023-03-09T17:42:42.345201 | 2021-02-26T08:46:17 | 2021-02-26T08:46:17 | 342,515,054 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6328125,
"alphanum_fraction": 0.6328125,
"avg_line_length": 30.75,
"blob_id": "46b9f5cdd8b1ade6aa15e636273533ce831f94d4",
"content_id": "8774caebecaf331903e168cfb932ae116b781519",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 128,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 4,
"path": "/classwork.py",
"repo_name": "BibhushaSapkota/programming1class",
"src_encoding": "UTF-8",
"text": "a=int(input('enter the first number'))\nb=int(input('enter the second number'))\nc=a+b\nprint('the sum of ', a ,'and ',b,'is ',c)\n\n"
},
{
"alpha_fraction": 0.6754966974258423,
"alphanum_fraction": 0.7086092829704285,
"avg_line_length": 36.5,
"blob_id": "ef234d9cdb98f788efdc72cdeb26ba6f25bccd97",
"content_id": "79b8f80a58d6000f54ba85507d8286540ab7855a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 151,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 4,
"path": "/work3.py",
"repo_name": "BibhushaSapkota/programming1class",
"src_encoding": "UTF-8",
"text": "#write a program to find the area of circle\nradius=float(input('enter the radius:'))\narea=3.14*radius**2\nprint('the area of circle is ',area,'cm**2')\n\n"
},
{
"alpha_fraction": 0.6849315166473389,
"alphanum_fraction": 0.6849315166473389,
"avg_line_length": 35.75,
"blob_id": "12fc89f687708fb2b70af0c17c0bbee90fa5a8fb",
"content_id": "0f68f2584a0d6ee6c6c14f6708dadd27a406df1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 146,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 4,
"path": "/debugging.py",
"repo_name": "BibhushaSapkota/programming1class",
"src_encoding": "UTF-8",
"text": "d=input('how far did you travel today(in miles)?')\nt=input('how long did it take you(in hour)? ')\ns=d/t\nprint(\"your speed was\"+s+\"miles per hour\")"
},
{
"alpha_fraction": 0.7326732873916626,
"alphanum_fraction": 0.7376237511634827,
"avg_line_length": 39.20000076293945,
"blob_id": "2d72516586e93ca22fe45a15c22addda60a48c31",
"content_id": "9749410d9c4c60b21653ec7218dc8e55d6c3c629",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 202,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 5,
"path": "/work.py",
"repo_name": "BibhushaSapkota/programming1class",
"src_encoding": "UTF-8",
"text": "#write a program to find the area of a rectangle\nlength=float(input(\"enter the length:\"))\nbreadth=float(input(\"enter the breadth:\"))\narea=length*breadth\nprint('the area of rectangle is ',area,'cm**2') \n"
},
{
"alpha_fraction": 0.7165775299072266,
"alphanum_fraction": 0.7272727489471436,
"avg_line_length": 36.599998474121094,
"blob_id": "27da0270c72d87a027a945098fb83023c2491c26",
"content_id": "f613c1201a5ed116a2d08c42499c52476fb975a6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 187,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 5,
"path": "/venv/work2.py",
"repo_name": "BibhushaSapkota/programming1class",
"src_encoding": "UTF-8",
"text": "#write a program to find the area of triangle\nbase=float(input('enter the base:'))\nheight=float(input('enter the height:'))\narea=(base*height)/2\nprint('the area of triangle',area,'cm**2')"
}
] | 5 |
9tarz/nlp | https://github.com/9tarz/nlp | bbef7c7c4c949c8429a46c40d42106bd39b252ae | 68c62224e6add01417c66d3041afa3289196910e | dce8fd229056da94f48278becf45c67dd3eea1ef | refs/heads/master | 2016-09-01T15:34:48.765941 | 2016-03-11T11:26:49 | 2016-03-11T11:26:49 | 50,738,315 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5685339570045471,
"alphanum_fraction": 0.5852205157279968,
"avg_line_length": 25.967741012573242,
"blob_id": "e0464a220ef9f52e4158ce859ce8ee0d5d488042",
"content_id": "52f0ffdea4859fd321a184ec4276e5bb612e4a6a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 839,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 31,
"path": "/spam_mail_filter/preprocessor/read_index.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import os\nimport multiprocessing\ndef run_shell(index):\n\ttype = index[0]\n\ti = index[1]\n\tif (type == \"spam\"):\n\t\tcmd = \"mv ../output_1/inmail.\"+ str(i) + \".json\" + \" \" + \"../spam/inmail.\"+ str(i) + \".json\"\n\t\t#print cmd\n\telif (type == \"ham\"):\n\t\tcmd = \"mv ../output_1/inmail.\"+ str(i) + \".json\" + \" \" + \"../ham/inmail.\"+ str(i) + \".json\"\n\telse :\n\t\tcmd = \"\"\n\tos.system(cmd)\n\ndef read_index_file(path):\n\tlines = [line.rstrip('\\n') for line in open(path)]\n\tout = []\n\tfor line in lines:\n\t\telement = line.split(' ')\n\t\tpath = element[1].split(\".\")\n\t\tpack = [element[0],path[3]]\n\t\tout.append(pack)\n\treturn out\n\nif __name__ == '__main__' :\n\tindex_path = \"../../trec07p/full/index\"\n\tindex = read_index_file(index_path)\n\t\n\t#pool = multiprocessing.Pool(multiprocessing.cpu_count())\n\t#pool.map(run_shell, index[0:13])\n\t#pool.map(run_shell, range(1,2))\n\n\n\n"
},
{
"alpha_fraction": 0.7161716222763062,
"alphanum_fraction": 0.7161716222763062,
"avg_line_length": 24.25,
"blob_id": "aa4c86336a6db2d4816842676041526bd34ad9d7",
"content_id": "0d257afe08401e0c8137408ebff3cadf5f92652b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 303,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 12,
"path": "/spam_mail_filter/preprocessor/parse_email.js",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "var MailParser = require(\"mailparser\").MailParser;\nvar mailparser = new MailParser();\nvar fs = require('fs');\n\nvar email = fs.readFileSync('/dev/stdin').toString();\n\nmailparser.on(\"end\", function(mail_object){\n console.log(JSON.stringify(mail_object)); \n});\n\nmailparser.write(email);\nmailparser.end();\n"
},
{
"alpha_fraction": 0.6846672892570496,
"alphanum_fraction": 0.6914175748825073,
"avg_line_length": 27,
"blob_id": "e1d5e305019c2e0a06880f4dd7321bf4bfb5ad4f",
"content_id": "44b3530aa458b5bb8e9df5ae00f67dcd119e8ff0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1037,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 37,
"path": "/wordcount/wordcount_lemma.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import nltk\nfrom collections import Counter\nfrom nltk.stem.wordnet import WordNetLemmatizer\nlmtzr = WordNetLemmatizer()\npath = \"food_and_drink-pages/\"\nlist = []\nfrom nltk.corpus import wordnet\n\ndef get_wordnet_pos(treebank_tag):\n\tif treebank_tag.startswith('J'):\n\t\treturn wordnet.ADJ\n\telif treebank_tag.startswith('V'):\n\t\treturn wordnet.VERB\n\telif treebank_tag.startswith('N'):\n\t\treturn wordnet.NOUN\n\telif treebank_tag.startswith('R'):\n\t\treturn wordnet.ADV\n\telse:\n\t\treturn ''\n\nfor i in range(1,1171):\n\tfile_content = open(path + str(i) + \".txt\").read()\n\tdata = file_content.decode(\"utf8\")\n\ttokens = nltk.word_tokenize(data)\n\ttokens_pos = nltk.pos_tag(tokens)\n\tfor (token,pos) in tokens_pos :\n\t\tif get_wordnet_pos(pos) == '' :\n\t\t\tlist.append(token)\n\t\telse :\n\t\t\tword_lmtz = lmtzr.lemmatize(token,get_wordnet_pos(pos))\n\t\t\t#print token + \" : \"+ word_lmtz + \": \" + get_wordnet_pos(pos)\n\t\t\tlist.append(word_lmtz)\n\nout = Counter(list)\nout_sort = out.most_common()\nfor (word,count) in out_sort :\n\tprint word.encode(\"utf8\") + \" : \" + str(count) \n"
},
{
"alpha_fraction": 0.6600441336631775,
"alphanum_fraction": 0.668138325214386,
"avg_line_length": 24.148147583007812,
"blob_id": "f50e6cf1bf5c8a2a1449460f9401506ced4a3243",
"content_id": "7968749355a362b8a57d1a9943a06b3607671c1d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1359,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 54,
"path": "/wordcount/wordcount_lemma_thread.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import nltk\nfrom collections import Counter\nfrom nltk.stem.wordnet import WordNetLemmatizer\nfrom Queue import Queue\nfrom threading import Thread\nlmtzr = WordNetLemmatizer()\npath = \"food_and_drink-pages/\"\nlist = []\nfrom nltk.corpus import wordnet\n\ndef words_lemmatizer(i):\n\tfile_content = open(path + str(i) + \".txt\").read()\n\tdata = file_content.decode(\"utf8\")\n\ttokens = nltk.word_tokenize(data)\n\ttokens_pos = nltk.pos_tag(tokens)\n\tfor (token,pos) in tokens_pos :\n\t\tif get_wordnet_pos(pos) == '' :\n\t\t\tlist.append(token)\n\t\telse :\n\t\t\tword_lmtz = lmtzr.lemmatize(token,get_wordnet_pos(pos))\n\t\t\t#print token + \" : \"+ word_lmtz + \": \" + get_wordnet_pos(pos)\n\t\t\tlist.append(word_lmtz)\n\tprint str(i) + \" finished!\"\n\nclass MyThread(Thread):\n def __init__(self, i):\n ''' Constructor. '''\n \n Thread.__init__(self)\n self.i = i\n \n def run(self):\n words_lemmatizer(self.i)\n\n\ndef get_wordnet_pos(treebank_tag):\n\tif treebank_tag.startswith('J'):\n\t\treturn wordnet.ADJ\n\telif treebank_tag.startswith('V'):\n\t\treturn wordnet.VERB\n\telif treebank_tag.startswith('N'):\n\t\treturn wordnet.NOUN\n\telif treebank_tag.startswith('R'):\n\t\treturn wordnet.ADV\n\telse:\n\t\treturn ''\n\nfor i in range(1,1171): #1171\n\tMyThread(i).start()\n\nout = Counter(list)\nout_sort = out.most_common()\nfor (word,count) in out_sort :\n\tprint word.encode(\"utf8\") + \" : \" + str(count) \n"
},
{
"alpha_fraction": 0.672595202922821,
"alphanum_fraction": 0.679358720779419,
"avg_line_length": 32.89787292480469,
"blob_id": "61b54b5a117536aed3781aef96b94cd8d2f7b204",
"content_id": "67e43499e872e6fd7c82077addad56ece97b9412",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7984,
"license_type": "no_license",
"max_line_length": 157,
"num_lines": 235,
"path": "/spam_mail_filter/feature_extractor.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import json\nimport nltk\nfrom collections import Counter\nfrom nltk.stem.wordnet import WordNetLemmatizer\nfrom nltk.corpus import wordnet\nimport multiprocessing as mp\nfrom nltk.corpus import stopwords\nimport string\nimport re\nimport operator\nimport math\nimport collections\n\nlmtzr = WordNetLemmatizer()\nq_spam = mp.Queue()\nq_ham = mp.Queue()\nstops = set(stopwords.words('english'))\nstops_add = set([\"'d\", \"'ll\", \"'m\", \"'re\", \"'s\", \"'t\", \"n't\", \"'ve\"])\nstops = stops.union(stops_add)\npunctuations = set(string.punctuation)\npunctuations.add(\"''\")\npunctuations.add(\"``\") \nFEATURE_CHARS = set([\";\",\"(\",\"!\",\"$\",\"#\"])\npunctuations_feature_chars = punctuations.difference(FEATURE_CHARS)\nstops_punctuations_feature_chars = punctuations_feature_chars.union(stops)\nTAG_RE = re.compile(r'<[^>]+>')\n\ndef remove_tags(text):\n\ttext = text.replace('\\n', ' ')\n\treturn TAG_RE.sub('', text)\n\ndef read_index_file(path, cond):\n\tlines = [line.rstrip('\\n') for line in open(path)]\n\tout = []\n\tfor line in lines:\n\t\telement = line.split(' ')\n\t\tpath = element[1].split(\"/\")\n\t\tpack = [element[0],path[2]] #element[0]:type, path[3]:path\n\t\tif (cond == \"spam\"):\n\t\t\tif (element[0] == \"spam\"):\n\t\t\t\tout.append(pack)\n\t\t\telse:\n\t\t\t\tpass\n\t\tif (cond == \"ham\"):\n\t\t\tif (element[0] == \"ham\"):\n\t\t\t\tout.append(pack)\n\t\t\telse:\n\t\t\t\tpass\n\treturn out\n\ndef read_index_file_all(path):\n\tlines = [line.rstrip('\\n') for line in open(path)]\n\tout = []\n\tfor line in lines:\n\t\telement = line.split(' ')\n\t\tpath = element[1].split(\"/\")\n\t\tpack = [element[0],path[2]] #element[0]:type, path[3]:path\n\t\tout.append(pack)\n\treturn out\n\n\ndef get_wordnet_pos(treebank_tag):\n\tif treebank_tag.startswith('J'):\n\t\treturn wordnet.ADJ\n\telif treebank_tag.startswith('V'):\n\t\treturn wordnet.VERB\n\telif treebank_tag.startswith('N'):\n\t\treturn wordnet.NOUN\n\telif 
treebank_tag.startswith('R'):\n\t\treturn wordnet.ADV\n\telse:\n\t\treturn ''\n\n#def words_lemmatizer(index,cond):\ndef words_lemmatizer(args):\n\tindex = args[0]\n\tcond = args[1]\n\twords_list = []\n\tall_words = []\n\tfile_content = open(index[0]+\"/\"+index[1]+\".json\")\n\tdata_json = json.load(file_content)\n\ttry:\n\t\tdata = data_json[\"text\"]\n\texcept KeyError:\n\t\ttry:\n\t\t\tdata_html = data_json[\"html\"]\n\t\t\tdata = remove_tags(data_html)\n\t\texcept KeyError:\n\t\t\tdata = \"\"\n\ttokens = [i for i in nltk.word_tokenize(data.lower()) if i not in stops_punctuations_feature_chars]\n\ttokens_pos = nltk.pos_tag(tokens)\n\tfor (token,pos) in tokens_pos :\n\t\tif get_wordnet_pos(pos) == '' :\n\t\t\tall_words.append(token)\n\t\t\tif token not in words_list:\n\t\t\t\twords_list.append(token)\n\t\telse :\n\t\t\tword_lmtz = lmtzr.lemmatize(token,get_wordnet_pos(pos))\n\t\t\tall_words.append(word_lmtz)\n\t\t\tif word_lmtz not in words_list:\n\t\t\t\twords_list.append(word_lmtz)\n\n\t#return words_list\n\tall_words = Counter(all_words)\n\tall_words = all_words.most_common()\n\n\tif (cond == \"spam\"):\n\t\tq_spam.put((words_list,all_words,index[1],cond))\n\tif (cond == \"ham\"):\n\t\tq_ham.put((words_list,all_words,index[1],cond))\n\n\nif __name__ == '__main__' :\n\n\tNUM_DOC_SPAM = 10.0\n\tNUM_DOC_HAM = 10.0\n\t#ignore words with total freq lower than thresh\n\tRARE_THRESH = 0.005\n\t#max distance of spamicity from .5\n\tSPAMICITY_RADIUS = 0.05\n\tTOP_K = 10\n\n\tindex_path = \"../trec07p/full/index\"\n\t#index = read_index_file(index_path)\n\tindex_spam_paths = read_index_file(index_path,\"spam\")\n\tindex_ham_paths = read_index_file(index_path, \"ham\")\n\n\tpool = mp.Pool(mp.cpu_count())\n\tjob_args_spam = [(index_spam_paths[i], \"spam\") for i in range(int(NUM_DOC_SPAM))]\n\tpool.map(words_lemmatizer, job_args_spam)\n\tjob_args_ham = [(index_ham_paths[i], \"ham\") for i in range(int(NUM_DOC_HAM))]\n\tpool.map(words_lemmatizer, 
job_args_ham)\n\n\tresults_spam = [q_spam.get() for p in range(len(job_args_spam))]\n\twords_results_spam = [word for (words_list,all_words,index,cond) in results_spam for word in words_list]\n\tall_words_results_spam = [(all_words,index,cond) for (words_list,all_words,index,cond) in results_spam]\n\n\tfreq_results_spam_counter = Counter(words_results_spam)\n\tfreq_results_spam = freq_results_spam_counter.most_common()\n\n\tresults_ham = [q_ham.get() for p in range(len(job_args_ham))]\n\twords_results_ham = [word for (words_list,all_words,index,cond) in results_ham for word in words_list]\n\tall_words_results_ham = [(all_words,index,cond) for (words_list,all_words,index,cond) in results_ham]\n\n\tfreq_results_ham_counter = Counter(words_results_ham)\n\tfreq_results_ham = freq_results_ham_counter.most_common()\n\n\twords_results_spam_prob = dict() # P(w|S)\n\tfor (word,freq) in freq_results_spam:\n\t\twords_results_spam_prob[word] = freq/NUM_DOC_SPAM\n\twords_results_ham_prob = dict() # P(w|H)\n\tfor (word,freq) in freq_results_ham:\n\t\twords_results_ham_prob[word] = freq/NUM_DOC_HAM\n\n\tall_words_results_spam.extend(all_words_results_ham)\n\tall_words_results = all_words_results_spam\n\n\tnum_of_all_words = 0.0\n\tfor (all_words,index,cond) in all_words_results:\n\t\tfor (word,tf) in all_words:\n\t\t\tnum_of_all_words += tf\n\n\tall_words_results_prob = dict()\n\tfor (all_words,index,cond) in all_words_results:\n\t\tfor (word,tf) in all_words:\n\t\t\tall_words_results_prob[word] = tf/num_of_all_words\n\n\t#all_words_results_prob = sorted(all_words_results_prob.items(), key=operator.itemgetter(1), reverse=True)\n\t#print all_words_results_prob\n\n# P(S|w)\n\twords_results_predict_prob = dict()\n\tfor (word,freq) in freq_results_spam:\n\t\ttry :\n\t\t\tdiff = abs(words_results_spam_prob[word] - words_results_ham_prob[word])\n\t\t\tspamicity = words_results_spam_prob[word]/(words_results_spam_prob[word] + words_results_ham_prob[word]) \n\t\t\tif abs(spamicity - 
0.5)>SPAMICITY_RADIUS and freq>RARE_THRESH:\n\t\t\t\twords_results_predict_prob[word] = diff\n\t\t\t#words_results_predict_prob[word] = words_results_spam_prob[word]/(words_results_spam_prob[word] + words_results_ham_prob[word])\n\t\texcept KeyError:\n\t\t\tdiff = abs(words_results_spam_prob[word])\n\t\t\tspamicity = words_results_spam_prob[word]/words_results_spam_prob[word]\n\t\t\tif abs(spamicity - 0.5)>SPAMICITY_RADIUS and freq>RARE_THRESH:\n\t\t\t\twords_results_predict_prob[word] = diff\n\t\t\t#words_results_predict_prob[word] = words_results_spam_prob[word]/words_results_spam_prob[word]\n\n\t\t#words_results_predict_prob[word] = words_results_spam_prob[word]/words_results_all_prob[word]\n\t\t#try :\n\t\t#\tprint \"Spamicity: \" + str(words_results_predict_prob[word]) + \" Spam: \" + str(words_results_spam_prob[word]) +\" Ham: \"+ str(words_results_ham_prob[word])\n\t\t#except KeyError:\n\t\t#\tprint \"Spamicity: \" + str(words_results_predict_prob[word]) + \" Spam: \" + str(words_results_spam_prob[word]) +\" Ham: 0\"\n\t\t#print \"Indicate: \" + str(words_results_predict_prob[word]) + \" All: \" + str(words_results_all_prob[word]) \n\t#words_results_predict_prob = sorted(words_results_predict_prob)\n\n\tsorted_words_results_predict_prob = sorted(words_results_predict_prob.items(), key=operator.itemgetter(1), reverse=True)[:TOP_K]\n\tfeature_words = [word for (word,prob) in sorted_words_results_predict_prob]\n\t#print feature_words\n\n\t#index_paths = read_index_file_all(index_path)\n\n\tidf = dict()\n\tfor word in feature_words:\n\t\tidf[word] = math.log((NUM_DOC_SPAM+NUM_DOC_HAM)/(freq_results_spam_counter[word] + freq_results_ham_counter[word]))\n\t#print idf\n\n\ttfidf = dict()\n\tfor (all_words,index,cond) in all_words_results:\n\t\ttfidf[index] = dict()\n\t\tfor fw in feature_words:\n\t\t\ttfidf[index][fw] = 0 \n\t\tfor (word,tf) in all_words:\n\t\t\tif word in feature_words:\n\t\t\t\ttfidf_w = tf*idf[word]\n\t\t\t\ttfidf[index][word] = 
tfidf_w\n\t\t\t\t#for tf_index in tfidf[index]:\n\t\t\t\t#\tfw_tfidf_w[word] = tfidf_w\n\t\ttfidf[index][u'zzzzzzztype_of_doc'] = cond\n\n\tcount = 0\n\tprint \"@relation 'spam_ham'\"\n\tfor doc in tfidf:\n\t\tordered = collections.OrderedDict(sorted(tfidf[doc].items()))\n\t\tif (count == 0):\n\t\t\t#print ','.join(str(v[0]) for v in ordered.items())\n\t\t\tcount_attr = 0\n\t\t\tfor value in ordered.items():\n\t\t\t\tif (count_attr == TOP_K):\n\t\t\t\t\tprint \"@attribute \" + \"class\" + \" {spam,ham}\"\n\t\t\t\telse:\n\t\t\t\t\tprint \"@attribute \" + value[0] + \" numeric\"\n\t\t\t\tcount_attr+=1\n\t\t\tprint \"@data\"\n\t\telse:\n\t\t\tprint ','.join(str(v[1]) for v in ordered.items())\n\t\tcount+=1\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6532567143440247,
"alphanum_fraction": 0.6714559197425842,
"avg_line_length": 22.22222137451172,
"blob_id": "bc40512de8bcef489fdc086eb49b104b36a5f626",
"content_id": "63ae53ccc2ac7a47fd6572125d76125a18006cdb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1044,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 45,
"path": "/wordcount/wordcount.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import nltk\nfrom collections import Counter\nimport plotly.plotly as py\nimport plotly.graph_objs as go\nimport numpy as np\npy.sign_in('nulllnil', '8xr93mz7ra')\npath = \"food_and_drink-pages/\"\nlist = []\nfor i in range(1,1171):\n\tfile_content = open(path + str(i) + \".txt\").read()\n\tdata = file_content.decode(\"utf8\")\n\ttokens = nltk.word_tokenize(data)\n\tfor token in tokens :\n\t\tlist.append(token)\n\nout = Counter(list)\nout_sort = out.most_common()\ncount_list = []\nfor (word,count) in out_sort :\n\t#print word.encode(\"utf8\") + \" : \" + str(count)\n\tcount_list.append(count) \n\nfreq_of_freq = Counter(count_list)\nfreq_of_freq = freq_of_freq.most_common()\nfreq_of_freq = sorted(freq_of_freq)\n\nkeys = []\nvalues = []\n\nfor (count,freq) in freq_of_freq :\n\t#print str(np.log10(count)) + \",\" + str(np.log10(freq))\n\tkeys.append(np.log10(count))\n\tvalues.append(np.log10(freq))\n\n\ntrace = go.Scatter(\n x = keys,\n y = values,\n mode = 'markers',\n name = 'markers'\n)\ndata = [trace]\n\n# Plot and embed in ipython notebook!\npy.iplot(data, filename='scatter-mode')"
},
{
"alpha_fraction": 0.61834317445755,
"alphanum_fraction": 0.6508875489234924,
"avg_line_length": 32.5,
"blob_id": "be9a706b8635ea51bb9a2eecaf2b4647b6a6b7f8",
"content_id": "b45f30d7efb7b6d0c9841364874e1c130a0aca0a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 338,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 10,
"path": "/spam_mail_filter/preprocessor/parse_email.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import os\nimport multiprocessing\ndef run_shell(i):\n\tcmd = \"cat ../trec07p/data/inmail.\" + str(i) + \"| node parse_email.js > output_1/inmail.\" + str(i) + \".json\"\n\tos.system(cmd)\n\nif __name__ == '__main__' :\n\tpool = multiprocessing.Pool(multiprocessing.cpu_count())\n\tpool.map(run_shell, range(1,75419))\n\t#pool.map(run_shell, range(1,2))\n\n\n\n"
},
{
"alpha_fraction": 0.6202629804611206,
"alphanum_fraction": 0.6481052041053772,
"avg_line_length": 29,
"blob_id": "dc79b4162206f4392c38ad472ac47ae7f0c3c8af",
"content_id": "e0f937dfd0612b2905c93f5c5c66c92b4e2ee061",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1293,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 43,
"path": "/four-gram.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import nltk\nimport re\nfrom nltk.util import ngrams\nfrom collections import Counter\n\ndef find_count(lst, var):\n\tc = 0\n\tfor (fgram,count) in lst :\n\t\t#tmp = (fgram[0],fgram[1],fgram[2])\n\t\tif var == fgram :\n\t\t\tc += count\n\treturn float(c)\n\npath = \"all_word_segmented_news.u8\"\nfile_content = open(path+ \".txt\").read()\nn = 4\nfile_content = file_content.split(\"\\n\")\nfile_content = [x.replace(\" _\", \"\") for x in file_content]\nlst = []\nfor x in file_content:\n\twords = x.split(\" \")\n\tfor word in words:\n\t\tlst.append(word)\nfourgrams = ngrams(lst, n)\nthreegrams = ngrams(lst, n-1)\nfourgrams = [tuple(word.decode(\"utf8\") for word in words) for words in fourgrams]\nthreegrams = [tuple(word.decode(\"utf8\") for word in words) for words in threegrams]\n\nout4 = Counter(fourgrams)\ncount_4_list = out4.most_common()\n\nout3 = Counter(threegrams)\ncount_3_list = out3.most_common()\n\nfor (fgram,count) in count_4_list :\n\tw3 = (fgram[0],fgram[1],fgram[2])\n\tw3_count = find_count(count_3_list,w3)\n\tw3 = ' '.join(fgram)\n\t#print w3.encode(\"utf8\") + \" : \" + \" count4 \"+ str(count) + \" count3 \"+ str(w3_count) + \" prob \" + str(count/w3_count)\n\tprint w3.encode(\"utf8\") + \" : \" + str(count/w3_count)\n\t#ans.append((w3,float(count/w3_count)))\n\t#fgram = ','.join(fgram)\n\t#print fgram.encode(\"utf8\") + \" : [\" + str(count) + \"]\"\n\n\n\n"
},
{
"alpha_fraction": 0.7127193212509155,
"alphanum_fraction": 0.719298243522644,
"avg_line_length": 27.5,
"blob_id": "0d59d0b074c7ec2369add3c27338c6a0813ac721",
"content_id": "e70208bef56e4e17c3fb70087044628c446158df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 456,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 16,
"path": "/wordcount/preprocessor.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import nltk\nfrom collections import Counter\nfrom nltk.stem.wordnet import WordNetLemmatizer\nlmtzr = WordNetLemmatizer()\npath = \"food_and_drink-pages/\"\nlist = []\nfor i in range(1,3):\n\tfile_content = open(path + str(i) + \".txt\").read()\n\tdata = file_content.decode(\"utf8\")\n\ttokens = nltk.word_tokenize(data)\n\ttokens_length = len(tokens)\n\tref_position = data.find(\"See also\")\n\tprint ref_position\n\tprint tokens[ref_position:tokens_length]\n\tprint \" \"\n\tprint \" \"\n"
},
{
"alpha_fraction": 0.6911314725875854,
"alphanum_fraction": 0.6964831948280334,
"avg_line_length": 27.39130401611328,
"blob_id": "be4b22672ec76e609ef14f5e012bab152ee0717b",
"content_id": "e410750083ba7d70a552d622e6a05eb56937d76d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1308,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 46,
"path": "/wordcount/wordcount_lemma_multiprocess.py",
"repo_name": "9tarz/nlp",
"src_encoding": "UTF-8",
"text": "import nltk\nfrom collections import Counter\nfrom nltk.stem.wordnet import WordNetLemmatizer\nfrom nltk.corpus import wordnet\nimport multiprocessing\n\nlmtzr = WordNetLemmatizer()\npath = \"food_and_drink-pages/\"\n\ndef words_lemmatizer(i):\n\twords_list = []\n\tfile_content = open(path + str(i) + \".txt\").read()\n\tdata = file_content.decode(\"utf8\")\n\ttokens = nltk.word_tokenize(data)\n\ttokens_pos = nltk.pos_tag(tokens)\n\tfor (token,pos) in tokens_pos :\n\t\tif get_wordnet_pos(pos) == '' :\n\t\t\twords_list.append(token)\n\t\telse :\n\t\t\tword_lmtz = lmtzr.lemmatize(token,get_wordnet_pos(pos))\n\t\t\t#print token + \" : \"+ word_lmtz + \": \" + get_wordnet_pos(pos)\n\t\t\twords_list.append(word_lmtz)\n\n\treturn words_list\n\ndef get_wordnet_pos(treebank_tag):\n\tif treebank_tag.startswith('J'):\n\t\treturn wordnet.ADJ\n\telif treebank_tag.startswith('V'):\n\t\treturn wordnet.VERB\n\telif treebank_tag.startswith('N'):\n\t\treturn wordnet.NOUN\n\telif treebank_tag.startswith('R'):\n\t\treturn wordnet.ADV\n\telse:\n\t\treturn ''\n\nif __name__ == '__main__' :\n\tpool = multiprocessing.Pool(multiprocessing.cpu_count())\n\n\tlists = pool.map(words_lemmatizer, range(1,1171))\n\tlists = [word for words_list in lists for word in words_list]\n\tout = Counter(lists)\n\tout_sort = out.most_common()\n\tfor (word,count) in out_sort :\n\t\tprint word.encode(\"utf8\") + \" : \" + str(count) \n\n"
}
] | 10 |
picuber/PCBot | https://github.com/picuber/PCBot | 4386ef5402456463be6e310595f77eb22cecfe6b | ffc94f128918bf8398e9b29019478b4da2dcac56 | 486b2cd8ad6ce67f108de46b40d0a6f162eb1753 | refs/heads/master | 2021-03-16T05:20:35.398322 | 2020-11-12T23:03:12 | 2020-11-12T23:03:12 | 89,107,686 | 0 | 0 | null | 2017-04-23T00:33:04 | 2020-11-12T23:03:20 | 2021-02-26T02:47:27 | Python | [
{
"alpha_fraction": 0.5846204161643982,
"alphanum_fraction": 0.5872493982315063,
"avg_line_length": 36.567901611328125,
"blob_id": "316d8c126e780025edacbacb5630d6c9a27b4781",
"content_id": "f348b680ff6718d86b9cc8b67c6d5dd11825a25e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3043,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 81,
"path": "/cogs/guess.py",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "from discord.ext import commands\nimport random\nimport json\nimport os\n\ndef load():\n if not os.path.isfile('cogs/guess.json'):\n guess = {'integer': {'min': 0, 'max': 99}, 'float': {'min': 0, 'max': 10}}\n store(guess)\n with open('cogs/guess.json') as f:\n return json.load(f)\n\ndef store(config):\n with open('cogs/guess.json','w') as f:\n json.dump(config, f)\n\nclass Guess:\n def reload_numbers(self):\n config = load()\n self.int_min = config['integer']['min']\n self.int_max = config['integer']['max']\n self.f_min = config['float']['min']\n self.f_max = config['float']['max']\n\n def __init__(self, bot):\n self.bot = bot\n self.reload_numbers()\n self.integer_to_guess = self.new_int()\n self.decimal_to_guess = self.new_decimal()\n\n def new_int(self):\n return random.randint(self.int_min, self.int_max)\n\n def new_decimal(self):\n return random.random() * self.f_max + self.f_min\n \n @commands.group(help='Do you know what I want?', aliases=['g'], pass_context=True)\n async def guess(self, ctx):\n if ctx.invoked_subcommand is None:\n await self.bot.say('Welcome to the unknown. 
Guess your way!')\n\n @guess.command(hidden=True, help='reload the numbers', aliases=['r', 'rld'])\n async def reload(self):\n self.reload_numbers()\n await self.bot.say('Fresh numbers coming right in!')\n\n @guess.command(help='Guess the correct integer', aliases=['i', 'int'])\n async def integer(self, *, guess:int=-1):\n if guess > self.int_max or guess < self.int_min:\n await self.bot.say(\"Choose an integer between \" + str(self.int_min) + \" and \" + str(self.int_max) + \"!\")\n elif guess == self.integer_to_guess:\n await self.bot.say(\"Correct!\")\n integer_to_guess = random.randint(self.int_min, self.int_max)\n elif guess < self.integer_to_guess:\n await self.bot.say(\"Too small!\")\n else:\n await self.bot.say(\"Too big!\")\n\n @integer.error\n async def integer_error(self, error, ctx):\n await self.bot.say(\"Choose an integer between \" + str(self.int_min) + \" and \" + str(self.int_max) + \"!\")\n\n\n @guess.command(help='Guess the correct decimal', aliases=['f'])\n async def float(self, decimal:float=-1):\n if decimal > self.f_max or decimal < self.f_min:\n await self.bot.say(\"Choose an decimal between \" + str(self.f_min) + \" and \" + str(self.f_max) + \"!\")\n elif decimal == self.decimal_to_guess:\n await self.bot.say(\"Correct!\")\n self.decimal_to_guess = random.random() * self.f_max + self.f_min\n elif decimal < self.decimal_to_guess:\n await self.bot.say(\"Too small!\")\n else:\n await self.bot.say(\"Too big!\")\n\n @float.error\n async def float_error(self, error, ctx):\n await self.bot.say(\"Choose an decimal between \" + str(self.f_min) + \" and \" + str(self.f_max) + \"!\")\n \ndef setup(bot):\n bot.add_cog(Guess(bot))\n"
},
{
"alpha_fraction": 0.607168972492218,
"alphanum_fraction": 0.6159473061561584,
"avg_line_length": 33.75423812866211,
"blob_id": "a06295bdc8667eecfaec78dbc0e13d25e62f9bc7",
"content_id": "2eb3bcc7e4937899ad5fcde10a080b9a86b1e012",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4101,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 118,
"path": "/bot.py",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "import discord\nfrom discord.ext import commands\nfrom cogs.utils.checks import is_owner, set_owner_id\nimport datetime\nimport json\nimport logging\nimport sys\nimport os\n\nif len(sys.argv) >= 2:\n config_name = sys.argv[1]\nelse:\n config_name = 'bot'\n\nlogging.basicConfig(\n level=logging.INFO,\n filename= config_name + '.log',\n filemode='a',\n format='%(asctime)s [%(levelname)s]-%(name)s: %(message)s',\n datefmt='%Y-%m-%d %H:%M:%S')\nlogging.getLogger('discord').setLevel(logging.WARNING)\nlog = logging.getLogger(config_name)\n\ndef create_config_template(filename):\n config = {'prefix': ',',\n 'help_attrs': {},\n 'cogs': ['cogs.core'],\n 'client_id': '000',\n 'token': 'Abc123',\n 'owner_id': '000'}\n with open(filename, 'w') as f:\n json.dump(config, f)\n log.critical('Could not load {}. Template created. Exiting...'.format(filename))\n exit(-1)\n\ndef load_config(filename):\n if not os.path.isfile(filename):\n create_config_template(filename)\n else:\n with open(filename) as f:\n return json.load(f)\n\nconfig = load_config(config_name + '-config.json')\nbot = commands.Bot(command_prefix=config['prefix'], help_attrs=config['help_attrs'])\n\[email protected]\nasync def on_command_error(error, ctx):\n chan = ctx.message.channel\n if isinstance(error, commands.MissingRequiredArgument):\n await bot.send_message(chan, 'You forgot at least one argument!')\n elif isinstance(error, commands.CommandOnCooldown):\n await bot.send_message(chan, error)\n elif isinstance(error, commands.CheckFailure):\n await bot.send_message(chan, 'YOU SHALL NOT PASS!')\n elif isinstance(error, commands.NoPrivateMessage):\n await bot.send_message(chan, 'This command cannot be used in private messages.')\n elif isinstance(error, commands.DisabledCommand):\n await bot.send_message(chan, 'Sorry. 
This command is disabled and cannot be used.')\n elif isinstance(error, commands.CommandInvokeError):\n log.error('In {0.command.qualified_name}:{1.__class__.__name__}: {1}'.format(ctx, error))\n else:\n log.error('{0.__class__.__name__}: {0}'.format(error), file=sys.stderr)\n\n\[email protected]\nasync def on_ready():\n log.info('-------------------------------------------------')\n log.info('Logged in as ' + bot.user.name)\n log.info(bot.user.id)\n log.info('----------')\n if not hasattr(bot, 'uptime'):\n bot.uptime = datetime.datetime.utcnow()\n\[email protected]\nasync def on_error(event, *args, **kwargs):\n log.error('{0[0]} in {1}: {0[1]}'.format(sys.exc_info(), event))\n\[email protected]\nasync def on_resumed():\n log.warning('resumed...')\n await bot.change_presence(game=discord.Game(name=',help'))\n\[email protected]\nasync def on_message(message):\n if message.author.bot:\n return\n\n if message.channel.is_private:\n if message.tts:\n log.info('Message from {0.author.name} ({0.author.id}): {0.content}'.format(message))\n else:\n log.info('Message from {0.author.name} ({0.author.id})[tts]: {0.content}'.format(message))\n else:\n if message.tts:\n log.info('Message in {0.server.name}/#{0.channel.name} from {0.author.name} ({0.author.id})[tts]: {0.content}'.format(message))\n else:\n log.info('Message in {0.server.name}/#{0.channel.name} from {0.author.name} ({0.author.id}): {0.content}'.format(message))\n await bot.process_commands(message)\n\nif __name__ == '__main__':\n bot.config_name = config_name\n bot.logger = log\n bot.client_id = config['client_id']\n bot.owner_id = config['owner_id']\n\n set_owner_id(bot.owner_id)\n\n for cog in config['cogs']:\n try:\n bot.load_extension(cog)\n except Exception as e:\n log.error('Failed to load extention {}\\n{}: {}'.format(cog, type(e).__name__, e))\n\n try:\n bot.run(config['token'])\n except discord.errors.LoginFailure as e:\n log.critical('Could not log in. 
Check your config in {}!'.format(config_name + '-config.json'))\n exit(-1)\n"
},
{
"alpha_fraction": 0.48785507678985596,
"alphanum_fraction": 0.4977356791496277,
"avg_line_length": 35.25373077392578,
"blob_id": "3c66008b6f11f76383ce6678f26d07d8f49ca339",
"content_id": "fbfb362ada1a5f9925fe0f589b500ec04e6871fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2429,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 67,
"path": "/cogs/random.py",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "from discord.ext import commands\nimport random\n\ndef load_objects():\n with open('cogs/throw_objects.txt') as f:\n return [l.strip() for l in f]\n\nclass Random:\n def __init__(self, bot):\n self.bot = bot\n self.objects = load_objects()\n\n @commands.command(help='Let\\'s roll the dices!', aliases=['r'])\n async def roll(self, *, dice_string: str='d'):\n dice_string_clean = ''\n dice_strings_list = []\n dices_list = []\n dices_result = []\n for c in dice_string: #clean string, only leave relevant characters\n if c in ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'd', 'D', '+', '-']:\n if c == 'D':\n dice_string_clean += 'd'\n elif c == '-':\n dice_string_clean += '+-'\n else:\n dice_string_clean += c\n dice_strings_list = dice_string_clean.split('+')\n for d in dice_strings_list:\n if not d == '':\n dices_list.append(d.lower().split('d'))\n for d in dices_list:\n if len(d) > 1:\n if d[0] == '':\n d[0] = '1'\n if d[1] == '':\n d[1] = '6'\n for i in range(0, int(d[0])):\n dices_result.append(random.randint(1, int(d[1])))\n else:\n dices_result.append(int(d[0]))\n roll_sum = 0\n for d in dices_result:\n roll_sum += d\n await self.bot.say(str(dices_result).replace(', ', ' + ') + ' = ' + str(roll_sum))\n \n @roll.error\n async def roll_error(self, ctx, error):\n await self.bot.say('Please enter a dice roll')\n \n @commands.command(help='In memmory of RoboNitori', pass_context=True)\n async def throw(self, ctx):\n if ctx.message.mentions == []:\n user = ctx.message.author\n else:\n user = ctx.message.mentions[0]\n await self.bot.say('*throws ' + random.choice(self.objects) + ' at* ' + user.mention)\n\n @commands.command(help='Let me choose for you\\nPlease enter your choices seperated by |', aliases=['ch', 'choice'])\n async def choose(self, *, choices: str='Please enter your choices'):\n result = random.choice(choices.split('|'))\n if result == '':\n result = '<empty choice>'\n await self.bot.say(result)\n\n\ndef setup(bot):\n bot.add_cog(Random(bot))\n"
},
{
"alpha_fraction": 0.5561694502830505,
"alphanum_fraction": 0.5635359287261963,
"avg_line_length": 21.625,
"blob_id": "af6090ea0edb4429ed0317a2aef830fce1485cf3",
"content_id": "8c89a52f2b42cec12f6a95a0d54981bc7eaaa8aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 543,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 24,
"path": "/start",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nname=\"${1:-bot}\"\nnorestartfile=\"$name.norestart\"\n\nwhile true; do\n\tif [ -f $norestartfile ]; then\n\t\trm $norestartfile\n\tfi\n\n\tpython3 -u bot.py $name >> log 2>&1 &\n\tpid=$!\n\techo \\[$(date +\"%F %T\")\\]\\[\\\"$name\\\"\\] Started PCBot \\[PID=$pid\\] >> log\n\n\twait $pid\n\techo \\[$(date +\"%F %T\")\\]\\[\\\"$name\\\"\\] PCBot was terminated! >> log\n\n\tif [ -f $norestartfile ]; then\n\t\techo \\[$(date +\"%F %T\")\\]\\[\\\"$name\\\"\\] File $norestartfile found, exiting restart loop >> log\n\t\tbreak\n\tfi\n\n\techo \\[$(date +\"%F %T\")\\]\\[\\\"$name\\\"\\] Restarting >> log\ndone\n"
},
{
"alpha_fraction": 0.6590909361839294,
"alphanum_fraction": 0.6590909361839294,
"avg_line_length": 21,
"blob_id": "6a88db1ab08203d5e3fa85a1c6dbb6cbdcf9f757",
"content_id": "72d94c218c91cfa34bc328b7ce744a85f48fe9ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 264,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 12,
"path": "/cogs/utils/checks.py",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "from discord.ext import commands\n\n_owner_id = None\n\ndef set_owner_id(id):\n global _owner_id\n _owner_id = id\n\ndef is_owner():\n def predicate(ctx):\n return _owner_id != None and ctx.message.author.id == _owner_id\n return commands.check(predicate)\n"
},
{
"alpha_fraction": 0.5649831891059875,
"alphanum_fraction": 0.5784511566162109,
"avg_line_length": 31.282608032226562,
"blob_id": "f134a11bcd5619e5fa6d011e96533e9822c91993",
"content_id": "26f6ae088beaf68c70e63ac22a7d2cdcabe71647",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1485,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 46,
"path": "/cogs/time.py",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "from discord.ext import commands\nimport time as sys_time\nfrom .utils.helper import toBase\n\nclass Time:\n def __init__(self, bot):\n self.bot = bot\n\n @commands.group(help='Come get your Time now. It\\'s fresh!', aliases=['t', 'clk', 'clock'], pass_context=True)\n async def time(self, ctx):\n if ctx.invoked_subcommand is None:\n await self.bot.say('What time do you want?')\n\n @time.command(help='Get current unix time', aliases=['u'])\n async def unix(self):\n await self.bot.say(int(sys_time.time()))\n\n @time.command(help='picubers\\'s custom time system', aliases=['t', 'time', 'lt'])\n async def lcts(self):\n t = int(sys_time.time() + sys_time.localtime().tm_gmtoff) % 86400\n \n s_str = ''\n for i in toBase(t % 25, 5):\n s_str += str(i)\n t //= 25\n\n m_str = ''\n for i in toBase(t % 27, 6):\n m_str += str(i)\n t //= 27\n\n h_str = hex(t).upper()[2:]\n\n await self.bot.say(h_str + '-' + m_str+ '-' + s_str)\n\n @time.command(help='You want that awesome LCTS for your desktop? There you go!')\n async def get_lcts(self):\n await self.bot.say('Have fun: https://github.com/picuber/LCTS-Clock/releases')\n\n @time.command(help='Standard boring time system', aliases=['st', 'standardtime'])\n @commands.cooldown(1, 300)\n async def sbts(self):\n await self.bot.say(sys_time.strftime('%H:%M:%S'))\n\ndef setup(bot):\n bot.add_cog(Time(bot))\n"
},
{
"alpha_fraction": 0.5881497859954834,
"alphanum_fraction": 0.5885132551193237,
"avg_line_length": 34.269229888916016,
"blob_id": "2f69f8f67dce3ecb85aecf6df4b01a7ba726517f",
"content_id": "5cfe7c971c165e8742dc9c60850a0d65e11499d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2751,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 78,
"path": "/cogs/core.py",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "from discord.ext import commands\nimport discord\nfrom cogs.utils.checks import is_owner\nimport logging\nimport os\nfrom pathlib import Path\n\nclass Core:\n def __init__(self, bot):\n global log\n self.bot = bot\n log = bot.logger.getChild('core')\n\n @commands.command(help='Bring PCBot to ***your*** guild', hidden=True, aliases=['invite'])\n @is_owner()\n async def invitelink(self):\n await self.bot.say(discord.utils.oauth_url(self.bot.user.id))\n\n @commands.command(hidden=True, aliases=['exit', 'kill'])\n @is_owner()\n async def killbot(self):\n log.info('Killing PCBot...')\n await self.bot.say('Good bye cruel world!')\n Path(self.bot.config_name + '.norestart').touch()\n os._exit(0)\n\n @commands.command(help='Load a module', hidden=True)\n @is_owner()\n async def load(self, *, module : str):\n module = module.lower()\n if module in self.bot.extensions:\n await self.bot.say(module + ' already loaded')\n return\n try:\n self.bot.load_extension(module)\n except discord.ClientException as e:\n await self.bot.say('Thats not a module!')\n except ImportError as e:\n await self.bot.say('Ew! What\\'s that?? I don\\'t know that! Get that away from me!')\n except Exception as e:\n await self.bot.say(module + ' could not be loaded')\n log.error('{}: {}'.format(type(e).__name__, e))\n else:\n await self.bot.say(module + ' successfully loaded')\n\n @commands.command(help='Unload a module', hidden=True)\n @is_owner()\n async def unload(self, *, module : str):\n module = module.lower()\n if module == 'cogs.core':\n await self.bot.say('You can\\'t take my heart! I only have two!!')\n return\n if self.bot.extensions.get(module) is None:\n await self.bot.say('I don\\'t have that. 
You can\\'t take things from me I don\\'t have')\n return\n try:\n self.bot.unload_extension(module)\n except Exception as e:\n await self.bot.say(module + ' could not be unloaded')\n log.error('{}: {}'.format(type(e).__name__, e))\n else:\n await self.bot.say(module + ' successfully unloaded')\n\n @commands.command(pass_context=True, hidden=True)\n @is_owner()\n async def do(self, ctx, times : int, *, command):\n \"\"\"Repeats a command a specified number of times.\"\"\"\n msg = ctx.message\n msg.content = command\n for i in range(times):\n await self.bot.process_commands(msg)\n\n @commands.command(help='Pong')\n async def ping(self):\n await self.bot.say('Pong')\n\ndef setup(bot):\n bot.add_cog(Core(bot))\n"
},
{
"alpha_fraction": 0.5626376867294312,
"alphanum_fraction": 0.5666030049324036,
"avg_line_length": 36.6187858581543,
"blob_id": "3deb2bf457eb79a55decbd149697eac96149f4f1",
"content_id": "bca9550d5fd9a1f01a45ab1602ce6f49a3477ead",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6809,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 181,
"path": "/cogs/casino.py",
"repo_name": "picuber/PCBot",
"src_encoding": "UTF-8",
"text": "from discord.ext import commands\nfrom cogs.utils.checks import *\nimport json\nimport os\nimport random as r\n\ndef load_casino():\n if not os.path.isfile('cogs/casino.json'):\n return {}\n else:\n with open('cogs/casino.json') as f:\n return json.load(f)\n\ndef store_casino(db):\n with open('cogs/casino.json', 'w') as f:\n json.dump(db, f)\n\nclass CasinoDB:\n def __init__(self):\n self._db = load_casino()\n\n def reload(self):\n self._db = load_casino()\n\n def is_registered(self, uid):\n return uid in self._db\n\n def register(self, uid):\n \"\"\"returns True if newly registered\"\"\"\n if not self.is_registered(uid):\n self._db[uid] = {}\n self._db[uid]['balance'] = 100\n self._db[uid]['bet'] = 5\n store_casino(self._db)\n return True\n else:\n return False\n\n def restore_user_integrity(self, uid):\n if self._db[uid]['balance'] < 0:\n self._db[uid]['balance'] = 0\n if self._db[uid]['bet'] < 0:\n self._db[uid]['bet'] = 0\n if self._db[uid]['balance'] == 0 and self._db[uid]['bet'] >= 1:\n self._db[uid]['bet'] = 1\n elif self._db[uid]['balance'] < self._db[uid]['bet']:\n self._db[uid]['bet'] = self._db[uid]['balance']\n store_casino(self._db)\n\n def set_bal(self, uid, balance):\n self.register(uid)\n self._db[uid]['balance'] = balance\n self.restore_user_integrity(uid)\n\n def add_bal(self, uid, balance):\n self.register(uid)\n self._db[uid]['balance'] += balance\n self.restore_user_integrity(uid)\n\n def sub_bal(self, uid, balance):\n self.register(uid)\n self._db[uid]['balance'] -= balance\n self.restore_user_integrity(uid)\n\n def get_bal(self, uid):\n self.register(uid)\n return self._db[uid]['balance']\n\n def set_bet(self, uid, bet):\n self.register(uid)\n self._db[uid]['bet'] = bet\n self.restore_user_integrity(uid)\n\n def get_bet(self, uid):\n self.register(uid)\n return self._db[uid]['bet']\n\n def lose_bet(self, uid):\n self.sub_bal(uid, self.get_bet(uid))\n\n def win_bet(self, uid):\n self.add_bal(uid, self.get_bet(uid))\n\nclass Casino:\n 
def __init__(self, bot):\n self.bot = bot\n self._cdb = CasinoDB()\n\n @commands.group(help='Can you win the jackpot?', aliases=['cas'], pass_context=True)\n async def casino(self, ctx):\n if ctx.invoked_subcommand is None:\n await self.bot.say('How can I help you?')\n\n @casino.command(help='So, you wanna gamble?', aliases=['reg'], pass_context=True)\n async def register(self, ctx):\n if ctx.message.mentions == []:\n user = ctx.message.author\n else:\n user = ctx.message.mentions[0]\n if self._cdb.register(user.id):\n await self.bot.say('You have now been registered {}'.format(user.mention))\n else:\n await self.bot.say('You are already registered {}'.format(user.mention))\n\n @casino.command(help='Rrrreeeelllooaaaaddddd1!!!one!!eleven!', hidden=True)\n @is_owner()\n async def reload(self):\n self._cdb.reload()\n await self.bot.say('Reloaded...')\n\n @casino.group(help='Check your credit card :credit_card:', aliases=['bal'], pass_context=True)\n async def balance(self, ctx):\n if ctx.invoked_subcommand is None or ctx.invoked_subcommand.name == 'balance':\n if ctx.message.mentions == []:\n user = ctx.message.author\n else:\n user = ctx.message.mentions[0]\n await self.bot.say('{}\\'s balance: {}:dollar:'.format(user.mention, self._cdb.get_bal(user.id)))\n\n @balance.command(hidden=True, aliases=['s'], pass_context=True)\n @is_owner()\n async def set(self, ctx, new_balance: int):\n if ctx.message.mentions == []:\n user = ctx.message.author\n else:\n user = ctx.message.mentions[0]\n self._cdb.set_bal(user.id, new_balance)\n await self.bot.say('{} your balance has been set to {}:dollar:'.format(user.mention, self._cdb.get_bal(user.id)))\n \n @balance.command(hidden=True, aliases=['a'], pass_context=True)\n @is_owner()\n async def add(self, ctx, new_balance: int):\n if ctx.message.mentions == []:\n user = ctx.message.author\n else:\n user = ctx.message.mentions[0]\n self._cdb.add_bal(user.id, new_balance)\n await self.bot.say('{}:dollar: has been added to {}\\'s 
balance'.format(new_balance, user.mention))\n\n @casino.group(help='What are you willing to bet?', pass_context=True)\n async def bet(self, ctx):\n if ctx.invoked_subcommand is None or ctx.invoked_subcommand.name == 'bet':\n if ctx.message.mentions == []:\n user = ctx.message.author\n else:\n user = ctx.message.mentions[0]\n await self.bot.say('Your current bet is {}:dollar: {}'.format(self._cdb.get_bet(user.id), user.mention))\n\n @bet.command(name='set', help='Change your bet', aliases=['s'], pass_context=True)\n async def set_bet(self, ctx, bet:int=5):\n user = ctx.message.author\n self._cdb.set_bet(user.id, bet)\n await self.bot.say('Your bet has been set to {}:dollar: {}'.format(self._cdb.get_bet(user.id), user.mention))\n\n rps_conditions = {#(choice1, choice2) : choice1 wins?\n ('r', 'p') : False,\n ('r', 's') : True,\n ('p', 'r') : True,\n ('p', 's') : False,\n ('s', 'r') : False,\n ('s', 'p') : True\n }\n @casino.command(help='Can you win against me?', aliases=['rps', 'rockspaperscissors', 'rockpaperandscissors', 'rockspaperandscissors'], pass_context=True)\n async def rockpaperscissors(self, ctx, your_choice):\n your_choice = your_choice.lower()[:1]\n if your_choice not in ('r', 'p', 's'):\n await self.bot.say('This is a ***serious*** game! Choose out of R/rock P/paper S/scissors')\n return\n user = ctx.message.author\n bot_choice = r.choice([('r', 'rock'), ('p', 'paper'), ('s', 'scissors')])\n if bot_choice[0] == your_choice:\n await self.bot.say('I had {}! We\\'re tied {}!'.format(bot_choice[1], user.mention))\n elif not self.rps_conditions[(bot_choice[0], your_choice)]:\n await self.bot.say('I had {}! You win {}:dollar: {}'.format(bot_choice[1], self._cdb.get_bet(user.id), user.mention))\n self._cdb.win_bet(user.id)\n else:\n await self.bot.say('I had {}! You lose {}:dollar: {}'.format(bot_choice[1], self._cdb.get_bet(user.id), user.mention))\n self._cdb.lose_bet(user.id)\n\ndef setup(bot):\n bot.add_cog(Casino(bot))\n"
}
] | 8 |
GwylimGT/TestOdooGT | https://github.com/GwylimGT/TestOdooGT | 1c9b11dd735ecc898162ce7dbabb2b2919eb2bcc | 71899ea5b02a3982f308f9d2c1a63fefd3994c8b | 699759c09794f134f7a89a2d1ec7857a8d800852 | refs/heads/main | 2023-08-14T20:46:58.512911 | 2021-09-22T14:50:01 | 2021-09-22T14:50:01 | 403,409,702 | 0 | 0 | null | 2021-09-05T20:47:27 | 2021-09-22T12:53:14 | 2021-09-22T14:50:01 | Python | [
{
"alpha_fraction": 0.6256860494613647,
"alphanum_fraction": 0.6267837285995483,
"avg_line_length": 42.380950927734375,
"blob_id": "ef2250e546203aff321e362c48cab24056214948",
"content_id": "ce680af63ef16fc13fd93e25e0aa5e5953b6acf0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 911,
"license_type": "no_license",
"max_line_length": 148,
"num_lines": 21,
"path": "/visibility_unit_price/controllers/controllers.py",
"repo_name": "GwylimGT/TestOdooGT",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# from odoo import http\n\n\n# class VisibilityUnitPrice(http.Controller):\n# @http.route('/visibility_unit_price/visibility_unit_price/', auth='public')\n# def index(self, **kw):\n# return \"Hello, world\"\n\n# @http.route('/visibility_unit_price/visibility_unit_price/objects/', auth='public')\n# def list(self, **kw):\n# return http.request.render('visibility_unit_price.listing', {\n# 'root': '/visibility_unit_price/visibility_unit_price',\n# 'objects': http.request.env['visibility_unit_price.visibility_unit_price'].search([]),\n# })\n\n# @http.route('/visibility_unit_price/visibility_unit_price/objects/<model(\"visibility_unit_price.visibility_unit_price\"):obj>/', auth='public')\n# def object(self, obj, **kw):\n# return http.request.render('visibility_unit_price.object', {\n# 'object': obj\n# })\n"
},
{
"alpha_fraction": 0.7154929637908936,
"alphanum_fraction": 0.7183098793029785,
"avg_line_length": 34.599998474121094,
"blob_id": "62d8bf785f3159d39d4d7a582b1ee3ec22094847",
"content_id": "b5dad62ece3a3c70fd320733ccb73290068ae99b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 355,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 10,
"path": "/visibility_unit_price/models/models.py",
"repo_name": "GwylimGT/TestOdooGT",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom odoo import models, fields, api\n\n\nclass sales_order_checkbox_show_unit_price(models.Model):\n _inherit = 'sale.order.line'\n\n show_unit_price_to_client = fields.Boolean(string=\"Show Unit Price to client\", required=True, default=True)\n extra_field_test = fields.Boolean(string=\"Extra test\", required=True, default=True)"
},
{
"alpha_fraction": 0.6502732038497925,
"alphanum_fraction": 0.6557376980781555,
"avg_line_length": 22,
"blob_id": "927d4e1ac09d5cfb7f954b935bc421eadecad00e",
"content_id": "b2c13c322c4d15d9da243beb5ae30c39979c4878",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 183,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 8,
"path": "/birthday/models/models.py",
"repo_name": "GwylimGT/TestOdooGT",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom odoo import models, fields, api\n\n\nclass event_gt_extended(models.Model):\n _inherit = 'res.partner'\n \n birthday = fields.Datetime('Date of birth')"
}
] | 3 |
jacobmetcalfe/Python_Crash_Course | https://github.com/jacobmetcalfe/Python_Crash_Course | 638f75b509d03a6391810a18892f09046a88bdd1 | 529e9b0478fa10959492b4bd0b931cc8fd2b9f90 | 2f961fcc05aa27311349714e3be843b30ad8a458 | refs/heads/master | 2021-06-21T18:35:58.441364 | 2021-01-08T19:29:15 | 2021-01-08T19:29:15 | 169,468,754 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6996687650680542,
"alphanum_fraction": 0.702061116695404,
"avg_line_length": 33.75,
"blob_id": "8e0dff310a21edcecb9a8778439c8520d9d8c90a",
"content_id": "a205c8a3aaf2dd77d6d71090fd355ae3b1efd090",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5434,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 152,
"path": "/Chapter 8: Functions.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Chapter 8: Functions\r\n# Functions are blocks of code that are designed to do one specific job\r\n# Always want to pass information to functions\r\n\r\n# Defining a function\r\ndef greet_user(): # function definition\r\n print('Hello')\r\n\r\n\r\ngreet_user() # Will print 'hello' due to the method being called.\r\n\r\n\r\n# Passing information to a function\r\ndef greet_user(username):\r\n print('Hello ' + username.title() + '!')\r\n\r\n\r\ngreet_user('Jesse') # passing jesse in as a username, for it to later print out\r\n\r\n\r\n# You can call greet_user() as many times as you want and pass it any name\r\n\r\n# Arguments and Parameters\r\n# Username acted as a parameter, a piece of information the function needs to do its job\r\n# The specific thing you passed 'jesse' is known as an argument and stored as the username\r\n\r\n# Examples\r\n# 8-1. Message: Write a function called display_message() that prints one sentence\r\n# telling everyone what you are learning about in this chapter. Call the\r\n# function, and make sure the message displays correctly.\r\ndef display_message():\r\n print('We are learning about functions in this chapter.')\r\n\r\n\r\ndisplay_message()\r\n\r\n# 8-2. Favorite Book: Write a function called favorite_book() that accepts one\r\n# parameter, title. The function should print a message, such as One of my\r\n# favorite books is Alice in Wonderland. 
Call the function, making sure to\r\n# include a book title as an argument in the function call.\r\nprint('\\n')\r\n\r\n\r\ndef favorite_book(title):\r\n print('Your favorite book is: ' + title.title())\r\n\r\n\r\nfavorite_book('Alice in Wonderland')\r\n\r\n# NOTE: 2 Blank lines expected after defining functions\r\n\r\n# Passing Arguments\r\n# Python has to match the value that you pass into it calling them positional arguments\r\nprint('\\n')\r\n\r\n\r\ndef describe_pet(animal_type, pet_name):\r\n print('I have a ' + animal_type + '.')\r\n print('My ' + animal_type + \"'s name is \" + pet_name.title())\r\n\r\n\r\n# Goes into the function respectively (in order) hamster = animal_type\r\ndescribe_pet('hamster', 'harry')\r\nprint('\\n')\r\n# Multiple function calls, just describes different dog, put different arguments\r\ndescribe_pet('dog', 'willie')\r\nprint('\\n')\r\n# Will go in as wrong variables due to it being left to right\r\ndescribe_pet('harry', 'hamster')\r\nprint('\\n')\r\n\r\n# Keyword Arguments\r\n# A keyword argument is a name-value pair that you pass into a function\r\n# This makes it so order no longer will matter\r\ndescribe_pet(animal_type='hamster', pet_name='harry')\r\n# OR\r\ndescribe_pet(pet_name='harry', animal_type='hamster')\r\n# Will print the same thing because we are explicitly calling on each parameter\r\nprint('\\n')\r\n\r\n\r\n# Default Values\r\n# Defines the type as a dog, non default values must be placed first as parameters as that is how Python interprets them\r\ndef describe_pet(pet_name, animal_type='dog', ):\r\n print('I have a ' + animal_type + '.')\r\n print('My ' + animal_type + \"'s name is \" + pet_name.title())\r\n\r\n\r\n# Willie is a dog, dog was already defined, we are just passing willie in\r\ndescribe_pet(pet_name='willie')\r\nprint('\\n')\r\n# You can still change the animal type if you use it explicitly again\r\ndescribe_pet(pet_name='roger', animal_type='turtle')\r\n\r\n\r\n# Equivalent function calls\r\n# 
Functions can have many equivalent calls\r\n# NOTE: Doesn't matter which calling stile you use, as long as it is easy to read and produces the correct output\r\n\r\n# Avoiding Argument Errors\r\n# could not just do\r\n# describe_pet()\r\n# you are not passing any arguments in and python does not know what to put for those values\r\n# Will get the error 'TypeError: describe_pet() missing 2 required positional arguments: 'animal_type' and 'pet_name'\r\n# Will get a similar error when you do too many arguments\r\n\r\n# Examples\r\n# 8-3. T-Shirt: Write a function called make_shirt() that accepts a size and the\r\n# text of a message that should be printed on the shirt. The function should print\r\n# a sentence summarizing the size of the shirt and the message printed on it.\r\n# Call the function once using positional arguments to make a shirt. Call the\r\n# function a second time using keyword arguments.\r\ndef make_shirt(size, text):\r\n print('Your size ' + size + \" shirt says '\" + text + \"' on the front\")\r\n\r\n\r\nmake_shirt('M', \"What's up\")\r\nmake_shirt(text='Hi', size='L')\r\n\r\nprint('\\n')\r\n\r\n\r\n# 8-4. Large Shirts: Modify the make_shirt() function so that shirts are large\r\n# by default with a message that reads I love Python. Make a large shirt and a\r\n# medium shirt with the default message, and a shirt of any size with a different\r\n# message.\r\ndef make_shirt(size, text):\r\n if size == 'L':\r\n print('Your size ' + size + \" shirt says 'I love Python' on the front\")\r\n else:\r\n print('Your size ' + size + \" shirt says '\" + text + \"' on the front\")\r\n\r\n\r\nmake_shirt('M', \"What's up\")\r\nmake_shirt(text='Hi', size='L')\r\nprint('\\n')\r\n\r\n\r\n# 8-5. Cities: Write a function called describe_city() that accepts the name of\r\n# a city and its country. The function should print a simple sentence, such as\r\n# Reykjavik is in Iceland. 
Give the parameter for the country a default value.\r\n# Call your function for three different cities, at least one of which is not in the\r\n# default country.\r\ndef describe_city(city, country='italy'):\r\n print(city.title() + ' is in ' + country.title())\r\n\r\n\r\ndescribe_city('venice')\r\ndescribe_city('naples')\r\ndescribe_city('new york city', country='the united states')\r\n\r\n# Return Values\r\n"
},
{
"alpha_fraction": 0.7510460019111633,
"alphanum_fraction": 0.7531380653381348,
"avg_line_length": 51.11111068725586,
"blob_id": "3ab3de05a5f09184d8ded305871dc87305b7ce38",
"content_id": "43f5a1a76cb49bb3d55e95b810b5dd510810b560",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 478,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 9,
"path": "/Chapter 1: Get Python.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Information from 'Python Crash Course, A Hands-on, Project - Based Introduction to programming\r\n# By Eric Matthes\r\n\r\n# Chapter 1, Intro just contained how to set python up\r\n# You really just go on the Python website, download the latest version\r\n# I would recommend an IDE rather than terminal, or even just an online compiler\r\n# If student, Pycharm from JetBrains is great.\r\n# URL for Python https://www.python.org/downloads/\r\n# URL for Pycharm https://www.jetbrains.com/pycharm/\r\n"
},
{
"alpha_fraction": 0.7158387899398804,
"alphanum_fraction": 0.7235283851623535,
"avg_line_length": 44.17959213256836,
"blob_id": "a4440362059d48e34b3ceb9dcaac3d21a82b3c88",
"content_id": "05e41d08f604fd0a1ec68c5f29a98eae059b4f7b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11334,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 245,
"path": "/Chapter 3: Lists.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Chapter 3, Lists\r\n# A list is a collection of items in a particular order\r\nbicycles = ['trek', 'cannondale', 'redline', 'specialized']\r\n# Prints entire list\r\nprint(bicycles)\r\n\r\n# Access individual items of a list\r\nprint(bicycles[0]) # Prints 1st item\r\n\r\n# Make it a clean format, title first word\r\nprint(bicycles[0].title())\r\n\r\n# Index Postions start at 0, not 1, first item of a list is the 0th position\r\nprint(bicycles[1].title())\r\n\r\n# Can index backwards in a list as well\r\nprint(bicycles[-1].title())\r\nbike_msg = \"My first bicycle was a \" + bicycles[0].title() + \".\"\r\nprint(bike_msg + \"\\n\")\r\n\r\n# Exercises\r\n\r\n# 3-1. Names: Store the names of a few of your friends in a list called names. Print each personís name by accessing\r\n# each element in the list, one at a time.\r\nfriends_list = [\"JJ\", \"Jake\", \"Chris\"]\r\nprint(friends_list[0])\r\nprint(friends_list[1])\r\nprint(friends_list[2])\r\nprint(\"\\n\")\r\n\r\n# 3-2. Greetings: Start with the list you used in Exercise 3-1, but instead of just printing each personís name,\r\n# print a message to them. The text of each message should be the same, but each message should be personalized with\r\n# the personís name.\r\nfriends_greeting = \"Hey, how are you \"\r\nprint(friends_greeting + friends_list[0])\r\nprint(friends_greeting + friends_list[1])\r\nprint(friends_greeting + friends_list[2])\r\n\r\n# 3-3. Your Own List: Think of your favorite mode of transportation, such as a motorcycle or a car, and make a list\r\n# that stores several examples. 
Use your list to print a series of statements about these items, such as \"I would\r\n# like to own a Honda motorcycle.\"\r\ntransportation_msg = "I would like to own a "\r\ntype_of_vehicle = " motorcycle"\r\nbrand_names = ["Honda", "Suzuki", "Daytona"]\r\nprint(transportation_msg + brand_names[0] + type_of_vehicle)\r\nprint(transportation_msg + brand_names[1] + type_of_vehicle)\r\nprint(transportation_msg + brand_names[2] + type_of_vehicle)\r\n\r\n# Changing, Adding, and Removing Elements\r\n# Changing Elements\r\nmotorcycles = ['honda', 'yamaha', 'suzuki']\r\nprint(motorcycles)\r\nmotorcycles[0] = 'ducati'\r\nprint(motorcycles)\r\n\r\n# Adding Elements\r\n# Append adds to the end of the list.\r\nmotorcycles.append('honda')\r\nprint(motorcycles)\r\n\r\n# Inserting Data\r\nmotorcycles.insert(0, 'harley') # inserts at 0th position\r\nprint(motorcycles)\r\n\r\n# Deleting Data\r\ndel motorcycles[0]\r\nprint(motorcycles)\r\n\r\n# The pop method removes the last item in a list, but it lets you work with the item after removing it; the top of\r\n# a stack is the end of a list\r\npopped_motorcycle = motorcycles.pop()\r\nprint(motorcycles)\r\nprint(popped_motorcycle)\r\n\r\nlast_owned = motorcycles.pop()\r\nprint("The last motorcycle I owned was a " + last_owned.title() + ".\\n")\r\n\r\n# Will pop the item in the 0th position\r\nfirst_owned = motorcycles.pop(0)\r\nprint('The first motorcycle I owned was a ' + first_owned.title() + '.\\n')\r\n\r\n# Each time pop is used, the item is no longer in the list. If you want to delete an item from a list and not use\r\n# that item in any way, use del; if you see future use for the item, use pop\r\n\r\n# Removing an item by Value\r\nmotorcycles.remove('yamaha')\r\nprint(motorcycles)\r\n\r\nmotorcycles = ['honda', 'yamaha', 'suzuki', 'ducati']\r\ntoo_expensive = 'ducati'\r\nmotorcycles.remove(too_expensive)\r\nprint("\\nA " + too_expensive.title() + " is too expensive for me.\\n")\r\n# The remove method only deletes the first occurrence of the value; if the value appears multiple times, a loop\r\n# must be used\r\n\r\n# 3-4. Guest List: If you could invite anyone, living or deceased, to dinner, who would you invite? Make a list that\r\n# includes at least three people you'd like to invite to dinner. Then use your list to print a message to each\r\n# person, inviting them to dinner.\r\nguest_list = ['Gerald', 'Gerard', 'Richard']\r\nprint(guest_list[0] + " please come to my dinner.")\r\nprint(guest_list[1] + " please come to my dinner.")\r\nprint(guest_list[2] + " please come to my dinner.\\n")\r\n\r\n# 3-5. Changing Guest List: You just heard that one of your guests can't make the dinner, so you need to send out a\r\n# new set of invitations. You'll have to think of someone else to invite. Start with your program from Exercise 3-4.\r\n# Add a print statement at the end of your program stating the name of the guest who can't make it. Modify your list,\r\n# replacing the name of the guest who can't make it with the name of the new person you are inviting. Print a second\r\n# set of invitation messages, one for each person who is still in your list.\r\nmissed_guest = guest_list.pop()\r\nprint(missed_guest + " could not make the party")\r\nguest_list.append('Henry')\r\nprint('However, ' + guest_list[-1] + ' can make the party!\\n')\r\nprint(guest_list[0] + " please come to my dinner.")\r\nprint(guest_list[1] + " please come to my dinner.")\r\nprint(guest_list[2] + " please come to my dinner.\\n")\r\n\r\n# 3-6. More Guests: You just found a bigger dinner table, so now more space is available. Think of three more guests\r\n# to invite to dinner. Start with your program from Exercise 3-4 or Exercise 3-5. Add a print statement to the end of\r\n# your program informing people that you found a bigger dinner table. Use insert() to add one new guest to the\r\n# beginning of your list. Use insert() to add one new guest to the middle of your list. Use append() to add one new\r\n# guest to the end of your list. Print a new set of invitation messages, one for each person in your list.\r\nprint("We have found a bigger dinner table!")\r\nguest_list.insert(0, 'Bucky')\r\nguest_list.insert(2, 'Chucky')\r\nguest_list.append('Ducky')\r\nprint(guest_list[0] + " please come to my dinner.")\r\nprint(guest_list[1] + " please come to my dinner.")\r\nprint(guest_list[2] + " please come to my dinner.")\r\nprint(guest_list[3] + " please come to my dinner.")\r\nprint(guest_list[4] + " please come to my dinner.")\r\nprint(guest_list[5] + " please come to my dinner.\\n")\r\n\r\n# 3-7. Shrinking Guest List: You just found out that your new dinner table won't arrive in time for the dinner,\r\n# and you have space for only two guests. Start with your program from Exercise 3-6. Add a new line that prints a\r\n# message saying that you can invite only two people for dinner. Use pop() to remove guests from your list one at a\r\n# time until only two names remain in your list. Each time you pop a name from your list, print a message to that\r\n# person letting them know you're sorry you can't invite them to dinner. Print a message to each of the two people\r\n# still on your list, letting them know they're still invited. Use del to remove the last two names from your list,\r\n# so you have an empty list. Print your list to make sure you actually have an empty list at the end of your program.\r\nprint('Unfortunately, I can only invite 2 people to dinner.')\r\ndenied_guest = guest_list.pop()\r\nprint("Sorry, I can't invite you to my dinner anymore, " + denied_guest)\r\ndenied_guest = guest_list.pop()\r\nprint("Sorry, I can't invite you to my dinner anymore, " + denied_guest)\r\ndenied_guest = guest_list.pop()\r\nprint("Sorry, I can't invite you to my dinner anymore, " + denied_guest)\r\ndenied_guest = guest_list.pop()\r\nprint("Sorry, I can't invite you to my dinner anymore, " + denied_guest)\r\nprint("Glad we got rid of those losers am I right " + guest_list[0])\r\nprint("Glad we got rid of those losers am I right " + guest_list[1])\r\nguest_list.remove('Gerald')\r\nguest_list.remove('Bucky')\r\nprint(guest_list) # Empty list\r\n\r\n# Organize a list\r\n# The sort method sorts alphabetically\r\ncars = ['bmw', 'audi', 'toyota', 'subaru']\r\nprint(cars)\r\ncars.sort()\r\nprint(cars)\r\nprint('\\n')\r\n# Sort cars in reverse order\r\ncars.sort(reverse=True)\r\nprint(cars)\r\n\r\n# To maintain the original order of a list but present it in a sorted order, you can use the sorted() function.\r\ncars = ['bmw', 'audi', 'toyota', 'subaru']\r\nprint("Here is the original list: ")\r\nprint(cars)\r\nprint("\\nHere is the sorted list: ")\r\nprint(sorted(cars))\r\nprint("\\nHere is the original list: ")\r\nprint(cars)\r\nprint('\\n')\r\n# When sorting, try to make sure that the list is in lowercase so that everything sorts correctly\r\n\r\n# Printing a list in reverse order\r\nprint("This is the reversed list: ")\r\ncars.reverse()\r\nprint(cars)\r\n\r\n# Finding the length of a list\r\nprint(len(cars))\r\n\r\n# 3-8. Seeing the World: Think of at least five places in the world you'd like to visit. Store the locations in a list.\r\n# Make sure the list is not in alphabetical order. Print your list in its original order. Don't worry about printing\r\n# the list neatly, just print it as a raw Python list. Use sorted() to print your list in alphabetical order without\r\n# modifying the actual list. Show that your list is still in its original order by printing it. Use sorted() to print\r\n# your list in reverse alphabetical order without changing the order of the original list. Show that your list is\r\n# still in its original order by printing it again. Use reverse() to change the order of your list. Print the list to\r\n# show that its order has changed. Use reverse() to change the order of your list again. Print the list to show it's\r\n# back to its original order. Use sort() to change your list so it's stored in alphabetical order. Print the list to\r\n# show that its order has been changed. Use sort() to change your list so it's stored in reverse alphabetical order.\r\n# Print the list to show that its order has changed.\r\nlocations = ['spain', 'japan', 'italy', 'indonesia', 'mexico']\r\nprint(locations)\r\nprint(sorted(locations))\r\nprint(locations)\r\nprint(sorted(locations, reverse=True))\r\nprint(locations)\r\nlocations.reverse()\r\nprint(locations)\r\nlocations.reverse()\r\nprint(locations)\r\nlocations.sort()\r\nprint(locations)\r\nlocations.sort(reverse=True)\r\nprint(locations)\r\nprint("\\n")\r\n\r\n# 3-9. Dinner Guests: Working with one of the programs from Exercises 3-4 through 3-7 (page 46), use len() to print a\r\n# message indicating the number of people you are inviting to dinner.\r\nlength_of_guest_list = len(guest_list)\r\nprint("There are " + str(length_of_guest_list) + " people coming to the party.\\n")\r\n\r\n# 3-10. Every Function: Think of something you could store in a list. For example, you could make a list of\r\n# mountains, rivers, countries, cities, languages, or anything else you'd like. Write a program that creates a list\r\n# containing these items and then uses each function introduced in this chapter at least once.\r\nlanguages = ['spanish', 'english', 'french', 'german', 'japanese']\r\nprint(languages[0])\r\nprint(languages[0].title())\r\nprint(languages[-1])\r\nlanguages.append('chinese')\r\nprint(languages)\r\nlanguages.insert(0, 'korean')\r\nprint(languages)\r\nlanguages.pop()\r\nprint(languages)\r\ndel languages[0]\r\nlanguages.remove('english')\r\nlanguages.sort()\r\nprint(languages)\r\nlanguages.sort(reverse=True)\r\nprint(languages)\r\nprint(sorted(languages))\r\nlanguages.reverse()\r\nprint(languages)\r\nprint(len(languages))\r\nprint('\\n')\r\n\r\n# An index out of range error means that you are accessing something that is not in the list; check your indexing\r\n# (remember that index [0] is the first item in the list).\r\n# 3-11. Intentional Error: If you have not received an index error in one of your programs yet,\r\n# try to make one happen. Change an index in one of your programs to produce an index error. Make sure you correct\r\n# the error before closing the program. Will do in comment mode to not mess up the entire code\r\n# errored_list = []\r\n# print(errored_list[-1])\r\n# Cannot access because nothing is in the list.\r\n"
},
{
"alpha_fraction": 0.6089873909950256,
"alphanum_fraction": 0.6197641491889954,
"avg_line_length": 38.53296661376953,
"blob_id": "1d5996f652558ac86af441b3f8871b297af93430",
"content_id": "d03e2913d58e447cec9336ba129d4d01d386f47b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 14760,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 364,
"path": "/Chapter 6: Dictionaries.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Chapter 6: Dictionaries\r\n# Storing multiple data types in one piece of information\r\n# This can be represented as an object or a dictionary\r\nprint('\\n')\r\nalien_0 = {'color': 'green', 'points': 5}\r\nprint(alien_0['color'])\r\nprint(alien_0['points'])\r\n\r\n# A key-value pair is a piece of a dictionary. Each key is connected to a value.\r\n# A value can be a number, string, list, or another dictionary\r\n# A dictionary is wrapped in braces\r\n# Can have an unlimited number of key-value pairs\r\nnew_points = alien_0['points']\r\nprint('You just earned ' + str(new_points) + ' points!')\r\n\r\n# Adding New Key-Value Pairs\r\n# Dictionaries are dynamic structures; they can be added to at any time\r\nalien_0['x_position'] = 0\r\nalien_0['y_position'] = 25\r\nprint(alien_0)\r\n# It is sometimes best to start with an empty dictionary\r\nalien_0 = {}\r\nalien_0['color'] = 'green'\r\nalien_0['points'] = 5\r\nprint(alien_0)\r\n\r\n# Usually use empty dictionaries for user-supplied data\r\n# Modifying Values in a Dictionary\r\nprint('The alien is ' + alien_0['color'] + '.')\r\nalien_0['color'] = 'yellow'\r\nprint('The alien is now ' + alien_0['color'] + '.')\r\n\r\nalien_0 = {'x_position': 0, 'y_position': 25, 'speed': 'medium'}\r\nprint("Original x-position: " + str(alien_0['x_position']))\r\n\r\n# Move the alien to the right\r\n# Determine how far to move the alien based on its current speed\r\nif alien_0['speed'] == 'slow':\r\n    x_increment = 1\r\nelif alien_0['speed'] == 'medium':\r\n    x_increment = 2\r\nelse:\r\n    x_increment = 3\r\nalien_0['x_position'] = alien_0['x_position'] + x_increment\r\nprint("New x_position: " + str(alien_0['x_position']))\r\nalien_0['speed'] = 'fast'\r\n\r\n# Removing Key-Value Pairs\r\ndel alien_0['x_position']\r\n# Removes permanently\r\n\r\n# A dictionary of Similar Objects\r\n# This is how a dictionary should be laid out.\r\nfavorite_languages = {\r\n    'jen': 'python',\r\n    'sarah': 'c',\r\n    'edward': 'ruby',\r\n    'phil': 'python'\r\n}\r\nprint("Sarah's favorite language is " + favorite_languages['sarah'].title() + '.')\r\nprint('\\n')\r\n\r\n# Exercises\r\n\r\n# 6-1. Person: Use a dictionary to store information about a person you know. Store their first name, last name, age,\r\n# and the city in which they live. You should have keys such as first_name, last_name, age, and city. Print each\r\n# piece of information stored in your dictionary.\r\nperson_I_know = {'first_name': 'JJ',\r\n                 'last_name': 'perkins',\r\n                 'age': 22,\r\n                 'city': 'colorado springs'\r\n                 }\r\nprint(person_I_know)\r\n\r\n# 6-2. Favorite Numbers: Use a dictionary to store people's favorite numbers. Think of five names, and use them as\r\n# keys in your dictionary. Think of a favorite number for each person, and store each as a value in your dictionary.\r\n# Print each person's name and their favorite number. For even more fun, poll a few friends and get some actual data\r\n# for your program.\r\nprint('\\n')\r\nfavorite_numbers = {\r\n    'roger': 15,\r\n    'bill': 18,\r\n    'nigel': 20,\r\n    'sam': 22,\r\n    'todd': 30\r\n}\r\nprint("Roger's favorite number is " + str(favorite_numbers['roger']) + '.')\r\nprint("Bill's favorite number is " + str(favorite_numbers['bill']) + '.')\r\nprint("Nigel's favorite number is " + str(favorite_numbers['nigel']) + '.')\r\nprint("Sam's favorite number is " + str(favorite_numbers['sam']) + '.')\r\nprint("Todd's favorite number is " + str(favorite_numbers['todd']) + '.')\r\nprint('\\n')\r\n\r\n# 6-3. Glossary: A Python dictionary can be used to model an actual dictionary. However, to avoid confusion,\r\n# let's call it a glossary. Think of five programming words you have learned about in the previous chapters. Use these\r\n# words as the keys in your glossary, and store their meanings as values. Print each word and its meaning as neatly\r\n# formatted output. You might print the word followed by a colon and then its meaning, or print the word on one line\r\n# and then print its meaning indented on a second line. Use the newline character (\\n) to insert a blank line between\r\n# each word-meaning pair in your output.\r\nglossary = {\r\n    'list': 'a data structure in Python that is a mutable, '\r\n            'or changeable, ordered sequence of elements',\r\n    'integer': 'a whole number; a number that is not a fraction',\r\n    'double': 'a fundamental data type built into the compiler and used to define numeric variables '\r\n              'holding numbers with decimal points',\r\n    'range': 'the range() type returns an immutable sequence of numbers between the given start integer and the stop '\r\n             'integer',\r\n    'immutable': 'cannot be changed'\r\n}\r\nprint("List: " + glossary['list'] + '.\\n')\r\nprint("Integer: " + glossary['integer'] + '.\\n')\r\nprint("Double: " + glossary['double'] + '.\\n')\r\nprint("Range: " + glossary['range'] + '.\\n')\r\nprint("Immutable: " + glossary['immutable'] + '.\\n')\r\n\r\n# Looping through all key-value pairs\r\n# Looping over the dictionary directly does the exact same thing as using the keys() method\r\nfor name in favorite_languages.keys():\r\n    print(name.title())\r\nfriends = ['phil', 'sarah']\r\nfor name in favorite_languages.keys():\r\n    print(name.title())\r\n    if name in friends:\r\n        print("Hi " + name.title() + ", I see your favorite language is " +\r\n              favorite_languages[name].title() + '!')\r\n\r\nif 'erin' not in favorite_languages.keys():\r\n    print('Erin, please take our poll!\\n')\r\n\r\n# Looping through a dictionary's Keys in Order\r\nfor name in sorted(favorite_languages.keys()):\r\n    print(name.title() + ', thank you for taking our poll!')\r\n\r\n# Looping through the values of a dictionary\r\n# Checks all values without checking for repeats\r\nprint("\\nThe following languages have been mentioned:")\r\nfor language in favorite_languages.values():\r\n    print(language.title())\r\n\r\n# The set() function takes all repetitive values out; it works for lists too.\r\nprint("\\nThe following unique languages have been mentioned:")\r\nfor language in set(favorite_languages.values()):\r\n    print(language.title())\r\n    print('\\n')\r\n\r\n# Examples\r\n# 6-4. Glossary 2: Now that you know how to loop through a dictionary, clean up the code from Exercise 6-3 (\r\n# page 102) by replacing your series of print statements with a loop that runs through the dictionary's keys and\r\n# values. When you're sure that your loop works, add five more Python terms to your glossary. When you run your\r\n# program again, these new words and meanings should automatically be included in the output.\r\nfor term in glossary:\r\n    print(term.title() + ': ' + glossary[term])\r\n\r\n# 6-5. Rivers: Make a dictionary containing three major rivers and the country each river runs through. One key-value\r\n# pair might be 'nile': 'egypt'. Use a loop to print a sentence about each river, such as The Nile runs through\r\n# Egypt. Use a loop to print the name of each river included in the dictionary. Use a loop to print the name of each\r\n# country included in the dictionary.\r\nrivers = {\r\n    'nile': 'egypt',\r\n    'mississippi': 'the united states',\r\n    'arkansas': 'the united states'\r\n}\r\nfor river in rivers:\r\n    print('The ' + river.title() + ' River runs through ' + rivers[river].title() + '.\\n')\r\n\r\nprint('River: ')\r\nfor river in rivers.keys():\r\n    print(river.title())\r\nprint('\\n')\r\nprint('Country: ')\r\nfor river in rivers.values():\r\n    print(river.title())\r\n\r\n# 6-6. Polling: Use the code in favorite_languages.py (page 104). Make a list of people who should take the favorite\r\n# languages poll. Include some names that are already in the dictionary and some that are not. Loop through the list\r\n# of people who should take the poll. If they have already taken the poll, print a message thanking them for\r\n# responding. If they have not yet taken the poll, print a message inviting them to take the poll.\r\nfavorite_languages = {'phil': 'python',\r\n                      'sarah': 'c',\r\n                      'jen': 'python',\r\n                      'edward': 'ruby',\r\n                      }\r\nfavorite_languages_2 = {'phil': 'python',\r\n                        'sarah': 'c',\r\n                        'jen': 'python',\r\n                        'janet': 'c++',\r\n                        }\r\nfor voter in favorite_languages:\r\n    if voter in favorite_languages_2:\r\n        print('Thanks for taking the poll: ' + voter.title())\r\n    else:\r\n        print('You should take our poll: ' + voter.title())\r\nprint('\\n')\r\n# Nesting\r\n# Storing a set of dictionaries in a list\r\nalien_0 = {'color': 'green', 'points': 5}\r\nalien_1 = {'color': 'yellow', 'points': 10}\r\nalien_2 = {'color': 'red', 'points': 15}\r\naliens = [alien_0, alien_1, alien_2]\r\nfor alien in aliens:\r\n    print(alien)\r\n\r\n# More intense example\r\n# Make an empty list for storing the aliens\r\naliens = []\r\n\r\n# Make 30 green aliens\r\nfor alien_number in range(30):\r\n    new_alien = {'color': 'green', 'points': 5, 'speed': 'slow'}\r\n    aliens.append(new_alien)\r\n\r\nfor alien in aliens[:3]:\r\n    if alien['color'] == 'green':\r\n        alien['color'] = 'yellow'\r\n        alien['speed'] = 'medium'\r\n        alien['points'] = 10\r\n\r\n    elif alien['color'] == 'yellow':\r\n        alien['color'] = 'red'\r\n        alien['speed'] = 'fast'\r\n        alien['points'] = 15\r\n\r\n# Show the first five aliens\r\nfor alien in aliens[:5]:\r\n    print(alien)\r\nprint('...')\r\n\r\nprint("Total number of aliens: " + str(len(aliens)))\r\nprint('\\n')\r\n# Lists inside of dictionaries\r\npizza = {\r\n    'crust': 'thick',\r\n    'toppings': ['mushroom', 'extra cheese']\r\n}\r\nprint("You ordered a " + pizza['crust'] + "-crust pizza " + "with the following toppings:")\r\nfor topping in pizza['toppings']:\r\n    print(topping)\r\n\r\nfavorite_languages = {\r\n    'jen': ['python', 'ruby'],\r\n    'sarah': ['c'],\r\n    'edward': ['ruby', 'go'],\r\n    'phil': ['python', 'haskell']\r\n}\r\nfor name, languages in favorite_languages.items(): # items() yields two loop variables, one for the key and one for the value\r\n    print('\\n' + name.title() + "'s favorite languages are:")\r\n    for language in languages:\r\n        print(language.title())\r\n\r\n# Do not nest these too deeply; it makes the code hard to read\r\n# A Dictionary in a Dictionary\r\nusers = {\r\n    'aeinstein': {\r\n        'first': 'albert',\r\n        'last': 'einstein',\r\n        'location': 'princeton'\r\n    },\r\n    'mcurie': {'first': 'marie',\r\n               'last': 'curie',\r\n               'location': 'paris'}\r\n}\r\nfor username, user_info in users.items():\r\n    print("\\nUsername: " + username.title())\r\n    full_name = user_info['first'] + " " + user_info['last']\r\n    location = user_info['location']\r\n\r\n    print("Full name: " + full_name.title())\r\n    print("Location: " + location.title())\r\n\r\n# Examples\r\n\r\n# 6-7. People: Start with the program you wrote for Exercise 6-1 (page 102).\r\n# Make two new dictionaries representing different people, and store all three\r\n# dictionaries in a list called people. Loop through your list of people. As you\r\n# loop through the list, print everything you know about each person.\r\nperson_I_know = {'first_name': 'JJ',\r\n                 'last_name': 'perkins',\r\n                 'age': 22,\r\n                 'city': 'colorado springs'\r\n                 }\r\nperson_I_know2 = {'first_name': 'christopher',\r\n                  'last_name': 'mccracken',\r\n                  'age': 21,\r\n                  'city': 'colorado springs'\r\n                  }\r\nperson_I_know3 = {'first_name': 'jake',\r\n                  'last_name': 'ridling',\r\n                  'age': 20,\r\n                  'city': 'colorado springs'\r\n                  }\r\npeople = [person_I_know, person_I_know2, person_I_know3]\r\nfor person in people:\r\n    print(person['first_name'].title() + ' ' + person['last_name'].title() + ' is ' +\r\n          str(person['age']) + ' and is from ' + person['city'].title())\r\n    print('\\n')\r\n\r\n# 6-8. Pets: Make several dictionaries, where the name of each dictionary is the\r\n# name of a pet. In each dictionary, include the kind of animal and the owner's\r\n# name. Store these dictionaries in a list called pets. Next, loop through your list\r\n# and as you do print everything you know about each pet.\r\nrufus = {'type': 'dog',\r\n         'owner_name': 'perkins'\r\n         }\r\ndoug = {'type': 'hamster',\r\n        'owner_name': 'bill'\r\n        }\r\ndenny = {'type': 'fish',\r\n         'owner_name': 'roger'\r\n         }\r\npets = [rufus, doug, denny]\r\nfor pet in pets:\r\n    print('The ' + pet['type'] + ' is owned by ' + pet['owner_name'].title() + '.')\r\nprint('\\n')\r\n\r\n# 6-9. Favorite Places: Make a dictionary called favorite_places. Think of three\r\n# names to use as keys in the dictionary, and store one to three favorite places\r\n# for each person. To make this exercise a bit more interesting, ask some friends\r\n# to name a few of their favorite places. Loop through the dictionary, and print\r\n# each person's name and their favorite places.\r\nfavorite_places = {'chris': ['atlanta', 'miami', 'washington'],\r\n                   'jake': ['texas', 'detroit', 'the alamo'],\r\n                   'jj': ['london', 'new york', 'denver'],\r\n                   }\r\nfor name, places in favorite_places.items():\r\n    print('\\n' + name.title() + ' likes the following places: ')\r\n    for place in places:\r\n        print(place)\r\nprint('\\n')\r\n# 6-10. Favorite Numbers: Modify your program from Exercise 6-2 (page 102) so\r\n# each person can have more than one favorite number. Then print each person's\r\n# name along with their favorite numbers.\r\nfavorite_numbers = {'chris': [1, 2, 3],\r\n                    'jake': [5, 7, 9],\r\n                    'jj': [18, 10, 14],\r\n                    }\r\nfor name, numbers in favorite_numbers.items():\r\n    print('\\n' + name.title() + ' likes the following numbers: ')\r\n    for number in numbers:\r\n        print(number)\r\nprint('\\n')\r\n\r\n# 6-11. Cities: Make a dictionary called cities. Use the names of three cities as\r\n# keys in your dictionary. Create a dictionary of information about each city and\r\n# include the country that the city is in, its approximate population, and one fact\r\n# about that city. The keys for each city's dictionary should be something like\r\n# country, population, and fact. Print the name of each city and all of the information\r\n# you have stored about it.\r\ncities = {\r\n    'atlanta': {'country': 'united states',\r\n                'population': 486290,\r\n                'fact': "it is a city in Georgia",\r\n                },\r\n    'baltimore': {'country': 'united states',\r\n                  'population': 611648,\r\n                  'fact': "it is a city in Maryland",\r\n                  },\r\n    'dubai': {'country': 'united arab emirates',\r\n              'population': 3137000,\r\n              'fact': "it is a wealthy city",\r\n              }\r\n}\r\nfor city, facts in cities.items():\r\n    print(city.title() + ' is in the ' + facts['country'].title() + ' with population '\r\n          + str(facts['population']) + ' and ' + facts['fact'])\r\n"
},
{
"alpha_fraction": 0.7060860991477966,
"alphanum_fraction": 0.7191984057426453,
"avg_line_length": 35.775699615478516,
"blob_id": "33ae72f0671d89da3084aa6819ec9b8d54ad8e22",
"content_id": "c1236097f22ad4efd47e9edaab3a4b64b754e843",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4051,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 107,
"path": "/Chapter 2: Variable Types.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Chapter 2, Variable Types\r\nmessage = \"Hello World\"\r\nprint(message)\r\n\r\nmessage_to_title = \"jacob metcalfe\"\r\nprint(message_to_title.title()) # capitalizes the first letter of each word\r\n\r\nprint(message_to_title.upper()) # Message to capitalize all of\r\n\r\nprint(message_to_title.lower()) # Message to put in all lowercase\r\n\r\nfirst = \"Jacob\" # Stores first and last name in full and prints welcome message using title capitalization\r\nsecond = \"Metcalfe\"\r\nfull = first + \" \" + second\r\nprint(full)\r\nprint(\"Hello \" + full.title() + \"!\")\r\n\r\n# Whitespace\r\n# New line\r\nprint(\"Jacob \\nMetcalfe\")\r\n# Tab\r\nprint(\"Jacob \\tMetcalfe\")\r\n\r\n# Removing Whitespace\r\nwhitespace_message = \" Whitespace sucks \"\r\n# Removing Right Whitespace\r\nwhitespace_message = whitespace_message.rstrip()\r\nprint(whitespace_message)\r\n# Removing Left Whitespace\r\nwhitespace_message = whitespace_message.lstrip()\r\nprint(whitespace_message)\r\n# Removing Whitespace on Both Sides\r\nwhitespace_message = whitespace_message.strip()\r\nprint(whitespace_message)\r\n\r\n# 2-3. Personal Message: Store a person's name in a variable, and print a message to that person. Your message should\r\n# be simple, such as, \"Hello Eric, would you like to learn some Python today?\"\r\nname = \"jacob\"\r\nprint(\"Hello \" + name.title() + \", would you like to learn some Python today?\")\r\n\r\n# 2-4. Name Cases: Store a person's name in a variable, and then print that person's name in lowercase, uppercase,\r\n# and titlecase.\r\nprint(name.lower())\r\nprint(name.upper())\r\nprint(name.title())\r\n\r\n# 2-5. Famous Quote: Find a quote from a famous person you admire. Print the quote and the name of its author.\r\nprint(\"Ricky Bobby once said 'If you're not first, you're last'\")\r\n\r\n# 2-6. Famous Quote 2: Repeat Exercise 2-5, but this time store the famous person's name in a variable called\r\n# famous_person. Then compose your message and store it in a new variable called message. Print your message.\r\nricky_Bobby = \"Ricky Bobby\"\r\nprint(ricky_Bobby + \" once said 'If you're not first, you're last'\")\r\n\r\n# 2-7. Stripping Names: Store a person's name, and include some whitespace characters at the beginning and end of the\r\n# name. Make sure you use each character combination, \"\\t\" and \"\\n\", at least once.\r\nricky_Bobby_white = \" Ricky Bobby \"\r\nprint(ricky_Bobby_white.rstrip())\r\nprint(ricky_Bobby_white.lstrip())\r\nprint(ricky_Bobby_white.strip())\r\n\r\n# Simple math operations\r\nprint(2 + 3)\r\nprint(2 - 3)\r\nprint(2 * 3)\r\nprint(2 / 3)\r\n\r\n# Exponents\r\nprint(2 ** 3) # 2^3\r\n\r\n# int = whole number\r\n# float = number with a decimal point (Python has a single floating-point type; there is no separate double)\r\n\r\n# Birthday message, simple cast from int to string\r\nage = 23\r\nbday_msg = \"Happy \" + str(age) + \"rd Birthday!\"\r\nprint(bday_msg)\r\n\r\n# Note: int() truncates a value to a whole number; keep the value as a float if you want more precision\r\n\r\n# 2-8. Number Eight: Write addition, subtraction, multiplication, and division operations that each result in the\r\n# number 8. Be sure to enclose your operations in print statements to see the results.\r\nprint(5 + 3)\r\nprint(int(16 / 2))\r\nprint(11 - 3)\r\nprint(4 * 2)\r\nprint(2 ** 3)\r\n\r\n# 2-9. Favorite Number: Store your favorite number in a variable. Then, using that variable, create a message that\r\n# reveals your favorite number. Print that message.\r\nfav_num = 14\r\nprint(\"My favorite number is \" + str(fav_num))\r\n\r\n# Comments section\r\n# 2-10. Adding Comments: Choose two of the programs you've written, and add at least one comment to\r\n# each. If you don't have anything specific to write because your programs are too simple at this point,\r\n# just add your name and the current date at the top of each program file. Then write one sentence describing what\r\n# the program does. This is a comment\r\n\r\n# Note: write code that works, then review it and clean it up to make it more efficient and better looking, all\r\n# adding up to simplicity\r\n\r\n# 2-11. Zen of Python: Enter import this into a Python terminal session and skim through the additional principles.\r\nimport this\r\n\r\nprint(\"\\n\\n\")\r\n"
},
{
"alpha_fraction": 0.6714928150177002,
"alphanum_fraction": 0.6865808367729187,
"avg_line_length": 41.59800720214844,
"blob_id": "cd077252f2c603a6c91d4b5dfdf6b15e7a1edddb",
"content_id": "83101bfd49e37c3f30e0095eb894c61ba60690db",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13124,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 301,
"path": "/Chapter 5: If Statements.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Chapter 5: If Statements\r\nprint('\\n')\r\ncars = ['bmw', 'audi', 'toyota', 'subaru']\r\nfor car in cars:\r\n if car == 'bmw':\r\n print(car.upper())\r\n else:\r\n print(car.title())\r\n# Bases the if statement off of booleans of true and false, looks to see if value is bmw and if it is then it is\r\n# considered true, if it isn't then it is false\r\n# Capitalization is not considered so 'audi' is not equal 'Audi'\r\n# Would need to use .lower() function\r\n# Website username are converted to lowercase to make sure no one else has the same username\r\nprint('\\n')\r\n# Checking for Inequalities\r\nrequested_toppings = 'mushrooms'\r\nif requested_toppings != 'anchovies':\r\n print('Hold the anchovies!')\r\n\r\nprint('\\n')\r\n# Can do the same with numerical expressions age == 18\r\nanswer = 17\r\nif answer != 17:\r\n print('That is not the correct answer')\r\n\r\n# Other math expressions can be used\r\nage = 19\r\nif age <= 19:\r\n print('You are young')\r\n\r\n# Multiple Expressions\r\nage_0 = 22\r\nage_1 = 18\r\nif (age_0 == 22) and (age_1 == 18):\r\n print('You two are four years apart!')\r\n# Should use parentheses with multiple expressions to improve readability\r\n# You can also use or to check if one or the other conditions are true\r\nif age_0 or age_1 == 22:\r\n print('One of you is 22 years old.')\r\n\r\n# Check whether a value is in a list\r\nrequested_toppings = ['mushrooms', 'onions', 'pineapple']\r\nif 'mushrooms' in requested_toppings:\r\n print('You like mushrooms on pizza?!')\r\n\r\n# Checking whether a value is not in a list\r\nbanned_users = ['andrew', 'carolina', 'david']\r\nuser = 'marie'\r\nif user not in banned_users:\r\n print(user.title() + \" you are not in the banned user list.\")\r\n\r\n# Boolean values are either true and false\r\n\r\n# Examples 5-1. Conditional Tests: Write a series of conditional tests. Print a statement describing each test and\r\n# your prediction for the results of each test. 
Your code should look something like this: car = 'subaru' print(\"Is\r\n# car == 'subaru'? I predict True.\") print(car == 'subaru') print(\"\\nIs car == 'audi'? I predict False.\") print(car\r\n# == 'audi') Look closely at your results, and make sure you understand why each line evaluates to True or False.\r\n# Create at least 10 tests. Have at least 5 tests evaluate to True and another 5 tests evaluate to False.\r\nprint('\\n')\r\ncar = 'subaru'\r\nprint(\"Is car == 'subaru'? I predict True.\")\r\nprint(car == 'subaru')\r\nprint(car != 'audi')\r\n\r\nprint(\"\\nIs car == 'audi'? I predict False.\")\r\nprint(car == 'audi')\r\nprint(car == 'Subaru')\r\nprint(car != 'subaru')\r\n\r\n# 5-2. More Conditional Tests: You do not have to limit the number of tests you create to 10. If you want to try more\r\n# comparisons, write more tests and add them to conditional_tests.py. Have at least one True and one False result for\r\n# each of the following: Tests for equality and inequality with strings Tests using the lower() function Numerical\r\n# tests involving equality and inequality, greater than and less than, greater than or equal to, and less than or\r\n# equal to Tests using the and keyword and the or keyword Test whether an item is in a list Test whether an item is\r\n# not in a list\r\nprint('\\nTest for strings:')\r\nstring1 = 'Hi'\r\nstring2 = 'Hi'\r\nstring3 = \"hi\"\r\nif string1 == string2:\r\n print('The strings are equal')\r\nif string2 != string3:\r\n print('The strings are not equal')\r\nif string2.lower() == string3.lower():\r\n print('The lowercase strings are the same')\r\nprint('\\nTest for numbers:')\r\nnum1 = 5\r\nnum2 = 5\r\nnum3 = 6\r\nif num1 == num2:\r\n print('The numbers are equal')\r\nif num2 != num3:\r\n print('The numbers are not equal')\r\nif num3 > num2:\r\n print(\"The third number is greater than the second\")\r\nif num2 < num3:\r\n print(\"The second number is less than the third\")\r\nif num2 <= num1:\r\n print(\"The second number is 
less than or equal to the first\")\r\nif num1 >= num2:\r\n print('The first number is greater than the second')\r\nif num1 and num2 == '5':\r\n print('The two numbers equal 5')\r\nif num1 or num2 == '6':\r\n print('One of the two numbers equals six')\r\nif 'audi' in cars:\r\n print(\"Audi is in the list 'cars'\")\r\nif 'ferrari' not in cars:\r\n print(\"Ferrari is not in the list 'cars'\")\r\nprint('\\n')\r\n\r\n# If-else statements\r\n# If one thing is not the case, then do this other thing\r\nage = 17\r\nif age == 18:\r\n print('You can vote!')\r\nelse:\r\n print(\"You can't vote yet, sorry.\")\r\n\r\n# The way the statement chain goes is if-elif-else\r\n# elif is else if, basically another if statement before all else\r\n# Can adjust variables as well as shown in the bottom statement\r\nage = 12\r\nif age < 4:\r\n print('Your admission cost is free')\r\n price = 0\r\nelif 4 < age < 18:\r\n print('Your admission cost is $5')\r\n price = 5\r\nelif age == 18:\r\n print('Your admission costs $7.50')\r\n price = 7.50\r\nelse:\r\n print('Your admission cost is $10')\r\n price = 10\r\n\r\nprint('Your admission cost is $' + str(price) + '.')\r\n\r\n# Python does not require an else statement however it is highly recommended you have a backup plan.\r\n# Sometimes multiple if statements are necessary, because it will skip the rest, just in case two statements are true.\r\nif 'mushrooms' in requested_toppings:\r\n print('Adding mushrooms')\r\nif 'onions' in requested_toppings:\r\n print('Adding onions')\r\nprint('Finished making the pizza!')\r\n\r\n# 5-3. Alien Colors 1: Imagine an alien was just shot down in a game. Create a variable called alien_color and assign\r\n# it a value of 'green', 'yellow', or 'red'. Write an if statement to test whether the aliens color is green. If it\r\n# is, print a message that the player just earned 5 points. Write one version of this program that passes the if test\r\n# and another that fails. 
(The version that fails will have no output.)\r\nprint('\\n')\r\nalien_color = 'red'\r\nif alien_color == 'green':\r\n print('Congrats, you earned five points')\r\nif alien_color == 'red':\r\n print('Congrats, you earned five points!')\r\n\r\n# 5-4. Alien Colors 2: Choose a color for an alien as you did in Exercise 5-3, and write an if-else chain. If the\r\n# alien's color is green, print a statement that the player just earned 5 points for shooting the alien. If the\r\n# alien's color is not green, print a statement that the player just earned 10 points. Write one version of this\r\n# program that runs the if block and another that runs the else block.\r\nif alien_color == 'green':\r\n print('Congrats, you earned five points')\r\nelse:\r\n print('Congrats, you got 10 points!')\r\n# 5-5. Alien Colors 3: Turn your if-else chain from Exercise 5-4 into an if-elif-else chain. If the alien is green,\r\n# print a message that the player earned 5 points. If the alien is yellow, print a message that the player earned 10\r\n# points. If the alien is red, print a message that the player earned 15 points. Write three versions of this\r\n# program, making sure each message is printed for the appropriate color alien.\r\nif alien_color == 'green':\r\n print('Congrats, you earned five points')\r\nelif alien_color == 'yellow':\r\n print('You earned 10 points!')\r\nelse:\r\n print('You earned 15 points')\r\n# 5-6. Stages of Life: Write an if-elif-else chain that determines a person's stage of life. Set a value for the\r\n# variable age, and then: If the person is less than 2 years old, print a message that the person is a baby. If the\r\n# person is at least 2 years old but less than 4, print a message that the person is a toddler. If the person is at\r\n# least 4 years old but less than 13, print a message that the person is a kid. If the person is at least 13 years\r\n# old but less than 20, print a message that the person is a teenager. 
If the person is at least 20 years old but\r\n# less than 65, print a message that the person is an adult. If the person is age 65 or older, print a message that\r\n# the person is an elder.\r\nage = 5\r\nif age < 2:\r\n print('You are a baby.')\r\nelif 2 <= age < 4:\r\n print('You are a toddler.')\r\nelif 4 <= age < 13:\r\n print('You are a kid.')\r\nelif 13 <= age < 20:\r\n print('You are a teenager.')\r\nelif 20 <= age < 65:\r\n print('You are an adult.')\r\nelse:\r\n print('You are an elder.')\r\n# 5-7. Favorite Fruit: Make a list of your favorite fruits, and then write a series of independent if statements that\r\n# check for certain fruits in your list. Make a list of your three favorite fruits and call it favorite_fruits. Write\r\n# five if statements. Each should check whether a certain kind of fruit is in your list. If the fruit is in your\r\n# list, the if block should print a statement, such as You really like bananas!\r\nfavorite_fruits = ['strawberry', 'banana', 'orange', 'grape', 'watermelon']\r\nprint('\\n')\r\nif 'strawberry' in favorite_fruits:\r\n print('I love Strawberries!')\r\nif 'kiwi' in favorite_fruits:\r\n print('I love kiwi!')\r\nif 'banana' in favorite_fruits:\r\n print('I love bananas!')\r\nif 'orange' in favorite_fruits:\r\n print('I love oranges!')\r\nif 'grape' in favorite_fruits:\r\n print('I love grapes!')\r\nif 'watermelon' in favorite_fruits:\r\n print('I love watermelon!')\r\nprint('\\n')\r\n# Using if statements with lists\r\n# Checking for special items\r\nrequested_toppings = ['mushrooms', 'green peppers', 'extra cheese']\r\nfor requested_topping in requested_toppings:\r\n if requested_topping == 'green peppers':\r\n print('We are out of green peppers.')\r\n else:\r\n print('Adding ' + requested_topping)\r\nprint('These are your requested toppings')\r\n\r\n# Checking that list is not empty\r\n# if requested_toppings: checks if list is empty or not\r\nprint('\\n')\r\nrequested_toppings = []\r\nif requested_toppings:\r\n for 
requested_topping in requested_toppings:\r\n print('Adding ' + requested_topping)\r\n print('Pizzas done!')\r\nelse:\r\n print('Your pizza will be plain.')\r\n\r\n# Using multiple lists\r\navailable_toppings = ['mushrooms', 'olives', 'green peppers', 'pepperoni', 'pineapple', 'extra cheese']\r\nrequested_toppings = ['mushrooms', 'french fries', 'extra cheese']\r\nfor requested_topping in requested_toppings:\r\n if requested_topping in available_toppings:\r\n print('Adding ' + requested_topping)\r\n else:\r\n print('Sorry we cannot put ' + requested_topping + ' on this pizza.')\r\n\r\n# 5-8. Hello Admin: Make a list of five or more user names, including the name 'admin'. Imagine you are writing code\r\n# that will print a greeting to each user after they log in to a website. Loop through the list, and print a greeting\r\n# to each user: If the username is 'admin', print a special greeting, such as Hello admin, would you like to see a\r\n# status report? Otherwise, print a generic greeting, such as Hello Eric, thank you for logging in again.\r\nusers = ['admin', 'rodger', 'fanny', 'aaron', 'phillip']\r\nfor user in users:\r\n if 'admin' == user:\r\n print('Hello admin, would you like to see a status report?')\r\n else:\r\n print(\"Hello \" + user.title())\r\n\r\n# 5-9. No Users: Add an if test to hello_admin.py to make sure the list of users is not empty. If the list is empty,\r\n# print the message We need to find some users! Remove all of the usernames from your list, and make sure the correct\r\n# message is printed.\r\nprint('\\n')\r\nusers = []\r\nif users:\r\n print(\"Hello users!\")\r\nelse:\r\n print(\"We need to find some users!\")\r\n\r\n# 5-10. Checking User names: Do the following to create a program that simulates how websites ensure that everyone has\r\n# a unique username. Make a list of five or more user names called current_users. Make another list of five user names\r\n# called new_users. 
Make sure one or two of the new user names are also in the current_users list. Loop through the\r\n# new_users list to see if each new username has already been used. If it has, print a message that the person will\r\n# need to enter a new username. If a username has not been used, print a message saying that the username is\r\n# available. Make sure your comparison is case insensitive. If 'John' has been used, 'JOHN' should not be accepted.\r\ncurrent_users = ['admin', 'rodger', 'fanny', 'aaron', 'phillip']\r\nnew_users = ['jacob', 'johnny', 'harry', 'aaron', 'phillip']\r\nfor new_user in new_users:\r\n if new_user.lower() in current_users:\r\n print(new_user.title() + ': Username taken, you will need a new one.')\r\n else:\r\n print(new_user.title() + ': Username is available.')\r\n\r\n# 5-11. Ordinal Numbers: Ordinal numbers indicate their position in a list, such as 1st or 2nd. Most ordinal numbers\r\n# end in th, except 1, 2, and 3. Store the numbers 1 through 9 in a list. Loop through the list. Use an if-elif-else\r\n# chain inside the loop to print the proper ordinal ending for each number. Your output should read \"1st 2nd 3rd 4th\r\n# 5th 6th 7th 8th 9th\", and each result should be on a separate line.\r\nordinals = list(range(1, 10))\r\nfor ordinal in ordinals:\r\n if ordinal == 1:\r\n print(str(ordinal) + 'st')\r\n elif ordinal == 2:\r\n print(str(ordinal) + 'nd')\r\n elif ordinal == 3:\r\n print(str(ordinal) + 'rd')\r\n elif 10 > ordinal > 3:\r\n print(str(ordinal) + 'th')\r\n\r\n# Styling if statements\r\n# Always do\r\n# if age < 4:\r\n# Rather than\r\n# if age<4\r\n\r\n# I already do exercises in above work.\r\n"
},
{
"alpha_fraction": 0.6661407947540283,
"alphanum_fraction": 0.676395833492279,
"avg_line_length": 34.92232894897461,
"blob_id": "8b4795b9b5d77fcd68faddebdd35eab7ed892705",
"content_id": "f06eed9ef53339d9b25905ad027e2893ac379051",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11413,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 309,
"path": "/Chapter 7: User input and while loops.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Chapter 7: User input and while loops\r\n# Most programs solve an end user's problem\r\n# Need user input to do so\r\n\r\n# Simple input statement\r\n# defining message as variable to store input, after input is just the message that you want to display\r\n# will store in message until 'enter' is pressed\r\nprint('\\n')\r\nmessage = input(\"Tell me something and I will repeat it back to you: \")\r\n# prints what you typed\r\nprint(message)\r\n\r\nname = input(\"Please enter your name: \")\r\nprint(\"Hello \" + name + '!')\r\n\r\n# Make a long prompt message\r\nprompt = \"If you tell us who you are, we can personalize the messages you see.\"\r\nprompt += \"\\nWhat is your first name?\"\r\nname = input(prompt)\r\nprint(\"Hello \" + name + '!')\r\n\r\n# Cannot store integers the same way, such as if I entered 18, 18 would be a string not an int\r\n# So, we have to explicitly cast it into an int\r\nage = input('How old are you?')\r\nage = int(age) # casting\r\nif age >= 18:\r\n print('You can vote!')\r\nelse:\r\n print('You are young!')\r\n\r\n# The Modulo operator\r\n# The Modulo operator returns the remainder\r\nprint(4 % 3) # Equal to 1 because 3 goes into 4, 1 R 1 times\r\nprint(5 % 3) # Equal to 2 because 3 goes into 5, 1 R 2 times\r\nprint(6 % 3) # Equal to 0 because 3 goes into 6 exactly 2 times\r\n\r\n# One thing we can do with this is check divisibility\r\nnumber = input(\"Enter a number, and I'll tell you if it is even or odd:\")\r\nnumber = int(number)\r\nif number % 2 == 0:\r\n print(str(number) + ' is even')\r\nelse:\r\n print(str(number) + ' is odd')\r\n\r\n# Output in Python 2.7\r\n# In Python 2.7, raw_input() is used because input() attempts to interpret what is typed as code, which could\r\n# convert it to the wrong data type, causing miscalculations\r\n\r\n# Examples\r\n\r\n# 7-1. Rental Car: Write a program that asks the user what kind of rental car they\r\n# would like. 
Print a message about that car, such as \"Let me see if I can find you\r\n# a Subaru.\"\r\nrental_car = input('What kind of rental car would you like? ')\r\nprint('Let me see if I can find you a ' + rental_car)\r\n\r\n# 7-2. Restaurant Seating: Write a program that asks the user how many people\r\n# are in their dinner group. If the answer is more than eight, print a message saying\r\n# they'll have to wait for a table. Otherwise, report that their table is ready.\r\ndinner_group = input('How many people are in your dinner group?')\r\ndinner_group = int(dinner_group)\r\nif dinner_group > 8:\r\n print('You will have to wait for a table')\r\nelse:\r\n print('Your table is ready')\r\n\r\n# 7-3. Multiples of Ten: Ask the user for a number, and then report whether the\r\n# number is a multiple of 10 or not.\r\nnumber = input('Enter a number and I can tell you if it is divisible by ten or not')\r\nnumber = int(number)\r\nif number % 10 == 0:\r\n print(str(number) + ' is divisible by ten')\r\nelse:\r\n print(str(number) + ' is not divisible by 10')\r\nprint('\\n')\r\n\r\n# Introducing while loops\r\n# simple while loop to count from 1 to 5, loop will not stop until it reaches the while value\r\ncurrent_number = 1\r\nwhile current_number <= 5:\r\n print(current_number)\r\n current_number += 1\r\n\r\n# Letting the User Choose when to quit\r\nprompt = '\\nTell me something, and I will repeat it back to you:'\r\nprompt += \"\\nEnter 'quit' to end the program. 
\" # prompt\r\nmessage = '' # placeholder for while loop\r\nwhile message != 'quit': # as long as user doesn't say 'quit' keep the loop going\r\n message = input(prompt) # input\r\n\r\nif message != 'quit': # only prints message as long as it is not equal to quit.\r\n print(message) # print\r\n\r\n# Using a flag\r\n# A flag is like a signal to the program\r\n# The flag will equal true to keep the program running, but will equal false when it needs to end\r\n\r\n# In the following program, once 'quit' is typed the active signal will go to false, causing the while loop to end\r\nprompt = \"\\nTell me something, and I will repeat it back to you:\"\r\nprompt += \"\\nEnter 'quit' to end the program. \"\r\nactive = True\r\nwhile active:\r\n message = input(prompt)\r\n if message == 'quit':\r\n active = False\r\n else:\r\n print(message)\r\n\r\n# Using break to exit a loop\r\nprompt = \"\\nPlease enter the name of a city you have visited:\"\r\nprompt += \"\\n(Enter 'quit' when you are finished.) \"\r\nwhile True: # while the program doesn't break\r\n city = input(prompt)\r\n if city == 'quit':\r\n break # exits the while loop immediately\r\n else:\r\n print(\"I'd love to go to \" + city.title())\r\n\r\n# Using continue in a loop\r\ncurrent_number = 0\r\nwhile current_number < 10:\r\n current_number += 1\r\n if current_number % 2 == 0:\r\n continue # will go back to the top of the while loop and start it again until an odd number shows\r\n\r\n print(current_number) # will print only the odd numbers\r\n\r\n# Avoiding infinite loops\r\n# This loop runs forever, only printing 1\r\n# x = 1\r\n# while x <= 5:\r\n# print(x)\r\n# Will only print 1's because nothing is incrementing x, meaning it will never hit 5, always continuing the loop\r\n# How to avoid\r\n# Test every while loop and make sure the loop stops when you expect it to\r\n# Run as many cases as you can to make sure someone cannot break your program\r\n\r\n# Examples\r\n# 7-4. 
Pizza Toppings: Write a loop that prompts the user to enter a series of\r\n# pizza toppings until they enter a 'quit' value. As they enter each topping,\r\n# print a message saying you'll add that topping to their pizza.\r\n\r\nprompt = \"Enter what kind of toppings you want: (Enter 'quit' to stop entering)\"\r\nwhile True:\r\n topping = input(prompt)\r\n if topping == 'quit':\r\n break\r\n else:\r\n print(topping + ' was added to the pizza.')\r\n\r\n\r\n# 7-5. Movie Tickets: A movie theater charges different ticket prices depending on\r\n# a person's age. If a person is under the age of 3, the ticket is free; if they are\r\n# between 3 and 12, the ticket is $10; and if they are over age 12, the ticket is\r\n# $15. Write a loop in which you ask users their age, and then tell them the cost\r\n# of their movie ticket.\r\n\r\nticket_total = 0\r\nwhile True:\r\n age_prompt = \"How old is the guest? (Type 'quit' to quit)\"\r\n age = input(age_prompt)\r\n if age == 'quit':\r\n break\r\n elif int(age) < 3:\r\n ticket_price = 0\r\n elif 3 <= int(age) <= 12:\r\n ticket_price = 10\r\n else:\r\n ticket_price = 15\r\n ticket_total += ticket_price\r\nprint('Your total ticket price is $' + str(ticket_total))\r\n\r\n# 7-6. Three Exits: Write different versions of either Exercise 7-4 or Exercise 7-5\r\n# that do each of the following at least once:\r\n# Use a conditional test in the while statement to stop the loop.\r\n# Use an active variable to control how long the loop runs.\r\n# Use a break statement to exit the loop when the user enters a 'quit' value.\r\nprint('\\n')\r\n\r\n# First Loop\r\nx = 0\r\nwhile x < 5:\r\n print(x)\r\n x += 1\r\n\r\nprint('\\n')\r\n# Second Loop\r\nx = 0\r\nactive = True\r\nwhile active:\r\n if x < 5:\r\n print(x)\r\n x += 1\r\n else:\r\n active = False\r\nprint('\\n')\r\n\r\n# Third Loop\r\nx = 0\r\nwhile True:\r\n if x < 5:\r\n print(x)\r\n x += 1\r\n else:\r\n break\r\n\r\n# 7-7. Infinity: Write a loop that never ends, and run it. 
(To end the loop, press\r\n# ctrl-C or close the window displaying the output.)\r\n# I will not really run, because that would ruin the previous code\r\n# x = 1\r\n# while x > -1:\r\n# print(x)\r\n# x += 1\r\n\r\n# Second Part of Chapter 7\r\n# Should not modify a list inside a for loop because Python will not be able to keep track of values\r\n# To modify a list as you work through it use while loops\r\n\r\n# Moving items from one list to another\r\n\r\n# List that needs to be verified and an empty list to hold confirmed users\r\nunconfirmed_users = ['alice', 'brian', 'candace']\r\nconfirmed_users = []\r\n\r\n# Verify each user until there are no more unconfirmed users.\r\n# Move each verified user into the list of confirmed users\r\nwhile unconfirmed_users:\r\n current_user = unconfirmed_users.pop()\r\n print('Verifying user: ' + current_user.title())\r\n confirmed_users.append(current_user)\r\n\r\n# Display all confirmed users\r\nprint('The confirmed users are: ')\r\nfor confirmed_user in confirmed_users:\r\n print(confirmed_user.title())\r\n# Will print in opposite order due to the pop method getting the top value of the list/ the last value added to the\r\n# array\r\n\r\n# Removing all instances of specific values from a list\r\n# Originally the 'remove' function only removed one value from the list at a time, this loop will remove all\r\npets = ['dog', 'cat', 'dog', 'goldfish', 'cat', 'rabbit', 'cat']\r\nprint(pets)\r\n\r\n# loop to remove all 'cat' strings in the array\r\nwhile 'cat' in pets:\r\n pets.remove('cat')\r\nprint(pets)\r\n\r\n# Filling a dictionary with user input\r\nresponses = {}\r\npolling_active = True\r\nwhile polling_active:\r\n # Prompt for the person's name and response\r\n name = input('\\nWhat is your name? ')\r\n response = input('Which mountain would you like to climb someday? 
')\r\n\r\n # Store the response in a dictionary\r\n responses[name] = response\r\n\r\n # Find out if anyone else is going to take the poll\r\n repeat = input('Would you like to let another person respond?(yes/no)')\r\n if repeat == 'no':\r\n polling_active = False\r\nfor name, response in responses.items():\r\n print((name.title() + ' would like to climb ' + response.title() + '.'))\r\n\r\n# 7-8. Deli: Make a list called sandwich_orders and fill it with the names of various\r\n# sandwiches. Then make an empty list called finished_sandwiches. Loop\r\n# through the list of sandwich orders and print a message for each order, such\r\n# as I made your tuna sandwich. As each sandwich is made, move it to the list\r\n# of finished sandwiches. After all the sandwiches have been made, print a\r\n# message listing each sandwich that was made.\r\nsandwich_orders = ['club', 'turkey', 'ham', 'roast beef']\r\nfinished_sandwiches = []\r\nwhile sandwich_orders:\r\n finished_sandwich = sandwich_orders.pop()\r\n print('Preparing: ' + finished_sandwich)\r\n finished_sandwiches.append(finished_sandwich)\r\n\r\nfor sandwich in finished_sandwiches:\r\n print('Your ' + sandwich + ' is all done.')\r\n\r\n# 7-9. No Pastrami: Using the list sandwich_orders from Exercise 7-8, make sure\r\n# the sandwich 'pastrami' appears in the list at least three times. Add code\r\n# near the beginning of your program to print a message saying the deli has\r\n# run out of pastrami, and then use a while loop to remove all occurrences of\r\n# 'pastrami' from sandwich_orders. Make sure no pastrami sandwiches end up\r\n# in finished_sandwiches.\r\nsandwich_orders = ['club', 'turkey', 'ham', 'roast beef', 'pastrami', 'pastrami', 'pastrami']\r\nprint('The deli has run out of Pastrami')\r\nwhile 'pastrami' in sandwich_orders:\r\n sandwich_orders.remove('pastrami')\r\nprint(sandwich_orders)\r\n\r\n# 7-10. Dream Vacation: Write a program that polls users about their dream\r\n# vacation. 
Write a prompt similar to If you could visit one place in the world,\r\n# where would you go? Include a block of code that prints the results of the poll.\r\nvacations = {}\r\nvacations_poll = True\r\nwhile vacations_poll:\r\n name = input('What is your name?')\r\n vacation = input('What is your dream vacation?')\r\n vacations[name] = vacation\r\n repeat = input('Would you like to let another person respond?(yes/no)')\r\n if repeat == 'no':\r\n vacations_poll = False\r\n\r\nfor name, vacation in vacations.items():\r\n print((name.title() + \"'s dream vacation is \" + vacation.title() + '.'))\r\n"
},
{
"alpha_fraction": 0.6970974802970886,
"alphanum_fraction": 0.7116101384162903,
"avg_line_length": 38.9558219909668,
"blob_id": "4cb6003db4eb0ab7dfc6b8d18fbadcf08d29d6aa",
"content_id": "3a1a1d7629ab3ea6066ce7b9f6f3f96e95fd6b1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10199,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 249,
"path": "/Chapter 4: Working With Lists.py",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "# Chapter 4: Working With Lists\r\n\r\n# Looping through a list\r\nmagicians = ['alice', 'david', 'carolina']\r\nfor magician in magicians:\r\n print(magician)\r\n# Defines magician as part of magicians and prints each magician for each iteration of the list until the list is done\r\n\r\nfor magician in magicians:\r\n print(magician.title() + \", that was a great trick!\")\r\n# Every indented line following is considered part of the loop\r\n\r\nfor magician in magicians:\r\n print(magician.title() + \", that was a great trick!\")\r\n print(\"I can't wait to see your next trick, \" + magician.title() + \".\\n\")\r\n\r\n# Summarizes the solution\r\nprint(\"Thanks for showing your tricks guys!\")\r\n\r\n# If you don't indent then you will get an error\r\n# Example\r\n# for magician in magicians\r\n# print(magician)\r\n\r\n# This is a syntax error\r\n# A logical error gives a result, however it is not the desired result.\r\n# Syntax errors are easy to resolve, logical errors can take a long time to resolve\r\n# You can also indent when unnecessary producing unexpected indent error\r\n# If you forget to indent, that would be a logical error\r\n# If you forget the colon at the end of the first line of the loop you will get a syntax error\r\n\r\n# Examples 4-1. Pizzas: Think of at least three kinds of your favorite pizza. Store these pizza names in a list,\r\n# and then use a for loop to print the name of each pizza. Modify your for loop to print a sentence using the name of\r\n# the pizza instead of printing just the name of the pizza. For each pizza you should have one line of output\r\n# containing a simple statement like I like pepperoni pizza. Add a line at the end of your program, outside the for\r\n# loop, that states how much you like pizza. 
The output should consist of three or more lines about the kinds of\r\n# pizza you like and then an additional sentence, such as I really love pizza!\r\npizzas = ['pepperoni', 'pineapple', 'sausage']\r\nfor pizza in pizzas:\r\n print('I love ' + pizza + ' pizza.')\r\nprint('I like pizza!\\n')\r\n\r\n# 4-2. Animals: Think of at least three\r\n# different animals that have a common characteristic. Store the names of these animals in a list, and then use a for\r\n# loop to print out the name of each animal. Modify your program to print a statement about each animal,\r\n# such as A dog would make a great pet. Add a line at the end of your program stating what these animals have in\r\n# common. You could print a sentence such as Any of these animals would make a great pet!\r\nanimals = ['fish', 'gorilla', 'human']\r\nfor animal in animals:\r\n print('A ' + animal + ' would make a great pet!')\r\nprint('All of these animals have eyes!\\n')\r\n\r\n# Numerical Lists\r\n# The range function\r\nfor value in range(1, 5):\r\n print(value)\r\n# Does not print the final number of the range\r\n\r\n# list() and range() function\r\n# Creates a list with that range of numbers in it.\r\nnumbers = list(range(1, 6))\r\nprint(numbers)\r\n\r\n# The range function can skip numbers too\r\neven_numbers = list(range(2, 11, 2))\r\nprint(even_numbers)\r\n# Tells the program to make a list of the numbers 2-10 and to do every two numbers\r\n\r\nsquares = []\r\nfor value in range(1, 11):\r\n square = value ** 2\r\n squares.append(square)\r\nprint(squares)\r\n# Makes an empty array, takes the number of value squared and adds it to the original squares array.\r\n\r\n# To make it much easier we could just do it in one line\r\nsquares2 = []\r\nfor value in range(1, 11):\r\n squares2.append(value ** 2)\r\nprint(squares2)\r\n\r\n# Simple statistics can be done with lists as well\r\n# Min function\r\nprint(min(squares))\r\n\r\n# Max function\r\nprint(max(squares))\r\n\r\n# Sum 
function\r\nprint(sum(squares))\r\n\r\n# List comprehension combines everything in one line\r\nsquares = [value ** 2 for value in range(2, 11)]\r\nprint(squares)\r\nprint('\\n')\r\n\r\n# 4-3. Counting to Twenty: Use a for loop to print the numbers from 1 to 20, inclusive.\r\nfor value in range(1, 21):\r\n print(value)\r\nprint('\\n')\r\n\r\n# 4-4. One Million: Make a list of the numbers from one to one million, and then use a for loop to print the numbers.\r\n# (If the output is taking too long, stop it by pressing ctrl-C or by closing the output window.)\r\n# Would ruin code if I ran it\r\nmillion = list(range(1, 1000001))\r\n# print(million)\r\n\r\n# 4-5. Summing a Million: Make a list of the numbers from one to one million, and then use min() and max() to make\r\n# sure your list actually starts at one and ends at one million. Also, use the sum() function to see how quickly\r\n# Python can add a million numbers.\r\nprint(min(million))\r\nprint(max(million))\r\nprint(sum(million))\r\n\r\n# 4-6. Odd Numbers: Use the third argument of the range() function to make a list of the odd numbers from 1 to 20.\r\n# Use a for loop to print each number.\r\nodds = list(range(1, 21, 2))\r\nprint(odds)\r\n\r\n# 4-7. Threes: Make a list of the multiples of 3 from 3 to 30. Use a for loop to print the numbers in your list.\r\nthree_multiples = list(range(3, 31, 3))\r\nfor three_multiple in three_multiples:\r\n print(three_multiple)\r\nprint(\"\\n\")\r\n# 4-8. Cubes: A number raised to the third power is called a cube. For example, the cube of 2 is written as 2**3 in\r\n# Python. Make a list of the first 10 cubes (that is, the cube of each integer from 1 through 10), and use a for loop\r\n# to print out the value of each cube.\r\ncubes = []\r\nfor value in range(1, 11):\r\n cubes.append(value ** 3)\r\nfor cube in cubes:\r\n print(cube)\r\n\r\n# 4-9. 
Cube Comprehension: Use a list comprehension to generate a list of the first 10 cubes.\r\ncube_comp = [value ** 3 for value in range(1, 11)]\r\nprint(cube_comp)\r\n\r\n# Slicing a list\r\nplayers = ['charles', 'martina', 'michael', 'florence', 'eli']\r\nprint(players[0:3])\r\n# Prints the list of players from index 0 to 2, similar to the range function\r\n\r\nprint(players[1:4])\r\n# Prints the players at indexes 1 through 3 of the array.\r\n\r\nprint(players[2:])\r\n# Prints players starting at index 2 and goes to the end of the list.\r\n\r\nprint(players[-3:])\r\n# Starts from the 3rd-to-last player and goes to the end of the array\r\n\r\n# Looping through a slice\r\nprint('Here is a list of players on my team:')\r\nfor player in players[:3]:\r\n print(player.title())\r\n\r\n# Copying a list\r\n# Can copy a list by omitting the first index and the second index ([:])\r\nmy_foods = ['pizza', 'falafel', 'carrot cake']\r\nfriends_foods = my_foods[:]\r\nprint(\"My favorite foods are:\")\r\nmy_foods.append('ice cream')\r\nprint(my_foods)\r\nprint(\"My friend's favorite foods are:\")\r\nfriends_foods.append('cannoli')\r\nprint(friends_foods)\r\nprint('\\n')\r\n\r\n# This does not work: my_foods = friends_foods\r\n# This will make both variables point to the same list.\r\n# For now, make sure that you are copying using slice rather than pointing\r\n\r\n# Examples 4-10. Slices: Using one of the programs you wrote in this chapter, add several lines to the end of the\r\n# program that do the following: Print the message, The first three items in the list are:. Then use a slice to print\r\n# the first three items from that program's list. Print the message, Three items from the middle of the list are:.\r\n# Use a slice to print three items from the middle of the list. Print the message, The last three items in the list\r\n# are:. 
Use a slice to print the last three items in the list.\r\ncars = ['bmw', 'audi', 'toyota', 'subaru']\r\nprint('The first three items in the list are:')\r\nprint(cars[0:3])\r\nprint('\\n The next few items in the list are:')\r\nprint(cars[2:4])\r\nprint('\\n The final item is:')\r\nprint(cars[3:5])\r\nprint('\\n')\r\n# 4-11. My Pizzas, Your Pizzas: Start with your program from Exercise 4-1 (page 60). Make a copy of the list of\r\n# pizzas, and call it friend_pizzas. Then, do the following: Add a new pizza to the original list. Add a different\r\n# pizza to the list friend_pizzas. Prove that you have two separate lists. Print the message, My favorite pizzas\r\n# are:, and then use a for loop to print the first list. Print the message, My friend's favorite pizzas are:,\r\n# and then use a for loop to print the second list. Make sure each new pizza is stored in the appropriate list.\r\nfriends_pizzas = pizzas[:]\r\nfriends_pizzas.append('salami')\r\npizzas.append('oregano')\r\nprint('My favorite pizzas are: ')\r\nprint(pizzas)\r\nprint('\\nMy friend's favorite pizzas are: ')\r\nprint(friends_pizzas)\r\nprint('\\n')\r\n\r\n# 4-12. More Loops: All versions of foods.py in this section have avoided using for loops when printing to save\r\n# space. 
Choose a version of foods.py, and write two for loops to print each list of foods.\r\nfor my_food in my_foods:\r\n print(my_food)\r\nprint('\\n')\r\nfor friends_food in friends_foods:\r\n print(friends_food)\r\nprint('\\n')\r\n\r\n# Tuples\r\n# A tuple is a list that cannot be changed, making it immutable\r\n# A tuple looks similar except you use parentheses instead\r\ndimensions = (200, 50)\r\nprint(dimensions[0])\r\nprint(dimensions[1])\r\n# dimensions[0] = 201 cannot change because it's a tuple\r\n# print(dimensions[0]) gives error\r\n\r\n# For loop through\r\nfor dimension in dimensions:\r\n print(dimension)\r\n\r\n# How to replace a tuple (you cannot change a tuple, but you can reassign the variable to a new one)\r\ndimensions = (400, 200)\r\nfor dimension in dimensions:\r\n print(dimension)\r\nprint('\\n')\r\n# Examples\r\n# 4-13. Buffet: A buffet-style restaurant offers only five basic foods. Think of five simple foods,\r\n# and store them in a tuple. Use a for loop to print each food the restaurant offers. Try to modify one of the items,\r\n# and make sure that Python rejects the change. The restaurant changes its menu, replacing two of the items with\r\n# different foods. Add a block of code that rewrites the tuple, and then use a for loop to print each of the items on\r\n# the revised menu.\r\nbuffet_foods = ('pizza', 'sushi', 'crab legs', 'rice')\r\nprint(\"The original menu is:\")\r\nfor buffet_food in buffet_foods:\r\n print(buffet_food)\r\n# buffet_foods[0] = 'jerky'\r\n\r\nprint('\\nThe new menu is:')\r\nbuffet_foods = ('lo mein', 'lasagna', 'crab legs', 'rice')\r\nfor buffet_food in buffet_foods:\r\n print(buffet_food)\r\n\r\n# Styling your code\r\n# Write neat code\r\n# The Style Guide\r\n# When someone wants to make a change to the Python language, they write a Python Enhancement Proposal (PEP)\r\n# Basically it just says to write neat code, which a good IDE should mostly handle for you anyway\r\n# If not, make sure you include blank lines, four spaces per indentation level, and 80 characters per line.\r\n"
},
{
"alpha_fraction": 0.7303598523139954,
"alphanum_fraction": 0.7496198415756226,
"avg_line_length": 36.94230651855469,
"blob_id": "010d6fba4c06336b49d84442c60e960ad6ea17b8",
"content_id": "1e09c8c015ff41eeb4a15a82b36e15617624f860",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1973,
"license_type": "no_license",
"max_line_length": 319,
"num_lines": 52,
"path": "/README.md",
"repo_name": "jacobmetcalfe/Python_Crash_Course",
"src_encoding": "UTF-8",
"text": "<h1 align=\"center\"> Python </h1> <br>\n<p align=\"center\">\n <img alt=\"Random Gif\" title=\"Gif\" src=\"https://media2.giphy.com/media/eCqFYAVjjDksg/giphy.gif?cid=790b76115cd077ea444d576a51cbdb1c&rid=giphy.gif\" width=\"200\" height=\"200\">\n</p>\n## Table of Contents\n\n- [Description](#Description)\n- [Installation](#Installation)\n- [Requirements](#Requirements)\n- [Cheat Sheets](#Cheat-Sheets)\n- [License](#License)\n- [Roadmap](#Roadmap)\n- [Key Tip](#Key-Tip)\n- [Acknowledgments](#Acknowledgments)\n\n## Description\nA Python crash course from the book 'Python Crash Course: A Hands-On, Project-Based Introduction to Programming', by Eric Matthes and published by No Starch Press. The examples and content shown are simple, with short descriptions of various concepts. Ideally made for people who have a general understanding of programming.\n\n## Installation\nTo install, just open the files in the IDE of your choice and use them however you like.\n\n## Requirements\n- Python 2.7+\n- IDE that can run Python\n- Personally recommended items above.\n\n## Cheat Sheets\nI recently found a link to access cheat sheets to print off or go over later.\n\n <a href=\"https://ehmatthes.github.io/pcc/cheatsheets/README.html\">\n <img alt=\"Cheat Sheets link\" title=\"Cheat Sheets link\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/Cheating.JPG\" width=\"50\" height=\"50\"> \n </a>\n\n## License\nI do not own any of the book; I just summarized and demonstrated the examples from \nPython Crash Course, A Hands-On, Project-Based Introduction to Programming, by Eric Matthes and published by No Starch Press.\n\n## Roadmap\nI intend to do Functions, Classes, and Error Exception Handling in the near future.\nLater down the road, I'd also like to implement this with Django for a web interface tutorial.\n\n## Key Tip\nA general guideline that one should understand while doing Python.\nEnter this into an IDE or terminal and run it to understand 
more.\n```python\nimport this\n```\n## Acknowledgments\n- JetBrains\n- NoStarch\n- Python\n"
}
] | 9 |
TheUniforms/Uniforms-Misc | https://github.com/TheUniforms/Uniforms-Misc | 93514ab7f65ca9ac86ab655fd5c3468980250105 | a7e687de17e08957a649dbe9f8c298b43ed5e35a | 6fa7d113b0c55e494c4427af8f80bf71d297142a | refs/heads/master | 2023-01-03T14:27:07.025180 | 2020-10-26T18:26:56 | 2020-10-26T18:26:56 | 55,253,555 | 10 | 2 | Apache-2.0 | 2016-04-01T18:26:06 | 2020-10-25T19:31:03 | 2020-10-26T18:26:56 | C# | [
{
"alpha_fraction": 0.5286085605621338,
"alphanum_fraction": 0.5299089550971985,
"avg_line_length": 28.01886749267578,
"blob_id": "71ca31c8c1ae635e5deb1c0708dacf85fba83fc3",
"content_id": "e387a319a5d9146c3479dfb821c806d771f381e5",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1540,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 53,
"path": "/Uniforms.Misc.iOS/Utils/TextUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Drawing;\nusing Foundation;\nusing UIKit;\nusing Xamarin.Forms;\n\nnamespace Uniforms.Misc.iOS\n{\n public class TextUtils : ITextUtils\n {\n public static void Init ()\n {\n Misc.TextUtils.Init (new TextUtils());\n }\n\n public Xamarin.Forms.Size GetTextSize (\n string text,\n double maxWidth,\n double fontSize = 0,\n string fontName = null)\n {\n if (fontSize <= 0) {\n fontSize = UIFont.SystemFontSize;\n }\n\n UIFont font = null;\n if (string.IsNullOrEmpty (fontName)) {\n font = UIFont.SystemFontOfSize ((nfloat)fontSize);\n } else {\n font = UIFont.FromName (fontName, (nfloat)fontSize);\n }\n\n var attributes = new UIStringAttributes { Font = font };\n var boundSize = new SizeF ((float)maxWidth, float.MaxValue);\n var options = NSStringDrawingOptions.UsesFontLeading |\n NSStringDrawingOptions.UsesLineFragmentOrigin;\n\n var nsText = new NSString (text);\n var resultSize = nsText.GetBoundingRect (\n boundSize,\n options,\n attributes,\n null).Size;\n\n font.Dispose ();\n nsText.Dispose ();\n\n return new Xamarin.Forms.Size (\n Math.Ceiling ((double)resultSize.Width),\n Math.Ceiling ((double)resultSize.Height));\n }\n }\n}\n"
},
{
"alpha_fraction": 0.5104575157165527,
"alphanum_fraction": 0.5104575157165527,
"avg_line_length": 21.485294342041016,
"blob_id": "02ff6b42257cf2a69843f3b22a52186086adc06c",
"content_id": "5a6d0c3ecf1d364760fc8ebc8f9a55f518018bd1",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1532,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 68,
"path": "/CoreSample/CoreSample.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Xamarin.Forms;\nusing Uniforms.Misc;\n\nnamespace CoreSample\n{\n #if __ANDROID__\n using Activity = Android.App.Activity;\n using Bundle = Android.OS.Bundle;\n #endif\n\n public class CoreSample : Application\n {\n //\n // Init platform\n //\n\n #if __ANDROID__\n public static void InitPlatform(Activity context, Bundle bundle)\n #else\n public static void InitPlatform()\n #endif\n {\n #if __ANDROID__\n Xamarin.Forms.Forms.Init(context, bundle);\n Uniforms.Misc.Droid.ScreenUtils.Init ();\n Uniforms.Misc.Droid.ImageUtils.Init ();\n Uniforms.Misc.Droid.TextUtils.Init ();\n #endif\n\n #if __IOS__\n Xamarin.Forms.Forms.Init();\n Uniforms.Misc.iOS.ScreenUtils.Init ();\n Uniforms.Misc.iOS.ImageUtils.Init ();\n Uniforms.Misc.iOS.KeyboardUtils.Init ();\n Uniforms.Misc.iOS.TextUtils.Init ();\n #endif\n }\n\n //\n // Constructor\n //\n\n public CoreSample()\n {\n MainPage = new MainPage();\n }\n\n //\n // Overrides\n //\n\n protected override void OnStart()\n {\n // Handle when your app starts\n }\n\n protected override void OnSleep()\n {\n // Handle when your app sleeps\n }\n\n protected override void OnResume()\n {\n // Handle when your app resumes\n }\n }\n}\n\n"
},
{
"alpha_fraction": 0.5485348105430603,
"alphanum_fraction": 0.5521978139877319,
"avg_line_length": 22.212766647338867,
"blob_id": "0a9d73047ee5296a549880cbe975967167cd1bb6",
"content_id": "31b512a5089685c8871ad0725aa904b242a493c4",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1094,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 47,
"path": "/Uniforms.Misc/Utils/ImageUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.IO;\nusing Xamarin.Forms;\n\nnamespace Uniforms.Misc\n{\n public interface IImageUtils\n {\n Size GetImageSize(string name);\n\n Stream ResizeImage(\n Stream imageData,\n double width,\n double height,\n string format = \"jpeg\",\n int quality = 96);\n }\n\n /// <summary>\n /// Image utils.\n /// </summary>\n public static class ImageUtils\n {\n static IImageUtils implementation;\n\n internal static void Init (IImageUtils platformImplementation)\n {\n implementation = platformImplementation;\n }\n\n public static Size GetImageSize (string name)\n {\n return implementation.GetImageSize (name);\n }\n\n public static Stream ResizeImage (\n Stream imageData,\n double width,\n double height,\n string format = \"jpeg\",\n int quality = 96)\n {\n return implementation.ResizeImage (\n imageData, width, height, format, quality);\n }\n }\n}\n\n"
},
{
"alpha_fraction": 0.5608994364738464,
"alphanum_fraction": 0.5633978843688965,
"avg_line_length": 30.39215660095215,
"blob_id": "d7344cfa8cd513adba7db9e68bf13c369520d3ec",
"content_id": "f03c118a3325ba396ffebf01277d8f8e19f6d68b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1603,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 51,
"path": "/Uniforms.Misc/Views/RoundedBox.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Xamarin.Forms;\n\nnamespace Uniforms.Misc\n{\n public class RoundedBox : BoxView\n {\n /// <summary>\n /// The corner radius property.\n /// </summary>\n public static readonly BindableProperty CornerRadiusProperty =\n BindableProperty.Create (\"CornerRadius\", typeof (double), typeof (RoundedBox), 0.0);\n\n /// <summary>\n /// The shadow radius property.\n /// </summary>\n public static readonly BindableProperty ShadowRadiusProperty =\n BindableProperty.Create (\"ShadowRadius\", typeof (double), typeof (RoundedBox), 0.0);\n\n /// <summary>\n /// Gets or sets the corner radius.\n /// </summary>\n public double CornerRadius {\n get { return (double)GetValue (CornerRadiusProperty); }\n set { SetValue (CornerRadiusProperty, value); }\n }\n\n /// <summary>\n /// Gets or sets the shadow radius.\n /// </summary>\n public double ShadowRadius {\n get { return (double)GetValue (ShadowRadiusProperty); }\n set { SetValue (ShadowRadiusProperty, value); }\n }\n\n /// <summary>\n /// Gets or sets the shadow opacity.\n /// </summary>\n public double ShadowOpacity { get; set; }\n\n /// <summary>\n /// Gets or sets the color of the shadow.\n /// </summary>\n public Color ShadowColor { get; set; } = Color.Black;\n\n /// <summary>\n /// Gets or sets the shadow offset.\n /// </summary>\n public Size ShadowOffset { get; set; } = Size.Zero;\n }\n}\n"
},
{
"alpha_fraction": 0.6086956262588501,
"alphanum_fraction": 0.6086956262588501,
"avg_line_length": 23.483871459960938,
"blob_id": "0b031768a79b75c2693595f67d85903091bc0f09",
"content_id": "52ce82856a170d0faccef570938834c4bd11dfdf",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 761,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 31,
"path": "/Uniforms.Misc/Utils/KeyboardUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\n\nnamespace Uniforms.Misc\n{\n public interface IKeyboardEvents\n {\n /// <summary>\n /// Occurs when keyboard height changed.\n /// </summary>\n event Action<double> KeyboardHeightChanged;\n }\n\n /// <summary>\n /// Keyboard utils.\n /// </summary>\n public static class KeyboardUtils\n {\n public static event Action<double> KeyboardHeightChanged;\n\n static IKeyboardEvents implementation;\n\n internal static void Init (IKeyboardEvents platformImplementation)\n {\n implementation = platformImplementation;\n\n implementation.KeyboardHeightChanged += (double height) => {\n KeyboardHeightChanged?.Invoke (height);\n };\n }\n }\n}\n"
},
{
"alpha_fraction": 0.5533926486968994,
"alphanum_fraction": 0.5578420758247375,
"avg_line_length": 19.66666603088379,
"blob_id": "3483a5e61de5cb6ec14cf10019c33c0afbb87822",
"content_id": "e0a831732789d8c1f36d5e0432a67d41605c8bf6",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1798,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 87,
"path": "/build.sh",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# build.sh -a build all\n# build.sh -n build NuGet\n# build.sh -c build component\n# build.sh -h print help\n\nBUILD_TOOL='/Applications/Xamarin Studio.app/Contents/MacOS/mdtool'\nNUGET_TOOL=`which nuget`\nPROJECT_BASE='Uniforms.Misc'\nOUTPUT_DIR='lib'\n\nbuild_nuget=false\nbuild_all=false\nbuild_component=false\nneed_help=false\n\n#################\n# Parse options #\n#################\n\nif [ -z \"$1\" ] ; then\n need_help=true\nfi\n\n# OPTIND=1 # Reset in case getopts has been used previously in the shell.\n\nwhile getopts \"h?anc\" opt; do\n case \"$opt\" in\n h|\\?)\n need_help=true\n ;;\n a) build_all=true\n build_nuget=true\n build_component=true\n ;;\n c) build_nuget=true\n build_component=true\n ;;\n n) build_nuget=true\n ;;\n esac\ndone\n\nshift $((OPTIND-1))\n\n[ \"$1\" = \"--\" ] && shift\n\n#############\n# Show help #\n#############\n\nif [ \"$need_help\" = true ] ; then\n echo \"build.sh -a build all\"\n echo \"build.sh -n build NuGet only\"\n echo \"build.sh -h print help\"\n exit 0\nfi\n\n##################\n# Build solution #\n##################\n\nif [ \"$build_all\" = true ] ; then\n \"$BUILD_TOOL\" build -c:\"Release\"\nfi\n\n#############################\n# Build NuGet and component #\n#############################\n\nif [ \"$build_nuget\" = true ] ; then\n mkdir -p $OUTPUT_DIR\n\n rm -v $OUTPUT_DIR/*.dll* $OUTPUT_DIR/*.nupkg\n\n cp -v $PROJECT_BASE.Droid/bin/Release/*.dll* $OUTPUT_DIR 2> /dev/null\n cp -v $PROJECT_BASE.iOS/bin/Release/*.dll* $OUTPUT_DIR 2> /dev/null\n cp -v $PROJECT_BASE/bin/Release/*.dll* $OUTPUT_DIR 2> /dev/null\n\n \"$NUGET_TOOL\" pack -OutputDirectory $OUTPUT_DIR\n\n echo \"Use nuget tool to publish the package:\"\n echo\n echo \" nuget push $OUTPUT_DIR/$PROJECT_BASE.*.nupkg\"\n echo\nfi\n"
},
{
"alpha_fraction": 0.49910491704940796,
"alphanum_fraction": 0.5034013390541077,
"avg_line_length": 28.70212745666504,
"blob_id": "06585e18338f5f6917bca16f57185c53dbdf7ed5",
"content_id": "facced155e7bcb887546a5b61c73b444a03368e3",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2795,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 94,
"path": "/Uniforms.Misc.iOS/Utils/ImageUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.IO;\nusing UIKit;\nusing CoreGraphics;\nusing Foundation;\nusing Uniforms.Misc.iOS;\n\n[assembly: Xamarin.Forms.Dependency(typeof(ImageUtils))]\n\nnamespace Uniforms.Misc.iOS\n{\n public class ImageUtils : IImageUtils\n {\n /// <summary>\n /// Initialize platform implementation for image utils.\n /// </summary>\n public static void Init()\n {\n Misc.ImageUtils.Init (new ImageUtils ());\n }\n\n #region IImageUtils implementation\n\n public Xamarin.Forms.Size GetImageSize(string name)\n {\n UIImage image = UIImage.FromFile(name);\n\n var size = new Xamarin.Forms.Size(\n (double)image.Size.Width,\n (double)image.Size.Height);\n\n image.Dispose ();\n\n return size;\n }\n \n public Stream ResizeImage(\n Stream imageStream,\n double width,\n double height,\n string format = \"jpeg\",\n int quality = 96)\n {\n if (imageStream == null) {\n return null;\n }\n\n UIImage image = null;\n try {\n using (var memoryStream = new MemoryStream ()) {\n imageStream.CopyTo (memoryStream);\n var data = memoryStream.ToArray ();\n image = UIImage.LoadFromData (NSData.FromArray (data));\n }\n } catch (Exception e) {\n Console.WriteLine ($\"ResizeImage: {e.Message}\");\n return null;\n }\n\n // No need to resize if required size is greater than original\n var scale = Math.Min (width / image.Size.Width,\n height / image.Size.Height);\n if (scale > 1.0) {\n return imageStream;\n }\n\n width = (float)(scale * image.Size.Width);\n height = (float)(scale * image.Size.Height);\n\n UIGraphics.BeginImageContextWithOptions (\n new CGSize (width, height),\n opaque: true,\n scale: 0);\n image.Draw (new CGRect (0, 0, width, height));\n var resized = UIGraphics.GetImageFromCurrentImageContext ();\n UIGraphics.EndImageContext ();\n\n var stream = new MemoryStream ();\n\n var resizeData = ((format == \"png\") ?\n resized.AsPNG ().ToArray () :\n resized.AsJPEG ((nfloat)(0.01 * quality)).ToArray ());\n\n image.Dispose ();\n resized.Dispose ();\n\n stream.Write (resizeData, 0, resizeData.Length);\n stream.Seek (0, SeekOrigin.Begin);\n return stream;\n }\n\n #endregion\n }\n}\n\n"
},
{
"alpha_fraction": 0.5740380883216858,
"alphanum_fraction": 0.5752040147781372,
"avg_line_length": 29.282352447509766,
"blob_id": "5058a2cc93eaad0096b1beae784e085d6d522877",
"content_id": "5516c4c156e8aea310ddc21d4f4db4aeaf8cb8e8",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2575,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 85,
"path": "/Uniforms.Misc.iOS/Renderers/RoundedBoxRenderer.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.ComponentModel;\nusing Xamarin.Forms;\nusing Xamarin.Forms.Platform.iOS;\nusing Uniforms.Misc;\nusing Uniforms.Misc.iOS;\nusing CoreGraphics;\nusing CoreAnimation;\nusing UIKit;\n\n[assembly: ExportRenderer (typeof (RoundedBox), typeof (RoundedBoxRenderer))]\n\nnamespace Uniforms.Misc.iOS\n{\n public class RoundedBoxRenderer : BoxRenderer\n {\n /// <summary>\n /// The color layer.\n /// </summary>\n CALayer colorLayer = new CALayer { MasksToBounds = true };\n\n protected override void OnElementChanged (ElementChangedEventArgs<BoxView> e)\n {\n base.OnElementChanged (e);\n\n if (Element != null) {\n Layer.MasksToBounds = false;\n Layer.BackgroundColor = Color.Transparent.ToCGColor ();\n Layer.AddSublayer (colorLayer);\n UpdateLayers ();\n UpdateColor ();\n }\n }\n\n protected override void OnElementPropertyChanged (object sender, PropertyChangedEventArgs e)\n {\n base.OnElementPropertyChanged (sender, e);\n\n if (e.PropertyName == RoundedBox.CornerRadiusProperty.PropertyName || \n e.PropertyName == RoundedBox.ShadowRadiusProperty.PropertyName) {\n UpdateLayers ();\n } else if (e.PropertyName == VisualElement.BackgroundColorProperty.PropertyName) {\n UpdateColor ();\n }\n }\n\n void UpdateColor ()\n {\n colorLayer.BackgroundColor = Element.BackgroundColor.ToCGColor ();\n }\n\n void UpdateLayers ()\n {\n var box = Element as RoundedBox;\n\n colorLayer.CornerRadius = (float)box.CornerRadius;\n\n if (box.ShadowRadius > 0) {\n Layer.ShadowRadius = (float)box.ShadowRadius;\n Layer.ShadowOpacity = (float)box.ShadowOpacity;\n Layer.ShadowColor = box.ShadowColor.ToCGColor ();\n Layer.ShadowOffset = box.ShadowOffset.ToSizeF ();\n } else {\n Layer.ShadowOpacity = 0;\n }\n }\n\n public override void LayoutSubviews ()\n {\n base.LayoutSubviews ();\n\n colorLayer.Frame = Layer.Bounds;\n\n var box = Element as RoundedBox;\n if (Layer.ShadowOpacity > 0) {\n Layer.ShadowPath = UIBezierPath.FromRoundedRect (\n Layer.Bounds, (float)box.CornerRadius).CGPath;\n }\n }\n\n public override void Draw (CGRect rect)\n {\n }\n }\n}"
},
{
"alpha_fraction": 0.46417760848999023,
"alphanum_fraction": 0.4722502529621124,
"avg_line_length": 21.5,
"blob_id": "4aad75bb88ab5aba4fe0c6afad693156880924bd",
"content_id": "8eafc3b8d11af17395384af47a4f35bd8484ff4f",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 993,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 44,
"path": "/CoreSample/Page4.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Diagnostics;\nusing Xamarin.Forms;\nusing Uniforms.Misc;\n\nnamespace CoreSample\n{\n public class Page4 : ContentPage\n {\n Label label;\n\n public Page4 ()\n {\n Title = \"Keyboard\";\n\n KeyboardUtils.KeyboardHeightChanged += OnKeyboardHeightChanged;\n\n label = new Label {\n Text = \"Keyboard is hidden\"\n };\n\n Content = new StackLayout {\n VerticalOptions = LayoutOptions.Center,\n\n Spacing = 10.0,\n Padding = 20.0,\n\n Children = {\n new Entry {\n Placeholder = \"Enter some text...\",\n },\n label\n },\n };\n }\n\n void OnKeyboardHeightChanged (double height)\n {\n Debug.WriteLine ($\"OnKeyboardHeightChanged: {height}\");\n\n label.Text = $\"Keyboard height: {height}\";\n }\n }\n}\n\n"
},
{
"alpha_fraction": 0.7202215790748596,
"alphanum_fraction": 0.7202215790748596,
"avg_line_length": 27.84000015258789,
"blob_id": "b57313bea91934ac0a5521b23db9585ce97f64b8",
"content_id": "eb43c5c0e9db6d4fdc0a381188ea61ae2fddda3a",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 724,
"license_type": "permissive",
"max_line_length": 165,
"num_lines": 25,
"path": "/Droid/MainActivity.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\n\nusing Android.App;\nusing Android.Content;\nusing Android.Content.PM;\nusing Android.Runtime;\nusing Android.Views;\nusing Android.Widget;\nusing Android.OS;\n\nnamespace CoreSample.Droid\n{\n [Activity(Label = \"CoreSample.Droid\", Icon = \"@drawable/icon\", MainLauncher = true, ConfigurationChanges = ConfigChanges.ScreenSize | ConfigChanges.Orientation)]\n public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsApplicationActivity\n {\n protected override void OnCreate(Bundle savedInstanceState)\n {\n base.OnCreate(savedInstanceState);\n\n CoreSample.InitPlatform(this, savedInstanceState);\n\n LoadApplication(new CoreSample());\n }\n }\n}\n\n"
},
{
"alpha_fraction": 0.6908127069473267,
"alphanum_fraction": 0.6992049217224121,
"avg_line_length": 22.58333396911621,
"blob_id": "742f251b04369ade9214176bdf1dae01a5060276",
"content_id": "d2d08cbb72c7054b84c706153dda2c57dd0611d9",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2264,
"license_type": "permissive",
"max_line_length": 238,
"num_lines": 96,
"path": "/README.md",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "Uniforms\n========\n\nThe missing helpers library for awesome [Xamarin Forms](https://www.xamarin.com/forms)!\n\nWhy\n---\n\nThere are [Xamarin-Forms-Labs](https://github.com/XLabs/Xamarin-Forms-Labs) and [Xamarin.Plugins](https://github.com/jamesmontemagno/Xamarin.Plugins) projects and probably some more but still some basic things are just missing out of box.\n\nSo, we'll try to keep simple things simple and fill some gaps. Stay tuned! :)\n\nInstall\n-------\n\n`Uniforms.Misc` package is available via NuGet: \nhttps://www.nuget.org/packages/Uniforms.Misc/\n\nAlternitavely, you may just clone this repo and references to your projects.\n\nUsage\n-----\n\nSee example in `CoreSample` project:\nhttps://github.com/TheUniforms/Uniforms-Misc/blob/master/CoreSample/CoreSample.cs#L20\n\n\n1. Init utilities right after `Forms.Init()`:\n\n\n On **Android**:\n\n ```csharp\n Xamarin.Forms.Forms.Init(context, bundle);\n Uniforms.Misc.Droid.ScreenUtils.Init ();\n Uniforms.Misc.Droid.ImageUtils.Init ();\n Uniforms.Misc.Droid.TextUtils.Init ();\n ```\n\n On **iOS**:\n\n ```csharp\n Xamarin.Forms.Forms.Init();\n Uniforms.Misc.iOS.ScreenUtils.Init ();\n Uniforms.Misc.iOS.ImageUtils.Init ();\n Uniforms.Misc.iOS.KeyboardUtils.Init ();\n Uniforms.Misc.iOS.TextUtils.Init ();\n ```\n\n2. Then use `Uniforms.Misc.*` in your cross-platform code!\n\nQuick reference\n---------------\n\nUtils interface is provided via static classes:\n\n- `Uniforms.Misc.ScreenUtils`\n- `Uniforms.Misc.ImageUtils`\n- `Uniforms.Misc.KeyboardUtils`\n- `Uniforms.Misc.TextUtils`\n\n## Get screen size\n\n```csharp\nvar screenSize = Uniforms.Misc.ScreenUtils.ScreenSize;\n```\n\n## Handle keyboard change events\n\n```csharp\nUniforms.Misc.KeyboardUtils.KeyboardHeightChanged += (height) => {\n Debug.WriteLine ($\"KeyboardHeightChanged: {height}\");\n};\n```\n\n## Get image size by file name\n\n```csharp\nvar imageSize = Uniforms.Misc.ImageUtils.GetImageSize(\"Graphics/icon.png\");\n```\n\n## Rounded box view\n\n```csharp\nvar box = new RoundedBox {\n HeightRequest = 50,\n WidthRequest = 50,\n BackgroundColor = Color.Purple,\n CornerRadius = 10,\n ShadowOffset = new Size(0, 2.0),\n ShadowOpacity = 0.5,\n ShadowRadius = 2.0,\n};\n```\n\n<img src=\"./Screenshots/Screenshot1.png\" style=\"border: 1px solid #eee;\">\n"
},
{
"alpha_fraction": 0.6743362545967102,
"alphanum_fraction": 0.6743362545967102,
"avg_line_length": 22.5,
"blob_id": "89970105c3b2cd7e88c1c2bcb1d8c2af66e29e99",
"content_id": "3f21fe6044d11822e791ce9907363c57ee3cae74",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 567,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 24,
"path": "/iOS/AppDelegate.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\n\nusing Foundation;\nusing UIKit;\n\nnamespace CoreSample.iOS\n{\n [Register(\"AppDelegate\")]\n public partial class AppDelegate : global::Xamarin.Forms.Platform.iOS.FormsApplicationDelegate\n {\n public override bool FinishedLaunching(UIApplication app, NSDictionary options)\n {\n CoreSample.InitPlatform();\n\n var sharedApp = new CoreSample();\n\n LoadApplication(sharedApp);\n\n return base.FinishedLaunching(app, options);\n }\n }\n}\n\n"
},
{
"alpha_fraction": 0.682414710521698,
"alphanum_fraction": 0.682414710521698,
"avg_line_length": 21.41176414489746,
"blob_id": "e4a1a8bc5dcce7c8964aeb5eeae18f622fc16171",
"content_id": "9485f96f9a62eb055a3ff8d4e73ca234babc6498",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 383,
"license_type": "permissive",
"max_line_length": 60,
"num_lines": 17,
"path": "/Uniforms.Misc/Utils/ScreenUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Runtime.CompilerServices;\nusing Xamarin.Forms;\n\n[assembly: InternalsVisibleTo (\"Uniforms.Misc.iOS\")]\n[assembly: InternalsVisibleTo (\"Uniforms.Misc.Droid\")]\n\nnamespace Uniforms.Misc\n{\n /// <summary>\n /// Screen utils.\n /// </summary>\n public static class ScreenUtils\n {\n public static Size ScreenSize { get; internal set; }\n }\n}\n"
},
{
"alpha_fraction": 0.41277891397476196,
"alphanum_fraction": 0.4229208827018738,
"avg_line_length": 33.578948974609375,
"blob_id": "7261c32ce9a07b0e6e8284938a0ea1695509ae9a",
"content_id": "39df228bb19712786a58c307abc373dbc6c77d26",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1974,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 57,
"path": "/CoreSample/Page1.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Xamarin.Forms;\nusing Uniforms.Misc;\n\nnamespace CoreSample\n{\n public class Page1 : ContentPage\n {\n public Page1()\n {\n Title = \"Welcome\";\n\n var screenSize = ScreenUtils.ScreenSize;\n\n var fontSize = 32.0;\n var testText = \"La-la-la\\nHa-ha-ha!\";\n var textSize = TextUtils.GetTextSize (\n testText, screenSize.Width, fontSize);\n\n Content = new StackLayout {\n VerticalOptions = LayoutOptions.Center,\n Children = {\n new Label {\n HorizontalTextAlignment = TextAlignment.Center,\n Text = \"Welcome to Xamarin Forms!\"\n },\n new Label {\n HorizontalTextAlignment = TextAlignment.Center,\n Text = String.Format(\n \"Screen width = {0}\\n Screen height = {1}\",\n screenSize.Width, screenSize.Height)\n },\n new Label {\n HorizontalTextAlignment = TextAlignment.Center,\n FontSize = fontSize,\n Text = testText\n },\n new Label {\n HorizontalTextAlignment = TextAlignment.Center,\n Text = $\"(text height = {textSize.Height})\"\n },\n new RoundedBox {\n HorizontalOptions = LayoutOptions.Center,\n HeightRequest = 50,\n WidthRequest = 50,\n BackgroundColor = Color.Purple,\n CornerRadius = 10,\n\n ShadowOffset = new Size(0, 2.0),\n ShadowOpacity = 0.5,\n ShadowRadius = 2.0,\n },\n }\n };\n }\n }\n}\n\n"
},
{
"alpha_fraction": 0.5348416566848755,
"alphanum_fraction": 0.5409502387046814,
"avg_line_length": 27.516128540039062,
"blob_id": "cf3f0cad65f77804cbb0ed9185ed1766b5dbbe36",
"content_id": "1837eb438ca883f725247f78429e116e58885fdb",
"detected_licenses": [
"CC-BY-SA-3.0",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4420,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 155,
"path": "/Resources/resize_images.py",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\"\"\"Save resized images from source directory to one or\nmultiple destinations.\n\nRequirements:\n\n - Python 2.7+\n - YAML parser library\n - ImageMagic + Wand or Pillow library\n\nIf you want to use ImageMagic:\n\n apt-get install libmagickwand-dev\n pip install PyYAML Wand\n\nIf you want to use Pillow:\n\n pip install pillow PyYAML\n\n\"\"\"\nimport os\n\n\nclass DirResizer(object):\n \"\"\"Save resized images from `src` to `dst` directory,\n applying `scale` and renaming file with `name_format`.\n\n Arguments:\n\n src -- source directory\n dst -- destination directory\n scale -- destination images scale, < 1.0\n name_format -- string format for file names,\n accepts {name} and {ext} tokens\n imagemagic -- use ImageMagic or not (use Pillow then)\n\n Usage example:\n\n # Retina resizer: take 3x images and make 2x resizes\n resizer = DirResizer('../Resources', '../iOS/Resources',\n 2.0/3.0, '{name}@2x.{ext}')\n resizer.run()\n\n \"\"\"\n def __init__(self, src, dst, scale, name_format,\n imagemagic=True, nohyphens=False):\n self.src = os.path.realpath(src)\n self.dst = os.path.realpath(dst)\n self.scale = scale\n self.name_format = name_format\n self.imagemagic = imagemagic\n self.nohyphens = nohyphens\n\n assert dst and (self.dst != self.src),\\\n \"Error, output dir and input dir must be different!\"\n\n def run(self):\n try:\n os.makedirs(self.dst)\n except OSError:\n pass\n\n for item in os.listdir(self.src):\n file_path = os.path.join(self.src, item)\n name, ext = os.path.splitext(item)\n\n if not os.path.isfile(file_path):\n continue\n\n if not ext.lower() in ('.png', '.jpg', 'jpeg'):\n continue\n\n out_name = self.name_format.format(name=name,\n ext=ext.lstrip('.'))\n \n if self.nohyphens:\n # Android platform does not allow hyphens in resources\n # file names!!!\n out_name = out_name.replace('-', '_')\n\n out_file = os.path.join(self.dst, out_name)\n\n if self.imagemagic:\n self.resize_imagemagic(file_path, out_file)\n else:\n self.resize_pillow(file_path, out_file)\n\n def resize_imagemagic(self, file_path, out_file):\n from wand.image import Image\n\n with Image(filename=file_path) as im:\n im.resize(int(im.size[0] * self.scale + 0.5),\n int(im.size[1] * self.scale + 0.5))\n # if im.depth != 8:\n # print(im.depth)\n im.depth = 8\n im.save(filename=out_file)\n\n def resize_pillow(self, file_path, out_file):\n from PIL import Image\n\n im = Image.open(file_path)\n size = (int(im.size[0] * self.scale + 0.5),\n int(im.size[1] * self.scale + 0.5))\n im_resized = im.resize(size, Image.ANTIALIAS)\n im_resized.save(out_file, optimize=True)\n\n\nclass Resizer(object):\n \"\"\"Bulk resizer.\n\n Arguments:\n\n src -- source directory\n base -- base image scale at source directory\n resizes -- list of dict configs for resizes with keys:\n `dst`, `scale`, `name_format`\n\n Usage example:\n\n with open('resize_images.yml') as f:\n config = yaml.load(f)\n resizer = Resizer('../Resources', 3.0, config['resizes'])\n resizer.run()\n\n \"\"\"\n def __init__(self, src, base, resizes, imagemagic=True):\n self.src = src\n self.base = base\n self.resizes = resizes\n self.imagemagic = imagemagic\n\n def run(self):\n for x in self.resizes:\n resize = x.copy()\n resize.update({\n 'src': self.src,\n 'scale': x['scale'] / self.base,\n 'imagemagic': self.imagemagic,\n })\n resizer = DirResizer(**resize)\n resizer.run()\n\n\nif __name__ == '__main__':\n import yaml\n\n with open('resize_images.yml') as f:\n config = yaml.load(f)\n\n print(config)\n\n resizer = Resizer(config['src'], config['base'], config['resizes'],\n imagemagic=config.get('imagemagic', True))\n resizer.run()\n"
},
{
"alpha_fraction": 0.49584487080574036,
"alphanum_fraction": 0.5069252252578735,
"avg_line_length": 19.05555534362793,
"blob_id": "9270e071d5f0b8d5137802d87d3bfdb8c80a28dd",
"content_id": "cb2995b77b290d5cd7ec66271df6aa0ad1eb5e06",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 363,
"license_type": "permissive",
"max_line_length": 40,
"num_lines": 18,
"path": "/CoreSample/MainPage.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Xamarin.Forms;\n\nnamespace CoreSample\n{\n public class MainPage : TabbedPage\n {\n public MainPage()\n {\n Title = \"Uniforms.Misc\";\n\n Children.Add (new Page1 ());\n Children.Add (new Page2 ());\n Children.Add (new Page3 ());\n Children.Add (new Page4 ());\n }\n }\n}\n"
},
{
"alpha_fraction": 0.6042741537094116,
"alphanum_fraction": 0.6042741537094116,
"avg_line_length": 26.714284896850586,
"blob_id": "b733b6eed8e65907a465ad4a1e389ec826d6e3e2",
"content_id": "0e12a0ae3fe306d284ae6532caca3fc13f1e6f05",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1359,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 49,
"path": "/Uniforms.Misc.Droid/Renderers/RoundedBoxRenderer.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.ComponentModel;\nusing Xamarin.Forms;\nusing Xamarin.Forms.Platform.Android;\nusing Android.Graphics;\nusing Uniforms.Misc;\nusing Uniforms.Misc.Droid;\n\n[assembly: ExportRenderer (typeof (RoundedBox), typeof (RoundedBoxRenderer))]\n\nnamespace Uniforms.Misc.Droid\n{\n public class RoundedBoxRenderer : BoxRenderer\n {\n protected override void OnElementChanged (ElementChangedEventArgs<BoxView> e)\n {\n base.OnElementChanged (e);\n\n SetWillNotDraw (false);\n\n Invalidate ();\n }\n\n protected override void OnElementPropertyChanged (object sender, PropertyChangedEventArgs e)\n {\n base.OnElementPropertyChanged (sender, e);\n\n if (e.PropertyName == RoundedBox.CornerRadiusProperty.PropertyName) {\n Invalidate ();\n }\n }\n\n public override void Draw (Canvas canvas)\n {\n var box = Element as RoundedBox;\n var rect = new Rect ();\n var paint = new Paint () {\n Color = box.BackgroundColor.ToAndroid (),\n AntiAlias = true,\n };\n\n GetDrawingRect (rect);\n\n var radius = (float)(rect.Width () / box.Width * box.CornerRadius);\n\n canvas.DrawRoundRect (new RectF (rect), radius, radius, paint);\n }\n }\n}"
},
{
"alpha_fraction": 0.5772293210029602,
"alphanum_fraction": 0.5780254602432251,
"avg_line_length": 28.20930290222168,
"blob_id": "d0fb13a1ac3f7804dff6beca659a641919da2d2d",
"content_id": "d8f3799de26076959536efb547ed095fd58852c2",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1258,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 43,
"path": "/Uniforms.Misc.iOS/Utils/KeyboardUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Diagnostics;\nusing UIKit;\nusing Foundation;\nusing Uniforms.Misc.iOS;\n\n[assembly: Xamarin.Forms.Dependency (typeof (KeyboardUtils))]\n\nnamespace Uniforms.Misc.iOS\n{\n public class KeyboardUtils : IKeyboardEvents\n {\n /// <summary>\n /// Empty method for reference.\n /// </summary>\n public static void Init ()\n {\n Misc.KeyboardUtils.Init (new KeyboardUtils ());\n }\n\n /// <summary>\n /// Occurs when keyboard height changed.\n /// </summary>\n public event Action<double> KeyboardHeightChanged;\n\n public KeyboardUtils ()\n {\n var windowHeight = UIScreen.MainScreen.Bounds.Height;\n\n // Show or change frame\n NSNotificationCenter.DefaultCenter.AddObserver (\n UIKeyboard.WillChangeFrameNotification, (notify) => {\n var info = notify.UserInfo;\n var value = (NSValue)(info [UIKeyboard.FrameEndUserInfoKey]);\n var height = windowHeight - value.RectangleFValue.Y;\n\n Debug.WriteLine (\"KeyboardEvents, height={0}\", height);\n\n KeyboardHeightChanged?.Invoke (height);\n });\n }\n }\n}\n"
},
{
"alpha_fraction": 0.4712683856487274,
"alphanum_fraction": 0.4730203151702881,
"avg_line_length": 30.340660095214844,
"blob_id": "b614bc81d2a8c0079f2da53f4dcd441f944ce2bb",
"content_id": "91486d175d0facad2b55e644687a763507bcf496",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2856,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 91,
"path": "/CoreSample/Page3.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.IO;\nusing Xamarin.Forms;\nusing Uniforms.Misc;\n\nnamespace CoreSample\n{\n using CrossMedia = Plugin.Media.CrossMedia;\n using MediaOptions = Plugin.Media.Abstractions.StoreCameraMediaOptions;\n\n public class Page3 : ContentPage\n {\n const double resizedImageSize = 300;\n\n Image takenImage;\n\n public Page3()\n {\n Title = \"Resize\";\n\n var getPhotoButton = new Button {\n Text = \"Tap to pick a photo\",\n HorizontalOptions = LayoutOptions.Center\n };\n getPhotoButton.Clicked += GetPhotoButtonClicked;\n\n Content = new StackLayout {\n VerticalOptions = LayoutOptions.Center,\n Children = {\n getPhotoButton,\n }\n };\n }\n\n async void GetPhotoButtonClicked(object sender, EventArgs e)\n {\n if (await CrossMedia.Current.Initialize())\n {\n //if (!CrossMedia.Current.IsCameraAvailable ||\n // !CrossMedia.Current.IsTakePhotoSupported)\n //{\n // await DisplayAlert(\"Error\", \"Sorry, can't connect to camera!\", \"OK\");\n // return;\n //}\n\n //var options = new MediaOptions {\n // Directory = \"Temp\",\n // Name = \"photo.jpg\"\n //};\n\n if (!CrossMedia.Current.IsPickPhotoSupported) {\n await DisplayAlert (\"Error\", \"Sorry, can't pick a photo!\", \"OK\");\n return;\n }\n\n var mediaFile = await CrossMedia.Current.PickPhotoAsync ();\n\n if (mediaFile != null)\n {\n var layout = Content as StackLayout;\n\n if (takenImage != null)\n {\n layout.Children.Remove(takenImage);\n }\n \n var imageSource = ImageSource.FromStream(() => {\n var origStream = mediaFile.GetStream();\n var resizedStream = GetResizedImageStream(mediaFile.GetStream());\n origStream.Dispose();\n mediaFile.Dispose();\n return resizedStream;\n });\n\n takenImage = new Image {\n HorizontalOptions = LayoutOptions.Center,\n Source = imageSource\n };\n\n layout.Children.Add(takenImage);\n }\n }\n }\n\n static Stream GetResizedImageStream(Stream imageStream)\n {\n return ImageUtils.ResizeImage(\n imageStream, resizedImageSize, resizedImageSize);\n }\n }\n}\n\n\n"
},
{
"alpha_fraction": 0.5822102427482605,
"alphanum_fraction": 0.5822102427482605,
"avg_line_length": 20.764705657958984,
"blob_id": "d6d41fa0f1bf2c772ef477448c1b393ca690228e",
"content_id": "d24b6c1483bcc6ab6ade88891e5af58ba85ccecb",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 373,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 17,
"path": "/Uniforms.Misc.iOS/Utils/ScreenUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing UIKit;\nusing Uniforms.Misc.iOS;\n\nnamespace Uniforms.Misc.iOS\n{\n public static class ScreenUtils\n {\n public static void Init ()\n {\n Misc.ScreenUtils.ScreenSize = new Xamarin.Forms.Size(\n UIScreen.MainScreen.Bounds.Width,\n UIScreen.MainScreen.Bounds.Height\n );\n }\n }\n}\n\n"
},
{
"alpha_fraction": 0.7116279006004333,
"alphanum_fraction": 0.7209302186965942,
"avg_line_length": 29.714284896850586,
"blob_id": "0e4221ad3defd814eb257bf8fe4144cc4be19f1d",
"content_id": "ab15cdae7f5af52cb6280c620bfbc3890c4599a2",
"detected_licenses": [
"CC-BY-SA-3.0",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 216,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 7,
"path": "/Resources/README.md",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "balloon.jpg\n-----------\n\nFrom Wikimedia Commons, the free media repository\n<https://commons.wikimedia.org/wiki/File:Hot_air_balloon_and_moon.jpg>\n\n© Tomas Castelazo, www.tomascastelazo.com / Wikimedia Commons / CC-BY-SA-3.0\n"
},
{
"alpha_fraction": 0.44625112414360046,
"alphanum_fraction": 0.45076784491539,
"avg_line_length": 29.69444465637207,
"blob_id": "bd8fc3b6ea94ecb85b419d74217e4b43b723af37",
"content_id": "fb6aaf21a967d8d817aa96503cf8d68c09bd68d5",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1109,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 36,
"path": "/CoreSample/Page2.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Xamarin.Forms;\nusing Uniforms.Misc;\n\nnamespace CoreSample\n{\n public class Page2 : ContentPage\n {\n public Page2()\n {\n Title = \"Images\";\n\n const string imageName = \"Graphics/balloon.jpg\";\n var imageSize = ImageUtils.GetImageSize(imageName);\n\n Content = new StackLayout {\n VerticalOptions = LayoutOptions.Center,\n Children = {\n new Label {\n HorizontalTextAlignment = TextAlignment.Center,\n Text = String.Format(\"Image: {0}\", imageName)\n },\n new Image {\n HorizontalOptions = LayoutOptions.Center,\n Source = imageName\n },\n new Label {\n HorizontalTextAlignment = TextAlignment.Center,\n Text = String.Format(\"Size: {0} x {1}\",\n imageSize.Width, imageSize.Height)\n },\n }\n };\n }\n }\n}\n\n\n"
},
{
"alpha_fraction": 0.5393900871276855,
"alphanum_fraction": 0.5457433462142944,
"avg_line_length": 31.112245559692383,
"blob_id": "b3bca45b21d453d67dcb49aae6c5f83d317450a0",
"content_id": "082b2079bd67cf8e04d420b6c87cb7938c3b87ca",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3150,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 98,
"path": "/Uniforms.Misc.Droid/Utils/ImageUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.IO;\nusing Xamarin.Forms;\nusing Android.Graphics;\nusing Uniforms.Misc.Droid;\n\n[assembly: Xamarin.Forms.Dependency(typeof(ImageUtils))]\n\nnamespace Uniforms.Misc.Droid\n{\n using Path = System.IO.Path;\n\n public class ImageUtils : IImageUtils\n {\n public static void Init()\n {\n Misc.ImageUtils.Init (new ImageUtils ());\n }\n\n #region IImageUtils implementation\n\n public Xamarin.Forms.Size GetImageSize(string name)\n {\n var display = Forms.Context.Resources.DisplayMetrics;\n var options = new BitmapFactory.Options {\n InJustDecodeBounds = true\n };\n var resId = Forms.Context.Resources.GetIdentifier(\n Path.GetFileNameWithoutExtension(name.Replace(\"-\", \"_\")),\n \"drawable\", Forms.Context.PackageName);\n \n BitmapFactory.DecodeResource(\n Forms.Context.Resources, resId, options);\n\n return new Size(\n Math.Round((double)options.OutWidth / display.Density),\n Math.Round((double)options.OutHeight / display.Density));\n }\n\n public Stream ResizeImage(\n Stream imageStream,\n double width,\n double height,\n string format = \"jpeg\",\n int quality = 96)\n {\n // Decode stream\n var options = new BitmapFactory.Options {\n InJustDecodeBounds = true\n };\n BitmapFactory.DecodeStream(imageStream, null, options);\n imageStream.Seek(0, SeekOrigin.Begin);\n\n var width0 = options.OutWidth;\n var height0 = options.OutHeight;\n\n // No need to resize\n if ((height >= height0) && (width >= width0)) {\n return imageStream;\n }\n\n // Calculate scale and sample size\n var scale = Math.Min(width / width0, height / height0);\n width = width0 * scale;\n height = height0 * scale;\n\n var inSampleSize = 1;\n while ((0.5 * height0 > inSampleSize * height) &&\n (0.5 * width0 > inSampleSize * width)) {\n inSampleSize *= 2;\n }\n\n // Resize\n var originalImage = BitmapFactory.DecodeStream(imageStream, null,\n new BitmapFactory.Options {\n InJustDecodeBounds = false,\n InSampleSize = inSampleSize\n });\n Bitmap resizedImage = 
Bitmap.CreateScaledBitmap(\n originalImage, (int)(width), (int)(height), false);\n originalImage.Recycle();\n\n // Compress\n var stream = new MemoryStream();\n Bitmap.CompressFormat imageFormat = (format == \"png\") ?\n Bitmap.CompressFormat.Png :\n Bitmap.CompressFormat.Jpeg;\n resizedImage.Compress(imageFormat, quality, stream);\n resizedImage.Recycle();\n\n // Reset stream and return\n stream.Seek(0, SeekOrigin.Begin);\n return stream;\n }\n\n #endregion\n }\n}\n\n"
},
{
"alpha_fraction": 0.585394561290741,
"alphanum_fraction": 0.5865724086761475,
"avg_line_length": 28.275861740112305,
"blob_id": "c498102b87b637779a4b355f1c3e19c7ff1ed724",
"content_id": "ce5cc2851f8c7b06befb497ce7f2f6c11fda69bb",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1700,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 58,
"path": "/Uniforms.Misc.Droid/Utils/TextUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Android.Widget;\nusing Android.Util;\nusing Android.Views;\nusing Android.Graphics;\n\nnamespace Uniforms.Misc.Droid\n{\n public class TextUtils : ITextUtils\n {\n public static void Init ()\n {\n Misc.TextUtils.Init (new TextUtils ());\n }\n\n static Typeface textTypeface;\n\n static string createdFontName;\n\n public Xamarin.Forms.Size GetTextSize (\n string text,\n double maxWidth,\n double fontSize = 0,\n string fontName = null)\n {\n var textView = new TextView (global::Android.App.Application.Context);\n textView.Typeface = GetTypeface (fontName);\n textView.SetText (text, TextView.BufferType.Normal);\n textView.SetTextSize (ComplexUnitType.Px, (float)fontSize);\n\n int widthMeasureSpec = View.MeasureSpec.MakeMeasureSpec (\n (int)maxWidth, MeasureSpecMode.AtMost);\n\n int heightMeasureSpec = View.MeasureSpec.MakeMeasureSpec (\n 0, MeasureSpecMode.Unspecified);\n\n textView.Measure (widthMeasureSpec, heightMeasureSpec);\n\n return new Xamarin.Forms.Size (\n textView.MeasuredWidth,\n textView.MeasuredHeight);\n }\n\n static Typeface GetTypeface (string fontName)\n {\n if (fontName == null) {\n return Typeface.Default;\n }\n\n if (textTypeface == null || fontName != createdFontName) {\n textTypeface = Typeface.Create (fontName, TypefaceStyle.Normal);\n createdFontName = fontName;\n }\n\n return textTypeface;\n }\n }\n}\n"
},
{
"alpha_fraction": 0.5321637392044067,
"alphanum_fraction": 0.5341130495071411,
"avg_line_length": 22.86046600341797,
"blob_id": "7d070df50e31759c143d99e8ee39f869e196c09a",
"content_id": "7edf4f8983c0474ed2d1d40de8b438e7c1e6cebb",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1028,
"license_type": "permissive",
"max_line_length": 69,
"num_lines": 43,
"path": "/Uniforms.Misc/Utils/TextUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Xamarin.Forms;\n\nnamespace Uniforms.Misc\n{\n public interface ITextUtils\n {\n Size GetTextSize (\n string text,\n double maxWidth,\n double fontSize = 0,\n string fontName = null);\n }\n\n /// <summary>\n /// Text utils.\n /// </summary>\n public static class TextUtils\n {\n static ITextUtils implementation;\n\n /// <summary>\n /// Init with the specified platform implementation.\n /// </summary>\n internal static void Init (ITextUtils platformImplementation)\n {\n implementation = platformImplementation;\n }\n\n /// <summary>\n /// Gets the size of the text.\n /// </summary>\n public static Size GetTextSize (\n string text,\n double maxWidth,\n double fontSize = 0,\n string fontName = null)\n {\n return implementation.GetTextSize (\n text, maxWidth, fontSize, fontName);\n }\n }\n}\n"
},
{
"alpha_fraction": 0.6026490330696106,
"alphanum_fraction": 0.6026490330696106,
"avg_line_length": 25.58823585510254,
"blob_id": "777faf1f9215c3a5cc33d8975e9812320dc19742",
"content_id": "c9bdb42fc067d423421f75cf5074b95dc86a0780",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 455,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 17,
"path": "/Uniforms.Misc.Droid/Utils/ScreenUtils.cs",
"repo_name": "TheUniforms/Uniforms-Misc",
"src_encoding": "UTF-8",
"text": "using System;\nusing Uniforms.Misc.Droid;\n\nnamespace Uniforms.Misc.Droid\n{\n public static class ScreenUtils\n {\n public static void Init ()\n {\n var display = Xamarin.Forms.Forms.Context.Resources.DisplayMetrics;\n Misc.ScreenUtils.ScreenSize = new Xamarin.Forms.Size (\n display.WidthPixels / display.Density,\n display.HeightPixels / display.Density\n );\n }\n }\n}\n\n"
}
] | 26 |
shravani-dev/MyPyBuilder | https://github.com/shravani-dev/MyPyBuilder | c7ad848f2317443b79e0f34684b5f076006d7339 | 5823ae9e5fcd2d745a70425c37a3aaa432c0389d | 3bef79e6143e53f0f2ea5c3bf9888dd6f468cc77 | refs/heads/master | 2022-02-21T18:31:04.330436 | 2019-10-01T17:46:24 | 2019-10-01T17:46:24 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.8896551728248596,
"alphanum_fraction": 0.8896551728248596,
"avg_line_length": 71.5,
"blob_id": "9b63d14957193622708ea2a782323c0562c1577f",
"content_id": "a201e3a872f9e28201a88943c60cba0e703d6842",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 290,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 4,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs.NewTab import NewTab\nfrom GuiBuilder.BUILDER.ProjectTemplate.Tabs.FrameTab import FrameTab\nfrom GuiBuilder.BUILDER.ProjectTemplate.Tabs.EditTab import EditTab\nfrom GuiBuilder.BUILDER.ProjectTemplate.Tabs.ControlPanelTemplate import ControlPanel\n"
},
{
"alpha_fraction": 0.5206296443939209,
"alphanum_fraction": 0.5209058523178101,
"avg_line_length": 39.869075775146484,
"blob_id": "a7325a52725b2575cd00441bc2dc50148ba3dfee",
"content_id": "1f38c99dcd60868e326a7ee8141dcd6158f2db4e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 18105,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 443,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/WidgetTemplates/SplitClassGenerator.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import os\nfrom MyPyWidgets import *\n\n\nclass MultiGenerator(object):\n\n def __init__(self, working_directory, final_destination):\n \"\"\"\n TODO: Fix templates so that there are no PEP violations (spacing, # newlines, etc.)\n This class generates the static Project folder.\n\n :param working_directory: The current working directory\n :param final_destination: The Project Destination\n \"\"\"\n self.cwd = working_directory\n self.final_destination = final_destination\n self.name = os.path.basename(final_destination)\n\n self.lookup = {\n Button: 'Button',\n DropDown: 'DropDown',\n CheckButton: 'CheckButton',\n InputField: 'InputField',\n Label: 'Label',\n RadioButton: 'RadioButton',\n SpinBox: 'SpinBox'\n }\n\n self.paths = {\n 'Cwd': self.cwd,\n 'Templates': os.path.join(self.cwd, 'GuiBuilder', 'BUILDER', 'ProjectTemplate', 'WidgetTemplates'),\n 'Components': os.path.join(self.final_destination, 'Components'),\n 'Components__init__': os.path.join(self.final_destination, 'Components', '__init__.py'),\n 'Project__init__': os.path.join(self.final_destination, '__init__.py'),\n 'Frames': os.path.join(self.final_destination, 'Components', 'Frames'),\n 'MainWidgets': os.path.join(self.final_destination, 'Components', 'MainWidgets'),\n 'Final': self.final_destination,\n 'Builder_Helper': os.path.join(self.final_destination, 'Components', 'Builder_Helper.py'),\n 'MainGui': os.path.join(self.final_destination, 'MainGui.py'),\n 'MainGuiTemplate': os.path.join(self.final_destination, 'MainGuiTemplate.py')\n }\n\n self.modules = [os.path.join(self.final_destination, 'Components'),\n os.path.join(self.final_destination, 'Components', 'Frames'),\n os.path.join(self.final_destination, 'Components', 'MainWidgets')\n ]\n\n self.templates = {\n '__main__': self.load_template('MainTemplate.txt'),\n 'BuilderHelper': self.load_template('BuildHelperTemplate.txt'),\n 'Root': self.load_template('RootTemplate.txt'),\n 'RootFrame': 
self.load_template('RootFrameTemplate.txt'),\n 'MultiBase': self.load_template('MultiBaseTemplate.txt'),\n 'Widget': self.load_template('WidgetTemplate.txt'),\n 'Components__init__': self.load_template('Components__init__Template.txt'),\n 'Project__init__': self.load_template('Project__init__Template.txt'),\n 'ProjectFrame__init__': self.load_template('ProjectFrame__init__.txt'),\n 'MainWidgets__init__': self.load_template('MainWidgets__init__Template.txt'),\n 'CustomFrame__init__': self.load_template('CustomFrame__init__Template.txt'),\n 'Frame__init__': self.load_template('Frame__init__Template.txt'),\n 'MainGuiFrames': self.load_template('MainGuiFramesTemplate.txt'),\n 'MainGuiShowFrame': self.load_template('MainGuiShowFrameTemplate.txt'),\n 'Button': self.load_template('ButtonTemplate.txt'),\n 'CheckButton': self.load_template('CheckButtonTemplate.txt'),\n 'DropDown': self.load_template('DropDownTemplate.txt'),\n 'InputField': self.load_template('InputFieldTemplate.txt'),\n 'Label': self.load_template('LabelTemplate.txt'),\n 'ListBox': self.load_template('ListBoxTemplate.txt'),\n 'RadioButton': self.load_template('RadioButtonTemplate.txt'),\n 'SpinBox': self.load_template('SpinBoxTemplate.txt')\n }\n\n self.frames = ['root']\n\n def setup(self, **kwargs):\n \"\"\"\n This is used to build initial directories and __init__ files\n\n :param kwargs: The root keyword arguments used to build the tk.Tk window (A MyPyWindow)\n :return: None\n \"\"\"\n self.build_base_modules()\n self.build_builder_helper()\n self.build_project_init()\n self.build_components_init()\n self.build_main()\n self.create_main_gui_template(**kwargs)\n\n def build_main(self):\n \"\"\"\n Builds the __main__.py file for the project from the template.\n\n :return: None\n \"\"\"\n f = open(os.path.join(self.final_destination, '__main__.py'), 'a')\n for line in self.templates['__main__']:\n f.write(self.map_replace(line, ['&NAME'], [self.name]))\n f.close()\n\n def build_components_init(self):\n 
\"\"\"\n Builds the Components/__init__.py file from the template.\n\n :return: None\n \"\"\"\n f = open(self.paths['Components__init__'], 'w')\n for line in self.templates['Components__init__']:\n f.write(self.map_replace(line, ['&NAME'], [self.name]))\n f.close()\n\n def build_project_init(self):\n \"\"\"\n Builds the Project/__init__.py file from the template\n\n :return: None\n \"\"\"\n f = open(self.paths['Project__init__'], 'w')\n for line in self.templates['Project__init__']:\n f.write(self.map_replace(line, ['&NAME'], [self.name]))\n f.close()\n\n def build_base_modules(self):\n \"\"\"\n Builds required directories from self.modules\n\n :return: None\n \"\"\"\n for path in self.modules:\n os.mkdir(path)\n\n def build_builder_helper(self):\n \"\"\"\n Sets up imports for the BuilderHelper.py file\n\n :return: None\n \"\"\"\n f = open(self.paths['Builder_Helper'], 'w')\n for line in self.templates['BuilderHelper']:\n if '&NAME' in line:\n line = line.replace('&NAME', self.name)\n f.write(line)\n f.close()\n\n def create_main_gui_template(self, **kwargs):\n \"\"\"\n Creates the MainGuiTemplate.py file\n\n :param kwargs: Keyword arguments handed in from save method\n :return: None\n \"\"\"\n f = open(self.paths['MainGuiTemplate'], 'w')\n for line in self.templates['Root']:\n f.write(self.map_replace(line,\n ['&CLASSNAME', '&TITLE', \"'&ROWSPAN'\",\n \"'&COLUMNSPAN'\", '&TYPE', '&ID'\n ],\n ['MainTemplate', kwargs['title'], str(kwargs['base_location']['rowspan']),\n str(kwargs['base_location']['columnspan']), kwargs['type'], kwargs['id']\n ]))\n f.close()\n\n def add_widget(self, **kwargs):\n \"\"\"\n Generates each widget its own file in the correct directory, and adds imports to required __init__ files.\n\n :param kwargs: Key-word arguments for the widget\n :return: None\n \"\"\"\n replacement_dict = {\n \"&ID\": kwargs['id'],\n \"&MASTER\": kwargs['master'],\n \"'&ROW'\": str(kwargs['location']['row']),\n \"'&COLUMN'\": str(kwargs['location']['column']),\n 
\"'&ROWSPAN'\": str(kwargs['location']['rowspan']),\n \"'&COLUMNSPAN'\": str(kwargs['location']['columnspan']),\n 'NAME': kwargs['id']\n }\n\n kwargs['widget'] = self.lookup[kwargs['widget']]\n tmp_file = '{}_{}'.format(kwargs['widget'], kwargs['id'])\n tmp_class = '{}{}'.format(kwargs['widget'], kwargs['id'])\n if kwargs['master'] == 'root_window':\n self.mainwidgets_init_append(tmp_file, tmp_class)\n f = open(os.path.join(self.paths['MainWidgets'], '{}.py'.format(tmp_file)), 'a')\n else:\n tmp_folder = 'Frame_{}_Widgets'.format(kwargs['master'])\n self.customwidgets_init_append(tmp_folder, tmp_file, tmp_class)\n f = open(os.path.join(self.paths['Frames'], tmp_folder, '{}.py'.format(tmp_file)), 'a')\n for line in self.templates['Widget']:\n f.write(self.map_replace(line, ['&CLASSNAME'], [tmp_class]))\n for line in self.templates[kwargs['widget']]:\n for key in list(replacement_dict.keys()):\n if key in line:\n line = line.replace(key, replacement_dict[key])\n f.write(line)\n f.close()\n self.builder_helper_add_widget(kwargs['master'], tmp_class)\n\n def builder_helper_add_widget(self, master, clss):\n \"\"\"\n Adds a widget to the builder_helper which allows all widgets to be accessed in a hierarchical order\n\n :param master: Widget's master\n :param clss: Class name\n :return: None\n \"\"\"\n if master == 'root_window':\n master = 'root'\n tmp_list = self.list_lines(self.paths['Builder_Helper'])\n f = open(self.paths['Builder_Helper'], 'w')\n for line in tmp_list:\n f.write(self.map_replace(line, ['&{}'.format(master)], ['{},\\n &{}'.format(clss, master)]))\n f.close()\n\n def mainwidgets_init_append(self, file, clss):\n \"\"\"\n Adds imports to mainwidgets __init__\n :param file: Name of widget file\n :param clss: Widget class\n :return: None\n \"\"\"\n f = open(os.path.join(self.paths['MainWidgets'], '__init__.py'), 'a')\n for line in self.templates['MainWidgets__init__']:\n f.write(self.map_replace(line, ['&NAME', '&FILE', '&CLASS'], [self.name, file, 
clss]))\n f.close()\n\n def customwidgets_init_append(self, folder, file, clss):\n \"\"\"\n Adds imports to customframes __init__\n\n :param folder: The folder holding the frames widgets\n :param file: Name of widget file\n :param clss: Widget class\n :return: None\n \"\"\"\n f = open(os.path.join(self.paths['Frames'], folder, '__init__.py'), 'a')\n for line in self.templates['CustomFrame__init__']:\n f.write(self.map_replace(line, ['&NAME', '&FOLDER', '&FILE', '&CLASS'], [self.name, folder, file, clss]))\n\n def add_frame(self, **kwargs):\n \"\"\"\n Creates directories and files for new frames or toplevels.\n\n :param kwargs: Key-Word arguments for the frame/toplevel\n :return: None\n \"\"\"\n tmp_folder = 'Frame_{}_Widgets'.format(kwargs['id'])\n tmp_file = 'Main_{}_Frame'.format(kwargs['id'])\n os.mkdir(os.path.join(self.paths['Frames'], tmp_folder))\n open(os.path.join(self.paths['Frames'], tmp_folder, '__init__.py'), 'a').close()\n f = open(os.path.join(self.paths['Frames'], '{}.py'.format(tmp_file)), 'a')\n if kwargs['type'] == 'toplevel':\n for line in self.templates['Root']:\n f.write(self.map_replace(\n line,\n ['&CLASSNAME', '&TITLE',\n \"&ID\", \"&TYPE\",\n \"'&ROWSPAN'\", \"'&COLUMNSPAN'\"],\n ['Main{}'.format(kwargs['id']), kwargs['title'],\n kwargs['id'], kwargs['type'],\n str(kwargs['base_location']['rowspan']), str(kwargs['base_location']['columnspan'])])\n )\n elif kwargs['type'] == 'frame':\n for line in self.templates['RootFrame']:\n f.write(self.map_replace(\n line,\n ['&CLASSNAME', '&TYPE',\n '&ID', \"'&ROW'\",\n \"'&COLUMN'\", \"'&RSPAN'\",\n \"'&CSPAN'\", '&VERTICAL',\n '&HORIZONTAL', \"'&SCROLLROW'\",\n \"'&SCROLLCOLUMN'\", \"'&SCROLLRSPAN'\",\n \"'&SCROLLCSPAN'\"],\n ['Main{}'.format(kwargs['id']), kwargs['type'],\n kwargs['id'], str(kwargs['base_location']['row']),\n str(kwargs['base_location']['column']), str(kwargs['base_location']['rowspan']),\n str(kwargs['base_location']['columnspan']), str(kwargs['scroll']['vertical']),\n 
str(kwargs['scroll']['horizontal']), str(kwargs['scroll_window_size']['row']),\n str(kwargs['scroll_window_size']['column']), str(kwargs['scroll_window_size']['rowspan']),\n str(kwargs['scroll_window_size']['columnspan'])]\n )\n )\n\n f.close()\n self.frames.append(kwargs['id'])\n self.append_frame_init(tmp_file, tmp_folder)\n self.builder_helper_add_frame(kwargs['id'])\n self.main_gui_init_add_frame(kwargs['id'])\n\n def append_frame_init(self, file, folder):\n \"\"\"\n Generates __init__ for the frames\n\n :param file: File to import from\n :param folder: Folder to import from\n :return: None\n \"\"\"\n f = open(os.path.join(self.paths['Frames'], '__init__.py'), 'a')\n for line in self.templates['Frame__init__']:\n f.write(self.map_replace(line, ['&NAME', '&FILE', '&FOLDER'], [self.name, file, folder]))\n f.close()\n\n def builder_helper_add_frame(self, fid):\n \"\"\"\n Adds frame information to the builder helper\n\n :param fid: Frame ID\n :return: None\n \"\"\"\n tmp_list = self.list_lines(self.paths['Builder_Helper'])\n f = open(self.paths['Builder_Helper'], 'w')\n for line in tmp_list:\n if ' # &NEW' in line:\n line = line.replace(' # &NEW', ',\\n')\n f.write(line)\n line = \" '{}': [\\n\".format(fid)\n f.write(line)\n line = ' &{}\\n'.format(fid)\n f.write(line)\n line = ' ] # &NEW\\n'\n f.write(line)\n f.close()\n\n def main_gui_init_add_frame(self, fid):\n \"\"\"\n Adds the show_frame to the maingui.\n IMPORTANT NOTE:\n The functions build the frames/toplevels, so they must be called in the .run() to make them appear immediately\n\n :param fid: Frame ID\n :return: None\n \"\"\"\n f = open(self.paths['Project__init__'], 'a')\n for line in self.templates['ProjectFrame__init__']:\n f.write(self.map_replace(line, ['&NAME', '&ID'], [self.name, fid]))\n f.close()\n\n def finalize_builder_helper(self):\n \"\"\"\n Finishes creating the Builder Helper\n\n :return: None\n \"\"\"\n tmp_list = self.list_lines(self.paths['Builder_Helper'])\n f = 
open(self.paths['Builder_Helper'], 'w')\n i = 0\n for i, line in enumerate(tmp_list[:-1]):\n for frame in self.frames:\n if '&{}'.format(frame) in tmp_list[i + 1]:\n line = line.replace(',', '')\n elif '&{}'.format(frame) in line:\n line = ''\n f.write(line)\n else:\n if i != 0:\n f.write(tmp_list[i + 1])\n f.close()\n\n def finalize_main_gui(self):\n \"\"\"\n This is used to finalize the main gui by adding in the custom frames info, and generating the\n show_frame functions.\n\n :return: None\n \"\"\"\n f = open(self.paths['MainGui'], 'w')\n for line in self.templates['MultiBase']:\n line = self.map_replace(line, ['&NAME'], [self.name])\n if '# &FRAMES' in line:\n for frame in self.frames:\n if frame is not 'root':\n self.main_gui_frame(frame, f)\n elif '# &SHOWFRAME' in line:\n for frame in self.frames:\n if frame is not 'root':\n self.main_gui_show_frame(frame, f)\n f.write(line)\n f.close()\n\n def finalize(self):\n \"\"\"\n Called by the save method to finish building the Static Project\n\n :return: None\n \"\"\"\n self.finalize_builder_helper()\n self.finalize_main_gui()\n\n def main_gui_frame(self, frame, file):\n \"\"\"\n Creates the self. args for the __init__ in the main gui for the given frames\n\n :param frame: Name of the frame\n :param file: The file to write to\n :return: None\n \"\"\"\n for line in self.templates['MainGuiFrames']:\n file.write(self.map_replace(line, ['&FRAME'], [frame]))\n\n def main_gui_show_frame(self, frame, file):\n \"\"\"\n Generates the show_frame methods for the main_gui\n\n :param frame: Name of frame\n :param file: MainGuiFile\n :return: None\n \"\"\"\n for line in self.templates['MainGuiShowFrame']:\n file.write(self.map_replace(line, ['&FRAME'], [frame]))\n\n def load_template(self, template):\n \"\"\"\n Loads in list of lines in given template name\n :param template: KeyName of template. 
Ex: templates[KeyName]\n :return: List of lines in the file\n \"\"\"\n return self.list_lines(os.path.join(self.paths['Templates'], template))\n\n @staticmethod\n def list_lines(file):\n \"\"\"\n Given a file, opens it and returns a list of it's lines\n\n :param file: File path\n :return: List of lines in file\n \"\"\"\n f = open(file, 'r')\n tmp_list = f.readlines()\n f.close()\n return tmp_list\n\n @staticmethod\n def map_replace(line, old, new):\n \"\"\"\n This is used to replace the old values in a line with the new values\n :param line: Line to search and replace\n :param old: list of the things to be replaced Ex. ['&NAME', '&ID']\n :param new: list of the things to replace with. Ex. [a_name, a_id]\n :return: line with new values\n \"\"\"\n for i, item in enumerate(old):\n line = line.replace(item, new[i])\n return line\n"
},
{
"alpha_fraction": 0.3872491121292114,
"alphanum_fraction": 0.39905548095703125,
"avg_line_length": 26.322580337524414,
"blob_id": "75dad279976d3f803439a95ca76fb1578ff04328",
"content_id": "3c9b21790bfdd91d91144b9f5497fd1e007863e4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 847,
"license_type": "permissive",
"max_line_length": 44,
"num_lines": 31,
"path": "/GuiBuilder/STARTUP/Install/CreateSettings.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import json\n\n\nclass InstallSettings(object):\n\n def __init__(self, path, settings=None):\n self.path = path\n if settings is None:\n self.default_window = {\n 'type': 'root',\n 'master': None,\n 'title': 'Root Window',\n 'id': 'root_window',\n 'owner': None,\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 500,\n 'columnspan': 500,\n 'sticky': 'NSWE'\n },\n 'row_offset': 0,\n 'column_offset': 0\n }\n else:\n self.default_window = settings\n\n def factory_settings(self):\n f = open(self.path, 'w')\n json.dump(self.default_window, f)\n f.close()\n"
},
{
"alpha_fraction": 0.41005587577819824,
"alphanum_fraction": 0.4346368610858917,
"avg_line_length": 25.323530197143555,
"blob_id": "a9847f466bb173e2f68baed9cdae75d14b10c25e",
"content_id": "50a4047d6105c3801bbb754fa785f817f3edb8cc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 895,
"license_type": "permissive",
"max_line_length": 49,
"num_lines": 34,
"path": "/GuiBuilder/PROJECTS/Demo/Components/Builder_Helper.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.PROJECTS.Demo.Components import *\n\n\nclass BuildHelper(object):\n\n def __init__(self):\n self.components = {\n 'root_window': [\n Buttontmp,\n Buttontmp2,\n Buttonhello,\n Buttonhello1,\n Buttonbutton0,\n Buttonbutton1,\n Buttonbutton2,\n Buttonbutton3,\n Buttonbutton4,\n Buttonbutton5,\n Buttonbutton6,\n Buttonbutton7,\n Buttonbutton8,\n Buttonbutton9,\n Buttonbutton10,\n Buttonbutton11,\n Buttonbutton12,\n Buttoncalc_button0,\n Buttoncalc_button1,\n Buttoncalc_button2,\n Buttoncalc_button3\n ],\n\n 'hello': [\n ] # &NEW\n }\n"
},
{
"alpha_fraction": 0.28522399067878723,
"alphanum_fraction": 0.31132587790489197,
"avg_line_length": 26.68000030517578,
"blob_id": "ab0e1f3587295188d048da06ec88ca518d82b359",
"content_id": "5a363b2f3246b25cfd19293cd98f94abf00679d0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11072,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 400,
"path": "/GuiBuilder/STARTUP/MainStartup/MainGuiTemplate.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets import *\n\n\nclass GuiTemplate(object):\n\n def __init__(self, project_path_default):\n self.main_kwargs = {\n 'type': 'root',\n 'master': None,\n 'title': 'MyPyWindow Builder',\n 'id': 'root_window',\n 'owner': self,\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 200,\n 'columnspan': 300,\n 'sticky': 'NSWE'\n }\n }\n\n self.main_components = [\n {'id': 'new_project_button',\n 'widget': Button,\n 'args': ['New Project', None],\n 'location': {\n 'row': 25,\n 'column': 50,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'load_project_button',\n 'widget': Button,\n 'args': ['Load Project', None],\n 'location': {\n 'row': 75,\n 'column': 50,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'project_settings_button',\n 'widget': Button,\n 'args': ['Configure Settings', None],\n 'location': {\n 'row': 125,\n 'column': 50,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.load_kwargs = {\n 'type': 'toplevel',\n 'title': 'Load',\n 'id': 'load_window',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 150,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n },\n 'row_offset': 0,\n 'column_offset': 0\n }\n\n self.load_components = [\n {'id': 'project_dropdown',\n 'widget': DropDown,\n 'args': ['Select Project', None, lambda x: None],\n 'location': {\n 'row': 10,\n 'column': 10,\n 'rowspan': 25,\n 'columnspan': 180,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'load_project_go',\n 'widget': Button,\n 'args': ['Load Project Editor', None],\n 'location': {\n 'row': 40,\n 'column': 10,\n 'rowspan': 25,\n 'columnspan': 180,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'run_project_go',\n 'widget': Button,\n 'args': ['Run Project', None],\n 'location': {\n 'row': 70,\n 'column': 10,\n 'rowspan': 25,\n 'columnspan': 180,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'delete_project_go',\n 'widget': Button,\n 'args': ['Delete Project', None],\n 'location': {\n 'row': 100,\n 
'column': 10,\n 'rowspan': 25,\n 'columnspan': 180,\n 'sticky': 'NSWE'\n }\n }\n\n ]\n\n self.new_kwargs = {\n 'type': 'toplevel',\n 'title': 'New Project',\n 'id': 'new_window',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 220,\n 'columnspan': 550,\n 'sticky': 'NSWE'\n },\n 'row_offset': 0,\n 'column_offset': 0\n }\n\n self.new_components = [\n\n {'id': 'project_path',\n 'widget': Label,\n 'args': [project_path_default],\n 'location': {\n 'row': 25,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 500,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'change_path',\n 'widget': Button,\n 'args': ['Change Project Path (Coming Soon)', None],\n 'location': {\n 'row': 55,\n 'column': 75,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'config': {'state': 'disabled'}\n },\n\n {'id': 'project_name',\n 'widget': Label,\n 'args': ['Project Name:'],\n 'location': {\n 'row': 100,\n 'column': 75,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'name_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 100,\n 'column': 175,\n 'rowspan': 25,\n 'columnspan': 300,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'title_label',\n 'widget': Label,\n 'args': ['Root Title:'],\n 'location': {\n 'row': 125,\n 'column': 75,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'title_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 125,\n 'column': 175,\n 'rowspan': 25,\n 'columnspan': 300,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'exit_new',\n 'widget': Button,\n 'args': ['Cancel and Exit', None],\n 'location': {\n 'row': 175,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'configure_settings',\n 'widget': Button,\n 'args': ['Project Settings', None],\n 'location': {\n 'row': 175,\n 'column': 130,\n 'rowspan': 25,\n 'columnspan': 100,\n 
'sticky': 'NSWE'\n }\n },\n\n {'id': 'new_project',\n 'widget': Button,\n 'args': ['Create Project', None],\n 'location': {\n 'row': 175,\n 'column': 375,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.configure_kwargs = {\n 'type': 'toplevel',\n 'title': 'Configure Settings',\n 'id': 'configure_window',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 220,\n 'columnspan': 300,\n 'sticky': 'NSWE'\n },\n 'row_offset': 0,\n 'column_offset': 0\n }\n\n self.configure_components = [\n\n {'id': 'height_label',\n 'widget': Label,\n 'args': ['Root Height (pixels)'],\n 'location': {\n 'row': 25,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 150,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken',\n 'anchor': 'w'}\n },\n\n {'id': 'height_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 25,\n 'column': 185,\n 'rowspan': 25,\n 'columnspan': 90,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_label',\n 'widget': Label,\n 'args': ['Root Width (pixels)'],\n 'location': {\n 'row': 55,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 150,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken',\n 'anchor': 'w'}\n },\n\n {'id': 'width_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 55,\n 'column': 185,\n 'rowspan': 25,\n 'columnspan': 90,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontal_offset_label',\n 'widget': Label,\n 'args': ['Horizontal Offset (pixels)'],\n 'location': {\n 'row': 85,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 150,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken',\n 'anchor': 'w'}\n },\n\n {'id': 'horizontal_offset_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 85,\n 'column': 185,\n 'rowspan': 25,\n 'columnspan': 90,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'vertical_offset_label',\n 'widget': Label,\n 'args': ['Vertical Offset (pixels)'],\n 'location': {\n 'row': 115,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 150,\n 
'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken',\n 'anchor': 'w'}\n },\n\n {'id': 'vertical_offset_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 115,\n 'column': 185,\n 'rowspan': 25,\n 'columnspan': 90,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'save_settings',\n 'widget': Button,\n 'args': ['Save Settings', None],\n 'location': {\n 'row': 145,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 250,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'exit_settings',\n 'widget': Button,\n 'args': ['Cancel and Exit', None],\n 'location': {\n 'row': 175,\n 'column': 25,\n 'rowspan': 25,\n 'columnspan': 250,\n 'sticky': 'NSWE'\n }\n }\n ]\n"
},
{
"alpha_fraction": 0.6124619841575623,
"alphanum_fraction": 0.6185410618782043,
"avg_line_length": 27.60869598388672,
"blob_id": "b069535640dd7bbe1542befce7a2c4b7e69e6e00",
"content_id": "eef0df5ddb4eea5d720a936ce4270006f60bcf54",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 658,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 23,
"path": "/__main__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.STARTUP import GuiBuilder\nimport os\nfrom installer import install\n# TODO: Tighten the nuts and bolts. Updates should be backwards compatible\n\n\ndef main():\n cwd = os.getcwd()\n f = open(os.path.join(cwd, 'version.txt'), 'r')\n version = float(f.readline().split('=')[1])\n f.close()\n if version == 0:\n install()\n f = open(os.path.join(cwd, 'version.txt'), 'w')\n # TODO: Reach out to server and request updates based on version (Currently Low Priority)\n f.write('version={}'.format('1.0'))\n f.close()\n application = GuiBuilder(cwd)\n application.run()\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.29696133732795715,
"alphanum_fraction": 0.31077349185943604,
"avg_line_length": 35.099998474121094,
"blob_id": "c9a6a89ad7970079e53f7d6460cf8f50c2740f53",
"content_id": "6c79e38e118b703ae3c5b1750163e69d1550e5d1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 724,
"license_type": "permissive",
"max_line_length": 46,
"num_lines": 20,
"path": "/GuiBuilder/PROJECTS/Demo/MainGuiTemplate.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "\n\nclass MainTemplate(object):\n\n def __init__(self, master):\n self.master = master\n self.components = {}\n self.window = None\n self.widget = {'type': 'root',\n 'master': None,\n 'title': 'ok',\n 'id': 'root_window',\n 'owner': self.master,\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 500,\n 'columnspan': 500,\n 'sticky': 'NSWE'\n },\n 'row_offset': 0,\n 'column_offset': 0}\n"
},
{
"alpha_fraction": 0.8985507488250732,
"alphanum_fraction": 0.8985507488250732,
"avg_line_length": 68,
"blob_id": "d6c60740ba053989b97e1e4d820839706aacbe81",
"content_id": "719801c925185504af79efdba4bb0f6fffec485a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 69,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 1,
"path": "/GuiBuilder/STARTUP/MainStartup/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.STARTUP.MainStartup.MainGuiBuilder import GuiBuilder\n"
},
{
"alpha_fraction": 0.6523988842964172,
"alphanum_fraction": 0.6523988842964172,
"avg_line_length": 61.52941131591797,
"blob_id": "7776080c1471ea1fbe0384bdf38be18770733f3a",
"content_id": "8c44d4121d0efea52e6416f97b67d3b0adfe21bb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2126,
"license_type": "permissive",
"max_line_length": 110,
"num_lines": 34,
"path": "/installer.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.STARTUP.Install import InstallProjects, InstallSettings\nimport os\nimport shutil\n\n\ndef install():\n demo_project = os.path.join(os.getcwd(), 'GuiBuilder', 'PROJECTS', 'Demo')\n demo_src = os.path.join(os.getcwd(), 'GuiBuilder', 'BUILDER', 'PROJECTBUILDER', 'Demo')\n settings_path = os.path.join(os.getcwd(), 'GuiBuilder', 'STARTUP', 'Settings')\n if os.path.exists(demo_project):\n shutil.move(demo_project, os.path.join(os.getcwd(), 'GuiBuilder'))\n if os.path.exists(demo_src):\n shutil.move(demo_src, os.path.join(os.getcwd(), 'GuiBuilder', 'STARTUP'))\n if os.path.exists(os.path.join(os.getcwd(), 'GuiBuilder', 'PROJECTS')):\n shutil.rmtree(os.path.join(os.getcwd(), 'GuiBuilder', 'PROJECTS'))\n os.mkdir(os.path.join(os.getcwd(), 'GuiBuilder', 'PROJECTS'))\n open(os.path.join(os.getcwd(), 'GuiBuilder', 'PROJECTS', '__init__.py'), 'a').close()\n if os.path.exists(os.path.join(os.getcwd(), 'GuiBuilder', 'BUILDER', 'PROJECTBUILDER')):\n shutil.rmtree(os.path.join(os.getcwd(), 'GuiBuilder', 'BUILDER', 'PROJECTBUILDER'))\n os.mkdir(os.path.join(os.getcwd(), 'GuiBuilder', 'BUILDER', 'PROJECTBUILDER'))\n open(os.path.join(os.getcwd(), 'GuiBuilder', 'BUILDER', 'PROJECTBUILDER', '__init__.py'), 'a').close()\n builder_settings = os.path.join(settings_path, 'builder_settings.json')\n project_settings = os.path.join(settings_path, 'project_settings.json')\n InstallSettings(builder_settings).factory_settings()\n InstallProjects(project_settings).factory_settings()\n demo = None\n if os.path.exists(os.path.join(os.getcwd(), 'GuiBuilder', 'Demo')):\n shutil.move(os.path.join(os.getcwd(), 'GuiBuilder', 'Demo'),\n os.path.join(os.getcwd(), 'GuiBuilder', 'PROJECTS'))\n if os.path.exists(os.path.join(os.getcwd(), 'GuiBuilder', 'STARTUP', 'Demo')):\n shutil.move(os.path.join(os.getcwd(), 'GuiBuilder', 'STARTUP', 'Demo'),\n os.path.join(os.getcwd(), 'GuiBuilder', 'BUILDER', 'PROJECTBUILDER'))\n demo = 'Demo'\n InstallProjects(project_settings, 
demo).factory_settings()\n"
},
{
"alpha_fraction": 0.5279187560081482,
"alphanum_fraction": 0.529610812664032,
"avg_line_length": 25.863636016845703,
"blob_id": "9831cc300461b7f0caafc90fa3adf0b1f0138e45",
"content_id": "06db349e94176aa26752be64ded820082594aa0a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 591,
"license_type": "permissive",
"max_line_length": 56,
"num_lines": 22,
"path": "/MyPyWidgets/SpinBoxClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter.ttk as ttk\nimport tkinter as tk\n\n\nclass SpinBox(ttk.Spinbox):\n\n def __init__(self, frame, default, values, command):\n self.location = None\n self.var = tk.IntVar()\n super().__init__(master=frame,\n values=values,\n width=1,\n textvariable=self.var,\n command=command)\n self.widget = self\n self.set(default)\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n"
},
{
"alpha_fraction": 0.46257483959198,
"alphanum_fraction": 0.465568870306015,
"avg_line_length": 24.69230842590332,
"blob_id": "c4adf17e84bd9425241c5b544c166b9907ecc242",
"content_id": "695ac6fcf7859518c2b237094d6d961d706f4bee",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 668,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 26,
"path": "/GuiBuilder/STARTUP/Install/CreateProjects.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import json\n\n\nclass InstallProjects(object):\n\n def __init__(self, path, project=None):\n self.path = path\n if project is None:\n self.project = []\n else:\n self.project = [project]\n\n def factory_settings(self):\n if len(self.project) == 0:\n f = open(self.path, 'w')\n json.dump(self.project, f)\n f.close()\n else:\n f = open(self.path, 'r')\n tmp = json.load(f)\n if self.project[0] not in tmp:\n tmp.append(*self.project)\n f.close()\n f = open(self.path, 'w')\n json.dump(tmp, f)\n f.close()\n"
},
{
"alpha_fraction": 0.7450487613677979,
"alphanum_fraction": 0.7474862933158875,
"avg_line_length": 58.672725677490234,
"blob_id": "fdf32e731a4b820395a9e1ec24bf5f481ad6164b",
"content_id": "fa40182fb7a8d6a33fc45f6736d3b94941fc2656",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 26256,
"license_type": "permissive",
"max_line_length": 536,
"num_lines": 440,
"path": "/README.rst",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "=====\nMyPyBuilder\n=====\n\nWhat is MyPyBuilder?\n--------\nMyPyBuilder is a Drag-and-Drop GUI builder that wraps the tkinter library.\n\n* MyPyBuilder is primarily: **Fast**\n* Windows and Widgets automatically resize when the window is stretched.\n* The GUI code is generated for you, all you do is write the logic.\n* Allows you to build multiple different projects at once, keeping them all organized for you.\n* Works with PyInstaller, so finished applications can be turned into executable's and shipped.\n* Does not require any imports to use, as it is built entirely on built-ins.\n* Forces widgets and windows to be the size you want, not some weird size tkinter decides they should be.\n* Makes it easy to make scrollable frames, toplevel windows, and writes the logic for making them appear for you.\n\nWhy Use MyPyBuilder over other GUI Builders?\n--------\nMyPyBuilder **IS NOT** designed with the primary purpose of making commercial applications to offer to customers. \nMyPyBuilder **IS** designed to be used as an extremely fast way to skin a python script.\n\n* MyPyBuilder is designed with a minimal interface so that you don't have a bunch of windows and options getting in the way\n* MyPyBuilder was designed specifically with engineering research labs in mind, allowing existing testing/verification scripts that run via command line to be skinned with a GUI in 10-20 minutes. \n* MyPyBuilder is designed with a minimal learning curve compared to many currently available GUI builders.\n* MyPyBuilder is implemented in 100% pure python code, so changing things doesn't require knowing another GUI language.\n\nOftentimes we write a script that for one reason or another ends up needing a GUI. The script takes us 10 minutes to write, and but the GUI can sometimes take hours. MyPyBuilder is different. 
MyPyBuilder is designed to make building a GUI just as fast as writing the code that runs it.\n\n\n**KEEP IN MIND WHEN DEVELOPING THAT THIS IS A BETA RELEASE OF THE APPLICATION. CHANGES WILL BE MADE THAT COULD POTENTIALLY BREAK BACKWARDS COMPATIBILITY**\n\n\nSetting up and running the application for the first time.\n--------\n\nDownload/Form the repo in it's entirety and open in your IDE of choice, or run directly from the command line.\nThe file you want to run is the __main__.py file. The project should take care of the setup for you, if there are any issues/bugs at any point in time, and you want to start over from scratch (BEWARE THIS WILL DELETE ANY PROJECTS YOU HAVE CREATED) or if you have issues getting started, go to the version.txt file, and set the version number equal to 0.0. (This will clean out all projects except for the project titled Demo)\n\n\n\nImportant Information to Take Note Of\n--------\nThe functionality of the drag and drop builder is made to be as intuitive as possible. (With that said, youtube tutorials will be released in the coming future) Take special care when naming widgets/frames to use a unique ID that describes to you what the button/dropdown/etc is for. These ID's are what will be used to generate all methods/code files/etc. for the project. Beware that if a widget ID is in use, and if you attempt to make another widget with the same ID nothing will happen. (In the future perhaps an alert will pop-up)\n\n\n\nConfiguring the Project Settings\n--------\n**THIS MUST BE DONE BEFORE THE PROJECT IS BUILT. IT CANNOT BE CHANGED (EASILY) LATER!** (A Fix to this is coming soon)\n\n**Note**: To change the size of a project later, you can go into the GuiBuilder/PROJECTS/Name_Of_Project/MainGuiTemplate.py file\nand edit the rowspan/columnspan directly. 
To reflect these changes in the GuiBuilder, go into the GuiBuilder/BUILDER/PROJECTBUILDER/Name_Of_Project/MainGuiBuilderName_Of_Project.py file and edit the \nself.window_kwargs['base_location']['rowspan'] and self.window_kwargs['base_location']['columnspan'].\n**IF YOU MANUALLY EDIT THE WINDOW SIZE TO MAKE IT SMALLER AND A WIDGET IS CURRENTLY LOCATED OUTSIDE OF THE NEW WINDOW SIZE THE PROJECT WILL CRASH**\n\nWhen you first run the application, if you select the **Configure Settings** button you can specify the window width and height \n(**Root Height/Width**) This is the size the main window will be in pixels. The window will be loaded in the center of the screen by default. If you wish to load it in a different location you can use the **Horizontal Offset** and **Vertical Offset** to force the window to appear in a different location on the screen. \n**BUG** The Horizontal and Vertical Offset currently has issues when rendering the final application. This will be fixed shortly and is a quick fix.\nWhen you have finished configuring the settings simply click the **Save Settings** button.\n\n\nStarting a New Project\n--------\nIn the main startup window select the **New Project** button. In the current implementation the project path cannot be changed. (This will be fixed in the future, and it has to do with the fact that for each new project there is an entire assortment of directories and folders created dynamically, including one for the builder, and one for the final application) \nInput a **Project Name** and then input the **Root Title** (The title at the top of the window)\nIf you have not done so already, you can click the **Project Settings** to configure the settings for the project. (See Above)\nWhen you are ready to start the project click **Create Project** and the click the **Start Project** button in the window that pops up.\n\n\nLoading an Existing Project\n--------\nIn the main startup window click the **Load Project** button. 
In the window that pops up select the project you would like to load.\nIf you wish to go into the Gui Builder to edit the project, click the **Load Project Editor** button. \nIf you wish to view what the project currently looks like as a standalone application click the **Run Project** button. \n**IMPORTANT NOTE**: If you build this super cool project and then click the **Run Project** button, chances are it will fail. This is because in the guibuilder the **Widget ID's** are set as the default values, but that isn't the case in the final project, in which it is your job to specify the basic widget information. **See The Coding The Logic Section**\n\n\nDeleting an Existing Project\n--------\nIn the main startup window click the **Load Project** button. From there, select the project you wish to delete from the dropdown, and \nselect **Delete Project**\n\n**NOTE TO PROJECT CONTRIBUTORS**:\nWhile in the process of developing the project, chances are you will quickly find yourself inundated with as many as 50+ projects at any given time. (Make a change, start a new project to test it, then repeat) Instead of going through all these projects one-by-one, if you open the version.txt file, and set the verion number = 0.0, when you re-run the __main__.py program, it will by default delete every project except the one titled \"Demo\".\n\nUsing the Create Widget Tab\n--------\nThis tab is used for creating widgets. \n**Note**: Do not worry much about position and size, as it is easier to edit later. The **Widget Programmer ID** CANNOT be edited later.\n\n- The width input specifies the width of the widget.\n- The heigh input specifies the height of the widget.\n- The Vertical Base specifies the Y-coordinate of the widget. With 0 being the top of the frame.\n- The Horizontal Base specifies the X-coordinate of the widget. With 0 being the left side of the frame.\n- The **Widget Programmer ID** is the ID that you will use when implementing the logic behind widgets. 
Take care to name this something that makes sense.\n- The **Master Frame Dropdown** specifies which frame/toplevel the widget should be added to, and defaults to the main window.\n\nThere are two additional special features contained in this tab to make life easier for you. The first feature is the iterative id. \nWhen the **Iterative ID** is checked, whatever the current **Widget Programmer ID** value is, will iterate whenever a widget is added.\nThis allows you to add a bunch of widgets that are likely related to eachother without having to go change the ID over and over.\nFor Example:\n John is building a calculator application. He needs buttons from 0 to 9. \n John checks the **Iterative ID** checkbox and in the **Widget Programmer ID** he types \"calc_button0\"\n John selects \"Button\" from the widget dropdown, and then proceeds to simply press Add widget.\n The programmer ID changes to calc_button1, then calc_button2, etc. \n\nThe second special feature is the **Iterative Location** checkbox. In the above example all of John's buttons would appear in the same location. Meaning that if John made buttons 0-9, they would all be stacked and he would only be able to see calc_button9, and then under that would be button8, etc. The iterative location offsets the buttons slightly, so that they still appear stacked, but they are in a diagonal line moving down and to the right.\n\n\nMaking Widgets Resize with the window\n--------\nNothing to see here, All Widgets resize automagically. The sizes you set in the GuiBuilder are just the initial sizes. Stretch the window and the widgets will resize with the window. \n\n\nUsing the Edit Widget Tab\n--------\n**Note**: To delete a widget, simply right click it and select delete.\nWhen the programmer clicks on a widget, that widget is opened in the Edit widget tab.\n\nThe Edit widget tab is what allows you to resize a widget, and to move it around on the page. 
(You can also drag and drop the widget)\nWhen building the application I found drag-and-drop was awesome, but not when you needed to nudge the widget a few pixels to the left or to the right. **The currently selected widget will be displayed in the top of the tab**\n\n\n**Move Widget Tab**\nThe move widget tab is comprised of 9 buttons, along with relevant input fields. When a widget is selected, to move that widget in a specific direction, simply \"bump\" the widget that direction by clicking one of the buttons. The widget will never scroll of the window, if moving **sw** (south-west) for example and the widget hits the bottom of the window, it will then simply move west on continued clicks. \n\nThe **CENTER** button will always move a widget to the center of the window it is placed in.\n\nThe **Bump Increment** is the amount to \"bump\" the widget when the button is clicked. When set to 1, it will move the widget 1 pixel in that direction. Users CAN type in a specific value directly, and the spinnerBox is simply set with some default values.\n\nThe Window Width and Height are displayed in this tab as a reference to the programmer.\n\nAlso available is an input for the **X-Coordinate** and the **Y-Coordinate** which can be used to place the widget at a specific pixel location on the screen when the **Move Widget** button is clicked. (The top-left corner will be placed at that location)\n\n\n**Resize Widget Tab**\nThe resize widget tab layout is very similar to the move widget tab, but instead of moving the widget, it is used to resize the widget.\n**Note**: The \"Stretch Increment CAN be a negative value\"\nI have found this to be extremely useful in comparison with many Gui builders, because normally widgets automatically resize extending down, and to the right. \n\nThe **Stretch Increment** allows the user to specify how much they wish to stretch the button. 
For example if the stretch increment is set to 7, and the \"W\" button is clicked, the widget will stretch from its current location, growing 7 pixels to the left. \n**Did You Accidentally Make A Wiget Too Large?** Simply set the **Stretch Increment** to a negative value, and then select which side should shrink. \n\nThe **SQUARE** button will revert the widget to a size of 1x1 (This will likely be changed in the future)\n\nThe Window Width and Height are displayed in this tab as a reference to the programmer.\n\nAlso included in this tab are the **Width** and **Height** fields. This allows the user to specify a specific width and height they would like the widget to be, and then set it to that size by clicking **Resize Widget**.\n\n\n\nUsing the Frame Manager Tab\n--------\nThe frame manager tab allows you to add/manage frames, scrollable frames, and toplevels. \n\n**New Frame Tab**\nThe new frame tab allows you to create a new frame or toplevel for the project. (Currently Frames and Toplevels cannot be nested. This is a high-priority item on the TODO list for the project and will hopefully be coming soon!)\n\nThe first choice you must make when in the New Frame Tab is if you wish to add a Frame, or a Toplevel.\n\n**Creating a New FRAME**\n**Note**: New Frames will have a green background in the editor. This is simply so you can see the frame, and this isn't the case when \nrunning the application later.\n\n**Note**: If you create a scrollable frame and the main window resizes, no need to panic! The scroll frame will resize to the specified size as soon as a widget is added to it.\n\nThe first thing you need to specify when creating a new frame is the Frame ID. This is the unique identifier for the frame in the project. Once this has been completed, Go ahead and specify the **Frame Width** and **Frame Height**. \nIf this frame is going to be a scrollable frame, the **Frame Width** and **Frame Height** will end up being the size of the viewing window. 
(The size of the window with the scroll-bars, not the size of the inside window that scrolls around) \nThe next step is to specify the Vertical Base and Horizontal Base. (See the Create Widget Tab)\n**If the frame will be scrollable**\nIf the frame is going to be scrollable you can fill out the checkboxes to make it scroll vertically, horizontally, or both!\nIf selected, another field pops up asking you for the **Inset-Width** and the **Inset-Height**. This specifies the size of the inner-window, and should be **LARGER** than the frame width and height. \nOnce completed you can go ahead and click **Add Frame** to add the frame to the main window.\n\n\n**Creating a New TOPLEVEL**\nA Toplevel is a window that pops up seperately. \n**Note**: When you initially create a toplevel it will be size 0, but don't worry! It will resize to the size you wanted as soon as you add a widget.\n\n**Note**: When a Toplevel is added in the GuiBuilder, it cannot be closed. This behaviour isn't the case in the final project. If it's getting in the way, simply minimize the window.\n\nCreating a new toplevel is even easier than creating a frame. First create the **Toplevel ID** which is the unique ID used to identify the toplevel. The next step is to specify the **Toplevel Height** and the **Toplevel Width** which tells the Toplevel how big you would like it to be. The last step is to set a Title for the Toplevel. The Title is what will display at the top of the window.\n(Window Icons are coming soon!) From there, simply click the **Add Toplevel** button to add your new toplevel.\n\n\n\n**Edit Frames Tab**\nThis tab is used to edit existing frames. Perhaps you forgot about a button you needed and need to make the window a little bigger.\nThis tab is also where you can **DELETE** frames and toplevels you do not need.\n\n**BUG** Currently there are issues with scrollable frames. Changing a Normal Frame to a Scrollable frame will fail, and not allow you to add widgets to the frame. 
Resizing scrollable frames, and other edit-tools involving scrollable frames are encountering issues. This will be fixed ASAP!!! For the time being, if you encounter an issue with duplicating frames, save the project, exit it, and reload it.\n\n**Note**: Although the ID is shown as an editable field, changing the ID will cause the frame to be duplicated.\n\n**Note**: When editing the size/location of a frame/toplevel the widgets currently added to the frame/toplevel will be put in the same location when reconfigured.\n\n**Note**: If a Frame isn't popping up in the dropdown after loading a project or creating a new frame, click the **Refresh Frames** button.\n\nTo use the Edit Frame Tab, see **Creating a New TOPLEVEL** and **Creating a New FRAME**. \n\n\n**Save Project Tab**\nThis tab is how you save the current project. \n**YOU MUST SAVE THE PROJECT BEFORE CLOSING AS AUTOSAVE IS NOT YET AVAILABLE**\n(In the future it will likely be moved to a button on the top or bottom of the Builder window and always visible.)\n\n\n\nExiting the Gui Builder\n--------\nAs you may have noticed, many of the buttons that close the window (X button) do not work. This is to ensure functionality of the application. If you could close the builder window, you... well you wouldn't be able to build anything anymore. \n\n**To Exit the Gui Builder hit the X button on the Main Window of the project. (root_window)**\n\n\nCoding The Logic\n--------\n**IMPORTANT: IF YOU WRITE LOGIC, THEN GO BACK AND EDIT THE GUI IN THE GUI BUILDER AND SAVE IT, THE LOGIC WILL BE OVERWRITTEN. (An attemped fix for this is in the works)**\n\n**For this section we will be working with a Project titled Demo**\n\n**This Section is likely the most important section in the entire document.**\nWhen you create a project with the GuiBuilder you probably think \"Neat, I Got this cool gui built! 
How do I actually make it functional?\" This section will give an overview of how to insert the logic into your newly built GUI, along with some recommendations for getting everything to work.\n\n**Where Do I Find The Final Application? What's The Directory Structure Look Like?**\nThe code that gets generated for the Application is stored inside the GuiBuilder/PROJECTS directory. So, for the project Demo, it will be the GuiBuilder/PROJECTS/Demo directory.\nInside this directory you will find the following layout:\n\n::\n\n    Demo\n    |\n    |--- Components\n    |       |\n    |       |--- Frames\n    |       |\n    |       |--- MainWidgets\n    |       |       |\n    |       |       |--- __init__.py\n    |       |\n    |       |--- __init__.py\n    |       |\n    |       |--- Builder_Helper.py\n    |\n    |--- __init__.py\n    |\n    |--- __main__.py\n    |\n    |--- MainGui.py\n    |\n    |--- MainGuiTemplate.py\n\nThe MainGui.py file is where you will write/use all the logic code for the project.\n**Recommendation**: Write all the logic in a separate class/classes, and then import it into the MainGui.py file.\n\nButtons:\n    If you create a button on the main window of the Gui with the **Widget ID** of click_me, this is how you would make it operational.\n    Let's say you want to print **\"hello\"** to the console when the button is clicked, and you want the button text to be **\"Clickity\"**.\n    In Demo/Components/MainWidgets/Button_click_me.py you will find the button.\n    There will be two functions generated for you in this file.\n    \n    .. code-block:: python\n    \n        def click_me_button_fill(self):\n            \"\"\"\n            Return the text value of click_me_button displayed on the gui\n            \"\"\"\n            return 'click_me'\n\n        def click_me_button_go(self, *args):\n            \"\"\"\n            Function Called when click_me_button is clicked\n            \"\"\"\n            print('click_me')\n    \n    By changing the return value in click_me_button_fill() you are specifying the text to display on the button.\n    If you wanted the button to say \"Clickity\" you would change the return line to\n    \n    .. 
code-block:: python\n    \n        return \"Clickity\"\n    \n    The click_me_button_go() method specifies what to do when the button is clicked.\n    It is not recommended, but it will work, to simply write the code logic inside this method.\n    \n    The recommended way of doing things, however, is to write the code logic in the MainGui.py file.\n    Assume there is a function written in MainGui.py as follows:\n    \n    .. code-block:: python\n    \n        def click_me_go(self):\n            print(\"hello\")\n    \n    In the Button_click_me.py file you would then change the click_me_button_go() method to\n    \n    .. code-block:: python\n    \n        def click_me_button_go(self, *args):\n            \"\"\"\n            Function Called when click_me_button is clicked\n            \"\"\"\n            self.master.master.click_me_go()\n    \n**Let's Talk About the Way Things Are Structured**\nAssume we have a project called Demo2. This project has 1 scrollable frame (ID ScrollFrame), 1 toplevel (ID TopLevel), and 3 buttons (1 button on each window/frame).\nThis is what our MainGui.py file is going to look like:\n    \n    .. 
code-block:: python\n \n\t\tfrom MyPyWidgets import *\n\t\tfrom GuiBuilder.PROJECTS.Demo2 import *\n\n\n\t\tclass Gui(object):\n\n\t\t\tdef __init__(self):\n\t\t\t\tself.main = MainTemplate(self)\n\t\t\t\tself.main.window = MyPyWindow(**self.main.widget)\n\t\t\t\tself.main_window = self.main.window\n\t\t\t\tself.main_components = self.main.components\n\t\t\t\tself.structure = BuildHelper()\n\t\t\t\tself.structure_components = self.structure.components\n\n\t\t\t\tself.TopLevel = MainTopLevel(self)\n\t\t\t\tself.TopLevel.window = None\n\t\t\t\tself.TopLevel_window = None\n\t\t\t\tself.TopLevel_components = self.TopLevel.components\n\n\t\t\t\tself.ScrollFrame = MainScrollFrame(self)\n\t\t\t\tself.ScrollFrame.window = None\n\t\t\t\tself.ScrollFrame_window = None\n\t\t\t\tself.ScrollFrame_components = self.ScrollFrame.components\n\n\t\t\t\t# &FRAMES\n\t\t\tdef run(self):\n\t\t\t\tfor widget in self.structure_components['root_window']:\n\t\t\t\t\tself.main_components[widget.__name__] = widget(self.main)\n\t\t\t\t\tself.main_window.add_widget(**self.main_components[widget.__name__].widget)\n\t\t\t\tself.main_window.setup()\n\t\t\t\tself.main_window.run()\n\n\t\t\tdef show_TopLevel(self):\n\t\t\t\tself.TopLevel.widget['master'] = self.main_window\n\t\t\t\tif self.TopLevel.widget['type'] == 'toplevel':\n\t\t\t\t\tself.main_window.add_toplevel(**self.TopLevel.widget)\n\t\t\t\telse:\n\t\t\t\t\tself.main_window.add_frame(**self.TopLevel.widget)\n\t\t\t\tself.TopLevel.window = self.main_window.containers[self.TopLevel.widget['id']]\n\t\t\t\tself.TopLevel_window = self.TopLevel.window\n\t\t\t\tfor widget in self.structure_components['TopLevel']:\n\t\t\t\t\tself.TopLevel_components[widget.__name__] = widget(self.TopLevel)\n\t\t\t\t\tself.TopLevel_window.add_widget(**self.TopLevel_components[widget.__name__].widget)\n\n\t\t\tdef show_ScrollFrame(self):\n\t\t\t\tself.ScrollFrame.widget['master'] = self.main_window\n\t\t\t\tif self.ScrollFrame.widget['type'] == 
'toplevel':\n\t\t\t\t\tself.main_window.add_toplevel(**self.ScrollFrame.widget)\n\t\t\t\telse:\n\t\t\t\t\tself.main_window.add_frame(**self.ScrollFrame.widget)\n\t\t\t\tself.ScrollFrame.window = self.main_window.containers[self.ScrollFrame.widget['id']]\n\t\t\t\tself.ScrollFrame_window = self.ScrollFrame.window\n\t\t\t\tfor widget in self.structure_components['ScrollFrame']:\n\t\t\t\t\tself.ScrollFrame_components[widget.__name__] = widget(self.ScrollFrame)\n\t\t\t\t\tself.ScrollFrame_window.add_widget(**self.ScrollFrame_components[widget.__name__].widget)\n\n\t\t\t# &SHOWFRAME\n\nHere's what everything means.\n\nThe **show** methods:\n    Sometimes we want a frame or a toplevel window to not be visible initially; maybe the user needs to click a \"settings\" button that\n    causes the toplevel to pop up. That's what these methods are for. For each frame/toplevel you create, you will have a show_ID method. When this method is called, the window/frame will be built.\n\t**What if I want the Frame/Toplevel to show up when the application is initially started?**\n\tSimple, just add:\n\t\n\t.. code-block:: python\n\t\n\t    self.show_ScrollFrame()\n\tbetween the\n\t\n\t.. code-block:: python\n\t\n\t    self.main_window.setup()\n\tand the\n\t\n\t.. code-block:: python\n\t\n\t    self.main_window.run()\n\tlines in the run() method.\n\n**Templates and Main Classes**\nThe entire project is built to keep the locations/sizes/etc. of widgets/windows separated from the code that places them and tells them\nwhat to do. Each frame or window has a dictionary of all its components. These components are the buttons/dropdowns/etc. that the frame owns. This is where the self.master.master line of code comes in. For Widgets contained on the main window, the direct master of those widgets is the class contained in the MainGuiTemplate.py file. 
The master of the class contained in MainGuiTemplate.py (the MainTemplate() class) is the Gui() class, which is the class in MainGui.py.\n\nIf a widget is owned by a frame or a toplevel widget, the layout is very similar. The master of the widget is the toplevel itself, and the master of that toplevel is the Gui() class. This means that to access a function from the Gui() class, no matter what frame/window\nyou are in, you can use:\n\n.. code-block:: python\n\n    self.master.master.Some_Function_I_Want()\n\nThe last piece of the puzzle is linking widgets together. Let's say that we wanted to make it so that Button3, which is contained on the ScrollFrame, called Button2, which is contained on the TopLevel, when it was clicked.\nFor this the code looks a bit strange, but the nice thing is that the structure remains the same. The one important thing to keep in\nmind is the way the class names are created. If I give something a Widget ID of Button2, the class name inside the Button_Button2.py file will be ButtonButton2; likewise, a DropDown with the ID \"Thing\" has a class name of DropDownThing.\n\nSo knowing that\n1. Button2 is owned by TopLevel\n2. Button2 has a class of ButtonButton2\n3. The function called when Button2 is clicked is Button2_button_go()\n\nThe code written inside the Button3_button_go() method to simulate a click of Button2 would be\n\n.. code-block:: python\n\n    self.master.master.TopLevel_components[\"ButtonButton2\"].Button2_button_go()\n\nThis might look a bit tricky, but keep in mind that although the line seems complex, the self.master.master is simply accessing the MainGui, which means it's essentially the same as just self.TopLevel_components[\"ButtonButton2\"].Button2_button_go()\nIn the future there are plans to implement an alias across the board for the main window, perhaps something like:\n\n.. code-block:: python\n\n    self.w = self.master.master\n\nwhich turns that nasty long line into:\n\n.. 
code-block:: python\n\n    self.w.TopLevel_components[\"ButtonButton2\"].Button2_button_go()\n\nI've built all the logic, so what's next?\n--------\n\nTo run the application, simply run the __main__.py file inside the Project! Let's say you want to ship the application as a standalone application. That's actually pretty simple.\n\nMake a new directory with whatever you want the project to be named. Inside that directory, you want to put 2 things:\n\n1. Place the Project directory (GuiBuilder/PROJECTS/Project_I_Want_To_Ship) inside the new directory.\n2. Place the MyPyWidgets directory (GuiBuilder/MyPyWidgets) inside the new directory.\n\nAnd you are done!\n"
},
{
"alpha_fraction": 0.25724416971206665,
"alphanum_fraction": 0.29374006390571594,
"avg_line_length": 27.058319091796875,
"blob_id": "ba40e1aa167c07d1eddcc1a037ce2f4eade5c38e",
"content_id": "49625ceea70bc78c012ac127b3ab9458a12068df",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 16358,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 583,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/EditTab/EditWidgetTemplate.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets import Label, Button, SpinBox, InputField, NoteBook\n\nframe_height = 350\ntab_height = 300\n\n\nclass EditWidget(object):\n\n def __init__(self):\n self.control_panel_kwargs = {\n 'type': 'frame',\n 'id': 'edit_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': frame_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.control_panel_components = [\n {'id': 'selected_widget',\n 'widget': Label,\n 'args': [None],\n 'location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'notebook',\n 'widget': NoteBook,\n 'args': [],\n 'location': {\n 'row': 30,\n 'column': 0,\n 'rowspan': tab_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.move_frame_kwargs = {\n 'type': 'frame',\n 'id': 'edit_move_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': tab_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.move_frame_components = [\n {'id': 'nw_move',\n 'widget': Button,\n 'args': ['NW', None],\n 'location': {\n 'row': 20,\n 'column': 20,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'n_move',\n 'widget': Button,\n 'args': ['N', None],\n 'location': {\n 'row': 20,\n 'column': 20 + 57,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'ne_move',\n 'widget': Button,\n 'args': ['NE', None],\n 'location': {\n 'row': 20,\n 'column': 20 + 57 * 2,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'w_move',\n 'widget': Button,\n 'args': ['W', None],\n 'location': {\n 'row': 20 + 57,\n 'column': 20,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'c_move',\n 'widget': Button,\n 'args': ['CENTER', None],\n 'location': {\n 'row': 20 + 57,\n 'column': 20 + 57,\n 
'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'e_move',\n 'widget': Button,\n 'args': ['E', None],\n 'location': {\n 'row': 20 + 57,\n 'column': 20 + 57 * 2,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'sw_move',\n 'widget': Button,\n 'args': ['SW', None],\n 'location': {\n 'row': 20 + 57 * 2,\n 'column': 20,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 's_move',\n 'widget': Button,\n 'args': ['S', None],\n 'location': {\n 'row': 20 + 57 * 2,\n 'column': 20 + 57,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'se_move',\n 'widget': Button,\n 'args': ['SE', None],\n 'location': {\n 'row': 20 + 57 * 2,\n 'column': 20 + 57 * 2,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'increment_label',\n 'widget': Label,\n 'args': ['Bump Increment'],\n 'location': {\n 'row': 20,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'increment',\n 'widget': SpinBox,\n 'args': [20, (1, 5, 10, 20, 25, 50, 100), lambda: None],\n 'location': {\n 'row': 20,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'windoww_label',\n 'widget': Label,\n 'args': ['Window Width'],\n 'location': {\n 'row': 49,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'windoww_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 49,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n },\n 'config': {'state': 'disabled'}\n },\n\n {'id': 'windowh_label',\n 'widget': Label,\n 'args': ['Window Height'],\n 'location': {\n 'row': 78,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 
'windowh_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 78,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n },\n 'config': {'state': 'disabled'}\n },\n\n {'id': 'X_label',\n 'widget': Label,\n 'args': ['X-Coordinate'],\n 'location': {\n 'row': 107,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'X_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 107,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'Y_label',\n 'widget': Label,\n 'args': ['Y-Coordinate'],\n 'location': {\n 'row': 136,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'Y_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 136,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'move_submit',\n 'widget': Button,\n 'args': ['Move Widget', None],\n 'location': {\n 'row': 165,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 180,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.resize_frame_kwargs = {\n 'type': 'frame',\n 'id': 'edit_resize_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': tab_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.resize_frame_components = [\n {'id': 'nw_stretch',\n 'widget': Button,\n 'args': ['NW', None],\n 'location': {\n 'row': 20,\n 'column': 20,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'n_stretch',\n 'widget': Button,\n 'args': ['N', None],\n 'location': {\n 'row': 20,\n 'column': 20 + 57,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'ne_stretch',\n 'widget': Button,\n 'args': ['NE', 
None],\n 'location': {\n 'row': 20,\n 'column': 20 + 57 * 2,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'w_stretch',\n 'widget': Button,\n 'args': ['W', None],\n 'location': {\n 'row': 20 + 57,\n 'column': 20,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'c_stretch',\n 'widget': Button,\n 'args': ['SQUARE', None],\n 'location': {\n 'row': 20 + 57,\n 'column': 20 + 57,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'e_stretch',\n 'widget': Button,\n 'args': ['E', None],\n 'location': {\n 'row': 20 + 57,\n 'column': 20 + 57 * 2,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'sw_stretch',\n 'widget': Button,\n 'args': ['SW', None],\n 'location': {\n 'row': 20 + 57 * 2,\n 'column': 20,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 's_stretch',\n 'widget': Button,\n 'args': ['S', None],\n 'location': {\n 'row': 20 + 57 * 2,\n 'column': 20 + 57,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'se_stretch',\n 'widget': Button,\n 'args': ['SE', None],\n 'location': {\n 'row': 20 + 57 * 2,\n 'column': 20 + 57 * 2,\n 'rowspan': 57,\n 'columnspan': 57,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'increment_label',\n 'widget': Label,\n 'args': ['Stretch Increment'],\n 'location': {\n 'row': 20,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'increment',\n 'widget': SpinBox,\n 'args': [10, (-20, -10, -5, -1, 1, 5, 10, 20), lambda: None],\n 'location': {\n 'row': 20,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'windoww_label',\n 'widget': Label,\n 'args': ['Window Width'],\n 'location': {\n 'row': 49,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 
'windoww_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 49,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n },\n 'config': {'state': 'disabled'}\n },\n\n {'id': 'windowh_label',\n 'widget': Label,\n 'args': ['Window Height'],\n 'location': {\n 'row': 78,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'windowh_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 78,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n },\n 'config': {'state': 'disabled'}\n },\n\n {'id': 'width_label',\n 'widget': Label,\n 'args': ['Width'],\n 'location': {\n 'row': 107,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'XS_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 107,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_label',\n 'widget': Label,\n 'args': ['Height'],\n 'location': {\n 'row': 136,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 120,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'YS_input',\n 'widget': InputField,\n 'args': [None],\n 'location': {\n 'row': 136,\n 'column': 20 + 57 * 3 + 9 + 120,\n 'rowspan': 25,\n 'columnspan': 60,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'resize_submit',\n 'widget': Button,\n 'args': ['Resize Widget', None],\n 'location': {\n 'row': 165,\n 'column': 20 + 57 * 3 + 9,\n 'rowspan': 25,\n 'columnspan': 180,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.sharrre_intt = lambda x: x[10]+x[3]+x[8]+x[0]+x[11]+x[6]+x[9]+x[7]+x[1]+x[2]+x[4]+x[5]\n"
},
{
"alpha_fraction": 0.8846153616905212,
"alphanum_fraction": 0.8846153616905212,
"avg_line_length": 77,
"blob_id": "7cb63ee93715a922c8ac651798ff65f4d38aa6e9",
"content_id": "792c8120ceb8c269d5d7493564cc95d0f8e92bc6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 78,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 1,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/NewTab/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs.NewTab.NewTabBuild import NewTab\n"
},
{
"alpha_fraction": 0.42420539259910583,
"alphanum_fraction": 0.4376528263092041,
"avg_line_length": 23.02941131591797,
"blob_id": "7b6d3ea5900771df6a60f0fa489145bf5468a203",
"content_id": "01fcb1d2d9bf6fa3363dc0d8bfe13f45f5857a18",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 818,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 34,
"path": "/GuiBuilder/PROJECTS/Demo/Components/MainWidgets/Button_tmp.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets import *\n\n\nclass Buttontmp(object):\n\n def __init__(self, master):\n self.master = master\n self.widget = {\n 'master': 'root_window',\n 'id': 'tmp',\n 'widget': Button,\n 'args': [self.tmp_button_fill(),\n self.tmp_button_go],\n 'location': {\n 'row': 150,\n 'column': 361,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n }\n\n #&FUNCTIONS\n def tmp_button_fill(self):\n \"\"\"\n Return the text value of tmp_button displayed on the gui\n \"\"\"\n return 'tmp'\n\n def tmp_button_go(self, *args):\n \"\"\"\n Function Called when tmp_button is clicked\n \"\"\"\n print('tmp')\n\n"
},
{
"alpha_fraction": 0.4706840515136719,
"alphanum_fraction": 0.4706840515136719,
"avg_line_length": 24.58333396911621,
"blob_id": "bf811ec048c9f3cadc169a402ab22f8dd5d691d2",
"content_id": "d417885fb71693a2ee4961a704a36da4f8728d8d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 614,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 24,
"path": "/GuiBuilder/STARTUP/Install/DeleteProjects.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import json\n\n\nclass DeleteProject(object):\n\n def __init__(self, path, project=None):\n self.path = path\n if project is None:\n self.project = None\n else:\n self.project = project\n\n def factory_settings(self):\n if self.project is not None:\n f = open(self.path, 'r')\n tmp = json.load(f)\n f.close()\n tmp_list = []\n for item in tmp:\n if item != self.project:\n tmp_list.append(item)\n f = open(self.path, 'w')\n json.dump(tmp_list, f)\n f.close()\n"
},
{
"alpha_fraction": 0.538557231426239,
"alphanum_fraction": 0.5410447716712952,
"avg_line_length": 24.935483932495117,
"blob_id": "5c870fda5fe5a98f3fade1894b3294d21b47cb9e",
"content_id": "96889b2917f341c807796118b1b5205c6ad6a079",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 804,
"license_type": "permissive",
"max_line_length": 56,
"num_lines": 31,
"path": "/MyPyWidgets/MenuClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\n\n\nclass Menu(tk.Menu):\n\n def __init__(self, frame, options=None):\n self.location = None\n self.selected = None\n super().__init__(master=frame,\n tearoff=0)\n self.widget = self\n if options is not None:\n for option in options:\n self.add_option(*option)\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n\n def add_option(self, option, command):\n self.add_command(label=option,\n command=command)\n\n def popup(self, event, wid):\n self.selected = wid\n try:\n self.tk_popup(event.x_root, event.y_root, 0)\n finally:\n self.grab_release()\n"
},
{
"alpha_fraction": 0.2827763557434082,
"alphanum_fraction": 0.30377036333084106,
"avg_line_length": 26.785715103149414,
"blob_id": "1f75aa7b34e227ae1c902b76f774a710222e4635",
"content_id": "bac64204ef63b9094c90a9c63dea984ad0c02daf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2334,
"license_type": "permissive",
"max_line_length": 56,
"num_lines": 84,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/ControlPanelTemplate.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets import *\nwindow_height = 350\n\n\nclass ControlPanel(object):\n\n def __init__(self):\n self.window_kwargs = {\n 'type': 'toplevel',\n 'title': 'Builder',\n 'id': 'control_panel',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': window_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'row_offset': -50,\n 'column_offset': 400\n }\n\n self.components = {'id': 'notebook',\n 'widget': NoteBook,\n 'args': [],\n 'location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': window_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n\n self.really_kwargs = {\n 'type': 'toplevel',\n 'title': 'Really?',\n 'id': 'really_window',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 90,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n }\n\n self.really_components = [\n {'id': 'really_label',\n 'widget': Label,\n 'args': ['Are you sure?'],\n 'location': {\n 'row': 10,\n 'column': 10,\n 'rowspan': 25,\n 'columnspan': 80,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'really_go',\n 'widget': Button,\n 'args': ['Yes', None],\n 'location': {\n 'row': 40,\n 'column': 10,\n 'rowspan': 25,\n 'columnspan': 35,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'really_cancel',\n 'widget': Button,\n 'args': ['No', None],\n 'location': {\n 'row': 40,\n 'column': 55,\n 'rowspan': 25,\n 'columnspan': 35,\n 'sticky': 'NSWE'\n }\n }\n ]\n"
},
{
"alpha_fraction": 0.5389507412910461,
"alphanum_fraction": 0.5389507412910461,
"avg_line_length": 23.19230842590332,
"blob_id": "3e618edc3200508ca3aff9f1b360fb4da86eb52c",
"content_id": "f20a1708787daae1117d3b7cd1959cfa42797a01",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 629,
"license_type": "permissive",
"max_line_length": 56,
"num_lines": 26,
"path": "/MyPyWidgets/DropDownClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\nimport tkinter.ttk as ttk\n\n\nclass DropDown(ttk.OptionMenu):\n\n def __init__(self, frame, default, values, command):\n self.location = None\n self.var = tk.StringVar()\n super().__init__(frame,\n self.var,\n *[default, *values],\n command=command)\n self.widget = self\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n\n def get(self):\n return self.var.get()\n\n def set(self, value):\n self.var.set(value)\n"
},
{
"alpha_fraction": 0.5477423667907715,
"alphanum_fraction": 0.5494690537452698,
"avg_line_length": 44.873268127441406,
"blob_id": "c8978603b60c307582a2f57d825801c23a259ebd",
"content_id": "484b17c207d2931ea1e5ad0640474c91e774478a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 23166,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 505,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/FrameTab/FrameTabBuild.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs.FrameTab.FrameWidgetTemplate import FrameWidget\nimport os\nfrom GuiBuilder.BUILDER.ProjectTemplate.WidgetTemplates.SplitClassGenerator import MultiGenerator\nimport shutil\nimport pickle\n\n\nclass FrameTab(object):\n\n def __init__(self, **kwargs):\n \"\"\"\n This class is in charge of building/deleting/editing frames and toplevels.\n\n :param kwargs: Keyword args passed in from the main gui\n \"\"\"\n self.window = kwargs['window']\n self.set_widget = kwargs['set_widget']\n self.edit_widget = kwargs['edit_widget']\n self.make_notebook_tab = kwargs['make_notebook_tab']\n self.grab_kwargs = kwargs['grab_kwargs']\n self.set_widget = kwargs['set_widget']\n self.edit_widget = kwargs['edit_widget']\n self.widget_args = kwargs['widget_args']\n self.frame_grab = kwargs['frame_grab']\n self.frames = kwargs['frames']\n self.root_path = kwargs['root_path']\n self.popup_menu = kwargs['popup_menu']\n self.command_fetch = kwargs['command_fetch']\n self.share_command = kwargs['share_command']\n self.share_command('refresh_edit', self.refresh_edit_frames)\n self.src_path = kwargs['src_path']\n self.really = kwargs['really']\n self.current_type = None\n\n self.widget_ids = []\n self.to_make = []\n\n self.edit_widgets = []\n self.edit_id = None\n\n self.commands = {'type_dropdown': lambda choice: self.choose_frame(choice),\n 'add_frame': self.add_frame,\n 'add_toplevel': self.add_toplevel,\n 'master_dropdown': self.frame_grab(),\n 'save_project': self.save_project,\n 'vertical_checkbox': self.scroll_check,\n 'horizontal_checkbox': self.scroll_check,\n 'choose_frame': self.frame_grab(),\n 'refresh_edit': self.refresh_edit_frames,\n 'edit_toplevel': self.reconfig_toplevel,\n 'vertical_checkboxf': lambda: self.scroll_check('edit_frame_frame'),\n 'horizontal_checkboxf': lambda: self.scroll_check('edit_frame_frame'),\n 'edit_frame': self.reconfig_frame,\n 'delete_frame': self.delete_frame,\n 'delete_toplevel': 
self.delete_frame}\n\n def reconfig_frame(self):\n \"\"\"\n Reconfigures the frame.\n This is done by re-adding the frame with the new arguments.\n All widgets that the frame contains are first grabbed so that the can be put back onto the frame when\n the frame has been rebuilt.\n\n :return: None\n \"\"\"\n self.add_frame(frame='edit_frame_frame')\n for widget in self.edit_widgets:\n self.window.containers[self.edit_id].add_widget(**widget)\n self.widget_args[widget['id']] = widget\n window = self.window.containers[self.edit_id]\n self.window.containers[self.edit_id].containers[widget['id']].widget.bind(\n '<1>', lambda event, wid2=widget['id'], wind=window: self.set_widget(wid2, wind))\n self.window.containers[self.edit_id].containers[widget['id']].widget.bind(\n '<Double-Button-1>', lambda event, wid2=widget['id']: self.edit_widget(wid2))\n self.window.containers[self.edit_id].containers[widget['id']].widget.bind(\n '<Button-3>', lambda event, wid2=widget['id']: self.popup_menu.popup(event, wid2))\n\n def delete_frame(self, really=False):\n \"\"\"\n This method deletes the selected frame.\n This method utilizes the static method 'really' that is owned by the main gui to popup a window asking\n if the user is sure they want to delete the frame.\n If a frame is deleted, so are all items that it owns.\n\n :param really: Method to verify user wishes to delete frame\n :return: None\n \"\"\"\n self.really(self.window, self.delete_frame)\n if really:\n bye = self.window.containers['edit_frame_frame'].containers['frame_id_input'].get()\n widgets = list(filter(\n lambda x: x is not None, map(\n lambda x: self.widget_args[x] if self.widget_args[x]['master'] == bye else None,\n list(self.widget_args.keys()))))\n for widget in widgets:\n self.widget_args.pop(widget['id'])\n self.window.containers[bye].containers.pop(widget['id'])\n\n self.frames.pop(bye)\n garbage = self.window.containers.pop(bye)\n if garbage.kwargs['type'] == 'frame':\n garbage.destroy()\n elif 
garbage.kwargs['type'] == 'toplevel':\n garbage.leave()\n self.refresh_edit_frames()\n\n def reconfig_toplevel(self):\n \"\"\"\n This method is used to edit a toplevel. This method allows a user to change the title, and the size of a\n toplevel. All widgets the toplevel owns are added back to the toplevel.\n\n :return: None\n \"\"\"\n self.window.containers[self.edit_id].leave()\n self.add_toplevel(frame='edit_frame_frame')\n for widget in self.edit_widgets:\n self.window.containers[self.edit_id].add_widget(**widget)\n self.widget_args[widget['id']] = widget\n window = self.window.containers[self.edit_id]\n self.window.containers[self.edit_id].containers[widget['id']].widget.bind(\n '<1>', lambda event, wid2=widget['id'], wind=window: self.set_widget(wid2, wind))\n self.window.containers[self.edit_id].containers[widget['id']].widget.bind(\n '<Double-Button-1>', lambda event, wid2=widget['id']: self.edit_widget(wid2))\n self.window.containers[self.edit_id].containers[widget['id']].widget.bind(\n '<Button-3>', lambda event, wid2=widget['id']: self.popup_menu.popup(event, wid2))\n\n def choose_edit_frame(self, *args):\n \"\"\"\n This method ties to the dropdown to select the frame to edit. 
Whenever a tab is selected, the frames current\n widgets are destroyed, and then the correct widgets for the selected frame are then added to the frame\n\n :param args: selected frame\n :type args: list\n :return: None\n \"\"\"\n for key in list(self.window.containers['edit_frame_frame'].containers.keys()):\n if key is not 'choose_frame':\n garbage = self.window.containers['edit_frame_frame'].containers.pop(key)\n garbage.destroy()\n del garbage\n tmp = FrameWidget()\n self.edit_widgets = list(filter(\n lambda x: x is not None,\n map(lambda x: self.widget_args[x] if self.widget_args[x]['master'] == args[0] else None,\n list(self.widget_args.keys()))))\n if args[0] == 'root_window':\n for item in tmp.edit_root:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['edit_frame_frame'].add_widget(**item)\n elif self.window.containers[args[0]].type == 'toplevel':\n tmp_kwarg = self.frames[args[0]].kwargs\n for item in tmp.edit_toplevel:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['edit_frame_frame'].add_widget(**item)\n self.edit_id = tmp_kwarg['id']\n self.window.containers['edit_frame_frame'].containers['frame_id_input'].set(tmp_kwarg['id'])\n self.window.containers['edit_frame_frame'].containers['frame_id_input'].configure({'state': 'disabled'})\n self.window.containers['edit_frame_frame'].containers['height_input'].set(\n tmp_kwarg['base_location']['rowspan'])\n self.window.containers['edit_frame_frame'].containers['width_input'].set(\n tmp_kwarg['base_location']['columnspan'])\n self.window.containers['edit_frame_frame'].containers['title_input'].set(\n tmp_kwarg['title'])\n elif self.window.containers[args[0]].type == 'frame':\n tmp_kwarg = self.frames[args[0]].kwargs\n for item in 
tmp.edit_frame:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['edit_frame_frame'].add_widget(**item)\n self.edit_id = tmp_kwarg['id']\n if tmp_kwarg['scroll']['vertical'] or tmp_kwarg['scroll']['horizontal']:\n if tmp_kwarg['scroll']['vertical']:\n self.window.containers['edit_frame_frame'].containers['vertical_checkboxf'].invoke()\n tmp_kwarg['scroll_window_size']['columnspan'] += 20\n if tmp_kwarg['scroll']['horizontal']:\n self.window.containers['edit_frame_frame'].containers['horizontal_checkboxf'].invoke()\n tmp_kwarg['scroll_window_size']['rowspan'] += 20\n self.window.containers['edit_frame_frame'].containers['frame_id_input'].set(tmp_kwarg['id'])\n self.window.containers['edit_frame_frame'].containers['height_input'].set(\n tmp_kwarg['scroll_window_size']['rowspan'])\n self.window.containers['edit_frame_frame'].containers['width_input'].set(\n tmp_kwarg['scroll_window_size']['columnspan'])\n self.window.containers['edit_frame_frame'].containers['verticalbase_input'].set(\n tmp_kwarg['base_location']['row'])\n self.window.containers['edit_frame_frame'].containers['horizontalbase_input'].set(\n tmp_kwarg['base_location']['column'])\n self.window.containers['edit_frame_frame'].containers['insetwidth_input'].set(\n tmp_kwarg['base_location']['columnspan'])\n self.window.containers['edit_frame_frame'].containers['insetheight_input'].set(\n tmp_kwarg['base_location']['rowspan'])\n else:\n self.window.containers['edit_frame_frame'].containers['frame_id_input'].set(tmp_kwarg['id'])\n self.window.containers['edit_frame_frame'].containers['height_input'].set(\n tmp_kwarg['base_location']['rowspan'])\n self.window.containers['edit_frame_frame'].containers['width_input'].set(\n tmp_kwarg['base_location']['columnspan'])\n self.window.containers['edit_frame_frame'].containers['verticalbase_input'].set(\n 
tmp_kwarg['base_location']['row'])\n self.window.containers['edit_frame_frame'].containers['horizontalbase_input'].set(\n tmp_kwarg['base_location']['column'])\n\n def refresh_edit_frames(self):\n \"\"\"\n This is used to refresh the edit frame\n\n :return: None\n \"\"\"\n tmp = FrameWidget()\n for key in list(self.window.containers['edit_frame_frame'].containers.keys()):\n garbage = self.window.containers['edit_frame_frame'].containers.pop(key)\n garbage.destroy()\n del garbage\n for item in tmp.edit_components:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n elif arg == 'hotfix': # TODO: Remove hotfix for something more stable\n tmp_args.append(self.choose_edit_frame)\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['edit_frame_frame'].add_widget(**item)\n\n def frame_tab(self):\n \"\"\"\n This method is called by the main gui to intitialize the Frame manager tabs\n :return: None\n \"\"\"\n tmp = FrameWidget()\n self.make_notebook_tab(self,\n tmp.window_kwargs,\n tmp.components,\n 'control_panel',\n 'Frame Manager')\n self.refresh_frames()\n self.refresh_edit_frames()\n\n def refresh_frames(self):\n \"\"\"\n This method is used to create the edit frames\n\n :return: None\n \"\"\"\n tmp = FrameWidget()\n self.make_notebook_tab(self,\n tmp.new_kwargs,\n tmp.new_components,\n 'frame_manager_frame',\n 'New Frame')\n\n self.make_notebook_tab(self,\n tmp.edit_kwargs,\n tmp.edit_components,\n 'frame_manager_frame',\n 'Edit Frames')\n self.make_notebook_tab(self,\n tmp.save_kwargs,\n tmp.save_components,\n 'frame_manager_frame',\n 'Save Project')\n\n def choose_frame(self, frame):\n \"\"\"\n This is used in the new frame tab to select whether the user wishes to make a new toplevel or a new frame\n\n :param frame: type of frame\n :type frame: str\n :return: None\n \"\"\"\n tmp = FrameWidget()\n for key in list(self.window.containers['make_frame_frame'].containers.keys()):\n if key 
is not 'type_dropdown':\n garbage = self.window.containers['make_frame_frame'].containers.pop(key)\n garbage.destroy()\n del garbage\n if frame == 'Toplevel':\n for item in tmp.new_toplevel_components:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['make_frame_frame'].add_widget(**item)\n elif frame == 'Frame':\n for item in tmp.new_frame_components:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['make_frame_frame'].add_widget(**item)\n\n # TODO: Allow frames to be added to other frames, and toplevels and vice-versa\n def add_frame(self, frame='make_frame_frame'):\n \"\"\"\n TODO: Validation of data\n This is used to add a frame to the root window\n\n :param frame: frame to pull the inputs from\n :return: None\n \"\"\"\n width = int(self.window.containers[frame].containers['width_input'].get())\n height = int(self.window.containers[frame].containers['height_input'].get())\n wid = self.window.containers[frame].containers['frame_id_input'].get()\n vert_base = int(self.window.containers[frame].containers['verticalbase_input'].get())\n horiz_base = int(self.window.containers[frame].containers['horizontalbase_input'].get())\n # TODO: tmp is currently just a HACK, used to quickly get it working, clean this up\n tmp = ''\n if frame == 'edit_frame_frame':\n tmp = 'f'\n vert = int(self.window.containers[frame].containers['vertical_checkbox' + tmp].get())\n horiz = int(self.window.containers[frame].containers['horizontal_checkbox' + tmp].get())\n inset_width = 0\n inset_height = 0\n if vert or horiz:\n inset_width = int(self.window.containers[frame].containers['insetwidth_input'].get())\n inset_height = int(self.window.containers[frame].containers['insetheight_input'].get())\n kwargs = {\n 'type': 
'frame',\n 'id': wid,\n 'base_location': {\n 'row': 0 if (vert or horiz) else vert_base,\n 'column': 0 if (vert or horiz) else horiz_base,\n 'rowspan': inset_height if (vert or horiz) else height,\n 'columnspan': inset_width if (vert or horiz) else width,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': True if vert else False,\n 'horizontal': True if horiz else False\n },\n 'scroll_window_size': {\n 'row': vert_base,\n 'column': horiz_base,\n 'columnspan': width,\n 'rowspan': height,\n 'sticky': 'NSWE'\n }\n }\n self.window.add_frame(**kwargs)\n self.frames[wid] = self.window.containers[wid]\n self.refresh_edit_frames()\n self.command_fetch()['refresh_add_widget']()\n\n def add_toplevel(self, frame='make_frame_frame'):\n \"\"\"\n This is used to add a toplevel to the root window\n\n :param frame: frame to pull the input data from\n :return: None\n \"\"\"\n width = int(self.window.containers[frame].containers['width_input'].get())\n height = int(self.window.containers[frame].containers['height_input'].get())\n title = self.window.containers[frame].containers['title_input'].get()\n wid = self.window.containers[frame].containers['frame_id_input'].get()\n kwargs = {'type': 'toplevel',\n 'title': title,\n 'id': wid,\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': height,\n 'columnspan': width,\n 'sticky': 'NSWE'\n },\n 'row_offset': 0,\n 'column_offset': 0\n }\n self.window.add_toplevel(**kwargs)\n self.window.containers[wid].app.protocol('WM_DELETE_WINDOW', self.leave_stop)\n self.frames[wid] = self.window.containers[wid]\n self.refresh_edit_frames()\n self.command_fetch()['refresh_add_widget']()\n\n def scroll_check(self, frame='make_frame_frame'):\n \"\"\"\n This method is called whenever a scroll-box is checked to create scrollable frames.\n This is used to determine if the inset_width and inset_height are neccesary, and if so to create the input\n boxes for them\n\n :param frame: frame to pull the input data from\n :return: None\n \"\"\"\n tmp = 
FrameWidget()\n tmp_list = []\n for item in list(tmp.scroll_kwargs):\n if item['id'] in list(self.window.containers[frame].containers.keys()):\n tmp_list.append(self.window.containers[frame].containers.pop(item['id']))\n for item in tmp_list:\n item.destroy()\n del item\n tmp1 = ''\n if frame == 'edit_frame_frame':\n tmp1 = 'f'\n vert = int(self.window.containers[frame].containers['vertical_checkbox' + tmp1].get())\n horiz = int(self.window.containers[frame].containers['horizontal_checkbox' + tmp1].get())\n if vert or horiz:\n for item in tmp.scroll_kwargs:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers[frame].add_widget(**item)\n\n def save_project(self):\n \"\"\"\n WARNING: DANGEROUS!\n\n IMPORTANT METHOD!!! DO NOT CHANGE UNLESS YOU KNOW WHAT YOU ARE DOING!\n\n\n This method is used to save the project. This is done by deleting anything currently in the project and\n then using the MultiBase.Generator class to generate the static pages.\n All data required to rebuild the Gui is also pickled and stored in the PROJECTBUILDER/project folder\n\n :return: None\n \"\"\"\n shutil.rmtree(self.root_path)\n os.mkdir(self.root_path)\n shutil.rmtree(os.path.join(self.src_path, 'Loader'), True)\n open(os.path.join(self.root_path, '__init__.py'), 'a').close()\n builder_dict = {key: [] for key in list(self.frames.keys())}\n for key in self.widget_args.keys():\n builder_dict[self.widget_args[key]['master']].append(self.widget_args[key])\n make = MultiGenerator(os.getcwd(), self.root_path)\n make.setup(**self.window.kwargs)\n for widget in builder_dict['root_window']:\n make.add_widget(**widget)\n for key in list(builder_dict.keys()):\n if key != 'root_window':\n make.add_frame(**self.frames[key].kwargs)\n for widget in builder_dict[key]:\n make.add_widget(**widget)\n os.mkdir(os.path.join(self.src_path, 'Loader'))\n\n f = 
open(os.path.join(self.src_path, 'Loader', 'builder_dict.p'), 'wb')\n pickle.dump(builder_dict, f)\n f.close()\n\n f = open(os.path.join(self.src_path, 'Loader', 'widget_args.p'), 'wb')\n pickle.dump(self.widget_args, f)\n f.close()\n\n frames = dict()\n for key in list(self.frames.keys()):\n if key is not 'root_window':\n frames[key] = self.copier(self.frames[key].kwargs)\n\n f = open(os.path.join(self.src_path, 'Loader', 'frames.p'), 'wb')\n pickle.dump(frames, f)\n f.close()\n\n make.finalize()\n\n def leave_stop(self):\n \"\"\"\n TODO: get rid of this, replace with lambda: None\n :return: None\n \"\"\"\n pass\n\n @staticmethod\n def copier(to_copy):\n \"\"\"\n WARNING: DO NOT CHANGE UNLESS YOU UNDERSTAND WHY THIS IS DONE THIS WAY.\n copy() doesn't work because it creates a shallow copy, deepcopy can't be called because it uses pickle\n\n\n TODO: Wouldn't it be nice.. frozen = pickle.freeze(some_tkinter_thing), pickle.dump(frozen) LOWEST PRIORITY EVER\n This method is used to copy the frame kwargs but strip out all tkinter objects that cannot be pickled.\n\n :param to_copy: dictionary to copy\n :type to_copy: dict\n :return: dict that is cleaned up\n \"\"\"\n tmp_dict = dict()\n for key in list(to_copy.keys()):\n if key not in ['owner', 'master']:\n tmp_dict[key] = to_copy[key]\n return tmp_dict\n"
},
{
"alpha_fraction": 0.541483461856842,
"alphanum_fraction": 0.5452441573143005,
"avg_line_length": 49.83529281616211,
"blob_id": "81ff1fccc49c81a7757bc43c4253b5428cdf2645",
"content_id": "884748de4bf703f3396a75609bda22eb64ba5682",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 17284,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 340,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/EditTab/EditTabBuild.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs.EditTab.EditWidgetTemplate import EditWidget\n\n\nclass EditTab(object):\n\n def __init__(self, **kwargs):\n \"\"\"\n This class controls the Edit Tab for widgets\n\n :param kwargs: All keyword args passed in by the MainGui\n \"\"\"\n self.window = kwargs['window']\n self.set_widget = kwargs['set_widget']\n self.edit_widget = kwargs['edit_widget']\n self.make_notebook_tab = kwargs['make_notebook_tab']\n self.grab_kwargs = kwargs['grab_kwargs']\n self.set_widget = kwargs['set_widget']\n self.edit_widget = kwargs['edit_widget']\n self.widget_args = kwargs['widget_args']\n self.popup_menu = kwargs['popup_menu']\n self.command_fetch = kwargs['command_fetch']\n self.is_int = kwargs['is_int']\n\n self.tmp_args = None\n self.commands = {\n 'selected_widget': None,\n 'nw_move': lambda: self.bump_move('nw'),\n 'n_move': lambda: self.bump_move('n'),\n 'ne_move': lambda: self.bump_move('ne'),\n 'w_move': lambda: self.bump_move('w'),\n 'c_move': lambda: self.bump_move('c'),\n 'e_move': lambda: self.bump_move('e'),\n 'sw_move': lambda: self.bump_move('sw'),\n 's_move': lambda: self.bump_move('s'),\n 'se_move': lambda: self.bump_move('se'),\n 'windoww_input': self.windoww_input(),\n 'windowh_input': self.windowh_input(),\n 'X_input': None,\n 'Y_input': None,\n 'move_submit': self.move_submit,\n\n 'nw_stretch': lambda: self.bump_stretch('nw'),\n 'n_stretch': lambda: self.bump_stretch('n'),\n 'ne_stretch': lambda: self.bump_stretch('ne'),\n 'w_stretch': lambda: self.bump_stretch('w'),\n 'c_stretch': lambda: self.bump_stretch('c'),\n 'e_stretch': lambda: self.bump_stretch('e'),\n 'sw_stretch': lambda: self.bump_stretch('sw'),\n 's_stretch': lambda: self.bump_stretch('s'),\n 'se_stretch': lambda: self.bump_stretch('se'),\n 'XS_input': None,\n 'YS_input': None,\n 'resize_submit': self.resize_submit\n }\n\n self.selected_args = None\n\n def edit_tab(self):\n \"\"\"\n This method creates the initial edit tab by calling the 
make_notebook_tab and handing in it's arguments\n\n :return: None\n \"\"\"\n tmp = EditWidget()\n self.make_notebook_tab(self,\n tmp.control_panel_kwargs,\n tmp.control_panel_components,\n 'control_panel',\n 'Edit Widget')\n\n def refresh_tab(self, selected):\n \"\"\"\n This method refreshes the edit tab after something has been changed\n\n :param selected: The currently selected widget to display information for\n :return: None\n \"\"\"\n if selected is None:\n for key in list(self.window.containers['edit_frame'].containers.keys()):\n garbage = self.window.containers['edit_frame'].containers.pop(key)\n garbage.destroy()\n del garbage\n else:\n self.commands['selected_widget'] = selected\n self.commands['X_input'] = self.x_input()\n self.commands['Y_input'] = self.y_input()\n self.commands['XS_input'] = self.xs_input()\n self.commands['YS_input'] = self.ys_input()\n for key in list(self.window.containers['edit_frame'].containers.keys()):\n garbage = self.window.containers['edit_frame'].containers.pop(key)\n garbage.destroy()\n del garbage\n\n tmp = EditWidget()\n widgets = tmp.control_panel_components\n for item in widgets:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['edit_frame'].add_widget(**item)\n\n self.make_notebook_tab(self,\n tmp.move_frame_kwargs,\n tmp.move_frame_components,\n 'edit_frame',\n 'Move Widget')\n\n self.make_notebook_tab(self,\n tmp.resize_frame_kwargs,\n tmp.resize_frame_components,\n 'edit_frame',\n 'Resize Widget')\n\n def bump_move(self, direction):\n \"\"\"\n This method bumps the widget in whichever direction the user chooses.\n\n :param direction: Direction to bump\n :type direction: str\n :return: None\n \"\"\"\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n window = self.tmp_args['master']\n if window == 'root_window':\n window = self.window\n else:\n window = 
self.window.containers[window]\n directions = list(direction)\n max_column = window.base_location['columnspan'] - self.tmp_args['location']['columnspan']\n max_row = window.base_location['rowspan'] - self.tmp_args['location']['rowspan']\n current_row = self.tmp_args['location']['row']\n current_column = self.tmp_args['location']['column']\n increment = self.is_int(self.window.containers['edit_move_frame'].containers['increment'].get())\n if increment is not False:\n for item in directions:\n if item == 'c':\n self.tmp_args['location']['row'] = max_row // 2\n self.tmp_args['location']['column'] = max_column // 2\n elif item == 'n':\n if current_row - increment >= 0:\n self.tmp_args['location']['row'] = current_row - increment\n else:\n self.tmp_args['location']['row'] = 0\n elif item == 'e':\n if current_column + increment <= max_column:\n self.tmp_args['location']['column'] = current_column + increment\n else:\n self.tmp_args['location']['column'] = max_column\n elif item == 's':\n if current_row + increment <= max_row:\n self.tmp_args['location']['row'] = current_row + increment\n else:\n self.tmp_args['location']['row'] = max_row\n elif item == 'w':\n if current_column - increment >= 0:\n self.tmp_args['location']['column'] = current_column - increment\n else:\n self.tmp_args['location']['column'] = 0\n window.add_widget(**self.tmp_args)\n self.widget_args[self.commands['selected_widget']] = self.tmp_args\n window.containers[self.commands['selected_widget']].widget.bind(\n 'Double-Button-1', lambda event, wid2=self.commands['selected_widget']: self.edit_widget(wid2))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<1>',\n lambda event, wid2=self.commands['selected_widget'], wind=window: self.set_widget(wid2, wind))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<Button-3>',\n lambda event, wid2=self.commands['selected_widget']: self.popup_menu.popup(event, wid2))\n 
self.window.containers['edit_move_frame'].containers['Y_input'].set(self.y_input())\n self.window.containers['edit_move_frame'].containers['X_input'].set(self.x_input())\n self.command_fetch()['refresh_edit']()\n\n def bump_stretch(self, direction):\n \"\"\"\n This method stretches the widget in whichever direction the user selects.\n IMPORTANT NOTE: Negative stretching IS allowed.\n\n :param direction: The direction to stretch\n :type direction: str\n :return: None\n \"\"\"\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n window = self.tmp_args['master']\n if window == 'root_window':\n window = self.window\n else:\n window = self.window.containers[window]\n directions = list(direction)\n max_column = window.base_location['columnspan']\n current_column = self.tmp_args['location']['column']\n current_columnspan = self.tmp_args['location']['columnspan']\n max_row = window.base_location['rowspan']\n current_row = self.tmp_args['location']['row']\n current_rowspan = self.tmp_args['location']['rowspan']\n increment = self.is_int(self.window.containers['edit_resize_frame'].containers['increment'].get(), -400)\n if increment is not False:\n for item in directions:\n if item == 'c':\n self.tmp_args['location']['columnspan'] = 1\n self.tmp_args['location']['rowspan'] = 1\n elif item == 'n':\n if current_row - increment >= 0 and current_rowspan + increment > 0:\n self.tmp_args['location']['row'] -= increment\n self.tmp_args['location']['rowspan'] = current_rowspan + increment\n else:\n if current_rowspan + increment <= 0:\n self.tmp_args['location']['rowspan'] = 1\n else:\n self.tmp_args['location']['row'] = 0\n self.tmp_args['location']['rowspan'] = current_rowspan + current_row\n elif item == 'e':\n if (current_column + increment + current_columnspan) <= max_column and \\\n (current_columnspan + increment) > 0:\n self.tmp_args['location']['columnspan'] = current_columnspan + increment\n else:\n if current_columnspan + increment <= 0:\n 
self.tmp_args['location']['columnspan'] = 1\n else:\n self.tmp_args['location']['columnspan'] = max_column - current_column\n elif item == 's':\n if current_row + increment + current_rowspan <= max_row and current_rowspan + increment > 0:\n self.tmp_args['location']['rowspan'] = current_rowspan + increment\n else:\n if current_rowspan + increment <= 0:\n self.tmp_args['location']['rowspan'] = 1\n else:\n self.tmp_args['location']['rowspan'] = max_row - current_row\n elif item == 'w':\n if current_column - increment >= 0 and current_columnspan + increment > 0:\n self.tmp_args['location']['column'] -= increment\n self.tmp_args['location']['columnspan'] = current_columnspan + increment\n else:\n if current_columnspan + increment <= 0:\n self.tmp_args['location']['columnspan'] = 1\n else:\n self.tmp_args['location']['column'] = 0\n self.tmp_args['location']['columnspan'] = current_columnspan + current_column\n window.add_widget(**self.tmp_args)\n self.widget_args[self.commands['selected_widget']] = self.tmp_args\n window.containers[self.commands['selected_widget']].widget.bind(\n 'Double-Button-1', lambda event, wid2=self.commands['selected_widget']: self.edit_widget(wid2))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<1>', lambda event, wid2=self.commands['selected_widget'], wind=window: self.set_widget(wid2, wind))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<Button-3>', lambda event, wid2=self.commands['selected_widget']: self.popup_menu.popup(event, wid2))\n self.window.containers['edit_resize_frame'].containers['YS_input'].set(self.ys_input())\n self.window.containers['edit_resize_frame'].containers['XS_input'].set(self.xs_input())\n self.command_fetch()['refresh_edit']()\n\n # TODO: The below always show the info for the Root window, not the frame/toplevel the widget is on. 
FIXME\n def windoww_input(self):\n \"\"\"\n Grabs the base window width\n :return: base window width\n \"\"\"\n return self.window.base_location['columnspan']\n\n def windowh_input(self):\n \"\"\"\n Grabs the base window height\n :return: base window height\n \"\"\"\n return self.window.base_location['rowspan']\n\n def x_input(self):\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n return self.tmp_args['location']['column']\n\n def y_input(self):\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n return self.tmp_args['location']['row']\n\n def xs_input(self):\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n return self.tmp_args['location']['columnspan']\n\n def ys_input(self):\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n return self.tmp_args['location']['rowspan']\n\n def resize_submit(self):\n \"\"\"\n This method is called by the resize submit button. This allows the user to manually resize the widget\n\n :return: None\n \"\"\"\n columnspan = self.is_int(self.window.containers['edit_resize_frame'].containers['XS_input'].get(), 1)\n rowspan = self.is_int(self.window.containers['edit_resize_frame'].containers['YS_input'].get(), 1)\n window = self.tmp_args['master']\n if columnspan and rowspan:\n if window == 'root_window':\n window = self.window\n else:\n window = self.window.containers[window]\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n self.tmp_args['location']['rowspan'] = rowspan\n self.tmp_args['location']['columnspan'] = columnspan\n window.add_widget(**self.tmp_args)\n self.widget_args[self.commands['selected_widget']] = self.tmp_args\n window.containers[self.commands['selected_widget']].widget.bind(\n 'Double-Button-1', lambda event, wid2=self.commands['selected_widget']: self.edit_widget(wid2))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<1>', lambda event, wid2=self.commands['selected_widget'], wind=window: 
self.set_widget(wid2, wind))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<Button-3>', lambda event, wid2=self.commands['selected_widget']: self.popup_menu.popup(event, wid2))\n self.window.containers['edit_resize_frame'].containers['YS_input'].set(self.ys_input())\n self.window.containers['edit_resize_frame'].containers['XS_input'].set(self.xs_input())\n self.command_fetch()['refresh_edit']()\n\n def move_submit(self):\n \"\"\"\n This method is called by the move submit button and is used to manually move the button\n :return: None\n \"\"\"\n column = self.is_int(self.window.containers['edit_move_frame'].containers['X_input'].get())\n row = self.is_int(self.window.containers['edit_move_frame'].containers['Y_input'].get())\n if column is not False and row is not False:\n self.tmp_args = self.grab_kwargs(self.commands['selected_widget'])\n window = self.tmp_args['master']\n if window == 'root_window':\n window = self.window\n else:\n window = self.window.containers[window]\n self.tmp_args['location']['row'] = row\n self.tmp_args['location']['column'] = column\n window.add_widget(**self.tmp_args)\n self.widget_args[self.commands['selected_widget']] = self.tmp_args\n window.containers[self.commands['selected_widget']].widget.bind(\n 'Double-Button-1', lambda event, wid2=self.commands['selected_widget']: self.edit_widget(wid2))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<1>', lambda event, wid2=self.commands['selected_widget'], wind=window: self.set_widget(wid2, wind))\n window.containers[self.commands['selected_widget']].widget.bind(\n '<Button-3>', lambda event, wid2=self.commands['selected_widget']: self.popup_menu.popup(event, wid2))\n self.window.containers['edit_move_frame'].containers['Y_input'].set(self.y_input())\n self.window.containers['edit_move_frame'].containers['X_input'].set(self.x_input())\n self.command_fetch()['refresh_edit']()\n"
},
{
"alpha_fraction": 0.8928571343421936,
"alphanum_fraction": 0.8928571343421936,
"avg_line_length": 83,
"blob_id": "97d236211237c2fc87d09f141eaabea3038e8e19",
"content_id": "55cbcef9999313a85e73b92aecb91a53c38b40f2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 84,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 1,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/FrameTab/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs.FrameTab.FrameTabBuild import FrameTab\n"
},
{
"alpha_fraction": 0.5588235259056091,
"alphanum_fraction": 0.5588235259056091,
"avg_line_length": 17.30769157409668,
"blob_id": "fcb3e56378296fe9632401210bc0b1e7fe6f1e8f",
"content_id": "cbbae36ffabb3b6c85222cce6130f0067dabeec8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 238,
"license_type": "permissive",
"max_line_length": 32,
"num_lines": 13,
"path": "/GuiBuilder/STARTUP/Install/LoadSettings.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import json\n\n\nclass SettingsLoader(object):\n\n def __init__(self, path):\n self.path = path\n\n def fetch_settings(self):\n f = open(self.path, 'r')\n settings = json.load(f)\n f.close()\n return settings\n"
},
{
"alpha_fraction": 0.8834951519966125,
"alphanum_fraction": 0.8834951519966125,
"avg_line_length": 67.66666412353516,
"blob_id": "8df456ee5c7222b85e9686e70442ce3524263128",
"content_id": "46be97021bda097fabc5fa39f6bafa50f79f084f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 206,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 3,
"path": "/GuiBuilder/PROJECTS/Demo/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.PROJECTS.Demo.Components.Builder_Helper import BuildHelper\nfrom GuiBuilder.PROJECTS.Demo.MainGuiTemplate import MainTemplate\nfrom GuiBuilder.PROJECTS.Demo.Components.Frames import Mainhello\n"
},
{
"alpha_fraction": 0.6577946543693542,
"alphanum_fraction": 0.6577946543693542,
"avg_line_length": 22.909090042114258,
"blob_id": "1ad2ea69e6002c7507e50d95e8b2c894da060827",
"content_id": "732ffb2e286fb89691422d837d32c3d326760365",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 263,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 11,
"path": "/MyPyWidgets/FileDialogClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from tkinter import filedialog\n\n\nclass FileDialog(object):\n\n def __init__(self, initialdir, file_type):\n if file_type == 'dir':\n self.choice = filedialog.askdirectory(initialdir=initialdir)\n\n def response(self):\n return self.choice\n"
},
{
"alpha_fraction": 0.4908505082130432,
"alphanum_fraction": 0.49458763003349304,
"avg_line_length": 42.8418083190918,
"blob_id": "ab74dff0dff582239f5179ceb9c653cba8ee3923",
"content_id": "4372c58082f986fe19003741b3a5e558d3f1819b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7760,
"license_type": "permissive",
"max_line_length": 116,
"num_lines": 177,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/NewTab/NewTabBuild.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs.NewTab.NewWidgetTemplate import NewWidget\nfrom MyPyWidgets import *\n\n\nclass NewTab(object):\n\n def __init__(self, **kwargs):\n \"\"\"\n This is the most important tab. This is used to add new widgets to the Gui.\n\n :param kwargs: Key-word arguments passed in by the main gui\n \"\"\"\n self.window = kwargs['window']\n self.set_widget = kwargs['set_widget']\n self.widget_args = kwargs['widget_args']\n self.edit_widget = kwargs['edit_widget']\n self.set_location = kwargs['set_location']\n self.make_notebook_tab = kwargs['make_notebook_tab']\n self.is_int = kwargs['is_int']\n self.is_alnum = kwargs['is_alnum']\n self.variable_namify = kwargs['variable_namify']\n self.frame_grab = kwargs['frame_grab']\n self.popup_menu = kwargs['popup_menu']\n self.command_fetch = kwargs['command_fetch']\n self.share_command = kwargs['share_command']\n self.share_command('refresh_add_widget', self.new_mode)\n\n self.commands = {\n 'builder_mode': self.new_tab,\n 'new_mode': self.new_mode,\n 'add_button': self.add_widget,\n 'frame_drop': self.frame_grab()\n }\n\n # This allows the objects to be instantiated correctly\n self.args_lookup = {\n 'Button': lambda x: [x, None],\n 'DropDown': lambda x: [x, [], None],\n 'InputField': lambda x: [x],\n 'Label': lambda x: [x],\n 'CheckButton': lambda x: [x, None],\n 'SpinBox': lambda x: [x, [], None],\n 'RadioButton': lambda x: [x, None, None]\n }\n\n self.widgets_lookup = {\n 'Button': Button,\n 'DropDown': DropDown,\n 'InputField': InputField,\n 'Label': Label,\n 'CheckButton': CheckButton,\n 'SpinBox': SpinBox,\n 'RadioButton': RadioButton\n }\n\n def new_tab(self):\n \"\"\"\n This is called in the main gui to create the new widget tab\n\n :return: None\n \"\"\"\n tmp = NewWidget()\n self.make_notebook_tab(self,\n tmp.window_kwargs,\n tmp.components,\n 'control_panel',\n 'Create Widget')\n\n def new_mode(self, *args):\n \"\"\"\n Called by the dropdown, selects which type of widget to 
create\n\n :param args: a list from the dropdown that holds the widget type\n :type args: list\n :return: None\n \"\"\"\n del args\n widgets = NewWidget().new_components\n for item in widgets:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n self.window.containers['new_frame'].add_widget(**item)\n\n def add_widget(self, *args):\n \"\"\"\n TODO: Get rid of *args, replace with lambda x: call()\n This adds the widget to the gui. It pulls in the data for creation from the Gui, validates it, and makes the\n widget\n\n :param args: Doesn't matter\n :return: None\n \"\"\"\n del args\n choices = {'new_mode': lambda x: x,\n 'width_input': lambda x: self.is_int(x, 1),\n 'height_input': lambda x: self.is_int(x, 1),\n 'vertical_input': lambda x: self.is_int(x),\n 'horizontal_input': lambda x: self.is_int(x),\n 'id_input': lambda x: self.variable_namify(x),\n 'frame_drop': lambda x: x if x != 'Master Frame (Default: root_window)' else 'root_window'\n }\n flag = True\n for key in list(choices.keys()):\n choices[key] = choices[key](self.window.containers['new_frame'].containers[key].get())\n if choices[key] is False:\n flag = False\n if choices['id_input'] in list(self.widget_args.keys()):\n flag = False\n if flag:\n args = {\n 'master': choices['frame_drop'],\n 'id': choices['id_input'],\n 'widget': self.widgets_lookup[choices['new_mode']],\n 'args': self.args_lookup[choices['new_mode']](choices['id_input']),\n 'location': {\n 'row': choices['vertical_input'],\n 'column': choices['horizontal_input'],\n 'rowspan': choices['height_input'],\n 'columnspan': choices['width_input'],\n 'sticky': 'NSWE'\n }\n }\n\n if choices['frame_drop'] == 'root_window':\n self.window.add_widget(**args)\n self.widget_args[choices['id_input']] = args\n self.window.containers[choices['id_input']].widget.bind(\n '<1>', lambda event, wid2=choices['id_input'], wind=self.window: 
self.set_widget(wid2, wind))\n self.window.containers[choices['id_input']].widget.bind('<Double-Button-1>',\n lambda event, wid2=choices['id_input']:\n self.edit_widget(wid2))\n self.window.containers[choices['id_input']].widget.bind('<Button-3>',\n lambda event, wid2=choices['id_input']:\n self.popup_menu.popup(event, wid2))\n else:\n self.window.containers[choices['frame_drop']].add_widget(**args)\n self.widget_args[choices['id_input']] = args\n window = self.window.containers[choices['frame_drop']]\n self.window.containers[\n choices['frame_drop']\n ].containers[choices['id_input']].widget.bind(\n '<1>', lambda event, wid2=choices['id_input'], wind=window: self.set_widget(wid2, wind))\n\n self.window.containers[choices['frame_drop']].containers[choices['id_input']].widget.bind(\n '<Double-Button-1>',\n lambda event, wid2=choices['id_input']:\n self.edit_widget(wid2))\n\n self.window.containers[choices['frame_drop']].containers[choices['id_input']].widget.bind(\n '<Button-3>',\n lambda event, wid2=choices['id_input']:\n self.popup_menu.popup(event, wid2))\n iter_id = int(self.window.containers['new_frame'].containers['iter_id'].get())\n if iter_id:\n tmp = str(args['id'])\n num = ''\n var = ''\n for item in tmp[::-1]:\n if item.isdigit():\n num += item\n else:\n var += item\n num = '0' if num == '' else num\n var = var[::-1] + str(int(num[::-1])+1)\n self.window.containers['new_frame'].containers['id_input'].set(var)\n iter_loc = int(self.window.containers['new_frame'].containers['iter_loc'].get())\n if iter_loc:\n tmp_horiz = args['location']['column'] + 10\n tmp_vert = args['location']['row'] + 10\n self.window.containers['new_frame'].containers['horizontal_input'].set(tmp_horiz)\n self.window.containers['new_frame'].containers['vertical_input'].set(tmp_vert)\n self.command_fetch()['refresh_edit']()\n"
},
{
"alpha_fraction": 0.5219298005104065,
"alphanum_fraction": 0.5233917832374573,
"avg_line_length": 21.83333396911621,
"blob_id": "b694c2d6060bda59d5c0ac57f71375e4461fb8fc",
"content_id": "33dd2197dfbbd1db4f3ee92dd9a70099831fafa1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 684,
"license_type": "permissive",
"max_line_length": 45,
"num_lines": 30,
"path": "/MyPyWidgets/ButtonClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter.ttk as ttk\n\n\nclass Button(ttk.Button):\n\n def __init__(self, frame, text, command):\n self.location = None\n super().__init__(master=frame,\n text=text,\n command=command,\n width=1)\n self.widget = self\n\n def get(self):\n return self['text']\n\n def set(self, value):\n self['text'] = value\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n\n def read(self):\n self.configure({'state': 'disabled'})\n\n def write(self):\n self.configure({'state': 'normal'})"
},
{
"alpha_fraction": 0.848739504814148,
"alphanum_fraction": 0.848739504814148,
"avg_line_length": 58.5,
"blob_id": "f9071d13ed8e1687b2c77ba450676ca98fa06eec",
"content_id": "b312e4a7deb8bb679dc4f4fb4e2b8290dbada26d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 119,
"license_type": "permissive",
"max_line_length": 61,
"num_lines": 2,
"path": "/GuiBuilder/PROJECTS/Demo/Components/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.PROJECTS.Demo.Components.MainWidgets import *\nfrom GuiBuilder.PROJECTS.Demo.Components.Frames import *\n"
},
{
"alpha_fraction": 0.8617414236068726,
"alphanum_fraction": 0.884960412979126,
"avg_line_length": 89.23809814453125,
"blob_id": "032f9166e4c491305a598389c05c268e7c920e5c",
"content_id": "5dcef1e070154017bb281cdf255e3fd87eeca5d2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1895,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 21,
"path": "/GuiBuilder/PROJECTS/Demo/Components/MainWidgets/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_tmp import Buttontmp\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_tmp2 import Buttontmp2\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_hello import Buttonhello\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_hello1 import Buttonhello1\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button0 import Buttonbutton0\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button1 import Buttonbutton1\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button2 import Buttonbutton2\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button3 import Buttonbutton3\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button4 import Buttonbutton4\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button5 import Buttonbutton5\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button6 import Buttonbutton6\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button7 import Buttonbutton7\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button8 import Buttonbutton8\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button9 import Buttonbutton9\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button10 import Buttonbutton10\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button11 import Buttonbutton11\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_button12 import Buttonbutton12\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_calc_button0 import Buttoncalc_button0\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_calc_button1 import Buttoncalc_button1\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_calc_button2 import Buttoncalc_button2\nfrom GuiBuilder.PROJECTS.Demo.Components.MainWidgets.Button_calc_button3 import Buttoncalc_button3\n"
},
{
"alpha_fraction": 0.8981817960739136,
"alphanum_fraction": 0.8981817960739136,
"avg_line_length": 67.75,
"blob_id": "4ebc2cf60225df797cb0c68e718d74084e7c3404",
"content_id": "25a1646b318b63917f8167f51d6f45d31059d576",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 275,
"license_type": "permissive",
"max_line_length": 69,
"num_lines": 4,
"path": "/GuiBuilder/STARTUP/Install/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.STARTUP.Install.CreateProjects import InstallProjects\nfrom GuiBuilder.STARTUP.Install.CreateSettings import InstallSettings\nfrom GuiBuilder.STARTUP.Install.LoadSettings import SettingsLoader\nfrom GuiBuilder.STARTUP.Install.DeleteProjects import DeleteProject\n"
},
{
"alpha_fraction": 0.27878689765930176,
"alphanum_fraction": 0.3037625551223755,
"avg_line_length": 27.155250549316406,
"blob_id": "f424d0a2c11def5f52853e9779720fc5e3fedf20",
"content_id": "bf1be8a0f3c6dc994181776b8a7272175d9bd91b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6166,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 219,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/NewTab/NewWidgetTemplate.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets import DropDown, InputField, Label, Button, CheckButton\nframe_height = 350\n\n\nclass NewWidget(object):\n\n def __init__(self):\n self.window_kwargs = {\n 'type': 'frame',\n 'id': 'new_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': frame_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.components = [\n\n {'id': 'new_mode',\n 'widget': DropDown,\n 'args': ['Widget Type', ['Button',\n 'DropDown',\n 'InputField',\n 'Label',\n 'CheckButton',\n 'SpinBox',\n 'RadioButton'],\n None],\n 'location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_label',\n 'widget': Label,\n 'args': ['Width: (Pixels)'],\n 'location': {\n 'row': 30,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'width_input',\n 'widget': InputField,\n 'args': [100],\n 'location': {\n 'row': 30,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_label',\n 'widget': Label,\n 'args': ['Height: (Pixels)'],\n 'location': {\n 'row': 60,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'height_input',\n 'widget': InputField,\n 'args': [25],\n 'location': {\n 'row': 60,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'vertical_label',\n 'widget': Label,\n 'args': ['Vertical Base (Pixels)'],\n 'location': {\n 'row': 90,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'vertical_input',\n 'widget': InputField,\n 'args': [0],\n 'location': {\n 'row': 90,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontal_label',\n 'widget': Label,\n 'args': 
['Horizontal Base (Pixels)'],\n 'location': {\n 'row': 120,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'horizontal_input',\n 'widget': InputField,\n 'args': [0],\n 'location': {\n 'row': 120,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'id_label',\n 'widget': Label,\n 'args': ['Widget Programmer ID'],\n 'location': {\n 'row': 150,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'id_input',\n 'widget': InputField,\n 'args': ['tmp'],\n 'location': {\n 'row': 150,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'iter_id',\n 'widget': CheckButton,\n 'args': ['Iterative ID', lambda: None],\n 'location': {\n 'row': 180,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'iter_loc',\n 'widget': CheckButton,\n 'args': ['Iterative Location', lambda: None],\n 'location': {\n 'row': 180,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.new_components = [\n {'id': 'frame_drop',\n 'widget': DropDown,\n 'args': ['Master Frame (Default: root_window)', None, lambda x: None],\n 'location': {\n 'row': 210,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'add_button',\n 'widget': Button,\n 'args': ['Add Widget', None],\n 'location': {\n 'row': 240,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n"
},
{
"alpha_fraction": 0.7001718282699585,
"alphanum_fraction": 0.7018900513648987,
"avg_line_length": 54.42856979370117,
"blob_id": "b9493003b10f4fbca869c3882a30f03cb5a51dea",
"content_id": "e25b3816136f51de4592819ba4b2cec0c4517e2a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1164,
"license_type": "permissive",
"max_line_length": 117,
"num_lines": 21,
"path": "/setup.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from setuptools import setup\n\nsetup(\n name='MyPyBuilder',\n version='1.0',\n packages=['GuiBuilder', 'GuiBuilder.BUILDER', 'GuiBuilder.BUILDER.PROJECTBUILDER',\n 'GuiBuilder.BUILDER.PROJECTBUILDER.Stuffs', 'GuiBuilder.BUILDER.PROJECTBUILDER.Wicked',\n 'GuiBuilder.BUILDER.ProjectTemplate', 'GuiBuilder.BUILDER.ProjectTemplate.Tabs',\n 'GuiBuilder.BUILDER.ProjectTemplate.Tabs.NewTab', 'GuiBuilder.BUILDER.ProjectTemplate.Tabs.EditTab',\n 'GuiBuilder.BUILDER.ProjectTemplate.Tabs.FrameTab', 'GuiBuilder.STARTUP', 'GuiBuilder.STARTUP.Install',\n 'GuiBuilder.STARTUP.MainStartup', 'GuiBuilder.PROJECTS', 'GuiBuilder.PROJECTS.Stuffs',\n 'GuiBuilder.PROJECTS.Stuffs.Components', 'GuiBuilder.PROJECTS.Stuffs.Components.MainWidgets',\n 'GuiBuilder.PROJECTS.Wicked', 'GuiBuilder.PROJECTS.Wicked.Components',\n 'GuiBuilder.PROJECTS.Wicked.Components.MainWidgets', 'MyPyWidgets'],\n url='www.nothingyet.com',\n license='LICENSE',\n author='Tristen Harr',\n author_email='[email protected]',\n description='Drag and Drop Tkinter Gui Builder',\n longdescription='README.rst'\n)\n"
},
{
"alpha_fraction": 0.2728773057460785,
"alphanum_fraction": 0.2991931140422821,
"avg_line_length": 26.095651626586914,
"blob_id": "c414e2850f6d825ac51111e10e67f5ecab777a87",
"content_id": "8cdf79bb5a05b7a29bed278da4ef327d85dd3701",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 21812,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 805,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/FrameTab/FrameWidgetTemplate.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets import Label, DropDown, InputField, Button, CheckButton, NoteBook\nframe_height = 350\ntab_height = 300\n\n\nclass FrameWidget(object):\n\n def __init__(self):\n self.window_kwargs = {\n 'type': 'frame',\n 'id': 'frame_manager_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': frame_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.components = [\n {'id': 'selected_frame',\n 'widget': Label,\n 'args': ['MORE COMING SOON'],\n 'location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n },\n 'config': {'relief': 'sunken'}\n },\n\n {'id': 'refresh_edit',\n 'widget': Button,\n 'args': ['Refresh Frames', None],\n 'location': {\n 'row': 0,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'notebook',\n 'widget': NoteBook,\n 'args': [],\n 'location': {\n 'row': 30,\n 'column': 0,\n 'rowspan': tab_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.new_kwargs = {\n 'type': 'frame',\n 'id': 'make_frame_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': tab_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.new_components = [\n {'id': 'type_dropdown',\n 'widget': DropDown,\n 'args': ['Frame Type', ['Toplevel',\n 'Frame'],\n None],\n 'location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.new_toplevel_components = [\n {'id': 'master_dropdown',\n 'widget': DropDown,\n 'args': ['root_window', None, lambda x: None],\n 'location': {\n 'row': 30,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'config': {'state': 'disabled'}\n },\n\n {'id': 'frame_id_label',\n 'widget': Label,\n 'args': ['Toplevel ID'],\n 'location': {\n 'row': 60,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 
200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'frame_id_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 60,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_label',\n 'widget': Label,\n 'args': ['Toplevel Height'],\n 'location': {\n 'row': 90,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 90,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_label',\n 'widget': Label,\n 'args': ['Toplevel Width'],\n 'location': {\n 'row': 120,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 120,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'title_label',\n 'widget': Label,\n 'args': ['Toplevel Title'],\n 'location': {\n 'row': 150,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'title_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 150,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'add_toplevel',\n 'widget': Button,\n 'args': ['Add Toplevel', None],\n 'location': {\n 'row': 180,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.new_frame_components = [\n {'id': 'master_dropdown',\n 'widget': DropDown,\n 'args': ['root_window', None, lambda x: None],\n 'location': {\n 'row': 30,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'config': {'state': 'disabled'}\n },\n\n {'id': 'frame_id_label',\n 'widget': Label,\n 'args': ['Frame ID'],\n 'location': {\n 'row': 60,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'frame_id_input',\n 'widget': 
InputField,\n 'args': [],\n 'location': {\n 'row': 60,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_label',\n 'widget': Label,\n 'args': ['Frame Width'],\n 'location': {\n 'row': 90,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 90,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_label',\n 'widget': Label,\n 'args': ['Frame Height'],\n 'location': {\n 'row': 120,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 120,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'verticalbase_label',\n 'widget': Label,\n 'args': ['Vertical Base (pixels)'],\n 'location': {\n 'row': 150,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'verticalbase_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 150,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontalbase_label',\n 'widget': Label,\n 'args': ['Horizontal Base (Pixels)'],\n 'location': {\n 'row': 180,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontalbase_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 180,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'vertical_checkbox',\n 'widget': CheckButton,\n 'args': ['Vertical Scroll', None],\n 'location': {\n 'row': 210,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontal_checkbox',\n 'widget': CheckButton,\n 'args': ['Horiz. 
Scroll', None],\n 'location': {\n 'row': 210,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'add_frame',\n 'widget': Button,\n 'args': ['Add Frame', None],\n 'location': {\n 'row': 270,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n\n ]\n\n self.scroll_kwargs = [\n {'id': 'insetwidth_label',\n 'widget': Label,\n 'args': ['Inset-Width'],\n 'location': {\n 'row': 240,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'insetwidth_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 240,\n 'column': 100,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'insetheight_label',\n 'widget': Label,\n 'args': ['Inset-Height'],\n 'location': {\n 'row': 240,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'insetheight_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 240,\n 'column': 300,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.edit_kwargs = {\n 'type': 'frame',\n 'id': 'edit_frame_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': tab_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.edit_components = [\n {'id': 'choose_frame',\n 'widget': DropDown,\n 'args': ['Select Frame', None, 'hotfix'],\n 'location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.edit_frame = [\n {'id': 'frame_id_label',\n 'widget': Label,\n 'args': ['Frame ID'],\n 'location': {\n 'row': 60,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'frame_id_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 60,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_label',\n 
'widget': Label,\n 'args': ['Frame Width'],\n 'location': {\n 'row': 90,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 90,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_label',\n 'widget': Label,\n 'args': ['Frame Height'],\n 'location': {\n 'row': 120,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 120,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'verticalbase_label',\n 'widget': Label,\n 'args': ['Vertical Base (pixels)'],\n 'location': {\n 'row': 150,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'verticalbase_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 150,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontalbase_label',\n 'widget': Label,\n 'args': ['Horizontal Base (Pixels)'],\n 'location': {\n 'row': 180,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontalbase_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 180,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'vertical_checkboxf',\n 'widget': CheckButton,\n 'args': ['Vertical Scroll', None],\n 'location': {\n 'row': 210,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'horizontal_checkboxf',\n 'widget': CheckButton,\n 'args': ['Horiz. 
Scroll', None],\n 'location': {\n 'row': 210,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'delete_frame',\n 'widget': Button,\n 'args': ['Delete Frame', None],\n 'location': {\n 'row': 240,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'edit_frame',\n 'widget': Button,\n 'args': ['Reconfigure Frame', None],\n 'location': {\n 'row': 270,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.edit_toplevel = [\n {'id': 'frame_id_label',\n 'widget': Label,\n 'args': ['Toplevel ID'],\n 'location': {\n 'row': 60,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'frame_id_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 60,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_label',\n 'widget': Label,\n 'args': ['Toplevel Height'],\n 'location': {\n 'row': 90,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'height_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 90,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_label',\n 'widget': Label,\n 'args': ['Toplevel Width'],\n 'location': {\n 'row': 120,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'width_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 120,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'title_label',\n 'widget': Label,\n 'args': ['Toplevel Title'],\n 'location': {\n 'row': 150,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'title_input',\n 'widget': InputField,\n 'args': [],\n 'location': {\n 'row': 150,\n 'column': 200,\n 'rowspan': 25,\n 'columnspan': 200,\n 'sticky': 'NSWE'\n }\n 
},\n\n {'id': 'delete_toplevel',\n 'widget': Button,\n 'args': ['Delete Toplevel', None],\n 'location': {\n 'row': 210,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n },\n\n {'id': 'edit_toplevel',\n 'widget': Button,\n 'args': ['Reconfigure Toplevel', None],\n 'location': {\n 'row': 240,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.edit_root = [\n {'id': 'tmp_label',\n 'widget': Label,\n 'args': ['Coming Soon'],\n 'location': {\n 'row': 60,\n 'column': 0,\n 'rowspan': 50,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n\n self.save_kwargs = {\n 'type': 'frame',\n 'id': 'save_frame_frame',\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': tab_height,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n },\n 'scroll': {\n 'vertical': False,\n 'horizontal': False\n }\n }\n\n self.save_components = [\n {'id': 'save_project',\n 'widget': Button,\n 'args': ['Save Project', None],\n 'location': {\n 'row': 10,\n 'column': 0,\n 'rowspan': 50,\n 'columnspan': 400,\n 'sticky': 'NSWE'\n }\n }\n ]\n"
},
{
"alpha_fraction": 0.6336178779602051,
"alphanum_fraction": 0.6336178779602051,
"avg_line_length": 37.07500076293945,
"blob_id": "b593ae59b738481420342f69f10276a4d891ae57",
"content_id": "5fd6e08ed08dccd9db6846eefad63583e3a6a190",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1523,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 40,
"path": "/GuiBuilder/PROJECTS/Demo/MainGui.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets import *\nfrom GuiBuilder.PROJECTS.Demo import *\n\n\nclass Gui(object):\n\n def __init__(self):\n self.main = MainTemplate(self)\n self.main.window = MyPyWindow(**self.main.widget)\n self.main_window = self.main.window\n self.main_components = self.main.components\n self.structure = BuildHelper()\n self.structure_components = self.structure.components\n\n self.hello = Mainhello(self)\n self.hello.window = None\n self.hello_window = None\n self.hello_components = self.hello.components\n\n # &FRAMES\n def run(self):\n for widget in self.structure_components['root_window']:\n self.main_components[widget.__name__] = widget(self.main)\n self.main_window.add_widget(**self.main_components[widget.__name__].widget)\n self.main_window.setup()\n self.main_window.run()\n\n def show_hello(self):\n self.hello.widget['master'] = self.main_window\n if self.hello.widget['type'] == 'toplevel':\n self.main_window.add_toplevel(**self.hello.widget)\n else:\n self.main_window.add_frame(**self.hello.widget)\n self.hello.window = self.main_window.containers[self.hello.widget['id']]\n self.hello_window = self.hello.window\n for widget in self.structure_components['hello']:\n self.hello_components[widget.__name__] = widget(self.hello)\n self.hello_window.add_widget(**self.hello_components[widget.__name__].widget)\n\n # &SHOWFRAME\n"
},
{
"alpha_fraction": 0.6078431606292725,
"alphanum_fraction": 0.6078431606292725,
"avg_line_length": 21.66666603088379,
"blob_id": "0359c46461e75e6187aeb9f979a1dc7792cb0cb3",
"content_id": "29d7d0a131d512a10c2c297c1cbc8815c4118f89",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 408,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 18,
"path": "/MyPyWidgets/NoteBookClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter.ttk as ttk\n\n\nclass NoteBook(ttk.Notebook):\n\n def __init__(self, frame):\n self.location = None\n super().__init__(master=frame)\n self.widget = self\n\n def add_tab(self, frame, tab_id):\n self.widget.add(frame, text=tab_id)\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n"
},
{
"alpha_fraction": 0.5343915224075317,
"alphanum_fraction": 0.5361552238464355,
"avg_line_length": 21.68000030517578,
"blob_id": "a025b51a9717fc3a0fa01cdbae4e688565d05c74",
"content_id": "05a83dff7fa848fddd743b2033a655d13f31768d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 567,
"license_type": "permissive",
"max_line_length": 47,
"num_lines": 25,
"path": "/MyPyWidgets/LabelClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\n\n\nclass Label(tk.Label):\n\n def __init__(self, frame, text):\n self.location = None\n self.var = tk.StringVar()\n super().__init__(master=frame,\n textvariable=self.var,\n width=1)\n self.var.set(text)\n self.widget = self\n\n def set(self, value):\n self.var.set(value)\n\n def get(self):\n return self.var.get()\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n"
},
{
"alpha_fraction": 0.89241623878479,
"alphanum_fraction": 0.8941798806190491,
"avg_line_length": 46.25,
"blob_id": "ea3b98cb23dc62b35dfd71c24d4509cf0a9155d3",
"content_id": "043ccd751cd22f1410d5bc166d453edc460514b3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 567,
"license_type": "permissive",
"max_line_length": 52,
"num_lines": 12,
"path": "/MyPyWidgets/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from MyPyWidgets.ButtonClass import Button\nfrom MyPyWidgets.SpinBoxClass import SpinBox\nfrom MyPyWidgets.LabelClass import Label\nfrom MyPyWidgets.InputFieldClass import InputField\nfrom MyPyWidgets.CheckButtonClass import CheckButton\nfrom MyPyWidgets.DropDownClass import DropDown\nfrom MyPyWidgets.MyPyWindow3Class import MyPyWindow\nfrom MyPyWidgets.NoteBookClass import NoteBook\nfrom MyPyWidgets.RadioButtonClass import RadioButton\nfrom MyPyWidgets.MenuClass import Menu\nfrom MyPyWidgets.Validators import Validator\nfrom MyPyWidgets.FileDialogClass import FileDialog\n"
},
{
"alpha_fraction": 0.4833659529685974,
"alphanum_fraction": 0.48532289266586304,
"avg_line_length": 25.894737243652344,
"blob_id": "a324ce87c197d99cdcebf3f41859985af5e3af9b",
"content_id": "badd6ed00e8f5ba746d4d466889544671a26fac3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1533,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 57,
"path": "/MyPyWidgets/Validators.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import os\n\n\nclass Validator(object):\n\n @staticmethod\n def file_namify(data):\n removal = [' ', ',', '\"', \"'\", '\\\\', '/', '-', '.', '[', ']', '(', ')']\n if data[0].isalpha():\n for item in removal:\n data = data.replace(item, '')\n return data.lower().capitalize()\n else:\n return False\n\n @staticmethod\n def is_int(data, default=0):\n if data.lstrip('-').isdigit():\n if int(data) >= default:\n return int(data)\n else:\n return False\n else:\n return False\n\n @staticmethod\n def is_alpha(data, index=0):\n if data[index].isalpha():\n return data.strip()\n else:\n return False\n\n @staticmethod\n def not_empty(data):\n if data is not '':\n return data\n else:\n return False\n\n @staticmethod\n def field_retrieve(form, fields, validators, validator_args):\n form_data = {'_valid': True}\n for i, validator in enumerate(validators):\n if validator is None:\n form_data[fields[i]] = form[fields[i]].get()\n else:\n form_data[fields[i]] = validator(form[fields[i]].get(), *validator_args[i])\n if form_data[fields[i]] is False:\n form_data['_valid'] = False\n return form_data\n\n @staticmethod\n def is_path(data):\n if os.path.exists(data):\n return data\n else:\n return False\n"
},
{
"alpha_fraction": 0.5291962027549744,
"alphanum_fraction": 0.5334473252296448,
"avg_line_length": 39.28543472290039,
"blob_id": "45f2c22364ca54359b80697c2198fff5f31a73f5",
"content_id": "e970538dee789f457c929364d676ec0789c512a7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 20465,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 508,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/RootTemplate.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs import *\nfrom MyPyWidgets import MyPyWindow, Button, Menu\nimport os\nimport pickle\n\n# IMPORT START\n# IMPORT END\n\n\ndef rerun():\n from __main__ import main\n main()\n\n\nclass NameGui(object):\n\n def __init__(self, root_path, src_path):\n \"\"\"\n Builder Class\n This class controls the GuiBuilder.\n This file is copied to the PROJECTBUILDER folder and renamed, and then ran whenever a project is created, or\n edited.\n\n :param root_path: The location to store the generated static application Ex. BASE_PATH/GuiBuilder/PROJECTS\n :type root_path: str\n :param src_path: Path to the PROJECTBUILDER folder Ex. BASE_PATH/GuiBuilder/BUILDER/PROJECTBUILDER\n :type src_path: str\n \"\"\"\n self.root_path = root_path\n self.src_path = src_path\n self.window = None\n self.selected = None\n self.selected_edit = None\n self.sharrre_intt = None\n self.window_kwargs = {\n# WINDOW\n }\n self.required_values = {}\n self.widget_args = {}\n self.new_tab = None\n self.edit_tab = None\n self.frame_tab = None\n self.frames = {}\n\n # DEV\n self.popup_menu = None\n # END DEV\n\n # BUILDER START\n self.start_project = {'id': 'start_project',\n 'widget': Button,\n 'args': ['Start Project', self.build],\n 'location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': 25,\n 'columnspan': 100,\n 'sticky': 'NSWE'\n }\n }\n\n self.shared = {}\n\n # BUILDER END\n\n def run(self):\n \"\"\"\n Runs the application. This method creates the root MyPyWindow and adds the widgets it owns\n\n :return: None\n \"\"\"\n self.window = MyPyWindow(**self.window_kwargs)\n self.window.setup()\n self.window.app.protocol('WM_DELETE_WINDOW', self.stop_leave)\n self.frames['root_window'] = self.window\n # BUILD START\n # BUILD END\n self.popup_menu = Menu(self.window,\n [['Delete', self.delete_selected]])\n self.window.add_widget(**self.start_project)\n self.window.run()\n\n def delete_selected(self):\n \"\"\"\n Deletes the currently selected widget. 
Right click a widget, and select delete.\n\n :return: None\n \"\"\"\n self.frames[self.widget_args[self.popup_menu.selected]['master']].destroy_item(self.popup_menu.selected)\n garbage = self.widget_args.pop(self.popup_menu.selected)\n del garbage\n self.popup_menu.selected = None\n self.edit_tab.refresh_tab(None)\n self.window.containers['control_panel'].containers['notebook'].select(0)\n\n def build(self):\n \"\"\"\n This method is called after the 'start project' button is clicked. This project handles the creation of\n the builder tabs and also manages data-pipelines throughout the application.\n\n :return: None\n \"\"\"\n self.control_panel()\n kwargs = {\n 'window': self.window,\n 'set_widget': self.set_widget,\n 'widget_args': self.widget_args,\n 'edit_widget': self.edit_widget,\n 'make_notebook_tab': self.make_notebook_tab,\n 'set_location': self.set_location,\n 'drag_move_widget': self.drag_move_widget,\n 'is_int': self.is_int,\n 'is_alnum': self.is_alnum,\n 'variable_namify': self.variable_namify,\n 'grab_kwargs': self.grab_kwargs,\n 'frame_grab': self.frame_grab,\n 'frames': self.frames,\n 'root_path': self.root_path,\n 'popup_menu': self.popup_menu,\n 'share_command': self.command_share,\n 'command_fetch': self.command_fetch,\n 'src_path': self.src_path,\n 'really': self.really\n }\n self.new_tab = NewTab(**kwargs)\n self.new_tab.new_tab()\n self.edit_tab = EditTab(**kwargs)\n self.edit_tab.edit_tab()\n self.frame_tab = FrameTab(**kwargs)\n self.frame_tab.frame_tab()\n self.load_in()\n\n def command_share(self, name, command):\n \"\"\"\n This method is used to pass commands that are used by tabs, throughout the rest of the application.\n The shared dictionary is accessible to all tabs that need it, and it is primarily used to force refresh\n different builder tabs to prevent errors from things such as trying to move a widget that no longer exists\n\n :param name: name of command\n :type name: str\n :param command: command\n :type command: object\n 
:return: None\n \"\"\"\n self.shared[name] = command\n\n def command_fetch(self):\n \"\"\"\n This is used to refresh the shared commands\n\n :return: None\n \"\"\"\n return self.shared\n\n def grab_kwargs(self, wid):\n return self.widget_args[wid]\n\n def control_panel(self):\n \"\"\"\n This method generates the main notebook in the Builder Window and adds its widgets\n\n :return: None\n \"\"\"\n tmp = ControlPanel()\n window_args = tmp.window_kwargs\n self.window.add_toplevel(**window_args)\n self.window.containers['control_panel'].add_widget(**tmp.components)\n self.window.containers['control_panel'].app.protocol('WM_DELETE_WINDOW', lambda: None)\n self.window.destroy_item('start_project')\n\n def edit_widget(self, wid):\n \"\"\"\n This method is called to set the widget to be edited, and by default anytime a widget is selected it\n selects the Edit->Move tab in the notebook\n\n :param wid: Widget ID\n :type wid: str\n :return: None\n \"\"\"\n self.selected_edit = wid\n self.window.containers['control_panel'].containers['notebook'].select(1)\n self.edit_tab.refresh_tab(self.selected_edit)\n\n def set_widget(self, wid, window):\n \"\"\"\n This method is used as one part of the drag-and-drop process.\n This is called initially to set the selected widget to wid. 
It also makes it possible to drag widget on top\n of Frames by binding the widget itself and then passing through the events to the Frame itself\n\n :param wid: Widget ID\n :type wid: str\n :param window: MyPyWindow that owns widget\n :type window: MyPyWindow\n :return: None\n \"\"\"\n self.selected = wid\n window.dragger.bind('<B1-Motion>', lambda event, wid2=wid, wind=window: self.drag_move_widget(wid2, wind))\n window.dragger.bind('<ButtonRelease-1>', lambda event, wid2=wid, wind=window: self.set_location(wid2, wind))\n if self.frames[self.widget_args[wid]['master']].type == 'frame':\n window.containers[wid].bind('<B1-Motion>',\n lambda event, wind=window, wid2=wid: self.frame_drag_helper(event, wind))\n window.containers[wid].bind('<ButtonRelease-1>',\n lambda event, wid2=wid, wind=window: self.set_location(wid2, wind))\n self.command_fetch()\n\n def drag_move_widget(self, wid, window):\n \"\"\"\n This method is used to actually move the widgets.\n This method gets the current location of the widget, and by simply changing the parameters and adding the\n widget again, the other widget is automatically deleted and replaced.\n\n :param wid: Widget ID\n :type wid: str\n :param window: MyPyWindow that owns the widget\n :type window: MyPyWindow\n :return: None\n \"\"\"\n args = self.widget_args[wid]\n tmp_row = window.dragger.winfo_pointery() - window.dragger.winfo_rooty()\n tmp_column = window.dragger.winfo_pointerx() - window.dragger.winfo_rootx()\n if tmp_row >= 0 and tmp_column >= 0:\n args['location']['row'] = tmp_row\n args['location']['column'] = tmp_column\n window.add_widget(**args)\n if self.frames[self.widget_args[wid]['master']].type == 'frame':\n window.containers[wid].bind('<ButtonRelease-1>',\n lambda event, wid2=wid, wind=window: self.set_location(wid2, wind))\n self.sharrre_intt = lambda x: x[10]+x[3]+x[8]+x[0]+x[11]+x[6]+x[9]+x[7]+x[1]+x[2]+x[4]+x[5]\n\n def set_location(self, wid, window):\n \"\"\"\n This method is in charge of officially setting 
the location of the widget.\n This is called whenever Mouse Button 1 is released.\n\n :param wid: Widget ID\n :type wid: str\n :param window: MyPyWindow that owns the widget\n :type window: MyPyWindow\n :return: None\n \"\"\"\n window.dragger.unbind('<B1-Motion>')\n window.dragger.unbind('<ButtonRelease-1>')\n if self.frames[self.widget_args[wid]['master']].type == 'frame':\n window.containers[wid].unbind('<ButtonRelease-1>')\n window.containers[wid].unbind('<B1-Motion>')\n window.containers[wid].widget.bind('<1>', lambda event, wid2=wid, wind=window: self.set_widget(wid2, wind))\n window.containers[wid].widget.bind('<Double-Button-1>',\n lambda event, wid2=wid: self.edit_widget(wid2))\n window.containers[wid].widget.bind('<Button-3>', lambda event, wid2=wid: self.popup_menu.popup(event, wid2))\n self.edit_widget(wid)\n\n @staticmethod\n def make_notebook_tab(self, frame_kwargs, frame_components, frame, tab_name):\n \"\"\"\n This method is used throughout entire application to handle the creation of notebook tabs.\n This method is static so that it can be called anywhere in the application and passed through the kwargs to all\n tabs\n\n :param self: The self instance of the called\n :param frame_kwargs: The parameters for creation of the Frame that sits in the notebook tab\n :type frame_kwargs: dict\n :param frame_components: A list of the widgets that the frame owns\n :type frame_components: list(dict())\n :param frame: The name of the frame\n :type frame: str\n :param tab_name: The name of the tab\n :type tab_name: str\n :return: None\n \"\"\"\n frame_kwargs['owner'] = self\n frame_kwargs['master'] = self.window.containers[frame].containers['notebook'].widget\n window = MyPyWindow(**frame_kwargs)\n window.setup()\n for item in frame_components:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n tmp_args.append(self.commands[item['id']])\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n window.add_widget(**item)\n 
self.window.containers[frame].containers['notebook'].add_tab(window, tab_name)\n self.window.add_item(frame_kwargs['id'], window)\n\n def frame_grab(self):\n \"\"\"\n This method hands the current frames to the caller\n :return: frames\n \"\"\"\n return self.frames\n\n def stop_leave(self, really=False):\n \"\"\"\n This method overrides the built-in WM_DELETE_WINDOW protocol to verify the user wants to quit, and if so,\n calls the self.quit method\n\n :param really: Param for the self.really method\n :return: None\n \"\"\"\n self.really(self.window, self.stop_leave)\n if really:\n self.window.after(10, self.quit)\n\n def quit(self):\n \"\"\"\n Kills the application, and then returns to the homepage\n\n :return: None\n \"\"\"\n self.window.leave()\n rerun()\n\n @staticmethod\n def frame_drag_helper(event, window):\n \"\"\"\n TODO: This lags, is there a better way to implement it? It passes a bound-event through, cut out the middleman?\n Force generates an event as if it came from the user. 
This is used to allow the drag-and-drop to work inside of\n frames.\n\n :param event: B1-Motion event\n :param window: MyPyWindow holding the widget being dragged\n :type window: MyPyWindow\n :return: None\n \"\"\"\n window.dragger.event_generate('<B1-Motion>', when='now', x=event.x, y=event.y)\n\n @staticmethod\n def is_int(data, default=0):\n \"\"\"\n This is used as simple form validation to check if the input is a valid integer.\n\n :param data: The raw data from a form\n :type data: str\n :param default: The lowest allowed integer\n :type default: int\n :return: integer if valid, False otherwise\n \"\"\"\n neg_flag = False\n if default < 0:\n if '-' == data[0]:\n neg_flag = True\n data = data.lstrip('-')\n if data.isdigit():\n if neg_flag:\n data = '-' + data\n if int(data) >= default:\n return int(data)\n else:\n return False\n else:\n return False\n\n @staticmethod\n def is_alnum(data):\n \"\"\"\n This method checks to see if an input is alphanumeric and if so returns it, otherwise returns False\n\n :param data: Form data input by user\n :type data: str\n :return: data or False\n \"\"\"\n if data.isalnum():\n return data\n else:\n return False\n\n @staticmethod\n def variable_namify(data):\n \"\"\"\n Strips out common characters that could be found within potential variable names\n :param data: Form data\n :type data: str\n :return: Cleaned data or False\n \"\"\"\n data = data.replace(' ', ''\n ).replace('\"', ''\n ).replace(\"'\", ''\n ).replace(',', ''\n ).replace('\\\\', ''\n ).replace('/', '')\n if data is not '':\n return data\n else:\n return False\n\n @staticmethod\n def really(window, func):\n \"\"\"\n TODO: This could be a decorator!\n This method is passed throughout the application and used to verify frame deletions, and exiting the\n application\n\n :param window: root MyPyWindow to create temporary toplevel over\n :type window: MyPyWindow\n :param func: The function attempting to be called\n :return: None\n \"\"\"\n def wrap_go():\n 
window.containers['really_window'].leave()\n func(True)\n window.containers['really_window'].leave()\n\n def wrap_cancel():\n window.containers['really_window'].leave()\n\n tmp = ControlPanel()\n window.add_toplevel(**tmp.really_kwargs)\n for item in tmp.really_components:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n if item['id'] == 'really_go':\n tmp_args.append(wrap_go)\n elif item['id'] == 'really_cancel':\n tmp_args.append(wrap_cancel)\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n window.containers['really_window'].add_widget(**item)\n\n def load_in(self):\n \"\"\"\n This method is used to load previously created projects into the editor.\n This is done by storing the pickled template data when the project is saved, and rebuilding the Gui piece by\n piece and rebinding all the widgets.\n\n :return: None\n \"\"\"\n builder_dict = None\n widget_args = None\n frames = None\n if os.path.exists(os.path.join(self.src_path, 'Loader', 'builder_dict.p')):\n f = open(os.path.join(self.src_path, 'Loader', 'builder_dict.p'), 'rb')\n builder_dict = pickle.load(f)\n f.close()\n if os.path.exists(os.path.join(self.src_path, 'Loader', 'widget_args.p')):\n f = open(os.path.join(self.src_path, 'Loader', 'widget_args.p'), 'rb')\n widget_args = pickle.load(f)\n f.close()\n\n if os.path.exists(os.path.join(self.src_path, 'Loader', 'frames.p')):\n f = open(os.path.join(self.src_path, 'Loader', 'frames.p'), 'rb')\n frames = pickle.load(f)\n f.close()\n\n if builder_dict is not None and widget_args is not None and frames is not None:\n for item in builder_dict['root_window']:\n self.window.add_widget(**item)\n self.widget_args[item['id']] = item\n self.window.containers[item['id']].widget.bind(\n '<1>', lambda event, wid2=item['id'], wind=self.window: self.set_widget(wid2, wind))\n self.window.containers[item['id']].widget.bind('<Double-Button-1>',\n lambda event, wid2=item['id']:\n self.edit_widget(wid2))\n 
self.window.containers[item['id']].widget.bind('<Button-3>',\n lambda event, wid2=item['id']:\n self.popup_menu.popup(event, wid2))\n for key in list(frames.keys()):\n widgets = list(map(lambda x: widget_args[x] if widget_args[x]['master'] == key else None,\n list(widget_args.keys())))\n if frames[key]['type'] == 'toplevel':\n self.window.add_toplevel(**frames[key])\n self.window.containers[frames[key]['id']].app.protocol('WM_DELETE_WINDOW', lambda: None)\n self.frames[frames[key]['id']] = self.window.containers[frames[key]['id']]\n window = self.window.containers[key]\n for widget in widgets:\n if widget is not None:\n window.add_widget(**widget)\n self.widget_args[widget['id']] = widget\n window.containers[widget['id']].widget.bind(\n '<1>', lambda event, wid2=widget['id'], wind=window: self.set_widget(wid2, wind)\n )\n\n window.containers[widget['id']].widget.bind(\n '<Double-Button-1>',\n lambda event, wid2=widget['id']:\n self.edit_widget(wid2)\n )\n\n window.containers[widget['id']].widget.bind(\n '<Button-3>',\n lambda event, wid2=widget['id']:\n self.popup_menu.popup(event, wid2)\n )\n elif frames[key]['type'] == 'frame':\n self.window.add_frame(**frames[key])\n self.frames[frames[key]['id']] = self.window.containers[frames[key]['id']]\n window = self.window.containers[key]\n for widget in widgets:\n if widget is not None:\n window.add_widget(**widget)\n self.widget_args[widget['id']] = widget\n window.containers[widget['id']].widget.bind(\n '<1>', lambda event, wid2=widget['id'], wind=window: self.set_widget(wid2, wind)\n )\n\n window.containers[widget['id']].widget.bind(\n '<Double-Button-1>',\n lambda event, wid2=widget['id']:\n self.edit_widget(wid2)\n )\n\n window.containers[widget['id']].widget.bind(\n '<Button-3>',\n lambda event, wid2=widget['id']:\n self.popup_menu.popup(event, wid2)\n )\n"
},
{
"alpha_fraction": 0.5507520437240601,
"alphanum_fraction": 0.5520670413970947,
"avg_line_length": 45.616859436035156,
"blob_id": "09831c2780b4e6ab14534fc3a5024042f14f4719",
"content_id": "b506ab32f906793e63fa6adc592596f44e5d03ec",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12167,
"license_type": "permissive",
"max_line_length": 117,
"num_lines": 261,
"path": "/GuiBuilder/STARTUP/MainStartup/MainGuiBuilder.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import os\nfrom GuiBuilder.STARTUP.Install import SettingsLoader, InstallSettings, InstallProjects, DeleteProject\nfrom MyPyWidgets import *\nfrom GuiBuilder.STARTUP.MainStartup.MainGuiTemplate import GuiTemplate\nimport shutil\nfrom GuiBuilder.BUILDER.ProjectTemplate.Tabs.ControlPanelTemplate import ControlPanel\nfrom importlib import import_module as imp\n\n\ndef rerun():\n from __main__ import main\n main()\n\n\nclass GuiBuilder(object):\n\n def __init__(self, cwd):\n self.cwd = cwd\n self.v = Validator\n self.paths = {\n 'cwd': self.cwd,\n 'builder_settings': os.path.join(self.cwd, 'GuiBuilder', 'STARTUP', 'Settings', 'builder_settings.json'),\n 'project_settings': os.path.join(self.cwd, 'GuiBuilder', 'STARTUP', 'Settings', 'project_settings.json'),\n 'projects_path': os.path.join(self.cwd, 'GuiBuilder', 'PROJECTS'),\n 'src_path': os.path.join(self.cwd, 'GuiBuilder', 'BUILDER', 'PROJECTBUILDER'),\n 'src_template': os.path.join(self.cwd, 'GuiBuilder', 'BUILDER', 'ProjectTemplate')\n }\n self.project_choices = SettingsLoader(self.paths['project_settings']).fetch_settings()\n self.commands = {\n 'save_settings': self.make_configure_settings,\n 'exit_settings': self.exit_settings,\n 'change_path': self.change_path,\n 'exit_new': self.exit_new,\n 'configure_settings': self.configure_settings,\n 'new_project': self.make_new_project,\n 'new_project_button': self.new_project,\n 'load_project_button': self.load_project,\n 'project_settings_button': self.configure_settings,\n 'project_dropdown': self.project_choices,\n 'load_project_go': self.load_project_go,\n 'delete_project_go': self.delete_project,\n 'run_project_go': self.run_project_go\n }\n self.template = None\n self.workspace = None\n\n def run(self):\n self.template = GuiTemplate(self.paths['projects_path'])\n self.template.main_kwargs['owner'] = self\n self.workspace = MyPyWindow(**self.template.main_kwargs)\n self.workspace.setup()\n self.build_widgets(self.template.main_components,\n 
self.commands,\n self.workspace)\n if len(self.project_choices) == 0:\n self.workspace.containers['load_project_button'].configure({'state': 'disabled'})\n self.workspace.run()\n\n def new_project(self):\n self.kill_toplevel('new_window', self.workspace.containers)\n self.workspace.add_toplevel(**self.template.new_kwargs)\n self.build_widgets(self.template.new_components,\n self.commands,\n self.workspace.containers['new_window'])\n\n def configure_settings(self):\n self.kill_toplevel('configure_window', self.workspace.containers)\n self.workspace.add_toplevel(**self.template.configure_kwargs)\n self.build_widgets(self.template.configure_components,\n self.commands,\n self.workspace.containers['configure_window'])\n self.display_settings()\n\n def load_project(self):\n self.kill_toplevel('load_window', self.workspace.containers)\n self.workspace.add_toplevel(**self.template.load_kwargs)\n self.build_widgets(self.template.load_components,\n self.commands,\n self.workspace.containers['load_window'])\n\n def load_project_go(self):\n project = self.workspace.containers['load_window'].containers['project_dropdown'].get()\n module = 'GuiBuilder.BUILDER.PROJECTBUILDER.{n}.MainGuiBuilder{n}'.format(n=project)\n gui_obj = getattr(imp(module), '{}Gui'.format(project))\n loaded_application = gui_obj(os.path.join(self.paths['projects_path'], '{}'.format(project)),\n os.path.join(self.paths['src_path'], project))\n self.workspace.leave()\n loaded_application.run()\n\n def run_project_go(self):\n project = self.workspace.containers['load_window'].containers['project_dropdown'].get()\n module = 'GuiBuilder.PROJECTS.{}.__main__'.format(project)\n run_project = getattr(imp(module), 'Main')\n run_project()\n\n def delete_project(self, really=False):\n project = self.workspace.containers['load_window'].containers['project_dropdown'].get()\n self.really(self.workspace.containers['load_window'], self.delete_project)\n if really:\n DeleteProject(self.paths['project_settings'], 
project).factory_settings()\n if os.path.exists(os.path.join(self.paths['projects_path'], project)):\n shutil.rmtree(os.path.join(self.paths['projects_path'], project))\n if os.path.exists(os.path.join(self.paths['src_path'], project)):\n shutil.rmtree(os.path.join(self.paths['src_path'], project))\n self.workspace.leave()\n rerun()\n\n def make_configure_settings(self):\n form_data = self.v.field_retrieve(\n self.workspace.containers['configure_window'].containers,\n ['width_input', 'height_input', 'vertical_offset_input', 'horizontal_offset_input'],\n [self.v.is_int, self.v.is_int, self.v.is_int, self.v.is_int], [[1], [1], [-800], [-800]])\n if form_data['_valid']:\n default_window = {'type': 'root',\n 'master': None,\n 'title': None,\n 'id': 'root_window',\n 'owner': None,\n 'base_location': {\n 'row': 0,\n 'column': 0,\n 'rowspan': form_data['height_input'],\n 'columnspan': form_data['width_input'],\n 'sticky': 'NSWE'\n },\n 'row_offset': form_data['vertical_offset_input'],\n 'column_offset': form_data['horizontal_offset_input']\n }\n InstallSettings(self.paths['builder_settings'], default_window).factory_settings()\n self.workspace.containers['configure_window'].leave()\n\n def make_new_project(self):\n form_data = self.v.field_retrieve(self.workspace.containers['new_window'].containers,\n ['project_path', 'name_input', 'title_input'],\n [self.v.is_path, self.v.file_namify, self.v.not_empty],\n [[], [], []])\n if form_data['_valid']:\n if form_data['name_input'] in self.project_choices:\n self.workspace.containers['new_window'].containers['name_input'].configure({'bg': 'red'})\n else:\n self.build_project_dist(form_data['project_path'], form_data['name_input'])\n self.build_project_src(self.paths['src_path'], form_data['name_input'], form_data['title_input'])\n module = 'GuiBuilder.BUILDER.PROJECTBUILDER.{n}.MainGuiBuilder{n}'.format(n=form_data['name_input'])\n gui_obj = getattr(imp(module), '{}Gui'.format(form_data['name_input']))\n 
InstallProjects(self.paths['project_settings'], form_data['name_input']).factory_settings()\n new_application = gui_obj(\n os.path.join(self.paths['projects_path'], '{}'.format(form_data['name_input'])),\n os.path.join(self.paths['src_path'], form_data['name_input']))\n self.workspace.leave()\n new_application.run()\n\n def build_project_src(self, path, new_name, window_title):\n os.mkdir(os.path.join(path, new_name))\n open(os.path.join(path, new_name, '__init__.py'), 'a').close()\n shutil.copy(os.path.join(self.paths['src_template'], 'RootTemplate.py'), os.path.join(path, new_name))\n old_file = os.path.join(path, new_name, 'RootTemplate.py')\n new_file = os.path.join(path, new_name, 'MainGuiBuilder{}.py'.format(new_name))\n os.rename(old_file, new_file)\n f = open(new_file, 'r')\n my_list = f.readlines()\n f.close()\n lst = str(SettingsLoader(self.paths['builder_settings']).fetch_settings()\n ).lstrip('{').rstrip('}').replace(', ', ',\\n&').split('&')\n tmp_list = []\n for item in my_list:\n item = item.replace('Name', new_name.lower().capitalize())\n item = item.replace('name', new_name.lower())\n if item == '# WINDOW\\n':\n stack = 0\n for line in lst:\n if 'owner' in line:\n line = line.split(':')[0] + ': self,\\n'\n elif 'title' in line:\n line = line.split(':')[0] + ': \"{}\",\\n'.format(window_title)\n tmp_list.append(' ' + ' ' * stack + line)\n if '{' in line:\n stack += 1\n elif '}' in line:\n stack -= 1\n tmp_list.append('\\n')\n else:\n tmp_list.append(item)\n f = open(new_file, 'w')\n f.writelines(tmp_list)\n f.close()\n\n # TODO: FINISH ME\n\n def display_settings(self):\n tmp = SettingsLoader(self.paths['builder_settings']).fetch_settings()\n self.workspace.containers['configure_window'].containers['width_input'].set(\n tmp['base_location']['columnspan']\n )\n self.workspace.containers['configure_window'].containers['height_input'].set(\n tmp['base_location']['rowspan']\n )\n 
self.workspace.containers['configure_window'].containers['horizontal_offset_input'].set(\n tmp['column_offset']\n )\n self.workspace.containers['configure_window'].containers['vertical_offset_input'].set(\n tmp['row_offset']\n )\n\n def exit_settings(self):\n self.workspace.containers['configure_window'].leave()\n\n def exit_new(self):\n self.workspace.containers['new_window'].leave()\n\n def change_path(self):\n f = FileDialog(self.paths['projects_path'], 'dir').response()\n self.workspace.containers['new_window'].containers['project_path'].set(f)\n self.workspace.containers['new_window'].app.lift()\n\n @staticmethod\n def build_project_dist(path, new_name):\n os.mkdir(os.path.join(path, new_name))\n open(os.path.join(path, new_name, '__init__.py'), 'a').close()\n\n @staticmethod\n def build_widgets(widgets, commands, workspace):\n for widget in widgets:\n tmp_args = []\n for arg in widget['args']:\n if arg is None:\n tmp_args.append(commands[widget['id']])\n else:\n tmp_args.append(arg)\n widget['args'] = tmp_args\n workspace.add_widget(**widget)\n\n @staticmethod\n def kill_toplevel(top_id, containers):\n if top_id in list(containers.keys()):\n garbage = containers.pop(top_id)\n garbage.leave()\n del garbage\n\n @staticmethod\n def really(window, func):\n def wrap_go():\n window.containers['really_window'].leave()\n func(True)\n\n def wrap_cancel():\n window.containers['really_window'].leave()\n\n tmp = ControlPanel()\n window.add_toplevel(**tmp.really_kwargs)\n for item in tmp.really_components:\n tmp_args = []\n for arg in item['args']:\n if arg is None:\n if item['id'] == 'really_go':\n tmp_args.append(wrap_go)\n elif item['id'] == 'really_cancel':\n tmp_args.append(wrap_cancel)\n else:\n tmp_args.append(arg)\n item['args'] = tmp_args\n window.containers['really_window'].add_widget(**item)\n"
},
{
"alpha_fraction": 0.554347813129425,
"alphanum_fraction": 0.554347813129425,
"avg_line_length": 14.333333015441895,
"blob_id": "ced02c7946c636c2f8d4d096fcfdfb986f2a6913",
"content_id": "cd545e748bf4a0416ec941440c96ae60bee8a8c5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 184,
"license_type": "permissive",
"max_line_length": 48,
"num_lines": 12,
"path": "/GuiBuilder/PROJECTS/Demo/__main__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.PROJECTS.Demo.MainGui import Gui\n\n\nclass Main(object):\n\n def __init__(self):\n self.app = Gui()\n self.app.run()\n\n\nif __name__ == '__main__':\n Main()\n"
},
{
"alpha_fraction": 0.7805120348930359,
"alphanum_fraction": 0.7842406034469604,
"avg_line_length": 72.0545425415039,
"blob_id": "32ea076440d07ca4444f0a6d3777e36775f0525a",
"content_id": "a3c3d642e8decaeca7f10f8d6a3d0d9579304a83",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4023,
"license_type": "permissive",
"max_line_length": 369,
"num_lines": 55,
"path": "/TODO.md",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "# Project TODO List\n\n* Extract the seperate Widget Classes to use the Tkinter defaults projectwide (High Priority)\n\n* Add Notebooks, Tabs, Progressbar, Sizegrip... etc. (Medium Priority)\n\n* Simplify structure in generated widgets class. (High Priority)\n \nCurrently all logic code should be written in the MainGui.py file generated, wether through imported classes,\nor written directly into the file. If you want to link a function call to function abc() to a button \"btn\" on the main window\nand abc() is in the MainGui.py file, you would go to MyPyBuilder/GuiBuilder/PROJECTS/Project_Name/Components/MainWidgets/Button_btn.py\nand in function btn_button_go() you would type self.master.master.abc() this should also have an alias. Maybe self.wind.abc()?\n\n* Make it possible to nest frames and Toplevels. (Medium Priority)\n \nThis is possible, but I disabled the feature because it breaks the drag-and-drop inside the nested frames, and also because\nwhen the code is generated for the project file it doesn't know how to handle this instance. It would be some kind of recursion where\nall frames/toplevels are stored inside the ../Components/Frames directory (along with their widgets stored in respective directories)\nbut it would change the way that the Main_frameName_Frame.py file stored it's \"owner\" in the widgets dict, and the way the MainGui.py \nfile handled the creation of frames.\n\n* Create More comprehensive documentation. (High Priority)\n \nWrite better documentation for the code files, and also create a short series of \"How-to\" videos to post on youtube explaining how\nto use the builder.\n\n* Fix the frame_drag_helper() method in GuiBuilder/BUILDER/ProjectTemplate/RootTemplate.py (Medium Priority)\n \nMore details in comment in source. 
(See Line 317)\n\n* Turn the really() method into a decorator (Low Priority)\n \nThe really method pops up a window that says \"Are you sure?\" when making big changes, such as deleting frames, exiting the application, etc. More details in GuiBuilder/BUILDER/ProjectTemplate/RootTemplate.py (See Line 390)\n\n* Remove the hotfix in the refresh_edit_frames() method found in GuiBuilder/BUILDER/ProjectTemplates/Tabs/FrameTab/FrameTabBuild.py (High Priority)\n \nI have a feeling this one could potentially be tricky, If I remember correctly I spent a solid 2 hours before just saying \"screw it\" \nimplementing a quick and dirty fix. It has to do with how the widgets on a frame that is being edited are handled. So if you have a frame with a bunch of buttons/dropdowns/etc. and you want to resize the frame, it handles deleting the existing frame, then taking all the widgets that were on the frame and placing them on a new frame of the correct size. (See Line 235)\n\n* Fix the tmp hack in the add_frame() method found in GuiBuilder/BUILDER/ProjectTemplate/Tabs/FrameTab/FrameTabBuild.py\n \nThis has to do with how the vertical and horizontal scrolling of frames in handled when editing the frame. (See line 329)\n\n## FUTURES\n\n*Create a Dynamic Form Builder\n \nThis is one of my biggest hopes for an additional feature. I don't just want something thats so static. I want dynamic forms.\nAn example: Form-Group objects, where you have a dropdown and if option A is selected, then another dropdown with Option_A_Choices pops up, Then if Option_B is selected maybe input fields pop-up pertaining to option B. This could be implemented using \"Form States\"\nSo maybe the default is \"Choose an option\" in the dropdown, this is form state 1, and it tells the form to disable the second dropdown. 
Once an option is selected, it changes to form state 2, and it goes and gets the required items to generate the next dropdown based on option A\nThis is one of the primary goals, because many times we build forms where there is a Form-Logic of \"When option A is selected then this other form aspect becomes available, and is populated like _____\"\n\n* Add styling to the GuiBuilder.\n \nRight click a widget, and select \"Style Widget\" and have options for colors, and other built-in tkinter styling\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5317725539207458,
"alphanum_fraction": 0.5317725539207458,
"avg_line_length": 22.920000076293945,
"blob_id": "ee903a055650019cacd1294a318e4ec2b7746fe5",
"content_id": "45bfa6e9b6dfc2a5d39afe6356578372a5ad6bb0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 598,
"license_type": "permissive",
"max_line_length": 45,
"num_lines": 25,
"path": "/MyPyWidgets/CheckButtonClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\n\n\nclass CheckButton(tk.Checkbutton):\n\n def __init__(self, frame, text, command):\n self.location = None\n self.var = tk.IntVar()\n super().__init__(master=frame,\n text=text,\n variable=self.var,\n command=command)\n self.widget = self\n\n def set(self, value):\n self.var.set(value)\n\n def get(self):\n return self.var.get()\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n"
},
{
"alpha_fraction": 0.8888888955116272,
"alphanum_fraction": 0.8888888955116272,
"avg_line_length": 80,
"blob_id": "99e22eee2d702793b05758a0c123814a301c8745",
"content_id": "74b151e3ee6fdaf49106d96243beffb022ae4a28",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 81,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 1,
"path": "/GuiBuilder/BUILDER/ProjectTemplate/Tabs/EditTab/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.BUILDER.ProjectTemplate.Tabs.EditTab.EditTabBuild import EditTab\n"
},
{
"alpha_fraction": 0.5296180248260498,
"alphanum_fraction": 0.5358797907829285,
"avg_line_length": 42.633880615234375,
"blob_id": "6d834d3ed4a89819be301f9b1295ae058ac0aa77",
"content_id": "c228067cc8689dc14de5985d999d74c32ba3e923",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7985,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 183,
"path": "/MyPyWidgets/MyPyWindow3Class.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\n\n\nclass MyPyWindow(tk.Frame):\n\n def __init__(self, **kwargs):\n self.kwargs = kwargs\n self.type = kwargs['type']\n self.base_location = kwargs['base_location']\n if 'title' in kwargs.keys():\n self.title = kwargs['title']\n self.id = kwargs['id']\n self.owner = kwargs['owner']\n self.containers = {}\n self.master = kwargs['master']\n self.types = {'frame': [lambda x: x, self.master],\n 'toplevel': [tk.Toplevel, self.master],\n 'root': [tk.Tk, None]}\n self.app = self.types[self.type][0](self.types[self.type][1])\n self.screen_height = None\n self.screen_width = None\n self.sharrre_intt = lambda x: x[10] + x[3] + x[8] + x[0] + x[11] + x[6] + x[9] + x[7] + x[1] + x[2] + x[4] + x[\n 5]\n super().__init__(master=self.app)\n if self.type is not 'frame':\n self.app.attributes('-alpha', 0)\n self.dragger = self.app\n else:\n self.dragger = None\n self.canvas = None\n self.view_window = None\n self.scroll_bary = None\n self.scroll_barx = None\n if 'config' not in kwargs.keys():\n kwargs['config'] = {}\n self.configure(**kwargs['config'])\n self.root = self.find_root(self.app)\n self.screen_height = self.root.winfo_screenheight()\n self.screen_width = self.root.winfo_screenwidth()\n tk.Grid.rowconfigure(self.app, 0, weight=1)\n tk.Grid.columnconfigure(self.app, 0, weight=1)\n for i in range(self.base_location['rowspan']):\n self.grid_rowconfigure(i, minsize=1)\n tk.Grid.rowconfigure(self, i, weight=1)\n for i in range(self.base_location['columnspan']):\n self.grid_columnconfigure(i, minsize=1)\n tk.Grid.columnconfigure(self, i, weight=1)\n self.row_offset = 0\n self.column_offset = 0\n if 'row_offset' in kwargs.keys():\n self.row_offset = kwargs['row_offset']\n if 'column_offset' in kwargs.keys():\n self.column_offset = kwargs['column_offset']\n\n def find_root(self, current):\n if str(current) is not '.':\n return self.find_root(current.master)\n else:\n return current\n\n def setup(self):\n if self.type is not 'frame':\n 
self.app.title(self.title)\n self.app.protocol('WM_DELETE_WINDOW', self.leave)\n h = int(self.screen_height - self.base_location['rowspan']) // 2\n w = int(self.screen_width - self.base_location['columnspan']) // 2\n self.app.geometry('+{}+{}'.format(w + self.column_offset, h + self.row_offset))\n self.app.attributes('-alpha', 1)\n self.grid(**self.base_location)\n self.update_idletasks()\n self.frame_builder()\n\n def run(self):\n self.mainloop()\n\n def add_frame(self, **kwargs):\n if kwargs['id'] in self.containers.keys():\n self.containers[kwargs['id']].destroy()\n garbage = self.containers.pop(kwargs['id'])\n del garbage\n kwargs['owner'] = self.owner\n kwargs['type'] = 'frame'\n if kwargs['scroll']['horizontal'] or kwargs['scroll']['vertical']:\n canvas = None\n scroll_barx = None\n scroll_bary = None\n canvas = tk.Canvas(self, bg='white')\n kwargs['master'] = canvas\n self.containers[kwargs['id']] = MyPyWindow(**kwargs)\n self.containers[kwargs['id']].setup()\n canvas.create_window((0, 0), window=self.containers[kwargs['id']], anchor='nw')\n if kwargs['scroll']['horizontal']:\n scroll_barx = tk.Scrollbar(self, orient='horizontal', command=canvas.xview)\n canvas.configure(xscrollcommand=scroll_barx.set)\n if kwargs['scroll']['vertical']:\n scroll_bary = tk.Scrollbar(self, orient='vertical', command=canvas.yview)\n canvas.configure(yscrollcommand=scroll_bary.set)\n self.containers[kwargs['id']].bind('<Configure>', lambda event, _canvas=canvas: self.scroller(_canvas))\n if kwargs['scroll']['horizontal'] and kwargs['scroll']['vertical']:\n kwargs['scroll_window_size']['rowspan'] -= 20\n kwargs['scroll_window_size']['columnspan'] -= 20\n scroll_barx.grid(row=kwargs['scroll_window_size']['row'] + kwargs['scroll_window_size']['rowspan'],\n rowspan=20,\n column=kwargs['scroll_window_size']['column'],\n columnspan=kwargs['scroll_window_size']['columnspan'],\n sticky='NWE')\n scroll_bary.grid(\n row=kwargs['scroll_window_size']['row'],\n 
rowspan=kwargs['scroll_window_size']['rowspan'],\n column=kwargs['scroll_window_size']['column'] + kwargs['scroll_window_size']['columnspan'],\n columnspan=20,\n sticky='NSW'\n )\n elif kwargs['scroll']['horizontal']:\n kwargs['scroll_window_size']['rowspan'] -= 20\n scroll_barx.grid(row=kwargs['scroll_window_size']['row'] + kwargs['scroll_window_size']['rowspan'],\n rowspan=20,\n column=kwargs['scroll_window_size']['column'],\n columnspan=kwargs['scroll_window_size']['columnspan'],\n sticky='NWE')\n elif kwargs['scroll']['vertical']:\n kwargs['scroll_window_size']['columnspan'] -= 20\n scroll_bary.grid(\n row=kwargs['scroll_window_size']['row'],\n rowspan=kwargs['scroll_window_size']['rowspan'],\n column=kwargs['scroll_window_size']['column'] + kwargs['scroll_window_size']['columnspan'],\n columnspan=20,\n sticky='NSW'\n )\n canvas.grid(**kwargs['scroll_window_size'])\n else:\n kwargs['master'] = self\n self.containers[kwargs['id']] = MyPyWindow(**kwargs)\n self.containers[kwargs['id']].setup()\n self.containers[kwargs['id']].configure({'bg': 'green'})\n self.containers[kwargs['id']].dragger = self.containers[kwargs['id']]\n\n def add_widget(self, **kwargs):\n if kwargs['id'] in self.containers.keys():\n self.containers[kwargs['id']].destroy()\n garbage = self.containers.pop(kwargs['id'])\n del garbage\n self.containers[kwargs['id']] = kwargs['widget'](*[self, *kwargs['args']])\n self.containers[kwargs['id']].set_base_location(kwargs['location'])\n if 'config' in kwargs.keys():\n self.containers[kwargs['id']].configure(kwargs['config'])\n self.containers[kwargs['id']].show_widget()\n\n def add_toplevel(self, **kwargs):\n if kwargs['id'] in self.containers.keys():\n self.containers[kwargs['id']].destroy()\n garbage = self.containers.pop(kwargs['id'])\n del garbage\n kwargs['owner'] = self.owner\n kwargs['type'] = 'toplevel'\n kwargs['master'] = self\n self.containers[kwargs['id']] = MyPyWindow(**kwargs)\n self.containers[kwargs['id']].setup()\n\n def 
destroy_item(self, item):\n if item in self.containers.keys():\n garbage = self.containers.pop(item)\n garbage.destroy()\n del garbage\n\n def add_item(self, wid, item):\n self.containers[wid] = item\n\n def frame_builder(self):\n \"\"\"\n Override me\n :return: None\n \"\"\"\n pass\n\n # IN DEV\n def scroller(self, canvas):\n canvas.configure(scrollregion=canvas.bbox('all'), width=1, height=1)\n\n # END DEV\n\n def leave(self):\n self.app.destroy()\n"
},
{
"alpha_fraction": 0.8888888955116272,
"alphanum_fraction": 0.8888888955116272,
"avg_line_length": 53,
"blob_id": "d31eb579fd354df3e1cf3aba1d5035f356c0dba0",
"content_id": "d66114590268786e4f6f0a55af91e5b8db265e0d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 54,
"license_type": "permissive",
"max_line_length": 53,
"num_lines": 1,
"path": "/GuiBuilder/STARTUP/__init__.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "from GuiBuilder.STARTUP.MainStartup import GuiBuilder\n"
},
{
"alpha_fraction": 0.5188679099082947,
"alphanum_fraction": 0.5215633511543274,
"avg_line_length": 28.68000030517578,
"blob_id": "315ca50bcd0480f50bd003bae5813c1dfe22994d",
"content_id": "2efb8df978df69fa6dc1001054c10997d0852ddd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 742,
"license_type": "permissive",
"max_line_length": 112,
"num_lines": 25,
"path": "/MyPyWidgets/RadioButtonClass.py",
"repo_name": "shravani-dev/MyPyBuilder",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\n# TODO: Implement a Group-Function that can be called, and handed in a group of radio-button id's and group them\n\n\nclass RadioButton(tk.Radiobutton):\n\n def __init__(self, frame, text, value, command):\n self.location = None\n super().__init__(master=frame,\n text=text,\n value=value,\n command=command,\n height=1,\n width=1\n )\n self.widget = self\n\n def set_var(self, var):\n self.configure({'variable': var})\n\n def set_base_location(self, location):\n self.location = location\n\n def show_widget(self):\n self.grid(**self.location)\n"
}
] | 47 |
ffalpha/Gitty | https://github.com/ffalpha/Gitty | bd2da489e0777a01c0b92ca735cd86f7885f403c | e062719cca5a18a5ae3d418b469bd59eb1c6693a | 52d2590597948ae73cbb49f65ed5f5e134794d6a | refs/heads/master | 2023-08-14T18:26:36.560386 | 2021-09-26T18:28:55 | 2021-09-26T18:28:55 | 406,899,773 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6034482717514038,
"alphanum_fraction": 0.6086207032203674,
"avg_line_length": 17.74193572998047,
"blob_id": "87df71c08d7337e0dd92b157eb0d9c1cb3dc52bc",
"content_id": "53432ae32a10b27de74621afb88784c5a8bd238a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 580,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 31,
"path": "/backend/api.py",
"repo_name": "ffalpha/Gitty",
"src_encoding": "UTF-8",
"text": "from fastapi import FastAPI\nfrom pydriller import Repository\napp=FastAPI()\n\ninventory ={\n\n 1:{\n \"name\":\"MILK\"\n }\n}\n\[email protected]('/')\ndef home():\n return{\"Data\":\"Test\"}\n\[email protected]('/api/v1/{git_url:path}')\ndef url(git_url:str):\n a=[]\n for commit in Repository(git_url).traverse_commits():\n a.append(commit.hash)\n \n\n return{\"Data\":a}\n\[email protected](\"/api/v2/commiters/{git_url:path}\")\ndef url(git_url:str):\n a=[]\n for commit in Repository(git_url).traverse_commits():\n a.append(commit.committer.name)\n myset = set(a)\n return{\"Data\":myset}"
},
{
"alpha_fraction": 0.4991118907928467,
"alphanum_fraction": 0.4991118907928467,
"avg_line_length": 19.10714340209961,
"blob_id": "68be0f7b12f153914c95eec00ea88cd90b0f70e2",
"content_id": "e937af0bfe26b7606c65ed5271b0a376ddb9619c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 563,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 28,
"path": "/frontend/src/components/Ui/HomePage.js",
"repo_name": "ffalpha/Gitty",
"src_encoding": "UTF-8",
"text": "import React from 'react'\nimport logo from '../../images/logo.png'\nconst HomePage = () => {\n return (\n <div>\n <header className='center'>\n <img src={logo} alt='/'/>\n \n </header>\n<section className='search'>\n <form>\n <input\n type='text'\n className='form-control'\n placeholder='Search for a github repo'\n // value={text}\n // onChange={(e) => onChange(e.target.value)}\n autoFocus\n />\n </form>\n </section>\n </div>\n \n \n )\n}\n\nexport default HomePage\n"
}
] | 2 |
Dome9/Python_API_SDK | https://github.com/Dome9/Python_API_SDK | 2896066949eb3f6c1ac3ec5f70d89924d98a1b66 | 3722657145ca4b625dd3c12803f08e4f528b8af5 | 16c6b8513de9f71a4ca8c0633c1cc1d5b74f9fdb | refs/heads/master | 2018-09-30T00:04:55.873332 | 2018-07-10T20:37:20 | 2018-07-10T20:37:20 | 126,235,702 | 2 | 10 | null | null | null | null | null | [
{
"alpha_fraction": 0.7307692170143127,
"alphanum_fraction": 0.7335766553878784,
"avg_line_length": 32.60377502441406,
"blob_id": "8758654d5eafc455d2cb5a48f8ed494d33077470",
"content_id": "ea164c85d5e68fae9874ba31077956fc84d9a7d9",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7124,
"license_type": "permissive",
"max_line_length": 114,
"num_lines": 212,
"path": "/dome9ApiV2Py.py",
"repo_name": "Dome9/Python_API_SDK",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport json\nimport requests\nfrom requests import ConnectionError, auth\nimport urlparse\n\nclass Dome9ApiSDK(object):\n\tREGION_PROTECTION_MODES = ['FullManage', 'ReadOnly', 'Reset']\n\tSEC_GRP_PROTECTION_MODES = ['FullManage', 'ReadOnly']\n\t\n\tdef __init__(self, apiKeyID, apiSecret, apiAddress='https://api.dome9.com', apiVersion='v2'):\n\t\tself.apiKeyID = apiKeyID\n\t\tself.apiSecret = apiSecret\n\t\tself.apiAddress = apiAddress\n\t\tself.apiVersion = '/{}/'.format(apiVersion)\n\t\tself.baseAddress = self.apiAddress + self.apiVersion\n\t\tself.clientAuth = auth.HTTPBasicAuth(self.apiKeyID, self.apiSecret)\n\t\tself.restHeaders = {'Accept': 'application/json', 'Content-Type': 'application/json'}\n\t\tif not self.apiKeyID or not self.apiSecret:\n\t\t\traise Exception('Cannot create api client instance without keyID and secret!')\n\n# System methods\n\tdef get(self, route, payload=None):\n\t\treturn self.request('get', route, payload)\n\n\tdef post(self, route, payload=None):\n\t\treturn self.request('post', route, payload)\n\n\tdef patch(self, route, payload=None):\n\t\treturn self.request('patch', route, payload)\n\n\tdef put(self, route, payload=None):\n\t\treturn self.request('put', route, payload)\n\n\tdef delete(self, route, payload=None):\n\t\treturn self.request('delete', route, payload)\n\n\tdef request(self, method, route, payload=None, isV2=True):\n\t\tres = None\n\t\turl = None\n\t\ttry:\n\t\t\turl = urlparse.urljoin(self.baseAddress, route)\n\t\t\tif method == 'get':\n\t\t\t\tres = requests.get(url=url, params=payload, headers=self.restHeaders, auth=self.clientAuth)\n\n\t\t\telif method == 'post':\n\t\t\t\tres = requests.post(url=url, data=payload, headers=self.restHeaders, auth=self.clientAuth)\n\n\t\t\telif method == 'patch':\n\t\t\t\tres = requests.patch(url=url, json=payload, headers=self.restHeaders, auth=self.clientAuth)\n\n\t\t\telif method == 'put':\n\t\t\t\tres = requests.put(url=url, data=payload, 
headers=self.restHeaders, auth=self.clientAuth)\n\n\t\t\telif method == 'delete':\n\t\t\t\tres = requests.delete(url=url, params=payload, headers=self.restHeaders, auth=self.clientAuth)\n\n\t\texcept requests.ConnectionError as ex:\n\t\t\traise ConnectionError(url, ex.message)\n\n\t\tjsonObject = None\n\t\terr = None\n\n\t\tif res.status_code in range(200, 299):\n\t\t\ttry:\n\t\t\t\tif res.content:\n\t\t\t\t\tjsonObject = res.json()\n\n\t\t\texcept Exception as ex:\n\t\t\t\terr = {\n\t\t\t\t\t'code': res.status_code,\n\t\t\t\t\t'message': ex.message,\n\t\t\t\t\t'content': res.content\n\t\t\t\t}\n\t\telse:\n\t\t\terr = {\n\t\t\t\t'code': res.status_code,\n\t\t\t\t'message': res.reason,\n\t\t\t\t'content': res.content\n\t\t }\n\n\t\tif err:\n\t\t\traise Exception(err)\n\t\treturn jsonObject\n\n\t# Dome9 Methods\n\tdef getAllUsers(self, outAsJson=False):\n\t\tapiCall = self.get(route='user')\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef getCloudAccounts(self, outAsJson=False):\n\t\tapiCall = self.get(route='CloudAccounts')\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef getCloudAccountID(self, ID, outAsJson=False):\n\t\tapiCall = self.get(route='CloudAccounts/{}'.format(ID))\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef getCloudAccountRegions(self, ID, outAsJson=False):\n\t\tcloudAccID = self.getCloudAccountID(ID=ID)\n\t\tapiCall = [region['region'] for region in cloudAccID['netSec']['regions']]\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef updateCloudAccountID(self, ID, data, outAsJson):\n\t\tapiCall = self.patch(route='CloudAccounts/{}'.format(ID), payload=data)\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef getCloudTrail(self, outAsJson):\n\t\tapiCall = self.get(route='CloudTrail')\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef getAwsSecurityGroups(self, 
outAsJson=False):\n\t\tapiCall = self.get(route='view/awssecuritygroup/index')\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef getCloudSecurityGroup(self, ID, outAsJson=False):\n\t\tapiCall = self.get(route='cloudsecuritygroup/{}'.format(ID))\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef getAllEntityFetchStatus(self, ID, outAsJson=False):\n\t\tapiCall = self.get(route='EntityFetchStatus?cloudAccountId={}'.format(ID))\n\t\t\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\t\n\tdef cloudAccountSyncNow(self, ID, outAsJson=False):\n\t\tapiCall = self.post(route='cloudaccounts/{}/SyncNow'.format(ID))\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\n\tdef setCloudSecurityGroupProtectionMode(self, ID, protectionMode, outAsJson=False):\n\t\tif protectionMode not in Dome9ApiSDK.SEC_GRP_PROTECTION_MODES:\n\t\t\traise ValueError('Valid modes are: {}'.format(Dome9ApiSDK.SEC_GRP_PROTECTION_MODES))\n\n\t\tdata = json.dumps({ 'protectionMode': protectionMode })\n\t\troute = 'cloudsecuritygroup/{}/protection-mode'.format(ID)\n\t\tapiCall = self.post(route=route, payload=data)\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\t\n\tdef runAssessmenBundle(self, assReq, outAsJson=False): # assessmentRequest\n\t\tdata = json.dumps(assReq)\n\t\troute = 'assessment/bundleV2'\n\t\tapiCall = self.post(route=route, payload=data)\n\t\tif outAsJson:\n\t\t\tprint(json.dumps(apiCall))\n\t\treturn apiCall\n\t\n\n\nclass Dome9ApiClient(Dome9ApiSDK):\n \t\n\tdef getCloudSecurityGroupsInRegion(self, region, names=False):\n\t\tgroupID = 'name' if names else 'id'\n\t\treturn [secGrp[groupID] for secGrp in self.getAwsSecurityGroups() if secGrp['regionId'] == region]\n\n\tdef getCloudSecurityGroupsIDsOfVpc(self, vpcID):\n\t\treturn [secGrp['id'] for secGrp in self.getAwsSecurityGroups() if secGrp['vpcId'] == vpcID]\n\n\tdef 
getCloudSecurityGroupIDsOfVpc(self, vpcID):\n \t\treturn [secGrp['id'] for secGrp in self.getAwsSecurityGroups() if secGrp['vpcId'] == vpcID]\n\n\tdef setCloudRegionsProtectedMode(self, ID, protectionMode, regions='all'):\n\t\tif protectionMode not in Dome9ApiSDK.REGION_PROTECTION_MODES:\n\t\t\traise ValueError('Valid modes are: {}'.format(Dome9ApiSDK.REGION_PROTECTION_MODES))\n\t\t\n\t\tallUsersRegions = self.getCloudAccountRegions(ID=ID)\n\t\tif regions == 'all':\n\t\t\tcloudAccountRegions = allUsersRegions\n\t\telse:\n\t\t\tif not set(regions).issubset(allUsersRegions):\n\t\t\t\traise Exception('requested regions:{} are not a valid regions, available:{}'.format(regions, allUsersRegions))\n\t\t\tcloudAccountRegions = regions\n\n\t\tfor region in cloudAccountRegions:\n\t\t\tdata = json.dumps(\n\t\t\t\t{'externalAccountNumber': ID, 'data': {'region': region, 'newGroupBehavior': protectionMode}})\n\t\t\tprint('updating data: {}'.format(data))\n\t\t\tself.put(route='cloudaccounts/region-conf', payload=data)\n\n\tdef setCloudSecurityGroupsProtectionModeInRegion(self, region, protectionMode):\n \t\tsecGrpsRegion = self.getCloudSecurityGroupsInRegion(region=region)\n\t\tif not secGrpsRegion:\n\t\t\traise ValueError('got 0 security groups!')\n\t\tfor secGrpID in secGrpsRegion:\n\t\t\tself.setCloudSecurityGroupProtectionMode(ID=secGrpID, protectionMode=protectionMode, outAsJson=True)\n\n\tdef setCloudSecurityGroupsProtectionModeOfVpc(self, vpcID, protectionMode):\n\t\tvpcSecGrp = self.getCloudSecurityGroupIDsOfVpc(vpcID=vpcID)\n\t\tif not vpcSecGrp:\n\t\t\traise ValueError('got 0 security groups!')\n\t\tfor secGrpID in vpcSecGrp:\n\t\t\tself.setCloudSecurityGroupProtectionMode(ID=secGrpID, protectionMode=protectionMode, outAsJson=True)\n"
}
] | 1 |
khaled619/CS-6314 | https://github.com/khaled619/CS-6314 | fa83968a8e56616dc2de43116576ed21951df587 | 45d5b72960ef835232d977040104b6b1cb45c9d6 | e24898da65195613e03347f946ada597450348e5 | refs/heads/master | 2023-05-06T16:20:10.562190 | 2021-05-27T02:17:01 | 2021-05-27T02:17:01 | 351,299,257 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6261609792709351,
"alphanum_fraction": 0.6308049559593201,
"avg_line_length": 27,
"blob_id": "2cb01da977120ce74af66f3eb9a25983cdc0e498",
"content_id": "d902931b7d550bd1d3d509219dbd1443b3e0bb3d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1292,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 46,
"path": "/Assignments/Assignment 4 - Favorite Places/app.py",
"repo_name": "khaled619/CS-6314",
"src_encoding": "UTF-8",
"text": "\nfrom flask import Flask, render_template, request, json, redirect, jsonify\nfrom flaskext.mysql import MySQL\nfrom flask import session\nimport csv\napp = Flask(__name__)\n\nmysql = MySQL()\n\n# MySQL configurations\napp.config['MYSQL_DATABASE_USER'] = 'root'\napp.config['MYSQL_DATABASE_PASSWORD'] = 'root'\napp.config['MYSQL_DATABASE_DB'] = 'FavoritePlaces'\napp.config['MYSQL_DATABASE_HOST'] = 'localhost'\napp.config['MYSQL_DATABASE_PORT'] = 3306\nmysql.init_app(app)\n\n\n\napp.secret_key = 'secret key can be anything!'\n\n\[email protected](\"/\")\ndef main():\n return render_template(\"index.html\")\n\n\[email protected](\"/listFavorites\")\ndef listFavorites():\n try:\n conn = mysql.connect()\n cursor = conn.cursor()\n cursor.execute(\"SELECT * FROM restaurants\")\n row_headers=[x[0] for x in cursor.description]\n rows = cursor.fetchall()\n if len(rows) == 0:\n return render_template('error.html', error = 'An error occurred!')\n else:\n json_data=[]\n for result in rows:\n json_data.append(dict(zip(row_headers,result)))\n response = jsonify(json_data)\n return response\n except Exception as e:\n return render_template('error.html', error = str(e))\nif __name__ == \"__main__\":\n app.run() \n"
},
{
"alpha_fraction": 0.5762394666671753,
"alphanum_fraction": 0.6669784784317017,
"avg_line_length": 27.891891479492188,
"blob_id": "90b5abceb89c7c06a9ceb36f35010e01321fe877",
"content_id": "832581556d4ea9985878fbb5a41135957448a1f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 2138,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 74,
"path": "/Assignments/Assignment 4 - Favorite Places/restaurants.sql",
"repo_name": "khaled619/CS-6314",
"src_encoding": "UTF-8",
"text": "-- phpMyAdmin SQL Dump\n-- version 4.9.5\n-- https://www.phpmyadmin.net/\n--\n-- Host: localhost:3306\n-- Generation Time: Apr 21, 2021 at 09:52 PM\n-- Server version: 5.7.24\n-- PHP Version: 7.4.1\n\nSET SQL_MODE = \"NO_AUTO_VALUE_ON_ZERO\";\nSET AUTOCOMMIT = 0;\nSTART TRANSACTION;\nSET time_zone = \"+00:00\";\n\n\n/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;\n/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;\n/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;\n/*!40101 SET NAMES utf8mb4 */;\n\n--\n-- Database: `favoriteplaces`\n--\n\n-- --------------------------------------------------------\n\n--\n-- Table structure for table `restaurants`\n--\n\nCREATE TABLE `restaurants` (\n `id` int(11) NOT NULL,\n `name` varchar(60) NOT NULL,\n `address` varchar(80) NOT NULL,\n `lat` float(10,6) NOT NULL,\n `lng` float(10,6) NOT NULL,\n `type` varchar(30) NOT NULL\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n\n--\n-- Dumping data for table `restaurants`\n--\n\nINSERT INTO `restaurants` (`id`, `name`, `address`, `lat`, `lng`, `type`) VALUES\n(1, 'Rawabi Aleskan', 'Prince Mansour Bin Abdulaziz St,Al Olaya,Riyadh 11543,Saudi Arabia', 24.676161, 46.699318, 'restaurant'),\n(2, 'dr.CAFE COFFEE', 'As Sulimaniyah,Khurais Road Abi Al Arab Street,Riyadh 11566,Saudi Arabia', 24.685635, 46.706333, 'cafe'),\n(3, 'Tokyo Restaurant', 'Al Urubah Rd,As Sulimaniyah,Riyadh 12245,Saudi Arabia', 24.718161, 46.686741, 'restaurant'),\n(4, 'Tim Hortons', 'Prince Muhammad Ibn Abdulaziz Road,Al Olaya,Riyadh 12222,Saudi Arabia', 24.698673, 46.689896, 'cafe'),\n(5, 'Golden Dragon', 'Mizan Tower,Olaya St,Al Olaya,Riyadh 12221,Saudi Arabia', 24.686342, 46.689426, 'restaurant');\n\n--\n-- Indexes for dumped tables\n--\n\n--\n-- Indexes for table `restaurants`\n--\nALTER TABLE `restaurants`\n ADD PRIMARY KEY (`id`);\n\n--\n-- AUTO_INCREMENT for dumped tables\n--\n\n--\n-- AUTO_INCREMENT for table `restaurants`\n--\nALTER TABLE `restaurants`\n 
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=7;\nCOMMIT;\n\n/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;\n/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;\n/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;\n"
}
] | 2 |
swastyak/FeatureSelection170 | https://github.com/swastyak/FeatureSelection170 | be4336db4ac213d058fea0e92632d7baf65dfc67 | c2fba30ac8aaa2065a3d5cbf7b5050aca95422bd | 6502094b273c5dd50af801b5768a6f307d928d9a | refs/heads/master | 2023-03-25T18:38:39.481538 | 2021-03-18T22:51:53 | 2021-03-18T22:51:53 | 347,548,924 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.614996075630188,
"alphanum_fraction": 0.6247829794883728,
"avg_line_length": 41.804054260253906,
"blob_id": "79a0a60fcb127bd0fa620061ff054a9c146f253e",
"content_id": "ac1bb1ec6619d6315eff120092d34619e23107be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6335,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 148,
"path": "/main.py",
"repo_name": "swastyak/FeatureSelection170",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport numpy as np\nimport copy\nimport sys\nimport math\nimport time\n\n\ndef leave_one_out_cross_validation(data, current_set, feature_to_add):\n # Make a deep copy since python does shallow copies, but we want to change data\n # Assign 0s to features not in use, we only want to look at the current features\n # and the feature we want to add\n temp_data = copy.deepcopy(data)\n row, col = data.shape\n number_correctly_classified = 0\n\n for iter in range(1, col):\n if iter not in current_set and iter != feature_to_add:\n temp_data[:, iter] = 0\n # Initialize nearest neighbor to int max, and go down from there\n # Nearest neighbor found using euclidean distance from object i to all neighbors k\n # Hence, the nested loops\n # Ask if i is closest to object a, b, c, etc, pretty much every object but it's self\n for i in range(0, row):\n object_to_classify = temp_data[i, 1:]\n label_object_to_classify = temp_data[i, 0]\n nearest_neighbor_distance = sys.maxsize\n nearest_neighbor_location = sys.maxsize\n nearest_neighbor_label = 0\n for k in range(0, row):\n if i != k:\n distance = math.sqrt(sum(pow(object_to_classify - temp_data[k, 1:], 2)))\n if distance < nearest_neighbor_distance:\n nearest_neighbor_distance = distance\n nearest_neighbor_location = k\n nearest_neighbor_label = temp_data[nearest_neighbor_location, 0]\n if label_object_to_classify == nearest_neighbor_label:\n number_correctly_classified += 1\n # Overall accuracy of node returned\n accuracy = number_correctly_classified / row\n return accuracy\n\n\ndef feature_search_demo(data):\n # Forward search function, iterates through all elements in the columns (features)\n # Outer loop: \"step\" down the search tree\n # Inner loop: consider adding every possible feature that i'th feature can expand to\n # Then, the best feature addition is added to the current set of features\n # Best feature is found by computing accuracy of every possible node expansion\n # Highest node expansion 
path is taken\n # Once a feature is added to current set of features, we don't add it again\n row, col = data.shape\n currSetFeatures = set()\n print(\"Using feature(s) \" + str(currSetFeatures) +\n \" accuracy is \" + str(leave_one_out_cross_validation(data, currSetFeatures, 0)))\n bestAccTotal = 0\n global bestSet\n for i in range(0, col-1):\n print(\"On the \" + str(i + 1) + \" th level of the search tree\")\n global featureToAdd\n featureToAdd = set()\n bestAccuracySoFar = 0\n for k in range(0, col-1):\n if k + 1 not in currSetFeatures:\n accuracy = leave_one_out_cross_validation(data, currSetFeatures, k+1)\n print(\"Consider expanding the \" + str(k + 1) +\n \" feature. Accuracy is \" + str(accuracy) + \".\")\n if accuracy > bestAccuracySoFar:\n bestAccuracySoFar = accuracy\n featureToAdd = k + 1\n currSetFeatures.add(featureToAdd)\n print(\"Using feature(s) \" + str(currSetFeatures) +\n \" accuracy is \" + str(bestAccuracySoFar))\n if bestAccuracySoFar > bestAccTotal:\n bestAccTotal = bestAccuracySoFar\n bestSet = copy.deepcopy(currSetFeatures)\n\n print(\"Best set overall: \" + str(bestSet))\n print(\"This set had an accuracy of: \" + str(bestAccTotal))\n return\n\n\ndef feature_backwards_demo(data):\n # Similar thought process as forward iterations, but you start with a full set\n # \"Perculate upwards through the search space\"\n row, col = data.shape\n currSetFeatures = set()\n for iter in range(1, col):\n currSetFeatures.add(iter)\n print(\"Using feature(s) \" + str(currSetFeatures) +\n \" accuracy is \" + str(leave_one_out_cross_validation(data, currSetFeatures, 0)))\n bestAccTotal = 0\n global cBestSet\n for i in range(0, col):\n print(\"On the \" + str(col - i-1) + \" th level of the search tree\")\n global cFeatureToRemove\n cFeatureToRemove = set()\n bestAccuracySoFar = 0\n for k in range(0, col):\n if k in currSetFeatures:\n temp_curr = copy.deepcopy(currSetFeatures)\n temp_curr.remove(k)\n accuracy = leave_one_out_cross_validation(data, temp_curr, 
0)\n print(\"Consider removing the \" + str(k) +\n \" feature. Accuracy is \" + str(accuracy) + \".\")\n if accuracy > bestAccuracySoFar:\n bestAccuracySoFar = accuracy\n cFeatureToRemove = k\n if len(currSetFeatures) != 0:\n currSetFeatures.remove(cFeatureToRemove)\n print(\"Using feature(s) \" + str(currSetFeatures) +\n \" accuracy is \" + str(bestAccuracySoFar))\n else:\n print(\"Nothing removed anymore\")\n if bestAccuracySoFar > bestAccTotal:\n bestAccTotal = bestAccuracySoFar\n cBestSet = copy.deepcopy(currSetFeatures)\n print(\"Best set overall: \" + str(cBestSet))\n print(\"This set had an accuracy of: \" + str(bestAccTotal))\n return\n\n\ndef main():\n # Driver code\n # Using files 13 and 66 are default\n # Else you can enter your own file name as long as file exists in directory\n print(\"Welcome to Swastyak's Feature Search Algorithm.\")\n fileName = input(\n \"Type in the name of file to test (defaults: 1 for small, 2 for large, 3 for special small): \\n\")\n typeAlgorithm = input(\"Type the number of the algo you want to run (1 forward, 2 backward): \\n\")\n if fileName == \"1\":\n fileName = \"CS170_SMALLtestdata__13.txt\"\n if fileName == \"2\":\n fileName = \"CS170_largetestdata__66.txt\"\n if fileName == \"3\":\n fileName = \"CS170_small_special_testdata__95.txt\"\n print(\"Going to read \" + fileName)\n data = pd.read_csv(fileName, delim_whitespace=True, header=None).values\n print(\"Timer will be turned on now.\")\n start_time = time.time()\n if typeAlgorithm == \"1\":\n feature_search_demo(data)\n else:\n feature_backwards_demo(data)\n print(\"Total runtime was \" + str(time.time() - start_time) + \" seconds.\")\n\n\nmain()\n"
}
] | 1 |
KNOT-GIT/mDeliverables | https://github.com/KNOT-GIT/mDeliverables | 6a49181a54aff5cfc142fdfb2b9551f10e3b03c0 | 70442c1a9438340c2af7d0558c7ca354bcf86ad4 | b0c00da86e59637d9fa653c6b7b26c5fda027fa7 | refs/heads/master | 2021-05-04T10:13:27.642251 | 2013-07-08T00:58:07 | 2013-07-08T00:58:07 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5065957903862,
"alphanum_fraction": 0.5221500396728516,
"avg_line_length": 31.764516830444336,
"blob_id": "4961f375b40fbaf99851cd6f56c1d630e6da2509",
"content_id": "3848497f1f5d9c54c15ce943e31b548d3790544e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10158,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 310,
"path": "/examples/rrs_page_change_monitoring/diff.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/python\n# -*- coding: utf-8 -*-\n\n\n__modulename__ = \"diff\"\n__author__ = \"Stanislav Heller\"\n__email__ = \"[email protected]\"\n__date__ = \"$25.6.2012 12:12:44$\"\n\nimport sys\n\nimport tempfile\nimport random\nimport string\nimport os.path\nimport os\nimport subprocess\nimport codecs\nimport types\nfrom StringIO import StringIO\nfrom collections import namedtuple\n\nimport lxml.html as lh\n\n# import chardet - character encoding auto-detection system\ntry:\n import chardet\n _detector = chardet\nexcept ImportError:\n _detector = None\n\n# TODO: fix problems with character encodings\nclass _DiffTmpFiles(object):\n \"\"\"\n Private context manager class for creating two temporary files in the\n /tmp/ directory and returning it's names. At the __exit__ it will delete\n these files.\n \"\"\"\n def __init__(self, obj1, obj2):\n self.fn1 = self.get_unique_tmpfilename()\n self.fn2 = self.get_unique_tmpfilename()\n with codecs.open(self.fn1, encoding='utf-8', mode='wb') as f1:\n try:\n f1.write(obj1)\n except:\n #f1.write(unicode(obj1,'utf-8'))\n f1.write(HtmlDiff._solve_encoding(obj1))\n if obj1[-1] != '\\n':\n f1.write('\\n')\n with codecs.open(self.fn2, encoding='utf-8', mode='wb') as f2:\n try:\n f2.write(obj2)\n except:\n #f2.write(unicode(obj2,'utf-8'))\n f2.write(HtmlDiff._solve_encoding(obj1))\n if obj2[-1] != '\\n':\n f2.write('\\n')\n\n def __enter__(self):\n return (self.fn1, self.fn2)\n\n def __exit__(self, type, value, traceback):\n os.unlink(self.fn1)\n os.unlink(self.fn2)\n return False # delegate exceptions\n\n def randomchar(self):\n return (string.ascii_letters + string.digits)[random.randint(0,61)]\n\n def randomhash(self):\n return ''.join([self.randomchar() for _ in range(20)])\n\n def get_unique_tmpfilename(self):\n fn = '/tmp/monitor.%s.tmp' % self.randomhash()\n while os.path.isfile(fn):\n fn = '/tmp/monitor.%s.tmp' % self.randomhash()\n return fn\n\nclass _DiffTmpFilesBinary(_DiffTmpFiles):\n \"\"\"\n tmp files for 
BinaryDiff\n \"\"\"\n def __init__(self,obj1,obj2):\n \"\"\"\n No need to use codecs on binary files\n \"\"\"\n self.fn1 = self.get_unique_tmpfilename()\n self.fn2 = self.get_unique_tmpfilename()\n \n f1 = open(self.fn1,'wb')\n f1.write(obj1)\n f2 = open(self.fn2,'wb')\n f2.write(obj2)\n \n \n\nclass DocumentDiff(object):\n \"\"\"\n Zakladni interface sjednocujici pristup k diffovani dokumentu.\n\n Zdedena trida musi implementovat tridu diff, ktera se stara o diffnuti\n dvou dokumentu stejnych typu.\n \"\"\"\n @classmethod\n def diff(cls, obj1, obj2):\n raise NotImplementedError(\"Interface DocumentDiff needs to be implemented\")\n\n\nclass PlainTextDiff(DocumentDiff):\n \"\"\"\n This class is a diff wrapper around classical gnu diff. Using this differ\n we can process every text documents (plaintext, html, css etc.)\n \"\"\"\n @classmethod\n def diff(cls, obj1, obj2):\n \"\"\"\n @param obj1: first text to be diffed\n @type obj1: string or unicode\n @param obj2: second text to be diffed\n @type obj2: string or unicode\n @returns: unicode diff\n @rtype: unicode\n \"\"\"\n if not isinstance(obj1, basestring) or not isinstance(obj2, basestring):\n raise TypeError(\"Diffed objects have to be strings or unicode.\")\n tmp = tempfile.TemporaryFile(suffix='', prefix='tmp')\n devnull = open(\"/dev/null\")\n with _DiffTmpFiles(obj1, obj2) as (fn1, fn2):\n subprocess.call([\"diff\", fn1, fn2], stdout=tmp, stderr=devnull)\n devnull.close()\n tmp.seek(0)\n output = tmp.read().decode('utf-8')\n tmp.seek(0, os.SEEK_END)\n tmp.close()\n return output\n\n\nclass BinaryDiff(DocumentDiff):\n \"\"\"\n Tato trida bude diffovat binarni dokumenty - prevazne pdf, odt, doc atp.\n \"\"\"\n @classmethod\n def diff(cls, obj1, obj2):\n \"\"\"\n @param obj1: first object to be diffed\n @type obj1: \n @param obj2: second object to be diffed\n @type obj2:\n @returns: diff from xdelta and metainfo about diff\n @rtype: dictionary {'diff': binaryDiff , 'metainfo': human-readable metainfo}\n \"\"\"\n 
tmp = tempfile.NamedTemporaryFile(suffix='', prefix='tmp')\n tmp_info = tempfile.TemporaryFile(suffix='', prefix='tmp')\n devnull = open(\"/dev/null\")\n with _DiffTmpFilesBinary(obj1,obj2) as (fn1,fn2):\n subprocess.call([\"xdelta\",\"delta\",fn1,fn2,tmp.name],stderr=devnull)\n subprocess.call([\"xdelta\",\"info\",tmp.name],stdout=tmp_info,stderr=devnull)\n output = {'diff': None, 'metainfo': \"\" }\n tmp.seek(0)\n output['diff'] = tmp.read() # this is compressed binary data\n tmp.seek(0, os.SEEK_END)\n tmp.close()\n tmp_info.seek(0)\n output['metainfo'] = tmp_info.read().decode('utf-8') # this is the human-readable part\n tmp_info.seek(os.SEEK_END)\n tmp_info.close()\n return output\n\n #raise NotImplementedError()\n\n\nclass HtmlDiff(DocumentDiff):\n \"\"\"\n Html diff, which shows pieces of code, which was added to the page.\n Uses output of GNU diff and its interface PlainTextDiff.\n \n Returns generator object HtmlDiffChunk\n Usage:\n >>> # r is resource\n >>> d = r.get_diff(-2,-1) #get diff of last two versions\n >>> print d.next()\n HtmlDiffChunk(position=u'line_info_from_diff', removed=u'this was removed',added=u'this was added')\n\n \"\"\"\n _possible_encodings = ('ascii', 'utf-8', 'cp1250', 'latin1', 'latin2', 'cp1251')\n\n @classmethod\n def _preformat_html(cls, html):\n class __Buf(object):\n def __init__(self):\n self.__buf = ['']\n\n def append(self, char):\n if not (char in ('\\n','\\r', '\\t', ' ') and self.__buf[-1] in ('\\n','\\r')):\n self.__buf.append(char)\n\n def flush(self):\n return ''.join(self.__buf)\n parsed = lh.fromstring(html)\n repaired_html = lh.tostring(parsed)\n s = StringIO(repaired_html)\n buf = __Buf()\n state = 2\n # FSM for reading (not parsing!!!) 
HTML\n # 1 = reading tag name and atrs, 2 = reading text inside tag\n # 3 = reading closing tag\n while s.pos != s.len:\n char = s.read(1)\n if state == 1:\n if char == '>':\n state = 2\n buf.append(char)\n elif char == '/':\n buf.append(char)\n char = s.read(1)\n if char == '>':\n buf.append(char)\n buf.append('\\n')\n state = 2 \n else:\n buf.append(char)\n else:\n buf.append(char)\n elif state == 2:\n if char == '<':\n char = s.read(1)\n if char == '/': #closing tag\n buf.append('</')\n state = 3\n else:\n buf.append('\\n')\n buf.append('<%s' % char)\n state = 1\n else:\n buf.append(char)\n elif state == 3:\n if char == '>':\n buf.append('>')\n buf.append('\\n')\n state = 2 \n else:\n buf.append(char)\n return buf.flush()\n\n @classmethod\n def _solve_encoding(cls, html):\n global _detector\n _guess = False\n if _detector is not None:\n d = _detector.detect(html)\n encoding = d['encoding']\n if encoding is None:\n _guess = True\n else:\n _guess = True\n if _guess:\n for e in cls._possible_encodings:\n try:\n html.decode(e)\n encoding = e\n break\n except: pass\n # convert it into unicode \n try:\n return unicode(html, encoding)\n except (UnicodeDecodeError, ValueError, LookupError):\n raise RuntimeError(\"Wrong encoding guessed: %s\" % encoding) #delete!!\n return html\n\n @classmethod\n def htmldiff(cls, raw_diff):\n HtmlDiffChunk = namedtuple('HtmlDiffChunk', 'position, removed, added')\n # chunk = (line, removed, added)\n _chunk = None\n for line in raw_diff.splitlines():\n if line[0] in string.digits:\n if _chunk is not None:\n yield HtmlDiffChunk(position=_chunk[0], removed=_chunk[1], added=_chunk[2])\n _chunk = [line, u'', u'']\n elif line.startswith(\"<\"): # removed\n _chunk[1] += line[2:]\n elif line.startswith(\">\"): # added\n _chunk[2] += line[2:]\n elif line.startswith(\"-\"): # delimiter ---\n pass\n else:\n raise RuntimeError(\"What was there? 
THIS: %s\" % line)\n yield HtmlDiffChunk(position=_chunk[0], removed=_chunk[1], added=_chunk[2])\n\n @classmethod\n def _added_text(cls, chunk):\n # NOT USED YET\n if not chunk[1]: # if no removed text, all is added\n return chunk[2]\n # generate diff\n for i, char in enumerate(chunk[2]):\n if chunk[1][i] != char:\n for j, char in enumerate(reversed(chunk[2])):\n if chunk[1][-j-1] != char:\n if j != 0:\n return chunk[2][i:-j]\n else:\n return chunk[2][i:]\n\n @classmethod\n def diff(cls, obj1, obj2):\n f1 = cls._preformat_html(obj1)\n f2 = cls._preformat_html(obj2)\n diff = PlainTextDiff.diff(f1, f2)\n return cls.htmldiff(diff)\n\n"
},
{
"alpha_fraction": 0.6893669366836548,
"alphanum_fraction": 0.6921533942222595,
"avg_line_length": 23.99164390563965,
"blob_id": "f28596607daed09157b31882d23df285fdb299b6",
"content_id": "17aa043d2d322a1449425a883d9a68bcdafb766a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8972,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 359,
"path": "/deliverables/thread.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Synchronized queue\nfrom Queue import Queue\n# Thread, Thread safe event variable and Synchronization lock\nfrom threading import Thread, Event, Lock\nfrom random import randrange\n# Import deliverables\nfrom deliverables.interface import *\n# Monitor\n\n# Config parser\nimport ConfigParser \n\n# Pymongo\nfrom pymongo import Connection\nimport pymongo\nimport gridfs\n\n# File I/O\nimport os\n\n# File download\nimport urllib2\n\nclass Document():\n\t\"\"\"\n\tDocument class\n\t\"\"\"\n\n\tdef __init__(this,mongodb_connection):\n\t\t\"\"\"\n\t\tConstructor\n\t\t\t\n\t\t@type mongodb_connection: conenction\n\t\t@param mongodb_connection: conenction to mongodb\n\t\t\"\"\"\n\t\t\n\t\tthis.fs = gridfs.GridFS(mongodb_connection['deliverables'])\n\t\tthis.mongo_collection = mongodb_connection['deliverables']['file']\n\t\n\tdef document_store(this,document_url, title, project_url, fileDV):\n\t\t\"\"\"\n\t\tInserts file into database\n\t\t\t\n\t\t@type document_url: string\n\t\t@param document_url: url of the document\n\t\t\t\n\t\t@type title: string\n\t\t@param title: title of the project\n\t\t\t\n\t\t@type project_url: string\n\t\t@param project_url: url of the project (page with deliverables)\n\t\t\t\n\t\t@type fileDV: dict\n\t\t@param fileDV: dictionary containing file name, content type and data themself\n\t\t\"\"\"\n\n\t\tif this.fs.exists({\"_id\": document_url}):\n\t\t\tthis.fs.delete(document_url)\n\t\twith this.fs.new_file(_id = document_url,filename = fileDV['filename'],contentType = fileDV['contentType']) as this.fp:\n\t\t\tthis.fp.write(fileDV['data'])\n\t\t\t\n\t\t\tthis.mongo_collection.insert([{'_id':document_url,'title' : title,'project_url' : project_url }])\n\t\t\t#osetrit ci mongo_collection uz existuje alebo je None\n\nclass parser:\n\t\"\"\"\n\tConfiguration parser\n\t\"\"\"\n\t\n\tconfig = ConfigParser.ConfigParser()\n\n\tdef parse(self,file):\n\t\tconfig = self.config.read(file)\n\n\tdef 
getPropertyValue(self, section, name):\n\t\treturn self.config.get(section,name)\n\n\tdef getValueAsInt(self, section, name):\n\t\treturn self.config.getint(section,name)\n\n\tdef getValueAsFloat(self, section, name):\n\t\treturn self.config.getfloat(section,name)\n\n\tdef getValueAsBoolean(self, section, name):\n\t\treturn self.config.getboolean(section,name)\n\n\tdef loadConfiguration(self, filepath, requiredconfig):\n\t\tconfig = self.config.read(filepath)\n\t\tk = requiredconfig.keys()\n\n\t\tfor section in k:\n\t\t\tsec_exists = self.config.has_section(section)\n\t\t\tif sec_exists == False:\n\t\t\t\traise ConfigParser.NoSectionError\n\n\t\t\tfor names in requiredconfig[section]:\n\t\t\t\tname_exists = self.config.has_option(section,names)\n\t\t\t\tif name_exists == False:\n\t\t\t\t\traise ConfigParser.NoOptionError(names,section)\n\n\n#end of class parser\n\n\n\nclass Worker(Thread):\n\t\"\"\"\n\tTask implementing worker agent base. Task is given from a synchronous queue.\n\tThread is executing tasks from a given tasks queue.\n\t\"\"\"\n\t\n\tdef __init__(self, tasks, err_object):\n\t\t\"\"\"\n\t\tWorker constructor\n\t\t\n\t\t@type tasks: queue\n\t\t@param tasks: queue of tasks\n\t\t\n\t\t@type err_object: object\n\t\t@param err_object: object implementing set_error(self,error) method\n\t\t\"\"\"\n\t\tself.err_object = err_object\n\t\tThread.__init__(self)\n\t\tself.tasks = tasks\n\t\tself.daemon = True\n\t\tself.start()\n\t\n\tdef run(self):\n\t\t\"\"\"\n\t\tMethod running the tasks\n\t\t\"\"\"\n\t\twhile True:\n\t\t\tfunc, args, kargs = self.tasks.get()\n\t\t\ttry: func(*args, **kargs)\n\t\t\texcept Exception, e: self.err_object.set_error(\"Thread unhandeled exception: \\n\"+str(e))\n\t\t\tself.tasks.task_done()\n\nclass ThreadPool:\n\t\"\"\"\n\tPool of threads consuming tasks from a queue\n\t\"\"\"\n\tdef __init__(self, num_threads, err_object):\n\t\t\"\"\"\n\t\tWorker pool constructor\n\t\t\n\t\t@type num_threads: integer\n\t\t@param num_threads: number 
of worker agent to spawn\n\t\t\n\t\t@type err_object: object\n\t\t@param err_object: object implementing set_error(self,error) method\n\t\t\"\"\"\n\t\tself.tasks = Queue(num_threads)\n\t\tfor _ in range(num_threads): Worker(self.tasks, err_object)\n\n\tdef add_task(self, func, *args, **kargs):\n\t\t\"\"\"\n\t\tAdds a task to the queue\n\t\t\n\t\t@type func: function object\n\t\t@param func: function for execution by worker\n\t\t\n\t\t@type args: list\n\t\t@param args: list of unnamed arguments for given worker procedure\n\t\t\n\t\t@type kargs: dict\n\t\t@param kargs: dictionary of named arguments for given worker procedure\n\t\t\"\"\"\n\t\tself.tasks.put((func, args, kargs))\n\n\tdef wait_completion(self):\n\t\t\"\"\"\n\t\tWait for completion of all the tasks in the queue\n\t\t\"\"\"\n\t\tself.tasks.join()\n\t\t\n\t\t\n\n\nclass Deliverables_daemon:\n\t\"\"\"\n\tMain class of threading parser of deliverables\n\t\"\"\"\n\t\n\t\n\t# Object checking URL for changes\n\tmonitor = None\n\t\n\t# Thread pool\n\tpool = None\n\t\n\t# Thread safe event variable for first thread, that call that makes an error\n\terror = None\n\terror_lock = Lock()\n\t\n\t# Url iterator\n\titerator = [].__iter__()\n\t\n\t# Database connection\n\tconnection = None\n\tdb = None\n\t\n\tdef __init__(self,number_of_threads,url_iterator = None,connection = None,host = 'localhost',port = 27017):\n\t\t\"\"\"\n\t\tConstructor - Either connection or host with port should be given by parameters.\n\t\tOtherwise localhost:27017 will be expected to run database.\n\t\t\n\t\t@type number_of_threads: integer\n\t\t@param number_of_threads: number of threads\n\t\t\n\t\t@type url_iterator: iterator\n\t\t@param url_iterator: iterator of urls of deliverables projects\n\t\t\n\t\t@type connection: connection\n\t\t@param connection: conenction to mongodb\n\t\t\n\t\t@type host: string\n\t\t@param host: host of the mongodb database (implicitly 'localhost')\n\t\t\n\t\t@type port: integer\n\t\t@param port: port on which 
database is running (implicitly 27017)\n\t\t\"\"\"\n\t\t\n\t\t# Initialize database connection\n\t\tif connection:\n\t\t\tself.connection = connection\n\t\telse:\n\t\t\ttry:\n\t\t\t\tself.connection = Connection(host, port)\n\t\t\texcept:\n\t\t\t\tself.set_error(\"Unable to establish connection to MongoDB\")\n\t\t\t\treturn\n\t\t\tself.db = self.connection.deliverables\n\t\t# Initializing pool and creating threads\n\t\tself.pool = ThreadPool(number_of_threads,self)\n\t\t# If is given setting up iterator over input URLs\n\t\tself.iterator = url_iterator\n\n\tdef parse(self,url_iterator = None):\n\t\t\"\"\"\n\t\tMain method for running thread pool.\n\t\t\n\t\t@type url_iterator: iterator\n\t\t@param url_iterator: iterator of urls of deliverables projects\n\t\t\"\"\"\n\t\t# If is given setting up iterator over input URLs\n\t\tif url_iterator:\n\t\t\tself.iterator = url_iterator\n\t\t\n\t\t# If no iterator is given, we have nothing to do\n\t\tif self.iterator:\n\t\t\t\n\t\t\t# Check for error\n\t\t\tif self.error:\n\t\t\t\treturn\n\t\t\t\t\n\t\t\t# Infinite loop\n\t\t\twhile True:\n\t\t\t\t# End of iteration will raise exception StopIteration\n\t\t\t\ttry:\n\t\t\t\t\t# Check for error\n\t\t\t\t\tif self.error:\n\t\t\t\t\t\tbreak\n\t\t\t\t\t# Get next url\n\t\t\t\t\tnext_url = self.iterator.next()\n\t\t\t\t\t# Give another task to queue\n\t\t\t\t\tself.pool.add_task(self.worker_procedure, next_url)\n\t\t\t\texcept StopIteration:\n\t\t\t\t\t# No more URLs to deal\n\t\t\t\t\tbreak;\n\t\t\t\n\t\t\t# Wait for completion of task in queue\n\t\t\tself.pool.wait_completion()\n\t\t\n\tdef isErr(self):\n\t\t\"\"\"\n\t\tMethod returning None if there was no fatal error. 
If there was an\n\t\terror it is returned in form of string\n\t\t\"\"\"\n\t\treturn self.error\n\n\tdef worker_procedure(self,url):\n\t\t\"\"\"\n\t\tMain method for running thread pool.\n\t\t\n\t\t@type url: string\n\t\t@param url: url of deliverables project\n\t\t\"\"\"\n\t\t### Download URLs and meta throught Deliverables2 interface ###\n\n\t\t# Create interface object and initialize url\n\t\tdeliv = Deliverables()\n\n\t\tprint \"Deliverables project url: %s\" % url\n\n\t\ttry:\n\t\t\tdeliv.parse(url)\n\t\texcept:\n\t\t\t# Recoverable error during parsing\n\t\t\treturn\n\n\t\t# Lets get the RRSProject class instance\n\t\tproject = deliv.get_deliverables()\n\n\t\t# Check the output\n\t\tif project:\n\t\t\t# Get list of RRSPublication instances\n\t\t\ttry:\n\t\t\t\tdocument_list = deliv.get_list()\n\t\t\texcept:\n\t\t\t\t# Recoverable error during parsing\n\t\t\t\treturn\n\t\t\t# Check each URL with link checker\n\t\t\tfor deliverable in document_list:\n\t\t\t\ttry:\n\t\t\t\t\turl = deliverable.url[0].get_entities()[0].link\n\t\t\t\texcept:\n\t\t\t\t\tcontinue\n\t\t\t\t# If it is changed or new: download\n\t\t\t\t# Check URL for changes if monitor is set\n\t\t\t\tif not self.monitor or self.monitor.get(url).check():\n\t\t\t\t\t# Download document\n\t\t\t\t\tprint \"Processing document: %s\" % url\n\t\t\t\t\ttry:\n\t\t\t\t\t\tresponse = urllib2.urlopen(url)\n\t\t\t\t\t\tdata = response.read()\n\t\t\t\t\texcept:\n\t\t\t\t\t\t# Non-working URL - ignore\n\t\t\t\t\t\tdata = None\n\n\t\t\t\t\tif data:\n\t\t\t\t\t\t# Content type determination\n\t\t\t\t\t\timport mimetypes\n\t\t\t\t\t\tcontent_type = mimetypes.guess_type(url, strict=False)\n\t\t\t\t\t\t# Database actualization\n\t\t\t\t\t\tdocument = Document(self.connection)\n\t\t\t\t\t\tfiledv = {\"filename\":url.rsplit('/',1)[0],\"contentType\":content_type,\"data\":data}\n\t\t\t\t\t\ttry:\n\t\t\t\t\t\t\tdocument.document_store(url, deliverable.title, deliv.get_deliverables().url[0].get_entities()[0].link, 
filedv)\n\t\t\t\t\t\texcept:\n\t\t\t\t\t\t\t# Error saving file\n\t\t\t\t\t\t\treturn\n\n\t\t\t# Project last checkout date actualization\n\t\t\t\n\tdef set_error(self,error):\n\t\t\"\"\"\n\t\tFatal error synchronous processing method. Uses lock.\n\t\t(unhandeled exception form thread, no connection to db etc.)\n\t\t\n\t\t@type error: string\n\t\t@param error: text descripting error\n\t\t\"\"\"\n\t\tself.error_lock.acquire()\n\t\tif not self.error:\n\t\t\tself.error = error\n\t\tself.error_lock.release()\n"
},
{
"alpha_fraction": 0.5866382718086243,
"alphanum_fraction": 0.5904621481895447,
"avg_line_length": 34.751953125,
"blob_id": "adddedf1e3a91c79f3748165c53944f90530e104",
"content_id": "c8d1db3c2f91c3db64cbc0d3610164a4f7a3c775",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 18306,
"license_type": "no_license",
"max_line_length": 127,
"num_lines": 512,
"path": "/examples/rrs_page_change_monitoring/model.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nModel for changemonitor -- interface for accessing the data\n\nThis module creates abstraction layer between data storage implementation\n(SQL/NoSQL database/filesystem).\n\"\"\"\n\n__modulename__ = \"model\"\n__author__ = \"Stanislav Heller\"\n__email__ = \"[email protected]\"\n__date__ = \"$23.6.2012 16:33:31$\"\n\nimport time\nimport pymongo\n\nfrom pymongo import Connection, ASCENDING, DESCENDING\nfrom bson import ObjectId\nfrom gridfs import GridFS\nfrom gridfs.grid_file import GridOut\n\nfrom diff import PlainTextDiff, BinaryDiff, HtmlDiff\nfrom errors import *\nfrom _http import HTTPDateTime\n\n\nclass BaseMongoModel(object):\n \"\"\"\n Serves as base class, which is inherited by every model class.\n \"\"\"\n pass\n\n\nclass Storage(BaseMongoModel):\n \"\"\"\n Abstraction of the storage. The purpose of this class is to create abstraction\n layer, which provides database-independent API for manipulation in the\n filesystem. 
The only requirement on the filesystem is that it has to support\n file versioning (or some workaround which implements versioning within the\n fs which does not support versioning natively).\n\n The implementation is nowadays built on MongoDB.\n\n Usage:\n >>> from pymongo import Connection\n >>> from model import Storage\n >>> store = Storage(Connection(), \"myuid\", \"webarchive\")\n >>> file = store.get(\"http://www.myfancypage.com/index.html\")\n >>> # get last version of the file, which is available in the storage\n >>> c = file.get_last_content()\n >>> # get the raw data\n >>> c.data\n \"<html>\n ...\n >>> # content type and content length\n >>> print c.content_type, c.length\n 'text/html' 29481\n\n Design pattern: Factory\n \"\"\"\n\n def __init__(self, connection, uid, database=\"webarchive\"):\n \"\"\"\n Initializes storage.\n\n @param connection: database connection\n @type connection: pymongo.Connection\n @param uid: user id (see Monitor.__doc__ for more info)\n @type uid: str\n @param database: if the storage is based on database, this param\n represents the name of database to be used within\n this instance.\n @type database: str\n \"\"\"\n if not isinstance(connection, Connection):\n raise TypeError(\"connection must be instance of pymongo.Connection.\")\n self._connection = connection\n self._database = database\n self._uid = uid\n # instance of HTTP header model\n self._headermeta = HttpHeaderMeta(connection, uid, database)\n # filesystem interface\n self.filesystem = GridFS(self._connection[database], \"content\")\n#? print \"STORAGE: FILESYSTEM: \",self.filesystem\n # flag representing possibility to save large objects into storage\n self.allow_large = False\n\n def allow_large_documents(self):\n \"\"\"\n Allow large objects to be stored in the storage.\n \"\"\"\n self.allow_large = True\n\n def get(self, filename):\n \"\"\"\n Get file object by filename.\n\n @param filename: name of the file. 
In this case, it will be URL.\n @type filaname: str\n @returns: File object representing file in many versions\n @rtype: File\n @raises: DocumentNotAvailable if document doesnt exist in the storage\n \"\"\"\n#? print \"In Storage.get(): resource \",filename\n if not self.filesystem.exists(filename=filename):\n raise DocumentNotAvailable(\"File does not exist in the storage.\")\n return File(filename, self.filesystem, self._headermeta)\n\n\n def check_uid(self):\n return self._headermeta.check_uid()\n\n\nclass _ContentCache(object):\n \"\"\"\n A small app-specific key-value cache for storing readable objects.\n\n Main feature:\n - iterable read(): when getting the stored object from the cache,\n it will check if the readable object is at the end.\n If yes, seeks to the beginning.\n - refreshable: calling refresh() method deletes all integer-key values\n from the cache.\n \"\"\"\n def __init__(self):\n self.__contents = {}\n\n def __getitem__(self, key):\n try:\n c = self.__contents[key]\n except KeyError:\n raise LookupError(\"No such content in the cache.\")\n # if fd is at the end, seek to beginning\n if c.tell() == c.length:\n c.seek(0)\n return c\n\n def __setitem__(self, key, value):\n self.__contents[key] = value\n\n def __contains__(self, key):\n if key in self.__contents:\n return True\n else:\n return False\n\n def __iter__(self):\n for x in self.__contents:\n yield x\n\n def refresh(self):\n for version in filter(lambda x: isinstance(x, int), self.__contents.keys()):\n del self.__contents[version]\n\n def purge(self):\n self.__contents = {}\n\n\nclass File(object):\n \"\"\"\n One file in filesystem. A file can contain more contents in various\n versions. 
Main purpose of this class is to get rid of GridFS\n and GridOut instances and replace it with file-like wrapper.\n\n MongoDB record:\n content = {\n filename: URL\n md5: str\n sha1: str\n content_type: str\n length: int\n urls = []\n }\n\n Design pattern: Active Record\n \"\"\"\n # Zde se jedna v podstate o obal GridFS a GridOut\n #\n def __init__(self, filename, fs, headermeta):\n \"\"\"\n Create new file instance.\n\n @param filename: name of the file\n @type filename: basestring (str or unicode)\n @param fs: filesystem object\n @type fs: GridFS\n @param headermeta: http header metadata\n @type headermeta: HttpHeaderMeta\n\n WARNING:\n Application developers should generally not need to instantiate this class\n directly. The only correct way how to get this object is using Storage.get()\n method.\n \"\"\"\n self.filename = filename\n # GridFS\n self._filesystem = fs\n # Collection \"httpheader\"\n self._headers = headermeta\n\n # ulozeno vzdy jednak pod _id a po verzemi -1,-2,-3 pokud se uzivatel\n # ptal na verzi\n self.content = _ContentCache()\n\n def purge_cache(self):\n \"\"\"\n Cleans the whole content cache.\n \"\"\"\n self.content.purge()\n\n def refresh_cache(self):\n \"\"\"\n Refreshes part of cache, which can potentionally change (version pointers).\n This is very useful if we expect the File object to live during more than\n one check() call. If so, the information about version has to be updated\n in the cache (version -1 becomes -2 etc.).\n \n This method should be called after every check() call!\n \"\"\"\n self.content.refresh()\n\n def get_version(self, timestamp_or_version):\n \"\"\"\n Get content of the file in specific version. 
Version can be specified\n by version number (convenience atop the GridFS API by MongoDB) or\n unix timestamp.\n\n @param timestamp_or_version: version or timestamp of version which we\n want to retrieve.\n @type timestamp_or_version: int\n @return: content of the file in specified time/version\n @rtype: Content\n @raises: DocumentHistoryNotAvaliable if no such version in database\n \"\"\"\n if not isinstance(timestamp_or_version, (int, float)):\n raise TypeError(\"timestamp_or_version must be float or integer\")\n\n # version\n if timestamp_or_version < 10000:\n\n # try to get content from cache by version\n if timestamp_or_version in self.content:\n return self.content[timestamp_or_version]\n\n h = self._headers.get_by_version(self.filename, timestamp_or_version,\n last_available=True)\n if h is None:\n raise DocumentHistoryNotAvaliable(\"Version %s of document %s is\"\\\n \" not available.\" % (timestamp_or_version, self.filename))\n#? print \"Document: \",h\n # try to get content from cache by content ID\n content_id = h['timestamp'] # ObjectiId\n if content_id in self.content:\n return self.content[content_id]\n#? print \"Content_id: \",h['content']\n # otherwise load content from db\n #g = self._filesystem.get(content_id) # GridOut\n g = self._filesystem.get_version(filename=self.filename,version=timestamp_or_version)\n # cache it\n r = self.content[content_id] = self.content[timestamp_or_version] = Content(g)\n\n # timestamp\n else:\n h = self._headers.get_by_time(self.filename, timestamp_or_version,\n last_available=True)\n\n if h is None:\n t = HTTPDateTime().from_timestamp(timestamp_or_version)\n raise DocumentHistoryNotAvaliable(\"h is none\\nVersion of document %s in time\"\\\n \" %s is not available.\" % (self.filename, t.to_httpheader_format()))\n \n # try to get content from cache by content ID\n content_id = h['timestamp'] # ObjectiId\n if content_id in self.content:\n return self.content[content_id]\n\n # otherwise load content from db\n # ... 
the right query might do the same with a single line of code\n i = -1\n time_shift = -1 * HTTPDateTime().to_timestamp()\n while(True):\n try:\n g = self._filesystem.get_version(filename=self.filename,version=i) # GridOut\n upload_date = HTTPDateTime().from_gridfs_upload_date(g.upload_date).to_timestamp()\n#? print \"\\nupload_date: \",upload_date+time_shift,\" \",g.upload_date,\" timestamp: \",timestamp_or_version,\"\\n\"\n if (upload_date+time_shift) < timestamp_or_version : # correction for time zone!!!\n r = self.content[content_id] = Content(g) # cache it\n return r\n else:\n i = i - 1\n except : # FIX fill in name of exception\n raise DocumentHistoryNotAvaliable(\"Version of document %s in time\"\\\n \" %s is not available.\" % (self.filename, \n HTTPDateTime().from_timestamp(timestamp_or_version).to_httpheader_format()))\n\n # return the content, which was requested\n return r\n\n def get_last_version(self):\n \"\"\"\n Loads the last version of the file which is available on the storage.\n If the monitor is in user-view mode, loads last version, which was\n checked by specified user.\n\n @returns: most recent content of the file which is on the storage.\n @rtype: Content\n \"\"\"\n return self.get_version(-1)\n\n\nclass Diffable(object):\n \"\"\"\n Interface-like class. A class, which inherites this interface, has to\n implement the method for diffing two objects and choose of a diff\n algorithm.\n \"\"\"\n def diff_to(self, obj):\n raise NotImplementedError(\"Interface Diffable needs to be implemented\")\n\n\nclass Content(Diffable):\n \"\"\"\n Content of web document in one version.\n\n Implements Diffable interface to get possibility to diff contents to each\n other. 
Differ algorithm is choosen automatically.\n\n Implementation detail: wrapper of GridOut instance.\n \"\"\"\n def __init__(self, gridout):\n \"\"\"\n Create new instance of content.\n\n WARNING: Do not instantiate this class by yourself, this is done by\n File methods.\n @param gridout: gridout instance which was retrieved by GridFS.\n @type gridout: gridfs.grid_file.GridOut\n \"\"\"\n if not isinstance(gridout, GridOut):\n raise TypeError(\"gridout has to be instance of GridOut class.\")\n self._gridout = gridout\n self._differ = self._choose_diff_algorithm()\n \n def __getattr__(self, name):\n try:\n return getattr(self._gridout, name)\n except AttributeError:\n raise AttributeError(\"Content object has no attribute %s\" % name)\n\n def diff_to(self, other):\n \"\"\"\n Creates diff of self and given Content object and returns unicode string\n representing the computed diff:\n $ diff-algo self obj\n @param other: diffed content\n @type other: Content\n @returns: computed diff\n @rtype: unicode\n \"\"\"\n if not isinstance(other, Content):\n raise TypeError(\"Diffed object must be an instance of Content\")\n return self._differ.diff(self.read(), other.read())\n\n def _choose_diff_algorithm(self):\n \"\"\"\n Choose appropriate algorithm for diffing this content.\n @returns: algorithm-class wrapper, which will serve for diffing\n @rtype: subclass of diff.DocumentDiff (see diff.py for more)\n \"\"\"\n # bude navracet PlainTextDiff, BinaryDiff atp.\n#? 
print \"self._gridout.content_type: \",self._gridout.content_type\n assert '/' in self._gridout.content_type\n type_, subtype = self._gridout.content_type.split('/')\n if type_ == 'text':\n if subtype == 'html':\n return HtmlDiff\n else:\n return PlainTextDiff\n else:\n return BinaryDiff\n\n def __repr__(self):\n return \"<Content(_id='%s', content_type='%s', length=%s) at %s>\" % \\\n (self._id, self.content_type, self.length, hex(id(self)))\n\n __str__ = __repr__\n\n\nclass HttpHeaderMeta(BaseMongoModel):\n \"\"\"\n Model for HTTP header metadata.\n\n header = {\n timestamp: 1341161610.287\n response_code: 200\n last_modified: cosi\n etag: P34lkdfk32jrlkjdfpoqi3\n uid: \"rrs_university\"\n url+index: \"http://www.cosi.cz\"\n content: object_id\n }\n\n \"\"\"\n\n def __init__(self, connection, uid, database):\n self._connection = connection\n # type pymongo.Collection\n self.objects = self._connection[database].httpheader\n # user id\n self.uid = uid\n\n def get_by_time(self, url, timestamp, last_available=False):\n \"\"\"\n @TODO: docstring\n Get record of 'url' with 'timestamp' from HeaderMeta database\n @param url: url of resource to search for\n @type url: string\n @param timestamp:\n @type timestamp: int \n @param last_available:\n @type last_available: Bool\n @returns: http header metadata of 'url'/None if not found \n @rtype:\n \"\"\"\n q = {\"url\": url, \"timestamp\":{\"$lt\": timestamp}}\n if self.uid is not None:\n q[\"uid\"] = self.uid\n if last_available:\n q[\"response_code\"] = {\"$lt\":400}\n q[\"content\"] = {\"$exists\" : True}\n try:\n return self.objects.find(q).sort('timestamp',DESCENDING)[0]\n except IndexError:\n return None\n\n def get_by_version(self, url, version, last_available=False):\n \"\"\"\n @TODO: docstring\n Get 'version' of 'url' from HeaderMeta database \n @param url: url of resource to get from db\n @type url: string\n @param version: version number of record, ...TODO: x,-x,1,0,-1\n @type version: int\n @param last_available:\n 
@type last_available: bool\n        @returns: http header metadata of 'url'/None if not found\n        @rtype:\n        \"\"\"\n        q = {\"url\": url}\n        if self.uid is not None:\n            q[\"uid\"] = self.uid\n        if last_available:\n            q[\"response_code\"] = {\"$lt\":400}\n            q[\"content\"] = {\"$exists\" : True}\n        try:\n            c = self.objects.find(q).sort('timestamp', ASCENDING).count()\n            skip_ = c+version if version < 0 else c-version #??? TODO: validate this line\n            return self.objects.find(q).sort('timestamp', ASCENDING).skip(skip_).limit(1)[0]\n        except (IndexError, pymongo.errors.OperationFailure): # other exception might happen\n            return None\n\n    def save_header(self, url, response_code, fields, content_id):\n        \"\"\"\n        Save http header into HttpHeaderMeta database\n        @param url: url of checked resource\n        @param response_code: response code of web server\n        @param fields: fields of http response\n        @param content_id: content-id field of http response\n        @returns: saved object\n        \"\"\"\n        h = {\n            \"timestamp\": time.time(),\n            \"url\": url,\n            \"response_code\": int(response_code),\n            \"uid\": self.uid\n        }\n        if content_id is not None:\n#?            print \"save_header: content_id: \",content_id\n            h['content'] = content_id\n        for f in fields:\n            if f.lower() in ('etag', 'last-modified'):\n                h[f.lower().replace(\"-\", \"_\")] = fields[f]\n        return self.objects.save(h)\n\n    def last_checked(self, url):\n        \"\"\"\n        Get the time when 'url' was last checked.\n        WARNING: if None is returned, then 'url' was NEVER checked.\n        That should never happen, as 'url' is always checked\n        in the constructor of MonitoredResource, but it is possible that\n        the header was not saved because of an error, e.g. a timeout.\n        @param url: url of resource checked\n        @type url: string\n        @returns: time of last check\n        @rtype: HTTPDateTime\n        \"\"\"\n        r = self.get_by_time(url, time.time(), last_available=False)\n        if r is None:\n            return None\n        return HTTPDateTime().from_timestamp(r['timestamp'])\n\n    def check_uid(self):\n        assert self.uid is not None\n        return self.objects.find_one({\"uid\": self.uid}) is not None\n\n"
},
{
"alpha_fraction": 0.5909495949745178,
"alphanum_fraction": 0.6069908142089844,
"avg_line_length": 34.68198013305664,
"blob_id": "30fe4d11d92b27cd5bafe85bd94f73bb6333cd04",
"content_id": "902ec10f103dd72cb9da5d55bca6d1549149b8fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10099,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 283,
"path": "/examples/rrs_page_change_monitoring/_http.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nSmall private module providing HTTP requests for changemonitor.\nIt contains a small middleware which shields changemonitor from the lower\nHTTP layer and which makes more convenient testing or caching possible.\n\n\"\"\"\n\n__modulename__ = \"_http\"\n__author__ = \"Stanislav Heller\"\n__email__ = \"[email protected]\"\n__date__ = \"$22.6.2012 13:01:57$\"\n\nfrom datetime import datetime\nimport time\nimport httplib\nfrom urlparse import urlsplit\nimport socket\n\nclass _HTTPConnectionProxy(object):\n    \"\"\"\n    An intermediate layer for internet access.\n    It is necessary to implement here:\n        - basic connection and sending of a request\n        - the possibility to add various headers\n        - the possibility to set a timeout\n        - redirect handlers\n        - cookie handlers\n\n    What may be implemented here in the future:\n        - access to rrs_proxy (a part of ReResearch)\n        - simplified testing: instead of sending requests to the network, some\n          kind of associative memory which would return test data\n        - a small helper cache which would reduce a large number of identical\n          requests to the database\n\n    Design pattern: Proxy pattern.\n    \"\"\"\n\n    # pretending a proper web browser\n    # 1-1 copy of headers sent by Google Chrome run on Ubuntu Linux\n    # only 'accept-encoding' was changed in order not to be bothered with decompression\n    default_header = {\n        \"connection\":\"keep-alive\",\n        \"cache-control\":\"max-age=0\",\n        \"user-agent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20100101 Firefox/12.0\",\n        \"accept\":\"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\",\n        \"accept-encoding\":\"identity\",\n        \"accept-language\":\"en-US,en;q=0.8\",\n        \"accept-charset\":\"ISO-8859-1;q=0.7,*;q=0.3\"\n    }\n\n    default_max_redirects = 10\n\n    def __init__(self,url,timeout=None):\n        \"\"\"\n        @param url: requested URL (only server name is taken into account now)\n        @type url: basestring\n        @param timeout: timeout applied to requests in 
this connection (None sets default from httplib/socket)\n @type timeout: number\n \"\"\"\n self.netloc = urlsplit(url).netloc\n self.timeout = timeout\n\n\n\n def send_request(self, method, url, headers=default_header, max_redirects=default_max_redirects):\n \"\"\"\n @param method: HTTP method (GET/HEAD...)\n @type method: str\n @param url: requested URL (net location must not differ from the one passed to the constructor)\n @type url: basestring (str or unicode)\n @param headers: sent HTTP headers (defaults to pretending a web browser)\n @type headers: dict\n @param max_redirects: sets the maximum number of redirects to be followed\n @type max_redirects: number (only non-negative integers make sense here)\n @returns: 4-tuple of (response code recieved from the (last in case\\\nof redirection) server) and (dictionary of retrieved headers or None if none\\\narrived) and (string containing body of the response -- empty for HEAD\\\nrequests) and (final URL).\n \"\"\"\n actual_url = url\n num_redirects = 0\n\n # loop handling redirects\n while True:\n splitted_url = urlsplit(actual_url)\n\n # we only check url against the one got from constructor before redirects\n if num_redirects == 0 and splitted_url.netloc != self.netloc:\n raise ValueError(\"Net location of the query doesn't match the one this connection was established with\")\n\n # we are making connection for every single request to avoid\n # problems with reuse. Actually it is neccessary for following\n # redirects.\n if self.timeout != None:\n conn = httplib.HTTPConnection(splitted_url.netloc, timeout=self.timeout)\n else:\n conn = httplib.HTTPConnection(splitted_url.netloc)\n\n # build a path identifying a file on the server\n req_url = splitted_url.path\n if splitted_url.query:\n req_url += '?' + splitted_url.query\n\n try:\n conn.request(method, req_url, headers=headers)\n except socket.timeout as e:\n#? print \"Timeout (%s)\" % (e)\n return None\n except socket.error as e:\n#? 
print \"A socket error(%s)\" % (e)\n return None\n response = conn.getresponse()\n\n # get headers from response and build a dict from them\n retrieved_headers = {}\n for header_tuple in response.getheaders():\n retrieved_headers[header_tuple[0]] = header_tuple[1]\n\n if response.status >= 400:\n return (response.status, retrieved_headers, response.read(), actual_url)\n\n\n # following redirections\n if response.status in [301,302,303]:\n if 'location' in retrieved_headers and num_redirects < max_redirects:\n actual_url = retrieved_headers['location']\n num_redirects += 1\n continue\n else:\n return (response.status, retrieved_headers, response.read(), actual_url)\n\n # only \"succesful\" exit point of the loop and thus of the whole method\n if response.status == 200:\n return (response.status, retrieved_headers, response.read(), actual_url)\n\n # an unknown response code\n return (response.status, retrieved_headers, response.read(), actual_url)\n\n\nclass HTTPDateTime(object):\n \"\"\"\n Datetime class for manipulating time within HTTP enviroment.\n\n Usage:\n >>> h = HTTPDateTime()\n >>> h\n HTTPDateTime(Thu, 01 Jan 1970 00:00:00 GMT)\n >>> h.now()\n >>> h\n HTTPDateTime(Sat, 30 Jun 2012 16:09:43 GMT)\n >>> h.to_httpheader_format()\n 'Sat, 30 Jun 2012 16:09:43 GMT'\n\n FIXME: solve the problem with GMT:\n >>> h = HTTPDateTime()\n >>> h.to_timestamp()\n -7200.0\n # WTF?\n \"\"\"\n def __init__(self, year=1970, month=1, day=1, hour=0, minute=0, second=0, microsecond=0):\n self._datetime = datetime(year, month, day, hour, minute, second, microsecond)\n\n def to_httpheader_format(self):\n \"\"\"\n Converts this object into date and time in HTTP-header format.\n\n @returns: date and time in HTTP format, i.e. 
'Wed, 31 Aug 2011 16:45:03 GMT'.\n @rtype: str\n \"\"\"\n return self._datetime.strftime(\"%a, %d %b %Y %H:%M:%S GMT\")\n\n def from_httpheader_format(self, timestr):\n \"\"\"\n Parse http header datetime format and save into this object.\n\n @param timestr: date and time in format which is used by HTTP protocol\n @type timestr: str\n @returns: HTTPDateTime object equivalent to date and time of the timestr\n @rtype: HTTPDateTime\n \"\"\"\n ts = time.strptime(timestr, \"%a, %d %b %Y %H:%M:%S GMT\")\n self.from_timestamp(time.mktime(ts))\n return self\n\n def to_timestamp(self):\n \"\"\"\n Convert into UNIX timestamp. (seconds since start of the UNIX epoch).\n\n @returns: unix timestamp\n @rtype: float\n \"\"\"\n ts = time.strptime(str(self._datetime.year) + '-' + str(self._datetime.month) + \\\n '-' + str(self._datetime.day) + 'T' + str(self._datetime.hour) + ':' + \\\n str(self._datetime.minute) + ':' + str(self._datetime.second) + \\\n \" GMT\" , '%Y-%m-%dT%H:%M:%S %Z')\n# return time.mktime(ts) - 3600 + (self._datetime.microsecond / 1000000.0)\n return time.mktime(ts) + (self._datetime.microsecond / 1000000.0)\n\n def from_timestamp(self, timestamp):\n \"\"\"\n Set date and time from timestamp.\n\n @param timestamp: time since start of the unix epoch\n @type timestamp: float\n @returns: HTTPDateTime object representing date and time of the timestamp\n @rtype: HTTPDateTime\n \"\"\"\n self._datetime = datetime.fromtimestamp(timestamp)\n return self\n\n def to_datetime(self):\n \"\"\"\n Convert the date and time from this object into python's datetime.datetime.\n\n @returns: datetime object equivalent to date and time of this object\n @rtype: datetime.datetime\n \"\"\"\n return self._datetime\n\n def from_datetime(self, datetimeobj):\n self._datetime = datetimeobj\n return self\n\n def from_gridfs_upload_date(self, upload_date):\n \"\"\"\n Convert the date and time from grid_file.update_date string \n to HTTPDateTime object.\n\n @param update_date: time string from 
grid_file.update_date\n @returns: HTTPDateTime object representing date and time of update_date\n @rtype: HTTPDateTime\n \"\"\"\n ts = time.strptime((str(upload_date))[:18],\"%Y-%m-%d %H:%M:%S\")\n self.from_timestamp(time.mktime(ts))\n return self\n\n def now(self):\n \"\"\"\n Set the time of this object as current time (time.time())\n\n @returns: HTTPDateTime object representing current date and time.\n @rtype: HTTPDateTime\n \"\"\"\n self._datetime = datetime.now()\n return self\n\n def __repr__(self):\n return \"HTTPDateTime(%s)\" % self.to_httpheader_format()\n\n def __lt__(self, other):\n return self._datetime.__lt__(other._datetime)\n\n def __le__(self, other):\n return self._datetime.__le__(other._datetime)\n\n def __eq__(self, other):\n return self._datetime.__eq__(other._datetime)\n\n def __ne__(self, other):\n return self._datetime.__ne__(other._datetime)\n\n def __gt__(self, other):\n return self._datetime.__gt__(other._datetime)\n\n def __ge__(self, other):\n return self._datetime.__ge__(other._datetime)\n\n\nif __name__ == \"__main__\":\n h = HTTPDateTime()\n print h\n print h.to_timestamp()\n s = time.time()\n print h.from_timestamp(s)\n print h.to_timestamp(), s\n l = h.to_httpheader_format()\n print l\n h.from_httpheader_format(l)\n print h.to_httpheader_format()\n\n"
},
{
"alpha_fraction": 0.5574771165847778,
"alphanum_fraction": 0.5656154751777649,
"avg_line_length": 25.567567825317383,
"blob_id": "634b1a692a7ac70c00527a4a353c8463ca606761",
"content_id": "767308caea00f0d98a36f36e9a450a06dbf16cef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 983,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 37,
"path": "/setup.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\nfrom setuptools import setup\n\ndef readme():\n\t\"\"\"\n\tReturns the contents of README.rst (which should contain information about this package)\n\t\"\"\"\n\twith open('README.rst') as f:\n\t\treturn f.read()\n\nsetup(name='deliverables2',\n      #version='0.1',\n      description='Deliverables extractor',\n      long_description=readme(),\n      #classifiers=[\n      #  'Development Status :: 3 - Alpha',\n      #  'License :: OSI Approved :: MIT License',\n      #  'Programming Language :: Python :: 2.7',\n      #  'Topic :: Text Processing :: Linguistic',\n      #],\n      #keywords='',\n      #url='',\n      author='Jan Skácel',\n      author_email='[email protected]',\n      #license='MIT',\n      packages=['deliverables'],\n      install_requires=[\n          'rrslib','pymongo'\n      ],\n      #test_suite='nose.collector',\n      #tests_require=['nose', 'nose-cover3'],\n      #entry_points={\n      #    'console_scripts': ['funniest-joke=funniest.cmd:main'],\n      #},\n      zip_safe=False)\n"
},
{
"alpha_fraction": 0.6919371485710144,
"alphanum_fraction": 0.6948529481887817,
"avg_line_length": 23.421052932739258,
"blob_id": "c219b38e9fdbe7a3c189defd613db38eef225dd2",
"content_id": "336d93bd5a2001de2ae73669669130ce6a59e503",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7888,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 323,
"path": "/deliverables/interface.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nScript name: Deliverables\nTask: Find out page with deliverables, get links leading to deliverable\ndocuments and index all available data.\n\nInput: project site URL\nOutput: XML containing data stored in objects (rrslib.db.model) about deliverables\n\n\nThis script is part of ReResearch system.\n\nImplemented by (authors):\n\n\t- Jan Skácel\n\t- Pavel Novotny\n\t- Stanislav Heller\n\t- Lukas Macko\n\nBrno University of Technology (BUT)\nFaculty of Information Technology (FIT)\n\"\"\"\n\n# RRSlib\nfrom rrslib.web.httptools import is_url_valid\nfrom rrslib.db.model import RRSProject,RRSUrl,RRSRelationshipPublicationProject\nfrom rrslib.db.model import RRSRelationshipProjectUrl\nfrom rrslib.xml.xmlconverter import Model2XMLConverter\n# Modules of deliverables project\nfrom gethtmlandparse import GetHTMLAndParse\nfrom getdelivpage import GetDelivPage\nfrom getdelivrecords import GetDelivRecords\n# Standard modules\nimport StringIO\nimport os\nimport re\n\n# JSON conversion\n\nimport xml.etree.cElementTree as ET\nimport simplejson, optparse, sys, os\n\ndef elem_to_internal(elem,strip=1):\n\t\"\"\"\n\tConvert an Element into an internal dictionary\n\t\"\"\"\n\td = {}\n\tfor key, value in elem.attrib.items():\n\t\td['@'+key] = value\n\t# loop over subelements to merge them\n\tfor subelem in elem:\n\t\tv = elem_to_internal(subelem,strip=strip)\n\t\ttag = subelem.tag\n\t\tvalue = v[tag]\n\t\ttry:\n\t\t\t# add to existing list for this tag\n\t\t\td[tag].append(value)\n\t\texcept AttributeError:\n\t\t\t# turn existing entry into a list\n\t\t\td[tag] = [d[tag], value]\n\t\texcept KeyError:\n\t\t\t# add a new non-list entry\n\t\t\td[tag] = value\n\ttext = elem.text\n\ttail = elem.tail\n\tif strip:\n\t\t\t# ignore leading and trailing whitespace\n\t\t\tif text: text = text.strip()\n\t\t\tif tail: tail = tail.strip()\n\tif tail:\n\t\t\td['#tail'] = tail\n\tif d:\n\t\t\t# use #text element if other attributes 
exist\n\t\t\tif text: d[\"#text\"] = text\n\telse:\n\t\t\t# text is the value if no attributes\n\t\t\td = text or None\n\treturn {elem.tag: d}\n\ndef elem2json(elem, strip=1):\n\t\"\"\"\n\tConvert an ElementTree or Element into a JSON string.\n\t\"\"\"\n\tif hasattr(elem, 'getroot'):\n\t\t\telem = elem.getroot()\n\treturn simplejson.dumps(elem_to_internal(elem,strip=strip))\n\ndef xml2json(xmlstring,strip=1):\n\t\"\"\"\n\tConvert an XML string into a JSON string.\n\t\"\"\"\n\telem = ET.fromstring(xmlstring)\n\treturn elem2json(elem,strip=strip)\n\nclass Deliverables:\n\t\"\"\"\n\tClass implementing interface for purpose of using this module in other projects\n\t\"\"\"\n\n\tpr = None\n\t\n\tdeliverables_rrs_xml = \"\"\n\t\n\tregexps = []\n\t\n\tdef __init__(self,debug=False, verbose=False, quiet=True):\n\t\t\"\"\"\n\t\tConstructor of the class. Initialize deliverables extractor interface\n\n\t\t@type debug: boolean\n\t\t@param debug: Prints debugging additional information\n\t\t\n\t\t@type quiet: boolean\n\t\t@param quiet: No function will output anything on STDOUT when True.\n\t\t\n\t\t@type verbose: boolean\n\t\t@param verbose: Prints additional information about parsing on STDOUT when True.\n\t\t\n\t\t\"\"\"\n\t\tself.opt = {\n\t\t\t'debug': False,\n\t\t\t'verbose': verbose,\n\t\t\t'regexp': None,\n\t\t\t'quiet': quiet,\n\t\t\t# We actually do not permit selecting single page without search\n\t\t\t# in this version of interface\n\t\t\t'page': False,\n\t\t\t'file': None,\n\t\t\t# Mechanism of storing file has been overloaded\n\t\t\t# No file is stored. 
Output RRS-XML is stored in atribute instead\n\t\t\t'storefile': True}\n\t\t\n\t\tlinks = None\n\n\n\tdef parse(self,url):\n\t\t\"\"\"\n\t\tFinds deliverables page and parse data\n\t\t\t\n\t\t@type url: string\n\t\t@param url: String defining initial url for deliverables search.\n\t\t\n\t\t\"\"\"\n\t\t# URL of the project\n\t\tself.opt_url = url\n\t\t\n\t\t# initialize main html handler and parser\n\t\tself.htmlhandler = GetHTMLAndParse()\n\n\t\t# searching deliverable page\n\t\tself.pagesearch = GetDelivPage(self.opt_url,\n\t\t\tverbose=self.opt['verbose'],\n\t\t\tdebug=self.opt['debug'])\n\t\t\t\t\t\t \n\t\t# extracting informations from page\n\t\tself.recordhandler = GetDelivRecords(debug=self.opt['debug'])\n\t\t\n\t\t# Proceed with extraction\n\t\tself.links = None\n\t\tself.main()\n\t\t\n\tdef parse_page(self,deliverables_url):\n\t\t\"\"\"\n\t\tFinds deliverables page and parse data\n\t\t\t\n\t\t@type deliverables_url: string\n\t\t@param deliverables_url: String defining url for deliverables extraction.\n\t\t\n\t\t\"\"\"\n\n\t\t# initialize main html handler and parser\n\t\tself.htmlhandler = GetHTMLAndParse()\n\t\t\t\t\t\t \n\t\t# extracting informations from page\n\t\tself.recordhandler = GetDelivRecords(debug=self.opt['debug'])\n\n\t\t# URL of the project\n\t\tself.opt_url = deliverables_url\n\t\t\n\t\t# Proceed with extraction\n\t\tself.links = [deliverables_url]\n\t\tself.main()\n\n\tdef main(self):\n\t\t\"\"\"\n\t\tMethod implementing actions choosen by parameters in constructor.\n\t\t\"\"\"\n\n\t\t# Searching deliverable page\n\t\tif not self.links:\n\t\t\tself.pagesearch._sigwords.extend(self.regexps)\n\t\t\tself.links = self.pagesearch.get_deliverable_page()\n\t\t##################################\n\t\tif self.links[0] == -1:\n\t\t\treturn self.links\n\n\t\tif self.opt['verbose']:\n\t\t\tprint \"*\"*80\n\t\t\tprint \"Deliverable page: \", \" \".join(self.links)\n\t\t\tprint \"*\"*80\n\n\t\tself.pr = RRSProject()\n\n\t\t#Project - Url 
relationship\n\t\tif not self.opt['page']:\n\t\t\tpr_url = RRSUrl(link=self.opt_url)\n\t\t\tpr_url_rel = RRSRelationshipProjectUrl()\n\t\t\tpr_url_rel.set_entity(pr_url)\n\t\t\tself.pr['url'] = pr_url_rel\n\t\tself.recordhandler.process_pages(self.links)\n\n\t\trecords = self.recordhandler.get_deliverables()\n\n\t\tif type(records) == list:\n\t\t\t#create relationship Project Publication\n\t\t\tself.records = records\n\t\t\tfor r in records:\n\t\t\t\trel = RRSRelationshipPublicationProject()\n\t\t\t\trel.set_entity(r)\n\t\t\t\tself.pr['publication'] = rel\n\t\t\t\t#create XML from RRSProject\n\t\t\t\toutput = StringIO.StringIO()\n\t\t\t\tconverter = Model2XMLConverter(stream=output)\n\t\t\t\tconverter.convert(self.pr)\n\t\t\t\tout = output.getvalue()\n\t\t\t\toutput.close()\n\t\t\t\t#Either return RRSProject object or XML in string or store result into a file \n\t\t\t\tif self.opt['storefile']:\n\n\t\t\t\t\tr = self._storeToFile(self.opt_url,out)\n\t\t\t\t\t#test if store ok\n\t\t\t\t\tif r[0]!=1:\n\t\t\t\t\t\tprint r[1]\n\t\t\t \n\t\t\t\telse:\n\t\t\t\t\tprint out.encode('UTF-8')\n\t\t\t\treturn self.pr\n\n\t\telse:\n\t\t\treturn records\n\t\n\tdef _storeToFile(self,url,res):\n\t\t\"\"\"\n\t\tOverrides method from original Deliverables class. This method just saves\n\t\tthe RRS XML string to object atribute.\n\t\t\n\t\t@type res: string\n\t\t@param res: Output RRS XML string for writing into object atribute.\n\t\t\n\t\t@type url: string\n\t\t@param url: For compatibility with Deliverables class method only. It is not used.\n\t\t\n\t\t@return: (1, 'OK')\n\t\t\"\"\"\n\t\tself.deliverables_rrs_xml = res.encode('UTF-8')\n\n\t\treturn (1, 'OK')\n\t\n\tdef get_deliverables(self):\n\t\t\"\"\"\n\t\tAccess method to object of project with references to all parsed\n\t\tdeliverables. 
It runs parsing only when necessary.\n\t\t\n\t\t@return: None when any error is found or RRSProject instance\n\t\t\"\"\"\n\t\treturn self.pr\n\t\n\tdef get_rrs_xml(self):\n\t\t\"\"\"\n\t\tAccess method to object of project with references to all parsed\n\t\tdeliverables. It runs parsing only when necessary.\n\t\t\n\t\t@return: String with RRS XML\n\t\t\"\"\"\n\t\treturn self.deliverables_rrs_xml\n\t\t\n\n\tdef get_json(self):\n\t\t\"\"\"\n\t\tAccess method to data in form of JSON string.\n\t\t\n\t\t@return: String in JSON\n\t\t\"\"\"\t\n\t\treturn xml2json(self.get_rrs_xml())\n\t\n\tdef get_list(self):\n\t\t\"\"\"\n\t\tAccess method to object of project with references to all parsed\n\t\tdeliverables.\n\t\t\n\t\t@return: List of RRSPublication instances\n\t\t\"\"\"\t\n\t\treturn self.records\n\n\tdef __debug(self,msg):\n\t\t\"\"\"\n\t\tPrints debug message.\n\t\t\n\t\t@type msg: string\n\t\t@param msg: String for printing on STDOUT\n\t\t\"\"\"\n\t\tif self.opt['debug']:\n\t\t\tprint(\"Debug message: \" + str(msg))\n\t\t\t\n\tdef add_regexp(self,regexp):\n\t\t\"\"\"\n\t\tAdds a regular expression to the deliverables page ranking regexp list.\n\t\t\n\t\t@type regexp: string\n\t\t@param regexp: Regular expression pattern for adding to deliverables page ranking regexp list\n\t\t\"\"\"\n\t\tself.regexps.append(regexp)\n\t\t\n\tdef remove_regexp(self,regexp):\n\t\t\"\"\"\n\t\tRemoves a regular expression from the deliverables page ranking regexp list.\n\t\t\n\t\t@type regexp: string\n\t\t@param regexp: Regular expression pattern to remove from deliverables page ranking regexp list\n\t\t\"\"\"\n\t\tself.regexps.remove(regexp)\n"
},
{
"alpha_fraction": 0.6552962064743042,
"alphanum_fraction": 0.6678635478019714,
"avg_line_length": 20.843137741088867,
"blob_id": "000097805f83de2e23e92fab10449e8982340e1f",
"content_id": "96bc2866dbadd0f76fcc787fa42d7ee1853ba619",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1114,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 51,
"path": "/examples/rrs_page_change_monitoring/errors.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nExceptions raised by rrslib.web.changemonitor package\n\"\"\"\n\n__modulename__ = \"errors\"\n__author__ = \"Stanislav Heller\"\n__email__ = \"[email protected]\"\n__date__ = \"$22.6.2012 13:01:57$\"\n\n\nclass ChangeMonitorError(Exception):\n    \"\"\"\n    Base error class for all exceptions within this package.\n    \"\"\"\n    pass\n\nclass DocumentTooLarge(ChangeMonitorError):\n    \"\"\"\n    Raised when monitored document's size exceeds the LARGE_DOCUMENT_SIZE\n    constant.\n    \"\"\"\n    pass\n\nclass DocumentNotAvailable(ChangeMonitorError):\n    \"\"\"\n    Raised when document is not available on the URL or on the storage.\n    \"\"\"\n    pass\n\nclass DocumentHistoryNotAvaliable(ChangeMonitorError):\n    \"\"\"\n    Raised when trying to get a version or diff of a document whose version\n    history is not stored in the storage.\n    \"\"\"\n    pass\n\nclass NotSupportedYet(ChangeMonitorError):\n    \"\"\"\n    Raised when the method/class/function is not supported in this\n    implementation.\n    \"\"\"\n    pass\n\nclass UidError(ChangeMonitorError):\n    \"\"\"\n    Raised when some error connected with user id occurred.\n    \"\"\"\n    pass\n"
},
{
"alpha_fraction": 0.679420530796051,
"alphanum_fraction": 0.681277871131897,
"avg_line_length": 29.942529678344727,
"blob_id": "c985f7c598be5fbb23a949ca6fb241a70eb8c027",
"content_id": "0df913148a5b68bed15732d2b436a99126011650",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2692,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 87,
"path": "/examples/example2.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nThis code should serve as an example of how to use deliverables2\nas a module using its threading api\n\"\"\"\n\n# Import deliverables\nfrom deliverables.thread import *\n# Example of threading deamon\n\n# Load iterator and establish connection to database (if no db connection is given, it is expected to run on localhost:)\ndaemon = Deliverables_daemon(8,[\n\"http://2wear.ics.forth.gr/\"\n,\"http://adapt.ls.fi.upm.es/adapt.htm\"\n,\"http://agentacademy.iti.gr/\"\n,\"http://ametist.cs.utwente.nl/\"\n,\"http://aris-ist.intranet.gr/\"\n,\"http://benogo.dk/\"\n,\"http://bind.upatras.gr/\"\n,\"http://cic.vtt.fi/projects/elegal/\"\n,\"http://cmp.felk.cvut.cz/projects/omniviews/\"\n,\"http://context.upc.es\"\n,\"http://cortex.di.fc.ul.pt/\"\n,\"http://danae.rd.francetelecom.com/\"\n,\"http://dip.semanticweb.org/\"\n,\"http://muchmore.dfki.de/\"\n,\"http://qviz.eu/\"\n,\"http://recherche.ircam.fr/projects/SemanticHIFI/wg/\"\n,\"http://research.ac.upc.edu/catnet/\"\n,\"http://www.argonaproject.eu/\"\n,\"http://www.artist-embedded.org/\"\n,\"http://www.asisknown.org/\"\n,\"http://www.awissenet.eu/\"\n,\"http://www.bootstrep.eu/\"\n,\"http://www.consensus-online.org/\"\n,\"http://www.cvisproject.org/\"\n,\"http://www.diadem-firewall.org/\"\n,\"http://www.elu-project.com/\"\n,\"http://www.epros.ed.ac.uk/mission/\"\n,\"http://www.equimar.org/\"\n,\"http://www.euro-cscl.org/site/itcole/\"\n,\"http://www.fusionweb.org/fusion/\"\n,\"http://www-g.eng.cam.ac.uk/robuspic/\"\n,\"http://www.imec.be/impact/ 
\"\n,\"http://www-interval.imag.fr/\"\n,\"http://www.ist-gollum.org/\"\n,\"http://www.ist-opium.org/\"\n,\"http://www.ltg.ed.ac.uk/magicster/\"\n,\"http://www.mescal.org/\"\n,\"http://www.mpower-project.eu/\"\n,\"http://www.mtitproject.com\"\n,\"http://www.nlr.nl/public/hosted-sites/hybridge/\"\n,\"http://www.ofai.at/rascalli/\"\n,\"http://www.s-ten.eu/\"\n,\"http://www.umsic.org/\"\n,\"http://io.intec.ugent.be/\"\n,\"http://siconos.inrialpes.fr/\"\n,\"http://context.upc.es/\"\n,\"http://www.ist-mascot.org\"\n,\"http://www.6net.org/\"\n,\"http://www.cwi.nl/projects/mascot/\"\n,\"http://www.ana-project.org/\"\n,\"http://www.multimatch.eu/\"\n,\"http://www.ist-iphobac.org/\"\n,\"http://www.bacs.ethz.ch/\"\n,\"http://metokis.salzburgresearch.at/index.html\"\n,\"http://dip.semanticweb.org/\"\n,\"http://mexpress.intranet.gr/\"\n,\"http://www.stork.eu.org/\"\n,\"http://www.mg-bl.com/\"\n,\"http://www.irt.de/mhp-confidence/\"\n,\"http://www.ist-sims.org\"\n,\"http://www.ist-discreet.org\"\n,\"http://www.safespot-eu.org\"\n,\"http://www.safespot-eu.org\"\n,\"http://www.mhp-knowledgebase.org/\"\n].__iter__())\n\n# Parsing and writing to database\ndaemon.parse()\n\n# Checking for fatal error (most errors are recoverable, some like unhandeled\n# exception in thread or no db connection are not)\nif daemon.isErr():\n\tprint daemon.isErr()\n"
},
{
"alpha_fraction": 0.55059415102005,
"alphanum_fraction": 0.5713570713996887,
"avg_line_length": 36.53520965576172,
"blob_id": "ff42d0ec9135534447d07e13ad37ace5f7b9e2c0",
"content_id": "b3caa2183b104bd2d087ab21e8a87360b3125d48",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7995,
"license_type": "no_license",
"max_line_length": 148,
"num_lines": 213,
"path": "/examples/rrs_page_change_monitoring/resolver.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/python\n# -*- coding: utf-8 -*-\n\n__modulename__ = \"resolver\"\n__author__ = \"Stanislav Heller\"\n__email__ = \"[email protected]\"\n__date__ = \"$25.6.2012 12:12:44$\"\n\nfrom _http import HTTPDateTime\nimport _http\nimport hashlib\nimport model\n\nclass Resolver(object):\n    \"\"\"\n    System providing the functionality \"has anything changed at the given URL?\".\n    Many corner cases concerning HTTP headers, md5 and content-length are\n    handled here.\n\n    Implications:\n        changed(HTTP:content-length) -> content changed\n        changed(HTTP:md5) -> content changed\n        changed(HTTP:response code) -> it depends..\n        changed(HTTP:last-modified) -> doesn't matter\n    \"\"\"\n    def __init__(self, storage, timeout = 10):\n        # Storage\n        self._storage = storage\n#?        print \"RESOLVER: STORAGE: \",self._storage\n        # are large documents allowed?\n        self._allow_large = self._storage.allow_large\n        # GridFS\n        self._filesystem = storage.filesystem\n        # Collection \"httpheader\"\n        self._headers = storage._headermeta\n        # Timeout for checking pages\n        self._timeout = timeout\n\n    def resolve(self, url):\n        # This is probably where most of the magic of the whole system happens\n\n        # It must ensure:\n        # - that a request is sent to the server\n        # - that the database is updated correctly if anything changed\n        # - that after a redirect both URLs (the original and the redirected\n        #   one) are stored in Content.urls = []\n        # and many other things..\n\n#pseudocode\n        # fetch last from DB\n        # ask HEAD\n\n        # decide whether a change occurred\n        # if so -> download\n        # if can't be told so, download\n        # and compute our md5 and check based on it\n        # if certainly not (md5, etag) say \"nothing changed\"\n\n        # for downloaded, get diff, store it into the DB\n        # and store the received headers as well\n# pseudocode end\n        decision = self._make_decision(url)\n        #print(decision)\n        self._store_into_db(decision,url)\n\n    def _make_decision(self, url):\n        self.db_metainfo = self._get_metainfo_from_db(url)\n        conn_proxy = 
_http._HTTPConnectionProxy(url,self._timeout)\n self.web_metainfo = conn_proxy.send_request(\"HEAD\",url)\n \n store_decision = (0,\"Store both header and content\")\n\n#? print \"Resolver: _make_decision: db_metainfo\",self.db_metainfo\n#? print self.web_metainfo\n\n if self.db_metainfo == None:\n store_decision = (0,\"Store both header and content\")\n self._make_decision_2(url,conn_proxy)\n return store_decision\n\n if self.web_metainfo == None:\n store_decision = (3, \"Timeouted\")\n return store_decision\n\n try:\n if self.db_metainfo['etag'] == self.web_metainfo[1]['etag']:\n store_decision = (1, \"Store only header (based on etags equality)\")\n self._md5 = self.web_metainfo[1]['content-md5']\n except KeyError:\n pass\n\n try:\n if self.web_metainfo[1]['content-md5'] == self.db_metainfo['content']['md5']:\n store_decision = (1, \"Store only header (based on recieved content-md5 equality)\")\n self._md5 = self.web_metainfo[1]['content-md5']\n except KeyError:\n pass\n\n if store_decision[0] != 0:\n return store_decision\n else:\n return self._make_decision_2(url,conn_proxy)\n \n def _make_decision_2(self, url, conn_proxy):\n # etag and content-md5 are the only authoritave evidents of 'it has not changed'\n # therefore, now is the time to download the content\n\n self._web_full_info = conn_proxy.send_request(\"GET\",url)\n \n if self._web_full_info == None:\n return \"Pruuser, HEAD prosel, GET uz ne\"\n\n#? print \"header: \" + self._web_full_info[1]['content-length'] + \", len(): \" + str(len(self._web_full_info[2]))\n#? print \"_web_full_info[0]: \",self._web_full_info[0]\n#? print \"_web_full_info[1]: \",self._web_full_info[1]['date'] \n#? print \"_web_full_info[2]: \",self._web_full_info[2] # this is the full html code of the page\n#? print \"_web_full_info[3]: \",self._web_full_info[3]\n\n mdfiver = hashlib.md5()\n mdfiver.update(self._web_full_info[2])\n self._md5 = mdfiver.hexdigest()\n#? 
print \"md5: \" + self._md5\n\n shaoner = hashlib.sha1()\n shaoner.update(self._web_full_info[2])\n self._sha1 = shaoner.hexdigest()\n#? print \"sha1: \" + self._sha1\n\n if (self.db_metainfo is not None) and self._md5 == self.db_metainfo['content']['md5'] and self._sha1 == self.db_metainfo['content']['sha1']:\n store_decision = (1, \"Store only header (based on computed md5 and sha1 equality)\")\n else:\n store_decision = (0, \"Store both header and content\")\n#? print \"store_decision: \",store_decision\n return store_decision\n\n\n def _store_into_db(self, store_decision, url):\n \"\"\"\n Stores metainfo (and content) in the storage.\n \"\"\"\n \n#? print \"In Resolver._store_into_db: store_decision: \",store_decision\n if store_decision[0] == 0:\n content_id = {\n 'filename': url,\n 'md5': self._md5,\n 'sha1': self._sha1,\n 'content-type': self._web_full_info[1]['content-type'],\n# 'length': self._web_full_info[1]['content-length'],\n 'urls': [url]\n }\n # store both headers and content\n # store data in GridFS... need to be consistent with the expectations of the other modules\n self._filesystem.put(self._web_full_info[2],filename=url,\n content_type=self._web_full_info[1]['content-type'],\n timestamp=HTTPDateTime().from_httpheader_format(self._web_full_info[1]['date']).to_timestamp())\n # save header AFTER content: enable search of content by header timestamp\n self._headers.save_header(url,self._web_full_info[0], self._web_full_info[1], content_id)\n elif store_decision[0] == 1:\n # store headers only\n self._headers.save_header(url,self.web_metainfo[0], self.web_metainfo[1], None)\n elif store_decision[0] == 3:\n # store information about the timeout\n self._headers.save_header(url,None, 'Timeouted', None)\n else:\n # this NEVER happens\n print \"Dafuq?\"\n#? 
print \"self._headers:\", self._headers\n return\n\n def _get_metainfo_from_db(self, url):\n \"\"\"\n Returns last metainfo upon the given url stored in the DB.\n \"\"\"\n# mockup_content = {\n# 'filename': \"http://www.aquafortis.cz/trenink.html\",\n# 'md5': '233fde7ca8a474f4cc7a198ba87822ff',\n# 'sha1': 'b2e4bce03a0578da5fd9e83b28acac819f365bda',\n# 'content_type': '',\n# 'length': 1347,\n# 'urls' : ['http://www.aquafortis.cz/trenink.html']\n# }\n#\n# mockup_header = {\n# 'timestamp': 1341161610.287,\n# 'response_code': 200,\n# 'last_modified': 'cosi',\n# 'etag': '\"928169-543-4c46b09cfca00\"',\n# 'uid': \"rrs_university\",\n# 'url': \"http://www.cosi.cz\",\n# 'content': mockup_content # object_id\n# }\n\n q = {'url':url}\n#? \tprint \"METAINFO FROM DB:\",self._headers.objects.find(q)\n#? for item in self._headers.objects.find(q):\n#? print \"get_metainfo_from_db: \",item #self._headers.objects.find(q)\n try:\n retval = self._headers.objects.find(q)[0]\n except:\n retval = None\n return retval\n\nclass Rule(object):\n \"\"\"\n Pravidlo pro resolver. Mozna se bude hodit nejake takove rozdeleni\n tech pravidel... proste rozdel a panuj.\n \"\"\"\n def __call__(self):\n \"\"\"\n Pokud __call__ vraci true, pak rule matchl.\n \"\"\"\n pass\n"
},
{
"alpha_fraction": 0.591422438621521,
"alphanum_fraction": 0.6052477955818176,
"avg_line_length": 39.04093551635742,
"blob_id": "79d692d03ca53b2dbd446efd9b58c0db342bdd15",
"content_id": "7f3178e58a151326462ed0f6f7e3fbcff9a96a64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 20555,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 513,
"path": "/examples/rrs_page_change_monitoring/__init__.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nChangemonitor -- monitoring changes on web\n\nTODO: docstring.\n1) koncepce, casti (checking, availability, versioning, differ)\n2) user-view vs global-view\n3) usage\n\nThe very basic and most probable usage:\n >>> from rrslib.web.changemonitor import Monitor\n >>> monitor = Monitor(user_id=\"rrs_university\")\n >>> resource = monitor.get(\"http://www.google.com\")\n >>> # if the page changed\n >>> if resource.check():\n >>> print res.get_diff(start='last', end='now')\n\n\n\"\"\"\n\n__modulename__ = \"changemonitor\"\n__author__ = \"Stanislav Heller\"\n__email__ = \"[email protected]\"\n__date__ = \"$21.6.2012 16:08:11$\"\n\nimport string\nimport diff\n\nfrom urlparse import urlparse\n\nfrom gridfs.errors import NoFile\nfrom pymongo import Connection\n\nfrom model import HttpHeaderMeta, Content, Storage, File\nfrom resolver import Resolver\nfrom _http import HTTPDateTime\nfrom errors import *\n\n__all__ = [\"Monitor\", \"MonitoredResource\", \"HTTPDateTime\"]\n\n# constant defining the size of file, which is supposed to be \"large\"\n# for more info see Monitor.allow_large_docuements.__doc__\nLARGE_DOCUMENT_SIZE = 4096\n\n\nclass MonitoredResource(object):\n \"\"\"\n Monitored resource (URL). The ressource is generally any document in any\n format, but most often it will be HTML code.\n\n This class wraps the URL content and metadata.\n\n The contents can be manipulated within the time so it can provide information\n about how the content changed in different versions of the document.\n\n Warning:\n Application developers should generally not need to instantiate this class\n directly. 
The only correct way how to get this object is through\n Monitor.get() method.\n\n Design pattern: Active Record\n\n Example of checking new version of document:\n >>> from rrslib.web.changemonitor import Monitor\n >>> monitor = Monitor(user_id=\"myuid\")\n >>> monitor\n Monitor(conn=Connection('localhost', 27017), dbname='webarchive', uid='myuid')\n >>> resource = monitor.get(\"http://www.myusefulpage.com/index.html\")\n >>> resource\n <MonitoredResource(url='http://www.myusefulpage.com/index.html', uid='myuid') at 0xb7398accL>\n >>> resource.check()\n True\n >>> # the resource has changed\n\n Checking availability of the document on the URL\n >>> from rrslib.web.changemonitor import HTTPDateTime\n >>> resource = monitor.get(\"http://www.nonexistentpage.com\")\n >>> resource.available()\n False\n >>> resource = monitor.get(\"http://www.myusefulpage.com/index.html\")\n >>> resource.available(HTTPDateTime(2012, 6, 30, 15, 34))\n True\n\n Example of getting last available version of the document on the URL\n >>> resource = monitor.get(\"http://www.myusefulpage.com/index.html\")\n >>> content = resource.get_last_version()\n >>> print content.data\n <html><head>\n ...\n >>> resource = monitor.get(\"http://www.crazynonexistentpage.com\")\n >>> content = resource.get_last_version()\n Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n DocumentNotAvailable: The content of this URL is not available and it\n is not in the storage.\n\n Example of getting version of the document in exact time\n >>> resource = monitor.get(\"http://www.myusefulpage.com/index.html\")\n >>> content = resource.get_version(HTTPDateTime(2012, 6, 30, 15, 34))\n\n Getting the last time when the document was checked:\n >>> resource = monitor.get(\"http://www.crazynotexistentpage.com\")\n >>> resource.last_checked()\n HTTPDateTime(Thu, 01 Jan 1970 00:00:00 GMT)\n \"\"\"\n def __init__(self, url, uid, storage):\n \"\"\"\n @param url: monitored URL\n @type url: basestring (str or 
unicode)\n @param uid: user identifier. This ID has to be unique for each one,\n who is using changemonitor.\n @type uid: str\n @param storage: storage of monitored-resource data\n @type storage: model.Storage\n \"\"\"\n # resource data\n self.url = url\n self.uid = uid\n\n # models\n self.storage = storage\n self.headers = storage._headermeta\n\n # resolver\n self.resolver = Resolver(storage)\n self._checked = False\n # file\n try:\n self.file = self.storage.get(url)\n except DocumentNotAvailable:\n # if the file is not in the storage, resolver has to check\n # the url and load actual data into the storage\n self.resolver.resolve(url)\n self._checked = True\n try:\n self.file = self.storage.get(url)\n except DocumentNotAvailable:\n raise DocumentNotAvailable(\"Resource '%s' is not available.\" % url)\n\n\n def check(self):\n \"\"\"\n Check the resource URL and load the most recent version into database.\n\n TODO: consider using @lazy decorator. Most of use cases use this method\n so we have to insure that it will be called only once.\n\n @raises: DocumentTooLargeException\n @returns: True if the document has changed since last check.\n \"\"\"\n # bude vyuzivat resolveru pro checknuti URL a ziskani informace o tom,\n # jestli byl dokument zmenen. 
Mozna bude take dobre nahrat rovnou do\n # self.file nejnovejsi verzi, ale o tom je potreba jeste pouvazovat.\n\n # urcite je potreba pred kazdym checkem refreshout file cache\n self.file.refresh_cache()\n\n if not self._checked: # use resolver to get most recent version if not yet checked\n self.resolver.resolve(self.url)\n self._checked = True\n try:\n self.file = self.storage.get(self.url)\n except DocumentNotAvailable:\n raise DocumentNotAvailable(\"Resource '%s' is not available.\" % self.url)\n \n # and determine return value\n try:\n # time of last check\n _now = self.headers.get_by_version(self.url,-1,True)['timestamp'] \n _now = HTTPDateTime().from_timestamp(_now)\n # time of previous check\n _prev = self.headers.get_by_version(self.url,-2,True)['timestamp']\n _prev = HTTPDateTime().from_timestamp(_prev)\n except TypeError: # this is first time document is checked\n return True\n\n d = self.get_diff(_prev,_now)\n \n if d is None: return False\n if isinstance(d, basestring): # PlainTextDiff\n if len(d)==0: return False\n else: return True\n if isinstance(d, dict): # BinaryDiff \n if string.find(d['metainfo'],\"(patch data)\")==-1 : \n # TODO: find what indicates that BinaryDiff-ed file hasn't changed\n # current version seems to work, but needs more testing \n return False\n else: return True\n try: # d is htmldiff output\n chunk = d.next()\n if len(chunk.added)==0 and len(chunk.removed)==0:\n return False\n except (StopIteration, TypeError): # if can't get d.next(), then d is probably empty -> no change\n return False\n \n return True\n \n def get_last_version(self):\n \"\"\"\n Get last available content of the document. 
If the document is available\n at this time, returns most recent version which is on the web server.\n\n @returns: Last available content of this resource.\n @rtype: Content\n @raises: DocumentNotAvailable if no content available (resource does not\n exist on the URL and never existed within the known history)\n \"\"\"\n self.resolver.resolve(self.url)\n self._checked = True\n try:\n self.file = self.storage.get(self.url)\n return self.file.get_last_version()\n except NoFile: # FIXME tady to prece nemuze byt??!\n raise DocumentNotAvailable(\"Resource '%s' is not available.\" % self.url)\n\n\n def get_version(self, time_or_version):\n \"\"\"\n Get content of this document in specified time or version. If the\n document was not available in given time, returns last available content.\n If there is no available content until given time, raises exception.\n\n @param time_or_version: Time or version of the content we want to retrieve.\n Version numbering is a convenience atop the GridFS API provided\n by MongoDB. version ``-1`` will be the most recently uploaded\n matching file, ``-2`` the second most recently uploaded, etc.\n Version ``0`` will be the first version\n uploaded, ``1`` the second version, etc. 
So if three versions\n have been uploaded, then version ``0`` is the same as version\n ``-3``, version ``1`` is the same as version ``-2``, and\n version ``2`` is the same as version ``-1``.\n @type time_or_version: HTTPDateTime or int\n @raises: DocumentHistoryNotAvailable if there is no available content until\n given time or version\n \"\"\"\n if isinstance(time_or_version, HTTPDateTime):\n return self.file.get_version(time_or_version.to_timestamp())\n elif isinstance(time_or_version, int):\n return self.file.get_version(time_or_version)\n else:\n raise TypeError(\"Version time has to be type HTTPDateTime or GridFS version (int).\")\n\n\n def get_diff(self, start, end):\n \"\"\"\n @param start: start time or version to be diffed\n @type start: HTTPDateTime or int\n @param end: end time or version to be diffed\n @type end: HTTPDateTime or int\n @returns: either textual or binary diff of the file (if available).\n If contents are equal (document did not change within this\n time range) returns None.\n @rtype: unicode\n @raises: DocumentHistoryNotAvaliable if the storage doesn't provide\n enough data for computing the diff.\n \"\"\"\n content_start = self.get_version(start)\n content_end = self.get_version(end)\n if content_start == content_end:\n return None\n return content_start.diff_to(content_end)\n\n\n def available(self, httptime=None):\n if (not isinstance(httptime, HTTPDateTime)) and (httptime is not None):\n raise TypeError(\"Time of availability has to be type HTTPDateTime.\")\n # Pokud je httptime=None, pak se jedna o dostupnost v tomto okamziku\n raise NotImplementedError()\n\n\n def last_checked(self):\n \"\"\"\n Get information about the time of last check of this resource.\n\n @returns: time of last check or None if the resource was never checked\n (or the HTTP requests timed out)\n @rtype: HTTPDateTime or None\n \"\"\"\n return self.headers.last_checked(self.url)\n\n\n def __repr__(self):\n return \"<MonitoredResource(url='%s', uid='%s') at %s>\" % 
\\\n (self.url, self.uid, hex(id(self)))\n\n __str__ = __repr__\n\n \n\nclass Monitor(object):\n \"\"\"\n Monitor is main class representing web change monitor. It serves\n as factory for creating MonitoredResource objects.\n\n Usage:\n >>> from rrslib.web.changemonitor import Monitor\n >>> monitor = Monitor(user_id=\"rrs_university\")\n >>> resource = monitor.get(\"http://www.google.com\")\n >>> # if the page changed\n >>> if resource.check():\n >>> print res.get_diff(start='last', end='now')\n \"\"\"\n def __init__(self, user_id, db_host=\"localhost\", db_port=27017, db_name=\"webarchive\", http_proxy=None):\n \"\"\"\n Create a new monitor connected to MongoDB at *db_host:db_port* using\n database db_name.\n\n @param user_id: identification string of user/module who uses monitor.\n If user_id is given None, the monitor switches to\n `global-view` mode and all requests to storage don't\n care about >>who checked this resource<<. On the other\n hand, if user_id is given a string, the monitor switches\n to `user-view` mode and all operations are oriented\n to the user. Most of the reasonable use cases are\n using user_id, because a user/module almost everytime\n ask about >>what changed since I have been here for the\n last time<<, not >>what changed since somebody has been\n here for the last time<<...\n @type user_id: str or None\n @param db_host: (optional) hostname or IP address of the instance\n to connect to, or a mongodb URI, or a list of\n hostnames / mongodb URIs. If db_host` is an IPv6 literal\n it must be enclosed in '[' and ']' characters following\n the RFC2732 URL syntax (e.g. 
'[::1]' for localhost)\n @param db_port: (optional) port number on which to connect\n @type db_port: int\n @param db_name: name of database which is used to store information about\n monitored documents and their versions.\n @type db_name: str\n @param http_proxy: (FUTURE USE) proxy server where to send requests\n @type http_proxy: unknown\n \"\"\"\n if not isinstance(user_id, basestring) and user_id is not None:\n raise TypeError(\"User ID has to be type str or None.\")\n # save user id\n self._user_id = user_id\n # for future use\n if http_proxy is not None:\n raise NotImplementedError(\"HTTP proxy not supported yet.\")\n # initialize models\n self._init_models(db_host, db_port, db_name, user_id)\n\n\n def _init_models(self, host, port, db, uid):\n self._conn = Connection(host, port)\n self._storage = Storage(self._conn, uid, db)\n self._dbname = db\n self._dbport = port\n self._dbhost = host\n \n\n def get(self, url):\n \"\"\"\n Creates new MonitoredResource instance which represents document on\n *url*.\n \n @param url: URL of monitored resource\n @type url: str\n @returns: monitored resource object bound to URL *url*.\n @rtype: MonitoredResource\n \n Design pattern: factory method.\n \"\"\"\n # test the url validity\n parse_result = urlparse(url)\n if parse_result.netloc == '':\n raise ValueError(\"URL '%s' is not properly formatted: missing netloc.\" % url)\n if parse_result.scheme == '':\n raise ValueError(\"URL '%s' is not properly formatted: missing scheme.\" % url)\n # return monitored resource object\n return MonitoredResource(parse_result.geturl(), self._user_id, self._storage)\n\n\n def allow_large_documents(self):\n \"\"\"\n Allow large objects to be stored in the storage. Large document is\n defined as file larger than 4096KB. 
Tis constant is defined in this\n module named as LARGE_DOCUMENT_SIZE representing size of the file\n in kilobytes.\n \"\"\"\n try:\n # just delegate to storage model\n self._storage.allow_large_documents()\n except AttributeError:\n raise RuntimeError(\"Models arent initialized. Something went to hell...\")\n \n\n def check_uid(self):\n \"\"\"\n Check if user id given in constructor is a valid user id within\n the Monitor storage system. If the UID is occupied, returns False,\n True otherwise.\n\n If user_id is None, an exception UidError is raised.\n \n @returns: True if the UID is free\n @rtype: bool\n \"\"\"\n if self._user_id is None:\n raise UidError(\"Cannot check uid=None. Monitor is switched to global-view mode.\")\n return self._storage.check_uid()\n\n\n def check_multi(self, urls=[]):\n \"\"\"\n Check list of urls, start new thread for each one.\n @param urls:\n @type urls: list\n @returns: list of MonitoredResource objects, each with actual data\n @rtype: list<MonitoredResource>\n \"\"\"\n # TODO: zkontrolovat, jestli vsechny prvky v urls jsou validni URL adresy\n raise NotSupportedYet()\n\n\n def __repr__(self):\n return \"Monitor(conn=%s, dbname='%s', uid='%s')\" % \\\n (self._conn.connection, self._dbname, self._user_id)\n\n\n __str__ = __repr__\n\n\nif __name__ == \"__main__\":\n m = Monitor(user_id='rrs',db_port=27018) # testing on port 27018... 
\n # USE db_port=27017 for normal use, that's the default for mongodb\n# print m\n# print \"MONITOR: STORAGE: \",m._storage\n# print \"MONITOR: STORAGE: HEADERS: \",m._storage._headermeta\n# print \"MONITOR: STORAGE: GRIDFS: \",m._storage.filesystem \n \n #r = m.get(\"http://www.fit.vutbr.cz\")\n #r = m.get(\"http://www.google.com\")\n #r = m.get(\"http://cs.wikipedia.org/wiki/Hlavní_strana\")\n #r = m.get(\"http://en.wikipedia.org\") \n r = m.get(\"http://localhost/api.pdf\")\n\n print \"resource:\",r,\"\\n\"\n# print \"checked: \", r._checked\n print \"last version: \",r.get_last_version() # works \n# print \"last checked: \",r.last_checked(),\"\\n\" # works\n print \"check: \",r.check(),\"\\n\" \n# print \"checked: \", r._checked\n\n # use actual times when testing!!! \n # otherwise will get exception DocumentHistoryNotAvailable \n# print \"by time: 2013-01-27 14:00\",r.get_version(HTTPDateTime(2013,1,27,14,00)),\"\\n\"\n# print \"by time: 2013-01-30 22:00\",r.get_version(HTTPDateTime(2013,1,30,22,00)),\"\\n\"\n# print \"by time: 2013-01-21 21:38\",r.get_version(HTTPDateTime(2013,1,21,21,38)),\"\\n\"\n# print \"by time: 2013-01-18 23:34\",r.get_version(HTTPDateTime(2013,1,18,23,34)),\"\\n\"\n\n# print \"by version: version 0: \",r.get_version(0) # doesn't work for version 0\n# print \"by version: version 2: \",r.get_version(2)\n# print \"by version: version 4: \",r.get_version(4)\n# c = r.get_version(-1)\n# print \"by version: version -1: \", c, c.upload_date, c._differ\n# c = r.get_version(-2)\n# print \"by version: version -2: \", c, c.upload_date\n# c = r.get_version(-3)\n# print \"by version: version -3: \", c, c.upload_date\n# c = r.get_version(-4)\n# print \"by version: version -4: \", c, c.upload_date\n\n# d = r.get_diff(HTTPDateTime(2013,1,30,22,00),-1)\n# print \"DIFF\\n\"\n\n # EXAMPLE OF GETTING AND PRINTING A DIFF OF TWO VERSIONS\n# version1 = HTTPDateTime(2013,1,30,22,00)\n# version2 = -1\n# c1 = r.get_version(version1)\n# c2 = 
r.get_version(version2)\n# d = r.get_diff(version1,version2)\n# if d is None: \n# print \"Versions \",version1,\" and \",version2,\" are equal.\\n\"\n# elif issubclass(c1._differ,(diff.PlainTextDiff)):\n# print d\n# elif issubclass(c1._differ,(diff.HtmlDiff)):\n# try:\n# while(True):\n# chunk = d.next() # htmldiff outputs a generator object\n# print \"pos: \",chunk.position,\"\\nadded: \",chunk.added,\"\\nremoved: \",chunk.removed,\"\\n----\\n\"\n# except StopIteration: # got to last chunk generated by d\n# pass \n# except TypeError: # d is not a generator object\n# if d is not None: print d # htmldiff sometimes gives a basestring output...\n# elif issubclass(c1._differ,(diff.BinaryDiff)):\n# print \"used binary diff\"\n# # visualisation of BinaryDiff output/metadata comes here\n# print d['metainfo']\n# # d['diff'] # contains the actual compressed binary delta, not human-readable \n# else:\n# raise RuntimeError()\n\n # THIS IS ANOTHER WAY OF PRINTING THE DIFF OF TWO VERSIONS (BinaryDiff not considered here)\n# if isinstance(d, basestring): # text/plain\n# print \"diff -2,-1: \",d.encode('utf-8')\n# elif d is None: \n# print \"d is None\"#pass\n# else: # text/html, d should be generator object\n# try:\n# while(True):\n# chunk = d.next()\n# print \"diff -2,-1: \\n position: \",chunk.position,\"\\n added: \",chunk.added,\"\\n removed: \",chunk.removed\n# print \"----------\"\n# except (StopIteration, TypeError):\n# pass\n\n# print \"diff times: \",r.get_diff(HTTPDateTime(2013,1,21,22,00),HTTPDateTime(2013,1,21,21,38))\n# c = r.get_version(-1) # works\n# print c.tell(), c.length\n# print c, c.read()\n\n"
},
{
"alpha_fraction": 0.7127496004104614,
"alphanum_fraction": 0.7158218026161194,
"avg_line_length": 18.727272033691406,
"blob_id": "e41433d78196981c49674e6f4e05cafad2165409",
"content_id": "35ab3e8e998921464eca1231ef2b328d85092242",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 651,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 33,
"path": "/examples/example1.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nThis code should serve as an example of how to use deliverables2\nas a module\n\"\"\"\n\n\n# Import deliverables\nfrom deliverables.interface import *\n\n\n# Create interface object and initialize url\ndeliv = Deliverables()\n\ndeliv.parse(url = \"http://siconos.inrialpes.fr/\")\n\n# Lets get the RRSProject class instance\nproject = deliv.get_deliverables()\n\n# Check the output\nif not project:\n\t\n\tprint(\"Non deliverables found\")\n\t\nelse:\n\t\n\t# Lets print output RRS XML. I would like to note here that as the parsing\n\t# is already done ( by calling get_deliverables() ) it is not done again\n\tprint(\n\t\tdeliv.get_rrs_xml()\n\t)\n"
},
{
"alpha_fraction": 0.5721697211265564,
"alphanum_fraction": 0.5787410736083984,
"avg_line_length": 33.69599914550781,
"blob_id": "1908d43e509ff95a6b54a330b3bed7fc20d8fdc5",
"content_id": "a69234dcc1b4a40a1983c86cd399168044a01873",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8674,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 250,
"path": "/examples/rrs_page_change_monitoring/changemonitor-cli.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nCommand line interface to changemonitor module\n\"\"\"\n\n__author__ = \"Albert Mikó\"\n__email__ = \"[email protected]\"\n\nimport sys\nimport argparse\nfrom _http import HTTPDateTime\nfrom errors import *\nimport diff\nfrom changemonitor import Monitor\n\nclass TimeException():\n pass\n\ndef init_monitor(args):\n \"\"\"\n initialize Monitor object\n set uid, db name and port\n @return Monitor object\n \"\"\"\n return Monitor(user_id=args.uid, db_port=args.port, db_name=args.db) \n\ndef parse_time(timestr):\n \"\"\"\n parse time argument from command-line\n valid formats are:\n >>> YYYY-MM-DD # same as YYYY-MM-DD 23:59:59\n >>> YYYY-MM-DD HH:MM:SS\n\n @return HTTPDateTime object\n \"\"\"\n try:\n if len(timestr)==19: # date and time specified\n pass\n elif len(timestr)==10: # only date specified\n timestr = timestr + \" 23:59:59\" # searching up to end of day\n else:\n raise Exception()\n return HTTPDateTime().from_gridfs_upload_date(timestr)\n except Exception:\n print 'Invalid time expression specified, valid format is \"YYYY-MM-DD [HH:MM:SS]\"'\n raise TimeException() \n\ndef url_check(args,monitor):\n \"\"\"\n check functionality\n \"\"\"\n if args.url is not None:\n r = monitor.get(args.url)\n print \"----------\"\n print \"Checking \",args.url,\"\\nForced check: \",args.force\n print \"Changed since last check: \",r.check(force=args.force)\n elif args.list is not None:\n try:\n f_in = open(args.list)\n for u in f_in:\n if \"\\r\\n\" in u: # CRLF line ending\n i = -2\n else: # CR only or LF only line ending\n i = -1\n r = monitor.get(u[:i])\n print \"----------\"\n print \"Checking \",u[:i],\"\\nForced check: \",args.force\n print \"Changed since last check: \",r.check(force=args.force)\n except Exception:\n print \"Cannot open file\\n\"\n exit(10) \n else:\n print \"Bad parameters, no url specified\"\n exit(2) \n\ndef url_diff(args,monitor):\n \"\"\"\n diff of two versions of the same url\n 
\"\"\"\n if args.url is not None:\n r = monitor.get(args.url)\n try:\n if (args.v is not None) and (len(args.v)==2): # versions specified\n c1 = r.get_version(args.v[0])\n d = r.get_diff(args.v[0],args.v[1])\n elif (args.t is not None) and (len(args.t)==2): # times specified\n try:\n t0 = parse_time(args.t[0])\n t1 = parse_time(args.t[1])\n c1 = r.get_version(t0)\n d = r.get_diff(t0,t1)\n except TimeException:\n exit(3) \n else:\n print \"Bad parameters, version/time specifiers not correct\"\n exit(2)\n\n # printing diff\n if d is None: \n print >>sys.stderr, \"Versions \",version1,\" and \",version2,\" are equal.\\n\"\n elif issubclass(c1._differ,(diff.PlainTextDiff)):\n print d\n elif issubclass(c1._differ,(diff.HtmlDiff)):\n try:\n while(True):\n chunk = d.next() # htmldiff outputs a generator object\n print \"pos: \",chunk.position,\"\\nadded: \",chunk.added,\"\\nremoved: \",chunk.removed,\"\\n----\\n\"\n except StopIteration: # got to last chunk generated by d\n pass \n except TypeError: # d is not a generator object\n if d is not None: print d # htmldiff sometimes gives a basestring output...\n elif issubclass(c1._differ,(diff.BinaryDiff)):\n print \"used binary diff\"\n # visualisation of BinaryDiff output/metadata comes here\n print d['metainfo']\n # d['diff'] # contains the actual compressed binary delta, not human-readable \n else:\n raise RuntimeError()\n \n except DocumentHistoryNotAvaliable:\n print \"Document history not available for some or both specified times/versions\"\n exit(3)\n else:\n print \"Bad parameters, no url specified\"\n exit(2)\n\ndef url_print(args,monitor):\n \"\"\"\n print contents of url, version in db or given by time\n \"\"\"\n if args.url is not None:\n r = monitor.get(args.url)\n if args.v is not None: # version specified\n try:\n c = r.get_version(int(args.v)) \n except DocumentHistoryNotAvaliable:\n print \"Version \",args.v,\" of document not available in the database\"\n exit(3)\n elif args.t is not None: # time 
specified\n try:\n t = parse_time(args.t)\n except TimeException:\n exit(3)\n c = r.get_version(t)\n else: # printing actual version\n c = r.get_last_version()\n print c.read()\n else:\n print \"Bad parameters, no url specified\"\n exit(2)\n\ndef url_available(args,monitor):\n \"\"\"\n check availability of url at given time\n @return True/False\n \"\"\"\n if args.url is not None:\n r = monitor.get(args.url)\n if args.t is not None: \n try:\n t = parse_time(args.t)\n except TimeException:\n exit(3)\n else:\n t = HTTPDateTime().now()\n print \"Document at \",args.url\n print \" was available in time: \",t,\" -- \",r.available(t)\n else:\n print \"Error: url not specified\"\n exit(2)\n\ndef parse_args():\n \"\"\"\n parse command line arguments\n @return argparse.Namespace object\n \"\"\"\n parser = argparse.ArgumentParser(\n description=\"RRS changemonitor commandline interface\",\n add_help=True\n )\n # specify global values\n parser.add_argument(\"--uid\",default=\"rrs\",help=\"user id\")\n parser.add_argument(\"--db\",default=\"webarchive\",help=\"name of database\")\n parser.add_argument(\"--port\",default=27017,type=int,\n help=\"port of database server\")\n\n # specify url(s) to perform action on\n url_list = parser.add_mutually_exclusive_group(required=True)\n url_list.add_argument(\"--url\",help=\"specify a single url\")\n url_list.add_argument(\"--list\",help=\"specify a file with urls to check\")\n\n #subparsers for individual actions\n subparsers = parser.add_subparsers(title=\"subcommands\",dest=\"action\")\n\n # check if version in database is up to date, and actualize if necessary\n parser_check = subparsers.add_parser(\"check\",\n help=\"check documents at specified url(s)\")\n parser_check.add_argument(\"--force\",action=\"store_true\",\n help=\"force download of content\")\n parser_check.set_defaults(func=url_check)\n\n # find differences between versions A and B of the same document\n parser_diff = subparsers.add_parser(\"diff\",\n help=\"diff of 
versions of document at given url\")\n parser_diff_v_or_t = parser_diff.add_mutually_exclusive_group(required=True)\n parser_diff_v_or_t.add_argument(\"-v\",nargs=2,type=int,\n help=\"version by version numbers\")\n parser_diff_v_or_t.add_argument(\"-t\",nargs=2,help=\"version by timestamps\")\n parser_diff.set_defaults(func=url_diff)\n\n # print out the contents of document saved in database\n parser_print = subparsers.add_parser(\"print\",\n help=\"output document at url from database\")\n parser_print_v_or_t = parser_print.add_mutually_exclusive_group()\n parser_print_v_or_t.add_argument(\"-v\",help=\"version by version numbers\")\n parser_print_v_or_t.add_argument(\"-t\",help=\"version by timestamps\")\n parser_print.set_defaults(func=url_print)\n\n # check if document at url was available at the specified time\n parser_available = subparsers.add_parser(\"available\",\n help=\"check if document at url was available at given time\")\n parser_available.add_argument(\"-t\",help=\"specify time\") \n parser_available.set_defaults(func=url_available)\n\n return parser.parse_args()\n\ndef main():\n \"\"\"\n parses command line\n initialises Monitor, prints debug information for Monitor\n calls function based on cmd arguments(check,diff,print,available) \n \"\"\"\n print sys.argv\n\n # process cmd arguments\n args = parse_args()\n\n # init Monitor\n monitor = init_monitor(args) \n print \"Monitor initialised\\n\", monitor \n\n # do something useful based on cmd arguments\n args.func(args,monitor)\n\nif __name__ == \"__main__\":\n main()\n\n#EOF\n"
},
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.5454545617103577,
"avg_line_length": 32,
"blob_id": "2891d691c61a8a4e1724b2b3cb61d40908a58557",
"content_id": "872e543ebc118e1ff526ffad7b35cba29bd89fd0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 33,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 1,
"path": "/deliverables/__init__.py",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "__all__ = ['interface','thread']\n"
},
{
"alpha_fraction": 0.7609174847602844,
"alphanum_fraction": 0.763123095035553,
"avg_line_length": 32.80596923828125,
"blob_id": "e134fbdb51e10da550b2d0a674da9ac92fef85a4",
"content_id": "252d46d6de3a9e5b416f4cb1f138f011c25ac677",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 2450,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 67,
"path": "/README.rst",
"repo_name": "KNOT-GIT/mDeliverables",
"src_encoding": "UTF-8",
"text": "KNOT ÚPGM FIT VUT\nProjekt: Deliverables2\n\nImplementováno autory:\n\t- Jan Skácel <[email protected]>\n\t- Pavel Novotný\n\t- Stanislav Heller\n\t- Lukáš Macko\n\nÚvod:\n*****\nTento dokument je určen pro uživatele skriptu či jeho částí, který je určen k\nextrakci výstupy ze stránek různých projektů. Měl by poskytnout\ndostatek informací osobám, kteří by snad chtěli tento skript upravit, opravit\nči předělat.\nPro potřeby přímé přenositelnosti kódu jsou komentáře uvnitř kódu psány v\nanglickém jazyce. Tento dokument je psán v češtině, kvůli zachování jazykové\nsourodosti s KNOT wiki.\n\nPožadavky:\n**********\nPython verze 2.x\nPřítomnost nainstalované knihovny RRS\n\nPoužití jako balík skrz rozhraní:\n*********************************\n\n- Potřebujete mít balík deliverables ve Vaší python_path\n- Nyní můžete importovat deliverables modul příkazem:\n\n\tfrom deliverables import *\n\n- Nyní můžete utvořit objekt rozhraní (zde pro názornost inicializováno s \npočáteční url \"http://siconos.inrialpes.fr/\"):\n\ndeliv = deliverables.Deliverables_interface(url = \"http://siconos.inrialpes.fr/\")\n\n- Nyní máte 2 možnosti, jak zahájit vyhledávání a stahování výstupů.\n\n\t1. Pokud chcete kořenový objekt RRSProject můžete použít:\n\n\t\tproject = deliv.get_deliverables()\n\n\t2. Pokud chcete řetězec s výstupním RRS XML můžete použít:\n\t\t\n\t\trss_xml = deliv.get_rrs_xml()\n\n- První volání metod get_rrs_xml() nebo get_deliverables() spustí vyhledávání a\nextrakci výstupů. Další volání již pouze vrátí dříve extrahovaná data. Metoda\nget_rrs_xml() tak lze používat na mnoha místech bez výrazného zpomalení. Metoda\nget_deliverables() v případě neúspěchu extrakce vrátí None. Metoda get_rrs_xml()\nv takovém případě vrátí prázdný řetězec.\n\n\n\nAdresářová struktura:\n*********************\n./ - Hlavní adresář projektu\n./bin - Zde je uložen skript example.py, který by měl dát vodítko pro\n\timplementaci skriptů používajících modul deliverables. 
Dále je\n\tzde uložen doc.sh, což je skript v jazyce Bash, který automaticky\n\tvytvoří dokumentaci ve spolupráci s programem epydoc.\n./bin/deliverables - Zde je uložen modul deliverables s\n\thlavním programem\n./data - Zde se nachází příklady výsledných xml dokumentů a soubor links, který\n\tobsahuje řádky oddělené url některých vybraných projektů.\n./docs - Automaticky vygenerovaná dokumentace z programu epydoc\n\n\n"
}
] | 14 |
mozhemeng/py-simple-alg | https://github.com/mozhemeng/py-simple-alg | f826dd9b29a8bba333b9fda6a4e0097edb3acf5e | 9c92af9331414a64e8b36b40d5a56cd6fb226c57 | c9fc6836617c3ffb50b278a7ccd6131a8fb49302 | refs/heads/master | 2020-04-12T07:44:53.664076 | 2018-12-19T02:32:50 | 2018-12-19T02:32:50 | 162,369,434 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5972222089767456,
"alphanum_fraction": 0.6076388955116272,
"avg_line_length": 27.799999237060547,
"blob_id": "03b39748012f80f98a3dbdab3b8c56d5339aaf60",
"content_id": "1babd6ae3bdfa712e33480ae749beb69f5758ff9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 288,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 10,
"path": "/quick_random_sort.py",
"repo_name": "mozhemeng/py-simple-alg",
"src_encoding": "UTF-8",
"text": "import random\n\n\ndef random_fast_sort(arr):\n if len(arr) < 2:\n return arr\n num = arr.pop(random.randint(0, len(arr)-1))\n smaller = [i for i in arr if i <= num]\n bigger = [i for i in arr if i > num]\n return random_fast_sort(smaller) + [num] + random_fast_sort(bigger)\n"
},
{
"alpha_fraction": 0.5286343693733215,
"alphanum_fraction": 0.5462555289268494,
"avg_line_length": 31.428571701049805,
"blob_id": "be2a44ad8e9595e6249acb584cf9e1a1ae4c96f5",
"content_id": "1bd7c575fe24305cdd826569d2385f26813be5f6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 227,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 7,
"path": "/quick_sort.py",
"repo_name": "mozhemeng/py-simple-alg",
"src_encoding": "UTF-8",
"text": "def fast_sort(arr):\n if len(arr) < 2:\n return arr\n num = arr[0]\n smaller = [i for i in arr[1:] if i <= num]\n bigger = [i for i in arr[1:] if i > num]\n return fast_sort(smaller) + [num] + fast_sort(bigger)\n"
},
{
"alpha_fraction": 0.37318840622901917,
"alphanum_fraction": 0.3913043439388275,
"avg_line_length": 22,
"blob_id": "21c5365d3422a7b8c34d38405c7ab3fc99ec08d2",
"content_id": "335f19d3ca775a5ac85afdc8346a7582856e564b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 276,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 12,
"path": "/binary_search.py",
"repo_name": "mozhemeng/py-simple-alg",
"src_encoding": "UTF-8",
"text": "def binary_search(l, v):\n low = 0\n high = len(l) - 1\n while low <= high:\n mid = (high + low) // 2\n if l[mid] == v:\n return mid\n if l[mid] < v:\n low = mid + 1\n if l[mid] > v:\n high = mid - 1\n return None\n"
},
{
"alpha_fraction": 0.6783216595649719,
"alphanum_fraction": 0.7132866978645325,
"avg_line_length": 14.88888931274414,
"blob_id": "64d23a0cfcf7a3ce6415f613c408cc9d0f11cd1c",
"content_id": "f68f4d8daed135a8aefc85f8306e51a5ec4b1267",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 213,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 9,
"path": "/README.md",
"repo_name": "mozhemeng/py-simple-alg",
"src_encoding": "UTF-8",
"text": "# 简单算法的Python实现\n\n## 目前包括:\n\n1. 二分法查找 binary_search.py\n2. 选择排序 selection_sort.py\n3. 快速排序 quick_sort.py\n4. 快速随机排序 quick_random_sort.py\n5. 持续更新...\n"
}
] | 4 |
JesperBry/Zumo-robot---TDT4113 | https://github.com/JesperBry/Zumo-robot---TDT4113 | 8ee1cff8335a3cb88c8d73abcb8ec96906a6e20d | 95fbbe29ec0814b34dd0f5c25290016e240c219e | bd653c04f8ec48213b427ebc3dc279c426c2f3be | refs/heads/master | 2021-08-14T10:57:25.576608 | 2017-11-15T13:39:50 | 2017-11-15T13:39:50 | 109,676,033 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6136962175369263,
"alphanum_fraction": 0.616330087184906,
"avg_line_length": 41.11111068725586,
"blob_id": "dca8307b555a1a31773d60772a84af2857fe526e",
"content_id": "c2b49d57c64c109e62db10d18453fae7602ecb6a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1140,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 27,
"path": "/Behavior.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "\nclass Behavior:\n def __init__(self, bbcon):\n self.bbcon = bbcon # Settes på init eller egen initialize metode\n self.senobs = [] # Skal fylles av subklasse\n self.motobs = [] # Skal fylles av subklasse\n self.active_flag = False # Oppdateres hvert tick\n self.halt_request = False # Oppdateres hvert tick\n self.priority = 1 # Skal fylles av subklasse\n self.match_degree = 0 # Oppdateres hvert tick\n self.weight = 0 # Oppdateres hvert tick\n self.motor_recommendations = None\n\n def consider_activation(self): # Skal implementeres av subklasse\n return\n\n def consider_deactivation(self):# Skal implementeres av subklasse\n return\n\n def sense_and_act(self): # Skal implementeres av subklasse\n return\n\n def update(self): # Skal kalles hvert tick, eneste metode som skal kalles utenfra\n self.consider_deactivation() if self.active_flag else self.consider_deactivation()\n\n self.sense_and_act()\n\n self.weight = self.priority * self.match_degree\n\n"
},
{
"alpha_fraction": 0.6111111044883728,
"alphanum_fraction": 0.75,
"avg_line_length": 17,
"blob_id": "a28ce52d9e0326fb67e40e366451a13edee67429",
"content_id": "bf01418814fd739036589172ebb580bc2dbffacd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 72,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 4,
"path": "/README.md",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# ZUMO robot-TDT4113\n\nPLAB-2 (TDT-4113) Project 6:\nBehavior-Based Robot\n"
},
{
"alpha_fraction": 0.36823105812072754,
"alphanum_fraction": 0.4476534426212311,
"avg_line_length": 16.3125,
"blob_id": "f6e98a5d786243d0e4645e785581acfce564f700",
"content_id": "1b8b7947288541c740fb822971fdb196be8e508f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 280,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 16,
"path": "/config.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nConfig = {\n 'minDist': 10,\n 'J_turn': [[-0.5, 1]],\n 'collisionPri': 2,\n 'forward': [[0.5, 0.5]],\n 'stopSignPri': 4, #Må være høyest\n 'stop': [[0, 0]],\n 'redThr': 0.95,\n 'grThr': 0.8,\n 'motorDuration': 0.5,\n 'goPri': 1\n\n\n}\n"
},
{
"alpha_fraction": 0.6075156331062317,
"alphanum_fraction": 0.6096033453941345,
"avg_line_length": 19.782608032226562,
"blob_id": "c5b6c3def9e19e69c6cc2eeb5b4e77c106293437",
"content_id": "150121464d8c3a46e0a3c99303142d098beaf7cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 479,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 23,
"path": "/Sensob.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom camera import *\nfrom ultrasonic import *\nfrom irproximity_sensor import *\nfrom reflectance_sensors import *\n\nclass Sensob:\n\n def __init__(self):\n self.sensors = []\n self.value = None\n\n def update(self):\n # Fetch relevant sensor value(s) and convert them into one value\n return\n\n def get_value(self):\n return self.value\n\n def reset(self):\n for sensor in self.sensors:\n sensor.reset()\n\n"
},
{
"alpha_fraction": 0.6097561120986938,
"alphanum_fraction": 0.6141906976699829,
"avg_line_length": 21.600000381469727,
"blob_id": "abaac20b0348e51462d4d86e24545f28c63385ab",
"content_id": "38f5d5eae9d98ef11162e6234b14a9cd99996ebf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 451,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 20,
"path": "/go_behavior.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom Behavior import *\nfrom config import Config\n\nclass Go(Behavior):\n\n def __init__(self):\n super(Go, self).__init__(None)\n self.priority = Config['goPri']\n\n def consider_activation(self):\n self.active_flag = True\n\n def consider_deactivation(self):\n self.active_flag = True\n\n def sense_and_act(self):\n self.match_degree = 1\n self.motor_recommendations = Config['forward']"
},
{
"alpha_fraction": 0.6243194341659546,
"alphanum_fraction": 0.6279491782188416,
"avg_line_length": 26.600000381469727,
"blob_id": "90ba359db29540a85268e31b90d2cd373da49a87",
"content_id": "36b25c08802dad3adf73a5d5252c5517701b9fb7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 551,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 20,
"path": "/Arbitrator.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom Behavior import Behavior\n\nclass Arbitrator:\n\n def __init__(self):\n pass\n\n def choose_action(self, behaviors):\n winning_behavior = None\n max_weight = -1\n\n # Skal velge en \"vinner behavior\" og sende dens motor-recommendation til BBCON\n for behavior in behaviors:\n if behavior.weight > max_weight:\n max_weight = behavior.weight\n winning_behavior = behavior\n\n return winning_behavior.motor_recommendations, winning_behavior.halt_request"
},
{
"alpha_fraction": 0.6335078477859497,
"alphanum_fraction": 0.6387434601783752,
"avg_line_length": 24.53333282470703,
"blob_id": "9ddd9801151f137d783dcbe11f52dad09a0f3ed5",
"content_id": "7f7b92ea5a905d76fcfd3c235efc5bbf5e4fccfc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 382,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 15,
"path": "/IRproximity.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom Sensob import Sensob\nfrom irproximity_sensor import IRProximitySensor\n\nclass IRProximity_sensob(Sensob):\n\n def __init__(self):\n super(IRProximity_sensob, self).__init__()\n self.sensors = [IRProximitySensor()]\n\n def update(self):\n self.value = self.sensors[0].update()\n print(\"IR\", self.value)\n return self.value"
},
{
"alpha_fraction": 0.6074766516685486,
"alphanum_fraction": 0.6121495366096497,
"avg_line_length": 24.176469802856445,
"blob_id": "872cd3e23ad32e1844e85d8331c76235089f92bd",
"content_id": "a3f9919b18b4b3f7bf90f5ddac1c5f910370c69e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 428,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 17,
"path": "/ultrasonic_sensob.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom ultrasonic import Ultrasonic\nfrom Sensob import Sensob\n\nclass Ultrasonic_sensob(Sensob):\n\n def __init__(self):\n super(Ultrasonic_sensob, self).__init__()\n self.sensors = [Ultrasonic()]\n\n def update(self):\n for sensor in self.sensors:\n sensor.update()\n self.value = self.sensors[0].value\n print(\"Ultrasonic\", self.value)\n return self.value\n"
},
{
"alpha_fraction": 0.6088871359825134,
"alphanum_fraction": 0.6180945038795471,
"avg_line_length": 28.399999618530273,
"blob_id": "bfbc5cff12b63e03ef6c6eb99062ba7de8d714d7",
"content_id": "ffcb5181cd62bb5db6efa5684344031cd7394419",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2502,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 85,
"path": "/BBCON.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom Motob import *\nfrom Arbitrator import *\nfrom Sensob import *\nfrom go_behavior import Go\nfrom avoid_collisions import Avoid_collisions\nfrom stop_sign import StopSign\nfrom camera_sensob import Camera_sensob\nfrom IRproximity import IRProximity_sensob\nfrom ultrasonic_sensob import Ultrasonic_sensob\nfrom zumo_button import ZumoButton\nfrom reflectance import Reflectance\n\n\n\nclass BBCON():\n\n def __init__(self):\n self.sensobs = [Ultrasonic_sensob(), Reflectance(), Camera_sensob()]\n self.behaviors = [Go(), Avoid_collisions(self.sensobs[0], self.sensobs[1]), StopSign(self.sensobs[2])]\n self.active_behaviors = []\n self.motobs = [Motob([Motors()])]\n self.arbitrator = Arbitrator()\n\n def add_behavior(self, behavior):\n if behavior not in self.behaviors:\n self.behaviors.append(behavior)\n\n def add_sensob(self, sensob):\n if sensob not in self.sensobs:\n self.sensobs.append(sensob)\n\n def activate_behavior(self, behavior):\n if behavior not in self.active_behaviors:\n self.active_behaviors.append(behavior)\n\n def deactivate_behavior(self, behavior):\n if behavior in self.activate_behavior:\n self.active_behaviors.remove(behavior)\n\n def run_one_timestep(self):\n\n # Update all sensobs\n for sensob in self.sensobs:\n sensob.update()\n\n # Update all behaviors\n for behavior in self.behaviors:\n behavior.update()\n if behavior.active_flag:\n self.activate_behavior(behavior)\n else:\n self.deactivate_behavior(behavior)\n\n # Invoke the arbitrator by calling arbitrator.choose action\n recommendations, stop = self.arbitrator.choose_action(self.active_behaviors)\n for i in range(len(self.motobs)):\n self.motobs[i].update(recommendations[i])\n\n # Wait\n #time.sleep(0.5)\n\n # Reset the sensobs\n for sensob in self.sensobs:\n sensob.reset()\n\n\n #ZumoButton().wait_for_press() # er nødt til å ha med denne tydeligvis\n #m = Motors()\n #i = 10\n #while i > 0:\n # m.set_value([0.2,-0.2], 1) # (speed,duration)\n # 
#m.set_value([-0.2, 0.2], 1)\n # m.forward(0.3, 1)\n # i -= 1\n\n\n\n\n\n def controller(self):\n ZumoButton().wait_for_press() # er nødt til å ha med denne tydeligvis\n while True:\n self.run_one_timestep()"
},
{
"alpha_fraction": 0.6149802803993225,
"alphanum_fraction": 0.6202365159988403,
"avg_line_length": 26.10714340209961,
"blob_id": "9b00171de1a96bd7b842cfc976270483625d5495",
"content_id": "63b8dcf079884f31f7cb7932977b866a092021a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 761,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 28,
"path": "/avoid_collisions.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "import Behavior\nfrom config import Config\n\n\nclass Avoid_collisions(Behavior.Behavior):\n def __init__(self, distance, ir):\n super(Avoid_collisions, self).__init__(None)\n self.sensobs = [distance, ir]\n\n self.priority = Config['collisionPri']\n\n def consider_deactivation(self):\n self.active_flag = True\n\n def consider_activation(self):\n self.active_flag = True\n\n def sense_and_act(self):\n dist = self.sensobs[0].get_value()\n\n reflect = self.sensobs[1].get_value()\n\n self.match_degree = 0\n self.motor_recommendations = None\n\n if reflect > Config['reflectThr'] or dist < Config['minDist']:\n self.match_degree = 1\n self.motor_recommendations = Config['J_turn']\n\n\n"
},
{
"alpha_fraction": 0.6555555462837219,
"alphanum_fraction": 0.6583333611488342,
"avg_line_length": 26.769229888916016,
"blob_id": "dbb2b35ad6576bfd1b178545cadfff6ccff14302",
"content_id": "33efe38be5d9a67b87a9cfe627ec7a03e4ab616b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 360,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 13,
"path": "/reflectance.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "from Sensob import Sensob\nfrom reflectance_sensors import ReflectanceSensors\n\nclass Reflectance(Sensob):\n\n def __init__(self):\n super(Reflectance, self).__init__()\n self.sensors = [ReflectanceSensors()]\n\n def update(self):\n self.value = sum(self.sensors[0].update())\n print(\"Reflectance\", self.value)\n return self.value"
},
{
"alpha_fraction": 0.5666280388832092,
"alphanum_fraction": 0.5747392773628235,
"avg_line_length": 23.600000381469727,
"blob_id": "8032a5ab8dbf8dab7c05c33b18f176981d62cfe5",
"content_id": "0c8475d7df04fb23d9b30292192679a7acbe6d07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 863,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 35,
"path": "/stop_sign.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "import camera_sensob\nfrom Behavior import Behavior\nfrom config import Config\n\n\nclass StopSign(Behavior):\n def __init__(self, camera):\n super(StopSign, self).__init__(None)\n self.sensobs = [camera]\n\n self.priority = Config['stopSignPri']\n\n self.stopped = False\n\n def consider_deactivation(self):\n self.active_flag = True\n\n def consider_activation(self):\n self.active_flag = True\n\n def sense_and_act(self):\n rgb = self.sensobs[0].value\n\n self.match_degree = 0\n self.motor_recommendations = Config['stop']\n\n if self.stopped:\n self.match_degree = 1\n if rgb[1] > Config['grThr']:\n self.match_degree = 0\n self.stopped = False\n\n elif rgb[0] > Config['redThr']:\n self.stopped = True\n self.match_degree = 1\n\n\n"
},
{
"alpha_fraction": 0.44074568152427673,
"alphanum_fraction": 0.4740346074104309,
"avg_line_length": 22.46875,
"blob_id": "5440cf8cf8a3d161bb198f092a6644386c94c348",
"content_id": "5f58b2fb4e1f6656cfb69bf65f6d80d4eb9c1703",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 751,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 32,
"path": "/camera_sensob.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom Sensob import Sensob\nfrom camera import Camera\n\nclass Camera_sensob(Sensob):\n\n def __init__(self):\n super(Camera_sensob, self).__init__()\n self.sensors = [Camera()]\n\n def rgb(self, img):\n rgb = [0, 0, 0]\n\n for x in range(40, 80):\n for y in range(40, 50):\n band = img.getpixel((x, y))\n rgb[0] += band[0]\n rgb[1] += band[1]\n rgb[2] += band[2]\n\n tot = sum(rgb)\n rgb[0] = rgb[0] / tot\n rgb[1] = rgb[1] / tot\n rgb[2] = rgb[2] / tot\n\n return rgb\n\n def update(self):\n self.value = self.rgb(self.sensors[0].update())\n print(\"Camera\", self.value)\n return self.value\n"
},
{
"alpha_fraction": 0.695652186870575,
"alphanum_fraction": 0.695652186870575,
"avg_line_length": 14.666666984558105,
"blob_id": "07bce1c3ff894b3e8dff35808348840fa3f9206d",
"content_id": "7e1a5b6776300a3b8edbd4cce88790913926a868",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 46,
"license_type": "no_license",
"max_line_length": 19,
"num_lines": 3,
"path": "/test.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "from BBCON import *\nb = BBCON()\nb.controller()"
},
{
"alpha_fraction": 0.6101364493370056,
"alphanum_fraction": 0.6101364493370056,
"avg_line_length": 27.5,
"blob_id": "8c7d05b61aed2176d200b151af3a73f7d709480c",
"content_id": "37fde4168a7d800a95f3f8a8d7f13c16d4492ebc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 513,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 18,
"path": "/Motob.py",
"repo_name": "JesperBry/Zumo-robot---TDT4113",
"src_encoding": "UTF-8",
"text": "from motors import Motors\nfrom config import Config\n\nclass Motob:\n def __init__(self, motors):\n self.motors = motors\n self.values = []\n\n def update(self, values):\n self.values = values\n print(\"Motob value\", self.values)\n self.operationalize()\n\n def operationalize(self):\n m = Motors()\n m.set_value(self.values, Config['motorDuration'])\n #for i in range(len(self.motors)):\n # self.motors[i].set_value(self.values, Config['motorDuration'])\n"
}
] | 15 |
scalableparallelism/celldata_project1 | https://github.com/scalableparallelism/celldata_project1 | b9dbd3eaf986d3ae824bc25a5c43c22562c5e204 | 9975a07052807e6cecb86867199599e16c388556 | d305fd2511308dfabe226604fe8c11a1fc340cd4 | refs/heads/master | 2021-09-27T04:26:29.718251 | 2018-11-06T03:41:36 | 2018-11-06T03:41:36 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5784768462181091,
"alphanum_fraction": 0.5877483487129211,
"avg_line_length": 40.3698616027832,
"blob_id": "278610e925abcebb5774bfaf20e38bfbb9f5d2f2",
"content_id": "872042b439055c54804c5a27f30c1f09529b6dac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3020,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 73,
"path": "/sortbygenes2.py",
"repo_name": "scalableparallelism/celldata_project1",
"src_encoding": "UTF-8",
"text": "import sys\nimport csv\nimport re\n\n\ndef reversecomplement(seq):\n seq = seq.upper()\n seq = seq.replace('A', 't')\n seq = seq.replace('T', 'a')\n seq = seq.replace('C', 'g')\n seq = seq.replace('G', 'c')\n seq = seq.upper()\n seq = seq[::-1]\n return seq\n\n\n# takes a fasta followed by a bed file followed by desired output file as inputs.\ninfasta = open(sys.argv[1], 'r')\ninbed = sys.argv[2]\noutfile = sys.argv[3]\n\ngenedict = {}\n\n\nfor line in infasta:\n if re.match('^>', line):\n gene = line.replace('>', '').rstrip()\n if gene not in genedict: # check if it's a new gene\n genedict[gene] = [] # initializes an empty gene info list\n else:\n if genedict[gene]: # if gene code is already started, adds next part of sequence\n genedict[gene][0].append(line.rstrip())\n else: # if gene code is empty, starts the gene code.\n genedict[gene] = [[line.rstrip()]]\n # appends line to current genecode reading if it's not a header line.\n# genedict is now an entire dictionary of gene lists, currently only with their constituent codes as first entries.\ninfasta.close()\n\nwith open(inbed, 'rt') as tsvin:\n bedinfo = csv.reader(tsvin, delimiter='\\t')\n for row in bedinfo:\n start = row[1]\n stop = row[2]\n gene = row[3]\n if len(genedict[gene]) == 1: # checks if the gene has any entries other than its code\n chr = row[0]\n strand = row[5]\n genedict[gene].append(chr) # adds one-off info to the gene if it's the first encounter\n genedict[gene].append(strand)\n genedict[gene].append(start) # adds start and stop pairs for every entry in the gene\n genedict[gene].append(stop)\n# genedict is now a dictionary of genes, with code, chr, strand, and then all start-stop pairs.\ntsvin.close()\n\nwith open(outfile, 'w') as out:\n for gene in genedict:\n sequence = genedict[gene][0]\n for i in range(0, (len(genedict[gene][3:])-2), 2): # checks all starts and stops for overlap\n stop = genedict[gene][i+4]\n start = genedict[gene][i+5]\n if stop == start:\n sequence[i//2] = 
sequence[i//2][:-1] # clips overlap nucleotide\n genedict[gene][i+4] = str(int(stop) - 1) # corrects read range for future algorithms\n if genedict[gene][2] == \"+\": # if positive strand, forward-splices exons\n genedict[gene][0] = \"\".join(sequence)\n else: # if negative strand, reverse-splices exons and inverts sequence.\n sequence = \"\".join(list(reversed(sequence)))\n sequence = reversecomplement(sequence)\n genedict[gene][0] = \"\".join(sequence)\n geneinfo = '\\t'.join(genedict[gene][1:]) # writes chr strand starts and stops as TSV\n out.write('>' + gene + '\\n')\n out.write(genedict[gene][0] + '\\n') # writes entire genetic code to next line\nout.close()\n"
},
{
"alpha_fraction": 0.6736353039741516,
"alphanum_fraction": 0.6980255246162415,
"avg_line_length": 51.8125,
"blob_id": "8d3ebf31f81aa8d59e2746e4d79581ef98367caf",
"content_id": "03c02bd0087272457568c0ca8c6d3bfebad5526b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "R",
"length_bytes": 861,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 16,
"path": "/count_clusters.R",
"repo_name": "scalableparallelism/celldata_project1",
"src_encoding": "UTF-8",
"text": "# our working directory is /net/shendure/vol10/projects/cell_clustering\r\n# files are in /net/shendure/vol1/home/vagar/projects/forSereno/ [files], sym linked to wd/data/\r\nlibrary(data.table)\r\n\r\npathcsv <- \"data/df_cell_cluster_1.csv\"\r\n\r\nDT <- data.table(read.csv(file = pathcsv, header = TRUE, colClasses = c(\"NULL\",\"character\",\"NULL\")))\r\n# returns one line data table of cell types\r\ncounts <- DT[,.N,by = \"cell_name\"] \r\nwrite.csv(counts, file = \"cluster_counts.csv\", quote = FALSE) ## quote = false to take out r weirdness\r\n\r\npdf(\"cluster_plot.pdf\",width = 20, height = 10)\r\npar(mar = c(15, 4.1, 4.1, 2.1)) ## sets bottom margin to larger than default to hold data\r\nbarplot(table(DT$cell_name), las = 2) ## las=2 for vertical labels\r\nmtext(text=\"Cell Cluster Counts\", side=1, line=12) ## line argument puts the main text somewhere below the labels\r\ndev.off()\r\n"
},
{
"alpha_fraction": 0.6896551847457886,
"alphanum_fraction": 0.7048917412757874,
"avg_line_length": 60.45000076293945,
"blob_id": "c0f55d7600124b9e0ffe97b66869f8faa7c8dd7e",
"content_id": "40b066e61d46a47f584667185b75590fe7dc2d59",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "R",
"length_bytes": 1247,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 20,
"path": "/count_subclusters.R",
"repo_name": "scalableparallelism/celldata_project1",
"src_encoding": "UTF-8",
"text": "# takes input of a csv of cell clustering data in a folder named \"data\"\r\n# gives output of a TSV txt file of subcluster frequency and a histogram of such data.\r\nlibrary(plyr)\r\n\r\npathcsv <- \"data/df_cell_cluster_1.csv\"\r\n\r\ndf <- data.frame(read.csv(file = pathcsv, header = TRUE, colClasses = c(\"NULL\",\"character\",\"character\")))\r\n# returns two line data frame of cell types and subclusters.\r\ncounts <- ddply(df, .(df$cell_name, df$sub_Cluster), nrow)\r\nnames(counts) <- c(\"cell_name\", \"sub_Cluster\", \"count\") ##makes the counts prettier.\r\nwrite.table(counts, file = \"subcluster_counts.txt\", quote = FALSE, sep = \"\\t\", row.names = FALSE)\r\n# this gives us a table of cluster, subcluster, and frequency in a TSV table.\r\n\r\ndf$overcluster <- paste(df$cell_name,df$sub_Cluster, sep = \"_\") ## sep=\"_\" inserts an underscore to the column.\r\n# makes a new column called \"overcluster\" with the full cluster and sub-cluster\r\npdf(\"subcluster_plot.pdf\",width = 200, height = 12)\r\npar(mar = c(18, 4.1, 4.1, 2.1)) ## sets bottom margin to larger than default to hold data\r\nbarplot(table(df$overcluster), las = 2) ## las=2 for vertical labels\r\nmtext(text=\"Sub Cluster Counts\", side=1, line=15) ## line argument puts the main text somewhere below the labels\r\ndev.off()"
},
{
"alpha_fraction": 0.6237188577651978,
"alphanum_fraction": 0.6427525877952576,
"avg_line_length": 29.045454025268555,
"blob_id": "94a5bba3ee97bc15062313ed38c0beb39fe9d11f",
"content_id": "9bdf07414937024adbaf2803a3caf6f4d6853a4f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "R",
"length_bytes": 683,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 22,
"path": "/correctgtf.R",
"repo_name": "scalableparallelism/celldata_project1",
"src_encoding": "UTF-8",
"text": "library(plyr)\r\n\r\ngeneinfo <- read.table(\"mm10_longerUTR.gtf\")\r\n\r\n# split the gtf into by gene\r\ncorrected <- ddply(geneinfo, ~ V9, function(gene) {\r\n\t# only consider exons, as we're only modifying them\r\n\texons <- subset(gene, V3!=\"five_prime_UTR\" & V3!=\"three_prime_UTR\")\r\n\tif (gene[1,7]==\"+\"){\r\n\t\tlocalmax <- max(exons[,5])\r\n\t\tgene[gene==localmax] <- (localmax + 1)\r\n\t}\r\n\t# adds one to the last exon of positive strands\r\n\telse {\r\n\t\tlocalmin <- min(exons[,4])\r\n\t\tgene[gene==localmin] <- (localmin - 1)\r\n\t}\r\n\t# adds one to the last exon of negative strands\r\n\tgene\r\n})\r\n\r\nwrite.table(corrected, file='mm10_longerUTR_c.gtf', quote=FALSE, sep='\\t', col.names = FALSE, row.names = FALSE)\r\n"
},
{
"alpha_fraction": 0.6406656503677368,
"alphanum_fraction": 0.6547061800956726,
"avg_line_length": 29.016128540039062,
"blob_id": "e5ec0f1d6436be67b67e502094281ad4655a75af",
"content_id": "1ab18190c62286469f6ac2128a3d1a9ce481bc8c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1923,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 62,
"path": "/gtfparser.py",
"repo_name": "scalableparallelism/celldata_project1",
"src_encoding": "UTF-8",
"text": "import sys\r\nimport csv\r\nimport re\r\nfrom collections import defaultdict\r\n\r\n## input is a gtf file\r\n## make a dictionary of lists where keys are gene names and values are start-stop pairs\r\n\r\ndef genecoverage(row):\r\n\tif row[2] == \"exon\":\r\n\t\tgene = row[8]\r\n\t\tstart = row[3]\r\n\t\tstop = row[4]\r\n\r\n\t\tif gene in genedict:\r\n\t\t\toldstart = genedict[gene][1]\r\n\t\t\toldstop = genedict[gene][2]\r\n\r\n\t\t\tif int(start) < int(oldstart):\r\n\t\t\t\tdel genedict[gene][1]\r\n\t\t\t\tgenedict[gene].insert(1,start) #adds the new start location\r\n\t\t\t\t# print(genedict[gene]) #DEBUG\r\n\t\t\tif int(stop) > int(oldstop):\r\n\t\t\t\tdel genedict[gene][2]\r\n\t\t\t\tgenedict[gene].insert(2, stop) #adds the new stop location\r\n\t\t\t\t# print(genedict[gene]) #DEBUG\r\n\r\n\t\telse:\r\n\t\t\tchrom = row[0]\r\n\t\t\tstrand = row[6]\r\n\t\t\tgeneinfo = [chrom, start, stop, strand]\r\n\t\t\tgenedict[gene] = geneinfo\t#will attach the start and stop values the first time a gene is encountered\r\n\t\t\t# print(genedict[gene]) #DEBUG\r\n\t# if the row isn't an exon it just does nothing lmao\r\n\t# print(genedict) #DEBUG\r\n\treturn genedict\r\n\r\n\r\ninfile = sys.argv[1]\r\noutpath = infile.replace(\".gtf\",\"_parsed.gtf\")\r\noutfile = open(outpath, \"w+\")\r\n\r\ngenedict = {}\r\n\r\n## field 0 is chr, field 2 read type, field 3 and 4 range of read, field 6 strand, field 8 gene name.\r\nwith open(infile) as tsvfile: \r\n\treader = csv.reader(tsvfile, delimiter ='\\t')\r\n\tfor row in reader:\r\n\t\tgenecoverage(row) #after running this we have a full dictionary of genes with their biggest starts and stops\r\n\r\n\r\nfor gene in genedict: # tabs after the first three for that good good tsv formatting\r\n\tchrom = \"\\t\" + str(genedict[gene][0]) + \"\\t\"\r\n\tstart = str(genedict[gene][1]) + \"\\t\"\r\n\tstop = str(genedict[gene][2]) + \"\\t\"\r\n\tstrand = str(genedict[gene][3]) + \"\\n\"\r\n\tif re.match(r\"[X|Y]\", genedict[gene][0]) 
or re.match(r\"[0-9]+\",genedict[gene][0]):\r\n\t\tline = gene + chrom + start + stop + strand\r\n\t\toutfile.write(line)\r\noutfile.close()\r\n\r\n#this works\r\n"
},
{
"alpha_fraction": 0.8118811845779419,
"alphanum_fraction": 0.8217821717262268,
"avg_line_length": 49.5,
"blob_id": "9843c24a8600cc35ceb77dd3d3125f8bda799259",
"content_id": "5d35f8e00fd6ed910bffc771cd49fdbaadaf04be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 101,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 2,
"path": "/README.md",
"repo_name": "scalableparallelism/celldata_project1",
"src_encoding": "UTF-8",
"text": "# celldata_project1\nScripts for the first project at Shendure lab parsing cell cluster genomic data.\n"
},
{
"alpha_fraction": 0.6284348964691162,
"alphanum_fraction": 0.6539227366447449,
"avg_line_length": 36.04545593261719,
"blob_id": "308f8641a6f5dd364b9939b6a8a556294fbab58a",
"content_id": "4e63cdf239fec8c2be14d98268f0795591d89cd8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "R",
"length_bytes": 2511,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 66,
"path": "/subclustermapper.R",
"repo_name": "scalableparallelism/celldata_project1",
"src_encoding": "UTF-8",
"text": "library(EnrichedHeatmap)\r\nlibrary(circlize)\r\nlibrary(ggplot2)\r\n\r\n# input is a single cluster bedgraph file passed to the R script\r\nargs = commandArgs(trailingOnly=TRUE)\r\nraw1 = gsub(\"bigwig/\",\"\",args[1])\r\nraw = gsub(\".bedGraph\",\"\",raw1) ## vectorize this later\r\n# raw is now just the name of the cluster\r\n\r\n\r\nfilenames <- list.files(path = \"bigwig/\", pattern = raw) # returns a list of all subcluster files\r\nfilenames <- filenames[filenames != raw1] # removes overcluster bam from filenames list\r\n# filenames is now a list of cluster.subcluster.bedGraph files.\r\n\r\nn <- length(filenames) \t # number of subclusters, used later.\r\nc = as.character(1:19)\r\nc <- append(c,c(\"X\",\"Y\"))\r\nc <- sapply(c,function(x) paste0(\"chr\",x))\r\nnames(c) <- c \t\t\t\t\t\t\t\t\t\t# why this was necessary we may never know\r\n# c is now a character vector of approved chromosomes in df, we'll use this with the apply below us\r\n\r\ndf2 <- read.table(\"mousegenome.bed\") # make sure you have this file lol\r\n# can use IRanges(df2[[3]],df2[[4]]) to get TSS and TES. 
As it stands it only grabs TES\r\nref = GRanges(seqnames = df2[[2]], ranges = df2[[4]], strand = df2[[5]])\r\n# this gets our read index\r\n\t \r\nplotname <- paste0(raw, \".png\")\r\npng(plotname)\r\n\r\nrange <- seq(-5000, 4950, by=50)\r\nxrange <- range(range)\r\nyrange <- range(seq(0,1, by=0.2)) \r\n# set up the plot \r\nplot(xrange, yrange, type=\"n\", xlab=\"Distance From TES\",\r\n \tylab=\"Read Density\") \r\ncolors <- rainbow(n) \r\n\r\ntitle(raw)\r\nlegend(xrange[1], yrange[2], 1:n, cex=0.8, col=colors,\r\n \tpch=plotchar, lty=linetype, title=\"Subcluster\")\r\n\r\noverlist <- lapply(filenames, function(x) {\r\n\ta = read.table(paste0(\"bigwig/\",x))\t\t\t# paste function gives correct path\r\n\tsubraw = gsub(\".bedGraph\",\"\",x) \t\t\t# var is now \"cluster.subcluster\"\r\n\tsubraw = gsub(paste0(raw,\".\"),\"\", subraw)\t# var is now \"subcluster\"\r\n\t# There must be a better way to do this\r\n\r\n\tdf <- a[a$V1 %in% c,]\t\t\t\t\t\t# sanitizes chromosome names and input\r\n\r\n\tgr = GRanges(seqnames = df[[1]], ranges = IRanges(df[[2]], df[[3]]),\r\n\t\t\t coverage = log10(df[[4]]+1))\t# converts our frame to a granges object\r\n\t\r\n\tmat = normalizeToMatrix(gr, ref, value_column = \"coverage\", extend = 5000, \r\n\tmean_mode = \"w0\", w = 50)\t\t\t\t\t# makes a matrix of binned reads\r\n\r\n\treadlist <- as.numeric(lapply(1:200, function(x) sum(mat[,x])))\r\n\tnormalizedreads <- (readlist / max(readlist))\r\n})\r\n# overlist is a list of listed reads as numberic vectors\r\n\r\nfor (i in 1:n) {\r\n\tlines(range, overlist[[i]], type=\"l\", lwd=1.5, col=colors[i])\r\n}\r\n\r\ndev.off()\r\n"
}
] | 7 |
victoraagg/python-test | https://github.com/victoraagg/python-test | 72532ee643d36527df5b45e299ed1e9e716de881 | e42d0f8e34039c347b928489270fe8dee60d3da3 | e34050ac1a19f27f6cdc58543ac4b706c93d6a86 | refs/heads/master | 2020-05-02T23:55:41.320785 | 2020-03-11T23:22:30 | 2020-03-11T23:22:30 | 178,294,246 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5264333486557007,
"alphanum_fraction": 0.5517498254776001,
"avg_line_length": 25.860000610351562,
"blob_id": "da01c417f3e49ba5c0d764d373bc2024c43fcef7",
"content_id": "a9edb834e39a174e53e83f42e463fbceb9ac4dfb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1348,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 50,
"path": "/03.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('Proyecto de calculadora')\n\nfin = False\nprint('Calculadora')\nprint('Opciones:')\nprint('1 - Suma')\nprint('2 - Resta')\nprint('3 - Multiplicación')\nprint('4 - Division')\nprint('5 - Salir')\n\ndef readNum(text):\n valid = False\n while not valid:\n try:\n number = int(input(text))\n except ValueError:\n print('El valor debe ser un número')\n else:\n valid = True\n return number\n\nwhile fin == False:\n option = int(input('Opción: '))\n if option == 1:\n number1 = readNum('Numero 1: ')\n number2 = readNum('Numero 2: ')\n print('Resultado:',number1+number2)\n elif option == 2:\n number1 = readNum('Numero 1: ')\n number2 = readNum('Numero 2: ')\n print('Resultado:',number1-number2)\n elif option == 3:\n number1 = readNum('Numero 1: ')\n number2 = readNum('Numero 2: ')\n print('Resultado:',number1*number2)\n elif option == 4:\n number1 = readNum('Numero 1: ')\n number2 = readNum('Numero 2: ')\n try:\n result = number1/number2\n except ZeroDivisionError:\n print('División entre cero')\n except:\n print('Error en la división')\n else:\n print('Resultado:',result)\n elif option == 5:\n fin = True\n print('Fin de operaciones')\n"
},
{
"alpha_fraction": 0.6780185699462891,
"alphanum_fraction": 0.6873065233230591,
"avg_line_length": 20.53333282470703,
"blob_id": "feb2d04a58b288e3bae9918b65edf676934aa686",
"content_id": "205025ba9f81c2054301daaa9cf6f8a2cad38093",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 328,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 15,
"path": "/11.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('Manejo de excepciones')\n\ntry:\n #print(3/'ads')\n print(3/0)\nexcept ZeroDivisionError:\n print('Error: División entre cero')\nexcept TypeError:\n print('Error: División entre un string')\nexcept:\n print('Error: División errónea')\nelse:\n print('Divisón correcta')\nfinally:\n print('Salir del programa')\n"
},
{
"alpha_fraction": 0.5582959651947021,
"alphanum_fraction": 0.5695067048072815,
"avg_line_length": 21.979381561279297,
"blob_id": "34b6fc431d56b754a0eafd1a433d9d66d8ea8f0e",
"content_id": "a532e89edc9ded3065ce3deed15b692d38f08837",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2233,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 97,
"path": "/09.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('POO avanzado')\n\nclass compPrivados:\n def __init__(self,x,y):\n self.__x = x\n self.__y = y\n def getX(self):\n return self.__x\n def getY(self):\n return self.__y\n def setX(self,x):\n self.__x = x\n def setY(self,y):\n self.__y = y\n def __suma(self):\n result = self.__x + self.__y\n return result\n def getResultado(self):\n return self.__suma()\n\nx = compPrivados(12,43)\nprint(x.getX())\nprint(x.getY())\nx.setX(678)\nprint(x.getX())\nx.setY(1678)\nprint(x.getY())\nprint(x.getResultado())\n\nprint('POO herencia')\n\nclass electrodomestico:\n def __init__(self):\n self.__isOn = False\n self.__tension = 0\n def setOn(self):\n self.__isOn = True\n def setOff(self):\n self.__isOn = False\n def isOn(self):\n return self.__isOn\n def setTension(self,tension):\n self.__tension = tension\n def getTension(self):\n return self.__tension\n\nclass lavadora(electrodomestico):\n def __init__(self):\n self.__rpm = 0\n self.__kgs = 0\n def setRpm(self,rpm):\n self.__rpm = rpm\n def setKg(self,kg):\n self.__kgs = kg\n def showStatus(self):\n print('#####')\n print('Lavadora:')\n print('\\tRPM:',self.__rpm)\n print('\\tKg´s:',self.__kgs)\n print('\\tTensión:',self.getTension())\n if self.isOn():\n print('Encendida')\n else :\n print('Apagada')\n print('#####')\n\nclass wear:\n def __init__(self):\n self.__color = 'Blue'\n def setColor(self,color):\n self.__color = color\n def getColor(self):\n return self.__color\n\nclass secadora(electrodomestico,wear):\n def __init__(self):\n self.__turbo = False\n def checkTurbo(self):\n return self.__turbo\n def turboOn(self):\n self.__turbo = True\n\nlavadora = lavadora()\nlavadora.setRpm(3000)\nlavadora.setKg(6)\nlavadora.setTension(240)\nlavadora.setOn()\nlavadora.showStatus()\n\nsecadora = secadora()\nsecadora.setTension(180)\nsecadora.turboOn()\nif secadora.checkTurbo():\n print('Turbo activado')\nprint('Tensión de la secadora:',secadora.getTension())\nsecadora.setColor('Azul')\nprint('Color de la 
ropa:',secadora.getColor())\n\n"
},
{
"alpha_fraction": 0.5476973652839661,
"alphanum_fraction": 0.5674341917037964,
"avg_line_length": 24.29166603088379,
"blob_id": "7979bcb3475ef9bc958c388e27ab1ecf0ba5a3b2",
"content_id": "6970b627244b219aab406aa594e2258c579b0d5a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 608,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 24,
"path": "/08.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "x = input(\"Numero 1: \")\ny = input(\"Numero 2: \")\nisValidx = False\nisValidy = False\n\nwhile not isValidx and not isValidy:\n if x.isdigit() == False:\n print(\"El valor\", x, \"no es un entero\")\n x = input(\"Numero 1: \")\n else:\n isValidx = True\n if y.isdigit() == False:\n print(\"El valor\", y, \"no es un entero\")\n y = input(\"Numero 2: \")\n else:\n isValidy = True\n\nx = int(x)/10\ny = int(y)/10\n\nprint(\"Suma de ambos: \", round(x+y,2))\nprint(\"Resta de ambos: \", round(x-y,2))\nprint(\"Multiplicacion de ambos: \", round(x*y,2))\nprint(\"Division de ambos: \", round(x/y,2)) \n"
},
{
"alpha_fraction": 0.6001552939414978,
"alphanum_fraction": 0.6048136353492737,
"avg_line_length": 27,
"blob_id": "b1f9bca86a4854a85c78d7c045cf92501519f27e",
"content_id": "08024e1147b274833b36280781000a2397753c9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2579,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 92,
"path": "/06.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('Ejercicio de POO avanzado')\n\nclass autor:\n def __init__(self,nombre,apellidos):\n self.nombre = nombre\n self.apellidos = apellidos\n def getAutor(self):\n print('Autor:',self.nombre,self.apellidos)\n \nclass libro:\n def __init__(self,titulo,isbn):\n self.titulo = titulo\n self.isbn = isbn\n def setAutor(self,autor):\n self.autor = autor\n def getLibro(self):\n print('########## LIBRO ##########')\n self.autor.getAutor()\n print('Título:',self.titulo)\n print('ISBN:',self.isbn)\n def getTitulo(self):\n return self.titulo\n\nclass biblioteca:\n def __init__(self):\n self.listaLibros = []\n def countLibros(self):\n return len(self.listaLibros)\n def setLibro(self,name):\n self.listaLibros = self.listaLibros + [name]\n def showBiblioteca(self):\n for libro in self.listaLibros:\n libro.getLibro()\n def unsetLibro(self,titulo):\n exists = False\n position = -1\n for libro in self.listaLibros:\n position += 1\n if libro.getTitulo() == titulo:\n exists = True\n break\n if exists:\n del self.listaLibros[position]\n print('Libro',titulo,'borrado')\n else:\n print('Libro',titulo,'no encontrado')\n\ndef showMenu():\n print('Opciones disponibles')\n print('1 - Añadir libro')\n print('2 - Mostrar biblioteca')\n print('3 - Borrar libro')\n print('4 - Mostrar numero de libros')\n print('5 - Salir')\n\ndef addBook(biblioteca):\n titulo = input('Titulo del libro: ')\n isbn = input('ISBN del libro: ')\n nombre = input('Nombre del autor: ')\n apellidos = input('Apellidos del autor: ')\n nuevoAutor = autor(nombre,apellidos)\n nuevoLibro = libro(titulo,isbn)\n nuevoLibro.setAutor(nuevoAutor)\n biblioteca.setLibro(nuevoLibro)\n\ndef showBiblioteca(biblioteca):\n biblioteca.showBiblioteca()\n\ndef borrarLibro(biblioteca):\n titulo = input('Titulo del libro a borrar: ')\n biblioteca.unsetLibro(titulo)\n\ndef numberBooks(biblioteca):\n print('Numero de libros en la biblioteca:',biblioteca.countLibros())\n\nfin = False\nbiblioteca = biblioteca()\n\nwhile not fin:\n 
showMenu()\n option = int(input('Opción: '))\n if option == 1:\n addBook(biblioteca)\n elif option == 2:\n showBiblioteca(biblioteca)\n elif option == 3:\n borrarLibro(biblioteca)\n elif option == 4:\n numberBooks(biblioteca)\n elif option == 5:\n fin = True\n print('Biblioteca cerrada')\n"
},
{
"alpha_fraction": 0.6520787477493286,
"alphanum_fraction": 0.6542669534683228,
"avg_line_length": 17.280000686645508,
"blob_id": "b434d0d1f025ac96c19e89eed5d4f3a91c943bde",
"content_id": "303fc6393fee8a5d319fb03e642a2c20e91f9584",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 459,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 25,
"path": "/10.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('Operaciones con ficheros')\n\nfile = open('./files/test.txt','r')\ntexto = file.read()\nfile.close\nprint(texto)\nprint('###')\n\nfile = open('./files/test.txt','r')\ntextoArray = file.readlines()\nfile.close\nprint(textoArray[1])\nprint('###')\n\nfile = open('./files/test.txt','a')\nnuevaLinea = file.write('Nueva línea añadida\\n')\nfile.close\nprint(texto)\nprint('###')\n\nfile = open('./files/test.txt','r')\ntexto = file.read()\nfile.close\nprint(texto)\nprint('###')\n"
},
{
"alpha_fraction": 0.800000011920929,
"alphanum_fraction": 0.800000011920929,
"avg_line_length": 19,
"blob_id": "ca1d41109b0f210ce1dd580cfc09113214d5373b",
"content_id": "1e4a8a047ae76b78251913a613fdd84763c3a128",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 40,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 2,
"path": "/README.md",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "# python-test\nEjercicios python de test\n"
},
{
"alpha_fraction": 0.6729776263237,
"alphanum_fraction": 0.7246127128601074,
"avg_line_length": 21.30769157409668,
"blob_id": "139bbfbb2c96b8f50b91a58ab465be30736446e4",
"content_id": "8eaf3440b972c06c71c1fbe2ba8d394566f17417",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 581,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 26,
"path": "/07.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "import turtle\n\ndef makeSquare(size):\n myTurtle.forward(size)\n myTurtle.left(90)\n myTurtle.forward(size)\n myTurtle.left(90)\n myTurtle.forward(size)\n myTurtle.left(90)\n myTurtle.forward(size)\n myTurtle.left(90)\n return size\n\nmyTurtle = turtle.Turtle()\n\nsquare1 = makeSquare(20)\nprint(\"Drawed square size\", square1)\nmyTurtle.left(90)\nsquare2 = makeSquare(30)\nprint(\"Drawed square size\", square2)\nmyTurtle.left(90)\nsquare3 = makeSquare(40)\nprint(\"Drawed square size\", square3)\nmyTurtle.left(90)\nsquare4 = makeSquare(50)\nprint(\"Drawed square size\", square4) \n"
},
{
"alpha_fraction": 0.7076411843299866,
"alphanum_fraction": 0.7076411843299866,
"avg_line_length": 30.6842098236084,
"blob_id": "e6ee647f4baefc484b4561972faefe40feca1a71",
"content_id": "0c4c7fe1388b01c0b3297dcf6544b15af4857f65",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 606,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 19,
"path": "/01.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print(\"Hola mundo\")\nprint(\"Ejercicio de variables y strings\")\nname = input('Nombre: ')\nprint('Hola,',name.capitalize())\nprint('Hola,',name.upper())\nprint(\"Ejercicio de ints\")\nage = int(input('Indica tu edad: '))\nfav = int(input('Indica tu número preferido: '))\nprint('Resultado:',age+fav)\nresult = age < fav\nprint('¿Es menor age que fav?',result)\nprint(\"Ejercicio de listas\")\nlista = ['manzana','pera','sandia']\nprint(lista)\nposition = int(input('¿Que posición de la lista quieres eliminar? '))\ndel lista[position]\nprint(lista)\nstringsepare = input('Cadena para separar: ')\nprint(stringsepare.split())\n"
},
{
"alpha_fraction": 0.5413240790367126,
"alphanum_fraction": 0.5463841557502747,
"avg_line_length": 33.84191131591797,
"blob_id": "e6c1ea50113f7db4ca5eb6c71c04e33ee0bbf231",
"content_id": "48cfacbeddf50c775229c32c28279a9cfc33dc71",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9496,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 272,
"path": "/12.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "class direccion:\n def __init__(self):\n self.__calle = ''\n self.__piso = ''\n self.__ciudad = ''\n self.__cp = ''\n def getCalle(self):\n return self.__calle\n def getPiso(self):\n return self.__piso\n def getCiudad(self):\n return self.__ciudad\n def getCp(self):\n return self.__cp\n def setCalle(self,calle):\n self.__calle = calle\n def setPiso(self,piso):\n self.__piso = piso\n def setCiudad(self,ciudad):\n self.__ciudad = ciudad\n def setCp(self,cp):\n self.__cp = cp\n\nclass persona:\n def __init__(self):\n self.__nombre = ''\n self.__apellidos = ''\n self.__fechaNac = ''\n def getNombre(self):\n return self.__nombre\n def getApellidos(self):\n return self.__apellidos\n def getFechaNac(self):\n return self.__fechaNac\n def setNombre(self,nombre):\n self.__nombre = nombre\n def setApellidos(self,apellidos):\n self.__apellidos = apellidos\n def setFechaNac(self,fecha):\n self.__fechaNac = fecha\n\nclass telefono:\n def __init__(self):\n self.__movil = ''\n self.__fijo = ''\n self.__trabajo = ''\n def getMovil(self):\n return self.__movil\n def getFijo(self):\n return self.__fijo\n def getTrabajo(self):\n return self.__trabajo\n def setMovil(self,movil):\n self.__movil = movil\n def setFijo(self,fijo):\n self.__fijo = fijo\n def setTrabajo(self,trabajo):\n self.__trabajo = trabajo\n\nclass Contacto(direccion,persona,telefono):\n def __init__(self):\n self.__email = ''\n def getEmail(self):\n return self.__email\n def setEmail(self,mail):\n self.__email = mail\n def showContacto(self):\n print('---CONTACTO---')\n print('Nombre:',self.getNombre())\n print('Apellidos:',self.getApellidos())\n print('Fecha de nacimiento:',self.getFechaNac())\n print('Teléfono móvil:',self.getMovil())\n print('Teléfono fijo:',self.getFijo())\n print('Teléfono trabajo:',self.getTrabajo())\n print('Calle:',self.getCalle())\n print('Piso:',self.getPiso())\n print('Ciudad:',self.getCiudad())\n print('C.P.:',self.getCp())\n print('Email:',self.getEmail())\n\nclass agenda:\n \n 
def __init__(self,path):\n self.__listaContactos = []\n self.__path = path\n \n def cargarContactos(self):\n try:\n fichero = open(self.__path,'r')\n except:\n print('ERROR: El fichero no existe')\n else:\n contactos = fichero.readlines()\n fichero.close()\n if(len(contactos)>0):\n for contacto in contactos:\n datos = contacto.split('#')\n if(len(datos)==11):\n nuevoContacto = Contacto()\n nuevoContacto.setNombre(datos[0])\n nuevoContacto.setApellidos(datos[1])\n nuevoContacto.setFechaNac(datos[2])\n nuevoContacto.setMovil(datos[3])\n nuevoContacto.setFijo(datos[4])\n nuevoContacto.setTrabajo(datos[5])\n nuevoContacto.setCalle(datos[6])\n nuevoContacto.setPiso(datos[7])\n nuevoContacto.setCiudad(datos[8])\n nuevoContacto.setCp(datos[9])\n nuevoContacto.setEmail(datos[10])\n self.__listaContactos = self.__listaContactos + [nuevoContacto]\n print('INFO: Cargados',len(self.__listaContactos),'contactos')\n \n def crearContacto(self,contacto):\n self.__listaContactos = self.__listaContactos + [contacto]\n\n def guardarContacto(self):\n try:\n fichero = open(self.__path,'w')\n except:\n print('ERROR: El fichero no se puede guardar')\n else:\n for contacto in self.__listaContactos:\n texto = contacto.getNombre() + '#'\n texto = texto + contacto.getApellidos() + '#'\n texto = texto + contacto.getFechaNac() + '#'\n texto = texto + contacto.getMovil() + '#'\n texto = texto + contacto.getFijo() + '#'\n texto = texto + contacto.getTrabajo() + '#'\n texto = texto + contacto.getCalle() + '#'\n texto = texto + contacto.getPiso() + '#'\n texto = texto + contacto.getCiudad() + '#'\n texto = texto + contacto.getCp() + '#'\n texto = texto + contacto.getEmail() + '\\n'\n fichero.write(texto)\n fichero.close()\n\n def mostrarAgenda(self):\n print('### Agenda ###')\n print('Numero de contactos:',len(self.__listaContactos),'\\n')\n for contacto in self.__listaContactos:\n contacto.showContacto()\n print('######')\n\n def buscarContacto(self,tipo,dato):\n listaEncontrados = []\n for 
contacto in self.__listaContactos:\n if tipo == 1:\n if contacto.getNombre() == dato:\n listaEncontrados = listaEncontrados + [contacto] \n elif tipo == 2:\n if contacto.getMovil() == dato:\n listaEncontrados = listaEncontrados + [contacto]\n elif tipo == 3:\n if contacto.getFijo() == dato:\n listaEncontrados = listaEncontrados + [contacto]\n elif tipo == 4:\n if contacto.getTrabajo() == dato:\n listaEncontrados = listaEncontrados + [contacto]\n return listaEncontrados\n\n def borrarContacto(self,tipo,dato):\n listaFinal = []\n for contacto in self.__listaContactos:\n if tipo == 1:\n if contacto.getNombre() != dato:\n listaFinal = listaFinal + [contacto] \n elif tipo == 2:\n if contacto.getMovil() != dato:\n listaFinal = listaFinal + [contacto]\n elif tipo == 3:\n if contacto.getFijo() != dato:\n listaFinal = listaFinal + [contacto]\n elif tipo == 4:\n if contacto.getTrabajo() != dato:\n listaFinal = listaFinal + [contacto]\n print('INFO:',len(self.__listaContactos)-len(listaFinal),'contactos han sido borrados')\n self.__listaContactos = listaFinal\n\ndef obtenerOpcion(texto):\n leido = False\n while not leido:\n try:\n numero = int(input(texto))\n except ValueError:\n print('El valor debe ser un numero')\n else:\n leido = True\n return numero\n\ndef mostrarMenu():\n print('########## MENÚ PRINCIPAL ####################')\n print('1 - Mostrar contactos')\n print('2 - Buscar contactos')\n print('3 - Crear nuevo contacto')\n print('4 - Borrar contactos')\n print('5 - Guardar contactos')\n print('6 - Salir')\n\ndef buscarContactos(agenda):\n print('Buscar contactos')\n print('1 - Nombre')\n print('2 - Movil')\n print('3 - Fijo')\n print('4 - Trabajo')\n print('5 - Salir')\n finBuscar = False\n while not finBuscar:\n opcion = obtenerOpcion('Opción de búsqueda:')\n if opcion == 5:\n finBuscar = True\n encontrados = agenda.buscarContacto(opcion,input('Introduce el valor:'))\n if len(encontrados) > 0:\n print('### CONTACTOS ENCONTRADOS ###')\n for item in encontrados:\n 
item.showContacto()\n print('######')\n else:\n print('INFO: No se han encontrado contactos')\n\ndef procesoCrearContacto(agenda):\n nuevoContacto = Contacto()\n nuevoContacto.setNombre(input('Introduce el nombre:'))\n nuevoContacto.setApellidos(input('Introduce los apellidos:'))\n nuevoContacto.setFechaNac(input('Introduce la fecha de nacimiento:'))\n nuevoContacto.setMovil(input('Introduce el movil:'))\n nuevoContacto.setFijo(input('Introduce el fijo:'))\n nuevoContacto.setTrabajo(input('Introduce el teléfono del trabajo:'))\n nuevoContacto.setCalle(input('Introduce la calle:'))\n nuevoContacto.setPiso(input('Introduce el piso:'))\n nuevoContacto.setCiudad(input('Introduce la ciudad:'))\n nuevoContacto.setCp(input('Introduce el C.P.:'))\n nuevoContacto.setEmail(input('Introduce el email:'))\n agenda.crearContacto(nuevoContacto)\n\ndef borrarContacto(agenda):\n print('Borrar contacto')\n print('1 - Nombre')\n print('2 - Movil')\n print('3 - Fijo')\n print('4 - Trabajo')\n print('5 - Salir')\n finBuscar = False\n while not finBuscar:\n opcion = obtenerOpcion('Opción de borrado:')\n if opcion == 5:\n finBuscar = True\n else:\n encontrados = agenda.borrarContacto(opcion,input('Introduce el valor:'))\n finBuscar = True\n\ndef Main():\n nuevaAgenda = agenda('./files/agenda.txt')\n nuevaAgenda.cargarContactos()\n fin = False\n while not fin:\n mostrarMenu()\n opcion = obtenerOpcion('Opción:')\n if(opcion==1):\n nuevaAgenda.mostrarAgenda()\n elif(opcion==2):\n buscarContactos(nuevaAgenda)\n elif(opcion==3):\n procesoCrearContacto(nuevaAgenda)\n elif(opcion==4):\n borrarContacto(nuevaAgenda)\n elif(opcion==5):\n nuevaAgenda.guardarContacto()\n elif(opcion==6):\n fin = True\n\nMain()\n \n"
},
{
"alpha_fraction": 0.5484764575958252,
"alphanum_fraction": 0.5775623321533203,
"avg_line_length": 24.785715103149414,
"blob_id": "65008078576804974844eebd6f93a24306f63919",
"content_id": "c23a2eda081f335882c253fcd9d344d0b354b6c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 723,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 28,
"path": "/05.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('Ejercicio de POO')\nclass coords:\n def __init__(self,x,y):\n self.X = x\n self.Y = y\n def mostrarPunto(self):\n print('La coordenada es: (',self.X,',',self.Y,')')\np1 = coords(4,5)\np1.mostrarPunto()\np1.Y = 45\np1.mostrarPunto()\nclass triangulo:\n def __init__(self,x,y,z):\n self.X = x\n self.Y = y\n self.Z = z\n def mostrarVertices(self):\n print('Las coordenadas del triángulo son:')\n print('Coordenada X:')\n self.X.mostrarPunto()\n print('Coordenada Y:')\n self.Y.mostrarPunto()\n print('Coordenada Z:')\n self.Z.mostrarPunto()\np2 = coords(13,19)\np3 = coords(34,35)\ntriangle = triangulo(p1,p2,p3)\ntriangle.mostrarVertices()\n"
},
{
"alpha_fraction": 0.688365638256073,
"alphanum_fraction": 0.6897506713867188,
"avg_line_length": 29.08333396911621,
"blob_id": "c04ae5fa9be0c7e8c3da1be5091a00b3c3c0f162",
"content_id": "2c5290e6e630688b6f5ee47c2e89750d33d04679",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 724,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 24,
"path": "/04.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('Ejercicio de funciones')\ndef saludar():\n print('Hola a todos')\nsaludar()\ndef esMayorQueCero(value):\n if int(value) < 0:\n return value+' es mayor que cero'\n else:\n return value+' es menor que cero'\nnumber = input('Introduce un número: ')\nprint(esMayorQueCero(number))\ndef funcionesDosReturn(value):\n cuadrado = value * value\n cubo = value * value * value\n return cuadrado,cubo\narit = int(input('Introduce un número: '))\ncuadrado,cubo = funcionesDosReturn(arit)\nprint('El cuadrado de',arit,'es:',cuadrado,'Su cubo es:',cubo)\nname = input('Nombre: ')\ndef toUpperCase(string):\n return string.upper()\ndef saludaNombre(name):\n print('Hola, soy',toUpperCase(name))\nsaludaNombre(name)\n"
},
{
"alpha_fraction": 0.6016427278518677,
"alphanum_fraction": 0.6550307869911194,
"avg_line_length": 23.350000381469727,
"blob_id": "11accde51757a7aa5109e2e19b7020367c97bdfe",
"content_id": "ac14e72331d8721d78de32636f19ecff948dac8c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 487,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 20,
"path": "/02.py",
"repo_name": "victoraagg/python-test",
"src_encoding": "UTF-8",
"text": "print('Ejercicio de condicionales')\nvalue1 = int(34)\nvalue2 = int(29)\nif value1<value2:\n print('El numero value1 es menor')\nelif value1==value2:\n print('El numero value1 es igual a value2')\nelse:\n print('El numero value1 es mayor')\nprint('Ejercicio de bucles')\ni = 0\nwhile i<=10:\n print(i)\n i = i+1\nlista = ['manzana','pera','sandia']\nfor var in lista:\n print(var)\nfor item1 in range(3):\n for item2 in range (2):\n print('Elemento1:',item1,'Elemento2:',item2)\n"
}
] | 13 |
ChaoPeng716/SphericalDiffusion | https://github.com/ChaoPeng716/SphericalDiffusion | 2f27513e40eb4db1b11a00c8f116933f1ab635d0 | 47b80b7e08740bede60165760aabc45b3e428429 | def5b92b6fc0624ec20bc8c26a4b7957e25aed45 | refs/heads/master | 2020-04-04T04:58:27.678736 | 2018-11-01T15:54:34 | 2018-11-01T15:54:34 | 155,731,657 | 0 | 0 | null | 2018-11-01T14:57:48 | 2018-11-01T14:53:30 | 2018-11-01T14:53:28 | null | [
{
"alpha_fraction": 0.8333333134651184,
"alphanum_fraction": 0.8333333134651184,
"avg_line_length": 30,
"blob_id": "6670a32480c5992cce247b0ffde9648a1fdb7a23",
"content_id": "23c10b8a1503684d83cb158f19f2e98cedb7e686",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 30,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 1,
"path": "/myfunctions/__init__.py",
"repo_name": "ChaoPeng716/SphericalDiffusion",
"src_encoding": "UTF-8",
"text": "from .sphericalSolver import *"
},
{
"alpha_fraction": 0.5050055384635925,
"alphanum_fraction": 0.5328142642974854,
"avg_line_length": 20.380952835083008,
"blob_id": "4c6c8eff69bbfc7a88777adf6e6f1958799e49b4",
"content_id": "1ff0e04f898d7f456d8d2583a790645776f201df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 899,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 42,
"path": "/myfunctions/simpleDiffusion.py",
"repo_name": "ChaoPeng716/SphericalDiffusion",
"src_encoding": "UTF-8",
"text": "from scipy.integrate import ode\nfrom numpy import *\nimport matplotlib.pyplot as plt\n\n\n###################\n# this is for a constant current application:\n\n\ndef dcdt(c, t, r, D, current):\n \"\"\"\n This function finds the dcdt. It takes as inputs the parameters a time and parameters\n :param u:\n :param t:\n :param grid:\n :param D:\n :param current:\n :return:\n \"\"\"\n dr = r[1]-r[0]\n\n q = - D r[1:-1] ** 2. * (c[:, 1:] - c[:, 0:-1]) / dr\n q_surf = -current\n q = np.concatenate([0], q, q_surf)\n\n dcdt_out = - (2. / (r[1:] + r[0:-1])) ** 2. \\\n * (qn[:, 1:] - qn[:, 0:-1]) / dr\n return dcdt_out\n\n\n# Set up grid\nr = np.linspace(0, 1, 100) # neg, sep, pos, particle, grid\n\n# Initial conditions\nc0 = ones(size(r)) # neg, pos, electrolyte, grid\n\n# Solve ODEs\nt = np.linspace(0, tmax, tsteps)\nc = odeint(dcdt, c0, t, args=(r, D, current))\n\n\n################\n\n"
},
{
"alpha_fraction": 0.606249988079071,
"alphanum_fraction": 0.643750011920929,
"avg_line_length": 21.85714340209961,
"blob_id": "271b8c45bae7cf372322fdb4b6b7a2671bf6e0ba",
"content_id": "7e4e96cc706f86804d3c9ea96809b4827a02a675",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 160,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 7,
"path": "/test_myfunction.py",
"repo_name": "ChaoPeng716/SphericalDiffusion",
"src_encoding": "UTF-8",
"text": "from myfunctions import *\n\n\n# Test\ndef test_sphericalSolver():\n x0, t0, C = spherical_Solver(j, r, t, params)\n assert temp.max() > 10 and temp.min() < 50\n"
}
] | 3 |
WilJames/SortToFolders | https://github.com/WilJames/SortToFolders | 9607441ba638a3ab5d61c12e1b78a3db54016c2d | 965e94eda8bdd4d73852a268af7a9b76dba7b902 | 7c7a916411d4b3e15963dbd17313a8ac4bde054d | refs/heads/master | 2022-04-04T10:57:58.301901 | 2020-02-22T09:43:26 | 2020-02-22T09:43:26 | 188,710,453 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5538991093635559,
"alphanum_fraction": 0.5751146674156189,
"avg_line_length": 33.83561706542969,
"blob_id": "9a56c0d9ff696ee703af2118259508333a6dae0e",
"content_id": "46b39e9e47a2a36958f8538c38ed9f1d3509fe3c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5425,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 146,
"path": "/forms/copy_win.py",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\nfrom PySide2.QtWidgets import QWidget\r\nfrom PySide2.QtGui import QIcon, QFont\r\nfrom PySide2.QtCore import (Qt, QUrl, Signal, QSize)\r\n\r\nfrom time import time, strftime, gmtime, monotonic\r\nfrom pathlib import Path, PurePath\r\nimport re\r\nimport os\r\n\r\nfrom ui.copy_win import Ui_copy_win\r\n\r\n_WINDOWS = os.name == 'nt'\r\n\r\n\r\nclass Copy_Win(QWidget):\r\n stopmovesignal = Signal()\r\n\r\n def __init__(self):\r\n super(Copy_Win, self).__init__()\r\n\r\n self.ui = Ui_copy_win()\r\n self.ui.setupUi(self)\r\n self.setFixedSize(368, 108)\r\n self.setWindowIcon(QIcon('assets/empty.png'))\r\n self.setWindowFlags(self.windowFlags() & ~Qt.WindowMaximizeButtonHint)\r\n\r\n self.ui.label_5.setOpenExternalLinks(True)\r\n self.ui.widget.hide()\r\n # self.ui.pushButton.setText(u\"\\u2193\") # вниз\r\n self.ui.pushButton.setIcon(QIcon(\"assets/arrowD.png\"))\r\n self.ui.pushButton.setIconSize(QSize(13, 13))\r\n # self.ui.label_dopinfo.setText('Подробнее')\r\n self.ui.pushButton.setText('Подробнее')\r\n\r\n self.ui.pushButton.clicked.connect(self.hideshowdopinfo)\r\n self.ui.pushButton_2.clicked.connect(self.stopmove)\r\n\r\n # self.ui.progressBar.setValue(55)\r\n\r\n def stopmove(self):\r\n self.stopmovesignal.emit()\r\n # self.MovingThread.status_run = False\r\n\r\n def hideshowdopinfo(self):\r\n if self.ui.widget.isVisible():\r\n self.ui.widget.hide()\r\n # self.ui.pushButton.setText(u\"\\u2193\") # вниз\r\n self.ui.pushButton.setIcon(QIcon(\"assets/arrowD.png\"))\r\n self.ui.pushButton.setIconSize(QSize(13, 13))\r\n # self.ui.label_dopinfo.setText('Подробнее')\r\n self.ui.pushButton.setText('Подробнее')\r\n self.setFixedSize(368, 108)\r\n else:\r\n self.ui.widget.show()\r\n # self.ui.pushButton.setText(u\"\\u2191\") # вверх\r\n self.ui.pushButton.setIcon(QIcon(\"assets/arrowU.png\"))\r\n self.ui.pushButton.setIconSize(QSize(13, 13))\r\n # self.ui.label_dopinfo.setText('Меньше сведений')\r\n 
self.ui.pushButton.setText('Меньше сведений')\r\n self.setFixedSize(368, 224)\r\n\r\n def showWin(self, total_files, total_size):\r\n total_files = total_files\r\n total_size = total_size\r\n\r\n # Блок всего\r\n self.ui.label_10.setText('Файлов:')\r\n self.ui.label_14.setText(str(total_files))\r\n\r\n self.ui.label_11.setText('Размер:')\r\n self.ui.label_15.setText(self.humansize(total_size))\r\n\r\n self.ui.progressBar.setValue(0)\r\n self.ui.widget.hide()\r\n self.setFixedSize(368, 108)\r\n\r\n self.show()\r\n\r\n def progress(self, progressDict):\r\n time_remain = progressDict['time_remain']\r\n speed = f\"{self.humansize(progressDict['speed'])}/s\"\r\n percent = progressDict['percent']\r\n less_files = progressDict['less_files']\r\n less_size = progressDict['less_size']\r\n\r\n end_time = monotonic() - self.start_time\r\n if _WINDOWS:\r\n end_time = strftime('%#Hh %#Mm %#Ss', gmtime(end_time))\r\n time_remain = strftime('%#Hh %#Mm %#Ss', gmtime(time_remain))\r\n else:\r\n end_time = strftime('%-Hh %-Mm %-Ss', gmtime(end_time))\r\n time_remain = strftime('%-Hh %-Mm %-Ss', gmtime(time_remain))\r\n\r\n end_time = re.sub(r'0h\\s|0m\\s', '', end_time)\r\n time_remain = re.sub(r'0h\\s|0m\\s', '', time_remain)\r\n\r\n self.setWindowTitle(f'Выполнено: {percent}%')\r\n self.ui.label_4.setText(f'Скорость: {speed}')\r\n self.ui.progressBar.setValue(percent)\r\n self.ui.progressBar.setFormat(f'{less_files} | %p%')\r\n # Блок прошло всего\r\n self.ui.label_9.setText('Времени:')\r\n self.ui.label_13.setText(end_time)\r\n\r\n # Блок осталось\r\n self.ui.label_2.setText('Времени:')\r\n self.ui.label_6.setText(time_remain)\r\n\r\n self.ui.label_3.setText('Файлов:')\r\n self.ui.label_7.setText(str(less_files))\r\n\r\n self.ui.label_8.setText('Размер:')\r\n self.ui.label_12.setText(self.humansize(less_size))\r\n\r\n def setName(self, alist):\r\n pfrom = PurePath(alist[1]).parent\r\n pto = PurePath(alist[2]).parent\r\n namefrom = PurePath(pfrom).parts[-1]\r\n nameto = 
PurePath(pto).parts[-1]\r\n\r\n path1 = QUrl.fromLocalFile(f'{pfrom}').toString()\r\n path1 = f'''<a href='{path1}'>{namefrom}</a>'''\r\n\r\n path2 = QUrl.fromLocalFile(f'{pto}').toString()\r\n path2 = f'''<a href='{path2}'>{nameto}</a>'''\r\n\r\n self.ui.label_5.setText(f'Из {path1} в {path2}')\r\n self.ui.label.setText(f'Имя: {alist[0]}')\r\n # font1 = QFont()\r\n # font1.setPointSize(9)\r\n # self.ui.label.setFont(font1)\r\n\r\n def humansize(self, nbytes):\r\n ''' Перевод байт в кб, мб и т.д.'''\r\n suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']\r\n i = 0\r\n while nbytes >= 1024 and i < len(suffixes) - 1:\r\n nbytes /= 1024.\r\n i += 1\r\n f = ('%.2f' % nbytes).rstrip('0').rstrip('.')\r\n return '%s %s' % (f, suffixes[i])\r\n\r\n def starttimer(self):\r\n self.start_time = monotonic()\r\n"
},
{
"alpha_fraction": 0.5523930788040161,
"alphanum_fraction": 0.5614550113677979,
"avg_line_length": 32.899776458740234,
"blob_id": "d79ab1bfc99c97c38ed448808a71bb01a4ec6ec7",
"content_id": "09b05ddd85db205c96ba20631be2510ddca51015",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 16134,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 449,
"path": "/stf.py",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\nimport json\r\nimport os\r\nimport re\r\n# from time import time, strftime, gmtime, monotonic\r\n\r\nfrom PySide2.QtCore import (QCoreApplication, QMetaObject, QObject, QPoint,\r\n QRect, QSize, QUrl, Qt, QEvent)\r\nfrom PySide2.QtGui import (QBrush, QColor, QConicalGradient, QCursor, QFont,\r\n QFontDatabase, QIcon, QLinearGradient, QPalette, QPainter, QPixmap,\r\n QRadialGradient)\r\nfrom PySide2.QtWidgets import *\r\n\r\nfrom ui.mtf_ui import Ui_MainWindow\r\nfrom ui.lang import Language\r\nfrom forms.copy_win import Copy_Win\r\nfrom threads.MovingThread import MovingThread\r\n# from forms.info_form import Info\r\n\r\n\r\nclass MainWindow(QMainWindow, Ui_MainWindow):\r\n def __init__(self):\r\n super(MainWindow, self).__init__()\r\n\r\n self.ui = Ui_MainWindow()\r\n self.ui.setupUi(self)\r\n self.setWindowIcon(QIcon('assets/icon.png'))\r\n self.setWindowTitle('Sort To Folders')\r\n\r\n self.MovingThread = MovingThread()\r\n self.CopyWin = Copy_Win()\r\n\r\n self.MovingThread.finished.connect(self.finish)\r\n self.MovingThread.showWindowProgress.connect(self.copyShow)\r\n self.MovingThread.progress.connect(self.progress)\r\n self.MovingThread.setName.connect(self.setName)\r\n self.CopyWin.stopmovesignal.connect(self.start)\r\n self.ui.lineEdit.setPlaceholderText('Добавить расширение')\r\n\r\n self.ui.tabWidget.currentChanged.connect(self.tabWidget)\r\n\r\n self.loading()\r\n\r\n # Кнопка добавить +\r\n self.ui.pushButton_3.clicked.connect(self.add)\r\n self.ui.pushButton_4.clicked.connect(self.delete)\r\n # Кнопка тест\r\n self.ui.pushbtest.clicked.connect(self.test2)\r\n self.ui.pushButton_6.clicked.connect(self.start)\r\n\r\n self.ui.tableWidget_3.setSortingEnabled(True)\r\n header = self.ui.tableWidget_3.horizontalHeader()\r\n header.sortIndicatorChanged.connect(self.sort)\r\n # print(self._dir(header_2))\r\n\r\n # self.test2()\r\n\r\n '''To Do\r\n - Выбор языка и запись в файл\r\n - Сделать файл языка\r\n '''\r\n\r\n def 
test2(self):\r\n        self.CopyWin.showWin(12, 123)\r\n\r\n    def copyShow(self):\r\n        totalFiles = self.MovingThread.totalFiles\r\n        totalSize = self.MovingThread.totalSize\r\n\r\n        self.CopyWin.showWin(totalFiles, totalSize)\r\n\r\n    def progress(self, progressDict):\r\n        self.CopyWin.progress(progressDict)\r\n\r\n    def setName(self, namesList):\r\n        self.CopyWin.setName(namesList)\r\n\r\n    def _dir(self, widget):\r\n        for i in dir(widget):\r\n            print(i)\r\n\r\n    def test(self):\r\n        table = self.ui.tableWidget_2\r\n        print(table.selectedRanges())\r\n\r\n    def start(self):\r\n        if not self.MovingThread.isRunning():\r\n            dataTarget = [x for x in self.dataTarget if x['chekbox']\r\n                          and os.path.exists(x['path'])]\r\n            data = [x for x in self.data if x['chekbox']\r\n                    and os.path.exists(x['path'])]\r\n\r\n            if dataTarget and data:\r\n                self.CopyWin.starttimer()\r\n                self.ui.pushButton_6.setText('Стоп')\r\n                self.MovingThread.st(dataTarget, data)\r\n            else:\r\n                self.ui.statusbar.showMessage('Нечего перемещать', 3000)\r\n        else:\r\n            self.MovingThread.statusWork = False\r\n\r\n    def finish(self):\r\n        self.ui.pushButton_6.setText('Старт')\r\n        # добавить скрытие окна перемещения файлов\r\n\r\n    def eventFilter(self, source, event):\r\n        # print(event.type(), source)\r\n        if event.type() == QEvent.Enter and source is self.ui.tableWidget_3:\r\n            self.focusSet = True\r\n        elif event.type() == QEvent.Leave and source is self.ui.tableWidget_3:\r\n            self.focusSet = False\r\n        # elif event.type() == QtCore.QEvent.FocusOut and (source is self.listWidget_categori or source is self.listWidget_extensions):\r\n        #     self.focus_listwindget = 0\r\n        return super(MainWindow, self).eventFilter(source, event)\r\n\r\n    def getCurrent(self):\r\n        if self.currentTab == 0:\r\n            table = self.ui.tableWidget\r\n            data = self.dataTarget\r\n\r\n        elif self.currentTab == 1:\r\n            table = self.ui.tableWidget_2\r\n            data = self.data\r\n        return table, data\r\n\r\n    def doubleClicked(self, item):\r\n        if (currentColumn:= item.column()) == 1:\r\n            _, 
data = self.getCurrent()\r\n currentRow = item.row()\r\n pathChosen = QFileDialog.getExistingDirectory(\r\n self, 'Выбор папки', os.path.expanduser('~'))\r\n data[currentRow]['path'] = pathChosen\r\n\r\n if not data[currentRow]['name']:\r\n sep = os.path.sep + (os.path.altsep or '')\r\n name = os.path.basename(pathChosen.rstrip(sep))\r\n data[currentRow]['name'] = name\r\n\r\n self.fillingInTable()\r\n\r\n def editTable(self, item):\r\n _, data = self.getCurrent()\r\n\r\n currentRow = item.row()\r\n currentCol = item.column()\r\n\r\n if currentCol == 0:\r\n data[currentRow]['name'] = item.text()\r\n check = item.checkState()\r\n\r\n if check is Qt.CheckState.Checked:\r\n data[currentRow]['chekbox'] = True\r\n else:\r\n data[currentRow]['chekbox'] = False\r\n elif currentCol == 1:\r\n data[currentRow]['path'] = item.text()\r\n\r\n def keyPressEvent(self, event):\r\n ctrl = event.modifiers() == 0x04000000\r\n\r\n if ctrl and event.key() == 0x56: # V key\r\n self.paste()\r\n elif (ctrl and event.key() == 0x01000007) and self.focusSet:\r\n self.deleteExt()\r\n elif ctrl and event.key() == 0x01000007: # Delete key\r\n self.delete()\r\n elif event.key() == 0x01000007: # Delete key\r\n self.clear()\r\n\r\n def openMenu(self, pos):\r\n menu = QMenu()\r\n if self.focusSet:\r\n # menu.addAction(\"Сортировать\", self.sort)\r\n menu.addAction(\"Удалить\", self.deleteExt)\r\n menu.addAction(\"Удалить всё\", self.deleteExtAll)\r\n else:\r\n menu.addAction(\"Вставить\", self.paste)\r\n menu.addAction(\"Очистить\", self.clear)\r\n menu.addAction(\"Удалить\", self.delete)\r\n menu.addAction(\"Удалить всё\", self.deleteAll)\r\n\r\n menu.exec_(QCursor.pos())\r\n\r\n def sort(self, index, order):\r\n table, data = self.getCurrent()\r\n tableExt = self.ui.tableWidget_3\r\n\r\n if order is Qt.AscendingOrder:\r\n for i in data:\r\n i['extension'].sort()\r\n else:\r\n for i in data:\r\n i['extension'].sort(reverse=True)\r\n\r\n self.preloadExt()\r\n\r\n def clear(self):\r\n table, _ = 
self.getCurrent()\r\n\r\n currentRow = table.currentRow()\r\n currentCol = table.currentColumn()\r\n table.setItem(currentRow, currentCol, QTableWidgetItem(None))\r\n\r\n def paste(self):\r\n table, data = self.getCurrent()\r\n currentRow = table.currentRow()\r\n currentCol = table.currentColumn()\r\n clipboard = QApplication.clipboard().text()\r\n\r\n if currentCol == 1:\r\n clipboard = clipboard.replace('\\\\', '/')\r\n\r\n if not data[currentRow]['name']:\r\n sep = os.path.sep + (os.path.altsep or '')\r\n name = os.path.basename(clipboard.rstrip(sep))\r\n data[currentRow]['name'] = name\r\n\r\n data[currentRow]['path'] = clipboard\r\n data[currentRow]['chekbox'] = True\r\n\r\n elif currentCol == 0:\r\n data[currentRow]['name'] = clipboard\r\n\r\n self.fillingInTable()\r\n table.setCurrentCell(currentRow, currentCol)\r\n if self.currentTab == 1:\r\n self.preloadExt()\r\n\r\n def deleteExtAll(self):\r\n table, data = self.getCurrent()\r\n tableExt = self.ui.tableWidget_3\r\n\r\n ext = data[self.dataRowExt]['extension']\r\n ext.clear()\r\n self.preloadExt()\r\n\r\n def deleteExt(self):\r\n if self.dataRowExt >= 0:\r\n table, data = self.getCurrent()\r\n tableExt = self.ui.tableWidget_3\r\n\r\n ext = data[self.dataRowExt]['extension']\r\n if (items:= tableExt.selectedItems()):\r\n for item in items:\r\n ext.remove(item.text())\r\n else:\r\n ext.pop()\r\n\r\n self.preloadExt()\r\n\r\n def delete(self):\r\n table, data = self.getCurrent()\r\n if (items:= table.selectedItems()):\r\n for index, item in enumerate(items):\r\n data.pop(item.row() - index)\r\n else:\r\n data.pop()\r\n\r\n self.fillingInTable()\r\n\r\n def deleteAll(self):\r\n table, data = self.getCurrent()\r\n data.clear()\r\n self.fillingInTable()\r\n\r\n def addExt(self):\r\n lineEdit = self.ui.lineEdit\r\n textList = lineEdit.text().strip().split()\r\n\r\n if self.dataRowExt >= 0:\r\n table, data = self.getCurrent()\r\n ext = data[self.dataRowExt]['extension']\r\n for i in textList:\r\n if 
i.startswith('.') and i not in ext:\r\n ext.append(i)\r\n\r\n lineEdit.clear()\r\n\r\n self.preloadExt()\r\n\r\n def add(self):\r\n if self.currentTab == 0: # вкладка откуда\r\n self.dataTarget.append({'name': '', 'path': '', 'chekbox': True})\r\n\r\n elif self.currentTab == 1: # вкладка откуда\r\n self.data.append(\r\n {'name': '', 'path': '', 'extension': [], 'chekbox': True})\r\n\r\n self.fillingInTable()\r\n\r\n def normText(self, qstring):\r\n norm = re.compile(r'[^a-zA-Z0-9\\.\\s]+')\r\n space = re.compile(r'\\s{2,}')\r\n lineEdit = self.ui.lineEdit\r\n qstring = norm.sub('', qstring)\r\n qstring = space.sub(' ', qstring.lower())\r\n lineEdit.setText(qstring)\r\n\r\n def tabWidget(self, index):\r\n self.currentTab = index\r\n if index == 0: # вкладка откуда\r\n if self.ui.widget.isHidden():\r\n self.ui.widget.show()\r\n self.fillingInTable()\r\n elif index == 1: # вкладка куда\r\n if self.ui.widget.isHidden():\r\n self.ui.widget.show()\r\n self.fillingInTable()\r\n self.preloadExt()\r\n elif index == 2: # вкладка настройки\r\n self.ui.widget.hide()\r\n\r\n def _connect(self):\r\n # Первая вкладка\r\n self.ui.tableWidget.setContextMenuPolicy(Qt.CustomContextMenu)\r\n self.ui.tableWidget.customContextMenuRequested.connect(self.openMenu)\r\n self.ui.tableWidget.itemDoubleClicked.connect(self.doubleClicked)\r\n # Вторая вкладка\r\n self.ui.tableWidget_2.setContextMenuPolicy(Qt.CustomContextMenu)\r\n self.ui.tableWidget_2.customContextMenuRequested.connect(self.openMenu)\r\n self.ui.tableWidget_2.itemDoubleClicked.connect(self.doubleClicked)\r\n self.ui.tableWidget_2.itemClicked.connect(self.preloadExt)\r\n # Вторая вкладка расширения\r\n self.ui.tableWidget_3.setContextMenuPolicy(Qt.CustomContextMenu)\r\n self.ui.tableWidget_3.customContextMenuRequested.connect(self.openMenu)\r\n self.ui.lineEdit.returnPressed.connect(self.addExt)\r\n self.ui.lineEdit.textChanged.connect(self.normText)\r\n self.ui.tableWidget_3.installEventFilter(self)\r\n\r\n def 
preloadExt(self, item=None):\r\n tableExt = self.ui.tableWidget_3\r\n table, data = self.getCurrent()\r\n\r\n tableExt.clear()\r\n tableExt.setColumnCount(1)\r\n tableExt.setRowCount(0)\r\n\r\n if (items:= table.selectedItems()):\r\n self.dataRowExt = items[-1].row()\r\n extensions = data[self.dataRowExt].get('extension')\r\n tableExt.setRowCount(len(extensions))\r\n\r\n for index, extension in enumerate(extensions):\r\n item = QTableWidgetItem(extension)\r\n item.setTextAlignment(0x0002) # aligment to Right\r\n tableExt.setItem(index, 0, item)\r\n else:\r\n self.dataRowExt = -1\r\n\r\n tableExt.setHorizontalHeaderLabels([\"Расширения\"])\r\n tableExt.resizeColumnsToContents()\r\n\r\n def fillingInTable(self):\r\n table, data = self.getCurrent()\r\n try:\r\n table.itemChanged.disconnect(self.editTable)\r\n except:\r\n pass\r\n\r\n table.clear()\r\n table.setColumnCount(2)\r\n table.setRowCount(len(data))\r\n\r\n for index, dict_ in enumerate(data):\r\n path = QTableWidgetItem(dict_.get('path'))\r\n path.setFlags(Qt.ItemFlag(1 | 32))\r\n\r\n name = QTableWidgetItem(dict_.get('name'))\r\n if dict_.get('chekbox'):\r\n name.setCheckState(Qt.Checked)\r\n else:\r\n name.setCheckState(Qt.Unchecked)\r\n\r\n table.setItem(index, 0, name)\r\n table.setItem(index, 1, path)\r\n\r\n table.setHorizontalHeaderLabels([\"Имя\", \"Путь\"])\r\n table.resizeColumnsToContents()\r\n\r\n table.itemChanged.connect(self.editTable)\r\n\r\n def loading(self):\r\n self.currentTab = self.ui.tabWidget.currentIndex()\r\n\r\n for i, v in enumerate(['data', 'data_target', 'settings']):\r\n try:\r\n with open(f'settings/{v}.json', 'r', encoding=\"utf-8\") as data:\r\n if i == 0:\r\n self.data = json.load(data)\r\n elif i == 1:\r\n self.dataTarget = json.load(data)\r\n elif i == 2:\r\n self.setting = json.load(data)\r\n except FileNotFoundError:\r\n if i == 0:\r\n self.data = []\r\n elif i == 1:\r\n self.dataTarget = []\r\n elif i == 2:\r\n self.setting = {}\r\n\r\n size = 
self.setting.get('mainwindowsize', [715, 505])\r\n        self.resize(*size)\r\n\r\n        lang = self.setting.get('lang')\r\n        self.lang = Language(lang)\r\n\r\n\r\n        self.focusSet = False\r\n        tableExt = self.ui.tableWidget_3\r\n        tableExt.setColumnCount(1)\r\n        tableExt.horizontalHeader().setVisible(True)\r\n        tableExt.setHorizontalHeaderLabels([\"Расширения\"])\r\n        tableExt.setRowCount(0)\r\n\r\n        self.fillingInTable()\r\n        self._connect()\r\n        self.dataRowExt = -1\r\n\r\n    def closeEvent(self, event):\r\n        if self.MovingThread.isRunning():\r\n            self.ui.statusbar.showMessage('Запущен процесс перемещения', 3000)\r\n            event.ignore()\r\n        else:\r\n            with open('settings/data.json', 'w', encoding=\"utf-8\") as data, open('settings/data_target.json', 'w', encoding=\"utf-8\") as dataTarget:\r\n                json.dump(self.data, data, indent=4, ensure_ascii=False)\r\n                json.dump(self.dataTarget, dataTarget,\r\n                          indent=4, ensure_ascii=False)\r\n\r\n            with open('settings/settings.json', 'w', encoding=\"utf-8\") as f:\r\n                self.settings = {'lang': 'ru',\r\n                                 'mainwindowsize': [self.width(), self.height()]}\r\n                json.dump(self.settings, f, indent=4, ensure_ascii=False)\r\n\r\n\r\ndef main():\r\n    app = QApplication(sys.argv)  # Новый экземпляр QApplication\r\n    window = MainWindow()  # Создаём объект класса myApp\r\n    theme = QDarkPalette()\r\n    theme.set_app(app)\r\n\r\n    window.show()  # Показываем окно\r\n    app.exec_()  # и запускаем приложение\r\n\r\n\r\nif __name__ == '__main__':\r\n    import sys\r\n    from theme.DarkPalette import QDarkPalette\r\n    main()  # то запускаем функцию main()\r\n"
},
{
"alpha_fraction": 0.7466307282447815,
"alphanum_fraction": 0.7547169923782349,
"avg_line_length": 51,
"blob_id": "67077065ccc305b9413a1167d72a56f076b49e00",
"content_id": "b5c3ba660178e1d8a2376c4b5367d6e394ca0e61",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "INI",
"length_bytes": 2474,
"license_type": "no_license",
"max_line_length": 174,
"num_lines": 28,
"path": "/assets/readme_RU.ini",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "<h3 align=\"center\">Для чего программа:</h3>\r\n<p>Программа для перемещения файлов из целевой папки в подпапки по категориям, ориентируясь по расширениям файлов. Например, для оптимизации папки \"Загрузки\" на вашем ПК.</p>\r\n<h3 align=\"center\">Как запустить перемещение:</h3>\r\n<p>\r\n1. Указать путь к целевой папке.<br>\r\n2. Создать папки под каждую необходимую категорию.<br>\r\n3. Указать путь к каждой категории по отдельности.<br>\r\n4. По желанию добавить или удалить категории, а так же расширения.<br>\r\n5. Для запуска перемещения нажать кнопку \"Переместить!\", программа оповестит об успешном выполнении.\r\n</p>\r\n\r\n<h3 align=\"center\">Подробнее о возможностях:</h3>\r\n<p>\r\nВвод путей для целевой папки и категорий возможен кнопкой \"...\" или ручками (Ctrl + V) в соответствующее поле.<br><br>\r\nДля пропуска ненужной категории выберите её и ощистите поле \"Путь выбранной категории:\".<br><br>\r\n- Формат записи категорий:<br>\r\nЯзык любой<br>\r\nИмя категории не может начинаться с точки!<br><br>\r\n- Формат записи расширений:<br>\r\nРасширения обязательно начинаются с точки.<br>\r\nРегистр букв неважен.<br>\r\nДобавление расширений как по одному: .rar<br>так и группой через пробел: .rar .zip .7z .tar<br><br>\r\n- Управление:<br>\r\n\"Enter\" - добавление категории или расширения (автоопределение).<br>\r\n\"Delete\" - удаление категории или расширения.<br>\r\nДвойной клик по категории - переименование.<br>\r\nВозможно остановить перемещение файлов кнопкой \"Остановить\", появляется когда процесс запущен.\r\n</p>\r\n"
},
{
"alpha_fraction": 0.7198024392127991,
"alphanum_fraction": 0.7844634056091309,
"avg_line_length": 49.6136360168457,
"blob_id": "3ace88e1858afcbd4b7f0ffb6c81074145a64d4e",
"content_id": "47366dd8aeeddc976f86532103683145c51e3c0f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3382,
"license_type": "no_license",
"max_line_length": 239,
"num_lines": 44,
"path": "/README.md",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "# MovingToFolders\nПрограмма для перемещения файлов из целевой папки (например папка \"Загрузки\") в подпапки по категориям, ориентируясь по расширениям файлов.\n\n# Поддерживаемые OS:\nOS: Windows/Linux/MacOS\n\n# Инструкция по установке:\n- Windows - скачать exe файл в ветке [релиз](https://github.com/WilJames/MovingToFolders/releases \"Releases\"), запустить установщик и следовать подсказкам.\"\n- Linux - скачать архив с [иходником](https://github.com/WilJames/MovingToFolders/archive/master.zip \"Download\"), распаковать в любую папку, установить [Python](https://www.python.org/downloads/ \"Python\"). Запускать файл MovingToFolders.py\n- MacOS??? Нужны тесты\n\n# Инструкция использованию:\nКак запустить перемещение:\n1. Указать путь к целевой папке.\n2. Создать папки под каждую необходимую категорию.\n3. Указать путь к каждой категории по отдельности.\n4. По желанию добавить или удалить категории, а так же расширения.\n5. Для запуска перемещения нажать кнопку \"Переместить!\", программа оповестит об успешном выполнении.\n\n# Подробнее о возможностях:\nВвод путей для целевой папки и категорий возможен кнопкой \"...\" или ручками (Ctrl + V) в соответствующее поле.\n\nДля пропуска ненужной категории выберите её и ощистите поле \"Путь выбранной категории:\".\n\n- Формат записи категорий:\n1. Язык любой\n2. Имя категории не может начинаться с точки!\n\n- Формат записи расширений:\n1. Расширения обязательно начинаются с точки.\n2. Регистр букв неважен.\n3. Добавление расширений как по одному: .rar\n4. Или группой через пробел: .rar .zip .7z .tar\n\n- Управление:\n1. \"Enter\" - добавление категории или расширения (автоопределение).\n2. \"Delete\" - удаление категории или расширения.\n3. Двойной клик по категории - переименование.\n4. Возможно остановить перемещение файлов кнопкой \"Остановить\", появляется когда процесс запущен. 
\n\n# Скриншоты\n[](https://pp.userapi.com/c849124/v849124576/1b278a/QsI9HOG-HVM.jpg)\n[](https://pp.userapi.com/c849124/v849124576/1b2791/9n0kKVECEC4.jpg)\n[](https://pp.userapi.com/c849124/v849124576/1b2798/PUxvKhAKJ9A.jpg)\n"
},
{
"alpha_fraction": 0.5968801975250244,
"alphanum_fraction": 0.6436770558357239,
"avg_line_length": 39.946502685546875,
"blob_id": "c7d636be13a59ebe04c3e7ad63ab336b83510eea",
"content_id": "1ee6e2bbf2403a0e5181ab89bc7c0dffc6932589",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10193,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 243,
"path": "/ui/copy_win.py",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\n################################################################################\r\n# Form generated from reading UI file 'copy_win.ui'\r\n##\r\n# Created by: Qt User Interface Compiler version 5.14.1\r\n##\r\n# WARNING! All changes made in this file will be lost when recompiling UI file!\r\n################################################################################\r\n\r\nfrom PySide2.QtCore import (QCoreApplication, QMetaObject, QObject, QPoint,\r\n QRect, QSize, QUrl, Qt)\r\nfrom PySide2.QtGui import (QBrush, QColor, QConicalGradient, QCursor, QFont,\r\n QFontDatabase, QIcon, QLinearGradient, QPalette, QPainter, QPixmap,\r\n QRadialGradient)\r\nfrom PySide2.QtWidgets import *\r\n\r\n\r\nclass Ui_copy_win(object):\r\n def setupUi(self, Ui_copy_win):\r\n if Ui_copy_win.objectName():\r\n Ui_copy_win.setObjectName(u\"Ui_copy_win\")\r\n Ui_copy_win.resize(368, 218)\r\n self.verticalLayout = QVBoxLayout(Ui_copy_win)\r\n self.verticalLayout.setObjectName(u\"verticalLayout\")\r\n self.label = QLabel(Ui_copy_win)\r\n self.label.setObjectName(u\"label\")\r\n\r\n self.verticalLayout.addWidget(self.label)\r\n\r\n self.label_5 = QLabel(Ui_copy_win)\r\n self.label_5.setObjectName(u\"label_5\")\r\n self.label_5.setAcceptDrops(False)\r\n\r\n self.verticalLayout.addWidget(self.label_5)\r\n\r\n self.progressBar = QProgressBar(Ui_copy_win)\r\n self.progressBar.setObjectName(u\"progressBar\")\r\n self.progressBar.setLayoutDirection(Qt.LeftToRight)\r\n self.progressBar.setValue(0)\r\n self.progressBar.setOrientation(Qt.Horizontal)\r\n\r\n self.verticalLayout.addWidget(self.progressBar)\r\n\r\n self.widget = QWidget(Ui_copy_win)\r\n self.widget.setObjectName(u\"widget\")\r\n self.verticalLayout_5 = QVBoxLayout(self.widget)\r\n self.verticalLayout_5.setObjectName(u\"verticalLayout_5\")\r\n self.verticalLayout_5.setContentsMargins(0, 0, 0, 0)\r\n self.label_4 = QLabel(self.widget)\r\n self.label_4.setObjectName(u\"label_4\")\r\n 
self.label_4.setAlignment(Qt.AlignCenter)\r\n\r\n self.verticalLayout_5.addWidget(self.label_4)\r\n\r\n self.horizontalLayout_3 = QHBoxLayout()\r\n self.horizontalLayout_3.setObjectName(u\"horizontalLayout_3\")\r\n self.groupBox = QGroupBox(self.widget)\r\n self.groupBox.setObjectName(u\"groupBox\")\r\n self.groupBox.setAlignment(Qt.AlignCenter)\r\n self.verticalLayout_6 = QVBoxLayout(self.groupBox)\r\n self.verticalLayout_6.setObjectName(u\"verticalLayout_6\")\r\n self.verticalLayout_6.setContentsMargins(9, 9, -1, 9)\r\n self.horizontalLayout_2 = QHBoxLayout()\r\n self.horizontalLayout_2.setObjectName(u\"horizontalLayout_2\")\r\n self.label_2 = QLabel(self.groupBox)\r\n self.label_2.setObjectName(u\"label_2\")\r\n\r\n self.horizontalLayout_2.addWidget(self.label_2)\r\n\r\n self.label_6 = QLabel(self.groupBox)\r\n self.label_6.setObjectName(u\"label_6\")\r\n self.label_6.setAlignment(\r\n Qt.AlignRight | Qt.AlignTrailing | Qt.AlignVCenter)\r\n\r\n self.horizontalLayout_2.addWidget(self.label_6)\r\n\r\n self.verticalLayout_6.addLayout(self.horizontalLayout_2)\r\n\r\n self.horizontalLayout_4 = QHBoxLayout()\r\n self.horizontalLayout_4.setObjectName(u\"horizontalLayout_4\")\r\n self.label_3 = QLabel(self.groupBox)\r\n self.label_3.setObjectName(u\"label_3\")\r\n\r\n self.horizontalLayout_4.addWidget(self.label_3)\r\n\r\n self.label_7 = QLabel(self.groupBox)\r\n self.label_7.setObjectName(u\"label_7\")\r\n self.label_7.setAlignment(\r\n Qt.AlignRight | Qt.AlignTrailing | Qt.AlignVCenter)\r\n\r\n self.horizontalLayout_4.addWidget(self.label_7)\r\n\r\n self.verticalLayout_6.addLayout(self.horizontalLayout_4)\r\n\r\n self.horizontalLayout_5 = QHBoxLayout()\r\n self.horizontalLayout_5.setObjectName(u\"horizontalLayout_5\")\r\n self.label_8 = QLabel(self.groupBox)\r\n self.label_8.setObjectName(u\"label_8\")\r\n\r\n self.horizontalLayout_5.addWidget(self.label_8)\r\n\r\n self.label_12 = QLabel(self.groupBox)\r\n self.label_12.setObjectName(u\"label_12\")\r\n 
self.label_12.setAlignment(\r\n Qt.AlignRight | Qt.AlignTrailing | Qt.AlignVCenter)\r\n\r\n self.horizontalLayout_5.addWidget(self.label_12)\r\n\r\n self.verticalLayout_6.addLayout(self.horizontalLayout_5)\r\n\r\n self.horizontalLayout_3.addWidget(self.groupBox)\r\n\r\n self.groupBox_2 = QGroupBox(self.widget)\r\n self.groupBox_2.setObjectName(u\"groupBox_2\")\r\n self.groupBox_2.setAlignment(Qt.AlignCenter)\r\n self.groupBox_2.setFlat(False)\r\n self.groupBox_2.setCheckable(False)\r\n self.verticalLayout_7 = QVBoxLayout(self.groupBox_2)\r\n self.verticalLayout_7.setObjectName(u\"verticalLayout_7\")\r\n self.horizontalLayout_6 = QHBoxLayout()\r\n self.horizontalLayout_6.setObjectName(u\"horizontalLayout_6\")\r\n self.label_9 = QLabel(self.groupBox_2)\r\n self.label_9.setObjectName(u\"label_9\")\r\n\r\n self.horizontalLayout_6.addWidget(self.label_9)\r\n\r\n self.label_13 = QLabel(self.groupBox_2)\r\n self.label_13.setObjectName(u\"label_13\")\r\n self.label_13.setAlignment(\r\n Qt.AlignRight | Qt.AlignTrailing | Qt.AlignVCenter)\r\n\r\n self.horizontalLayout_6.addWidget(self.label_13)\r\n\r\n self.verticalLayout_7.addLayout(self.horizontalLayout_6)\r\n\r\n self.horizontalLayout_7 = QHBoxLayout()\r\n self.horizontalLayout_7.setObjectName(u\"horizontalLayout_7\")\r\n self.label_10 = QLabel(self.groupBox_2)\r\n self.label_10.setObjectName(u\"label_10\")\r\n\r\n self.horizontalLayout_7.addWidget(self.label_10)\r\n\r\n self.label_14 = QLabel(self.groupBox_2)\r\n self.label_14.setObjectName(u\"label_14\")\r\n self.label_14.setAlignment(\r\n Qt.AlignRight | Qt.AlignTrailing | Qt.AlignVCenter)\r\n\r\n self.horizontalLayout_7.addWidget(self.label_14)\r\n\r\n self.verticalLayout_7.addLayout(self.horizontalLayout_7)\r\n\r\n self.horizontalLayout_8 = QHBoxLayout()\r\n self.horizontalLayout_8.setObjectName(u\"horizontalLayout_8\")\r\n self.label_11 = QLabel(self.groupBox_2)\r\n self.label_11.setObjectName(u\"label_11\")\r\n\r\n 
self.horizontalLayout_8.addWidget(self.label_11)\r\n\r\n self.label_15 = QLabel(self.groupBox_2)\r\n self.label_15.setObjectName(u\"label_15\")\r\n self.label_15.setAlignment(\r\n Qt.AlignRight | Qt.AlignTrailing | Qt.AlignVCenter)\r\n\r\n self.horizontalLayout_8.addWidget(self.label_15)\r\n\r\n self.verticalLayout_7.addLayout(self.horizontalLayout_8)\r\n\r\n self.horizontalLayout_3.addWidget(self.groupBox_2)\r\n\r\n self.verticalLayout_5.addLayout(self.horizontalLayout_3)\r\n\r\n self.verticalLayout.addWidget(self.widget)\r\n\r\n self.horizontalLayout = QHBoxLayout()\r\n self.horizontalLayout.setObjectName(u\"horizontalLayout\")\r\n self.pushButton = QPushButton(Ui_copy_win)\r\n self.pushButton.setObjectName(u\"pushButton\")\r\n self.pushButton.setMinimumSize(QSize(22, 22))\r\n self.pushButton.setMaximumSize(QSize(22, 22))\r\n\r\n self.horizontalLayout.addWidget(self.pushButton)\r\n\r\n self.label_dopinfo = QLabel(Ui_copy_win)\r\n self.label_dopinfo.setObjectName(u\"label_dopinfo\")\r\n\r\n self.horizontalLayout.addWidget(self.label_dopinfo)\r\n\r\n self.horizontalSpacer = QSpacerItem(\r\n 40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)\r\n\r\n self.horizontalLayout.addItem(self.horizontalSpacer)\r\n\r\n self.pushButton_2 = QPushButton(Ui_copy_win)\r\n self.pushButton_2.setObjectName(u\"pushButton_2\")\r\n self.pushButton_2.setMinimumSize(QSize(22, 22))\r\n self.pushButton_2.setMaximumSize(QSize(22, 22))\r\n\r\n self.horizontalLayout.addWidget(self.pushButton_2)\r\n\r\n self.verticalLayout.addLayout(self.horizontalLayout)\r\n\r\n self.retranslateUi(Ui_copy_win)\r\n\r\n QMetaObject.connectSlotsByName(Ui_copy_win)\r\n # setupUi\r\n\r\n def retranslateUi(self, Ui_copy_win):\r\n Ui_copy_win.setWindowTitle(\r\n QCoreApplication.translate(\"Ui_copy_win\", u\"Form\", None))\r\n self.label.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0418\\u043c\\u044f:\", None))\r\n self.label_5.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", 
u\"\\u0418\\u0437 ... \\u0432 ...\", None))\r\n self.label_4.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0421\\u043a\\u043e\\u0440\\u043e\\u0441\\u0442\\u044c:\", None))\r\n self.groupBox.setTitle(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u041e\\u0441\\u0442\\u0430\\u043b\\u043e\\u0441\\u044c\", None))\r\n self.label_2.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0412\\u0440\\u0435\\u043c\\u0435\\u043d\\u0438:\", None))\r\n self.label_6.setText(\"\")\r\n self.label_3.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0424\\u0430\\u0439\\u043b\\u043e\\u0432:\", None))\r\n self.label_7.setText(\"\")\r\n self.label_8.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0420\\u0430\\u0437\\u043c\\u0435\\u0440:\", None))\r\n self.label_12.setText(\"\")\r\n self.groupBox_2.setTitle(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0412\\u0441\\u0435\\u0433\\u043e\", None))\r\n self.label_9.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0412\\u0440\\u0435\\u043c\\u0435\\u043d\\u0438:\", None))\r\n self.label_13.setText(\"\")\r\n self.label_10.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0424\\u0430\\u0439\\u043b\\u043e\\u0432:\", None))\r\n self.label_14.setText(\"\")\r\n self.label_11.setText(QCoreApplication.translate(\r\n \"Ui_copy_win\", u\"\\u0420\\u0430\\u0437\\u043c\\u0435\\u0440:\", None))\r\n self.label_15.setText(\"\")\r\n self.pushButton.setText(\"\")\r\n self.label_dopinfo.setText(\"\")\r\n self.pushButton_2.setText(\r\n QCoreApplication.translate(\"Ui_copy_win\", u\"X\", None))\r\n # retranslateUi\r\n"
},
{
"alpha_fraction": 0.6651930212974548,
"alphanum_fraction": 0.6946654915809631,
"avg_line_length": 46.812950134277344,
"blob_id": "1f6b0d23fd940af898c56d4fba3553927bc9801f",
"content_id": "d86fbc1b340ddc03dd04bf422ea0581b71dd62a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13572,
"license_type": "no_license",
"max_line_length": 176,
"num_lines": 278,
"path": "/ui/stf_ui.py",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\n################################################################################\r\n## Form generated from reading UI file 'untitled.ui'\r\n##\r\n## Created by: Qt User Interface Compiler version 5.14.1\r\n##\r\n## WARNING! All changes made in this file will be lost when recompiling UI file!\r\n################################################################################\r\n\r\nfrom PySide2.QtCore import (QCoreApplication, QMetaObject, QObject, QPoint,\r\n QRect, QSize, QUrl, Qt)\r\nfrom PySide2.QtGui import (QBrush, QColor, QConicalGradient, QCursor, QFont,\r\n QFontDatabase, QIcon, QLinearGradient, QPalette, QPainter, QPixmap,\r\n QRadialGradient)\r\nfrom PySide2.QtWidgets import *\r\n\r\n\r\nclass Ui_MainWindow(object):\r\n def setupUi(self, MainWindow):\r\n if MainWindow.objectName():\r\n MainWindow.setObjectName(u\"MainWindow\")\r\n MainWindow.resize(715, 505)\r\n MainWindow.setDocumentMode(False)\r\n MainWindow.setTabShape(QTabWidget.Rounded)\r\n MainWindow.setDockNestingEnabled(False)\r\n MainWindow.setDockOptions(QMainWindow.AllowTabbedDocks|QMainWindow.AnimatedDocks)\r\n MainWindow.setUnifiedTitleAndToolBarOnMac(False)\r\n self.centralwidget = QWidget(MainWindow)\r\n self.centralwidget.setObjectName(u\"centralwidget\")\r\n self.verticalLayout = QVBoxLayout(self.centralwidget)\r\n self.verticalLayout.setObjectName(u\"verticalLayout\")\r\n self.tabWidget = QTabWidget(self.centralwidget)\r\n self.tabWidget.setObjectName(u\"tabWidget\")\r\n sizePolicy = QSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)\r\n sizePolicy.setHorizontalStretch(0)\r\n sizePolicy.setVerticalStretch(0)\r\n sizePolicy.setHeightForWidth(self.tabWidget.sizePolicy().hasHeightForWidth())\r\n self.tabWidget.setSizePolicy(sizePolicy)\r\n self.tabWidget.setTabPosition(QTabWidget.North)\r\n self.tabWidget.setTabShape(QTabWidget.Rounded)\r\n self.tabWidget.setElideMode(Qt.ElideNone)\r\n self.tabWidget.setUsesScrollButtons(True)\r\n 
self.tabWidget.setDocumentMode(True)\r\n self.tabWidget.setTabsClosable(False)\r\n self.tabWidget.setMovable(False)\r\n self.tabWidget.setTabBarAutoHide(False)\r\n self.tab = QWidget()\r\n self.tab.setObjectName(u\"tab\")\r\n self.tab.setEnabled(True)\r\n sizePolicy1 = QSizePolicy(QSizePolicy.Preferred, QSizePolicy.Preferred)\r\n sizePolicy1.setHorizontalStretch(0)\r\n sizePolicy1.setVerticalStretch(0)\r\n sizePolicy1.setHeightForWidth(self.tab.sizePolicy().hasHeightForWidth())\r\n self.tab.setSizePolicy(sizePolicy1)\r\n self.tab.setAutoFillBackground(False)\r\n self.verticalLayout_2 = QVBoxLayout(self.tab)\r\n self.verticalLayout_2.setSpacing(6)\r\n self.verticalLayout_2.setObjectName(u\"verticalLayout_2\")\r\n self.verticalLayout_2.setSizeConstraint(QLayout.SetDefaultConstraint)\r\n self.verticalLayout_2.setContentsMargins(0, 0, 0, 0)\r\n self.tableWidget = QTableWidget(self.tab)\r\n if (self.tableWidget.rowCount() < 6):\r\n self.tableWidget.setRowCount(6)\r\n __qtablewidgetitem = QTableWidgetItem()\r\n self.tableWidget.setVerticalHeaderItem(0, __qtablewidgetitem)\r\n __qtablewidgetitem1 = QTableWidgetItem()\r\n self.tableWidget.setVerticalHeaderItem(1, __qtablewidgetitem1)\r\n __qtablewidgetitem2 = QTableWidgetItem()\r\n self.tableWidget.setVerticalHeaderItem(2, __qtablewidgetitem2)\r\n __qtablewidgetitem3 = QTableWidgetItem()\r\n self.tableWidget.setVerticalHeaderItem(3, __qtablewidgetitem3)\r\n __qtablewidgetitem4 = QTableWidgetItem()\r\n self.tableWidget.setVerticalHeaderItem(4, __qtablewidgetitem4)\r\n __qtablewidgetitem5 = QTableWidgetItem()\r\n self.tableWidget.setVerticalHeaderItem(5, __qtablewidgetitem5)\r\n self.tableWidget.setObjectName(u\"tableWidget\")\r\n self.tableWidget.setAutoScrollMargin(16)\r\n self.tableWidget.setWordWrap(False)\r\n self.tableWidget.horizontalHeader().setStretchLastSection(True)\r\n self.tableWidget.verticalHeader().setVisible(False)\r\n self.tableWidget.verticalHeader().setMinimumSectionSize(22)\r\n 
self.tableWidget.verticalHeader().setDefaultSectionSize(22)\r\n\r\n self.verticalLayout_2.addWidget(self.tableWidget)\r\n\r\n self.tabWidget.addTab(self.tab, \"\")\r\n self.tab_2 = QWidget()\r\n self.tab_2.setObjectName(u\"tab_2\")\r\n self.verticalLayout_3 = QVBoxLayout(self.tab_2)\r\n self.verticalLayout_3.setSpacing(6)\r\n self.verticalLayout_3.setObjectName(u\"verticalLayout_3\")\r\n self.verticalLayout_3.setContentsMargins(0, 0, 0, 0)\r\n self.horizontalLayout_3 = QHBoxLayout()\r\n self.horizontalLayout_3.setObjectName(u\"horizontalLayout_3\")\r\n self.tableWidget_2 = QTableWidget(self.tab_2)\r\n if (self.tableWidget_2.rowCount() < 3):\r\n self.tableWidget_2.setRowCount(3)\r\n __qtablewidgetitem6 = QTableWidgetItem()\r\n self.tableWidget_2.setVerticalHeaderItem(0, __qtablewidgetitem6)\r\n __qtablewidgetitem7 = QTableWidgetItem()\r\n self.tableWidget_2.setVerticalHeaderItem(1, __qtablewidgetitem7)\r\n __qtablewidgetitem8 = QTableWidgetItem()\r\n self.tableWidget_2.setVerticalHeaderItem(2, __qtablewidgetitem8)\r\n self.tableWidget_2.setObjectName(u\"tableWidget_2\")\r\n self.tableWidget_2.setWordWrap(False)\r\n self.tableWidget_2.horizontalHeader().setStretchLastSection(True)\r\n self.tableWidget_2.verticalHeader().setVisible(False)\r\n self.tableWidget_2.verticalHeader().setMinimumSectionSize(22)\r\n self.tableWidget_2.verticalHeader().setDefaultSectionSize(22)\r\n self.tableWidget_2.verticalHeader().setHighlightSections(True)\r\n self.tableWidget_2.verticalHeader().setProperty(\"showSortIndicator\", False)\r\n self.tableWidget_2.verticalHeader().setStretchLastSection(False)\r\n\r\n self.horizontalLayout_3.addWidget(self.tableWidget_2)\r\n\r\n self.verticalLayout_6 = QVBoxLayout()\r\n self.verticalLayout_6.setObjectName(u\"verticalLayout_6\")\r\n self.verticalLayout_6.setSizeConstraint(QLayout.SetDefaultConstraint)\r\n self.tableWidget_3 = QTableWidget(self.tab_2)\r\n if (self.tableWidget_3.rowCount() < 5):\r\n self.tableWidget_3.setRowCount(5)\r\n 
__qtablewidgetitem9 = QTableWidgetItem()\r\n self.tableWidget_3.setVerticalHeaderItem(0, __qtablewidgetitem9)\r\n self.tableWidget_3.setObjectName(u\"tableWidget_3\")\r\n self.tableWidget_3.setMinimumSize(QSize(120, 0))\r\n self.tableWidget_3.setMaximumSize(QSize(120, 16777215))\r\n font = QFont()\r\n font.setStrikeOut(False)\r\n font.setKerning(True)\r\n self.tableWidget_3.setFont(font)\r\n self.tableWidget_3.viewport().setProperty(\"cursor\", QCursor(Qt.ArrowCursor))\r\n self.tableWidget_3.setLayoutDirection(Qt.LeftToRight)\r\n self.tableWidget_3.setFrameShape(QFrame.StyledPanel)\r\n self.tableWidget_3.setFrameShadow(QFrame.Sunken)\r\n self.tableWidget_3.setTextElideMode(Qt.ElideRight)\r\n self.tableWidget_3.setGridStyle(Qt.SolidLine)\r\n self.tableWidget_3.setWordWrap(False)\r\n self.tableWidget_3.setRowCount(5)\r\n self.tableWidget_3.horizontalHeader().setVisible(False)\r\n self.tableWidget_3.horizontalHeader().setCascadingSectionResizes(False)\r\n self.tableWidget_3.horizontalHeader().setMinimumSectionSize(26)\r\n self.tableWidget_3.horizontalHeader().setProperty(\"showSortIndicator\", False)\r\n self.tableWidget_3.horizontalHeader().setStretchLastSection(True)\r\n self.tableWidget_3.verticalHeader().setVisible(False)\r\n self.tableWidget_3.verticalHeader().setCascadingSectionResizes(False)\r\n self.tableWidget_3.verticalHeader().setMinimumSectionSize(22)\r\n self.tableWidget_3.verticalHeader().setDefaultSectionSize(22)\r\n self.tableWidget_3.verticalHeader().setProperty(\"showSortIndicator\", False)\r\n self.tableWidget_3.verticalHeader().setStretchLastSection(False)\r\n\r\n self.verticalLayout_6.addWidget(self.tableWidget_3)\r\n\r\n self.lineEdit = QLineEdit(self.tab_2)\r\n self.lineEdit.setObjectName(u\"lineEdit\")\r\n self.lineEdit.setMinimumSize(QSize(120, 0))\r\n self.lineEdit.setMaximumSize(QSize(120, 16777215))\r\n self.lineEdit.setFrame(True)\r\n\r\n self.verticalLayout_6.addWidget(self.lineEdit)\r\n\r\n\r\n 
self.horizontalLayout_3.addLayout(self.verticalLayout_6)\r\n\r\n\r\n self.verticalLayout_3.addLayout(self.horizontalLayout_3)\r\n\r\n self.tabWidget.addTab(self.tab_2, \"\")\r\n self.tab_3 = QWidget()\r\n self.tab_3.setObjectName(u\"tab_3\")\r\n self.verticalLayout_5 = QVBoxLayout(self.tab_3)\r\n self.verticalLayout_5.setObjectName(u\"verticalLayout_5\")\r\n self.horizontalLayout_5 = QHBoxLayout()\r\n self.horizontalLayout_5.setObjectName(u\"horizontalLayout_5\")\r\n self.groupBox = QGroupBox(self.tab_3)\r\n self.groupBox.setObjectName(u\"groupBox\")\r\n self.verticalLayout_4 = QVBoxLayout(self.groupBox)\r\n self.verticalLayout_4.setObjectName(u\"verticalLayout_4\")\r\n self.comboBox = QComboBox(self.groupBox)\r\n self.comboBox.setObjectName(u\"comboBox\")\r\n\r\n self.verticalLayout_4.addWidget(self.comboBox)\r\n\r\n\r\n self.horizontalLayout_5.addWidget(self.groupBox)\r\n\r\n self.horizontalSpacer_5 = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)\r\n\r\n self.horizontalLayout_5.addItem(self.horizontalSpacer_5)\r\n\r\n\r\n self.verticalLayout_5.addLayout(self.horizontalLayout_5)\r\n\r\n self.verticalSpacer = QSpacerItem(20, 40, QSizePolicy.Minimum, QSizePolicy.Expanding)\r\n\r\n self.verticalLayout_5.addItem(self.verticalSpacer)\r\n\r\n self.tabWidget.addTab(self.tab_3, \"\")\r\n\r\n self.verticalLayout.addWidget(self.tabWidget)\r\n\r\n self.widget = QWidget(self.centralwidget)\r\n self.widget.setObjectName(u\"widget\")\r\n self.horizontalLayout = QHBoxLayout(self.widget)\r\n self.horizontalLayout.setSpacing(6)\r\n self.horizontalLayout.setObjectName(u\"horizontalLayout\")\r\n self.horizontalLayout.setContentsMargins(0, 0, 0, 0)\r\n self.pushButton_3 = QPushButton(self.widget)\r\n self.pushButton_3.setObjectName(u\"pushButton_3\")\r\n\r\n self.horizontalLayout.addWidget(self.pushButton_3)\r\n\r\n self.pushButton_4 = QPushButton(self.widget)\r\n self.pushButton_4.setObjectName(u\"pushButton_4\")\r\n\r\n 
self.horizontalLayout.addWidget(self.pushButton_4)\r\n\r\n self.horizontalSpacer_4 = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)\r\n\r\n self.horizontalLayout.addItem(self.horizontalSpacer_4)\r\n\r\n\r\n self.verticalLayout.addWidget(self.widget)\r\n\r\n self.horizontalLayout_2 = QHBoxLayout()\r\n self.horizontalLayout_2.setObjectName(u\"horizontalLayout_2\")\r\n self.horizontalSpacer_2 = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)\r\n\r\n self.horizontalLayout_2.addItem(self.horizontalSpacer_2)\r\n\r\n self.pushButton_6 = QPushButton(self.centralwidget)\r\n self.pushButton_6.setObjectName(u\"pushButton_6\")\r\n\r\n self.horizontalLayout_2.addWidget(self.pushButton_6)\r\n\r\n self.pushbtest = QPushButton(self.centralwidget)\r\n self.pushbtest.setObjectName(u\"pushbtest\")\r\n\r\n self.horizontalLayout_2.addWidget(self.pushbtest)\r\n\r\n self.horizontalSpacer_3 = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)\r\n\r\n self.horizontalLayout_2.addItem(self.horizontalSpacer_3)\r\n\r\n\r\n self.verticalLayout.addLayout(self.horizontalLayout_2)\r\n\r\n MainWindow.setCentralWidget(self.centralwidget)\r\n self.menubar = QMenuBar(MainWindow)\r\n self.menubar.setObjectName(u\"menubar\")\r\n self.menubar.setGeometry(QRect(0, 0, 715, 21))\r\n self.menubar.setDefaultUp(False)\r\n MainWindow.setMenuBar(self.menubar)\r\n self.statusbar = QStatusBar(MainWindow)\r\n self.statusbar.setObjectName(u\"statusbar\")\r\n MainWindow.setStatusBar(self.statusbar)\r\n\r\n self.retranslateUi(MainWindow)\r\n\r\n self.tabWidget.setCurrentIndex(1)\r\n\r\n\r\n QMetaObject.connectSlotsByName(MainWindow)\r\n # setupUi\r\n\r\n def retranslateUi(self, MainWindow):\r\n MainWindow.setWindowTitle(QCoreApplication.translate(\"MainWindow\", u\"MainWindow\", None))\r\n self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab), QCoreApplication.translate(\"MainWindow\", u\"\\u041e\\u0442\\u043a\\u0443\\u0434\\u0430\", None))\r\n 
self.lineEdit.setText(\"\")\r\n self.lineEdit.setPlaceholderText(\"\")\r\n self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_2), QCoreApplication.translate(\"MainWindow\", u\"\\u041a\\u0443\\u0434\\u0430\", None))\r\n self.groupBox.setTitle(QCoreApplication.translate(\"MainWindow\", u\"\\u042f\\u0437\\u044b\\u043a\", None))\r\n self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_3), QCoreApplication.translate(\"MainWindow\", u\"\\u041d\\u0430\\u0441\\u0442\\u0440\\u043e\\u0439\\u043a\\u0438\", None))\r\n self.pushButton_3.setText(QCoreApplication.translate(\"MainWindow\", u\"+\", None))\r\n self.pushButton_4.setText(QCoreApplication.translate(\"MainWindow\", u\"-\", None))\r\n self.pushButton_6.setText(QCoreApplication.translate(\"MainWindow\", u\"\\u0421\\u0442\\u0430\\u0440\\u0442\", None))\r\n self.pushbtest.setText(QCoreApplication.translate(\"MainWindow\", u\"\\u0422\\u0435\\u0441\\u0442\", None))\r\n # retranslateUi\r\n\r\n"
},
{
"alpha_fraction": 0.7177097201347351,
"alphanum_fraction": 0.7256990671157837,
"avg_line_length": 51.64285659790039,
"blob_id": "64ca2b904e7e721230b242c93f654b4082663326",
"content_id": "726b5e7693e50484c8e338619f8251b9467990a6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "INI",
"length_bytes": 1502,
"license_type": "no_license",
"max_line_length": 166,
"num_lines": 28,
"path": "/assets/readme_EN.ini",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "<h3 align = \"center\">What the program is for:</h3>\r\n<p>A program for moving files from a target folder to subfolders by category, guided by file extensions. For example, to optimize the Downloads folder on your PC.</p>\r\n<h3 align = \"center\">How to start:</h3>\r\n<p>\r\n1. Specify the path to the target folder.<br>\r\n2. Create folders for each category you need.<br>\r\n3. Indicate the path to each category separately.<br>\r\n4. Optionally add or delete categories, as well as extensions.<br>\r\n5. To start the moving files, click the \"Start!\" button, the program will notify you of the successful completion.\r\n</p>\r\n\r\n<h3 align = \"center\">Learn more about features:</h3>\r\n<p>\r\nYou can specify paths for the target folder and categories using the \"...\" button or manually (Ctrl + V) in the corresponding field.<br><br>\r\nTo skip an unnecessary category, select it and clear the \"Path of the selected category:\" field. <br><br>\r\n- Format of categories:<br>\r\nLanguage any<br>\r\nA category name cannot begin with a dot!<br><br>\r\n- Format recording extensions:<br>\r\nExtensions always begin with a dot.<br>\r\nCase of letters is not important.<br>\r\nAdding extensions one: .rar<br>or a group separated by a space: .rar .zip .7z .tar <br><br>\r\n- Contol:<br>\r\n\"Enter\" - add a category or extension (autodetect).<br>\r\n\"Delete\" - deletes a category or extension.<br>\r\nDouble-click on the category, allows renaming.<br>\r\nIt is possible to stop moving files with the \"Stop\" button, appears when the process is started.\r\n</p>\r\n"
},
{
"alpha_fraction": 0.5675406455993652,
"alphanum_fraction": 0.5897776484489441,
"avg_line_length": 43.65151596069336,
"blob_id": "8b43e0514638afbba633cebaabbdbdb85b3e25c9",
"content_id": "f79e27ea68ebbb677bbc2ff7078a697befe75c5b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3013,
"license_type": "no_license",
"max_line_length": 141,
"num_lines": 66,
"path": "/theme/DarkPalette.py",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\nfrom PySide2.QtGui import QPalette, QColor\r\n\r\nWHITE = QColor(255, 255, 255)\r\nBLACK = QColor(0, 0, 0)\r\nRED = QColor(255, 0, 0)\r\nPRIMARY = QColor(53, 53, 53)\r\nSECONDARY = QColor(25, 25, 25)\r\nTERTIARY = QColor(42, 130, 218)\r\nDISABLEDSHADOW = QColor(12, 15, 16)\r\nDARKGRAY = QColor(80, 80, 80)\r\n'''\r\n-HighlightedText 42, 130, 218\r\nhighlight_text_color = Qt.white HighlightedText\r\n'''\r\n\r\ndef css_rgb(color, a=False):\r\n \"\"\"Get a CSS `rgb` or `rgba` string from a `QtGui.QColor`.\"\"\"\r\n return (\"rgba({}, {}, {}, {})\" if a else \"rgb({}, {}, {})\").format(*color.getRgb())\r\n\r\nclass QDarkPalette(QPalette):\r\n \"\"\"Dark palette for a Qt application meant to be used with the Fusion theme.\"\"\"\r\n def __init__(self, *__args):\r\n super().__init__(*__args)\r\n\r\n # Set all the colors based on the constants in globals\r\n self.setColor(QPalette.Window, PRIMARY)\r\n self.setColor(QPalette.WindowText, WHITE)\r\n self.setColor(QPalette.Base, SECONDARY)\r\n self.setColor(QPalette.AlternateBase, PRIMARY)\r\n self.setColor(QPalette.ToolTipBase, WHITE)\r\n self.setColor(QPalette.ToolTipText, WHITE)\r\n self.setColor(QPalette.Text, WHITE)\r\n self.setColor(QPalette.Button, PRIMARY)\r\n self.setColor(QPalette.ButtonText, WHITE)\r\n self.setColor(QPalette.BrightText, RED)\r\n self.setColor(QPalette.Link, TERTIARY)\r\n self.setColor(QPalette.Highlight, TERTIARY)\r\n self.setColor(QPalette.HighlightedText, BLACK)\r\n self.setColor(QPalette.PlaceholderText, DARKGRAY)\r\n self.setColor(QPalette.Disabled, QPalette.Light, BLACK) #+\r\n self.setColor(QPalette.Disabled, QPalette.Shadow, DISABLEDSHADOW) #+\r\n self.setColor(QPalette.Disabled, QPalette.WindowText, DARKGRAY)\r\n self.setColor(QPalette.Disabled, QPalette.Text, DARKGRAY)\r\n self.setColor(QPalette.Disabled, QPalette.ButtonText, DARKGRAY)\r\n\r\n @staticmethod\r\n def set_stylesheet(app):\r\n \"\"\"Static method to set the tooltip stylesheet to a 
`QtWidgets.QApplication`.\"\"\"\r\n app.setStyleSheet(\"\"\"\r\n QToolTip {{ color: {white};\r\n background-color: {tertiary};\r\n border: 1px solid {white}; }}\r\n QTextBrowser {{ background-color: #353535;\r\n border: none; }}\r\n QLineEdit {{ border: none;\r\n border-radius: 1px;\r\n background: {secondary};\r\n selection-background-color: {tertiary};}}\r\n \"\"\".format(white=css_rgb(WHITE), tertiary=css_rgb(TERTIARY), primary=css_rgb(PRIMARY), secondary=css_rgb(SECONDARY)))\r\n\r\n def set_app(self, app):\r\n \"\"\"Set the Fusion theme and this palette to a `QtWidgets.QApplication`.\"\"\"\r\n app.setStyle(\"Fusion\")\r\n app.setPalette(self)\r\n self.set_stylesheet(app)\r\n"
},
{
"alpha_fraction": 0.5026780962944031,
"alphanum_fraction": 0.509153425693512,
"avg_line_length": 33.336158752441406,
"blob_id": "1d793c44485a86c772e644f7be6f956cc2b9f320",
"content_id": "a6f057bd6717e131435bfde376fb0e57c6af171f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12600,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 354,
"path": "/threads/MovingThread.py",
"repo_name": "WilJames/SortToFolders",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\r\nfrom PySide2.QtCore import (Signal, QThread, QTimer)\r\nfrom sys import argv, path, platform\r\nimport re\r\nimport time\r\nimport json\r\nimport os\r\n\r\nfrom pathlib import Path, PurePath\r\nimport shutil\r\nimport stat\r\n\r\n_WINDOWS = os.name == 'nt'\r\nposix = nt = None\r\nif os.name == 'posix':\r\n import posix\r\nelif _WINDOWS:\r\n import nt\r\n\r\nCOPY_BUFSIZE = 1024 * 1024 if _WINDOWS else 64 * 1024\r\n_USE_CP_SENDFILE = hasattr(os, \"sendfile\") and platform.startswith(\"linux\")\r\n_HAS_FCOPYFILE = posix and hasattr(posix, \"_fcopyfile\") # macOS\r\n\r\n\r\nclass Error(OSError):\r\n pass\r\n\r\n\r\nclass SameFileError(Error):\r\n \"\"\"Raised when source and destination are the same file.\"\"\"\r\n\r\n\r\nclass SpecialFileError(OSError):\r\n \"\"\"Raised when trying to do a kind of operation (e.g. copying) which is\r\n not supported on a special file (e.g. a named pipe)\"\"\"\r\n\r\n\r\nclass _GiveupOnFastCopy(Exception):\r\n \"\"\"Raised as a signal to fallback on using raw read()/write()\r\n file copy when fast-copy functions fail to do so.\r\n \"\"\"\r\n\r\n\r\nclass MovingThread(QThread):\r\n setName = Signal(list)\r\n progress = Signal(dict)\r\n showWindowProgress = Signal()\r\n\r\n def __init__(self, parent=None):\r\n super(MovingThread, self).__init__(parent)\r\n\r\n def __del__(self):\r\n self.wait()\r\n\r\n def st(self, dataTarget, data):\r\n if not self.isRunning():\r\n self.statusWork = True\r\n\r\n self.dataTarget = dataTarget\r\n self.data = data\r\n\r\n self.totalFiles = 0\r\n self.totalSize = 0\r\n\r\n self.count = 0\r\n self.totalCopied = 0\r\n\r\n self.start(QThread.NormalPriority)\r\n\r\n def run(self):\r\n targetFiles = [path for item in self.dataTarget for path in self.scandir(\r\n os.path.normpath(item['path']))]\r\n listFiles = self.listMoving(targetFiles, self.data)\r\n\r\n # print(listFiles)\r\n\r\n if (totalFiles:= len(listFiles)):\r\n self.totalFiles = totalFiles\r\n 
self.showWindowProgress.emit()\r\n            self.moving(listFiles)\r\n\r\n    def listMoving(self, targetFiles: list, listData: list) -> list:\r\n        tempList = []\r\n        toMovingList = []\r\n\r\n        for item in listData:\r\n            path = os.path.normpath(item['path'])\r\n            ext = item['extension']\r\n            if not ext:\r\n                continue\r\n\r\n            filesNames = [PurePath(x).name for x in self.scandir(path)]\r\n\r\n            for itemTarget in targetFiles:\r\n                nameTarget = PurePath(itemTarget).stem\r\n                extTarget = PurePath(itemTarget).suffix\r\n                fullName = PurePath(itemTarget).name\r\n\r\n                if extTarget.lower() in ext:\r\n                    size = os.path.getsize(itemTarget)\r\n                    self.totalSize += size\r\n\r\n                    if fullName in filesNames or fullName in tempList:\r\n                        if not shutil._samefile(itemTarget, os.path.join(path, fullName)):\r\n                            fullName = self.rename_files(\r\n                                fullName, filesNames, tempList)\r\n\r\n                    tempList.append(fullName)\r\n                    toMovingList.append((itemTarget,\r\n                                         os.path.join(path, fullName),\r\n                                         PurePath(itemTarget).name, size)\r\n                                        )\r\n\r\n        return sorted(toMovingList, key=lambda x: x[3])\r\n\r\n    def scandir(self, target: str) -> list:\r\n        '''Returns a list of files, excluding shortcuts, folders and, on UNIX, dot-files\r\n        '''\r\n        with os.scandir(target) as it:\r\n            if _WINDOWS:\r\n                files = [os.fsdecode(entry.path) for entry in it if entry.is_file(\r\n                ) and not entry.name.endswith('.lnk')]\r\n            else:\r\n                files = [os.fsdecode(entry.path) for entry in it if not entry.name.startswith(\r\n                    '.') and entry.is_file() and not entry.is_symlink()]\r\n        return files\r\n\r\n    def rename_files(self, name_file: str, list_move_to_path: list, temp_list: list) -> str:\r\n        '''Renames a file to avoid a name collision\r\n        '''\r\n        count = 0\r\n        filename, extension = os.path.splitext(name_file)\r\n        delete = re.compile(fr\"\\s*\\(\\d+\\){extension}$\")\r\n\r\n        while name_file in list_move_to_path or name_file in temp_list:\r\n            if delete.search(name_file):\r\n                count += 1\r\n                name_file = delete.sub(f' ({count}){extension}', name_file)\r\n            else:\r\n                name_file = f'(unknown) 
(1){extension}'\r\n\r\n return name_file\r\n\r\n def moving(self, list_to_moving: list):\r\n self.start_time = time.monotonic()\r\n time.sleep(0.1)\r\n for src, dst, name, file_size in list_to_moving:\r\n if self.statusWork:\r\n self.setName.emit([name, src, dst])\r\n # self.current_file_size = size\r\n\r\n real_dst = self.move(os.path.normcase(src), os.path.normcase(\r\n dst), callback=self.copy_progress, total=file_size)\r\n if real_dst:\r\n self.count += 1\r\n else:\r\n break\r\n\r\n # def humansize(self, nbytes):\r\n # ''' Перевод байт в кб, мб и т.д.'''\r\n # suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']\r\n # i = 0\r\n # while nbytes >= 1024 and i < len(suffixes) - 1:\r\n # nbytes /= 1024.\r\n # i += 1\r\n # f = ('%.2f' % nbytes).rstrip('0').rstrip('.')\r\n # return '%s %s' % (f, suffixes[i])\r\n\r\n def copy_progress(self, copied, current, total):\r\n self.totalCopied += current\r\n\r\n end_time = time.monotonic()\r\n time_elapsed = end_time - self.start_time\r\n\r\n speed = self.totalCopied / time_elapsed\r\n bytes_remain = self.totalSize - self.totalCopied\r\n time_remain = int(bytes_remain / speed)\r\n\r\n less_files = self.totalFiles - self.count\r\n less_size = self.totalSize - self.totalCopied\r\n\r\n # self.time_less_sec.emit(time_remain)\r\n\r\n self.progress.emit({'time_remain': time_remain,\r\n 'speed': speed,\r\n 'percent': int(100 * self.totalCopied / self.totalSize),\r\n 'less_files': less_files,\r\n 'less_size': less_size})\r\n\r\n # self.timeandspeed.emit([time_remain, f'{self.humansize(speed)}/s'])\r\n # self.progress.emit(int(100*self.totalCopied/self.totalSize))\r\n\r\n # print('\\r' + f\"{time_remain}, {self.humansize(speed)}/s, {int(100*self.totalCopied/self.totalSize)}\", end='')\r\n\r\n def move(self, src, dst, callback, total):\r\n real_dst = dst\r\n try:\r\n os.rename(src, real_dst)\r\n callback(total, total, total=total)\r\n except OSError:\r\n self.copy2(src, real_dst)\r\n if not self.statusWork:\r\n src = 
real_dst\r\n try:\r\n os.unlink(src)\r\n except PermissionError:\r\n time.sleep(0.5)\r\n os.unlink(src)\r\n return real_dst\r\n\r\n def copy2(self, src, dst, *, follow_symlinks=True):\r\n if os.path.isdir(dst):\r\n dst = os.path.join(dst, os.path.basename(src))\r\n self.copyfile(src, dst, follow_symlinks=follow_symlinks)\r\n shutil.copystat(src, dst, follow_symlinks=follow_symlinks)\r\n return dst\r\n\r\n def copyfile(self, src, dst, *, follow_symlinks=True):\r\n if self._samefile(src, dst):\r\n raise SameFileError(\r\n \"{!r} and {!r} are the same file\".format(src, dst))\r\n\r\n file_size = 0\r\n for i, fn in enumerate([src, dst]):\r\n try:\r\n st = self._stat(fn)\r\n except OSError:\r\n pass\r\n else:\r\n if stat.S_ISFIFO(st.st_mode):\r\n fn = fn.path if isinstance(fn, os.DirEntry) else fn\r\n raise SpecialFileError(\"`%s` is a named pipe\" % fn)\r\n if _WINDOWS and i == 0:\r\n file_size = st.st_size\r\n\r\n with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:\r\n if _HAS_FCOPYFILE:\r\n try:\r\n shutil._fastcopy_fcopyfile(\r\n fsrc, fdst, posix._COPYFILE_DATA)\r\n return dst\r\n except _GiveupOnFastCopy:\r\n pass\r\n # to do for Linux\r\n elif _USE_CP_SENDFILE:\r\n try:\r\n # shutil._fastcopy_sendfile(fsrc, fdst)\r\n self._fastcopy_sendfile(fsrc, fdst, callback=self.copy_progress, total=file_size)\r\n return dst\r\n except _GiveupOnFastCopy:\r\n pass\r\n elif _WINDOWS and file_size > 0:\r\n self._copyfileobj_readinto(fsrc, fdst, callback=self.copy_progress, total=file_size, length=min(\r\n file_size, shutil.COPY_BUFSIZE))\r\n return dst\r\n\r\n self.copyfileobj(\r\n fsrc, fdst, callback=self.copy_progress, total=file_size)\r\n\r\n return dst\r\n\r\n def _fastcopy_sendfile(self, fsrc, fdst, callback, total):\r\n global _USE_CP_SENDFILE\r\n try:\r\n infd = fsrc.fileno()\r\n outfd = fdst.fileno()\r\n except Exception as err:\r\n raise _GiveupOnFastCopy(err) # not a regular file\r\n try:\r\n blocksize = max(os.fstat(infd).st_size, 2 ** 23) # min 8MiB\r\n except 
OSError:\r\n blocksize = 2 ** 27 # 128MiB\r\n if sys.maxsize < 2 ** 32:\r\n blocksize = min(blocksize, 2 ** 30)\r\n\r\n offset = 0\r\n while True:\r\n try:\r\n sent = os.sendfile(outfd, infd, offset, blocksize)\r\n except OSError as err:\r\n err.filename = fsrc.name\r\n err.filename2 = fdst.name\r\n\r\n if err.errno == errno.ENOTSOCK:\r\n _USE_CP_SENDFILE = False\r\n raise _GiveupOnFastCopy(err)\r\n\r\n if err.errno == errno.ENOSPC: # filesystem is full\r\n raise err from None\r\n\r\n if offset == 0 and os.lseek(outfd, 0, os.SEEK_CUR) == 0:\r\n raise _GiveupOnFastCopy(err)\r\n\r\n raise err\r\n else:\r\n if sent == 0 or not self.statusWork:\r\n break # EOF\r\n offset += sent\r\n callback(offset, sent, total=total)\r\n\r\n def _copyfileobj_readinto(self, fsrc, fdst, callback, total, length=shutil.COPY_BUFSIZE):\r\n fsrc_readinto = fsrc.readinto\r\n fdst_write = fdst.write\r\n with memoryview(bytearray(length)) as mv:\r\n copied = 0\r\n while True:\r\n n = fsrc_readinto(mv)\r\n if not n or not self.statusWork:\r\n break\r\n elif n < length:\r\n with mv[:n] as smv:\r\n fdst.write(smv)\r\n copied += len(smv)\r\n callback(copied, len(smv), total=total)\r\n else:\r\n fdst_write(mv)\r\n copied += len(mv)\r\n callback(copied, len(mv), total=total)\r\n\r\n def copyfileobj(self, fsrc, fdst, callback, total, length=0):\r\n if not length:\r\n length = shutil.COPY_BUFSIZE\r\n fsrc_read = fsrc.read\r\n fdst_write = fdst.write\r\n copied = 0\r\n while True:\r\n buf = fsrc_read(length)\r\n if not buf or not self.statusWork:\r\n break\r\n fdst_write(buf)\r\n copied += len(buf)\r\n callback(copied, len(buf), total=total)\r\n\r\n def _samefile(self, src, dst):\r\n # Macintosh, Unix.\r\n if isinstance(src, os.DirEntry) and hasattr(os.path, 'samestat'):\r\n try:\r\n return os.path.samestat(src.stat(), os.stat(dst))\r\n except OSError:\r\n return False\r\n\r\n if hasattr(os.path, 'samefile'):\r\n try:\r\n return os.path.samefile(src, dst)\r\n except OSError:\r\n return False\r\n\r\n # 
All other platforms: check for same pathname.\r\n return (os.path.normcase(os.path.abspath(src)) ==\r\n os.path.normcase(os.path.abspath(dst)))\r\n\r\n def _stat(self, fn):\r\n return fn.stat() if isinstance(fn, os.DirEntry) else os.stat(fn)\r\n"
}
] | 9 |
cskalla/ATRAN | https://github.com/cskalla/ATRAN | bacad0f75773f1591837f168c40aa7c9203b14e9 | 43975aeae7bf54a40e421d6953f8bdfeb24d1506 | 372616da12dfed91fe9dca3a7937b9662855d02a | refs/heads/main | 2023-07-25T22:06:30.077369 | 2021-09-06T21:34:10 | 2021-09-06T21:34:10 | 390,115,974 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5664621591567993,
"alphanum_fraction": 0.6073619723320007,
"avg_line_length": 15.896552085876465,
"blob_id": "1d8b99867806473336fa5836634932032f77af9d",
"content_id": "cb44a4136214adebefc71755f32966e88ef29b50",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 489,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 29,
"path": "/ATRAN3.0/generate_tasks.py",
"repo_name": "cskalla/ATRAN",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Sep 6 15:57:41 2021\n\n@author: carolineskalla\n\"\"\"\nimport random\nimport numpy as np\n\ndef gen_tasks(nfuncs, ntasks, tnorm):\n \"\"\"\n function tasks = GenTask(nf, nt, no)\n\n %% Generate tasks\n tasks = rand(nt, nf);\n tasks = tasks./sum(tasks, 2)*no;\n\nend\n \"\"\"\n\n\n tasks = np.random.rand(nfuncs,ntasks)\n\n tasks = (np.divide(tasks.T,np.sum(tasks, axis=1)).T)*tnorm\n return tasks\n\n \nt = gen_tasks(9,10,10)"
},
{
"alpha_fraction": 0.5917280912399292,
"alphanum_fraction": 0.6126126050949097,
"avg_line_length": 28.719512939453125,
"blob_id": "95f1055c9a4001d4b283756d4592a258a3105094",
"content_id": "2511df95965bfcc41c6337119a197d645506d1e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2442,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 82,
"path": "/generate_agents.py",
"repo_name": "cskalla/ATRAN",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Sep 1 12:45:18 2021\n\n@author: carolineskalla\n\"\"\"\nimport numpy as np\nimport numpy.matlib\nimport random\nimport matplotlib.pyplot as plt\n\n\ndef gen_agents(nfunc, nagent, aDiv, gDiv, anorm):\n \"\"\"\n #Parameters:\n nfunc = 9\n nagent = 100\n #gDiv = np.repeat(1/nfunc, 9)\n gDiv = [1/nfunc for i in range(0,9)]\n anorm = 10; #The sum of capabilities of agents,\n aDivm = 100; #This is where intra-agent diversity can be set: the higher, the more diverse\n aDivd = 10; #This introduces some spread into inta-agent diversity (so that IDA is not the same for all agents)\n \"\"\"\n \n #Generate Agents\n \n #choosing a dominant function for each agent\n x = np.matlib.repmat(range(0, nfunc), nagent, 1)\n #choosing a dominant function for each agent\n domf = np.random.choice(nfunc, nagent, replace=True, p=gDiv)\n #initialize agents - adding jitter\n agents = np.exp((-(x)^2)/(aDiv[0]+aDiv[0]*aDiv[1]/100*random.randint(0,nagent)-aDiv[1]/200))\n #normalize to anorm value\n agents = (np.divide(agents.T,np.sum(agents, axis=1)).T)*anorm\n \n #mix up functions so different skills have different strengths\n for aidx in range(nagent):\n #for each row in agent matrix mix up the order of the row elements\n agents[aidx, :] = agents[aidx, np.random.permutation(len(agents[aidx,:]))]\n #put the dom function in the correct position\n swp = agents[aidx,domf[aidx]]; #old value\n m = max(agents[aidx,:])\n l = np.where(agents[aidx,:] == np.amax(agents[aidx,:])) \n agents[aidx,domf[aidx]] = m\n agents[aidx, l] = swp\n \n return agents\n\n#a = gen_agents()\n \n\"\"\"\n%% Plot agents for checking\nif par.debug\n numagents = size(agents, 1);\n DF = NaN(numagents,1);\n colors = jet(numagents);\n figure\n hold on\n for aidx = 1:numagents\n plot(agents(aidx, :), '*', 'Color', colors(aidx,:));\n DF(aidx) = find(agents(aidx,:) == max(agents(aidx, :))); %This finds the dominant function\n % fprintf('%i\\n', 
DF(aidx))\n end\n tabulate(DF)\n clear colors aidx DF\nend\n\"\"\"\n\"\"\"\nNot finished:\ndebug = True\nif debug:\n numagents = len(agents[0])\n an_array = np.empty((nagent,0))\n an_array[:] = np.NaN\n \n for i in range(1, nagent) :\n x=np.full((1,nagent), i, dtype=int)\n plt.scatter(x, agents[i, :])\n \n #domf[aidx]\n\"\"\" "
},
{
"alpha_fraction": 0.6325892806053162,
"alphanum_fraction": 0.6526785492897034,
"avg_line_length": 29.69862937927246,
"blob_id": "0249991aaf5a8dc5a80aee90fc5bfc7ce407d853",
"content_id": "b0a666373f0402a09214b907b26a2e4ee9fefabb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2240,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 73,
"path": "/ATRAN3.0/simpleATRAN.py",
"repo_name": "cskalla/ATRAN",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Sep 1 12:45:18 2021\n\n@author: carolineskalla\n\"\"\"\nimport numpy as np\nimport numpy.matlib\nimport random\nimport math\nimport matplotlib.pyplot as plt\n\n#Parameters:\nnfunc = 9\nnagent = 100\n#gDiv = np.repeat(1/nfunc, 9)\ngDiv = [1/nfunc for i in range(0,9)]\nanorm = 10; #The sum of capabilities of agents,\naDivm = 1; #This is where intra-agent diversity can be set: the higher, the more diverse\naDivd = 10; #This introduces some spread into inta-agent diversity (so that IDA is not the same for all agents)\n\n#Generate Agents\n\n#choosing a dominant function for each agent\nx = np.matlib.repmat(range(0, nfunc), nagent, 1)\n#choosing a dominant function for each agent\ndomf = np.random.choice(9, nagent, replace=True, p=gDiv)\n#initialize agents - adding jitter\nagents = np.exp((-(x)^2)/(aDivm+aDivm*aDivd/100*random.randint(0,nagent)-aDivd/200))\n#normalize to anorm value\nagents = (np.divide(agents.T,np.sum(agents, axis=1)).T)*anorm\n\n#mix up functions so different skills have different strengths\nfor aidx in range(nagent):\n #for each row in agent matrix mix up the order of the row elements\n agents[aidx, :] = agents[aidx, np.random.permutation(len(agents[aidx,:]))]\n #put the dom function in the correct position\n swp = agents[aidx,domf[aidx]]; #old value\n m = max(agents[aidx,:])\n l = np.where(agents[aidx,:] == np.amax(agents[aidx,:])) \n agents[aidx,domf[aidx]] = m\n agents[aidx, l] = swp\n \n\"\"\"\n%% Plot agents for checking\nif par.debug\n numagents = size(agents, 1);\n DF = NaN(numagents,1);\n colors = jet(numagents);\n figure\n hold on\n for aidx = 1:numagents\n plot(agents(aidx, :), '*', 'Color', colors(aidx,:));\n DF(aidx) = find(agents(aidx,:) == max(agents(aidx, :))); %This finds the dominant function\n % fprintf('%i\\n', DF(aidx))\n end\n tabulate(DF)\n clear colors aidx DF\nend\n\"\"\"\n\"\"\"\ndebug = True\nif debug:\n numagents = len(agents[0])\n DF = 
np.empty((nagent,0))\n DF[:] = np.NaN\n for a in range(1,nagent):\n plt.plot(agents[a, :])\n DF[aidx = find(agents(aidx,:) == max(agents(aidx, :))) #%This finds the dominant function\n fprintf('%i\\n', DF(aidx))\n \n \"\"\""
},
{
"alpha_fraction": 0.6309523582458496,
"alphanum_fraction": 0.6771708726882935,
"avg_line_length": 22.42622947692871,
"blob_id": "d6c8223f5c54bf3665cbf818cddf30ba21c630e7",
"content_id": "fbbd12790c44baa28afaa73e9a78832daa4903ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1428,
"license_type": "no_license",
"max_line_length": 193,
"num_lines": 61,
"path": "/ATRAN3.0/plot_diversity.py",
"repo_name": "cskalla/ATRAN",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sun Sep 5 17:52:37 2021\n\n@author: carolineskalla\n\nDescription: generate agent teams and show where they are on the DFD IFD plane. Prerequisite to basic simulation.\n\"\"\"\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport math\nimport generate_agents\nimport calc_fd\n\n#Parameters:\n#Number of functions\nnumfuncs = 9;\n#Number of agents\nnumagents = 100;\n#Number of tasks\nnumtasks = 10;\n#Diversity jitter\nagspread = 10;\n#Agent total skill strength\nanorm = 10;\n#tnorm = 10;\n#numrepeats = 10;\n#EmergencyStop = 5e2;\nadivvals = np.logspace(-1, 3, 20)\ngdivvals = np.logspace(-1, 3, 10)\n\n#adivvals = np.larray(range(0.1, 1000, ))\n#gdivvals = np.logspace(-1, 3, 10)\n\n\n#Generate agents\nDFD = np.zeros((20,10))\nIFD = np.zeros((20,10))\n\nai = 0\nfor adiv in adivvals:\n gi = 0 \n for gdiv in gdivvals:\n agents = generate_agents.gen_agents(numfuncs, numagents, [adiv, agspread], (np.exp(-(np.array(range(0, numfuncs))**2/gdiv)))/sum(np.exp(-(np.array(range(0, numfuncs))**2/gdiv))), anorm)\n #Calculate and store diversity values\n [DFD[ai, gi], IFD[ai, gi]] = calc_fd.calc_fd(agents)\n gi+=1\n ai+=1\n \n#try flatten \nflatDFD = DFD.flatten()\nflatIFD = IFD.flatten()\n#Plot agent diversity\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.scatter(flatIFD, flatDFD, 0)\nax.set_xlabel('IFD')\nax.set_ylabel('DFD')\nax.set_zlabel('Set to 0')"
},
{
"alpha_fraction": 0.5793358087539673,
"alphanum_fraction": 0.618344783782959,
"avg_line_length": 28.65625,
"blob_id": "60278423f1fae3f4c2a4fdeb6376e6d6b6e8a39c",
"content_id": "3fcd6230384d69c6c59745568dc91158d11a7b74",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1897,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 64,
"path": "/ATRAN3.0/calc_fd.py",
"repo_name": "cskalla/ATRAN",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\nimport numpy as np\nimport generate_agents\n\"\"\"\nCreated on Sat Sep 4 15:21:29 2021\n\n@author: carolineskalla\n\nfunction [DFD, IFD] = CalcFD(agents)\n\n%% Calculate DFD\nmaxDFD = 1-1/size(agents, 2); %This is the maxmial value DFD can take.\n[~, I] = max(agents, [], 2);\nT = tabulate(I);\nDFD = 1 - sum((T(:,3)/100).^2); %This is the first formula in the Bunderson & Sutcliffe, 2002 paper\nDFD = DFD/maxDFD; %This is the normalization they mention on page 885 below the formula\n\n%% Calculate IFD\n% First calculate IFDS\nIFDS = 1 - sum((agents./sum(agents,2)).^2, 2);\nIFD = mean(IFDS);\n\nend\n\"\"\"\n#generate agents for testing\n#agents = generate_agents.gen_agents()\n\n\ndef calc_fd(agents):\n\n\n #TESTING AS NOT FUNCTION\n #Calculate DFD\n maxDFD = 1 - (1/len(agents)) #This is the maxmial value DFD can take.\n #Find maximum function of agents\n # [~, I] = max(agents, [], 2)\n I = [0]*len(agents)\n for a in range(len(agents)):\n m = max(agents[a,:])\n I[a] = np.where(agents[a,:] == np.amax(agents[a,:])) \n #T = tabulate(I);\n T = np.array(np.unique(I, return_counts=True)).T\n \n DFD = 1 - sum((T[:,1]/len(agents))**2) #This is the first formula in the Bunderson & Sutcliffe, 2002 paper\n DFD = DFD/maxDFD #This is the normalization they mention on page 885 below the formula\n\n\n \"\"\"\n %% Calculate IFD\n % First calculate IFDS\n IFDS = 1 - sum((agents./sum(agents,2)).^2, 2);\n IFD = mean(IFDS);\n fprintf('Group-average intrapersonal dominant functional diversity of this group of agents is: %0.4f\\n', IFD)\n clear IFDS\n \"\"\"\n #Calculate IFD\n #IFDS = 1 - sum((agents./sum(agents,2)).^2, 2)\n #IFDS = 1 - agents/agents.sum(axis=1)\n IFDS = 1 - np.array([sum(((a/sum(a))**2)) for a in agents])\n IFD = np.mean(IFDS)\n return [DFD, IFD]\n\n#x= calc_fd(np.array([[0.4,0.6,0,0], [0,0,0.6, 0.4]]))"
},
{
"alpha_fraction": 0.5833333134651184,
"alphanum_fraction": 0.629487156867981,
"avg_line_length": 20.69444465637207,
"blob_id": "b2721152c26e40552e3236263182edf886e5a640",
"content_id": "2c1a464d161bbeadee6d5d1a10b6cce2991ffa73",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 780,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 36,
"path": "/random_testing.py",
"repo_name": "cskalla/ATRAN",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Sep 6 13:04:32 2021\n\n@author: carolineskalla\n\"\"\"\nimport random\nimport numpy as np\nimport math\n#Parameters:\n#Number of functions\nnumfuncs = 9;\n#Number of agents\nnumagents = 100;\n#Number of tasks\nnumtasks = 10;\n#Diversity jitter\nagspread = 10;\n#Agent total skill strength\nanorm = 10;\ngdivvals = np.logspace(-1, 3, 10)\n\nfor gdiv in gdivvals:\n \n#gdiv = gdivvals[0] #testing\n#exp((-(0:par.numfuncs-1).^2)/gdiv)\n x = (np.exp(-(np.array(range(0, numfuncs))**2/gdiv)))/sum(np.exp(-(np.array(range(0, numfuncs))**2/gdiv)))\n print(\"x:\", x)\n print(\"x sum: \", sum(x))\n y=x/sum(x)\n print(\"y:\",y)\n print(\"y sum: \", sum(y))\n \n #domf = np.random.choice(9, numagents, replace=True, p=x)\n #print(domf)"
},
{
"alpha_fraction": 0.504273533821106,
"alphanum_fraction": 0.6153846383094788,
"avg_line_length": 15.571428298950195,
"blob_id": "101419f74de6d1f0afdfdbcd9285117b4bafb041",
"content_id": "b23302e0f8c41dd59d6bac89d549fcd2576b5b7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 117,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 7,
"path": "/ATRAN3.0/run_sim.py",
"repo_name": "cskalla/ATRAN",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Sep 6 15:58:38 2021\n\n@author: carolineskalla\n\"\"\"\n\n"
}
] | 7 |
hixio-mh/platform-chipsalliance | https://github.com/hixio-mh/platform-chipsalliance | ff2ef01445a02c464d36ebcf5c5673e6bebcb5fe | 57e438b6bbf82d8307ed889ccda7648a332dc4fe | 252163a2f61280a80c3d2370c2e76958db2c51af | refs/heads/master | 2022-11-27T19:02:31.980545 | 2020-06-17T17:06:34 | 2020-06-17T17:06:34 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.46785619854927063,
"alphanum_fraction": 0.4715297818183899,
"avg_line_length": 34.95283126831055,
"blob_id": "d8f1f96ce13b1c70def9e1714c491d3858e63767",
"content_id": "ae0c6bef172608e9005043ca1f5ece392ea08be9",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3811,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 106,
"path": "/platform.py",
"repo_name": "hixio-mh/platform-chipsalliance",
"src_encoding": "UTF-8",
"text": "# Copyright 2014-present PlatformIO <[email protected]>\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom os.path import join\n\nfrom platformio.managers.platform import PlatformBase\n\n\nclass ChipsalliancePlatform(PlatformBase):\n def get_boards(self, id_=None):\n result = PlatformBase.get_boards(self, id_)\n if not result:\n return result\n if id_:\n return self._add_default_debug_tools(result)\n else:\n for key, value in result.items():\n result[key] = self._add_default_debug_tools(result[key])\n return result\n\n def _add_default_debug_tools(self, board):\n debug = board.manifest.get(\"debug\", {})\n if \"tools\" not in debug:\n debug[\"tools\"] = {}\n\n tools = (\n \"digilent-hs1\",\n \"olimex-arm-usb-tiny-h\",\n \"olimex-arm-usb-ocd-h\",\n \"olimex-arm-usb-ocd\",\n \"olimex-jtag-tiny\",\n \"verilator\",\n )\n for tool in tools:\n if tool in debug[\"tools\"]:\n continue\n server_args = [\n \"-s\",\n join(\n self.get_package_dir(\"framework-wd-riscv-sdk\") or \"\",\n \"board\",\n board.get(\"build.variant\", \"\"),\n ),\n \"-s\",\n \"$PACKAGE_DIR/share/openocd/scripts\",\n ]\n if tool == \"verilator\":\n openocd_config = join(\n self.get_dir(),\n \"misc\",\n \"openocd\",\n board.get(\"debug.openocd_board\", \"swervolf_sim.cfg\"),\n )\n server_args.extend([\"-f\", openocd_config])\n elif debug.get(\"openocd_config\"):\n server_args.extend([\"-f\", debug.get(\"openocd_config\")])\n else:\n assert debug.get(\"openocd_target\"), (\n \"Missing target 
configuration for %s\" % board.id\n )\n # All tools are FTDI based\n server_args.extend(\n [\n \"-f\",\n \"interface/ftdi/%s.cfg\" % tool,\n \"-f\",\n \"target/%s.cfg\" % debug.get(\"openocd_target\"),\n ]\n )\n debug[\"tools\"][tool] = {\n \"init_cmds\": [\n \"define pio_reset_halt_target\",\n \" monitor reset halt\",\n \"end\",\n \"define pio_reset_run_target\",\n \" monitor reset\",\n \"end\",\n \"set mem inaccessible-by-default off\",\n \"set arch riscv:rv32\",\n \"set remotetimeout 250\",\n \"target extended-remote $DEBUG_PORT\",\n \"$INIT_BREAK\",\n \"$LOAD_CMDS\",\n ],\n \"server\": {\n \"package\": \"tool-openocd-riscv-chipsalliance\",\n \"executable\": \"bin/openocd\",\n \"arguments\": server_args,\n },\n \"onboard\": tool in debug.get(\"onboard_tools\", [])\n or tool == \"verilator\",\n }\n\n board.manifest[\"debug\"] = debug\n return board\n"
},
{
"alpha_fraction": 0.5310635566711426,
"alphanum_fraction": 0.5482625365257263,
"avg_line_length": 26.931371688842773,
"blob_id": "b727e2175c6091b92fde3a4ae5a9950c72e5ab91",
"content_id": "06e614a152b0cef57cb8193be00a465172e1c980",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2849,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 102,
"path": "/builder/frameworks/wd-riscv-sdk.py",
"repo_name": "hixio-mh/platform-chipsalliance",
"src_encoding": "UTF-8",
"text": "import os\n\nfrom SCons.Script import DefaultEnvironment\n\nenv = DefaultEnvironment()\nplatform = env.PioPlatform()\n\nFIRMWARE_DIR = platform.get_package_dir(\"framework-wd-riscv-sdk\")\nassert os.path.isdir(FIRMWARE_DIR)\n\nboard = env.BoardConfig()\nvariant_dir = os.path.join(FIRMWARE_DIR, \"board\", board.get(\"build.variant\", \"\"))\n\nenv.SConscript(\"_bare.py\")\n\nenv.Append(\n CCFLAGS=[\n \"-fno-builtin-printf\",\n ],\n\n CPPDEFINES=[\n \"D_USE_RTOSAL\",\n \"D_USE_FREERTOS\",\n (\"D_TICK_TIME_MS\", 4),\n (\"D_ISR_STACK_SIZE\", 400),\n (\"D_MTIME_ADDRESS\", \"0x80001020\"),\n (\"D_MTIMECMP_ADDRESS\", \"0x80001028\"),\n (\"D_CLOCK_RATE\", 50000000),\n (\"D_PIC_BASE_ADDRESS\", \"0xA0000000\"),\n (\"D_PIC_NUM_OF_EXT_INTERRUPTS\", 256),\n (\"D_EXT_INTERRUPT_FIRST_SOURCE_USED\", 0),\n (\"D_EXT_INTERRUPT_LAST_SOURCE_USED\", 255),\n ],\n\n CPPPATH=[\n \"$PROJECT_SRC_DIR\",\n os.path.join(FIRMWARE_DIR, \"rtos\", \"rtosal\", \"loc_inc\"),\n os.path.join(FIRMWARE_DIR, \"common\", \"api_inc\"),\n os.path.join(FIRMWARE_DIR, \"rtos\", \"rtos_core\", \"freertos\", \"Source\", \"include\"),\n os.path.join(FIRMWARE_DIR, \"rtos\", \"rtosal\", \"api_inc\"),\n os.path.join(FIRMWARE_DIR, \"rtos\", \"rtosal\", \"config\", \"swerv_eh1\"),\n os.path.join(FIRMWARE_DIR, \"psp\", \"api_inc\"),\n os.path.join(FIRMWARE_DIR, \"rtos\", \"rtos_core\", \"freertos\", \"Source\", \"include\"),\n ],\n\n LIBPATH=[variant_dir],\n\n LIBS=[\"c\", \"gcc\"]\n)\n\n# Only for C/C++ sources\nenv.Append(CCFLAGS=[\"-include\", \"sys/cdefs.h\"])\n\nif not board.get(\"build.ldscript\", \"\"):\n env.Replace(LDSCRIPT_PATH=\"link.lds\")\n\n#\n# Target: Build libraries\n#\n\nlibs = []\n\nif \"build.variant\" in board:\n env.Append(CPPPATH=[variant_dir, os.path.join(variant_dir, \"bsp\")])\n libs.append(env.BuildLibrary(os.path.join(\"$BUILD_DIR\", \"BoardBSP\"), variant_dir))\n\nlibs.extend([\n env.BuildLibrary(\n os.path.join(\"$BUILD_DIR\", \"FreeRTOS\"),\n os.path.join(FIRMWARE_DIR, \"rtos\", 
\"rtos_core\", \"freertos\", \"Source\"),\n src_filter=[\n \"-<*>\",\n \"+<croutine.c>\",\n \"+<list.c>\",\n \"+<portable/portASM.S>\",\n \"+<queue.c>\",\n \"+<tasks.c>\",\n \"+<timers.c>\",\n ],\n ),\n\n env.BuildLibrary(\n os.path.join(\"$BUILD_DIR\", \"RTOS-AL\"),\n os.path.join(FIRMWARE_DIR, \"rtos\", \"rtosal\"),\n src_filter=\"+<*> -<rtosal_memory.c> -<list.c>\",\n ),\n\n env.BuildLibrary(\n os.path.join(\"$BUILD_DIR\", \"PSP\"),\n os.path.join(FIRMWARE_DIR, \"psp\"),\n src_filter=[\n \"-<*>\",\n \"+<psp_ext_interrupts_swerv_eh1.c>\",\n \"+<psp_traps_interrupts.c>\",\n \"+<psp_timers.c>\",\n \"+<psp_performance_monitor_eh1.c>\",\n \"+<psp_int_vect_swerv_eh1.S>\"\n ],\n )\n])\n\nenv.Prepend(LIBS=libs)\n"
},
{
"alpha_fraction": 0.7668085098266602,
"alphanum_fraction": 0.768510639667511,
"avg_line_length": 32.57143020629883,
"blob_id": "116d1b6246ec008cb6f17939a8b3e070d41e139a",
"content_id": "aa460e789202f642c746c18809801d22737155ab",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1175,
"license_type": "permissive",
"max_line_length": 189,
"num_lines": 35,
"path": "/README.md",
"repo_name": "hixio-mh/platform-chipsalliance",
"src_encoding": "UTF-8",
"text": "# CHIPS Alliance: development platform for [PlatformIO](http://platformio.org)\n\n\n \nCHIPS Alliance brings the power of open source and software automation to the semiconductor industry, making it possible to develop new hardware faster and more affordably than ever before.\n\n* [Home](http://platformio.org/platforms/chipsalliance) (home page in PlatformIO Platform Registry)\n* [Documentation](http://docs.platformio.org/page/platforms/chipsalliance.html) (advanced usage, packages, boards, frameworks, etc.)\n\n# Usage\n\n1. [Install PlatformIO](http://platformio.org)\n2. Create PlatformIO project and configure a platform option in [platformio.ini](http://docs.platformio.org/page/projectconf.html) file:\n\n## Stable version\n\n```ini\n[env:stable]\nplatform = chipsalliance\nboard = ...\n...\n```\n\n## Development version\n\n```ini\n[env:development]\nplatform = https://github.com/platformio/platform-chipsalliance.git\nboard = ...\n...\n```\n\n# Configuration\n\nPlease navigate to [documentation](http://docs.platformio.org/page/platforms/chipsalliance.html).\n"
},
{
"alpha_fraction": 0.49150919914245605,
"alphanum_fraction": 0.5034680962562561,
"avg_line_length": 27.25,
"blob_id": "0c5fb414a5cd4839f642d8360be34452616a497d",
"content_id": "ce059cae288d3b9f6f549ada41f664e0d6fbc5fc",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4181,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 148,
"path": "/examples/rtosal-freertos/build-freertos.py",
"repo_name": "hixio-mh/platform-chipsalliance",
"src_encoding": "UTF-8",
"text": "import os\n\nImport(\"env\")\n\n# Note: \"riscv-fw-infrastructure\" repository contains toolchains and eclipse\nFW_DIR = os.path.join(\n \"$PROJECT_CORE_DIR\", \"packages\", \"riscv-fw-infrastructure\", \"WD-Firmware\"\n)\n\nboard = env.BoardConfig()\nvariant_dir = os.path.join(FW_DIR, \"board\", board.get(\"build.variant\", \"\"))\n\nenv.Append(\n ASFLAGS=[\n \"-x\", \"assembler-with-cpp\",\n \"-Wa,-march=%s\" % board.get(\"build.march\")\n ],\n\n CCFLAGS=[\n \"-Os\",\n \"-Wall\",\n \"-ffunction-sections\",\n \"-fdata-sections\",\n \"-march=%s\" % board.get(\"build.march\"),\n \"-mabi=%s\" % board.get(\"build.mabi\"),\n \"-mcmodel=%s\" % board.get(\"build.mcmodel\"),\n \"-fno-builtin-printf\",\n ],\n CPPDEFINES=[\n \"D_USE_RTOSAL\",\n \"D_USE_FREERTOS\",\n (\"D_TICK_TIME_MS\", 4),\n (\"D_ISR_STACK_SIZE\", 400),\n (\"D_MTIME_ADDRESS\", \"0x80001020\"),\n (\"D_MTIMECMP_ADDRESS\", \"0x80001028\"),\n (\"D_CLOCK_RATE\", 50000000),\n (\"D_PIC_BASE_ADDRESS\", \"0xA0000000\"),\n (\"D_PIC_NUM_OF_EXT_INTERRUPTS\", 256),\n (\"D_EXT_INTERRUPT_FIRST_SOURCE_USED\", 0),\n (\"D_EXT_INTERRUPT_LAST_SOURCE_USED\", 255),\n ],\n\n CPPPATH=[\n \"$PROJECT_SRC_DIR\",\n os.path.join(FW_DIR, \"rtos\", \"rtosal\", \"loc_inc\"),\n os.path.join(FW_DIR, \"common\", \"api_inc\"),\n os.path.join(FW_DIR, \"rtos\", \"rtos_core\", \"freertos\", \"Source\", \"include\"),\n os.path.join(FW_DIR, \"rtos\", \"rtosal\", \"api_inc\"),\n os.path.join(FW_DIR, \"rtos\", \"rtosal\", \"config\", \"swerv_eh1\"),\n os.path.join(FW_DIR, \"psp\", \"api_inc\"),\n os.path.join(FW_DIR, \"rtos\", \"rtos_core\", \"freertos\", \"Source\", \"include\"),\n ],\n\n LINKFLAGS=[\n \"-Os\",\n \"-march=%s\" % board.get(\"build.march\"),\n \"-mabi=%s\" % board.get(\"build.mabi\"),\n \"-mcmodel=%s\" % board.get(\"build.mcmodel\"),\n \"-Wl,-gc-sections\",\n \"-nostdlib\",\n \"-nostartfiles\",\n \"-static\",\n \"-Wl,--wrap=malloc\",\n \"-Wl,--wrap=free\",\n \"-Wl,--wrap=open\",\n \"-Wl,--wrap=lseek\",\n 
\"-Wl,--wrap=read\",\n \"-Wl,--wrap=write\",\n \"-Wl,--wrap=fstat\",\n \"-Wl,--wrap=stat\",\n \"-Wl,--wrap=close\",\n \"-Wl,--wrap=link\",\n \"-Wl,--wrap=unlink\",\n \"-Wl,--wrap=execve\",\n \"-Wl,--wrap=fork\",\n \"-Wl,--wrap=getpid\",\n \"-Wl,--wrap=kill\",\n \"-Wl,--wrap=wait\",\n \"-Wl,--wrap=isatty\",\n \"-Wl,--wrap=times\",\n \"-Wl,--wrap=sbrk\",\n \"-Wl,--wrap=_exit\"\n \"-Wl,-Map=\"\n + os.path.join(\"$BUILD_DIR\", os.path.basename(env.subst(\"${PROJECT_DIR}.map\"))),\n \"-Wl,--defsym=__comrv_cache_size=0\",\n ],\n\n LIBPATH=[variant_dir],\n\n LIBS=[\"c\", \"gcc\"]\n)\n\n# copy CCFLAGS to ASFLAGS (-x assembler-with-cpp mode)\nenv.Append(ASFLAGS=env.get(\"CCFLAGS\", [])[:])\n\n# Only for C/C++ sources\nenv.Append(CCFLAGS=[\"-include\", \"sys/cdefs.h\"])\n\nif not board.get(\"build.ldscript\", \"\"):\n env.Replace(LDSCRIPT_PATH=\"link.lds\")\n\n\n#\n# Target: Build libraries\n#\n\nlibs = []\n\nif \"build.variant\" in board:\n env.Append(CPPPATH=[variant_dir, os.path.join(variant_dir, \"bsp\")])\n libs.append(env.BuildLibrary(os.path.join(\"$BUILD_DIR\", \"BoardBSP\"), variant_dir))\n\nlibs.extend([\n env.BuildLibrary(\n os.path.join(\"$BUILD_DIR\", \"FreeRTOS\"),\n os.path.join(FW_DIR, \"rtos\", \"rtos_core\", \"freertos\", \"Source\"),\n src_filter=[\n \"-<*>\",\n \"+<croutine.c>\",\n \"+<list.c>\",\n \"+<portable/portASM.S>\",\n \"+<queue.c>\",\n \"+<tasks.c>\",\n \"+<timers.c>\",\n ],\n ),\n\n env.BuildLibrary(\n os.path.join(\"$BUILD_DIR\", \"RTOS-AL\"),\n os.path.join(FW_DIR, \"rtos\", \"rtosal\"),\n src_filter=\"+<*> -<rtosal_memory.c> -<list.c>\",\n ),\n\n env.BuildLibrary(\n os.path.join(\"$BUILD_DIR\", \"PSP\"),\n os.path.join(FW_DIR, \"psp\"),\n src_filter=[\n \"-<*>\",\n \"+<psp_ext_interrupts_swerv_eh1.c>\",\n \"+<psp_traps_interrupts.c>\",\n \"+<psp_timers.c>\",\n \"+<psp_performance_monitor_eh1.c>\",\n \"+<psp_int_vect_swerv_eh1.S>\"\n ],\n )\n])\n\nenv.Prepend(LIBS=libs)\n"
}
] | 4 |
jcandefors/slumpare | https://github.com/jcandefors/slumpare | e2aab43381459ca135ba3584ca9acd622f422936 | 7fd3fa9cdc2ba02e86a19cde25a2950eab381dd7 | dedd2892bba213d80164aa0c22543d0c201fa045 | refs/heads/master | 2021-01-01T17:36:20.267255 | 2014-02-08T12:44:19 | 2014-02-08T12:44:19 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6338546276092529,
"alphanum_fraction": 0.6366145610809326,
"avg_line_length": 33,
"blob_id": "d0c9970dc2faa448736b8b5e6eccc9b9fcfd9f8e",
"content_id": "4a833f3b4aa5e4261dd4f33372df6fcf1143a0c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1087,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 32,
"path": "/slump/views.py",
"repo_name": "jcandefors/slumpare",
"src_encoding": "UTF-8",
"text": "# Create your views here.\nfrom django.db.models import Count\nfrom slump.models import Dude, DTask, TaskForm\nfrom django.shortcuts import render\nfrom random import randint\n\ndef index(request):\n\n return render(request,'slump/index.html',{'form':TaskForm})\n\ndef result(request):\n if request.method == 'POST':\n tf = TaskForm(request.POST)\n if tf.is_valid():\n tdesc = tf.cleaned_data['desc']\n randomDude = Dude.objects.get(pk=randint(1,10))\n t = DTask(dude = randomDude, desc = tdesc)\n t.save()\n context = {'d_dude':randomDude , 'dtask' : tdesc}\n return render(request,'slump/random.html',context)\n else :\n form = TaskForm()\n\n return render(request, 'slump/index.html', {'form': form,})\n\n\n\ndef history(request):\n tasks = DTask.objects.select_related().order_by('desc')\n # counts =DTask.objects.all().annotate('count' = Count('dude'))\n # return render(request, 'slump/history.html', {'tasks': tasks,'counts':counts})\n return render(request, 'slump/history.html', {'tasks': tasks})"
},
{
"alpha_fraction": 0.7345132827758789,
"alphanum_fraction": 0.7433628439903259,
"avg_line_length": 21.799999237060547,
"blob_id": "83f4218817352182951ee7a539d6303a08bba02c",
"content_id": "0738071f2b50b9f999167139e8d9a2446a6c810a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 113,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 5,
"path": "/slump/admin.py",
"repo_name": "jcandefors/slumpare",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom django.contrib import admin\nfrom slump.models import Dude\n\nadmin.site.register(Dude)"
},
{
"alpha_fraction": 0.6333333253860474,
"alphanum_fraction": 0.6370370388031006,
"avg_line_length": 29.11111068725586,
"blob_id": "d1d938a88f447c0ae039bafc4c3c91b4e7253dd6",
"content_id": "15788eeeff371b4ba8bbcf00200538296471ddc2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 270,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 9,
"path": "/slump/urls.py",
"repo_name": "jcandefors/slumpare",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom django.conf.urls import patterns, url\nfrom slump import views\n\nurlpatterns = patterns('',\n url(r'^$', views.index, name='index'),\n url(r'^result/$', views.result, name='result'),\n url(r'^history/$', views.history, name='history'),\n)"
},
{
"alpha_fraction": 0.6536964774131775,
"alphanum_fraction": 0.6653696298599243,
"avg_line_length": 23.4761905670166,
"blob_id": "a4a2ff226a08da2eb124d693cbec215534e5bfc3",
"content_id": "6b2f74aa605e46b7445d045750bfd09bae8b5411",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 514,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 21,
"path": "/slump/models.py",
"repo_name": "jcandefors/slumpare",
"src_encoding": "UTF-8",
"text": "from django.db import models\nfrom django.forms import ModelForm\n# Create your models here.\n\nclass Dude(models.Model):\n def __unicode__(self):\n return(self.name)\n\n name = models.CharField(max_length=40)\n nrOfTasks = models.IntegerField(default=0)\n\nclass DTask(models.Model):\n def __unicode__(self):\n return(self.desc)\n dude = models.ForeignKey(Dude)\n desc = models.CharField(max_length=200)\n\nclass TaskForm(ModelForm):\n class Meta:\n model = DTask\n fields = ['desc']\n"
}
] | 4 |
hisg123/Python | https://github.com/hisg123/Python | de61544733e762f23d280603eadcc33d4e06edcb | abce135740dd47dd09b9299260af9da7bef61aa5 | 9eb793cb61119e8de79928c1f2a0dc3df8c14348 | refs/heads/main | 2023-08-24T22:13:54.192858 | 2021-10-23T10:15:17 | 2021-10-23T10:15:17 | 394,951,284 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.42525532841682434,
"alphanum_fraction": 0.4688950777053833,
"avg_line_length": 25.268293380737305,
"blob_id": "05c75ae6ea5efdebc8a8a98552aab7ec87ba0c54",
"content_id": "029dcd2f74a07f1d8b4b4ccdca513bc7c093f95d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1077,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 41,
"path": "/pythonProject(programmers)/Etc/crain_puppet_draw.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(board, moves):\n answer = 0\n ans_list = []\n temp = [[0 for col in range(len(board))] for row in range(len(board))]\n\n ##inverse row and column\n for i in range(len(board)):\n for j in range(len(board)):\n temp[i][j] = board[j][i]\n\n #execute draw by moves array\n for i in range(len(moves)):\n # print(move)\n for j in range(len(board)):\n if temp[moves[i]-1][j] != 0:\n ans_list.append(temp[moves[i] - 1][j])\n temp[moves[i]-1][j] = 0\n break\n\n n = len(ans_list)\n print(ans_list)\n\n #pop if the same doll(repair)\n i = 0\n while(1):\n if ans_list[i] == ans_list[i+1]:\n ans_list.pop(i)\n ans_list.pop(i)\n i = -1\n\n i = i+1\n if i == len(ans_list)-1: break\n if ans_list == []:\n answer = answer + 2\n break\n\n answer = n - len(ans_list)\n return answer\n\nif __name__ == '__main__':\n solution([[0,0,0,0,0],[0,0,1,0,3],[0,2,5,0,1],[4,2,4,4,2],[3,5,1,3,1]], [1,5,3,5,1,2,1,4])\n"
},
{
"alpha_fraction": 0.5583634376525879,
"alphanum_fraction": 0.5631768703460693,
"avg_line_length": 24.18181800842285,
"blob_id": "46e3f76f6df89cfd88a7179d07a74ea243959855",
"content_id": "2b0e02d21a96c6b3f8b1844a022caa681ab9de39",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 831,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 33,
"path": "/pythonProject(programmers)/BFS,DFS/TravelPath.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "from collections import deque, defaultdict\ndef MakeGraph(tickets):\n graph = defaultdict(list)\n for ticket in tickets:\n graph[ticket[0]].append(ticket[1])\n\n for key, value in graph.items():\n value.sort(reverse=True)\n return graph\n\ndef DFS(graph, root):\n visited = []\n stack = deque([root])\n\n while stack:\n n = stack.pop()\n if n not in visited:\n visited.append(n)\n stack.extend(graph[n])\n\n answer = visited\n return answer\n\ndef solution(tickets):\n graph = MakeGraph(tickets)\n print(graph)\n answer = DFS(graph, tickets[0][0])\n print(answer)\n return answer\n\nif __name__ == '__main__':\n # solution([[\"ICN\", \"JFK\"], [\"HND\", \"IAD\"], [\"JFK\", \"HND\"]])\n solution([[\"ICN\", \"SFO\"], [\"ICN\", \"ATL\"], [\"SFO\", \"ATL\"], [\"ATL\", \"ICN\"], [\"ATL\",\"SFO\"]])\n"
},
{
"alpha_fraction": 0.48721805214881897,
"alphanum_fraction": 0.5210526585578918,
"avg_line_length": 27.934782028198242,
"blob_id": "c9c8cd6378ab4faf0e4faa3124b5927adaba25c6",
"content_id": "3a2779d93d0deae8551702fc1f2eef0b0dcc2b74",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1496,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 46,
"path": "/pythonProject(programmers)/Stack,Queue/bridge_passing_truck.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "from collections import deque\ndef solution(bridge_length, weight, truck_weights):\n answer = 0\n bridge = [0]*bridge_length\n bridge = deque(bridge)\n truck_weights = deque(truck_weights)\n sum_bridge = 0\n\n #bridge 배열이 []가 될때까지 반복\n while bridge:\n #만약 truck이 bridge에 들어갈 수 있는 상태라면\n if truck_weights and sum_bridge + truck_weights[0] <= weight:\n truck = truck_weights.popleft()\n sum_bridge += truck\n bridge.append(truck)\n\n temp = bridge.popleft()\n sum_bridge -= temp\n\n #truck이 bridge에 들어갈 수 없는 상태라면\n else:\n temp = bridge.popleft()\n sum_bridge -= temp\n\n #만약 truck_weights 배열이 []가 아니라면\n if truck_weights:\n #하나 팝하자마자 바로 드갈 수 있는 상태라면\n if sum_bridge + truck_weights[0] <= weight:\n truck = truck_weights.popleft()\n sum_bridge += truck\n bridge.append(truck)\n \n #하나 팝해도 바로 드갈 수 있는 상태가 아니라면\n else: bridge.append(0)\n\n answer += 1\n print(sum_bridge, bridge)\n\n print(answer)\n return answer\n\nif __name__ == '__main__':\n solution(2, 10, [7,4,5,6])\n solution(100, 100, [10])\n solution(100, 100, [10,10,10,10,10])\n solution(2, 10, [7,3,5,6])"
},
{
"alpha_fraction": 0.5968628525733948,
"alphanum_fraction": 0.6092516779899597,
"avg_line_length": 40.02049255371094,
"blob_id": "009294995cf4b5e3bd7713a464ddcc080471ad5e",
"content_id": "0a1395a54c9332d944d4bc680528e0194e5904b7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10095,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 244,
"path": "/MyProject/HisgDesktop/HisgDesktop.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "import sys\nimport PyQt5.uic\nimport time\nfrom selenium import webdriver\nfrom datetime import datetime\nfrom PyQt5.QtWidgets import *\nfrom PyQt5 import QtCore, QtGui, QtWidgets\n\nPMS_ID = \"\"\n\nform_class = PyQt5.uic.loadUiType(\"test.ui\")[0]\n\ntoday_Mon = datetime.today().strftime('%m')\nYEAR, WEEK, DAY = datetime.now().isocalendar()\npath = \"./testui\"\n\ndef CaculateMealTime():\n hour = int(datetime.now().time().strftime('%H'))\n if hour >= 0 and hour <= 8: return \"조식\"\n if hour >= 9 and hour <= 12: return \"중식\"\n if hour >= 13 and hour <= 18: return \"석식\"\n if hour >= 19 and hour <= 24: return \"야식\"\n\ndef SearchMonthBtn(month):\n month_btn_dict = {\"02\": 1, \"01\": 2, \"12\": 3, \"11\": 4, \"10\": 5, \"09\": 6, \"08\": 7}\n return month_btn_dict[month]\n\ndef MealBring(day):\n options = webdriver.ChromeOptions()\n options.add_argument(\"headless\")\n driver = webdriver.Chrome(options= options)\n url = f\"https://intranet.amkor.co.kr/app/service/carte/view/detail?plant=S&year=2021&week={WEEK}\"\n driver.get(url)\n\n br_k_food = driver.find_element_by_xpath(f\"//tbody/tr[2]/td[{day+2}]\").text\n br_j_food = driver.find_element_by_xpath(f\"//tbody/tr[3]/td[{day+1}]\").text\n br_i_food = driver.find_element_by_xpath(f\"//tbody/tr[4]/td[{day+1}]\").text\n br_p_food = driver.find_element_by_xpath(f\"//tbody/tr[5]/td[{day+1}]\").text\n\n lu_k_food = driver.find_element_by_xpath(f\"//tbody/tr[6]/td[{day + 2}]\").text\n lu_j_food = driver.find_element_by_xpath(f\"//tbody/tr[7]/td[{day + 1}]\").text\n lu_i_food = driver.find_element_by_xpath(f\"//tbody/tr[8]/td[{day + 1}]\").text\n lu_p_food = driver.find_element_by_xpath(f\"//tbody/tr[9]/td[{day + 1}]\").text\n\n di_k_food = driver.find_element_by_xpath(f\"//tbody/tr[10]/td[{day + 2}]\").text\n di_j_food = driver.find_element_by_xpath(f\"//tbody/tr[11]/td[{day + 1}]\").text\n di_i_food = driver.find_element_by_xpath(f\"//tbody/tr[12]/td[{day + 1}]\").text\n di_p_food = 
driver.find_element_by_xpath(f\"//tbody/tr[13]/td[{day + 1}]\").text\n\n ni_k_food = driver.find_element_by_xpath(f\"//tbody/tr[14]/td[{day + 2}]\").text\n ni_j_food = driver.find_element_by_xpath(f\"//tbody/tr[15]/td[{day + 1}]\").text\n ni_i_food = driver.find_element_by_xpath(f\"//tbody/tr[16]/td[{day + 1}]\").text\n ni_p_food = driver.find_element_by_xpath(f\"//tbody/tr[17]/td[{day + 1}]\").text\n\n driver.quit()\n return br_k_food, br_j_food, br_i_food, br_p_food, \\\n lu_k_food, lu_j_food, lu_i_food, lu_p_food, \\\n di_k_food, di_j_food, di_i_food, di_p_food,\\\n ni_k_food, ni_j_food, ni_i_food, ni_p_food\n\ndef ClosePopUp(CPdriver):\n # Close pop-up window\n main = CPdriver.window_handles\n for handle in main:\n if handle != main[0]:\n CPdriver.switch_to.window(handle)\n CPdriver.close()\n\ndef SetDefaultMealType(handler):\n # Set Default MealType\n mealtime = CaculateMealTime()\n handler.comboBox_mealtype.setCurrentText(mealtime)\n if mealtime == \"조식\":\n handler.label_k.setText(BR_k_food)\n handler.label_j.setText(BR_j_food)\n handler.label_c.setText(BR_i_food)\n handler.label_p.setText(BR_p_food)\n\n handler.label_k.setFont(QtGui.QFont(\"Arial\", 8))\n handler.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n handler.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n handler.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n if mealtime == \"중식\":\n handler.label_k.setText(LU_k_food)\n handler.label_j.setText(LU_j_food)\n handler.label_c.setText(LU_i_food)\n handler.label_p.setText(LU_p_food)\n\n handler.label_k.setFont(QtGui.QFont(\"Arial\", 8))\n handler.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n handler.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n handler.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n if mealtime == \"석식\":\n handler.label_k.setText(DI_k_food)\n handler.label_j.setText(DI_j_food)\n handler.label_c.setText(DI_i_food)\n handler.label_p.setText(DI_p_food)\n\n handler.label_k.setFont(QtGui.QFont(\"Arial\", 8))\n 
handler.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n handler.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n handler.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n if mealtime == \"야식\":\n handler.label_k.setText(NI_k_food)\n handler.label_j.setText(NI_j_food)\n handler.label_c.setText(NI_i_food)\n handler.label_p.setText(NI_p_food)\n\n handler.label_k.setFont(QtGui.QFont(\"Arial\", 8))\n handler.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n handler.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n handler.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n#UI이벤트 처리\nclass MyWindow(QMainWindow, form_class):\n def __init__(self):\n super().__init__()\n self.setupUi(self)\n self.pushButton_HRD_net.clicked.connect(self.ConnectHRDnet) # Connect HRD_net\n self.pushButton_OJT.clicked.connect(self.ConnectOJT) # Connect OJT\n self.pushButton_AmkorIntranet.clicked.connect(self.ConnectAmkorIntranet) # Connect AmkorIntranet\n self.pushButton_AmkorPMS.clicked.connect(self.ConnectAmkorPMS) # Connect AmkorPMS\n # self.pushButton_Setting.clicked.connect(self.SettingAutoLogin)\n self.comboBox_mealtype.currentTextChanged.connect(self.ConnectMealType)\n\n\n global BR_k_food, BR_j_food, BR_i_food, BR_p_food,\\\n LU_k_food, LU_j_food, LU_i_food, LU_p_food, \\\n DI_k_food, DI_j_food, DI_i_food, DI_p_food,\\\n NI_k_food, NI_j_food, NI_i_food, NI_p_food\n\n BR_k_food, BR_j_food, BR_i_food, BR_p_food, \\\n LU_k_food, LU_j_food, LU_i_food, LU_p_food, \\\n DI_k_food, DI_j_food, DI_i_food, DI_p_food, \\\n NI_k_food, NI_j_food, NI_i_food, NI_p_food = MealBring(DAY)\n\n SetDefaultMealType(self)\n\n # def SettingAutoLogin(self):\n # pwd = getpass.getpass(prompt='Password: ')\n # df = pd.read_excel('AutoSetting.xlsx', Password= pwd)\n # print(df)\n\n def ConnectMealType(self):\n mealtime = self.comboBox_mealtype.currentText()\n if mealtime == \"조식\":\n self.label_k.setText(BR_k_food)\n self.label_j.setText(BR_j_food)\n self.label_c.setText(BR_i_food)\n self.label_p.setText(BR_p_food)\n\n 
self.label_k.setFont(QtGui.QFont(\"Arial\",8))\n self.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n self.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n self.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n if mealtime == \"중식\":\n self.label_k.setText(LU_k_food)\n self.label_j.setText(LU_j_food)\n self.label_c.setText(LU_i_food)\n self.label_p.setText(LU_p_food)\n\n self.label_k.setFont(QtGui.QFont(\"Arial\",8))\n self.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n self.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n self.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n if mealtime == \"석식\":\n self.label_k.setText(DI_k_food)\n self.label_j.setText(DI_j_food)\n self.label_c.setText(DI_i_food)\n self.label_p.setText(DI_p_food)\n\n self.label_k.setFont(QtGui.QFont(\"Arial\",8))\n self.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n self.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n self.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n if mealtime == \"야식\":\n self.label_k.setText(NI_k_food)\n self.label_j.setText(NI_j_food)\n self.label_c.setText(NI_i_food)\n self.label_p.setText(NI_p_food)\n\n self.label_k.setFont(QtGui.QFont(\"Arial\",8))\n self.label_j.setFont(QtGui.QFont(\"Arial\", 8))\n self.label_c.setFont(QtGui.QFont(\"Arial\", 6))\n self.label_p.setFont(QtGui.QFont(\"Arial\", 6))\n\n def ConnectHRDnet(self):\n driver = webdriver.Chrome()\n url = \"https://www.hrd.go.kr/hrdp/pa/ppaho/PPAHO0100T.do\"\n driver.get(url)\n driver.find_element_by_xpath(\"//input[@id='userloginId']\").send_keys(HRD_ID)\n driver.find_element_by_xpath(\"//input[@id='userloginPwd']\").send_keys(HRD_PW)\n driver.find_element_by_xpath(\"//button[@id='loginBtn']\").click()\n time.sleep(1)\n driver.find_element_by_xpath(\"//div[@class='qnaQ']\").click()\n time.sleep(1)\n driver.find_element_by_xpath(\"//button[contains(text(),'학습활동서 작성')]\").click()\n driver.find_element_by_xpath(f\"//tbody/tr[{SearchMonthBtn(today_Mon)}]/td[4]/button[1]\").click()\n\n def ConnectOJT(self):\n driver = webdriver.Chrome()\n 
url = \"https://biz.amkor.co.kr/ojt/ojt_login.jsp\"\n driver.get(url)\n driver.find_element_by_xpath(\"//input[@name='inputId']\").send_keys(OJT_ID)\n driver.find_element_by_xpath(\"//input[@name='inputPwd']\").send_keys(OJT_PW)\n driver.find_element_by_xpath(\"//img[@src='../img/ojt-img/btn.png']\").click()\n driver.find_element_by_xpath(\"//input[@value='클릭']\").click()\n driver.find_element_by_xpath(\"//input[@value='편집']\").click()\n\n def ConnectAmkorIntranet(self):\n driver = webdriver.Chrome()\n url = \"https://intranet.amkor.co.kr/index.jsp\"\n driver.get(url)\n\n driver.find_element_by_xpath(\"//input[@name='input_id']\").send_keys(OJT_ID)\n driver.find_element_by_xpath(\"//input[@type='password']\").send_keys(OJT_PW)\n driver.find_element_by_xpath(\"//input[@type='image']\").click()\n\n ClosePopUp(driver)\n\n def ConnectAmkorPMS(self):\n driver = webdriver.Chrome()\n url = \"https://pms.amkor.co.kr\"\n driver.get(url)\n\n driver.find_element_by_xpath(\"//input[@name='loginId']\").send_keys(PMS_ID)\n driver.find_element_by_xpath(\"//input[@id='password']\").send_keys(PMS_PW)\n driver.find_element_by_xpath(\"//a[contains(text(),'로그인')]\").click()\n\n ClosePopUp(driver)\n\n\nif __name__ == '__main__':\n app = QApplication(sys.argv)\n app.setStyle('Fusion')\n myWindow = MyWindow()\n myWindow.show()\n app.exec_()\n"
},
{
"alpha_fraction": 0.5352563858032227,
"alphanum_fraction": 0.5608974099159241,
"avg_line_length": 23.076923370361328,
"blob_id": "d79443ec71feabeecb6c35b7f757a4fa747099cf",
"content_id": "e21cce3fa482e27b10194716caddbe4cfd715d2e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 312,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 13,
"path": "/pythonProject(programmers)/Etc/plusminus.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(absolutes, signs):\n\n answer = sum(absolutes)\n for i in range(len(signs)):\n if signs[i] == False:\n answer -= absolutes[i]*2\n\n print(answer)\n return answer\n\nif __name__ == '__main__':\n solution([4,7,12], [True, False, True])\n solution([1,2,3], [False, False, True])"
},
{
"alpha_fraction": 0.49345794320106506,
"alphanum_fraction": 0.5214953422546387,
"avg_line_length": 29.514286041259766,
"blob_id": "c10bd75f45d3788921f0ca80666dbfe827674261",
"content_id": "5fa5b4f5ef6e326c27e0cd58981ba7352eeef865",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1070,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 35,
"path": "/pythonProject(programmers)/Stack,Queue/printer.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(priorities, location):\n cnt = 0\n\n while(len(priorities)!=0):\n if priorities[0] != max(priorities) and location != 0:\n priorities.append(priorities.pop(0))\n location = location - 1\n\n if priorities[0] != max(priorities) and location == 0:\n priorities.append(priorities.pop(0))\n location = len(priorities)-1\n\n if priorities[0] == max(priorities) and location != 0:\n priorities.pop(0)\n location = location - 1\n cnt = cnt+1\n\n if priorities[0] == max(priorities) and location == 0:\n answer = location+1+cnt\n return answer\n\n# def solution2(priorities, location):\n# queue = [(i,p) for i,p in enumerate(priorities)]\n# answer = 0\n# while True:\n# cur = queue.pop(0)\n# if any(cur[1] < q[1] for q in queue):\n# queue.append(cur)\n# else:\n# answer += 1\n# if cur[0] == location:\n# return answer\n\nif __name__ == '__main__':\n solution([2,1,3,2], 2)\n\n\n"
},
{
"alpha_fraction": 0.5359223484992981,
"alphanum_fraction": 0.537864089012146,
"avg_line_length": 27.66666603088379,
"blob_id": "d28e5358c5af5f53752498a7aebff128ce628377",
"content_id": "9e656276c3d7a1069cfb9e831ba138098dcc87b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 515,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 18,
"path": "/pythonProject(programmers)/Hash/CantFinishingRun.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(participant, completion):\n p = dict()\n hash_sum = 0\n\n for i in participant:\n p[hash(i)] = i\n hash_sum += hash(i)\n\n for i in completion:\n hash_sum -= hash(i)\n\n answer = p[hash_sum]\n return answer\n\nif __name__ == '__main__':\n solution([\"leo\", \"kiki\", \"eden\"],[\"eden\", \"kiki\"])\n solution([\"marina\", \"josipa\", \"nikola\", \"vinko\", \"filipa\"], [\"josipa\", \"filipa\", \"marina\", \"nikola\"])\n solution([\"mislav\", \"stanko\", \"mislav\", \"ana\"],[\"stanko\", \"ana\", \"mislav\"])"
},
{
"alpha_fraction": 0.41814160346984863,
"alphanum_fraction": 0.4712389409542084,
"avg_line_length": 18.69565200805664,
"blob_id": "a3ac2a32250ecabd67b5cfd936f61c8acc3efcaa",
"content_id": "163ac4f307905d0526f6b1c5215bea626e8ef9f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 452,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 23,
"path": "/pythonProject(book_practice problem)/92p_big_num_rule.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(nmk, list):\n result = 0\n list.sort(reverse=True)\n print(list)\n cnt = nmk[2]\n for i in range(nmk[1]):\n if cnt !=0:\n result = result + list[0]\n\n else:\n result = result + list[1]\n cnt = nmk[2]\n\n cnt = cnt - 1\n\n print(result, cnt, i)\n\n print(result)\n return result\n\nif __name__ == '__main__':\n solution([5,8,3],[2,4,5,4,6])\n solution([5,7,2], [3,4,3,4,3])"
},
{
"alpha_fraction": 0.546558678150177,
"alphanum_fraction": 0.5654520988464355,
"avg_line_length": 29.91666603088379,
"blob_id": "121885fa029f29d13b3ca644997db7b1adcebc76",
"content_id": "4c127bc824c061e6ddfff7647901c49f726ad0ce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 741,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 24,
"path": "/pythonProject(programmers)/Greedy/joystick.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(name):\n answer = 0\n name_list = list(name)\n alphabet = list(\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\")\n\n X_idx = [i for i in range(len(name_list)) if not 'A' in name_list[i]]\n A_idx = [i for i in range(len(name_list)) if 'A' in name_list[i]]\n\n for i in name_list:\n answer += min(alphabet.index(i), 25 - alphabet.index(i) + 1)\n print(i, answer)\n\n if A_idx != [] and A_idx[-1] > X_idx[-1]: answer += min(X_idx[-1], A_idx[-1] - X_idx[0] + 1)\n else: answer += min(X_idx[-1], X_idx[-1] - X_idx[1] + 1)\n print(name_list, answer)\n return answer\n\nif __name__ == '__main__':\n solution(\"JEROEN\")\n solution(\"JAEAE\")\n solution(\"JAN\")\n solution(\"JAZ\")\n solution(\"AAABBA\")\n solution(\"ZZAAAZZ\")"
},
{
"alpha_fraction": 0.5379580855369568,
"alphanum_fraction": 0.5497382283210754,
"avg_line_length": 29.559999465942383,
"blob_id": "6bb351a94e92bd3235b1a8fcd4dd9a3dccc1a440",
"content_id": "75dc66569082c54ee68b85a38d9d3089f376174f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 764,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 25,
"path": "/pythonProject(programmers)/Hash/camouflage.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "from collections import defaultdict\ndef solution(clothes):\n answer = 0\n p = defaultdict(list)\n\n for cloth in clothes:\n p[cloth[1]].append(cloth[0])\n\n if len(p) == 1:\n answer = len(p[clothes[0][1]])\n\n else:\n temp = 1\n for key in p.keys():\n temp *= len(p[key])+1\n answer += temp - 1\n\n return answer\n\nif __name__ == '__main__':\n solution([[\"yellowhat\", \"headgear\"], [\"bluesunglasses\", \"eyewear\"], [\"sunglasses\", \"eyewear\"], [\"green_turban\", \"headgear\"],\n [\"blond\",\"hair\"], [\"black\",\"hair\"]])\n\n solution([[\"yellowhat\", \"headgear\"], [\"bluesunglasses\", \"eyewear\"], [\"green_turban\", \"headgear\"]])\n solution([[\"crowmask\", \"face\"], [\"bluesunglasses\", \"face\"], [\"smoky_makeup\", \"face\"]])\n"
},
{
"alpha_fraction": 0.4510067105293274,
"alphanum_fraction": 0.47785234451293945,
"avg_line_length": 25.625,
"blob_id": "57dc35a58f9cb100cccb6d5fb720a0bb03b43e06",
"content_id": "22e9cec29f9259a7cd09fb1f648baa9070578754",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1558,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 56,
"path": "/pythonProject(programmers)/Etc/OpenChattingRoom.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(record):\n answer = []\n Command = []\n Uid = []\n Name = []\n\n length = len(record)\n ID_NAME = {}\n ID_COMMAND = {}\n\n for i in record:\n temp = i.split()\n Command.append(temp[0])\n Uid.append(temp[1])\n if len(temp) >= 3: Name.append(temp[2])\n else: Name.append(\"\")\n\n for i in range(length):\n ID_COMMAND[Uid[i]] = Command[i]\n if ID_COMMAND[Uid[i]] == 'Leave': pass\n else: ID_NAME[Uid[i]] = Name[i]\n\n for i in range(length):\n if Command[i][0] == 'E': answer.append(f\"{ID_NAME[Uid[i]]}님이 들어왔습니다.\")\n if Command[i][0] == 'L': answer.append(f\"{ID_NAME[Uid[i]]}님이 나갔습니다.\")\n\n print(answer)\n return answer\n\n# def solution(record):\n# answer = []\n# ID_NAME = {}\n#\n# for i in record:\n# temp = i.split()\n# if temp[0] in ['Change', 'Enter']:\n# ID_NAME[temp[1]] = temp[2]\n#\n# for i in record:\n# if i.split()[0] == 'Enter': answer.append(f\"{ID_NAME[i.split()[1]]}님이 들어왔습니다.\")\n# if i.split()[0] == 'Leave': answer.append(f\"{ID_NAME[i.split()[1]]}님이 나갔습니다.\")\n#\n# print(answer)\n# return answer\n\nif __name__ == '__main__':\n solution([\"Enter 123 A\",\n \"Leave 123\",\n \"Enter uid1234 B\",\n \"Enter uid12345 C\",\n \"Leave uid1234\",\n \"Leave uid12345\",\n \"Enter uid123 AB\",\n \"Enter ABC 광우\",\n \"Change ABC 현주\",\n \"Leave ABC\"])"
},
{
"alpha_fraction": 0.42507803440093994,
"alphanum_fraction": 0.49531736969947815,
"avg_line_length": 31.576271057128906,
"blob_id": "ddba98e95386df69ee63833d4924667a8acc72e2",
"content_id": "ceb9373b9f53d5e411978ef230c0ec60169dfad5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1922,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 59,
"path": "/pythonProject(programmers)/Etc/2_week.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(scores):\n temp_array = []\n temp = [[0 for col in range(len(scores))] for row in range(len(scores))]\n result = \"\"\n\n ##inverse row and column\n for i in range(len(scores)):\n for j in range(len(scores)):\n temp[i][j] = scores[j][i]\n\n ##find if the same min or max number\n for i in range(len(temp)):\n cnt = 0\n\n #if temp[i][i] is max or min number\n if temp[i][i] == min(temp[i]) or temp[i][i] == max(temp[i]):\n cnt = cnt + 1\n\n # if not only one max or min num\n for j in range(0, len(temp)):\n if temp[i][i] == temp[i][j] and j != i:\n cnt = 0\n\n # mark if temp[i][i] is only min or max number\n if cnt == 1: temp[i][i] = 101\n\n ##calculate average\n for i in range(len(temp)):\n cnt = 0\n sum = 0\n\n #if temp[i][i] is marked\n if temp[i][i] == 101:\n cnt = cnt + 1\n for j in range(len(temp)):\n sum = sum + temp[i][j]\n\n #else temp[i][i] is unmarked\n else:\n for j in range(len(temp)):\n sum = sum + temp[i][j]\n\n temp_array.append((sum-101*cnt)/(len(temp)-cnt))\n\n ##give grade to each average\n for i in range(len(temp_array)):\n if temp_array[i] >= 90: result = result + 'A'\n if temp_array[i] >= 80 and temp_array[i] < 90: result = result + 'B'\n if temp_array[i] >= 70 and temp_array[i] < 80: result = result + 'C'\n if temp_array[i] >= 50 and temp_array[i] < 70: result = result + 'D'\n if temp_array[i] < 50: result = result + 'F'\n\n return result\n\nif __name__ == '__main__':\n solution([[100,90,98,88,65],[50,45,99,85,77],[47,88,95,80,67],[61,57,100,80,65],[24,90,94,75,65]])\n solution([[50,90],[50,87]])\n solution([[70,49,90],[68,50,38],[73,31,100]])\n solution([[75, 50, 100], [75, 100, 20], [100, 100, 20]])\n"
},
{
"alpha_fraction": 0.46109509468078613,
"alphanum_fraction": 0.5187320113182068,
"avg_line_length": 22.200000762939453,
"blob_id": "dc06d96237f0f15e65ad03284f30422c6ddb7ddb",
"content_id": "f5cf20ba0a80039986f9b887c2d947a0a5f4c3e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 347,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 15,
"path": "/pythonProject(programmers)/Etc/IntegerArrayDividedPerfectly.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(arr, divisor):\n answer = []\n arr.sort()\n for i in arr:\n if arr[-1] < divisor: break\n if i % divisor == 0: answer.append(i)\n if answer == []: answer.append(-1)\n\n print(answer)\n return answer\n\nif __name__ == \"__main__\":\n solution([5, 9, 7, 10],5)\n solution([2, 36, 1, 3],1)\n solution([3,2,6],10)"
},
{
"alpha_fraction": 0.4307524561882019,
"alphanum_fraction": 0.47546347975730896,
"avg_line_length": 23.157894134521484,
"blob_id": "8c90942dd96a6358d8fe9a9a2bd30cfa4729aba8",
"content_id": "9dc3215aab5e4a9b8bcd5c1bacd273d116ab7075",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 917,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 38,
"path": "/pythonProject(programmers)/Stack,Queue/function_develop.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(progresses, speeds):\n answer = []\n temp_list = []\n\n ##calculate distribution date\n for i in range(len(progresses)):\n temp_list.append(100-progresses[i])\n if temp_list[i] % speeds[i] != 0:\n temp_list[i] = temp_list[i]//speeds[i]\n temp_list[i] +=1\n\n else:\n temp_list[i] = temp_list[i] // speeds[i]\n\n cnt = 0\n max_temp = temp_list[0]\n i = 0\n\n #calculate answer(=return)\n while(i!=len(temp_list)):\n if temp_list[i] <= max_temp:\n cnt += 1\n if i == len(temp_list)-1:\n answer.append(cnt)\n\n if temp_list[i] > max_temp:\n max_temp = temp_list[i]\n answer.append(cnt)\n cnt = 0\n i -=1\n\n i += 1\n\n return answer\n\nif __name__ == '__main__':\n solution([93,30,55],[1,30,5])\n solution([95, 90, 99, 99, 80, 99], \t[1, 1, 1, 1, 1, 1])"
},
{
"alpha_fraction": 0.523809552192688,
"alphanum_fraction": 0.5817805528640747,
"avg_line_length": 36.230770111083984,
"blob_id": "df9eaa32947d2f7fa605e44da02670df7c582ed0",
"content_id": "de3ef1b415539b5937e493ce5cb7ba4af771d300",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 609,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 13,
"path": "/pythonProject(programmers)/Sort/biggest_number.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(numbers):\n #0. key point\n numbers_str = [str(num) for num in numbers] #1. 사전 값으로 정렬하기\n numbers_str.sort(key=lambda num: num*3, reverse=True) #2. number는 1000이하의 숫자이므로 x3(반복)한 값으로 비교\n\n return str(int(''.join(numbers_str)))\n # 만약 numbers=[0,0,0,0] 이라면 0 이 나와야 한다.\n # join한 값을 int로 만들어 준 후 원하는 return값이 str이기 때문에 다시 str로 변환한다.\n\nif __name__ == '__main__':\n solution([6, 10, 2])\n solution([3, 30, 34, 5, 9])\n solution([0,0,0])"
},
{
"alpha_fraction": 0.5187353491783142,
"alphanum_fraction": 0.5480093955993652,
"avg_line_length": 24.117647171020508,
"blob_id": "01bddfbd1746d6a0339aeba6d270790679a33774",
"content_id": "eddc20f1c975865df9d4f03fe2207807585f2ec4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 854,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 34,
"path": "/pythonProject(programmers)/Etc/124world.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(n):\n temp = n\n remainder_list = []\n\n ##make remainder list\n while(temp!=0):\n remainder_list.append(temp%3)\n temp = temp//3\n\n remainder_list.reverse()\n length = len(remainder_list)\n\n ##convert 124world_number\n for i in range(length-1, -1, -1):\n if i!=len(remainder_list)-1 and remainder_list[i+1] <= 0 :\n remainder_list[i] -= 1\n\n if remainder_list[0] == 0:\n remainder_list.remove(remainder_list[0])\n\n for i in range(len(remainder_list)):\n if remainder_list[i] == 0:\n remainder_list[i] = 4\n\n if remainder_list[i] == -1:\n remainder_list[i] = 2\n\n ##convert array to string\n answer = ''.join(list(map(str, remainder_list)))\n return answer\n\nif __name__ == '__main__':\n for i in range(1, 1000):\n solution(i)\n"
},
{
"alpha_fraction": 0.48411017656326294,
"alphanum_fraction": 0.5148305296897888,
"avg_line_length": 23.230770111083984,
"blob_id": "dc126bbdde0b7de08102dbddf0180004e8539684",
"content_id": "a12a1fcd8831a47ee8c7837da4936f141ec35cd6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 948,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 39,
"path": "/pythonProject(programmers)/BFS,DFS/TargetNumber.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "#BFS\ndef solution(numbers, target):\n answer = 0\n result = [0]\n for number in numbers:\n temp_sum = []\n for res in result:\n temp_sum.append(res + number)\n temp_sum.append(res - number)\n result = temp_sum\n\n for i in result:\n if i == target: answer += 1\n\n print(answer)\n return answer\n\n# #DFS 풀이\n# def solution(numbers, target):\n# answer = DFS(numbers, target, 0)\n# return answer\n#\n# def DFS(numbers, target, depth):\n# answer = 0\n# if depth == len(numbers):\n# print(numbers)\n# if sum(numbers) == target:\n# return 1\n# else: return 0\n# else:\n# answer += DFS(numbers, target, depth+1)\n# numbers[depth] *= -1\n# answer += DFS(numbers, target, depth+1)\n# return answer\n\nif __name__ == '__main__':\n solution([1, 1, 1, 1, 1], 3)\n solution([1, 1, 1, 1, 1, 1], 2)\n solution([3, 2, 1, 5, 4], 5)"
},
{
"alpha_fraction": 0.4728434383869171,
"alphanum_fraction": 0.5335463285446167,
"avg_line_length": 21.428571701049805,
"blob_id": "c109e16d915a28ecac1940c63bd786d62dbf4361",
"content_id": "9bad3863ab02f56e526605d6b390ac9815f8188b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 313,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 14,
"path": "/pythonProject(programmers)/Etc/ponkemon.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(nums):\n answer = 0\n uni_nums = list(set(nums))\n\n if len(nums)//2 > len(uni_nums): answer = len(uni_nums)\n else: answer = len(nums)//2\n\n print(answer)\n return answer\n\nif __name__ == '__main__':\n solution([3,1,2,3])\n solution([3, 3, 3, 2, 2, 4])\n solution([3, 3, 3, 2, 2, 2])"
},
{
"alpha_fraction": 0.4886535406112671,
"alphanum_fraction": 0.534039318561554,
"avg_line_length": 25.440000534057617,
"blob_id": "240a35174db7534da1e5b49b6fcb397c3bdeedbd",
"content_id": "1c741693f9d889cc47c2bdfc5183040c549380b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 661,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 25,
"path": "/pythonProject(programmers)/Hash/FailureRate.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(N, stages):\n answer = []\n FRbyStage = dict()\n denom = len(stages)\n info = [0]*(N+2)\n\n for stage in stages:\n info[stage] +=1\n\n for i in range(1, N+1):\n moleclue = info[i]\n #if no one reached i-stage\n if denom == 0: FRbyStage[i] = 0\n else:\n FRbyStage[i] = moleclue/denom\n denom -= moleclue\n\n #sort by dict(hash)_value\n for item in sorted(FRbyStage.items(), key=lambda value: value[1], reverse=True): answer.append(item[0])\n return answer\n\nif __name__ == \"__main__\":\n solution(5, [2, 1, 2, 6, 2, 4, 3, 3])\n solution(4, [4,4,4,4,4])\n solution(2, [1,1,1,1,1])\n"
},
{
"alpha_fraction": 0.4110824763774872,
"alphanum_fraction": 0.4536082446575165,
"avg_line_length": 26.75,
"blob_id": "5f30aa23c9c57d2bd1a72662011d16b9faa5a206",
"content_id": "4e6a077e0af6a34c3b912296e36f596551dd553d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 814,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 28,
"path": "/pythonProject(programmers)/Etc/answertest.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(record):\n answer = []\n namespace = {}\n printer = {'Enter':'님이 들어왔습니다.', 'Leave':'님이 나갔습니다.'}\n for r in record:\n rr = r.split(' ')\n if rr[0] in ['Enter', 'Change']:\n namespace[rr[1]] = rr[2]\n\n print(namespace)\n for r in record:\n if r.split(' ')[0] != 'Change':\n answer.append(namespace[r.split(' ')[1]] + printer[r.split(' ')[0]])\n\n print(answer)\n return answer\n\nif __name__ == '__main__':\n solution([\"Enter 123 A\",\n \"Leave 123\",\n \"Enter uid1234 B\",\n \"Enter uid12345 C\",\n \"Leave uid1234\",\n \"Leave uid12345\",\n \"Enter uid123 AB\",\n \"Enter ABC 광우\",\n \"Change ABC 현주\",\n \"Leave ABC\"])"
},
{
"alpha_fraction": 0.4971123933792114,
"alphanum_fraction": 0.5037760734558105,
"avg_line_length": 22.957447052001953,
"blob_id": "c0cf8ce6cd6e023fc7083e67d809547c4a8c1d41",
"content_id": "0a2deee90e8fb4eada74ed53c997a32ea9edb0a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2251,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 94,
"path": "/pythonProject(Programmers)/BFS,DFS/test.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "from collections import deque, defaultdict\nimport copy\ndef CalcStrSubLen(cp_word, word):\n a_list = list(cp_word)\n b_list = list(word)\n\n ap = {}\n bp = {}\n\n idx = 0\n for a in a_list:\n ap[idx] = a\n idx += 1\n\n idx = 0\n for b in b_list:\n bp[idx] = b\n idx += 1\n\n temp_key = []\n for a_key, a_value in ap.items():\n for b_key, b_value in bp.items():\n if b_key == a_key and b_value == a_value:\n temp_key.append(a_key)\n\n return len(a_list)-len(temp_key)\n\ndef MakeGraph(words):\n graph = defaultdict(list)\n cp_words = copy.deepcopy(words)\n for word in words:\n for cp_word in cp_words:\n if CalcStrSubLen(cp_word, word) == 1: #\n graph[word].append(cp_word)\n print(graph)\n return graph\n\ndef DFS(begin, target, graph):\n visited = []\n stack = deque([begin])\n\n step = 0\n while stack:\n if target in stack:\n visited.append(target)\n step += 1\n break\n else: n = stack.pop()\n\n if n not in visited:\n visited.append(n)\n stack.extend(graph[n])\n step += 1\n if visited[-1] == target: break\n print(visited)\n\n visited = []\n stack = deque([begin])\n step_r = 0\n while stack:\n if target in stack:\n visited.append(target)\n step_r += 1\n break\n else:\n n = stack.pop()\n\n if n not in visited:\n visited.append(n)\n stack.extend(reversed(graph[n]))\n step_r += 1\n if visited[-1] == target: break\n\n print(visited)\n return min(step_r, step) - 1\n\ndef solution(begin, target, words):\n if target not in words:\n answer = 0\n print(answer)\n return answer\n\n words.append(begin)\n graph = MakeGraph(words)\n answer = DFS(begin, target, graph)\n\n print(answer)\n return answer\n\nif __name__ == '__main__':\n solution(\"hit\", \"cog\", [\"hot\", \"dot\", \"dog\", \"lot\", \"cog\"])\n # solution(\"hit\", \"cog\", [\"hot\", \"dot\", \"dog\", \"lot\", \"log\"])\n # solution(\"hit\", \"lzt\", [\"hot\", \"dot\", \"dog\", \"lot\", \"zot\", \"lzt\"])\n # solution(\"hit\", \"cog\", [\"hot\", \"dot\", \"dog\", \"tog\", \"vog\", \"lot\", \"cog\"])"
},
{
"alpha_fraction": 0.5113636255264282,
"alphanum_fraction": 0.5265151262283325,
"avg_line_length": 20.37837791442871,
"blob_id": "ecd8ee4baa0e7720736d390eda7721cb009d49ab",
"content_id": "00fdb3ea00dcb45ba89e003824417d8f063f25c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 792,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 37,
"path": "/pythonProject(programmers)/BFS,DFS/DFS,BFS.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "#baekjoon #1260\nfrom collections import deque, defaultdict\ndef BFS(graph, root):\n visited = []\n queue = deque([root])\n\n while queue:\n n = queue.popleft()\n if n not in visited:\n visited.append(n)\n queue.extend(graph[n])\n # print(queue, graph[n], visited)\n\n answer = visited\n return answer\n\ndef DFS(graph, root):\n visited = []\n stack = deque([root])\n\n while stack:\n n = stack.pop()\n if n not in visited:\n visited.append(n)\n stack.extend(reversed(graph[n]))\n\n answer = visited\n return answer\n\nif __name__ == '__main__':\n graph = defaultdict(list)\n graph = {1: [2],\n 2: [1,3],\n 3: [2]}\n root = 1\n print(DFS(graph, root))\n #print(BFS(graph, root))\n\n"
},
{
"alpha_fraction": 0.49028077721595764,
"alphanum_fraction": 0.5485960841178894,
"avg_line_length": 32.07143020629883,
"blob_id": "f851460bbe76da45f3402a3f1d4f3540a3405ad1",
"content_id": "f6215f946d3ca47ec05ecd9df2b1da2520953f66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 497,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 14,
"path": "/pythonProject(programmers)/Sort/k_number.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(array, commands):\n answer = []\n for command in commands:\n i, j, k = command[0], command[1], command[2]\n slice=array[i-1:j] #j는 -1 안해주는 이유는 끝에 하나를 안쳐준다.\n slice.sort()\n answer.append(slice[k-1])\n return answer\n\n# def solution(array, commands):\n# return list(map(lambda x:sorted(array[x[0]-1:x[1]])[x[2]-1], commands))\n\nif __name__ == '__main__':\n solution([1,5,2,6,3,7,4], [[2, 5, 3], [4, 4, 1], [1, 7, 3]])\n"
},
{
"alpha_fraction": 0.3160956799983978,
"alphanum_fraction": 0.3775048553943634,
"avg_line_length": 24.78333282470703,
"blob_id": "b111728ca02a1ecd50c9ce47d9f9ed7c66b8b411",
"content_id": "d13918c195471222d085154268ba2c4d29e94bfe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1681,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 60,
"path": "/pythonProject(programmers)/Greedy/gym_clothes.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(n, lost, reserve):\n answer = 0\n lost.sort()\n reserve.sort()\n\n #n_list = [1, 2, 3, 4, 5]\n n_list = []\n for i in range(n):\n n_list.append(i+1)\n\n ##n_list = [2, 0, 2, 0, 2] #lost = [2, 4] #reverse = [1, 3, 5]\n for i in range(n):\n n_list[i] = 1\n for j in range(len(lost)):\n if i+1 == lost[j]:\n n_list[i] -= 1\n\n for j in range(len(reserve)):\n if i+1 == reserve[j]:\n n_list[i] += 1\n\n #체육복 빌리는 과정\n for i in range(n):\n if n_list[i] == 0:\n #인덱스가 0일때\n if i == 0:\n if n_list[i+1] == 2:\n n_list[i+1] -= 1\n n_list[i] += 1\n\n #인덱스가 마지막일때\n if i == n-1:\n if n_list[i-1] == 2:\n n_list[i - 1] -= 1\n n_list[i] += 1\n\n #인덱스가 중간이고 이전 숫자가 2일때, 뒤에는 0->1되면 더 안 더해주도록 처리\n if i!=0 and n_list[i-1] == 2 and n_list[i] == 0:\n n_list[i-1] -= 1\n n_list[i] += 1\n\n #인덱스가 중간이고 다음 숫자가 2일때,\n if i!=n-1 and n_list[i+1] == 2 and n_list[i] == 0:\n n_list[i+1] -= 1\n n_list[i] += 1\n\n for n_number in n_list:\n if n_number >= 1:\n answer += 1\n\n return answer\n\n\nif __name__ == '__main__':\n solution(5, [2,4,5], [1, 3])\n solution(5, [2,4], [1,3,5])\n solution(5, [2, 4], [3])\n solution(3, [3], [1] )\n solution(10, [8,10],[6,7,9])\n solution(10, [5,4,3,2,1], [3,1,2,5,4])\n"
},
{
"alpha_fraction": 0.5020026564598083,
"alphanum_fraction": 0.5420560836791992,
"avg_line_length": 21.727272033691406,
"blob_id": "251f65ee2fbff2e57218bb4345ed59a07a7db2c6",
"content_id": "33eeac3c2ae4436dac5567f992906c75eb764f24",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 749,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 33,
"path": "/pythonProject(programmers)/Sort/h_index.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(citations):\n h_index_array = []\n h_index = max(citations)\n\n if h_index == 0: return h_index\n\n while(h_index!=0):\n cnt_high = 0\n\n for j in range(len(citations)):\n if citations[j] >= h_index :\n cnt_high = cnt_high+1\n\n if cnt_high >= h_index:\n h_index_array.append(h_index)\n\n h_index = h_index - 1\n\n h_index = max(h_index_array)\n return h_index\n\n# def solution(citations):\n# citations.sort(reverse=True)\n# answer = max(map(min, enumerate(citations, start=1)))\n# return answer\n\nif __name__ == '__main__':\n solution([3,0,6,1,5])\n solution([2,1,1,1,0])\n solution([0,1])\n solution([2,2,2])\n solution([41,42,24])\n solution([0,0,0])"
},
{
"alpha_fraction": 0.4166020154953003,
"alphanum_fraction": 0.425911545753479,
"avg_line_length": 23.320755004882812,
"blob_id": "3d95faa359fcbc1a40bc8c77067ca78e1c833821",
"content_id": "8d283a0532ece3dbe4e6d35ca5ec1e838a25fc41",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1289,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 53,
"path": "/pythonProject(programmers)/Etc/string_compression.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(s):\n #split list\n s_list = [[] for j in range(len(s))]\n for cnt in range(1, len(s)+1):\n i = 0\n while(i < len(s)):\n s_list[cnt-1].append(s[i:i+cnt])\n i += cnt\n\n answer_list = []\n\n #compress\n c_list = [[] for j in range(len(s_list))]\n for i in range(len(s_list)):\n j = 0\n while(j != len(s_list[i])):\n k = j\n cnt = 0\n\n while(k != len(s_list[i])):\n if s_list[i][j] == s_list[i][k]:\n cnt += 1\n\n if s_list[i][j] != s_list[i][k]:\n break\n\n k +=1\n\n if cnt > 1:\n c_list[i].append(str(cnt)+s_list[i][j])\n j += cnt\n\n else:\n if j > len(s_list[i])-1 : j = len(s_list[i])-1\n c_list[i].append(s_list[i][j])\n j += 1\n\n # combine c_list\n temp = \"\"\n for c in range(len(c_list[i])):\n temp += c_list[i][c]\n\n answer_list.append(len(temp))\n\n answer = min(answer_list)\n return answer\n\nif __name__ == '__main__':\n solution(\"aabbaccc\")\n solution(\"ababcdcdababcdcd\")\n solution(\"abcabcdede\")\n solution(\"abcabcabcabcdededededede\")\n solution(\"xababcdcdababcdcd\")\n"
},
{
"alpha_fraction": 0.49003320932388306,
"alphanum_fraction": 0.5232558250427246,
"avg_line_length": 22.19230842590332,
"blob_id": "07152442c55d55e923509d5e569f381c99e3e8b2",
"content_id": "9b05ca7c2f95a4eb772c77d2f08a3b84e94ada2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 602,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 26,
"path": "/pythonProject(programmers)/Stack,Queue/priceofstock.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(prices):\n answer = []\n temp = prices.pop(0)\n\n flag = 0\n while(prices!=[]):\n print(temp, prices, answer)\n if temp <= min(prices):\n answer.append(len(prices))\n\n if temp > max(prices):\n answer.append(prices.index(max(prices))+1)\n flag = 1\n\n if flag != 1 and temp > min(prices):\n answer.append(prices.index(min(prices))+1)\n temp = prices.pop(0)\n\n answer.append(0)\n print(answer)\n return answer\n\nif __name__ == '__main__':\n solution([1,2,3,2,3])\n solution([1,1,1,1])\n solution([3,2,1])"
},
{
"alpha_fraction": 0.41790392994880676,
"alphanum_fraction": 0.5113537311553955,
"avg_line_length": 23.36170196533203,
"blob_id": "69e87350519fa1be3536489c311e0e076f0a77c9",
"content_id": "fcb450729ae2ba7bdaa45ba968e70f6b305a4f25",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2390,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 94,
"path": "/pythonProject(programmers)/Etc/test.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(number, k):\n answer = ''\n num_dict = {}\n num_before = []\n num_after = []\n\n for i in range(len(number)): num_dict[i] = number[i]\n max_pos = max(num_dict, key=num_dict.get)\n\n # 앞대가리 손질에서 끝남\n for key, value in num_dict.items():\n if key < max_pos:\n num_before.append((key, value))\n else:\n num_after.append(value)\n\n num_before.sort(key=lambda value: value[1])\n while k:\n if num_before == [] and k > 0: break\n num_before.pop(0)\n k -= 1\n num_before.sort(key=lambda value: value[0])\n\n if k == 0:\n for key, i in num_before: answer += i\n for i in num_after: answer += i\n\n print(str(int(answer)))\n return answer\n\n # 뒷대가리까지 손질 시작\n print(num_after)\n\n f_len = len(num_after)\n flag = 0\n while k:\n if i == len(number)-1:\n if len(num_after) == f_len:\n flag = 1\n break\n else:\n i = 0\n\n if num_after[i] < num_after[i + 1]:\n num_after.pop(i)\n k -= 1\n i += 1\n\n # 뒷대가리 손질하다 맨마지막까지 손질 안되서 여리로 옴\n if flag == 1:\n key = 0\n temp = []\n for i in num_after:\n temp.append((key, i))\n key += 1\n temp = sorted(temp, key=lambda value: value[1])\n while k:\n temp.pop(0)\n k -= 1\n temp = sorted(temp, key=lambda value: value[0])\n for key, i in temp: answer += i\n print(str(int(answer)))\n return answer\n\n # 뒷대가리 손질 끝\n for i in num_after: answer += i\n print(str(int(answer)))\n return answer\n\n\nif __name__ == '__main__':\n solution(\"1924\", 2)\n solution(\"1231234\", 3)\n solution(\"417725241\", 4)\n solution(\"19763\", 2)\n solution(\"87654321\", 3)\n solution(\"99999111\", 3)\n solution(\"1111\", 3)\n solution(\"8999\", 3)\n solution(\"9999991\", 1)\n solution(\"00000000\",3)\n solution(\"11111112\", 3)\n solution(\"0000001\", 3)\n solution(\"3322993843984398\", 4)\n solution(\"8892299221\", 3)\n solution(\"12345678901234567890\", 19)\n solution(\"01010\", 3)\n solution(\"559913\", 2)\n solution(\"9191919\",1)\n solution(\"00100\", 2)\n solution(\"010\", 0)\n solution(\"1111\", 2)\n solution(\"10000\",2)\n solution(\"1000100011\", 5)\n"
},
{
"alpha_fraction": 0.4761904776096344,
"alphanum_fraction": 0.5052909851074219,
"avg_line_length": 18.894737243652344,
"blob_id": "46ed4e4332a6e3f75c852c153319994440709461",
"content_id": "057bdbcd78276589d76959a0925a0d8e66a0aacf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 378,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 19,
"path": "/pythonProject(programmers)/Stack,Queue/Price of Stock2.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "from collections import deque\n\ndef solution(prices):\n answer = []\n prices = deque(prices)\n\n while (len(prices) != 0):\n temp = prices.popleft()\n cnt = 0\n for i in prices:\n cnt += 1\n if i < temp: break\n\n answer.append(cnt)\n return answer\n\nif __name__ == '__main__':\n solution([1, 2, 3, 2, 3])\n solution([3,1,1])\n"
},
{
"alpha_fraction": 0.5694363713264465,
"alphanum_fraction": 0.5868681073188782,
"avg_line_length": 27.928571701049805,
"blob_id": "fdb629bf23a1b9a208938e66ff42b3080a07b111",
"content_id": "f1af9bd66b6759bc9d4729d635faf145e85a0981",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7098,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 238,
"path": "/MyProject/MakeBusinessCard/test.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "import datetime\nimport os\nimport sys\nimport pandas as pd\nfrom PIL import Image, ImageDraw, ImageFont\nfrom PyQt5.QtWidgets import *\nimport PyQt5.uic\nimport shutil\nfrom tqdm import tqdm\nfrom fpdf import FPDF\n\n#pyrcc5 -o MBC_res_rc.py MBC_res.qrc\n\nform_class = PyQt5.uic.loadUiType(\"MakeBC.ui\")[0]\ntoday = datetime.datetime.today().strftime('%Y%m%d')\npath = f'./image/{today}'\n\ndef SplitKrEn(string_):\n list_ = str(string_).split('/')\n return list_\n\ndef AddingMsg(message_, yPos, font, fontsize, draw):\n color = 'rgb(0, 0, 0)'\n malgun = ImageFont.truetype(font, fontsize)\n w, h = malgun.getsize(message_)\n xPos = (821 - w) / 2\n draw.text((xPos, yPos), message_, fill=color, font=malgun)\n\ndef TexttoIMG(name, position, department, location, tell, cell, email):\n width = 821\n height = 455\n\n image = Image.new('RGB', (width, height), (255, 255, 255))\n\n tempimg = Image.open(\"example_google.png\")\n tempidback = tempimg.resize((width, height), Image.NEAREST)\n tempidback.save('tempidback.png')\n tempidback = Image.open('tempidback.png')\n\n image.paste(tempidback)\n draw = ImageDraw.Draw(image)\n\n #split KR,EN\n name_list = SplitKrEn(name)\n position_list = SplitKrEn(position)\n department_list = SplitKrEn(department)\n location_list = SplitKrEn(location)\n\n # adding name\n (x, y) = (0, 200)\n\n username = name_list[0]\n img_username = ''\n if(len(username)==3):\n for i in range(len(username)):\n if(i==0):\n img_username = username[i]\n else:\n img_username += ' ' + username[i]\n message = img_username\n\n elif(len(username)==2):\n for i in range(len(username)):\n if(i==0):\n img_username = username[i]\n else:\n img_username += ' ' + username[i]\n message = img_username\n else:\n message = username\n\n start_Ypos = y\n font = 'malgunbd.ttf'\n fontsize = 30\n AddingMsg(message, start_Ypos, font, fontsize, draw)\n\n #adding department | position\n message = f\"{department_list[0]} | {position_list[0]}\"\n start_Ypos += 50\n font = 
'malgun.ttf'\n fontsize = 18\n AddingMsg(message, start_Ypos, font, fontsize, draw)\n\n #adding LABEL\n message = \"Google Inc™\"\n start_Ypos += 50\n font = 'malgunbd.ttf'\n fontsize = 18\n AddingMsg(message, start_Ypos, font, fontsize, draw)\n\n # adding location\n message = location_list[0]\n start_Ypos += 30\n font = 'malgun.ttf'\n fontsize = 18\n AddingMsg(message, start_Ypos, font, fontsize, draw)\n\n # adding tell&cell\n message = f\"tel {tell} | cell {cell}\"\n start_Ypos += 30\n font = 'malgun.ttf'\n fontsize = 18\n AddingMsg(message, start_Ypos, font, fontsize, draw)\n\n #adding email\n message = email\n start_Ypos += 30\n font = 'malgun.ttf'\n fontsize = 18\n AddingMsg(message, start_Ypos, font, fontsize, draw)\n\n image.save(f\"{path}/{name_list[0]}_front.png\")\n\ndef imageMerge():\n IMG_LIST = []\n FILE_LIST = os.listdir(path)\n\n for file in FILE_LIST:\n ext, ext_png = os.path.splitext(file)\n if ext[0:9] == 'mergedPDF':\n pass\n elif ext_png == '.png':\n IMG_LIST.append(file)\n\n TotalSize_x = 2463\n TotalSize_y = 1365\n Size_x = 821\n Size_y = 455\n Merge_image = []\n\n if len(IMG_LIST)%9: LEN = len(IMG_LIST)//9 + 1\n else: LEN = len(IMG_LIST)//9\n # print(LEN)\n\n for i in range(LEN):\n Merge_image.append(Image.new(\"RGB\", (TotalSize_x, TotalSize_y), (255, 255, 255)))\n print(Merge_image)\n\n file_no = 0\n cnt = 0\n for index in range(len(IMG_LIST)):\n if cnt == 9:\n Merge_image[file_no].save(f\"{path}/mergedPDF_{file_no}.png\", \"PNG\")\n file_no += 1\n cnt = 0\n\n Y_idx = index//3 - 3*file_no\n # print(index, Y_idx, file_no)\n merge_area = (((index%3) * Size_x), (Y_idx * Size_y), (((index%3)+1) *Size_x), ((Y_idx+1) * Size_y))\n PasteImage = Image.open(f\"{path}/{IMG_LIST[index]}\")\n Merge_image[file_no].paste(PasteImage, merge_area)\n if file_no == LEN-1:\n Merge_image[file_no].save(f\"{path}/mergedPDF_{file_no}.png\", \"PNG\")\n cnt +=1\n\ndef PdfConvert(directory):\n MERGED_IMG_LIST = []\n FILE_LIST = os.listdir(path)\n for file in 
FILE_LIST:\n ext = os.path.splitext(file)[0]\n if ext[0:9] == 'mergedPDF':\n MERGED_IMG_LIST.append(file)\n\n i = 0\n for merged_image in tqdm(MERGED_IMG_LIST, desc='이미지 파일을 PDF로 저장 중입니다.', leave=False):\n pdf = FPDF(orientation='L', unit='cm', format='A4')\n pdf.add_page()\n\n merged_image = f'{path}/{merged_image}'\n pdf.image(merged_image, x=0, y=0, w=29.7, h=16.46)\n pdf.output(f'{directory}/{today}_{i}.pdf', 'F')\n i += 1\n\n##오늘날짜로 폴더생성\ndef createFolder(directory):\n try:\n if not os.path.exists(directory):\n os.makedirs(directory)\n else:\n shutil.rmtree(directory)\n os.makedirs(directory)\n except OSError:\n print('Error: Creating directory. ' + directory)\n\nclass MyWindow(QMainWindow, form_class):\n def __init__(self):\n super().__init__()\n self.setupUi(self)\n self.pushButton_openfile.clicked.connect(self.openFileNamesDialog) # 파일 올리기\n self.pushButton_savefile.clicked.connect(self.saveFileDialog) # 저장경로 설정하기\n self.pushButton_run.clicked.connect(self.RunProgram) # 프로그램 실행하기\n self.pushButton_close.clicked.connect(self.close) # 취소하기\n\n def openFileNamesDialog(self):\n # 파일 여러개 불러오기\n files = QFileDialog.getOpenFileName(None, \"Open Excel File\", '.', \"(*.xlsx)\")[0]\n\n if files:\n self.lineEdit_openfile.setText(files)\n global frames\n frames = pd.read_excel(files)\n\n def saveFileDialog(self):\n global OutputPath\n OutputPath = QFileDialog.getExistingDirectory()\n self.lineEdit_savefile.setText(OutputPath)\n\n def RunProgram(self):\n createFolder(path)\n\n name = frames['이름']\n position = frames['직책']\n department = frames['부서']\n location = frames['위치']\n tell = frames['tell']\n cell = frames['cell']\n email = frames['email']\n\n for i in tqdm(range(len(name)), desc = '명함 제작중입니다.', leave= False):\n TexttoIMG(name[i], position[i], department[i], location[i], tell[i], cell[i], email[i])\n\n imageMerge()\n PdfConvert(path)\n\n try:\n PdfConvert(OutputPath)\n QMessageBox.about(self, \"message\", \"저장경로 폴더, image 폴더 안에 PDF 파일이 생성되었습니다\")\n\n except:\n 
QMessageBox.about(self, \"message\", \"image 폴더 안에 PDF 파일이 생성되었습니다\")\n\n\nif __name__ == '__main__':\n app = QApplication(sys.argv)\n app.setStyle('Fusion')\n myWindow = MyWindow()\n myWindow.show()\n app.exec_()"
},
{
"alpha_fraction": 0.36029869318008423,
"alphanum_fraction": 0.48973241448402405,
"avg_line_length": 22.28985595703125,
"blob_id": "a0c6fef7e830552cababe37a673cf0e1919a053f",
"content_id": "a001b682bb09672a4c32bbbf8cb2f376bc9a6eac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1607,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 69,
"path": "/pythonProject(programmers)/Greedy/test.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "def solution(number, k):\n answer = ''\n num_before = {}\n num_after = {}\n for i in range(len(number[:k])):\n num_before[i] = number[:k][i]\n\n for i in range(len(number[k:])):\n num_after[i] = number[k:][i]\n\n i = 0\n while k:\n if num_before[i] < max(num_before.values()):\n del(num_before[i])\n k -= 1\n i += 1\n\n if i == max(num_before.keys()): break\n\n print(num_before, num_after, k)\n\n i = 0\n flag = 0\n alen = len(num_after)\n while k:\n if i == len(num_after) - 1 :\n if len(num_after) == alen:\n flag = 1\n break\n\n else: i = 0\n\n if num_after[i] < num_after[i+1]:\n del(num_after[i])\n k -= 1\n i += 1\n\n i = alen - 1\n if flag == 1:\n while k:\n del(num_after[])\n print(num_before, num_after, k )\n return answer\n\n\nif __name__ == '__main__':\n solution(\"1924\", 2)\n solution(\"1231234\", 3)\n solution(\"4177252841\", 4)\n # solution(\"19763\", 2)\n # solution(\"87654321\", 3)\n solution(\"99999111\", 3)\n # solution(\"1111\", 3)\n # solution(\"8999\", 3)\n # solution(\"9999991\", 1)\n # solution(\"00000000\",3)\n # solution(\"11111112\", 3)\n # solution(\"0000001\", 3)\n # solution(\"3322993843984398\", 4)\n # solution(\"8892299221\", 3)\n # solution(\"12345678901234567890\", 19)\n # solution(\"01010\", 3)\n # solution(\"559913\", 2)\n # solution(\"9191919\",1)\n # solution(\"00100\", 2)\n # solution(\"010\", 0)\n # solution(\"1111\", 2)\n # solution(\"10000\",2)\n # solution(\"1000100011\", 5)\n"
},
{
"alpha_fraction": 0.5160089135169983,
"alphanum_fraction": 0.5428146123886108,
"avg_line_length": 23.88888931274414,
"blob_id": "59c7b5c3650918cc3f114d2f268c7125cd9788bc",
"content_id": "e36f1838dc4b2bb7a84ba902414637fab2675967",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1343,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 54,
"path": "/pythonProject(programmers)/BFS,DFS/Network.py",
"repo_name": "hisg123/Python",
"src_encoding": "UTF-8",
"text": "from collections import deque, defaultdict\ndef MakeGraph(computers, n):\n graph = defaultdict(list)\n\n # [1,1,0] -> 1: [1, 2]\n for n_idx in range(n):\n computer_index = 0\n for computer in computers[n_idx]:\n computer_index += 1\n if computer == 1:\n graph[n_idx+1].append(computer_index)\n\n #1: [1,2] -> 1: [2]\n for key, value in graph.items():\n for v in value:\n if key == v: value.remove(v)\n\n return graph\n\ndef DFS(graph, root):\n visited = []\n stack = deque([root])\n\n while stack:\n n = stack.pop()\n if n not in visited:\n visited.append(n)\n stack.extend(reversed(graph[n]))\n\n visited.sort()\n print(f\"visted[root = {root}]\", visited)\n answer = ''.join(str(visited))\n return answer\n\ndef solution(n, computers):\n #convert computers array to usable graph\n graph = MakeGraph(computers, n)\n print(\"graph:\", graph)\n\n #do DFS by changing root\n temp = []\n for n_idx in range(n):\n root = n_idx + 1\n temp.append(DFS(graph, root))\n\n #delete repetive elements\n set_temp = set(temp)\n answer = len(list(set_temp))\n print(answer)\n return answer\n\nif __name__ == '__main__':\n solution(3, [[1, 1, 0], [1, 1, 0], [0, 0, 1]])\n solution(3, [[1, 1, 0], [1, 1, 1], [0, 1, 1]])"
}
] | 32 |
icosi/SD-Practica2 | https://github.com/icosi/SD-Practica2 | a5b796fbdb10cc6173c9a4217416c9d4e2e23f8f | 44e1e070f521c6a4a25636bb81dc3e65581ff97d | 05ac4d2bfa918e6c607968edeef9dabf90b766c0 | refs/heads/master | 2020-05-16T20:08:58.357700 | 2019-04-24T17:43:26 | 2019-04-24T17:43:26 | 183,278,349 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5069593191146851,
"alphanum_fraction": 0.5149893164634705,
"avg_line_length": 37.91666793823242,
"blob_id": "4b6a9a5ef12ec20ac415265b2e4cb043dd63345b",
"content_id": "db5314436a573b11b49d5a4b11abba958c1d1b68",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1870,
"license_type": "no_license",
"max_line_length": 224,
"num_lines": 48,
"path": "/PracticaWeb/forkilla/models.py",
"repo_name": "icosi/SD-Practica2",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models\n\nclass Restaurant(models.Model):\n CATEGORIES = (\n (\"Rice\", \"Rice\"),\n (\"Fusi\", \"Fusion\"),\n (\"BBQ\", \"Barbecue\"),\n (\"Chin\", \"Chinese\"),\n (\"Medi\",\"Mediterranean\"),\n (\"Crep\",\"Creperie\"),\n (\"Hind\",\"Hindu\"),\n (\"Japa\",\"Japanese\"),\n (\"Ital\",\"Italian\"),\n (\"Mexi\",\"Mexican\"),\n (\"Peru\", \"Peruvian\"),\n (\"Russ\",\"Russian\"),\n (\"Turk\",\"Turkish\"),\n (\"Basq\",\"Basque\"),\n (\"Vegy\", \"Vegetarian\"),\n (\"Afri\",\"African\"),\n (\"Egyp\",\"Egyptian\"),\n (\"Grek\",\"Greek\")\n )\n _d_categories = dict(CATEGORIES)\n\nrestaurant_number = models.CharField(max_length=8, unique=True)\nname = models.CharField(max_length=50)\nmenu_description = models.TextField()\nprice_average = models.DecimalField(max_digits=5, decimal_places=2)\nis_promot = models.BooleanField()\nrate = models.DecimalField(max_digits=3, decimal_places=1)\naddress = models.CharField(max_length=50)\ncity = models.CharField(max_length=50)\ncountry = models.CharField(max_length=50)\nfeatured_photo = models.ImageField()\ncategory = models.CharField(max_length=5, choices=CATEGORIES)\ncapacity = models.PositiveIntegerField()\n\ndef get_human_category(self):\n return self._d_categories[self.category]\n\ndef __str__(self):\n return ('[**Promoted**]' if self.is_promot else '') + \"[\" + self.category + \"] \" \\\n \"[\" + self.restaurant_number + \"] \" + self.name + \" - \" + self.menu_description + \" (\" + str(self.rate) + \")\" \\\n \": \" + str(self.price_average) + u\" €\"\n"
}
] | 1 |
ghoersti/Natural-Language-Processing-for-predicting-hospital-readmission | https://github.com/ghoersti/Natural-Language-Processing-for-predicting-hospital-readmission | 88a2a5612fc67dbdf41e5e77afd4a4f6c3ead35d | 76e43156a039e1879fa74f90f6cf0366f965ec32 | 27cbfde5e1057d6631f6f1c472eebde697bbddd5 | refs/heads/master | 2020-04-09T06:03:32.306675 | 2018-12-08T19:36:17 | 2018-12-08T19:36:17 | 160,095,205 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.6442307829856873,
"alphanum_fraction": 0.717848539352417,
"avg_line_length": 28.838565826416016,
"blob_id": "ea798a14c836c6d18068d468e89460e22a8503e5",
"content_id": "8528063622ae464e6e0ab3563e3345d2720070b6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 7154,
"license_type": "permissive",
"max_line_length": 240,
"num_lines": 223,
"path": "/README.md",
"repo_name": "ghoersti/Natural-Language-Processing-for-predicting-hospital-readmission",
"src_encoding": "UTF-8",
"text": "\n\n# Software/data \n* This guide is intended for **Linux system only**\n\n\n---\n## Required software/ Applications\nBelow are links/guides to download/install the necessary software\n* [Anaconda Python 3.6+ ](https://www.anaconda.com/download/#linux)\n* [Docker on Linux installation Guide](https://runnable.com/docker/install-docker-on-linux)\n* [Create HDINSIGHTS Spark cluster with pyspark](https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-notebook-kernels)\n* [Jupyter on HDINSIGHTs ](https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-jupyter-notebook-kernels)\n\nIn order to re-run this code it is in the best interest of the user to use the same docker image. Running below command will pull docker image with Ubuntu 16.04 and a clean Anaconda 4.3 with python 3.6, jupyter 5.4, spark 2.2 installation. \n\n**Pull Docker Image** \n\n>`sudo docker pull ucsddse230/cse255-dse230`\n\n**Run Docker Image** \n\n>```docker run -it -m 6900 -p 8889:8888 -v /local/path/project:/home/ucsddse230/work ucsddse230/cse255-dse230 /bin/bas```\n\n**Extract code**\n\nExtract \"Team13_NLP.tar.gz\" and copy final_project folder to /local/path/project so that you can see it inside docker under path \"/home/ucsddse230/work\"\n\nFrom within the docker container\n\nRun:\n>`pwd`\nyou should be at path \"/home/ucsddse230/work\"\n\n\nRun : \n\n>```jupyter notebook```\n\nGo to http://localhost:8889\n\nOnce inside Jupyter Notebook you can run all the notebooks as described below.\n\n\n## Final Model validation\nTo run the final model on test data please go to post_azure and execute \"Final_Model_Validation.ipynb\". This step can be completed without downloading data from the MIMIC III source website.\n\n*****Before you run the notebook, \n1. Please make sure MyMLP.pth file exists in \"final_project/dl_model\" folder. If not , please download final saved model from the link and copy it into \"final_project/dl_model\" folder. 
\n https://drive.google.com/open?id=1cP56sytzc3uKvZVF9j2Ia0RAvnvvAN0T\n\n2. please make sure that below folders exist in \"final_project/post_azure\":\n\n testing_features.parquet\n \n training_features.parquet\n---\n\n## Data Access\nAs the data used for this project requires CITI certification please follow below instructions to get access to data.\n\n[Instructions for MIMIC-III access](https://mimic.physionet.org/gettingstarted/access/)\n\n---\n\n\n## Data \nUsing the MIMIC-III database\nhttps://physionet.org/works/MIMICIIIClinicalDatabase/files/\n\nAfter recieving access download these files\n\n**ADMISSIONS**: \nContains unique hospitalizations for each patient in the database. It has 58,976 unique\nadmissions of 46,520 patients. 5,854 admissions have a date of death specified.\n\n**NOTEEVENTS**: \nContains deidentified notes, such as ECG, radiology reports, nursing and physician notes,\ndischarge summaries for each hospitalization. It has 2,083,180 unique notes.\n\n---\n\n## Usage\n\nThe usage for this project is divided into **three phases:**\n\n1. **Pre-azure** : Minor Preprocessing, change delimiter. \n2. **Azure** : Data Processing, tokenize features(heavy lifting) and create label.\n3. **Post_azure** : Vectorize features, implement ML models \n\n---\n\n### Pre_Azure\n> This must be run first follow instructions in the notebook `cse6250_NLP_Pre_process.ipynb`\n\n**inputs:**\n```python\n'idc9_short.txt', 'NOTEEVENTS.csv'\n```\n\n**outputs:**\n```python\n'notes_discharge_pd.csv' , 'ADMISSIONS.CSV' and 'IDC9_filter.csv'\n```\n\n\n\n**contents:**\n* ICD-9-CM-v32-master-descriptions\n> This folder contains all of the ICD9 diagnosis and procedure codes and words downloaded from [here](https://www.cms.gov/Medicare/Coding/ICD9ProviderDiagnosticCodes/codes.html)\n>this is the file used `idc9_short.txt`\n* cse6250_NLP_Pre_process.ipynb\n> 1. Preprocessing done to allow readability by spark\n> 2. 
creation of ICD9 Filter\n\n---\n\n### Azure\n\n**create labels , prepare date , create tokens**\n\n**Inputs:**\n> Upload these files to your azure blob storage for the HDINSIGHTS cluster\n```python\n'notes_discharge_pd.csv' , 'ADMISSIONS.CSV' and 'IDC9_filter.csv'\n```\n\n**Outputs:**\n> Download these files from your azure blob storage for the HDINSIGHTS cluster\nto `/home/ucsddse230/work/azureoutputs`\n```python\n'final_tokens_with_text.parquet'\n```\n\n**Contents:**\n\n* CSE6250_azure.ipynb\n\n\n\n---\n\n\n### Post_Azure\n\n**Contents:**\n* Reconstruct from orginal text.ipynb\n\n>Inputs: \n```Python\n'final_tokens_with_text.parquet'\n```\n\n>Outputs: \n```Python\n\"testing_features.parquet\",\"training_features.parquet\"\n```\n\n* local_modeling_with_azure_features.ipynb\n\n>Inputs:`\n```Python\n\"testing_features.parquet\",\"training_features.parquet\"\n```\n\n>Outputs: \n```Python\n'MyMLP.pth'\n```\n\n* utils.py\n\n---\n\n## Final Directory Structure\n\n\n```python\nfinal_project\n├── azure\n│ └── CSE6250_azure.ipynb\n├── azureoutputs\n│ └── final_tokens_with_text.parquet\n│ ├── part-00000-dad216ad-f371-44f0-8aaa-1dbcd7c5242b-c000.snappy.parquet\n│ └── _SUCCESS\n├── dl_model\n│ └── MyMLP.pth\n├── post_azure\n│ ├── Final_Model_Validation.ipynb\n│ ├── local_modeling_with_azure_features.ipynb\n│ ├── Reconstruct from orginal text.ipynb\n│ ├── testing_features.parquet\n│ │ ├── part-00000-0a590164-62ce-4aff-aaaf-3baa3f3c18b5-c000.snappy.parquet\n│ │ ├── part-00001-0a590164-62ce-4aff-aaaf-3baa3f3c18b5-c000.snappy.parquet\n│ │ ├── part-00002-0a590164-62ce-4aff-aaaf-3baa3f3c18b5-c000.snappy.parquet\n│ │ ├── part-00003-0a590164-62ce-4aff-aaaf-3baa3f3c18b5-c000.snappy.parquet\n│ │ ├── part-00004-0a590164-62ce-4aff-aaaf-3baa3f3c18b5-c000.snappy.parquet\n│ │ ├── part-00005-0a590164-62ce-4aff-aaaf-3baa3f3c18b5-c000.snappy.parquet\n│ │ └── _SUCCESS\n│ ├── training_features.parquet\n│ │ ├── part-00000-d19cbfcd-5fb9-4715-a708-739e81e14410-c000.snappy.parquet\n│ │ ├── 
part-00001-d19cbfcd-5fb9-4715-a708-739e81e14410-c000.snappy.parquet\n│ │ ├── part-00002-d19cbfcd-5fb9-4715-a708-739e81e14410-c000.snappy.parquet\n│ │ ├── part-00003-d19cbfcd-5fb9-4715-a708-739e81e14410-c000.snappy.parquet\n│ │ ├── part-00004-d19cbfcd-5fb9-4715-a708-739e81e14410-c000.snappy.parquet\n│ │ ├── part-00005-d19cbfcd-5fb9-4715-a708-739e81e14410-c000.snappy.parquet\n│ │ └── _SUCCESS\n│ └── utils.py\n├── pre_azure\n│ ├── cse6250_NLP_Pre_process.ipynb\n│ ├── ICD-9-CM-v32-master-descriptions\n│ │ ├── CMS32_DESC_LONG_DX (copy).txt\n│ │ ├── CMS32_DESC_LONG_DX.txt\n│ │ ├── CMS32_DESC_LONG_SG.txt\n│ │ ├── CMS32_DESC_LONG_SHORT_DX.xlsx\n│ │ ├── CMS32_DESC_LONG_SHORT_SG.xlsx\n│ │ ├── CMS32_DESC_SHORT_DX.txt\n│ │ ├── CMS32_DESC_SHORT_SG.txt\n│ │ └── idc9_short.txt\n│ ├── IDC9_filter.csv\n│ └── idc9_short.txt\n└── README.md\n└── Usage_instructions.ipynb\n\n```\n"
},
{
"alpha_fraction": 0.6282051205635071,
"alphanum_fraction": 0.6388152241706848,
"avg_line_length": 27.81528663635254,
"blob_id": "a49ef23471f7bcc877d4a17d89e855bbcb8e0799",
"content_id": "c4d25be0018ad7471379c0eeb2ecfd08634c7def",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4524,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 157,
"path": "/post_azure/utils.py",
"repo_name": "ghoersti/Natural-Language-Processing-for-predicting-hospital-readmission",
"src_encoding": "UTF-8",
"text": "import os\nimport time\nimport numpy as np\nimport torch\nfrom sklearn.metrics import *\n\ndef classification_metrics(Y_pred, Y_true):\n Accuracy = accuracy_score(Y_true, Y_pred)\n fpr, tpr, thresholds = roc_curve(Y_true, Y_pred)\n AUC = auc(fpr, tpr)\n Precision = precision_score(Y_true, Y_pred)\n Recall = recall_score(Y_true, Y_pred)\n F1_score = f1_score(Y_true, Y_pred)\n\n return Accuracy,AUC,Precision,Recall,F1_score\n\ndef display_metrics(classifierName,Y_pred,Y_true):\n print(\"______________________________________________\")\n print((\"Classifier: \"+classifierName))\n acc, auc_, precision, recall, f1score = classification_metrics(Y_pred,Y_true)\n print((\"Accuracy: \"+str(acc)))\n print((\"AUC: \"+str(auc_)))\n print((\"Precision: \"+str(precision)))\n print((\"Recall: \"+str(recall)))\n print((\"F1-score: \"+str(f1score)))\n print(\"______________________________________________\")\n print(\"\")\n\nclass AverageMeter(object):\n\t\"\"\"Computes and stores the average and current value\"\"\"\n\n\tdef __init__(self):\n\t\tself.reset()\n\n\tdef reset(self):\n\t\tself.val = 0\n\t\tself.avg = 0\n\t\tself.sum = 0\n\t\tself.count = 0\n\n\tdef update(self, val, n=1):\n\t\tself.val = val\n\t\tself.sum += val * n\n\t\tself.count += n\n\t\tself.avg = self.sum / self.count\n\n\ndef compute_batch_accuracy(output, target):\n\t\"\"\"Computes the accuracy for a batch\"\"\"\n\twith torch.no_grad():\n\n\t\tbatch_size = target.size(0)\n\t\t_, pred = output.max(1)\n\t\tcorrect = pred.eq(target).sum()\n\n\t\treturn correct * 100.0 / batch_size\n\n\ndef train(model, device, data_loader, criterion, optimizer, epoch, print_freq=10):\n\tbatch_time = AverageMeter()\n\tdata_time = AverageMeter()\n\tlosses = AverageMeter()\n\taccuracy = AverageMeter()\n\n\tmodel.train()\n\n\tend = time.time()\n\tfor i, (input, target) in enumerate(data_loader):\n\t\t# measure data loading time\n\t\tdata_time.update(time.time() - end)\n\n\t\tif isinstance(input, tuple):\n\t\t\tinput = 
tuple([e.to(device) if type(e) == torch.Tensor else e for e in input])\n\t\telse:\n\t\t\tinput = input.to(device)\n\t\ttarget = target.to(device)\n\n\t\toptimizer.zero_grad()\n\t\toutput = model(input)\n\t\tloss = criterion(output, target)\n\t\tassert not np.isnan(loss.item()), 'Model diverged with loss = NaN'\n\n\t\tloss.backward()\n\t\toptimizer.step()\n\n\t\t# measure elapsed time\n\t\tbatch_time.update(time.time() - end)\n\t\tend = time.time()\n\n\t\tlosses.update(loss.item(), target.size(0))\n\t\taccuracy.update(compute_batch_accuracy(output, target).item(), target.size(0))\n\n\t\tif i % print_freq == 0:\n\t\t\tprint('Epoch: [{0}][{1}/{2}]\\t'\n\t\t\t\t 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n\t\t\t\t 'Data {data_time.val:.3f} ({data_time.avg:.3f})\\t'\n\t\t\t\t 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n\t\t\t\t 'Accuracy {acc.val:.3f} ({acc.avg:.3f})'.format(\n\t\t\t\tepoch, i, len(data_loader), batch_time=batch_time,\n\t\t\t\tdata_time=data_time, loss=losses, acc=accuracy))\n\n\treturn losses.avg, accuracy.avg\n\n\ndef evaluate(model, device, data_loader, criterion, print_freq=10):\n\tbatch_time = AverageMeter()\n\tlosses = AverageMeter()\n\taccuracy = AverageMeter()\n\n\tresults = []\n\n\tmodel.eval()\n\n\twith torch.no_grad():\n\t\tend = time.time()\n\t\tfor i, (input, target) in enumerate(data_loader):\n\n\t\t\tif isinstance(input, tuple):\n\t\t\t\tinput = tuple([e.to(device) if type(e) == torch.Tensor else e for e in input])\n\t\t\telse:\n\t\t\t\tinput = input.to(device)\n\t\t\ttarget = target.to(device)\n\n\t\t\toutput = model(input)\n\t\t\tloss = criterion(output, target)\n\n\t\t\t# measure elapsed time\n\t\t\tbatch_time.update(time.time() - end)\n\t\t\tend = time.time()\n\n\t\t\tlosses.update(loss.item(), target.size(0))\n\t\t\taccuracy.update(compute_batch_accuracy(output, target).item(), target.size(0))\n\n\t\t\ty_true = target.detach().to('cpu').numpy().tolist()\n\t\t\ty_pred = 
output.detach().to('cpu').max(1)[1].numpy().tolist()\n\t\t\tresults.extend(list(zip(y_true, y_pred)))\n\n\t\t\tif i % print_freq == 0:\n\t\t\t\tprint('Test: [{0}/{1}]\\t'\n\t\t\t\t\t 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n\t\t\t\t\t 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n\t\t\t\t\t 'Accuracy {acc.val:.3f} ({acc.avg:.3f})'.format(\n\t\t\t\t\ti, len(data_loader), batch_time=batch_time, loss=losses, acc=accuracy))\n\n\treturn losses.avg, accuracy.avg, results\n\n\ndef make_kaggle_submission(list_id, list_prob, path):\n\tif len(list_id) != len(list_prob):\n\t\traise AttributeError(\"ID list and Probability list have different lengths\")\n\n\tos.makedirs(path, exist_ok=True)\n\toutput_file = open(os.path.join(path, 'my_predictions.csv'), 'w')\n\toutput_file.write(\"SUBJECT_ID,MORTALITY\\n\")\n\tfor pid, prob in zip(list_id, list_prob):\n\t\toutput_file.write(\"{},{}\\n\".format(pid, prob))\n\toutput_file.close()\n"
}
] | 2 |
Jagtapkunallaxman/Free-Coding-School-Python | https://github.com/Jagtapkunallaxman/Free-Coding-School-Python | 24df104fe7a3d55b7d36a6a5e33d97239ee45953 | 08a55a6dcea4b8810f5415899964e2cefed25e7c | e7cd0829341dd248eb7221adf68244a5ec6722a7 | refs/heads/master | 2022-05-30T07:34:57.407358 | 2020-05-05T17:34:13 | 2020-05-05T17:34:13 | 261,539,901 | 0 | 0 | null | 2020-05-05T17:31:11 | 2020-05-05T15:07:11 | 2020-05-05T09:28:21 | null | [
{
"alpha_fraction": 0.5746394991874695,
"alphanum_fraction": 0.6185416579246521,
"avg_line_length": 10.97552490234375,
"blob_id": "8c96bf0e4187d9828a0b58d63adab46dc0f05f15",
"content_id": "7ef1fc7d2bda920e047408abb2298be3b79a6bc6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 17132,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 1430,
"path": "/assignment 1.py",
"repo_name": "Jagtapkunallaxman/Free-Coding-School-Python",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# In[28]:\n\n\nlst = [\"HI\",37,[1,2,3],7.3]\n\nprint(lst)\n\n\n# In[31]:\n\n\nlst[0]\n\n\n# In[33]:\n\n\nlst[-2]\n\n\n# In[34]:\n\n\nlst[1:]\n\n\n# In[35]:\n\n\nlst[:1]\n\n\n# In[38]:\n\n\n# by default functionsin list\nprint(lst)\n\nlst.count(37)\n\n\n# In[49]:\n\n\nlst.append(3)\n\n\n# In[50]:\n\n\nlst\n\n\n# In[51]:\n\n\nlst.append(4)\n\n\n# In[52]:\n\n\nlst\n\n\n# In[53]:\n\n\nlst.count(37)\n\n\n# In[54]:\n\n\nlst.count(3)\n\n\n# In[55]:\n\n\nlst.count(4)\n\n\n# In[56]:\n\n\nlst.count(1)\n\n\n# In[57]:\n\n\nlst.count(37)\n\n\n# In[73]:\n\n\nlst.clear()\n\n\n# In[72]:\n\n\nlst.copy()\n\n\n# In[79]:\n\n\nlst.extend(\"HI\")\n\n\n# In[80]:\n\n\nlst\n\n\n# In[98]:\n\n\nprint(lst)\n\n\n# In[99]:\n\n\nlst = [\"HI\",47,[1,2,3],7.3]\nprint(lst)\n\n\n# In[105]:\n\n\nlst.index(7.3, 1, 7)\n\n\n# In[106]:\n\n\nlst.index(47, 1, 3)\n\n\n# In[107]:\n\n\nlst.insert(47,5)\n\n\n# In[108]:\n\n\nlst\n\n\n# In[110]:\n\n\nlst.pop()\n\n\n# In[111]:\n\n\nlst\n\n\n# In[112]:\n\n\nlst.pop(0)\n\n\n# In[113]:\n\n\nlst\n\n\n# In[114]:\n\n\nlst.pop(1)\n\n\n# In[115]:\n\n\nlst\n\n\n# In[116]:\n\n\nlst = [\"HI\",37,[1,2,3],7.3]\nprint(lst)\n\n\n# In[117]:\n\n\nhelp(lst.remove)\n\n\n# In[119]:\n\n\nlst.remove(37)\n\n\n# In[120]:\n\n\nlst\n\n\n# In[121]:\n\n\nlst.remove(7.3)\n\n\n# In[122]:\n\n\nlst\n\n\n# In[123]:\n\n\nlst = [\"HI\",77,[1,2,3],7.3]\nprint(lst)\n\n\n# In[129]:\n\n\nlst.reverse()\n\n\n# In[131]:\n\n\nlst\n\n\n# In[140]:\n\n\nlst = [2, 1, 3, 4]\nprint(lst)\n\n\n# In[143]:\n\n\nlst\n\n\n# In[152]:\n\n\n# list sort \nvowels = ['e', 'a', 'u', 'o', 'i']\n\nvowels.sort()\nprint(vowels)\n\n\n# In[154]:\n\n\n#dict\n\ndit = {\"my_name\":\"kunal\" , \n \"my_house_name\":\"Sai\" ,\n \"my_family_has\":4}\n\n\n# In[155]:\n\n\ndit.get(\"my_name\")\n\n\n# In[156]:\n\n\ndit[\"my_family_has\"]\n\n\n# In[157]:\n\n\n# methods in dict\n\n\n# In[159]:\n\n\ndit.values()\n\n\n# In[160]:\n\n\ndit\n\n\n# In[161]:\n\n\ndit.keys()\n\n\n# 
In[163]:\n\n\ndit.pop('my_name')\n\n\n# In[164]:\n\n\ndit\n\n\n# In[165]:\n\n\ndit.clear()\n\n\n# In[166]:\n\n\ndit\n\n\n# In[167]:\n\n\ndit = {\"my_name\":\"kunal\" , \n \"my_house_name\":\"Sai\" ,\n \"my_family_has\":4}\n\n\n# In[168]:\n\n\ndit.copy()\n\n\n# In[179]:\n\n\n# dit.fromkeys\nkeys = {'a', 'b', 'c', 'd', 'e' }\n\nalphabet = dict.fromkeys(keys)\nprint(alphabet)\n\n\n# In[175]:\n\n\ndit.get()\n\n\n# In[180]:\n\n\n#dit.get\nperson = {'name': 'kunal', 'age': 20,'salary':100000000000}\n\nprint('Name: ', person.get('name'))\nprint('Age: ', person.get('age'))\n\n# value is not provided\nprint('Salary: ', person.get('salary'))\n\n\n# In[183]:\n\n\n# dit.items\nsales = { 'apple': 2, 'orange': 3, 'grapes': 4 }\nprint('appple')\nprint(sales.items())\n\n\n# In[187]:\n\n\nsales = { 'sop': 2, 'pen': 3, 'ring': 4 }\n\nitems = sales.items()\nprint('ring items:', items)\n\n# delete an item from dictionary\ndel[sales['sop']]\nprint('Updated items:', items)\n\n\n# In[188]:\n\n\nperson = {'name': 'kunal', 'age': 20, 'salary': 900000000000}\n\n# ('salary', 900000000000) is inserted at the last, so it is removed.\nresult = person.popitem()\n\nprint('Return Value = ', result)\nprint('person = ', person)\n\n# inserting a new element pair\nperson['profession'] = 'engineer'\n\n# now ('profession', 'engineer') is the latest element\nresult = person.popitem()\n\nprint('Return Value = ', result)\nprint('person = ', person)\n\n\n# In[189]:\n\n\nboy = {'name': 'kunal', 'age': 20}\n\nage = boy.setdefault('age')\nprint('boy = ',boy)\nprint('Age = ',age)\n\n\n# In[195]:\n\n\n\nd = {1: \"one\", 2: \"three\"}\nd1 = {2: \"two\"}\n\n# updates the value of key 2\nd.update(d1)\nprint(d)\n\nd1 = {3: \"three\"}\n\n# adds element with key 3\nd.update(d1)\nprint(d)\n\n\n\n\n# In[196]:\n\n\n# random sales dictionary\nsales = { 'monitor': 9000, 'keyboard': 700, 'mouse': 500 }\n\nprint(sales.values())\n\n\n# In[197]:\n\n\n# set\n\n\n# In[199]:\n\n\n# language set remove\nlanguage = {'English', 
'French', 'German'}\n\n# 'German' element is removed\nlanguage.remove('German')\n\n# Updated language set\nprint('Updated language set: ', language)\n\n\n# In[200]:\n\n\n# set add of vowels\nvowels = {'a', 'e', 'i', 'u'}\n\n# adding 'o'\nvowels.add('o')\nprint('Vowels are:', vowels)\n\n# adding 'a' again\nvowels.add('a')\nprint('Vowels are:', vowels)\n\n\n# In[201]:\n\n\n# set copy\nnumbers = {1, 2, 3, 4}\nnew_numbers = numbers\n\nnew_numbers.add('5')\n\nprint('numbers: ', numbers)\nprint('new_numbers: ', new_numbers)\n\n\n# In[202]:\n\n\n# set clear of vowels\nvowels = {'a', 'e', 'i', 'o', 'u'}\nprint('Vowels (before clear):', vowels)\n\n# clearing vowels\nvowels.clear()\nprint('Vowels (after clear):', vowels)\n\n\n# In[203]:\n\n\n# set difference\nA = {'a', 'b', 'c', 'd'}\nB = {'c', 'f', 'g'}\n\n# Equivalent to A-B\nprint(A.difference(B))\n\n# Equivalent to B-A\nprint(B.difference(A))\n\n\n# In[204]:\n\n\n# set differnce_update\nA = {'a', 'c', 'g', 'd'}\nB = {'c', 'f', 'g'}\n\nresult = A.difference_update(B)\n\nprint('A = ', A)\nprint('B = ', B)\nprint('result = ', result)\n\n\n# In[205]:\n\n\n# set discard\nnumbers = {2, 3, 4, 5}\n\nnumbers.discard(3)\nprint('numbers = ', numbers)\n\nnumbers.discard(10)\nprint('numbers = ', numbers)\n\n\n# In[206]:\n\n\n#set intersection\nA = {2, 3, 5, 4}\nB = {2, 5, 100}\nC = {2, 3, 8, 9, 10}\n\nprint(B.intersection(A))\nprint(B.intersection(C))\nprint(A.intersection(C))\nprint(C.intersection(A, B))\n\n\n# In[207]:\n\n\n# set.intersection_update\nA = {1, 2, 3, 4}\nB = {2, 3, 4, 5, 6}\nC = {4, 5, 6, 9, 10}\n\nresult = C.intersection_update(B, A)\n\nprint('result =', result)\nprint('C =', C)\nprint('B =', B)\nprint('A =', A)\n\n\n# In[208]:\n\n\n# set.isdisjoint\nA = {1, 2, 3, 4}\nB = {5, 6, 7}\nC = {4, 5, 6}\n\nprint('Are A and B disjoint?', A.isdisjoint(B))\nprint('Are A and C disjoint?', A.isdisjoint(C))\n\n\n# In[209]:\n\n\n# set.issubset\nA = {1, 2, 3}\nB = {1, 2, 3, 4, 5}\nC = {1, 2, 4, 5}\n\n# Returns 
True\nprint(A.issubset(B))\n\n# Returns False\n# B is not subset of A\nprint(B.issubset(A))\n\n# Returns False\nprint(A.issubset(C))\n\n# Returns True\nprint(C.issubset(B))\n\n\n# In[210]:\n\n\n# set.pop\nA ={'a', 'b', 'c', 'd'}\n\nprint('Return Value is', A.pop())\nprint('A = ', A)\n\n\n# In[211]:\n\n\n# set.symmetric_difference\nA = {'a', 'b', 'c', 'd'}\nB = {'c', 'd', 'e' }\nC = {}\n\nprint(A.symmetric_difference(B))\nprint(B.symmetric_difference(A))\n\nprint(A.symmetric_difference(C))\nprint(B.symmetric_difference(C))\n\n\n# In[212]:\n\n\n# set.symmetric_difference_update\nA = {'a', 'c', 'd'}\nB = {'c', 'd', 'e' }\n\nresult = A.symmetric_difference_update(B)\n\nprint('A =', A)\nprint('B =', B)\nprint('result =', result)\n\n\n# In[213]:\n\n\n# set.union\nA = {'a', 'c', 'd'}\nB = {'c', 'd', 2 }\nC = {1, 2, 3}\n\nprint('A U B =', A.union(B))\nprint('B U C =', B.union(C))\nprint('A U B U C =', A.union(B, C))\nprint('A.union() =', A.union())\n\n\n# In[214]:\n\n\n# set.update\nA = {'a', 'b'}\nB = {1, 2, 3}\n\nresult = A.update(B)\n\nprint('A =', A)\nprint('result =', result)\n\n\n# In[215]:\n\n\n# tuple\n# tuple count\nvowels = ('a', 'e', 'i', 'o', 'i', 'o', 'e', 'i', 'u')\n\n# count element 'i'\ncount = vowels.count('i')\n\n# print count\nprint('The count of i is:', count)\n\n# count element 'p'\ncount = vowels.count('p')\n\n# print count\nprint('The count of p is:', count)\n\n\n# In[216]:\n\n\n# tuple index\nvowels = ('a', 'e', 'i', 'o', 'i', 'u')\n\n# element 'e' is searched\nindex = vowels.index('e')\n\n# index is printed\nprint('The index of e:', index)\n\n# element 'i' is searched\nindex = vowels.index('i')\n\n# only the first index of the element is printed\nprint('The index of i:', index)\n\n\n# In[222]:\n\n\n# string capitalize\nstring = \"kunal is AWesome.\"\n\ncapitalized_string = string.capitalize()\n\nprint('Old String: ', string)\nprint('Capitalized String:', capitalized_string)\n\n\n# In[221]:\n\n\n# string center\nstring = \"kunal is 
awesome\"\n\nnew_string = string.center(24)\n\nprint(\"Centered String: \", new_string)\n\n\n# In[220]:\n\n\n# string.casefold\nstring = \"Kunal IS AWESOME\"\n\n# print lowercase string\nprint(\"Lowercase string:\", string.casefold())\n\n\n# In[223]:\n\n\n# string count\nstring = \"kunal is awesome, isn't it?\"\nsubstring = \"is\"\n\ncount = string.count(substring)\n\n# print count\nprint(\"The count is:\", count)\n\n\n# In[225]:\n\n\n# string endswith\ntext = \"kunal is ready for exam.\"\n\nresult = text.endswith('for exam')\n# returns False\nprint(result)\n\nresult = text.endswith('for exam.')\n# returns True\nprint(result)\n\nresult = text.endswith('kunal is ready for exam.')\n# returns True\nprint(result)\n\n\n# In[226]:\n\n\n# string expandtabs\nstr = 'abc\\t12345\\txyz'\n\n# no argument is passed\n# default tabsize is 8\nresult = str.expandtabs()\n\nprint(result)\n\n\n# In[227]:\n\n\n# string encode\nstring = 'pythön!'\n\n# print string\nprint('The string is:', string)\n\n# default encoding to utf-8\nstring_utf = string.encode()\n\n# print result\nprint('The encoded version is:', string_utf)\n\n\n# In[228]:\n\n\n# string find\n\nquote = 'Let it be, let it be, let it be'\n\n# first occurance of 'let it'(case sensitive)\nresult = quote.find('let it')\nprint(\"Substring 'let it':\", result)\n\n# find returns -1 if substring not found\nresult = quote.find('small')\nprint(\"Substring 'small ':\", result)\n\n# How to use find()\nif (quote.find('be,') != -1):\n print(\"Contains substring 'be,'\")\nelse:\n print(\"Doesn't contain substring\")\n\n\n# In[230]:\n\n\n# string format\n# default arguments\nprint(\"Hello {}, your balance is {}.\".format(\"abhi\", 230.2346))\n\n# positional arguments\nprint(\"Hello {0}, your balance is {1}.\".format(\"abhi\", 230.2346))\n\n# keyword arguments\nprint(\"Hello {name}, your balance is {blc}.\".format(name=\"abhi\", blc=230.2346))\n\n# mixed arguments\nprint(\"Hello {0}, your balance is {blc}.\".format(\"abhi\", 
blc=230.2346))\n\n\n# In[237]:\n\n\n# string index\nsentence = 'Python programming is fun.'\n\nresult = sentence.index('is fun')\nprint(\"Substring 'is fun':\", result)\n\nresult = sentence.index('Java')\nprint(\"Substring 'Java':\", result)\n \n\n\n# In[238]:\n\n\nsentence = 'Python programming is fun.'\n\nresult = sentence.index('is fun')\nprint(\"Substring 'is fun':\", result)\n\nresult = sentence.index('Java')\nprint(\"Substring 'Java':\", result)\n\n\n# In[239]:\n\n\n# string isalnum\nname = \"Kunal\"\nprint(name.isalnum())\n\n# contains whitespace\nname = \"Kunal Jagtap \"\nprint(name.isalnum())\n\nname = \"KunalJagtap\"\nprint(name.isalnum())\n\nname = \"133\"\nprint(name.isalnum())\n\n\n# In[240]:\n\n\n# string isalpha\nname = \"Kunal\"\nprint(name.isalpha())\n\n# contains whitespace\nname = \"Kunal Jagtap\"\nprint(name.isalpha())\n\n# contains number\nname = \"KunalJagtap\"\nprint(name.isalpha())\n\n\n# In[241]:\n\n\n# string isdecimal\ns = \"28212\"\nprint(s.isdecimal())\n\n# contains alphabets\ns = \"32ladk3\"\nprint(s.isdecimal())\n\n# contains alphabets and spaces\ns = \"Mo3 nicaG el l22er\"\nprint(s.isdecimal())\n\n\n# In[242]:\n\n\n# string isdigit \ns = \"28212\"\nprint(s.isdigit())\n\n# contains alphabets and spaces\ns = \"Mo3 nicaG el l22er\"\nprint(s.isdigit())\n\n\n# In[243]:\n\n\n# string isidentifier\nstr = 'Python'\nprint(str.isidentifier())\n\nstr = 'Py thon'\nprint(str.isidentifier())\n\nstr = '77Python'\nprint(str.isidentifier())\n\nstr = ''\nprint(str.isidentifier())\n\n\n# In[244]:\n\n\n# string islower\ns = 'this is good'\nprint(s.islower())\n\ns = 'th!s is a1so g00d'\nprint(s.islower())\n\ns = 'this is Not good'\nprint(s.islower())\n\n\n# In[245]:\n\n\n# string isnumeric\ns = '1242323'\nprint(s.isnumeric())\n\n#s = '²3455'\ns = '\\u00B23455'\nprint(s.isnumeric())\n\n# s = '½'\ns = '\\u00BD'\nprint(s.isnumeric())\n\ns = '1242323'\ns='python12'\nprint(s.isnumeric())\n\n\n# In[246]:\n\n\n# string isprintable\ns = 'Space is a 
printable'\nprint(s)\nprint(s.isprintable())\n\ns = '\\nNew Line is printable'\nprint(s)\nprint(s.isprintable())\n\ns = ''\nprint('\\nEmpty string printable?', s.isprintable())\n\n\n# In[247]:\n\n\n# string isspace\ns = ' \\t'\nprint(s.isspace())\n\ns = ' a '\nprint(s.isspace())\n\ns = ''\nprint(s.isspace())\n\n\n# In[248]:\n\n\n# string istitle\ns = 'Python Is Good.'\nprint(s.istitle())\n\ns = 'Python is good'\nprint(s.istitle())\n\ns = 'This Is @ Symbol.'\nprint(s.istitle())\n\ns = '99 Is A Number'\nprint(s.istitle())\n\ns = 'PYTHON'\nprint(s.istitle())\n\n\n# In[249]:\n\n\n# string isupper\n# example string\nstring = \"THIS IS GOOD!\"\nprint(string.isupper());\n\n# numbers in place of alphabets\nstring = \"THIS IS ALSO G00D!\"\nprint(string.isupper());\n\n# lowercase string\nstring = \"THIS IS not GOOD!\"\nprint(string.isupper());\n\n\n# In[250]:\n\n\n# .join() with lists\nnumList = ['1', '2', '3', '4']\nseparator = ', '\nprint(separator.join(numList))\n\n# .join() with tuples\nnumTuple = ('1', '2', '3', '4')\nprint(separator.join(numTuple))\n\ns1 = 'abc'\ns2 = '123'\n\n# each element of s2 is separated by s1\n# '1'+ 'abc'+ '2'+ 'abc'+ '3'\nprint('s1.join(s2):', s1.join(s2))\n\n# each element of s1 is separated by s2\n# 'a'+ '123'+ 'b'+ '123'+ 'b'\nprint('s2.join(s1):', s2.join(s1))\n\n\n# In[252]:\n\n\n# string just\nstring = 'Kunal'\nwidth = 5\n\n# print left justified string\nprint(string.ljust(width))\n\n\n# In[254]:\n\n\n# example string rjust\nstring = 'Kunal'\nwidth = 5\n\n# print right justified string\nprint(string.rjust(width))\n\n\n# In[255]:\n\n\n# example string lower\nstring = \"THIS SHOULD BE LOWERCASE!\"\nprint(string.lower())\n\n# string with numbers\n# all alphabets whould be lowercase\nstring = \"Th!s Sh0uLd B3 L0w3rCas3!\"\nprint(string.lower())\n\n\n# In[256]:\n\n\n# example string upper\nstring = \"this should be uppercase!\"\nprint(string.upper())\n\n# string with numbers\n# all alphabets whould be lowercase\nstring = \"Th!s Sh0uLd B3 
uPp3rCas3!\"\nprint(string.upper())\n\n\n# In[257]:\n\n\n# example string swapcase\nstring = \"THIS SHOULD ALL BE LOWERCASE.\"\nprint(string.swapcase())\n\nstring = \"this should all be uppercase.\"\nprint(string.swapcase())\n\nstring = \"ThIs ShOuLd Be MiXeD cAsEd.\"\nprint(string.swapcase())\n\n\n# In[258]:\n\n\n# string istrip\nrandom_string = ' this is good '\n\n# Leading whitepsace are removed\nprint(random_string.lstrip())\n\n# Argument doesn't contain space\n# No characters are removed.\nprint(random_string.lstrip('sti'))\n\nprint(random_string.lstrip('s ti'))\n\nwebsite = 'https://www.programiz.com/'\nprint(website.lstrip('htps:/.'))\n\n\n# In[259]:\n\n\n# string rstrip\nrandom_string = ' this is good'\n\n# Leading whitepsace are removed\nprint(random_string.rstrip())\n\n# Argument doesn't contain 'd'\n# No characters are removed.\nprint(random_string.rstrip('si oo'))\n\nprint(random_string.rstrip('sid oo'))\n\nwebsite = 'www.programiz.com/'\nprint(website.rstrip('m/.'))\n\n\n# In[261]:\n\n\n# string strip\nstring = ' kunal love dadmom '\n\n# Leading and trailing whitespaces are removed\nprint(string.strip())\n\n# All <whitespace>,x,o,e characters in the left\n# and right of string are removed\nprint(string.strip(' mom'))\n\n# Argument doesn't contain space\n# No characters are removed.\nprint(string.strip('stx'))\n\nstring = 'android is awesome'\nprint(string.strip('an'))\n\n\n# In[262]:\n\n\n# string partition\nstring = \"Python is fun\"\n\n# 'is' separator is found\nprint(string.partition('is '))\n\n# 'not' separator is not found\nprint(string.partition('not '))\n\nstring = \"Python is fun, isn't it\"\n\n# splits at first occurence of 'is'\nprint(string.partition('is'))\n\n\n# In[264]:\n\n\n# string maketrans\ndict = {\"a\": \"123\", \"b\": \"456\", \"c\": \"789\"}\nstring = \"abc\"\nprint(string.maketrans(dict))\n\n\n# In[265]:\n\n\n# string rpartition\nstring = \"Python is fun\"\n\n# 'is' separator is found\nprint(string.rpartition('is '))\n\n# 'not' 
separator is not found\nprint(string.rpartition('not '))\n\nstring = \"Python is fun, isn't it\"\n\n# splits at last occurence of 'is'\nprint(string.rpartition('is'))\n\n\n# In[266]:\n\n\n# string translate\nfirstString = \"abc\"\nsecondString = \"ghi\"\nthirdString = \"ab\"\n\nstring = \"abcdef\"\nprint(\"Original string:\", string)\n\ntranslation = string.maketrans(firstString, secondString, thirdString)\n\n# translate string\nprint(\"Translated string:\", string.translate(translation))\n\n\n# In[267]:\n\n\n# string replace\nsong = 'cold, cold heart'\nprint (song.replace('cold', 'hurt'))\n\nsong = 'Let it be, let it be, let it be, let it be'\n\n'''only two occurences of 'let' is replaced'''\nprint(song.replace('let', \"don't let\", 2))\n\n\n# In[268]:\n\n\n# string r find\nquote = 'Let it be, let it be, let it be'\n\nresult = quote.rfind('let it')\nprint(\"Substring 'let it':\", result)\n\nresult = quote.rfind('small')\nprint(\"Substring 'small ':\", result)\n\nresult = quote.rfind('be,')\nif (result != -1):\n print(\"Highest index where 'be,' occurs:\", result)\nelse:\n print(\"Doesn't contain substring\")\n\n\n# In[269]:\n\n\n# string rindex\nquote = 'Let it be, let it be, let it be'\n\nresult = quote.rindex('let it')\nprint(\"Substring 'let it':\", result)\n \nresult = quote.rindex('small')\nprint(\"Substring 'small ':\", result)\n\n\n# In[270]:\n\n\n# string split\ntext= 'Love the neighbor'\n\n# splits at space\nprint(text.split())\n\ngrocery = 'Milk, Chicken, Bread'\n\n# splits at ','\nprint(grocery.split(', '))\n\n# Splitting at ':'\nprint(grocery.split(':'))\n\n\n# In[272]:\n\n\n# sting rsplit\ntext= 'Love thy neighbor'\n\n# splits at space\nprint(text.rsplit())\n\ngrocery = 'Milk, Chicken, Bread'\n\n# splits at ','\nprint(grocery.rsplit(', '))\n\n# Splitting at ':'\nprint(grocery.rsplit(':'))\n\n\n# In[273]:\n\n\n# string splitlines\ngrocery = 'Milk\\nChicken\\r\\nBread\\rButter'\n\nprint(grocery.splitlines())\nprint(grocery.splitlines(True))\n\ngrocery = 
'Milk Chicken Bread Butter'\nprint(grocery.splitlines())\n\n\n# In[274]:\n\n\n# string startswith\ntext = \"Python is easy to learn.\"\n\nresult = text.startswith('is easy')\n# returns False\nprint(result)\n\nresult = text.startswith('Python is ')\n# returns True\nprint(result)\n\nresult = text.startswith('Python is easy to learn.')\n# returns True\nprint(result)\n\n\n# In[275]:\n\n\n# string title\ntext = 'My favorite number is 25.'\nprint(text.title())\n\ntext = '234 k3l2 *43 fun'\nprint(text.title())\n\n\n# In[276]:\n\n\n# string zfill\ntext = \"program is fun\"\nprint(text.zfill(15))\nprint(text.zfill(20))\nprint(text.zfill(10))\n\n\n# In[277]:\n\n\n#string format _map\npoint = {'x':4,'y':-5}\nprint('{x} {y}'.format_map(point))\n\npoint = {'x':4,'y':-5, 'z': 0}\nprint('{x} {y} {z}'.format_map(point))\n\n\n# In[ ]:\n\n\n\n\n"
}
] | 1 |
laurakah/SweetShips | https://github.com/laurakah/SweetShips | 2d89de6ff049dd6359acb8ca999c40899d77b484 | 4a008dd3693c5e07643e3340d0cf1365e5939f9f | eb6a681bc9153f881cd04bb6a5b69bfe6f27ac26 | refs/heads/master | 2021-05-11T02:06:25.313659 | 2018-01-22T08:28:34 | 2018-01-22T08:28:34 | 118,351,145 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6086176037788391,
"alphanum_fraction": 0.6175942420959473,
"avg_line_length": 25.5238094329834,
"blob_id": "258dd534d5d7d7925cf96d6f1faf78aef648c6ca",
"content_id": "79cf943fcd4c20cc0483cd9bb6a4f536c7d45db4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 557,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 21,
"path": "/ship.py",
"repo_name": "laurakah/SweetShips",
"src_encoding": "UTF-8",
"text": "import game_map\nfrom random import randint\n\nclass Ship():\n\n def __init__(self, game_map):\n self.game_map = game_map\n self.ship_col = self.create_ship_col(self.game_map.board)\n self.ship_row = self.create_ship_row(self.game_map.board)\n\n def create_ship_row(self, board):\n self.ship_rol = randint(0, len(board) - 1)\n\n def create_ship_col(self, board):\n self.ship_col = randint(0, len(board[0]) - 1)\n\n def get_ship_col(self):\n return self.ship_col\n\n def get_ship_row(self):\n return self.ship_row\n"
},
{
"alpha_fraction": 0.5632911324501038,
"alphanum_fraction": 0.5680379867553711,
"avg_line_length": 22.407407760620117,
"blob_id": "17861e6b79e051dd1768cfc24081c3b22b4b4128",
"content_id": "030f589d5f509c493d37ba145d13d16197815720",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 632,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 27,
"path": "/game_map.py",
"repo_name": "laurakah/SweetShips",
"src_encoding": "UTF-8",
"text": "class Game_map():\n\n BOARD_WIDTH = 5\n BOARD_HEIGHT = 5\n\n def __init__(self):\n self.board = []\n self.board_width = self.BOARD_WIDTH\n self.board_height = self.BOARD_HEIGHT\n self.init_game_map()\n\n def init_game_map(self):\n for x in range(0, self.board_height):\n self.board.append([\"O\"] * self.board_width)\n\n def print_board(self):\n for row in self.board:\n print \" \".join(row)\n\n def get_game_map(self):\n return self.board\n\n def get_map_height(self):\n return self.board_height\n\n def get_map_width(self):\n return self.board_width\n"
},
{
"alpha_fraction": 0.5358467102050781,
"alphanum_fraction": 0.538318932056427,
"avg_line_length": 39.45000076293945,
"blob_id": "8cbfd87afc2a2a48487d3d1efa7e7801d5341299",
"content_id": "3298f87f5c1e196c0cfeeceaacc120eb2cbfd5e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1618,
"license_type": "no_license",
"max_line_length": 169,
"num_lines": 40,
"path": "/game.py",
"repo_name": "laurakah/SweetShips",
"src_encoding": "UTF-8",
"text": "import game_map\nimport ship\nimport player\n\nclass Game():\n\n MAX_TURNS = 15\n\n def __init__(self):\n self.game_map = game_map.Game_map()\n self.player = player.Player()\n self.ship = ship.Ship(self.game_map)\n\n def get_user_input(self):\n return self.player.ask_user_input()\n\n def game(self):\n self.game_map.print_board()\n for turn in range(self.MAX_TURNS):\n self.get_user_input()\n if self.player.get_player_row() == self.ship.get_ship_row() and self.player.get_player_col() == self.ship.get_ship_col():\n print \"\\nCongratulations! You sank my battleship!\\n\\n\"\n break\n else:\n if self.player.get_player_row() not in range(self.game_map.get_map_width()) or self.player.get_player_col() not in range(self.game_map.get_map_height()):\n print \"\\nOops, that's not even on the board.\\n\\n\"\n elif self.game_map.get_game_map()[self.player.get_player_row()][self.player.get_player_col()] == \"X\":\n print \"\\nYou guessed that one already.\\n\\n\"\n else:\n print \"\\nYou missed my battleship!\\n\\n\"\n self.game_map.get_game_map()[self.player.get_player_row()][self.player.get_player_col()] = \"X\"\n if turn == (self.MAX_TURNS - 1):\n print \"\\n\\nGame Over!\\n\\n\"\n else:\n self.game_map.print_board()\n print \"This was turn \", turn + 1, \" out of %d turns\\n\\n\" % self.MAX_TURNS\n\nif __name__ == \"__main__\":\n g = Game()\n g.game()\n"
},
{
"alpha_fraction": 0.7544910311698914,
"alphanum_fraction": 0.7544910311698914,
"avg_line_length": 22.85714340209961,
"blob_id": "2d50d61a2a1eddb5528a700d87688d5d129236b1",
"content_id": "241a9e4b9770ad507bf421f6ee39efa4bb39861e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 167,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 7,
"path": "/README.md",
"repo_name": "laurakah/SweetShips",
"src_encoding": "UTF-8",
"text": "# SweetShips\n### A simple object oriented single player battleship game\n\nTo launch the game simply call the game.py file in your terminal using\n```\npython game.py\n```\n"
},
{
"alpha_fraction": 0.5649606585502625,
"alphanum_fraction": 0.5688976645469666,
"avg_line_length": 23.190475463867188,
"blob_id": "d04c12e09aa4d680918d3f8cf187d5432e4ff0e6",
"content_id": "e165ecab75396c2ea4df4635c6da121d4f0cef17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 508,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 21,
"path": "/player.py",
"repo_name": "laurakah/SweetShips",
"src_encoding": "UTF-8",
"text": "class Player():\n\n def __init__(self):\n self.player_row = None\n self.player_col = None\n\n def _ask_player_row(self):\n self.player_row = int(raw_input(\"Guess a Row: \")) - 1\n\n def _ask_player_col(self):\n self.player_col = int(raw_input(\"Guess a Col: \")) - 1\n\n def ask_user_input(self):\n self._ask_player_row()\n self._ask_player_col()\n\n def get_player_row(self):\n return self.player_row\n\n def get_player_col(self):\n return self.player_col\n"
}
] | 5 |
JosephXWH/pygal_learning | https://github.com/JosephXWH/pygal_learning | b301d143117523bebbbe6d94f9e29960cc86400a | 87951847dce7f055a1e64b0f5bc09a68050c5472 | 19bf1812039561a0adad733f5b6a013399b0a4db | refs/heads/master | 2020-03-10T21:31:44.583504 | 2018-04-15T09:56:19 | 2018-04-15T09:56:19 | 129,596,449 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.52173912525177,
"alphanum_fraction": 0.6688963174819946,
"avg_line_length": 32.22222137451172,
"blob_id": "ac09b234a816e3e6ebd6f6509aca2d6e92f2e9ca",
"content_id": "b48db3e027b30aad7060ed51ef0e1922e074de7c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 323,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 9,
"path": "/JJ.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport pygal\n\nline_chart = pygal.HorizontalStackedBar()\nline_chart.title = '珏珏运动数据记录(distance--km)'\nline_chart.x_labels = ['2017.2.3','2017.3.4','2017.3.9','2017.4.5']\nline_chart.add('跑步',[2.31,5.56,7.8,4.5])\nline_chart.add('散步',[10.8,6.9,7.2,8.5])\nline_chart.render_to_file('JJ.svg')\n"
},
{
"alpha_fraction": 0.6595744490623474,
"alphanum_fraction": 0.6869301199913025,
"avg_line_length": 21,
"blob_id": "2fe6e3424649ac23814182a2af9b5a1908ea22c6",
"content_id": "18c2bcf86b3a4482f811ba27cd879c7b8e4e856e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 371,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 15,
"path": "/world_population.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport json\n\n#将数据加载到一个列表中\nfilename = 'population.json'\nwith open(filename) as f:\n\tpop_data = json.load(f)\n\n\n#打印每个国家2010年的人口\nfor pop_dict in pop_data:\n\tif pop_dict['Year'] == '1983':\n\t\tcountry_name = pop_dict['Country Name']\n\t\tpopulation = int(float(pop_dict['Value']))\n\t\tprint(country_name + \": \" + str(population))"
},
{
"alpha_fraction": 0.5635592937469482,
"alphanum_fraction": 0.5847457647323608,
"avg_line_length": 13.75,
"blob_id": "6f6b9963d9facce46cb936d831986f4079b2ccda",
"content_id": "35c360437a3276473adc29bc7a2e8066a51397cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 246,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 16,
"path": "/die_setting.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nfrom random import randint\n\nclass Die():\n\t\"\"\"表示一个die类\"\"\"\n\n\tdef __init__(self):\n\t\t\n\tdef run_the_die(self):\n\t\tdie = list()\n\t\ti = 0\n\t\twhile i <= 7:\n\t\t\ta = random.randint(0,number_range)\n\t\t\tdie.append(a)\n\t\t\ti +=1\n\t\tprint(die)\n"
},
{
"alpha_fraction": 0.79347825050354,
"alphanum_fraction": 0.79347825050354,
"avg_line_length": 29.66666603088379,
"blob_id": "7dcaaf8c25a3b5c82ab151f7a7b3d8056cc7c148",
"content_id": "f36e6f0a43d69660d1f775af3e74773c84605abd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 92,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 3,
"path": "/dt.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "from datetime import datetime, timedelta,date\nprint(datetime.now())\nprint(datetime.today())\n"
},
{
"alpha_fraction": 0.6764705777168274,
"alphanum_fraction": 0.7450980544090271,
"avg_line_length": 33.11111068725586,
"blob_id": "5cb1dd07db5bb93e9742d8a1c087fab85bc4ae67",
"content_id": "24e7ac5421600d641a0eb58cec82e56e3001306f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 306,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 9,
"path": "/gauge.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import pygal\ngauge_chart = pygal.Gauge(human_readable=False)\ngauge_chart.title = 'DeltaBlue V8 benchmark results'\ngauge_chart.range = [0, 10000]\ngauge_chart.add('Chrome', 8212)\ngauge_chart.add('Firefox', 8099)\ngauge_chart.add('Opera', 2933)\ngauge_chart.add('IE', 41)\ngauge_chart.render_to_file('gauge.svg')"
},
{
"alpha_fraction": 0.5883069634437561,
"alphanum_fraction": 0.615103542804718,
"avg_line_length": 19.424999237060547,
"blob_id": "a5107b991761dcebdd3dd50af5251f03398903dd",
"content_id": "8db8cfb866df302a1a4e6009ca53bd6b940ac94c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1005,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 40,
"path": "/random_walkk.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nfrom random import choice\n\nclass RandomWalk():\n\t\"\"\"一个生成随机漫步数据的类\"\"\"\n\n\tdef __init__(self,num_points=10000):\n\t\t\"\"\"初始化随机漫步的属性\"\"\"\n\t\tself.num_points = num_points\n\n\t\t#所有随机漫步都始于(0,0)\n\t\tself.x_value = [0]\n\t\tself.y_value = [0]\n\t\n\tdef get_step(self):\n\t\t\"\"\"获取 x,y 步长\"\"\"\n\t\tself.direction = choice([-1,1])\n\t\tself.distance = choice([0,1,2,3,4,5])\n\t\tself.step = self.direction * self.distance\n\t\treturn self.step\n\t\t\n\tdef fill_walk(self):\n\t\t\"\"\"计算随机漫步包含的所有点\"\"\"\n\n\t\t# 不断漫步,知道完成步长\n\t\twhile len(self.x_value) < self.num_points:\n\t\t\t#决定前进方向以及沿这个方向前进的距离\n\t\t\tx_step = self.get_step()\n\t\t\ty_step = self.get_step()\n\n\t\t\t#拒绝原地踏步\n\t\t\tif x_step == 0 and y_step == 0:\n\t\t\t\tcontinue\n\n\t\t\t#计算下一个点的 x 和 y 值\n\t\t\tnext_x = self.x_value[-1] + x_step\n\t\t\tnext_y = self.y_value[-1] + y_step\n\n\t\t\tself.x_value.append(next_x)\n\t\t\tself.y_value.append(next_y)\n\t\n\t\n"
},
{
"alpha_fraction": 0.5820895433425903,
"alphanum_fraction": 0.641791045665741,
"avg_line_length": 15.666666984558105,
"blob_id": "fa49a32cc6f73ae1e2fd161ee2dd642bfaaede8d",
"content_id": "cd88b50c1014d4e54839efc09600432e19200dbf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 219,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 12,
"path": "/ppp.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\n#计算骰子6,10和的次数\nresult = []\nfor i in range(1,7):\n\tfor k in range(1,11):\n\t\tp = i + k\n\t\tresult.append(p)\n\nnumber = []\nfor key in range(2,17):\n\tnumber.append(result.count(key))\nprint(number)\n\n"
},
{
"alpha_fraction": 0.690391480922699,
"alphanum_fraction": 0.7188612222671509,
"avg_line_length": 21.520000457763672,
"blob_id": "4459ad8413f51385dac41029cd0111e64d10758c",
"content_id": "4fc0c9ba66344cc186a7372f65c563e282997767",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 638,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 25,
"path": "/rw_visual.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport matplotlib.pyplot as plt \n\nfrom random_walkk import RandomWalk \n#创建一个 RandomWalk 实例,并将其中的点都绘制出来\nrw = RandomWalk()\nrw.fill_walk()\n\n#设置绘图窗口的尺寸\nplt.figure(figsize=(10,6))\n\npoint_numbers = list(range(rw.num_points))\nplt.scatter(rw.x_value, rw.y_value, c=point_numbers,\n cmap=plt.cm.Blues, edgecolor='none', s=15)\n\n#突出起点和重点\nplt.scatter(0,0, c='green', edgecolor='none',s=100)\nplt.scatter(rw.x_value[-1],rw.y_value[-1],edgecolor='none',\n\ts=100)\n\n#隐藏坐标\nplt.axes().get_xaxis().set_visible(False)\nplt.axes().get_yaxis().set_visible(False)\n\nplt.show()"
},
{
"alpha_fraction": 0.5151515007019043,
"alphanum_fraction": 0.5151515007019043,
"avg_line_length": 11.5,
"blob_id": "5456b753e3613b673024adc6aa345632aa7c8e7c",
"content_id": "9e95b1b551237526a8f83c9264ddd5cc1f162a2c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 99,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 8,
"path": "/ad.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "ad ={\n\t'Leo': 'www',\n\t'Joseph': 'ppp',\n\t'Candy': 'lll',\n}\nprint(ad)\nad.__delitem__('Leo')\nprint(ad)"
},
{
"alpha_fraction": 0.511040985584259,
"alphanum_fraction": 0.6151419281959534,
"avg_line_length": 36.35293960571289,
"blob_id": "a9b778073384d1ff2c6b9785ecfcfec97ed4be58",
"content_id": "a5dbd14af7bf83912ae2e37cc1cbd6ef685f2f68",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 634,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 17,
"path": "/piepie.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import pygal\npie_chart = pygal.Pie()\npie_chart.title = 'Browser usage by version in February 2012 (in %)'\npie_chart.add('IE', [5.7, 10.2, 2.6, 1])\npie_chart.add('Firefox', [.6, 16.8, 7.4, 2.2, 1.2, 1, 1, 1.1, 4.3, 1])\npie_chart.add('Chrome', [.3, .9, 17.1, 15.3, .6, .5, 1.6])\npie_chart.add('Safari', [4.4, .1])\npie_chart.add('Opera', [.1, 1.6, .1, .5])\npie_chart.render_to_file('pie.svg')\n\n\"\"\"pie_chart.title = 'Browser usage in February 2012 (in %)'\npie_chart.add('IE', 19.5)\npie_chart.add('Firefox', 36.6)\npie_chart.add('Chrome', 36.3)\npie_chart.add('Safari', 4.5)\npie_chart.add('Opera', 2.3)\npie_chart.render_to_file('pie.svg')\"\"\""
},
{
"alpha_fraction": 0.49723756313323975,
"alphanum_fraction": 0.6353591084480286,
"avg_line_length": 35.400001525878906,
"blob_id": "8a3873471793090999a986413161adac5cc6b50c",
"content_id": "6a338d244741658bd21ef9824e0fd39f527bc4c7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 181,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 5,
"path": "/histgram.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import pygal\nhist = pygal.Histogram()\nhist.add('Widen bars',[(5,0,10),(4,5,13),(2,4,12)])\nhist.add('Narrow bars',[(9,2,4),(7,7,7.5),(10,11,12)])\nhist.render_to_file('Histogram.svg')"
},
{
"alpha_fraction": 0.5820379853248596,
"alphanum_fraction": 0.6442141532897949,
"avg_line_length": 19.678571701049805,
"blob_id": "30b13d4e0dcdb9f39a04a429047e5b9c38bdd3a5",
"content_id": "b01e8ae1b14efa2a3d5d159b3c0ef224a36a2e64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 605,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 28,
"path": "/rlc.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport csv\nimport pygal\n\nfilename = 'RLC.csv'\nwith open(filename) as f:\n\treader = csv.reader(f)\n\theader_row = next(reader)\n\n\tHz = []\n\tUro100 = []\n\tI100 = []\n\tUro300 = []\n\tI300 = []\n\t\n\tfor row in reader:\n\t\tHz.append(int(row[0]))\n\t\tI100.append(float(row[2]))\n\t\tI300.append(float (row[4]))\n\nline_chart = pygal.Line(fill = True, interpolate = 'cubic',\n\tx_label_rotation = 45, x_title = '频率(Hz)', y_title = '电流(mA)')\nline_chart.x_labels = Hz\nline_chart.title = ' RLC谐振模拟实验数据'\nline_chart.add('100Ω', I100)\nline_chart.add('300Ω', I300)\n\nline_chart.render_to_file('D.svg') "
},
{
"alpha_fraction": 0.5780051350593567,
"alphanum_fraction": 0.6905370950698853,
"avg_line_length": 31.66666603088379,
"blob_id": "7de05a24db85457e884fe3c172a6ed4068d29beb",
"content_id": "12f49f49a2b00807f845c96ac679ae00fb614070",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 391,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 12,
"path": "/visits.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import pygal\nfrom datetime import datetime, timedelta\ndate_chart = pygal.Line(x_label_rotation=20)\ndate_chart.x_labels = map(lambda d: d.strftime('%Y-%m-%d'), [\n datetime(2013, 1, 2),\n datetime(2013, 1, 12),\n datetime(2013, 2, 2),\n datetime(2013, 2, 22)])\ndate_chart.range = [0,900]\ndate_chart.add(\"Visits\", [300, 412, 823, 672])\ndate_chart.render_to_file('Visits.svg')\nprint(datetime.now())"
},
{
"alpha_fraction": 0.4547169804573059,
"alphanum_fraction": 0.5207546949386597,
"avg_line_length": 74.71428680419922,
"blob_id": "3b31cdb622488487e1eb4f51d2f41600001c5fbe",
"content_id": "2d64f1c201ddb6b1c1d018364825a90d3f74fbaf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 530,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 7,
"path": "/first_chart.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import pygal # First import pygal\nbar_chart = pygal.HorizontalStackedBar() # Then create a bar graph object\nbar_chart.title = \"Remarquable sequences\"\nbar_chart.x_labels = map(str, range(11))\nbar_chart.add('Fibonacci', [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]) # Add some values\nbar_chart.add('China', [0, 3, 3, 4, 6, 10, 16, 26, 42, 68, 110])\nbar_chart.render_to_file('bar_chart.svg') # Save the svg to a file\n"
},
{
"alpha_fraction": 0.6600698232650757,
"alphanum_fraction": 0.6973224878311157,
"avg_line_length": 26.74193572998047,
"blob_id": "3660f09d7291b38992dd8c36c8112661abaf3875",
"content_id": "195eb93303df6a4a93b2263ff94fefcd0498d17f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 899,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 31,
"path": "/highs_lows.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport csv\nfrom matplotlib import pyplot as plt\nfrom datetime import datetime\n\n#从文件中获取日期和最高温度\nfilename = '2017.csv'\nwith open(filename) as f:\n\tprint(csv.reader(f))\n\treader = csv.reader(f)\n\theader_row = next(reader)\n\t\n\tdates,highs,lows = [],[],[]\n\tfor row in reader:\n\t\tcurrent_date = datetime.strptime(row[0],\"%Y/%m/%d\")\n\t\tdates.append(current_date)\n\t\thighs.append(int(row[1]))\n\t\tlows.append(int(row[3]))\n\nfig = plt.figure(dpi=128, figsize = (10,6))\nplt.plot(dates, highs, c='red', alpha=0.5)\nplt.plot(dates, lows, c='blue', alpha=0.5)\nplt.fill_between(dates, highs, lows,facecolor = 'blue', alpha=0.1)\n#设置图形的属性\nplt.title(\"Daily high and low temperatures, Whole 2017\", fontsize=24)\nplt.xlabel ('', fontsize=12)\nfig.autofmt_xdate()\nplt.ylabel (\"Temperature (F)\", fontsize=12)\nplt.tick_params(axis='both', which='major', labelsize = 16)\n\nplt.show()"
},
{
"alpha_fraction": 0.42270058393478394,
"alphanum_fraction": 0.5068492889404297,
"avg_line_length": 41.66666793823242,
"blob_id": "d44785a09bb1574fa32f2f20524c6933c0be5ffd",
"content_id": "2706d97c4baea47edee951bbad4aa109fdf37fde",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 511,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 12,
"path": "/XY.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import pygal\nfrom math import cos\nxy_chart = pygal.XY(stroke = False)\nxy_chart.title = 'XY Cosinus'\nxy_chart.add('x = cos(y)', [(cos(x / 10.), x / 10.) for x in range(-50, 50, 1)])\nxy_chart.add('y = cos(x)', [(x / 10., cos(x / 10.)) for x in range(-50, 50, 1)])\nxy_chart.add('x = 1', [(1, -5), (1, 5)])\nxy_chart.add('x = -1', [(-1, -5), (-1, 5)])\nxy_chart.add('y = 1', [(-5, 1), (5, 1)])\nxy_chart.add('y = -1', [(-5, -1), (5, -1)])\nxy_chart.add('y = x/5', [(-5, -1), (5, 1)])\nxy_chart.render_to_file('XY.svg')"
},
{
"alpha_fraction": 0.4462963044643402,
"alphanum_fraction": 0.6685185432434082,
"avg_line_length": 52.900001525878906,
"blob_id": "d00c83d9b965841d00ee89d40cde752401b93e0a",
"content_id": "91f517e694f2dc1d548192d860651e18dcfcdecf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 540,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 10,
"path": "/dot.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import pygal\n\ndot_chart = pygal.Dot(x_label_rotation=30)\ndot_chart.title = 'V8 benchmark results'\ndot_chart.x_labels = ['Richards', 'DeltaBlue', 'Crypto', 'RayTrace', 'EarleyBoyer', 'RegExp', 'Splay', 'NavierStokes']\ndot_chart.add('Chrome', [-6395, 8212, 7520, 7218, 12464, 1660, 2123, 8607])\ndot_chart.add('Firefox', [7473, -8099, 11700, -2651, 6361, 1044, 3797, 9450])\ndot_chart.add('Opera', [3472, 2933, 4203, -5229, -5810, 1828, 9013, 4669])\ndot_chart.add('IE', [43, 41, 59, 79, 144, 136, 34, 102])\ndot_chart.render_to_file('dot.svg')\n\n"
},
{
"alpha_fraction": 0.7749999761581421,
"alphanum_fraction": 0.7749999761581421,
"avg_line_length": 19,
"blob_id": "1bae38f71258f5bc3096ecaa04da635b4930f112",
"content_id": "5cc481a0c91f2d541429aa9131e3744876eb8979",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 40,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 2,
"path": "/README.md",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "# pygal_learning\nLearn how to use Pygal\n"
},
{
"alpha_fraction": 0.6805555820465088,
"alphanum_fraction": 0.6805555820465088,
"avg_line_length": 19.714284896850586,
"blob_id": "31958e7451f5f95d1c9fb5afdca3b8e27aab5c36",
"content_id": "cd98f2540df85981dbed3dd7cdeaef35b6778f9f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 144,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 7,
"path": "/test.py",
"repo_name": "JosephXWH/pygal_learning",
"src_encoding": "UTF-8",
"text": "import csv\n\nfilename = 'RLC.csv'\nwith open(filename) as f:\n\tp = csv.reader(f, delimiter=':', quoting=csv.QUOTE_NONE)\n\tfor row in p:\n\t\tprint(row)"
}
] | 19 |
jmorenete/full-stack-python | https://github.com/jmorenete/full-stack-python | 7834a30091b942ffa45a02a77c8f12d393dbc7be | b6dd04ce244ac19046e6cab8ee1f6eee3b152986 | 9954f5d78dc46a2ab9190eb7dc2c8b902010c1c8 | refs/heads/master | 2022-11-30T19:10:28.664359 | 2020-08-17T21:29:40 | 2020-08-17T21:29:40 | 287,999,531 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.75,
"avg_line_length": 19,
"blob_id": "4e2c1fdb95314b85acfa1dfd4f1b033db83b798c",
"content_id": "5db9fdb0986f39d76a6e88d5fd27771c395c8664",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 20,
"license_type": "no_license",
"max_line_length": 19,
"num_lines": 1,
"path": "/README.md",
"repo_name": "jmorenete/full-stack-python",
"src_encoding": "UTF-8",
"text": "# full-stack-python\n"
},
{
"alpha_fraction": 0.6496815085411072,
"alphanum_fraction": 0.6547770500183105,
"avg_line_length": 22.787878036499023,
"blob_id": "d28add04c4332d1c895f910eac01424f08725905",
"content_id": "a361340530f0ca893502732eb52fd673570f4961",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 785,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 33,
"path": "/udacity/demo.py",
"repo_name": "jmorenete/full-stack-python",
"src_encoding": "UTF-8",
"text": "import psycopg2\n\nconn = psycopg2.connect('dbname=library')\n\n#to queue up work for a transaction, we need to interact with a coursor\ncursor = conn.cursor()\n\ncursor.execute('DROP TABLE IF EXISTS staff')\n\ncursor.execute('''\n CREATE TABLE staff (\n id INTEGER PRIMARY KEY,\n name VARCHAR NOT NULL,\n on_holiday BOOLEAN NOT NULL DEFAULT False\n );\n''')\nSQL = 'INSERT INTO staff (id, name, on_holiday) VALUES (%(id)s, %(name)s, %(on_holiday)s);'\ndata = {\n 'id': 2,\n 'name': 'Jose',\n 'on_holiday': False\n}\ncursor.execute('INSERT INTO staff (id, name, on_holiday) VALUES (%s, %s, %s);', (1, 'Josefina', True))\ncursor.execute(SQL, data)\ncursor.execute('SELECT * FROM staff;')\n\nresult = cursor.fetchall()\n\nprint(result)\n\nconn.commit()\ncursor.close()\nconn.close()\n"
},
{
"alpha_fraction": 0.69243985414505,
"alphanum_fraction": 0.6958763003349304,
"avg_line_length": 31.36111068725586,
"blob_id": "87e0da07a54936ce6c3b3113e972e232f190e620",
"content_id": "5f613f258edc43e1fb2038a76f220ed4415cf686",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1164,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 36,
"path": "/udacity/flask_hello_app.py",
"repo_name": "jmorenete/full-stack-python",
"src_encoding": "UTF-8",
"text": "from flask import Flask\nfrom flask_sqlalchemy import SQLAlchemy\n\napp = Flask(_ _name__)\n#in order to connect to our db we must set a config variable,\n# which are set on the dictionary app.config\napp.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://jmmore@localhost:5432/library'\napp.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False\n\ndb = SQLAlchemy(app)\n\nclass Person(db.Model):\n #typically, inside a class we'd specify an init method that would allow us\n # to initiliaze attributes whenever we create new object instances.\n # but sql alchemy does that for you\n # by default, sqlalchemy will make the lowercase name of class\n # the name of your table unless:\n __tablename__ = 'persons'\n id = db.Column(db.Integer, primary_key=True)\n name = db.Column(db.String(), nullable=False)\n paying_member = db.Column(db.Boolean, nullable=False)\n\n def __repr__(self):\n return f'<Person ID: {self.id}, name:{self.name}>'\n\n\ndb.create_all() # detects models and creates tables if dont exist\n\n\[email protected]('/')\ndef index():\n firstPerson = Person.query.first()\n return 'Hello ' + firstPerson.name\n\nif __name__ == '__main__':\n app.run()"
}
] | 3 |
artyomka0/IT_practics | https://github.com/artyomka0/IT_practics | 03336b2b3c3310b201f856bc55943c84b328cca8 | ac2d4f83c732c3877482594ca1c5600a9fbac376 | e24885f8fb8d8ad1db76424298506f4c92fd364b | refs/heads/main | 2023-05-28T19:11:32.612083 | 2021-06-03T10:42:44 | 2021-06-03T10:42:44 | 349,815,450 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6036585569381714,
"alphanum_fraction": 0.6402438879013062,
"avg_line_length": 19.5,
"blob_id": "6dcb1f2019505bf22f1ff81015c0985893ba05b1",
"content_id": "b70407015e2f2633a2733eb9b1d7de16ec223d25",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 215,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 8,
"path": "/IT_Praktika_18/IT_Praktika_18/1-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "print('Hello world!')\n\nname = input('Введите имя ')\nprint('Привет, ' + name)\n\nprint(5 + 7) # сложение\nprint(4 * 5) # умножение\nprint(4 ** 3) # возведение в степень\n"
},
{
"alpha_fraction": 0.5220338702201843,
"alphanum_fraction": 0.5525423884391785,
"avg_line_length": 28.399999618530273,
"blob_id": "d91cfdeed9f13c710515cd03bb2bc6e957d1e60a",
"content_id": "1e952118eb298d0e21568478651889ca8ec1381a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 352,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 10,
"path": "/IT_Praktika_18/IT_Praktika_18/5-3(3)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "\na=float(input('Введите коэфицент a: '))\nb=float(input('Введите коэфицент b: '))\nc=float(input('Введите коэфицент c: '))\nD=(pow(b,2)-(4*a*c))\nif(D>0):\n print(\"x12=\"+str((-b-sqrt(D))/(2*a))+\" \"+str((-b+sqrt(D))/(2*a)))\nelif(D==0):\n print(\"x=\"+str((-b)/(2*a)))\nelse:\n print(\"корней нет\")\n"
},
{
"alpha_fraction": 0.4528301954269409,
"alphanum_fraction": 0.5283018946647644,
"avg_line_length": 9.600000381469727,
"blob_id": "ae08bc887599b5d383e83e0e02448c36a62bb650",
"content_id": "f5cdeec6b4d461e928729f4abf8106356c2e9a82",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 53,
"license_type": "no_license",
"max_line_length": 14,
"num_lines": 5,
"path": "/IT_Praktika_18/IT_Praktika_18/6-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "n=int(input())\ni=1\nwhile i**2<n:\n print (i**2)\n i+=1\n"
},
{
"alpha_fraction": 0.5798676609992981,
"alphanum_fraction": 0.5966446399688721,
"avg_line_length": 27.0264892578125,
"blob_id": "128dea5e76468d89b0c9369840b7f7bb731e47a8",
"content_id": "1830766d37518ba81256c6745d0a8e27e26f3a0d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4502,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 151,
"path": "/work_8/work_8IT/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\nusing System.Windows.Forms.DataVisualization.Charting;\n\n\nnamespace work_8IT\n{\n public partial class Form1 : Form\n {\n public Form1()\n {\n CreateChart();\n\n CalcFunction();\n\n chart.Series[0].Points.DataBindXY(x, y1);\n chart.Series[1].Points.DataBindXY(x, y2);\n\n\n InitializeComponent();\n }\n\n private void chart1_MouseWheel(object sender, MouseEventArgs e)\n {\n var chart = (Chart)sender;\n var xAxis = chart.ChartAreas[0].AxisX;\n var yAxis = chart.ChartAreas[0].AxisY;\n\n try\n {\n if (e.Delta < 0) // Scrolled down.\n {\n xAxis.ScaleView.ZoomReset();\n yAxis.ScaleView.ZoomReset();\n }\n else if (e.Delta > 0) // Scrolled up.\n {\n var xMin = xAxis.ScaleView.ViewMinimum;\n var xMax = xAxis.ScaleView.ViewMaximum;\n var yMin = yAxis.ScaleView.ViewMinimum;\n var yMax = yAxis.ScaleView.ViewMaximum;\n\n var posXStart = xAxis.PixelPositionToValue(e.Location.X) - (xMax - xMin) / 4;\n var posXFinish = xAxis.PixelPositionToValue(e.Location.X) + (xMax - xMin) / 4;\n var posYStart = yAxis.PixelPositionToValue(e.Location.Y) - (yMax - yMin) / 4;\n var posYFinish = yAxis.PixelPositionToValue(e.Location.Y) + (yMax - yMin) / 4;\n\n xAxis.ScaleView.Zoom(posXStart, posXFinish);\n yAxis.ScaleView.Zoom(posYStart, posYFinish);\n }\n }\n catch { }\n }\n\n private double XMin = 0;\nprivate double XMax = 3*Math.PI;\n\nprivate double Step = (Math.PI * 2) / 10;\n\n private double[] x;\n\n private double[] y1;\n private double[] y2;\n //private double[] y3;\n Chart chart;\n private void CalcFunction()\n {\n int count = (int)Math.Ceiling((XMax - XMin) / Step)\n + 1;\n x = new double[count];\n y1 = new double[count];\n y2 = new double[count];\n //y3 = new double[count];\n for (int i = 0; i < count; i++)\n {\n x[i] = XMin + Step * i;\n y1[i] = 
Math.Sin(x[i]);\n y2[i] = (Math.Sqrt(3 + Math.Log(x[i]) + 15 - x[i])) / (1 + Math.Sin((2 + x[i] * x[i]) / (1 + x[i])));\n }\n }\n\n private void CreateChart()\n {\n chart = new Chart();\n chart.Parent = this;\n chart.SetBounds(0, 0, ClientSize.Width + 1500,\n ClientSize.Height + 500);\n\n ChartArea area = new ChartArea();\n area.Name = \"myGraph\";\n area.AxisX.Minimum = XMin;\n area.AxisX.Maximum = XMax;\n area.AxisX.MajorGrid.Interval = Step;\n chart.ChartAreas.Add(area);\n chart.ChartAreas[0].AxisX.ScaleView.Zoomable = true;\n chart.ChartAreas[0].AxisY.ScaleView.Zoomable = true;\n chart.MouseWheel += chart1_MouseWheel;\n\n // Создаём объект для первого графика\n Series series1 = new Series();\n // Ссылаемся на область для построения графика\n series1.ChartArea = \"myGraph\";\n // Задаём тип графика - сплайны\n series1.ChartType = SeriesChartType.Spline;\n // Указываем ширину линии графика\n series1.BorderWidth = 3;\n // Название графика для отображения в легенде\n series1.LegendText = \"sin(x)\";\n // Добавляем в список графиков диаграммы\n chart.Series.Add(series1);\n // Аналогичные действия для второго графика\n \n Series series2 = new Series();\n series2.ChartArea = \"myGraph\";\n series2.ChartType = SeriesChartType.Spline;\n series2.BorderWidth = 3;\n series2.LegendText = \"my function\";\n chart.Series.Add(series2);\n \n //======/*================\n /*\n Series series3 = new Series();\n series3.ChartArea = \"myGraph\";\n series3.ChartType = SeriesChartType.Spline;\n series3.BorderWidth = 3;\n series3.LegendText = \"my function\";\n chart.Series.Add(series3);\n */\n // Создаёмлегенду, котораябудетпоказыватьназвания\n Legend legend = new Legend();\n chart.Legends.Add(legend);\n }\n\n \n\n private void chart1_Click(object sender, EventArgs e)\n {\n for(;;)\n {\n\n }\n }\n }\n}\n"
},
{
"alpha_fraction": 0.4285714328289032,
"alphanum_fraction": 0.4628571569919586,
"avg_line_length": 14.909090995788574,
"blob_id": "23876053b51a84830e8d2572590d0b676dccec5b",
"content_id": "b4d48d8f5e318426e5f158cca381defe6bdc5967",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 175,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 11,
"path": "/IT_Praktika_19/IT_Praktika_19/2-3(2)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nx=int(input())\nif(x>1):\n d=ceil(math.log(x,2))\n print(2<<d-1)\n if((2<<d-1)&(x)):\n print(\"Yes\")\n else:\n print(\"No\")\nelse:\n print(\"No\")\n"
},
{
"alpha_fraction": 0.5542710423469543,
"alphanum_fraction": 0.6332691311836243,
"avg_line_length": 20.328767776489258,
"blob_id": "d502b5615cb98c5187d360762e4358b59d36e088",
"content_id": "03da64475219eea5a6e5cc723c55daeb923598a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1559,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 73,
"path": "/work_9/IT_Practika_9/IT_Practika_9/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\n\nnamespace IT_Practika_9\n{\n\n\tpublic partial class Form1 : Form\n\t{\n\t\tPoint[] points = {\n\tnew Point(400, 160),\n\tnew Point(320, 270),\n\tnew Point(480, 270),\n};\n\t\tPoint[] points2 = {\n\tnew Point(320, 180),\n\tnew Point(480, 180),\n\tnew Point(400, 290),\n};\n\t\tPen pen = new Pen(Color.Blue, 4);\n\t\tBrush brush = Brushes.White;\n\t\tBrush brush1 = Brushes.Blue;\n\t\tBrush brush2 = Brushes.Gray;\n\t\tRectangle rectangle = new Rectangle(0, 0, 2000, 150);\n\t\tRectangle rectangle1 = new Rectangle(0, 150, 2000, 150);\n\t\tRectangle rectangle2 = new Rectangle(0, 300, 2000, 150);\n\t\tpublic Form1()\n\t\t{\n\t\t\t/*\n\t\t\tfor (int i = 0; i < 20; i++)\n\t\t\t{\n\t\t\t\tint xPos;\n\t\t\t\tif (i % 2 == 0)\n\t\t\t\t{\n\t\t\t\t\txPos = 10;\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\txPos = 400;\n\t\t\t\t}\n\t\t\t\tpoints[i] = new Point(xPos, 10 * i);\n\t\t\t}\n\t\t\t*/\n\t\t\t/*\n\t\t\tpoints[1] = new Point(200,200);\n\t\t\tpoints[2] = new Point(150,300);\n\t\t\tpoints[3] = new Point(150,400);\n\t\t\tpoints[4] = new Point(200, 200);\n\t\t\t*/\n\n\t\t\tInitializeComponent();\n\t\t}\n\n\t\tprivate void Form1_Paint(object sender, PaintEventArgs e)\n\t\t{\n\t\t\tGraphics g = e.Graphics;\n\t\t\t\n\t\t\tg.FillRectangle(brush1, rectangle);\n\t\t\tg.FillRectangle(brush, rectangle1);\n\t\t\tg.FillRectangle(brush1, rectangle2);\n\t\t\tg.DrawPolygon(pen, points);\n\t\t\tg.DrawPolygon(pen, points2);\n\t\t\t//g.FillPolygon(brush2, points);\n\t\t\t//g.DrawPie(pen, Single, Single, Single, Single, Single, Single);\n\t\t}\n\t}\n}\n"
},
{
"alpha_fraction": 0.532536506652832,
"alphanum_fraction": 0.549136757850647,
"avg_line_length": 24.100000381469727,
"blob_id": "007013c88ee3e48fd90b104483d0edd2907bf736",
"content_id": "0baf0a79d3e5cd27a7bfb6293b1bae449c528a9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1576,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 60,
"path": "/work_4/WindowsFormsApp1/WindowsFormsApp1/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\n\nnamespace WindowsFormsApp1\n{\n public partial class Form1 : Form\n {\n public Form1()\n {\n InitializeComponent();\n }\n\n private void radioButton1_CheckedChanged(object sender, EventArgs e)\n {\n\n }\n\n private void button1_Click(object sender, EventArgs e)\n {\n Double n = Convert.ToDouble(textBox1.Text);\n Double eps = Convert.ToDouble(textBox1.Text);\n textBox2.Text = \"Результаты работы программы Михайлов А.А. \" + Environment.NewLine;\n int m = 0;\n if (radioButton2.Checked) m = 1;\n double s = 0, p = 1, ch;\n double i = 1;\n switch (m)\n {\n case 0:\n ch = 1 / i;\n while (ch >= eps)\n {\n ch = 1 / i;\n s += ch;\n i++;\n }\n textBox2.Text += \"При eps = \" + textBox1.Text + Environment.NewLine;\n textBox2.Text += \"Расчет суммы ряда S = \" + Convert.ToString(s) + Environment.NewLine;\n break;\n case 1:\n for (i = 1; i <= n; i++)\n {\n ch = i;\n p *= ch;\n }\n textBox2.Text += \"При m = \" + textBox1.Text + Environment.NewLine;\n textBox2.Text += \"Расчетпроизведенияряда P = \" + Convert.ToString(p) + Environment.NewLine;\n break;\n }\n\n }\n }\n}\n"
},
{
"alpha_fraction": 0.6226993799209595,
"alphanum_fraction": 0.6288343667984009,
"avg_line_length": 35.22222137451172,
"blob_id": "58ea15b6bbb0ccab5da9560420dc812645379b33",
"content_id": "96f0d1b29a815b8d49cfed4591b662f2c306e31d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 424,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 9,
"path": "/IT_Praktika_19/IT_Praktika_19/1-2(3)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nxn = float(input('Введите xn '))\nxk = float(input('Введите xk '))\nhx = float(input('Введите hx '))\nx = xn #устанавливаем x в начало отрезка в xn\nwhile x <= xk: #пока не дойдем до конца отрезка xk\n f = math.sin(x + math.exp(2)) + math.pow(3, x)\n print('x = ', x, ' f = ', f)\n x = x + hx #прибавляем к аргументу шаг\n"
},
{
"alpha_fraction": 0.3688524663448334,
"alphanum_fraction": 0.46721312403678894,
"avg_line_length": 19.33333396911621,
"blob_id": "90f249a32946a4eeed99fc5b6d76f7ad1759145b",
"content_id": "655101456f5ff55fb0fad77422536b8df1cbb1dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 122,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 6,
"path": "/IT_Praktika_18/IT_Praktika_18/3-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math as m\na, b, c = 3, -10, 1\nD = b**2-4*a*c\nx_1 = (-b-m.sqrt(D))/(2*a)\nx_2 = (-b+m.sqrt(D))/(2*a)\nprint(x_1, x_2)\n"
},
{
"alpha_fraction": 0.5265452265739441,
"alphanum_fraction": 0.5696715116500854,
"avg_line_length": 26.92608642578125,
"blob_id": "26010c10d8cb7c6ec3615eec18c7f67b9b34e487",
"content_id": "3a530db921de83f447595c2c25bb3fac06918df9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 6642,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 230,
"path": "/IT_Practika_12/IT_Practika_12/IT_Practika_12/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
    "text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\n\nnamespace IT_Practika_12\n{\n\tpublic partial class Form1 : Form //Вариант 8\n\t{\n\t\tdouble[] x = {5,5.2,5.4,5.6,5.8,6};\n\t\tdouble[] y = {3,2,5,2,2,3};\n\t\t\n\t\tpublic Form1()\n\t\t{\n\t\t\tInitializeComponent();\n\t\t}\n\t\tpublic double S(int count) \n\t\t{\n\t\t\tdouble s=0;\n\t\t\tfor (int i = 0; i < 6; i++) s += Math.Pow(x[i], count);\n\t\t\treturn s;\n\t\t}\n\t\tpublic double b(int count)\n\t\t{\n\t\t\tdouble s = 0;\n\t\t\tfor (int i = 0; i < 6; i++) s += y[i]*Math.Pow(x[i], count);\n\t\t\treturn s;\n\t\t}\n\t\tpublic double C(int count)\n\t\t{\n\t\t\tdouble a = 0;\n\t\t\tif ((count == 3) || (count == 6))\n\t\t\t{\n\t\t\t\ta = b(count / 3) - (b(0) * S(2 + count / 3)) / S(2);\n\t\t\t}\n\t\t\tif ((count == 1) || (count == 2))\n\t\t\t{\n\t\t\t\ta = S(count) - (S(count-1) * S(3)) / S(2);\n\t\t\t}\n\t\t\tif ((count == 4) || (count == 5))\n\t\t\t{\n\t\t\t\ta = S(count-2) - (S(count - 4) * S(4)) / S(2);\n\t\t\t}\n\t\t\treturn a;\n\t\t}\n\t\tpublic double polin1_A0()\n\t\t{\n\t\t\tdouble a = (b(1)*S(1)-b(0)*S(2))/(S(1)*S(1)-S(2)*S(0));\n\t\t\treturn a;\n\t\t}\n\t\tpublic double polin1_A1()\n\t\t{\n\t\t\tdouble a = (b(0) - S(0) * polin1_A0()) / S(1);\n\t\t\treturn a;\n\t\t}\n\t\t\n\t\tpublic double polin2_A0()\n\t\t{\n\t\t\tdouble a = (C(6)*C(2)-C(5)*C(3))/(C(2)*C(4)-C(5)*C(1));\n\t\t\treturn a;\n\t\t}\n\t\tpublic double polin2_A1()\n\t\t{\n\t\t\tdouble a = (C(3)-C(1)*polin2_A0())/C(2);\n\t\t\treturn a;\n\t\t}\n\t\tpublic double polin2_A2()\n\t\t{\n\t\t\tdouble a =(b(0)-S(1)*polin2_A1()-S(0)*polin2_A0())/S(2);\n\t\t\treturn a;\n\t\t}\n\t\tprivate void chart1_Click(object sender, EventArgs e)\n\t\t{\n\n\t\t}\n\n\t\tprivate void button1_Click(object sender, EventArgs e) // Полин 1\n\t\t{\n\t\t\tdataGridView2.RowCount = 2; //Указываем количество строк\t\n\t\t\tdataGridView2.ColumnCount = 3; //Указываем количество столбцов\t\t\t\n\t\t\tdataGridView2.Columns[0].HeaderCell.Value = \"матрица системы 1\";\n\n\t\t\tfor (int i = 0; i < 2; i++)\n\t\t\t\tfor (int j = 0; j < 3; j++)\n\t\t\t\t{\n\t\t\t\t\tif ((j == 0) && (i == 0))\n\t\t\t\t\t\tdataGridView2.Rows[i].Cells[j].Value = Convert.ToString(S(0));\n\t\t\t\t\tif ((j == 1) && (i == 0))\n\t\t\t\t\t\tdataGridView2.Rows[i].Cells[j].Value = Convert.ToString(S(1));\n\t\t\t\t\tif ((j == 2) && (i == 0))\n\t\t\t\t\t\tdataGridView2.Rows[i].Cells[j].Value = Convert.ToString(b(0));\n\t\t\t\t\tif ((j == 0) && (i == 1))\n\t\t\t\t\t\tdataGridView2.Rows[i].Cells[j].Value = Convert.ToString(S(1));\n\t\t\t\t\tif ((j == 1) && (i == 1))\n\t\t\t\t\t\tdataGridView2.Rows[i].Cells[j].Value = Convert.ToString(S(2));\n\t\t\t\t\tif ((j == 2) && (i == 1))\n\t\t\t\t\t\tdataGridView2.Rows[i].Cells[j].Value = Convert.ToString(b(1));\n\n\t\t\t\t}\n\t\t}\n\n\t\tprivate void button2_Click(object sender, EventArgs e) // Полин 2\n\t\t{\n\t\t\tdataGridView4.RowCount = 3; //Указываем количество строк\t\n\t\t\tdataGridView4.ColumnCount = 4; //Указываем количество столбцов\t\n\t\t\tdataGridView4.Columns[0].HeaderCell.Value = \"матрица системы 2\";\n\t\t\tfor (int i = 0; i < 3; i++)\n\t\t\t\tfor (int j = 0; j < 4; j++)\n\t\t\t\t{\t\t\t\t\t\n\t\t\t\t\t\tif((i==0)&&\t(j!=3))\n\t\t\t\t\t\tdataGridView4.Rows[i].Cells[j].Value = Convert.ToString(S(j));\n\t\t\t\t\t\tif ((i == 1) && (j != 3))\n\t\t\t\t\t\tdataGridView4.Rows[i].Cells[j].Value = Convert.ToString(S(j+1));\n\t\t\t\t\t\tif ((i == 2) && (j != 3))\n\t\t\t\t\t\tdataGridView4.Rows[i].Cells[j].Value = Convert.ToString(S(j+2));\n\t\t\t\t\t\tif (j==3)\n\t\t\t\t\t\tdataGridView4.Rows[i].Cells[j].Value = Convert.ToString(b(i));\n\t\t\t\t}\n\t\t}\n\n\t\tprivate void button3_Click(object sender, EventArgs e) //Начальные точки\n\t\t{\n\t\t\tchart1.Series[2].Points.DataBindXY(x, y);\n\t\t}\n\n\t\tprivate void button4_Click(object sender, EventArgs e) // полин 1 граф\n\t\t{\n\t\t\tdouble[] y1=new double[6];\n\t\t\tfor\t(int i=0; i < 6; i++)\n\t\t\t{\n\t\t\t\ty1[i] = polin1_A0() + polin1_A1()* x[i];\t\t\t\t\t\t\t\t\n\t\t\t}\n\t\t\tchart1.Series[0].Points.DataBindXY(x, y1);\t\t\t\t\t\n\t\t}\n\n\t\tprivate void button5_Click(object sender, EventArgs e) // полин 2 граф\n\t\t{\n\t\t\tdouble[] y1 = new double[6];\n\t\t\tfor (int i = 0; i < 6; i++)\n\t\t\t{\n\t\t\t\ty1[i] = polin2_A0() + polin2_A1() * x[i]+ polin2_A2()* x[i]* x[i];\n\t\t\t}\n\t\t\tchart1.Series[1].Points.DataBindXY(x, y1);\n\t\t}\n\n\t\tprivate void dataGridView1_CellContentClick(object sender, DataGridViewCellEventArgs e)\n\t\t{\n\n\t\t}\n\n\t\tprivate void Form1_Load(object sender, EventArgs e)\n\t\t{\n\t\t\tdataGridView1.RowCount = 6;\t\n\t\t\tdataGridView1.ColumnCount = 2;\n\t\t\tdataGridView1.Columns[0].HeaderCell.Value = \"х\";\n\t\t\tdataGridView1.Columns[1].HeaderCell.Value = \"у\";\n\t\t\tfor (int i = 0; i < 6; i++)\n\t\t\t\tfor (int j = 0; j < 2; j++)\n\t\t\t\t{\n\t\t\t\t\tif (j == 0)\n\t\t\t\t\t\tdataGridView1.Rows[i].Cells[j].Value = Convert.ToString(x[i]);\n\t\t\t\t\telse\n\t\t\t\t\t\tdataGridView1.Rows[i].Cells[j].Value = Convert.ToString(y[i]);\n\t\t\t\t}\n\t\t\tdataGridView3.RowCount = 2;\n\t\t\tdataGridView3.ColumnCount = 2;\n\t\t\tdataGridView3.Columns[0].HeaderCell.Value = \"коэффицент\";\n\t\t\tdataGridView3.Columns[1].HeaderCell.Value = \"значение\";\n\n\t\t\tfor (int i = 0; i < 2; i++)\n\t\t\t\tfor (int j = 0; j < 2; j++)\n\t\t\t\t{\n\t\t\t\t\tif ((j == 0) && (i == 0))\n\t\t\t\t\t\tdataGridView3.Rows[i].Cells[j].Value = \"a0:\";\n\t\t\t\t\tif ((j == 1) && (i == 0))\n\t\t\t\t\t\tdataGridView3.Rows[i].Cells[j].Value = Convert.ToString(polin1_A0());\n\t\t\t\t\tif ((j == 0) && (i == 1))\n\t\t\t\t\t\tdataGridView3.Rows[i].Cells[j].Value = \"a1:\";\n\t\t\t\t\tif ((j == 1) && (i == 1))\n\t\t\t\t\t\tdataGridView3.Rows[i].Cells[j].Value = Convert.ToString(polin1_A1());\n\t\t\t\t\t\n\t\t\t\t}\n\t\t\tdataGridView5.RowCount = 3;\n\t\t\tdataGridView5.ColumnCount = 2;\n\t\t\tdataGridView5.Columns[0].HeaderCell.Value = \"коэффицент\";\n\t\t\tdataGridView5.Columns[1].HeaderCell.Value = \"значение\";\n\t\t\tfor (int i = 0; i < 3; i++)\n\t\t\t\tfor (int j = 0; j < 2; j++)\n\t\t\t\t{\n\t\t\t\t\tif ((j == 0) && (i == 0))\t\n\t\t\t\t\t\tdataGridView5.Rows[i].Cells[j].Value = \"a0\";\n\t\t\t\t\tif ((j == 1) && (i == 0))\n\t\t\t\t\t\tdataGridView5.Rows[i].Cells[j].Value = Convert.ToString(polin2_A0());\n\t\t\t\t\tif ((j == 0) && (i == 1))\n\t\t\t\t\t\tdataGridView5.Rows[i].Cells[j].Value = \"a1\";\n\t\t\t\t\tif ((j == 1) && (i == 1))\n\t\t\t\t\t\tdataGridView5.Rows[i].Cells[j].Value = Convert.ToString(polin2_A1());\n\t\t\t\t\tif ((j == 0) && (i == 2))\n\t\t\t\t\t\tdataGridView5.Rows[i].Cells[j].Value = \"a2\";\n\t\t\t\t\tif ((j == 1) && (i == 2))\n\t\t\t\t\t\tdataGridView5.Rows[i].Cells[j].Value = Convert.ToString(polin2_A2());\n\t\t\t\t}\n\t\t}\n\n\t\tprivate void dataGridView3_CellContentClick(object sender, DataGridViewCellEventArgs e)\n\t\t{\n\t\t\t\n\t\t}\n\n\t\tprivate void dataGridView2_CellContentClick(object sender, DataGridViewCellEventArgs e)\n\t\t{\n\n\t\t}\n\n\t\tprivate void dataGridView5_CellContentClick(object sender, DataGridViewCellEventArgs e)\n\t\t{\n\n\t\t}\n\n\t\tprivate void dataGridView4_CellContentClick(object sender, DataGridViewCellEventArgs e)\n\t\t{\n\n\t\t}\n\t}\n}\n"
},
{
"alpha_fraction": 0.5476190447807312,
"alphanum_fraction": 0.5714285969734192,
"avg_line_length": 13,
"blob_id": "e42f3aa993308e518ce09e72fcfcb31924064417",
"content_id": "25efc5561a39adb6eb0763c92656f3679ab86985",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 42,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 3,
"path": "/IT_Praktika_18/IT_Praktika_18/2-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "strange = [ [],1]\n\nprint( type(strange) )\n"
},
{
"alpha_fraction": 0.4416666626930237,
"alphanum_fraction": 0.4583333432674408,
"avg_line_length": 28.75,
"blob_id": "6b2659ce934205049e832dac03922546f157425d",
"content_id": "1a8d1bded26b7d4ada19c5854242515e1b5dab96",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 120,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 4,
"path": "/IT_Praktika_18/IT_Praktika_18/2-3(2)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "\nfor x in range(2):\n for y in range(2):\n if((x or y) and (not x or y) and ( x and y)):\n print(x,y)\n"
},
{
"alpha_fraction": 0.6461538672447205,
"alphanum_fraction": 0.6538461446762085,
"avg_line_length": 30.25,
"blob_id": "efe3611878cf96898dbe3a6110d837d99371f874",
"content_id": "225c9bcc269bc5f51918258337b546fe2a7b0dd8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 194,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 4,
"path": "/IT_Praktika_18/IT_Praktika_18/5-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x = int(input('Введите х')) # преобразуем строку в целое число\r\nif x < 0: # если введенное число меньше нуля\r\n x = -x\r\nprint(x)\r\r\n"
},
{
"alpha_fraction": 0.6907216310501099,
"alphanum_fraction": 0.7113401889801025,
"avg_line_length": 31.33333396911621,
"blob_id": "ef7878371b9ce5708c604ddb5eb692cb6a02dbf6",
"content_id": "11981498aa7e5236a430c888701b8136bee6f54b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 139,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 3,
"path": "/IT_Praktika_18/IT_Praktika_18/4-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "n1=input('введите основание СС: ')\nA=int(input('Введите значене переменной: '))\nprint(int(n1,A))\n"
},
{
"alpha_fraction": 0.5726495981216431,
"alphanum_fraction": 0.5811966061592102,
"avg_line_length": 22.399999618530273,
"blob_id": "4cf599c6df39f7cf16d5dc57473bd35c6aa8e26b",
"content_id": "346d8c6c545f5fdef925ca2e124243389e3d02a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 147,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 5,
"path": "/IT_Praktika_18/IT_Praktika_18/6-3(4)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x=int(input('Введите число: '))\ny=int(input('Введите основание СС: '))\nwhile(x!=0):\n print(x%y,end='')\n x=x//y\n"
},
{
"alpha_fraction": 0.9111111164093018,
"alphanum_fraction": 0.9111111164093018,
"avg_line_length": 89.33333587646484,
"blob_id": "32ba25d94733a8b17f7eaea761574afe5b9e42d5",
"content_id": "c5a48b4c1e0f2240ac2a77b3fa6927db180538b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 270,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 3,
"path": "/IT_Praktika_17/Схема работы правого внешнего и полного соединений1.sql",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "select HumanResources.EmployeeDepartmentHistory.StartDate, HumanResources.Employee.NationalIDNumber\nfrom HumanResources.EmployeeDepartmentHistory inner join HumanResources.Employee \non HumanResources.EmployeeDepartmentHistory.StartDate = HumanResources.Employee.HireDate"
},
{
"alpha_fraction": 0.4337349534034729,
"alphanum_fraction": 0.45783132314682007,
"avg_line_length": 15.600000381469727,
"blob_id": "c5192bcbd8b55ee19f406b7118ae7a9da31a4ea8",
"content_id": "0f08560dacb21517aebf4a1c22fec6b3ac1fdf2c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 83,
"license_type": "no_license",
"max_line_length": 20,
"num_lines": 5,
"path": "/IT_Praktika_18/IT_Praktika_18/6-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "a=int(input())\nfor x in range(2,a):\n if(a%x==0):\n print(x)\n break\n"
},
{
"alpha_fraction": 0.4965689480304718,
"alphanum_fraction": 0.5601996183395386,
"avg_line_length": 21.577465057373047,
"blob_id": "a799d08a2fc34a21357f38e396f4c28e232a166b",
"content_id": "cde97d3c25595357a36063e1d7d45ef82fd21715",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1605,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 71,
"path": "/work_10/work10IT/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Windows.Forms;\n\n\nnamespace work10IT\n{\n public partial class Form1 : Form\n {\n private int x1, y1, x2, y2, r;\n private double a;\n private Pen pen = new Pen(Color.DarkRed, 2);\n Point[] points = {\n new Point(20+500,60+500),\n new Point(70+500,10+500),\n new Point(50+500,80+500),\n new Point(30+500,10+500),\n new Point(80+500,60+500),\n};\n\n public Form1()\n {\n InitializeComponent();\n }\n\n private void Form1_Paint(object sender, PaintEventArgs e)\n {\n Graphics g = e.Graphics;\n g.DrawLine(pen, x1, y1, x2, y2);\n g.DrawPolygon(pen, points);\n }\n\n\n private void Form1_Load(object sender, EventArgs e)\n {\n /*\n * x1 = ClientSize.Width / 2;\n y1 = ClientSize.Height / 2;\n r = 150;\n a = 0; \n x2 = x1 + (int)(r * Math.Cos(a));\n y2 = y1 - (int)(r * Math.Sin(a));\n */\n }\n\n private void timer3_Tick(object sender, EventArgs e)\n {\n /*\n a -= 0.1;\n x2 = x1 + (int)(r * Math.Cos(a));\n y2 = y1 - (int)(r * Math.Sin(a));\n */\n Random rnd = new Random();\n int a = rnd.Next(-50,50);\n int b = rnd.Next(-50,50);\n Size size = new Size(a,b);\n points[0] = Point.Add(points[0], size);\n points[1] = Point.Add(points[1], size);\n points[2] = Point.Add(points[2], size);\n points[3] = Point.Add(points[3], size);\n points[4] = Point.Add(points[4], size);\n Invalidate(); \n }\n }\n\n}\n"
},
{
"alpha_fraction": 0.606965184211731,
"alphanum_fraction": 0.6318408250808716,
"avg_line_length": 19.100000381469727,
"blob_id": "eca0141cf5bfa56f86f952b28eb42392470d7240",
"content_id": "97da83403f112f138f57f98e9cfe68aed217c77e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 260,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 10,
"path": "/IT_Praktika_18/IT_Praktika_18/5-2(2)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x = int(input())\ny = int(input())\nif x > 0 and y > 0:\n print(\"Первая четверть\")\nelif x > 0 and y < 0:\n print(\"Четвертая четверть\")\nelif y > 0:\n print(\"Вторая четверть\")\nelse:\n print(\"Третья четверть\")\n"
},
{
"alpha_fraction": 0.4237288236618042,
"alphanum_fraction": 0.49152541160583496,
"avg_line_length": 18.66666603088379,
"blob_id": "c58af0f1b99a7f2d3702b41e2dac1bc85c0d4aec",
"content_id": "8168e8885038270f16955a04187214b5c5d53957",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 59,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 3,
"path": "/IT_Praktika_19/IT_Praktika_19/2-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "k=int(input())\nn=int(input())\nprint((2<<(k-1))+(2<<(n-1)))\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.5297619104385376,
"avg_line_length": 15.800000190734863,
"blob_id": "d773b909e1780aa8a0d5693d0d3160d99d283d01",
"content_id": "8d467497c7a58fe0f74d6ffd88404039e8b513ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 194,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 10,
"path": "/IT_Praktika_18/IT_Praktika_18/6-3(3)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "flag=1\na=int(input())\nfor x in range(2,a):\n if(a%x==0):\n flag=0\n break \nif(flag==1):\n print(\"простое число\")\nelse:\n print(\"не простое число\")\n"
},
{
"alpha_fraction": 0.4301075339317322,
"alphanum_fraction": 0.5268816947937012,
"avg_line_length": 15.909090995788574,
"blob_id": "c44d761d189938540c463664a6488b4554fd4587",
"content_id": "8e60800cf08d2586d3c31a0e412517e209c517e9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 218,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 11,
"path": "/IT_Praktika_19/IT_Praktika_19/2-2(3)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x = int(input('x = '))\ny = int(input('y = '))\n# фильтр на 0-5 биты\nx &= 0b11111\n# фильтр на 0-7 биты\ny &= 0b1111111\n# умножить\nz = x*y\nprint('x = ', x)\nprint('y = ', y)\nprint('z = ', z)\n"
},
{
"alpha_fraction": 0.4384787380695343,
"alphanum_fraction": 0.4604026973247528,
"avg_line_length": 23.83333396911621,
"blob_id": "bc2e3f86d27036473c131dee6597dcfebf54ca8e",
"content_id": "d217448b49e56a44e7c218ba9585c8ce4edaaa54",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2340,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 90,
"path": "/work_7/work7IT/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\n\nnamespace work7IT\n{\n public partial class Form1 : Form\n {\n public Form1()\n {\n InitializeComponent();\n }\n\n private void button1_Click(object sender, EventArgs e)\n {\n int H = 5;\n int L = 5;\n dataGridView1.RowCount = H;\n dataGridView1.ColumnCount = L; \n int[,] a = new int[H, L];\n int i,j;\n Random rand = new Random();\n for (i = 0; i < H-1; i++)\n for (j = 0; j < L; j++)\n a[i, j] = rand.Next(-100, 100);\n \n for (i = 0; i < H-1; i++)\n for (j = 0; j < L; j++)\n dataGridView1.Rows[i].Cells[j].Value = Convert.ToString(a[i, j]);\n /*\n int m = int.MinValue;\n for (i = 0; i < H; i++)\n if (a[i, L - 1 - i] > m) m = a[i, L - 1 - i];\n textBox1.Text = Convert.ToString(m);\n Для столбцов матрицы с четными номерами найти максимальный элемент, для столбцов с нечетными - минимальный.\n СТРОКА СТОЛБЕЦ\n */\n }\n\n private void button2_Click(object sender, EventArgs e)\n {\n int H = 5;\n int L = 5;\n dataGridView1.RowCount = H;\n dataGridView1.ColumnCount = L;\n int[,] a = new int[H, L];\n int i, j;\n for (i = 0; i < H-1; i++)\n for (j = 0; j < L; j++)\n a[i, j] = Convert.ToInt16(dataGridView1.Rows[i].Cells[j].Value);\n for (j = 0; j < L; j++)\n {\n \n //{\n if (j % 2 != 0)\n {\n int max = int.MinValue;\n for (i = 0; i < H - 1; i++)\n {\n if (a[i, j] > max)\n {\n max = a[i, j];\n }\n dataGridView1.Rows[H - 1].Cells[j].Value = Convert.ToString(max);\n }\n }\n if (j % 2 == 0)\n {\n int min = int.MaxValue;\n for (i = 0; i < H - 1; i++)\n {\n if (a[i, j] < min)\n {\n min = a[i, j];\n }\n dataGridView1.Rows[H - 1].Cells[j].Value = Convert.ToString(min);\n }\n }\n //}\n }\n\n }\n }\n}\n"
},
{
"alpha_fraction": 0.5921052694320679,
"alphanum_fraction": 0.6776315569877625,
"avg_line_length": 24.33333396911621,
"blob_id": "952c96c0da04b306f2cbb283304df98abc0a0607",
"content_id": "c32f1cc99ef6eebb90d1dbb6d88eb469c7353d58",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 187,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 6,
"path": "/IT_Praktika_19/IT_Praktika_19/2-2(2)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "number = int(input('Input number: '))\n# фильтр на 4,5,6 биты\nnumber &= 0b1110000\n# сдвинуть на 4 разряда вправо\nnumber >>= 4\nprint('number = ', number)\n"
},
{
"alpha_fraction": 0.5208333134651184,
"alphanum_fraction": 0.5625,
"avg_line_length": 15,
"blob_id": "4c88ca6d18f8c7e3512fe87a1e99556c228f3e5a",
"content_id": "a14c789b5403af3d904c10fe41afe1ba06a1709b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 48,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 3,
"path": "/IT_Praktika_19/IT_Praktika_19/2-3(3)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x=int(input())\nk=int(input())\nprint(x|(2<<k-1))\n"
},
{
"alpha_fraction": 0.38723403215408325,
"alphanum_fraction": 0.4382978677749634,
"avg_line_length": 20.363636016845703,
"blob_id": "79b82ee4d6e7bd7acda5aaa44022970d2948be34",
"content_id": "21e5bf2d74ecf1616a88d37b68c7b0fef5ef0805",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 235,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 11,
"path": "/IT_Praktika_19/IT_Praktika_19/1-3(4)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nx=int(input())\ny=int(input())\nwhile(x<=2.5):\n while(y<=4):\n if(x+y<=2.):\n print(pow((math.sin(x*pow(math.e,(0.1*y)))),(1./3)))\n else:\n print(math.log(x+y,2))\n y=y+1\n x=x+0.5\n"
},
{
"alpha_fraction": 0.5371178984642029,
"alphanum_fraction": 0.5458515286445618,
"avg_line_length": 21.899999618530273,
"blob_id": "229518c1f18e99dc81786925a96377728071d689",
"content_id": "2950a5c970d9ce54e99eeeb7f6e3f735d551564b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 243,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 10,
"path": "/IT_Praktika_19/IT_Praktika_19/1-2(2)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nx = float(input('Введите x '))\ny = float(input('Введите y '))\nif x * y <= -1:\n f = math.sin(x * math.exp(y))\nelif x * y >= 5:\n f = x * x + math.tan(y)\nelse:\n f = math.sqrt(math.fabs(math.cos(x * y)))\nprint('f = ', f)\n"
},
{
"alpha_fraction": 0.6783625483512878,
"alphanum_fraction": 0.6783625483512878,
"avg_line_length": 32,
"blob_id": "9061da85f25ff6267d34871888ddd68ed706baf2",
"content_id": "a2be8da106b5530bbeb4876d7b7b20f3ed23c71f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 246,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 5,
"path": "/IT_Praktika_18/IT_Praktika_18/4-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "print('Введите число в десячиной системе счисления')\r\na = int(input())\r\nprint('Двоичная: ', bin(a))\r\nprint('Восьмеричная: ',oct(a))\r\nprint('Шестнадцатиричная: ',hex(a))\r\r\n"
},
{
"alpha_fraction": 0.5840708017349243,
"alphanum_fraction": 0.6283186078071594,
"avg_line_length": 15.142857551574707,
"blob_id": "783b0c86e1a71dfc9ffba547ef7a860b29339755",
"content_id": "5a7431dd535d4903833daf9560b1702b321993f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 128,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 7,
"path": "/IT_Praktika_18/IT_Praktika_18/6-3(2)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "n=int(input())\nres=0\nfor i in range(1,n+1):\n res=res+i\nprint(res)\nprint('С помощью формулы')\nprint(n*(n+1)/2)\n"
},
{
"alpha_fraction": 0.5731707215309143,
"alphanum_fraction": 0.6097561120986938,
"avg_line_length": 26.33333396911621,
"blob_id": "c6173421779656c900a78bd27a41e77db83f684c",
"content_id": "ede87984d0403c5157f2f5a96d57b537e2deb952",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 89,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 3,
"path": "/IT_Praktika_18/IT_Praktika_18/3-2(2)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math as m\nx = float( input(\"Введитеx: \") )\nprint( m.log2(7*x)*m.cos(x/3) )\n"
},
{
"alpha_fraction": 0.5438596606254578,
"alphanum_fraction": 0.5877193212509155,
"avg_line_length": 15.285714149475098,
"blob_id": "cbd145f2541aa1ee2787dd1de83adb2e7ed2bb9c",
"content_id": "e0fc27b3fb240556bc94d8a8ec606fb491d054a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 114,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 7,
"path": "/IT_Praktika_19/IT_Praktika_19/2-3(4)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nx=int(input())\nn=int(input())\nd= math.ceil(math.log(x,2))\nn=(2<<d)+((2<<d-n-1)-1)\nprint(n)\nprint(x&n)\n"
},
{
"alpha_fraction": 0.533137857913971,
"alphanum_fraction": 0.5524926781654358,
"avg_line_length": 26.5,
"blob_id": "a7d5685f7290fd8ae608073ef640389696ab0288",
"content_id": "66cf12c177a67c815a28c20206f7ca955b326f99",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1761,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 62,
"path": "/work_3/WindowsFormsApp1/WindowsFormsApp1/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\n\nnamespace WindowsFormsApp1\n{\n public partial class Form1 : Form\n {\n public Form1()\n {\n InitializeComponent();\n }\n\n private void groupBox1_Enter(object sender, EventArgs e)\n {\n\n }\n\n private void button1_Click(object sender, EventArgs e)\n {\n double x = Convert.ToDouble(textBox1.Text);\n textBox2.Text = \"Результаты работы программы Михайлова А.А. \" + Environment.NewLine;\n textBox2.Text += \"При x = \" + textBox1.Text + Environment.NewLine;\n int n = 0;\n if (radioButton2.Checked) n = 1;\n else if (radioButton3.Checked) n = 2;\n // Вычисление U\n double u;\n switch (n)\n {\n case 0:\n if (x >= 3) u = Math.Sinh(1 / x);\n else if (x >= 1) u = Math.Sinh(3 * x);\n else u = Math.Sinh(x * x);\n textBox2.Text += \"y = \" + Convert.ToString(u) + Environment.NewLine;\n break;\n case 1:\n if (x >= 3) u = Math.Cosh(1 / x);\n else if (x >= 1) u = Math.Cosh(3 * x);\n else u = Math.Cosh(x * x);\n textBox2.Text += \"y = \" + Convert.ToString(u) + Environment.NewLine;\n break;\n case 2:\n if (x >= 3) u = Math.Exp(1 / x);\n else if (x >= 1) u = Math.Exp(3 * x);\n else u = Math.Exp(x * x);\n textBox2.Text += \"y = \" + Convert.ToString(u) + Environment.NewLine;\n break;\n default:\n textBox2.Text += \"Решение не найдено\" + Environment.NewLine;\n break;\n\n }\n }\n }\n}\n"
},
{
"alpha_fraction": 0.32499998807907104,
"alphanum_fraction": 0.4333333373069763,
"avg_line_length": 17.83333396911621,
"blob_id": "ace249f5f77aeb9c6de6547c0d1feaeffd59c17d",
"content_id": "6c354d809f4eea54528d8aefa78cdb61055807a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 132,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 6,
"path": "/IT_Praktika_18/IT_Praktika_18/6-2(3)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "n = [1, 2, 3, 7, 6, 4, 5, 8] #пример списка\r\nfor x in n:\r\n if x == 237:\r\n break\r\n elif x % 2 == 0:\r\n print(x)\r\r\n"
},
{
"alpha_fraction": 0.9019073843955994,
"alphanum_fraction": 0.9019073843955994,
"avg_line_length": 72.5,
"blob_id": "421b793bb3522c00874978020c4a15360f42d31b",
"content_id": "5aefc73c4af3f140ef423e45e22b971c324e19ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 734,
"license_type": "no_license",
"max_line_length": 193,
"num_lines": 10,
"path": "/IT_Praktika_17/Задание для самостоятельного решения.sql",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "select HumanResources.EmployeeDepartmentHistory.StartDate, HumanResources.Employee.NationalIDNumber, HumanResources.Department.GroupName, HumanResources.Shift.Name, HumanResources.Shift.EndTime\nfrom HumanResources.EmployeeDepartmentHistory \ninner join HumanResources.Employee\non HumanResources.EmployeeDepartmentHistory.StartDate = HumanResources.Employee.HireDate\nleft outer join HumanResources.EmployeePayHistory\non HumanResources.EmployeeDepartmentHistory.BusinessEntityID = HumanResources.EmployeeDepartmentHistory.BusinessEntityID\ninner join HumanResources.Department\non HumanResources.Department.GroupName=HumanResources.Department.GroupName\ninner join HumanResources.Shift\non HumanResources.Shift.Name=HumanResources.Shift.Name"
},
{
"alpha_fraction": 0.48615917563438416,
"alphanum_fraction": 0.5259515643119812,
"avg_line_length": 35.125,
"blob_id": "0029e666433332919389ac7bb469a8cd7db48576",
"content_id": "42dccab351fb199bac6303ebcc5b95ac512491bd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 712,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 16,
"path": "/IT_Praktika_19/IT_Praktika_19/1-2(4)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nax, bx, hx = 0.0, 1.0, 0.2\nay, by, hy = 1.0, 2.0, 0.5\nx = ax #устанавливаем x в начало отрезка в xn\nwhile x <= bx: #пока не дойдем до xk\n y = ay #устанавливаем y в начало отрезка в yn\n while y <= by: #пока не дойдем до yk\n if x + y <= 2:\n f = math.pow(x + y, 1.0 / 5.0)\n else:\n f = math.pow(math.fabs(math.sin(x)), y)\n print('x: = ', x, 'y = ', y, 'f = ', f) # выводим результат\n # или print('x = {:.3}, y = {:.3}, f = {:.3}'.format(x,y,f))\n # или print(f'x = {x:.3}, y = {y:.3}, f = {f:.3}')\n y = y + hy #прибавляем к y шаг\n x = x + hx #прибавляем к x шаг\n"
},
{
"alpha_fraction": 0.5602409839630127,
"alphanum_fraction": 0.608433723449707,
"avg_line_length": 22.714284896850586,
"blob_id": "c53fd711ea25e8dc0bb16a83da6e7de224259f05",
"content_id": "77c51f47f2002d807fe176fb873783e8b5c9e520",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 221,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 7,
"path": "/IT_Praktika_18/IT_Praktika_18/5-3(2)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "a=int(input('Введите число: '))\nif(a%10>a//10):\n print(\"правая цифра больше\")\nelif(a%10==a//10):\n print(\"цифры равны\")\nelse:\n print(\"левая цифра больше\")\n"
},
{
"alpha_fraction": 0.5795454382896423,
"alphanum_fraction": 0.5795454382896423,
"avg_line_length": 13.666666984558105,
"blob_id": "92a503435d1298fa6eb296572dc7757aed5e5070",
"content_id": "0154b247546ac69d65d65785204d4a92ae6e5539",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 103,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 6,
"path": "/IT_Praktika_18/IT_Praktika_18/1-3(2)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x = input('Введите х: ')\ny = input('Введите y: ')\nx=float(x)\ny=float(y)\ny=y+x\nprint (y)\n"
},
{
"alpha_fraction": 0.6274510025978088,
"alphanum_fraction": 0.6535947918891907,
"avg_line_length": 29.600000381469727,
"blob_id": "12b97182c97b37fcee1f16946273e0952a574022",
"content_id": "d2e219681b9dd021adceab7bfd3ce54436ae9692",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 221,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 5,
"path": "/IT_Praktika_18/IT_Praktika_18/1-3(3)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x = input('Введите х: ') # возвращается строка, не число\nx=float(x) # преобразуем строку в вещественное число\ny=x**5-2*x**3+1\ny=str(y)\nprint('y = ' + y)\n"
},
{
"alpha_fraction": 0.5873940587043762,
"alphanum_fraction": 0.6128178238868713,
"avg_line_length": 28.952381134033203,
"blob_id": "eacd98de8008b66ec467e6e82fb83fdf25f05cd9",
"content_id": "985b0d97762c37ef3ca197a677d946c20b5a2274",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2151,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 63,
"path": "/work_2/WindowsFormsApp3/WindowsFormsApp3/Form1.cs",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\n\nnamespace WindowsFormsApp3 //var 6\n{\n public partial class Form1 : Form\n {\n public Form1()\n {\n InitializeComponent();\n }\n private void textBox1_TextChanged(object sender, EventArgs e) { }\n private void listBox1_SelectedIndexChanged(object sender, EventArgs e) { }\n private void Form1_Load(object sender, EventArgs e)\n {\n textBox1.Text = \"2\";\n // Вывод строки в многострочный редактор\n textBox2.Text = \"Практическая работа №2 Михайлов А.А.\";\n textBox2.Text += Environment.NewLine + \"Рассчитать значение выражения y=(sqrt(3 + ln x + 15 -x)/ 1 + sin (2 + x^2)/ 1+x\";\n\n }\n\n \n\n private void button1_Click(object sender, EventArgs e)\n {\n double x = 0;\n // Считываниезначения X\n if (textBox1.Text == \"\") textBox1.Text = \"0\"; \n x = double.Parse(textBox1.Text);\n // Выводзначения X вокно\n textBox2.Text += Environment.NewLine +\n\"При x = \" + x.ToString();\n // Вычисляем арифметическое выражение\n double y = (Math.Sqrt(3 + Math.Log(x) + 15 - x)) /(1 + Math.Sin((2 + x*x)/(1 + x)));\n\n // Выводим результат в окно\n textBox2.Text += Environment.NewLine +\n\"Результат y = \" + y.ToString();\n\n }\n\n private void textBox2_TextChanged(object sender, EventArgs e)\n {\n\n }\n\n\t\tprivate void Form1_Load_1(object sender, EventArgs e)\n\t\t{\n textBox1.Text = \"2\";\n // Вывод строки в многострочный редактор\n textBox2.Text = \"Практическая работа №2 Михайлов А.А.\";\n textBox2.Text += Environment.NewLine + \"Рассчитать значение выражения y=(sqrt(3 + ln x + 15 -x)/ 1 + sin (2 + x^2)/ 1+x\";\n }\n\t}\n}\n\n"
},
{
"alpha_fraction": 0.5309734344482422,
"alphanum_fraction": 0.5309734344482422,
"avg_line_length": 21.600000381469727,
"blob_id": "4279360b51065246a4d1f9afc814ac77ea58ce78",
"content_id": "f1191535b91d4dafb1a18fb3639a66e8cde02d50",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 137,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 5,
"path": "/IT_Praktika_18/IT_Praktika_18/6-2(4)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "s = input()\nfor i in s:\n if(i == ' '):\n continue\n print(i, end = '') # end = '' не переводит на новую строку\n"
},
{
"alpha_fraction": 0.6268656849861145,
"alphanum_fraction": 0.641791045665741,
"avg_line_length": 43.66666793823242,
"blob_id": "071840d638e2e26fd57cc9d0d6f1a7f74cca227d",
"content_id": "0187d3d38789c8060a319bae862ebc500689093d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 142,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 3,
"path": "/IT_Praktika_18/IT_Praktika_18/3-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nx=int(input('Введите х: '))\nprint(math.pow(math.tan((math.cos(x)*math.sin(2*x))/(x*math.pow(math.e,x))),(math.log(x,7))))\n"
},
{
"alpha_fraction": 0.4930875599384308,
"alphanum_fraction": 0.5345622301101685,
"avg_line_length": 15.692307472229004,
"blob_id": "775acf4857f387733fa9293538f62d4211fceef2",
"content_id": "1c25007e68a85367c98f4931bddfd77b0621899e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 255,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 13,
"path": "/IT_Praktika_18/IT_Praktika_18/5-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "from math import * \na=int(input('Введите число: '))\nflag=0\nwhile(a!=0):\n c=a % 10\n if(c==3):\n flag=1\n break\n a=a//10\nif (flag==1):\n print(\"тройка входит\")\nelse:\n print(\"тройка не входит\")\n"
},
{
"alpha_fraction": 0.40963855385780334,
"alphanum_fraction": 0.5180723071098328,
"avg_line_length": 26.33333396911621,
"blob_id": "502e1ac17339499d8cdcbac518962c9bc0932580",
"content_id": "359f519c4addbdac29bf4f69a4f2dd134900e501",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 83,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 3,
"path": "/IT_Praktika_18/IT_Praktika_18/2-3(3)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "\n_list = [7465, 3.14, 'fghjk', True, []]\nfor i in range(5):\n print (_list[4-i])\n"
},
{
"alpha_fraction": 0.6705882549285889,
"alphanum_fraction": 0.6705882549285889,
"avg_line_length": 20.25,
"blob_id": "ef9e00685ba9f00e4506ec899d8a5b46b948f4a4",
"content_id": "4264580861cd7068e9cf30b1027f3da6281d6ab3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 85,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 4,
"path": "/IT_Praktika_19/IT_Praktika_19/1-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nx=int(input())\ny=int(input())\nprint(math.log(abs(math.sin(x+y)),math.e))\n"
},
{
"alpha_fraction": 0.6807228922843933,
"alphanum_fraction": 0.6807228922843933,
"avg_line_length": 17.44444465637207,
"blob_id": "ea49c41eb5234a7d6b484de23b91e0f6bd23d7c3",
"content_id": "5eeb6a2b49ac0fd4d93ad75e56be2d84c828ce16",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 227,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 9,
"path": "/IT_Praktika_18/IT_Praktika_18/1-3(1)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "name = input('Введите имя: ')\n\nfam = input('Введите фамилию: ')\n\nstud = input('Введите номер студенческого билета:')\n\nprint('Привет, ' + name)\nprint(fam)\nprint(stud)\n"
},
{
"alpha_fraction": 0.6818181872367859,
"alphanum_fraction": 0.7196969985961914,
"avg_line_length": 32,
"blob_id": "550a847056105cce52ebcc0a1ad88e9c8bf58805",
"content_id": "f803204e0633000a40bbb489127a66a735c72741",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 200,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 4,
"path": "/IT_Praktika_18/IT_Praktika_18/1-2(2)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x = input('Введите х') # возвращается строка, не число\nx=float(x) # преобразуем строку в вещественное число\ny=x**2+3*x-100\nprint(y)\n"
},
{
"alpha_fraction": 0.511049747467041,
"alphanum_fraction": 0.5276243090629578,
"avg_line_length": 29.434782028198242,
"blob_id": "97c1f8077fef4fec27e7c56d7b45cbf45741f81f",
"content_id": "d9e3acdc763b99943738d05e0cfee8fb05fa03e4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 940,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 23,
"path": "/IT_Praktika_19/IT_Praktika_19/2-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "x, y = 37, 58\r\n#x в десятичной и двоичной системе\r\nprint('x = ', x, ' x_bin = ', bin(x))\r\n#y в десятичной и двоичной системе\r\nprint('y = ', y, ' y_bin = ', bin(y))\r\n#~x в десятичной и двоичной системе\r\na = ~x\r\nprint('~x =', a, ' ~x_bin = ', bin(a))\r\n#x>>3 в десятичной и двоичной системе\r\nb = x >> 3\r\nprint('x>>3 =', b, ' (x>>3)_bin = ', bin(b))\r\n#x<<2 в десятичной и двоичной системе\r\nc = x << 2\r\nprint('x<<2 =', c, ' (x<<2)_bin = ', bin(c))\r\n#x&y в десятичной и двоичной системе\r\nd = x & y\r\nprint('x&y =', d, ' (x&y)_bin = ', bin(d))\r\n#x^y в десятичной и двоичной системе\r\ne = x ^ y\r\nprint('x^y =', e, ' (x^y)_bin = ', bin(e))\r\n#x|y в десятичной и двоичной системе\r\nf = x | y\r\nprint('x|y =', f, ' (x|y)_bin = ', bin(f))\r\r\n"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.739130437374115,
"avg_line_length": 33.5,
"blob_id": "99eabd864335fb60519863e887e68c3aadb7d312",
"content_id": "564b740d3a8c131923498b2e50971bdff8195e01",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 100,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 2,
"path": "/IT_Praktika_18/IT_Praktika_18/2-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "a = 12 + 3.14\nprint( type(a) ) # функцияtypeвозвращаеттипеёаргумента\n"
},
{
"alpha_fraction": 0.4816513657569885,
"alphanum_fraction": 0.49082568287849426,
"avg_line_length": 20.799999237060547,
"blob_id": "9572ca69b6861a345c347e0886cd97d0fc461148",
"content_id": "abc88503c6526c3d1330a8142e6d008d917678ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 218,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 10,
"path": "/IT_Praktika_19/IT_Praktika_19/1-3(3)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\na=int(input())\nb=int(input())\nhx=int(input())\nfor i in range(a,(b+1),hx):\n f = pow(math.cos(math.e*i),3)+math.sin(abs(i))\n f = str(f)\n i = str(i)\n print('x = ' + i + ' f = ' + f)\n i = int(i)\n"
},
{
"alpha_fraction": 0.469696968793869,
"alphanum_fraction": 0.6060606241226196,
"avg_line_length": 15.25,
"blob_id": "f17b211f72d97985a57825d3b15333979274a422",
"content_id": "ea63cdb6c632e41eb6d356d73df8ddc38ec83ec8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 66,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 4,
"path": "/IT_Praktika_18/IT_Praktika_18/3-3(2)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "num=[1,0,1,1,0,1,0,0]\nprint(num)\nnum=num+[sum(num)%2]\nprint(num)\n\n"
},
{
"alpha_fraction": 0.5324675440788269,
"alphanum_fraction": 0.5454545617103577,
"avg_line_length": 28.600000381469727,
"blob_id": "778668d92b0b29782e0736ce68d61a8a6695e81f",
"content_id": "1752060747a8ad6f59b24e37bb030514e9d40ff1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 168,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 5,
"path": "/IT_Praktika_19/IT_Praktika_19/1-2(1)_programm.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\r\nx = float(input('Введите x '))\r\ny = float(input('Введите y '))\r\nf = 2 * math.pow(y, x) + math.log(math.fabs(x + y ** 3))\r\nprint('f = ', f)\r\r\n"
},
{
"alpha_fraction": 0.5232558250427246,
"alphanum_fraction": 0.569767415523529,
"avg_line_length": 27.66666603088379,
"blob_id": "ba878fabe577121f7cc302dbc7a1ca04a6c4d2c4",
"content_id": "80d2da87012e9eb4955831a06d2689a731bbb829",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 258,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 9,
"path": "/IT_Praktika_19/IT_Praktika_19/1-3(2)_zadanie.py",
"repo_name": "artyomka0/IT_practics",
"src_encoding": "UTF-8",
"text": "import math\nx=int(input())\ny=int(input())\nif(math.sin(x+y)<=-0.5):\n print((pow(math.arctg(math.sqrt(abs(x-y))),3))*(x*pow(y,math.e)))\nelif(math.sin(x+y)<0.5):\n print(3.*math.log(abs(x*y),3.))\nelif(math.sin(x+y)>=0.5):\n print((pow(x,3.)+pow(y,1.5)))\n"
}
] | 52 |
noissefnoc/numpy-unittest-100 | https://github.com/noissefnoc/numpy-unittest-100 | ff2cafbabacf392e40a43de09411ee3a63f449ec | c79fb4e71f65e08fe46978c751fe9b4deb9b6fbe | 33807a7937351b46ddf0b1fee081a23ec8542ebf | refs/heads/master | 2020-07-13T21:19:08.973995 | 2020-01-03T05:28:16 | 2020-01-03T05:28:16 | 205,157,114 | 0 | 0 | null | 2019-08-29T12:18:32 | 2018-11-19T04:51:16 | 2018-11-19T04:51:15 | null | [
{
"alpha_fraction": 0.4783737063407898,
"alphanum_fraction": 0.5129757523536682,
"avg_line_length": 24.711111068725586,
"blob_id": "bf8b723a88f88393ed8737998cd0aa8e3427e975",
"content_id": "bf914885fe61142593e716cfcb5f25be98b2b723",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1266,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 45,
"path": "/test_array_basic.py",
"repo_name": "noissefnoc/numpy-unittest-100",
"src_encoding": "UTF-8",
"text": "import unittest\nimport numpy as np\n\n\nclass TestArrayBasic(unittest.TestCase):\n\n def test_ndim(self):\n \"\"\"\n ndim: 次元数\n \"\"\"\n vector = np.array([0, 0, 0])\n metrix = np.array([[0, 0, 0], [0, 0, 0]])\n self.assertEqual(vector.ndim, 1)\n self.assertEqual(metrix.ndim, 2)\n\n def test_shape(self):\n \"\"\"\n shape: 次元ごとの要素数。新しく追加された次元が `unshift` で追加\n \"\"\"\n vector = np.array([0, 0, 0])\n metrix = np.array([[0, 0, 0], [0, 0, 0]])\n self.assertEqual(vector.shape, (3,))\n self.assertEqual(metrix.shape, (2, 3))\n\n def test_size(self):\n \"\"\"\n size: 要素数\n \"\"\"\n vector = np.array([0, 0, 0])\n metrix = np.array([[0, 0, 0], [0, 0, 0]])\n self.assertEqual(vector.size, 3)\n self.assertEqual(metrix.size, 6)\n\n def test_dtype(self):\n \"\"\"\n type(): 引数の型を返す\n ndarray.dtype: 配列の要素の型を返す (ndarrayは全要素の型が一緒)\n \"\"\"\n metrix = np.array([[0, 0, 0], [0, 0, 0]])\n self.assertEqual(type(metrix), np.ndarray)\n self.assertEqual(metrix.dtype, np.int)\n\n\nif __name__ == '__main__':\n unittest.main()"
}
] | 1 |
arjun-krishna/us-map | https://github.com/arjun-krishna/us-map | b10be0d4057d5f6236012b0379c7ee702a1a318a | d68aaef9a161ff5d5b16852687c1ad81257f9fe0 | 81b32ae681d89300fb0b64ec5b61fa6c78e7cf1d | refs/heads/master | 2021-01-24T07:47:46.939515 | 2017-06-05T03:22:27 | 2017-06-05T03:22:27 | 93,360,739 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.620192289352417,
"alphanum_fraction": 0.625,
"avg_line_length": 28.714284896850586,
"blob_id": "23e27f4fb1ca179edd6050a669ce09f6bb20cf23",
"content_id": "a9b4d825071ae11419a092339dc7e8f21e3e6cc6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 208,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 7,
"path": "/map/extract_counties_list.py",
"repo_name": "arjun-krishna/us-map",
"src_encoding": "UTF-8",
"text": "import json\n\nwith open('us.json') as f :\n\tdata = json.load(f)\n \n\tfor obj in data['objects']['us_counties']['geometries'] :\n\t\tprint obj['properties']['COUNTY'], ',', obj['properties']['NAME'].encode('utf-8')\n"
},
{
"alpha_fraction": 0.5932203531265259,
"alphanum_fraction": 0.6214689016342163,
"avg_line_length": 16.799999237060547,
"blob_id": "404b8cb6cb935a12e78eb31c080e3e61517d8b81",
"content_id": "9cc33fb1c1ef4de99aea2d6db207ea7e143bb863",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 177,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 10,
"path": "/map/generate_random_data.py",
"repo_name": "arjun-krishna/us-map",
"src_encoding": "UTF-8",
"text": "import random\n\nprint 'FIPS,population'\nwith open('counties.csv') as f :\n\ti = False\n\tfor line in f :\n\t\tif i :\n\t\t\tprint line[0:3],',',int(random.random()*100)\n\t\telse :\n\t\t\ti = True"
}
] | 2 |
svonton/JetBrains-Academy-Project--Simple-Banking-System-report- | https://github.com/svonton/JetBrains-Academy-Project--Simple-Banking-System-report- | 057f1dfeafb2da000145f9fec342b0eb9c083a38 | 74060502f6720e6083bd1c877961e67bdb5bf889 | 8392afdd4f07ea6be2ab36c28e4ab05df234e221 | refs/heads/master | 2023-03-26T12:49:36.045279 | 2021-03-25T17:17:18 | 2021-03-25T17:17:18 | 351,518,471 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.8373983502388,
"alphanum_fraction": 0.8373983502388,
"avg_line_length": 60.5,
"blob_id": "5662da7ff09715f9aae74df3e21734c52e0d393e",
"content_id": "211e2916e7855b90823cc6a6bd0fac789fd4e9e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 123,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 2,
"path": "/README.md",
"repo_name": "svonton/JetBrains-Academy-Project--Simple-Banking-System-report-",
"src_encoding": "UTF-8",
"text": "# JetBrains Academy Project: Simple Banking System report \n project that was completed during JetBrains Academy educations\n"
},
{
"alpha_fraction": 0.5715555548667908,
"alphanum_fraction": 0.5864889025688171,
"avg_line_length": 34.14374923706055,
"blob_id": "daf7ac429acdfa3223bb5ada1ee7ecd548cc410d",
"content_id": "8bea553cc9a23ff792106c079122d9cd20f8ec15",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5625,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 160,
"path": "/banking.py",
"repo_name": "svonton/JetBrains-Academy-Project--Simple-Banking-System-report-",
"src_encoding": "UTF-8",
    "text": "import random\nimport sqlite3 as sq\nconn = sq.connect(\"card.s3db\")\ncur = conn.cursor()\n\n\ncur.execute(\"CREATE TABLE IF NOT EXISTS card(id INTEGER PRIMARY KEY AUTOINCREMENT\"\n \", number TEXT, pin TEXT,balance INTEGER DEFAULT 0);\")\nconn.commit()\ncur.execute(\"DELETE FROM card;\")\nconn.commit()\n\nuser_invitation_str = \"\"\"1. Create an account\n2. Log into account\n0. Exit\\n\"\"\"\nCARD_NUMBER, USER_PIN = \"\", \"\"\nBALANCE = 0\nEXIT_FLAG = False\n\n\ndef card_creations():\n global cur, conn\n bin_str = \"400000\"\n account_identifier_str = \"\"\n user_pin_str = \"\"\n\n for i in range(9):\n account_identifier_str += f\"{random.randint(0,9)}\"\n user_card_number_str = bin_str+f\"{account_identifier_str}\"#+f\"{random.randint(0,9)}\"\n\n user_card_number_str += luhn_algorithm(user_card_number_str)\n\n for i in range(4):\n user_pin_str += f\"{random.randint(0,9)}\"\n\n cur.execute(f\"INSERT INTO card(number,pin) VALUES({user_card_number_str},{user_pin_str});\")\n conn.commit()\n return user_card_number_str, user_pin_str\n\n\ndef luhn_algorithm(user_card_number_str, valid_check=False):\n if len(user_card_number_str) == 16 and valid_check == True:\n user_card_number_str = user_card_number_str[:15]\n\n card_number_ch = \"\"\n count = 0\n sum_of_new_indf = 0\n for digit in user_card_number_str:\n count += 1\n if count == 1:\n current_num = int(digit)\n current_num *= 2\n if current_num > 9:\n current_num -= 9\n card_number_ch += str(current_num)\n else:\n count = 0\n card_number_ch += digit\n for cur_num in card_number_ch:\n sum_of_new_indf += int(cur_num)\n lim1 = (sum_of_new_indf // 10) * 10 + 10\n lim2 = (sum_of_new_indf // 10) * 10 - 10\n lim1 = abs(lim1 - sum_of_new_indf)\n lim2 = abs(lim2 - sum_of_new_indf)\n if lim1 > lim2:\n return str(lim2)\n elif lim1 < lim2:\n return str(lim1)\n else:\n return \"0\"\n\n\ndef card_logging():\n log_card = input(\"Enter your card number:\\n\")\n log_pin = input(\"Enter your PIN:\\n\")\n cur.execute(f\"SELECT * FROM card WHERE number = {log_card} AND pin = {log_pin}\")\n if bool(cur.fetchall()):\n print(\"You have successfully logged in!\")\n card_operation(log_card, log_pin)\n else:\n print(\"Wrong card number or PIN!\")\n return\n\n\ndef card_operation(log_card, log_pin):\n global EXIT_FLAG\n operation_list_str = \"\"\"1. Balance\n2. Add income\n3. Do transfer\n4. Close account\n5. Log out\n0. Exit\\n\"\"\"\n user_operation_choice = int(input(operation_list_str))\n if user_operation_choice == 0:\n EXIT_FLAG = True\n return\n elif user_operation_choice == 1:\n cur.execute(f\"SELECT balance FROM card WHERE number = {log_card} AND pin = {log_pin}\")\n print(f\"Balance: {cur.fetchall()[0][0]}\")\n elif user_operation_choice == 2:\n cur.execute(f\"SELECT balance FROM card WHERE number = {log_card} AND pin = {log_pin}\")\n user_balance = cur.fetchall()[0][0]\n income = int(input(\"Enter income:\\n\"))\n cur.execute(f\"UPDATE card SET balance = {user_balance+income} WHERE number = {log_card} AND pin = {log_pin}\")\n conn.commit()\n print(\"Income was added!\")\n card_operation(log_card, log_pin)\n elif user_operation_choice == 3:\n card_to_transfer_number = input(\"Transfer\\nEnter card number:\\n\")\n check_sum = luhn_algorithm(card_to_transfer_number, True)\n if len(card_to_transfer_number) == 16 and check_sum == card_to_transfer_number[15]:\n cur.execute(f\"SELECT * FROM card WHERE number = {card_to_transfer_number}\")\n if bool(cur.fetchall()):\n money_to_transfer = int(input(\"Enter how much money you want to transfer:\"))\n cur.execute(f\"SELECT balance FROM card WHERE number = {log_card} AND pin = {log_pin}\")\n user_money = cur.fetchall()[0][0]\n if user_money >= money_to_transfer:\n cur.execute(f\"UPDATE card SET balance = {user_money - money_to_transfer} WHERE number = {log_card} AND pin = {log_pin}\")\n conn.commit()\n cur.execute(f\"SELECT balance FROM card WHERE number = {card_to_transfer_number}\")\n target_balance = cur.fetchall()[0][0]\n cur.execute(f\"UPDATE card SET balance = {target_balance + money_to_transfer} WHERE number = {card_to_transfer_number}\")\n conn.commit()\n print(\"Success!\\n\")\n else:\n print(\"Not enough money!\\n\")\n card_operation(log_card, log_pin)\n else:\n print(\"Such a card does not exist.\\n\")\n card_operation(log_card, log_pin)\n else:\n print(\"Probably you made a mistake in the card number. Please try again!\\n\")\n card_operation(log_card, log_pin)\n elif user_operation_choice == 4:\n cur.execute(f\"DELETE FROM card WHERE number = {log_card} AND pin = {log_pin}\")\n conn.commit()\n print(\"The account has been closed!\")\n return\n else:\n print(\"You have successfully logged out!\")\n return\n\n\nwhile True:\n if EXIT_FLAG:\n print(\"Bye!\")\n break\n user_choice = int(input(user_invitation_str))\n if user_choice == 0:\n print(\"Bye!\")\n break\n elif user_choice == 1:\n CARD_NUMBER, USER_PIN = card_creations()\n print(f\"\"\"Your card has been created\nYour card number:\n{CARD_NUMBER}\nYour card PIN:\n{USER_PIN}\"\"\")\n elif user_choice == 2:\n card_logging()\n\n\n"
}
] | 2 |
deathvn/last-project-teamclininal-extraAssigment | https://github.com/deathvn/last-project-teamclininal-extraAssigment | 09e972c2c584a229994bcef86b816435dc038f89 | 9e045b6b915a6cfc39ff56775aa16066e331f89b | 42921b5d0ceced2adac1d40d75fc786cc653ebd4 | refs/heads/master | 2020-06-01T20:38:50.598160 | 2019-06-10T10:06:41 | 2019-06-10T10:06:41 | 190,919,756 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6251944303512573,
"alphanum_fraction": 0.7122861742973328,
"avg_line_length": 21.964284896850586,
"blob_id": "a960c81012bb445ed6cad0b6e4e8d33064d7222e",
"content_id": "2dce96f5808c68e7c38533c064ac152dbd6c8633",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 647,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 28,
"path": "/README.md",
"repo_name": "deathvn/last-project-teamclininal-extraAssigment",
"src_encoding": "UTF-8",
"text": "We have to reupload Goal 2 [basket2vec.py](basket2vec.py) \nFixed bug: \"basket2vec.py\" works well with Ubuntu, but add unnecessary newline character when run on Windows machines. \nNow \"basket2vec.py\" works both well.\n# Team: Last Project \nExtra Credit Assignment: vector file validation.\n## Members: \n15520614 - Khả Phiêu \n15520182 - Ngọc Hải \n15521025 - Anh Vọng \n15520148 - Công Dương \n15520494 - Quang Minh \n15520996 - Tỷ Tỷ\n## Usage:\nRequired:\n```\npython3\nnumpy\npandas\n```\nRun:\n```shell\npython extra.py <vector verified path> <vector test path>\n```\nExample:\n```shell\npython extra.py vect1.txt vect2.txt\n```\n\n"
},
{
"alpha_fraction": 0.4760241210460663,
"alphanum_fraction": 0.48809146881103516,
"avg_line_length": 35.195404052734375,
"blob_id": "05d6d5412c1f3babeae6a6248d8574424bb8a711",
"content_id": "b8e75f14db8d2e81325350189ec130c8c191c926",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3149,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 87,
"path": "/basket2vec.py",
"repo_name": "deathvn/last-project-teamclininal-extraAssigment",
"src_encoding": "UTF-8",
    "text": "import csv\nimport os\nimport numpy\nimport sys\nimport numpy as np\nfrom collections import OrderedDict\n\ndef find_feature():\n diction = {}\n with open(path_to_input_basket_file, mode='r') as basket_file:\n reader=csv.reader(basket_file, delimiter=',')\n for row in reader:\n for i in range(2, len(row)):\n try:\n feature = row[-1-i].split(':')[1].split('=')[0]\n except:\n feature = row[-1-i].split('=')[0].lower()\n try:\n val = int(row[-1-i].split(':')[1].split('=')[1])\n except:\n val = 1\n if feature in diction:\n diction[feature] += val\n else:\n diction[feature] = val\n return diction\n\ndef get_top_n():\n top_n = []\n count = 0;\n for key, val in features_dict.items():\n top_n.append(key)\n count+=1\n if count==top_n_features_to_extract:\n break\n return top_n\n\ndef make_header(vector):\n with open(path_to_output_vector_file, encoding='utf8', mode='w', newline='') as out_file:\n writer = csv.writer(out_file, delimiter='\\t', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n SS = vector + ['section', 'source']\n writer.writerow(SS)\n num_col = len(SS)\n r1 = ['d' for i in range(num_col)]\n writer.writerow(r1)\n r2 = ['' for i in range(num_col-2)] + ['c', 'm']\n writer.writerow(r2)\n\ndef write_vector(vector):\n with open(path_to_input_basket_file, mode='r') as basket_file:\n reader=csv.reader(basket_file, delimiter=',')\n with open(path_to_output_vector_file, encoding='utf8', mode='a', newline='') as out_file:\n writer = csv.writer(out_file, delimiter='\\t', quotechar='\"', quoting=csv.QUOTE_MINIMAL)\n for row in reader:\n write_val = []\n dit = {}\n for i in range(2, len(row)):\n try:\n feature = row[-1-i].split(':')[1].split('=')[0]\n except:\n feature = row[-1-i].split('=')[0].lower()\n try:\n val = int(row[-1-i].split(':')[1].split('=')[1])\n except:\n val = 1\n dit[feature] = val\n for e in vector:\n if e in dit:\n write_val.append(dit[e])\n else:\n write_val.append(0)\n write_val.append(row[-2].replace(' ','').split('=')[0])\n write_val.append(row[-1].replace(' ','').split('=')[0])\n writer.writerow(write_val)\n\nif __name__=='__main__':\n path_to_input_basket_file = sys.argv[1]\n path_to_output_vector_file = sys.argv[2]\n top_n_features_to_extract = int(sys.argv[3])\n\n features_dict = find_feature()\n features_dict = OrderedDict(sorted(features_dict.items(), key=lambda x: x[1], reverse=True))\n #print (features_dict)\n vector = get_top_n()\n print(vector)\n make_header(vector)\n write_vector(vector)\n"
},
{
"alpha_fraction": 0.5208485722541809,
"alphanum_fraction": 0.555230438709259,
"avg_line_length": 31.571428298950195,
"blob_id": "a165d2c746a0d2a1a601dddb48ab6f8b38a1da24",
"content_id": "9e22eabafb2dea5383ccaf4121d9320886b36218",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1367,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 42,
"path": "/extra.py",
"repo_name": "deathvn/last-project-teamclininal-extraAssigment",
"src_encoding": "UTF-8",
"text": "import sys\nimport numpy as np\nimport pandas as pd\n\ndef check_value(v1, v2):\n return v1.equals(v2)\n\ndef check_shape(v1, v2):\n r1 = v1.values[0].tolist()==v2.values[0].tolist()\n r2 = v1.values[1].tolist()==v2.values[1].tolist()\n if (r1 and r2):\n return (v1.shape == v2.shape)\n return False\n\ndef main(v1, v2):\n if not check_shape(v1, v2):\n print ('wrong format-shape, False vector')\n else:\n columns = list(v1.columns.values)\n v1 = v1.sort_values(columns, ascending=False)\n try:\n v2 = v2[columns]\n v2 = v2.sort_values(columns, ascending=False)\n \n v1.to_csv('temp1.tsv', index=False, sep='\\t')\n v2.to_csv('temp2.tsv', index=False, sep='\\t')\n v1 = pd.read_csv('temp1.tsv', sep='\\t')\n v2 = pd.read_csv('temp2.tsv', sep='\\t')\n \n if check_value(v1, v2):\n print ('True vector')\n else:\n print ('wrong contents, False vector')\n except:\n print ('wrong feature columns, False vector')\n\nif __name__=='__main__':\n path_to_verified_vector_file = sys.argv[1]\n path_to_test_vector_file = sys.argv[2]\n vect1_data = pd.read_csv(path_to_verified_vector_file, sep='\\t')\n vect2_data = pd.read_csv(path_to_test_vector_file, sep='\\t')\n main(vect1_data, vect2_data)"
}
] | 3 |
NitinShaily/DigitRecogniser_MLmodel | https://github.com/NitinShaily/DigitRecogniser_MLmodel | 0fbf94e741c36f92f023a2a692cb4fb6b3069ef7 | 6ab1eccc5ddc59f4366b4ae5b8822fb4069002b7 | d5c8411e71109d9adc73609091abd625103985fc | refs/heads/master | 2022-03-01T16:39:20.181019 | 2019-11-10T19:36:13 | 2019-11-10T19:36:13 | 213,884,922 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6257088780403137,
"alphanum_fraction": 0.6597353219985962,
"avg_line_length": 30.3125,
"blob_id": "3ecbdc697d4c91c04abbee4f912f5805f3b5a8ae",
"content_id": "2a22a57b9f59c3ef13223104dc78b60c66a4eb69",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 529,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 16,
"path": "/image_visual.py",
"repo_name": "NitinShaily/DigitRecogniser_MLmodel",
"src_encoding": "UTF-8",
"text": "#using matplotlib:\nimage=traina.iloc[0:1,1:] #select the first row of mnist data and put it in image\n\nimage\n\nimport matplotlib.pyplot as plt \nplt.imshow(image.values.reshape(28,28)) #print the image using pixels/data given in size of 28*28 cuz row contain 784(=28*28)\n\nprediction=model.predict(image) #prediction of given image\nprint(prediction)\n\nimport matplotlib.image as mpimg\n\ni=mpimg.imread(\"ima.png\") #reading a image using matplot\n\nplt.imshow(i) \n"
},
{
"alpha_fraction": 0.7194244861602783,
"alphanum_fraction": 0.730215847492218,
"avg_line_length": 45.33333206176758,
"blob_id": "c332597533d258bb6c73670b1d027114e186717e",
"content_id": "79ef9a3e95fb95132552e0278c0933b13cebc083",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 278,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 6,
"path": "/README.md",
"repo_name": "NitinShaily/DigitRecogniser_MLmodel",
"src_encoding": "UTF-8",
"text": "**</h>DigitRecogniser_MLmodel</h>** \n**Famous mnist_digit dataset challenge of** **_Kaggle_** \n>**PROJECT FEATURES** :\n>>1.comprise of visualisation of data \n>>2.calculation and predicting accuracy using Random Forest algorithm \n>>3.real life digit recogniser using OpenCV\n"
},
{
"alpha_fraction": 0.631748616695404,
"alphanum_fraction": 0.6502820253372192,
"avg_line_length": 23.81999969482422,
"blob_id": "f7dbb8069c4de791c860e406399b671d2b614048",
"content_id": "3ae5c29938df25c14b58fe4e04da3d94ee588286",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1241,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 50,
"path": "/code.py",
"repo_name": "NitinShaily/DigitRecogniser_MLmodel",
"src_encoding": "UTF-8",
"text": "#%%\nfrom sklearn.metrics import accuracy_score\nimport pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\n#%%\n\n#%%\ndf=pd.read_csv(\"fashion-mnist_train.csv\")\n\n\n \n#%%\nimage #see array of pixel values of digit at first row\n\n\n#%%\ntrain,test=train_test_split(df,test_size=0.2,random_state=12)\ndel df\n\n#%%\ndef x_and_y(df):\n x=df.drop(['label'],axis=1) #dropping of target columns\n y=df.label \n return x,y\nx_train,y_train=x_and_y(train)\nx_test,y_test=x_and_y(test)\n\n\n#%%\n#training of our dataset and caluculation of accuracy\n\nmodel=RandomForestClassifier(n_estimators=100,random_state=12) \nmodel.fit(x_train,y_train)\nprediction=model.predict(x_test)\nscore=accuracy_score(y_test,prediction)\nprint(score)\n\n#%%\n#data visualisation\n\nimage=traina.iloc[0:1,1:] #selecting 1st row except colm 1 which is of label (target variable)\n\n#%%\nimport matplotlib.pyplot as plt\nplt.imshow(image.values.reshape(28,28)) #cuz mnist dataset have image of 28*28 pixel\n#%%\nprediction=model.predict(image)\nprint(prediction)\n"
}
] | 3 |
AfshinZlfgh/instabot | https://github.com/AfshinZlfgh/instabot | 042d6b4afad835fad3a97ed24a0e9f21229e0bf5 | cf6649286abfb004bb993e4712dc211168e1b0c2 | 125fbbf8eec7292354fb2c730b6c695dec0ac86b | refs/heads/master | 2021-05-14T06:29:48.100829 | 2018-01-05T23:55:08 | 2018-01-05T23:55:08 | 116,241,868 | 1 | 0 | null | 2018-01-04T09:36:55 | 2018-01-04T02:01:24 | 2017-12-25T10:20:07 | null | [
{
"alpha_fraction": 0.6749364137649536,
"alphanum_fraction": 0.6812977194786072,
"avg_line_length": 21.457143783569336,
"blob_id": "d966a5f6f4da995b2014c3c691c994054f9ab1de",
"content_id": "00c2a51906a935c5880c62e201f8287a78350c83",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1572,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 70,
"path": "/instabot/bot/delay.py",
"repo_name": "AfshinZlfgh/instabot",
"src_encoding": "UTF-8",
"text": "\"\"\"\n Function to calculate delays for like/follow/unfollow etc.\n\"\"\"\n\nimport time\nimport random\n\n\ndef add_dispersion(delay_value):\n return delay_value * 3 / 4 + delay_value * random.random() / 2\n\n\n# this function will sleep only if elapsed time since `last_action` is less than `target_delay`\ndef sleep_if_need(last_action, target_delay):\n now = time.time()\n elapsed_time = now - last_action\n if (elapsed_time < target_delay):\n remains_to_wait = target_delay - elapsed_time\n time.sleep(add_dispersion(remains_to_wait))\n\n\ndef like_delay(bot):\n sleep_if_need(bot.last_like, bot.like_delay)\n bot.last_like = time.time()\n\n\ndef unlike_delay(bot):\n sleep_if_need(bot.last_unlike, bot.unlike_delay)\n bot.last_unlike = time.time()\n\n\ndef follow_delay(bot):\n sleep_if_need(bot.last_follow, bot.follow_delay)\n bot.last_follow = time.time()\n\n\ndef unfollow_delay(bot):\n sleep_if_need(bot.last_unfollow, bot.unfollow_delay)\n bot.last_unfollow = time.time()\n\n\ndef comment_delay(bot):\n sleep_if_need(bot.last_comment, bot.comment_delay)\n bot.last_comment = time.time()\n\n\ndef block_delay(bot):\n sleep_if_need(bot.last_block, bot.block_delay)\n bot.last_block = time.time()\n\n\ndef unblock_delay(bot):\n sleep_if_need(bot.last_unblock, bot.unblock_delay)\n bot.last_unblock = time.time()\n\n\ndef error_delay(bot):\n time.sleep(10)\n\n\ndef delay_in_seconds(bot, delay_time=60):\n time.sleep(delay_time)\n\n\ndef small_delay(bot):\n time.sleep(add_dispersion(3))\n\n\ndef very_small_delay(bot):\n time.sleep(add_dispersion(0.7))\n"
}
] | 1 |
DLMPO/testrepo | https://github.com/DLMPO/testrepo | 4e86b1b0db1abcb8aa7bd13b5c8270a10a944fc0 | 3d386f9c77a6255ca16df292c6fbd4ea5fbf2174 | bd68d05de9a20b51b0af5833143b2e4c2bd8a56a | refs/heads/master | 2022-12-25T02:35:21.249703 | 2020-10-04T07:43:06 | 2020-10-04T07:43:06 | 294,968,734 | 0 | 0 | null | 2020-09-12T15:16:06 | 2020-09-12T16:38:23 | 2020-09-12T17:41:26 | Python | [
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.75,
"avg_line_length": 21,
"blob_id": "f1cd525929a2405dde595bd73ba0027f46ea63a3",
"content_id": "6bce25c4aab95c60362d6f55e83b0be7e55326a5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 44,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 2,
"path": "/Test.py",
"repo_name": "DLMPO/testrepo",
"src_encoding": "UTF-8",
"text": "#Upload test file\r\nprint(\"Upload test file\")"
}
] | 1 |
malfunction54/rfid_reader | https://github.com/malfunction54/rfid_reader | 965c8f87cf484d1cf3ce083e8ca98c8db13f040b | 953ae1da2e3ae59cf61e50668fa2630c463b963a | ab33a22124d530e479927e3f5f5f50e9bbbb0452 | refs/heads/master | 2023-02-24T17:08:21.349084 | 2021-01-27T04:44:12 | 2021-01-27T04:44:12 | 330,289,764 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6349095702171326,
"alphanum_fraction": 0.660639762878418,
"avg_line_length": 30.25,
"blob_id": "ce66cb538e31af4845e2e151819929e67373956d",
"content_id": "d817aa78a251803e7cac1aab8333e664453cf3dc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2876,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 92,
"path": "/rfid_reader.py",
"repo_name": "malfunction54/rfid_reader",
"src_encoding": "UTF-8",
"text": "# SPDX-FileCopyrightText: 2017 Tony DiCola for Adafruit Industries\n# SPDX-FileCopyrightText: 2017 James DeVito for Adafruit Industries\n# SPDX-License-Identifier: MIT\n\n# This example is for use on (Linux) computers that are using CPython with\n# Adafruit Blinka to support CircuitPython libraries. CircuitPython does\n# not support PIL/pillow (python imaging library)!\n\nimport time\nimport asyncio\nfrom evdev import InputDevice, categorize, ecodes\n\n# connect to the RFID reader\ndev = InputDevice('/dev/input/event0')\n\nfrom board import SCL, SDA\nimport busio\nfrom PIL import Image, ImageDraw, ImageFont\nimport adafruit_ssd1306\n\n# Create the I2C interface.\ni2c = busio.I2C(SCL, SDA)\n\n# Create the SSD1306 OLED class.\n# The first two parameters are the pixel width and pixel height. Change these\n# to the right size for your display!\ndisp = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c)\n\n# Clear display.\ndisp.fill(0)\ndisp.show()\n\n# Create blank image for drawing.\n# Make sure to create image with mode '1' for 1-bit color.\nwidth = disp.width\nheight = disp.height\nimage = Image.new(\"1\", (width, height))\n\n# Get drawing object to draw on image.\ndraw = ImageDraw.Draw(image)\n\n# Draw a black filled box to clear the image.\ndraw.rectangle((0, 0, width, height), outline=0, fill=0)\n\n# Draw some shapes.\n# First define some constants to allow easy resizing of shapes.\npadding = -2\ntop = padding\nbottom = height - padding\n# Move left to right keeping track of the current x position for drawing shapes.\nx = 0\n\n\n# Load default font.\n#font = ImageFont.load_default()\n\n# Alternatively load a TTF font. 
Make sure the .ttf font file is in the\n# same directory as the python script!\n# Some other nice fonts to try: http://www.dafont.com/bitmap.php\n# font = ImageFont.truetype('/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf', 9)\nfont = ImageFont.truetype('./tt0246m_.ttf', 16)\n\n\nasync def helper(dev):\n tagId = \"\"\n tagIsDone = False\n # Draw a black filled box to clear the image.\n draw.rectangle((0, 0, width, height), outline=0, fill=0)\n \n async for ev in dev.async_read_loop():\n if tagIsDone == True:\n tagId = \"\"\n tagIsDone = False\n if ev.type == ecodes.EV_KEY and ev.value == 0: # numbers\n if ev.code <= 11:\n #print(ev.code-1)\n if ev.code == 11:\n tagId = tagId + \"0\"\n else:\n tagId = tagId + str(ev.code-1)\n if ev.code == 28: # enter key up\n print(tagId)\n # Draw a black filled box to clear the image.\n draw.rectangle((0, 0, width, height), outline=0, fill=0)\n draw.text((x, top + 0), \"Tag: \" + tagId, font=font, fill=255)\n # Display image.\n disp.image(image)\n disp.show()\n tagIsDone = True\n\nloop = asyncio.get_event_loop()\nloop.run_until_complete(helper(dev))\n\n"
}
] | 1 |
holdenk/clothes-from-code | https://github.com/holdenk/clothes-from-code | b71c15930009ce72a2ded48d7dbc507d09431e0d | 20aab5551c824bc071f9837114461679e6c8d523 | 7b9a6d15a62867a5269466dcb4e2c806c3654bad | refs/heads/master | 2022-02-22T12:44:01.159669 | 2019-09-30T18:09:46 | 2019-09-30T18:09:46 | 105,316,956 | 13 | 2 | Apache-2.0 | 2017-09-29T21:03:53 | 2019-05-12T02:31:42 | 2019-06-07T17:45:28 | Python | [
{
"alpha_fraction": 0.6602112650871277,
"alphanum_fraction": 0.6707746386528015,
"avg_line_length": 46.33333206176758,
"blob_id": "dc78d8f81a86f8d17391ae2ca495b4b583d88213",
"content_id": "24132338b9f07d0433c43fc51f8e88e58ab0a98b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 568,
"license_type": "permissive",
"max_line_length": 117,
"num_lines": 12,
"path": "/dowork.sh",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nset -ex\nmkdir /tmp/\"${DRESS_DIR}\"\n# Download the file but not too big\n(ulimit -f 2024; wget --show-progress \"${DRESS_CODE_URL}\" -P /tmp/\"${DRESS_DIR}\"/ --max-redirect 0)\nulimit -f unlimited\nINPUT_FILENAME=$(ls -1 /tmp/\"${DRESS_DIR}\"/)\necho \"Generating images\"\npython gen.py --files /tmp/\"${DRESS_DIR}\"/\"${INPUT_FILENAME}\" --out /tmp/\"${DRESS_DIR}\" --clothing \"${CLOTHING_TYPE}\"\necho \"Starting upload of images\"\npython cowcow_uploader.py --dress_name \"${DRESS_NAME}\" --dress_dir /tmp/\"${DRESS_DIR}\" --clothing \"${CLOTHING_TYPE}\"\necho \"Finished doing work.\"\n"
},
{
"alpha_fraction": 0.6376811861991882,
"alphanum_fraction": 0.7318840622901917,
"avg_line_length": 22,
"blob_id": "ac2f825a0c371b8df04f52cd5edc76275181a28d",
"content_id": "39d2000f36d523a10e8a1719c1686a52053831f0",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 138,
"license_type": "permissive",
"max_line_length": 54,
"num_lines": 6,
"path": "/entrypoint.sh",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\ncd /usr/src/app/\nMOZ_HEADLESS=1\nexport MOZ_HEADLESS\ngunicorn -w 4 -b 0.0.0.0:5000 --timeout 800 server:app\nunset MOZ_HEADLESS\n"
},
{
"alpha_fraction": 0.6400244235992432,
"alphanum_fraction": 0.6705307960510254,
"avg_line_length": 57.53571319580078,
"blob_id": "06ef6ee960a43f3d09761d424075fa47c88bc007",
"content_id": "54d8e211746a937360bfc1ef3318b1ed05fc184b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 1639,
"license_type": "permissive",
"max_line_length": 270,
"num_lines": 28,
"path": "/templates/generated.html",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "{% include \"header.html\" %}\n <title>Generating Dress</title>\n </head>\n <body>\n <p>We are now (very very slowly) generating your dress, {{dress_name}}, based on the code at {{ code_url }} :)</p>\n\n <p>Here is what is going on:</p>\n {% for row in rows %}\n <p>{{ row }}</p>\n {% endfor %}\n <p>And the dress is done (or we crashed)!\n Sadly it appears cowcow <b>rate limits</b> how often the new products show up in the listing. You may need to <b>check back tomorrow</b> to see your clothing item actually present. It's likely not lost (unless there was an error message above) and just in a queue.\n <p>\n <p>\n You can go <a href=\"https://www.cowcow.com/artist/holdensglitchcode?2855124§ion=01&sn=Uploaded\">checkout the store</a>. Your\n <a href=\"https://www.cowcow.com/shop//{{dress_name | urlencode }}?2855124&store=holdensglitchcodedress\"> should have a name {{dress_name}}</a>.\n If you don't find your {{dress_name}} bookmark <a href=\"https://www.cowcow.com/artist/holdensglitchcode?2855124§ion=01&sn=Uploaded&sort=2\">the glitch code store</a> and check back tomorrow since sometimes cowcow rate limits our product updates.\n </p>\n <p>\n But sometimes the names get a bit messed up in which case <a href=\"https://www.cowcow.com/artist/holdensglitchcode?2855124&sort=2\">you should be able to find your dress in this list</a>.\n </p>\n <!-- sketchy JS rdr -->\n <script>\n setTimeout(function() {\n window.location.href = \"https://www.cowcow.com/artist/holdensglitchcode?2855124§ion=01&sn=Uploaded\";\n }, 3500000);\n </script>\n{% include \"footer.html\" %}\n"
},
{
"alpha_fraction": 0.8478260636329651,
"alphanum_fraction": 0.8478260636329651,
"avg_line_length": 12.142857551574707,
"blob_id": "67367cfab92a7a5eee1f843070b77c5289fd4b9d",
"content_id": "8634e9c9ad91f01f1a255bd055244cf9dd55a6b0",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 92,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 7,
"path": "/requirements.txt",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "pygments\npillow\ngit+https://github.com/Kareeeeem/jpglitch\nmechanize\nselenium\nflask\ngunicorn\n"
},
{
"alpha_fraction": 0.6934782862663269,
"alphanum_fraction": 0.70652174949646,
"avg_line_length": 20.904762268066406,
"blob_id": "4939ef8b200015064ff57e5ce893bf3ffd0901df",
"content_id": "a6734244710ac13d91acf6bea6445c27e0b5d320",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 460,
"license_type": "permissive",
"max_line_length": 59,
"num_lines": 21,
"path": "/wrapwork.sh",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nCLOTHING_TYPE=$1\nexport CLOTHING_TYPE\nDRESS_DIR=$2\nexport DRESS_DIR\nDRESS_NAME=$3\nexport DRESS_NAME\nDRESS_CODE_URL=$4\nexport DRESS_CODE_URL\nif [ -z \"$DRESS_NAME\" ]; then\n echo \"No dress name specified. leaving\"\n exit 0\nfi\nif [ -z \"$DRESS_CODE_URL\" ]; then\n echo \"No dress source specified. leaving\"\n exit 0\nfi\necho \"Generating the dress\"\nunbuffer ./dowork.sh || echo \"Failed to generate the dress\"\necho \"Cleaning up\"\nrm -rf /tmp/\"${DRESS_DIR}\"\n"
},
{
"alpha_fraction": 0.7142857313156128,
"alphanum_fraction": 0.7744361162185669,
"avg_line_length": 43.27777862548828,
"blob_id": "aeb74c06ac3115366c24e12a0bbdbc996ffa4915",
"content_id": "6887996344821d83d035ab2f3b2cc1e8fcca7d9d",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 798,
"license_type": "permissive",
"max_line_length": 357,
"num_lines": 18,
"path": "/README.md",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "# clothes-from-code\nAuto generate cool code based clothing.\n\n## Requires\n\nThis depends on jpglitch and pygments.\n\n## Samples\n\nSo far we've made samples for the [gen.py generating its self](https://www.cowcow.com/self-generated-glitch-art-dress_p163262528?2855124) and [kubicorn's reconciler](https://www.cowcow.com/kubicorn-reconciler-go-glitch-art-dress_p163262529?2855124). I may post more in the [glitch code cowcow store](https://www.cowcow.com/artist/holdensglitchcode?2855124).\n\n## Development\n\nMuch of the development was live-streamed because \"why not?\" and you can look watch it at https://www.youtube.com/watch?v=nUbgxMqp27U\n\n## How to use\n\nRun gen.py and provide an input file then take the output to [cowcow](https://www.cowcow.com?2855124) and upload the individual image components.\n\n"
},
{
"alpha_fraction": 0.4520227313041687,
"alphanum_fraction": 0.5800735354423523,
"avg_line_length": 38.880001068115234,
"blob_id": "7ad62ecc5162f0f254c0af5b2db24f838ab6a165",
"content_id": "8eec986e63cb44053e989464b364014a2e5c044b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2991,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 75,
"path": "/clothing.py",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "class CowCowItem:\n \"\"\"The specification for an item of clothing or other print.\"\"\"\n def __init__(self, name, cowcowid, panels):\n self.name = name\n self.cowcowid = cowcowid\n self.panels = panels\n\n\ncowcow_items = {\n # These are the dress pieces for the dress with pockets on cowcow\n # Front(Center) : 1487 x 4796 or Higher\n # Front Left(Center) : 1053 x 4780 or Higher\n # Front Right(Center) : 1053 x 4780 or Higher\n # Back Right(Center) : 878 x 4803 or Higher\n # Sleeve Left(Center) : 1775 x 2140 or Higher\n # Pocket Right(Center) : 1067 x 704 or Higher\n # Back Left(Center) : 881 x 4818 or Higher\n # Back Rightside(Center) : 1039 x 4803 or Higher\n # Sleeve Right(Center) : 1775 x 2140 or Higher\n # Pocket Left(Center) : 1067 x 703 or Higher\n # Back Leftside(Center) : 1039 x 4803 or Higher\n \"dress_with_pockets\": CowCowItem(\"dress_with_pockets\", \"2170\", [\n (\"front\", (1487, 4796)),\n (\"front_left\", (1053, 4780)),\n (\"front_right\", (1053, 4780)),\n (\"back_right\", (878, 4803)),\n (\"sleeve_left\", (1775, 2140)),\n (\"pocket_right\", (1067, 704)),\n (\"back_left\", (881, 4818)),\n (\"back_rightside\", (1039, 4803)),\n (\"sleeve_right\", (1775, 2140)),\n (\"pocket_left\", (1067, 703)),\n (\"back_leftside\", (1039, 4803)),\n ]),\n # Boxy/\"Men's\" Basketball tank tops (no pockets...)\n # Collar(Center) : 3000 x 270 or Higher\n # Back(Center) : 2887 x 4089 or Higher\n # Front(Center) : 2792 x 3978 or Higher\n \"boxy_basketball_tank_top\": CowCowItem(\"boxy_basketball_tank_top\", \"1761\", [\n (\"collar\", (3000, 270)),\n (\"front\", (2792, 3978)),\n (\"back\", (2887, 4089)),\n ]),\n # Fitted/\"Women's\" Basketball tank tops (no pockets...)\n # Strap(Center) : 2250 x 450 or Higher\n # Front(Center) : 2625 x 3750 or Higher\n # Back(Center) : 2676 x 3750 or Higher\n \"fitted_basketball_tank_top\": CowCowItem(\"fitted_basketball_tank_top\", \"1762\", [\n (\"Strap\", (2250, 450)),\n (\"Front\", (2625, 3750)),\n (\"Back\", (2676, 3750)),\n 
]),\n # Hooded Pocket Cardigan\n # Pocket Right(Center) : 717 x 729 or Higher\n # Pocket Left(Center) : 717 x 729 or Higher\n # Front Left(Center) : 1288 x 3677 or Higher\n \"hooded_pocket_cardigan\": CowCowItem(\"hooded_pocket_cardigan\", \"2168\", [\n (\"pocket_right\", (717, 729)),\n (\"pocket_left\", (717, 729)),\n (\"front_left\", (1288, 3677)),\n ]),\n # A-Line Pocket Skirt\n # Skirt Back(Center) : 4200 x 2544 or Higher\n # Skirt Front(Center) : 4200 x 2397 or Higher\n # Waist Band(Center) : 3000 x 488 or Higher\n \"aline_pocket_skirt\": CowCowItem(\"aline_pocket_skirt\", \"1937\", [\n (\"skirt_back\", (4200, 2544)),\n (\"skirt_front\", (4200, 2397)),\n (\"waist_band\", (3000, 488)),\n ]),\n # 15 inch laptop sleeve\n \"15inch_laptop_sleeve\": CowCowItem(\"15inch_laptop_sleeve\", \"455\", [\n (\"front\", (2700, 2200)),\n ]),\n}\n"
},
{
"alpha_fraction": 0.7804877758026123,
"alphanum_fraction": 0.7804877758026123,
"avg_line_length": 40,
"blob_id": "57bc453e0dcf2f7f1a968619c16318a8993ceccb",
"content_id": "bde0b636f9e9992f2a2af19f69eda4383ff138c5",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 41,
"license_type": "permissive",
"max_line_length": 40,
"num_lines": 1,
"path": "/CONTRIBUTING.md",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "Please send pull requests, samples, etc.\n"
},
{
"alpha_fraction": 0.5919250249862671,
"alphanum_fraction": 0.6023277640342712,
"avg_line_length": 32.02381134033203,
"blob_id": "b584ea2b504a6d13cb3312df86c7395d94849bb8",
"content_id": "15d30e8a807e371a443c5177d52f630a717935c9",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9709,
"license_type": "permissive",
"max_line_length": 109,
"num_lines": 294,
"path": "/gen.py",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport random\nfrom PIL import Image, ImageStat\nimport jpglitch\nfrom pygments import highlight\nfrom pygments.lexers import guess_lexer_for_filename\nfrom pygments.formatters import JpgImageFormatter\nfrom pygments.styles import get_all_styles\nimport io\nimport errno\nimport os\nimport math\nimport sys\nfrom clothing import cowcow_items\n\n# Some sketchy global configs\ndesired_max_tiles = 100\ntile_target_width = 500\ntile_target_height = 500\ntile_variance_threshold = 500\ntile_min_max_threshold = 110\n\n\ndef highlight_file(style, filename):\n \"\"\" Hightlight a given file guessing the lexer based on the extension \"\"\"\n with open(filename) as f:\n code_txt = f.read()\n lexer = guess_lexer_for_filename(filename, code_txt)\n font_name = \"Ubuntu Mono\"\n formatter = JpgImageFormatter(font_name=font_name, style=style)\n return Image.open(io.BytesIO(highlight(code_txt, lexer, formatter)))\n\n\ndef glitch_image(image, amount_glitch, glitch_itr):\n # Note: Image warns us to use save with BytesIO instead\n # TODO: fix\n buff = io.BytesIO()\n image.save(buff, \"jpeg\")\n image_array = bytearray(buff.getvalue())\n seed = random.randint(0, 99)\n\n # We set iterations to 3 so we can verify each itr hasn't fucked\n # the world otherwise glitch_bytes could go a bit extreme.\n jpeg = jpglitch.Jpeg(image_array, amount_glitch, seed, iterations=3)\n img_bytes = jpeg.new_bytes\n # Gltich the image until max itrs, or stop if it doesn't open anymore\n for i in range(glitch_itr):\n print(\"Glitching {0}th time\".format(i))\n jpeg.glitch_bytes()\n try:\n img_bytes = jpeg.new_bytes\n except IOError:\n break\n return Image.open(io.BytesIO(img_bytes))\n\n\ndef tileify(img):\n \"\"\"\n Takes in an image and produces several smaller tiles.\n Tiles are uniform in shape but may not be uniformly cut.\n \"\"\"\n\n def contains_interesting_code(img):\n \"\"\"Returns true if the image tile contains enough variation\"\"\"\n stat = ImageStat.Stat(img)\n 
total_variance = sum(stat.var)\n min_max_diff = map(lambda x: x[1] - x[0], stat.extrema)\n min_max_diff_sum = sum(min_max_diff)\n print(\n \"Got variance {0} from {1} min_max_dif {2} from {3}\".format(\n total_variance, stat.var, min_max_diff_sum, min_max_diff\n )\n )\n return (\n total_variance > tile_variance_threshold\n and min_max_diff_sum > tile_min_max_threshold\n )\n\n offset_range = 50\n # crop takes (left, upper, right, lower)-tuple.\n source_size = img.size\n print(\"Source image size {0}\".format(source_size))\n # Construct a list of possible areas of the input image\n # we want to turn into tiles\n candidate_coordinates = []\n # Generate tiles iteratively with some skew\n for w in range(int(math.ceil(source_size[0] / tile_target_width))):\n for h in range(int(math.ceil(source_size[1] / tile_target_height))):\n x_offset = min(\n source_size[0] - tile_target_width,\n (\n random.randint(0, offset_range)\n + source_size[0]\n - ((w + 1) * tile_target_width)\n ),\n )\n y_offset = min(\n source_size[1] - tile_target_height,\n (\n random.randint(0, offset_range)\n + source_size[1]\n - ((h + 1) * tile_target_height)\n ),\n )\n print(\n \"Generating image for ({0},{1}) with offsets ({2},{3})\".format(\n w, h, x_offset, y_offset\n )\n )\n x0 = x_offset + (w * tile_target_width)\n y0 = y_offset + (h * tile_target_height)\n x1 = x_offset + ((w + 1) * tile_target_width)\n y1 = y_offset + ((h + 1) * tile_target_height)\n candidate_coordinates.append((x0, y0, x1, y1))\n # Generate 10 random tiles from \"somewhere\" in the image\n num_tiles_random = 10\n for _ in range(num_tiles_random):\n x0 = random.randint(0, source_size[0] - tile_target_width)\n y0 = random.randint(0, source_size[1] - tile_target_height)\n\n x1 = x0 + tile_target_width\n y1 = y0 + tile_target_height\n\n candidate_coordinates.append((x0, y0, x1, y1))\n # Take the coordinates and turn them into tile images, filtering out\n # antyhing which doesn't have any variation inside of it\n tiles = []\n for 
coordinates in candidate_coordinates:\n cropped_image = img.crop(coordinates)\n if contains_interesting_code(cropped_image):\n tiles.append(cropped_image)\n\n # If we have a lot of potential tiles generatinge everything is going to be slow\n num_tiles_to_return = min(desired_max_tiles, len(tiles))\n # Sampled without replacement yay\n return random.sample(tiles, num_tiles_to_return)\n\n\ndef build_tiles(filenames, style, amount_glitch, glitch_itr):\n \"\"\" Builds the tiles which we will assembly into a dress \"\"\"\n # Highlight all of our inputs\n highlighted = map(lambda filename: highlight_file(style, filename), filenames)\n # Take the inputs and chop them up into tiles of a consistent size but semi random\n # locations\n ocropped = map(tileify, highlighted)\n print(\"cropped...\")\n cropped = []\n for c in ocropped:\n for i in c:\n cropped.append(i)\n print(\"compacted to {0}\".format(cropped))\n glitched_tiled = list(\n map(lambda img: glitch_image(img, amount_glitch, glitch_itr), cropped)\n )\n return (highlighted, cropped, glitched_tiled)\n\n\ndef build_image(\n filenames,\n style=\"paraiso-dark\",\n amount_glitch=75,\n glitch_itr=6,\n percent_original=10,\n clothing=\"dress_with_pockets\",\n):\n (highlighted, cropped, glitched_tiled) = build_tiles(\n filenames, style, amount_glitch, glitch_itr\n )\n num_tiles = len(cropped)\n\n def random_tile():\n tile_idx = random.randint(0, num_tiles - 1)\n if random.randint(0, 100) < percent_original:\n return cropped[tile_idx]\n else:\n return glitched_tiled[tile_idx]\n\n def make_piece(name_dim):\n \"\"\"Make some glitched code combined for some specific dimensions\"\"\"\n dim = name_dim[1]\n img = Image.new(\"RGB\", dim)\n for i in range(0, dim[0], tile_target_width):\n for j in range(0, dim[1], tile_target_height):\n # Some tiles are bad, lets get another tile\n try:\n img.paste(random_tile(), (i, j))\n except IOError:\n img.paste(random_tile(), (i, j))\n return (name_dim[0], img)\n\n pieces = map(make_piece, 
cowcow_items[clothing].panels)\n\n return (pieces, highlighted, cropped, glitched_tiled)\n\n\ndef make_if_needed(target_dir):\n \"\"\"Make a directory if it does not exist\"\"\"\n try:\n os.mkdir(target_dir)\n except OSError as exc:\n if exc.errno != errno.EEXIST:\n raise\n pass\n\n\ndef save_imgs(target_dir, imgs, ext):\n idx = 0\n make_if_needed(target_dir)\n for img in imgs:\n idx = idx + 1\n filename = \"{0}/{1}.{2}\".format(target_dir, idx, ext)\n if type(img) is tuple:\n filename = \"{0}/{1}.{2}\".format(target_dir, img[0], ext)\n img[1].save(filename)\n else:\n img.save(filename)\n\n\ndef get_profiles():\n return cowcow_items.keys()\n\n\ndef list_profiles():\n print(\"The following clothing items are available:\")\n for profile in get_profiles():\n print(profile)\n\n\ndef list_styles():\n print(\"The following styles are available:\")\n for style in list(get_all_styles()):\n print(style)\n\n\nif __name__ == \"__main__\":\n import argparse\n\n parser = argparse.ArgumentParser(description=\"Process some code\")\n parser.add_argument(\n \"--files\", type=str, default=[\"gen.py\"], nargs=\"*\", help=\"file names to process\"\n )\n parser.add_argument(\n \"--output\", type=str, default=\"out\", nargs=\"?\", help=\"output directory\"\n )\n parser.add_argument(\n \"--extension\", type=str, default=\"png\", nargs=\"?\", help=\"output extension\"\n )\n parser.add_argument(\n \"--clothing\",\n type=str,\n default=\"dress_with_pockets\",\n nargs=\"?\",\n help=\"Clothing item to generate images for (see all available profiles with --list-clothing)\",\n )\n parser.add_argument(\n \"--list-clothing\",\n dest=\"list_clothing\",\n action=\"store_true\",\n help=\"List all available clothing profiles and exit.\",\n )\n parser.add_argument(\n \"--style\",\n type=str,\n default=\"paraiso-dark\",\n nargs=\"?\",\n help=\"The pygments style to use for the colour scheme (see all available styles with --list-styles)\",\n )\n parser.add_argument(\n \"--list-styles\",\n 
dest=\"list_styles\",\n action=\"store_true\",\n help=\"List all available style names.\",\n )\n\n args = parser.parse_args()\n\n if args.list_clothing:\n # The user just wants a list of profiles, let's print that and exit.\n list_profiles()\n\n if args.list_styles:\n # Again, the user just wants a list of styles, let's print that.\n list_styles()\n\n if args.list_styles or args.list_clothing:\n sys.exit(0)\n\n make_if_needed(args.output)\n print(\"Making the images in memory\")\n (processed, highlighted, cropped, glitched_tiled) = build_image(\n args.files, clothing=args.clothing, style=args.style\n )\n print(\"Saving the images to disk\")\n save_imgs(args.output + \"/processed\", processed, args.extension)\n"
},
{
"alpha_fraction": 0.6053744554519653,
"alphanum_fraction": 0.6135588884353638,
"avg_line_length": 35.654998779296875,
"blob_id": "348b1ceebda3acc200976f742ff78453b0a35ce3",
"content_id": "2fa89c91cda1620c1eaf23bacd72f5f34ed715b3",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7331,
"license_type": "permissive",
"max_line_length": 102,
"num_lines": 200,
"path": "/cowcow_uploader.py",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "import argparse\nimport re\nfrom cowcowsecrets import *\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nimport http.cookiejar as cookielib\nimport random\nimport string\nimport mechanize\nfrom clothing import cowcow_items\n\nimport time\n\ncj = cookielib.LWPCookieJar()\n\n\ntarget_album = \"https://www.cowcow.com/Member/FileManager.aspx?folder=5788770&album=1\"\nfile_manager = \"https://www.cowcow.com/Member/FileManager.aspx\"\nlogin_url = \"https://www.cowcow.com/Login.aspx?Return=%2fMember%2fFileManager.aspx\"\nbulk_product_url = \"https://www.cowcow.com/Stores/StoreBulkProduct.aspx?StoreId=264507&SectionCode=\"\n\n\ndef construct_br(driver):\n br = load_cookie_or_login(driver)\n br.set_handle_robots(False)\n return br\n\n\ndef load_cookie_or_login(driver):\n # Todo: cache cookies\n return do_login(driver, username, password)\n\n\ndef do_login(driver, username, password):\n \"\"\" Login to cowcow \"\"\"\n print(\"Logging into cowcow to upload\")\n driver.get(login_url)\n # Give firefox a few cycles to find the driver title\n c = 0\n while \"Login\" not in driver.title and c < 10:\n print(\"Driver title is: {0}\".format(driver.title))\n time.sleep(1)\n c = c+1\n assert \"Login\" in driver.title\n username_elem = driver.find_element_by_name(\"tbEmail\")\n username_elem.clear()\n username_elem.send_keys(username)\n password_elem = driver.find_element_by_name(\"tbPassword\")\n password_elem.clear()\n password_elem.send_keys(password)\n password_elem.send_keys(Keys.RETURN)\n time.sleep(5)\n assert \"Login\" not in driver.title\n # Grab the cookie\n cookie = driver.get_cookies()\n\n # Store it in the cookie jar\n cj = cookielib.LWPCookieJar()\n\n for s_cookie in cookie:\n cj.set_cookie(cookielib.Cookie(\n version=0,\n name=s_cookie['name'],\n value=s_cookie['value'],\n port='80', port_specified=False, domain=s_cookie['domain'],\n domain_specified=True, domain_initial_dot=False, path=s_cookie['path'],\n path_specified=True, 
secure=s_cookie['secure'], expires=None,\n discard=False, comment=None, comment_url=None, rest=None, rfc2109=False))\n\n # cj.save(\"cookies.txt\") -- this fails idk why\n # Instantiate a Browser and set the cookies\n br = mechanize.Browser()\n br.set_cookiejar(cj)\n return br\n\n\ndef upload_imgs(imgs):\n \"\"\" Upload the images to cowcow \"\"\"\n print(\"Uploading...\")\n for img in imgs:\n print(\"Fetching file manager...\")\n time.sleep(2)\n response = br.open(file_manager)\n # Manually specify the form\n br.form = mechanize.HTMLForm(\n 'https://www.cowcow.com/AjaxUpload.ashx',\n method='POST', enctype='multipart/form-data')\n br.form.new_control('file', \"files[]\", {'id': 'fileupload'})\n br.form.new_control('submit', 'Button', {})\n br.form.set_all_readonly(False)\n br.form.fixup()\n file_controller = br.find_control(id=\"fileupload\", name=\"files[]\")\n print(\"Adding img {0}\".format(img))\n filename = re.sub(\"/\", \"_\", img)\n with open(img, 'rb') as img_handle:\n try:\n print(\"Opened img {0}\".format(img))\n file_controller.add_file(img_handle, \"image/png\", filename)\n print(\"Added img to controller\")\n br.submit()\n print(\"Submitted form\".format(img))\n except Error as e:\n print(\"Sad :(\")\n print(\"Error {0} adding img {1}\".format(e, img))\n raise\n\n\ndef upload_dress_imgs(br, clothing_name, dress_output_directory):\n def create_absolute_filename(f):\n return \"{0}/processed/{1}.png\".format(\n dress_output_directory,\n f)\n dress_filenames = map(lambda x: x[0], cowcow_items[clothing_name].panels)\n imgs = map(create_absolute_filename, dress_filenames)\n print(\"Uploading the images.\")\n return upload_imgs(imgs)\n\n\ndef create_dress(driver, clothing_name, dress_output_directory, dress_name):\n def filename_to_cowcow(f):\n cowcowname = \"{0}_processed_{1}.png({2})\".format(\n dress_output_directory,\n f,\n # Our filenames are the names of the cowcow pieces but with _s, so strip them\n re.sub(\"_\", \"\", f)\n )\n # Any /s are replaced 
with _s so we play nice with the file manager\n cowcowname = re.sub(\"/\", \"_\", cowcowname)\n return cowcowname\n\n dress_filenames = map(lambda x: x[0], cowcow_items[clothing_name].panels)\n cowcow_img_specs = map(filename_to_cowcow, dress_filenames)\n cowcow_img_spec = \" | \".join(cowcow_img_specs)\n cowcow_product_id = cowcow_items[clothing_name].cowcowid\n section_code = \"01\"\n unique_product_code = dress_output_directory + \\\n ''.join(random.choices(string.ascii_uppercase + string.digits, k=10))\n cowcow_product_spec = \"{0}, {1}, {2}, {3}, {4}\".format(\n cowcow_img_spec,\n cowcow_product_id,\n unique_product_code,\n section_code,\n dress_name)\n def configure_product():\n result = br.open(bulk_product_url)\n print(\"Creating the product entry\")\n driver.get(bulk_product_url)\n print(\"Sending in {0}\".format(cowcow_product_spec))\n textElem = driver.find_element_by_name(\"ctl00$cphMain$tbBulkAddProduct\")\n textElem.clear()\n textElem.send_keys(cowcow_product_spec)\n updateElem = driver.find_element_by_id(\"ctl00_cphMain_ebImport\")\n updateElem.click()\n # Kind of a hack, but the image should upload within 10\n time.sleep(10)\n configure_product()\n\n\nif __name__ == \"__main__\":\n print(\"Hi! I'm your friendly cowcow uploader. I am slow. 
Please waite.\")\n parser = argparse.ArgumentParser(description='Upload a dress to cowcow')\n parser.add_argument('--dress_name', type=str,\n nargs=\"?\",\n help=\"name of the dress\")\n parser.add_argument('--dress_dir', type=str,\n nargs=\"?\",\n help=\"directory where the dress files all live\")\n parser.add_argument(\n \"--clothing\",\n type=str,\n default=\"dress_with_pockets\",\n nargs=\"?\",\n help=\"Clothing item to generate images for (see all available profiles with --list-clothing)\",\n )\n parser.add_argument('--foreground-gecko',\n dest=\"foreground_gecko\",\n action=\"store_true\",\n help=\"run the driver in the foreground for debugging\")\n args = parser.parse_args()\n print(\"Args parsed.\")\n try:\n options = webdriver.ChromeOptions()\n\n if not args.foreground_gecko:\n options.headless = True\n options.add_argument(\"--headless\")\n options.add_argument(\"--no-sandbox\")\n\n print(\"Launching driver....\")\n driver = webdriver.Chrome(options=options)\n\n br = construct_br(driver)\n print(\"Uploading the dress....\")\n upload_dress_imgs(br, args.clothing, args.dress_dir)\n print(\"Creating the dress....\")\n create_dress(driver, args.clothing, args.dress_dir, args.dress_name)\n print(\"Finished talking to cowcow...\")\n finally:\n print(\"Cleaning up the driver\")\n driver.close()\n"
},
{
"alpha_fraction": 0.5855206251144409,
"alphanum_fraction": 0.5927314758300781,
"avg_line_length": 30.234233856201172,
"blob_id": "ce80dda9bfa31acb15d05b1b32f48cf6c46f0e9f",
"content_id": "f37c465fda7727bd0574a964e7d8e2f57fa08bb6",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3467,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 111,
"path": "/server.py",
"repo_name": "holdenk/clothes-from-code",
"src_encoding": "UTF-8",
"text": "from flask import Flask, Response, request, render_template, send_from_directory\nimport re\nimport subprocess\nimport urllib\nfrom markupsafe import Markup\nfrom gen import get_profiles\n\napp = Flask(__name__)\n\[email protected]('/')\ndef index():\n clothing_items = get_profiles()\n return render_template(\"index.html\", clothing_items=clothing_items)\n\n\[email protected]('/favicon.ico')\ndef favicon():\n return send_from_directory(\"static\", \"favicon.ico\")\n\n\[email protected]('/js/<path:path>')\ndef send_js(path):\n return send_from_directory('static/js', path)\n\n\[email protected]('/imgs/<path:path>')\ndef send_imgs(path):\n return send_from_directory('static/imgs', path)\n\n\ndef stream_template(template_name, **context):\n app.update_template_context(context)\n t = app.jinja_env.get_template(template_name)\n rv = t.stream(context)\n rv.disable_buffering()\n return rv\n\n\nbad_regex = re.compile(\n \"^http(s|)://(www.|)github.com/(.*?/.*?)/blob/(.*)$\", re.IGNORECASE)\n\n\ndef handle_non_raw_code_urls(code_url):\n \"\"\"Some people will give us links to the non raw view\"\"\"\n match = bad_regex.match(code_url)\n if match is None:\n return code_url\n else:\n return \"https://raw.githubusercontent.com/{0}/{1}\".format(\n match.group(3), match.group(4))\n\n\ngh_raw_re = re.compile(\n \"^https://raw.githubusercontent.com/(.*?)/(.*?)/.*/(.*?)$\")\nfile_domain_re = re.compile(\"^https://(.*?)/.*/(.*?)$\")\n\n\ndef extract_dress_name(code_url, clothing_type):\n \"\"\"Try and turn a URL into a dress name\"\"\"\n match = gh_raw_re.match(code_url)\n if match is None:\n match = file_domain_re.match(code_url)\n if match is None:\n return re.sub(\"^.*//.*/\", \" \", code_url)\n else:\n return match.group(1) + \"'s \" + match.group(2) + \" glitch code dress\"\n else:\n # Some folks have the group and repo name as the same\n if (match.group(1) != match.group(2) and\n not match.group(2).startswith(match.group(1))):\n return match.group(1) + \" \" + 
match.group(2) + \"'s \" + match.group(3) + \" glitch code \" + clothing_type\n else:\n return match.group(2) + \"'s \" + match.group(3) + \" glitch code \" + clothing_type\n\n\ndef clean_name(name):\n return re.sub(\"\\.\", \"-\", name)[0:200]\n\n\[email protected]_filter('urlencode')\ndef urlencode_filter(s):\n if type(s) == 'Markup':\n s = s.unescape()\n s = s.encode('utf8')\n s = urllib.parse.quote(s)\n return Markup(s)\n\n\[email protected]('/generate_dress', methods=[\"POST\"])\ndef generate_dress():\n if request.form[\"url\"] is None or len(request.form[\"url\"]) == 0:\n return send_from_directory(\"static\", \"missing_url.html\")\n else:\n requested_code_url = request.form[\"url\"]\n clothing_type = request.form[\"clothing_type\"] or \"dress_with_pockets\"\n code_url = handle_non_raw_code_urls(requested_code_url)\n dress_name = clean_name(extract_dress_name(code_url, clothing_type))\n dress_dir = re.sub(\"[^a-zA-Z]\", \"_\", dress_name)[0:10]\n proc = subprocess.Popen(\n [\"./wrapwork.sh\",\n clothing_type,\n dress_dir,\n dress_name,\n code_url],\n stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False, bufsize=1)\n\n return Response(\n stream_template('generated.html',\n dress_name=dress_name,\n code_url=code_url,\n rows=proc.stdout))\n"
}
] | 11 |
schwukas/rwv-de-mv-lobbi | https://github.com/schwukas/rwv-de-mv-lobbi | 2cef1283b573a50f47ca799e62c379675735010b | 506786ea00feebccdb625ba60047735162cf2acc | d4b6c29379325db175babab221a182c61268417d | refs/heads/master | 2022-12-13T12:40:06.568610 | 2019-02-23T20:46:02 | 2019-02-23T20:46:02 | 172,203,932 | 0 | 0 | null | 2019-02-23T10:59:47 | 2019-02-23T20:46:17 | 2022-12-08T01:38:07 | Python | [
{
"alpha_fraction": 0.5421034693717957,
"alphanum_fraction": 0.5590125322341919,
"avg_line_length": 29.17346954345703,
"blob_id": "b209f4230b6363c97d224d91454ab8976c51ed95",
"content_id": "333d413ce79678ccda42c9db0668dd1175b07e12",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2957,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 98,
"path": "/scraper.py",
"repo_name": "schwukas/rwv-de-mv-lobbi",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport os\nos.environ[\"SCRAPERWIKI_DATABASE_NAME\"] = \"sqlite:///data.sqlite\"\n\nimport requests\nimport re\nimport scraperwiki\nimport ftfy\n\nfrom bs4 import BeautifulSoup as bs\n\n\n# The internet archive has data without broken encoding. Use this for years\n# 2001-2013 (inclusive)\nARCHIVE_URL = \"https://web.archive.org/web/20150117032248/http://www.lobbi-mv.de:80/chronik/\"\nBASE_URL = \"https://www.lobbi-mv.de/chronik-rechter-gewalt/\"\nBROKEN_YEARS = range(2002, 2014) # excluding 2014\n\n\ndef _make_soup(url, encoding):\n \"\"\"Return a beautiful soup object generated from the given url.\n \"\"\"\n r = requests.get(BASE_URL)\n r.encoding = encoding\n page = r.text\n page = ftfy.fix_text(page)\n return bs(page, \"lxml\")\n\n\nsoup = _make_soup(BASE_URL, \"utf-8\")\n\n# Get all currently available years.\nyears = list()\nfor ul in soup.find_all(\"ul\", class_=\"tabNavigation\"):\n for year in ul.find_all(\"li\"):\n years.append(int(year.get_text()))\n\n\nfor year in years:\n if year in BROKEN_YEARS:\n soup = _make_soup(ARCHIVE_URL, \"latin-1\")\n\n report_container = soup.find(id=year)\n reports = report_container.find_all(\"div\")\n\n for report in reports:\n # Extract the tags with special information.\n landkreis = report.find(\"span\", class_=\"small\").extract().get_text()\n # Remove 'Landkreis' from the location.\n landkreis = landkreis.split(\" \")[-1]\n landkreis = re.sub(r\"[()]\", \"\", landkreis)\n\n source = report.find(\"p\").find(\"span\", class_=\"small\").extract().get_text()\n source = source.replace(\"Quelle: \", \"\").strip()\n\n # Now parse the remainders.\n report_body = report.get_text().split(\"\\n\")\n\n date_and_location = report_body[0].split(\"-\")\n start_date = date_and_location[0].strip()\n description = report_body[1]\n\n city = date_and_location[1].strip()\n locations = landkreis + \", \" + city\n\n # No unique identifier. 
Instead use a combination of the below.\n uri = city + \"_\" + start_date + \"_\" + \"DE-MV\"\n\n scraperwiki.sqlite.save(\n unique_keys=[\"uri\"],\n data={\"uri\": uri,\n \"title\": \"\",\n \"description\": description,\n \"startDate\": start_date,\n \"endDate\": \"\",\n \"iso_3166_2\": \"DE-MV\"},\n table_name=\"data\"\n )\n\n scraperwiki.sqlite.save(\n unique_keys=[\"reportURI\"],\n data={\"reportURI\": uri,\n \"subdivisons\": locations},\n table_name=\"locations\"\n )\n\n for s in source.split(\",\"):\n source = re.sub(\"- \", \"-\", s)\n\n scraperwiki.sqlite.save(\n unique_keys=[\"reportURI\"],\n data={\"reportURI\": uri,\n \"name\": source.strip(),\n \"published_date\": \"\",\n \"url\": \"\"},\n table_name=\"sources\"\n )\n"
},
{
"alpha_fraction": 0.5662482380867004,
"alphanum_fraction": 0.7182705998420715,
"avg_line_length": 19.485713958740234,
"blob_id": "b24f38bf2b83bd05e791867cc0e3caa7ebd9e913",
"content_id": "5e5f95aab37f88dd9849c272623a1fe4fa971e15",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 717,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 35,
"path": "/requirements.txt",
"repo_name": "schwukas/rwv-de-mv-lobbi",
"src_encoding": "UTF-8",
"text": "# It's easy to add more libraries or choose different versions. Any libraries\n# specified here will be installed and made available to your morph.io scraper.\n# Find out more: https://morph.io/documentation/python\n\nalembic==1.0.7\nappdirs==1.4.3\nbeautifulsoup4==4.7.1\nbs4==0.0.1\ncertifi==2018.11.29\nchardet==3.0.4\ncssselect==1.0.3\nfake-useragent==0.1.11\nftfy==5.5.1\nidna==2.8\nlxml==4.3.1\nMako==1.0.7\nMarkupSafe==1.1.0\nparse==1.11.1\npyee==5.0.0\npyppeteer==0.0.25\npyquery==1.4.0\npython-dateutil==2.8.0\npython-editor==1.0.4\nrequests==2.21.0\nrequests-html==0.10.0\nscraperwiki==0.5.1\nsix==1.12.0\nsoupsieve==1.8\nSQLAlchemy==1.2.18\ntqdm==4.31.1\nurllib3==1.24.1\nw3lib==1.20.0\nwcwidth==0.1.7\nwebencodings==0.5.1\nwebsockets==7.0\n"
}
] | 2 |
meranjeet/pythondsa | https://github.com/meranjeet/pythondsa | bb2a90d845c1ea0229fe67a67b076192f9f0b8cd | 81a2ea540aa33dc14c906f43e7d7731c1e9caf25 | 1d5568c5d9db7e5f8160ab0116634de7a23f157a | refs/heads/master | 2022-08-24T08:05:52.594520 | 2020-05-18T04:58:27 | 2020-05-18T04:58:27 | 264,785,578 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6485148668289185,
"alphanum_fraction": 0.6584158539772034,
"avg_line_length": 29.230770111083984,
"blob_id": "8e93e39f053191bfd8cb5b96bd34b3a54a557272",
"content_id": "b83ef78362eaf1eaca79fc29440768753697cea6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 404,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 13,
"path": "/unit_test.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "import random\r\nimport sys\r\nimport os\r\n\r\ntests=int(sys.argv[1])\r\nn=int(sys.argv[2])\r\n\r\nfor i in range(tests):\r\n print('Test #'+str(i))\r\n os.system(\"python randomlist.py \"+str(n)+\" > inputpair.txt\")\r\n os.system(\"more inputpair.txt >> all.csv\")\r\n os.system(\"python pairproduct.py < inputpair.txt >> pairproduct.csv\")\r\n os.system(\"python pairproduct_1.py < inputpair.txt >> pairproduct_1.csv\")"
},
{
"alpha_fraction": 0.3658536672592163,
"alphanum_fraction": 0.4243902564048767,
"avg_line_length": 19.578947067260742,
"blob_id": "0b13959447e5acec82ec6b92c4745cf267921e33",
"content_id": "6c68bcf90c21ed4988128df20df5574eabf12cf2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 19,
"path": "/last_digit_of_fibonacci_number.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "def last_digit_of_fibonacci_number(n):\r\n assert 0 <= n <= 10 ** 7\r\n\r\n if n==0:\r\n return 0\r\n if n==1:\r\n return 1\r\n else:\r\n l=[0]*(n+1)\r\n l[0]=0\r\n l[1]=1\r\n for i in range(2,n+1):\r\n l[i]=(l[i-1]%10)+(l[i-2]%10)\r\n return l[n]%10\r\n\r\n\r\nif __name__ == '__main__':\r\n input_n = int(input())\r\n print(last_digit_of_fibonacci_number(input_n))\r\n"
},
{
"alpha_fraction": 0.5972222089767456,
"alphanum_fraction": 0.625,
"avg_line_length": 10.333333015441895,
"blob_id": "1cbe494672f0b5a8c08fdeedb96eb31a182f9ad6",
"content_id": "cb067ef0102b7ea65e8dce99e2e36586905fc76c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 72,
"license_type": "no_license",
"max_line_length": 18,
"num_lines": 6,
"path": "/input_test.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "import sys\r\n\r\nn=int(sys.argv[1])\r\ni=int(sys.argv[2])\r\nprint(n)\r\nprint(i)"
},
{
"alpha_fraction": 0.3828425109386444,
"alphanum_fraction": 0.412291944026947,
"avg_line_length": 23.19354820251465,
"blob_id": "4f404ac23ea4f129d976764c602942317cfebb2c",
"content_id": "08897823c4ce64f12046f6f1e78682baa8426828",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 781,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 31,
"path": "/fna.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "def fibonacci_number_again(n, m):\r\n assert 0 <= n <= 10 ** 18 and 2 <= m <= 10 ** 3\r\n\r\n def F(n):\r\n if n==0:\r\n return 0\r\n if n==1:\r\n return 1\r\n else:\r\n a=0\r\n b=1\r\n for i in range(2,n+1):\r\n c=a+b\r\n a=b\r\n b=c\r\n return b\r\n\r\n def pisanoPeriod(m):\r\n previous, current = 0, 1\r\n for i in range(0, m * m):\r\n previous, current = current, (previous + current) % m\r\n if (previous == 0 and current == 1):\r\n return i + 1\r\n\r\n c=n%pisanoPeriod(m)\r\n return F(c)%m\r\n\r\n\r\nif __name__ == '__main__':\r\n input_n, input_m = map(int, input().split())\r\n print(fibonacci_number_again(input_n, input_m))\r\n"
},
{
"alpha_fraction": 0.5925925970077515,
"alphanum_fraction": 0.6185185313224792,
"avg_line_length": 18.923076629638672,
"blob_id": "3c9e287d49a546e26c338e220fe7e9f4e872d284",
"content_id": "3c9a966f99b5b68578b28cbb336597eab5e048cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 270,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 13,
"path": "/test.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "import sys\r\nimport random\r\n\r\n#n = int(sys.argv[1])\r\n#myseed = int(sys.argv[2])\r\n\r\n#i=int(input())\r\n#input_numbers = [int(x) for x in input().split()]\r\nprint('Ranjeet')\r\n\r\n#random.seed(myseed)\r\n#print(n)\r\n#print('-'.join([str(random.randint(1,1000)) for i in range(n)]))"
},
{
"alpha_fraction": 0.5681818127632141,
"alphanum_fraction": 0.6363636255264282,
"avg_line_length": 16.85714340209961,
"blob_id": "6212eb7fb0dfc6387a4fbc7435c69e634f37ba47",
"content_id": "2d0b69f1b83d5431c97372d20c04691f88662dfe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 132,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 7,
"path": "/randomlist.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "import random\r\nimport sys\r\n\r\n\r\ns=int(sys.argv[1])\r\nprint(s)\r\nprint(' '.join([str(random.randint(-1000,1000)) for i in range(s)]))\r\n"
},
{
"alpha_fraction": 0.5386996865272522,
"alphanum_fraction": 0.5448916554450989,
"avg_line_length": 27.545454025268555,
"blob_id": "8d230b6e7875049cf31b5936473812c2ed51ab1d",
"content_id": "0103aa38cec315b2976a71087f3db31f14578fd0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 323,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 11,
"path": "/maxpairproduct.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "def pairproduct(n,s):\r\n s.sort()\r\n return s[len(s)-1]*s[len(s)-2]\r\n\r\nimport time\r\nif __name__ == '__main__':\r\n i=int(input())\r\n input_numbers = [int(x) for x in input().split()]\r\n start_time = time.time()\r\n print(pairproduct(i,input_numbers))\r\n #print(\"--- %s seconds ---\" % (time.time() - start_time))"
},
{
"alpha_fraction": 0.4933726191520691,
"alphanum_fraction": 0.5125184059143066,
"avg_line_length": 25.239999771118164,
"blob_id": "e77aadfe7a5b3b94b7b17522cd03abd55b177b87",
"content_id": "08b87d1e68a995931cc473b4c0116027d5db962f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 679,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 25,
"path": "/pairproduct_1.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "def MaxPairwiseProductFast(n,s):\r\n max_index1 = -1\r\n for i in range(0,n):\r\n if s[i] > s[max_index1]:\r\n max_index1 = i\r\n else:\r\n continue\r\n\r\n max_index2 = -1\r\n for j in range(0,n):\r\n if s[j] > s[max_index2] and s[j] != s[max_index1]:\r\n max_index2 = j\r\n else:\r\n continue\r\n\r\n Product = s[max_index1]*s[max_index2]\r\n return Product\r\n\r\nimport time\r\nif __name__ == '__main__':\r\n i=int(input())\r\n input_numbers = [int(x) for x in input().split()]\r\n start_time = time.time()\r\n print(MaxPairwiseProductFast(i,input_numbers))\r\n print(\"--- %s seconds ---\" % (time.time() - start_time))"
},
{
"alpha_fraction": 0.35393258929252625,
"alphanum_fraction": 0.3848314583301544,
"avg_line_length": 15.800000190734863,
"blob_id": "31563fd069c0c782123478d652ac9f9bb88f4669",
"content_id": "f32b804c79959f6581695f6ade361ce4ea0b82f2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 356,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 20,
"path": "/fibonacci.py",
"repo_name": "meranjeet/pythondsa",
"src_encoding": "UTF-8",
"text": "def fibonacci_number(n):\r\n #assert 0 <= n <= 45\r\n\r\n if n==0:\r\n return 0\r\n if n==1:\r\n return 1\r\n else:\r\n a=0\r\n b=1\r\n for i in range(2,n+1):\r\n c=a+b\r\n a=b\r\n b=c\r\n return b\r\n\r\n\r\nif __name__ == '__main__':\r\n input_n = int(input())\r\n print(fibonacci_number(input_n))\r\n"
}
] | 9 |
jansusea/kddz | https://github.com/jansusea/kddz | 7d82d4a5d9e20892178aefb062a567cb349b2a7b | 578a79666ef656e9dca8c541942a7ed7b5419590 | a4ac230d0994096dbbc1094b2de6f3caaccdcf34 | refs/heads/master | 2021-08-19T15:48:42.293739 | 2020-08-09T12:05:03 | 2020-08-09T12:05:03 | 213,426,738 | 0 | 0 | null | 2019-10-07T16:00:04 | 2019-09-15T12:24:47 | 2019-08-04T12:15:36 | null | [
{
"alpha_fraction": 0.32211101055145264,
"alphanum_fraction": 0.32711556553840637,
"avg_line_length": 50.380950927734375,
"blob_id": "db0fd255632fa7b8d07aa99817bc228997d3ea00",
"content_id": "6e342561f93a534b76ac2b3ae9a6b767f2da38bc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 2228,
"license_type": "permissive",
"max_line_length": 147,
"num_lines": 42,
"path": "/kucun/templates/out_in.html",
"repo_name": "jansusea/kddz",
"src_encoding": "UTF-8",
"text": "{% extends \"base.html\" %}\r\n{% block title %}{{ title }}{% endblock %}\r\n{% block content %}\r\n <div class=\"col-md-12\">\r\n <div class=\"panel-group\" id=\"accordion\">\r\n {% for every_day_sell_record in every_day_sell_records %}\r\n <div class=\"panel panel-default\">\r\n <div class=\"panel-heading\">\r\n <h4 class=\"panel-title\">\r\n <a data-toggle=\"collapse\" data-parent=\"#accordion\" href=\"#collapse{{ forloop.counter0 }}\">\r\n {{ every_day_sell_record.date|date:\"Y年n月j日\" }}\r\n </a>\r\n </h4>\r\n </div>\r\n <div id=\"collapse{{ forloop.counter0 }}\"\r\n class=\"panel-collapse collapse {% if forloop.counter0 == 0 %}in{% endif %}\">\r\n <div class=\"panel-body\">\r\n <table class=\"table\">\r\n <tr>\r\n <th>商品名</th>\r\n <th>数量</th>\r\n <th>操作时间</th>\r\n <th>操作人</th>\r\n </tr>\r\n {% for goods_record in every_day_sell_record.records %}\r\n <tr class=\"{% if goods_record.change_num > 0 %}success{% elif goods_record.change_num < 0 %}danger{% endif %}\">\r\n <td>{{ goods_record.goods }}</td>\r\n <td>\r\n {% if goods_record.change_num > 0 %}+{% endif %}{{ goods_record.change_num }}\r\n </td>\r\n <td>{{ goods_record.date|date:\"H:i\" }}</td>\r\n <td>{{ goods_record.updater }}</td>\r\n </tr>\r\n {% endfor %}\r\n </table>\r\n </div>\r\n </div>\r\n </div>\r\n {% endfor %}\r\n </div>\r\n </div>\r\n{% endblock %}"
},
{
"alpha_fraction": 0.6702004075050354,
"alphanum_fraction": 0.6767354011535645,
"avg_line_length": 31.485849380493164,
"blob_id": "1d37824ff345b449cfc366f8dd17ac615635b5ff",
"content_id": "afff16d938e86870159f89fecb96b52c8dc4561f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6916,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 212,
"path": "/kucun/models.py",
"repo_name": "jansusea/kddz",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom django.db import models\nfrom django.contrib.auth.models import User\n\n\nclass Goods(models.Model):\n goods_name = models.CharField(max_length=15)\n average_price = models.FloatField()\n last_price = models.FloatField()\n add_people = models.ForeignKey(User)\n update_date = models.DateField(auto_now_add=True)\n recent_sell = models.DateField(blank=True, null=True)\n is_delete = models.BooleanField(default=False)\n\n def __unicode__(self): # Python 3: def __str__(self):\n return self.goods_name\n\n\nclass Shop(models.Model):\n name = models.CharField(max_length=10)\n principal = models.ForeignKey(User) # 负责人\n\n def __unicode__(self): # Python 3: def __str__(self):\n return self.name\n\n\nclass GoodsShop(models.Model):\n goods = models.ForeignKey(Goods)\n shop = models.ForeignKey(Shop)\n remain = models.IntegerField() # 剩余\n last_updater = models.ForeignKey(User)\n last_update_date = models.DateTimeField(auto_now=True)\n\n def __unicode__(self): # Python 3: def __str__(self):\n return u\"%s--%s\" % (self.shop, self.goods)\n\n\nclass GoodsRecord(models.Model):\n goods = models.ForeignKey(Goods)\n shop = models.ForeignKey(Shop)\n change_num = models.IntegerField()\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now=True)\n\n def __unicode__(self): # Python 3: def __str__(self):\n return u\"%s--%s\" % (self.shop, self.goods)\n\n\nclass Order(models.Model):\n name = models.CharField(max_length=20)\n is_arrears = models.BooleanField()\n customer = models.CharField(max_length=10, default='无')\n phonenumber = models.CharField(max_length=15, default='无')\n address = models.CharField(max_length=50, default='无')\n remark = models.TextField(blank=True, null=True)\n all_price = models.FloatField(default=0)\n all_profit = models.FloatField(default=0)\n is_delete = models.BooleanField(default=False)\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n def __unicode__(self): # 
Python 3: def __str__(self):\n return self.name\n\n\nclass GoodsSellRecord(models.Model):\n goods = models.ForeignKey(Goods)\n shop = models.ForeignKey(Shop)\n sell_num = models.IntegerField()\n average_price = models.FloatField()\n sell_price = models.FloatField()\n is_arrears = models.BooleanField()\n customer = models.CharField(max_length=10, default='无')\n phonenumber = models.CharField(max_length=15, default='无')\n address = models.CharField(max_length=50, default='无')\n remark = models.TextField(blank=True, null=True)\n order = models.ForeignKey(Order, blank=True, null=True)\n is_delete = models.BooleanField(default=False)\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n def get_profit(self):\n profit = self.sell_num * (self.sell_price - self.average_price)\n return profit\n\n def get_receivable(self):\n receivable = self.sell_num * self.sell_price\n return receivable\n\n def __unicode__(self): # Python 3: def __str__(self):\n return u\"%s--%s\" % (self.shop, self.goods)\n\n\nclass GoodsReturnRecord(models.Model):\n goods_sell_record = models.ForeignKey(GoodsSellRecord)\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n\nclass InboundChannel(models.Model):\n name = models.CharField(max_length=15)\n phonenumber = models.CharField(max_length=15)\n\n\nclass GoodsAddRecord(models.Model):\n goods = models.ForeignKey(Goods)\n shop = models.ForeignKey(Shop)\n number = models.IntegerField()\n price = models.FloatField()\n inbound_channel = models.ForeignKey(InboundChannel)\n remark = models.TextField(blank=True, null=True)\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n def __unicode__(self): # Python 3: def __str__(self):\n return u\"%s--%s\" % (self.shop, self.goods)\n\n\n#\n#\n# class TransferGoods(models.Model):\n# from_shop = models.ForeignKey(Shop, related_name='from_shop')\n# to_shop = models.ForeignKey(Shop, related_name='to_name')\n# goods = 
models.ForeignKey(Goods)\n# change_num = models.IntegerField()\n# updater = models.ForeignKey(User)\n# date = models.DateTimeField(auto_now_add=True)\n#\n# def __unicode__(self): # Python 3: def __str__(self):\n# return u\"%s--%s--%s--%s\" % (self.from_shop, self.to_shop, self.goods, self.change_num)\n#\n#\n# class ChangePrice(models.Model):\n# goods = models.ForeignKey(Goods)\n# old_price = models.FloatField()\n# new_price = models.FloatField()\n# updater = models.ForeignKey(User)\n# date = models.DateTimeField(auto_now=True)\n#\n# def __unicode__(self): # Python 3: def __str__(self):\n# return self.goods.name\n#\n#\nclass Backup(models.Model):\n goods_name = models.CharField(max_length=15)\n kadi_count = models.IntegerField()\n is_lastet = models.BooleanField(default=True)\n save_datetime = models.DateTimeField(auto_now_add=True)\n\n\nclass LineItem(models.Model):\n product = models.ForeignKey(GoodsShop)\n unit_price = models.FloatField()\n quantity = models.IntegerField()\n\n def __unicode__(self): # Python 3: def __str__(self):\n return self.product.id\n\n\nclass Cart(object):\n def __init__(self):\n self.items = []\n self.total_price = 0\n\n def add_product(self, product, unit_price, quantity):\n self.total_price += unit_price * quantity\n self.items.append(LineItem(product=product, unit_price=unit_price, quantity=quantity))\n\n\nclass OtherCost(models.Model):\n purpose = models.CharField(max_length=10)\n price = models.FloatField()\n is_delete = models.BooleanField(default=False)\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n\nclass TransferShop(models.Model):\n name = models.CharField(max_length=20)\n phonenumber = models.CharField(max_length=15)\n\n def __unicode__(self): # Python 3: def __str__(self):\n return self.name\n\n\nclass TransferRecord(models.Model):\n transfer_shop = models.ForeignKey(TransferShop)\n goods = models.ForeignKey(Goods)\n count = models.IntegerField()\n remark = models.TextField()\n updater 
= models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n\nclass ChangeCountRecord(models.Model):\n goods = models.ForeignKey(Goods)\n old_count = models.IntegerField()\n real_count = models.IntegerField()\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n\n# 还款记录\nclass RefundRecord(models.Model):\n sell_record = models.ForeignKey(GoodsSellRecord)\n updater = models.ForeignKey(User)\n date = models.DateTimeField(auto_now_add=True)\n\n def get_receivable(self):\n receivable = self.sell_record.sell_num * self.sell_record.sell_price\n return receivable"
},
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.75,
"avg_line_length": 17.600000381469727,
"blob_id": "940651ac776a0944c06f0a14810fbeb44b1803d6",
"content_id": "ca463b0e65a8154c2195ecfe9586f6763637d8a2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 92,
"license_type": "permissive",
"max_line_length": 28,
"num_lines": 5,
"path": "/price/models.py",
"repo_name": "jansusea/kddz",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\nclass Goods(models.Model):\n pass"
},
{
"alpha_fraction": 0.5048019289970398,
"alphanum_fraction": 0.5049519538879395,
"avg_line_length": 77.35713958740234,
"blob_id": "c47bea45158a81204fec1c1135d80f8bb3d9738e",
"content_id": "ea2e97d06f047a24bc79c442be746cb87f2ce793",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6664,
"license_type": "permissive",
"max_line_length": 122,
"num_lines": 84,
"path": "/kucun/urls.py",
"repo_name": "jansusea/kddz",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\nfrom django.conf.urls import patterns, url\r\n\r\n__author__ = 'JiaPan'\r\n\r\nurlpatterns = patterns('',\r\n url(r'^$', 'kucun.views.all_goods'),\r\n\r\n url(r'^all/goods/$', 'kucun.views.all_goods', name='all_goods'),\r\n # url(r'^guoao/phone/$', 'kucun.views.guoao_phone', name='guoaophone'),\r\n # url(r'^dadian/phone/$', 'kucun.views.dadian_phone', name='dadianphone'),\r\n # url(r'^hongwei/phone/$', 'kucun.views.hongwei_phone', name='hongweiphone'),\r\n #\r\n # url(r'^all/peijian/$', 'kucun.views.all_peijian', name='allpeijian'),\r\n # url(r'^guoao/peijian/$', 'kucun.views.guoao_peijian', name='guoaopeijian'),\r\n # url(r'^dadian/peijian/$', 'kucun.views.dadian_peijian', name='dadianpeijian'),\r\n # url(r'^hongwei/peijian/$', 'kucun.views.hongwei_peijian', name='hongweipeijian'),\r\n #\r\n url(r'^add/$', 'kucun.views.add_goods', name=\"addgoods\"),\r\n url(r'^add/success/$', 'kucun.views.add_success', name=\"addsuccess\"),\r\n url(r'^make_order/$', 'kucun.views.make_order', name=\"make_order\"),\r\n url(r'^inbound_channel/$', 'kucun.views.inbound_channel', name=\"inbound_channel\"),\r\n url(r'^transfer_shop_manage/$', 'kucun.views.transfer_shop_manage', name=\"transfer_shop_manage\"),\r\n url(r'^login/$', 'kucun.views.mylogin', name='mylogin'),\r\n url(r'^login/fail$', 'kucun.views.login_fail', name='login_fail'),\r\n url(r'^logout/$', 'kucun.views.mylogout', name=\"logout\"),\r\n #\r\n url(r'^api/sell/$', 'kucun.views.api_sell', name=\"api_sell\"),\r\n url(r'^api/transfer/$', 'kucun.views.api_transfer', name=\"api_transfer\"),\r\n url(r'^api/delete_sell_record/$', 'kucun.views.delete_sell_record', name=\"delete_sell_record\"),\r\n url(r'^api/delete_order_record/$', 'kucun.views.delete_order_record',\r\n name=\"delete_order_record\"),\r\n url(r'^api/add/$', 'kucun.views.api_add', name=\"api_add\"),\r\n url(r'^api/sell_info/$', 'kucun.views.api_sell_info', name=\"api_sell_info\"),\r\n url(r'^api/order_info/$', 
'kucun.views.api_order_info', name=\"api_order_info\"),\r\n url(r'^api/order_list/$', 'kucun.views.api_order_list', name=\"api_order_list\"),\r\n url(r'^api/change_arrears/$', 'kucun.views.api_change_arrears', name=\"api_change_arrears\"),\r\n url(r'^api/change_order_arrears/$', 'kucun.views.api_change_order_arrears',\r\n name=\"api_change_order_arrears\"),\r\n # url(r'^api/diaoku/$', 'kucun.views.api_diaoku', name=\"api_diaoku\"),\r\n url(r'^api/update/$', 'kucun.views.api_update', name=\"api_update\"),\r\n url(r'^api/update_count/$', 'kucun.views.api_update_count', name=\"api_update_count\"),\r\n url(r'^api/add_cart/$', 'kucun.views.add_cart', name=\"add_cart\"),\r\n url(r'^api/clean_cart', 'kucun.views.clean_cart', name=\"clean_cart\"),\r\n url(r'^api/delete_cart', 'kucun.views.delete_cart', name=\"delete_cart\"),\r\n url(r'^api/submit_cart', 'kucun.views.submit_cart', name=\"submit_cart\"),\r\n url(r'^api/delete_inbound', 'kucun.views.delete_inbound', name=\"delete_inbound\"),\r\n url(r'^api/delete_goods', 'kucun.views.delete_goods', name=\"delete_goods\"),\r\n url(r'^api/delete_transfer_shop', 'kucun.views.delete_transfer_shop', name=\"delete_transfer_shop\"),\r\n #\r\n url(r'^outin/$', 'kucun.views.out_in', name=\"out_in\"),\r\n url(r'^out/$', 'kucun.views.out', name=\"out\"),\r\n url(r'^in/$', 'kucun.views.in_', name=\"in\"),\r\n url(r'^sell_record/$', 'kucun.views.sell_record', name=\"sell_record\"),\r\n url(r'^add_record/$', 'kucun.views.add_record', name=\"add_record\"),\r\n url(r'^today_profit/$', 'kucun.views.today_profit', name=\"today_profit\"),\r\n url(r'^yesterday_profit/$', 'kucun.views.yesterday_profit', name=\"yesterday_profit\"),\r\n url(r'^this_month_profit/$', 'kucun.views.this_month_profit', name=\"this_month_profit\"),\r\n url(r'^last_month_profit/$', 'kucun.views.last_month_profit', name=\"last_month_profit\"),\r\n url(r'^other_month_profit/$', 'kucun.views.other_month_profit', name=\"other_month_profit\"),\r\n url(r'^all_arrears/$', 
'kucun.views.all_arrears', name=\"all_arrears\"),\r\n url(r'^order_arrears/$', 'kucun.views.order_arrears', name=\"order_arrears\"),\r\n url(r'^order_manage/$', 'kucun.views.order_manage', name=\"order_manage\"),\r\n url(r'^goods_return_record/$', 'kucun.views.goods_return_record', name=\"goods_return_record\"),\r\n url(r'^goods_transfer_record/$', 'kucun.views.transfer_record', name=\"transfer_record\"),\r\n url(r'^other_cost/$', 'kucun.views.other_cost', name=\"other_cost\"),\r\n url(r'^delete_goods/$', 'kucun.views.delete_goods', name=\"delete_goods\"),\r\n\r\n url(r'arrears/(\\d+)/(\\d+)/$', 'kucun.views.check_month_arrears', name='check_month_arrears'),\r\n\r\n\r\n # url(r'^checkoutin/$', 'kucun.views.check_out_in', name=\"check_out_in\"),\r\n # url(r'^transfer/$', 'kucun.views.transfer', name=\"transfer\"),\r\n # url(r'^changeprice/$', 'kucun.views.change_price', name=\"change_price\"),\r\n #\r\n # url(r'^checkbackup/$', 'kucun.views.check_backup', name=\"check_backup\"),\r\n # url(r'^backup/$', 'kucun.views.mybackup', name=\"backup\"),\r\n\r\n # url(r'^modal/diaoku/$', 'kucun.views.modal_diaoku', name=\"modal_diaoku\"),\r\n url(r'^cart_show', 'kucun.views.cart_show', name=\"cart_show\"),\r\n\r\n url(r'^chart/profit/$', 'kucun.views.profit_chart', name=\"profit_chart\"),\r\n url(r'^chart/sell_ranking/$', 'kucun.views.sell_ranking_chart', name=\"sell_ranking_chart\"),\r\n)"
},
{
"alpha_fraction": 0.567209780216217,
"alphanum_fraction": 0.5732813477516174,
"avg_line_length": 42.58961486816406,
"blob_id": "e5b5353bc276efc93d04c733373747096420c19b",
"content_id": "8c59dd9d4fb2a8cf54f88ed869b63d9ca27d9a34",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 52792,
"license_type": "permissive",
"max_line_length": 208,
"num_lines": 1194,
"path": "/kucun/views.py",
"repo_name": "jansusea/kddz",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport datetime\n\nfrom django.contrib.auth import authenticate, login, logout\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponse\nfrom django.http.response import HttpResponseRedirect\nfrom django.shortcuts import render, render_to_response\nfrom models import Goods, Shop, GoodsShop, GoodsRecord, Backup, GoodsAddRecord, GoodsSellRecord, InboundChannel, \\\n Cart, Order, GoodsReturnRecord, OtherCost, TransferShop, TransferRecord, ChangeCountRecord, RefundRecord\nfrom django.db.models import F\n\n\n@login_required(login_url='/kucun/login')\ndef all_goods(request):\n goods = Goods.objects.filter(is_delete=False).order_by('goods_name')\n datas = []\n amount = 0\n for good in goods:\n kadi = GoodsShop.objects.get(goods=good, shop__name='卡迪电子')\n m = {'goods': good, 'kadi': kadi}\n amount += kadi.remain\n datas.append(m)\n shang = len(goods) / 3\n yu = len(goods) % 3\n if yu != 0:\n shang += 1\n\n return render_to_response('all_goods.html',\n {'request': request, 'data1': datas[:shang], 'data2': datas[shang:shang * 2],\n 'data3': datas[shang * 2:], 'title': '卡迪管理系统', 'header': '卡迪管理系统', 'amount': amount})\n\n\n@login_required(login_url='/kucun/login')\ndef delete_goods(request):\n if request.method == \"GET\":\n goods = Goods.objects.filter(is_delete=False).order_by('goods_name')\n datas = []\n amount = 0\n for good in goods:\n kadi = GoodsShop.objects.get(goods=good, shop__name='卡迪电子')\n m = {'goods': good, 'kadi': kadi}\n amount += kadi.remain\n datas.append(m)\n shang = len(goods) / 3\n yu = len(goods) % 3\n if yu != 0:\n shang += 1\n\n return render_to_response('delete_goods.html',\n {'request': request, 'data1': datas[:shang], 'data2': datas[shang:shang * 2],\n 'data3': datas[shang * 2:], 'title': '删除商品', 'header': '删除商品', 'amount': amount})\n elif request.method == \"POST\":\n goods_id = request.POST['goods_id']\n goods = 
Goods.objects.get(id=goods_id)\n goods.is_delete = True\n goods.save()\n return HttpResponse(\"success\")\n\n\n@login_required(login_url='/kucun/login')\ndef hongwei_phone(request):\n goodsshops = GoodsShop.objects.filter(shop__name='红卫店', goods__goods_type=0).order_by('goods__name')\n amount = 0\n for goodsshop in goodsshops:\n amount += goodsshop.remain\n shang = len(goodsshops) / 3\n yu = len(goodsshops) % 3\n if yu != 0:\n shang += 1\n return render_to_response('shop_phone.html',\n {'request': request, 'data1': goodsshops[:shang], 'data2': goodsshops[shang:shang * 2],\n 'data3': goodsshops[shang * 2:], 'title': '红卫店:手机', 'header': '红卫店:手机',\n 'amount': amount})\n\n\n@login_required(login_url='/kucun/login')\ndef hongwei_peijian(request):\n goodsshops = GoodsShop.objects.filter(shop__name='红卫店', goods__goods_type=1).order_by('goods__name')\n amount = 0\n for goodsshop in goodsshops:\n amount += goodsshop.remain\n shang = len(goodsshops) / 3\n yu = len(goodsshops) % 3\n if yu != 0:\n shang += 1\n return render_to_response('shop_phone.html',\n {'request': request, 'data1': goodsshops[:shang], 'data2': goodsshops[shang:shang * 2],\n 'data3': goodsshops[shang * 2:], 'title': '红卫店:配件', 'header': '红卫店:配件',\n 'amount': amount})\n\n\n@login_required(login_url='/kucun/login')\ndef dadian_phone(request):\n goodsshops = GoodsShop.objects.filter(shop__name='大店', goods__goods_type=0).order_by('goods__name')\n amount = 0\n for goodsshop in goodsshops:\n amount += goodsshop.remain\n shang = len(goodsshops) / 3\n yu = len(goodsshops) % 3\n if yu != 0:\n shang += 1\n return render_to_response('shop_phone.html',\n {'request': request, 'data1': goodsshops[:shang], 'data2': goodsshops[shang:shang * 2],\n 'data3': goodsshops[shang * 2:], 'title': '大店:手机', 'header': '大店:手机', 'amount': amount})\n\n\n@login_required(login_url='/kucun/login')\ndef dadian_peijian(request):\n goodsshops = GoodsShop.objects.filter(shop__name='大店', goods__goods_type=1).order_by('goods__name')\n amount = 0\n 
for goodsshop in goodsshops:\n amount += goodsshop.remain\n shang = len(goodsshops) / 3\n yu = len(goodsshops) % 3\n if yu != 0:\n shang += 1\n return render_to_response('shop_phone.html',\n {'request': request, 'data1': goodsshops[:shang], 'data2': goodsshops[shang:shang * 2],\n 'data3': goodsshops[shang * 2:], 'title': '大店:配件', 'header': '大店:配件', 'amount': amount})\n\n\n@login_required(login_url='/kucun/login')\ndef guoao_phone(request):\n goodsshops = GoodsShop.objects.filter(shop__name='国奥店', goods__goods_type=0).order_by('goods__name')\n amount = 0\n for goodsshop in goodsshops:\n amount += goodsshop.remain\n shang = len(goodsshops) / 3\n yu = len(goodsshops) % 3\n if yu != 0:\n shang += 1\n return render_to_response('shop_phone.html',\n {'request': request, 'data1': goodsshops[:shang], 'data2': goodsshops[shang:shang * 2],\n 'data3': goodsshops[shang * 2:], 'title': '国奥店:手机', 'header': '国奥店:手机',\n 'amount': amount})\n\n\n@login_required(login_url='/kucun/login')\ndef guoao_peijian(request):\n goodsshops = GoodsShop.objects.filter(shop__name='国奥店', goods__goods_type=1).order_by('goods__name')\n amount = 0\n for goodsshop in goodsshops:\n amount += goodsshop.remain\n shang = len(goodsshops) / 3\n yu = len(goodsshops) % 3\n if yu != 0:\n shang += 1\n return render_to_response('shop_phone.html',\n {'request': request, 'data1': goodsshops[:shang], 'data2': goodsshops[shang:shang * 2],\n 'data3': goodsshops[shang * 2:], 'title': '国奥店:配件', 'header': '国奥店:配件',\n 'amount': amount})\n\n\n@login_required(login_url='/kucun/login')\ndef add_goods(request):\n if request.method == 'GET':\n return render_to_response('add_goods.html', {'request': request, 'title': '添加商品', 'header': '添加商品'})\n elif request.method == 'POST':\n user = request.user\n goodsname = request.POST['goodsname']\n price = request.POST['price']\n goods = Goods(goods_name=goodsname, average_price=price, last_price=price, add_people=user)\n goods.save()\n shops = Shop.objects.all()\n for shop in shops:\n 
goodsshop = GoodsShop(goods=goods, shop=shop, remain=0, last_updater=user)\n goodsshop.save()\n return HttpResponseRedirect(reverse('addsuccess'))\n\n\ndef add_success(request):\n return render_to_response('add_success.html', {'request': request, 'title': '添加成功','header': '添加成功'})\n\n\ndef mylogin(request):\n if request.method == 'GET':\n # logout(request)\n return render_to_response('login.html')\n elif request.method == 'POST':\n next = request.GET.get('next', '/kucun/all/goods/')\n username = request.POST['username']\n password = request.POST['password']\n user = authenticate(username=username, password=password)\n if user is not None:\n if user.is_active:\n login(request, user)\n return HttpResponseRedirect(next)\n else:\n return HttpResponse('账号被锁定!')\n else:\n return HttpResponseRedirect(reverse('login_fail'))\n\n\ndef login_fail(request):\n return render_to_response('login_fail.html', {'request': request, 'title': '登录失败'})\n\n\ndef mylogout(request):\n logout(request)\n return HttpResponseRedirect(reverse('mylogin'))\n\n\n@login_required(login_url='/kucun/login')\ndef api_sell(request): # 进出库\n if request.method == 'GET':\n goods_id = request.GET['goods_id']\n shop_id = request.GET['shop_id']\n action = request.GET['action']\n goodsshop = GoodsShop.objects.get(goods=Goods.objects.get(id=goods_id), shop=Shop.objects.get(id=shop_id))\n if action != 'sub':\n return HttpResponse('error')\n return render_to_response('modal_sell.html',\n {'request': request, 'goodsshop': goodsshop})\n elif request.method == 'POST':\n user = request.user\n if not user:\n return HttpResponse(\"false\")\n goods_id = request.POST['goods_id']\n shop_id = request.POST['shop_id']\n number = int(request.POST['number'])\n price = float(request.POST['price'])\n arrears = request.POST['arrears']\n customer = request.POST.get('customer', '无')\n phonenumber = request.POST.get('phonenumber', '无')\n address = request.POST.get('address', '无')\n remark = request.POST.get('remark', '无')\n\n if 
arrears == '0':\n arrears = False\n if arrears == '1':\n arrears = True\n\n goods = Goods.objects.get(id=goods_id)\n shop = Shop.objects.get(id=shop_id)\n goodsshop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n\n goods_record = GoodsRecord(goods=goods, shop=shop, change_num=(-number), updater=user)\n goods_record.save()\n\n goodsshop.remain -= int(number)\n goodsshop.save()\n\n goods_sell_record = GoodsSellRecord(goods=goods, shop=shop, sell_num=number, average_price=goods.average_price,\n sell_price=price, is_arrears=arrears, customer=customer,\n phonenumber=phonenumber, address=address, remark=remark,\n updater=user)\n goods_sell_record.save()\n return HttpResponse(goodsshop.remain)\n\n\n@login_required(login_url='/kucun/login')\ndef api_transfer(request): # 进出库\n if request.method == 'GET':\n goods_id = request.GET['goods_id']\n shop_id = request.GET['shop_id']\n goodsshop = GoodsShop.objects.get(goods=Goods.objects.get(id=goods_id), shop=Shop.objects.get(id=shop_id))\n transfer_shops = TransferShop.objects.all()\n return render_to_response('modal_transfer.html',\n {'request': request, 'goodsshop': goodsshop, 'transfer_shops': transfer_shops})\n elif request.method == 'POST':\n user = request.user\n if not user:\n return HttpResponse(\"false\")\n to_shop_id = request.POST['to_shop_id']\n goods_id = request.POST['goods_id']\n shop_id = request.POST['shop_id']\n number = int(request.POST['number'])\n remark = request.POST.get('remark', '无')\n\n goods = Goods.objects.get(id=goods_id)\n shop = Shop.objects.get(id=shop_id)\n goodsshop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n transfer_shop = TransferShop.objects.get(id=to_shop_id)\n\n goods_record = GoodsRecord(goods=goods, shop=shop, change_num=(-number), updater=user)\n goods_record.save()\n\n transfer_record = TransferRecord(transfer_shop=transfer_shop, goods=goods, count=number, remark=remark,\n updater=user)\n transfer_record.save()\n\n goodsshop.remain -= int(number)\n 
goodsshop.save()\n\n return HttpResponse(goodsshop.remain)\n\n\n@login_required(login_url='/kucun/login')\ndef api_add(request): # 进库\n if request.method == 'GET':\n goods_id = request.GET['goods_id']\n shop_id = request.GET['shop_id']\n action = request.GET['action']\n goodsshop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n inbound_channels = InboundChannel.objects.order_by('id')\n if action != 'add':\n return HttpResponse('error')\n return render_to_response('modal_add.html',\n {'request': request, 'goodsshop': goodsshop, 'inbound_channels': inbound_channels})\n elif request.method == 'POST':\n user = request.user\n if not user:\n return HttpResponse(\"false\")\n shop_id = request.POST['shop_id']\n goods_id = request.POST['goods_id']\n number = int(request.POST['number'])\n price = request.POST.get('price')\n remark = request.POST.get('remark', \"\")\n inbound_channel_id = request.POST['inbound_channel_id']\n if price:\n price = float(price)\n if price < 0:\n return HttpResponse('false')\n if number < 0:\n return HttpResponse('false')\n\n goodsshop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n goods = Goods.objects.get(id=goods_id)\n shop = Shop.objects.get(id=shop_id)\n inbound_channel = InboundChannel.objects.get(id=inbound_channel_id)\n\n if not price:\n price = goods.last_price\n\n goods.average_price = round((goodsshop.remain * goods.average_price + int(number) * price) / (\n goodsshop.remain + int(number)), 2)\n goods.last_price = price\n goods.save()\n\n goods_record = GoodsRecord(goods=goods, shop=shop, change_num=number, updater=user)\n goods_record.save()\n\n goodsshop.remain += int(number)\n goodsshop.save()\n\n goods_add_record = GoodsAddRecord(goods=goods, shop=shop, number=number, price=price,\n inbound_channel=inbound_channel, remark=remark, updater=user)\n goods_add_record.save()\n\n return HttpResponse(goodsshop.remain)\n\n\ndef api_sell_info(request):\n if request.method == 'GET':\n sell_record_id = 
request.GET['sell_record_id']\n record = GoodsSellRecord.objects.get(id=sell_record_id) # sell_record已经被之前的一个函数使用了\n return render_to_response('modal_sell_info.html', {'request': request, 'record': record})\n\n\ndef api_order_info(request):\n if request.method == 'GET':\n order_id = request.GET['order_id']\n order = Order.objects.get(id=order_id)\n return render_to_response('modal_order_info.html', {'request': request, 'order': order})\n\n\ndef api_order_list(request):\n if request.method == 'GET':\n order_id = request.GET['order_id']\n order = Order.objects.get(id=order_id)\n sell_records = GoodsSellRecord.objects.filter(order=order, is_delete=False)\n return render_to_response('modal_order_list.html',\n {'request': request, 'order': order, 'sell_records': sell_records})\n\n\ndef api_change_arrears(request):\n user = request.user\n today = datetime.date.today()\n if request.method == 'POST':\n sell_record_id = request.POST['sell_record_id']\n record = GoodsSellRecord.objects.get(id=sell_record_id) # sell_record已经被之前的一个函数使用了\n record.is_arrears = not record.is_arrears\n record.save()\n if record.is_arrears:\n arrears = '是'\n refund_records = RefundRecord.objects.filter(sell_record=record, date__year=today.year,\n date__month=today.month,\n date__day=today.day)\n if refund_records:\n for record in refund_records:\n record.delete()\n else:\n arrears = '否'\n refund_record = RefundRecord(sell_record=record, updater=user)\n refund_record.save()\n return HttpResponse(arrears)\n\n\ndef api_change_order_arrears(request):\n user = request.user\n today = datetime.date.today()\n if request.method == 'POST':\n order_id = request.POST['order_id']\n order = Order.objects.get(id=order_id)\n order.is_arrears = not order.is_arrears\n order.save()\n sell_records = GoodsSellRecord.objects.filter(order=order)\n for record in sell_records:\n record.is_arrears = order.is_arrears\n record.save()\n if not order.is_arrears:\n refund_record = RefundRecord(sell_record=record, updater=user)\n 
refund_record.save()\n else:\n refund_records = RefundRecord.objects.filter(sell_record=record, date__year=today.year,\n date__month=today.month,\n date__day=today.day)\n if refund_records:\n for record in refund_records:\n record.delete()\n if order.is_arrears:\n arrears = '是'\n else:\n arrears = '否'\n return HttpResponse(arrears)\n\n\n@login_required(login_url='/kucun/login')\ndef api_update(request):\n if request.method == 'GET':\n goods_id = request.GET['goods_id']\n goods = Goods.objects.get(id=goods_id)\n return render_to_response('modal_update.html', {'request': request, 'goods': goods})\n elif request.method == 'POST':\n user = request.user\n if not user.is_superuser:\n return HttpResponse(\"stop\")\n old_goods_name = request.POST['old_goods_name']\n old_goods_price = request.POST['old_goods_price']\n name = request.POST['name']\n goods_id = request.POST['goods_id']\n price = request.POST['price']\n\n if old_goods_name != name:\n records = GoodsRecord.objects.filter(goods__name=old_goods_name)\n if records:\n return HttpResponse('not_update_name')\n\n goods = Goods.objects.get(id=goods_id)\n goods.name = name\n goods.average_price = price\n goods.update_date = datetime.date.today()\n goods.save()\n\n return HttpResponse(goods.name)\n\n\n@login_required(login_url='/kucun/login')\ndef api_update_count(request):\n if request.method == 'GET':\n goods_id = request.GET['goods_id']\n shop_id = request.GET['shop_id']\n goods_shop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n return render_to_response('modal_update_count.html', {'request': request, 'goods_shop': goods_shop})\n elif request.method == 'POST':\n user = request.user\n goods_id = request.POST['goods_id']\n shop_id = request.POST['shop_id']\n real_count = request.POST['real_count']\n goods_shop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n old_count = goods_shop.remain\n\n change_count_record = ChangeCountRecord(goods=goods_shop.goods, old_count=old_count, 
real_count=real_count,\n updater=user)\n change_count_record.save()\n\n goods_shop.remain = real_count\n goods_shop.save()\n\n return HttpResponse(goods_shop.remain)\n\n\n@login_required(login_url='/kucun/login')\ndef out_in(request):\n today = datetime.date.today()\n every_day_sell_records = []\n for i in range(0, 10):\n that_day = today - datetime.timedelta(days=i)\n goods_records = GoodsRecord.objects.filter(date__year=that_day.year, date__month=that_day.month,\n date__day=that_day.day).order_by('-date')\n day_and_records_map = {'date': that_day, 'records': goods_records}\n every_day_sell_records.append(day_and_records_map)\n header = title = '进出库记录'\n return render_to_response('out_in.html',\n {'request': request, 'every_day_sell_records': every_day_sell_records, 'header': header,\n 'title': title})\n\n\n@login_required(login_url='/kucun/login')\ndef out(request):\n today = datetime.date.today()\n every_day_sell_records = []\n for i in range(0, 10):\n that_day = today - datetime.timedelta(days=i)\n goods_records = GoodsRecord.objects.filter(date__year=that_day.year, date__month=that_day.month,\n date__day=that_day.day, change_num__lte=0).order_by('-date')\n\n day_and_records_map = {'date': that_day, 'records': goods_records}\n every_day_sell_records.append(day_and_records_map)\n header = title = '进库记录'\n return render_to_response('out_in.html',\n {'request': request, 'every_day_sell_records': every_day_sell_records, 'header': header,\n 'title': title})\n\n\n@login_required(login_url='/kucun/login')\ndef in_(request):\n today = datetime.date.today()\n every_day_sell_records = []\n for i in range(0, 10):\n that_day = today - datetime.timedelta(days=i)\n goods_records = GoodsRecord.objects.filter(date__year=that_day.year, date__month=that_day.month,\n date__day=that_day.day, change_num__gte=0).order_by('-date')\n\n day_and_records_map = {'date': that_day, 'records': goods_records}\n every_day_sell_records.append(day_and_records_map)\n header = title = '出库记录'\n 
return render_to_response('out_in.html',\n {'request': request, 'every_day_sell_records': every_day_sell_records, 'header': header,\n 'title': title})\n\n\n@login_required(login_url='/kucun/login')\ndef sell_record(request):\n today = datetime.date.today()\n every_day_sell_records = []\n receivable = 0\n reality = 0\n debt = 0\n for i in range(0, 10):\n that_day = today - datetime.timedelta(days=i)\n that_day_sell_records = GoodsSellRecord.objects.filter(date__year=that_day.year, date__month=that_day.month,\n date__day=that_day.day, is_delete=False)\n that_day_other_cost_records = OtherCost.objects.filter(date__year=that_day.year, date__month=that_day.month,\n date__day=that_day.day, is_delete=False)\n\n that_day_profit = 0\n that_day_cost = 0\n for record in that_day_sell_records:\n if i == 0:\n receivable += record.get_receivable()\n if not record.is_arrears:\n reality += record.get_receivable()\n that_day_profit += record.get_profit()\n for record in that_day_other_cost_records:\n that_day_cost += record.price\n\n pure_profit = that_day_profit - that_day_cost\n\n day_and_records_map = {'date': that_day, 'records': that_day_sell_records, 'profit': that_day_profit,\n 'cost': that_day_cost, 'pure_profit': pure_profit}\n every_day_sell_records.append(day_and_records_map)\n\n # 将还款加入实收里边\n refund_records = RefundRecord.objects.filter(date__year=today.year, date__month=today.month,\n date__day=today.day)\n for refund_record in refund_records:\n debt += refund_record.get_receivable()\n total_reality = debt + reality\n\n header = title = '今日实收:%s' % (total_reality,)\n\n return render_to_response('sell_record.html',\n {'request': request, 'every_day_sell_records': every_day_sell_records, 'header': header,\n 'title': title, 'reality': reality, 'receivable': receivable, 'debt': debt,\n 'total_reality': total_reality})\n\n\ndef delete_sell_record(request):\n if request.method == 'POST':\n user = request.user\n if not user:\n return HttpResponse(\"false\")\n record_id = 
request.POST['record_id']\n sell_record = GoodsSellRecord.objects.get(id=record_id)\n if sell_record.is_delete == True:\n return HttpResponse('delete_false')\n goods_record = GoodsRecord(goods=sell_record.goods, shop=sell_record.shop, change_num=sell_record.sell_num,\n updater=user)\n goods_record.save()\n\n goodsshop = GoodsShop.objects.get(goods=sell_record.goods, shop=sell_record.shop)\n goodsshop.remain += sell_record.sell_num\n goodsshop.save()\n\n goods_return_record = GoodsReturnRecord(goods_sell_record=sell_record, updater=user)\n goods_return_record.save()\n\n sell_record.is_delete = True\n sell_record.save()\n return HttpResponse('success')\n\n\ndef delete_order_record(request):\n if request.method == 'POST':\n user = request.user\n if not user:\n return HttpResponse(\"false\")\n order_id = request.POST['order_id']\n order = Order.objects.get(id=order_id)\n records = GoodsSellRecord.objects.filter(order=order, is_delete=False)\n for sell_record in records:\n sell_record.is_delete = True\n sell_record.save()\n\n goodsshop = GoodsShop.objects.get(goods=sell_record.goods, shop=sell_record.shop)\n goodsshop.remain += sell_record.sell_num\n goodsshop.save()\n\n goods_record = GoodsRecord(goods=sell_record.goods, shop=sell_record.shop, change_num=sell_record.sell_num,\n updater=user)\n goods_record.save()\n\n return_record = GoodsReturnRecord(goods_sell_record=sell_record, updater=user)\n return_record.save()\n order.is_delete = True\n order.save()\n return HttpResponse('success')\n\n\n@login_required(login_url='/kucun/login')\ndef add_record(request):\n if not request.user.is_superuser:\n return HttpResponse('error')\n add_records = GoodsAddRecord.objects.filter(\n date__gt=datetime.date.today() - datetime.timedelta(days=3)).order_by(\n '-date')\n for record in add_records:\n record.all_price = record.price * record.number\n\n header = title = '进货记录'\n return render_to_response('add_record.html',\n {'request': request, 'add_records': add_records, 'header': 
header, 'title': title})\n\n\ndef today_profit(request):\n if not request.user.is_superuser:\n return HttpResponse('error')\n today = datetime.date.today()\n sell_records = GoodsSellRecord.objects.filter(date__year=today.year, date__month=today.month,\n date__day=today.day, is_delete=False).order_by('-date')\n cost_records = OtherCost.objects.filter(date__year=today.year, date__month=today.month,\n date__day=today.day, is_delete=False)\n all_profit = 0\n all_cost = 0\n all_sell = 0\n all_arrears = 0\n for record in sell_records:\n all_profit += record.get_profit()\n all_sell += record.sell_price * record.sell_num\n if record.is_arrears:\n all_arrears += record.sell_price * record.sell_num\n for record in cost_records:\n all_cost += record.price\n mao_profit = all_profit\n all_profit -= all_cost\n header = title = '今日利润:%s' % (all_profit,)\n return render_to_response('profit_check.html',\n {'request': request, 'all_sell': all_sell, 'mao_profit': mao_profit,\n 'all_profit': all_profit, 'all_arrears': all_arrears,\n 'all_cost': all_cost,\n 'sell_records': sell_records, 'header': header,\n 'title': title})\n\n\ndef yesterday_profit(request):\n if not request.user.is_superuser:\n return HttpResponse('error')\n yesterday = datetime.date.today() - datetime.timedelta(days=1)\n sell_records = GoodsSellRecord.objects.filter(date__year=yesterday.year, date__month=yesterday.month,\n date__day=yesterday.day, is_delete=False).order_by('-date')\n cost_records = OtherCost.objects.filter(date__year=yesterday.year, date__month=yesterday.month,\n date__day=yesterday.day, is_delete=False)\n all_profit = 0\n all_cost = 0\n all_sell = 0\n all_arrears = 0\n for record in sell_records:\n all_profit += record.get_profit()\n all_sell += record.sell_price * record.sell_num\n if record.is_arrears:\n all_arrears += record.sell_price * record.sell_num\n for record in cost_records:\n all_cost += record.price\n mao_profit = all_profit\n all_profit -= all_cost\n header = title = '昨日利润:%s' % 
(all_profit,)\n return render_to_response('profit_check.html',\n {'request': request, 'all_sell': all_sell, 'mao_profit': mao_profit,\n 'all_profit': all_profit, 'all_arrears': all_arrears,\n 'all_cost': all_cost,\n 'sell_records': sell_records, 'header': header,\n 'title': title})\n\n\ndef this_month_profit(request):\n if not request.user.is_superuser:\n return HttpResponse('error')\n this_month_first_day = datetime.date(datetime.date.today().year, datetime.date.today().month, 1)\n sell_records = GoodsSellRecord.objects.filter(date__year=this_month_first_day.year,\n date__month=this_month_first_day.month, is_delete=False).order_by(\n '-date')\n cost_records = OtherCost.objects.filter(date__year=this_month_first_day.year,\n date__month=this_month_first_day.month, is_delete=False)\n all_profit = 0\n all_cost = 0\n all_sell = 0\n all_arrears = 0\n for record in sell_records:\n all_profit += record.get_profit()\n all_sell += record.sell_price * record.sell_num\n if record.is_arrears:\n all_arrears += record.sell_price * record.sell_num\n for record in cost_records:\n all_cost += record.price\n mao_profit = all_profit\n all_profit -= all_cost\n header = title = '本月利润:%s' % (all_profit,)\n return render_to_response('profit_check.html',\n {'request': request, 'all_sell': all_sell, 'mao_profit': mao_profit,\n 'all_profit': all_profit, 'all_arrears': all_arrears,\n 'all_cost': all_cost, 'select_date': this_month_first_day,\n 'sell_records': sell_records, 'header': header,\n 'title': title})\n\n\ndef last_month_profit(request):\n if not request.user.is_superuser:\n return HttpResponse('error')\n this_month_first_day = datetime.date(datetime.date.today().year, datetime.date.today().month, 1)\n last_month_last_day = this_month_first_day - datetime.timedelta(1)\n last_month_first_day = datetime.date(last_month_last_day.year, last_month_last_day.month, 1)\n sell_records = GoodsSellRecord.objects.filter(date__year=last_month_first_day.year,\n 
date__month=last_month_first_day.month, is_delete=False).order_by(\n '-date')\n cost_records = OtherCost.objects.filter(date__year=last_month_first_day.year,\n date__month=last_month_first_day.month,\n is_delete=False)\n all_profit = 0\n all_cost = 0\n all_sell = 0\n all_arrears = 0\n for record in sell_records:\n all_profit += record.get_profit()\n all_sell += record.sell_price * record.sell_num\n if record.is_arrears:\n all_arrears += record.sell_price * record.sell_num\n for record in cost_records:\n all_cost += record.price\n mao_profit = all_profit\n all_profit -= all_cost\n header = title = '上月利润:%s' % (all_profit,)\n return render_to_response('profit_check.html',\n {'request': request, 'all_sell': all_sell, 'mao_profit': mao_profit,\n 'all_profit': all_profit, 'all_arrears': all_arrears,\n 'all_cost': all_cost, 'select_date': last_month_first_day,\n 'sell_records': sell_records, 'header': header,\n 'title': title})\n\n\ndef other_month_profit(request):\n if not request.user.is_superuser:\n return HttpResponse('error')\n if request.method == 'GET':\n return render_to_response('check_profit.html', {'request': request, 'header': '月利润查询', 'title': '月利润查询'})\n elif request.method == 'POST':\n sdate = request.POST['date']\n select_date = datetime.datetime.strptime(sdate, '%Y-%m')\n sell_records = GoodsSellRecord.objects.filter(date__year=select_date.year,\n date__month=select_date.month, is_delete=False).order_by(\n '-date')\n cost_records = OtherCost.objects.filter(date__year=select_date.year,\n date__month=select_date.month,\n is_delete=False)\n all_profit = 0\n all_cost = 0\n all_sell = 0\n all_arrears = 0\n for record in sell_records:\n all_profit += record.get_profit()\n all_sell += record.sell_price * record.sell_num\n if record.is_arrears:\n all_arrears += record.sell_price * record.sell_num\n for record in cost_records:\n all_cost += record.price\n mao_profit = all_profit\n all_profit -= all_cost\n header = title = '%s月利润:%s' % (select_date.month, 
all_profit,)\n return render_to_response('profit_check.html',\n {'request': request, 'all_sell': all_sell, 'mao_profit': mao_profit,\n 'all_profit': all_profit, 'all_arrears': all_arrears,\n 'all_cost': all_cost,\n 'sell_records': sell_records, 'header': header,\n 'title': title, 'select_date': select_date})\n\n\ndef all_arrears(request):\n sell_records = GoodsSellRecord.objects.filter(is_arrears=True, is_delete=False, order=None).order_by('-date')\n all_count = 0\n for record in sell_records:\n all_count += record.get_receivable()\n header = title = '共欠款:%s' % (all_count,)\n return render_to_response('arrears_goods.html',\n {'request': request, 'sell_records': sell_records, 'header': header, 'title': title})\n\n\ndef order_arrears(request):\n orders = Order.objects.filter(is_delete=False, is_arrears=True).order_by('-date')\n return render_to_response('order_manage.html',\n {'request': request, 'orders': orders, 'header': '欠账订单', 'title': '欠账订单'})\n\n\n@login_required(login_url='/kucun/login')\ndef check_out_in(request):\n return render_to_response('check_out_in.html', {'title': '进出库查询', 'header': '进出库查询'})\n\n\ndef mybackup(request):\n # 现将今天之前备份过的置为不是最新\n today = datetime.date.today()\n old_backups = Backup.objects.filter(save_datetime__year=today.year, save_datetime__month=today.month,\n save_datetime__day=today.day)\n for old_backup in old_backups:\n old_backup.is_lastet = False\n old_backup.save()\n\n goodss = Goods.objects.order_by('name')\n for goods in goodss:\n guoao = GoodsShop.objects.get(goods=goods, shop__name='国奥店')\n dadian = GoodsShop.objects.get(goods=goods, shop__name='大店')\n hongwei = GoodsShop.objects.get(goods=goods, shop__name='红卫店')\n backup = Backup(goods_name=goods.name, goods_type=goods.goods_type, dadian_count=dadian.remain,\n guoaodian_count=guoao.remain, hongweidian_count=hongwei.remain)\n backup.save()\n\n return HttpResponse('success')\n\n\n@login_required(login_url='/kucun/login')\ndef check_backup(request):\n if request.method == 
'GET':\n return render_to_response(\"check_backup_select_date.html\",\n {'request': request, 'title': '历史查询', 'header': '历史查询'})\n elif request.method == 'POST':\n sdate = request.POST['date']\n select_date = datetime.datetime.strptime(sdate, '%Y-%m-%d')\n\n backups = Backup.objects.filter(save_datetime__year=select_date.year,\n save_datetime__month=select_date.month,\n save_datetime__day=select_date.day,\n is_lastet=True).order_by('goods_type', 'goods_name')\n for backup in backups:\n backup.all_count = backup.dadian_count + backup.guoaodian_count + backup.hongweidian_count\n title = u'%s年%s月%s日' % (select_date.year, select_date.month, select_date.day)\n shang = len(backups) / 3\n yu = len(backups) % 3\n if yu != 0:\n shang += 1\n return render_to_response('check_backup.html',\n {'request': request, 'backups1': backups[:shang],\n 'backups2': backups[shang:shang * 2],\n 'backups3': backups[shang * 2:], 'title': title, 'header': title})\n\n\ndef inbound_channel(request):\n if request.method == 'GET':\n inbounds = InboundChannel.objects.exclude(name='无')\n return render_to_response('inbound_channel.html',\n {'request': request, 'inbounds': inbounds, 'title': '进货渠道', 'header': '进货渠道'})\n elif request.method == 'POST':\n name = request.POST['name']\n phonenumber = request.POST.get('phonenumber', '')\n inbound = InboundChannel(name=name, phonenumber=phonenumber)\n inbound.save()\n return HttpResponseRedirect(reverse('inbound_channel'))\n\n\ndef delete_inbound(request):\n if request.method == 'POST':\n inbound_id = request.POST['inbound_id']\n inbound = InboundChannel.objects.get(id=inbound_id)\n inbound.delete()\n return HttpResponse('success')\n\n\ndef delete_transfer_shop(request):\n if request.method == 'POST':\n shop_id = request.POST['shop_id']\n shop = TransferShop.objects.get(id=shop_id)\n shop.delete()\n return HttpResponse('success')\n\n\ndef make_order(request):\n if request.method == 'GET':\n goods = 
Goods.objects.filter(is_delete=False).order_by('goods_name')\n datas = []\n for good in goods:\n kadi = GoodsShop.objects.get(goods=good, shop__name='卡迪电子')\n m = {'goods': good, 'kadi': kadi}\n # amount += kadi.remain\n datas.append(m)\n shang = len(goods) / 2\n yu = len(goods) % 2\n if yu != 0:\n shang += 1\n\n return render_to_response('make_order.html',\n {'request': request, 'data1': datas[:shang], 'data2': datas[shang:shang * 2],\n 'title': '批量售货', 'header': '批量售货'})\n\n\ndef add_cart(request):\n if request.method == 'GET':\n goods_id = request.GET['goods_id']\n shop_id = request.GET['shop_id']\n goodsshop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n return render_to_response('modal_add_cart.html', {'goodsshop': goodsshop})\n elif request.method == 'POST':\n goods_id = request.POST['goods_id']\n shop_id = request.POST['shop_id']\n number = request.POST['number']\n price = request.POST['price']\n\n goodsshop = GoodsShop.objects.get(goods__id=goods_id, shop__id=shop_id)\n cart = request.session.get(\"cart\", None)\n if not cart:\n cart = Cart()\n request.session[\"cart\"] = cart\n cart.add_product(goodsshop, float(price), int(number))\n request.session['cart'] = cart\n return HttpResponse(goodsshop.remain - int(number))\n\n\ndef cart_show(request):\n cart = request.session.get('cart', None)\n if not cart:\n cart = Cart()\n request.session['cart'] = cart\n return render_to_response(\"cart_show.html\", {'cart': cart})\n\n\ndef clean_cart(request):\n request.session['cart'] = Cart()\n return HttpResponse('success')\n\n\ndef delete_cart(request):\n item_id = request.POST['item_id']\n cart = request.session.get('cart', None)\n cart.total_price -= cart.items[int(item_id)].quantity * cart.items[int(item_id)].unit_price\n del cart.items[int(item_id)]\n request.session['cart'] = cart\n return HttpResponse('delete success')\n\n\ndef submit_cart(request):\n if request.method == 'GET':\n cart = request.session['cart']\n return 
render_to_response('modal_order_submit.html', {'cart': cart})\n elif request.method == 'POST':\n user = request.user\n if not user:\n return HttpResponse(\"false\")\n arrears = request.POST['arrears']\n customer = request.POST.get('customer', '无')\n phonenumber = request.POST.get('phonenumber', '无')\n address = request.POST.get('address', '无')\n remark = request.POST.get('remark', '无')\n if arrears == '0':\n arrears = False\n if arrears == '1':\n arrears = True\n cart = request.session['cart']\n items = cart.items\n now = datetime.datetime.now()\n order = Order(name=now.strftime('%Y%m%d%H%M%S'), is_arrears=arrears, customer=customer,\n phonenumber=phonenumber,\n address=address, remark=remark, updater=user)\n order.save()\n all_price = 0\n all_profit = 0\n for item in items:\n goodsshop = item.product\n unit_price = item.unit_price\n quantity = item.quantity\n all_price += unit_price * quantity\n all_profit += ((unit_price * quantity) - (goodsshop.goods.average_price * quantity))\n goods_record = GoodsRecord(goods=goodsshop.goods, shop=goodsshop.shop, change_num=(-quantity),\n updater=user)\n goods_record.save()\n goodsshop.remain = F('remain') - quantity\n goodsshop.save(force_update=True)\n goods_sell_record = GoodsSellRecord(goods=goodsshop.goods, shop=goodsshop.shop, sell_num=quantity,\n average_price=goodsshop.goods.average_price,\n sell_price=unit_price, is_arrears=arrears, customer=customer,\n phonenumber=phonenumber, address=address, remark=remark,\n order=order,\n updater=user)\n goods_sell_record.save()\n order.all_price = all_price\n order.all_profit = all_profit\n order.save()\n request.session['cart'] = Cart()\n return HttpResponse('success')\n\n\ndef order_manage(request):\n orders = Order.objects.all().filter(is_delete=False).order_by('-date')[:100]\n return render_to_response('order_manage.html',\n {'request': request, 'orders': orders, 'header': '订单管理', 'title': '订单管理'})\n\n\ndef goods_return_record(request):\n records = 
GoodsReturnRecord.objects.all().order_by('-date')[:100]\n return render_to_response(\"goods_return_record.html\",\n {'request': request, 'records': records, 'title': '退库记录', 'header': '退库记录'})\n\n\ndef other_cost(request):\n if request.method == 'GET':\n today = datetime.date.today()\n every_day_cost_records = []\n total_cost = 0\n for i in range(0, 10):\n that_day = today - datetime.timedelta(days=i)\n that_day_other_cost_records = OtherCost.objects.filter(date__year=that_day.year, date__month=that_day.month,\n date__day=that_day.day, is_delete=False)\n\n that_day_cost = 0\n for record in that_day_other_cost_records:\n if i == 0:\n total_cost += record.price\n that_day_cost += record.price\n\n day_and_cost_map = {'date': that_day, 'records': that_day_other_cost_records, 'cost': that_day_cost}\n every_day_cost_records.append(day_and_cost_map)\n\n header = title = '今开销:%s' % (total_cost,)\n\n return render_to_response('other_cost.html',\n {'request': request, 'every_day_cost_records': every_day_cost_records,\n 'header': header, 'title': title})\n elif request.method == 'POST':\n user = request.user\n purpose = request.POST['purpose']\n price = request.POST['price']\n cost = OtherCost(purpose=purpose, price=price, updater=user)\n cost.save()\n return HttpResponseRedirect(reverse('other_cost'))\n\n\ndef transfer_shop_manage(request):\n if request.method == 'GET':\n shops = TransferShop.objects.all()\n return render_to_response('transfer_shop_manage.html',\n {'request': request, 'shops': shops, 'title': '调入方', 'header': '调入方'})\n elif request.method == 'POST':\n name = request.POST['name']\n phonenumber = request.POST.get('phonenumber', '')\n shop = TransferShop(name=name, phonenumber=phonenumber)\n shop.save()\n return HttpResponseRedirect(reverse('transfer_shop_manage'))\n\n\ndef transfer_record(request):\n today = datetime.date.today()\n every_day_transfer_record = []\n for i in range(0, 10):\n that_day = today - datetime.timedelta(days=i)\n transfer_records = 
TransferRecord.objects.filter(date__year=that_day.year, date__month=that_day.month,\n date__day=that_day.day).order_by('-date')\n day_and_records_map = {'date': that_day, 'records': transfer_records}\n every_day_transfer_record.append(day_and_records_map)\n return render_to_response('goods_transfer_record.html',\n {'request': request, 'every_day_transfer_record': every_day_transfer_record,\n 'title': '调库记录', 'header': '调库记录'})\n\n\ndef check_month_arrears(request, year, month):\n title = u'%s月欠款' % (month,)\n orders = Order.objects.filter(date__year=year, date__month=month, is_arrears=True, is_delete=False).order_by(\n '-date')\n sell_records = GoodsSellRecord.objects.filter(is_arrears=True, order=None, is_delete=False, date__year=year,\n date__month=month).order_by('-date')\n return render_to_response('month_arrears.html',\n {'request': request, 'orders': orders, 'sell_records': sell_records, 'header': title,\n 'title': title})\n\n\ndef profit_chart(request):\n title = '利润走势'\n if not request.user.is_superuser:\n return HttpResponse('error')\n profit_list = []\n\n month1 = datetime.date(datetime.date.today().year, datetime.date.today().month, 1)\n month2 = datetime.date((month1 - datetime.timedelta(1)).year, (month1 - datetime.timedelta(1)).month, 1)\n month3 = datetime.date((month2 - datetime.timedelta(1)).year, (month2 - datetime.timedelta(1)).month, 1)\n month4 = datetime.date((month3 - datetime.timedelta(1)).year, (month3 - datetime.timedelta(1)).month, 1)\n month5 = datetime.date((month4 - datetime.timedelta(1)).year, (month4 - datetime.timedelta(1)).month, 1)\n month6 = datetime.date((month5 - datetime.timedelta(1)).year, (month5 - datetime.timedelta(1)).month, 1)\n month7 = datetime.date((month6 - datetime.timedelta(1)).year, (month6 - datetime.timedelta(1)).month, 1)\n\n # =============================\n sell_records1 = GoodsSellRecord.objects.filter(date__year=month1.year,\n date__month=month1.month, is_delete=False).order_by('-date')\n profit1 = 0\n for 
record in sell_records1:\n profit1 += record.get_profit()\n profit_list.append({'month': month1, 'profit': profit1})\n # =============================\n sell_records2 = GoodsSellRecord.objects.filter(date__year=month2.year,\n date__month=month2.month, is_delete=False).order_by('-date')\n profit2 = 0\n for record in sell_records2:\n profit2 += record.get_profit()\n profit_list.append({'month': month2, 'profit': profit2})\n # =============================\n sell_records3 = GoodsSellRecord.objects.filter(date__year=month3.year,\n date__month=month3.month, is_delete=False).order_by('-date')\n profit3 = 0\n for record in sell_records3:\n profit3 += record.get_profit()\n profit_list.append({'month': month3, 'profit': profit3})\n # =============================\n sell_records4 = GoodsSellRecord.objects.filter(date__year=month4.year,\n date__month=month4.month, is_delete=False).order_by('-date')\n profit4 = 0\n for record in sell_records4:\n profit4 += record.get_profit()\n profit_list.append({'month': month4, 'profit': profit4})\n # =============================\n sell_records5 = GoodsSellRecord.objects.filter(date__year=month5.year,\n date__month=month5.month, is_delete=False).order_by('-date')\n profit5 = 0\n for record in sell_records5:\n profit5 += record.get_profit()\n profit_list.append({'month': month5, 'profit': profit5})\n # =============================\n sell_records6 = GoodsSellRecord.objects.filter(date__year=month6.year,\n date__month=month6.month, is_delete=False).order_by('-date')\n profit6 = 0\n for record in sell_records6:\n profit6 += record.get_profit()\n profit_list.append({'month': month6, 'profit': profit6})\n # =============================\n sell_records7 = GoodsSellRecord.objects.filter(date__year=month7.year,\n date__month=month7.month, is_delete=False).order_by('-date')\n profit7 = 0\n for record in sell_records7:\n profit7 += record.get_profit()\n profit_list.append({'month': month7, 'profit': profit7})\n # =============================\n\n # 
return HttpResponse(profit_list)\n return render_to_response('profit_chart.html',\n {'request': request, 'profit_list': profit_list, 'header': title, 'title': title})\n\n\ndef sell_ranking_chart(request):\n goods = Goods.objects.filter(is_delete=False)\n sell_arrar = []\n for good in goods:\n sell_count = 0\n sell_records = GoodsSellRecord.objects.filter(goods=good)\n for record in sell_records:\n sell_count += record.sell_num\n sell_arrar.append({'goods': good, 'sell_count': sell_count})\n sell_arrar.sort(lambda y, x: cmp(x['sell_count'], y['sell_count']))\n goods_name_arrar = []\n for rank in sell_arrar:\n goods_name_arrar.append(rank['goods'].goods_name)\n title = '销量排行'\n return render_to_response('sell_ranking_chart.html',\n {'request': request, 'sell_arrar': sell_arrar, 'goods_name_arrar': goods_name_arrar,\n 'header': title, 'title': title})\n\ndef love(request):\n today = datetime.datetime.today()\n love_day = datetime.datetime(2011, 11, 14)\n lose_day = datetime.datetime(2014, 11, 18)\n my_birth = datetime.datetime(1992, 6, 12)\n she_birth = datetime.datetime(1993, 11, 17)\n dead_day = datetime.datetime(2065, 11, 17)\n love_days = (today - love_day).days\n love_seconds = (today - love_day).seconds\n lose_days = (today - lose_day).days\n my_birth_days = (today - my_birth).days\n she_birth_days = (today - she_birth).days\n dead_sec = (dead_day - today).total_seconds()\n return render_to_response('love.html', {'love_days': love_days, 'lose_days': lose_days, 'my_birth_days': my_birth_days, 'she_birth_days': she_birth_days, 'love_seconds':love_seconds, 'dead_sec':dead_sec})\n"
},
{
"alpha_fraction": 0.7751938104629517,
"alphanum_fraction": 0.7751938104629517,
"avg_line_length": 27.69444465637207,
"blob_id": "75017e112589aca4f6f122871ee3c476526e4c62",
"content_id": "16fbdd78635f439524b5b84cf0825d10c0731f98",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1032,
"license_type": "permissive",
"max_line_length": 113,
"num_lines": 36,
"path": "/kucun/admin.py",
"repo_name": "jansusea/kddz",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n#\n# # Register your models here.\n#\nfrom models import Shop, GoodsShop, GoodsRecord, Goods, GoodsAddRecord, GoodsSellRecord, InboundChannel, Order, \\\n GoodsReturnRecord, TransferShop, TransferRecord\n\nadmin.site.register(Shop)\n#\nadmin.site.register(GoodsShop)\nadmin.site.register(GoodsRecord)\nadmin.site.register(GoodsAddRecord)\nadmin.site.register(InboundChannel)\nadmin.site.register(Order)\nadmin.site.register(GoodsReturnRecord)\nadmin.site.register(TransferShop)\nadmin.site.register(TransferRecord)\n\n# admin.site.register(ChangePrice)\n#\n#\nclass GoodsAdmin(admin.ModelAdmin):\n list_display = ('goods_name', 'average_price', 'last_price', 'update_date')\n\n\nclass GoodsSellRecordAdmin(admin.ModelAdmin):\n list_display = ('id', 'goods', 'date')\n\n\n# class BackupAdmin(admin.ModelAdmin):\n# list_display = ('goods_name', 'goods_type', 'save_datetime')\n\n\nadmin.site.register(Goods, GoodsAdmin)\nadmin.site.register(GoodsSellRecord, GoodsSellRecordAdmin)\n# admin.site.register(Backup, BackupAdmin)"
}
] | 6 |
AliAghel/plugin.video.persiantvs | https://github.com/AliAghel/plugin.video.persiantvs | 08e5b7539a2a344dffd9265fddc72af62f41a847 | 078967de10999f6b30fe67bfbf9c456399994f6f | 470f13104b55d0c95b90ef7dea7a0f143a7be132 | refs/heads/main | 2022-12-24T23:08:55.302574 | 2020-10-02T10:29:34 | 2020-10-02T10:29:34 | 300,547,795 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.8169013857841492,
"alphanum_fraction": 0.8169013857841492,
"avg_line_length": 34.5,
"blob_id": "e0313fec14718008c9ed51597b613ef1f600166f",
"content_id": "956945dbf27c4cd4b384fb3cd7194e0e7b1734de",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 71,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 2,
"path": "/README.md",
"repo_name": "AliAghel/plugin.video.persiantvs",
"src_encoding": "UTF-8",
"text": "# plugin.video.persiantvs\nKodi video addon for the Persian TV channels\n"
},
{
"alpha_fraction": 0.554576575756073,
"alphanum_fraction": 0.5638766288757324,
"avg_line_length": 30.9375,
"blob_id": "d61985c09d4997e3cc25a3874fd4e050854f3a57",
"content_id": "fe0237390fc112b082b311e7f71de3e7dc1e4d72",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2043,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 64,
"path": "/bbcpersian.py",
"repo_name": "AliAghel/plugin.video.persiantvs",
"src_encoding": "UTF-8",
"text": "from __future__ import unicode_literals\nfrom lib.youtube_dl import YoutubeDL\nfrom utils import *\nimport logging\nimport os.path\n\nlogger = logging.getLogger(__name__)\n\nTHIS_DIR = os.path.abspath(os.path.dirname(__file__))\nBBCPERSIAN_PATH = os.path.join(THIS_DIR, 'data', 'bbcpersian.pkl')\n\n\ndef create_bbcpersian_object():\n try:\n bbc_live_url = 'https://www.youtube.com/watch?v=TE5d4omulHg'\n ydl = YoutubeDL({'outtmpl': '%(id)s%(ext)s',\n 'format': 'best',\n 'quiet': True,\n 'no_color': True,\n 'no_warnings': True,\n 'hls_prefer_native': True\n })\n with ydl:\n result = ydl.extract_info(bbc_live_url, download=False)\n if 'entries' in result:\n video = result['entries'][0]\n else:\n video = result\n\n bbcpersian_video = video['url']\n\n bbcpersian_object = {\n 'name': 'BBC PERSIAN',\n 'thumb': 'http://www.bbc.co.uk/news/special/2015/newsspec_11063/persian_1024x576.png',\n 'video': bbcpersian_video,\n 'genre': 'NEWS'\n }\n\n return bbcpersian_object\n\n except (ValueError, KeyError):\n logger.exception('Failed to create BBC PERSIAN object!')\n\n\ndef bbcpersian():\n # if is_object_exists(BBCPERSIAN_PATH):\n # try:\n # logger.info('BBC PERSIAN object loaded successfully!')\n # bbcpersian = load_existing_object(BBCPERSIAN_PATH)\n # return bbcpersian\n # except (ValueError, KeyError):\n # logger.exception('Loaded BBC PERSIAN object does not work!')\n # # Create new object\n # bbcpersian = create_bbcpersian_object()\n # # Save new model\n # save_object(bbcpersian, BBCPERSIAN_PATH)\n # return bbcpersian\n bbcpersian = create_bbcpersian_object()\n # Save new model\n # save_object(bbcpersian, BBCPERSIAN_PATH)\n return bbcpersian\n\n\nprint(bbcpersian())"
},
{
"alpha_fraction": 0.5501369833946228,
"alphanum_fraction": 0.560547947883606,
"avg_line_length": 34.096153259277344,
"blob_id": "bbc09464bdb455bfb87a62e9da24811d3c3202f4",
"content_id": "86d17793f187dc44b4680b87271b2c886268e953",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1825,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 52,
"path": "/manoto.py",
"repo_name": "AliAghel/plugin.video.persiantvs",
"src_encoding": "UTF-8",
"text": "from requests.adapters import HTTPAdapter\nfrom urllib3.util.retry import Retry\nfrom bs4 import BeautifulSoup\nimport requests\n\nfrom requests_html import HTML\n\n\ndef get_manoto_video():\n try:\n # base url of all channels in telewebion\n base_url = \"https://www.manototv.com/live\"\n\n session = requests.Session()\n\n retries = Retry(total=5,\n backoff_factor=0.1,\n status_forcelist=[500, 502, 503, 504])\n\n session.mount('http://', HTTPAdapter(max_retries=retries))\n session.mount('https://', HTTPAdapter(max_retries=retries))\n\n response = session.get(base_url,\n headers={'user-agent': 'my-app',\n 'referer': 'https://www.manototv.com/',\n 'origin': 'https://www.manototv.com/',\n 'cache-control': 'no-cache',\n 'Content-Type': 'application/json'}\n )\n\n # throw exception if request does not return 2xx\n response.raise_for_status()\n\n content = response.content\n\n # BeautifulSoup object\n soup = BeautifulSoup(content, features=\"html.parser\")\n html = HTML(html=content, url=base_url)\n # source = 'http:' + soup.find_all(\"source\")[0]['src']\n return html.render()\n\n except requests.exceptions.HTTPError as e:\n return \"HTTP Error: \" + str(e)\n except requests.exceptions.ConnectionError as e:\n return \"Connection Error: \" + str(e)\n except requests.exceptions.Timeout as e:\n return \"Timeout Error: \" + str(e)\n except requests.exceptions.RequestException as e:\n return \"Whoops! Something went wrong: \" + str(e)\n\n\nprint(get_manoto_video())\n"
},
{
"alpha_fraction": 0.541091799736023,
"alphanum_fraction": 0.5524895191192627,
"avg_line_length": 35.260868072509766,
"blob_id": "25c6423230a44d2d5b9b442acf5431dc4c3fc8e9",
"content_id": "b3085e4961732082eb63f982279f3cb6ff96fac5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1667,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 46,
"path": "/radiojavan.py",
"repo_name": "AliAghel/plugin.video.persiantvs",
"src_encoding": "UTF-8",
"text": "from requests.adapters import HTTPAdapter\nfrom urllib3.util.retry import Retry\nfrom bs4 import BeautifulSoup\nimport requests\n\n\ndef get_radiojavan_video():\n try:\n base_url = \"https://www.radiojavan.com/tv\"\n\n session = requests.Session()\n\n retries = Retry(total=5,\n backoff_factor=0.1,\n status_forcelist=[500, 502, 503, 504])\n\n session.mount('http://', HTTPAdapter(max_retries=retries))\n session.mount('https://', HTTPAdapter(max_retries=retries))\n\n response = session.get(base_url,\n headers={'user-agent': 'my-app',\n 'referer': 'https://www.radiojavan.com/',\n 'origin': 'https://www.radiojavan.com/',\n 'cache-control': 'no-cache',\n 'Content-Type': 'application/json'}\n )\n\n # throw exception if request does not return 2xx\n response.raise_for_status()\n\n content = response.content\n\n # BeautifulSoup object\n soup = BeautifulSoup(content, features=\"html.parser\")\n\n source = 'http:' + soup.find_all(\"source\")[0]['src']\n return source\n\n except requests.exceptions.HTTPError as e:\n return \"HTTP Error: \" + str(e)\n except requests.exceptions.ConnectionError as e:\n return \"Connection Error: \" + str(e)\n except requests.exceptions.Timeout as e:\n return \"Timeout Error: \" + str(e)\n except requests.exceptions.RequestException as e:\n return \"Whoops! Something went wrong: \" + str(e)"
},
{
"alpha_fraction": 0.595174252986908,
"alphanum_fraction": 0.5958445072174072,
"avg_line_length": 25.163637161254883,
"blob_id": "fb1b88f3d2393ed31654b7e4d90ec5a329d01bdc",
"content_id": "b2ff70f856103c978db514cc567b2dbea0049cdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1492,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 55,
"path": "/utils.py",
"repo_name": "AliAghel/plugin.video.persiantvs",
"src_encoding": "UTF-8",
"text": "import pickle\r\nimport logging\r\nimport os.path\r\n\r\nlogger = logging.getLogger(__name__)\r\nlogger.setLevel(logging.DEBUG)\r\nscreen_handler = logging.StreamHandler()\r\nscreen_handler.setLevel(logging.DEBUG)\r\nscreen_handler.setFormatter(logging.Formatter(fmt='%(asctime)s - %(levelname)s - %(module)s - %(message)s',\r\n datefmt='%Y-%m-%d %H:%M:%S'))\r\nlogger.addHandler(screen_handler)\r\n\r\n\r\ndef is_object_exists(obj_path):\r\n \"\"\"Check if a object exists on the path\r\n\r\n Returns:\r\n bool: True for existence, False otherwise\r\n \"\"\"\r\n\r\n logger.info('Checking object availability...')\r\n if os.path.isfile(obj_path):\r\n logger.info('Object exists!')\r\n return True\r\n else:\r\n logger.warning('Object does not exist!')\r\n return False\r\n\r\n\r\ndef load_existing_object(obj_path):\r\n \"\"\"Load a pickled obj\r\n\r\n Returns:\r\n obj: Return loaded obj\r\n \"\"\"\r\n\r\n with open(obj_path, \"rb\") as f:\r\n obj = pickle.load(f)\r\n logger.info('Object loaded successfully!')\r\n return obj\r\n\r\n\r\ndef save_object(new_obj, obj_path):\r\n \"\"\"Save obj on disk as a pickle file\r\n\r\n Args:\r\n new_obj (dict): Given obj\r\n obj_path ():\r\n \"\"\"\r\n try:\r\n with open(obj_path, 'wb') as pickle_file:\r\n pickle.dump(new_obj, pickle_file, protocol=2)\r\n logger.info('Object saved successfully!')\r\n except (ValueError, KeyError):\r\n logger.exception('Failed to save the object!')"
},
{
"alpha_fraction": 0.5085437297821045,
"alphanum_fraction": 0.6064779162406921,
"avg_line_length": 33.394737243652344,
"blob_id": "12c519df9c44b76bc6cd13fd40c9e999efaf95be",
"content_id": "5b104e906e8547bf3469c98c4361dd147880100c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3921,
"license_type": "no_license",
"max_line_length": 285,
"num_lines": 114,
"path": "/tva.py",
"repo_name": "AliAghel/plugin.video.persiantvs",
"src_encoding": "UTF-8",
"text": "from requests.adapters import HTTPAdapter\nfrom urllib3.util.retry import Retry\nimport requests\nfrom lib.youtube_dl import utils\nfrom utils import *\nimport logging\nimport os.path\n\nlogger = logging.getLogger(__name__)\n\nTHIS_DIR = os.path.abspath(os.path.dirname(__file__))\nTVA_PATH = os.path.join(THIS_DIR, 'data', 'tva.pkl')\n\n\ndef static_channels_list():\n return {\n 'TV1':\n {'uuid': '0823beb2-f2fa-4a2c-ae37-d429a0f55d80',\n 'image': 'https://s3.ott.tva.tv/rosing-tva-production/831342b5dc81d07ebec7_512x512c.png'},\n 'TV2':\n {'uuid': '6fcc0a2e-1135-482c-b054-08a96e68b758',\n 'image': 'https://s3.ott.tva.tv/rosing-tva-production/bec73f72f63958fc6998_512x512c.png'},\n 'TV3':\n {'uuid': '0149e4b4-6027-4be9-af1d-35223920d6db',\n 'image': 'https://s3.ott.tva.tv/rosing-tva-production/2768e5ba4bbed336b88e_512x512c.png'},\n 'IRINN':\n {'uuid': 'ff76db87-84ff-4b94-bd6e-0656cf1b9428',\n 'image': 'https://s3.ott.tva.tv/rosing-tva-production/44d0105b6c9ec94b5c3e_512x512c.png'},\n 'Varzesh':\n {'uuid': '41eb32ae-00bd-4236-8ce2-c96063a35096',\n 'image': 'https://s3.ott.tva.tv/rosing-tva-production/0ab89817dd01379d6156_512x512c.png'}\n }\n\n\ndef get_access_token():\n url = \"https://api.ott.tva.tv/oauth/token?client_id=66797942-ff54-46cb-a109-3bae7c855370\"\n payload = {\n \"client_id\": \"66797942-ff54-46cb-a109-3bae7c855370\",\n \"client_version\": \"0.0.1\",\n \"locale\": \"fa-IR\",\n \"timezone\": 7200,\n \"grant_type\": \"password\",\n \"username\": \"989125150439\",\n \"password\": \"[email protected]\",\n \"client_secret\": \"d0ae2c6c-d881-40ad-88f7-202d75ce0c0e\"\n }\n\n response = requests.request(\"POST\", url, data=payload)\n return response.json()['access_token']\n\n\ndef get_video(uuid):\n base_url = 
'https://api.ott.tva.tv/v1/channels/{}/stream.json?audio_codec=mp4a&client_id=66797942-ff54-46cb-a109-3bae7c855370&client_version=0.0.1&device_token=7cf4df59-7c9c-40aa-97f0-38f516424038&drm=spbtvcas&locale=fa-IR&protocol=hls&screen_height=1080&screen_width=1920&timezone=7200&video_codec=h264'.format(\n uuid)\n\n session = requests.Session()\n\n retries = Retry(total=5,\n backoff_factor=0.1,\n status_forcelist=[500, 502, 503, 504])\n\n session.mount('http://', HTTPAdapter(max_retries=retries))\n session.mount('https://', HTTPAdapter(max_retries=retries))\n\n std_headers = {\n 'User-Agent': utils.random_user_agent(),\n 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7',\n 'Accept': 'application/json, text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'Accept-Encoding': 'gzip, deflate, br',\n 'Accept-Language': 'en-us,en;q=0.5',\n }\n\n if uuid == '41eb32ae-00bd-4236-8ce2-c96063a35096':\n payload = 'access_token=' + get_access_token()\n response = session.get(base_url, headers=std_headers, params=payload)\n else:\n response = session.get(base_url, headers=std_headers)\n\n content = response.json()\n\n return content['data']['url']\n\n\ndef create_tva_object():\n lst = []\n for k, v in static_channels_list().items():\n try:\n channel = {'name': k,\n 'thumb': v['image'],\n 'video': get_video(v['uuid']),\n 'genre': 'IRIBTV'}\n lst.append(channel)\n except (ValueError, KeyError):\n logger.exception('Failed to add %s channel!' % k)\n\n return lst\n\n\ndef tva():\n # if is_object_exists(TVA_PATH):\n # tva = load_existing_object(TVA_PATH)\n # try:\n # return tva\n # except (ValueError, KeyError):\n # logger.exception('Loaded TVA object does not work!')\n\n # Create new object\n tva = create_tva_object()\n # Save new model\n # save_object(tva, TVA_PATH)\n return tva\n\n\nprint(tva())\n"
},
{
"alpha_fraction": 0.5458851456642151,
"alphanum_fraction": 0.5658180117607117,
"avg_line_length": 33.71232986450195,
"blob_id": "3d0a367b1a7a08e31c0cc44bfa79c926c02cc568",
"content_id": "57737a1b0c446803d4d823c03951b5fd836e1db6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5067,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 146,
"path": "/telewebion.py",
"repo_name": "AliAghel/plugin.video.persiantvs",
"src_encoding": "UTF-8",
"text": "from requests.adapters import HTTPAdapter\nfrom urllib3.util.retry import Retry\nfrom bs4 import BeautifulSoup\nimport requests\nfrom utils import *\nimport logging\nimport os.path\n\nlogger = logging.getLogger(__name__)\n\nTHIS_DIR = os.path.abspath(os.path.dirname(__file__))\nTELEWEBION_PATH = os.path.join(THIS_DIR, 'data', 'telewebion.pkl')\n\n\ndef static_channels_list():\n return ['tv1', 'tv2', 'tv3', 'irinn', 'varzesh']\n\n\ndef get_all_channels():\n # base url of all channels in telewebion\n base_url = \"https://www.telewebion.com/channels\"\n\n try:\n session = requests.Session()\n\n retries = Retry(total=5,\n backoff_factor=0.1,\n status_forcelist=[500, 502, 503, 504])\n\n session.mount('http://', HTTPAdapter(max_retries=retries))\n session.mount('https://', HTTPAdapter(max_retries=retries))\n\n response = session.get(base_url,\n headers={'user-agent': 'my-app',\n 'referer': 'https://www.telewebion.com',\n 'origin': 'https://www.telewebion.com',\n 'cache-control': 'no-cache'}\n )\n\n # throw exception if request does not return 2xx\n response.raise_for_status()\n\n channels_url_content = response.content\n\n # BeautifulSoup object\n soup = BeautifulSoup(channels_url_content, features=\"html.parser\")\n\n # create a list of all channels\n all_channels_list = [a['href'].split('/')[-1]\n for a in soup.select('.box.h-100.pointer.d-block')]\n\n if len(all_channels_list) == 0:\n return ['tv1', 'tv2', 'tv3', 'irinn', 'varzesh']\n else:\n return all_channels_list\n\n except requests.exceptions.HTTPError as e:\n return \"HTTP Error: \" + str(e)\n except requests.exceptions.ConnectionError as e:\n return \"Connection Error: \" + str(e)\n except requests.exceptions.Timeout as e:\n return \"Timeout Error: \" + str(e)\n except requests.exceptions.RequestException as e:\n return \"Whoops! 
Something went wrong: \" + str(e)\n\n\ndef make_request(channels_name):\n base_url = 'https://w32.telewebion.com/v3/channels/{}/details?device=desktop&logo_version=4&thumb_size=240&'.format(\n channels_name)\n\n session = requests.Session()\n\n retries = Retry(total=5,\n backoff_factor=0.1,\n status_forcelist=[500, 502, 503, 504])\n\n session.mount('http://', HTTPAdapter(max_retries=retries))\n session.mount('https://', HTTPAdapter(max_retries=retries))\n\n response = session.get(base_url,\n headers={'user-agent': 'my-app',\n 'referer': 'https://www.telewebion.com/live/' + channels_name,\n 'origin': 'https://www.telewebion.com',\n 'cache-control': 'no-cache', }\n )\n\n # return response request data as json\n return response.json()\n\n\ndef one_channel_list(response):\n channel_name = response['data'][0]['channel']['descriptor']\n channel_thumb = 'https://static.televebion.net/web/content_images/channel_images/thumbs/new/240/v4/{}.png'.format(\n channel_name)\n # get channel's links with different bit rates\n channel_videos = [item['link']for item in response['data'][0]['links']]\n\n links = []\n for v in channel_videos:\n if ('1500k.stream' in v):\n channel_video = v\n name1500 = channel_name + '(1500K)'\n links.append({'name': name1500, 'thumb': channel_thumb,\n 'video': channel_video, 'genre': 'IRIBTV'})\n elif ('1000k.stream' in v):\n channel_video = v\n name1000 = channel_name + '(1000K)'\n links.append({'name': name1000, 'thumb': channel_thumb,\n 'video': channel_video, 'genre': 'IRIBTV'})\n elif ('500k.stream' in v):\n channel_video = v\n name500 = channel_name + '(500K)'\n links.append({'name': name500, 'thumb': channel_thumb,\n 'video': channel_video, 'genre': 'IRIBTV'})\n\n return links\n\n\ndef create_telewebion_object():\n lst = []\n for c in static_channels_list():\n for link in one_channel_list(make_request(c)):\n try:\n lst.append(link)\n except (ValueError, KeyError):\n logger.exception('Failed to add %s channel!' 
% c)\n\n return lst\n\n\ndef telewebion():\n # if is_object_exists(TELEWEBION_PATH):\n # telewebion = load_existing_object(TELEWEBION_PATH)\n # try:\n # return telewebion\n # except (ValueError, KeyError):\n # logger.exception('Loaded TELEWEBION object does not work!')\n\n # Create new object\n telewebion = create_telewebion_object()\n # Save new model\n # save_object(telewebion, TELEWEBION_PATH)\n return telewebion\n\n\nprint(telewebion())"
}
] | 7 |
ludiusvox/Spacy-Gensim-Web-scraping | https://github.com/ludiusvox/Spacy-Gensim-Web-scraping | 3281d0a9d23ed6d80cddfa68752e9bd1420abd03 | 628662bd0ef2323c2af3049252313f3d4cdbee33 | 570a667be18280f000aef671e2ba602bbc7d26e4 | refs/heads/main | 2023-08-14T14:00:03.332024 | 2021-09-21T23:25:55 | 2021-09-21T23:25:55 | 409,002,322 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5183639526367188,
"alphanum_fraction": 0.5278494358062744,
"avg_line_length": 31.110551834106445,
"blob_id": "058ec986013f85de812c9e122e846eee13af6bf4",
"content_id": "8705649816ee63f039994143ee757617af2ea156",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13178,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 398,
"path": "/SchedulingSkeleton.py",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# SchedulingSkeleton.py\r\n# Copyright 2021 Dr. Collin F. Lynch\r\n#\r\n# This provides skeleton code for the scheduling problem\r\n# for Week 6 of the AI Academy course. It provide a basic\r\n# class structure for the problem and should be used as\r\n# a guide for implementation.\r\n\r\n\r\n# Imports\r\n# ==================================\r\n\r\nimport re\r\nimport networkx as nx\r\nimport matplotlib.pyplot as pyplot\r\nimport pandas as pd\r\n\r\n\r\n#\r\n# ==================================\r\n\r\n\r\nclass SchedulingProblem(object):\r\n \"\"\"\r\n This class wraps up the business of the scheduling problem\r\n for the sake of clarity. On load it will pull in a problem\r\n file and then handle the calculations. It has a built in \r\n method to actually print the directed graph for the user.\r\n \"\"\"\r\n\r\n # Initialization\r\n # ---------------------------------------\r\n def __init__(self, FileName):\r\n self.Graph = nx.DiGraph()\r\n self.V = set()\r\n \"\"\"\r\n Load in the specified file. 
And generate\r\n the relevant storage.\r\n\r\n Parameters\r\n ----------\r\n FileName : TYPE\r\n DESCRIPTION.\r\n\r\n Returns\r\n -------\r\n None.\r\n\r\n \"\"\"\r\n\r\n with open(\"Task2.txt\", 'r') as Input:\r\n self.Variables = []\r\n NextLine = Input.readline()[:-1]\r\n while (NextLine):\r\n # print(NextLine)\r\n Match = VAR_ROW3.match(NextLine)\r\n if Match:\r\n self.V = Match.groups()\r\n\r\n self.Graph.add_edge(self.V[2], self.V[0], weight=int(self.V[1]))\r\n self.Graph.add_node(self.V[0], weight=int(self.V[1]))\r\n\r\n # nx.set_edge_attributes(g, values=self.V[1], name='Dur')\r\n\r\n self.Variables.append(Match.groups())\r\n\r\n else:\r\n Match1 = VAR_ROW2.match(NextLine)\r\n if Match1:\r\n self.V = Match1.groups()\r\n\r\n self.Graph.add_edge(self.V[2], self.V[0], weight=int(self.V[1]))\r\n self.Graph.add_node(self.V[0], weight=int(self.V[1]))\r\n self.Variables.append(Match1.groups())\r\n else:\r\n Match2 = VAR_ROW1.match(NextLine)\r\n if Match2:\r\n self.V = Match2.groups()\r\n\r\n self.Graph.add_node(self.V[0], weight=int(self.V[1]))\r\n self.Variables.append(Match2.groups())\r\n NextLine = Input.readline()[:-1]\r\n\r\n # Generate storage.\r\n\r\n self.df = pd.DataFrame(self.Variables, columns=[\"Name\", \"Dur\", \"Parent\"])\r\n # Do the file loading.\r\n # SchedulingProblem.add_task(self,self.df['Name'],self.df['Dur'],self.df['Parent'])\r\n\r\n print(self.Graph.nodes(data=True))\r\n\r\n def update_early_properties(self):\r\n end_nodes = []\r\n for node in (n for n, d in self.Graph.out_degree if d == 0):\r\n end_nodes.append(node)\r\n pass\r\n\r\n def add_task(self, Name, Dur, ParentNames):\r\n \"\"\"\r\n Add in the specified task \r\n\r\n Parameters\r\n ----------\r\n Name : TYPE\r\n DESCRIPTION.\r\n Duration : TYPE\r\n DESCRIPTION.\r\n Parents : TYPE\r\n DESCRIPTION.\r\n\r\n Returns\r\n -------\r\n None.\r\n\r\n \"\"\"\r\n for names in Name:\r\n self.Graph.add_node(names)\r\n print(self.Graph.nodes())\r\n\r\n self.G = nx.from_pandas_edgelist(self.df, 
'Name', 'Dur', 'Parent')\r\n\r\n # Calculations\r\n # ---------------------------------------------\r\n\r\n def calc_es(self):\r\n \"\"\"\r\n Calculate the early start of each item.\r\n \r\n\r\n Returns\r\n -------\r\n None.\"\"\"\r\n\r\n myDict = {}\r\n # First set the ES of the start.\r\n # self.Graph.nodes[\"end\"][\"es\"] = 0\r\n\r\n # Then we deal with the subsequent items.\r\n # WorkingNodes = [self.Graph.successors(\"start\")]\r\n\r\n i = 0\r\n # Loop till the queue is done.\r\n for path in nx.all_simple_paths(self.Graph, source=\"start\", target=[\"end\"]):\r\n print(path)\r\n myDict[i] = path\r\n i = i + 1\r\n # print(myDict)\r\n\r\n subgraph0 = self.Graph.subgraph(myDict[0])\r\n subgraph1 = self.Graph.subgraph(myDict[1])\r\n subgraph2 = self.Graph.subgraph(myDict[2])\r\n subgraph3 = self.Graph.subgraph(myDict[3])\r\n G0 = self.Graph.subgraph(subgraph0.nodes())\r\n G1 = self.Graph.subgraph(subgraph1.nodes())\r\n G2 = self.Graph.subgraph(subgraph2.nodes())\r\n G3 = self.Graph.subgraph(subgraph3.nodes())\r\n # print(G0)\r\n # print(G0.edges(data=True))\r\n # print(G1)\r\n # print(G1.edges(data=True))\r\n # print(G2)\r\n # print(G2.edges(data=True))\r\n # print(G3)\r\n # print(G3.edges(data=True))\r\n\r\n len_path = dict(nx.all_pairs_dijkstra(G0, weight='weight'))\r\n\r\n nodes = list(G0.nodes())\r\n results = pd.DataFrame()\r\n\r\n starting_point = []\r\n for i in range(len(nodes)):\r\n results = results.append(pd.DataFrame(len_path[nodes[i]]).T.reset_index())\r\n starting_point = starting_point + [nodes[i]] * len(len_path[nodes[i]][1])\r\n\r\n paths_df = pd.DataFrame()\r\n paths_df['starting_point'] = starting_point\r\n\r\n results.columns = ['ending_point', 'weight', 'path']\r\n results = results.reset_index()\r\n del results['index']\r\n\r\n results = pd.concat((paths_df, results), axis=1)\r\n # print(results.loc[:])\r\n # -----------------------------------------\r\n len_path = dict(nx.all_pairs_dijkstra(G1, weight='weight'))\r\n\r\n nodes = 
list(G1.nodes())\r\n results = pd.DataFrame()\r\n\r\n starting_point = []\r\n for i in range(len(nodes)):\r\n results = results.append(pd.DataFrame(len_path[nodes[i]]).T.reset_index())\r\n starting_point = starting_point + [nodes[i]] * len(len_path[nodes[i]][1])\r\n\r\n paths_df = pd.DataFrame()\r\n paths_df['starting_point'] = starting_point\r\n\r\n results.columns = ['ending_point', 'weight', 'path']\r\n results = results.reset_index()\r\n del results['index']\r\n\r\n results = pd.concat((paths_df, results), axis=1)\r\n # print(results.loc[:])\r\n # ------------------------------------------\r\n len_path = dict(nx.all_pairs_dijkstra(G2, weight='weight'))\r\n\r\n nodes = list(G2.nodes())\r\n results = pd.DataFrame()\r\n\r\n starting_point = []\r\n for i in range(len(nodes)):\r\n results = results.append(pd.DataFrame(len_path[nodes[i]]).T.reset_index())\r\n starting_point = starting_point + [nodes[i]] * len(len_path[nodes[i]][1])\r\n\r\n paths_df = pd.DataFrame()\r\n paths_df['starting_point'] = starting_point\r\n\r\n results.columns = ['ending_point', 'weight', 'path']\r\n results = results.reset_index()\r\n del results['index']\r\n\r\n results = pd.concat((paths_df, results), axis=1)\r\n # print(results.loc[:])\r\n # ---------------------------------\r\n len_path = dict(nx.all_pairs_dijkstra(G3, weight='weight'))\r\n\r\n nodes = list(G3.nodes())\r\n results = pd.DataFrame()\r\n\r\n starting_point = []\r\n for i in range(len(nodes)):\r\n results = results.append(pd.DataFrame(len_path[nodes[i]]).T.reset_index())\r\n starting_point = starting_point + [nodes[i]] * len(len_path[nodes[i]][1])\r\n\r\n paths_df = pd.DataFrame()\r\n paths_df['starting_point'] = starting_point\r\n\r\n results.columns = ['ending_point', 'weight', 'path']\r\n results = results.reset_index()\r\n del results['index']\r\n\r\n results = pd.concat((paths_df, results), axis=1)\r\n # print(results.loc[:])\r\n # ---------------------------------\r\n return G0, G1, G2, G3\r\n\r\n def 
calc_node_es(self, NodeName):\r\n \"\"\"\r\n Calculate the ES for the node.\r\n\r\n Parameters\r\n ----------\r\n NodeName : TYPE\r\n DESCRIPTION.\r\n\r\n Returns\r\n -------\r\n None.\r\n\r\n \"\"\"\r\n\r\n # Make sure all the parents are set.\r\n # Find the slowest parent.\r\n # Set my value.\r\n self.Nodes = self.Graph.predecessors(\"end\")\r\n # print(self.Nodes)\r\n\r\n # Output\r\n # --------------------------------------------\r\n def drawgraph(self):\r\n \"\"\"\r\n Draw out the graph for the user.\r\n\r\n Returns\r\n -------\r\n None.\r\n\r\n \"\"\"\r\n nx.draw(self.Graph, with_labels=True, arrows=True)\r\n pyplot.show()\r\n nx.write_gexf(self.Graph, \"plot.gexf\")\r\n pyplot.show()\r\n\r\n\r\nVAR_ROW1 = re.compile(\"^(?P<Node>[a-z]+) (?P<Dur>[0-9]+) :\")\r\nVAR_ROW2 = re.compile(\"^(?P<Node>[a-z]+) (?P<Dur>[0-9]+) : (?P<P1>[a-z]+)\")\r\nVAR_ROW3 = re.compile(\"^(?P<Node>[a-z]+) (?P<Dur>[0-9]+) : (?P<P1>[a-z]+) (?P<P2>[a-z]+)\")\r\n\r\n\r\ndef update_early_properties(graph, node: \"str\"):\r\n \"\"\"\r\n Calculate early_start and early_finish.\r\n Start with an end node. This function will recurse back to the\r\n start node(s) and calculate the early_start and early_finish\r\n properties along the way.\r\n \"\"\"\r\n predecessors = list(graph.predecessors(node))\r\n\r\n if len(predecessors) == 0:\r\n # This node has no predecessors. 
By definition, it has an\r\n # early_start of zero and an early_finish of its duration.\r\n graph.nodes[node]['es'] = 0.0\r\n graph.nodes[node]['ef'] = graph.nodes[node]['weight']\r\n\r\n max_early_finish = 0.0\r\n for predecessor in predecessors:\r\n if graph[predecessor].get('ef', None) is None:\r\n update_early_properties(graph, predecessor) # NOTE THE RECURSIVE CALL HERE\r\n if graph.nodes[predecessor]['ef'] > max_early_finish:\r\n max_early_finish = graph.nodes[predecessor]['ef']\r\n\r\n # OK, we can now update this node's \"early\" properties\r\n graph.nodes[node]['es'] = max_early_finish\r\n\r\n graph.nodes[node]['ef'] = (\r\n max_early_finish + graph.nodes[node]['weight'])\r\n\r\n print(\"Early Start: \")\r\n print(max_early_finish)\r\n\r\n\r\ndef update_late_properties(graph, node: \"str\"):\r\n \"\"\"\r\n Calculate late_start and late_finish.\r\n Start with an end node. This function will recurse back to the\r\n start node(s) and calculate the late_start and late_finish\r\n properties along the way.\r\n \"\"\"\r\n successors = list(graph.successors(node))\r\n\r\n if len(successors) == 0:\r\n # This node has no predecessors. 
By definition, it has an\r\n # early_start of zero and an early_finish of its duration.\r\n graph.nodes[node]['ls'] = graph.nodes[node]['weight']\r\n graph.nodes[node]['lf'] = graph.nodes[node]['weight'] + graph.nodes[node]['weight']\r\n\r\n max_late_finish = 0.00\r\n for successor in successors:\r\n if graph[successor].get('lf', None) is None:\r\n update_late_properties(graph, successor) # NOTE THE RECURSIVE CALL HERE\r\n if graph.nodes[successor]['lf'] > max_late_finish:\r\n max_late_finish = graph.nodes[successor]['lf']\r\n\r\n # OK, we can now update this node's \"late\" properties\r\n graph.nodes[node]['ls'] = max_late_finish\r\n\r\n graph.nodes[node]['lf'] = (\r\n max_late_finish + graph.nodes[node]['weight'])\r\n\r\n print(\"late Start: \")\r\n print(max_late_finish)\r\ndef find_critical_path(graph, start_node: str, end_node: str) -> list[str]:\r\n \"\"\"\r\n Find the critical path from start_node to end_node.\r\n\r\n This method is recursive and performs a depth-first search for a path\r\n of nodes from start_node to end_node that all have a slack of zero (0).\r\n\r\n NOTE: There may be more than one such path. 
This only returns the first\r\n one found.\r\n \"\"\"\r\n def _find_critical_path(start_node: str, end_node: str) -> list[str]:\r\n graph.nodes[start_node]['slack'] = 0\r\n\r\n if start_node == end_node:\r\n return [start_node]\r\n\r\n if graph.nodes[start_node]['slack'] == 0:\r\n successors = graph.successors(start_node)\r\n for successor in successors:\r\n response = _find_critical_path(successor, end_node)\r\n if response is not None:\r\n return [start_node] + response\r\n\r\n return _find_critical_path(start_node, end_node)\r\n\r\nif __name__ == '__main__':\r\n G = SchedulingProblem(\"Task2.txt\")\r\n\r\n G0, G1, G2, G3 = G.calc_es()\r\n update_early_properties(G0, 'end')\r\n update_late_properties(G0, 'start')\r\n print(\"end fork1\")\r\n update_early_properties(G1, 'end')\r\n update_late_properties(G1, 'start')\r\n print(\"end fork2\")\r\n update_early_properties(G2, 'end')\r\n update_late_properties(G2, 'start')\r\n print(\"end fork3\")\r\n update_early_properties(G3, 'end')\r\n update_late_properties(G3, 'start')\r\n print(\"end fork4\")\r\n print(find_critical_path(G0,'start', 'end'))\r\n print(find_critical_path(G1, 'start', 'end'))\r\n print(find_critical_path(G2, 'start', 'end'))\r\n print(find_critical_path(G3, 'start', 'end'))\r\n G.drawgraph()\r\n"
},
{
"alpha_fraction": 0.695708692073822,
"alphanum_fraction": 0.708712637424469,
"avg_line_length": 26.481481552124023,
"blob_id": "e471d9a6cf98d586ebf5889e8bf608b9eb6d6a92",
"content_id": "8ce4b8485b3fb4af938c1e74a6b624c6ea564daa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1538,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 54,
"path": "/TFIDF Example.py",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Wed Sep 8 09:26:10 2021\r\n\r\n@author: Travis Martin\r\n\"\"\"\r\n\r\n#imports\r\nimport gensim.downloader as api\r\nfrom gensim.models import TfidfModel\r\nfrom gensim.corpora import Dictionary\r\n\r\n\r\n# I put in several print statements so that \r\n# you can see the behavior of the following items\r\n\r\n#print(api.info()) # this will print info about the different corpi you can load\r\n#print(api.info('text8')) #this wil print info about the text 8 \r\nData = api.load(\"text8\") # this line loads the corpi of text\r\n#print(Data)\r\n\r\n\r\n# this will create a dictionary based on the loaded data\r\n# it will contain the words in the Data as the value and \r\n# word ID as the key for the dictionary\r\nDct = Dictionary(Data)\r\n#print(Dct)\r\n#print(Dct[100])\r\n\r\n\r\n# this will convert the Data into a bag of words\r\n# where the corpus will contain the id for the word\r\n# and the number of times the word appears in the document\r\nCorpus = [Dct.doc2bow(line) for line in Data]\r\n#print(Corpus) #id and count\r\n\r\n\r\n# this will take the created corpus and create a TFIDF model\r\nModel = TfidfModel(Corpus)\r\n#print(Model)\r\n\r\n\r\n# once you have your corpus, you can also extract a \r\n# vector for the given corpus, the vector contains\r\n# each words ID followed by its TFIDF score\r\nvector = Model[Corpus[0]]\r\n#print(vector)\r\n\r\n\r\n# finally, you can print the words in the above corpus\r\n# followed by the tfidf score for that word\r\n# higher score means more unique/interesting for that document\r\nfor id, score in vector:\r\n print(Dct[id], \" = \", score)\r\n"
},
{
"alpha_fraction": 0.6981580257415771,
"alphanum_fraction": 0.7011289596557617,
"avg_line_length": 28.208696365356445,
"blob_id": "106818247526a26140270228d9b8ee518f72ec45",
"content_id": "bd6dffad0ec7725e0f3a69d95c720de9128aab7f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3366,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 115,
"path": "/POSTags.py",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# AIA POSExtract.py\n# @copyright: Collin Lynch\n#\n# This code provides a basic walkthrough of POS tagging methods\n# that are built into NLTK. These methods are to be used on\n# arbitrary text and support extraction of keywords such as\n# nouns and verbs which we can use to compose keywords for our\n# matrices.\n\n# Imports\n# -----------------------------------\nimport nltk\n\n\n\n# Working Code\n# -----------------------------------\n# First we will load a basic document from an existing nltk dataset.\n# In this case we will use Jane Austen's Emma as a basic text.\n#print(nltk.corpus.gutenberg.fileids())\nEmmaWords = nltk.corpus.gutenberg.words(fileids=\"austen-emma.txt\")\n#print(EmmaWords)\n\n\n\n# Having loaded that and taking advantage of the existing tokenizer\n# we will go ahead and apply POS tagging to the words. In this case\n# we are just using the built-in tokenizer that works with the NLTK.\n# more complex POS tagging can be done by other methods.\n#\n# NOTE: This process can be slow depending on the time.\nEmmaTags = nltk.pos_tag(EmmaWords)\n\n\n\n# Having done that we can then extract all of the nouns so we have a\n# simplified topic model. In this case we can iterate over the tagged\n# words with a simple loop to pull the NN (singular noun), NNS (plural\n# noun), NNP (proper noun singular), and NNPS (proper noun plural).\n# Other types (e.g. 
verbs) can be pulled by looking up the type or using\n# the first letter.\ndef getNouns(Tagged_Text):\n \"\"\"\n Iterate over the tagged text pulling the nouns.\n \"\"\"\n\n # Generate an empty set for storage.\n NounSet = set()\n\n # Iterate over the pairs collecting the items.\n for (Word, Tag) in Tagged_Text:\n\n print(\"Checking Word/Tag Pair: {} {}\".format(Word, Tag))\n \n if (Tag[0] == \"N\"):\n NounSet.add(Word)\n\n return(NounSet)\n\n\n\n# Extracting the nouns, and perhaps additional words like verbs, will\n# proide us with a very crude topic model that we can use for lookup\n# and comparison via tf-idf or some other document. We will return to\n# that in a later case.\nEmmaNouns = getNouns(EmmaTags)\n#print(EmmaNouns)\n\n\n# In the meantime look at ways to apply this to your existing page code.\n# Among other things we can feed these collected words into our document\n# vector code to produce a noun-only positioning for the document.\n\n\n\n# To do that we borrow from the prior code but now assume we are feeding\n# it a list version of the words. \nNounTokens = list(EmmaNouns)\n\n\nimport numpy\nimport scipy.spatial\nimport gensim\nimport gensim.downloader\n\n\n#print(gensim.downloader.info()['models'].keys())\nGloveModel = gensim.downloader.load('glove-twitter-50')\n\n\n# To make a document vector suitable for this task we first\n# convert it to a lower case sequence and then use that to\n# get the individual values. Then we sum the total up.\ndef makeDocVec(DocTokens):\n\n DocSum = 0\n for Token in DocTokens:\n try:\n WordVectors = GloveModel[Token.lower()]\n DocSum += numpy.array(WordVectors)\n except:\n print(Token, \" not in database\")\n \n return(DocSum)\n\n\n# Now if you do the above for two documents you can just compare\n# them using the basic cosine similarity function.\nDoc1Sum = makeDocVec(NounTokens)\nprint(Doc1Sum)\nDoc2Sum = makeDocVec(NounTokens)\nprint(Doc2Sum)\n\n\nprint(scipy.spatial.distance.cosine(Doc1Sum, Doc2Sum))\n \n\n\n"
},
{
"alpha_fraction": 0.7764971256256104,
"alphanum_fraction": 0.7815083861351013,
"avg_line_length": 36.650943756103516,
"blob_id": "2a6cda69e8992b436db7700f385a0aa61eb6c1c6",
"content_id": "db501f2449cc9114c6c763be10f80948e6320de3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3991,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 106,
"path": "/ScikitExample.py",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# AIA SciKitExample.py\n# @copyright: Collin Lynch\n#\n# The Scikit learn package is a basic machine learning package\n# that provides wrappers for standard ML tools. We will introduce\n# it briefly in this workshop and then return to the package in\n# the next workshop for analysis.\n#\n# Scikit-learn builds heavily on the numpy and scipy packages\n# particularly in its use of modules. To show how it works we\n# will draw on a basic ML example for function regression.\n#\n# The code below with comments is based on an example by\n# Jaques Grobler and is licensed under BSD 3 clause\n\n\n# For this code we will be using the matplotlib, numpy\n# and scikit learn libraries. \n\nimport sklearn, sklearn.datasets, sklearn.linear_model\nimport numpy\nimport matplotlib.pyplot as plt\n\n\n# Scikit-learn like the NLTK and some others comes with preexisting\n# datasets that we can use for execution. In this case we will\n# load a simple numeric dataset that represents the relationship\n# between 10 baseline variables and the chance of someone having\n# diabetes.\n#\n# This call returns two numpy arrays, one with the independent or\n# predictive variables (Independent) and one with the dependent or\n# output variable (DependentVar)\n# https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html\nIndependentVars, DependentVar = sklearn.datasets.load_diabetes(return_X_y=True)\nprint(IndependentVars)\nprint(DependentVar)\n\n\n# Here we will illustrate with a single feature. You are encouraged\n# to try each of the features one at a time to see which is most\n# predictive or turns out different results. 
What we are doing in\n# this code is taking the existing feature\n#\n# Try changing this value to explore different input variables.\nSingleIndependentVar = IndependentVars[:, numpy.newaxis, 2]\n#print(SingleIndependentVar)\n#print(len(SingleIndependentVar))\n\n\n# Since this is a supervised learning application we will separate\n# our data into training and testing code. This is done with\n# simple slices though we could also do it through a random sample.\n# This is a simple 80/20 split.\nIndependent_TrainingSet = SingleIndependentVar[20:]\nIndependent_TestingSet = SingleIndependentVar[:20]\n\n\n# We have to use the same split for our output variable to ensure\n# that they match.\nDependent_TrainingSet = DependentVar[20:]\nDependent_TestingSet = DependentVar[:20]\n\n\n# We now load and save the linear regression model for use.\nRegressionModel = sklearn.linear_model.LinearRegression()\n\n\n# Now train the weights for the model using our training sets.\nRegressionModel.fit(Independent_TrainingSet, Dependent_TrainingSet)\n\n\n# Having done that we can now use it to make predictions for our test cases.\nDependent_TestPredictions = RegressionModel.predict(Independent_TestingSet)\n\n\n# Now that we have done that we can print out information about the model\n# coefficients themselves, as well as the error and prediction determination.\n# The mean squared error indicates how far \"off\" the model is, i.s. 
how much\n# it guesses wrong in general while the coefficient of determination gives\n# a general success measure with 1 being \"perfect\".\nprint(\"Coefficients: \\n\", RegressionModel.coef_) #slope\n\nMeanSquaredErr = sklearn.metrics.mean_squared_error(\n Dependent_TestingSet, Dependent_TestPredictions)\n\nprint(\"Mean squared error: {}\".format(MeanSquaredErr)) #squared diff\n\nDeterminationCoeff = sklearn.metrics.r2_score(\n Dependent_TestingSet, Dependent_TestPredictions)\n\nprint(\"Coefficient of determination: {}\".format(DeterminationCoeff)) #1 is perfect prediction\n\n\n# Just for readability we can also plot the data on a scatterplot along\n# with a fitted line to give an idea of the potential values. The first\n# two lines generate the plot while the later three set tick markers and\n# show it.\nplt.scatter(Independent_TestingSet, Dependent_TestingSet, color='black')\nplt.plot(Independent_TestingSet, Dependent_TestPredictions,\n color='blue', linewidth=3)\n\nplt.xticks(())\nplt.yticks(())\n\nplt.show()\n"
},
{
"alpha_fraction": 0.6356502175331116,
"alphanum_fraction": 0.6423766613006592,
"avg_line_length": 30.149999618530273,
"blob_id": "965f1d09ba21573677e843e7d7156ed542cab9e8",
"content_id": "2c44abfc8df6bf1bb15f86309d8c90a0b214b589",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6244,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 200,
"path": "/spacy-gensim-web-scrape/DocQuery-Example.py",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# DocQuery-example.py\n# Collin Lynch and Travis Martin\n# 9/8/2021\n\n# The DocQuery code is taked with taking a query and a set of\n# saved documents and then returning the document that is closest\n# to the query using either TF/IDF scoring or the sum vector.\n# When called this code will load the docs into memory and deal\n# with the distance one at a time. \n\n# Imports\n# ---------------------------------------------\nimport spacy\nimport os\nimport scipy.spatial\nimport nltk\nfrom gensim.models import TfidfModel\nfrom gensim.corpora import Dictionary\n\n\n# Core Code.\n# ---------------------------------------------\n\n\"\"\"\ninput: document directory\n1. load the pacy vocab\n2. look in the document directory and load all files in it\n3. do not load the url file\n4. put all the loaded files into a list\noutput: all the doc names that need to be loaded \n\"\"\"\ndef load_docs(DocDir):\n BaseVocab = spacy.vocab.Vocab()\n Loaded_Docs = {}\n for file in os.listdir(Directory):\n if file != 'infile.txt':\n LoadedDoc = spacy.tokens.Doc(BaseVocab).from_disk(Directory + file)\n Loaded_Docs[file] = LoadedDoc\n \n return Loaded_Docs\n\n\n\n\"\"\"\ninput: spacy model, docs as a dict, query string, compfunction\n1. set the closest document as the first in the list\n2. decide to do a vector or tfidf comparison\n3. if doing a vec comparison\n a. sum the vectors for the words in the query\n b. set closest distance to be 1 - farthest two sets can be apart\n c. take in a single document and find the sum of all the word vectors in the doc\n d. check to see if computed distance is smaller than current distance\n e. set closest document and name\n4. if doing a tfidf comparison\n a. create a tfidf model for documents in the corpus\n b. set closest distance to -1 = nonsensical value\n c. take the query and sum up the words that appear in each document to geta score\n d. check to see if new score higher than the rest - highest score is better\n e. 
set closest doc and name\n5. return the closest document and name\noutput: closest document and closest name\n\"\"\"\ndef get_closest(SpacyModel, Docs, Query, CompFunc): \n # Store the first doc in the list as the closest.\n ClosestName = list(Docs.keys())[0]\n ClosestDoc = Docs[ClosestName]\n\n # Now we iterate over the remainder simply checking\n # their distance and updating if they are closer.\n if CompFunc == \"VEC\":\n query_vec = 0\n for word in Query:\n query_vec += word.vector\n \n ClosestDist = 1\n for key in Docs.keys(): \n tempdist = get_vec_dist(SpacyModel, Docs[key], query_vec)\n if tempdist < ClosestDist:\n ClosestDist = tempdist\n ClosestName = key\n ClosestDoc = Docs[key] \n \n elif CompFunc == \"TFIDF\":\n TFIDFModel, Dct, Corpus = compute_tfidf_value(Docs)\n \n ClosestDist = -1\n for n in range(len(Corpus)):\n tempdist = get_tfidf_dist(Query.text, TFIDFModel[Corpus[n]], Dct)\n if tempdist > ClosestDist:\n ClosestDist = tempdist\n ClosestName = list(Docs.keys())[n]\n ClosestDoc = Docs[ClosestName]\n \n # Now return the best as a pair.\n return ClosestName, ClosestDoc\n \n \n \n\"\"\"\ninput: all the documents in the corpus\n1. create an empty list\n2. go through all the documents in the set\n3. tokenize the document and add it to the list\n4. create a dictionary with a unique key for each word in the corpus\n5. cycle through all the text and count the frequency of each word\n6. create a tfidf model based on the text above\noutput: the model, dictionary, and corpus of text\n\"\"\"\ndef compute_tfidf_value(Docs):\n \n all_doc_words = []\n for key in Docs.keys():\n word_tokens = nltk.word_tokenize(Docs[key].text)\n all_doc_words.append(word_tokens)\n \n Dct = Dictionary(all_doc_words)\n Corpus = [Dct.doc2bow(line) for line in all_doc_words]\n Model = TfidfModel(Corpus)\n \n return Model, Dct, Corpus\n \n \n \n\n\n\"\"\"\ninput: query as a string, tfidf models individual corpus, dictionary\n1. set total score to 0 = lowest value possible\n2. 
go through the entire vaector passed in and look to see if the words in the query string are present\n3. if they are present add the score values\n4. return the total score\noutput: total score\n\"\"\"\ndef get_tfidf_dist(Query, Vector, Dct):\n total_score = 0\n for id, score in Vector:\n if Dct[id] in Query: \n #print(Dct[id], \" = \", score)\n total_score += score\n return total_score\n\n\n\n\"\"\"\ninput: spacy model, doc, query asa vector\n1. set total vec to 0 = nonsensical value\n2. iterate through the document and find the vector for each word in the doc\n3. sum all the vectors together\n4. compute cosine distance for the query and doc\noutput: the cosine distance computed\n\"\"\"\ndef get_vec_dist(SpacyModel, Doc, Query):\n total_vec = 0\n for word in Doc:\n total_vec += word.vector\n \n tempdist = scipy.spatial.distance.cosine(Query, total_vec)\n \n return tempdist\n\n\n\n\nif __name__ == \"__main__\":\n\n\n #initial declarations\n URL_File = 'infile.txt'\n \n #Type = \"VEC\"\n Type = \"TFIDF\"\n Directory = '/home/aaronl/PycharmProjects/pythonProject/start/'\n Query_String = 'The happiest place on Earth'\n\n \n #open the url file and store the webpage names for later\n webpage_names = []\n with open(URL_File, 'r') as InFile:\n website = InFile.readline()[:-1]\n while website:\n webpage_names.append(website)\n website = InFile.readline()[:-1]\n InFile.close()\n \n \n #load the spacy model\n Model = spacy.load(\"en_core_web_sm\")\n \n \n #load the documents created in the doc downloader program\n Loaded_Docs = load_docs(Directory)\n \n #create a spacy model for the query string\n Query_Model = Model(Query_String)\n \n #find the lcosest document and name for the query\n ClosestName, ClosestDoc = get_closest(Model, Loaded_Docs, Query_Model, Type)\n \n #print the results\n print(\"The closest website to the query string is: \" + str(ClosestName))\n\n \n \n"
},
{
"alpha_fraction": 0.6148732900619507,
"alphanum_fraction": 0.6231823563575745,
"avg_line_length": 24.510639190673828,
"blob_id": "fb90ff5aac0cc8bb7cbaaf60abc0c7391312ce9c",
"content_id": "fb70ce648dac855d229e99f08a215f4d3608ba49",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2407,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 94,
"path": "/spacy-gensim-web-scrape/DocDownloader-example.py",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# DocDownloader-example.py\n# Collin Lynch and Travis Martin\n# 9/8/2021\n\n# The DocDownloader package provides a simple iterative download\n# of files via the requests package and storage of the files as\n# processed docs via spacey. It presumes that we are using a\n# pretrained spacey package that will provide the basic loading\n# and processing tasks.\n#\n# Expand this code for your own version of the assignment.\n\n\n# Imports\n# ---------------------------------------------\n\nimport spacy\nimport requests\nfrom bs4 import BeautifulSoup\n\n\n\n# Core Code.\n# ---------------------------------------------\n\"\"\"\ninput: spacy model, url, storage dirctory, n = iterator variable\n1. grab the url and download the page\n2. convert the webpage to being text readable\n3. pass the text through the spacy model\n4. create the name of the outfile using the n passed in\n5. save the file to disk\noutput: none\n\"\"\"\ndef download_and_save(Model, URL, StorageDir, n): \n # First download the URL as a request.\n print(\"Downloading Doc: {}\".format(URL))\n Req = requests.get(URL)\n \n # Now convert it to a doc.\n print(\" converting...\")\n SoupText = BeautifulSoup(Req.text, features=\"lxml\")\n PageText = SoupText.get_text()\n FileDoc = Model(PageText)\n\n # Get a unique file/directory name.\n outfile = StorageDir + str(n) \n \n # And finally save it.\n print(\" saving...\")\n \n #save_file(FileDoc, FileDir)\n FileDoc.to_disk(outfile) \n \n print(\" done.\")\n return\n \n\n\n\"\"\"\ninput: spacy model, urlfile, starage directory\n1. open the url file containing the url's to be processes\n2. pass each url to the download and save function\n3. read the next url\n4. 
close the file and exit\noutput: none\n\"\"\"\ndef process_url_file(SpacyModel, UrlFile, StorageDir):\n \n n = 0\n with open(UrlFile, 'r') as InFile:\n website = InFile.readline()[:-1]\n while website:\n download_and_save(SpacyModel, website, StorageDir, n)\n \n website = InFile.readline()[:-1]\n n+=1\n \n InFile.close()\n return\n\n\n\n\nif __name__ == \"__main__\":\n\n #initial declarations\n URL_File = 'infile.txt'\n Directory = '/home/aaronl/PycharmProjects/pythonProject/start/save'\n \n #load the spacy model\n Model = spacy.load(\"en_core_web_sm\")\n \n #process the urls\n process_url_file(Model, URL_File, Directory)\n \n "
},
{
"alpha_fraction": 0.8273381590843201,
"alphanum_fraction": 0.8273381590843201,
"avg_line_length": 33.5,
"blob_id": "bb0648bfdbf1e179bf0a8c649124f1db0dff8f4e",
"content_id": "e18cd6cc201ceb6dad352f3d81473b36d056919d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 139,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 4,
"path": "/README.md",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# Spacy-Gensim-Web-scraping\nScrapes the websites and retrieves document similarity\n\nVery basic NLP requires a lot of installation process\n\n"
},
{
"alpha_fraction": 0.7519603967666626,
"alphanum_fraction": 0.7552620768547058,
"avg_line_length": 36,
"blob_id": "2eca5eb120e2611929be469489d1bc57af777454",
"content_id": "3154516f0e6f7062f9d51d4b732b3c31d42259bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4846,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 131,
"path": "/SpacyExample.py",
"repo_name": "ludiusvox/Spacy-Gensim-Web-scraping",
"src_encoding": "UTF-8",
"text": "# AIA SpaCy Use.py\n# @copyright: Collin Lynch\n#\n# SpaCY is a pipeline-based NLP module for python. In contrast to the NLTK\n# or Gensim which are tool oriented SpaCY is built around the idea of loading\n# in trained language models and then using them for all of your tasks. In\n# this way it makes SpaCy relatively easier to use inside of a system,\n# particularly if you are interested in off-the-shelf solutions. But it also\n# means that the details of what is happening and how it is happening are\n# obscured.\n#\n# It can also be overkill as the spacy pipeline approach does a *lot* of\n# processing at one time which may be much more than you need if you are\n# only interested in tokenization, vectors, etc. All of those items can\n# be built into your own model for use but they will vary.\n\n\n# Installation of Spacy requires both the module and downloading of the pretrained\n# items which must be built separately and then loaded into the system. Spacy\n# has many different language models built in and can be used for a range of tasks\n# including tokenizing POS etc. as we have shown.\n\n# To install spacy use anaconda or other tasks to load it;\n# pip install --user spacy\n\n# Then to install the libraries you have to download it separately. To do that\n# go to the anaconda prompt and make the following call. \n#\n# python -m spacy download en_core_web_sm\n\n# If you get an error about needing to call pip directly you can do\n# the following at the commandline:\n#\n# pip install --user https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.1.0/en_core_web_sm-3.1.0-py3-none-any.whl\n#\n# Optionally you can try to download it inside of python using the\n# following however that too is error prone.\n#\n# spacy.cli.download(\"en_core_web_sm\")\n\n\n# Once inside the system you can go ahead and import Spacy then load\n# the pretrained language model. \nimport spacy\n\n# Having installed the model we now load it as an object. 
In this case\n# the type of the model is spacy.lang.en.English which is a subclass of\n# the general spacy language model. You can learn more about it as always\n# via the help function. \nModel = spacy.load(\"en_core_web_sm\")\n\n\n# Here we will illustrate the model on some basic text from the\n# existing NLTK corpus. In this case we get the raw text and\n# then we will process that.\nimport nltk\nWhitmanRaw = nltk.corpus.gutenberg.raw(fileids=\"whitman-leaves.txt\")\nprint(WhitmanRaw)\n\n# Once we generate the model and load some raw text we will use the\n# model to parse our text into a doc object. This object is the\n# main item around which spacy is built. The doc object does basic\n# parsing using the pretrained model and gives us access to different\n# pieces of output. To generate a doc we simply call the Model as\n# a function as it is a generator.\n#\n# [This can be quite slow.]\n#\n# The resulting item's type will be: spacy.tokens.doc.Doc which\n# encapsulates a lot of what we want to achieve. \nWhitmanDoc = Model(WhitmanRaw)\ntype(WhitmanDoc)\n\n# Once the Doc is created and the parsing done you can carry out some\n# basic extraction tasks using the standard interface. Under the hood\n# the doc is actually represented by vector/tensor objects but it also\n# holds all of the tokens and other items so we can extract tokens or\n# subsequences from it using standard Python notation:\n\nWord = WhitmanDoc[1023]\nprint(Word)\n\nSubset = WhitmanDoc[23:56]\nprint(Subset)\n\n\n# The items returned are not just simple strings, they are in fact\n# spacy.tokens.token.Token objects which store information about\n# them including the POS tags, vector position etc. \ntype(Word)\n\nprint(Word.text)\n\n# Here is the part of speech. 
Note the version without the _ is\n# an integer id of the unique POS in the model while the _ version\n# is the human-readable string.\nprint(Word.pos)\nprint(Word.pos_)\n\n# Print whether this is in the predefined list of stopwords.\nprint(Word.is_stop)\n\n# And print out the vector embedding. \nif Word.has_vector:\n print(Word.vector)\n\n\n\n# Embedded in the doc parse is a set of the sentence dependencies\n# which we can use to get at some of the grammar information. We\n# can show what this looks like on a smaller example using the\n# displacy library which is built into spacy.\n\nfrom spacy import displacy\n\nDepDoc = Model(\"This is difficult to write but, I ate your doughnut.\")\n\n\n# Once run this will open a local web server that you can use to view it.\n# alternative display methods can also be played with.\ndisplacy.serve(DepDoc, style='dep')\n\n\n# If that is not viewable you can also save the svg file which is an XML\n# representation for vector graphics that is viewable by browsers. So once\n# you call this code to save the file you will need to open it in a browser.\n\nFile = displacy.render(DepDoc, style='dep')\nOut = open(\"ImageSave.svg\", \"w\", encoding=\"utf-8\")\nOut.write(File)\nOut.close()"
}
] | 8 |
daisuke-motoki/polarity_discriminator | https://github.com/daisuke-motoki/polarity_discriminator | a518c1c2554e7c217a685f4e4834191fedca5c8c | af73d745a00437eadf07b2cc35b2f844a3d61183 | 207c63df42d36863d0000c5806749213bf916c09 | refs/heads/master | 2021-01-01T19:04:08.702569 | 2017-07-28T14:33:50 | 2017-07-28T14:33:50 | 98,497,595 | 0 | 0 | null | 2017-07-27T05:34:00 | 2017-07-27T05:34:00 | 2017-07-28T14:32:07 | null | [
{
"alpha_fraction": 0.617241382598877,
"alphanum_fraction": 0.6206896305084229,
"avg_line_length": 17.125,
"blob_id": "f6c1934922d3213f8df11bdf92ec6d19b365a783",
"content_id": "bc3e2470a366e11c4e74f9006e4a64871bfb9b26",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 308,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 16,
"path": "/polarity_discriminator/extractor.py",
"repo_name": "daisuke-motoki/polarity_discriminator",
"src_encoding": "UTF-8",
"text": "# -*- coding:utf-8 -*-\nu'''\n単語抽出モジュール\n'''\nfrom abc import abstractmethod\n\n\nclass WordExtractor(object):\n \"\"\"\n \"\"\"\n @abstractmethod\n def __call__(self, text):\n pass\n\n def extract_words(self, text):\n raise NotImplementedError('extract_words must be implemented')\n"
},
{
"alpha_fraction": 0.5423029065132141,
"alphanum_fraction": 0.5546842813491821,
"avg_line_length": 37.768001556396484,
"blob_id": "e14e11be97f0c11f4883b64c7eac26e2db69ae7f",
"content_id": "b3957c6ad6fc6b42d5c453506d5227aa0fe85528",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4846,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 125,
"path": "/scripts/train_model.py",
"repo_name": "daisuke-motoki/polarity_discriminator",
"src_encoding": "UTF-8",
"text": "import pickle\nimport numpy as np\nimport pandas as pd\nfrom sklearn.externals import joblib\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom polarity_discriminator.discriminator import PolarityDiscriminator\nfrom polarity_discriminator.jp_mecab import JapaneseMecabWordExtractor\n\n\ndef preprocess_inputs(texts, vectorizer, null_id, unknown_id, max_length=None):\n print(\"Preprocessing inputs.\")\n sequences = list()\n masks = list()\n for text in texts:\n words = vectorizer.tokenizer(text)\n maxl = max_length if max_length else len(words)\n ids = np.ones(maxl, dtype=\"int\")*null_id\n offset = 0 if len(words) > maxl else maxl - len(words)\n mask = np.zeros(maxl, dtype=\"int\")\n for i, word in enumerate(words):\n if i >= maxl:\n break\n ids[i+offset] = vectorizer.vocabulary_.get(word, unknown_id)\n mask[i+offset] = 1\n sequences.append(ids)\n masks.append(mask)\n return sequences, masks\n\n\ndef preprocess_gt(Ys, masks):\n print(\"Preprocessing ground truth.\")\n gts = list()\n for Y, mask in zip(Ys, masks):\n gt = mask*Y/mask.sum()\n gts.append(gt)\n\n return gts\n\n\ndef get_vectorizer(texts, null_set=(0, \"\"), unknown_set=(1, \"###\")):\n print(\"Making vectorizer.\")\n tf_vectorizer = CountVectorizer(max_df=1.0,\n min_df=10,\n max_features=10000,\n stop_words=[null_set[1], unknown_set[1]])\n tf_vectorizer.tokenizer = JapaneseMecabWordExtractor(split_mode=\"unigram\",\n use_all=True)\n tf_vectorizer.fit(texts)\n max_id = max(tf_vectorizer.vocabulary_.values())\n prev_char = tf_vectorizer.get_feature_names()[null_set[0]]\n tf_vectorizer.vocabulary_[null_set[1]] = null_set[0]\n tf_vectorizer.vocabulary_[prev_char] = max_id + 1\n prev_char = tf_vectorizer.get_feature_names()[unknown_set[0]]\n tf_vectorizer.vocabulary_[unknown_set[1]] = unknown_set[0]\n tf_vectorizer.vocabulary_[prev_char] = max_id + 2\n return tf_vectorizer\n\n\nif __name__ == \"__main__\":\n # load inputs\n headers = [\"resource\", \"rating\", \"content\"]\n filename = 
\"../data/review_data_jalan.csv\"\n vectorizer_file = \"vectorizer_jalan.pkl\"\n # vectorizer_file = None\n\n df = pd.read_csv(filename, names=headers)\n # df = df[:10000]\n indexes = np.arange(len(df))\n np.random.seed(0)\n np.random.shuffle(indexes)\n train_last = int(len(df)*0.8)\n train_indexes = indexes[:train_last]\n val_indexes = indexes[train_last:]\n\n # vectorizer\n if vectorizer_file is None:\n vectorizer = get_vectorizer(df.content.tolist())\n tokenizer = vectorizer.tokenizer\n vectorizer.tokenizer = None\n joblib.dump(vectorizer, \"vectorizer.pkl\")\n vectorizer.tokenizer = tokenizer\n else:\n vectorizer = joblib.load(vectorizer_file)\n vectorizer.tokenizer = JapaneseMecabWordExtractor(split_mode=\"unigram\",\n use_all=True)\n max_id = max(vectorizer.vocabulary_.values()) + 1\n null_id = vectorizer.vocabulary_[\"\"]\n unknown_id = vectorizer.vocabulary_[\"###\"]\n train_X, train_mask = preprocess_inputs(df.content[train_indexes].tolist(),\n vectorizer,\n null_id=null_id,\n unknown_id=unknown_id,\n max_length=100)\n val_X, val_mask = preprocess_inputs(df.content[val_indexes].tolist(),\n vectorizer,\n null_id=null_id,\n unknown_id=unknown_id,\n max_length=100)\n train_Y = preprocess_gt(df.rating[train_indexes].tolist(),\n train_mask)\n val_Y = preprocess_gt(df.rating[val_indexes].tolist(),\n val_mask)\n\n # create network\n network_architecture = dict(\n max_sequence_len=100,\n # max_sequence_len=None,\n n_word=max_id,\n word_dim=300,\n n_lstm_unit1=100,\n rate_lstm_drop=0.2,\n )\n discriminator = PolarityDiscriminator()\n discriminator.build(network_architecture)\n\n checkpoints = \"./checkpoints/weights.{epoch:02d}-{val_loss:.4f}.hdf5\"\n discriminator.train(np.array(train_X),\n np.array(train_Y),\n np.array(val_X),\n np.array(val_Y),\n epochs=10,\n batch_size=32,\n learning_rate=1e-3,\n checkpoints=checkpoints,\n shuffle=True)\n"
},
{
"alpha_fraction": 0.8275862336158752,
"alphanum_fraction": 0.8275862336158752,
"avg_line_length": 13.5,
"blob_id": "520feb9936cae3ac735a4830961d833c15aa1634",
"content_id": "de01a8ef15db505dd1d3cee9957a24c02b51ec79",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 29,
"license_type": "permissive",
"max_line_length": 24,
"num_lines": 2,
"path": "/README.md",
"repo_name": "daisuke-motoki/polarity_discriminator",
"src_encoding": "UTF-8",
"text": "# polarity_discriminator\nWIP\n"
},
{
"alpha_fraction": 0.5784832239151001,
"alphanum_fraction": 0.5873016119003296,
"avg_line_length": 26,
"blob_id": "cc92a2f3f4b76588dd1f120a650781d76257590a",
"content_id": "8ed466f5865aece6505c1b175b03e4d76b6908f1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 567,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 21,
"path": "/polarity_discriminator/losses.py",
"repo_name": "daisuke-motoki/polarity_discriminator",
"src_encoding": "UTF-8",
"text": "import tensorflow as tf\nfrom keras.losses import categorical_crossentropy\nfrom keras import backend as K\n\n\nclass SumPolarityLoss:\n \"\"\"\n \"\"\"\n def __init__(self):\n pass\n\n def compute_loss(self, y_true, y_pred):\n \"\"\" compute loss\n \"\"\"\n maxs = tf.expand_dims(tf.reduce_max(y_true, axis=-1), 1)\n y_pred_masked = y_true / maxs * y_pred\n pred_sum = tf.reduce_sum(y_pred_masked, axis=-1)\n true_sum = tf.reduce_sum(y_true, axis=-1)\n loss = K.mean(K.square(pred_sum - true_sum), axis=-1)\n\n return loss\n"
},
{
"alpha_fraction": 0.48515623807907104,
"alphanum_fraction": 0.49140626192092896,
"avg_line_length": 27.44444465637207,
"blob_id": "a7fc878b129c75b54563bcf94f466f7654664894",
"content_id": "51ea7395799e907b95c509598635f7d0c4a7b104",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2560,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 90,
"path": "/polarity_discriminator/discriminator.py",
"repo_name": "daisuke-motoki/polarity_discriminator",
"src_encoding": "UTF-8",
"text": "import logging\nimport keras\nfrom keras.models import model_from_json\nfrom polarity_discriminator.models import network_model1\nfrom polarity_discriminator.losses import SumPolarityLoss\n\nlogger = logging.getLogger(__name__)\n\n\nclass PolarityDiscriminator:\n \"\"\"\n \"\"\"\n def __init__(self):\n \"\"\"\n \"\"\"\n self.model = None\n\n def build(self, architecture):\n \"\"\"\n \"\"\"\n # create network\n self.model = network_model1(architecture)\n\n def save_model(self, filename):\n \"\"\"\n \"\"\"\n with open(filename + \"_layer.json\", \"w\") as file_:\n file_.write(self.model.to_json(**{\"indent\": 4}))\n self.model.save_weights(filename + \"_weights.hdf5\")\n\n def load_model(self, filename):\n \"\"\"\n \"\"\"\n with open(filename + \"_layer.json\", \"r\") as file_:\n self.model = model_from_json(file_.read())\n self.model.load_weights(filename + \"_weights.hdf5\")\n\n def train(self,\n input_data,\n Y,\n validation_data,\n validation_Y,\n epochs=10,\n batch_size=50,\n learning_rate=1e-3,\n checkpoints=None,\n shuffle=True):\n \"\"\"\n \"\"\"\n # loss & optimizer\n loss = SumPolarityLoss()\n optim = keras.optimizers.Adam(lr=learning_rate)\n self.model.compile(\n optimizer=optim,\n loss=loss.compute_loss,\n # metric=[\"acc\"]\n )\n # call backs\n callbacks = list()\n if checkpoints:\n callbacks.append(\n keras.callbacks.ModelCheckpoint(\n checkpoints,\n verbose=1,\n monitor=\"val_loss\",\n save_best_only=True,\n save_weights_only=True\n )\n )\n\n def schedule(epoch, decay=0.9):\n return learning_rate * decay**(epoch)\n callbacks.append(keras.callbacks.LearningRateScheduler(schedule))\n\n # fit\n self.model.fit(input_data,\n Y,\n batch_size=batch_size,\n epochs=epochs,\n callbacks=callbacks,\n validation_data=[validation_data, validation_Y],\n shuffle=shuffle)\n\n def predict(self,\n input_data,\n batch_size=32):\n \"\"\"\n \"\"\"\n return self.model.predict(input_data,\n batch_size=batch_size)\n"
},
{
"alpha_fraction": 0.6006070971488953,
"alphanum_fraction": 0.6097137928009033,
"avg_line_length": 30.589040756225586,
"blob_id": "21f6f3f7032f05082d9f0d78649ea89bd2f454c1",
"content_id": "b7b644dfb79dec0eba457860647d35d1498c016e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2322,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 73,
"path": "/polarity_discriminator/models.py",
"repo_name": "daisuke-motoki/polarity_discriminator",
"src_encoding": "UTF-8",
"text": "import logging\nimport numpy as np\nfrom keras.models import Model\nfrom keras.layers import Flatten, LSTM, Dense, Input, Reshape\nfrom keras.layers import BatchNormalization, Activation, Dropout, Embedding\nfrom keras.layers.wrappers import TimeDistributed, Bidirectional\nfrom keras.layers.merge import concatenate\nfrom keras import regularizers\nfrom polarity_discriminator.layers import L2Normalization\n\nlogger = logging.getLogger(__name__)\n\n\ndef network_model1(network_architecture):\n \"\"\" Model1\n Embedding -> LSMT -> FC\n Args:\n network_architecture: dict: network parameters\n max_sequence_len: max length of sentence\n n_word: number of words in dictionary\n word_dim: dimention of word embedding\n n_lstm_unit1: LSTMのユニット数\n rate_lstm_drop: LSTMのdropout率\n \"\"\"\n # parameters\n max_sequence_len = network_architecture[\"max_sequence_len\"]\n n_word = network_architecture[\"n_word\"]\n word_dim = network_architecture[\"word_dim\"]\n n_lstm_unit1 = network_architecture[\"n_lstm_unit1\"]\n rate_lstm_drop = network_architecture[\"rate_lstm_drop\"]\n\n # input\n inputs = Input(shape=(max_sequence_len,),\n name=\"input_1\")\n\n # embedding layer\n embed = Embedding(n_word,\n word_dim,\n input_length=max_sequence_len,\n name=\"embed_{}_{}\".format(n_word, word_dim)\n )(inputs)\n # LSTM\n net = Bidirectional(\n LSTM(n_lstm_unit1,\n dropout=rate_lstm_drop,\n recurrent_dropout=rate_lstm_drop,\n return_sequences=True),\n name=\"bi-lstm_1\"\n )(embed)\n\n net = Bidirectional(\n LSTM(n_lstm_unit1,\n dropout=rate_lstm_drop,\n recurrent_dropout=rate_lstm_drop,\n return_sequences=True),\n name=\"bi-lstm_2\"\n )(net)\n\n prediction = TimeDistributed(\n Dense(1,\n # activity_regularizer=regularizers.l1(0.1),\n activation=None),\n name=\"time-dist_1\"\n )(net)\n\n prediction = Flatten()(prediction)\n # prediction\n # prediction = L2Normalization(10, 2,\n # name=\"time-dist_1-norm\")(net)\n model = Model(inputs, prediction)\n model.summary()\n\n return model\n"
}
] | 6 |
NegarMirgati/Ping | https://github.com/NegarMirgati/Ping | 64a796cf2b7e55482bfea04a6f153086c35379d0 | 2856f8f598f509010f2fa00de366acb76769820d | d8fa9c3b07789039296f711b3aae09a1f17b98b1 | refs/heads/master | 2023-09-03T17:57:33.913146 | 2021-10-26T02:26:46 | 2021-10-26T02:26:46 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6156583428382874,
"alphanum_fraction": 0.7366548180580139,
"avg_line_length": 19.071428298950195,
"blob_id": "b0a53b088d759685f7b8bd1a49d20b50a534ea2c",
"content_id": "065d40dcca100bf62cc735dcb483c9d3bdb38a19",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 281,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 14,
"path": "/README.md",
"repo_name": "NegarMirgati/Ping",
"src_encoding": "UTF-8",
"text": "# A simple version of Arcade Pong\n\n<img src=\"https://user-images.githubusercontent.com/20800745/138798284-47cbe161-b3f3-412c-a835-eab4fcc4ac22.png\" alt=\"pong\">\n\n## Execution\n\nInstall the requirements:\n```\npip install -r requirements.txt\n```\nRun the game by:\n```\npython pong.py\n```\n"
},
{
"alpha_fraction": 0.5364583134651184,
"alphanum_fraction": 0.5523374080657959,
"avg_line_length": 30.1146240234375,
"blob_id": "7d7c7264b4b0a57f25852b5bedaca73ca8718275",
"content_id": "36d9ee826cdf62c36f27b73a6b0ae00c5dba87e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7872,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 253,
"path": "/pong.py",
"repo_name": "NegarMirgati/Ping",
"src_encoding": "UTF-8",
"text": "import pygame\nfrom pygame.locals import *\nimport operator\n\n\ndef main():\n # initialize all pygame modules (some need initialization)\n pygame.init()\n pygame.font.init() # you have to call this at the start,\n # if you want to use this module.\n pygame.display.set_mode((500, 400))\n pygame.display.set_caption(\"Pong\")\n w_surface = pygame.display.get_surface()\n game = Game(w_surface)\n game.play()\n pygame.quit()\n\n\nclass Game:\n # An object in this class represents a complete game.\n\n def __init__(self, surface):\n self.surface = surface\n self.bg_color = pygame.Color(\"black\")\n self.fg_color = pygame.Color(\"white\")\n\n self.FPS = 60\n\n self.game_Clock = pygame.time.Clock()\n self.close_clicked = False\n self.continue_game = True\n self.scores = [0, 0]\n self.max_score = 11\n self.paddles_velocity = 10\n paddles_width = 10\n paddles_height = 50\n paddles_top = 180\n p_x_offset = 100\n\n self.small_ball = Ball(self.fg_color, 4, [50, 50], [4, 1], self.surface)\n self.paddle_1 = Paddle(\n paddle_color=\"yellow\",\n surface=self.surface,\n paddle_width=paddles_width,\n paddle_height=paddles_height,\n paddle_top=paddles_top,\n paddle_left=self.surface.get_width() - p_x_offset - paddles_width,\n velocity=0,\n )\n self.paddle_2 = Paddle(\n paddle_color=\"red\",\n surface=self.surface,\n paddle_width=paddles_width,\n paddle_height=paddles_height,\n paddle_top=paddles_top,\n paddle_left=p_x_offset,\n velocity=0,\n )\n self.max_frames = 150\n self.frame_counter = 0\n\n def play(self):\n\n while not self.close_clicked:\n self.handle_events()\n # self.paddle_2.move()\n # self.paddle_1.move()\n self.draw()\n if self.continue_game:\n self.update()\n self.decide_continue()\n self.game_Clock.tick(self.FPS) # run at most with FPS Frames Per Second\n\n def handle_events(self):\n events = pygame.event.get()\n for event in events:\n if event.type == pygame.QUIT:\n self.close_clicked = True\n if event.type == pygame.KEYDOWN:\n self.handle_key_down(event.key)\n 
if event.type == pygame.KEYUP:\n self.handle_key_up(event.key)\n\n def handle_key_down(self, key):\n if key == K_UP:\n self.paddle_1.start(self.paddles_velocity)\n if key == K_q:\n self.paddle_2.start(self.paddles_velocity)\n if key == K_DOWN:\n self.paddle_1.start(-1 * self.paddles_velocity)\n if key == K_a:\n self.paddle_2.start(-1 * self.paddles_velocity)\n\n def handle_key_up(self, key):\n if key == K_q and self.paddle_2.moving_up():\n self.paddle_2.stop()\n if key == K_a and self.paddle_2.moving_down():\n self.paddle_2.stop()\n\n if key == K_UP and self.paddle_1.moving_up():\n self.paddle_1.stop()\n if key == K_DOWN and self.paddle_1.moving_down():\n self.paddle_1.stop()\n\n def draw(self):\n\n self.surface.fill(self.bg_color) # clear the display surface first\n self.draw_score()\n self.small_ball.draw()\n self.paddle_1.draw()\n self.paddle_2.draw()\n\n pygame.display.update() # make the updated surface appear on the display\n\n def draw_score(self):\n text_font = pygame.font.SysFont(\"\", 50)\n for index in range(2):\n text_string = str(self.scores[index])\n text_image = text_font.render(\n text_string, True, self.fg_color, self.bg_color\n )\n if index == 1:\n location = (0, 0)\n else:\n location = (self.surface.get_width() - text_image.get_width(), 0)\n self.surface.blit(text_image, location)\n\n def update(self):\n self.paddle_2.move()\n self.paddle_1.move()\n edge = self.small_ball.move(self.paddle_1, self.paddle_2)\n if edge == \"left\":\n self.scores[0] += 1\n elif edge == \"right\":\n self.scores[1] += 1\n self.frame_counter = self.frame_counter + 1\n\n def decide_continue(self):\n\n if self.max_score in self.scores:\n self.continue_game = False\n\n\nclass Ball:\n # An object in this class represents a ball that moves\n\n def __init__(self, ball_color, ball_radius, ball_center, ball_velocity, surface):\n\n self.color = ball_color\n self.radius = ball_radius\n self.center = ball_center\n self.velocity = ball_velocity\n self.surface = surface\n\n def 
move(self, paddle_1, paddle_2):\n new_coordinates = [0, 0]\n new_coordinates[0] = self.center[0] + self.velocity[0]\n new_coordinates[1] = self.center[1] + self.velocity[1]\n surface_height = self.surface.get_height()\n surface_width = self.surface.get_width()\n\n if paddle_1.in_paddle(self.center) and self.velocity[0] > 0:\n print(\"COLISION wITH PADDLE 1!!!\")\n self.center = new_coordinates\n self.bounce(\"x\")\n return \"\"\n elif paddle_2.in_paddle(self.center) and self.velocity[0] < 0:\n print(\"Collision with PADDLE 2!!!!!\")\n self.center = new_coordinates\n self.bounce(\"x\")\n return \"\"\n if (\n new_coordinates[1] < self.radius\n or new_coordinates[1] + self.radius >= surface_height\n ):\n # collision with bottom or top\n self.center[0] = new_coordinates[0]\n self.center[1] = sorted([0, new_coordinates[1], surface_height])[1]\n self.bounce(\"y\")\n return \"\"\n if new_coordinates[0] <= self.radius:\n self.center[1] = new_coordinates[1]\n self.center[0] = sorted([0, new_coordinates[0], surface_width])[1]\n self.bounce(\"x\")\n return \"left\"\n if new_coordinates[0] + self.radius >= surface_width:\n self.center[1] = new_coordinates[1]\n self.center[0] = sorted([0, new_coordinates[0], surface_width])[1]\n self.bounce(\"x\")\n return \"right\"\n\n self.center = new_coordinates\n\n return \"\"\n\n def bounce(self, axis):\n if axis == \"x\":\n self.velocity[0] *= -1\n else:\n self.velocity[1] *= -1\n\n def draw(self):\n pygame.draw.circle(self.surface, self.color, self.center, self.radius)\n\n\nclass Paddle:\n def __init__(\n self,\n paddle_color,\n paddle_left,\n paddle_top,\n paddle_width,\n paddle_height,\n surface,\n velocity,\n ):\n self.color = pygame.Color(paddle_color)\n self.surface = surface\n self.velocity = velocity\n self.width = paddle_width\n self.height = paddle_height\n self.rect = pygame.Rect(paddle_left, paddle_top, self.width, self.height)\n\n def draw(self):\n pygame.draw.rect(self.surface, self.color, self.rect)\n\n def move(self):\n 
if self.velocity > 0:\n self.rect.top = max(self.rect.top - self.velocity, 0)\n elif self.velocity < 0:\n self.rect.top = min(\n self.rect.top - self.velocity, self.surface.get_height() - self.height\n )\n\n def stop(self):\n self.velocity = 0\n\n def start(self, velocity):\n self.velocity = velocity\n\n def in_paddle(self, coordinates):\n if self.rect.collidepoint(coordinates[0], coordinates[1]):\n return True\n return False\n\n def moving_up(self):\n return self.velocity > 0\n\n def moving_down(self):\n return self.velocity < 0\n\n\nmain()\n"
}
] | 2 |
sdahal1/djangoFebruary | https://github.com/sdahal1/djangoFebruary | 46e251a231b0bd6813e23e4e4e2583fca712e357 | d0e437569576779dcf802126263914e4ac4f8ac6 | 3b839801feded297711d482bd7786daa2a1344f2 | refs/heads/main | 2023-03-11T05:48:20.804910 | 2021-03-01T15:39:58 | 2021-03-01T15:39:58 | 339,475,798 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6350574493408203,
"alphanum_fraction": 0.6350574493408203,
"avg_line_length": 25.846153259277344,
"blob_id": "e267108177841110493061a4f84ae3d29850d028",
"content_id": "2af070fccc4072432b718577c9db740d8c5eaa8f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 348,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 13,
"path": "/djangoIntroDemo/introApp/urls.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.urls import path \nfrom . import views\n\n\nurlpatterns = [\n # @app.route(\"/\")\n path('', views.home),\n path(\"teams\", views.showTeams),\n path(\"teams/new\", views.new),\n path(\"teams/<teamname>\", views.showSpecificTeam),\n # path(\"allfood\", views.showAllFoodItems),\n # path(\"team/<teamname>\", views.showSpecificTeam) \n]"
},
{
"alpha_fraction": 0.708695650100708,
"alphanum_fraction": 0.7152174115180969,
"avg_line_length": 37,
"blob_id": "4125bab0a17a8ddd3c35e01ec6fb373787efe95f",
"content_id": "2c2facbe2108a2bd66066640ee3eee6664460b59",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 460,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 12,
"path": "/tvshowsOrmIntro/tvApp/models.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\nclass TvShow(models.Model):\n #field names go here\n title = models.CharField(max_length = 255)\n duration = models.IntegerField(null = True)\n description = models.TextField()\n release_date = models.DateField()\n rating = models.IntegerField(null=True)\n created_at = models.DateTimeField(auto_now_add=True, null=True)\n updated_at = models.DateTimeField(auto_now=True, null=True)\n\n\n\n\n"
},
{
"alpha_fraction": 0.6910919547080994,
"alphanum_fraction": 0.6932471394538879,
"avg_line_length": 32.14285659790039,
"blob_id": "7821e7dad83ed3ce724aa2c29773cb6d1337c93d",
"content_id": "fa58a9ab01398bf773ff8ac3c9021356a7ced730",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1392,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 42,
"path": "/manyToMany/manyToManyApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, HttpResponse, redirect\nfrom .models import *\n\n\n# Create your views here.\ndef index(request):\n context = {\n 'allArtists': Artist.objects.all()\n }\n return render(request, \"index.html\", context)\n\ndef createArtist(request):\n print(request.POST)\n Artist.objects.create(firstName = request.POST['fname'], lastName=request.POST['lname'], description = request.POST['desc'])\n\n return redirect(\"/\")\n\ndef showArtistInfo(request, artistid):\n context = {\n 'oneArtist': Artist.objects.get(id = artistid),\n 'allUsers': User.objects.all()\n\n }\n return render(request, \"artistinfo.html\", context)\n\n\ndef addToFanBase(request, artistid):\n print(request.POST)\n #need information about the user selected (provided from the dropdown in the form via request.POST)\n this_user = User.objects.get(id=request.POST['selectedUser'])\n print(\"***************\", this_user)\n #need information about the artist that the user is becoming a fan of for the many to many join\n this_artist = Artist.objects.get(id=artistid)\n\n #make the many to many join (2 options below both do the same thing, one is commented out)\n # this_artist.fans.add(this_user)\n this_user.likedArtists.add(this_artist)\n\n\n\n # User.objects.get(id=3).likedArtists.add(Artist.objects.get(id=5))\n return redirect(f\"/showArtistInfo/{artistid}\")\n"
},
{
"alpha_fraction": 0.6620370149612427,
"alphanum_fraction": 0.6620370149612427,
"avg_line_length": 20.700000762939453,
"blob_id": "e6de21c109c0f2153556efa7417cb790980a12b3",
"content_id": "9cce7387cf0225c7bacb3d8759318996fab56ff5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 216,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 10,
"path": "/formsdemo/formsdemoApp/urls.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views\n\nurlpatterns = [\n # path('admin/', admin.site.urls),\n path(\"\", views.index),\n path(\"register\", views.registerUser),\n path(\"orderdetails\", views.details)\n\n]"
},
{
"alpha_fraction": 0.6918103694915771,
"alphanum_fraction": 0.704741358757019,
"avg_line_length": 43.0476188659668,
"blob_id": "adb88d7fa09e51f67146d988a6d5123ebe8ab994",
"content_id": "55d65007335f00c0820b276fc118b64fc6c0f1a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 928,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 21,
"path": "/oneToMany/oneToManyApp/models.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\nclass Team(models.Model):\n name = models.CharField(max_length=255)\n location = models.CharField(max_length=255)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n def __str__(self):\n return f\"<Team object: {self.name} ({self.id})>\"\n\n#THE TABLE THAT HAS THE MANY IN THE ONE TO MANY REALTIONSHIP IS THE ONE THAT WILL HAVE THE FOREIGN KEY\nclass Player(models.Model):\n firstname = models.CharField(max_length=255)\n lastname = models.CharField(max_length=255)\n pointsPerGame = models.FloatField()\n assignedTeam = models.ForeignKey(Team, related_name=\"roster\", on_delete = models.CASCADE)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n def __str__(self):\n return f\"<Player object: {self.firstname} ({self.id})>\"\n\n\n\n"
},
{
"alpha_fraction": 0.6684607267379761,
"alphanum_fraction": 0.6684607267379761,
"avg_line_length": 32.21428680419922,
"blob_id": "d812bf65a1e69d42850d0aecd696a01bf22a61e9",
"content_id": "3d92b9828cba06802ef8b4cb1cc43ef849630320",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 929,
"license_type": "no_license",
"max_line_length": 187,
"num_lines": 28,
"path": "/oneToMany/oneToManyApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, HttpResponse, redirect\nfrom .models import *\n\n# Create your views here.\ndef index(request):\n print(\"********\")\n # Team.objects.all() - gets all the records in the table\n print(Team.objects.all())\n print(\"********\")\n context = {\n 'allteams':Team.objects.all()\n }\n return render(request, \"index.html\", context)\n\n\ndef createTeam(request):\n print(\"PRINTING FORM DATA\")\n print(request.POST)\n print(\"PRINTING FORM DATA\")\n Team.objects.create(name = request.POST['teamname'], location=request.POST['loc'])\n return redirect(\"/\")\n\ndef createPlayer(request):\n print(\"PRINTING FORM DATA\")\n print(request.POST)\n print(\"PRINTING FORM DATA\")\n Player.objects.create(firstname = request.POST['fname'], lastname = request.POST['lname'], pointsPerGame= request.POST['ppg'], assignedTeam= Team.objects.get(id=request.POST['team']))\n return redirect(\"/\")"
},
{
"alpha_fraction": 0.7505030035972595,
"alphanum_fraction": 0.7505030035972595,
"avg_line_length": 28.294116973876953,
"blob_id": "8e8deb2f057af4a26392f632860db305be0c4ab4",
"content_id": "85760e20001f976e8dfb0ae92faa109ac118caad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 497,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 17,
"path": "/djangoIntroDemo/introApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, HttpResponse, redirect\n\n# Create your views here.\n\ndef home(request):\n return redirect(\"/teams\") #this is the response \n\n\ndef showTeams(request):\n return HttpResponse(\"Great job team, this is where we will later show all teams in an html page\")\n\ndef new(request):\n return HttpResponse(\"placeholder to show a form to create a new team\")\n\n\ndef showSpecificTeam(request, teamname):\n return HttpResponse(f\"showing info about specific team: {teamname}\")"
},
{
"alpha_fraction": 0.7164948582649231,
"alphanum_fraction": 0.7319587469100952,
"avg_line_length": 32.739131927490234,
"blob_id": "3bf0bb8923c690fb4058bff4220c08c8dc4f2b85",
"content_id": "66e9ba27058b117b9b85b5a85021c89f0a30fbb5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 776,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 23,
"path": "/manyToMany/manyToManyApp/models.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\nclass User(models.Model):\n\tusername = models.CharField(max_length= 255)\n\tbirthday = models.DateField(max_length= 255)\n\tcreated_at = models.DateTimeField(auto_now_add=True)\n\tupdated_at = models.DateTimeField(auto_now=True)\n\t\n\tdef __str__(self):\n\t\treturn f\"<User object: {self.username} ({self.id})>\"\n\n\nclass Artist(models.Model):\n\tfirstName = models.CharField(max_length= 255)\n\tlastName = models.CharField(max_length= 255)\n\tdescription = models.TextField(null = True)\n\tfans = models.ManyToManyField(User, related_name = \"likedArtists\" )\n\tcreated_at = models.DateTimeField(auto_now_add=True)\n\tupdated_at = models.DateTimeField(auto_now=True)\n\n\tdef __str__(self):\n\t\treturn f\"<Artist object: {self.firstName} ({self.id})>\"\n"
},
{
"alpha_fraction": 0.5039164423942566,
"alphanum_fraction": 0.584856390953064,
"avg_line_length": 20.27777862548828,
"blob_id": "0e3ad7cc5be475b304a9a29d4c300a5717491fdd",
"content_id": "3cc9ab4d69c828c353c68da9f1989f7c9fcb18c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 383,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 18,
"path": "/tvshowsOrmIntro/tvApp/migrations/0003_tvshow_duration.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.2.4 on 2021-02-19 16:29\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('tvApp', '0002_auto_20210218_1710'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='tvshow',\n name='duration',\n field=models.IntegerField(null=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5519590973854065,
"alphanum_fraction": 0.5562180876731873,
"avg_line_length": 33.55882263183594,
"blob_id": "3236eef057f348fb5efc97330ed2ed44ea1b089e",
"content_id": "15d6129ad1cb1ae74e5ada188ee30e522c496161",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 1174,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 34,
"path": "/manyToMany/manyToManyApp/templates/artistinfo.html",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>Document</title>\n</head>\n<body>\n <h1>Info about {{oneArtist.firstName}} {{oneArtist.lastName}}</h1>\n <p>Description: {{oneArtist.description}}</p>\n <p>Fans of this Artist</p>\n {{oneArtist.fans.all.count}} People Like this Artist\n <ul>\n {% for userObj in oneArtist.fans.all %}\n <li>{{userObj.username}}</li>\n {% endfor %}\n </ul>\n\n <form action=\"/addToFanBase/{{oneArtist.id}}\" method=\"post\">\n {% csrf_token %}\n <p>Select a Fan to add to this artists' fanbase</p>\n <select name=\"selectedUser\" id=\"\">\n {% for user in allUsers %}\n {% if user not in oneArtist.fans.all %}\n <option value=\"{{user.id}}\">{{user.username}}</option>\n {% endif %}\n {% endfor %}\n </select>\n <!-- <input type=\"hidden\" name=\"artistId\" id=\"\" value = \"{{oneArtist.id}}\"> -->\n <input type=\"submit\" value=\"Add to fanbase!\">\n </form>\n</body>\n</html>"
},
{
"alpha_fraction": 0.6105991005897522,
"alphanum_fraction": 0.6221198439598083,
"avg_line_length": 28,
"blob_id": "d1b7c5abe6509d8087f46a425031e10024885700",
"content_id": "71924b65b81df2e62db2039bb31f09d75f14c72f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 434,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 15,
"path": "/semiRestfulRestaurant/restaurantApp/templates/menuItemInfo.html",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>Document</title>\n</head>\n<body>\n <h1>Info about menu item: {{oneItem.name}}</h1>\n <p>Description: {{oneItem.description}}</p>\n <p>Price: {{oneItem.price}}</p>\n <a href=\"/menu\">Go Back to Menu</a>\n</body>\n</html>"
},
{
"alpha_fraction": 0.7105262875556946,
"alphanum_fraction": 0.7105262875556946,
"avg_line_length": 28.66666603088379,
"blob_id": "02270d10ff9e9e372fb1c7b1c1b80b928ce7d9b6",
"content_id": "f4f3ba170e1f8a1b828769612b90581dfdf5afa7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 266,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 9,
"path": "/manyToMany/manyToManyApp/urls.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views\n\nurlpatterns = [\n path(\"\", views.index),\n path(\"artist/create\", views.createArtist),\n path(\"showArtistInfo/<int:artistid>\", views.showArtistInfo),\n path(\"addToFanBase/<int:artistid>\", views.addToFanBase)\n]"
},
{
"alpha_fraction": 0.6811926364898682,
"alphanum_fraction": 0.6880733966827393,
"avg_line_length": 26.3125,
"blob_id": "81c97e235fa2ccc804a0d0e00d3d4abd780dfd2f",
"content_id": "b6e49f850ba3ea21bf8f2daa123e2c41e34bd9e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 436,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 16,
"path": "/visitTracker/visitApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, HttpResponse, redirect\n\n# Create your views here.\ndef home(request):\n if 'visitcount' in request.session:\n request.session['visitcount'] += 1\n else:\n request.session['visitcount'] = 1\n return render(request, \"index.html\")\n\n\ndef resetInfo(request):\n #delete the information stored in session\n del request.session['visitcount']\n return redirect(\"/\")\n# {'visitcount': 2}"
},
{
"alpha_fraction": 0.6049869060516357,
"alphanum_fraction": 0.6220472455024719,
"avg_line_length": 26.178571701049805,
"blob_id": "8f69663a6b9b4eda295d7546ac962ee52ee46a3d",
"content_id": "f0b6845d3c80d4cd11d58e9d6f1b0cf74b11b81c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 762,
"license_type": "no_license",
"max_line_length": 169,
"num_lines": 28,
"path": "/formsdemo/formsdemoApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, HttpResponse, redirect\n\n# Create your views here.\ndef index(request):\n print(\"**********\")\n print(request.method)\n print(\"**********\")\n\n return render(request, \"index.html\")\n\ndef registerUser(request):\n print(\"**********\")\n #THE INFORMATION COLLECTED FROM THE FORM IS AVAILABLE and represented by THE KEYWORD REQUEST.POST\n print(request.POST)\n request.session['forminfo'] = request.POST\n\n print(\"**********\")\n \n\n return redirect(\"/orderdetails\")\n\ndef details(request):\n return render(request, \"orderdetails.html\")\n\n\n\n\n#{'forminfo': <QueryDict: {'csrfmiddlewaretoken': ['x9zW3GN9GS3EnnrMcc4WR47JmLwGs0NQGPXLlrwqHohdmZIsZKnA7rot2LK7N4LC'], 'fname': ['Saurabh'], 'lname': ['Dahal'], 'email': ['[email protected]'], 'ccn': ['897y876876'], 'plan': ['16.99']}> }\n\n"
},
{
"alpha_fraction": 0.5306553840637207,
"alphanum_fraction": 0.5398167967796326,
"avg_line_length": 28.58333396911621,
"blob_id": "a39c25066761093e62e0e2f47624672839de4ca6",
"content_id": "112492933a58b8946a4446b3345824c93f8b7fde",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 1419,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 48,
"path": "/oneToMany/oneToManyApp/templates/index.html",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <title>Document</title>\n</head>\n<body>\n <h1>All the Teams!</h1>\n\n {% for teamobj in allteams %}\n <h3>{{teamobj.name}}</h3>\n <ol>\n {% for playerobj in teamobj.roster.all %}\n <li>{{playerobj.firstname}} {{playerobj.lastname}}</li>\n {% endfor %}\n </ol>\n \n {% endfor %}\n\n <h2>Add a new team</h2>\n <form action=\"/createTeam\" method=\"post\">\n {% csrf_token %}\n <p>Team Name: <input type=\"text\" name=\"teamname\" id=\"\"></p>\n <p>Team Location: <input type=\"text\" name=\"loc\" id=\"\"></p>\n <p><input type=\"submit\" value=\"Submit Team\"></p>\n </form>\n\n\n <h2>Add a new player</h2>\n <form action=\"/createPlayer\" method=\"post\">\n {% csrf_token %}\n <p>First Name: <input type=\"text\" name=\"fname\" id=\"\"></p>\n <p>Last Name: <input type=\"text\" name=\"lname\" id=\"\"></p>\n <p>Points Per Game: <input type=\"number\" step=\"0.1\" name=\"ppg\"></p>\n <p>Which team? <select name=\"team\" id=\"\">\n {% for teamobj in allteams %}\n <option value=\"{{teamobj.id}}\">{{teamobj.name}}</option>\n {% endfor %}\n </select></p>\n <p><input type=\"submit\" value=\"Submit Team\"></p>\n </form>\n\n\n \n</body>\n</html>"
},
{
"alpha_fraction": 0.40443214774131775,
"alphanum_fraction": 0.4522160589694977,
"avg_line_length": 25.254545211791992,
"blob_id": "d6d87dc3cf925d095b928ece35d87fd0cdce0528",
"content_id": "2223b72b581a0c34b292193518e53fb892ad9244",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1444,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 55,
"path": "/randomWordDisplay/randomWordApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, HttpResponse\nimport random\n\n# Create your views here.\ndef index(request):\n topratedMovies = [\n {\n \"title\": \"Top Gun\",\n \"rating\": 5,\n \"release_date\": \"2000-03-01\"\n },\n {\n \"title\": \"Fight Club\",\n \"rating\": 5,\n \"release_date\": \"1999-05-08\"\n },\n {\n \"title\": \"Step Brothers\",\n \"rating\": 5,\n \"release_date\": \"2010-01-01\"\n },\n {\n \"title\": \"Shutter Island\",\n \"rating\": 5,\n \"release_date\": \"2008-03-15\"\n },\n {\n \"title\": \"Django Unchained\",\n \"rating\": 5,\n \"release_date\": \"2012-12-01\"\n },\n {\n \"title\": \"Finding Nemo\",\n \"rating\": 4,\n \"release_date\": \"2012-12-01\"\n },\n {\n \"title\": \"Super Mario\",\n \"rating\": 2,\n \"release_date\": \"2012-12-01\"\n }\n ]\n #in order to pass information from the server (views.py) to the html, I need a context dictionary with the info I want to pass\n favorite_color = \"blue\"\n\n context = {\n \"topmovs\": topratedMovies,\n 'favcolor': favorite_color,\n 'random': random.randint(0,99)\n }\n # print(\"************\")\n # print(random.randint(0,99))\n # print(\"************\")\n\n return render(request, \"randomword.html\", context)\n"
},
{
"alpha_fraction": 0.6805896759033203,
"alphanum_fraction": 0.6805896759033203,
"avg_line_length": 33,
"blob_id": "47945853abf2f041f1a9f852e2154d3a9f39ef40",
"content_id": "e57ffb2e8746ccc6854e7f1916ce21f02f7c1c12",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 407,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 12,
"path": "/semiRestfulRestaurant/restaurantApp/urls.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views\n\nurlpatterns = [\n path(\"menu/new\", views.index ),\n path(\"menu/create\", views.createMenuItem),\n path(\"menu\", views.showMenu),\n path(\"menuItem/info/<int:menuId>\", views.menuItemInfo),\n path(\"menu/delete/<int:menuId>\", views.deleteItem),\n path(\"menu/edit/<int:menuId>\", views.editItem),\n path(\"menu/update/<int:menuId>\", views.updateItem)\n]"
},
{
"alpha_fraction": 0.6719278693199158,
"alphanum_fraction": 0.6753100156784058,
"avg_line_length": 36,
"blob_id": "a07e0f4f14f8fb576ce560070ec0cf8128bf81a4",
"content_id": "f5f141f7499fb042bee7451756725410da45df77",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 887,
"license_type": "no_license",
"max_line_length": 154,
"num_lines": 24,
"path": "/tvshowsOrmIntro/tvApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, HttpResponse, redirect\n\nfrom .models import * \n\n# Create your views here.\ndef index(request):\n\n context = {\n 'allshows': TvShow.objects.all() #<QuerySet [<TvShow: TvShow object (1)>, <TvShow: TvShow object (3)>, <TvShow: TvShow object (4)>]>\n }\n\n return render(request, \"shows.html\", context)\n\ndef createShow(request):\n #a function handling a post request must REDIRECT!\n print(\"SUBMITTED THE FORM! IN THE CREATE SHOW FUNCTION THEN REDIRECTING \")\n #request.POST represents the information collected from the form\n print(\"*********\")\n print(request.POST)\n print(request.POST['title'])\n print(request.POST['rd'])\n TvShow.objects.create(title=request.POST['title'], description=request.POST['desc'], release_date=request.POST['rd'], rating = request.POST['rating'])\n print(\"*********\")\n return redirect('/')"
},
{
"alpha_fraction": 0.5252100825309753,
"alphanum_fraction": 0.5518207550048828,
"avg_line_length": 24.5,
"blob_id": "20d43e4e7a9e3f1135e14f0aacad0408692b0b71",
"content_id": "85dc762de150a476514e500d1e440e69966f2bde",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 714,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 28,
"path": "/tvshowsOrmIntro/tvApp/migrations/0002_auto_20210218_1710.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.2.4 on 2021-02-18 17:10\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('tvApp', '0001_initial'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='tvshow',\n name='created_at',\n field=models.DateTimeField(auto_now_add=True, null=True),\n ),\n migrations.AddField(\n model_name='tvshow',\n name='rating',\n field=models.IntegerField(null=True),\n ),\n migrations.AddField(\n model_name='tvshow',\n name='updated_at',\n field=models.DateTimeField(auto_now=True, null=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6320241689682007,
"alphanum_fraction": 0.644108772277832,
"avg_line_length": 39.29268264770508,
"blob_id": "93e643087da7c1f9ca462a6204fdc57b99ec884a",
"content_id": "4ddb12130df04d64e83a4c584bd275fdd92ce0d7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1655,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 41,
"path": "/semiRestfulRestaurant/restaurantApp/models.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n#the purpose of this manager class is so that i can put methods in there that we can extend the functionality of the \"objects\" keyword\nclass MenuManager(models.Manager):\n #create a method here to validate the information from the form\n def menuCreationValidator(self, formInfo):\n errors = {}\n if len(formInfo['dishname']) == 0:\n errors['dishnameRequired'] = \"The Name of the Menu item is required!\"\n elif len(formInfo['dishname']) < 3:\n errors['3charsreq_dishName'] = \"Dish Name must be at least 3 characters long\"\n \n if len(formInfo['desc']) < 10:\n errors['desc10chars'] = \"Description must be at least 10 characters long!\"\n \n if float(formInfo['priceInput']) < 5:\n errors['priceNotexpensiveEnoughWebougieOutHere'] = \"Please enter a higher price, organic foods are expensive!\"\n\n \n\n return errors\n\n # def basic_validator(self, postData):\n # errors = {}\n # # add keys and values to errors dictionary for each invalid field\n # if len(postData['name']) < 5:\n # errors[\"name\"] = \"Blog name should be at least 5 characters\"\n # if len(postData['desc']) < 10:\n # errors[\"desc\"] = \"Blog description should be at least 10 characters\"\n # return errors\n\n\n\n# Create your models here.\nclass Menu(models.Model):\n name = models.CharField(max_length=255)\n description = models.TextField()\n price = models.FloatField()\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n objects = MenuManager()\n\n\n\n"
},
{
"alpha_fraction": 0.5515536665916443,
"alphanum_fraction": 0.5706214904785156,
"avg_line_length": 37.27027130126953,
"blob_id": "a2c73f64cd84dd62d8af474ee09add7467e0fe2d",
"content_id": "68593e18370f8237cad7b5d0bcef28786ce74501",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1416,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 37,
"path": "/oneToMany/oneToManyApp/migrations/0001_initial.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.2.4 on 2021-02-19 16:55\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='Team',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(max_length=255)),\n ('location', models.CharField(max_length=255)),\n ('created_at', models.DateTimeField(auto_now_add=True)),\n ('updated_at', models.DateTimeField(auto_now=True)),\n ],\n ),\n migrations.CreateModel(\n name='Player',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('firstname', models.CharField(max_length=255)),\n ('lastname', models.CharField(max_length=255)),\n ('pointsPerGame', models.FloatField()),\n ('created_at', models.DateTimeField(auto_now_add=True)),\n ('updated_at', models.DateTimeField(auto_now=True)),\n ('assignedTeam', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='roster', to='oneToManyApp.Team')),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.6649722456932068,
"alphanum_fraction": 0.665896475315094,
"avg_line_length": 29.06944465637207,
"blob_id": "e25bb3e37c313fa87fc3f51ca710013ae75bf02f",
"content_id": "46493171995551a3810d2b1ff847858ddeda777b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2164,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 72,
"path": "/semiRestfulRestaurant/restaurantApp/views.py",
"repo_name": "sdahal1/djangoFebruary",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom .models import *\n\n# Create your views here.\ndef index(request):\n return render(request, 'index.html')\n\ndef createMenuItem(request):\n print(\"***********\")\n print(request.POST)\n print(\"***********\")\n\n errorsFromValidator = Menu.objects.menuCreationValidator(request.POST)\n\n print(\"PRINTING ERRORSFROMVALIDATOR HERE\", errorsFromValidator)\n\n if len(errorsFromValidator)>0:\n for key, value in errorsFromValidator.items():\n messages.error(request, value)\n return redirect('/menu/new')\n \n\n newitem = Menu.objects.create(name = request.POST['dishname'], description= request.POST['desc'], price = request.POST['priceInput'])\n print('PRINTING THE NEW ITEM HERE:', newitem)\n\n return redirect(f\"/menuItem/info/{newitem.id}\")\n\ndef showMenu(request):\n context = {\n 'allMenuItems': Menu.objects.all()\n }\n return render(request, \"menu.html\", context)\n\ndef menuItemInfo(request, menuId):\n context = {\n 'oneItem': Menu.objects.get(id=menuId)\n }\n return render(request, \"menuItemInfo.html\", context)\n\ndef deleteItem(request, menuId):\n # Deleting an existing record\n menuitem = Menu.objects.get(id=menuId)\n menuitem.delete()\n return redirect(\"/menu\")\n\ndef editItem(request, menuId):\n context = {\n 'oneItem': Menu.objects.get(id=menuId)\n }\n return render(request, \"editmenu.html\", context)\n\ndef updateItem(request, menuId):\n print(\"***********\")\n print(request.POST)\n print(\"***********\")\n errorsFromValidator = Menu.objects.menuCreationValidator(request.POST)\n\n print(\"PRINTING ERRORSFROMVALIDATOR HERE\", errorsFromValidator)\n\n if len(errorsFromValidator)>0:\n for key, value in errorsFromValidator.items():\n messages.error(request, value)\n return redirect(f'/menu/edit/{menuId}')\n # Updating an existing record\n c = Menu.objects.get(id=menuId)\n c.name = request.POST['dishname']\n c.description = request.POST['desc']\n c.price = 
request.POST['priceInput']\n c.save()\n\n return redirect(f\"/menuItem/info/{menuId}\")"
}
] | 22 |
yzxie/work-tools | https://github.com/yzxie/work-tools | bb27150091c22a15d699e47f3cd6139f008fbdc6 | 25a6770cf1bad07fdeef597a5dd2cacf82351049 | a801adf6fbf7700705c399fe994de7ab697edb09 | refs/heads/master | 2021-08-27T16:53:03.877034 | 2021-08-25T08:24:06 | 2021-08-25T08:24:06 | 154,696,707 | 1 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.5498753190040588,
"alphanum_fraction": 0.5673316717147827,
"avg_line_length": 18.095237731933594,
"blob_id": "3e9d50439413c26b476bd1a9a91f042f49fd3344",
"content_id": "cc486b7c7503c8831400f82bb27395b7597cfb36",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 802,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 42,
"path": "/redis/analy_redis_statdata.sh",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#! /bin/sh\n\nfunction list_calls() {\n for str in `cat redis_slow_statistic.txt | awk -F'=' '{print substr($2, 0, index($2, \",\")-1), $0}' | sort -frh`\n do\n echo $str\n done\n}\n\nfunction list_total_cost_time() {\n for str in `cat redis_slow_statistic.txt | awk -F'=' '{print substr($3, 0, index($3, \",\")-1), $0}' | sort -frh`\n do\n echo $str\n done\n}\n\nfunction list_each_cost_time() {\n for str in `cat redis_slow_statistic.txt | awk -F'=' '{print $4, $0}' | sort -frh`\n do\n echo $str\n done\n}\n\nif [ $# = 0 ]\nthen\n echo \"usage: sh analy_redis_statdata.sh [calls | tct | ect] \\n tct: total_cost_time, ect: each_cost_time\"\nelse\n case $1 in\n calls)\n list_calls\n ;;\n tct)\n list_total_cost_time\n ;;\n ect)\n list_each_cost_time\n ;;\n *)\n list_each_cost_time\n ;;\n esac\nfi\n"
},
{
"alpha_fraction": 0.5462499856948853,
"alphanum_fraction": 0.5837500095367432,
"avg_line_length": 12.576271057128906,
"blob_id": "aeddb93141ab542e3ac5a4719cc4b426627fb5da",
"content_id": "f0856666dce82bdb369fbddfe7e8dd7b04c7f71c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 800,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 59,
"path": "/redis/keys_count_by_scan.sh",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nA=$0\nB=${A##*/}\nC=${B%.*}\nrunning_file_name=$C\nrunning_flag=\"run.$running_file_name\"\nREDIS_CLIENT='redis-cli -h 0.0.0.0 -p 6379 -x'\n\nfunction process {\n\techo $0\n\tindex=-1\n\tcount=0\n\tstep=100000\n\n\twhile ((index!=0))\n\tdo\n\t\tif [ $index -le 0 ];then\n\t\t\tindex=0\n\t\tfi\n\t\techo \"scan $index match $1 count $step\" | $REDIS_CLIENT > $running_file_name.cache\n\t\tread index <<< `head -1 $running_file_name.cache`\n\t\tread inc <<< `cat $running_file_name.cache | wc -l`\n\t\tinc=$((inc - 1))\n\t\tif [ $? -ne 0 ];then\n\t\t\tbreak\n\t\tfi\n\t\tcount=$((count + inc))\n\tdone\n\n\techo \"$1 count:\"$count\n}\n\n#\n\nif [ $# -ne 1 ];then\n\techo \"$0 \"\n\texit 0\nfi\n\n#\nif [ -f \"$running_flag\" ] ; then\n\techo \"is running...\"\n\texit 0\nfi\n\n#\ntouch $running_flag\n#\n\necho \"processing....\"\necho $*\nprocess $*\n\n#\nrm -rf $running_flag\n#\n\necho \"ok!\""
},
{
"alpha_fraction": 0.5356415510177612,
"alphanum_fraction": 0.5539714694023132,
"avg_line_length": 21.272727966308594,
"blob_id": "aee467ec7bf93475345be4fb4148eb6cf526ae83",
"content_id": "465d3b21e5d45772d69274c10145cafb666aa194",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 491,
"license_type": "no_license",
"max_line_length": 150,
"num_lines": 22,
"path": "/http/curl_array_v2.sh",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "# \nmsg_id=(\"1\" \"2\")\nreg_id=(\"1\" \"2\")\nresults=()\nindex=0\nfor str in ${msg_id[@]}; do\n\n for str2 in ${reg_id[@]}; do\n a=\"{\\\"msg_id\\\": ${str}, \\\"registration_ids\\\":[\\\"${str2}\\\"]}\"\n res=`curl -s --insecure -X POST -v https://report.jpush.cn/v3/status/message -H \"Content-Type: application/json\" -u \"username:password\" -d \"${a}\"`\n results[index]=$str+$res\n\n echo ${results[index]}\n index=$(($index+1))\n done\n \ndone\n\necho \"=======\"\nfor res in ${results[@]}; do\n echo $res\ndone\n\n"
},
{
"alpha_fraction": 0.619784951210022,
"alphanum_fraction": 0.6421505212783813,
"avg_line_length": 25.43181800842285,
"blob_id": "f48311696972bcd484ceac88e6ecd511bfecc52f",
"content_id": "ae56ef5650e5529cc863135da9a0e436ed68b610",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2495,
"license_type": "no_license",
"max_line_length": 157,
"num_lines": 88,
"path": "/excel/export_table_to_excel.py",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport pymysql\nimport os\nfrom os import system\nimport pymysql\nimport xlwt\n\nUSERNAME = \"root\"\nPASSWORD = \"root\"\nDBNAME = \"hkstock_trade\"\nHOST = \"localhost\"\nPORT = 3306\nDICT = {\n\t\"N:SHSOUTHBOUNDHK\": \"港股通(沪)\",#connect_sh_hk\n \"N:SZSOUTHBOUNDHK\": \"港股通(深)\",#connect_sz_hk\n \"N:NORTHBOUNDSH\": \"沪股通\", #connect_hk_sh\n \"N:NORTHBOUNDSZ\": \"深股通\" #connect_hk_sz\n}\n\nQUOTE_HK_NORTHBOUND_QUOTA = 5.2*pow(10, 10) #\"N:NORTHBOUNDSH\": \"沪股通\", \"N:NORTHBOUNDSZ\": \"深股通\"\nQUOTE_HK_SOUTHBOUND_QUOTA = 4.2*pow(10, 10) #\"N:SHSOUTHBOUNDHK\": \"港股通(沪)\", \"N:SZSOUTHBOUNDHK\": \"港股通(深)\"\n\ndef get_db_cursor():\n\tconn = pymysql.connect(\n\t\thost=HOST,\n\t\tuser=USERNAME,\n\t\tpassword=PASSWORD,\n\t\tdb=DBNAME\n\t)\n\treturn conn.cursor()\n\ndef write_excel(filename, rows):\n\tbook = xlwt.Workbook(encoding='utf-8') #不加encoding,会报UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 0: ordinal not in range(128)\n\tsheet = book.add_sheet('sheet1')\n\t# 标题栏\n\tsheet.write(0, 0, '市场类型')\n\tsheet.write(0, 1, '日期')\n\tsheet.write(0, 2, '总额')\n\tsheet.write(0, 3, '余额')\n\tsheet.write(0, 4, '净流入额(总额-余额)')\n\n\tc = 1\n\tfor row in rows:\n\t\tmarket = ''\n\t\tdate_str = ''\n\t\ttotal = ''\n\t\tquota_balance = ''\n\t\tquota_inflow = ''\n\n\t\tfor index in range(5):\n\t\t\tif index==0:\n\t\t\t\t#市场\n\t\t\t\tmarket = DICT[row[0]]\n\t\t\t\tsheet.write(c, index, market)\n\t\t\telif index==1:\n\t\t\t\t#日期\t\n\t\t\t\t# row[1]类型为datetime.datetime\n\t\t\t\t# date_str = row[1].strftime(\"%Y-%m-%d %H:%M:%S\")\n\t\t\t\tdate_str = row[1].strftime(\"%Y-%m-%d\")\n\t\t\t\tsheet.write(c, index, date_str)\n\t\t\telif index==2:\n\t\t\t\t#总额\n\t\t\t\tif row[0]=='N:NORTHBOUNDSH' or row[0]=='N:NORTHBOUNDSZ':\n\t\t\t\t\ttotal = QUOTE_HK_NORTHBOUND_QUOTA\n\t\t\t\telif row[0]=='N:SHSOUTHBOUNDHK' or row[0]=='N:SZSOUTHBOUNDHK':\n\t\t\t\t\ttotal = QUOTE_HK_SOUTHBOUND_QUOTA\n\t\t\t\tsheet.write(c, index, total)\n\t\t\telif 
index==3:\n\t\t\t\t#余额\n\t\t\t\tquota_balance = row[2]\n\t\t\t\tsheet.write(c, index, quota_balance)\n\t\t\telif index==4:\n\t\t\t\t#净流入额\n\t\t\t\tquota_inflow = total - quota_balance\n\t\t\t\tsheet.write(c, index, quota_inflow)\n\t\tc += 1\n\tbook.save(filename)\n\ndef export_market_wide():\n\tcursor = get_db_cursor()\n\tcursor.execute(\"select * from market_wide where symbol in ('N:SHSOUTHBOUNDHK', 'N:SZSOUTHBOUNDHK', 'N:NORTHBOUNDSH', 'N:NORTHBOUNDSZ') order by timestamp;\")\n\t\n\trows = cursor.fetchall()\n\twrite_excel(\"净流入额数据.xls\", rows)\n\tcursor.close()\n\nif __name__ == '__main__':\n\texport_market_wide()"
},
{
"alpha_fraction": 0.6176870465278625,
"alphanum_fraction": 0.6380952596664429,
"avg_line_length": 22.349206924438477,
"blob_id": "1f28c29937ecb2898f68fcaeff34c8c7a10fe582",
"content_id": "54f28ecf5c226016f0dd5cea718e60b1767ec6da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1478,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 63,
"path": "/mysql/insert_rows_to_table.py",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport os\nfrom os import system\nimport pymysql\n\nUSERNAME = \"root\"\nPASSWORD = \"root\"\nDBNAME = \"hkstock_trade\"\nHOST = \"localhost\"\nPORT = 3306\nFILENAME=\"./20180228-20181101market-wide.txt\"\n\ndef parse_market_wide_rows():\n\trows = []\n\tfd = open(FILENAME, 'r')\n\n\twhile True:\n\t\tline = fd.readline()\n\t\tif not line:\n\t\t\tbreak\n\t\tcolumns = line.split('|')\n\t\ti = 0\n\t\trow = []\n\t\tfor column in columns:\n\t\t\tcolumn = column.strip()\n\t\t\tif column:\n\t\t\t\trow.append(column)\n\t\trows.append(row)\n\treturn rows\n\ndef bulk_insert_rows_to_market_wide(rows):\n\tconnection = pymysql.connect(\n\t\thost=HOST,\n\t\tuser=USERNAME,\n\t\tpassword=PASSWORD,\n\t\tdb=DBNAME\n\t)\n\tcursor = connection.cursor();\n\tcursor.execute('SELECT VERSION()')\n\tdata = cursor.fetchone()\n\tprint 'db version %s' % data\n\n\tprint 'import begin.'\n\tfor row in rows:\n\t\t# SQL 插入语句\n\t\tsql = \"\"\"INSERT INTO market_wide(symbol, timestamp, daily_balance, currency, trade_total_value, \n\t\t\t\t\tbid_accumulated_turnover, ask_accumulated_turnover, to_cny_exchange_rate)\n \t\t VALUES ('%s', '%s', '%d', '%s', '%d', '%d', '%d', '%d')\"\"\" \\\n \t\t% (row[0], row[1], int(row[2]), row[3], int(row[4]), \n \t\t\tint(row[5]), int(row[6]), float(row[7]))\n\t\ttry:\n\t\t\tcursor.execute(sql)\n\t\t\tconnection.commit()\n\t\texcept Exception as e:\n\t\t\tprint 'exception: ', e\n\t\t\tconnection.rollback()\n\n\tprint 'import successfully.'\n\tconnection.close()\n\t\nif __name__ == '__main__':\n\trows = parse_market_wide_rows()\n\tbulk_insert_rows_to_market_wide(rows)"
},
{
"alpha_fraction": 0.665517270565033,
"alphanum_fraction": 0.6741379499435425,
"avg_line_length": 29.578947067260742,
"blob_id": "4d486bbf79eb08698e1bd6a667ccb0e3299ac90a",
"content_id": "9052570bdf5d4fb2830258c7ce0e780e5ef0c62b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 588,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 19,
"path": "/mysql/bulk_import_table_data.py",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport os\nfrom os import system\n\nUSERNAME = \"root\"\nPASSWORD = \"root\"\nDBNAME = \"hkstock_trade\"\nHOST = \"localhost\"\nPORT = 3306\n#DIRNAME = \"/Users/xieyizun/Documents/项目文档/db_dump/hkstock_trade/\"\nDIRNAME= \"/Users/xieyizun/work/projects/stock-quote-web/db/\"\ndef list_sql_files_and_import_data():\n\tfor sql_file in os.listdir(r\"%s\" % DIRNAME):\n\t\tprint sql_file\n\t\tcommand = \"\"\"mysql -u %s -p\"%s\" --host %s --port %s %s < %s\"\"\" %(USERNAME, PASSWORD, HOST, PORT, DBNAME, DIRNAME + sql_file)\n\t\tsystem(command)\n\nif __name__ == '__main__':\n\tlist_sql_files_and_import_data()"
},
{
"alpha_fraction": 0.5722457766532898,
"alphanum_fraction": 0.5955508351325989,
"avg_line_length": 31.77777862548828,
"blob_id": "f622c270a6b9ea7a25b7055d2e456fdb201f386c",
"content_id": "38e3261943d0f6ce9113e1aa9024c8ba578be890",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4986,
"license_type": "no_license",
"max_line_length": 178,
"num_lines": 144,
"path": "/mysql/analy_mysql_slow_log.py",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\nimport os\nfrom os import system\nimport time\nimport sys\n\n#SLOW_QUERY_FILE=\"./slow-queries.log-20181013\"\nSLOW_QUERY_FILE=\"./slow-queries.log-20181025\"\ndef read_slow_query_file_and_analy():\n\t# [[,,,,],[,,,,]]\n\trecord_lines_set_list = []\n\tis_next_slow_log = True\n\trecord_lines_set = []\n\n\tfd = open(SLOW_QUERY_FILE, 'r')\n\twhile True:\n\t\tline = fd.readline()\n\t\tif not line:\n\t\t\tbreak\n\n\t\t#下一条记录的开始\t\t\n\t\tif line.startswith(\"#\") and is_next_slow_log == True:\n\t\t\t#把上一条记录存到列表\n\t\t\tif len(record_lines_set) > 0:\n\t\t\t\trecord_lines_set_list.append(record_lines_set)\n\t\t\t#准备接收记录的行\n\t\t\trecord_lines_set = []\n\t\t\tis_next_slow_log = False\n\t\t\n\t\t#一条完整记录包含多行\n\t\trecord_lines_set.append(line)\n\n\t\t#重置next_slow_log\t\n\t\tif not line.startswith(\"#\") and is_next_slow_log == False:\n\t\t\tis_next_slow_log = True\n\n\t#分析每一条记录\n\tanaly_mysql_slow_log(record_lines_set_list)\n\t\n\tfd.close()\n\ndef analy_mysql_slow_log(record_lines_set_list):\n\ttotal_20_record_count = 0\n\ttotal_slow_log_count = 0\n\tsorted_sql_list = []\n\n\tfor record_lines_set in record_lines_set_list:\n\t\tfor i in range(0, len(record_lines_set)):\n\t\t\trecord_line = record_lines_set[i]\n\t\t\tif record_line.startswith(\"# User@Host:\") and \"172.28.48.20\" in record_line:\n\t\t\t\ttotal_20_record_count += 1\n\n\t\t\t\t# 转换日期并打印记录\n\t\t\t\tfor rl in record_lines_set:\n\t\t\t\t\tif rl.startswith(\"SET timestamp=\"):\n\t\t\t\t\t\ttimestamp = rl[-12:-2]\n\t\t\t\t\t\ttime_local = time.localtime(int(timestamp))\n\t\t\t\t\t\tdt = time.strftime(\"%Y-%m-%d %H:%M:%S\", time_local)\n\n\t\t\t\t\t\t# 输出22:37\n\t\t\t\t\t\t# if dt.startswith(\"2018-10-15 22:37\"):\n\t\t\t\t\t\t# \ttotal_slow_log_count += 1\n\t\t\t\t\t\t# \tprint \"No: %s, Time: %s: Command: %s\" % (total_slow_log_count, dt, record_lines_set[0]),\n\t\t\t\t\t\t# \tif record_lines_set[3].startswith(\"# Query_time:\"):\n\t\t\t\t\t\t# \t\tprint record_lines_set[3]\n\t\t\t\t\t\t# 
\telif record_lines_set[2].startswith(\"# Query_time:\"):\n\t\t\t\t\t\t# \t\tprint record_lines_set[2]\n\t\t\t\t\t\t# \t# print dt\n\t\t\t\t\t\t# \t# for rl in record_lines_set:\n\t\t\t\t\t\t# \t# \tprint rl\n\n\t\t\t\t\t\t# 输出全部\n\t\t\t\t\t\ttotal_slow_log_count += 1\n\t\t\t\t\t\tprint \"No.%s, Time: \" % total_slow_log_count, dt\n\t\t\t\t\t\tfor i in range(0, len(record_lines_set)):\n\t\t\t\t\t\t\tif i+1 == len(record_lines_set):\n\t\t\t\t\t\t\t\tprint record_lines_set[i]\n\t\t\t\t\t\t\telse:\n\t\t\t\t\t\t\t\tprint record_lines_set[i],\n\n\t\t\t\t\t\t# 输出20点\n\t\t\t\t\t\t# if dt.startswith(\"2018-10-15 20:\"):\n\t\t\t\t\t\t# \ttotal_slow_log_count += 1\n\t\t\t\t\t\t# \tprint \"No.%s, Time: \" % total_slow_log_count, dt\n\t\t\t\t\t\t# \tfor i in range(0, len(record_lines_set)):\n\t\t\t\t\t\t# \t\tif i+1 == len(record_lines_set):\n\t\t\t\t\t\t# \t\t\tprint record_lines_set[i]\n\t\t\t\t\t\t# \t\telse:\n\t\t\t\t\t\t# \t\t\tprint record_lines_set[i],\n\n\t\t\t\t\t\t# 只输出MySQL语句\n\t\t\t\t\t\t# if record_lines_set[len(record_lines_set)-1].startswith(\"SELECT\"):\n\t\t\t\t\t\t# \ttotal_slow_log_count += 1\n\t\t\t\t\t\t# \t# 统计数据表时使用:python analy_mysql_slow_log.py | awk -F'FROM' '{print $2}' | awk '{print $1}' | sort | uniq -c\n\t\t\t\t\t\t# \tprint \"No.%s, Time: %s\" % (total_slow_log_count, dt)\n\t\t\t\t\t\t# \tprint record_lines_set[len(record_lines_set)-1],\n\n\t\t\t\t\t\t# \tif record_lines_set[1].startswith(\"# Query_time:\"):\n\t\t\t\t\t\t# \t\tprint record_lines_set[1]\n\t\t\t\t\t\t# \telif record_lines_set[2].startswith(\"# Query_time:\"):\n\t\t\t\t\t\t# \t\tprint record_lines_set[2]\n\t\t\t\t\t\t# \t#查找耗时\n\t\t\t\t\t\t# \t# query_cost_time = \"\"\n\t\t\t\t\t\t# \t# for rl2 in record_lines_set:\n\t\t\t\t\t\t# \t# \tif rl2.startswith(\"# Query_time: \"):\n\t\t\t\t\t\t# \t# \t\tquery_cost_time = rl2\n\t\t\t\t\t\t# \t# \t\tbreak\n\t\t\t\t\t\t# \t# sorted_sql_list.append(record_lines_set[len(record_lines_set)-1] + query_cost_time)\n\n\t\t\t\t\t\t# NIO 
12号慢日志\n\t\t\t\t\t\t# if record_lines_set[len(record_lines_set)-1].startswith(\"SELECT\") and \"for_factor,executed FROM stock_split WHERE symbol =\" in record_lines_set[len(record_lines_set)-1]:\n\t\t\t\t\t\t# \ttotal_slow_log_count += 1\n\t\t\t\t\t\t# \t# 统计数据表时使用:python analy_mysql_slow_log.py | awk -F'FROM' '{print $2}' | awk '{print $1}' | sort | uniq -c\n\t\t\t\t\t\t# \tprint \"No.%s, Time: %s\" % (total_slow_log_count, dt)\n\t\t\t\t\t\t# \tprint record_lines_set[len(record_lines_set)-1],\n\n\t\t\t\t\t\t# \tif record_lines_set[1].startswith(\"# Query_time:\"):\n\t\t\t\t\t\t# \t\tprint record_lines_set[1]\n\t\t\t\t\t\t# \telif record_lines_set[2].startswith(\"# Query_time:\"):\n\t\t\t\t\t\t# \t\tprint record_lines_set[2]\n\t\t\t\t\t\t# \t#查找耗时\n\t\t\t\t\t\t# \t# query_cost_time = \"\"\n\t\t\t\t\t\t# \t# for rl2 in record_lines_set:\n\t\t\t\t\t\t# \t# \tif rl2.startswith(\"# Query_time: \"):\n\t\t\t\t\t\t# \t# \t\tquery_cost_time = rl2\n\t\t\t\t\t\t# \t# \t\tbreak\n\n\t\t\t\t\t\t# 跳出,当前慢查询记录已经处理完\n\t\t\t\t\t\tbreak\n\t\t\t\t# 跳出,处理下一个慢查询记录\n\t\t\t\tbreak\n\t#排序和打印SQL语句\n\t#sorted_unique_sql_list = sorted(list(set(sorted_sql_list)))\n\tsorted_unique_sql_list = sorted(sorted_sql_list)\n\tfor sql in sorted_unique_sql_list:\n\t\tprint sql\n\n\tprint \"sorted unique list size: %d\" % len(sorted_unique_sql_list)\n\tprint \"20 machine match slow record: \", total_slow_log_count\n\nif __name__ == '__main__':\n\tif len(sys.argv) > 1:\n\t\tSLOW_QUERY_FILE=\"./\"+sys.argv[1]\n\tread_slow_query_file_and_analy()\n"
},
{
"alpha_fraction": 0.7244094610214233,
"alphanum_fraction": 0.748031497001648,
"avg_line_length": 13.11111068725586,
"blob_id": "99101317405753e43a07534bdfc5fb50c00ac291",
"content_id": "d53ccf2675d210f584cae7886f4711d130f3da8e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 243,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 9,
"path": "/README.md",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "# 数据处理与分析脚本\n### 1.MySQL\nMySQL相关脚本,包括导入数据,慢日志分析等\n\n### 2.Redis\nRedis相关脚本,包括导入dump.rdb,慢日志分析等\n\n### 3.Http\ncurl命令的使用,HTML页面访问与解析脚本\n"
},
{
"alpha_fraction": 0.6404055953025818,
"alphanum_fraction": 0.6443057656288147,
"avg_line_length": 19.677419662475586,
"blob_id": "ffb9ce1b880d6b9423320faff6951bede01dce3c",
"content_id": "d5081e69c879319a4f608cd1d4b3ee8dcec44eaa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1282,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 62,
"path": "/http/html_parser.py",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#coding=utf-8\n\nimport os\nimport urllib\nfrom os import system\nfrom urllib import urlopen\nfrom HTMLParser import HTMLParser\n\nclass html_a_tag_parser(HTMLParser):\n\ta_tag = False\n\ta_tag_link = ''\n\tlinks = []\n\n\tdef handle_starttag(self, tag, attrs):\n\t\tself.a_tag = False\n\t\tself.a_tag_link = ''\n\n\t\tif tag == 'a':\n\t\t\tif len(attrs) == 0:\n\t\t\t\tpass\n\t\t\telse:\n\t\t\t\tfor (variable, value) in attrs:\n\t\t\t\t\tif variable == 'href':\n\t\t\t\t\t\tself.a_tag = True\n\t\t\t\t\t\tself.a_tag_link = value.decode('utf-8')\n\n\tdef handle_data(self, data):\n\t\tif self.a_tag == True and self.a_tag_link.startswith(\"https://www.baidu.com\") and self.a_tag_link.find(\"#\")==-1:\n\t\t\tself.links.append(self.a_tag_link)\n\n\ndef extract_urls_from_html(urls):\n\tall_links = []\n\n\tfor url in urls:\n\t\thtml = urlopen(url).read().decode(\"utf-8\")\n\t\ta_tag_parser = html_a_tag_parser()\n\t\ta_tag_parser.feed(html)\n\t\ta_tag_parser.close()\n\n\t\tprint 'Origin url: ', url\n\t\tfor link in a_tag_parser.links:\n\t\t\tall_links.append(link)\n\t\n\tall_links = set(all_links)\n\tprint 'all links count: ', len(all_links)\n\tfor link in all_links:\n\t\turlopen(link)\n\ndef view_specific_urls(urls):\n\tfor url in urls:\n\t\turlopen(url)\n\nif __name__ == '__main__':\n\t# urls = []\n\t# view_specific_urls(urls)\n\n\turls = []\n\tfor link in urls:\n\t\turlopen(link)\n\n\t#extract_urls_from_html(urls)\n"
},
{
"alpha_fraction": 0.6059113144874573,
"alphanum_fraction": 0.6182265877723694,
"avg_line_length": 26,
"blob_id": "73bcf07dfdd4b92756b5ae3455f8ef4110771597",
"content_id": "f8be40ccd4741c133737cbcdfbc5b549b3643b16",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 406,
"license_type": "no_license",
"max_line_length": 152,
"num_lines": 15,
"path": "/http/curl_array.sh",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "msg_id=(\"1\" \"2\")\nresults=()\nindex=0\nfor str in ${msg_id[@]}; do\n\tparams=\"{\\\"msg_id\\\": ${str}, \\\"registration_ids\\\":[\\\"xxxx\\\"]}\"\n\tres=`curl -s --insecure -X POST -v https://report.jpush.cn/v3/status/message -H \"Content-Type: application/json\" -u \"username:password\" -d \"${params}\"`\n\tresults[index]=$str+$res\n\n\techo ${results[index]}\n\tindex=$(($index+1))\ndone\n\nfor res in ${results[@]}; do\n\techo $res\ndone\n\n"
},
{
"alpha_fraction": 0.5091496109962463,
"alphanum_fraction": 0.6377825736999512,
"avg_line_length": 26.323530197143555,
"blob_id": "4cc12565d355b87396cc643fb25d4aeaa3ecd002",
"content_id": "d6eadc1a44f266132414c2bbe7df03d3eb03fd78",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1858,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 68,
"path": "/redis/analy_redis_log.sh",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#! /bin/sh\n\n#REDIS_SLOW_LOG=./24_redis_slow_log.txt\n#REDIS_SLOW_LOG=./17_csp_redis_slow_log.txt\n#REDIS_SLOW_LOG=./17_csp_slow_log_201810111530.txt\n#REDIS_SLOW_LOG=./24_csp_slow_log_201810111603.txt\n#REDIS_SLOW_LOG=./20_csp_slow_log_201810111637.txt\n#REDIS_SLOW_LOG=./24_redis_slow_log_201810112148.txt\n#REDIS_SLOW_LOG=./20_redis_slow_log_201810112156.txt\n#REDIS_SLOW_LOG=./now_redis_slow.log\n#REDIS_SLOW_LOG=./24_redis_slow_log_201810121003.txt\n#REDIS_SLOW_LOG=./24_redis_slow_log_201810112343.txt\n#REDIS_SLOW_LOG=./24_redis_slow_log_201810121647.txt\n#REDIS_SLOW_LOG=./20_redis_slow_log_201810160027.txt\n#REDIS_SLOW_LOG=./20_redis_slow_log_201810230018.txt\n#REDIS_SLOW_LOG=./20_redis_slow_log_201810230036.txt\n#REDIS_SLOW_LOG=./20_redis_slow_log_201810231350.txt\n#REDIS_SLOW_LOG=./20_redis_slow_log_201810231634.txt\n#REDIS_SLOW_LOG=./20_redis_slow_log_201810232355.txt\nREDIS_SLOW_LOG=./20_redis_slow_log_201810252235.txt\nfunction show_slow_log_human() {\n # timestamp\n i=1\n ts_array=()\n for ts in `cat $REDIS_SLOW_LOG | grep '2) (integer) ' | awk '{print $3}'`\n do\n \ttimestamp=`date -r $ts`\n #echo \"$i: $timestamp\"\n ts_array[i]=`date -r $ts | awk '{print $1, $3}'`\n i=$[i+1]\n done\n\n # cost time\n i=1\n ct_array=()\n for ct in `cat $REDIS_SLOW_LOG | grep '3) (integer) ' | awk '{print $3}'`\n do\n \tcost_time=$[$ct/1000]\n \t#echo \"$i: ${cost_time}ms\"\n \tct_array[i]=$cost_time\n \ti=$[i+1]\n done\n\n # command\n i=1\n cd_array=()\n for cd in `cat $REDIS_SLOW_LOG | grep '4) 1) \"' | awk '{print $3}'`\n do\n \tcd_array[i]=$cd\n \ti=$[i+1]\n done\n\n # redis key\n i=1\n rk_array=()\n for rk in `cat $REDIS_SLOW_LOG | grep '2) \"' | awk '{print $2}'`\n do\n \trk_array[i]=$rk\n \ti=$[i+1]\n done\n\n for ((j=0; j<\"${#ct_array[*]}\";j=j+1))\n do\n \techo \"${ts_array[$j]}, ${cd_array[$j]}:${rk_array[$j]}, ${ct_array[$j]}ms\"\n done\n}\n\nshow_slow_log_human\n"
},
{
"alpha_fraction": 0.574525773525238,
"alphanum_fraction": 0.5934959053993225,
"avg_line_length": 14.375,
"blob_id": "614c74d2a6cd8871432f1a67faeb89bbd53487bb",
"content_id": "8af182ce8e2f0294fe1008740b426fc8f62966b5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 369,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 24,
"path": "/redis/import_redis_dump.sh",
"repo_name": "yzxie/work-tools",
"src_encoding": "UTF-8",
"text": "#! bin/sh\n\nDUMP_FILE=~/work/data/dump.rdb\nREDIS_PORT=6379\n\nfunction refresh_dump() {\n count=`ps -ef | grep redis | wc -l`\n if [ $count -gt 1 ]\n then\n echo \"start stop redis...\"\n redis-cli -p $REDIS_PORT shutdown\n fi\n \n cp $DUMP_FILE .\n if [ $? = 0 ]\n then\n redis-server ./redis.conf\n else\n echo \"dump.rdb not found\"\n exit 1\n fi\n}\n\nrefresh_dump\n"
}
] | 12 |
jd12121/Microsoft-Malware-Prediction | https://github.com/jd12121/Microsoft-Malware-Prediction | 56dff0b2b0e7db6f1e3ac38ab668b2a7a992ed5a | 7f0a94fbdb0cbb194be97bc1bb61231772dbce79 | f42e8e3d36eae39f5f2e6f7416487a443516d25c | refs/heads/master | 2023-05-12T06:44:58.526296 | 2019-03-28T19:03:09 | 2019-03-28T19:03:09 | 177,345,587 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7048345804214478,
"alphanum_fraction": 0.7201017737388611,
"avg_line_length": 27.047618865966797,
"blob_id": "b22c8724f60b481fbd2d30cb92209dd1fbbece6a",
"content_id": "c671a00cf269f15fa5e07018e9336a316585296f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1179,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 42,
"path": "/predict.py",
"repo_name": "jd12121/Microsoft-Malware-Prediction",
"src_encoding": "UTF-8",
"text": "import json\nimport time\nimport numpy as np\nimport pandas as pd\nimport lightgbm as lgb\n\nstart_time = time.time()\n\nwith open('SETTINGS.json', 'r') as f:\n config = json.load(f)\n\n#Assign directories from SETTINGS.json\nclean_test_data = config['TEST_DATA_CLEAN_PATH']\nmodels_dir = config['MODELS_DIR']\nsub_dir = config['SUBMISSION_DIR']\n\n\ntest = pd.read_pickle(clean_test_data)\n\nmodel_0 = lgb.Booster(model_file=f'{models_dir}model_0')\nmodel_1 = lgb.Booster(model_file=f'{models_dir}model_1')\nmodel_2 = lgb.Booster(model_file=f'{models_dir}model_2')\nmodel_3 = lgb.Booster(model_file=f'{models_dir}model_3')\nmodel_4 = lgb.Booster(model_file=f'{models_dir}model_4')\n\nmodels = [model_0, model_1, model_2, model_3, model_4]\n\npredictions = np.zeros(len(test))\n\nfor model in models:\n predictions += model.predict(test.drop(columns=['MachineIdentifier','HasDetections','test_set']))/5\n print('One model prediction completed.')\n\n#Create a submission table and csv\nsub = test[['MachineIdentifier','HasDetections']]\nsub['HasDetections'] = predictions\nsub.to_csv(f'{sub_dir}submission.csv', index=False)\n\n\nend_time = time.time()\ntotal_time = end_time - start_time\nprint(total_time/60)\n\n"
},
{
"alpha_fraction": 0.5680385828018188,
"alphanum_fraction": 0.5932517647743225,
"avg_line_length": 37.52857208251953,
"blob_id": "066766d334e46d03e677e857f4b446c3e2de796d",
"content_id": "9e11f2cc76d38b823a68c376c8aa96ff8477d183",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2697,
"license_type": "no_license",
"max_line_length": 136,
"num_lines": 70,
"path": "/prepare_data.py",
"repo_name": "jd12121/Microsoft-Malware-Prediction",
"src_encoding": "UTF-8",
"text": "import os\nimport json\nimport numpy as np\nimport pandas as pd\n\nwith open('SETTINGS.json', 'r') as f:\n config = json.load(f)\n\nraw_data = config['RAW_DATA_DIR']\ntrain_clean= config['TRAIN_DATA_CLEAN_PATH']\ntest_clean = config['TEST_DATA_CLEAN_PATH']\n\n#Function to reduce file mem size\n#Based on https://www.kaggle.com/gemartin/load-data-reduce-memory-usage\ndef reduce_mem_usage(df, verbose=True):\n numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']\n start_mem = df.memory_usage(deep=True).sum() / 1024**2 \n for col in df.columns:\n col_type = df[col].dtypes\n if col_type in numerics:\n c_min = df[col].min()\n c_max = df[col].max()\n if str(col_type)[:3] == 'int':\n if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:\n df[col] = df[col].astype(np.int8)\n elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:\n df[col] = df[col].astype(np.int16)\n elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:\n df[col] = df[col].astype(np.int32)\n elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:\n df[col] = df[col].astype(np.int64) \n else:\n if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:\n df[col] = df[col].astype(np.float16)\n elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:\n df[col] = df[col].astype(np.float32)\n else:\n df[col] = df[col].astype(np.float64) \n end_mem = df.memory_usage(deep=True).sum() / 1024**2\n if verbose: print('Mem. 
usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))\n return df\n\n\ntrain = reduce_mem_usage(pd.read_csv(f'{raw_data}train.csv'), verbose=False)\ntest = reduce_mem_usage(pd.read_csv(f'{raw_data}test.csv'), verbose=False)\n\n#ID train v test data\ntrain['test_set'] = 0\ntest['test_set'] = 1\n\nfull = pd.concat([train,test],ignore_index=True,sort=False)\n\ndel train\ndel test\n\n#Replace strings with count encodings over the combined train and test data\nfor c in full.drop(columns=['MachineIdentifier','HasDetections','test_set']).columns:\n if full[c].dtype == object:\n print(f'converted {c}')\n full[c] = full[c].map(full[c].value_counts())\n \nfull = reduce_mem_usage(full, verbose=False)\n\ntrain = full[full['test_set']==0]\ntest = full[full['test_set']==1]\n\ntrain.to_pickle(train_clean)\ntest.to_pickle(test_clean)\n\nprint('data saved')\n"
},
{
"alpha_fraction": 0.7552961707115173,
"alphanum_fraction": 0.765239953994751,
"avg_line_length": 49.19565200805664,
"blob_id": "828bd099abb0f7515165ba1911bb4108f6b46120",
"content_id": "f18d2ad7d588c6af1f88dc459ba6f15a0a981e74",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2313,
"license_type": "no_license",
"max_line_length": 252,
"num_lines": 46,
"path": "/README.md",
"repo_name": "jd12121/Microsoft-Malware-Prediction",
"src_encoding": "UTF-8",
"text": "\nHello!\n\nBelow you can find a outline of how to reproduce my 4th place solution for the Microsoft Malware Prediction Challenge competition.\n\n\n# CONTENTS \nDirectories and Files: \ndata/ - The directory will store the raw competition data, along with another folder for the processed data \ndata/clean/ - The processed data used for the final submission \nlogs/ - The training logs, and feature importances of the models \nmodels/ - The 5 LightGBM models that were used to produce the submission \nsubmissions/ - The final submission.csv file \npredict.py - Used to make model predictions, uses data from data/clean/ and the models in models/ , stores predictions in submissions/ \nprepare_data.py - Processes the raw data and saves it in data/clean \nSETTINGS.json - Paths to all directories and file locations references in the code . \ntrain.py - Trains the models, uses data from data/clean/, saves training logs and feature importances in logs/, saves model in models/ \n\n\n\n\n# Hardware: (The following specs were used to create the original solution) \n1. AWS c4.8xlarge (36 vCPUs, 60 GB memory) \n2. Ubuntu 16.04 LTS (100 GB boot disk) \n\n# Software (python packages are detailed separately in `requirements.txt`) \nPython 3.7.1 \n\n# Data setup \nThe following code will download the raw train and test files from the competition. Assumes Kaggle API is installed. \n```\ncd data\nkaggle competitions download microsoft-malware-prediction -f test.csv\nkaggle competitions download microsoft-malware-prediction -f train.csv\n```\n# Process the data\nThe following code will process the raw competition data stored in the data/ directory, and save the processed data in the data/clean/ directory. (NOTE: Running this code will overwrite the files data/clean/test-clean.pkl and data/clean/train.pkl.) \n`python prepare_data.py`\n\n\n# Model training \nThe following code will retrain the models. It will use data in the data/clean/train-clean.pkl file. 
This will take over 9 hours to train (NOTE: This will overwrite all files in logs/ and models/) \n`python train.py` \n\n# Model prediction: \nThe following code will use the data/clean/test-clean.pkl data to produce a new submission file. This will take about 15 minutes. (NOTE: this will overwrite the submissions/submission.csv file.) \n`python predict.py` \n\n"
},
{
"alpha_fraction": 0.5981504917144775,
"alphanum_fraction": 0.618747353553772,
"avg_line_length": 29.113924026489258,
"blob_id": "a622c955b701bb884eaf3a28b4fe1276f8057e35",
"content_id": "158ddfd2312045843c5f4e3fc61ca12efcec491a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2379,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 79,
"path": "/train.py",
"repo_name": "jd12121/Microsoft-Malware-Prediction",
"src_encoding": "UTF-8",
"text": "import os\nimport json\nimport path\nimport numpy as np\nimport pandas as pd\nimport lightgbm as lgb\nfrom sklearn.model_selection import KFold\nimport time\nnp.random.seed(11)\n\n#Assign all directories from SETTINGS.json\nwith open('SETTINGS.json', 'r') as f:\n config = json.load(f)\ntrain_path = config['TRAIN_DATA_CLEAN_PATH']\nmodels_dir = config['MODELS_DIR']\nfeature_importances_path = config['FEATURE_IMPORTANCE_PATH']\n\ntrain = pd.read_pickle(train_path)\n\nkf = KFold(n_splits=5, shuffle=True, random_state=11)\n\nX = train.drop(columns=['MachineIdentifier','HasDetections','test_set'])\ny = train['HasDetections']\n\n#Based on https://www.kaggle.com/artgor/is-this-malware-eda-fe-and-lgb-updated\n#Adjusted num_leaves and max_septh\nparams = {'num_leaves': 128,\n 'min_data_in_leaf': 42,\n 'objective': 'binary',\n 'metric': 'auc',\n 'max_depth': -1,\n 'learning_rate': 0.05,\n 'num_threads': 18,\n \"boosting\": \"gbdt\",\n \"feature_fraction\": 0.8,\n \"bagging_freq\": 5,\n \"bagging_fraction\": 0.8,\n \"bagging_seed\": 11,\n \"lambda_l1\": 0.15,\n \"lambda_l2\": 0.15,\n \"random_state\": 42}\n\n#Create a dataframe to store feature importances during training\nfeats = pd.DataFrame()\nfeats['feature'] = X.columns\nfeats['importance_split'] = 0\nfeats['importance_gain'] = 0\n\nstart_time = time.time()\n\nfor i, (train_ind, val_ind) in enumerate(kf.split(train)):\n \n print(f'Beginning fold {i}')\n \n train_data = lgb.Dataset(X.iloc[train_ind],\n label=y.iloc[train_ind])\n\n val_data = lgb.Dataset(X.iloc[val_ind],\n label=y.iloc[val_ind])\n\n\n model = lgb.train(params,\n train_data,\n num_boost_round=10000,\n valid_sets = [train_data, val_data],\n verbose_eval=100,\n early_stopping_rounds = 100)\n \n #Save both split and gain importances, averaged over 5 folds\n feats['importance_split'] += model.feature_importance()/5\n feats['importance_gain'] += model.feature_importance(importance_type='gain')/5\n \n 
model.save_model(f'{models_dir}model_{i}',num_iteration=model.best_iteration)\n\nend_time = time.time()\nrun_time = end_time - start_time\nprint(f'Time to train: {run_time}')\n\nfeats.to_csv(feature_importances_path, index=False)\n"
},
{
"alpha_fraction": 0.4677419364452362,
"alphanum_fraction": 0.6935483813285828,
"avg_line_length": 14.75,
"blob_id": "622c0cc3a7096d3f6d4c266bd02433e4419338a4",
"content_id": "6268a6b73b524dca150e740b114f7f7dac19de57",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 62,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 4,
"path": "/requirements.txt",
"repo_name": "jd12121/Microsoft-Malware-Prediction",
"src_encoding": "UTF-8",
"text": "numpy==1.15.4\npandas==0.24.1\nlightgbm==2.2.2\nmatplotlib==3.0.2"
}
] | 5 |
Noobistine/Titanic-Kaggle | https://github.com/Noobistine/Titanic-Kaggle | 6bbc90adceb2840ce3d05ca5a74f98cf8e28805e | 9634f93e2e26840fc4bc42b4f1d02d8ada30303a | d57a5a77faa224ca9842d8d561de9a1cfa8db48f | refs/heads/master | 2021-01-12T14:26:58.508030 | 2016-10-05T14:10:57 | 2016-10-05T14:10:57 | 70,066,628 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6301403641700745,
"alphanum_fraction": 0.6479066014289856,
"avg_line_length": 27.06498146057129,
"blob_id": "877bdc3bf89b0c0d76047ffce9f9b2b0bc5e1d92",
"content_id": "853a97c1fc39c20ca1c9a24d2f8a1ab8f076ff2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8049,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 277,
"path": "/titanic.py",
"repo_name": "Noobistine/Titanic-Kaggle",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Mon Oct 03 13:19:16 2016\r\n\r\n@author: C937118\r\n\"\"\"\r\n\r\nimport pandas as pd\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\n\r\n\r\n\r\ntitanic=pd.read_csv(\"H:/Data/Varidas/Ausbildung AXA/Data Scientist/Kaggle/train.csv\"\r\n , sep=',')\r\n\r\n\r\ntitanic_test=pd.read_csv(\"H:/Data/Varidas/Ausbildung AXA/Data Scientist/Kaggle/test.csv\"\r\n , sep=',')\r\n \r\n \r\n\"\"\"\r\n\r\nFirst: Look at the Data\r\n\r\n\"\"\"\r\n \r\n#Scatter Plots\r\naxes=pd.tools.plotting.scatter_matrix(\r\ntitanic[['Survived', 'Pclass','Age', 'Parch','Fare']], alpha=0.2)\r\nplt.tight_layout()\r\n \r\n#Some Information\r\ntitanic.info()\r\ntitanic.describe()\r\n\r\n# Look at a single Record\r\ntitanic.loc[titanic['PassengerId']==5]\r\n\r\n\r\n# Draw some Histograms\r\ntitanic['Survived'].hist(by=titanic['Sex'])\r\ntitanic['Age'].hist(by=titanic['Survived'])\r\ntitanic['Survived'].hist(by=titanic['Pclass'])\r\ntitanic['Survived'].hist(by=titanic['SibSp'])\r\ntitanic['Survived'].hist(by=titanic['Parch'])\r\ntitanic['Fare'].hist(by=titanic['Survived'])\r\ntitanic['Survived'].hist(by=titanic['Embarked'])\r\n\r\n\r\ntitanic.groupby(['Pclass','Survived']).count()\r\n\r\n\r\n\"\"\"\r\n\r\nSecond: Calculate simple and univariate Survival Rates\r\n\r\n\"\"\"\r\n# Calculate survival rate for Male and Female (Univariate) \r\ntitanic.groupby(['Sex','Survived']).count()\r\n\r\n#Brute Force and Ignorance\r\nfemale_surv=titanic.loc[(titanic['Sex']==\"female\") & (titanic['Survived']==1)]\r\nmale_surv=titanic.loc[(titanic['Sex']==\"male\") & (titanic['Survived']==1)]\r\nfemale_death=titanic.loc[(titanic['Sex']==\"female\") & (titanic['Survived']==0)]\r\nmale_death=titanic.loc[(titanic['Sex']==\"male\") & (titanic['Survived']==0)]\r\n\r\nsurvival_female=round(float(len(female_surv))/float(len(female_surv)+len(female_death)),2)\r\nsurvival_male=round(float(len(male_surv))/float(len(male_surv)+len(male_death)),2)\r\nprint 
'Surivval rate for Female is:'\r\nprint survival_female\r\n\r\nprint 'Survival rate for Male is:'\r\nprint survival_male\r\n\r\n#And then the nice way\r\ntitanic.groupby(['Sex']).mean()['Survived']\r\n\r\n\r\n#Calculate survival Rate dependent on Age\r\n\r\ntitanic['Age_grp']=np.round(titanic['Age']/5)*5\r\nsurvival_age1=titanic['Age_grp'].value_counts()\r\nsurvival_age2=titanic.groupby('Age_grp')['Survived'].sum()\r\n\r\nsurvival_age=pd.concat([survival_age1, survival_age2], axis=1)\r\n\r\nsurvival_age['survival_rate']=survival_age['Survived']/survival_age['Age_grp']\r\nsurvival_age.reset_index(level=0, inplace=True)\r\n\r\nsurvival_age.plot(x='index', y='survival_rate')\r\n\r\nax=survival_age[['index', 'survival_rate']].plot(x='index', linestyle='-')\r\nsurvival_age[['index', 'Age_grp']].plot(x='index', kind='bar')\r\n\r\n# Check also Survival Rate dependent ond Embarked\r\ntitanic.groupby(['Embarked']).mean()['Survived']\r\n\r\n\"\"\"\r\n\r\nThree: Clean Data\r\n\r\n\"\"\"\r\n\r\n# Clean Data\r\n\r\n#Insert missing Age\r\n\r\n#Check if the other Information is also missing, if so then delete these records\r\nnull_data=titanic[titanic.isnull().Age]\r\n#--> The rest is known, so insert missing Age with median\r\n\r\n#Calculate median Age per Parch and SibSp Group\r\nhelp_median=titanic.groupby(['Parch', 'SibSp']).median()['Age'].to_frame()\r\n\r\n#Add Index to the dataframe as a string variable to later merge with train\r\nhelp_median['key']=help_median.index.map(str)\r\n\r\n# Ad Key Variable to merge with median Age\r\ntitanic['key']=\"(\"+titanic['Parch'].map(str)+\"L, \"+titanic['SibSp'].map(str)+\"L)\"\r\n\r\n#Merge\r\ntitanic2=pd.merge(titanic, help_median, how='left', on='key')\r\ntitanic2['Age_clean']=np.where(titanic2['Age_x']>=0, titanic2['Age_x'], titanic2['Age_y'])\r\n\r\n\r\n#There are still some 
missing\r\ntitanic2[titanic2.isnull().Age_clean]\r\ntitanic2['Age_clean']=titanic2['Age_clean'].fillna(titanic2['Age_clean'].median())\r\ntitanic2[titanic2.isnull().Age_clean]\r\n\r\n# Map Gender to boolean Variable\r\ntitanic2['Gender']=np.where(titanic2['Sex']=='female',0,1)\r\n\r\n#Map Embarked to integer\r\ntitanic2['Embarked'].unique()\r\ntitanic2['Embarked_int']=np.where(titanic2['Embarked']==\"C\",1,np.where(titanic2['Embarked']==\"Q\",2,0))\r\n\r\n\r\n\r\n\"\"\"\r\n\r\nFour: Try some Models\r\n\r\n\"\"\"\r\n\r\n\"\"\"\r\n\r\nLinear Regression\r\n\r\n\"\"\"\r\nfrom sklearn.linear_model import LinearRegression\r\nfrom sklearn.cross_validation import KFold\r\n\r\npredictors=['Pclass', 'Gender', 'Age_clean', 'SibSp', 'Parch', 'Fare', 'Embarked_int'] \r\n\r\ncross_val=KFold(titanic2.shape[0], n_folds=3, random_state=1)\r\n\r\n#Initialize Series to store prediction results\r\npredictions=[]\r\nalg=LinearRegression()\r\n\r\n \r\nfor train, test in cross_val:\r\n train_pred=titanic2[predictors].iloc[train,:]\r\n train_response=titanic2['Survived'].iloc[train]\r\n \r\n alg.fit(train_pred, train_response)\r\n \r\n test_predictions=alg.predict(titanic2[predictors].iloc[test,:])\r\n predictions.append(test_predictions)\r\n \r\npredictions=np.concatenate(predictions, axis=0)\r\npred=pd.DataFrame(predictions)\r\n\r\npred['out']=np.where(pred[0]<0.5,0,1)\r\n\r\ntitanic3=pd.concat([titanic2,pred], axis=1)\r\ntitanic3.describe()\r\ntitanic3[0].hist() #Values from -0.3 to +1.2\r\n\r\ntitanic3['error']=np.where(titanic3['Survived']==titanic3['out'],1,0)\r\n\r\nblub=titanic3.groupby(titanic3['error']).count()\r\n\r\naccuracy_lin=float(blub.iloc[1][0])/float(len(titanic3))\r\n\r\n\r\n\r\n\"\"\"\r\n\r\nLogistic Regression\r\n\r\n\"\"\"\r\n\r\nfrom sklearn import cross_validation\r\nfrom sklearn.linear_model import LogisticRegression\r\n\r\nalg=LogisticRegression(random_state=1)\r\n\r\nscores=cross_validation.cross_val_score(alg,titanic2[predictors],\r\n titanic['Survived'], 
cv=3)\r\n \r\naccuracy_logit=scores.mean()\r\n \r\nalg.fit(titanic2[predictors], titanic2['Survived'])\r\n\r\n\r\n\r\n#Clean titanic_test likewise\r\ntitanic_test['key']=\"(\" + titanic_test['Parch'].map(str)+\"L, \"+titanic_test['SibSp'].map(str)+\"L)\"\r\n\r\ntitanic_test2=pd.merge(titanic_test, help_median, how='left', on='key')\r\ntitanic_test2['Age_clean']=np.where(titanic_test2['Age_x']>=0,\r\n titanic_test2['Age_x'],\r\n titanic_test2['Age_y'])\r\ntitanic_test2['Age_clean']=titanic_test2['Age_clean'].fillna(titanic['Age'].median())\r\n\r\ntitanic_test2['Gender']=np.where(titanic_test2['Sex']==\"female\",0,1)\r\ntitanic_test2['Embarked_int']=np.where(titanic_test2['Embarked']==\"Q\",2,\r\n np.where(titanic_test2['Embarked']==\"C\",1,0))\r\n\r\ntitanic_test2[titanic_test2.isnull().Fare]\r\ntitanic_test2['Fare']=titanic_test2['Fare'].fillna(titanic_test2['Fare'].median())\r\n\r\n\r\npredictions=alg.predict(titanic_test2[predictors])\r\nsubmission=pd.DataFrame({'PassengerId': titanic_test['PassengerId'], \r\n 'Survived': predictions})\r\n\r\n\r\n\"\"\"\r\n\r\nRandom Forrest\r\n\r\n\"\"\"\r\n\r\nfrom sklearn.ensemble import RandomForestClassifier\r\n\r\nalg=RandomForestClassifier(random_state=1, \r\n n_estimators=50, #Number of Trees\r\n min_samples_split=8, #Minimum number of Rows\r\n min_samples_leaf=4) #Minium Samples at the bottom of the tree\r\n\r\ncross_val=KFold(titanic2.shape[0], n_folds=3, random_state=1)\r\n\r\nscores=cross_validation.cross_val_score(alg,titanic2[predictors], \r\n titanic2['Survived'], cv=cross_val)\r\n\r\naccuracy_rf=scores.mean()\r\n\r\n\r\n\"\"\"\r\n\r\nGradient Boosting\r\n\r\n\"\"\"\r\n\r\nfrom sklearn.ensemble import GradientBoostingClassifier\r\n\r\nalg=GradientBoostingClassifier(random_state=1,\r\n n_estimators=20, #Number of Trees\r\n max_depth=4)\r\n \r\nscores=cross_validation.cross_val_score(alg, titanic2[predictors], titanic2['Survived'], cv=cross_val)\r\n \r\naccuracy_gb=scores.mean()\r\n\r\nalg.fit(titanic2[predictors], 
titanic2['Survived'])\r\n\r\nprediction=alg.predict(titanic_test2[predictors])\r\n\r\nsubmission=pd.DataFrame({'PassengerId': titanic_test2['PassengerId'],\r\n 'Survived': prediction})\r\n \r\nsubmission.to_csv(\"H:/Data/Varidas/Ausbildung AXA/Data Scientist/Kaggle/submission.csv\", index=False)"
},
{
"alpha_fraction": 0.8169013857841492,
"alphanum_fraction": 0.8169013857841492,
"avg_line_length": 34.5,
"blob_id": "84b70fec5afe8952947c33389f38466f76b7e9f2",
"content_id": "c4fb3045a81ad144b185b73c772938bbe126b79e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 71,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 2,
"path": "/README.md",
"repo_name": "Noobistine/Titanic-Kaggle",
"src_encoding": "UTF-8",
"text": "# Titanic-Kaggle\nCode for the submission of the Titanic Kaggle Problem\n"
}
] | 2 |
WayneFerrao/autofocus | https://github.com/WayneFerrao/autofocus | 54dc6c4a1f5eee582ca510c9484f35ea35355fd2 | 80a5d2366639177dbd16708a79b88df17528054c | 34ab586adbeae9fe3c01cca6cf349b406f7d74e6 | refs/heads/main | 2023-01-22T14:49:51.672490 | 2020-12-08T21:54:53 | 2020-12-08T21:54:53 | 302,743,912 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.41287553310394287,
"alphanum_fraction": 0.424206018447876,
"avg_line_length": 33.66257858276367,
"blob_id": "fc2b3eabf760a6b0fa46daf42150a6c4f2d6fe23",
"content_id": "c74a3e81148351bd20527a02ee8a50040898a9bd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5825,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 163,
"path": "/dataanalysis.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import pandas as pd\r\nimport numpy as np\r\nimport seaborn as sns\r\nimport matplotlib.pyplot as plt\r\nsns.set()\r\n\r\ndatafile = \"clean3.csv\"\r\n\r\ndf = pd.read_csv(datafile)\r\n\r\n# =============================================================================\r\n# for elem in range(len(df)):\r\n# try:\r\n# float(df['cylinders'][elem])\r\n# except:\r\n# df['cylinders'][elem] = float(0)\r\n# try:\r\n# float(df['drive'][elem])\r\n# df['drive'][elem] = float(0)\r\n# except:\r\n# pass\r\n# \r\n# df = df.loc[df['cylinders'] != 0]\r\n# df = df.loc[df['drive'] != 0]\r\n# \r\n# df.to_csv(\"clean2.csv\")\r\n# =============================================================================\r\n\r\n#df.groupby(['year']).mean().to_csv(\"year.csv\")\r\nbyYear = df.sort_values(by=['year'])\r\ndata1 = byYear.loc[byYear['year'] <= 2002] #1900-2002\r\ndata2 = byYear.loc[byYear['year'] > 2002] #203-2021\r\n\r\n# =============================================================================\r\n# #YEAR VS PRICE\r\n# # Add title\r\n# plt.title(\"Price of Car Based on Year\")\r\n# \r\n# # Bar chart showing average arrival delay for Spirit Airlines flights by month\r\n# sns.lineplot(x=data1['year'], y=data1['price'])\r\n# \r\n# # Add label for vertical axis\r\n# plt.ylabel(\"Price\")\r\n# =============================================================================\r\n\r\n#DRIVE VS PRICE\r\n# =============================================================================\r\n# #Add title\r\n# plt.title(\"Price of Car Based on Drive\")\r\n# \r\n# # Bar chart showing average arrival delay for Spirit Airlines flights by month\r\n# sns.barplot(x=data2['drive'], y=data2['price'])\r\n# \r\n# # Add label for vertical axis\r\n# plt.ylabel(\"Price\")\r\n# =============================================================================\r\n\r\n#CYLINDER VS PRICE\r\n# =============================================================================\r\n# #Add title\r\n# plt.title(\"Price of Car 
Based on Cylinders\")\r\n# \r\n# # Bar chart showing average arrival delay for Spirit Airlines flights by month\r\n# sns.barplot(x=data2['cylinders'], y=data2['price'])\r\n# \r\n# # Add label for vertical axis\r\n# plt.ylabel(\"Price\")\r\n# =============================================================================\r\n\r\n#ODOMETER VS PRICE\r\n# =============================================================================\r\n# # Add title\r\n# plt.title(\"Price of Car Based on Odometer\")\r\n# \r\n# # Bar chart showing average arrival delay for Spirit Airlines flights by month\r\n# sns.lineplot(x=data1['odometer'], y=data1['price'])\r\n# \r\n# # Add label for vertical axis\r\n# plt.ylabel(\"Price\")\r\n# =============================================================================\r\n\r\n# =============================================================================\r\n# #COLOR VS PRICE\r\n# # Add title\r\n# plt.title(\"Average Price of Car Based on Colour\")\r\n# \r\n# # Bar chart showing average arrival delay for Spirit Airlines flights by month\r\n# sns.barplot(x=data2['paint_color'], y=data2['price'])\r\n# \r\n# # Add label for vertical axis\r\n# plt.ylabel(\"Price\")\r\n# =============================================================================\r\n\r\n# =============================================================================\r\n# #TRANSMISSION VS PRICE\r\n# # Add title\r\n# plt.title(\"Average Price of Car Based on Transmission\")\r\n# \r\n# # Bar chart showing average arrival delay for Spirit Airlines flights by month\r\n# sns.barplot(x=data2['transmission'], y=data2['price'])\r\n# \r\n# # Add label for vertical axis\r\n# plt.ylabel(\"Price\")\r\n# =============================================================================\r\n\r\n# =============================================================================\r\n# #ODOMETER VS YEAR\r\n# #Add title\r\n# plt.title(\"ODOMETER VS YEAR\")\r\n# \r\n# # Bar chart showing average arrival delay for Spirit 
Airlines flights by month\r\n# sns.lineplot(x=data2['year'], y=data2['odometer'])\r\n# \r\n# # Add label for vertical axis\r\n# plt.ylabel(\"Odometer\")\r\n# =============================================================================\r\n\r\n\r\n# =============================================================================\r\n# #DISTRIBUTION OF COLOR\r\n# colors = data2.pivot_table(index=['paint_color'], aggfunc='size')\r\n# print(colors)\r\n# for i in range(len(colors)):\r\n# print(colors[i])\r\n# print(len(data1))\r\n# colors[i] = (colors[i]/len(data2['paint_color'])) * 100\r\n# \r\n# print(colors)\r\n# =============================================================================\r\n\r\n\r\n# =============================================================================\r\n# #DISTRIBUTION OF TRANSMISSION\r\n# transmissions = data2.pivot_table(index=['transmission'], aggfunc='size')\r\n# print(transmissions)\r\n# for i in range(len(transmissions)):\r\n# print(transmissions[i])\r\n# transmissions[i] = (transmissions[i]/len(data2['transmission'])) * 100\r\n# \r\n# print(transmissions)\r\n# =============================================================================\r\n\r\n# =============================================================================\r\n# #DISTRIBUTION OF DRIVE\r\n# drives = data2.pivot_table(index=['drive'], aggfunc='size')\r\n# print(drives)\r\n# for i in range(len(drives)):\r\n# print(drives[i])\r\n# drives[i] = (drives[i]/len(data2['drive'])) * 100\r\n# \r\n# print(drives)\r\n# =============================================================================\r\n\r\n# =============================================================================\r\n# #DISTRIBUTION OF CYLINDER\r\n# cylinders = data2.applymap(str).pivot_table(index=['cylinders'], aggfunc='size')\r\n# print(cylinders)\r\n# for i in range(len(cylinders)):\r\n# print(cylinders[i])\r\n# cylinders[i] = (cylinders[i]/len(data2['cylinders'])) * 100\r\n# \r\n# print(cylinders)\r\n# 
=============================================================================\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.6202805042266846,
"alphanum_fraction": 0.628910481929779,
"avg_line_length": 30.421052932739258,
"blob_id": "93e7758e3ae47c82069782c05a02f134c5da3cea",
"content_id": "bfa408adc6662fa70f8ac3da7dbc86ea23d73b56",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1854,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 57,
"path": "/odometer.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import pandas as pd\r\nimport numpy as np \r\nimport random\r\nimport csv\r\n\r\nfile = \"dummy4.csv\"\r\n\r\ndf = pd.read_csv(file)\r\n\r\n#odometer has an \"automatic\" value that we need to remove\r\ndf['odometer'].fillna(0, inplace=True)\r\ndf['price'].fillna(0.0, inplace=True)\r\ndf['year'].fillna(0.0, inplace=True)\r\n\r\n#removes all values from price, odometer, and year that are non convertible to float\r\nfor elem in range(len(df)):\r\n try:\r\n df['odometer'][elem].astype(float)\r\n except:\r\n df['odometer'][elem] = float(0)\r\n try:\r\n df['price'][elem].astype(float)\r\n except:\r\n df['price'][elem] = float(0)\r\n try:\r\n df['year'][elem].astype(float)\r\n except:\r\n df['year'][elem] = float(0)\r\n\r\n#new dff using df without null price and year\r\ndff = df[df.price.notnull()]\r\ndff = dff[df.year.notnull()]\r\ndff = dff.loc[dff['odometer'] != 0]\r\n\r\n#run through each year and get number to represent average price of a car that year.\r\nuniqueYears = set(dff['year'])\r\nuniqueYears.remove(0)\r\nratios = {}\r\nfor year in uniqueYears:\r\n dfYear = dff.loc[dff['year'] == year]\r\n #add column of price / odometer\r\n dfYear['rate'] = dfYear['price'] / dfYear['odometer']\r\n #find average of price / odometer\r\n average = dfYear['rate'].sum()/len(dfYear['rate'])\r\n ratios[year] = average\r\nprint(ratios)\r\n#at this point, each average is matched with the index of that average + 1900\r\n#with empty odometers, take the average of the year and do price/average = odometer\r\nfor row in range(len(df)):\r\n if df['odometer'][row] == 0 and df['year'][row] in ratios:\r\n #get new odometer value, then set df['odometer'][row]\r\n year = df['year'][row]\r\n average = ratios[year]\r\n odometer = df['price'][row] / average\r\n df['odometer'][row] = odometer\r\nnewfile = \"odometerTest.csv\"\r\ndf.to_csv(newfile)\r\n \r\n"
},
{
"alpha_fraction": 0.6004728078842163,
"alphanum_fraction": 0.6199763417243958,
"avg_line_length": 20.69230842590332,
"blob_id": "75bd63a42f825b4b73133144cd475fa7e2ecd860",
"content_id": "95635dddf64b471fca814339695c7e6fee6078a9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1692,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 78,
"path": "/client/src/App.js",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import car from './car.jpg'\nimport {useState, useEffect} from 'react';\nimport styled from \"styled-components\";\nimport {BrowserRouter as Router, Route} from 'react-router-dom';\nimport Search from './Search';\nimport NavBar from './NavBar';\n\nconst BGContainer = styled.div`\n width: 100vw;\n height: 100vh;\n background-image: url(${car});\n background-attachment: fixed;\n background-position: center;\n background-repeat: no-repeat;\n background-size: cover;\n overflow-y: auto;\n`;\n\nconst Title = styled.div`\n font-family: 'Carme';\n font-weight: 400;\n color: #2d262a;\n font-size: 6.5em;\n text-align: center;\n padding-top: 2%;\n @media (min-width: 1024px) {\n padding-top:3%;\n }\n margin-bottom: 0;\n padding-bottom: 0;\n`;\n\nconst Motto = styled.h3`\n font-family: 'Source Sans Pro';\n font-weight: 300;\n font-style: italic;\n font-size: 1.6em;\n text-align: center;\n margin-top: 0.5%;\n`;\nconst TransitionText = styled.h1`\n text-align: center;\n overflow: hidden;\n flex-direction: column;\n`;\nfunction App() {\n const [currentTime, setCurrentTime] = useState(0);\n\n // useEffect(()=>{\n // fetch('/time').then(res => res.json()).then(data => {\n // setCurrentTime(data.time);\n // })\n // },[]); \n\n \n const Home = () => (\n <body>\n <BGContainer>\n <NavBar/>\n <Title>Autofocus</Title>\n <Motto>The true price of a car.</Motto>\n </BGContainer>\n <TransitionText class=\"subTitle\"> Let's find out the real price </TransitionText>\n\n <Search/>\n </body>\n );\n \n return (\n <Router>\n <Route exact path='/' component={Home}/>\n <Route path='/search' component={Search}/>\n \n </Router>\n );\n}\n\nexport default App;\n"
},
{
"alpha_fraction": 0.6878980994224548,
"alphanum_fraction": 0.7016985416412354,
"avg_line_length": 28.45161247253418,
"blob_id": "b7d7e7cf894b0516f02c5a3dad3a65f782ff659a",
"content_id": "51b1b62365c30d926e59b0c3e3d81509eac980e8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 942,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 31,
"path": "/linear2.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport pandas as pd\r\nfrom sklearn.linear_model import LinearRegression\r\nfrom sklearn.model_selection import train_test_split\r\n\r\ndf = pd.read_csv(\"finalEncoded.csv\")\r\ny = df['price']\r\nX = df.drop(columns=['price'])\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)\r\n\r\nreg = LinearRegression().fit(X_train, y_train)\r\n\r\ny_pred = reg.predict(X_test)\r\ndiff = abs(y_pred - y_test)\r\n\r\nprint(sum(diff)/len(diff))\r\nprint(reg.score(X_train, y_train))\r\n\r\n# Plot outputs\r\nplt.figure(figsize=(10,7))\r\nplt.scatter(y_test, y_pred)\r\nplt.xlim(0,80000)\r\n#plt.plot(X_test, fitted_svr_model.predict(X_test), color='red')\r\n#plt.plot(X_test, fitted_svr_model.predict(X_test)+eps, color='black')\r\n#plt.plot(X_test, fitted_svr_model.predict(X_test)-eps, color='black')\r\nplt.xlabel('Actual')\r\nplt.ylabel('Predicted')\r\nplt.title('Linear Regression Prediction')\r\nplt.show()"
},
{
"alpha_fraction": 0.6639344096183777,
"alphanum_fraction": 0.6657559275627136,
"avg_line_length": 30.342857360839844,
"blob_id": "642fce79b326fa0ee0ca3f24bce589d7dc0bc8a2",
"content_id": "f0426ac76b0c04c011c9e3df1d12cc454be6b812",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1098,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 35,
"path": "/server/venv/api.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "from flask import Flask\nfrom flask import request\nimport datetime \nimport json\n# from RFR import reg\nfrom flask_cors import CORS\nimport pickle\n\napp = Flask(__name__)\nCORS(app)\n\nwith open('encode.json') as json_file:\n data = json.load(json_file)\[email protected]('/getData', methods=['GET', 'POST'])\ndef get_data():\n # print(request.form)\n # print(data)\n manufacturer = data['manufacturer'][request.form.get('manufacturer')]\n\n # model = data['model'][request.form.get('model')]\n model = 1\n odometer = request.form.get('mileage')\n transmission = data['transmission'][request.form.get('transmission')]\n color = data['color'][request.form.get('color')]\n drive = data['drive'][request.form.get('drive')]\n cylinders = request.form.get('cylinders')\n year = request.form.get('year')\n\n queryPoint = [[cylinders, drive, manufacturer, model, odometer, color, transmission, year]]\n print(queryPoint)\n print('//////////')\n reg = pickle.load(open('finalized_model_pickle.sav', 'rb'))\n prediction = reg.predict(queryPoint)\n print(prediction[0])\n return (\"BOI\")\n\n"
},
{
"alpha_fraction": 0.6284469962120056,
"alphanum_fraction": 0.6458635926246643,
"avg_line_length": 27.276596069335938,
"blob_id": "2a39871432b184a3c1d18cfd30d779233898454b",
"content_id": "49012a5da05b0b4dff8a58f01fc482b6c5f52470",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1378,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 47,
"path": "/nn.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import keras as keras\r\nimport keras\r\nimport tensorflow as tf\r\nimport pandas as pd\r\nfrom sklearn.model_selection import train_test_split\r\n\r\n#use dot graph\r\n'''\r\n#get X and y\r\ndftrain = pd.read_csv(\"training.csv\")\r\ndftest = pd.read_csv(\"testing.csv\")\r\n#get ys\r\nytrain = dftrain['price']\r\nytest = dftest['price']\r\n#get Xs\r\nXtrain = dftrain.drop(columns='price')\r\nXtest = dftest.drop(columns='price')\r\n'''\r\ndf = pd.read_csv(\"finalenc.csv\")\r\ny = df['price']\r\nX = df.drop(columns=['price'])\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)\r\n\r\ntf.keras.backend.clear_session()\r\ntf.random.set_seed(60)\r\nmodel=keras.models.Sequential([\r\n #input layer\r\n keras.layers.Dense(X_train.shape[1], input_dim = X_train.shape[1], activation='relu'), \r\n keras.layers.BatchNormalization(),\r\n keras.layers.Dropout(0.3),\r\n\r\n keras.layers.Dense(units=30,activation='relu'), \r\n keras.layers.BatchNormalization(),\r\n keras.layers.Dropout(0.2),\r\n\r\n #output layer\r\n keras.layers.Dense(units=1, activation=\"linear\"),\r\n],name=\"Batchnorm\",)\r\n\r\noptimizer = keras.optimizers.Adam()\r\nmodel.compile(optimizer=optimizer, \r\n loss='mean_absolute_error')\r\nhistory = model.fit(X_train, y_train,\r\n epochs=100, batch_size=2000,\r\n validation_data=(X_test, y_test), \r\n verbose=1)\r\n\r\n"
},
{
"alpha_fraction": 0.6043513417243958,
"alphanum_fraction": 0.6349717974662781,
"avg_line_length": 18.714284896850586,
"blob_id": "606c7badbd406304853a1ee5a65c26798a6ffa79",
"content_id": "7c90d786dbc8c7ce8c7d6189dbb61643e2c3563d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1241,
"license_type": "permissive",
"max_line_length": 38,
"num_lines": 63,
"path": "/client/src/Search.js",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import styled from 'styled-components'\nimport car from './car.jpg'\nimport Party from './Particles';\nimport CarForm from './CarForm';\n\nconst SharpBlur = styled.div`\n overflow: hidden;\n flex-direction: column;\n width:100vw;\n`;\n\nconst SearchPageParallax = styled.div`\n position: absolute;\n margin-left:12%;\n margin-top: 4%;\n display: flex;\n z-index: 999;\n`;\nconst SearchPageBG = styled.div`\n height: 100vh;\n width: 100vw;\n\n background-image: url(${car});\n -webkit-filter: blur(10px);\n -moz-filter: blur(10px);\n -o-filter: blur(10px);\n -ms-filter: blur(10px);\n filter: blur(10px);\n background-attachment: fixed;\n background-position: center;\n background-repeat: no-repeat;\n background-size: cover;\n overflow-y: auto;\n margin: -5px -10px -10px;\n z-index: 1;\n`;\nconst ParticleContainer = styled.div`\nposition:absolute;\n width: 100vw;\n text-align:center\n height: 70vh;\n z-index: 50;\n`;\n\nexport default function Search() {\n return (\n <SharpBlur>\n <SearchPageParallax>\n\n </SearchPageParallax>\n <ParticleContainer>\n <CarForm/>\n\n <Party/>\n </ParticleContainer>\n <SearchPageBG>\n <CarForm/>\n \n </SearchPageBG>\n \n </SharpBlur>\n );\n}"
},
{
"alpha_fraction": 0.7237991094589233,
"alphanum_fraction": 0.7412663698196411,
"avg_line_length": 30.785715103149414,
"blob_id": "0b27b8d71ba76066804353244eebc988a8eec865",
"content_id": "844a059771536f1bdead9e1a1dcd9d080ba67b15",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 916,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 28,
"path": "/nn2.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "from sklearn.svm import LinearSVR\r\nfrom sklearn.linear_model import SGDRegressor\r\nfrom sklearn.linear_model import LinearRegression\r\nfrom sklearn.pipeline import make_pipeline\r\nfrom sklearn.preprocessing import StandardScaler\r\nfrom sklearn.datasets import make_regression\r\nfrom sklearn.model_selection import train_test_split\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\ndf = pd.read_csv(\"finalenc.csv\")\r\ny = df['price']\r\nX = df.drop(columns=['price'])\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)\r\n\r\nregr = make_pipeline(StandardScaler(), LinearSVR(random_state=0, tol=1e-03))\r\nreg = LinearRegression().fit(X_train, y_train)\r\n\r\nregr.fit(X_train,y_train)\r\n\r\ny_pred = regr.predict(X_test)\r\n\r\nplt.figure()\r\nplt.plot(range(100000))\r\nplt.scatter(y_test ,y_pred, alpha=0.4, c='red', label='Ground Truth vs Predicted')\r\nplt.savefig('SVR.png')"
},
{
"alpha_fraction": 0.5895646214485168,
"alphanum_fraction": 0.5925556421279907,
"avg_line_length": 33.30588150024414,
"blob_id": "70c7ecc0e3ee22f9ec9618fbca69fc1774d24385",
"content_id": "5baa09794a9ba3ebe2eaab5d17f5d151b51d44c0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3009,
"license_type": "permissive",
"max_line_length": 149,
"num_lines": 85,
"path": "/colorcleaning.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import pandas as pd\r\nimport numpy as np \r\nimport random\r\nimport csv\r\n\r\n\r\ndef isNan(string):\r\n return string != string\r\n\r\ndef occurences(model):\r\n \"\"\"returns percentage of each color in each model as a color: percent dictionary\"\"\"\r\n nulls = 0\r\n notnulls = 0\r\n occurence = {}\r\n #finds occurence of each color\r\n for i in range(len(df)):\r\n #find df model that matches and get colors that arent null\r\n if isNan(df['paint_color'][i]) == False and df['model'][i] == model:\r\n #update occurence of model\r\n if df['paint_color'][i] in occurence:\r\n occurence[df['paint_color'][i]] += 1\r\n else:\r\n occurence[df['paint_color'][i]] = 1\r\n notnulls += 1\r\n #nan for model\r\n elif df['model'][i] == model:\r\n nulls += 1\r\n \r\n for key in occurence:\r\n occurence[key] = round((occurence[key] / notnulls) * nulls)\r\n \r\n return occurence\r\n \r\ndef fillNan(dictionary):\r\n \"\"\"takes dictionary of percentages and returns list of colors\r\n of size nulls\"\"\"\r\n nullList = []\r\n sumValues = sum(dictionary.values())\r\n for i in range(sumValues):\r\n while True:\r\n key = random.choice(list(dictionary))\r\n if dictionary[key] > 0:\r\n nullList.append(key)\r\n dictionary[key] -= 1\r\n break\r\n \r\n return nullList\r\n\r\ndef listOfModelColours(model):\r\n \"\"\"return list of colours in order of appearance of model in csv\"\"\"\r\n modelOccurenceDict = occurences(model) \r\n #NOTE: NUMBER OF VALUES IN DICT MAY BE HIGHER OR LOWER BY A NUMBER OR SO.\r\n newNullList = fillNan(modelOccurenceDict)\r\n return newNullList\r\n\r\n\r\ndf = pd.read_csv(\"out.csv\", dtype={\"numbers\":\"string\", \"condition\": \"string\", \"id\": \"string\", \"odometer\":\"string\", \"price\":\"string\",\"year\":\"string\"})\r\n\r\nmodels = list(set(df['model']))\r\nmodelDict = {}\r\nfor model in models:\r\n \"\"\"fill in every value for model if there is at least one of these models with\r\n a colour to base off of. 
Find ratio of color, then create list of size nan model colors\"\"\"\r\n modelDict[model] = listOfModelColours(model)\r\n \r\nfilledInList = []\r\n#iterate through and append each corresponding list \r\nfor i in range(len(df)):\r\n #i is model and color index\r\n #check if color is null\r\n if isNan(df['paint_color'][i]):\r\n #check if the modelDict has color values to append with\r\n if len(modelDict[df['model'][i]]) > 0:\r\n #take modelDict value list and append\r\n filledInList.append([df['model'][i], modelDict[df['model'][i]].pop()])\r\n else:\r\n #take df model and value and append\r\n filledInList.append([df['model'][i], df['paint_color'][i]])\r\n else:\r\n #take df model and value and append\r\n filledInList.append([df['model'][i], df['paint_color'][i]])\r\n \r\n#filledInList contains [model, color]\r\ndf = pd.DataFrame(filledInList)\r\ndf.to_csv(\"dummy.csv\")\r\n\r\n \r\n"
},
{
"alpha_fraction": 0.646118700504303,
"alphanum_fraction": 0.6621004343032837,
"avg_line_length": 31.730770111083984,
"blob_id": "bababc243d63858ae25b29758afb1c0a8120bb06",
"content_id": "f4b5c5e0a6e12d4158c8bbaadd7c4c0076eaf244",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1752,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 52,
"path": "/SVR.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "#1 Importing the libraries\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nfrom sklearn.svm import SVR\r\nimport pandas as pd\r\nfrom sklearn.svm import LinearSVR\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.metrics import mean_absolute_error \r\nfrom sklearn.linear_model import SGDRegressor\r\n\r\n\r\n#Importing the dataset\r\n\r\ndf = pd.read_csv(\"finalEncoded.csv\")\r\ny = df['price']\r\nX = df.drop(columns=['price'])\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)\r\n\r\nsvr = LinearSVR(epsilon=0.01, C=0.01, fit_intercept=True)\r\n\r\nsvr.fit(X_train, y_train)\r\n\r\n\r\ndef svr_results(y_test, X_test, fitted_svr_model):\r\n \r\n print(\"C: {}\".format(fitted_svr_model.C))\r\n print(\"Epsilon: {}\".format(fitted_svr_model.epsilon))\r\n \r\n print(\"Intercept: {:,.3f}\".format(fitted_svr_model.intercept_[0]))\r\n print(\"Coefficient: {:,.3f}\".format(fitted_svr_model.coef_[0]))\r\n \r\n mae = mean_absolute_error(y_test, fitted_svr_model.predict(X_test))\r\n print(\"MAE = ${:,.2f}\".format(1000*mae))\r\n \r\n perc_within_eps = 100*np.sum(y_test - fitted_svr_model.predict(X_test) < 5) / len(y_test)\r\n print(\"Percentage within Epsilon = {:,.2f}%\".format(perc_within_eps))\r\n \r\n # Plot outputs\r\n plt.figure(figsize=(10,7))\r\n plt.scatter(y_test, fitted_svr_model.predict(X_test))\r\n #plt.plot(X_test, fitted_svr_model.predict(X_test), color='red')\r\n #plt.plot(X_test, fitted_svr_model.predict(X_test)+eps, color='black')\r\n #plt.plot(X_test, fitted_svr_model.predict(X_test)-eps, color='black')\r\n plt.xlabel('Actual')\r\n plt.ylabel('Predicted')\r\n plt.title('SVR Prediction')\r\n plt.show()\r\n\r\nprint(y_test.shape)\r\nprint(X_test.shape)\r\nsvr_results(y_test, X_test, svr)"
},
{
"alpha_fraction": 0.6067742109298706,
"alphanum_fraction": 0.620322585105896,
"avg_line_length": 27.971961975097656,
"blob_id": "6c677e30d075de7fe0504ea29c42f25d3f4a8f81",
"content_id": "565f722ecf60fbfd1324f2caa0c8e88d69e827df",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3100,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 107,
"path": "/data_fill.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport numpy as np\n\n#For filling values\n\ndef get_counts(df,obj): #where object is transmission,drive or whatever\n model_dict = {}\n models = df['model'].unique()\n for model in models:\n model_dict[model] = {}\n new_df = df.groupby(['model',obj]).size().reset_index()\n for i in range(0,len(new_df)):\n line = new_df.iloc[i]\n model,transmission = line['model'],line[obj]\n model_dict[model][transmission] = line[0]\n return model_dict\n\ndef make_ratios(dictionary,df):\n new_dict = {}\n models = df['model'].unique()\n for model in models:\n new_dict[model] = {}\n for i in dictionary:\n for obj in dictionary[i]:\n #print(dictionary[i][trans],sum(dictionary[i].values()))\n new_dict[i][obj] = dictionary[i][obj] / sum(dictionary[i].values())\n return new_dict\n\ndef make_counts(dictionary,obj): #turn the ratios into the number of values that need to be replaced\n new_dict = {}\n for m in dictionary:\n new_dict[m] = {}\n\n for model in dictionary:\n total = len(df[df['model'] == model][obj] == np.nan)\n for trans in dictionary[model]:\n new_dict[model][trans] = round(dictionary[model][trans] * total)\n\n if dictionary[model] != {} and sum(new_dict[model].values()) != total:\n new_dict[model][trans] += abs(total - sum(new_dict[model].values()))\n \n return new_dict\n\ndef fill_values(dictionary,df,obj): #fill the values in the dataframe based\n final_df_l = []\n count = 0\n for model in dictionary:\n print(model,count,'/',len(dictionary))\n prev = 0\n model_df = df[df['model'] == model]\n for trans in dictionary[model]:\n num = dictionary[model][trans]\n rep_df = model_df[prev:prev+num]\n prev = num\n rep_df = rep_df.fillna({obj:trans})\n final_df_l.append(rep_df)\n count += 1\n output_df = pd.concat(final_df_l)\n return output_df\n#df is shorthand for DataFrame\ndf = pd.read_csv('clean2.csv')\n\nd1 = get_counts(df,'transmission')\nd1 = make_ratios(d1,df)\nd1 = make_counts(d1,'transmission')\n\ndf1 = fill_values(d1,df,'transmission')\n\nd2 
= get_counts(df,'cylinders')\nd2 = make_ratios(d2,df)\nd2 = make_counts(d2,'cylinders')\n\ndf2 = fill_values(d2,df1,'cylinders')\n\n\nd3 = get_counts(df,'drive')\nd3 = make_ratios(d3,df)\nd3 = make_counts(d3,'drive')\n\ndf3 = fill_values(d2,df2,'drive')\n\n\ndf3.to_csv('test.csv')\n#print(d)\n#print(d)\n'''\ndf = df.drop('lat',axis=1)\ndf = df.drop('long',axis=1)\ndf = df.drop('url',axis=1)\ndf = df.drop('region_url',axis=1)\ndf = df.drop('county',axis=1)\ndf = df.drop('description',axis=1)\ndf = df.drop('image_url',axis=1)\ndf = df.drop('vin',axis=1)\ndf = df.drop('type',axis=1)\ndf = df.drop('size',axis=1)\ndf = df.drop('fuel',axis=1)\ndf = df.drop('state',axis=1)'''\n\n# At this point, most of the useless columns are dropped. \n# Next steps\n\n# Remove or update bad rows-> those with missing values for main columns like make, model etc\n# Ensure each row has all columns filled/updated\n\n#Write to a vehicles_final.csv\n#print(df)\n"
},
{
"alpha_fraction": 0.4900662302970886,
"alphanum_fraction": 0.695364236831665,
"avg_line_length": 15.88888931274414,
"blob_id": "68d7d3853f2234bcbc2cf6fabd40ce19656b548f",
"content_id": "4a3ee85f6e70a4a5dec8443844fd3d20fccdec36",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 151,
"license_type": "permissive",
"max_line_length": 21,
"num_lines": 9,
"path": "/server/venv/requirements.txt",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "click==7.1.2\nFlask==1.1.2\nitsdangerous==1.1.0\nJinja2==2.11.2\nMarkupSafe==1.1.1\npython-dotenv==0.15.0\nWerkzeug==1.0.1\nscikit-learn==0.23.2\npandas==1.1.4"
},
{
"alpha_fraction": 0.7662835121154785,
"alphanum_fraction": 0.7662835121154785,
"avg_line_length": 25.100000381469727,
"blob_id": "9249d6ec8982e14852df75396e7a02555e636268",
"content_id": "4536aad000da0dd56efc166edef0e4d330084eda",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 522,
"license_type": "permissive",
"max_line_length": 131,
"num_lines": 20,
"path": "/README.md",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "# autofocus\nA web application focused on finding the most accurate price of a used car using machine learning techniques.\n\n\nStart up the Autofocus webapp (a development server) by going into the client directory and entering the following in your terminal\n\n```\nyarn start\n```\nIn a separate terminal window, cd into the server/venv folder and run \n\n```\npip install -r requirements.txt\n```\n\nNext, start up the Flask back-end server by going back into the client directory and entering the following:\n\n```\nyarn start-api\n```\n"
},
{
"alpha_fraction": 0.5901195406913757,
"alphanum_fraction": 0.6072636842727661,
"avg_line_length": 39.678897857666016,
"blob_id": "994bf8d6f3a3b4cd90505a2a3245fc11df91d4d5",
"content_id": "a83ed667904c03232b8ca46964c09a30eabfd035",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4433,
"license_type": "permissive",
"max_line_length": 171,
"num_lines": 109,
"path": "/clean.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import pandas as pd \nimport json as js\nimport csv\nfrom multiprocessing import Process, Pipe\nimport os\n\n#checking if a given model is present in the dictionary already\ndef add_model(model,car_data,manu):\n if model not in car_data:\n car_data[model] = manu\n\n#Parses the car db file and returns a dict containing a mapping from Models -> Brand/Manufacturer\ndef parse_car_db():\n\n f = open('car_database.json','r',encoding=\"utf-8\").readlines()\n cars = {}\n\n for line in f:\n parsed = js.loads(line)\n #add_model(parsed['Brand'] + ' ' + parsed['Model'],cars)\n add_model(parsed['Model'].lower(),cars,parsed['Brand'])\n return cars\n\ndef compareAndPrune(car_dict,pd_frame,i1,i2,conn):\n count = 0\n pid = os.getpid()\n new_df = pd.DataFrame()\n\n for i in range(i1,i2):\n conn.send([pid,str(count) + '/' + str((i2-i1)) ]) #send current progress to main process\n url = pd_frame.url[i].split('/')[5].split('-') #split the URL\n if len(url) > 3: #handles cases where the URL is not formatted correctly \n url_data = check_parsed_url(url)\n date = url_data[0]\n mod = pd_frame.model[i]\n\n if type(mod) == str: #if the model isn't empty\n url_data.extend(mod.split(' ')) #add the words in the model string to the rest of the words we need to check\n\n for j in range(1,len(url_data)): #iterate over all words we need to check against the cars, start at index 1, since the first value in the url_data is the date\n item = url_data[j].lower() #make our guess lowercase\n if item in car_dict: #if the guess is in the car db, then add it to a temp dataframe, retaining all other data for the record,\n temp = pd_frame.loc[i]\n temp.model = item #set the model value to the correctly guessed model\n temp.manufacturer = car_dict[item] #set the manufacturer value to the one in the car db (more accurate)\n \n cylinder = pd_frame.cylinders[i] #clean up the cylinder column\n if type(cylinder) != float:\n temp.cylinders = pd_frame.cylinders[i][0]\n temp.date = date #set date value to the 
extracted value from the URL (more accurate)\n new_df = new_df.append(temp)\n count += 1 #add value to count\n break\n return new_df\n\n#helper function to reformat the url if the location has a hyphen\ndef check_parsed_url(url_list):\n try:\n int(url_list[1])\n return url_list[1:]\n except ValueError:\n return url_list[2:]\n\ndef main_process(car_stats,df,i1,i2,conn,filename='out.csv'):\n new_df = compareAndPrune(car_stats,df,i1,i2,conn)\n new_df = new_df.drop('url',axis=1) #drop url axis\n new_df.to_csv(filename,mode='a',index=True) #append results to main file\n\nif __name__ == '__main__':\n car_stats = parse_car_db()\n matches = 0\n df = pd.read_csv('vehicles.csv')\n\n df = df.drop('lat',axis=1)\n df = df.drop('long',axis=1)\n df = df.drop('region_url',axis=1)\n df = df.drop('county',axis=1)\n df = df.drop('description',axis=1)\n df = df.drop('image_url',axis=1)\n df = df.drop('vin',axis=1)\n df = df.drop('type',axis=1)\n df = df.drop('size',axis=1)\n df = df.drop('fuel',axis=1)\n df = df.drop('state',axis=1)\n\n df1 = df.iloc[0:len(df.model)//2] #First third of dataframe\n df2 = df.iloc[len(df.model)//3:2*len(df.model)//3] #Second third of dataframe\n df3 = df.iloc[2*len(df.model)//3:] #Third third of dataframe\n\n parent_conn1, child_conn1 = Pipe()\n parent_conn2, child_conn2 = Pipe()\n parent_conn3, child_conn3 = Pipe()\n\n p1 = Process(target=main_process,args=(car_stats,df1,0,len(df.model)//3,child_conn1,0))\n p2 = Process(target=main_process,args=(car_stats,df2,len(df.model)//3,2 * (len(df.model))//3,child_conn2,1))\n p3 = Process(target=main_process,args=(car_stats,df3,2*(len(df.model))//3,len(df.model),child_conn3,2))\n\n p1.start()\n p2.start()\n p3.start()\n\n while True: #print the progress of each process\n print(parent_conn1.recv(),parent_conn2.recv(),parent_conn3.recv())\n\n #guessing colour\n #figure out the colour breakdown of the ford mustang\n #g = a['model'].unique() \n #g = a.groupby(['model'])['model'].count()\n # g = 
g.sort_values(ascending=False)"
},
{
"alpha_fraction": 0.6878452897071838,
"alphanum_fraction": 0.7163904309272766,
"avg_line_length": 25.897436141967773,
"blob_id": "43305e356e65c358331f063bf9289c2be160dfb4",
"content_id": "3932f2ed89f55f79c5f7c4dca34b4c234534e6e3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1086,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 39,
"path": "/server/venv/RFR.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "from sklearn.ensemble import RandomForestRegressor\r\nfrom sklearn.datasets import make_regression\r\nfrom sklearn.model_selection import train_test_split\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np \r\nimport pandas as pd\r\nimport pickle\r\nimport statistics\r\n\r\ndf = pd.read_csv(\"finalEncoded.csv\")\r\ny = df['price']\r\nX = df.drop(columns=['price'])\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)\r\n\r\nreg = RandomForestRegressor(max_depth=80, random_state=0)\r\nreg.fit(X_train, y_train)\r\nfilename = 'finalized_model_pickle.sav'\r\npickle.dump(reg, open(filename, 'wb'))\r\n\r\ny_pred = reg.predict(X_test)\r\n\r\n#calculate average loss\r\ndiff = abs(y_test - y_pred)\r\naloss = sum(diff)/len(diff)\r\n#calculate median loss\r\nmloss = statistics.median(diff)\r\n\r\nprint(aloss)\r\nprint(mloss)\r\n\r\nplt.figure()\r\nplt.axis([0,100000,0,100000])\r\nplt.scatter(y_test[:1000] ,y_pred[:1000], alpha=0.4, c='red', label='Ground Truth vs Predicted')\r\nplt.xlabel('Ground Truth')\r\nplt.ylabel('Predictions')\r\nplt.legend()\r\nplt.title(\"Random Forest Model\")\r\nplt.savefig('RFR.png')"
},
{
"alpha_fraction": 0.767085075378418,
"alphanum_fraction": 0.767085075378418,
"avg_line_length": 30.217391967773438,
"blob_id": "c6be20a308956c46a3c68b9b06a92d6c70475069",
"content_id": "1cf67d442398079266036092d9ec8bafe2ac5e2d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 717,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 23,
"path": "/linreg.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\n\ndf = pd.read_csv('dividedsamples/training.csv') \ndfval = pd.read_csv('dividedsamples/testing.csv') \n\ntrain_features = df.copy()\ntest_features = dfval.copy()\n\ntrain_labels = train_features.pop('price')\ntest_labels = test_features.pop('price')\n\nregressor = LinearRegression()\nregressor.fit(train_features, train_labels)\ncoeff_df = pd.DataFrame(regressor.coef_, train_features.columns, columns=['Coefficient'])\nprint(coeff_df)\n\ny_pred = regressor.predict(test_features)\nboi = pd.DataFrame({'Actual': test_labels, 'Predicted': y_pred})\nprint(boi)"
},
{
"alpha_fraction": 0.61654132604599,
"alphanum_fraction": 0.6356275081634521,
"avg_line_length": 26.360654830932617,
"blob_id": "84d4558ee4ac45d87f768d86bd8e6d28cf8ceb33",
"content_id": "366082f5e7277ab5cbdcbad3a774e8b47b101fe9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1729,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 61,
"path": "/linear.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt \r\nimport numpy as np \r\nfrom sklearn import datasets, linear_model, metrics \r\nimport pandas as pd\r\n\r\n \r\n# load the boston dataset \r\ndf = pd.read_csv(\"finalEncoded.csv\")\r\ny = df['price']\r\nX = df.drop(columns=['price'])\r\n \r\n# splitting X and y into training and testing sets \r\nfrom sklearn.model_selection import train_test_split \r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, \r\n random_state=1) \r\n \r\n# create linear regression object \r\nreg = linear_model.LinearRegression() \r\n \r\n# train the model using the training sets \r\nreg.fit(X_train, y_train) \r\n\r\ny_pred = reg.predict(X_test)\r\ndiff = abs(y_pred - y_test)\r\n\r\nprint(sum(diff)/len(diff))\r\nprint(reg.score(X_train, y_train))\r\n \r\n# regression coefficients \r\nprint('Coefficients: \\n', reg.coef_) \r\n \r\n# variance score: 1 means perfect prediction \r\nprint('Variance score: {}'.format(reg.score(X_test, y_test))) \r\n \r\n# plot for residual error \r\n \r\n## setting plot style \r\nplt.style.use('fivethirtyeight') \r\n \r\n## plotting residual errors in training data \r\nplt.scatter(reg.predict(X_train), reg.predict(X_train) - y_train, \r\n color = \"green\", s = 10, label = 'Train data') \r\n \r\n## plotting residual errors in test data \r\nplt.scatter(reg.predict(X_test), reg.predict(X_test) - y_test, \r\n color = \"blue\", s = 10, label = 'Test data') \r\n \r\n## plotting line for zero residual error \r\nplt.hlines(y = 0, xmin = 0, xmax = 50, linewidth = 2) \r\n \r\n## plotting legend \r\nplt.legend(loc = 'upper right') \r\n \r\n## plot title \r\nplt.title(\"Residual errors\") \r\n\r\nplt.xlim(-15000,50000)\r\nplt.ylim(-40000, 40000)\r\n \r\n## function to show plot \r\nplt.show() "
},
{
"alpha_fraction": 0.5758017301559448,
"alphanum_fraction": 0.6005830764770508,
"avg_line_length": 20.46875,
"blob_id": "f4ef9b24432b0f727066c6546d564ac78b1647b4",
"content_id": "d7908cc227ecd92523edce4f28051c8b1415995c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 686,
"license_type": "permissive",
"max_line_length": 117,
"num_lines": 32,
"path": "/client/src/NavBar.js",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import styled from 'styled-components'\nimport car from './car.jpg'\nimport {Link } from 'react-router-dom';\n\nconst RowDiv = styled.div`\n text-align: center;\n padding-top: 2%;\n`;\n\nconst Item = styled.p`\n display: inline-block;\n font-family: 'Source Sans Pro';\n font-weight: 400;\n font-size: 1.2em;\n color: #333;\n text-align: center;\n margin-left: 2%;\n margin-right: 2%;\n cursor: pointer;\n &:hover {\n transform: scale(1.25);\n }\n`;\n\nexport default function NavBar() {\n return (\n <RowDiv>\n <Item>Search</Item>\n <Item><a href = {car} style={{textDecoration: \"none\", color: \"#222\"}} target = \"_blank\">Report</a></Item>\n </RowDiv>\n );\n}"
},
{
"alpha_fraction": 0.7221584320068359,
"alphanum_fraction": 0.7225411534309387,
"avg_line_length": 33.657535552978516,
"blob_id": "e3d302e95e8f0255edfcdbcbbf6af9dfdd513ccb",
"content_id": "08b65c20601db6179fa11aa024c6d8e88f6dc5d8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2613,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 73,
"path": "/onehot.py",
"repo_name": "WayneFerrao/autofocus",
"src_encoding": "UTF-8",
"text": "import pandas as pd\r\nimport numpy as np\r\nfrom numpy import argmax\r\nimport seaborn as sns\r\nfrom sklearn.preprocessing import LabelEncoder\r\nfrom sklearn.preprocessing import OneHotEncoder\r\n\r\nout_path = \"dummy.csv\"\r\n\r\ndf = pd.read_csv(out_path)\r\n\r\n#use dff to do basic charts to show trends\r\ndff = df.dropna(axis=0)\r\n\r\n#set all categorical types to 'categorical'\r\n\r\n#This stuff was for data analysis\r\n#sns.lineplot(x=dff['year'], y=dff['price'])\r\n#sns.scatterplot(x=dff['odometer'], y=dff['price'])\r\n\r\n# color one hot encoder\r\ncolors = set(dff['paint_color'])\r\ncolor_df = pd.DataFrame(colors, columns=['paint_color'])\r\n# generate binary values using get_dummies\r\ndum_df = pd.get_dummies(color_df, columns=[\"paint_color\"], prefix=[\"Color_is\"])\r\n# merge with main df bridge_df on key values\r\ncolor_df = color_df.join(dum_df)\r\nprint(color_df)\r\n\r\n#model one hot encoder\r\nmodels = set(dff['model'])\r\nmodel_df = pd.DataFrame(models, columns=['model'])\r\n# generate binary values using get_dummies\r\ndum_df = pd.get_dummies(model_df, columns=[\"model\"], prefix=[\"Model_is\"])\r\n# merge with main df bridge_df on key values\r\nmodel_df = model_df.join(dum_df)\r\nprint(model_df)\r\n\r\n#manufacturer\r\nmanufacturers = set(dff['manufacturer'])\r\nmanufacturer_df = pd.DataFrame(manufacturers, columns=['manufacturer'])\r\n# generate binary values using get_dummies\r\ndum_df = pd.get_dummies(manufacturer_df, columns=[\"manufacturer\"], prefix=[\"Manufacturer_is\"])\r\n# merge with main df bridge_df on key values\r\nmanufacturer_df = manufacturer_df.join(dum_df)\r\nprint(manufacturer_df)\r\n\r\n#drive\r\ndrives = set(dff['drive'])\r\ndrive_df = pd.DataFrame(drives, columns=['drive'])\r\n# generate binary values using get_dummies\r\ndum_df = pd.get_dummies(drive_df, columns=[\"drive\"], prefix=[\"Drive_is\"])\r\n# merge with main df bridge_df on key values\r\ndrive_df = 
drive_df.join(dum_df)\r\nprint(drive_df)\r\n\r\n#transmission\r\ntransmissions = set(dff['transmission'])\r\ntransmission_df = pd.DataFrame(transmissions, columns=['transmission'])\r\n# generate binary values using get_dummies\r\ndum_df = pd.get_dummies(transmission_df, columns=[\"transmission\"], prefix=[\"Transmission_is\"])\r\n# merge with main df bridge_df on key values\r\ntransmission_df = transmission_df.join(dum_df)\r\nprint(transmission_df)\r\n\r\n#condition\r\nconditions = set(dff['condition'])\r\ncondition_df = pd.DataFrame(transmissions, columns=['condition'])\r\n# generate binary values using get_dummies\r\ndum_df = pd.get_dummies(condition_df, columns=[\"condition\"], prefix=[\"Condition_is\"])\r\n# merge with main df bridge_df on key values\r\ncondition_df = condition_df.join(dum_df)\r\nprint(condition_df)\r\n\r\n\r\n\r\n\r\n\r\n"
}
] | 19 |
poluyanov/crossplane | https://github.com/poluyanov/crossplane | 9ea3cfb8fe93a449c96def5adf4bdba1b5a25183 | 9bc573e98cc2ba1128a070e1d54a291b469ff9dd | 1b3333fed1333258918b62b6605e43a904dfa8e2 | refs/heads/master | 2021-09-06T22:01:59.877367 | 2018-01-29T23:31:32 | 2018-01-29T23:31:32 | 120,008,014 | 0 | 0 | null | 2018-02-02T17:02:27 | 2018-02-02T09:43:20 | 2018-01-29T23:31:36 | null | [
{
"alpha_fraction": 0.6219903826713562,
"alphanum_fraction": 0.6238248348236084,
"avg_line_length": 37.422908782958984,
"blob_id": "c7c1064dbe94646375c11b504a3f49d557cdeabc",
"content_id": "244f5e3e1e04228c108ac8c189d471c1f86565c1",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8722,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 227,
"path": "/crossplane/__main__.py",
"repo_name": "poluyanov/crossplane",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport os\nimport sys\n\nfrom argparse import ArgumentParser, FileType, RawDescriptionHelpFormatter\nfrom traceback import format_exception\n\nfrom . import __version__\nfrom .lexer import lex as lex_file\nfrom .parser import parse as parse_file\nfrom .builder import build as build_file, _enquote, DELIMITERS\nfrom .errors import NgxParserBaseException\nfrom .compat import PY2, json, input\n\n\ndef _prompt_yes():\n try:\n return input('overwrite? (y/n [n]) ').lower().startswith('y')\n except (KeyboardInterrupt, EOFError):\n sys.exit(1)\n\n\ndef _dump_payload(obj, fp, indent):\n kwargs = {'indent': indent}\n if indent is None:\n kwargs['separators'] = ',', ':'\n fp.write(json.dumps(obj, **kwargs) + '\\n')\n\n\ndef parse(filename, out, indent=None, catch=None, tb_onerror=None, ignore='', single=False):\n ignore = ignore.split(',') if ignore else []\n\n def callback(e):\n exc = sys.exc_info() + (10,)\n return ''.join(format_exception(*exc)).rstrip()\n\n kwargs = {'catch_errors': catch, 'ignore': ignore, 'single': single}\n if tb_onerror:\n kwargs['onerror'] = callback\n\n payload = parse_file(filename, **kwargs)\n _dump_payload(payload, out, indent=indent)\n\n\ndef build(filename, dirname, force, indent, tabs, header, stdout, verbose):\n with open(filename, 'r') as fp:\n payload = json.load(fp)\n\n if dirname is None:\n dirname = os.getcwd()\n\n existing = []\n dirs_to_make = []\n\n # find which files from the json payload will overwrite existing files and\n # which directories need to be created in order for the config to be built\n for config in payload['config']:\n path = config['file']\n if not os.path.isabs(path):\n path = os.path.join(dirname, path)\n dirpath = os.path.dirname(path)\n if os.path.exists(path):\n existing.append(path)\n elif not os.path.exists(dirpath) and dirpath not in dirs_to_make:\n dirs_to_make.append(dirpath)\n\n # ask the user if it's okay to overwrite existing files\n if existing and 
not force and not stdout:\n print('building {} would overwrite these files:'.format(filename))\n print('\\n'.join(existing))\n if not _prompt_yes():\n print('not overwritten')\n return\n\n # make directories necessary for the config to be built\n for dirpath in dirs_to_make:\n os.makedirs(dirpath)\n\n # build the nginx configuration file from the json payload\n for config in payload['config']:\n path = os.path.join(dirname, config['file'])\n\n if header:\n output = (\n '# This config was built from JSON using NGINX crossplane.\\n'\n '# If you encounter any bugs please report them here:\\n'\n '# https://github.com/nginxinc/crossplane/issues\\n'\n '\\n'\n )\n else:\n output = ''\n\n parsed = config['parsed']\n output += build_file(parsed, indent, tabs) + '\\n'\n\n if stdout:\n print('# ' + path + '\\n' + output)\n else:\n with open(path, 'w') as fp:\n fp.write(output)\n if verbose:\n print('wrote to ' + path)\n\n\ndef lex(filename, out, indent=None, line_numbers=False):\n payload = list(lex_file(filename))\n if not line_numbers:\n payload = [token for token, lineno in payload]\n _dump_payload(payload, out, indent=indent)\n\n\ndef minify(filename, out):\n prev, token = '', ''\n for token, __ in lex_file(filename):\n token = _enquote(token)\n if prev and not (prev in DELIMITERS or token in DELIMITERS):\n token = ' ' + token\n out.write(token)\n prev = token\n out.write('\\n')\n\n\ndef format(filename, out, indent=None, tabs=False):\n payload = parse_file(filename)\n parsed = payload['config'][0]['parsed']\n if payload['status'] == 'ok':\n output = build_file(parsed, indent, tabs) + '\\n'\n out.write(output)\n else:\n e = payload['errors'][0]\n raise NgxParserBaseException(e['error'], e['file'], e['line'])\n\n\nclass _SubparserHelpFormatter(RawDescriptionHelpFormatter):\n def _format_action(self, action):\n line = super(RawDescriptionHelpFormatter, self)._format_action(action)\n\n if action.nargs == 'A...':\n line = line.split('\\n', 1)[-1]\n\n if line.startswith(' ') 
and line[4] != ' ':\n parts = filter(len, line.lstrip().partition(' '))\n line = ' ' + ' '.join(parts)\n\n return line\n\n\ndef parse_args(args=None):\n parser = ArgumentParser(\n formatter_class=_SubparserHelpFormatter,\n description='various operations for nginx config files',\n usage='%(prog)s <command> [options]'\n )\n parser.add_argument('-V', '--version', action='version', version='%(prog)s ' + __version__)\n subparsers = parser.add_subparsers(title='commands')\n\n def create_subparser(function, help):\n name = function.__name__\n prog = 'crossplane ' + name\n p = subparsers.add_parser(name, prog=prog, help=help, description=help)\n p.set_defaults(_subcommand=function)\n return p\n\n p = create_subparser(parse, 'parses a json payload for an nginx config')\n p.add_argument('filename', help='the nginx config file')\n p.add_argument('-o', '--out', type=FileType('w'), default='-', help='write output to a file')\n p.add_argument('-i', '--indent', type=int, metavar='NUM', help='number of spaces to indent output')\n p.add_argument('--ignore', metavar='DIRECTIVES', default='', help='ignore directives (comma-separated)')\n p.add_argument('--no-catch', action='store_false', dest='catch', help='only collect first error in file')\n p.add_argument('--tb-onerror', action='store_true', help='include tracebacks in config errors')\n p.add_argument('--single-file', action='store_true', dest='single', help='do not include other config files')\n\n p = create_subparser(build, 'builds an nginx config from a json payload')\n p.add_argument('filename', help='the file with the config payload')\n p.add_argument('-v', '--verbose', action='store_true', help='verbose output')\n p.add_argument('-d', '--dir', metavar='PATH', default=None, dest='dirname', help='the base directory to build in')\n p.add_argument('-f', '--force', action='store_true', help='overwrite existing files')\n g = p.add_mutually_exclusive_group()\n g.add_argument('-i', '--indent', type=int, metavar='NUM', help='number 
of spaces to indent output', default=4)\n g.add_argument('-t', '--tabs', action='store_true', help='indent with tabs instead of spaces')\n p.add_argument('--no-headers', action='store_false', dest='header', help='do not write header to configs')\n p.add_argument('--stdout', action='store_true', help='write configs to stdout instead')\n\n p = create_subparser(lex, 'lexes tokens from an nginx config file')\n p.add_argument('filename', help='the nginx config file')\n p.add_argument('-o', '--out', type=FileType('w'), default='-', help='write output to a file')\n p.add_argument('-i', '--indent', type=int, metavar='NUM', help='number of spaces to indent output')\n p.add_argument('-n', '--line-numbers', action='store_true', help='include line numbers in json payload')\n\n p = create_subparser(minify, 'removes all whitespace from an nginx config')\n p.add_argument('filename', help='the nginx config file')\n p.add_argument('-o', '--out', type=FileType('w'), default='-', help='write output to a file')\n\n p = create_subparser(format, 'formats an nginx config file')\n p.add_argument('filename', help='the nginx config file')\n p.add_argument('-o', '--out', type=FileType('w'), default='-', help='write output to a file')\n g = p.add_mutually_exclusive_group()\n g.add_argument('-i', '--indent', type=int, metavar='NUM', help='number of spaces to indent output', default=4)\n g.add_argument('-t', '--tabs', action='store_true', help='indent with tabs instead of spaces')\n\n def help(command):\n if command not in parser._actions[-1].choices:\n parser.error('unknown command %r' % command)\n else:\n parser._actions[-1].choices[command].print_help()\n\n p = create_subparser(help, 'show help for commands')\n p.add_argument('command', help='command to show help for')\n\n parsed = parser.parse_args(args=args)\n\n # this addresses a bug that was added to argparse in Python 3.3\n if not parsed.__dict__:\n parser.error('too few arguments')\n\n return parsed\n\n\ndef main():\n kwargs = 
parse_args().__dict__\n func = kwargs.pop('_subcommand')\n func(**kwargs)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.29459458589553833,
"alphanum_fraction": 0.38841697573661804,
"avg_line_length": 48.80769348144531,
"blob_id": "b3e40d401db9b6c30eab476857a299f0f773d345",
"content_id": "8a5b61a18d8726e436832226c32051bdc0086236",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2590,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 52,
"path": "/tests/test_lex.py",
"repo_name": "poluyanov/crossplane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\n\nimport crossplane\n\nhere = os.path.dirname(__file__)\n\n\ndef test_simple_config():\n dirname = os.path.join(here, 'configs', 'simple')\n config = os.path.join(dirname, 'nginx.conf')\n tokens = crossplane.lex(config)\n assert list(tokens) == [\n ('events', 1), ('{', 1), ('worker_connections', 2), ('1024', 2),\n (';', 2), ('}', 3), ('http', 5), ('{', 5), ('server', 6), ('{', 6),\n ('listen', 7), ('127.0.0.1:8080', 7), (';', 7), ('server_name', 8),\n ('default_server', 8), (';', 8), ('location', 9), ('/', 9), ('{', 9),\n ('return', 10), ('200', 10), ('foo bar baz', 10), (';', 10), ('}', 11),\n ('}', 12), ('}', 13)\n ]\n\n\ndef test_messy_config():\n dirname = os.path.join(here, 'configs', 'messy')\n config = os.path.join(dirname, 'nginx.conf')\n tokens = crossplane.lex(config)\n assert list(tokens) == [\n ('user', 1), ('nobody', 1), (';', 1), ('events', 3), ('{', 3),\n ('worker_connections', 3), ('2048', 3), (';', 3), ('}', 3),\n ('http', 5), ('{', 5), ('access_log', 7), ('off', 7), (';', 7),\n ('default_type', 7), ('text/plain', 7), (';', 7), ('error_log', 7),\n ('off', 7), (';', 7), ('server', 8), ('{', 8), ('listen', 9),\n ('8083', 9), (';', 9), ('return', 10), ('200', 10),\n ('Ser\" \\' \\' ver\\\\\\\\ \\\\ $server_addr:\\\\$server_port\\\\n\\\\nTime: $time_local\\\\n\\\\n', 10),\n (';', 10), ('}', 11), ('server', 12), ('{', 12),\n ('listen', 12), ('8080', 12), (';', 12), ('root', 13),\n ('/usr/share/nginx/html', 13), (';', 13), ('location', 14), ('~', 14),\n ('/hello/world;', 14), ('{', 14), ('return', 14), ('301', 14),\n ('/status.html', 14), (';', 14), ('}', 14), ('location', 15),\n ('/foo', 15), ('{', 15), ('}', 15), ('location', 15), ('/bar', 15),\n ('{', 15), ('}', 15), ('location', 16), ('/\\\\{\\\\;\\\\}\\\\ #\\\\ ab', 16),\n ('{', 16), ('}', 16), ('if', 17), ('($request_method', 17), ('=', 17),\n ('P\\\\{O\\\\)\\\\###\\\\;ST', 17), (')', 17), ('{', 17),\n ('}', 17), ('location', 18), ('/status.html', 18), 
('{', 18),\n ('try_files', 19), ('/abc/${uri} /abc/${uri}.html', 19), ('=404', 19),\n (';', 19), ('}', 20), ('location', 21),\n ('/sta;\\n tus', 21), ('{', 22), ('return', 22),\n ('302', 22), ('/status.html', 22), (';', 22), ('}', 22),\n ('location', 23), ('/upstream_conf', 23), ('{', 23), ('return', 23),\n ('200', 23), ('/status.html', 23), (';', 23), ('}', 23), ('}', 23),\n ('server', 24), ('{', 25), ('}', 25), ('}', 25)\n ]\n"
},
{
"alpha_fraction": 0.581741988658905,
"alphanum_fraction": 0.5825182199478149,
"avg_line_length": 27.5,
"blob_id": "190caf490e5c9c63995d29b01e4fe32a2d9e3bd3",
"content_id": "8c05b3cbd35940755e9f19b2eb96f692a2df4bad",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6441,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 226,
"path": "/crossplane/objects.py",
"repo_name": "poluyanov/crossplane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom .parser import parse\n\n\ndef _init_directive(parent, directive_json):\n if 'block' in directive_json:\n return NginxBlockDirective(parent=parent, **directive_json)\n else:\n return NginxDirective(parent=parent, **directive_json)\n\n\nclass NginxDirective(object):\n __slots__ = ('parent', 'directive', 'line', 'args', 'includes')\n\n def __init__(self, directive, args, line, includes=[], parent=None):\n self.parent = parent\n self.directive = directive\n self.line = line\n self.args = args\n self.includes = []\n\n def __contains__(self, item):\n \"\"\"\n Single directives don't really have a contains operator, so this method\n is included as a stub to allow for recursive logic to be conveniently\n inheritied.\n \"\"\"\n return False\n\n def get(self, directive):\n \"\"\"\n Like __contains__, this is stubbed for convenient recursion.\n \"\"\"\n return []\n\n def to_dict(self):\n result = {}\n\n for slot in self.__slots__:\n if slot not in ('parent',):\n result[slot] = getattr(self, slot)\n\n return result\n\n @property\n def file(self):\n \"\"\"\n Recursively walk the tree to find the containing file.\n \"\"\"\n return self.parent.file\n\n def context(self, *args):\n \"\"\"\n Recursively scans parent blocks for a passed list of \"context\"\n diretives. These are a list of directives which apply or may apply to\n this specific directive. 
This method will return a dictionary of\n directives and their args which affect this directive.\n\n It follows NGINX inheritance which is, all-or-nothing, lowest level\n directive(s) apply.\n \"\"\"\n context = {}\n\n for directive_name in args:\n if directive_name in self:\n values = []\n\n # for each directive instance, get the args value and append to\n # currently tracked\n for directive in self.get(directive_name):\n # rebuild multi arg directives\n values.append(' '.join(directive.args))\n\n # add this context to context\n context[directive_name] = \\\n context.get(directive_name, []) + values\n\n # if there is a context in this directive/block then return it,\n # otherwise try to find one from the parent\n if len(context) > 0:\n return context, self\n elif self.parent is not None:\n return self.parent.context(*args)\n else:\n return None, None\n\n\nclass NginxBlockDirective(NginxDirective):\n __slots__ = ('parent', 'index', 'directive', 'line', 'args', 'block')\n\n def __init__(self, block=[], **kwargs):\n super(NginxBlockDirective, self).__init__(**kwargs)\n self.index = {}\n self.block = block\n\n self._setup_block()\n\n def _setup_block(self):\n directives = []\n for directive_json in self.block:\n directives.append(_init_directive(self, directive_json))\n self.__index(directives[-1])\n\n self.block = directives\n\n def __index(self, directive):\n idx = self.index.get(directive.directive, [])\n idx.append(directive)\n self.index[directive.directive] = idx\n\n def __contains__(self, item):\n return item in self.index\n\n def get(self, directive):\n return self.index[directive] if directive in self.index else []\n\n def to_dict(self):\n result = {}\n\n for slot in self.__slots__:\n if slot not in ('parent', 'index', 'block'):\n result[slot] = getattr(self, slot)\n\n result['block'] = [directive.to_dict() for directive in self.block]\n\n return result\n\n\nclass NginxConfigFile(object):\n __slots__ = ('parent', 'index', 'file', 'parsed')\n\n def 
__init__(self, file='', parsed=[], parent=None, **kwargs):\n self.parent = parent\n\n self.index = {}\n self.file = file\n self.parsed = parsed\n\n self._setup_parsed()\n\n def _setup_parsed(self):\n directives = []\n for directive_json in self.parsed:\n directives.append(_init_directive(self, directive_json))\n self.__index(directives[-1])\n\n self.parsed = directives\n\n def __index(self, directive):\n idx = self.index.get(directive.directive, [])\n idx.append(directive)\n self.index[directive.directive] = idx\n\n def __contains__(self, item):\n return item in self.index\n\n def get(self, directive):\n return self.index[directive] if directive in self.index else []\n\n def to_dict(self):\n result = {}\n\n for slot in self.__slots__:\n if slot not in ('parent', 'index', 'parsed'):\n result[slot] = getattr(self, slot)\n\n result['parsed'] = [directive.to_dict() for directive in self.parsed]\n\n return result\n\n\nclass CrossplaneConfig(object):\n __slots__ = ('index', 'files', 'configs')\n\n def __init__(self, configs=[]):\n self.index = {}\n self.files = []\n self.configs = configs\n\n self._setup_configs()\n\n def _setup_configs(self):\n configs = []\n for config_json in self.configs:\n configs.append(NginxConfigFile(parent=self, **config_json))\n self.__index(configs[-1])\n\n self.configs = configs\n\n def __index(self, config):\n self.index[config.file] = config\n self.files.append(config.file)\n\n def __contains__(self, item):\n return item in self.index\n\n def get(self, file):\n return self.index[file] if file in self.index else None\n\n def get_include(self, idx):\n return self.index[self.files[idx]]\n\n def to_dict(self):\n result = {}\n\n result['config'] = [config.to_dict() for config in self.configs]\n\n return result\n\n\ndef map(payload):\n \"\"\"\n Loads a crossplane.parse() payload into a CrossplaneConfig object and\n returns it.\n \"\"\"\n return CrossplaneConfig(configs=payload['config'])\n\n\ndef load(filename, **kwargs):\n \"\"\"\n Uses parser 
to parse an nginx config file and then creates native Python\n objects from the parsed structure. These native objects can then be edited\n and output into a new crossplane structure.\n \"\"\"\n payload = parse(filename, **kwargs)\n return map(payload)\n"
},
{
"alpha_fraction": 0.457486629486084,
"alphanum_fraction": 0.4743315577507019,
"avg_line_length": 30.428571701049805,
"blob_id": "97f7768c6dfd0130763a64acc437dd35a2d3f088",
"content_id": "2cdf7d05d49043da7f9bddce45e09fd6f13d7fc0",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3740,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 119,
"path": "/tests/test_build.py",
"repo_name": "poluyanov/crossplane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\n\nimport crossplane\nfrom crossplane.compat import basestring\nfrom crossplane.builder import _enquote\n\nhere = os.path.dirname(__file__)\n\n\ndef assert_equal_payloads(a, b, ignore_keys=()):\n assert type(a) == type(b)\n if isinstance(a, list):\n assert len(a) == len(b)\n for args in zip(a, b):\n assert_equal_payloads(*args, ignore_keys=ignore_keys)\n elif isinstance(a, dict):\n keys = set(a.keys()) | set(b.keys())\n keys.difference_update(ignore_keys)\n for key in keys:\n assert_equal_payloads(a[key], b[key], ignore_keys=ignore_keys)\n elif isinstance(a, basestring):\n assert _enquote(a) == _enquote(b)\n else:\n assert a == b\n\n\ndef compare_parsed_and_built(conf_dirname, conf_basename, tmpdir):\n original_dirname = os.path.join(here, 'configs', conf_dirname)\n original_path = os.path.join(original_dirname, conf_basename)\n original_payload = crossplane.parse(original_path)\n original_parsed = original_payload['config'][0]['parsed']\n\n build1_config = crossplane.build(original_parsed)\n build1_file = tmpdir.join('build1.conf')\n build1_file.write(build1_config)\n build1_payload = crossplane.parse(build1_file.strpath)\n build1_parsed = build1_payload['config'][0]['parsed']\n\n assert_equal_payloads(original_parsed, build1_parsed, ignore_keys=['line'])\n\n build2_config = crossplane.build(build1_parsed)\n build2_file = tmpdir.join('build2.conf')\n build2_file.write(build2_config)\n build2_payload = crossplane.parse(build2_file.strpath)\n build2_parsed = build2_payload['config'][0]['parsed']\n\n assert build1_config == build2_config\n assert_equal_payloads(build1_parsed, build2_parsed, ignore_keys=[])\n\n\ndef test_build_nested_and_multiple_args():\n payload = [\n {\n \"directive\": \"events\",\n \"args\": [],\n \"block\": [\n {\n \"directive\": \"worker_connections\",\n \"args\": [\"1024\"]\n }\n ]\n },\n {\n \"directive\": \"http\",\n \"args\": [],\n \"block\": [\n {\n \"directive\": \"server\",\n \"args\": [],\n 
\"block\": [\n {\n \"directive\": \"listen\",\n \"args\": [\"127.0.0.1:8080\"]\n },\n {\n \"directive\": \"server_name\",\n \"args\": [\"default_server\"]\n },\n {\n \"directive\": \"location\",\n \"args\": [\"/\"],\n \"block\": [\n {\n \"directive\": \"return\",\n \"args\": [\"200\", \"foo bar baz\"]\n }\n ]\n }\n ]\n }\n ]\n }\n ]\n\n built = crossplane.build(payload, indent=4, tabs=False)\n\n assert built == '\\n'.join([\n 'events {',\n ' worker_connections 1024;',\n '}',\n 'http {',\n ' server {',\n ' listen 127.0.0.1:8080;',\n ' server_name default_server;',\n ' location / {',\n \" return 200 'foo bar baz';\",\n ' }',\n ' }',\n '}'\n ])\n\n\ndef test_compare_parsed_and_built_simple(tmpdir):\n compare_parsed_and_built('simple', 'nginx.conf', tmpdir)\n\n\ndef test_compare_parsed_and_built_messy(tmpdir):\n compare_parsed_and_built('messy', 'nginx.conf', tmpdir)\n"
},
{
"alpha_fraction": 0.6697283387184143,
"alphanum_fraction": 0.6902481913566589,
"avg_line_length": 38.06106948852539,
"blob_id": "09562b55e01c02b921d74208645dff8902593d83",
"content_id": "9c8f089343608e8d49be80d6cd79403252ecbd94",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5117,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 131,
"path": "/tests/test_objects.py",
"repo_name": "poluyanov/crossplane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\n\nimport crossplane\n\nhere = os.path.dirname(__file__)\n\n\ndef test_load_simple():\n dirname = os.path.join(here, 'configs', 'simple')\n config = os.path.join(dirname, 'nginx.conf')\n\n xconfig = crossplane.load(config)\n assert xconfig is not None\n assert isinstance(xconfig, crossplane.objects.CrossplaneConfig)\n assert len(xconfig.configs) == 1\n\n # only one file\n xconfigfile = xconfig.get(config)\n assert isinstance(xconfigfile, crossplane.objects.NginxConfigFile)\n for directive in ('events', 'http'):\n assert directive in xconfigfile\n assert len(xconfigfile.get(directive)) == 1\n\n events = xconfigfile.get('events')[0]\n assert isinstance(events, crossplane.objects.NginxBlockDirective)\n assert len(events.get('worker_connections')) > 0\n assert events.get('worker_connections')[0].args == ['1024']\n\n server = xconfigfile.get('http')[0].get('server')[0]\n assert isinstance(server, crossplane.objects.NginxBlockDirective)\n assert server.get('listen')[0].args == ['127.0.0.1:8080']\n assert server.get('server_name')[0].args == ['default_server']\n\n location = server.get('location')[0]\n assert isinstance(location, crossplane.objects.NginxBlockDirective)\n assert location.args == ['/']\n assert location.get('return')[0].args == ['200', 'foo bar baz']\n\n # some higher level primitives\n assert location.file == config\n\n location_ctx, ctx_parent = location.context('server_name', 'listen')\n assert location_ctx is not None\n assert ctx_parent is not None\n # found server_name and listen from nearest parent\n assert 'default_server' in location_ctx['server_name']\n assert '127.0.0.1:8080' in location_ctx['listen']\n # ctx_parent is returned as the actual object containing the context\n # (actual mapped object)\n assert ctx_parent == server\n\n\ndef test_load_build_cycle_simple(tmpdir):\n dirname = os.path.join(here, 'configs', 'simple')\n config = os.path.join(dirname, 'nginx.conf')\n\n xconfig = 
crossplane.load(config)\n\n build_config = crossplane.build(xconfig.to_dict()['config'][0]['parsed'])\n build_file = tmpdir.join('build1.conf')\n build_file.write(build_config)\n build_xconfig = crossplane.load(build_file.strpath)\n\n assert build_xconfig is not None\n assert isinstance(build_xconfig, crossplane.objects.CrossplaneConfig)\n assert len(build_xconfig.configs) == 1\n\n # only one file\n xconfigfile = build_xconfig.get(build_file)\n assert isinstance(xconfigfile, crossplane.objects.NginxConfigFile)\n for directive in ('events', 'http'):\n assert directive in xconfigfile\n assert len(xconfigfile.get(directive)) == 1\n\n events = xconfigfile.get('events')[0]\n assert isinstance(events, crossplane.objects.NginxBlockDirective)\n assert len(events.get('worker_connections')) > 0\n assert events.get('worker_connections')[0].args == ['1024']\n\n server = xconfigfile.get('http')[0].get('server')[0]\n assert isinstance(server, crossplane.objects.NginxBlockDirective)\n assert server.get('listen')[0].args == ['127.0.0.1:8080']\n assert server.get('server_name')[0].args == ['default_server']\n\n location = server.get('location')[0]\n assert isinstance(location, crossplane.objects.NginxBlockDirective)\n assert location.args == ['/']\n assert location.get('return')[0].args == ['200', 'foo bar baz']\n\n\ndef test_load_build_cycle_with_changes_simple(tmpdir):\n dirname = os.path.join(here, 'configs', 'simple')\n config = os.path.join(dirname, 'nginx.conf')\n\n xconfig = crossplane.load(config)\n xconfigfile = xconfig.get(config)\n\n # change events worker_connections\n xconfigfile.get('events')[0].get('worker_connections')[0].args = ['2048']\n\n build_config = crossplane.build(xconfig.to_dict()['config'][0]['parsed'])\n build_file = tmpdir.join('build1.conf')\n build_file.write(build_config)\n build_xconfig = crossplane.load(build_file.strpath)\n\n assert build_xconfig is not None\n assert isinstance(build_xconfig, crossplane.objects.CrossplaneConfig)\n assert 
len(build_xconfig.configs) == 1\n\n # only one file\n xconfigfile = build_xconfig.get(build_file)\n assert isinstance(xconfigfile, crossplane.objects.NginxConfigFile)\n for directive in ('events', 'http'):\n assert directive in xconfigfile\n assert len(xconfigfile.get(directive)) == 1\n\n events = xconfigfile.get('events')[0]\n assert isinstance(events, crossplane.objects.NginxBlockDirective)\n assert len(events.get('worker_connections')) > 0\n assert events.get('worker_connections')[0].args == ['2048'] # changed!\n\n server = xconfigfile.get('http')[0].get('server')[0]\n assert isinstance(server, crossplane.objects.NginxBlockDirective)\n assert server.get('listen')[0].args == ['127.0.0.1:8080']\n assert server.get('server_name')[0].args == ['default_server']\n\n location = server.get('location')[0]\n assert isinstance(location, crossplane.objects.NginxBlockDirective)\n assert location.args == ['/']\n assert location.get('return')[0].args == ['200', 'foo bar baz']\n"
},
{
"alpha_fraction": 0.5008368492126465,
"alphanum_fraction": 0.5029288530349731,
"avg_line_length": 25.853933334350586,
"blob_id": "e083545cfdcbb36ad4642e2e1eb43a9f58f4d905",
"content_id": "3dea54adcc1bdb040017b1a5b2afed8fba9ce8f4",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2390,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 89,
"path": "/crossplane/builder.py",
"repo_name": "poluyanov/crossplane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport codecs\nimport os\n\nfrom .lexer import lex\nfrom .analyzer import analyze, enter_block_ctx\nfrom .errors import NgxParserDirectiveError\nfrom .compat import PY2, json\n\nDELIMITERS = ('{', '}', ';')\n\n\ndef _escape(string):\n prev, char = '', ''\n for char in string:\n if prev == '\\\\' or prev + char == '${':\n prev += char\n yield prev\n continue\n if prev == '$':\n yield prev\n if char not in ('\\\\', '$'):\n yield char\n prev = char\n if char in ('\\\\', '$'):\n yield char\n\n\ndef _needs_quotes(string):\n if string == '':\n return True\n elif string in DELIMITERS:\n return False\n\n # lexer should throw an error when variable expansion syntax\n # is messed up, but just wrap it in quotes for now I guess\n chars = _escape(string)\n\n # arguments can't start with variable expansion syntax\n char = next(chars)\n if char.isspace() or char in ('{', ';', '\"', \"'\", '${'):\n return True\n\n expanding = False\n for char in chars:\n if char.isspace() or char in ('{', ';', '\"', \"'\"):\n return True\n elif char == ('${' if expanding else '}'):\n return True\n elif char == ('}' if expanding else '${'):\n expanding = not expanding\n\n return char in ('\\\\', '$') or expanding\n\n\ndef _enquote(arg):\n if _needs_quotes(arg):\n arg = repr(codecs.decode(arg, 'raw_unicode_escape'))\n arg = arg.replace('\\\\\\\\', '\\\\').lstrip('u')\n return arg\n\n\ndef build(payload, indent=4, tabs=False):\n padding = '\\t' if tabs else ' ' * indent\n\n def _build_lines(objs, depth):\n margin = padding * depth\n\n for obj in objs:\n directive = obj['directive']\n args = [_enquote(arg) for arg in obj['args']]\n\n if directive == 'if':\n line = 'if (' + ' '.join(args) + ')'\n elif args:\n line = directive + ' ' + ' '.join(args)\n else:\n line = directive\n\n if obj.get('block') is None:\n yield margin + line + ';'\n else:\n yield margin + line + ' {'\n for line in _build_lines(obj['block'], depth+1):\n yield line\n yield margin + '}'\n\n lines = 
_build_lines(payload, depth=0)\n return '\\n'.join(lines)\n"
},
{
"alpha_fraction": 0.5662650465965271,
"alphanum_fraction": 0.5662650465965271,
"avg_line_length": 18.153846740722656,
"blob_id": "6a7509d781b6cb365804a86da6870495b85ddab1",
"content_id": "d1d53268b0d2e4ead0aa7a9769e6ad91a05561a1",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 249,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 13,
"path": "/AUTHORS.rst",
"repo_name": "poluyanov/crossplane",
"src_encoding": "UTF-8",
"text": "=======\nCredits\n=======\n\nDevelopment Lead\n----------------\n\n* Arie van Luttikhuizen <[email protected]> `@aluttik <https://github.com/aluttik>`_\n\nContributors\n------------\n\n* Grant Hulegaard <[email protected]> `@gshulegaard <https://github.com/gshulegaard> <https://gitlab.com/gshulegaard>`_\n"
}
] | 7 |
baczynski/quotation-charts-classifier. | https://github.com/baczynski/quotation-charts-classifier. | cf4909d4a7dc8309fd3717bc4e902a1ab18414ce | b6aa80d50f404cf59ab790fe54abbaf841653da9 | a594c9b4b1f3f454b616610e549ad479e58a3e68 | refs/heads/master | 2020-03-21T17:24:53.579305 | 2018-07-01T21:12:59 | 2018-07-01T21:12:59 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5812904834747314,
"alphanum_fraction": 0.5981966853141785,
"avg_line_length": 38.43888854980469,
"blob_id": "e10e471aca4785633cfaab202adc6ac0efae2a5c",
"content_id": "e6a4604348d1bfb679459b31f4267e88036e4632",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7098,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 180,
"path": "/util/classify_and_trade.py",
"repo_name": "baczynski/quotation-charts-classifier.",
"src_encoding": "UTF-8",
"text": "import cv2\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom keras.models import model_from_yaml\nfrom pyspark import SparkConf, SparkContext\n\nfrom masterthesis.nowe.MAIN import load_model\nfrom masterthesis.util.date_parser import date_to_timestamp\n\n\ndef classify_and_trade(file_name, number_of_attributes, timestamp_index, bid_index, ask_index,\n separator, quotation_size, quotation_step, date_format_regular_expression):\n [win, loss] = [0, 0]\n [quotation_timestamps, bid_quotes, ask_quotes] = load_file(file_name, number_of_attributes, timestamp_index,\n bid_index, ask_index, separator,\n date_format_regular_expression)\n\n model = load_model()\n\n for i in range(0, len(quotation_timestamps), quotation_step):\n timestamps_batch = quotation_timestamps[i:(i + quotation_size)]\n timestamps_batch[:] = [x / 10000000000 for x in timestamps_batch]\n prices_batch = ask_quotes[i: (i + quotation_size)]\n fig, ax = plt.subplots(nrows=1, ncols=1)\n plt.axis('off')\n ax.plot(timestamps_batch, prices_batch)\n fig.savefig('./image.png')\n\n image = cv2.imread('./image.png', cv2.IMREAD_GRAYSCALE)\n (thresh, im_bw) = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)\n\n x = np.empty((1, 480, 640), dtype=np.float32)\n x[0, ...] 
= im_bw\n\n c = model.predict_classes(x)\n\n # if c == 1:\n # copyfile('./image.png', '../predicted_real_time/1/' + str(i) + '.png')\n # elif c == 2:\n # copyfile('./image.png', '../predicted_real_time/2/' + str(i) + '.png')\n # elif c == 3:\n # copyfile('./image.png', '../predicted_real_time/3/' + str(i) + '.png')\n # elif c == 4:\n # copyfile('./image.png', '../predicted_real_time/4/' + str(i) + '.png')\n\n if c != 0:\n growing_trend = check_growing_trend(bid_quotes, i, quotation_step)\n [win, loss] = trade(c, bid_quotes, ask_quotes, i + quotation_size, win, loss, growing_trend)\n\n\ndef load_file(file_name, number_of_attributes, timestamp_index, bid_index, ask_index, separator,\n date_format_regular_expression):\n conf = SparkConf().setMaster(\"local\").setAppName(\"My App\")\n SparkContext._ensure_initialized()\n sc = SparkContext(conf=conf)\n textRDD = sc.textFile(file_name)\n\n quotations = textRDD.flatMap(lambda x: x.split(separator)).zipWithIndex() \\\n .filter(\n lambda q: q[1] % number_of_attributes == timestamp_index or q[1] % number_of_attributes == bid_index or q[\n 1] % number_of_attributes == ask_index)\n\n quotation_timestamps = quotations.filter(lambda q: q[1] % number_of_attributes == timestamp_index) \\\n .map(\n lambda timestamp: date_to_timestamp(timestamp[0], date_format_regular_expression)) \\\n .collect()\n\n bid_quotes = quotations.filter(lambda q: q[1] % number_of_attributes == bid_index) \\\n .map(lambda timestamp: float(timestamp[0])) \\\n .collect()\n\n ask_quotes = quotations.filter(lambda q: q[1] % number_of_attributes == ask_index) \\\n .map(lambda timestamp: float(timestamp[0])) \\\n .collect()\n\n return [quotation_timestamps, bid_quotes, ask_quotes]\n\n\ndef load_model():\n yaml_file = open('../model/model.yaml', 'r')\n loaded_model_yaml = yaml_file.read()\n yaml_file.close()\n loaded_model = model_from_yaml(loaded_model_yaml)\n # load weights into new model\n loaded_model.load_weights(\"../model/model.h5\")\n print(\"Loaded model 
from disk\")\n return loaded_model\n\n\n#\ndef trade(class_number, bid_quotes, ask_quotes, quotation_index, win, loss, growing_trend):\n if class_number == 1:\n if not growing_trend:\n [win, loss] = sell_and_watch(bid_quotes, ask_quotes, quotation_index, take_profit=1, stop_loss=1,\n percent=True, win=win, loss=loss)\n if class_number == 2:\n if growing_trend:\n [win, loss] = buy_and_watch(bid_quotes, ask_quotes, quotation_index, take_profit=1, stop_loss=1,\n percent=True, win=win, loss=loss)\n if class_number == 3:\n if growing_trend:\n [win, loss] = buy_and_watch(bid_quotes, ask_quotes, quotation_index, take_profit=1, stop_loss=1,\n percent=True, win=win, loss=loss)\n if class_number == 4:\n if growing_trend:\n [win, loss] = sell_and_watch(bid_quotes, ask_quotes, quotation_index, take_profit=1, stop_loss=1,\n percent=True, win=win, loss=loss)\n\n print('current win: ' + str(win))\n print('current loss: ' + str(loss))\n return [win, loss]\n\n\ndef buy_and_watch(bid_quotes, ask_quotes, quotation_index, take_profit, stop_loss, percent, win, loss):\n trade_price = ask_quotes[quotation_index]\n take_profit_value = 0\n stop_loss_value = 0\n\n current_bid_price = bid_quotes[quotation_index]\n index = quotation_index\n\n if percent:\n take_profit_value = current_bid_price * (100 + take_profit) / 100\n stop_loss_value = current_bid_price * (100 - stop_loss) / 100\n else:\n take_profit_value = current_bid_price + take_profit\n stop_loss_value = current_bid_price - stop_loss\n\n while stop_loss_value < current_bid_price < take_profit_value and index + 1 < len(bid_quotes):\n index = index + 1\n current_bid_price = bid_quotes[index]\n\n if index != len(bid_quotes):\n if current_bid_price >= take_profit_value:\n return [win + 1, loss]\n else:\n return [win, loss + 1]\n else:\n return [win, loss]\n\n\ndef sell_and_watch(bid_quotes, ask_quotes, quotation_index, take_profit, stop_loss, percent, win, loss):\n trade_price = bid_quotes[quotation_index]\n take_profit_value = 0\n 
stop_loss_value = 0\n\n current_bid_price = ask_quotes[quotation_index]\n index = quotation_index\n\n if percent:\n take_profit_value = current_bid_price * (100 - take_profit) / 100\n stop_loss_value = current_bid_price * (100 + stop_loss) / 100\n else:\n take_profit_value = current_bid_price - take_profit\n stop_loss_value = current_bid_price + stop_loss\n\n while take_profit_value < current_bid_price < stop_loss_value and index + 1 < len(bid_quotes):\n index = index + 1\n current_bid_price = ask_quotes[index]\n\n if index != len(ask_quotes):\n if current_bid_price <= take_profit_value:\n return [win + 1, loss]\n else:\n return [win, loss + 1]\n else:\n return [win, loss]\n\n\ndef check_growing_trend(bid_quotes, index, quotation_step):\n if index - quotation_step >= 0:\n if bid_quotes[index] >= bid_quotes[index - quotation_step]:\n return True\n else:\n return False\n else:\n return True\n\n\nclassify_and_trade('../../real_time/quotations/USDJPY_2017_15M.csv', 6, 0, 3, 4, ',', 76, 10, \"%d.%m.%Y %H:%M:%S.%f\")"
},
{
"alpha_fraction": 0.6394202709197998,
"alphanum_fraction": 0.6608695387840271,
"avg_line_length": 45.621620178222656,
"blob_id": "ac2bf6c1a48fe50cf6a5825ea64b67d51ab918f7",
"content_id": "0b709e380c322041140c41613eb7eebb983ce50d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1725,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 37,
"path": "/util/quotation_parser.py",
"repo_name": "baczynski/quotation-charts-classifier.",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\nfrom pyspark import SparkConf, SparkContext\n\nfrom masterthesis.util.date_parser import date_to_timestamp\n\n\ndef parse_file(input_file_name, output_file_name, number_of_attributes, timestamp_index, price_index,\n separator, quotation_size, quotation_step, date_format_regular_expression):\n conf = SparkConf().setMaster(\"local\").setAppName(\"My App\")\n SparkContext._ensure_initialized()\n sc = SparkContext(conf=conf)\n textRDD = sc.textFile(input_file_name)\n\n quotations = textRDD.flatMap(lambda x: x.split(separator)).zipWithIndex() \\\n .filter(\n lambda q: q[1] % number_of_attributes == timestamp_index or q[1] % number_of_attributes == price_index)\n\n quotation_timestamps = quotations.filter(lambda q: q[1] % number_of_attributes == timestamp_index) \\\n .map(\n lambda timestamp: date_to_timestamp(timestamp[0], date_format_regular_expression)) \\\n .collect()\n\n quotation_prices = quotations.filter(lambda q: q[1] % number_of_attributes == price_index) \\\n .map(lambda timestamp: float(timestamp[0])) \\\n .collect()\n\n for i in range(0, len(quotation_timestamps), quotation_step):\n timestamps_batch = quotation_timestamps[i:(i + quotation_size)]\n timestamps_batch[:] = [x / 10000000000 for x in timestamps_batch]\n prices_batch = quotation_prices[i: (i + quotation_size)]\n fig, ax = plt.subplots(nrows=1, ncols=1)\n plt.axis('off')\n ax.plot(timestamps_batch, prices_batch)\n fig.savefig(output_file_name + 'USDJPY15-' + str(int(i/10)) + '.png')\n\n\nparse_file('../../real_time/USDJPY_2017_15M.csv', '../../real_time/images/', 6, 0, 4, ',', 76, 10, \"%d.%m.%Y %H:%M:%S.%f\")\n"
},
{
"alpha_fraction": 0.6498599648475647,
"alphanum_fraction": 0.651260495185852,
"avg_line_length": 22.766666412353516,
"blob_id": "17515b1c049098b1f8caeeb1c15ab699d13c657e",
"content_id": "abb65260c34fe08deaf44fbdf30786976eb9e089",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 714,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 30,
"path": "/util/file.py",
"repo_name": "baczynski/quotation-charts-classifier.",
"src_encoding": "UTF-8",
"text": "import glob\nimport os\n\n\ndef find_all_file_paths(path):\n file_paths = []\n\n file_types = ('.jpg', '.JPG', '.png')\n\n for file_type in file_types:\n for file in glob.glob(path + \"**/*\" + file_type, recursive=True):\n file_paths.append(file)\n\n return file_paths\n\n\ndef find_all_style_directories_path(root_path):\n for root, dirs, files in os.walk(root_path):\n return dirs\n\n\ndef count_all_files(images_root_path, directories):\n number_of_files = 0\n for directory in directories:\n number_of_files += number_of_files_in_directory(images_root_path + directory)\n return number_of_files\n\n\ndef number_of_files_in_directory(path):\n return len(find_all_file_paths(path))\n\n"
},
{
"alpha_fraction": 0.796875,
"alphanum_fraction": 0.796875,
"avg_line_length": 31,
"blob_id": "16982320ba838df2ab45051dcdfc352a05170462",
"content_id": "f6605fd9223650bc035ed4fc055e65c60bbdedbd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 192,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 6,
"path": "/util/date_parser.py",
"repo_name": "baczynski/quotation-charts-classifier.",
"src_encoding": "UTF-8",
"text": "import datetime\nimport time\n\n\ndef date_to_timestamp(date, date_format_regular_expression):\n return time.mktime(datetime.datetime.strptime(date, date_format_regular_expression).timetuple())\n"
},
{
"alpha_fraction": 0.6070518493652344,
"alphanum_fraction": 0.6257433891296387,
"avg_line_length": 33.115943908691406,
"blob_id": "d7bfc6ecf32a8a63d63874f6edc68f3761faf135",
"content_id": "4e39553cd8c93159981bce7cf542cd19bcfb3364",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2354,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 69,
"path": "/main/image.py",
"repo_name": "baczynski/quotation-charts-classifier.",
"src_encoding": "UTF-8",
"text": "import cv2\nimport numpy as np\n\nfrom masterthesis.util.file import find_all_style_directories_path, find_all_file_paths, count_all_files\n\nTRAIN_IMAGES_ROOT_PATH = '../../im/train/'\nVALIDATION_IMAGES_ROOT_PATH = '../../im/validation/'\nTEST_IMAGES_ROOT_PATH = '../../im/test/'\nMAX_HEIGHT = 640\nMAX_WIDTH = 480\n\n\ndef load_original_images(train):\n root_path = TRAIN_IMAGES_ROOT_PATH if train else VALIDATION_IMAGES_ROOT_PATH\n\n image_style_directories = find_all_style_directories_path(root_path)\n\n number_of_files = count_all_files(root_path, image_style_directories)\n\n image_index = 0\n x = np.empty((number_of_files, MAX_WIDTH, MAX_HEIGHT), dtype=np.float32)\n y = np.empty(number_of_files, dtype=np.int8)\n num_classes = 0\n\n for i, directory in enumerate(image_style_directories):\n file_paths = find_all_file_paths(root_path + directory)\n\n for fpath in file_paths:\n image = cv2.imread(fpath, cv2.IMREAD_GRAYSCALE)\n (thresh, im_bw) = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)\n # plt.imshow(im_bw)\n # plt.show()\n\n # x[image_index, ...] = cv2.resize(im_bw, (MAX_WIDTH, MAX_HEIGHT))\n x[image_index, ...] 
= im_bw\n y[image_index] = i\n image_index += 1\n num_classes = num_classes + 1\n\n print('class: ' + str(i) + ' directory: ' + directory)\n\n return [x, y, num_classes]\n\n\ndef load_test_data():\n image_style_directories = find_all_style_directories_path(TEST_IMAGES_ROOT_PATH)\n\n number_of_files = count_all_files(TEST_IMAGES_ROOT_PATH, image_style_directories)\n\n image_index = 0\n x = np.empty((number_of_files, MAX_WIDTH, MAX_HEIGHT), dtype=np.float32)\n y = np.empty(number_of_files, dtype=np.int8)\n num_classes = 0\n\n for i, directory in enumerate(image_style_directories):\n file_paths = find_all_file_paths(TEST_IMAGES_ROOT_PATH + directory)\n\n for fpath in file_paths:\n image = cv2.imread(fpath, cv2.IMREAD_GRAYSCALE)\n (thresh, im_bw) = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)\n\n x[image_index, ...] = im_bw\n y[image_index] = i\n image_index += 1\n num_classes = num_classes + 1\n\n print('class: ' + str(i) + ' directory: ' + directory)\n\n return [x, y, num_classes]\n"
},
{
"alpha_fraction": 0.5383360385894775,
"alphanum_fraction": 0.5513865947723389,
"avg_line_length": 24.54166603088379,
"blob_id": "316de9bd71d407b316e2d54b31f18147b0c86b50",
"content_id": "108c8e350b9030297d03df16736eb254290338c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 613,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 24,
"path": "/main/AccuracyHistory.py",
"repo_name": "baczynski/quotation-charts-classifier.",
"src_encoding": "UTF-8",
"text": "import keras\n\n\nclass AccuracyHistory(keras.callbacks.Callback):\n EPOCH = 0\n\n def on_train_begin(self, logs={}):\n self.acc = []\n\n def on_epoch_end(self, batch, logs={}):\n self.save()\n\n self.acc.append(logs.get('acc'))\n\n def save(self):\n model_yaml = self.model.to_yaml()\n\n if self.EPOCH % 10 == 0:\n\n with open(\"model\" + str(self.EPOCH) + \".yaml\", \"w\") as yaml_file:\n yaml_file.write(model_yaml)\n self.model.save_weights(\"model00\" + str(self.EPOCH) + \".h5\")\n self.EPOCH = self.EPOCH + 1\n print(\"Saved model to disk\")\n"
},
{
"alpha_fraction": 0.5858243703842163,
"alphanum_fraction": 0.6255778074264526,
"avg_line_length": 32.11224365234375,
"blob_id": "f6c6832f9eaed2718f58cd9e3433945c3ed12e0a",
"content_id": "5f2ecadcfe1b296c7435183151cc369d85cc6b16",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6490,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 196,
"path": "/main/MAIN.py",
"repo_name": "baczynski/quotation-charts-classifier.",
"src_encoding": "UTF-8",
"text": "import keras\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom keras.layers import Dense, Flatten, Dropout, Conv1D, MaxPooling1D\nfrom keras.models import Sequential, model_from_yaml\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import f1_score\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import recall_score\n\nfrom masterthesis.nowe.AccuracyHistory import AccuracyHistory\nfrom masterthesis.nowe.image import load_original_images, load_test_data\n\nBATCH_SIZE = 10\nEPOCHS = 1000\n\n\ndef learn():\n [x_train, y_train, num_classes] = load_original_images(True)\n [x_test, y_test, n] = load_original_images(False)\n\n x_train = x_train.astype('float32')\n x_test = x_test.astype('float32')\n x_train /= 255\n x_test /= 255\n print(x_train.shape[0], 'train samples')\n print(x_test.shape[0], 'test samples')\n\n y_train = keras.utils.to_categorical(y_train, num_classes)\n y_test = keras.utils.to_categorical(y_test, num_classes)\n\n history = AccuracyHistory()\n\n model = Sequential()\n # model.add(Dense(512, activation='relu', input_shape=(x_train.shape[1],)))\n # model.add(Dropout(0.2))\n # model.add(Dense(512, activation='relu'))\n # model.add(Dropout(0.2))\n # model.add(Dense(num_classes, activation='softmax'))\n # model.add(Dense(512, activation='relu', input_shape=(x_train.shape[1], x_train.shape[2])))\n # model.add(Dense(5000, activation='relu'))\n model.add(Conv1D(32, kernel_size=5, strides=2,\n activation='relu', input_shape=(x_train.shape[1], x_train.shape[2])))\n model.add(MaxPooling1D(pool_size=2, strides=2))\n model.add(Dropout(0.2))\n model.add(Conv1D(64, 10, activation='relu'))\n model.add(MaxPooling1D(pool_size=2, strides=2))\n model.add(Dropout(0.2))\n model.add(Conv1D(128, 10, activation='relu'))\n model.add(MaxPooling1D(pool_size=2, strides=2))\n model.add(Dropout(0.2))\n model.add(Flatten())\n model.add(Dense(1500, activation='relu'))\n model.add(Dense(1000, activation='relu'))\n 
model.add(Dense(500, activation='relu'))\n model.add(Dense(250, activation='relu'))\n model.add(Dense(100, activation='relu'))\n model.add(Dense(num_classes, activation='softmax'))\n model.compile(loss=keras.losses.squared_hinge,\n optimizer=keras.optimizers.SGD(lr=0.002),\n metrics=['accuracy'])\n model.fit(x_train, y_train,\n batch_size=BATCH_SIZE,\n epochs=EPOCHS,\n verbose=2,\n validation_data=(x_test, y_test),\n callbacks=[history])\n\n score = model.evaluate(x_test, y_test, verbose=0)\n print('Test loss:', score[0])\n print('Test accuracy:', score[1])\n print(\"hello\")\n\n plt.plot(range(1, EPOCHS + 1), history.acc)\n plt.xlabel('Epochs')\n plt.ylabel('Accuracy')\n plt.show()\n\n save(model)\n\n\ndef predict():\n model = load_model()\n\n [x_train, y_train, num_classes] = load_test_data()\n c = model.predict_classes(x_train)\n c = c.astype(np.int8)\n acc = accuracy(c, y_train)\n print(acc)\n\n print('class 0')\n exp0 = np.copy(y_train)\n pred0 = np.copy(c)\n\n exp0[exp0 != 0] = -1\n exp0[exp0 == 0] = 1\n pred0[pred0 != 0] = -1\n pred0[pred0 == 0] = 1\n print(\"PRECISION \" + str(precision_score(exp0, pred0, average='binary')))\n print(\"RECALL \" + str(recall_score(exp0, pred0, average='binary')))\n print(\"ACCURACY \" + str(accuracy_score(exp0, pred0)))\n print(\"F1SCORE \" + str(f1_score(exp0, pred0, average='binary')))\n print('')\n\n print('class 1')\n exp1 = np.copy(y_train)\n pred1 = np.copy(c)\n\n exp1[exp1 != 1] = -1\n exp1[exp1 == 1] = 1\n pred1[pred1 != 1] = -1\n pred1[pred1 == 1] = 1\n print(\"PRECISION \" + str(precision_score(exp1, pred1, average='binary')))\n print(\"RECALL \" + str(recall_score(exp1, pred1, average='binary')))\n print(\"ACCURACY \" + str(accuracy_score(exp1, pred1)))\n print(\"F1SCORE \" + str(f1_score(exp1, pred1, average='binary')))\n print('')\n\n print('class 2')\n exp2 = np.copy(y_train)\n pred2 = np.copy(c)\n\n exp2[exp2 != 2] = -1\n exp2[exp2 == 2] = 1\n pred2[pred2 != 2] = -1\n pred2[pred2 == 2] = 1\n print(\"PRECISION 
\" + str(precision_score(exp2, pred2, average='binary')))\n print(\"RECALL \" + str(recall_score(exp2, pred2, average='binary')))\n print(\"ACCURACY \" + str(accuracy_score(exp2, pred2)))\n print(\"F1SCORE \" + str(f1_score(exp2, pred2, average='binary')))\n print('')\n\n print('class 3')\n exp3 = np.copy(y_train)\n pred3 = np.copy(c)\n\n exp3[exp3 != 3] = -1\n exp3[exp3 == 3] = 1\n pred3[pred3 != 3] = -1\n pred3[pred3 == 3] = 1\n print(\"PRECISION \" + str(precision_score(exp3, pred3, average='binary')))\n print(\"RECALL \" + str(recall_score(exp3, pred3, average='binary')))\n print(\"ACCURACY \" + str(accuracy_score(exp3, pred3)))\n print(\"F1SCORE \" + str(f1_score(exp3, pred3, average='binary')))\n print('')\n\n print('class 4')\n exp4 = np.copy(y_train)\n pred4 = np.copy(c)\n\n exp4[exp4 != 4] = -1\n exp4[exp4 == 4] = 1\n pred4[pred4 != 4] = -1\n pred4[pred4 == 4] = 1\n print(\"PRECISION \" + str(precision_score(exp4, pred4, average='binary')))\n print(\"RECALL \" + str(recall_score(exp4, pred4, average='binary')))\n print(\"ACCURACY \" + str(accuracy_score(exp4, pred4)))\n print(\"F1SCORE \" + str(f1_score(exp4, pred4, 0.73, average='binary')))\n\n print(\"PRECISION \" + str(precision_score(y_train, c, average='micro')))\n print(\"RECALL \" + str(recall_score(y_train, c, average='micro')))\n print(\"ACCURACY \" + str(accuracy_score(y_train, c)))\n print(\"F1SCORE \" + str(f1_score(y_train, c, average='micro')))\n\n\ndef save(model):\n model_yaml = model.to_yaml()\n with open(\"model.yaml\", \"w\") as yaml_file:\n yaml_file.write(model_yaml)\n # serialize weights to HDF5\n model.save_weights(\"model.h5\")\n print(\"Saved model to disk\")\n\n\ndef load_model():\n yaml_file = open('../model/model.yaml', 'r')\n loaded_model_yaml = yaml_file.read()\n yaml_file.close()\n loaded_model = model_from_yaml(loaded_model_yaml)\n # load weights into new model\n loaded_model.load_weights(\"../model/model.h5\")\n print(\"Loaded model from disk\")\n return 
loaded_model\n\n\ndef accuracy(predictions, labels):\n right_predictions = 0\n\n for i in range(0, len(predictions)):\n if predictions[i] == labels[i]:\n right_predictions = right_predictions + 1\n\n return right_predictions / len(predictions)\n\n\npredict()\n"
}
] | 7 |
NakagawaMasafumi/study | https://github.com/NakagawaMasafumi/study | 2b985b384fc50dbe9a85d20af8e0b2a9814be844 | 6a39511f21dbc530944bedb853357b191eb8eac4 | 54cac50cb93cfa7e3c866d5245233b5c9b3af283 | refs/heads/master | 2021-04-15T07:43:20.154050 | 2018-03-29T09:22:14 | 2018-03-29T09:22:14 | 126,784,670 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5124729871749878,
"alphanum_fraction": 0.5399376153945923,
"avg_line_length": 34.78540802001953,
"blob_id": "b75a48afa7b09a27bc51f41d3bab29aa9bfaf8a6",
"content_id": "5d0ec7cdbadb4c71b92377bae95dcf1ef83aa26d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8374,
"license_type": "no_license",
"max_line_length": 153,
"num_lines": 233,
"path": "/collecting_tweets_with_API.py",
"repo_name": "NakagawaMasafumi/study",
"src_encoding": "UTF-8",
"text": "import json\nimport sys\nimport time\nimport pymongo\nimport requests\nimport gzip\nfrom collections import Counter\nfrom requests_oauthlib import OAuth1\nimport math, calendar\nfrom datetime import datetime, timedelta\nimport traceback\n\ndef is_japanese(text):\n def check_chr(x):\n return ((x >= 0x3040 and x <= 0x309f) or (x >= 0x30a0 and x <= 0x30ff))\n for ch in text:\n if check_chr(ord(ch)):\n return True\n return False\n\ndef collect_from_japan():\n client = pymongo.MongoClient('localhost', 27017)\n db = client.Tweets\n co = db.random_tweets\n\n f = open('setting.json', 'r')\n setting = json.load(f)\n f.close()\n\n auth = OAuth1(setting['api_key'], setting['api_secret'], setting['access_key'], setting['access_secret'])\n try:\n ret = requests.post(setting['filter_url'], auth=auth, stream=True, data={\"locations\":\"122.87,24.84,153.01,46.80\"})\n except Exception as e:\n print(e)\n\n\n for line in ret.iter_lines():\n try:\n tweet = json.loads(line)\n #if tweet['place'] is not None and tweet['place']['country'] == 'Japan':\n if tweet['place'] is not None and is_japanese(tweet['text']):\n co.save(tweet)\n # print(tweet)\n except Exception as e:\n print(e)\n\n print('interrupted')\n time.sleep(10)\n collect_from_japan()\n\ndef collect_from_follow_users():\n indexes = [0, 5000, 10000, 15000, 20000, 25000]\n client = pymongo.MongoClient('localhost', 27017)\n db = client.Tweets\n collections = [db.tweets_from_follow_users0, db.tweets_from_follow_users1, db.tweets_from_follow_users2 ,db.tweets_from_follow_users3, ]\n arg = int(sys.argv[1]) #コマンド例: python collecting_tweets_with_API.py 0\n\n co = collections[arg]\n index = indexes[arg]\n\n f = open('setting.json', 'r')\n setting = json.load(f)\n f.close()\n\n f = open('data/top_geo_existing_user_id_30000.json', 'r')\n user_list = json.load(f)\n f.close()\n if arg < 0:\n print('require arg >= 0')\n sys.exit()\n elif arg < 2:\n auth = OAuth1(setting['api_key'], setting['api_secret'], setting['access_key'], 
setting['access_secret'])\n    elif arg < 4:\n        auth = OAuth1(setting['api_key2'], setting['api_secret2'], setting['access_key2'], setting['access_secret2'])\n    elif arg < 6:\n        auth = OAuth1(setting['api_key3'], setting['api_secret3'], setting['access_key3'], setting['access_secret3'])\n    else:\n        print('require arg < 6')\n        sys.exit()\n    try:\n        ret = requests.post(setting['filter_url'], auth=auth, stream=True, data={\"follow\":user_list[index:index+5000]})\n    except Exception as e:\n        print(e)\n\n\n    for line in ret.iter_lines():\n        try:\n            tweet = json.loads(line)\n            co.save(tweet)\n        except Exception as e:\n            print(e)\n\n    print('interrupted')\n    time.sleep(1)\n    collect_from_follow_users()\n\ndef collect_follow_relationships():\n    max_number_query_ids = 15\n    client = pymongo.MongoClient('localhost', 27017)\n    db = client.Tweets\n    co = db.ids\n\n    f = open('setting.json', 'r')\n    setting = json.load(f)\n    f.close()\n    url_ids = setting['ids_url']\n\n    f = open('data/top_geo_existing_user_id_30000.json', 'r')\n    user_list = json.load(f)\n    f.close()\n    user_list = user_list[0:10000]\n    # user_list = user_list[10000:20000]\n    # user_list = user_list[20000:30000]\n    auth = OAuth1(setting['api_key'], setting['api_secret'], setting['access_key'], setting['access_secret'])\n    cnt_ids = 0\n    for i, user_id in enumerate(user_list):\n        print(i)\n        if cnt_ids == max_number_query_ids:\n            time.sleep(60 * 15 + 1)\n            cnt_ids = 0\n        obj = {'user_id':user_id, 'followings':[]}\n        r = requests.get(url_ids+str(user_id), auth=auth)\n        cnt_ids += 1\n        for line in r.iter_lines():\n            followings = json.loads(line)\n            obj['followings'].append(followings)\n        co.save(obj)\n\n        while 'errors' not in followings and 'error' not in followings and followings['next_cursor'] > 0:\n            if cnt_ids == max_number_query_ids:\n                time.sleep(60 * 15 + 1)\n                cnt_ids = 0\n            url_ids_tmp = url_ids + str(user_id) + '&cursor=' + followings['next_cursor_str']\n            r2 = requests.get(url_ids_tmp, auth=auth)\n            cnt_ids += 1\n            for line2 in r2.iter_lines():\n                followings = 
json.loads(line2)\n obj['followings'].append(followings)\n co.save(obj)\n\ndef make_user_list_for_experiment():\n #start_point = datetime(2017,5,26,20)-timedelta(hours=9) #収集開始日時\n start_point = datetime(2017,6,16,9)#-timedelta(hours=9)\n #end_point = datetime(2017,6,4,17)-timedelta(hours=9) #終了日時\n end_point = datetime(2017,7,31,0)#-timedelta(hours=9)\n TimeWindow = 4\n total_W = math.ceil((end_point-start_point).total_seconds()/(60*60*TimeWindow))\n print(total_W)\n months = {}\n for i ,v in enumerate(calendar.month_abbr):\n months[v] = i\n\n client = pymongo.MongoClient('localhost', 27017)\n db = client.Tweets\n out = db.tweets_per_user4\n tweets_collections = [db.tweets_from_follow_users0, db.tweets_from_follow_users1, db.tweets_from_follow_users2 ,db.tweets_from_follow_users3]\n cnt = 0\n for col_tweets in tweets_collections:\n for tweet in col_tweets.find(no_cursor_timeout=True):\n try:\n datetime_parts = tweet['created_at'].split(\" \")\n time_string = datetime_parts[5] + \"-\" + str(months[datetime_parts[1]]) + \"-\" + datetime_parts[2] + \" \" + datetime_parts[3]\n time_tmp = datetime.strptime(time_string,'%Y-%m-%d %H:%M:%S')\n if time_tmp > end_point:\n break\n user_out = out.find_one({'user_id_str':tweet['user']['id_str']})\n if user_out == None:\n if tweet['user']['id_str'] not in user_list:\n continue\n user_out = {'user_id':tweet['user']['id'], 'user_id_str':tweet['user']['id_str'], 'text':[], 'geo':[], 'tweet_id':[], 'timestamp':[]}\n if tweet['id'] in user_out['tweet_id']:\n continue\n user_out['text'].append(tweet['text'])\n if tweet['place'] == None:\n user_out['geo'].append(None)\n else:\n user_out['geo'].append(tweet['place']['full_name'])\n #user_out['place'].append()\n user_out['tweet_id'].append(tweet['id'])\n user_out['timestamp'].append(tweet['created_at'])\n out.save(user_out)\n cnt += 1\n print(cnt)\n except Exception as e:\n ex, ms, tb = sys.exc_info()\n print(\"\\nex -> \\t\",type(ex))\n print(ex)\n print(\"\\nms -> \\t\",type(ms))\n 
print(ms)\n            print(\"\\ntb -> \\t\",type(tb))\n            print(tb)\n\n            print(\"\\n=== and print_tb ===\")\n            traceback.print_tb(tb)\n            print('e itself: ' + str(e))\n\n    user_list = []\n    for user in out.find(no_cursor_timeout=True):\n        user_list.append(user['user_id'])\n    f = open('data/user_list2.json', 'w')\n    json.dump(user_list, f)\n    f.close()\n\n    cnt = 0\n    link = 0\n    link_dic = {}\n    co_ids = db.ids\n    for a_id in co_ids.find(no_cursor_timeout=True):\n        try:\n            if a_id['user_id'] in user_list:\n                cnt += 1\n                print(cnt)\n                for followings in a_id['followings']:\n                    if 'ids' in followings:\n                        for id in followings['ids']:\n                            id = int(id)\n                            if id in user_list:\n                                if a_id['user_id'] not in link_dic:\n                                    link_dic[a_id['user_id']] = []\n                                link_dic[a_id['user_id']].append(id)\n                                #link += 1\n        except Exception as e:\n            print(e)\n\n    f = open('data/link2.json', 'w')\n    json.dump(link_dic, f)\n    f.close()\n\nif __name__ == '__main__':\n    # collect_from_japan()\n    collect_from_follow_users()\n    collect_follow_relationships()\n    make_user_list_for_experiment()\n"
}
] | 1 |
SebastianMacaluso/Deep-learning_jet-images | https://github.com/SebastianMacaluso/Deep-learning_jet-images | e1464378c8ab067cb1e0d8fa9ab4e3526dc757f2 | 2f912650d501be80dc3fa161402675d31bb7b943 | 46853238357851e90b6f48dd855f816e31181ace | refs/heads/master | 2021-01-22T22:04:02.449507 | 2017-10-31T19:23:48 | 2017-10-31T19:23:48 | 92,756,058 | 3 | 2 | null | null | null | null | null | [
{
"alpha_fraction": 0.5391445159912109,
"alphanum_fraction": 0.5566450953483582,
"avg_line_length": 39.31821823120117,
"blob_id": "e175deaedc1ad45d06ebe4196238bf8c0b581001",
"content_id": "16b49919cdb8d4897c8057ffdbfc48b798cf17ec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 50684,
"license_type": "no_license",
"max_line_length": 447,
"num_lines": 1257,
"path": "/convnet_keras.py",
"repo_name": "SebastianMacaluso/Deep-learning_jet-images",
"src_encoding": "UTF-8",
"text": "##=============================================================================================\n##=============================================================================================\n# IMPLEMENTATION OF A CONVOLUTIONAL NEURAL NETWORK TO CLASSIFY JET IMAGES AT LHC\n##=============================================================================================\n##=============================================================================================\n\n# This script loads the (image arrays,true_values) tuples, creates the train, cross-validation and test sets and runs a convolutional neural network to classify signal vs background images. We then get the statistics and analyze the output. We plot histograms with the probability of signal and background to be tagged as signal, ROC curves and get the output of the intermediate layers and weights.\n# Last updated: October 30, 2017. Sebastian Macaluso\n\n##---------------------------------------------------------------------------------------\n##---------------------------------------------------------------------------------------\n# This code is ready to use on the jet_array1/test_large_sample dir. (The \"expand image\" function is currently commented). 
This version is for gray scale images.\n\n# To run:\n# Previous runs:\n# python cnn_keras_jets.py input_sample_signal input_sample_background number_of_epochs fraction_to_use mode(train or notrain) weights_filename \n\n# python convnet_keras.py test_large_sample 20 0.1 train &> test2\n\n##=============================================================================================\n##=============================================================================================\n\n##=============================================================================================\n############ LOAD LIBRARIES\n##=============================================================================================\n\n\nfrom __future__ import print_function\n\nimport numpy as np\nnp.random.seed(1560) # for reproducibility\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten\nfrom keras.layers import Conv2D, MaxPooling2D\nfrom keras.layers.normalization import BatchNormalization\nfrom keras import regularizers\nfrom keras import backend as K # We are using TensorFlow as Keras backend\n# from keras import optimizers\nfrom keras.utils import np_utils\n\nimport pickle\nimport gzip\nimport sys\nimport os\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n#import matplotlib as mpl\nfrom mpl_toolkits.mplot3d import Axes3D\n\nimport h5py\n\nimport time\nstart_time = time.time()\n\nimport data_loader as dl\n\nimport tensorflow as tf\nfrom keras.backend.tensorflow_backend import set_session\nconfig = tf.ConfigProto()\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.45\n#config.gpu_options.visible_device_list = \"0\"\nset_session(tf.Session(config=config))\n\n##=============================================================================================\n############ GLOBAL VARIABLES\n##=============================================================================================\n\n# 
local_dir='/Users/sebastian/Documents/Deep-Learning/jet_images/'\nlocal_dir=''\n\nos.system('mkdir -p jet_array_3')\n\nimage_array_dir_in=local_dir+'jet_array_1/' #Input dir to load the array of images\n# image_array_dir_in='../David/jet_array_1/'\n\nin_arrays_dir= sys.argv[1]\n\n\n# large_set_dir=image_array_dir_in+in_arrays_dir+'/'\nlarge_set_dir=image_array_dir_in+in_arrays_dir+'/'\n\nin_std_label='no_std' #std label of input arrays\nstd_label='bg_std' #std label for the standardization of the images with probabilities between prob_min and prob_max\nbias=2e-02\nnpoints = 38 #npoint=(Number of pixels+1) of the image\nN_pixels=np.power(npoints-1,2)\nmyMethod='std'\nmy_bins=20\n# npoints=38\n\n# extra_label='batch_norm'\nextra_label='_early_stop'\nmin_prob=0.2\nmax_prob=0.8\nmy_batch_size = 128\nnum_classes = 2\nepochs =int(sys.argv[2])\n#Run over different sets sizes\nsample_relative_size=float(sys.argv[3])\n\nmode=sys.argv[4]\n# mode='train'\n# mode='notrain'\n\n# input image dimensions\nimg_rows, img_cols = 37, 37\n\nlearning_rate=[np.sqrt(8.0)]# The default value for the learning rate (lr) Adadelta is 1.0. We divide the learning rate by sqrt(2) when the loss does not improve, we should start with lr=sqrt(8), so that the starting value is 2 (This is because we defined self.losses = [1,1] as the starting point). 
\n# learning_rate=[1.0]\n\n\n##=============================================================================================\n############ FUNCTIONS TO LOAD AND CREATE THE TRAINING, CROSS-VAL AND TEST SETS\n##=============================================================================================\n\n#1) We load the .npy file with the image arrays\ndef load_array(Array):\n print('Loading signal and background arrays ...')\n print('-----------'*10)\n data=np.load(large_set_dir+Array) #We load the .npy files\n return data\n\n##---------------------------------------------------------------------------------------------\n#2) We expand the images (adding zeros when necessary)\ndef expand_array(images):\n# ARRAY MUST BE IN THE FORM [[[iimage,ipixel,jpixel],val],...]\n\n Nimages=len(images)\n\n print('Number of images ',Nimages)\n\n expandedimages=np.zeros((Nimages,img_rows,img_cols))\n\n for i in range(Nimages):\n# print(i,len(images[i]))\n for j in range(len(images[i])):\n# print(i,j,images[i][j][1])\n expandedimages[images[i][j][0][0],images[i][j][0][1],images[i][j][0][2]] = images[i][j][1]\n# np.put(startgrid,ind,val)\n\n return expandedimages\n\n##---------------------------------------------------------------------------------------------\n#3) We create a tuple of (image array, true value) joining signal and background, and we shuffle it.\ndef add_sig_bg_true_value(Signal,Background):\n print('Creating tuple (data,true value) ...')\n print('-----------'*10)\n Signal=np.asarray(Signal)\n Background=np.asarray(Background)\n input_array=[]\n true_value=[]\n for ijet in range(0,len(Signal)):\n input_array.append(Signal[ijet].astype('float32'))\n true_value.append(np.array([0]).astype('float32'))\n# print('List of arrays for signal = \\n {}'.format(input_array))\n# print('-----------'*10)\n\n for ijet in range(0,len(Background)):\n input_array.append(Background[ijet].astype('float32'))\n true_value.append(np.array([1]).astype('float32'))\n# print('Joined list of 
arrays for signal and background = \\n {}'.format(input_array))\n# print('-----------'*10)\n# print('Joined list of true values for signal and background = \\n {}'.format(true_value))\n# print('-----------'*10)\n\n output=list(zip(input_array,true_value))\n #print('Input array for neural network, with format (Input array,true value)= \\n {}'.format(output[0][0]))\n# for (x,y) in output:\n# print('x={}'.format(x))\n# print('y={}'.format(y))\n# print('-----------'*10)\n print('Shuffling tuple (data, true value) ...')\n print('-----------'*10)\n shuffle_output=np.random.permutation(output)\n\n return shuffle_output\n\n##---------------------------------------------------------------------------------------------\n#4) This function loads the zipped tuple of image arrays and true values. It divides the data into train and validation sets. Then we create new arrays with$\ndef sets(Data):\n print('Generating arrays with the correct input format for Keras ...')\n print('-----------'*10)\n# Ntrain=int(0.8*len(Data))\n X=[x for (x,y) in Data]\n Y=[y for (x,y) in Data]\n# Y_test=[y for (x,y) in Data[Ntrain:]]\n\n# print('X (train+test) before adding [] to each element=\\n{}'.format(X))\n X=np.asarray(X)\n print('Shape X = {}'.format(X.shape))\n X_out=X.reshape(X.shape[0],X.shape[1],X.shape[2],1)\n print('-----------'*10)\n print('Shape X out after adding [] to each element= {}'.format(X_out.shape))\n# print('Input arrays X_out after adding [] to each element (middle row)=\\n{}'.format(X_out[17][0:37]))\n print('-----------'*10)\n\n output_tuple=list(zip(X_out,Y))\n\n# print('Tuple of (array,true value) as input for the cnn =\\n {}'.format(output_tuple))\n\n return output_tuple\n\n\n\n##---------------------------------------------------------------------------------------------\n#5) Get the list with the input images file names\ndef get_input_array_list(input_array_dir):\n# sg_imagelist = [filename for filename in np.sort(os.listdir(input_array_dir)) if 
filename.startswith('tt_') and 'batch' in filename and 210000>int(filename.split('_')[1])>190000]\n# bg_imagelist = [filename for filename in np.sort(os.listdir(input_array_dir)) if filename.startswith('QCD_') and 'batch' in filename and 210000>int(filename.split('_')[1])>190000]\n\n sg_imagelist = [filename for filename in np.sort(os.listdir(input_array_dir)) if filename.startswith('tt_') ] # and 'batch' in filename and 210000>int(filename.split('_')[1])>190000]\n bg_imagelist = [filename for filename in np.sort(os.listdir(input_array_dir)) if filename.startswith('QCD_')] # and 'batch' in filename and 210000>int(filename.split('_')[1])>190000]\n\n# N_arrays=len(imagelist)\n return sg_imagelist, bg_imagelist\n\n##---------------------------------------------------------------------------------------------\n#6) Define a dictionary to identify the training, cross-val and test sets\ndef load_all_files(array_list):\n dict={}\n for index in range(len(array_list)):\n dict[index]=load_array(array_list[index])\n print('Dict {} lenght = {}'.format(index,len(dict[index])))\n \n return dict\n\n##---------------------------------------------------------------------------------------------\n#7) Cut the number of images in the sample when necessary\ndef cut_sample(data_tuple, sample_relative_size):\n\n print('-----------'*10)\n print(data_tuple.shape, 'Input array sample shape before cut') \n print('-----------'*10)\n N_max= int(sample_relative_size*len(data_tuple))\n out_array= data_tuple[0:N_max]\n print(out_array.shape, 'Input array sample shape after cut')\n print('-----------'*10)\n return out_array\n\n\n##---------------------------------------------------------------------------------------------\n#8) Split the sample into train, cross-validation and test\ndef split_sample(data_tuple, train_frac_rel, val_frac_rel, test_frac_rel):\n\n val_frac_rel=train_frac_rel+val_frac_rel\n test_frac_rel =(val_frac_rel+test_frac_rel)\n \n train_frac=train_frac_rel\n 
val_frac=val_frac_rel\n test_frac=test_frac_rel\n\n N_train=int(train_frac*len(data_tuple))\n Nval=int(val_frac*len(data_tuple))\n Ntest=int(test_frac*len(data_tuple))\n\n x_train=[x for (x,y) in data_tuple[0:N_train]]\n Y_train=[y for (x,y) in data_tuple[0:N_train]]\n x_val=[x for (x,y) in data_tuple[N_train:Nval]]\n Y_val=[y for (x,y) in data_tuple[N_train:Nval]]\n x_test=[x for (x,y) in data_tuple[Nval:Ntest]]\n Y_test=[y for (x,y) in data_tuple[Nval:Ntest]]\n\n ##---------------------------------------------------------------------------------------------\n # convert class vectors to binary class matrices\n y_train = keras.utils.to_categorical(Y_train, num_classes)\n y_val = keras.utils.to_categorical(Y_val, num_classes)\n y_test = keras.utils.to_categorical(Y_test, num_classes)\n\n ##---------------------------------------------------------------------------------------------\n # Define input data format as Numpy arrays\n x_train = np.array(x_train)\n y_train = np.array(y_train)\n x_val = np.array(x_val)\n y_val = np.array(y_val)\n x_test = np.array(x_test)\n y_test = np.array(y_test)\n\n # x_train = x_train.astype('float32')\n # print('x_train = \\n {}'.format(x_train))\n # print('x_train shape:', x_train[0].shape)\n\n print('-----------'*10)\n print(len(x_train), 'train samples ('+str(train_frac*100)+'% of the set)')\n print(len(x_val), 'validation samples ('+str((val_frac-train_frac)*100)+'% of the set)')\n print(len(x_test), 'test samples ('+str(100-(val_frac)*100)+'% of the set)')\n print('-----------'*10)\n\n print(x_train.shape, 'train sample shape') # train_x.shape should be (batch or number of samples, height, width, channels), where channels is 1 for gray scale and 3 for RGB pictures\n print('-----------'*10)\n print('-----------'*10)\n\n # print('y_train=\\n {}'.format(y_train))\n # print('y_test=\\n {}'.format(y_test))\n return x_train, y_train, x_val, y_val, x_test, y_test#, 
N_train\n\n##---------------------------------------------------------------------------------------------\n#9) Concatenate arrays into a single set (i.e. cross-val or test) when multiple files are loaded\ndef concatenate_arrays(array_list, label_sg_bg, label):\n\n if label_sg_bg=='sg' and label=='val':\n temp_array=my_dict_val_sg[0]\n elif label_sg_bg=='bg' and label=='val':\n temp_array=my_dict_val_bg[0] \n elif label_sg_bg=='sg' and label=='test':\n temp_array=my_dict_test_sg[0] \n elif label_sg_bg=='bg' and label=='test':\n temp_array=my_dict_test_bg[0] \n else:\n print('Please specify the right labels')\n \n# temp_array=load_array(array_list[0])\n temp_array = cut_sample(temp_array, sample_relative_size)\n# temp_array = expand_array(temp_array)\n \n for index in range(len(array_list[1::])):\n \n new_index=index+1\n if label_sg_bg=='sg' and label=='val':\n single_array=my_dict_val_sg[new_index]\n elif label_sg_bg=='bg' and label=='val':\n single_array=my_dict_val_bg[new_index] \n elif label_sg_bg=='sg' and label=='test':\n single_array=my_dict_test_sg[new_index] \n elif label_sg_bg=='bg' and label=='test':\n single_array=my_dict_test_bg[new_index] \n else:\n print('Please specify the right labels')\n \n# single_array= load_array(i_file)\n single_array = cut_sample(single_array, sample_relative_size)\n# single_array=expand_array(single_array)\n elapsed=time.time()-start_time\n print('images expanded')\n print('elapsed time',elapsed) \n temp_array=np.concatenate((temp_array,single_array), axis=0)\n return temp_array\n\n\n##---------------------------------------------------------------------------------------------\n#10) Create the validation and test sets\ndef generate_input_sets(sg_files, bg_files,train_frac_rel, in_val_frac_rel,in_test_frac_rel, set_label):\n print('Generates batches of samples for {}'.format(sg_files))\n # Infinite loop\n print('len(sg_files)=',len(sg_files))\n indexes = np.arange(len(sg_files))\n\n print('-----------'*10)\n print( 'indexes 
=',indexes) \n print('-----------'*10)\n \n \n# signal= load_array(sg_files[0])\n# background = load_array(bg_files[0])\n\n signal = concatenate_arrays(sg_files,'sg',set_label)\n background = concatenate_arrays(bg_files,'bg',set_label)\n\n print(signal.shape, 'signal sample shape') # train_x.shape should be (batch or number of samples, height, width, channels), where channels is 1 for gray scale and 3 for RGB pictures\n print(background.shape, 'background sample shape') # train_x.shape should be (batch or number of samples, height, width, channels), where channels is 1 for gray scale and 3 for RGB pictures\n\n data_in= add_sig_bg_true_value(signal,background)\n data_tuple = sets(data_in)\n\n x_train1, y_train1, x_val1, y_val1, x_test1, y_test1 = split_sample(data_tuple, train_frac_rel, in_val_frac_rel,in_test_frac_rel)\n\n if set_label=='train':\n print('-----------'*10)\n print('Using training dataset')\n print('-----------'*10)\n return x_train1, y_train1\n elif set_label == 'val':\n print('-----------'*10)\n print('Using validation dataset')\n print('-----------'*10)\n return x_val1, y_val1 \n elif set_label=='test':\n return x_test1, y_test1 \n\n##=============================================================================================\n############ DATA GENERATOR CLASS TO GENERATE THE BATCHES FOR TRAINING\n##=============================================================================================\n# (We create this class because the sample size is larger than the memory of the system)\n# The data generator is just that a generator that has no idea how the data it generates is going to be used and at what epoch. 
It should just keep generating batches of data forever as needed.\n\nclass DataGenerator(object):\n print('Generates data for Keras')\n def __init__(self, dim_x = img_rows, dim_y = img_cols, batch_size = my_batch_size, shuffle = False):\n# 'Initialization'\n self.dim_x = dim_x\n self.dim_y = dim_y\n self.batch_size = batch_size\n self.shuffle = shuffle\n\n\n def generate(self, sg_files, bg_files,train_frac_rel, in_val_frac_rel,in_test_frac_rel, set_label):\n print('Generates batches of samples for {}'.format(sg_files))\n # Infinite loop\n# print('len(sg_files)=',len(sg_files))\n \n while True:\n \n indexes = np.arange(len(sg_files))\n print('len(sg_files)= ',len(sg_files))\n for index in indexes:\n\n name_sg=str('_'.join(sg_files[index].split('_')[:2]))\n name_bg=str('_'.join(bg_files[index].split('_')[:-1]))\n in_tuple=name_sg+'_'+name_bg\n# print('Name signal ={}'.format(name_sg))\n# print('Name background={}'.format(name_bg))\n# print('-----------'*10)\n\n signal= my_dict_train_sg[index]\n background = my_dict_train_bg[index]\n\n# signal= load_array(sg_files[index])\n# background = load_array(bg_files[index])\n\n signal = cut_sample(signal, sample_relative_size)\n background = cut_sample(background, sample_relative_size)\n \n# signal=expand_array(signal)\n# background=expand_array(background)\n elapsed=time.time()-start_time\n print('images expanded')\n print('elapsed time',elapsed) \n\n data_in= add_sig_bg_true_value(signal,background)\n data_tuple = sets(data_in)\n\n x_train1, y_train1, x_val1, y_val1, x_test1, y_test1 = split_sample(data_tuple, train_frac_rel, in_val_frac_rel,in_test_frac_rel)\n \n \n subindex= np.arange(len(x_train1))\n print('len(x_train1[{}])= {}'.format(index,len(x_train1)))\n imax = int(len(subindex)/self.batch_size)\n print('imax =',imax)\n print('\\n'+'-----------'*10)\n print('////////////'*10)\n \n for i in range(imax):\n \n if set_label=='train':\n# x_train_temp = [x_train1[k] for k in 
subindex[i*self.batch_size:(i+1)*self.batch_size]] \n# y_train_temp = [y_train1[k] for k in subindex[i*self.batch_size:(i+1)*self.batch_size]]\n x_train_temp = x_train1[i*self.batch_size:(i+1)*self.batch_size] \n y_train_temp = y_train1[i*self.batch_size:(i+1)*self.batch_size] \n\n# print(x_train_temp.shape, 'x_train_temp sample shape') # train_x.shape should be (batch or number of samples, height, width, channels), where channels is 1 for gray scale and 3 for RGB pictures\n \n# print('-----------'*10)\n# print('Using training dataset')\n# print('-----------'*10)\n yield x_train_temp, y_train_temp\n \n elif set_label == 'val':\n# x_val_temp = [x_val1[k] for k in subindex[i*self.batch_size:(i+1)*self.batch_size]] \n# y_val_temp = [y_val1[k] for k in subindex[i*self.batch_size:(i+1)*self.batch_size]]\n x_val_temp = x_val1[i*self.batch_size:(i+1)*self.batch_size] \n y_val_temp = y_val1[i*self.batch_size:(i+1)*self.batch_size]\n print('-----------'*10)\n print('Using validation dataset')\n print('-----------'*10)\n yield x_val_temp, y_val_temp \n \n elif set_label=='test':\n yield x_test1, y_test1\n \n\n##=============================================================================================\n############ LOAD AND CREATE THE TRAINING, CROSS-VAL AND TEST SETS\n##=============================================================================================\nsignal_array_list,background_array_list = get_input_array_list(large_set_dir)\n#----------------------------------------------------------------------------------------------------\n\ntrain_signal_array_list = signal_array_list[0:-4]\ntrain_background_array_list = background_array_list[0:-4]\nprint('-----------'*10)\nprint('-----------'*10)\nprint('train_signal_array_list=',train_signal_array_list)\nprint('-----------'*10)\nprint('train_bg_array_list=',train_background_array_list)\nprint('-----------'*10)\n\n\ntotal_images=0\nfor i_file in range(len(train_signal_array_list)):\n 
steps_file=load_array(train_signal_array_list[i_file])\n    total_images+=len(steps_file)\n\n# If the fraction from my input files for training is different from (train_frac, val_frac, test_frac)=(1,0,0), then also multiply Ntrain*train_frac\nNtrain=2*total_images*sample_relative_size\nprint('Ntrain',Ntrain)\n\nval_signal_array_list = signal_array_list[-4:-2]\nval_background_array_list = background_array_list[-4:-2] \n\ntest_signal_array_list = signal_array_list[-2::]\ntest_background_array_list = background_array_list[-2::]\n\nprint('-----------'*10)\nprint('val_signal_array_list=',val_signal_array_list)\nprint('-----------'*10)\nprint('val_bg_array_list=',val_background_array_list)\nprint('-----------'*10)\nprint('-----------'*10)\nprint('test_signal_array_list=',test_signal_array_list)\nprint('-----------'*10)\nprint('test_bg_array_list=',test_background_array_list)\nprint('-----------'*10)\n\n##---------------------------------------------------------------------------------------------\n# Load all the files to the dictionary\n\nmy_dict_train_sg=load_all_files(train_signal_array_list)\nmy_dict_train_bg=load_all_files(train_background_array_list)\n\nmy_dict_val_sg=load_all_files(val_signal_array_list)\nmy_dict_val_bg=load_all_files(val_background_array_list)\n\nmy_dict_test_sg=load_all_files(test_signal_array_list)\nmy_dict_test_bg=load_all_files(test_background_array_list)\n\n\n##=============================================================================================\n############ DEFINE THE NEURAL NETWORK ARCHITECTURE AND IMPLEMENTATION\n##=============================================================================================\n\ninput_shape = (img_rows, img_cols,1)\n\nmodel = Sequential()\n\nconvin1=Conv2D(32, kernel_size=(4, 4),\n                 activation='relu',\n                 input_shape=input_shape)\nmodel.add(convin1)\n\nconvout1 = MaxPooling2D(pool_size=(2, 2))\nmodel.add(convout1)\n\nmodel.add(Conv2D(64, (4, 4), activation='relu'))\n\nconvout2=MaxPooling2D(pool_size=(2, 
2))\nmodel.add(convout2)\n\nmodel.add(Dropout(0.25))\nmodel.add(Conv2D(64, (2, 2), activation='relu'))\n\nconvout3=MaxPooling2D(pool_size=(2, 2))\nmodel.add(convout3)\n\nmodel.add(Dropout(0.25))\nmodel.add(Flatten())\nmodel.add(Dense(256, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(256, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(num_classes, activation='softmax'))\n\nmodel.summary()\n\n##---------------------------------------------------------------------------------------------\n# LOSS FUNCTION - OPTIMIZER\n# We define the loss/cost function and the optimizer to reach the minimum (e.g. gradient descent, adadelta, etc).\n#a) For loss=keras.losses.categorical_crossentropy, we need to get the true values in the form of vectors of 0 and 1: y_train = keras.utils.to_categorical(y_train, num_classes) \n#b) Use metrics=['accuracy'] for classification problems\n\n#1) Adadelta\nAdadelta=keras.optimizers.Adadelta(lr=learning_rate[0], rho=0.95, epsilon=1e-08, decay=0.0)\n\n#2) Adam\nAdam=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)\n\n#3) Stochastic gradient descent: the convergence is much slower than with Adadelta\nsgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)\n\nmodel.compile(loss=keras.losses.categorical_crossentropy,\n              optimizer=Adadelta,\n              metrics=['categorical_accuracy'])\n\n\n##---------------------------------------------------------------------------------------------\n# FUNCTIONS TO ADJUST THE LEARNING RATE \n# We write functions to divide the learning rate by sqrt(2) when the validation loss (val_loss) does not improve within some threshold\n\n# Get the validation losses after each epoch\nsd=[]\nclass LossHistory(keras.callbacks.Callback):\n    def on_train_begin(self, logs={}):\n        self.losses = [1,1] #Initial value of the val loss function\n\n    def on_epoch_end(self, epoch, logs={}):\n        self.losses.append(logs.get('val_loss')) # We append the val loss of the last epoch to 
losses\n sd.append(step_decay(len(self.losses))) # We run step_decay to determine if we update the learning rate\n # print('lr:', step_decay(len(self.losses)))\n\n##-----------------------------\n# Take the difference between the last 2 val_loss and divide the learning rate by sqrt(2) when it does not improve. Both requirements should be satisfied:\n#1) loss[-2]-loss[-1]<0.0005\n#2) loss[-2]-loss[-1]< loss[-1]/3\ndef step_decay(losses): \n\n# if float(np.array(history.losses[-2])-np.array(history.losses[-1]))<0.0005 and\n\tif float(np.array(history.losses[-2])-np.array(history.losses[-1]))<0.0001 and float(np.array(history.losses[-2])-np.array(history.losses[-1]))< np.array(history.losses[-1])/3:\n\t\tprint('\\n loss[-2] = ',np.array(history.losses[-2]))\n\t\tprint('\\n loss[-1] = ',np.array(history.losses[-1]))\n\t\tprint('\\n loss[-2] - loss[-1] = ',float(np.array(history.losses[-2])-np.array(history.losses[-1])))\n\t\tlrate=learning_rate[-1]/np.sqrt(2)\n\t\tlearning_rate.append(lrate)\n\telse:\n\t\tlrate=learning_rate[-1]\n\n\tprint('\\n Learning rate =',lrate)\n\tprint('------------'*10)\n\t\n\treturn lrate\n\n##-----------------------------\nhistory=LossHistory() #We define the class history that will have the val loss values\n# Get val_loss for each epoch. This is called at the end of each epoch and it will append the new value of the val_loss to the list 'losses'. 
\nlrate=keras.callbacks.LearningRateScheduler(step_decay) # Get new learning rate\n\nearly_stop=keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.002, patience=4, verbose=0, mode='auto')\n# patience=4 means that if there is no improvement in the cross-validation accuracy greater that 0.002 within the following 3 epochs, then it stops\n\n\n##=============================================================================================\n############ TRAIN THE MODEL (OR LOAD TRAINED WEIGHTS)\n##=============================================================================================\n#Make folder to save weights\nweights_dir = 'weights/'\n#os.system(\"rm -rf \"+executedir)\nos.system(\"mkdir -p \"+weights_dir)\n\n\nif mode=='notrain':\n my_weights=sys.argv[6]\n WEIGHTS_FNAME=weights_dir+my_weights\n if True and os.path.exists(WEIGHTS_FNAME):\n # Just change the True to false to force re-training\n print('Loading existing weights')\n print('------------'*10)\n model.load_weights(WEIGHTS_FNAME)\n else: \n print('Please specify a weights file to upload')\n \nelif mode=='train':\n # We add history and lrate as callbacks. A callback is a set of functions to be applied at given stages of the training procedure. You can use callbacks to get a view on internal states and statistics of the model during training. You can pass a list of callbacks (as the keyword argument callbacks) to the .fit() method of the Sequential or Model classes. 
The relevant methods of the callbacks will then be called at each stage of the training.\n # my_weights_name='cnn_weights_epochs_'+str(epochs)+'_Ntrain_'+str(Ntrain)+'_'+in_tuple+extra_label+'.hdf'\n my_weights_name='cnn_weights_epochs_'+str(epochs)+'_Ntrain_'+str(Ntrain)+'_'+extra_label+'.hdf' \n #Load weights and continue with training \n if(len(sys.argv)>6):\n my_weights=sys.argv[6]\n WEIGHTS_FNAME=weights_dir+my_weights\n if True and os.path.exists(WEIGHTS_FNAME):\n # Just change the True to false to force re-training\n print('Loading existing weights')\n print('------------'*10)\n model.load_weights(WEIGHTS_FNAME)\n \n previous_epoch=int('_'.join(my_weights.split('_')[3])) \n # my_weights_name='cnn_weights_epochs_'+str(epochs+previous_epoch)+'_Ntrain_'+str(Ntrain)+'_'+in_tuple+extra_label+'.hdf' \n my_weights_name='cnn_weights_epochs_'+str(epochs+previous_epoch)+'_Ntrain_'+str(Ntrain)+'_'+extra_label+'.hdf' \n\n ##-----------------------------\n # Create training and cross-validation sets \n train_x_train_y = DataGenerator().generate(train_signal_array_list, train_background_array_list, 1.0,0.0,0.0, 'train')\n # val_x_val_y = DataGenerator().generate(val_signal_array_list, val_background_array_list, 0.0,1.0,0.0, 'val')\n \n val_x, val_y = generate_input_sets(val_signal_array_list, val_background_array_list, 0.0,1.0,0.0, 'val')\n \n print('total_images =',total_images) \n my_steps_per_epoch= int(2*total_images*sample_relative_size/my_batch_size) \n\n print('my_steps_per_epoch =',my_steps_per_epoch)\n\n my_max_q_size=my_steps_per_epoch/6\n\n ##-----------------------------\n # Run Keras training routine \n model.fit_generator(generator = train_x_train_y,\n steps_per_epoch = my_steps_per_epoch, #This is the number of files that we use to train in each epoch\n epochs=epochs,\n verbose=2,\n validation_data =(val_x, val_y)\n ,max_q_size=my_max_q_size # defaults to 10\n ,callbacks=[history,lrate,early_stop]\n ) \n\n WEIGHTS_FNAME = weights_dir+my_weights_name \n 
print('------------'*10)\n print('Weights filename =',WEIGHTS_FNAME)\n print('------------'*10)\n # We save the trained weights\n model.save_weights(WEIGHTS_FNAME, overwrite=True)\n\nelse:\n print('Please specify a valid mode')\n \nprint('------------'*10)\n\n##-----------------------------\n# Create the test set and evaluate the model\ntest_x, test_y = generate_input_sets(test_signal_array_list, test_background_array_list, 0.0,0.0,1.0, 'test')\nscore = model.evaluate(test_x, test_y, verbose=0)\nprint('Test loss = ', score[0])\nprint('Test accuracy = ', score[1])\n\nprint('------------'*10)\nprint('All learning rates = ',learning_rate)\nprint('------------'*10)\n\n\n# sys.exit()\n\n\n##=============================================================================================\n##=============================================================================================\n########################### ANALYZE RESULTS ####################################\n##=============================================================================================\n##=============================================================================================\n\n\n##=============================================================================================\n############ LOAD LIBRARIES\n##=============================================================================================\n \nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import roc_curve, auc\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport pylab as pl\nimport matplotlib.cm as cm\nimport matplotlib.mlab as mlab\nimport matplotlib.patches as mpatches\n\n##=============================================================================================\n############ GLOBAL 
VARIABLES\n##=============================================================================================\n\nN_out_layer0=1\nN_out_layer1=32\nN_out_layer2=64\n\n##-------------------------------\nname_sg=str('_'.join(signal_array_list[0].split('_')[:2]))\nname_bg=str('_'.join(background_array_list[0].split('_')[:-1]))\nin_tuple=name_sg+'_'+name_bg\nprint('------------'*10)\nprint('------------'*10)\nprint('in_tuple = ',in_tuple)\nprint('------------'*10)\nname='_'.join(in_tuple.split('_')[:4])+'_pTj_'+'_'.join(in_tuple.split('_')[-3:-1])\n\n# print(in_tuple.split('_'))\nprint('Name of dir with weights and output layer images=',name)\n\n##-------------------------------\n# Create directories\nos.system('mkdir -p analysis/')\nos.system('mkdir -p analysis/outlayer_plots/')\nos.system('mkdir -p analysis/weight_plots/')\n\n##=============================================================================================\n############ PREDICT OUTPUT PROBABILITIES\n##=============================================================================================\n\n# Predict output probability for each class (signal or background) for the image\nY_Pred_prob = model.predict(test_x)\n\nprint('y_Test (categorical). 
This is a vector of zeros with a one in the position of the image class =\\n ',test_y[0:15])\n\n# Convert vector of 1 and 0 to index\ny_Pred = np.argmax(Y_Pred_prob, axis=1)\ny_Test = np.argmax(test_y, axis=1)\nprint('Predicted output from the CNN (0 is signal and 1 is background) = \\n',y_Pred[0:15])\nprint('y_Test (True value) =\\n ',y_Test[0:15])\nprint('y_Test length', len(y_Test))\nprint('------------'*10)\n\n#Print classification report\nprint(classification_report(y_Test, y_Pred))\nprint('------------'*10)\n\n##--------------------------------------------------------------------------------------------- \n# We calculate a single probability of tagging the image as signal\nout_prob=[]\nfor i_prob in range(len(Y_Pred_prob)):\n    out_prob.append((Y_Pred_prob[i_prob][0]-Y_Pred_prob[i_prob][1]+1)/2)\n\nprint('Predicted probability of each output neuron = \\n',Y_Pred_prob[0:15])\nprint('------------'*10)\nprint('Output of tagging image as signal = \\n',np.array(out_prob)[0:15])\nprint('------------'*10)\n\n\n##----------------------------------------------------\n#Make folder to save output probability and true values \noutprob_dir = 'analysis/out_prob/'\n#os.system(\"rm -rf \"+executedir) \nos.system(\"mkdir -p \"+outprob_dir)\n\n\n## SAVE OUTPUT PROBABILITIES AND TRUE VALUES\n# np.save(outprob_dir+'out_prob_'+in_tuple,out_prob)\n# np.save(outprob_dir+'true_value_'+in_tuple,y_Test)\n\nprint('Output probability filename = {}'.format(outprob_dir+'out_prob_'+in_tuple))\nprint('True value filename = {}'.format(outprob_dir+'true_value_'+in_tuple))\n\n\n\n##=============================================================================================\n############ Analysis over the images in the \"mistag range\" \n# (images with prob between min_prob and max_prob)\n##=============================================================================================\n\n##------------------------------------------------------------------------------------------\n# 1) Get 
probability for signal and background sets to be tagged as signal\n# 2) Get index of signal and bg images with a prob of being signal in some specific range\n\n# y_Test is the true value and out_prob the predicted probability of the image to be signal\nsig_prob=[] #Values of the predicted probability that are labeled as signal in the true value array\nbg_prob=[] #Values of the predicted probability that are labeled as bg in the true value array\n\nsig_idx=[]\nbg_idx=[]\n\n\nfor i_label in range(len(y_Test)):\n\n    if y_Test[i_label]==0: #signal label\n        sig_prob.append(out_prob[i_label])\n        if min_prob<out_prob[i_label]<max_prob:\n            sig_idx.append(i_label)\n    \n    elif y_Test[i_label]==1: #bg label\n        bg_prob.append(out_prob[i_label])\n        if min_prob<out_prob[i_label]<max_prob:\n            bg_idx.append(i_label)\n    \nprint('-----------'*10)\nprint('-----------'*10)\nprint('Predicted probability (images labeled as signal) = \\n',sig_prob[0:15])\nprint('-----------'*10)\nprint('Predicted probability (images labeled as background) =\\n ',bg_prob[0:15])\nprint('-----------'*10)\n##--------------------------\n# Get the array of bg and signal images with a prob of being signal within some specific range\n\nsig_images=[]\nbg_images=[]\n\nsig_label=[]\nbg_label=[]\n\nfor index in sig_idx:\n    sig_images.append(test_x[index])\n    sig_label.append(y_Test[index])\n\nfor index in bg_idx:\n    bg_images.append(test_x[index])\n    bg_label.append(y_Test[index])\n\nsig_images=np.asarray(sig_images)\nbg_images=np.asarray(bg_images)\n\nprint('-----------'*10)\nprint('Number of signal images in the slice between %s and %s = %i' %(str(min_prob), str(max_prob),len(sig_images)))\nprint('-----------'*10)\nprint('Number of background images in the slice between %s and %s = %i' %(str(min_prob), str(max_prob),len(bg_images)))\nprint('-----------'*10)\nprint('-----------'*10)\nprint('Signal images with a prob between %s and %s label (1st 10 values) = \\n %a' % (str(min_prob), 
str(max_prob),sig_label[0:10]))\nprint('-----------'*10)\nprint('Background images with a prob between %s and %s label (1st 10 values) = \\n %a' % (str(min_prob), str(max_prob),bg_label[0:10]))\nprint('-----------'*10)\n\n\n##=============================================================================================\n############ PLOT HISTOGRAM OF SIG AND BG EVENTS DEPENDING ON THEIR PROBABILITY OF BEING TAGGED AS SIGNAL\n##=============================================================================================\n\n#Make folder to save plots \noutprob_dir = 'analysis/out_prob/'\n#os.system(\"rm -rf \"+executedir) \nos.system(\"mkdir -p \"+outprob_dir)\n\n# Histogram function\ndef make_hist(in_sig_prob,in_bg_prob,name):\n # the histogram of the data\n# n, bins, patches = plt.hist(sig_prob, my_bins, facecolor='red')\n# n, bins, patches = plt.hist(bg_prob, my_bins, facecolor='blue')\n\n plt.hist(in_sig_prob, my_bins, alpha=0.5, facecolor='red')\n plt.hist(in_bg_prob, my_bins, alpha=0.5, facecolor='blue')\n\n\n red_patch = mpatches.Patch(color='red', label='True value = top jet')\n blue_patch = mpatches.Patch(color='blue', label='True value = qcd jet')\n plt.legend(handles=[red_patch,blue_patch],bbox_to_anchor=(1, 1),\n bbox_transform=plt.gcf().transFigure)\n # plt.legend(handles=[red_patch,blue_patch])\n # plt.legend(handles=[blue_patch])\n # add a 'best fit' line\n # y = mlab.normpdf( bins, mu, sigma)\n # l = plt.plot(bins, y, 'r--', linewidth=1)\n\n plt.xlabel('CNN output probability')\n plt.ylabel('Number of jets')\n # plt.title(r'$\\mathrm{Histogram\\ of\\ IQ:}\\ \\mu=100,\\ \\sigma=15$')\n # plt.axis([40, 160, 0, 0.03])\n plt.grid(True)\n\n # plt.show()\n\n fig = plt.gcf()\n plot_FNAME = 'Hist_'+name+in_tuple+'.png'\n print('------------'*10)\n print('Hist plot name = ',plot_FNAME)\n print('------------'*10)\n plt.savefig(outprob_dir+plot_FNAME)\n\n##-------------------------\n# Plot the histogram\n# make_hist(sig_prob,bg_prob, '_all_set')\n\n# 
sys.exit()\n\n##=============================================================================================\n############ PLOT ROC CURVE\n##=============================================================================================\n\n#Make folder to save plots \nROC_plots_dir = 'analysis/ROC/'\n#os.system(\"rm -rf \"+executedir) \nos.system(\"mkdir -p \"+ROC_plots_dir)\n\nROC_plots_dir2 = 'analysis/ROC/'+str(in_tuple)+'/'\nos.system(\"mkdir -p \"+ROC_plots_dir2)\n\n\n# Make ROC with area under the curve plot\ndef generate_results(y_test, y_score):\n    #I modified from pos_label=1 to pos_label=0 because I found out that in my code signal is labeled as 0 and bg as 1\n    fpr, tpr, thresholds = roc_curve(y_test, y_score,pos_label=0, drop_intermediate=False)\n    print('Thresholds[0:6] = \\n',thresholds[:6])\n    print('Thresholds length = \\n',len(thresholds))\n    print('fpr length',len(fpr))\n    print('tpr length',len(tpr))\n\n    print('------------'*10)\n    roc_auc = auc(fpr, tpr)\n    plt.figure()\n    plt.plot(fpr, tpr, color='red',label='Train epochs = '+str(epochs)+'\\n ROC curve (area = %0.2f)' % roc_auc)\n    #plt.plot(fpr[2], tpr[2], color='red',\n    #         lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])\n    #plt.plot([0, 1], [0, 1], 'k--')\n    plt.xscale('log')\n    plt.xlim([0.0, 1.05])\n    plt.ylim([0.0, 1.05])\n    plt.xlabel('Mistag Rate (False Positive Rate)')\n    plt.ylabel('Signal Tag Efficiency (True Positive Rate)')\n    plt.legend(loc=\"lower right\")\n    #plt.title('Receiver operating characteristic curve')\n#    plt.show()\n    plt.grid(True)\n    fig = plt.gcf()\n    label=''\n    plot_FNAME = 'ROC_'+str(epochs)+'_'+in_tuple+label+'.png'\n    plt.savefig(ROC_plots_dir2+plot_FNAME)\n\n    ROC_FNAME = 'ROC_'+str(epochs)+'_'+in_tuple+label+'_Ntrain_'+str(Ntrain)+'.npy'\n    np.save(ROC_plots_dir2+'fpr_'+str(sample_relative_size)+'_'+ROC_FNAME,fpr)\n    np.save(ROC_plots_dir2+'tpr_'+str(sample_relative_size)+'_'+ROC_FNAME,tpr)\n    print('ROC filename = {}'.format(ROC_plots_dir2+plot_FNAME))\n    print('AUC =', 
np.float128(roc_auc))\n print('------------'*10)\n\n\ngenerate_results(y_Test, out_prob)\n\n# sys.exit()\n\n\n\n##=============================================================================================\n############ VISUALIZE CONVOLUTION RESULT \n##=============================================================================================\n\n##--------------------------------------------------------------------------------------------- \n# Utility functions\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport numpy.ma as ma\n\n\ndef nice_imshow(ax, data, vmin=None, vmax=None, cmap=None):\n \"\"\"Wrapper around pl.imshow\"\"\"\n if cmap is None:\n cmap = cm.jet\n if vmin is None:\n vmin = data.min()\n if vmax is None:\n vmax = data.max()\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n im = ax.imshow(data, vmin=vmin, vmax=vmax, interpolation='nearest', cmap=cmap)\n pl.colorbar(im, cax=cax)\n\n\ndef make_mosaic(imgs, nrows, ncols, border=1):\n \"\"\"\n Given a set of images with all the same shape, makes a\n mosaic with nrows and ncols\n \"\"\"\n nimgs = imgs.shape[0]\n imshape = imgs.shape[1:]\n \n mosaic = ma.masked_all((nrows * imshape[0] + (nrows - 1) * border,\n ncols * imshape[1] + (ncols - 1) * border),\n dtype=np.float32)\n \n paddedh = imshape[0] + border\n paddedw = imshape[1] + border\n for i in range(nimgs):\n row = int(np.floor(i / ncols))\n col = i % ncols\n \n mosaic[row * paddedh:row * paddedh + imshape[0],\n col * paddedw:col * paddedw + imshape[1]] = imgs[i]\n return mosaic\n\n#pl.imshow(make_mosaic(np.random.random((9, 10, 10)), 3, 3, border=1))\n\n\n##--------------------------------------------------------------------------------------------- \n# Get and plot the average of each intermediate convolutional layer \n##--------------------------------------------------------------------------------------------- \n\n#Split input image arrays into signal and background 
ones\nx_test_sig=[]\nx_test_bg=[]\n# y_Test is the true value\nn_sig=0\nn_bg=0\n\nfor i_image in range(len(y_Test)):\n    if y_Test[i_image]==0:\n        x_test_sig.append(test_x[i_image])\n        n_sig+=1\n    elif y_Test[i_image]==1:\n        x_test_bg.append(test_x[i_image])\n        n_bg+=1\n\n\nprint('Length x_test signal = {} and number of signal samples = {}'.format(len(x_test_sig),n_sig))\nprint('Length x_test background {} and number of background samples = {}'.format(len(x_test_bg),n_bg))\n\n\n\n# K.learning_phase() is a flag that indicates if the network is in training or\n# predict phase. It allows layer (e.g. Dropout) to only be applied during training\ninputs = [K.learning_phase()] + model.inputs\n\n_convout1_f = K.function(inputs, [convout1.output])\n_convout2_f = K.function(inputs, [convout2.output])\n_convout3_f = K.function(inputs, [convout3.output])\n\n# def convout1_f(X):\n#     # The [0] is to disable the training phase flag\n#     return _convout1_f([0] + [X])\n\n# i = 3000\n# Visualize the first layer of convolutions on an input image\n# X = x_test[i:i+1]\n\n# outlayer_plots_dir = 'analysis/outlayer_plots/'+name+'_epochs_'+str(epochs)+'/'\noutlayer_plots_dir = 'analysis/outlayer_plots/'+'_'.join(in_tuple.split('_')[:-1])+'/'\n#os.system(\"rm -rf \"+executedir) \nos.system(\"mkdir -p \"+outlayer_plots_dir)\n\n\ndef get_output_layer_avg(x_test,func,layer):\n    \n    if layer==1:\n        avg_conv=np.zeros((32,17,17))\n    elif layer==2:\n        avg_conv=np.zeros((64,7,7))\n    elif layer==3:\n        avg_conv=np.zeros((64,3,3))\n    \n#    print(\"avg image shape = \", np.shape(avg_conv)) #create an array of zeros for the image\n\n    for i_layer in range(len(x_test)):\n        X = [x_test[i_layer]]\n        # The [0] is to disable the training phase flag\n        Conv = func([0] + [X])\n#        print('Conv_array type = ',type(Conv))\n        Conv=np.asarray(Conv)\n#        print('New type Conv_array type = ',type(Conv))\n#        print('Conv = \\n',Conv[0:2])\n#        print(\"First convolutional output layer shape before swapaxes = \", np.shape(Conv))\n        Conv = 
np.squeeze(Conv) #Remove single-dimensional entries from the shape of an array.\n Conv=np.swapaxes(Conv,0,2) #Interchange two axes of an array.\n Conv=np.swapaxes(Conv,1,2)\n # print('Con[0]= \\n',Conv[0])\n# print(\"First convolutional output layer shape after swap axes = \", np.shape(Conv))\n# print('-----------'*10) \n# print('avg_conv=\\n',avg_conv[0:2])\n avg_conv=avg_conv+Conv\n \n print(\"avg image shape after adding all images = \", np.shape(avg_conv)) #create an array of zeros for the image\n return avg_conv\n \n \n \ndef plot_output_layer_avg(func,n_im,layer,type):\n\n pl.figure(figsize=(15, 15))\n plt.axis('off')\n # pl.suptitle('convout1b')\n nice_imshow(pl.gca(), make_mosaic(func, n_im,8), cmap=cm.gnuplot)\n # return Conv\n# plt.show()\n fig = plt.gcf()\n plot_FNAME = 'avg_image_layer_'+str(layer)+'_'+str(type)+'_'+'_'.join(in_tuple.split('_')[4:-1])+'.png'\n # plot_FNAME = 'layer_'+str(layer)+'_img_'+str(i_im)+'_epochs_'+str(epochs)+'_'+in_tuple[:-4]+'.png'\n print('Saving average image for layer {} ...'.format(layer))\n print('-----------'*10)\n plt.savefig(outlayer_plots_dir+plot_FNAME)\n print('Output layer filename = {}'.format(outlayer_plots_dir+plot_FNAME))\n \n \n# print('Name sig','_'.join(in_tuple.split('_')[4:-1]))\n\n# avg_conv_array_sig=get_output_layer_avg(_convout2_f,2)\n# avg_conv_array_sig1=get_output_layer_avg(x_test_sig,_convout1_f,1)\n# avg_conv_array_bg1=get_output_layer_avg(x_test_bg,_convout1_f,1)\n# avg_conv_array_sig2=get_output_layer_avg(x_test_sig,_convout2_f,2)\n# avg_conv_array_bg2=get_output_layer_avg(x_test_bg,_convout2_f,2)\navg_conv_array_sig3=get_output_layer_avg(x_test_sig,_convout3_f,3)\navg_conv_array_bg3=get_output_layer_avg(x_test_bg,_convout3_f,3)\n\n# plot_output_layer_avg(avg_conv_array_sig1,4,1,'tt')\n# plot_output_layer_avg(avg_conv_array_bg1,4,1,'QCD')\n# plot_output_layer_avg(avg_conv_array_sig2,8,2,'tt')\n# 
plot_output_layer_avg(avg_conv_array_bg2,8,2,'QCD')\nplot_output_layer_avg(avg_conv_array_sig3,8,3,'tt')\nplot_output_layer_avg(avg_conv_array_bg3,8,3,'QCD')\n \n# sys.exit()\n\n\n##=============================================================================================\n############ VISUALIZE WEIGHTS \n##=============================================================================================\n\nW1 = model.layers[0].kernel\nW2 = model.layers[2].kernel\nW3 = model.layers[5].kernel\n\n# all_W=[]\n# for i_weight in range(3):\n# all_W.append(model.layers[i_weight].kernel)\n\n# W is a tensorflow variable: type W = <class 'tensorflow.python.ops.variables.Variable'>. We want to transform it to a numpy array to plot the weights\n# print('type W1 = ',type(W1))\nprint('------------'*10)\n\nimport tensorflow as tf\n# sess = tf.Session()\n# # from keras import backend as K\n# K.set_session(sess)\n# weightmodel = tf.global_variables_initializer()\n\n##---------------------------------------------------------------------------------------------\n# Transform tensorflow Variable to a numpy array to plot the weights\ndef tf_to_np(weight):\n\n print('Type weight_array before opening a tensorflow session = ',type(weight))\n sess = tf.Session()\n # from keras import backend as K\n K.set_session(sess)\n\n weightmodel = tf.global_variables_initializer()\n\n with sess:\n sess.run(weightmodel)\n weight_array=sess.run(weight)\n\n print('Type weight_array = ',type(weight_array))\n print('Shape weight_array before swapaxes = ',np.shape(weight_array))\n\n # weight_array=np.squeeze(weight_array)\n weight_array=np.swapaxes(weight_array,0,2)\n weight_array=np.swapaxes(weight_array,1,2)\n weight_array=np.asarray(weight_array)\n\n print('Shape weight_array after swapaxes = ',np.shape(weight_array))\n print('Shape weight_array after swapaxes[0] = ',np.shape(weight_array)[0])\n # print('Weight_aray = ',weight_array)\n\n return weight_array\n\n# all_W_np=[]\n# 
all_W_np.append(tf_to_np(W1))\n# all_W_np.append(tf_to_np(W2))\n# all_W_np.append(tf_to_np(W3))\n\n##---------------------------------------------------------------------------------------------\n# Plot the weights\n\nweight_plots_dir = 'analysis/weight_plots/'+name+'_epochs_'+str(epochs)+'/'\n#os.system(\"rm -rf \"+executedir) \nos.system(\"mkdir -p \"+weight_plots_dir)\n\n# N_map=0\n\ndef plot_2nd_3d_layer(ww,N_out_layer,n_weight,n_row):\n\n wout=tf_to_np(ww)\n\n # if n_weight==2 or n_weight==3:\n wout=wout[N_out_layer]\n wout=np.swapaxes(wout,0,2)\n wout=np.swapaxes(wout,1,2)\n \n pl.figure(figsize=(15, 15))\n plt.axis('off')\n nice_imshow(pl.gca(), make_mosaic(wout, n_row, 8), cmap=cm.gnuplot)\n fig = plt.gcf()\n plot_FNAME = 'weights_'+str(n_weight)+'_epochs_'+str(epochs)+'_N_out_layer_'+str(N_out_layer)+'_'+in_tuple[:-4]+'.png'\n# plt.savefig(weight_plots_dir+plot_FNAME)\n print('Weights filename = {}'.format(weight_plots_dir+plot_FNAME))\n print('------------'*10)\n \n \nfor N_map in range(N_out_layer0):\n plot_2nd_3d_layer(W1,N_map,1,4)\n \n# sys.exit()\n\nfor N_map in range(N_out_layer1):\n plot_2nd_3d_layer(W2,N_map,2,8)\n\nfor N_map in range(N_out_layer2):\n plot_2nd_3d_layer(W3,N_map,3,8)\n\n\n##=============================================================================================\n##=============================================================================================\n##=============================================================================================\n# Code execution time\nprint('-----------'*10)\nprint(\"Code execution time = %s minutes\" % ((time.time() - start_time)/60))\nprint('-----------'*10)\n\n \n"
},
{
"alpha_fraction": 0.5402966737747192,
"alphanum_fraction": 0.5525339841842651,
"avg_line_length": 43.93888854980469,
"blob_id": "edc0c50b88f8ad6467cbc75193e2fd19f693f856",
"content_id": "0365d2cd1efbd1848d43d3222ba1bf137797ad3e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8090,
"license_type": "no_license",
"max_line_length": 161,
"num_lines": 180,
"path": "/Simulations-Event_generation/run_MG5_pythia_jet_img.py",
"repo_name": "SebastianMacaluso/Deep-learning_jet-images",
"src_encoding": "UTF-8",
        "text": "#!/usr/bin/env python\n# Script to create bash files to run Madgraph\n\nimport sys, os, time, fileinput\n\n#To run: Run this script with run_all.py\n# python run_MG5_pythia_jet_img.py '+signal+' '+MG5_EVENTS+' '+MG5_PTJ1MIN+' '+MG5_PTJ1MAX+' '+MG5_ETAMAX+' '+jet_eta_max+' '+R_jet+' '+pTjetMin+' '+Nruns_signal\n\n#---------------------------------------------------------------------------------------------\n#GLOBAL VARIABLES\n\n# The MG5_process folder has the entry for the directory with the unweighted_events.lhe from MG5\nMG5_process=sys.argv[1]\nMYEVENTS=sys.argv[2]\nMYPTJ1MIN=sys.argv[3] #Min pT required in MG5 for the leading jet in pT\nMYPTJ1MAX=sys.argv[4]\nMYETAMAX=sys.argv[5]\n\neta_max=sys.argv[6]\nRjet=sys.argv[7]\npTjetMin=sys.argv[8] #Min pT for all jets\n\nNruns=int(sys.argv[9]) #Number of MG5 runs that we will do (we update the seed number for each of them)\n\nNevents=MYEVENTS\n# Nevents=str(4)\n\nprint('jet1pT_min={}'.format(MYPTJ1MIN))\nprint('Nevents = {}'.format(Nevents))\n\nmainpath=\"/het/p1/macaluso/\"\nusername=\"macaluso\"\nmain_dir=mainpath+'deep_learning/'\n\nMG5=\"MG5_aMC_v2.4.3.tar.gz\" \nMG5_dir='MG5_aMC_v2_4_3/'\nMGTAR=\"/het/p1/\"+username+\"/\"+MG5+\"\" \n\nMADGRAPH_DIRECTORY=\"/het/p1/\"+username+'/'+MG5_dir\n\npathPythia= os.path.join(mainpath,'pythia8219/examples/')\n\njets_filename='jets'\nsubjets_filename='subjets'\n\n#---------------------------------------------------------------------------------------------\n#generate command for Madgraph\ngen_command = \"/cms/base/python-2.7.1/bin/python ./bin/generate_events 0 \"\n\n#initialization for random number seed\nseed = 0\n# Nruns=90 #Number of MG5 runs that we will do (we update the seed number for each of them)\n\n#make folder for bash scripts\nexecutedir = main_dir+'exec_'+MG5_process+'_'+MYEVENTS+'_'+pTjetMin+'_'+Rjet\n#os.system(\"rm -rf \"+executedir)\nos.system(\"mkdir -p \"+executedir)\n\n#directory to collect lhe files results \nLHE_RESULTS_DIR 
=main_dir+'LHE/results_'+MG5_process+'_'+MYEVENTS+'_'+pTjetMin+'_'+Rjet\n#make directory if it doesn't already exist\nos.system(\"mkdir -p \"+LHE_RESULTS_DIR)\n\n#Directory to collect results in after PYTHIA\nRESULTS_DIR = main_dir+'results_'+MG5_process+'_'+MYEVENTS+'_'+pTjetMin+'_'+Rjet\nos.system(\"mkdir -p \"+RESULTS_DIR)\n \n#open file to contain submission commands for jdl files\ndofile = open(executedir+\"/do_all.src\",'w')\n\n#---------------------------------------------------------------------------------------------\n#////////////////////////////////////////////////////////////////////////////////////////////\n#---------------------------------------------------------------------------------------------\n# Loop that copies MG5 to the machine in the cluster, updates the random seed number, runs MG5 and saves the unweighted_events.lhe\n# Then runs PYTHIA 8 and saves the .npy Numpy arrays of 4-momentum vectors \nfor i in range(0,Nruns):\n\n seed += 1\n name = MG5_process+'_'+str(seed)\n\n #define name of template copy and specifc run name\n \n RUN_NAME = name\n \n localdir = \"$_CONDOR_SCRATCH_DIR\"\n \n #---------------------------------------------------------------------------------------------\n #write out python script for job execution\n execfilename = \"exec_\"+name+\".sh\"\n \n executefile = open(executedir+\"/\"+execfilename,'w')\n executefile.write(\"#!/bin/bash\\n\")\n executefile.write(\"export VO_CMS_SW_DIR=\\\"/cvmfs/cms.cern.ch\\\"\\n\")\n executefile.write(\"export COIN_FULL_INDIRECT_RENDERING=1\\n\")\n executefile.write(\"export SCRAM_ARCH=\\\"slc6_amd64_gcc481\\\"\\n\")\n\n #---------------------------------------------------------------------------------------------\n # Copy MG5 to the machine in the cluster\n executefile.write(\"cp -r \"+MGTAR+\" \"+localdir+\"\\n\") \n executefile.write(\"cd \"+localdir+\"\\n\")\n executefile.write(\"tar -xzvf \"+MG5+\"\\n\")\n #executefile.write(\"cd MG5_aMC_v2_3_3/pythia-pgs \\n\")\n 
#executefile.write(\"make \\n\")\n #executefile.write(\"cd ../Delphes \\n\")\n #executefile.write(\"make \\n\")\n #executefile.write(\"cd .. \\n\")\n\n #---------------------------------------------------------------------------------------------\n #copy template directory to new location, and update its random number seed and run name\n executefile.write(\"cp -r \"+MADGRAPH_DIRECTORY+MG5_process+\".tar.gz \"+localdir+\"/\"+MG5_dir+MG5_process+\".tar.gz \\n\")\n executefile.write(\"cd \"+MG5_dir+\" \\n\")\n executefile.write(\"tar -xzvf \"+MG5_process+\".tar.gz \\n\")\n executefile.write(\"mv \"+MG5_process+\" \"+RUN_NAME+\"\\n\")\n \n #---------------------------------------------------------------------------------------------\n #Update values in the run_card.dat\n run_card = localdir+\"/\"+MG5_dir+RUN_NAME+\"/Cards/run_card.dat\"\n executefile.write(\"sed -i \\'s/MYSEED/\"+str(seed)+\"/\\' \"+run_card+\"\\n\")\n executefile.write(\"sed -i \\'s/RUNNAME/\"+RUN_NAME+\"/\\' \"+run_card+\"\\n\")\n executefile.write(\"sed -i \\'s/MYEVENTS/\"+MYEVENTS+\"/\\' \"+run_card+\"\\n\")\n executefile.write(\"sed -i \\'s/MYPTJ1MIN/\"+MYPTJ1MIN+\"/\\' \"+run_card+\"\\n\")\n executefile.write(\"sed -i \\'s/MYPTJ1MAX/\"+MYPTJ1MAX+\"/\\' \"+run_card+\"\\n\")\n executefile.write(\"sed -i \\'s/MYETAMAX/\"+str(MYETAMAX)+\"/\\' \"+run_card+\"\\n\")\n\n #---------------------------------------------------------------------------------------------\n #Run MG5 \n executefile.write(\"cd \"+localdir+\"/\"+MG5_dir+RUN_NAME+\" \\n\")\n executefile.write(gen_command+RUN_NAME+\"\\n\")\n \n #---------------------------------------------------------------------------------------------\n #Save the output files\n executefile.write(\"cp \"+localdir+\"/\"+MG5_dir+RUN_NAME+\"/Events/\"+RUN_NAME+\"/unweighted_events.lhe.gz \"+LHE_RESULTS_DIR +\"/\"+RUN_NAME+\"_unweighted.lhe.gz \\n\")\n executefile.write(\"cp \"+localdir+\"/\"+MG5_dir+RUN_NAME+\"/Events/\"+RUN_NAME+\"/*banner.txt \"+LHE_RESULTS_DIR 
+\"/\"+RUN_NAME+\"_banner.txt\\n\")\n\n #---------------------------------------------------------------------------------------------\n #Copy lhe file to local dir\n executefile.write(\"mv \"+localdir+\"/\"+MG5_dir+RUN_NAME+\"/Events/\"+RUN_NAME+\"/unweighted_events.lhe.gz \"+localdir+\"/\"+name+'_unweighted_events.lhe.gz \\n')\n executefile.write(\"cd \"+localdir+\"\\n\")\n executefile.write(\"gunzip \"+name+'_unweighted_events.lhe.gz \\n')\n \n #Stable-------------------------------------------------------------\n #///////////////////////////////////////PYTHIA//////////////////////////////////\n #Stable-------------------------------------------------------------\n\n # runs Pythia 8 with stable chi0, RPV or HV decays specified in the cmnd files\n executefile.write(\"cp -r \"+pathPythia+'Makefile.inc '+localdir+\"/\"+\"Makefile.inc \\n\")\n executefile.write('python '+pathPythia+'pythia_LHE_image.py '+name+'_unweighted_events.lhe '+Nevents+' '\n +jets_filename+' '+subjets_filename+' '+eta_max+' '+Rjet+' '+pTjetMin+' > '+RUN_NAME+'_pythia.out \\n')\n \n executefile.write(\"cp \"+RUN_NAME+\"_pythia.out \"+RESULTS_DIR+\"/\"+RUN_NAME+\"_pythia.out\\n\")\n executefile.write(\"cp \"+jets_filename+\".npy \"+RESULTS_DIR+\"/\"+RUN_NAME+'_'+jets_filename+'.npy \\n')\n executefile.write(\"cp \"+subjets_filename+\".npy \"+RESULTS_DIR+\"/\"+RUN_NAME+'_'+subjets_filename+'.npy \\n')\n\n\n executefile.close()\n \n os.system(\"chmod u+x \"+executedir+\"/\"+execfilename)\n \n #---------------------------------------------------------------------------------------------\n #write out jdl script for job submission\n jdlfilename = \"exec_\"+name+\".jdl.base\"\n jdlfile = open(executedir+\"/\"+jdlfilename,'w')\n jdlfile.write(\"universe = vanilla\\n\")\n jdlfile.write(\"+AccountingGroup = \\\"group_rutgers.\"+username+\"\\\"\\n\")\n jdlfile.write(\"Executable =\"+executedir+\"/\"+execfilename+\"\\n\")\n jdlfile.write(\"getenv = True\\n\")\n jdlfile.write(\"should_transfer_files = 
NO\\n\")\n jdlfile.write(\"Arguments = \\n\")\n jdlfile.write(\"Output = /het/p1/\"+username+\"/condor/\"+RUN_NAME+\".out\\n\")\n jdlfile.write(\"Error = /het/p1/\"+username+\"/condor/\"+RUN_NAME+\".err\\n\")\n jdlfile.write(\"Log = /het/p1/\"+username+\"/condor/\"+RUN_NAME+\".condor\\n\")\n jdlfile.write(\"Queue 1\\n\")\n jdlfile.close()\n\n #dofile.write(\"sleep 2 \\n\")\n \n dofile.write(\"condor_submit \"+jdlfilename+\"\\n\")\n \n\ndofile.close()\n\n"
},
{
"alpha_fraction": 0.5733630061149597,
"alphanum_fraction": 0.5876169204711914,
"avg_line_length": 46.35865020751953,
"blob_id": "fec4b7066f5f1806489201c0fb959bbbe81e4d5f",
"content_id": "141290c340364a7d6f6793a7b559b00c16b912c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11225,
"license_type": "no_license",
"max_line_length": 620,
"num_lines": 237,
"path": "/Simulations-Event_generation/pythia_LHE_image.py",
"repo_name": "SebastianMacaluso/Deep-learning_jet-images",
"src_encoding": "UTF-8",
"text": "#This script is based on Main01.py \n\n# main01.py is a part of the PYTHIA event generator.\n# Copyright (C) 2016 Torbjorn Sjostrand.\n# PYTHIA is licenced under the GNU GPL version 2, see COPYING for details.\n# Please respect the MCnet Guidelines, see GUIDELINES for details.\n#\n# This is a simple test program. It fits on one slide in a talk. It\n# studies the charged multiplicity distribution at the LHC. To set the\n# path to the Pythia 8 Python interface do either (in a shell prompt):\n# export PYTHONPATH=$(PREFIX_LIB):$PYTHONPATH\n# or the following which sets the path from within Python.\n\n##----------------------------------------------------------------------------------------------------------------------\n# LINKING PYTHON WITH PYTHIA \n\n# ./configure --with-python-include=/cms/base/cmssoft/slc6_amd64_gcc530/external/python/2.7.11-giojec4/include/python2.7/ --with-python-bin=/cms/base/cmssoft/slc6_amd64_gcc530/cms/cmssw/CMSSW_8_1_0/external/slc6_amd64_gcc530/bin/ --with-hepmc2=/het/p1/mbuckley/HepMC/install/\n\n##----------------------------------------------------------------------------------------------------------------------\n#TO RUN THE SCRIPT\n\n# python Seb_LHE_image.py LHE_FILE Nevents jets_filename subjets_filename \n#Example: nohup python Seb_LHE_image.py tt_1000_unweighted.lhe 10 jets.npy subjets.npy &\n#Where Nevents are the number of events of the LHE file and jets_filename and subjets_filename the name of the files where we save the numpy arrays of jet/subjet (pT,eta,phi)\n\n##----------------------------------------------------------------------------------------------------------------------\n\nimport sys\nimport os\ncfg = open(\"Makefile.inc\")\nlib = \"../lib\"\nfor line in cfg:\n if line.startswith(\"PREFIX_LIB=\"): lib = line[11:-1]; break\nsys.path.insert(0, lib)\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\n\n# Import the Pythia module.\nimport 
pythia8\n\nfull_event_plots='tt_plots'\nos.system('mkdir -p '+full_event_plots)\n\nNevents=int(sys.argv[2])\njets_filename=sys.argv[3]\nsubjets_filename=sys.argv[4]\n\n\n# Import SlowJet: we use SlowJet to cluster the jet constituents. SlowJet is now identical to FastJets but has less features.\n#The recent introduction of fjcore, containing the core functionality of FastJet in a very much smaller package, has changed the conditions. It now is possible (even encouraged by the authors) to distribute the two fjcore files as part of the PYTHIA package. Therefore the SlowJet class doubles as a convenient front end to fjcore, managing the conversion back and forth between PYTHIA and FastJet variables. Some applications may still benefit from using the native codes, but the default now is to let SlowJet call on fjcore for the jet finding. More info at: http://home.thep.lu.se/~torbjorn/pythia82html/Welcome.html\npythia = pythia8.Pythia()\n\n##----------------------------------------------------------------------------------------------------------------------\n## READ LHE FILE\nif len(sys.argv) > 1: lhe_file_name = sys.argv[1]\n\npythia.readString(\"Beams:frameType = 4\")\npythia.readString(\"Beams:LHEF = \"+lhe_file_name)\n\npythia.init() #Initializes the process. Incoming p p beams are the default\n\nmult = pythia8.Hist(\"charged multiplicity\", 100, -0.5, 799.5)\n\n#Common parameters for the 2 jet finder:\np_value=-1 #This dettermines the clustering algorithm. 
p = -1 corresponds to the anti-kT one, p = 0 to the Cambridge/Aachen one and p = 1 to the kT one.\neta_max=float(sys.argv[5])\nR=float(sys.argv[6])\npTjetMin=int(sys.argv[7]) #Min pT for all jets\n# eta_max=2.5\n# R=1.5\n# pTjetMin=300 #Min pT for all jets\n#Exclude neutrinos (and other invisible) from study:\nnSel=2\n\n# calorimeter granularity\netaedges=np.arange(-eta_max,eta_max+0.025,10/206.25)\nphiedges=np.arange(-np.pi,np.pi+0.024,np.pi/130.)\n# print etaedges, phiedges\nmy_map=plt.cm.gray\n\n#Set up SlowJet jet finder, with anti-kT clustering and pion mass assumed for non-photons..\nslowJet = pythia8.SlowJet(p_value, R, pTjetMin, eta_max, nSel, 1)\njet_list=[]\njet_parton_list=[]\njet_mass=[]\n\n##----------------------------------------------------------------------------------------------------------------------\n# Begin event loop. Generate event. Skip if error. List first one.\nfor iEvent in range(0, Nevents):\n if not pythia.next(): continue\n\n # for i in range(0,pythia.event.size()):\n # print('i {},id {}'.format(i,pythia.event[i].id()))\n\n # Find number of all final charged particles and fill histogram.\n nCharged = 0\n for prt in pythia.event:\n if prt.isFinal() and prt.isCharged(): nCharged += 1\n mult.fill(nCharged)\n\n ##---------------------------------------------------------------------------------------------------------------\n # Analyze Slowet jet properties. 
List first few JETS.\n slowJet.analyze(pythia.event) #To analyze the event with Slowjet\n if (iEvent < Nevents):\n slowJet.list() #To list the jets\n \n ##---------------------------------------------------------------------------------------------------------------\n # dump jet info into 4-vectors: pT,eta,Phi,E\n jet_list.append([ ])# [] for i in range(slowJet.sizeJet())])\n jet_parton_list.append([ ])# [] for i in range(slowJet.sizeJet())])\n jet_mass.append([])\n \n \n\n for j in range(0,slowJet.sizeJet()):\n #vec=slowJet.p(j)\n jet_list[iEvent].append(slowJet.p(j))# Gives the jets 4-vector: px,py,pz,E\n# if (iEvent < Nevents):\n# print('Event {} ,contituents{}'.format(iEvent, slowJet.constituents(j)))\n\n jet_parton_list[iEvent].append([pythia.event[c].p() for c in slowJet.constituents(j)])# We read the jet constituents. Gives the 4-vector: px,py,pz,E. We can the access with pT(), eta(), Phi()\n jet_mass[iEvent].append(slowJet.m(j)) #We get a list of the mass of the leading jet in pT for each event\n #print('Jet 0 mass = {}'.format(jet_mass))\n \n##---------------------------------------------------------------------------------------------------------------------\n# End of event loop. Statistics. Histogram. Done.\npythia.stat();\n#print(mult)\n\nnjets_vec = map(len,jet_list)\n# print jet_parton_list\n# plotting event N\nbins = 100\n\nconst_pT, const_eta, const_phi, jetcircle = [], [], [], []\nsubjet_pT,subjet_eta,subjet_phi = [],[],[]\nsubjet2_pT=[]\njet_pT,jet_eta,jet_phi,jet_Mass = [],[],[],[]\n\n##----------------------------------------------------------------------------------------------------------------------\n# plotting constituents of all jets in all events\nfor NeventPlot in range(0,Nevents):\n #Njet = range(len(jet_list[NeventPlot]))\n const_pT.append([ ])\n const_eta.append([ ])\n const_phi.append([ ])\n jetcircle.append([ ])\n\n#Jets are sorted from Greater to lower pT. 
As we want to keep the hardest jet, we then just take jet_list[NeventPlot][0]\n if len(jet_list[NeventPlot])>0:\n ijet=0\n #Create jet constituents pT,eta,phi lists\n const_pT[NeventPlot].append([jet.pT() for jet in jet_parton_list[NeventPlot][ijet]])\n const_eta[NeventPlot].append([jet.eta() for jet in jet_parton_list[NeventPlot][ijet]])\n const_phi[NeventPlot].append([jet.phi() for jet in jet_parton_list[NeventPlot][ijet]])\n\n subjet_pT.append([jet.pT() for jet in jet_parton_list[NeventPlot][ijet]])\n #subjet2_pT.append(jet_parton_list[NeventPlot][ijet].pT())\n subjet_eta.append([jet.eta() for jet in jet_parton_list[NeventPlot][ijet]])\n subjet_phi.append([jet.phi() for jet in jet_parton_list[NeventPlot][ijet]])\n\n #jet pT,eta,phi\n jet_pT.append(jet_list[NeventPlot][ijet].pT())\n jet_eta.append(jet_list[NeventPlot][ijet].eta())\n jet_phi.append(jet_list[NeventPlot][ijet].phi())\n\n jet_Mass.append(jet_mass[NeventPlot][ijet])\n #jet_mass.append([])\n #jet_mass[NeventPlot].append(slowJet.m(ijet))\n #print('Jet 0 mass = {}'.format(jet_Mass))\n\n jetcircle[NeventPlot].append(plt.Circle((jet_list[NeventPlot][ijet].eta(), jet_list[NeventPlot][ijet].phi()), radius=R, color='r', fill=False))\n\n\n #for ijet in range(len(jet_list[NeventPlot])):\n # print('One event pT constituents for jet {}:--- {} '.format(ijet,[jet.pT() for jet in jet_parton_list[NeventPlot][ijet]]))\n # print('-------------'*10)\n\n \n all_jet_pT = np.concatenate(const_pT[NeventPlot]) #concatenate joins all vectors into 1\n #print('all_jet_pT {}'.format(all_jet_pT))\n #print('const_pT {}'.format(const_pT[NeventPlot]))\n #print('----------------'*15)\n all_jet_eta = np.concatenate(const_eta[NeventPlot])\n all_jet_phi = np.concatenate(const_phi[NeventPlot])\n\n\n# print('One event Jets constituents PT {}'.format(all_jet_pT))\n# print('-------------'*10)\n# print('One event Jets constituents eta {}'.format(all_jet_eta))\n# print('-------------'*10)\n# print('One event Jets constituents phi 
{}'.format(all_jet_phi))\n\n ##---------------------------------------------------------------------------------------------------------------------# -\n# #Create 2-D histogram\n# jets_h, xedges, yedges = np.histogram2d(all_jet_eta,all_jet_phi,bins=(etaedges,phiedges),weights=all_jet_pT)\n# \n# # print jet_h\n# # print xedges, yedges \n# fig = plt.gcf()\n# ax = fig.gca()\n# #We add circles center at each jet\n# for crc in jetcircle[NeventPlot]:\n# ax.add_artist(crc)\n# \n# #plt.pcolor(xedges, yedges,np.swapaxes(jets_h,0,1),cmap=my_map)\n# plt.pcolor(xedges, yedges,np.swapaxes(jets_h,0,1))\n# plt.xlim(-eta_max,eta_max)\n# plt.xlabel('eta')\n# plt.ylim(-np.pi,np.pi)\n# plt.ylabel('phi')\n# #ax = plt.add_subplot(111)\n# # plt.show()\n# plt.savefig(full_event_plots+'/hist_'+str(NeventPlot)+'.png')\n# fig.clf() #Clears the figure to start the next event in the loop. (If not I would have multiples circles from previous events in the plots\n\n##-----------------------------------------------------------------------------------------------------------------------------\n#We create jet list and constituents list with (pT,eta,phi) for all events to write in an output file\n\nsubjet_pTetaPhi=(subjet_pT,subjet_eta,subjet_phi) #format: ([[[pT_jet1_constituents, pT_jet2_constituents,...]]],[[[eta_jet1_constituents, eta_jet2_constituents,...]]],[[[phi_jet1_constituents, phi_jet2_constituents,...]]])\n#print('subjet_pTetaPhi: {}'.format(subjet_pTetaPhi))\n\njet_pTetaPhi=(jet_pT,jet_eta,jet_phi,jet_Mass) #format: ([[[pT_jet1_constituents, pT_jet2_constituents,...]]],[[[eta_jet1_constituents, eta_jet2_constituents,...]]],[[[phi_jet1_constituents, phi_jet2_constituents,...]]])\n#print('jet_pTetaPhi: {}'.format(jet_pTetaPhi[3]))\n\n##----------------------------------------------------------------------------------------------------------------------\n#We save the jet and constituents list as .npy output files\n\n#from tempfile import TemporaryFile\n#np_jets = 
TemporaryFile()\nnp.save(jets_filename+'.npy',jet_pTetaPhi)\nnp.save(subjets_filename+'.npy',subjet_pTetaPhi)\n\n#print('subjet pT{}'.format(subjet_pT))\n#print('subjet2 pT {}'.format(subjet2_pT))\n\n"
},
{
"alpha_fraction": 0.806755006313324,
"alphanum_fraction": 0.8123129606246948,
"avg_line_length": 178.92308044433594,
"blob_id": "fb550a791fba6bfa63f35f435a501296dc06e0b8",
"content_id": "e29df018ef918b4e8f884a1127acd8a10484a026",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2339,
"license_type": "no_license",
"max_line_length": 1427,
"num_lines": 13,
"path": "/README.md",
"repo_name": "SebastianMacaluso/Deep-learning_jet-images",
"src_encoding": "UTF-8",
"text": "# Deep-learning_jet-images\nDeep learning and computer vision techniques to identify jet substructure from proton-proton collisions at the Large Hadron Collider:\n\nIn searches for new physics, with the current 13 TeV center of mass energy of LHC Run II, it has become of great importance to classify jets from signatures with boosted W, Z and Higgs bosons, as well as top quarks with respect to the main QCD background. In this paper we use computer vision with deep learning to build a classifier for boosted top jets at LHC, that could also be straightforwardly extended to other types of jets. In particular, we implement a convolutional neural network (CNN) to identify jet substructure in signal and background events. Our CNN inputs are jet images that consist of five channels, where each of them represents a color. The first three colors are given by the transverse momentum (pT) of neutral particles from the Hadronic Calorimeter (HCAL) towers, the pT of charged particles from the tracking system and the charged particles multiplicity. The last two colors specify the muon multiplicity and b quark tagging information. We show that our top tagger performs significantly better than previous top tagging classifiers. For instance, we achieve a 60% top tagging efficiency with a (FILL IN) mistag rate for jets with pT in the 800-900 GeV range. We also analyze the contribution to the classification accuracy of the colors and a set of image preprocessing steps. Finally, we study the behavior of our method over two pT ranges and different event generators, i.e. PYTHIA and Herwig.\n\n\nDescription:\n\n1) Simulations-Event_generation: Code to generate simulations of proton-proton collisions at the Large Hadron Collider that will be used to create the input images for the convolutional neural network.\n\n2) image_preprocess.py loads the simulated jet and jet constituents, creates and preprocesses the images for the convnet. 
\n\n3) convnet_keras.py loads the (image arrays,true_values) tuples, creates the train, cross-validation and test sets and runs a convolutional neural network to classify signal vs background images. We then get the statistics and analyze the output. We plot histograms with the probability of signal and background to be tagged as signal, ROC curves and get the output of the intermediate layers and weights.\n"
},
{
"alpha_fraction": 0.4955786168575287,
"alphanum_fraction": 0.5174932479858398,
"avg_line_length": 32.7662353515625,
"blob_id": "01c3999abfd6c8781e1efd12b815b1f3526480f6",
"content_id": "129ab52d225f8e65dfb0cc33605ff8f3d8d177e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2601,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 77,
"path": "/Simulations-Event_generation/run_all.py",
"repo_name": "SebastianMacaluso/Deep-learning_jet-images",
"src_encoding": "UTF-8",
"text": "# Script to run 'run_MG5_pythia_jet_img.py', which runs MG5 and Pythia and returns tuples of 4-momentum vectors for jets and subjets\n\n#---------------------------------------------------------------------------------------------\n\nimport os\n\n#---------------------------------------------------------------------------------------------\n# GLOBAL VARIABLES\n\nsignal='tt'\nbackground='QCD_dijet'\n\n#Number of signal and background jobs to send to the hexfarm cluster\nNruns_signal=str(60)\nNruns_background=str(120)\n\nsleep_time=str(5)\n\nmypath ='/het/p1/macaluso/deep_learning/'\n# MG5_process_folders = ['tt','QCD_dijet']\n\n#------------------------------------------\n# MG5 input values\nMG5_EVENTS=str(10000)\nMG5_PTJ1MIN=str(325)#Min pT required in MG5 for the leading jet in pT\nMG5_PTJ1MAX=str(400)\nMG5_ETAMAX=str(2.5)\n\n#------------------------------------------\n# PYTHIA input values\njet_eta_max=str(2.5)\nR_jet=str(1.5)\npTjetMin=str(350) #Min pT for all jets\n\n\n#---------------------------------------------------------------------------------------------\n# Folders with executables .sh and .jdl files for job submission \n\npaths_signal = [mypath+'exec_'+signal+'_'+MG5_EVENTS+'_'+pTjetMin+'_'+R_jet]\nos.system(\"mkdir -p \"+paths_signal[0])\n\npaths_background = [mypath+'exec_'+background+'_'+MG5_EVENTS+'_'+pTjetMin+'_'+R_jet]\nos.system(\"mkdir -p \"+paths_background[0])\n\n#---------------------------------------------------------------------------------------------\n# Generate exc files\n\nos.chdir(mypath)\nos.system('python run_MG5_pythia_jet_img.py '+signal+' '+MG5_EVENTS+' '+MG5_PTJ1MIN+' '+MG5_PTJ1MAX+' '+MG5_ETAMAX+' '\n +jet_eta_max+' '+R_jet+' '+pTjetMin+' '+Nruns_signal)\n\nos.system('python run_MG5_pythia_jet_img.py '+background+' '+MG5_EVENTS+' '+MG5_PTJ1MIN+' '+MG5_PTJ1MAX+' '+MG5_ETAMAX+' '\n +jet_eta_max+' '+R_jet+' '+pTjetMin+' '+Nruns_background)\nprint('Generating exec f.sh and .jdl 
files')\n\n\n#---------------------------------------------------------------------------------------------\n# Submit jobs to the hexfarm cluster\n\nfor path in paths_signal:\n os.chdir(path)\n print('Sending signal jobs to the cluster')\n os.system(\"ls\")\n with open(\"do_all.src\",\"r\") as f: \n for line in f.readlines():\n os.system(line.strip())\n os.system('sleep '+sleep_time+'\\n')\n\n\nfor path in paths_background:\n os.chdir(path)\n print('Sending background jobs to the cluster')\n os.system(\"ls\")\n with open(\"do_all.src\",\"r\") as f: \n for line in f.readlines():\n os.system(line.strip())\n os.system('sleep '+sleep_time+'\\n')\n\n"
},
{
"alpha_fraction": 0.5730880498886108,
"alphanum_fraction": 0.5921522974967957,
"avg_line_length": 44.342281341552734,
"blob_id": "53bea78d2b78e1dfac7cbc65e47daf256eb2e6c0",
"content_id": "f4c6b355000da4e71da3f3f7b4ac84eb9c8184bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 40549,
"license_type": "no_license",
"max_line_length": 528,
"num_lines": 894,
"path": "/image_preprocess.py",
"repo_name": "SebastianMacaluso/Deep-learning_jet-images",
"src_encoding": "UTF-8",
"text": "##=============================================================================================\n##=============================================================================================\n# CREATE AND PREPROCESS JET IMAGES TO BE USED AS THE INPUT OF A CONVOLUTIONAL NEURAL NETWORK\n##=============================================================================================\n##=============================================================================================\n\n# This script loads .npy files as a list of numpy arrays ([[pT],[eta],[phi]]) and produces numpy arrays where each entry represents the intensity in transverse momentum (pT) for a pixel in a jet image. The script does the following:\n# 1) We load .npy files with jets and jet constituents (subjets) lists of [[pT],[eta],[phi]]. We generate this files by running Pythia with SlowJets over an LHE file generated in Madgraph 5. \n# 2) We center the image so that the total pT weighted centroid pixel is at (eta,phi)=(0,0).\n# 3) We shift the coordinates of each jet constituent so that the jet is centered at the origin in the new coordinates.\n# 4) We calculate the angle theta for the principal axis.\n# 5) We rotate the coordinate system so that the principal axis is the same direction (+ eta) for all jets.\n# 6) We scale the pixel intensities such that sum_{i,j} I_{i,j}=1\n# 7) We create the array of pT for the jet constituents, where each entry represents a pixel. 
We add all the jet constituents that fall within the same pixel.\n# 8) We reflect the image over the horizontal and vertical axes to ensure the 3rd maximum is on the upper right quarter-plane\n# 9) We standardize the images adding a factor \"bias\" for noise suppression: Divide each pixel by the standard deviation of that pixel value among all the images in the training data set \n# 11) We output a tuple with the numpy arrays and true value of the images that we will use as input for our neural network\n# 12) We plot all the images.\n# 13) We add the images to get the average jet image for all the events.\n# 14) We plot the averaged image.\n# Last updated: October 10, 2017. Sebastian Macaluso\n# Written for Python 3.6.0\n\n\n#To run this script:\n# python image_preprocess2.py signal_jets_subjets_directory background_jets_subjets_directory\n\n#(To get the images from 09/13/2017)\n# python image_preprocess_avgimg_presentation.py results_tt_200k_ptheavy800-900_pflow2 results_qcd_400k_ptj800-900_pflow2\n\n\n##---------------------------------------------------------------------------------------------\n#RESOLUTION of ECAL/HCAL ATLAS/CMS\n\n# CMS ECal DeltaR=0.0175 and HCal DeltaR=0.0875 (https://link.springer.com/article/10.1007/s12043-007-0229-8 and https://cds.cern.ch/record/357153/files/CMS_HCAL_TDR.pdf ) \n# CMS: For the endcap region, the total number of depths is not as tightly constrained as in the barrel due to the decreased φ-segmentation from 5 degrees (0.087 rad) to 10 degrees for 1.74 < |η| < 3.0. (http://inspirehep.net/record/1193237/files/CMS-TDR-010.pdf)\n#The endcap hadron calorimeter (HE) covers a rapidity region between 1.3 and 3.0 with good hermiticity, good\n# transverse granularity, moderate energy resolution and a sufficient depth. A lateral granularity ( x ) was chosen\n# 0.087 x 0.087. The hadron calorimeter granularity must match the EM granularity to simplify the trigger. 
(https://cds.cern.ch/record/357153/files/CMS_HCAL_TDR.pdf )\n\n# ATLAS ECal DeltaR=0.025 and HCal DeltaR=0.1 (https://arxiv.org/pdf/hep-ph/9703204.pdf page 11)\n\n##=============================================================================================\n##=============================================================================================\n\n##=============================================================================================\n############ LOAD LIBRARIES\n##=============================================================================================\n\nimport pickle\nimport gzip\nimport sys\nimport os\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n#import matplotlib as mpl\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\n\nnp.set_printoptions(threshold=np.nan)\n\nimport scipy \n\n# from sklearn.preprocessing import scale\nfrom sklearn import preprocessing\n\nimport h5py\n\nimport time\nstart_time = time.time()\n\n##=============================================================================================\n############ GLOBAL VARIABLES\n##=============================================================================================\n\n# local_dir='/Users/sebastian/Documents/Deep-Learning/jet_images/'\nlocal_dir=''\n\n# In_jets=sys.argv[1] #Input file for jets\n# In_subjets=sys.argv[2] #Input file for subjets\ndir_jets_subjets_sig=sys.argv[1] #Input dir with files for jets and subjets\ndir_jets_subjets_bg=sys.argv[2] #Input dir with files for jets and subjets of the set that I will use to get the standard deviation\n# myN_jets=1000000000000000000000000000000000000000\nmyN_jets=5000\nif(len(sys.argv)==4):\n myN_jets=int(sys.argv[3])\n\n\nname_sig=dir_jets_subjets_sig.split('_')[1]\nname_bg=dir_jets_subjets_bg.split('_')[1]\n\nos.system('mkdir -p jet_array_1')\nos.system('mkdir -p plots')\nImages_dir=local_dir+'plots/' #Output dir to save the image plots\nimage_array_dir=local_dir+'jet_array_1/' #Output dir 
to save the image arrays\n\n# bias kurtosis\n# bias=5e-04\n#-----\nbias=2e-02\n# bias=0.0\n\n\n# bias=2e-02 #Value added to the standard deviation of each pixel over the whole training+test set before dividing the pixel value by the (standard deviation+bias) Comment: I was using 1e-03, but when looking at 1 jet images, this noise suppression value was so small that dividing by the standard deviation would totally change the location of the pixels with maximum intensity. So the best balance I found so far that puts pixels on a more equal level while keeping the location of the pixels with greatest intensity is 2e-02 \n\n# npoints = 6 #npoint=(Number of pixels+1) of the image\nnpoints = 38 #npoint=(Number of pixels+1) of the image\nDR=1.6 #Sets the size of the image as (2xDR,2xDR)\ntreshold=0.95 #We ask some treshold for the total pT fraction to keep the image when some constituents fall outside of the range for (eta,phi)\nptjmin=800 #Cut on the minimum pT of the jet\nptjmax=900 #Cut on the maximum pT of the jet\njetMass_min=130 #Cut on the minimum mass for the jet \njetMass_max=210 #Cut on the maximum mass of the jet\n# N_analysis=79 #Number of input files I want to include in the analysis\n# N_analysis_sig=60 #Number of input files I want to include in the analysis\n# N_analysis_bg=90 #Number of input files I want to include in the analysis\n\n# N_analysis=8 #Number of input files I want to include in the analysis (For ~19000 tt images)\n# N_analysis=5 #Number of input files I want to include in the analysis (For ~19000 QCD images)\n#myN_jets=100000\n\nsample_name='pflow'\n\nsignal='tt'\nbackground='QCD'\nN_pixels=np.power(npoints-1,2)\n\n\n# std_label='own_std'\n# std_label='avg_std'\n# std_label='sig_std'\nstd_label='bg_std'\n# std_label='no_std'\n# std_label='stack_sig_bg_std'\n\n# myMethod='std'\n# myMethod='std'\nmyMethod='n_moment'\n\n\n##=============================================================================================\n############ FUNCTIONS TO 
LOAD, CREATE AND PREPROCESS THE JET IMAGES\n##=============================================================================================\n\n##---------------------------------------------------------------------------------------------\n# 1) We load .npy files with jets and jet constituents (subjets) lists of [[pT],[eta],[phi]].\ndef loadfiles(jet_subjet_folder):\n print('Loading files for jet and subjets')\n print('Jet array format([[pTj1,pTj2,...],[etaj1,etaj2,...],[phij1,phij2,...],[massj1,massj2,...]])')\n print('Subjet array format ([[[pTsubj1],[pTsubj2],...],[[etasubj1],[etasubj2],...],[[phisubj1],[phisubj2],...]])')\n print('-----------'*10)\n \n# jetlist = [filename for filename in np.sort(os.listdir(jet_subjet_folder)) if filename.startswith('jets_')]\n# print('Jet files loaded = \\n {}'.format(jetlist[0:N_analysis]))\n# subjetlist = [filename for filename in np.sort(os.listdir(jet_subjet_folder)) if filename.startswith('subjets_')]\n# print('Subjet files loaded = \\n {}'.format(subjetlist[0:N_analysis]))\n\n jetlist = [filename for filename in np.sort(os.listdir(jet_subjet_folder)) if ('jets' in filename and filename.endswith('.npy') and 'subjets' not in filename)]\n N_analysis=len(jetlist)\n print('N_analysis =',N_analysis)\n print('Jet files loaded = \\n {}'.format(jetlist[0:N_analysis]))\n subjetlist = [filename for filename in np.sort(os.listdir(jet_subjet_folder)) if ('subjets' in filename and filename.endswith('.npy'))]\n print('Subjet files loaded = \\n {}'.format(subjetlist[0:N_analysis]))\n \n print('-----------'*10)\n print('Number of files loaded={}'.format(N_analysis))\n# print('Total number of files that could be loaded={}'.format(len(jetlist)))\n print('-----------'*10)\n# \n# print('len(jetlist)={}'.format(len(jetlist)))\n# print('len(subjetlist)={}'.format(len(subjetlist)))\n \n Jets=[] #List of jet files we are going to load\n for ijet in range(N_analysis):\n# Jets.append([])\n 
Jets.append(np.load(jet_subjet_folder+'/'+jetlist[ijet]))#We load the .npy files\n \n Alljets=[[],[],[],[]] # Format: [[pT],[eta],[phi],[mass]]\n# Alljets=[[],[],[]] # Format: [[pT],[eta],[phi]]\n# Each file has a tuple of ([[pTj1,pTj2,...],[etaj1,etaj2,...],[phij1,phij2,...],[massj1,massj2,...]) where in each element we have the data of many jets\n for file in range(N_analysis):\n# Alljets.append([])\n for tuple_element in range(len(Jets[file])): #The tuple_element is each element in ([pT],[eta],[phi],[mass])\n# row.append([])\n for ijet in range(len(Jets[file][tuple_element])):\n if ptjmin<Jets[file][0][ijet]<ptjmax and jetMass_min<Jets[file][3][ijet]<jetMass_max:\n Alljets[tuple_element].append(Jets[file][tuple_element][ijet])\n Alljets=np.array(Alljets)\n\n# print('Jets=\\n {}'.format(Jets)) \n# print('Alljets (new way)=\\n {}'.format(Alljets))\n\n Subjets=[] #List of subjet files we are going to load\n for isubjet in range(N_analysis):\n# Subjets.append([])\n Subjets.append(np.load(jet_subjet_folder+'/'+subjetlist[isubjet]))#We load the .npy files\n# print('Dimension on subjets={}'.format(Subjets[isubjet].size))\n# print('lenght subjet',len(Subjets[isubjet]))\n# \n# print('Total lenght subjet',len(Subjets[isubjet]))\n# print('lenght subjet[0]=\\n',len(Subjets[isubjet][0]))\n \n Allsubjets=[[],[],[]]\n for file in range(N_analysis):\n# Alljets.append([])\n for tuple_element in range(len(Subjets[file])):\n# row.append([])\n for ijet in range(len(Subjets[file][tuple_element])):\n# Allsubjets[tuple_element].append([])\n# for isubjet in range(len(Subjets[file][tuple_element][ijet])):\n if ptjmin<Jets[file][0][ijet]<ptjmax and jetMass_min<Jets[file][3][ijet]<jetMass_max:\n Allsubjets[tuple_element].append(Subjets[file][tuple_element][ijet]) \n \n Allsubjets=np.array(Allsubjets) \n# print('Allsubjets (new way)=\\n {}'.format(Allsubjets)) \n# print('-----------'*10)\n# print('-----------'*10)\n\n Njets=Alljets[0].size\n print('Njets = {}'.format(Njets)) \n 
print('Nsubjets = {}'.format(Allsubjets[0].size)) \n print('-----------'*10)\n \n return Alljets, Allsubjets, Njets\n\n\n##---------------------------------------------------------------------------------------------\n#2) We find the minimum angular distance (in phi) between jet constituents\ndef deltaphi(phi1,phi2):\n deltaphilist=[phi1-phi2,phi1-phi2+np.pi*2.,phi1-phi2-np.pi*2.]\n sortind=np.argsort(np.abs(deltaphilist))\n return deltaphilist[sortind[0]]\n\n\n##---------------------------------------------------------------------------------------------\n#3) We want to center the image so that the total pT weighted centroid pixel is at (eta,phi)=(0,0). So we calculate eta_center,phi_center\ndef center(Subjets):\n print('Calculating the image center for the total pT weighted centroid pixel is at (eta,phi)=(0,0) ...')\n print('-----------'*10)\n #print('subjet type {}'.format(type(subjets[0][0])))\n\n Njets=len(Subjets[0])\n pTj=[]\n for ijet in range(0,Njets): \n pTj.append(np.sum(Subjets[0][ijet]))\n #print('Sum of pTj for subjets = {}'.format(pTj))\n #print('pTj ={}'.format(jets[0][0])) #This is different for Sum of pTj for subjets, as for the jets, we first sum the 4-momentum vectors of the subjets and then get the pT\n #print('subjet 1 size {}'.format(subjets[1][0]))\n\n eta_c=[]\n phi_c=[]\n weigh_eta=[]\n weigh_phi=[]\n for ijet in range(0,Njets):\n weigh_eta.append([ ])\n weigh_phi.append([ ])\n for isubjet in range(0,len(Subjets[0][ijet])):\n weigh_eta[ijet].append(Subjets[0][ijet][isubjet]*Subjets[1][ijet][isubjet]/pTj[ijet]) #We multiply pT by eta of each subjet\n # print('weighted eta ={}'.format(weigh_eta)) \n weigh_phi[ijet].append(Subjets[0][ijet][isubjet]*deltaphi(Subjets[2][ijet][isubjet],Subjets[2][ijet][0])/pTj[ijet]) #We multiply pT by phi of each subjet\n eta_c.append(np.sum(weigh_eta[ijet])) #Centroid value for eta\n phi_c.append(np.sum(weigh_phi[ijet])+Subjets[2][ijet][0]) #Centroid value for phi\n #print('weighted eta 
={}'.format(weigh_eta))\n #print('Position of pT weighted centroid pixel in eta for [jet1,jet2,...] ={}'.format(eta_c))\n #print('Position of pT weighted centroid pixel in phi for [jet1,jet2,...] ={}'.format(phi_c))\n #print('-----------'*10)\n return pTj, eta_c, phi_c\n\n\n##---------------------------------------------------------------------------------------------\n#4) We shift the coordinates of each particle so that the jet is centered at the origin in (eta,phi) in the new coordinates\ndef shift(Subjets,Eta_c,Phi_c):\n print('Shifting the coordinates of each particle so that the jet is centered at the origin in (eta,phi) in the new coordinates ...')\n print('-----------'*10)\n\n Njets=len(Subjets[1])\n for ijet in range(0,Njets):\n if ijet == 0:\n print(\"center\",Eta_c[ijet],Phi_c[ijet])\n Subjets[1][ijet]=(Subjets[1][ijet]-Eta_c[ijet])\n Subjets[2][ijet]=(Subjets[2][ijet]-Phi_c[ijet])\n Subjets[2][ijet]=np.unwrap(Subjets[2][ijet])#We fix the angle phi to be between (-Pi,Pi]\n #print('Shifted eta = {}'.format(Subjets[1]))\n #print('Shifted phi = {}'.format(Subjets[2]))\n #print('-----------'*10)\n return Subjets\n \n \n##---------------------------------------------------------------------------------------------\n#5) We calculate the angle theta of the principal axis\ndef principal_axis(Subjets):\n print('Getting DeltaR for each subjet in the shifted coordinates and the angle theta of the principal axis ...')\n print('-----------'*10) \n tan_theta=[]#List of the tan(theta) angle to rotate to the principal axis in each jet image\n Njets=len(Subjets[1])\n for ijet in range(0,Njets):\n M11=np.sum(Subjets[0][ijet]*Subjets[1][ijet]*Subjets[2][ijet])\n M20=np.sum(Subjets[0][ijet]*Subjets[1][ijet]*Subjets[1][ijet])\n M02=np.sum(Subjets[0][ijet]*Subjets[2][ijet]*Subjets[2][ijet])\n tan_theta_use=2*M11/(M20-M02+np.sqrt(4*M11*M11+(M20-M02)*(M20-M02)))\n tan_theta.append(tan_theta_use)\n\n if ijet == 0:\n print(\"principal axis\",tan_theta)\n# print('tan(theta)= 
{}'.format(tan_theta))\n# print('-----------'*10)\n return tan_theta\n\n\n##---------------------------------------------------------------------------------------------\n#6) We rotate the coordinate system so that the principal axis is the same direction (+ eta) for all jets\ndef rotate(Subjets,tan_theta):\n print('Rotating the coordinate system so that the principal axis is the same direction (+ eta) for all jets ...')\n print('-----------'*10)\n# print(Subjets[2][0])\n# print('Shifted eta for jet 1= {}'.format(Subjets[1][0]))\n# print('Shifted phi for jet 1 = {}'.format(Subjets[2][0]))\n# print('-----------'*10)\n rot_subjet=[[],[],[]]\n Njets=len(Subjets[1])\n for ijet in range(0,Njets):\n rot_subjet[0].append(Subjets[0][ijet]) \n rot_subjet[1].append(Subjets[1][ijet]*np.cos(np.arctan(tan_theta[ijet]))+Subjets[2][ijet]*np.sin(np.arctan(tan_theta[ijet])))\n rot_subjet[2].append(-Subjets[1][ijet]*np.sin(np.arctan(tan_theta[ijet]))+Subjets[2][ijet]*np.cos(np.arctan(tan_theta[ijet])))\n #print('Rotated phi for jet 1 before fixing -pi<theta<pi = {}'.format(Subjets[2][0])) \n rot_subjet[2][ijet]=np.unwrap(rot_subjet[2][ijet]) #We fix the angle phi to be between (-Pi,Pi]\n# print('Subjets pT (before rotation) = {}'.format(Subjets[0]))\n# print('-----------'*10)\n# print('Subjets pT (after rotation) = {}'.format(rot_subjet[0]))\n# print('-----------'*10)\n# print('eta = {}'.format(Subjets[1]))\n# print('-----------'*10)\n# print('Rotated eta = {}'.format(rot_subjet[1]))\n# print('-----------'*10)\n# print('Rotated phi = {}'.format(Subjets[2]))\n# print('-----------'*10)\n# print('Rotated phi = {}'.format(rot_subjet[2]))\n# print('-----------'*10)\n# print('-----------'*10)\n return rot_subjet\n \n\n##---------------------------------------------------------------------------------------------\n#7) We scale the pixel intensities such that sum_{i,j} I_{i,j}=1\ndef normalize(Subjets,pTj):\n print('Scaling the pixel intensities such that sum_{i,j} I_{i,j}=1 ...')\n 
print('-----------'*10)\n Njets=len(Subjets[0])\n# print('pT jet 2= {}'.format(Subjets[0][1])) \n for ijet in range(0,Njets):\n Subjets[0][ijet]=Subjets[0][ijet]/pTj[ijet]\n# print('Normalizes pT jet 2= {}'.format(Subjets[0][1])) \n# print('Sum of normalized pT for jet 2 = {}'.format(np.sum(Subjets[0][1])))\n print('-----------'*10)\n return Subjets\n\n\n\n##---------------------------------------------------------------------------------------------\n#8) We create a coarse grid for the array of pT for the jet constituents, where each entry represents a pixel. We add all the jet constituents that fall within the same pixel \ndef create_image(Subjets):\n \n print('Generating images of the jet pT ...')\n print('-----------'*10)\n etamin, etamax = -DR, DR # Eta range for the image\n phimin, phimax = -DR, DR # Phi range for the image\n eta_i = np.linspace(etamin, etamax, npoints) #create an array with npoints elements between min and max\n phi_i = np.linspace(phimin, phimax, npoints)\n image=[]\n Njets=len(Subjets[0])\n print(Njets)\n for ijet in range(0,Njets):\n \n \n grid=np.zeros((npoints-1,npoints-1)) #create an array of zeros for the image \n# print('Grid= {}'.format(grid))\n# print('eta_i= {}'.format(eta_i))\n \n eta_idx = np.searchsorted(eta_i,Subjets[1][ijet]) # np.searchsorted finds the index where each value in my data (Subjets[1] for the eta values) would fit into the sorted array eta_i (the x value of the grid).\n phi_idx = np.searchsorted(phi_i,Subjets[2][ijet])# np.searchsorted finds the index where each value in my data (Subjets[2] for the phi values) would fit into the sorted array phi_i (the y value of the grid).\n \n# print('Index eta_idx for jet {} where each eta value of the jet constituents in the data fits into the sorted array eta_i = \\n {}'.format(ijet,eta_idx))\n# print('Index phi_idx for jet {} where each phi value of the jet constituents in the data fits into the sorted array phi_i = \\n {}'.format(ijet,phi_idx))\n# 
print('-----------'*10)\n \n# print('Grid for jet {} before adding the jet constituents pT \\n {}'.format(ijet,grid))\n for pos in range(0,len(eta_idx)):\n if eta_idx[pos]!=0 and phi_idx[pos]!=0 and eta_idx[pos]<npoints and phi_idx[pos]<npoints: #If any of these conditions are not true, then that jet constituent is not included in the image. \n grid[eta_idx[pos]-1,phi_idx[pos]-1]=grid[eta_idx[pos]-1,phi_idx[pos]-1]+Subjets[0][ijet][pos] #We add each subjet pT value to the right entry in the grid to create the image. As the values of (eta,phi) should be within the interval (eta_i,phi_i) of the image, the minimum eta_idx,phi_idx=(1,1) to be within the image. However, this value should be added to the pixel (0,0) in the grid. That's why we subtract 1. \n# print('Grid for jet {} after adding the jet constituents pT \\n {}'.format(ijet,grid)) \n# print('-----------'*10)\n \n sum=np.sum(grid)\n# print('Sum of all elements of the grid for jet {} = {} '.format(ijet,sum))\n# print('-----------'*10)\n# print('-----------'*10)\n \n #We ask some treshold for the total pT fraction to keep the image when some constituents fall outside of the range for (eta,phi)\n \n if sum>=treshold:\n# and ptjmin<Jets[0][ijet]<ptjmax and jetMass_min<Jets[3][ijet]<jetMass_max:\n# print('Jet Mass={}'.format(Jets[3][ijet]))\n image.append(grid)\n if len(grid)==0:\n print(ijet)\n if ijet%10000==0:\n print('Already generated jet images for {} jets'.format(ijet))\n# print('Array of images before deleting empty lists = \\n {}'.format(image)) \n# print('-----------'*10)\n# image=[array for array in image if array!=[]] #We delete the empty arrays that come from images that don't satisfy the treshold\n \n# print('Array of images = \\n {}'.format(image[0:2])) \n# print('-----------'*10)\n print('Number of images= {}'.format(len(image)))\n print('-----------'*10)\n N_images=len(image)\n \n Number_jets=N_images #np.min([N_images, myN_jets])\n final_image=image\n# final_image=image[0:Number_jets]\n 
print('N_images = ',N_images)\n print('Final images = ',len(final_image))\n \n return final_image, Number_jets\n\n\n\n##---------------------------------------------------------------------------------------------\n#9) We subtract the mean mu_{i,j} of each image, transforming each pixel intensity as I_{i,j}=I_{i,j}-mu_{i,j}\ndef zero_center(Image,ref_image):\n print('Subtracting the mean mu_{i,j} of each image, transforming each pixel intensity as I_{i,j}=I_{i,j}-mu_{i,j} ...')\n print('-----------'*10)\n mu=[]\n Im_sum=[]\n N_pixels=np.power(npoints-1,2)\n# for ijet in range(0,len(Image)):\n# mu.append(np.sum(Image[ijet])/N_pixels)\n# Im_sum.append(np.sum(Image[ijet]))\n# print('Mean values of images= {}'.format(mu))\n# print('Sum of image pT (This should ideally be 1 as the images are normalized except when some jet constituents fall outside of the image range )= {}'.format(Im_sum))\n zeroImage=[]\n for ijet in range(0,len(Image)):# As some jet images were discarded because the total momentum of the constituents within the range of the image was below the treshold, we use len(image) instead of Njets\n# if ijet==10:\n# for i in range(37):\n# for j in range(37):\n# print(\"zero_center image\")\n# print(i,j,Image[ijet][i,j])\n# print(\"ref image\")\n# print(i,j,ref_image[i,j])\n# print(\"diff\")\n# print((Image[ijet]-ref_image)[i,j])\n\n\n zeroImage.append(Image[ijet]-ref_image)\n# print(ijet,mu[ijet])\n\n print('Grid after subtracting the mean (1st 2 images)= \\n {}'.format(Image[0:2])) \n print('-----------'*10)\n# print('Mean of first images',mu[0:6])\n return zeroImage \n\n\n##---------------------------------------------------------------------------------------------\n#10)Reflect the image with respect to the vertical axis to ensure the 3rd maximum is on the right half-plane\ndef flip(Image,Nimages): \n \n count=0\n print('Flipping the image with respect to the vertical axis to ensure the 3rd maximum is on the right half-plane ...')\n print('-----------'*10)\n 
print('Image shape = ', np.shape(Image[0]))\n# print('Number of rows = ',np.shape(Image[0][0])[0])\n half_img=np.int((npoints-1)/2)\n flip_image=[]\n for i_image in range(len(Image)):\n left_img=[] \n right_img=[]\n for i_row in range(np.shape(Image[i_image])[0]):\n left_img.append(Image[i_image][i_row][0:half_img])\n right_img.append(Image[i_image][i_row][-half_img:])\n# print('-half_img = ',-half_img)\n# print('Left half of image (we suppose the number of pixels is odd and we do not include the central pixel)\\n',np.array(left_img))\n# print('Right half of image (we suppose the number of pixels is odd and we do not include the central pixel) \\n',np.array(right_img))\n \n left_sum=np.sum(left_img)\n right_sum=np.sum(right_img)\n# print('Left sum = ',left_sum)\n# print('Right sum = ',right_sum)\n \n if left_sum>right_sum:\n flip_image.append(np.fliplr(Image[i_image])) \n else:\n flip_image.append(Image[i_image])\n# print('Image not flipped')\n# print('Left sum = ',left_sum)\n# print('Right sum = ',right_sum)\n count+=1\n# print('Array of images before flipping =\\n {}'.format(Image[i_image])) \n# print('Array of images after flipping =\\n {}'.format(flip_image[i_image])) \n print('Fraction of images flipped = ',(Nimages-count)/Nimages)\n print('-----------'*10)\n print('-----------'*10)\n return flip_image \n \n \n##---------------------------------------------------------------------------------------------\n#11)Reflect the image with respect to the horizontal axis to ensure the 3rd maximum is on the top half-plane\ndef hor_flip(Image,Nimages): \n \n count=0\n print('Flipping the image with respect to the horizontal axis to ensure the 3rd maximum is on the top half-plane ...')\n print('-----------'*10)\n print('Image shape = ', np.shape(Image[0]))\n print('Number of columns = ',np.shape(Image[0])[1])\n half_img=np.int((npoints-1)/2)\n hor_flip_image=[]\n for i_image in range(len(Image)):\n top_img=[] \n bottom_img=[]\n# print('image',Image[i_image])\n# 
print('image',Image[i_image][0])\n for i_row in range(half_img):\n# for i_col in range(np.shape(Image[i_image][0])[1]):\n top_img.append(Image[i_image][i_row])\n bottom_img.append(Image[i_image][-i_row-1])\n# print('-i_row-1 = ',-i_row-1)\n# print('Top half of image (we suppose the number of pixels is odd and we do not include the central pixel) \\n',np.array(top_img))\n# print('Bottom half of image (we suppose the number of pixels is odd and we do not include the central pixel) \\n',np.array(bottom_img))\n top_sum=np.sum(top_img)\n bottom_sum=np.sum(bottom_img)\n# print('Top sum = ',top_sum)\n# print('Bottom sum = ',bottom_sum)\n# \n if bottom_sum>top_sum:\n hor_flip_image.append(np.flip(Image[i_image],axis=0)) \n else:\n hor_flip_image.append(Image[i_image])\n# print('Image not flipped')\n\n count+=1\n# print('Array of images before flipping =\\n {}'.format(Image[i_image])) \n# print('Array of images after flipping =\\n {}'.format(hor_flip_image[i_image])) \n print('Fraction of images horizontally flipped = ',(Nimages-count)/Nimages)\n print('-----------'*10)\n print('-----------'*10)\n return hor_flip_image \n \n##---------------------------------------------------------------------------------------------\n#12) We output a tuple with the numpy arrays and true value of the images that we will use as input for our neural network\ndef output_image_array_data_true_value(Image,type,name):\n Nimages=len(Image)\n true_value=[]\n for iImage in range(0,len(Image)):\n if name==signal:\n true_value.append(np.array([1]))\n elif name==background:\n true_value.append(np.array([0]))\n else:\n print('The sample is neither signal nor background. 
Update the signal/bacground names accordingly.')\n \n# print('True value where (1,0) means signal and (0,1) background =\\n{}'.format(true_value))\n \n output=list(zip(Image,true_value))\n# print('Input array for neural network, with format (Input array,true value)= \\n {}'.format(output))\n \n print(\"Saving data in .npy format ...\")\n array_name=str(name)+'_'+str(Nimages)+'_'+str(npoints-1)+'_'+type+'_'+sample_name\n \n# f = gzip.open(image_array_dir+array_name+'.pkl.gz', 'w')#pkl.gz format\n# pickle.dump(output, f)\n# f.close()\n \n# .npy format\n np.save(image_array_dir+array_name+'_.npy',Image)\n print('List of jet image arrays filename = {}'.format(image_array_dir+array_name+'_.npy'))\n print('-----------'*10)\n# print('Array {}={}'.format(array_name,Image))\n\n\n \n##---------------------------------------------------------------------------------------------\n#13) We plot all the images\ndef plot_all_images(Image, type):\n \n# for ijet in range(0,len(Image)):\n for ijet in range(1200,1230):\n imgplot = plt.imshow(Image[ijet], 'gnuplot', extent=[-DR, DR,-DR, DR])# , origin='upper', interpolation='none', vmin=0, vmax=0.5)\n# imgplot = plt.imshow(Image[0])\n# plt.show()\n plt.xlabel('$\\eta^{\\prime\\prime}$')\n plt.ylabel('$\\phi^{\\prime\\prime}$')\n #plt.show()\n fig = plt.gcf()\n plt.savefig(Images_dir+'1jet_images/Im_'+str(name)+'_'+str(npoints-1)+'_'+str(ijet)+'_'+type+'.png')\n# print(len(Image))\n# print(type(Image[0]))\n\n\n##---------------------------------------------------------------------------------------------\n#14) We add the images to get the average jet image for all the events\ndef add_images(Image):\n print('Adding the images to get the average jet image for all the events ...')\n print('-----------'*10)\n N_images=len(Image)\n# print('Number of images= {}'.format(N_images))\n# print('-----------'*10)\n avg_im=np.zeros((npoints-1,npoints-1)) #create an array of zeros for the image\n for ijet in range(0,len(Image)):\n 
avg_im=avg_im+Image[ijet]\n #avg_im2=np.sum(Image[ijet])\n# print('Average image = \\n {}'.format(avg_im))\n print('-----------'*10)\n# print('Average image 2 = \\n {}'.format(avg_im2))\n #We normalize the image\n Total_int=np.absolute(np.sum(avg_im))\n print('Total intensity of average image = \\n {}'.format(Total_int))\n print('-----------'*10)\n# norm_im=avg_im/Total_int\n norm_im=avg_im/N_images\n# print('Normalized average image (by number of images) = \\n {}'.format(norm_im))\n# print('Normalized average image = \\n {}'.format(norm_im))\n print('-----------'*10)\n norm_int=np.sum(norm_im)\n print('Total intensity of average image after normalizing (should be 1) = \\n {}'.format(norm_int))\n return norm_im\n\n \n \n##---------------------------------------------------------------------------------------------\n#15) We plot the averaged image\ndef plot_avg_image(Image, type,name,Nimages):\n print('Plotting the averaged image ...')\n print('-----------'*10)\n# imgplot = plt.imshow(Image[0], 'viridis')# , origin='upper', interpolation='none', vmin=0, vmax=0.5) \n imgplot = plt.imshow(Image, 'gnuplot', extent=[-DR, DR,-DR, DR])# , origin='upper', interpolation='none', vmin=0, vmax=0.5)\n# imgplot = plt.imshow(Image[0])\n# plt.show()\n plt.xlabel('$\\eta^{\\prime\\prime}$')\n plt.ylabel('$\\phi^{\\prime\\prime}$')\n fig = plt.gcf()\n image_name=str(name)+'_avg_im_'+str(Nimages)+'_'+str(npoints-1)+'_'+type+'_'+sample_name+'.png'\n plt.savefig(Images_dir+image_name)\n print('Average image filename = {}'.format(Images_dir+image_name))\n# print(len(Image))\n# print(type(Image[0]))\n\n##=============================================================================================\n############ MAIN FUNCTIONS\n##=============================================================================================\n\n##---------------------------------------------------------------------------------------------\n# A) Plots images\ndef plot_my_image(images,std_name,type):\n\n 
Nimages=len(images)\n\n average_im =add_images(images)\n plot_avg_image(average_im,type,std_name,Nimages) \n # plot_avg_image(average_im,str(std_label)+'_bias'+str(bias)+'_vflip_hflip_rot'+'_'+str(ptjmin)+'_'+str(ptjmax)+'_'+myMethod,std_name,Nimages)\n\n\n##---------------------------------------------------------------------------------------------\n# B) PREPROCESS IMAGES (center, shift, principal_axis, rotate, normalize, vertical flip, horizontal flip)\ndef preprocess(subjets,std_name):\n \n\n pTj, eta_c, phi_c=center(subjets) \n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n shift_subjets=shift(subjets,eta_c,phi_c)\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n #print(shift_subjets)\n tan_theta=principal_axis(shift_subjets) \n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n rot_subjets=rotate(shift_subjets,tan_theta)\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n norm_subjets=normalize(rot_subjets,pTj)\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n \n print('Generating raw images.. 
.')\n raw_image, Nimages=create_image(norm_subjets) \n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n ver_flipped_img=flip(raw_image,Nimages) \n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n hor_flipped_img=hor_flip(ver_flipped_img,Nimages) \n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n \n# plot_my_image(raw_image,std_name,'_rot'+'_'+str(ptjmin)+'_'+str(ptjmax))\n\n # plot_my_image(hor_flipped_img,std_name,'_vflip_hflip_rot'+'_'+str(ptjmin)+'_'+str(ptjmax))\n \n# hor_flipped_img=raw_image\n return hor_flipped_img\n \n \n \n##---------------------------------------------------------------------------------------------\n# C) GET STANDARD DEVIATION OF A SET OF IMAGES\ndef get_std(Image,method): \n\n print('-----------'*10)\n print('-----------'*10)\n print('Calculating standard deviation with a noise suppression factor ...')\n print('-----------'*10)\n Image_row=[]\n# N_pixels=np.power(npoints-1,2)\n print('Number of pixels of the image =',N_pixels)\n print('-----------'*10)\n# Image[0].reshape((N_pixels))\n for i_image in range(len(Image)):\n# Image_row.append([])\n# print('i_image ={}'.format(i_image))\n Image_row.append(Image[i_image].reshape((N_pixels)))\n# print('Image arrays as rows (1st 2 images)=\\n {}'.format(Image_row[0:2]))\n print('-----------'*10)\n Image_row=np.array(Image_row,dtype=np.float64)\n Image_row.reshape((len(Image),N_pixels))\n# print('All image arrays as rows of samples and columns of features (pixels) (for the 1st 2 images) =\\n {}'.format(Image_row[0:2]))\n print('-----------'*10)\n print('-----------'*10)\n# standard_img=preprocessing.scale(Image_row)\n\n if method=='n_moment':\n# kurtosis=scipy.stats.kurtosis(Image_row,axis=0, fisher=False)\n n_moment=scipy.stats.moment(Image_row, moment=4, axis=0)\n standard_dev=np.std(Image_row,axis=0,ddof=1, dtype=np.float64)\n print('N moment =\\n {}'.format(n_moment[0:40]))\n print('-----------'*10)\n 
final_bias=np.power(n_moment,1/4)+bias\n print('////////'*10)\n print('Max final bias = \\n',np.sort(final_bias, axis=None)[::-1][0:20])\n print('-----------'*10)\n# final_bias=n_moment/np.power(standard_dev,2)+bias\n# print('N moment/std with bias for =\\n {}'.format(final_bias[0:40]))\n# standard_img=Image_row/final_bias\n# print('-----------'*10)\n# print('N moment images with bias (1st 2 image arrays as rows)=\\n {}'.format(standard_img[0:2]))\n \n elif method=='std': \n standard_dev=np.std(Image_row,axis=0,ddof=1, dtype=np.float64)\n print('Standard deviation =\\n {}'.format(standard_dev))\n print('-----------'*10)\n final_bias=standard_dev+bias\n \n final_bias=final_bias.reshape((npoints-1,npoints-1))\n# print('Standard deviation with bias for =\\n {}'.format(final_bias))\n print('-----------'*10)\n \n return final_bias\n\n\n##---------------------------------------------------------------------------------------------\n# D) USE STANDARD DEVIATION FROM ANOTHER SET OF IMAGES\ndef standardize_bias_std_other_set(Image, input_std_bias): \n print('-----------'*10)\n print('-----------'*10)\n print('Standardizing image with std from another set and a noise suppression factor ...')\n print('-----------'*10) \n std_im_list=[]\n for i_image in range(len(Image)):\n std_im_list.append(Image[i_image]/input_std_bias)\n std_im_list[i_image]=std_im_list[i_image].reshape((npoints-1,npoints-1))\n print('-----------'*10)\n print('-----------'*10)\n return std_im_list\n\n\n##---------------------------------------------------------------------------------------------\n# E) PUT ALL TOGETHER \ndef standardize_images(images,reference_images,method):\n\n# CALCULATE STD DEVIATION OF REFERENCE SET\n print('CALCULATING STANDARD DEVIATIONS OF REFERENCE SET')\n out_std_bias=get_std(reference_images, method)\n print(\"std for pixel\",out_std_bias[15,15])\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n# CALCULATE AVERAGE IMAGE OF REFERENCE SET\n print('CALCULATING 
AVERAGE IMAGE OF REFERENCE SET')\n out_avg_image=add_images(reference_images)\n print(\"avg for pixel\",out_avg_image[15,15])\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n# ZERO CENTER\n print('ZERO CENTERING IMAGES')\n# image_zero=zero_center(images,out_avg_image)\n image_zero=images\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n# DIVIDE BY STANDARD DEVIATION\n print('STANDARDIZING IMAGES')\n standard_image=standardize_bias_std_other_set(image_zero,out_std_bias)\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n\n return standard_image\n\n\n##---------------------------------------------------------------------------------------------\n# F) Output averaged image and final npy array\ndef output(images,std_name):\n\n Nimages=len(images)\n\n average_im =add_images(images) \n elapsed=time.time()-start_time\n print('elapsed time',elapsed) \n# output_image_array_data_true_value(images,str(std_label)+'_bias'+str(bias)+'_vflip_hflip_rot'+'_'+str(ptjmin)+'_'+str(ptjmax)+'_'+myMethod,std_name) \n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n# plot_all_images(standard_image,'std_'+str(bias)+'_flip_')\n# plot_all_images(flipped_img,'flip')\n# plot_all_images(flipped_img,'no_std')\n plot_avg_image(average_im,str(std_label)+'_bias'+str(bias)+'_vflip_hflip_rot'+'_'+str(ptjmin)+'_'+str(ptjmax)+'_'+myMethod,std_name,Nimages)\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n \n\n##=============================================================================================\n############ RUN FUNCTIONS\n##=============================================================================================\n \n\nif __name__=='__main__':\n\n##---------------------------------------------------------------------------------------------\n# LOAD FILES\n print('LOADING FILES...')\n jets_sig,subjets_sig, Njets_sig=loadfiles(dir_jets_subjets_sig) \n jets_bg,subjets_bg, 
Njets_bg=loadfiles(dir_jets_subjets_bg)\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n##---------------------------------------------------------------------------------------------\n# PREPROCESS IMAGES\n print('PREPROCESSING IMAGES...')\n images_sig=preprocess(subjets_sig,'tt')\n images_bg=preprocess(subjets_bg,'QCD') \n myN_jets=np.min([len(images_sig),len(images_bg),myN_jets]) \n print(\"Number of images (sig=bg) used for analysis\",myN_jets)\n images_sig=images_sig[0:myN_jets]\n images_bg=images_bg[0:myN_jets]\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n##--------------------------------------------------------------------------------------------- \n# plot_my_image(images_sig,'tt','_no_rot'+'_'+str(ptjmin)+'_'+str(ptjmax))\n# plot_my_image(images_bg,'QCD','_no_rot'+'_'+str(ptjmin)+'_'+str(ptjmax))\n\n##---------------------------------------------------------------------------------------------\n# ZERO CENTER AND NORMALIZE BY STANDARD DEVIATION\n print('ZERO CENTERING AND NORMALIZING IMAGES BY STANDARD DEVIATIONS...')\n if std_label == 'avg_std':\n sig_image_norm = standardize_images(images_sig,images_sig+images_bg,myMethod)\n bg_image_norm = standardize_images(images_bg,images_sig+images_bg,myMethod)\n elif std_label == 'bg_std':\n sig_image_norm = standardize_images(images_sig,images_bg,myMethod)\n bg_image_norm = standardize_images(images_bg,images_bg,myMethod)\n elif std_label == 'sig_std':\n sig_image_norm = standardize_images(images_sig,images_sig,myMethod)\n bg_image_norm = standardize_images(images_bg,images_sig,myMethod)\n elif std_label == 'no_std':\n sig_image_norm=images_sig\n bg_image_norm=images_bg\n elapsed=time.time()-start_time\n print('elapsed time',elapsed)\n##---------------------------------------------------------------------------------------------\n# OUTPUT\n print('OUTPUT...')\n output(sig_image_norm,'tt')\n output(bg_image_norm,'QCD')\n# output(images_sig,'tt')\n# 
output(images_bg,'QCD')\n\n\n##---------------------------------------------------------------------------------------------\n print('FINISHED.')\n\n print('-----------'*10)\n print(\"Code execution time = %s minutes\" % ((time.time() - start_time)/60))\n print('-----------'*10) \n \n\n \n \n"
}
] | 6 |
chriswoodle/Name-Tag-Printer | https://github.com/chriswoodle/Name-Tag-Printer | 46440a48d93a4eca8c5b2dafb1ac40a9ff0aaf15 | dc8c110dce377b04c6c78efb02f42b9cb604b151 | db2b21a4293070853eb4629e88ab2c1dd145a5ca | refs/heads/master | 2016-09-16T12:48:30.884520 | 2015-09-10T23:31:33 | 2015-09-10T23:31:33 | 42,276,152 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7400000095367432,
"alphanum_fraction": 0.7400000095367432,
"avg_line_length": 16,
"blob_id": "2143dcfe7ae4d34b2825d57dc70d1576c29f7bd3",
"content_id": "20fe3c7739d533ad54f8f1632093d9c7e990b43b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 50,
"license_type": "permissive",
"max_line_length": 24,
"num_lines": 3,
"path": "/shellScript.sh",
"repo_name": "chriswoodle/Name-Tag-Printer",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\ncd /home/pi/machine-shop\nsudo node app"
},
{
"alpha_fraction": 0.7437137365341187,
"alphanum_fraction": 0.7630560994148254,
"avg_line_length": 63.5625,
"blob_id": "4488265a7890dd32ee34903f593d1af82d97a502",
"content_id": "42ca723d453a2619c598f148603b706b6e71eece",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1034,
"license_type": "permissive",
"max_line_length": 148,
"num_lines": 16,
"path": "/README.md",
"repo_name": "chriswoodle/Name-Tag-Printer",
"src_encoding": "UTF-8",
"text": "# Name Tag Printer #\n### This is a project for a Raspberry Pi Card Reader/Label Printer ###\nThe system consists of using a USB card reader, DYMO LabelWriter 450 Turbo and a Raspberry PI B\n\nThis nodejs application requires an internet connection to a time server. It also uses a LCD display. \n## Prerequisites/ tips ##\n* Install nodejs - https://learn.adafruit.com/node-embedded-development/installing-node-dot-js\n* Nodejs Python execution - https://www.npmjs.com/package/python-shell\n* Install CUPS http://www.howtogeek.com/169679/how-to-add-a-printer-to-your-raspberry-pi-or-other-linux-computer/\n* Python printing with cups page 102-103 http://www.themagpi.com/issue/issue-se1/\n* sudo apt-get install python-cups\n* Auto mount usb - http://www.instructables.com/id/Mounting-a-USB-Thumb-Drive-with-the-Raspberry-Pi/step3/Set-up-a-mounting-point-for-the-USB-drive/\n* CSV api - https://www.npmjs.com/package/fast-csv\n\n### For lcd support ###\n* https://learn.adafruit.com/downloads/pdf/drive-a-16x2-lcd-directly-with-a-raspberry-pi.pdf\n\n"
},
{
"alpha_fraction": 0.6734693646430969,
"alphanum_fraction": 0.6924700736999512,
"avg_line_length": 38.44444274902344,
"blob_id": "4967e3709995976c6f2ddd7f98b116dbc9b638fd",
"content_id": "1d237ac299c09860d76e5d71b49d45aa96dc66f4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1421,
"license_type": "permissive",
"max_line_length": 117,
"num_lines": 36,
"path": "/Python/print.py",
"repo_name": "chriswoodle/Name-Tag-Printer",
"src_encoding": "UTF-8",
"text": "import logging\nlogging.raiseExceptions = False \n# ignores errors that are raised due to no logging handeler\nimport sys\nimport cups \nfrom xhtml2pdf import pisa \n# reads all arguments, returns list\npassed_args = sys.argv[1:]\nname = passed_args[0]\ncert_level = passed_args[1]\ntime = passed_args[2]\n# set printing options \noptions = {'PageSize':'w28mmh89mm', 'scaling': '100' }\nfilename = \"/home/pi/print.pdf\" \n# generate content \nxhtml = \"\" \nxhtml += \"<style>@page {size: 89mm 28mm landscape;font-size:5mm;padding:4mm;}</style>\"\nxhtml += \"<div><div><div style='font-size:15mm;display:inline-block;'>\" + cert_level + \" </div>\"\nxhtml += \"<div style='font-size:8mm;line-height:6.5mm;display:inline-block;margin-top:2mm;'>\" + name + \"</div></div>\"\nxhtml += \"<div style='font-size:3mm;line-height:3mm;'>FIT Machine Shop - \" + time + \"</div></div>\"\npdf = pisa. CreatePDF(xhtml, file(filename, \"w\")) \nif not pdf. err: \n\t# Close PDF file - otherwise we can' t read it \n\tpdf. dest. close() \n\t# print the file using cups\n\tconn = cups. Connection() \n\t# Get a list of all printers \n\tprinters = conn. getPrinters() \n\t#for printer in printers: \n\t\t# Print name of printers to stdout (screen) \n\t#\tprint printer, printers[printer][\"device-uri\"] \n\t# get first printer from printer list \n\tprinter_name = printers. keys()[0] \n\tconn. printFile(printer_name, filename, \"Python_Status_print\", options) \nelse: \n\tprint \"Unable to create pdf file\" \n"
},
{
"alpha_fraction": 0.7027027010917664,
"alphanum_fraction": 0.7027027010917664,
"avg_line_length": 36,
"blob_id": "45094f0ed44648cb4bc58d85b068d39c7c764457",
"content_id": "d120b37236ec0bd8e5aab5a727894e8d6b712706",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 74,
"license_type": "permissive",
"max_line_length": 36,
"num_lines": 2,
"path": "/app.js",
"repo_name": "chriswoodle/Name-Tag-Printer",
"src_encoding": "UTF-8",
"text": "console.log(\"Application started.\");\nvar Core = require('./NodeJS/core');\n"
}
] | 4 |
juliedelisle5/pyovoyager | https://github.com/juliedelisle5/pyovoyager | 2ee71ca827571efa16a34c2b96b58f94ff5805b7 | 31e4d91627e827cb8e0c81557ef7b8da237e1e19 | c133ef6aeb26770acc627e4a155cae465492e1bc | refs/heads/master | 2016-09-06T08:40:03.310613 | 2013-05-09T12:56:28 | 2013-05-09T12:56:28 | 40,375,051 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5814037919044495,
"alphanum_fraction": 0.6155933141708374,
"avg_line_length": 40.88257598876953,
"blob_id": "89dec0ae507fa5927c8defcc16c7c465cb63b1cf",
"content_id": "27b7cabfed3a585fa287b6edf178d402fe4df78c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11056,
"license_type": "no_license",
"max_line_length": 166,
"num_lines": 264,
"path": "/Resources/audio.py",
"repo_name": "juliedelisle5/pyovoyager",
"src_encoding": "UTF-8",
"text": "from pyo import *\nfrom random import uniform\nfrom math import pow\n\n#Mettre l'attribut wave a une valeur entre 0 et 1 (float) par defaut et essayer ensuite avec les tables.\n\n#Premiere evaluation:\n### Tres bien! En priorite selon moi:\n### 1 - Faire les connections entre les oscillateurs\n### 2 - Le controle des notes en midi\n### \n### Le tout peut etre controle en argument et en appels de methode\n### Ce qui rend l'interface graphique secondaire (du moins pour le \n### projet dans le cadre du cours!)\n### 19/20\n\n\n\n\"\"\"\nA faire prochainement:\n\n-Classe pour le mixeur qui recevra les differentes combinaisons d'oscillateurs et de generateurs de bruit.\n-Osc 1-2 sync : avec un objet OscTrig et un objet Metro qui est regle a la vitesse de l'oscillateur 1.\n-Osc 3-1 FM : calculer le rapport modulante/porteuse et faire la transition osc3.amp-->frequence en Hertz\n-Commande Glide Rate qui permet de faire un portamento entre les frequences.\n-Prevoir l'option SfPlayer/External Input pour une source sonore supplementaire.\n\n-Creer une classe ADSR\n-Faire en sorte que l'amplitude de sortie et la frequence de coupure des filtres puisse etre controlee\n par une enveloppe ADSR ; declenchement des sons par un trig. *Tenir compte du parametre \"amount to filter\".\n \n-Generer des formes d'onde intermediaires a partir des formes d'onde de base. Modifier le dictionnaire en consequence.\n\n-Modifier les methodes setParametre et utiliser l'autre facon de faire avec des @\n\n-Rediger des docstrings decents.\n\"\"\"\n\nclass Oscillator(): \n \"\"\"Oscillator class. The oscillator is the first sound generator of the synthesizer. 
\n \n Parameters:\n -wave: Type of oscillator (1-sine, 2-triangle, 3-sawtooth, 4-square or 5-rectangular)\n -freq: Main frequency of the oscillator (in Hertz - float).\n -transpo: Transposition, in semi-tones (fine tune for the principal oscillator and \n frequency for the auxiliary oscillators (float).\n -octave: Octave transposition (32,16,8,4,2,1). Default is 32.\n -lfo: 0.=normal mode, 1.=LFO mode (float)\n -amp: Output amplitude, between 0 and 1. Default is 0.3.\n \n Methods:\n -out(): Envoie le son aux haut-parleurs et retourne l'objet lui-meme\n -getOut(): Retourne l'objet generant du son afin de l'inclure dans une chaine de traitement\n -setWave(): sets the wave parameter (wave)\n -setFreq(): sets the frequency parameter (freq)\n -setTranspo(): sets the transposition parameter (transpo)\n -setOctave(): sets the octave parameter (octave)\n -setLFO(): sets the LFO mode : 0.=normal, 1.=LFO (4 octaves below the principal frequency)\n -setAmp(): sets the amplitude parameter (amp) \"\"\"\n \n def __init__(self, wave=0., freq=130., transpo=0., octave=1., lfo=0., glide=0.005, amp=0.2):\n self.transpo = Sig(value=(Pow(base=2.0, exponent=(transpo/12.0), mul=1.))) \n self.octave = Sig(octave) #En realite, la valeur de self.octave sera comprise entre 1 et 6 (valeurs discretes en float).\n self.lfo = Sig(value=(Pow(base=2.0, exponent=(lfo*4.), mul=1.))) #LFO: 0 pour mode normal, 1 pour mode LFO.\n self.amp = Sig(amp)\n self.freq = Sig(value=freq*(Pow(base=2.0, exponent=self.octave))*self.transpo/self.lfo)\n self.glide = glide\n self.freq_interp = Port(input=self.freq, risetime = self.glide)\n \n #1-Onde sinusoidale creee a l'aide d'une HannTable\n self.sine_wave = HarmTable(size=2048)\n \n #2-Onde triangulaire\n self.triangle_wave = LinTable(list=[(0,0.),(512,1.),(1024,0.),(1536,-1.),(2047,0)], size=2048)\n \n #3-Onde en dents de scie\n self.saw_wave = SawTable(order=30)\n \n #4-onde carree standard\n self.square_wave = SquareTable(order=30)\n \n #5-Onde rectangulaire\n 
self.rect_wave = LinTable(list=[(0,0.),(1,1),(127,1),(128,0),(1023,0),(1152,0),(2048,0)], size=2048) #(1024,1),(1151,1),-->donne l'octave\n \n self.wave = Sig(value=wave) #Sine(.1, mul=.5, add=.5) #Attribut de l'objet, valeur en float entre 0. et 1.\n self.newTable = NewTable(length=2048./44100., chnls=1)\n self.table = TableMorph(input=self.wave, table=self.newTable, sources=[self.sine_wave, self.triangle_wave, self.saw_wave, self.square_wave, self.rect_wave]) #\n #self.newTable.view()\n self.osc = Osc(table=self.newTable, freq=self.freq_interp, mul=0.2).play()\n \n self.mix = Mix(self.osc, voices=2, mul=self.amp)\n \n def out(self): \n self.mix.out()\n return self\n \n def stop(self):\n self.mix.stop()\n return self\n \n def play(self):\n self.mix.play()\n return self\n \n def getOut(self):\n return self.mix\n \n def setWave(self,x): #A ajuster\n self.wave.value = x\n \n def setFreq(self,x): #frequence de l'oscillateur principal, 130 Hz par defaut\n self.freq.value = x \n \n def setTranspo(self,x):\n self.transpo.value = Pow(base=2.0,exponent=(x/12.0), mul=1.)\n \n #Octave: sur le commutateur, il sera ecrit 32-16-8-4-2-1, mais les valeurs reelles seront des floats entre 1. et 6. 
(discrets)\n def setOctave(self,x): #octave, 32 par defaut (note la plus grave, comme les tuyaux d'orgue)\n self.octave.value = x\n \n def setLFO(self,x):\n self.lfo.value = Pow(base=2.0, exponent=(x*4.), mul=1.)\n \n def setAmp(self,x):\n self.amp.value = x\n \n def setGlide(self,x): #---> Pour le Glide rate: ajoute le temps de portamento.\n self.freq_interp.risetime=x;\n \n \nclass NoiseGenerator():\n \n def __init__(self, noise=1, amp=0.3):\n \n self.amp = Sig(amp)\n self.last_noise = noise\n \n #bruit blanc\n self.white = Noise(mul=[0.15,0.15]).stop()\n #bruit rose\n self.pink = PinkNoise(mul=[0.18,0.18]).stop()\n #bruit brun\n self.brown = BrownNoise(mul=[0.2,0.2]).stop()\n \n noise_dict = {1:self.white, 2:self.pink, 3:self.brown}\n noise_dict[noise].play() #Appel du generateur de bruit choisi\n self.noise = Sig(noise_dict[noise])\n \n self.mix = Mix(self.noise, voices=2, mul=self.amp)\n \n def out(self): #Envoie le son aux haut-parleurs et retourne l'objet lui-meme\n self.mix.out()\n return self\n \n def stop(self):\n self.mix.stop()\n return self\n \n def play(self):\n self.mix.play()\n return self\n \n def getOut(self): #retourne l'objet generant du son afin de l'inclure dans une chaine de traitement\n return self.mix\n \n def setAmp(self,x):\n self.amp.value = x\n\n def setNoise(self,x):\n noise_dict = {1:self.white, 2:self.pink, 3:self.brown}\n noise_dict[self.last_noise].stop()\n noise_dict[x].play()\n self.noise.value = noise_dict[x]\n self.last_noise = x\n\n \nclass Filter(): #Le changement d'input fait planter le programme. 
(Il ne devrait pas y en avoir de toute facon, mais on ne sait jamais.)\n \n def __init__(self, input, filter_mode=1, cutoff=2000., spacing=0, resonance=1., pan_mode=1, amp=0.8):\n self.filter_mode = Sig(value=filter_mode)\n #self.input = Sig(value=input, mul=[1.,1.]) -->vieille version\n self.input = input\n self.in_fader = InputFader(self.input)\n self.cutoff = Sig(value=cutoff)\n self.spacing = Sig(value=spacing) #Entre -3 et 3 (float). Indique le nombre d'octaves entre les deux frequences de coupure.\n \n self.variation = Sig(value=0.) #Pour le Amount to Filter (voir adsr.py). A zero par defaut.\n self.freq1 = Sig(value=(self.cutoff*Pow(base=2.,exponent=-1.*self.spacing)*Pow(base=2, exponent=self.variation)))\n self.freq2 = Sig(value=(self.cutoff*Pow(base=2.,exponent=self.spacing)*Pow(base=2, exponent=self.variation)))\n self.resonance = Sig(value=resonance)\n self.q = Sig(value=(self.resonance*4.99 + 1.)) #Resonance se situant entre 0 et 10, on vise un facteur Q entre 1 et 500.\n\n if filter_mode == 1: # dual lowpass\n self.filter1 = Biquadx(self.in_fader, freq=self.freq1, q=self.q, type=0, stages=2, mul=0.6, add=0) \n self.filter2 = Biquadx(self.in_fader, freq=self.freq2, q=self.q, type=0, stages=2, mul=0.6, add=0) \n elif filter_mode == 2: #lowpass/highpass\n self.filter1 = Biquadx(self.in_fader, freq=self.freq1, q=self.q, type=0, stages=2, mul=0.6, add=0) \n self.filter2 = Biquadx(self.in_fader, freq=self.freq2, q=self.q, type=1, stages=2, mul=0.6, add=0) \n \n pan_dict1 = {1:0.5, 2:0., 3:1., 4:0.} #1=50% de chaque cote; 2=filtre 1 a gauche, filtre 2 a droite\n pan_dict2 = {1:0.5, 2:1., 3:0., 4:0.} #3=filtre 1 a droite, filtre 2 a gauche, 4=mono (gauche)\n self.pan1 = SigTo(value=pan_dict1[pan_mode])\n self.pan2 = SigTo(value=pan_dict2[pan_mode])\n self.amp = Sig(value=amp)\n self.filter1_pan = SPan(input=self.filter1.mix(1), outs=2, pan=self.pan1, mul=self.amp*(1./self.resonance))\n self.filter2_pan = SPan(input=self.filter2.mix(1), outs=2, pan=self.pan2, 
mul=self.amp*(1./self.resonance))\n \n ### L'InputFader devrait etre au debut de la classe et recevoir directement l'arguement \"input\".\n ### Il pourrait remplacer l'objet Sig en self.input... ---> Cause un bogue. A regler!\n \n \n def setInput(self, x, fadetime=0.05):\n self.input = x\n self.in_fader.setInput(x, fadetime)\n \n def out(self): #Envoie le son aux haut-parleurs et retourne l'objet lui-meme\n self.filter1_pan.out()\n self.filter2_pan.out()\n return self \n\n def stop(self):\n self.filter1_pan.stop()\n self.filter2_pan.stop()\n return self\n\n def play(self):\n self.filter1_pan.play()\n self.filter2_pan.play()\n return self\n \n def setFilter_mode(self,x):\n self.filter_mode.value = x\n \n def setCutoff(self,x):\n self.cutoff.value = x\n \n def setVariation(self,x):\n self.variation.value = x\n \n def setSpacing(self,x):\n self.spacing.value = x\n \n def setResonance(self,x):\n self.resonance.value = x\n \n def setPan_mode(self,x):\n pan_dict1 = {1:0.5, 2:0., 3:1., 4:0.} #1=50% de chaque cote; 2=filtre 1 a gauche, filtre 2 a droite\n pan_dict2 = {1:0.5, 2:1., 3:0., 4:0.} #3=filtre 1 a droite, filtre 2 a gauche, 4=mono (gauche)\n self.pan1.value = pan_dict1[x]\n self.pan2.value = pan_dict2[x]\n \n def setAmp(self,x):\n self.amp.value = x\n#Fin de la classe filtre\n\nif __name__ == '__main__':\n s = Server().boot()\n \n src1 = Oscillator(wave=1.,octave=1., lfo=0.).play()\n src2 = Oscillator(wave=1.,octave=2., lfo=0.).stop()\n bruit = NoiseGenerator().stop()\n filtre = Filter(src1.getOut(), amp=0.1).out()\n\n s.gui(locals())"
},
{
"alpha_fraction": 0.5507968068122864,
"alphanum_fraction": 0.5776892304420471,
"avg_line_length": 30.061946868896484,
"blob_id": "1d47209d925968d17e3a9841bc6585f3228df09b",
"content_id": "6c5f3931af2d14f7e9007561d243143112d8e81e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7028,
"license_type": "no_license",
"max_line_length": 200,
"num_lines": 226,
"path": "/Resources/mixer.py",
"repo_name": "juliedelisle5/pyovoyager",
"src_encoding": "UTF-8",
"text": "from pyo import *\nfrom random import uniform\nfrom audio import *\n\n#On pourrait passer le signal obtenu de l'objet MixerSection dans un effet externe avant de le passer au filtre.\nclass MixerSection():\n def __init__(self, ref_freq=130., finetune=0., mul=0.8):\n self.ref_freq = Sig(value=ref_freq)\n self.fine_tune = Sig(value=finetune)\n \n self.wave1 = 0. #Sinusoidale par defaut\n self.freq1 = self.ref_freq*(Pow(base=2.0, exponent=(self.fine_tune/12.0), mul=1.))#controle avec le fine_tune, +ou- 2 demi-tons par rapport a la frequence de reference\n self.octave1 = 1.\n self.amp1 = 0.4\n \n self.wave2 = 0. #Valeurs par defaut a changer par des appels de methode\n self.freq2 = self.freq1\n self.transpo2 = 0.\n self.octave2 = 1.\n self.amp2 = 0.4\n \n self.wave3 = 0. #Valeurs par defaut a changer par des appels de methode\n self.freq3 = self.freq1\n self.transpo3 = 0.\n self.octave3 = 1.\n self.lfo3 = 0.\n self.amp3 = 0.4\n \n self.noise_type = 1 #Seulement pour la valeur par defaut.\n self.noise_amp = 0.3\n \n #Pour le fun, un petit SfPlayer (je pourrais peut-etre faire de la synthese ou de la modulation avec...)\n #(Il fallait que je plogue le rire de Jacques Languirand dans mon travail - c'est mon baseball majeur a moi!)\n #Pour changer la source : appeler objetMixerSection.setSfPlayer_path(path)\n self.sfPlayer_mul = 0.4\n self.sfPlayer = SfPlayer(path=\"jacquesLanguirand.aiff\", speed=1, loop=True, offset=0, interp=2, mul=self.sfPlayer_mul, add=0).stop()\n \n #Source externe: verifier le channel.\n self.external_mul = 0.4\n self.external = Input(mul=self.external_mul).stop()\n \n self.osc1 = Oscillator(wave=self.wave1, freq=self.freq1, transpo=0., octave=self.octave1, lfo=0., amp=self.amp1).stop()\n self.osc2 = Oscillator(wave=self.wave2, freq=self.freq2, transpo=self.transpo2, octave=self.octave2, lfo=0., amp=self.amp2).stop()\n self.osc3 = Oscillator(wave=self.wave3, freq=self.freq3, transpo=self.transpo3, octave=self.octave3, lfo=self.lfo3, 
amp=self.amp3).stop()\n self.noise = NoiseGenerator(noise=self.noise_type, amp=self.noise_amp).stop()\n \n #Pour la synthese FM\n self.ratio = Sig(value=(self.osc3.freq/self.osc1.freq))\n self.index = Sig(self.osc3.amp*20.)\n self.mod_osc = Osc(table=self.osc3.newTable, freq=self.freq1*self.ratio, mul=0.1).stop()\n self.port_phasor_freq = self.mod_osc*self.freq1*self.ratio*self.index\n self.port_phasor = Phasor(freq=self.port_phasor_freq+self.freq1).stop()\n self.fm = Osc(table=self.osc1.newTable, freq=self.port_phasor_freq, mul=0.1, add=-0.05).stop()\n \n #Pour le Sync 1-2\n self.metro = Metro(time=1/self.freq1).stop()\n self.osc2Aux = OscTrig(table=self.osc2.newTable, trig=self.metro, freq=self.osc2.freq, mul=0.5).stop()\n \n #Signal de sortie\n self.mul = mul\n self.inputs = [self.osc1.getOut(), self.osc2.getOut(), self.osc3.getOut(), self.noise.getOut(), self.sfPlayer, self.external, self.fm, self.osc2Aux] #ajouter external et sfplayer s'il y a lieu\n self.mix = Mix(input=self.inputs, voices=1, mul=self.mul)\n \n #Methodes generales (out, getOut, stop, play)\n def out(self):\n self.mix.out()\n return self\n \n def getOut(self):\n return self.mix\n \n def play(self):\n self.mix.play()\n return self\n \n def stop(self):\n self.mix.stop()\n return self\n \n def setMul(self,x):\n self.mix.mul = x;\n \n #Methodes pour les parametres des oscillateurs et du generateur de bruit\n def setRefFreq(self,x):\n self.ref_freq.value = x\n \n def setFineTune(self,x):\n self.fine_tune.value = x\n \n def setWave1(self,x):\n self.osc1.setWave(x)\n \n def setOctave1(self,x):\n self.osc1.setOctave(x)\n \n def setAmp1(self,x):\n self.osc1.setAmp(x)\n \n def setWave2(self,x):\n self.osc2.setWave(x)\n \n def setTranspo2(self,x):\n self.osc2.setTranspo(x)\n \n def setOctave2(self,x):\n self.osc2.setOctave(x)\n \n def setAmp2(self, x):\n self.osc2.setAmp(x)\n\n def setWave3(self,x):\n self.osc3.setWave(x)\n\n def setTranspo3(self,x):\n self.osc3.setTranspo(x)\n\n def 
setOctave3(self,x):\n self.osc3.setOctave(x)\n\n def setLFO3Mode(self,x):\n self.osc3.setLFO(x)\n \n def setAmp3(self, x):\n self.osc3.setAmp(x)\n \n def setNoiseType(self,x):\n self.noise.setNoise(x)\n \n def setNoiseAmp(self,x):\n self.noise.setAmp(x)\n \n def setExternal_mul(self,x):\n self.external.mul = x\n \n def setSfPlayer_mul(self,x):\n self.sfPlayer.mul = x\n \n def setSfPlayer_path(self,path):\n self.sfPlayer.path = path\n \n #Methodes on/off pour les oscillateurs et le generateur: \n \n def externalOn(self):\n self.external.play()\n \n def externalOff(self):\n self.external.stop()\n \n def sfPlayerOn(self):\n self.sfPlayer.play()\n \n def sfPlayerOff(self):\n self.sfPlayer.stop()\n \n def osc1On(self):\n self.osc1.play()\n \n def osc1Off(self):\n self.osc1.stop()\n \n def osc2On(self):\n self.osc2.play()\n \n def osc2Off(self):\n self.osc2.stop()\n \n def osc3On(self):\n self.osc3.play()\n \n def osc3Off(self):\n self.osc3.stop()\n \n def noiseOn(self):\n self.noise.play()\n \n def noiseOff(self):\n self.noise.stop()\n \n #Glide Rate\n def glideOn(self):\n self.osc1.setGlide(1.) 
\n self.osc2.setGlide(1.)\n self.osc3.setGlide(1.)\n \n def glideOff(self):\n self.osc1.freq_interp.risetime = 0.05\n self.osc2.freq_interp.risetime = 0.05\n self.osc3.freq_interp.risetime = 0.05\n \n def setGlideRate(x):\n self.osc1.freq_interp.risetime = x\n self.osc2.freq_interp.risetime = x\n self.osc3.freq_interp.risetime = x\n \n #Synthese FM (3-1 FM)\n def synthFMOn(self):\n self.osc1Off()\n self.osc3Off()\n self.mod_osc.play()\n self.port_phasor.play()\n self.fm.play()\n \n def synthFMOff(self):\n self.fm.stop()\n self.port_phasor.stop()\n self.mod_osc.stop()\n self.osc1On()\n self.osc3On()\n \n #Sync 1-2\n def sync12On(self):\n self.osc2Off()\n self.osc1On()\n self.metro.play()\n self.osc2Aux.play()\n \n def sync12Off(self):\n self.osc2Aux.stop()\n self.metro.stop()\n self.self.osc2On()\n\n\nif __name__ == '__main__':\n s = Server(duplex=1).boot()\n mix = MixerSection(130.,0.,0.8).out()\n s.gui(locals())\n "
},
{
"alpha_fraction": 0.6232319474220276,
"alphanum_fraction": 0.6363154053688049,
"avg_line_length": 33.70552062988281,
"blob_id": "86e0eab43d2e43edbaf512c7cbc0ee476c6774d8",
"content_id": "6116418e111c3ada90511a2ecb542d784b6b8815",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5656,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 163,
"path": "/Resources/adsr.py",
"repo_name": "juliedelisle5/pyovoyager",
"src_encoding": "UTF-8",
"text": "from pyo import *\nfrom mixer import *\n\n#Chaines audio du synthetiseur analogique, avec et sans controleur MIDI:\n# generateurs de son/bruit --> mixeur --> filtre --> enveloppe ADSR de volume\n#Enveloppe est donc la classe \"finale\" dont le signal est achemine a la sortie audio.\n\n#Appels de methodes sur les generateurs de son/bruit:\n# enveloppe.mixer.methode(valeur)\n\n#Appels de methodes sur le filtre:\n# enveloppe.filtre.methode(valeur)\n\n\n#Mode 1 (Pour usage avec un controleur MIDI)\nclass ADSR():\n def __init__(self, amount=0., mul=0.8):\n #controles generaux\n self.mul = Sig(value=mul)\n self.midi = Notein(scale=1)\n self.mixer = MixerSection(ref_freq = self.midi['pitch']).stop()\n #Enveloppe ADSR de volume\n self.adsr = Adsr(attack=.01, decay=.2, sustain=.5, release=.1, dur=0, mul=.5)\n self.midi_adsr = MidiAdsr(self.midi['velocity'],mul=self.mul)\n #Enveloppe ADSR du filtre\n self.amount = amount #Potentiometre \"Amount to filter\"; valeur par defaut\n self.adsr2 = Adsr(attack=.01, decay=.2, sustain=.5, release=.1, dur=0, mul=self.amount)\n self.midi_adsr2 = MidiAdsr(self.midi['velocity'],mul=self.amount)\n #Le mixer est passe dans le filtre avant d'etre achemine vers la sortie audio\n self.filtre = Filter(self.mixer.getOut(),amp=self.midi_adsr).stop()\n #Controles separes (lorsque c'est le cas, ils ne sont pas passes dans le filtre)\n self.mix1 = Mix(input=[self.mixer.osc3.getOut()], voices=2, mul=self.mul).stop()\n self.mix2 = Mix(input=[self.mixer.noise.getOut()], voices=2, mul=self.mul).stop()\n self.mix3 = Mix(input=[self.mixer.external], voices=2, mul=self.mul).stop()\n self.mix4 = Mix(input=[self.mixer.sfPlayer], voices=2, mul=self.mul).stop()\n \n #Volume master\n def setMasterVolume(self,x):\n self.mul.value = x\n \n def out(self):\n self.mixer.play() \n self.filtre.out()\n return self\n \n def stop(self):\n self.mixer.stop()\n self.filtre.stop()\n \n #Play et stop note quand le controle clavier est desactive\n def playNote(self):\n 
self.adsr.play()\n \n def stopNote(self):\n self.adsr.stop()\n \n #Activer ou desactiver le controle de l'enveloppe d'amplitude par les touches du controleur MIDI\n #Actif par defaut\n def kbAmpCtlOn(self):\n self.filtre.setAmp(self.midi_adsr)\n \n def kbAmpCtlOff(self): \n self.filtre.setAmp(self.adsr)\n \n #Activer ou desactiver le controle de l'enveloppe du filtre par les touches du controleur MIDI\n #Actif par defaut\n def kbFilterCtlOn(self):\n self.filtre.setVariation(self.midi_adsr2)\n\n def kbFilterCtlOff(self): #Ne fonctionne pas comme prevu...\n self.filtre.setVariation(self.adsr2)\n \n #Activer ou desactiver le controle de la frequence de la note jouee par les touches du controleur MIDI\n #Actif par defaut\n def kbFreqCtlOn(self):\n self.mixer.setRefFreq(self.midi['pitch'])\n \n def kbFreqCtlOff(self):\n self.mixer.setRefFreq(442.)\n \n #Permet d'activer ou de desactiver le controle clavier du 3e oscillateur, pour en faire un drone ou une pedale.\n def kb3ControlOn(self):\n self.mix1.stop()\n self.mixer.osc3.setAmp(self.midi_adsr)\n self.mixer.osc3.setFreq(self.mixer.freq1) #self.mixer.osc3.self.midi['pitch']\n\n def kb3ControlOff(self):\n self.mixer.osc3.setFreq(442.)\n self.mixer.osc3.setAmp(self.adsr)\n self.mix1.out()\n \n #Permet d'activer ou de desactiver le controle clavier du generateur de bruit, pour en faire un drone.\n def kbNoiseControlOn(self):\n self.mix2.stop()\n self.mixer.noise.setAmp(self.midi_adsr)\n\n def kbNoiseControlOff(self):\n self.mixer.noise.setAmp(self.adsr)\n self.mix2.out()\n\n #Permet d'activer ou de desactiver le controle clavier de l'amplitude de la source externe.\n def kbExtControlOn(self):\n self.mix3.stop()\n self.mixer.external.mul = self.midi_adsr\n\n def kbExtControlOff(self):\n self.mixer.external.mul = 0.4\n self.mix3.out()\n \n #Permet d'activer ou de desactiver le controle clavier du sfPlayer, pour l'avoir en continu.\n def kbSFPControlOn(self):\n self.mix4.stop()\n self.mixer.sfPlayer.mul = self.midi_adsr\n\n 
def kbSFPControlOff(self):\n self.mixer.sfPlayer.mul = 0.4\n self.mix4.out()\n \n #Parametres de l'enveloppe ADSR de volume\n def setAttack(self,x):\n self.midi_adsr.setAttack(x)\n self.adsr.setAttack(x)\n \n def setDecay(self,x):\n self.midi_adsr.setDecay(x)\n self.adsr.setDecay(x)\n \n def setSustain(self,x):\n self.midi_adsr.setSustain(x)\n self.adsr.setSustain(x)\n \n def setRelease(self,x):\n self.midi_adsr.setRelease(x)\n self.adsr.setRelease(x)\n \n #Parametres de l'enveloppe ADSR du filtre\n def setAttack2(self,x):\n self.midi_adsr2.setAttack(x)\n self.adsr2.setAttack(x)\n\n def setDecay2(self,x):\n self.midi_adsr2.setDecay(x)\n self.adsr2.setDecay(x)\n\n def setSustain2(self,x):\n self.midi_adsr2.setSustain(x)\n self.adsr2.setSustain(x)\n\n def setRelease2(self,x):\n self.midi_adsr2.setRelease(x)\n self.adsr2.setRelease(x)\n \n def setAmount(self,x):\n self.midi_adsr2.mul = x\n self.adsr2.mul = x\n self.filtre.setVariation(self.midi_adsr2)\n\n\n\nif __name__ == '__main__':\n s = Server(duplex=1).boot() \n enveloppe = ADSR().out()\n s.gui(locals())"
}
] | 3 |
destrangis/testserver | https://github.com/destrangis/testserver | f4f1ade96da2bf3b70a4e38628f93aec211209da | 35c54bfab26c9c40177eb94e58ed1fc9b8fd9b92 | ad9cd2e9ea5d289530120f1c87e31d435cdeb0d6 | refs/heads/master | 2020-08-08T06:23:13.811736 | 2019-10-08T21:20:17 | 2019-10-08T21:20:17 | 213,754,531 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5630252361297607,
"alphanum_fraction": 0.5718487501144409,
"avg_line_length": 30.3157901763916,
"blob_id": "cd7556c03fe2ce6c378a02367d89e756f174427d",
"content_id": "ec36e60f255061f4e4f6e4d75100654e4f3eaa85",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2380,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 76,
"path": "/server.py",
"repo_name": "destrangis/testserver",
"src_encoding": "UTF-8",
"text": "# Sample SSL Server loading certificates based on hostname using SNI_callback\nimport socket\nimport ssl\nfrom pprint import pformat\n\nhosts = {\n \"badabec\": \"badabec.pem\",\n \"pantagruel\": \"pantagruel.pem\",\n }\n\n\nclass Server:\n def __init__(self, address, port, hosts):\n self.address = address\n self.port = port\n self.host_contexts = {}\n\n self.defcontext = ssl.create_default_context(purpose=ssl.Purpose.CLIENT_AUTH)\n self.defcontext.sni_callback = self.name_callback\n self.load_certificates(hosts)\n\n self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n self.socket.bind( (self.address, self.port) )\n print(\"Listening on {}:{}\".format(self.address, self.port))\n self.socket.listen(5)\n\n\n def load_certificates(self, hosts):\n for hostname, hostcertfile in hosts.items():\n ctx = ssl.create_default_context(purpose=ssl.Purpose.CLIENT_AUTH)\n ctx.load_cert_chain(hostcertfile)\n self.host_contexts[hostname] = ctx\n\n\n def name_callback(self, socket, hostname, ctx):\n print(\"SNI for {}\".format(hostname))\n if hostname:\n ctx = self.host_contexts.get(hostname)\n if ctx:\n socket.context = ctx\n return None\n\n\n def serve(self):\n while True:\n newsock, fromaddr = self.socket.accept()\n stream = self.defcontext.wrap_socket(newsock, server_side=True)\n try:\n self.handle_connection(stream)\n finally:\n stream.shutdown(socket.SHUT_RDWR)\n stream.close()\n\n\n def handle_connection(self, stream):\n chunksize = 1024\n data = b\"\"\n while True:\n chunk = stream.recv(chunksize)\n data += chunk\n if len(chunk) < chunksize:\n break\n\n print(\"RECEIVED: {}\".format(data))\n response = (b\"HTTP/1.1 200 OK\\r\\n\"\n b\"Server: testserver by destrangis\\r\\n\"\n b\"Content-length: 10\\r\\n\"\n b\"Content-type: text/plain\\r\\n\"\n b\"Connection: close\\r\\n\\r\\n\"\n b\"All fine\\r\\n\")\n stream.send(response)\n\nif __name__ == \"__main__\":\n srv = Server(\"0.0.0.0\", 
8080, hosts)\n srv.serve()\n"
}
] | 1 |
Wilson77Calixto/aprender-dados-Python | https://github.com/Wilson77Calixto/aprender-dados-Python | a7f42f57f971afa92b9e75a4558aa54d75b0da0b | 0f833d74d556c38d43cd48b4b03bc5d8a1fab14a | 92a92dee8a06decfea7207aef723fd53e0975099 | refs/heads/main | 2023-02-11T05:01:57.814730 | 2021-01-11T21:07:56 | 2021-01-11T21:07:56 | 328,688,497 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5131034255027771,
"alphanum_fraction": 0.5793103575706482,
"avg_line_length": 8.012499809265137,
"blob_id": "4e489ad38f7af425c1b27a78f8981cb5849a14da",
"content_id": "378d7adf7da34211d8ee3c8a55c537fe39247fdb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 727,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 80,
"path": "/Prática de Dados II.py",
"repo_name": "Wilson77Calixto/aprender-dados-Python",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# QUANTIDADE DE ACIDENTES AUTOMOBILÍSTICOS AO LONGO DO ANO\n\n# In[2]:\n\n\nimport pandas as pd\n\n\n# In[6]:\n\n\nacidente_auto = pd.Series([15, 20, 12, 16, 10, 11, 12, 15, 9, 10, 12, 17], \nindex = ['janeiro', 'fevereiro', 'março', 'abril', 'maio', 'junho',\n 'julho', 'agosto', 'setembro', 'outubro', 'novembro', 'dezembro'])\n\n\n# In[7]:\n\n\nacidente_auto\n\n\n# In[11]:\n\n\nacidente_auto['janeiro']\n\n\n# In[13]:\n\n\nacidente_auto[0:6]\n\n\n# In[15]:\n\n\nacidente_auto.mean()\n\n\n# In[17]:\n\n\nacidente_auto.std()\n\n\n# In[19]:\n\n\nacidente_auto.max()\n\n\n# In[21]:\n\n\nacidente_auto.min()\n\n\n# In[23]:\n\n\nacidente_auto.describe()\n\n\n# In[25]:\n\n\nacidente_auto.sum()\n\n\n# In[27]:\n\n\nacidente_auto[6:].mean()\n\n\n# In[ ]:\n\n\n\n\n"
},
{
"alpha_fraction": 0.3804347813129425,
"alphanum_fraction": 0.44565218687057495,
"avg_line_length": 4.294117450714111,
"blob_id": "11490c7a6bf09174d7ced5b48d3cedc5f328331e",
"content_id": "62717fe6c093bc15d9b10b00c01208dd5ad9a5e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 184,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 34,
"path": "/Prática de dados I.py",
"repo_name": "Wilson77Calixto/aprender-dados-Python",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# In[1]:\n\n\nimport pandas as pd\n\n\n# In[3]:\n\n\ns = pd.Series([1, 2, 3, 4, 5])\n\n\n# In[4]:\n\n\ns\n\n\n# In[6]:\n\n\ns.values\n\n\n# In[8]:\n\n\ns[3]\n\n\n# In[ ]:\n\n\n\n\n"
},
{
"alpha_fraction": 0.6567796468734741,
"alphanum_fraction": 0.6694915294647217,
"avg_line_length": 11.88888931274414,
"blob_id": "8810c24f3480885c22ac61b31023277de1475cb0",
"content_id": "8e4e6602cc8480bf5841d65c7a50722baee1cb09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 236,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 18,
"path": "/Prática de Dados IV.py",
"repo_name": "Wilson77Calixto/aprender-dados-Python",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# ESTUDO DE DATASET CSV\n\n# In[1]:\n\n\nimport pandas as pd\n\n\n# In[2]:\n\n\ndf = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/alcohol-consumption/drinks.csv')\n\n\n# In[ ]:\n\n\n\n\n"
},
{
"alpha_fraction": 0.48885586857795715,
"alphanum_fraction": 0.5408617854118347,
"avg_line_length": 12.714285850524902,
"blob_id": "861ceaf0724751fa0c0a1f08cf085e0aeae6714e",
"content_id": "71aa187d3c70cc06a504703ee6bce286b8c74d17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 686,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 49,
"path": "/Prática de Dados III.py",
"repo_name": "Wilson77Calixto/aprender-dados-Python",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# ESTUDO E PRÁTICA DE DATAFRAME\n\n# In[6]:\n\n\nimport pandas as pd\n\n\n# In[13]:\n\n\ndf = pd.DataFrame({'Aluno': ['Marina', 'Felipe', 'Cleyton', 'Isabel'],\n 'Créditos cursados': [20, 64, 32, 24],\n 'Rendimento acadêmico': [8.55, 7.88, 8.17, 9.04],\n 'Mês de nascimento': ['novembro', 'setembro', 'janeiro', 'julho'],\n 'Curso': ['Computação', 'Estatística', 'Computação', 'Matemática']})\n\n\n# In[15]:\n\n\ndf\n\n\n# In[17]:\n\n\ndf.iloc[2]\n\n\n# In[19]:\n\n\ndf['Rendimento acadêmico']\n\n\n# In[22]:\n\n\ndf['Rendimento acadêmico'].mean()\n\n\n# In[23]:\n\n\ndf['Rendimento acadêmico'].describe()\n\n"
}
] | 4 |
Leetaihaku/DQN_CarrotGrow | https://github.com/Leetaihaku/DQN_CarrotGrow | efe1e6deab97810cd59e9dca93228971dae4d5a4 | 1921e9e7f2db355d33d0bfbd5592e03141bc7f4c | 25a17d150a97a7dc5da8c6706bd461107fd532e1 | refs/heads/master | 2022-12-08T14:09:13.071958 | 2020-08-16T15:14:38 | 2020-08-16T15:14:38 | 283,742,336 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5206496119499207,
"alphanum_fraction": 0.5385692119598389,
"avg_line_length": 25.85714340209961,
"blob_id": "2e4f9573e0a4da2f62d0490938fc5608749e523f",
"content_id": "782cc7b3fcf3f826f0beea25dd8f8bdaddf2b13d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7373,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 266,
"path": "/Carrot_NN/dqn_beta-tester.py",
"repo_name": "Leetaihaku/DQN_CarrotGrow",
"src_encoding": "UTF-8",
"text": "import torch\nfrom torch import nn\nfrom torch import optim\nimport torch.nn.functional as F\nimport random\nimport numpy as np\nfrom collections import namedtuple\nimport copy\n\nNUM_STATES = 2\nNUM_ACTIONS = 4\nDISCOUNT_FACTOR = 0.99\nLEARNING_RATE = 0.01\nBATCH_SIZE = 256\nNODES = 24\nTRAIN_START = 1000\nCAPACITY = 10000\nEPISODES = 10000\nMAX_STEPS = 200\nEPSILON = 1.0\nEPSILON_DISCOUNT_FACTOR = 0.0001\nEPSILON_MIN = 0.01\nPATH = '.\\saved_model\\Beta-17.pth'\nDATA = namedtuple('DATA', ('state', 'action', 'reward', 'next_state', 'done'))\n\nclass DB:\n\n def __init__(self):\n self.capacity = CAPACITY\n self.memory = []\n self.index = 0\n\n def save_to_DB(self, state, action, reward, next_state, done):\n if (len(self.memory) < self.capacity):\n self.memory.append(None)\n\n self.memory[self.index] = DATA(state, action, reward, next_state, done)\n self.index = (self.index + 1) % self.capacity\n\n def sampling(self, batch_size):\n return random.sample(agent.db.memory, batch_size)\n\n def __len__(self):\n return len(self.memory)\n\n\n\nclass Brain:\n\n def __init__(self):\n self.num_states = NUM_STATES\n self.num_actions = NUM_ACTIONS\n self.optimizer = None\n self.Q = None\n self.target_Q = None\n self.epsilon = EPSILON\n\n def modeling_NN(self):\n model = nn.Sequential()\n model.add_module('fc1', nn.Linear(self.num_states, NODES))\n model.add_module('relu1', nn.ReLU())\n model.add_module('fc2', nn.Linear(NODES, NODES))\n model.add_module('relu2', nn.ReLU())\n model.add_module('fc3', nn.Linear(NODES, self.num_actions))\n return model\n\n def modeling_OPTIM(self):\n optimizer = optim.RMSprop(self.Q.parameters(), lr=LEARNING_RATE)\n return optimizer\n\n def update_Q(self):\n data = agent.db.sampling(BATCH_SIZE)\n batch = DATA(*zip(*data))\n state_serial = batch.state\n action_serial = torch.cat(batch.action).reshape(-1, 1)\n reward_serial = torch.cat(batch.reward)\n next_state_serial = batch.next_state\n done_serial = batch.done\n\n state_serial = 
torch.stack(state_serial)\n next_state_serial = torch.stack(next_state_serial)\n done_serial = torch.tensor(done_serial)\n\n # Float형 통일 => 신경망 결과추출(y and y_hat)\n Q_val = self.Q(state_serial)\n Q_val = Q_val.gather(1, action_serial)\n Target_Q_val = self.target_Q(next_state_serial).max(1)[0]\n Target_Q_val = reward_serial + DISCOUNT_FACTOR * (~done_serial) * Target_Q_val\n Target_Q_val = Target_Q_val.reshape(-1, 1)\n\n # 훈련 모드\n self.Q.train()\n # 손실함수 계산\n loss = F.smooth_l1_loss(Target_Q_val, Q_val)\n # 가중치 수정 프로세스\n # 옵티마이저 클리너\n self.optimizer.zero_grad()\n # 역전파 알고리즘\n loss.backward()\n # 가중치 수정\n self.optimizer.step()\n\n def update_Target_Q(self):\n self.target_Q = copy.deepcopy(self.Q)\n\n def action_order(self, state):\n '''Exploitation-이용'''\n self.Q.eval()\n with torch.no_grad():\n data = self.Q(state)\n action = torch.argmax(data).item()\n return action\n\n\n\nclass Agent:\n\n def __init__(self):\n self.brain = Brain()\n self.brain.Q = self.brain.modeling_NN()\n self.brain.target_Q = self.brain.modeling_NN()\n self.brain.optimizer = self.brain.modeling_OPTIM()\n self.db = DB()\n\n def update_Q_process(self):\n if self.db.__len__() < BATCH_SIZE:\n return\n else:\n self.brain.update_Q()\n\n def update_Target_Q_process(self):\n self.brain.update_Target_Q()\n\n def action_process(self, state):\n return self.brain.action_order(state)\n\n def save_process(self, state, action, reward, next_state, done):\n self.db.save_to_DB(state, action, reward, next_state, done)\n\n\n\nclass Carrot_House:\n\n def __init__(self):\n '''하우스 환경 셋팅'''\n self.Humid = 0\n self.Temp = 0\n self.Pre_temp = 0\n self.Cumulative = 0\n\n def supply_water(self):\n self.Humid += 7\n\n def temp_up(self):\n self.Temp += 1\n\n def temp_down(self):\n self.Temp -= 1\n\n def wait(self):\n return\n\n def pre_step(self):\n # 스텝\n self.Cumulative += 1\n # 직전온도\n self.Pre_temp = self.Temp\n # 수분량 감소, 온도 변동\n if self.Humid > 0:\n self.Humid -= 1\n else:\n self.Humid = 0\n # self.Temp += 
random.randint(-1, 1)\n\n def step(self, action):\n '''행동진행 => 환경결과'''\n\n #스텝환경 셋팅\n self.pre_step()\n\n # 물주기\n if action == 0:\n self.supply_water()\n # 온도 올리기\n elif action == 1:\n self.temp_up()\n # 온도 내리기\n elif action == 2:\n self.temp_down()\n # 현상유지\n elif action == 3:\n self.wait()\n\n # 보상\n reward = self.get_reward()\n\n # 종료조건\n if reward == -1:\n done = True\n elif self.Cumulative == MAX_STEPS:\n done = True\n else:\n done = False\n\n next_state = torch.tensor([self.Humid, self.Temp]).float()\n reward = torch.tensor([reward]).float()\n\n return next_state, reward, done\n\n def get_reward(self):\n\n if self.Humid > 0 and self.Humid <= 7:\n if self.Temp <= 0:\n reward = -0.5\n elif abs(18.0 - self.Temp) < abs(18.0 - self.Pre_temp):\n reward = 0.5\n elif abs(18.0 - self.Temp) == abs(18.0 - self.Pre_temp) and self.Temp == 18.0:\n reward = 1\n elif abs(18.0 - self.Temp) > abs(18.0 - self.Pre_temp):\n reward = -0.5\n elif self.Humid == 7:\n reward = 1\n elif abs(18.0 - self.Temp) == abs(18.0 - self.Pre_temp) and self.Temp != 18.0:\n reward = -0.5\n else:\n reward = 0.0\n else:\n reward = -1\n\n return reward\n\n def reset(self):\n '''환경 초기화'''\n init_humid = np.random.randint(low=0, high=7)\n init_temp = np.random.randint(low=0, high=36)\n self.Humid = init_humid\n self.Temp = init_temp\n init_state = torch.tensor([init_humid, init_temp])\n\n return init_state.float()\n\n\nif __name__ == '__main__':\n env = Carrot_House()\n agent = Agent()\n agent.brain.Q.load_state_dict(torch.load(PATH))\n agent.brain.Q.eval()\n scores, episodes = [], []\n for E in range(EPISODES):\n state = env.reset()\n score = 0\n for S in range(MAX_STEPS):\n print(S)\n print(state)\n action = agent.action_process(state)\n next_state, reward, done = env.step(action)\n if done:\n print(\"step\", S, \" episode:\", E,\n \" score:\", score, \" memory length:\", len(agent.db.memory), \" epsilon:\", agent.brain.epsilon)\n break\n else:\n score += reward\n state = next_state\n 
scores.append(score)\n episodes.append(E)\n print('Task End')"
},
{
"alpha_fraction": 0.5687804818153381,
"alphanum_fraction": 0.5697560906410217,
"avg_line_length": 26,
"blob_id": "253cada8937cc25a3bb501c0c9229127781a65e3",
"content_id": "c2cdfaab15a06200efbdc8541ee629fc37cb0527",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1025,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 38,
"path": "/Carrot_NN/BETA17/AI.py",
"repo_name": "Leetaihaku/DQN_CarrotGrow",
"src_encoding": "UTF-8",
"text": "import torch\nfrom Environment import Carrot_House\nfrom Agent import Agent\nfrom Hyperparams import PATH\nfrom Hyperparams import EPISODES\nfrom Hyperparams import MAX_STEPS\n\n\n'''if __name__ == '__main__':\n env = Carrot_House()\n agent = Agent()\n agent.brain.Q.load_state_dict(torch.load(PATH))\n agent.brain.Q.eval()\n scores, episodes = [], []\n for E in range(EPISODES):\n state = env.reset()\n score = 0\n for S in range(MAX_STEPS):\n print(S)\n print(state)\n action = agent.action_process(state)\n next_state, reward, done = env.step(action)\n if done:\n print(\"step\", S, \" episode:\", E, \" score:\", score)\n break\n else:\n score += reward\n state = next_state\n print('Task End')'''\n\n\n\ndef get_Action(Humid, Temp):\n agent = Agent()\n agent.brain.Q.load_state_dict(torch.load(PATH))\n agent.brain.Q.eval()\n action = agent.action_process(state)\n return action"
},
{
"alpha_fraction": 0.5354131460189819,
"alphanum_fraction": 0.5674536228179932,
"avg_line_length": 23.204082489013672,
"blob_id": "8f173dda36ac9a4e2faf60d87b0a0138069ae160",
"content_id": "8dfb91cea1bd7abfdc70cc2b7177385fcdcd288b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1467,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 49,
"path": "/ml-agents/Project/Assets/Scripts/Carrot.cs",
"repo_name": "Leetaihaku/DQN_CarrotGrow",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing Unity.MLAgents;\nusing Unity.MLAgents.Sensors;\n\npublic class Carrot : Agent\n{\n //Prototype Setting\n //최적 : 18C˚ >> [-30 ~ 30]\n //최적 : 토양 표면이 마르는 시기의 7 ~ 10일 간격 >> 500ml정도 추측이나 급수량은 추후 보정\n //조생 : 70 ~ 80(일) 중생 : 90 ~ 100(일) 만생 : 120일 이상(국내 조생종 다수)\n //생장 : False :: 수확 : True ++ 카메라로 줄기지면접촉 논의\n //정상 : True :: 비정상 : False >> 병충해등의 이유로 잎 색깔이 정상 범주 벗어날 시, 메세지\n //[노란색 = 무름병 >> 해결책 : 토양산도 상승 및 약제 공급 행동] [검은색 = 검은 잎마름병 >> 해결책 : 수분 및 약제 공급 행동]\n //최적 : 6pH >> [0 ~ 14]\n\n //public double Acidity = 0; \n //Pesticide\n //Nutrients\n\n Random rand_seed = new Random();\n\n public double Temp = 0;\n public int Humid_cycle = 0;\n //public int Harv_lim = 0;\n public bool Normal = true;\n //public double pH = 0;\n\n // Start is called before the first frame update\n void Start()\n {\n Temp = 18.0;\n Humid_cycle = 8;\n }\n\n // Update is called once per frame\n void Update()\n {\n StartCoroutine(Carrot_Update());\n }\n\n public IEnumerator Carrot_Update()\n {\n yield return new WaitForSeconds(1);\n }\n\n \n}\n"
},
{
"alpha_fraction": 0.570565402507782,
"alphanum_fraction": 0.586534321308136,
"avg_line_length": 21.715686798095703,
"blob_id": "495ad53161791d784da73c13aad82c0a9164e42f",
"content_id": "8578068771c394b0f1ded773a312ca1f6961b4eb",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2359,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 102,
"path": "/ml-agents/Project/Assets/Scripts/Capsule.cs",
"repo_name": "Leetaihaku/DQN_CarrotGrow",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing Unity.MLAgents;\nusing Unity.MLAgents.Sensors;\n\npublic class Capsule : Agent\n{\n Rigidbody RGCapsule;\n public Transform TRCapsule;\n void Start()\n {\n RGCapsule = GetComponent<Rigidbody>();\n }\n\n public Transform Portal_L;\n\n\n\n\n public override void OnEpisodeBegin()\n {\n if (this.transform.localPosition.y < 0)\n {\n // If the Agent fell, zero its momentum\n this.RGCapsule.angularVelocity = Vector3.zero;\n this.RGCapsule.velocity = Vector3.zero;\n this.transform.localPosition = new Vector3(0, 0.5f, 0);\n }\n\n TRCapsule.localPosition = new Vector3(0.17f, 0.65f, -0.9f);\n }\n\n\n\n\n public override void CollectObservations(VectorSensor sensor)\n {\n //Portal & Agent Position\n sensor.AddObservation(Portal_L.localPosition);\n //sensor.AddObservation(Portal_R.localPosition);\n sensor.AddObservation(this.transform.localPosition);\n\n // Agent velocity\n sensor.AddObservation(RGCapsule.velocity.x);\n sensor.AddObservation(RGCapsule.velocity.z);\n\n //Carrot Status\n //Carrot Carrot = GameObject.Find(\"Carrot\").GetComponent<Carrot>();\n //Carrot\n\n }\n\n\n\n\n public float forceMultiplier = 10;\n\n\n\n\n public override void OnActionReceived(float[] vectorAction)\n {\n // Actions, size = 2\n Vector3 controlSignal = Vector3.zero;\n controlSignal.x = vectorAction[0];\n controlSignal.z = vectorAction[1];\n RGCapsule.AddForce(controlSignal * forceMultiplier);\n\n float distance = Vector3.Distance(this.transform.localPosition, Portal_L.localPosition);\n\n // Rewards\n //정상행동 선택 >> 보상수여 + END\n if (distance < 1.42f)\n {\n SetReward(1.0f);\n EndEpisode();\n }\n\n //비정상행동 선택 >> 패널티 + END\n if (this.transform.localPosition.y < 0)\n {\n EndEpisode();\n }\n\n /*\n // Reached target\n if (distanceToTarget < 1.42f)\n {\n SetReward(1.0f);\n EndEpisode();\n }\n\n // Fell off platform\n else if (this.transform.localPosition.y < 0)\n {\n EndEpisode();\n }\n */\n }\n\n}\n"
},
{
"alpha_fraction": 0.5204081535339355,
"alphanum_fraction": 0.6632652878761292,
"avg_line_length": 15.333333015441895,
"blob_id": "773ac2128429496f4326dccfc5e7e6bbe257c496",
"content_id": "b02c0e88ddc8296ad3bdfd5978b74d4ad6f52c86",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 98,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 6,
"path": "/Carrot_NN/BETA17/Hyperparams.py",
"repo_name": "Leetaihaku/DQN_CarrotGrow",
"src_encoding": "UTF-8",
"text": "NUM_STATES = 2\nNUM_ACTIONS = 4\nNODES = 24\nEPISODES = 10000\nMAX_STEPS = 200\nPATH = '.\\Beta-17.pth'\n"
},
{
"alpha_fraction": 0.6731945872306824,
"alphanum_fraction": 0.698898434638977,
"avg_line_length": 22.371429443359375,
"blob_id": "a2265c0633b125a098ecef28b7c36a4efc35c311",
"content_id": "684f70bf0f069450fb45708d4c180881d792102b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 817,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 35,
"path": "/Carrot_NN/test.py",
"repo_name": "Leetaihaku/DQN_CarrotGrow",
"src_encoding": "UTF-8",
"text": "import torch\nfrom torch import nn\nfrom torch import optim\nimport torch.nn.functional as F\nimport numpy as np\nfrom collections import namedtuple\nimport random\nimport copy\n\n'''NODES = 24\nPATH = '.\\saved_model\\Beta-17.pth'\n\ndef modeling_NN():\n model = nn.Sequential()\n model.add_module('fc1', nn.Linear(2, NODES))\n model.add_module('relu1', nn.ReLU())\n model.add_module('fc2', nn.Linear(NODES, NODES))\n model.add_module('relu2', nn.ReLU())\n model.add_module('fc3', nn.Linear(NODES, 4))\n return model\n\nQ = modeling_NN()\nQ.load_state_dict(torch.load(PATH))\n\nQ.eval()\n\nprint(Q(torch.squeeze(torch.tensor([1,1]))))'''\nreward = 0.1\nreward2 = 0.2\nreward2 = torch.tensor(reward2)\nreward = np.array([reward])\nreward = torch.from_numpy(reward)\nreward = reward.float()\nprint(reward.shape)\nprint(reward2.shape)"
}
] | 6 |
Powering111/Aroid | https://github.com/Powering111/Aroid | dce8ba08cb198d3cc60fb6b082ba7496731b15cc | 79e5963a3706897a84acd08ac1ebebf3a7ecd8cf | 1af3affa2acc18d711c3577f675cb1f901683251 | refs/heads/master | 2022-04-07T00:58:52.125034 | 2020-03-04T06:35:00 | 2020-03-04T06:35:00 | 239,120,805 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6122449040412903,
"alphanum_fraction": 0.6122449040412903,
"avg_line_length": 15.166666984558105,
"blob_id": "b94b94d455f5a8087191972f01c2a6f532a3016d",
"content_id": "ce15fd24e6312c3d1b38b4c031e98049f9da2d09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 98,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 6,
"path": "/reseter.cpp",
"repo_name": "Powering111/Aroid",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <windows.h>\nint main() \n{ \n\tsystem(\"del %appdata%\\\\Aroid\\\\save\");\n} \n"
},
{
"alpha_fraction": 0.5391905307769775,
"alphanum_fraction": 0.5638697147369385,
"avg_line_length": 32.993289947509766,
"blob_id": "2a155afbbbf9a092fb7253fbfe8db79c407499cd",
"content_id": "b30ec935d8d8be100b956385f338d8aaf8e87817",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5065,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 149,
"path": "/main.py",
"repo_name": "Powering111/Aroid",
"src_encoding": "UTF-8",
"text": "import pygame\nfrom pygame.locals import *\nfrom pygame.color import Color\nfrom pygame.surface import Surface\nfrom pygame.sprite import Sprite\nimport sys\nimport os\nimport stageselect\nfrom stageselect import stages\npygame.init()\nscreen= pygame.display.set_mode([1024,720],DOUBLEBUF)\npygame.display.set_caption(\"Aroid <ALPHA> 1.0\")\npygame.display.set_icon(pygame.image.load('images/icon.png'))\npygame.font.init()\nBLACK= ( 0, 0, 0)\nWHITE= (255,255,255)\nBLUE = ( 0, 0,255)\nGREEN= ( 0,255, 0)\nRED = (255, 0, 0)\ndef save():\n global money,stagePercent\n print(\"====SAVE====\")\n file = open(os.environ[\"appdata\"]+\"\\\\Aroid\\\\save\",\"w\")\n file.write(\"SAVEFILE Indev 0.0.4a\\n\")\n file.write(str(money)+\"\\n\")\n for x in range(9):\n data=\"\"\n print(stageselect.stages[x])\n for y in range(stageselect.stages[x]):\n data+=str(stagePercent[x][y])\n data+=\" \"\n print(\"saved value : \"+str(data))\n file.write(data+\"\\n\")\n file.close()\n print(\"====saved====\")\ndef load():\n global money,stagePercent\n print(\"====LOAD====\")\n if not os.path.exists(os.environ[\"appdata\"]+\"\\\\Aroid\"):#no folder\n print(\"Directory Not Exists. Made Directory.\")\n os.mkdir(os.environ[\"appdata\"]+\"\\\\Aroid\")\n if not os.path.isfile(os.environ[\"appdata\"]+\"\\\\Aroid\\\\save\"):#no file\n print(\"File not exists. 
Making File\")\n money=0\n stagePercent=[]\n for x in range(9):\n dat=[]\n for y in range(stageselect.stages[x]):\n dat.append(0)\n stagePercent.append(dat)\n save()\n else: #yes file\n print(\"Loading File..\")\n stagePercent=[]\n file = open(os.environ[\"appdata\"]+\"\\\\Aroid\\\\save\",\"r\")\n fileVer=str(file.readline()).strip()\n print(\"File Version : \"+fileVer)\n if(fileVer==\"SAVEFILE Indev 0.0.4a\"):#file version check\n money=int(str(file.readline()).strip())\n for x in range(9):\n data=[]\n dat=str(file.readline()).strip().split()\n print(stageselect.stages[x])\n for y in range(stageselect.stages[x]):\n data.append(int(dat[y]))\n stagePercent.append(data)\n else:\n print(\"File not valid\")\n sys.exit()\n file.close()\n print(\"====loaded====\")\nclass Button(Sprite):\n def __init__(self,imgindex1,imgindex2,x,y):\n pygame.sprite.Sprite.__init__(self)\n self.i1,self.i2=imgindex1,imgindex2\n self.image=btnimg[self.i1]\n self.rect=self.image.get_rect()\n self.rect.x=x\n self.rect.y=y\n def draw(self):\n screen.blit(self.image,(self.rect.x,self.rect.y))\n def update(self,mouse):\n if self.rect.collidepoint(mouse):\n self.image=btnimg[self.i2]\n else:\n self.image=btnimg[self.i1]\n def click(self,mouse): #On click of button\n global screen,money,stagePercent\n if self.rect.collidepoint(mouse):\n pygame.mixer.music.stop()\n if self.i1==0:\n m,sp,s1,s2=stageselect.run(screen,money,stagePercent)\n if not m==-1:\n if m==100 and stagePercent[s1][s2]!=100:\n money+=sp\n if stagePercent[s1][s2]<m:\n stagePercent[s1][s2]=m\n \n save()\n elif self.i1==2:\n save()\n pygame.quit()\n sys.exit()\n pygame.mixer.music.stop()\n pygame.mixer.music.set_volume(1)\n pygame.mixer.music.load('sounds/pazizik.wav')\n pygame.mixer.music.play(-1)\ndef main():\n global btnimg,mouse\n for x in range(9): \n print(\"Loaded:\",stagePercent[x])\n Terminate=False\n Titleimg=pygame.image.load('images/Title.png').convert_alpha()\n btnimg=[]\n 
ddddfont=pygame.font.Font('./NanumGothic.ttf',25)\n moneyimg=pygame.image.load('images/money.png').convert_alpha()\n \n for x in range(4):\n btnimg.append(pygame.image.load('images/btn_'+str(x+1)+'.png').convert_alpha())\n btn1=Button(0,1,420,350)\n btn2=Button(2,3,420,450)\n \n # background music\n pygame.mixer.music.set_volume(1)\n pygame.mixer.music.load('sounds/pazizik.wav')\n pygame.mixer.music.play(-1)\n while not Terminate:\n for event in pygame.event.get():\n if event.type==pygame.QUIT:\n Terminate=True\n if event.type==pygame.MOUSEBUTTONDOWN and event.button==1:\n btn1.click(pygame.mouse.get_pos())\n btn2.click(pygame.mouse.get_pos())\n btn1.update(pygame.mouse.get_pos())\n btn2.update(pygame.mouse.get_pos())\n screen.fill(GREEN)\n moneytext=ddddfont.render(str(money),True,(0,0,0))\n screen.blit(Titleimg,(230,60))\n screen.blit(moneyimg,(10,10))\n screen.blit(moneytext,(50,10))\n \n btn1.draw()\n btn2.draw()\n pygame.display.flip()\nload()\nmain()\nsave()\npygame.quit()\nsys.exit()\n"
},
{
"alpha_fraction": 0.5530410408973694,
"alphanum_fraction": 0.5869872570037842,
"avg_line_length": 21.09375,
"blob_id": "45777dbdb96238d1ae682f96377f554617e74cce",
"content_id": "ff85d7bed14062aabfdaf74a05f0eff43f51260a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 743,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 32,
"path": "/test.py",
"repo_name": "Powering111/Aroid",
"src_encoding": "UTF-8",
"text": "import pygame\nfrom pygame.sprite import Sprite\nfrom pygame.surface import Surface\nfrom pygame.color import Color\n\n\ndef run():\n pygame.init()\n size = (400, 300)\n screen = pygame.display.set_mode(size) \n pygame.display.set_caption(\"Simple Test\")\n \n run = True\n clock = pygame.time.Clock()\n img=pygame.image.load('images/arrow_1.png').convert_alpha()\n # 게임 루프\n while run:\n # 사용자 입력 처리\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n run = False\n\n\n # 게임 상태 그리기\n screen.fill((100,255,255))\n screen.blit(img,(100,100))\n pygame.display.flip()\n \n clock.tick(60)\n \n pygame.quit()\nrun()\n"
},
{
"alpha_fraction": 0.5015118718147278,
"alphanum_fraction": 0.5542116761207581,
"avg_line_length": 34.89147186279297,
"blob_id": "b5591bb77893e8ffaa2380b260dca56b06e70035",
"content_id": "3771e82cd6d34be259b2b046b83f6fb3c4837d9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4666,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 129,
"path": "/stageselect.py",
"repo_name": "Powering111/Aroid",
"src_encoding": "UTF-8",
"text": "import pygame\nfrom pygame.sprite import Sprite\nimport game\nWHITE=(255,255,255)\nBLACK=(0,0,0)\nRED=(255,0,0)\nGREEN=(0,255,0)\nBLUE=(0,0,255)\npygame.font.init()\nstage_name=[]\ninfofile=open('./levels/info','r')\npacks=infofile.readline().strip().split()\ngiveMoney=list(map(int,infofile.readline().strip().split()))\nstages=list(map(int,infofile.readline().strip().split()))\nfor x in range(len(packs)):\n stage_name.append(infofile.readline().strip().split())\nstage_name\ndef stage(a):\n global screen\n Terminate=False\n font=pygame.font.Font('./NanumGothic.ttf',40)\n txt=font.render(packs[a],True,(0,0,0))\n font2=pygame.font.Font('./NanumGothic.ttf',20)\n txt2=font2.render('ESC를 눌러 돌아가기',True,(0,0,0))\n while not Terminate:\n for event in pygame.event.get():\n if event.type==pygame.QUIT:\n return\n if event.type==pygame.MOUSEBUTTONDOWN and event.button==1:\n kk=True\n for i in range(stages[a]):\n if kk==True:\n if pygame.Rect(10,100*i+100,1000,80).collidepoint(pygame.mouse.get_pos()):\n pygame.mixer.music.stop()\n return game.run(stage_name[a][i],screen),i\n if event.type==pygame.KEYDOWN and event.key==pygame.K_ESCAPE:\n return -1,-1\n screen.fill(GREEN)\n screen.blit(txt,(10,20))\n for i in range(stages[a]):\n pygame.draw.rect(screen,WHITE,[10,100*i+100,1000,80])\n pygame.draw.rect(screen,(255,255,0),[10,100*i+100,10*stagePercent[a][i],80])\n text=font.render(stage_name[a][i],True,(0,0,0))\n text2=None\n if stagePercent[a][i]==100:\n text2=font.render(str(stagePercent[a][i])+\"%\",True,GREEN)\n else: \n text2=font.render(str(stagePercent[a][i])+\"%\",True,(0,0,0))\n screen.blit(text,(20,100*i+115))\n screen.blit(text2,(900,100*i+115))\n screen.blit(txt2,(830,680))\n pygame.display.flip()\nclass Btn(Sprite):# Pack select button\n def __init__(self,val):\n self.val=val\n self.image=btnimg[self.val]\n self.rect=self.image.get_rect()\n def click(self,mouse): # on click of pack\n if self.rect.collidepoint(mouse): # clicked\n s,t=stage(self.val)\n if 
t==-1:\n return -1,-1\n else:\n return s,t\n else:# not clicked\n return -2,-2\n def draw(self,screen):\n screen.blit(self.image,(self.rect.x,self.rect.y))\ndef setpos(a,x,y):\n a.rect.x=x\n a.rect.y=y\ndef run(scr,m,sp):\n global btnimg,screen,money,stagePercent\n # background music\n pygame.mixer.music.stop()\n pygame.mixer.music.set_volume(0.6)\n pygame.mixer.music.load('sounds/simple.wav')\n pygame.mixer.music.play(-1)\n money=m\n stagePercent=sp\n for x in range(9):\n for y in range(stages[x]):\n print(\"x\"+str(x)+\" y:\"+str(y)+\" h \"+str(stagePercent[x][y]))\n screen=scr\n print('hi')\n btnimg=[]\n btn=[] # index is 0~8\n for x in range(9):\n btnimg.append(pygame.image.load('images/pack_'+str(x+1)+'.png').convert_alpha())\n btn.append(Btn(x))\n oa=200\n ob=80\n setpos(btn[0],10+oa,10+ob)\n setpos(btn[1],220+oa,10+ob)\n setpos(btn[2],430+oa,10+ob)\n setpos(btn[3],10+oa,220+ob)\n setpos(btn[4],220+oa,220+ob)\n setpos(btn[5],430+oa,220+ob)\n setpos(btn[6],10+oa,430+ob)\n setpos(btn[7],220+oa,430+ob)\n setpos(btn[8],430+oa,430+ob)\n screen.fill(WHITE)\n font1=pygame.font.Font('./NanumGothic.ttf',40)\n font2=pygame.font.Font('./NanumGothic.ttf',20)\n txt1=font1.render('맵 팩 선택',True,(0,0,0))\n txt2=font2.render('Esc를 눌러 돌아가기',True,(0,0,0))\n Terminate=False\n while not Terminate:\n for event in pygame.event.get():\n if event.type==pygame.QUIT:\n Terminate=True\n if event.type==pygame.MOUSEBUTTONDOWN and event.button==1:\n kk=True\n for x in range(9):\n if kk==True:\n a,nn=btn[x].click(pygame.mouse.get_pos())\n if a==-1:\n kk=False\n elif a>=0:\n return a,giveMoney[x],x,nn\n if event.type==pygame.KEYDOWN and event.key==pygame.K_ESCAPE:\n Terminate=True\n screen.fill(BLUE)\n for x in range(9):\n btn[x].draw(screen)\n screen.blit(txt1,(100,10))\n screen.blit(txt2,(830,680))\n pygame.display.flip()\n return -1,-1,-1,-1\n"
},
{
"alpha_fraction": 0.7272727489471436,
"alphanum_fraction": 0.7272727489471436,
"avg_line_length": 10.333333015441895,
"blob_id": "a69e94411c72653f5eb559f281a8e5813e0c96a2",
"content_id": "a0e54771689c5a2c806563055720fcb065e0e3da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 33,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 3,
"path": "/README.md",
"repo_name": "Powering111/Aroid",
"src_encoding": "UTF-8",
"text": "# Aroid\n \nAvoid OR Defend Arrows."
},
{
"alpha_fraction": 0.5740072131156921,
"alphanum_fraction": 0.6823104619979858,
"avg_line_length": 7.967741966247559,
"blob_id": "b67413c54756617ac9c3ea94d455e6497e6f9fe9",
"content_id": "0307ae4f1e300fe1fd4bf337be843a862f9f45f9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 277,
"license_type": "no_license",
"max_line_length": 16,
"num_lines": 31,
"path": "/Indexes.md",
"repo_name": "Powering111/Aroid",
"src_encoding": "UTF-8",
"text": "#Arrows\n0 white\n1 red\n2 green\n3 purple\n4 yellow\n5 blue\n6 orange\n7 pink\n8 black\n9 gray\n\n#Level\n1 : RGB\n2 : Arrow\n3 : Event\n\n#Events\n1 player mode\n2 instant damage\n3 shield set\n4 def +-\n5 def set\n6 idef +-\n7 idef set\n8 HP +-\n9 HP set\n10 SPEED +-\n11 SPEED set\n12 mhp +-\n13 mhp set"
}
] | 6 |
ezorfa/Programming-Concepts | https://github.com/ezorfa/Programming-Concepts | b73962e6f7adc4d8591231699d2931326770a605 | b1092c37e0411917e02aa8b8eb1e30d34451e5f2 | 82330196bdf940ab69234d0bd58b5fef6e3570a2 | refs/heads/master | 2022-11-08T03:32:00.448615 | 2020-06-19T19:52:47 | 2020-06-19T19:52:47 | 273,571,230 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.8108108043670654,
"alphanum_fraction": 0.8108108043670654,
"avg_line_length": 36,
"blob_id": "bf99da080be07f97119653b98daa40a14419b1be",
"content_id": "cccc460b434647993ff0a55ac45fb99108894075",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 74,
"license_type": "permissive",
"max_line_length": 50,
"num_lines": 2,
"path": "/README.md",
"repo_name": "ezorfa/Programming-Concepts",
"src_encoding": "UTF-8",
"text": "# Programming-Concepts\nProgramming philosophy, concepts and paradigms ...\n"
},
{
"alpha_fraction": 0.6968325972557068,
"alphanum_fraction": 0.6968325972557068,
"avg_line_length": 30.571428298950195,
"blob_id": "f1d299b3942e4b0cc02425381d105ee6e41d492f",
"content_id": "9726760cf143f7f52c8846a40ab6716b94c56802",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 884,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 28,
"path": "/Event Driven Programming.py",
"repo_name": "ezorfa/Programming-Concepts",
"src_encoding": "UTF-8",
"text": "# Example of a event driven programming using a small applicaiton: Chat Bot!\n# Author: Mohammed Afroze\n\nclass MessageBot:\n def __init__(self):\n self.callbacksDict = {}\n self.registerCallback(\"Hi\", self.respond_to_hi) # Define the function\n self.registerCallback(\"Hello\", self.respond_to_hello) # Define the function\n self.registerCallback(\"How are you\", self.respond_to_HowYou) # Define the function\n \n def registerCallback(self, msg, fn):\n if msg not in callbacksDict:\n callbacksDict[msg] = fn\n \n def reply_to_message(self,msg):\n if msg not in self.callbacksDict:\n return \"I dont understand your message\"\n return self.chooseCallback(msg)() \n\n def chooseCallback(self,msg):\n return callbacksDict[msg]\n\n\ndef main():\n bot = MessageBot()\n bot.reply_to_message(\"Hi!\")\n bot.reply_to_message(\"Good Morning\")\n bot.reply_to_message(\"How are you doing?\")\n"
}
] | 2 |
xinyi123104/42_01 | https://github.com/xinyi123104/42_01 | ea856469c2a8a000ad1ce538d95f55a7ad1d05ae | 80fe92d80617881075d33689a554e909f56ac464 | d12e0b93a2cb528d6b48ff761a3d0d5a9f455512 | refs/heads/main | 2023-01-08T03:21:04.936117 | 2020-11-07T05:49:26 | 2020-11-07T05:49:26 | 310,643,161 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.2857142984867096,
"alphanum_fraction": 0.5974025726318359,
"avg_line_length": 8.5,
"blob_id": "c45d1036e9dd0470940dfa31aa4a6caf795c5426",
"content_id": "0d562b1d907f1599bfd0e4f3aceb52ebc2e9e270",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 85,
"license_type": "no_license",
"max_line_length": 14,
"num_lines": 8,
"path": "/login.py",
"repo_name": "xinyi123104/42_01",
"src_encoding": "UTF-8",
"text": "num1 = 100 #张三\nnum2 = 200 #经理\nnum3 = 300\n\nnum4 = 400\nnum5 = 500\n\nnum6 = 600\n\n"
},
{
"alpha_fraction": 0.48148149251937866,
"alphanum_fraction": 0.5925925970077515,
"avg_line_length": 5.5,
"blob_id": "b9243cbf9b461087a52c6e227227d5feb2344329",
"content_id": "1494100cf0705bf17a65aff3516ecc409088ae0e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 27,
"license_type": "no_license",
"max_line_length": 6,
"num_lines": 4,
"path": "/pay.py",
"repo_name": "xinyi123104/42_01",
"src_encoding": "UTF-8",
"text": "pay =1\npay =2\npay =3\nover \n"
}
] | 2 |
afeldman/kinect2-wrapper | https://github.com/afeldman/kinect2-wrapper | 697405c73f721921de1d4655c368ed8c35530701 | ea81e3f8abd3997e594655626bf386104ae9d01d | c15ed23feb09f477b10cbacfcdd5bab5234079ac | refs/heads/master | 2017-05-29T23:38:48.288762 | 2014-08-21T21:48:32 | 2014-08-21T21:48:32 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5833333134651184,
"alphanum_fraction": 0.6166666746139526,
"avg_line_length": 14,
"blob_id": "9087572bc1055da5ab196b72fa9b8d622d501ab0",
"content_id": "96ed3db72e48ede50360f2842b70143d6e98aaff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 60,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 4,
"path": "/README.md",
"repo_name": "afeldman/kinect2-wrapper",
"src_encoding": "UTF-8",
"text": "kinect2-wrapper\n===============\n\npython wrapper for kinect2\n"
},
{
"alpha_fraction": 0.678131639957428,
"alphanum_fraction": 0.6794055104255676,
"avg_line_length": 25.285715103149414,
"blob_id": "d056db24fbe54e899d259ce15ecba320b7096ed2",
"content_id": "c91f71aa77d28955bbb269367541a96685ba5bc2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 7065,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 259,
"path": "/Kinect_Ext.cpp",
"repo_name": "afeldman/kinect2-wrapper",
"src_encoding": "UTF-8",
"text": "// Kinect.cpp : Defines the entry point for the console application.\r\n//\r\n\r\n#include \"stdafx.h\"\r\n#include \"Kinect_Ext.h\"\r\n\r\n\r\nBOOST_PYTHON_MODULE(kinect_ext)\r\n{\r\n\t// KINECT ENUMS\r\n\tenum_<TrackingConfidence>(\"TrackingConfidence\")\r\n\t\t.value(\"TrackingConfidence_Low\", TrackingConfidence_Low)\r\n\t\t.value(\"TrackingConfidence_High\", TrackingConfidence_High)\r\n\t\t;\r\n\r\n\tenum_<TrackingState>(\"TrackingState\")\r\n\t\t.value(\"TrackingState_NotTracked\", TrackingState_NotTracked)\r\n\t\t.value(\"TrackingState_Inferred\", TrackingState_Inferred)\r\n\t\t.value(\"TrackingState_Tracked\", TrackingState_Tracked)\r\n\t\t;\r\n\r\n\tenum_<HandState>(\"HandState\")\r\n\t\t.value(\"HandState_Unknown\", HandState_Unknown)\r\n\t\t.value(\"HandState_NotTracked\", HandState_NotTracked)\r\n\t\t.value(\"HandState_Open\", HandState_Open)\r\n\t\t.value(\"HandState_Closed\", HandState_Closed)\r\n\t\t.value(\"HandState_Lasso\", HandState_Lasso)\r\n\t\t;\r\n\r\n\tenum_<JointType>(\"JointType\")\r\n\t\t.value(\"JointType_SpineBase\", JointType_SpineBase)\r\n\t\t.value(\"JointType_SpineMid\", JointType_SpineMid)\r\n\t\t.value(\"JointType_Neck\", JointType_Neck)\r\n\t\t.value(\"JointType_Head\", JointType_Head)\r\n\t\t.value(\"JointType_ShoulderLeft\", JointType_ShoulderLeft)\r\n\t\t.value(\"JointType_ElbowLeft\", JointType_ElbowLeft)\r\n\t\t.value(\"JointType_WristLeft\", JointType_WristLeft)\r\n\t\t.value(\"JointType_HandLeft\", JointType_HandLeft)\r\n\t\t.value(\"JointType_ShoulderRight\", JointType_ShoulderRight)\r\n\t\t.value(\"JointType_ElbowRight\", JointType_ElbowRight)\r\n\t\t.value(\"JointType_WristRight\", JointType_WristRight)\r\n\t\t.value(\"JointType_HandRight\", JointType_HandRight)\r\n\t\t.value(\"JointType_HipLeft\", JointType_HipLeft)\r\n\t\t.value(\"JointType_KneeLeft\", JointType_KneeLeft)\r\n\t\t.value(\"JointType_AnkleLeft\", JointType_AnkleLeft)\r\n\t\t.value(\"JointType_FootLeft\", 
JointType_FootLeft)\r\n\t\t.value(\"JointType_HipRight\", JointType_HipRight)\r\n\t\t.value(\"JointType_KneeRight\", JointType_KneeRight)\r\n\t\t.value(\"JointType_AnkleRight\", JointType_AnkleRight)\r\n\t\t.value(\"JointType_FootRight\", JointType_FootRight)\r\n\t\t.value(\"JointType_SpineShoulder\", JointType_SpineShoulder)\r\n\t\t.value(\"JointType_HandTipLeft\", JointType_HandTipLeft)\r\n\t\t.value(\"JointType_ThumbLeft\", JointType_ThumbLeft)\r\n\t\t.value(\"JointType_HandTipRight\", JointType_HandTipRight)\r\n\t\t.value(\"JointType_ThumbRight\", JointType_ThumbRight)\r\n\t\t.value(\"JointType_Count\", JointType_Count)\r\n\t\t;\r\n\r\n\t// KINECT STRUCTS\r\n\tclass_<CameraSpacePoint>(\"CameraSpacePoint\")\r\n\t\t.def_readwrite(\"X\", &CameraSpacePoint::X)\r\n\t\t.def_readwrite(\"Y\", &CameraSpacePoint::Y)\r\n\t\t.def_readwrite(\"Z\", &CameraSpacePoint::Z)\r\n\t\t;\r\n\r\n\tclass_<ColorSpacePoint>(\"ColorSpacePoint\")\r\n\t\t.def_readwrite(\"X\", &ColorSpacePoint::X)\r\n\t\t.def_readwrite(\"Y\", &ColorSpacePoint::Y)\r\n\t\t;\r\n\r\n\tclass_<DepthSpacePoint>(\"DepthSpacePoint\")\r\n\t\t.def_readwrite(\"X\", &DepthSpacePoint::X)\r\n\t\t.def_readwrite(\"Y\", &DepthSpacePoint::Y)\r\n\t\t;\r\n\r\n\tclass_<Joint>(\"Joint\")\r\n\t\t.def_readwrite(\"JointType\", &Joint::JointType)\r\n\t\t.def_readwrite(\"Position\", &Joint::Position)\r\n\t\t.def_readwrite(\"TrackingState\", &Joint::TrackingState)\r\n\t\t;\r\n\r\n\tclass_<JointOrientation>(\"JointOrientation\")\r\n\t\t.def_readwrite(\"JointType\", &JointOrientation::JointType)\r\n\t\t.def_readwrite(\"Orientation\", &JointOrientation::Orientation)\r\n\t\t;\r\n\r\n\tclass_<PointF>(\"PointF\")\r\n\t\t.def_readwrite(\"X\", &PointF::X)\r\n\t\t.def_readwrite(\"Y\", &PointF::Y)\r\n\t\t;\r\n\r\n\tclass_<RectF>(\"RectF\")\r\n\t\t.def_readwrite(\"X\", &RectF::X)\r\n\t\t.def_readwrite(\"Y\", &RectF::Y)\r\n\t\t.def_readwrite(\"Width\", &RectF::Width)\r\n\t\t.def_readwrite(\"Height\", 
&RectF::Height)\r\n\t\t;\r\n\r\n\tclass_<Vector4>(\"Vector4\")\r\n\t\t.def_readwrite(\"x\", &Vector4::x)\r\n\t\t.def_readwrite(\"y\", &Vector4::y)\r\n\t\t.def_readwrite(\"z\", &Vector4::z)\r\n\t\t.def_readwrite(\"w\", &Vector4::w)\r\n\t\t;\r\n\t\r\n\t// WRAPPER\r\n\tclass_<Kinect_Ext>(\"Kinect\")\r\n\t\t.def(\"Init\", &Kinect_Ext::Init)\r\n\t\t.def(\"Destroy\", &Kinect_Ext::Destroy)\r\n\t\t.def(\"Update\", &Kinect_Ext::Update)\r\n\t\t.add_property(\"Bodies\", &Kinect_Ext::get_Bodies)\r\n\t\t;\r\n\r\n\tclass_<Body>(\"Body\")\r\n\t\t.def_readwrite(\"ClippedEdges\", &Body::_get_ClippedEdges)\r\n\t\t.def_readwrite(\"HandLeftConfidence\", &Body::_get_HandLeftConfidence)\r\n\t\t.def_readwrite(\"HandLeftState\", &Body::_get_HandLeftState)\r\n\t\t.def_readwrite(\"HandRightConfidence\", &Body::_get_HandRightConfidence)\r\n\t\t.def_readwrite(\"HandRightState\", &Body::_get_HandRightState)\r\n\t\t.def_readwrite(\"IsRestricted\", &Body::_get_IsRestricted)\r\n\t\t.def_readwrite(\"IsTracked\", &Body::_get_IsTracked)\r\n\t\t.def_readwrite(\"Lean\", &Body::_get_Lean)\r\n\t\t.def_readwrite(\"LeanTrackingState\", &Body::_get_LeanTrackingState)\r\n\t\t.def_readwrite(\"TrackingId\", &Body::_get_TrackingId)\r\n\t\t.add_property(\"Joints\", &Body::get_Joints)\r\n\t\t.add_property(\"JointOrientations\", &Body::get_JointOrientations)\r\n\t\t;\r\n}\r\n\r\n\r\nKinect_Ext::Kinect_Ext() :\r\n\tm_pKinectSensor(NULL),\r\n\tm_pCoordinateMapper(NULL),\r\n\tm_pBodyFrameReader(NULL)\r\n{\r\n}\r\n\r\n\r\nKinect_Ext::~Kinect_Ext()\r\n{\r\n\t// done with body frame reader\r\n\tSafeRelease(m_pBodyFrameReader);\r\n\r\n\t// done with coordinate mapper\r\n\tSafeRelease(m_pCoordinateMapper);\r\n\r\n\t// close the Kinect Sensor\r\n\tif (m_pKinectSensor) {\r\n\t\tm_pKinectSensor->Close();\r\n\t}\r\n\r\n\tSafeRelease(m_pKinectSensor);\r\n}\r\n\r\n\r\nHRESULT Kinect_Ext::InitializeDefaultSensor()\r\n{\r\n\tHRESULT hr;\r\n\r\n\thr = GetDefaultKinectSensor(&m_pKinectSensor);\r\n\tif (FAILED(hr)) {\r\n\t\treturn 
hr;\r\n\t}\r\n\r\n\tif (m_pKinectSensor) {\r\n\t\t// Initialize the Kinect and get coordinate mapper and the body reader and the face reader\r\n\t\tIBodyFrameSource* pBodyFrameSource = NULL;\r\n\r\n\t\thr = m_pKinectSensor->Open();\r\n\r\n\t\tif (SUCCEEDED(hr)) {\r\n\t\t\thr = m_pKinectSensor->get_CoordinateMapper(&m_pCoordinateMapper);\r\n\t\t}\r\n\r\n\t\tif (SUCCEEDED(hr)) {\r\n\t\t\thr = m_pKinectSensor->get_BodyFrameSource(&pBodyFrameSource);\r\n\t\t}\r\n\r\n\t\tif (SUCCEEDED(hr)) {\r\n\t\t\thr = pBodyFrameSource->OpenReader(&m_pBodyFrameReader);\r\n\t\t}\r\n\r\n\t\tSafeRelease(pBodyFrameSource);\r\n\r\n\t}\r\n\r\n\tif (!m_pKinectSensor || FAILED(hr)) {\r\n\t\tstd::wcerr << L\"No ready Kinect found!\" << std::endl;\r\n\t\treturn E_FAIL;\r\n\t}\r\n\r\n\treturn hr;\r\n}\r\n\r\n\r\nvoid Kinect_Ext::Init()\r\n{\r\n\tInitializeDefaultSensor();\r\n}\r\n\r\n\r\nvoid Kinect_Ext::Destroy()\r\n{\r\n\t// done with body frame reader\r\n\tSafeRelease(m_pBodyFrameReader);\r\n\r\n\t// done with coordinate mapper\r\n\tSafeRelease(m_pCoordinateMapper);\r\n\r\n\t// close the Kinect Sensor\r\n\tif (m_pKinectSensor) {\r\n\t\tm_pKinectSensor->Close();\r\n\t}\r\n\r\n\tSafeRelease(m_pKinectSensor);\r\n}\r\n\r\n\r\nvoid Kinect_Ext::Update()\r\n{\r\n\tHRESULT hr;\r\n\tif (!m_pBodyFrameReader) {\r\n\t\treturn;\r\n\t}\r\n\t\r\n\tIBodyFrame* pBodyFrame = NULL;\r\n\r\n\tIBody* ppBodies[BODY_COUNT] = { 0 };\r\n\thr = m_pBodyFrameReader->AcquireLatestFrame(&pBodyFrame);\r\n\tif (SUCCEEDED(hr)) {\r\n\t\thr = pBodyFrame->GetAndRefreshBodyData(_countof(ppBodies), ppBodies);\r\n\t}\r\n\r\n\tif (SUCCEEDED(hr)) {\r\n\t\tm_pBodies = list();\r\n\t\tfor (int i = 0; i < BODY_COUNT; ++i) {\r\n\t\t\tIBody* pBody = ppBodies[i];\r\n\t\t\tif (pBody) {\r\n\t\t\t\tBOOLEAN _get_IsTracked = false;\r\n\t\t\t\thr = pBody->get_IsTracked(&_get_IsTracked);\r\n\t\t\t\tif (SUCCEEDED(hr) && _get_IsTracked) {\r\n\t\t\t\t\tm_pBodies.append(Body(pBody));\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\tfor (int i = 0; i < 
_countof(ppBodies); ++i) {\r\n\t\t\tSafeRelease(ppBodies[i]);\r\n\t\t}\r\n\t}\r\n\r\n\tSafeRelease(pBodyFrame);\r\n}\r\n\r\n\r\nlist Kinect_Ext::get_Bodies()\r\n{\r\n\treturn m_pBodies;\r\n}"
},
{
"alpha_fraction": 0.6994219422340393,
"alphanum_fraction": 0.7052023410797119,
"avg_line_length": 13.818181991577148,
"blob_id": "5019e24825950686815752f243a36c5b18cc89fc",
"content_id": "ce5077fc2702fa336207ee67891463d38fd8bfc9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 346,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 22,
"path": "/test.py",
"repo_name": "afeldman/kinect2-wrapper",
"src_encoding": "UTF-8",
"text": "import kinect_ext\r\n\r\nk = kinect_ext.Kinect()\r\nk.Init()\r\n\r\nwhile 1:\r\n\tk.Update()\r\n\r\n\tif k.Bodies:\r\n\t\tprint k.Bodies[0].HandRightState\r\n\r\n\r\n\r\n# data viz dedicated to demystifying public local and natl gov data sets\r\n# way to ma relationships, graph campaign contributions\r\n\r\n#tools\r\n#coveritlive\r\n#buffer \r\n#google fusion tables\r\n#datawrapper\r\n#irc"
}
] | 3 |
SKullDugDev/Roland | https://github.com/SKullDugDev/Roland | 2bb5b53de9db8995fbd29a1cf4d7702081484d00 | 75d426e2d6e44e81e950465f6e3b0095c30da839 | 33a1bab75a604cdc5861cdca4a3d9d8274982396 | refs/heads/master | 2022-11-30T02:57:19.949681 | 2020-08-12T13:27:25 | 2020-08-12T13:27:25 | 285,061,834 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6903690099716187,
"alphanum_fraction": 0.6903690099716187,
"avg_line_length": 33.45744705200195,
"blob_id": "37e67d46b851e5234a39e3de14e9299ce1a7cf96",
"content_id": "1cb00b0460a24d12e55a8f15a211994299061d00",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3333,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 94,
"path": "/Commander/RolandCampaignCommands.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# for paths\r\nimport pathlib\r\n\r\n# for configuration file\r\nimport toml\r\n\r\n# main module & ext\r\nfrom discord.ext import commands\r\n\r\n# connect to backend\r\nimport RolandSQL\r\n\r\nback_end = RolandSQL.CampaignInfo()\r\n\r\n# typing hint\r\nfrom typing import NewType\r\n\r\nmessage = NewType('message', str)\r\n\r\n# config info\r\n\r\nCONFIGURATION_FILE = pathlib.Path.cwd() / \"data\" / \"configuration\" / \"token.toml\"\r\nTOKEN = toml.loads(CONFIGURATION_FILE.read_text())\r\nTOKEN = TOKEN['TOKEN']\r\n\r\n# set prefix\r\nbot_prefix = \"!\"\r\n\r\n# create Client connection\r\nclient = commands.Bot(command_prefix=bot_prefix)\r\n\r\n\r\ndef campaignadd(ctx, *new_campaigns_from_user):\r\n tricky_empty_string = \" \"\r\n tricky_empty_set = set(tricky_empty_string)\r\n campaign_added_message = campaign_add_initial_check(new_campaigns_from_user, tricky_empty_set, ctx)\r\n return campaign_added_message\r\n\r\n\r\ndef campaign_add_initial_check(new_campaigns_from_user, tricky_empty_set, ctx):\r\n if new_campaigns_from_user and set(new_campaigns_from_user) != tricky_empty_set:\r\n # if the set/tuple of names provided is not empty, run new campaign adding process and store in check_failed\r\n check_failed = back_end.process_to_add_new_campaigns(new_campaigns_from_user, ctx.guild.name)\r\n campaign_added_message = campaign_add_second_check(check_failed, ctx)\r\n else:\r\n campaign_added_message = 'You did not enter any campaign names'\r\n return campaign_added_message\r\n\r\n\r\ndef campaign_add_second_check(check_failed, ctx):\r\n if check_failed:\r\n campaign_added_message = f\"Already exists on {ctx.guild.name}!\"\r\n elif back_end.invalid_campaigns_not_to_add:\r\n campaign_added_string = ', '.join((str(s) for s in back_end.valid_campaigns_to_add))\r\n campaign_invalid_string = ', \"'.join((str(s) for s in back_end.invalid_campaigns_not_to_add))\r\n campaign_added_message = f\"{campaign_added_string} added; {campaign_invalid_string} exist(s).\"\r\n 
else:\r\n campaign_added_string = ', '.join((str(s) for s in back_end.valid_campaigns_to_add))\r\n campaign_added_message = f\"{campaign_added_string} added.\"\r\n return campaign_added_message\r\n\r\n\r\ndef campaignlist(ctx) -> message:\r\n # get campaign list\r\n campaign_list = back_end.get_campaigns(ctx.guild.name)\r\n\r\n # get campaign list string\r\n campaign_list_string = make_campaign_list_string(campaign_list)\r\n # send message\r\n return campaign_list_string\r\n\r\n\r\ndef make_campaign_list_string(campaign_list):\r\n if campaign_list:\r\n campaign_list_string = ', '.join(campaign_list)\r\n else:\r\n campaign_list_string = 'There are no campaigns yet! Add one!'\r\n return campaign_list_string\r\n\r\n\r\ndef campaignremove(ctx, *campaigns_set_for_removal) -> message:\r\n tricky_empty_string = \" \"\r\n tricky_empty_set = set(tricky_empty_string)\r\n if campaigns_set_for_removal and set(campaigns_set_for_removal) != tricky_empty_set:\r\n campaign_deletion_message = back_end.delete_campaign_by_name(campaigns_set_for_removal, ctx.guild.name)\r\n else:\r\n campaign_deletion_message = 'You did not enter a campaign name'\r\n return campaign_deletion_message\r\n\r\n\r\ndef campaignclear(ctx):\r\n back_end.delete_all_campaigns_in_server(ctx.guild.name)\r\n campaign_clear_message = 'Campaigns cleared out.'\r\n return campaign_clear_message\r\n"
},
{
"alpha_fraction": 0.6752577424049377,
"alphanum_fraction": 0.6758304834365845,
"avg_line_length": 29.745454788208008,
"blob_id": "cd25f592152abd231df8dd04dc1dc8d6a7031084",
"content_id": "de793afb76e5f2c2fe8e0d417bfe6e53f115f53f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1746,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 55,
"path": "/sql/SQLRunner.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# for paths\r\nimport pathlib\r\n\r\n# for configuration file\r\nimport toml\r\n\r\n# for class variable type hints\r\nfrom typing import ClassVar, TypedDict\r\n\r\n# for connection\r\nimport pyodbc\r\n\r\n# for logging\r\nimport sqlog\r\n\r\n# get configuration file\r\n\r\nROLAND_ACADEMY_FILE: pathlib.Path = pathlib.Path.cwd() / \"data\" / \"configuration\" / \"roland academy.toml\"\r\nROLAND_ACADEMY: TypedDict = toml.loads(ROLAND_ACADEMY_FILE.read_text(encoding=\"utf-8\"))\r\n\r\n# establish dictionaries\r\n\r\nconnection_settings: TypedDict = ROLAND_ACADEMY['ConnectionSettings']\r\n\r\n\r\nclass SQLRunner:\r\n # get connection string and settings\r\n\r\n DRIVER: ClassVar[str] = connection_settings['DRIVER']\r\n SERVER: ClassVar[str] = connection_settings['SERVER']\r\n DATABASE: ClassVar[str] = connection_settings['DATABASE']\r\n TRUSTED_CONNECTION: ClassVar[str] = connection_settings['TRUSTED_CONNECTION']\r\n CONNECTION_STRING: ClassVar[str] = f\"DRIVER={DRIVER}; SERVER={SERVER}; DATABASE={DATABASE}; \\\r\n Trusted_Connection={TRUSTED_CONNECTION}\"\r\n\r\n def __init__(self):\r\n # start connection\r\n\r\n self.connection = pyodbc.connect(self.CONNECTION_STRING)\r\n self.cursor = self.connection.cursor()\r\n\r\n def close(self):\r\n # close other cursors and then close the connection\r\n\r\n cursor = self.connection.cursor()\r\n cursor.close()\r\n self.connection.close()\r\n\r\n def finalize(self, commit_logger_message=\"...Committed...\", commit_exception_message=\"Commit Error Occurred...\"):\r\n try:\r\n sqlog.logger.info('Committing...')\r\n self.connection.commit()\r\n sqlog.logger.info(commit_logger_message)\r\n except pyodbc.DatabaseError:\r\n sqlog.logger.exception(commit_exception_message)\r\n"
},
{
"alpha_fraction": 0.5863309502601624,
"alphanum_fraction": 0.5899280309677124,
"avg_line_length": 20.15999984741211,
"blob_id": "85f38e958b906bb2d5af61946b94aca0702d1f0c",
"content_id": "2d6befb7877a2532d48fb5054c57a668fd19ba18",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TOML",
"length_bytes": 557,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 25,
"path": "/data/configuration/roland academy.toml",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# instructions for Roland\r\n\r\nTitle = \"Roland Academy\"\r\n\r\n[ConnectionSettings]\r\nDRIVER ='{ODBC Driver 17 for SQL Server}'\r\nSERVER = \"MJÖLNIR\"\r\nDATABASE = 'Roland'\r\nTRUSTED_CONNECTION = 'yes'\r\n\r\n\r\n[SQLVariables]\r\n CHECK_IF_CAMPAIGN_NAME_IN_DATABASE = '''\r\n SELECT\r\n CampaignName\r\n FROM\r\n Campaign\r\n WHERE\r\n CampaignName Like ?'''\r\n\r\nINSERT_NEW_CAMPAIGN_NAME = '''\r\n INSERT INTO\r\n Campaign (CampaignName, DMName, PlayerOneName)\r\n VALUES\r\n (?, 'unknown_player','unknown_player') '''\r\n\r\n"
},
{
"alpha_fraction": 0.7103308439254761,
"alphanum_fraction": 0.7103308439254761,
"avg_line_length": 23.98245620727539,
"blob_id": "54a25fea4542e95e1a4619c86bfd4b0fdecbf406",
"content_id": "123241d1b721731b2d98df8d6a39e1b59131d1f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1481,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 57,
"path": "/Commands/StorytellerGuide.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# main module & ext\r\nfrom discord.ext import commands\r\n\r\n# connect to backend\r\nimport RolandSQL\r\n\r\n# connect to Commander\r\nimport RolandCommander\r\n\r\n# typing hint\r\nfrom typing import NewType\r\n\r\nmessage = NewType('message', str)\r\n\r\n# set prefix\r\nbot_prefix = \"!\"\r\n\r\n# create Client connection\r\nclient = commands.Bot(command_prefix=bot_prefix)\r\n\r\n# back end connection\r\nback_end = RolandSQL.CampaignInfo()\r\n\r\n# rallying commander\r\nRC = RolandCommander\r\n\r\n\r\ndef campaignadd(ctx, campaigns_adding):\r\n # if it passes the command check, run the function; else return command failure\r\n if RC.check_command(campaigns_adding):\r\n return start_campaign_add(campaigns_adding, ctx)\r\n return 'You did not enter a campaign'\r\n\r\n\r\ndef start_campaign_add(campaigns_adding, ctx):\r\n return back_end.add_campaign_validation(campaigns_adding, ctx.guild.name)\r\n\r\n\r\ndef campaignlist(ctx) -> str:\r\n return campainlist_message(back_end.get_campaigns(ctx.guild.name))\r\n\r\n\r\ndef campainlist_message(campaign_list):\r\n if campaign_list:\r\n return ', '.join(campaign_list)\r\n return 'There are no campaigns yet! Add one!'\r\n\r\n\r\ndef campaignremove(ctx, campaigns_removing) -> str:\r\n if RC.check_command(campaigns_removing):\r\n return back_end.remove_campaigns_process(campaigns_removing, ctx.guild.name)\r\n return 'You did not enter a campaign name'\r\n\r\n\r\ndef campaignclear(ctx):\r\n back_end.delete_server_campaigns(ctx.guild.name)\r\n return 'Campaigns cleared out.'\r\n"
},
{
"alpha_fraction": 0.7005813717842102,
"alphanum_fraction": 0.7020348906517029,
"avg_line_length": 17.11111068725586,
"blob_id": "3f268cc4c73f4c01cb2aaa2fe08eb2b4ba60769d",
"content_id": "8dc95b731d68854b3db92683e4a8a1f59659c3fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 688,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 36,
"path": "/Roland.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# discord\r\nimport discord\r\nfrom discord.ext import commands\r\n\r\n# for paths\r\nimport pathlib\r\n\r\n# for configuration file\r\nimport toml\r\n\r\n# import the commander\r\nimport RolandCommander\r\n\r\n# config info\r\n\r\nCONFIGURATION_FILE = pathlib.Path.cwd() / \"data\" / \"configuration\" / \"token.toml\"\r\nTOKEN = toml.loads(CONFIGURATION_FILE.read_text())\r\nTOKEN = TOKEN['TOKEN']\r\n\r\n# set bot prefix\r\nbot_prefix = \"!\"\r\n\r\n# create bot connection\r\nclient = commands.Bot(command_prefix=bot_prefix)\r\n\r\n# call in the commander\r\nRolandCommander.initiate_command_sequence(client)\r\n\r\n\r\n# say hi\r\[email protected]\r\nasync def on_ready():\r\n print('We have logged in as {0.user}'.format(client))\r\n\r\n\r\nclient.run(TOKEN)\r\n"
},
{
"alpha_fraction": 0.692307710647583,
"alphanum_fraction": 0.692307710647583,
"avg_line_length": 26.399999618530273,
"blob_id": "0325e587880e63c036d7db79f6f7e53b6ce9b4c0",
"content_id": "6bb4347dead18d479ffef20a464364673ba15470",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 286,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 10,
"path": "/Commander/RolandCommander.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# initialize bot commands\r\ndef initiate_command_sequence(client):\r\n client.load_extension(\"Storyteller\")\r\n\r\n\r\ndef check_command(command_input):\r\n # if command is empty, stop the process, otherwise let it go through\r\n if command_input:\r\n return True\r\n return False\r\n\r\n"
},
{
"alpha_fraction": 0.6916933059692383,
"alphanum_fraction": 0.6916933059692383,
"avg_line_length": 22.076923370361328,
"blob_id": "20a59ae7a0a25decd8baae08def9c8706be93700",
"content_id": "076c45dae28b8d3904cf798d9f99d8a71ef47c9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 626,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 26,
"path": "/loggers/sqlog.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# for paths\r\nimport pathlib\r\n\r\n# for logging\r\nimport logging\r\n\r\n# create log file\r\nLOG_FILE = pathlib.Path.cwd() / 'data' / 'logs' / 'sql logs' / 'sql.log'\r\n\r\n# gets or creates a logger\r\nlogger = logging.getLogger(__name__)\r\n\r\n# set log level\r\nlogger.setLevel(logging.INFO)\r\n\r\n# define file handler and set formatter\r\nfile_handler = logging.FileHandler(LOG_FILE)\r\nformatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')\r\nfile_handler.setFormatter(formatter)\r\n\r\n# add file to logger\r\nlogger.addHandler(file_handler)\r\n\r\n# logs\r\nlogger.debug('A debug message')\r\nlogger.info('Begin Transaction...')\r\n"
},
{
"alpha_fraction": 0.654739499092102,
"alphanum_fraction": 0.654865026473999,
"avg_line_length": 48.41139221191406,
"blob_id": "ba920b430185738eaaa56e452c634439060127ff",
"content_id": "be209428626a8ff9479ad8212ccb21d3bdf65df1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7965,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 158,
"path": "/sql/RolandSQL.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# for paths\r\nimport pathlib\r\n\r\n# for configuration file\r\nimport toml\r\n\r\n# for class variable type hints\r\nfrom typing import ClassVar, TypedDict, List, Set\r\n\r\n# for connection\r\nimport pyodbc\r\nimport SQLRunner\r\nimport itertools\r\n\r\n# for logging\r\nimport sqlog\r\n\r\n# get configuration file\r\n\r\nROLAND_ACADEMY_FILE: pathlib.Path = pathlib.Path.cwd() / \"data\" / \"configuration\" / \"roland academy.toml\"\r\nROLAND_ACADEMY: TypedDict = toml.loads(ROLAND_ACADEMY_FILE.read_text(encoding=\"utf-8\"))\r\n\r\n# establish dictionaries\r\n\r\nsql_variables: TypedDict = ROLAND_ACADEMY['SQLVariables']\r\n\r\n# connection to the database\r\n\r\ndatabase = SQLRunner.SQLRunner()\r\n\r\n\r\nclass CampaignInfo:\r\n # get sql variables\r\n\r\n INSERT_NEW_CAMPAIGN_NAME: ClassVar[str] = sql_variables['INSERT_NEW_CAMPAIGN_NAME']\r\n GET_CAMPAIGN_LIST: ClassVar[str] = sql_variables['GET_CAMPAIGN_LIST']\r\n DELETE_CAMPAIGN_BY_NAME: ClassVar[str] = sql_variables['DELETE_CAMPAIGN_BY_NAME']\r\n DELETE_CAMPAIGNS: ClassVar[str] = sql_variables['DELETE_CAMPAIGNS']\r\n\r\n def __init__(self):\r\n\r\n # variables\r\n\r\n self.campaigns: List[str] = []\r\n self.check_failed: bool = True\r\n\r\n def add_campaign_validation(self, campaigns_adding, server_name: str) -> str:\r\n # for every campaign submitted, take the caseless version and put it in a set\r\n sqlog.logger.info('Setting up new campaigns...')\r\n campaigns_adding = {name.casefold() for name in campaigns_adding}\r\n # for every current campaign, take the caseless version and put it in a set\r\n sqlog.logger.info('Getting current campaigns...')\r\n current_campaigns = {name.casefold() for name in self.get_campaigns(server_name)}\r\n # campaigns already in the database shouldn't be added; this intersection reveals that set\r\n sqlog.logger.info('Validating...')\r\n invalid_campaigns = current_campaigns.intersection(campaigns_adding)\r\n # from the set campaign set we want to add, remove what we have 
already, title them and make a new set\r\n valid_campaigns = {name.title() for name in\r\n campaigns_adding.difference(invalid_campaigns)}\r\n # gonna try something here to make a string and return that instead\r\n return self.add_campaign_process(valid_campaigns, server_name, invalid_campaigns)\r\n\r\n def add_campaign_process(self, valid_campaigns, server_name, invalid_campaigns):\r\n # if there are no valid campaigns, end and return a failure message\r\n if not valid_campaigns:\r\n return f\"Already exists on {server_name}\"\r\n # else, add the campaigns in\r\n sqlog.logger.info('Adding new campaigns...')\r\n self.add_new_campaigns(valid_campaigns, server_name)\r\n sqlog.logger.info('Campaigns successfully added...Checking commit requirements...')\r\n # check they were added before committing\r\n return self.add_campaign_commit_check(set(self.get_campaigns(server_name)), valid_campaigns, invalid_campaigns)\r\n\r\n def add_new_campaigns(self, campaigns_adding, server_name):\r\n # add each campaign that is listed as valid\r\n try:\r\n param = zip(itertools.repeat(server_name), campaigns_adding)\r\n database.cursor.executemany(self.INSERT_NEW_CAMPAIGN_NAME, param)\r\n sqlog.logger.info(\"%s\\n Parameters: %s, %s\", self.INSERT_NEW_CAMPAIGN_NAME,\r\n campaigns_adding, server_name)\r\n except pyodbc.DatabaseError:\r\n sqlog.logger.exception('Error Exception Occurred')\r\n\r\n def add_campaign_commit_check(self, current_campaigns, valid_campaigns, invalid_campaigns):\r\n if not current_campaigns.intersection(valid_campaigns):\r\n # if the set is empty, they weren't added so fail\r\n return 'Campaigns not added. 
Talk to Ra.'\r\n # else\r\n sqlog.logger.info('Commit requirements passed..committing...')\r\n database.finalize()\r\n sqlog.logger.info('Committed...')\r\n # by this point, we know there is a valid campaign so if there are no invalids, the message is just what's added\r\n if not invalid_campaigns:\r\n return f\"{', '.join([str(s) for s in valid_campaigns])} added\"\r\n # if there is an invalid, mention it wasn't added\r\n return f\"{', '.join([str(s) for s in valid_campaigns])} added;\" \\\r\n f\" {', '.join([str(s) for s in invalid_campaigns])} already exist \"\r\n\r\n def get_campaigns(self, server_name):\r\n # get campaigns\r\n sqlog.logger.info('Getting list of campaigns...')\r\n database.cursor.execute(self.GET_CAMPAIGN_LIST, server_name)\r\n sqlog.logger.info(\"%s\\n Parameters: %s\", self.GET_CAMPAIGN_LIST, server_name)\r\n sqlog.logger.info(\"Campaigns retrieved...\")\r\n query = database.cursor.fetchall()\r\n # make list of campaigns\r\n campaign_list = [row.CampaignName for row in query]\r\n return campaign_list\r\n\r\n def remove_campaigns_process(self, campaigns_removing, server_name: str) -> str:\r\n # for every campaign submitted for removal, take the caseless version and put it in a set\r\n sqlog.logger.info('Setting up campaigns for removal...')\r\n campaigns_removing = {name.casefold() for name in campaigns_removing}\r\n # get the caseless set of current campaigns\r\n sqlog.logger.info('Getting current campaigns...')\r\n current_campaigns = {name.casefold() for name in self.get_campaigns(server_name)}\r\n # find which campaigns are in the current list, which thus can be removed\r\n sqlog.logger.info('Validating...')\r\n valid_campaigns = campaigns_removing.intersection(current_campaigns)\r\n # invalid campaigns will not be removed; is not a campaign\r\n invalid_campaigns = campaigns_removing.difference(valid_campaigns)\r\n sqlog.logger.info('Deleting campaigns...')\r\n # gets a set of the campaigns being removed\r\n return 
self.check_campaign_removal(server_name, self.campaign_removal(valid_campaigns, server_name),\r\n invalid_campaigns)\r\n\r\n def campaign_removal(self, campaigns_removing, server_name):\r\n # remove, from the database, each campaign that is in the valid for removal list\r\n try:\r\n param = zip(itertools.repeat(server_name), campaigns_removing)\r\n database.cursor.executemany(self.DELETE_CAMPAIGN_BY_NAME, param)\r\n sqlog.logger.info(\"%s\\n Parameters: %s, %s\", self.DELETE_CAMPAIGN_BY_NAME, server_name, campaigns_removing)\r\n except pyodbc.DatabaseError:\r\n sqlog.logger.exception('Error Exception Occurred')\r\n # then return the set of names we just removed which should only be the valid names\r\n return campaigns_removing\r\n\r\n def check_campaign_removal(self, server_name, valid_campaigns,\r\n invalid_campaigns):\r\n # checks to see if the deleted set is still in the database\r\n if set(self.get_campaigns(server_name)).intersection(valid_campaigns):\r\n test = set(self.get_campaigns(server_name)).intersection(valid_campaigns)\r\n # if it isn't empty, stop and return a failure\r\n return 'Campaigns not removed properly. See Ra.'\r\n # an empty set means the campaigns were properly removed\r\n sqlog.logger.info('Committing...')\r\n database.finalize()\r\n sqlog.logger.info('Committed...')\r\n if invalid_campaigns:\r\n return f\"{', '.join([str(s) for s in valid_campaigns])} removed;\" \\\r\n f\" {', '.join([str(s) for s in invalid_campaigns])} doesn't exist.\"\r\n return f\"{', '.join([str(s) for s in valid_campaigns])} removed.\"\r\n\r\n def delete_server_campaigns(self, server_name):\r\n sqlog.logger.info('Deleting Campaigns...')\r\n database.cursor.execute(self.DELETE_CAMPAIGNS, server_name)\r\n sqlog.logger.info('Campaigns Deleted...')\r\n database.finalize()\r\n"
},
{
"alpha_fraction": 0.6614626049995422,
"alphanum_fraction": 0.6614626049995422,
"avg_line_length": 21.86274528503418,
"blob_id": "c370656952e315109ba10bc817c6358bb7b96a02",
"content_id": "fff89bd5610c9c0c77ad6a8e178ed0211bad05ec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1217,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 51,
"path": "/Commands/Storyteller.py",
"repo_name": "SKullDugDev/Roland",
"src_encoding": "UTF-8",
"text": "# main module & ext\r\nfrom discord.ext import commands\r\n\r\n# connect to backend\r\nimport StorytellerGuide\r\n\r\n# typing hint\r\nfrom typing import NewType\r\n\r\nmessage = NewType('message', str)\r\n\r\nSTG = StorytellerGuide\r\n\r\n# config info\r\n\r\n\r\n# set prefix\r\nbot_prefix = \"!\"\r\n\r\n# create Client connection\r\nclient = commands.Bot(command_prefix=bot_prefix)\r\n\r\n\r\nclass CampaignCommands(commands.Cog):\r\n def __init__(self, bot):\r\n self.bot = bot\r\n pass\r\n\r\n # add campaigns\r\n @commands.command()\r\n async def campaignadd(self, ctx, *campaigns_adding) -> message:\r\n return await ctx.send(STG.campaignadd(ctx, campaigns_adding))\r\n\r\n # list campaigns\r\n @commands.command()\r\n async def campaignlist(self, ctx) -> message:\r\n return await ctx.send(STG.campaignlist(ctx))\r\n\r\n # remove campaigns\r\n @commands.command()\r\n async def campaignremove(self, ctx, *campaigns_removing) -> message:\r\n return await ctx.send(STG.campaignremove(ctx, campaigns_removing))\r\n\r\n # clear campaigns\r\n @commands.command()\r\n async def campaignclear(self, ctx) -> message:\r\n return await ctx.send(STG.campaignclear(ctx))\r\n\r\n\r\ndef setup(bot):\r\n bot.add_cog(CampaignCommands(bot))\r\n"
}
] | 9 |
dssantos/autodragup | https://github.com/dssantos/autodragup | a1b81e03f7cad9d1764f4be0306384e9e9a08a19 | 1fdd45d493e965b0aafcbe39a6b3cb401ddad2aa | 8e22de0f2308f4406a63249845945731014ed640 | refs/heads/main | 2023-06-07T04:51:09.529200 | 2021-06-12T02:30:46 | 2021-06-12T02:30:46 | 374,431,494 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.74956214427948,
"alphanum_fraction": 0.7513135075569153,
"avg_line_length": 19.285715103149414,
"blob_id": "d3a0b528371bca9ef63eeb3b52e40fb233dffe7b",
"content_id": "944b4c7870cce9e86d31916466428b97e81a4777",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 571,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 28,
"path": "/README.md",
"repo_name": "dssantos/autodragup",
"src_encoding": "UTF-8",
"text": "\n# autodragup\n\n## How to dev\n\n### Linux\n```bash\ngit clone https://github.com/dssantos/autodragup autodragup\ncd autodragup\npython -m venv .autodragup\nsource .autodragup/bin/activate\npython -m pip install -U pip\npip install -r requirements.txt\n```\n\n### Windows (Powershell)\n```bash\ngit clone https://github.com/dssantos/autodragup autodragup\ncd autodragup\npython -m venv .autodragup\nSet-ExecutionPolicy Unrestricted -Scope Process -force\n.\\.autodragup\\Scripts\\Activate.ps1\npython -m pip install -U pip\npip install -r requirements.txt\n```\n\n## How to deploy\n\n## How to use\n\n\n"
},
{
"alpha_fraction": 0.4885954260826111,
"alphanum_fraction": 0.5690276026725769,
"avg_line_length": 17.511110305786133,
"blob_id": "2e8c6d367e38e5c5b5ce5ba6972b57e8bc831d1d",
"content_id": "6d4fee966486a9683f95e024ac74a3dd9a483744",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 833,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 45,
"path": "/autodragup.py",
"repo_name": "dssantos/autodragup",
"src_encoding": "UTF-8",
"text": "from random import randrange, uniform\nfrom time import sleep\n\nimport pyautogui as pg\n\n\ndef openapp(x, y):\n pg.moveTo(1123, 707)\n pg.click()\n sleep(0.5)\n pg.moveTo(x, y)\n pg.click()\n\ndef like():\n if longwait >= 10:\n pg.moveTo(1176, 338)\n pg.click()\n sleep(0.1)\n pg.click()\n sleep(shortwait)\n\ndef dragup():\n pg.moveTo(xstart, ystart)\n sleep(shortwait)\n pg.drag(0, -460, duration=shortwait)\n\ntoken = True\nsleep(5)\nopenapp(1168, 306)\nwhile True:\n\n xstart = 1150+randrange(-10, 10)\n ystart = 600+randrange(-2, 2)\n longwait = randrange(6,15)\n shortwait = uniform(0.5, 0.9)\n \n sleep(longwait)\n if longwait == 6:\n token = not token\n if token:\n openapp(1168, 306) \n else:\n openapp(1168, 403)\n like()\n dragup()\n"
}
] | 2 |
jggrandio/KNN-SPARK | https://github.com/jggrandio/KNN-SPARK | 3d7e4b528bf8868ab428511fd47ae7e1ebc028c1 | 7103732c5bb695a8663f9d083eb70869631b200b | 1013ea6feae7a819a0c07a5d1620bb70b01b85b1 | refs/heads/master | 2022-03-17T17:03:48.543655 | 2019-12-08T00:08:26 | 2019-12-08T00:08:26 | 225,245,121 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.4693034291267395,
"alphanum_fraction": 0.49114522337913513,
"avg_line_length": 22.91176414489746,
"blob_id": "a90eb520bd6fdfa1fe9c8a3e588269d25a0794c7",
"content_id": "17fd61d2e0369e4a5b3bc9525d3b249a9c07ecaf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1694,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 68,
"path": "/spark_knn.py",
"repo_name": "jggrandio/KNN-SPARK",
"src_encoding": "UTF-8",
"text": "#Fastest way until now\r\nfrom pyspark import SparkContext, SparkConf\r\nimport time\r\nimport sys\r\n\r\nstart = time.time()\r\n\r\ndef distances (dt):\r\n idts = dt[1]\r\n l = len (dt[0])-1\r\n ts = list(map(float, dt[0][0:l]))\r\n cltr = int(dt[0][l])\r\n \r\n \r\n x = [(float('inf'),0)]*k.value\r\n \r\n for i in tr:\r\n if not idts == i[1]:\r\n dist = sum((p-q)*(p-q) for p, q in zip(i[0][0:l], ts))\r\n if dist < x[len(x)-1][0]:\r\n for j in range(len(x)):\r\n if dist < x[j][0]:\r\n x.insert(j,(dist,i[0][l]))\r\n x.pop()\r\n break\r\n \r\n return (cltr,x)\r\ndef guess_class(dt):\r\n rclass = dt[0]\r\n freq = 0\r\n predict = 0\r\n for i in range(len(dt[1])):\r\n tfreq = 1\r\n tpredict = dt[1][i][1]\r\n for j in range(i+1,len(dt[1])):\r\n if tpredict == dt[1][j][1]:\r\n tfreq +=1\r\n if tfreq > freq:\r\n predict = tpredict\r\n freq = tfreq\r\n return (rclass,predict)\r\n\r\ndef correct(dt):\r\n if dt[0]==dt[1]:\r\n return 1\r\n else:\r\n return 0\r\n\r\ndataset = sys.argv[1]\r\npartitions = int(sys.argv[3])\r\n\r\nsc = SparkContext.getOrCreate()\r\nts = sc.textFile(dataset,partitions).zipWithUniqueId()\\\r\n .map(lambda line: (line[0].split(','), line[1]))\\\r\n .map(lambda line: (list(map(float, line[0])),line[1]))\r\n\r\n\r\ntr = ts.collect()\r\nk = sc.broadcast(int(sys.argv[2]))\r\n\r\n\r\nk_vals = ts.map(distances)\r\nguess_class = k_vals.map(guess_class)\r\ncorrect = guess_class.map(correct)\r\naccuracy = correct.mean()\r\nend = time.time()\r\nprint('The time to run is:', end - start)\r\nprint(accuracy)\r\n"
},
{
"alpha_fraction": 0.43865031003952026,
"alphanum_fraction": 0.45552146434783936,
"avg_line_length": 22.60377311706543,
"blob_id": "44573e81fc0aed464813aa89ec5ab8a9271909bd",
"content_id": "0cccf204d925f26125d2f844e125180e56317626",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1304,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 53,
"path": "/sequential_knn.py",
"repo_name": "jggrandio/KNN-SPARK",
"src_encoding": "UTF-8",
"text": "import time\r\nimport sys\r\n\r\nstart = time.time()\r\ndataset = sys.argv[1]\r\n\r\nf = open(dataset,'r')\r\ndata = []\r\nk = int(sys.argv[2])\r\nkitems= []\r\nfor a in f:\r\n data.append(list(map(float,a.split(','))))\r\n\r\nfor i in range(len(data)):\r\n ts = data [i][0:7]\r\n cltr = int(data[i][7])\r\n x = [(float('inf'),0)]*k\r\n \r\n for j in range(len(data)):\r\n if not i==j:\r\n dist = sum((p-q)*(p-q) for p, q in zip(data[j][0:7], ts))\r\n if dist < x[len(x)-1][0]:\r\n for z in range(len(x)):\r\n if dist < x[z][0]:\r\n x.insert(z,(dist,data[j][7]))\r\n x.pop()\r\n break\r\n \r\n kitems.append(x)\r\n\r\nfor z in range(len(kitems)):\r\n freq = 0\r\n predict = 0\r\n for i in range(len(kitems[z])):\r\n tfreq = 1\r\n tpredict = kitems[z][i][1]\r\n for j in range(i+1,len(kitems[z])):\r\n if tpredict == kitems[z][j][1]:\r\n tfreq +=1\r\n if tfreq > freq:\r\n predict = tpredict\r\n freq = tfreq\r\n kitems[z]=predict\r\n \r\nright = 0\r\n\r\nfor i in range(len(kitems)):\r\n if kitems[i] == data[i][7]:\r\n right+=1\r\naccuracy = right/len(kitems)\r\nend = time.time()\r\nprint('The time to run is:', end - start)\r\nprint(accuracy)\r\n"
}
] | 2 |
ar-terence-co/asrel | https://github.com/ar-terence-co/asrel | 15890e0453d4f03c0f19dedd9ae5f3ef039a75dc | 05b9b26e8cd8e94475796f87f75b084b80204332 | c266d1bff229c5a41410ace2420586b8365335ba | refs/heads/master | 2023-04-21T18:25:05.444291 | 2021-05-14T15:06:50 | 2021-05-14T15:06:50 | 361,225,702 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6547314524650574,
"alphanum_fraction": 0.6624041199684143,
"avg_line_length": 19.578947067260742,
"blob_id": "f397634ccfa0bdbfd50363a97296256d79639b1d",
"content_id": "18a70a873759aadd7da52e6ae7fd8f65595f1854",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 391,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 19,
"path": "/asrel/stores/base.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\nfrom typing import Dict\n\nclass BaseExperienceStore(ABC):\n def __init__(\n self, \n global_config: Dict = {},\n **kwargs,\n ):\n self.global_config = global_config\n self.batch_size = self.global_config.get(\"batch_size\", 256)\n\n @abstractmethod\n def add(self, experience: Dict):\n pass\n\n @abstractmethod\n def sample(self) -> Dict:\n pass\n"
},
{
"alpha_fraction": 0.6847808361053467,
"alphanum_fraction": 0.6878538131713867,
"avg_line_length": 32.59782791137695,
"blob_id": "4d9230d3860c3467d33b28a9fd8e4f4e23b68107",
"content_id": "4cb50f2f144e66adf3921951d3741d3233671a07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6183,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 184,
"path": "/asrel/core/utils.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import argparse\nfrom collections import OrderedDict\nfrom gym import Space\nimport importlib\nimport numpy as np\nimport pathlib\nimport random\nimport torch\nfrom typing import Any, Dict, Optional, Tuple, Type\nimport yaml\n\nDEFAULT_CONFIG = pathlib.Path(\"asrel.yml\")\nDEFAULT_CHECKPOINT_DIR = pathlib.Path(\".networks\")\nDEFAULT_DEVICE = torch.device(\"cpu\")\n\nclass ConfigError(Exception): pass\n\n\ndef get_args(description: str = \"ASync REinforcement Learning\") -> argparse.Namespace:\n parser = argparse.ArgumentParser(description=description)\n parser.add_argument(\"--config\", help=\"Path to the config file\", default=str(DEFAULT_CONFIG))\n parser.add_argument(\"--test-env-worker\", action=\"store_true\", help=\"Run only the environment worker in the background.\")\n parser.add_argument(\"--test-actor-worker\", action=\"store_true\", help=\"Run only the actor worker in the background.\")\n args = parser.parse_args()\n return args\n\n\ndef get_config(config_file: str) -> Dict:\n with open(config_file, \"r\") as f:\n config = yaml.safe_load(f)\n return config\n\n\ndef get_seed_sequences(config: Dict):\n seed = config.get(\"seed\")\n main_ss = np.random.SeedSequence(entropy=seed)\n print(f\"SEED {main_ss.entropy}\")\n\n env_seed_seq, actor_seed_seq, learner_seed_seq, store_seed_seq, orchestrator_seed_seq = main_ss.spawn(5)\n return {\n \"environment\": env_seed_seq,\n \"actor\": actor_seed_seq,\n \"learner\": learner_seed_seq,\n \"store\": store_seed_seq,\n \"orchestrator\": orchestrator_seed_seq,\n }\n\n\ndef set_worker_rng(seed_seq: np.random.SeedSequence):\n np_seed_seq, torch_seed_seq, py_seed_seq = seed_seq.spawn(3)\n\n np.random.seed(np_seed_seq.generate_state(4))\n torch.manual_seed(torch_seed_seq.generate_state(1, dtype=np.uint64).item())\n py_seed = (py_seed_seq.generate_state(2, dtype=np.uint64).astype(object) * [1 << 64, 1]).sum()\n random.seed(py_seed)\n\n return np_seed_seq, torch_seed_seq, py_seed_seq\n\ndef 
get_registry_args_from_config(config: Dict) -> Dict:\n return {\n \"shared\": config.get(\"shared\", {})\n }\n\ndef get_env_args_from_config(config: Dict) -> Dict:\n env_path = config['path']\n env_class_name = config.get(\"class\")\n env_class = get_class_from_module_path(env_path, class_name=env_class_name, class_suffix=\"Environment\")\n\n return {\n \"env_class\": env_class,\n \"num_envs\": config.get(\"num_envs_per_worker\", 2),\n \"num_workers\": config.get(\"num_workers\", 1),\n \"env_config\": config.get(\"conf\", {}),\n }\n\n\ndef get_actor_args_from_config(config: Dict) -> Dict:\n actor_path = config['path']\n actor_class_name = config.get(\"class\")\n actor_class = get_class_from_module_path(actor_path, class_name=actor_class_name, class_suffix=\"Actor\")\n\n return {\n \"actor_class\": actor_class,\n \"num_workers\": config.get(\"num_workers\", 1),\n \"actor_config\": config.get(\"conf\", {}),\n }\n\n\ndef get_store_args_from_config(config: Dict) -> Dict:\n store_path = config['path']\n store_class_name = config.get(\"class\")\n store_class = get_class_from_module_path(store_path, class_name=store_class_name, class_suffix=\"ExperienceStore\")\n\n return {\n \"store_class\": store_class,\n \"buffer_size\": config.get(\"buffer_size\", 16),\n \"warmup_steps\": config.get(\"warmup_steps\", 0),\n \"store_config\": config.get(\"conf\", {})\n }\n\ndef get_learner_args_from_config(config: Dict) -> Dict:\n learner_path = config['path']\n learner_class_name = config.get(\"class\")\n learner_class = get_class_from_module_path(learner_path, class_name=learner_class_name, class_suffix=\"Learner\")\n\n return {\n \"learner_class\": learner_class,\n \"learner_config\": config.get(\"conf\", {}),\n }\n\n\ndef get_net_from_config(config: Dict, **kwargs) -> torch.nn.Module:\n net_path = config['path']\n net_class_name = config.get(\"class\")\n net_class = get_class_from_module_path(net_path, class_name=net_class_name, class_suffix=\"Network\")\n\n net_config = 
config.get(\"conf\", {})\n\n return net_class(**net_config, **kwargs)\n\n\ndef get_orchestrator_args_from_config(config: Dict) -> Dict:\n pipeline_class_configs = []\n for c in config[\"pipelines\"]:\n pipeline_path = c['path']\n pipeline_class_name = c.get(\"class\")\n pipeline_class = get_class_from_module_path(pipeline_path, class_name=pipeline_class_name, class_suffix=\"Pipeline\")\n \n pipeline_class_configs.append((pipeline_class, c.get(\"conf\", {})))\n \n return {\n \"pipeline_class_configs\": pipeline_class_configs,\n }\n\n\ndef get_class_from_module_path(module_path, class_name: str = None, class_suffix: str = None) -> Type:\n module = importlib.import_module(module_path)\n if not class_name and class_suffix:\n for name in reversed(dir(module)):\n if name.endswith(class_suffix) and name != f\"Base{class_suffix}\":\n class_name = name\n break\n if not class_name: raise ConfigError(f\"Cannot find valid class for module `{module_path}`\")\n return getattr(module, class_name)\n\n\ndef take_tensor_from_dict(d: Dict[str, torch.Tensor], key: str) -> torch.Tensor:\n t = d[key].clone()\n del d[key]\n return t\n\ndef take_tensors_from_state_dicts(state_dicts: Dict[str, OrderedDict]) -> Dict[str, OrderedDict]:\n cloned_state_dicts = {}\n for net, state_dict in state_dicts.items():\n cloned_state_dicts[net] = OrderedDict()\n keys = list(state_dict.keys())\n for key in keys:\n cloned_state_dicts[net][key] = take_tensor_from_dict(state_dict, key)\n \n return cloned_state_dicts\n\ndef get_spaces_from_env_args(env_args: Dict) -> Tuple[Space, Space]:\n env_class = env_args[\"env_class\"]\n env_config = env_args[\"env_config\"]\n tmp_env = env_class(**env_config)\n observation_space = tmp_env.observation_space\n action_space = tmp_env.action_space\n return observation_space, action_space\n\ndef get_instance_from_config(module: Any, name: str = \"\", default_class: Optional[Type] = None, **kwargs) -> Any:\n try:\n class_ = getattr(module, name)\n except AttributeError:\n if 
not default_class: return None\n class_ = default_class\n \n return class_(**kwargs)\n\ndef validate_subclass(subclass: Type, parent: Type):\n if not issubclass(subclass, parent):\n raise ConfigError(f\"{subclass.__module__}.{subclass.__name__} is not a subclass of {parent.__module__}.{parent.__name__}\")\n\ndef noop(*args, **kwargs):\n pass\n\n"
},
{
"alpha_fraction": 0.6954976320266724,
"alphanum_fraction": 0.6978672742843628,
"avg_line_length": 24.57575798034668,
"blob_id": "5636be99e4493bc798c32f8b4107af9b994cc38a",
"content_id": "e5edcfacac8a48329f197955a2e5651b62ff48c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 844,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 33,
"path": "/asrel/pipelines/base.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\n\nfrom multiprocessing.queues import Queue\nfrom threading import Thread\nfrom typing import Callable, Dict\n\nfrom asrel.core.registry import WorkerRegistry\nfrom asrel.pipelines.utils import put_while\n\nclass BasePipeline(Thread):\n def __init__(\n self, \n registry: WorkerRegistry,\n shared_dict: Dict,\n process_state: Dict,\n queue_timeout: int = 60,\n global_config: Dict = {},\n **kwargs,\n ):\n super().__init__()\n \n self.registry = registry\n self.shared_dict = shared_dict\n self.process_state = process_state\n self.queue_timeout = queue_timeout\n self.global_config = global_config\n\n def send_task(self, queue: Queue, task: Dict):\n put_while(queue, task, lambda: self.process_state[\"running\"], timeout=self.queue_timeout)\n\n @abstractmethod\n def run(self):\n pass\n"
},
{
"alpha_fraction": 0.6110514998435974,
"alphanum_fraction": 0.6244634985923767,
"avg_line_length": 30.610170364379883,
"blob_id": "29a4efc5d4092fce390ab766fb7148e312bf117a",
"content_id": "e0147b42388a54b2787ab41f1d55b34de2542f89",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1864,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 59,
"path": "/asrel/networks/simple_conv2d.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom typing import Dict, List\n\nfrom asrel.core.utils import get_instance_from_config\nfrom asrel.networks.base import BaseNetwork\n\nclass SimpleConv2DNetwork(BaseNetwork):\n def setup(\n self,\n input_size: List[int],\n output_size: List[int],\n conv_params: List[Dict] = [\n {\"out_channels\": 32, \"kernel_size\": 8, \"stride\": 4},\n {\"out_channels\": 64, \"kernel_size\": 4, \"stride\": 2},\n {\"out_channels\": 64, \"kernel_size\": 3, \"stride\": 1},\n ],\n ff_layers: List[int] = [128],\n activation: Dict = {},\n optimizer: Dict = {},\n ):\n conv_layers = []\n for i, params in enumerate(conv_params):\n in_channels = input_size[0] if i == 0 else conv_params[i-1][\"out_channels\"]\n conv = nn.Conv2d(in_channels, **params)\n activ = get_instance_from_config(nn, **activation, default_class=nn.ReLU)\n seq = nn.Sequential(conv, activ)\n conv_layers.append(seq)\n \n self.conv_block = nn.Sequential(\n *conv_layers,\n nn.Flatten(),\n )\n conv_output_shape = self.get_output_shape(self.conv_block, input_size)[0]\n layers = []\n for i, layer_size in enumerate(ff_layers):\n prev_size = conv_output_shape if i == 0 else ff_layers[i-1]\n linear = nn.Linear(prev_size, layer_size)\n activ = get_instance_from_config(nn, **activation, default_class=nn.ReLU)\n seq = nn.Sequential(linear, activ)\n layers.append(seq)\n \n self.ff_block = nn.Sequential(*layers)\n self.out_layer = nn.Linear(ff_layers[-1], output_size[0])\n\n self.optimizer = get_instance_from_config(\n optim, \n params=self.parameters(), \n default_class=optim.Adam, \n **optimizer\n )\n\n def forward(self, x: torch.Tensor):\n x = x.float()\n x = self.conv_block(x)\n x = self.ff_block(x)\n out = self.out_layer(x)\n return out"
},
{
"alpha_fraction": 0.6027318239212036,
"alphanum_fraction": 0.6073328256607056,
"avg_line_length": 30.04910659790039,
"blob_id": "5cfe5c3e27869e2b9c6bb956ca64d5eea91fbfaf",
"content_id": "e202dd0da7ea3deff704af96763ea8a6769ad500",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6955,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 224,
"path": "/asrel/pipelines/observation/standard.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from multiprocessing.queues import Empty\nimport numpy as np\nimport time\nimport torch\nfrom typing import Dict, List, Tuple\n\nimport asrel.core.workers.events as events\nfrom asrel.pipelines.base import BasePipeline\n\nACTOR_WAIT_TIMEOUT = 3 # seconds\nMOVING_AVERAGE_STEPS = 100 # steps\nSAVE_IF_IDLE = 100 # episodes\n\nclass StandardObservationPipeline(BasePipeline):\n \"\"\"\n Standard observation pipeline \n \"\"\"\n def __init__(\n self,\n batch_count: int = 0,\n batch_split: int = 2,\n **kwargs,\n ):\n super().__init__(**kwargs)\n \n self.env_output_queues = self.registry.output_queues[\"environment\"]\n self.actor_input_queues = self.registry.input_queues[\"actor\"]\n self.store_input_queue = self.registry.input_queues[\"store\"][0]\n self.learner_input_queue = self.registry.input_queues[\"learner\"][0]\n\n self.env_q_count = len(self.env_output_queues)\n self.actor_q_count = len(self.actor_input_queues)\n\n self.shared_env_output = self.env_q_count == 1\n self.batch_per_worker = self._get_batch_per_worker(batch_count, batch_split)\n self.envs_completed = [\n [False for _ in range(config[\"num_envs\"])] \n for config in self.registry.configs[\"environment\"]\n ]\n\n self.actor_devices = [\n torch.device(\n config.get(\"actor_config\", {}).get(\"device\", \"cuda\")\n )\n for config in self.registry.configs[\"actor\"]\n ]\n\n if \"experiences\" not in self.shared_dict:\n self.shared_dict[\"experiences\"] = {}\n if \"scores\" not in self.shared_dict:\n self.shared_dict[\"scores\"] = np.array([], dtype=float)\n\n self.max_episodes = self.global_config.get(\"max_episodes\", 100)\n self.n_steps = self.global_config.get(\"n_steps\", 1)\n self.gamma = self.global_config.get(\"gamma\", 1.0)\n\n self.last_saved = -1\n\n def run(self):\n self._wait_for_actors()\n\n actor_idx = 0\n\n while self.process_state[\"running\"]:\n outputs = self._get_batch_outputs()\n\n if not outputs:\n if self.process_state[\"total_episodes\"] >= self.max_episodes:\n 
self.process_state[\"running\"] = False\n continue\n\n for output in outputs:\n to_store = self._update_experiences(output)\n for exp in to_store:\n task = {\n \"type\": events.STORE_ADD_EXPERIENCE_TASK,\n \"experience\": exp,\n }\n self.send_task(self.store_input_queue, task)\n\n batch_obs, env_idxs, should_save = self._process_batch_outputs(outputs)\n task = {\n \"type\": events.ACTOR_CHOOSE_ACTION_TASK,\n \"observation\": torch.tensor(batch_obs).to(self.actor_devices[actor_idx]),\n \"greedy\": False,\n \"env_idx\": env_idxs,\n \"actor_idx\": actor_idx,\n }\n self.send_task(self.actor_input_queues[actor_idx], task)\n\n if should_save:\n task = {\n \"type\": events.LEARNER_SAVE_NETWORKS_TASK\n }\n self.send_task(self.learner_input_queue, task)\n self.last_saved = self.process_state[\"total_episodes\"]\n\n actor_idx = (actor_idx + 1) % self.actor_q_count\n \n def _get_batch_per_worker(\n self, \n batch_count: int = 0, \n batch_split: int = 2,\n ) -> List[int]:\n assert batch_count > 0 or batch_split > 0\n\n batch_per_worker = [\n min(batch_count, config[\"num_envs\"])\n if batch_count > 0\n else config[\"num_envs\"] // batch_split\n for config in self.registry.configs[\"environment\"]\n ]\n\n if self.shared_env_output:\n batch_per_worker = [sum(batch_per_worker)]\n\n return batch_per_worker\n\n def _is_worker_complete(self, idx: int) -> bool:\n if self.shared_env_output:\n return all(\n all(per_worker) for per_worker in self.envs_completed\n )\n return all(self.envs_completed[idx])\n\n def _get_batch_outputs(self) -> List[Dict]:\n outputs = []\n total_pulls = [0 for i in range(self.env_q_count)]\n\n while True:\n has_pull = False\n for idx in range(self.env_q_count):\n if self._is_worker_complete(idx): continue\n if total_pulls[idx] >= self.batch_per_worker[idx]: continue\n\n try:\n output = self.env_output_queues[idx].get(block=True, timeout=self.queue_timeout)\n except Empty:\n continue\n\n if output[\"type\"] != events.RETURNED_OBSERVATION_EVENT: \n continue # 
Note this skips other returned events from the env\n\n outputs.append(output)\n has_pull = True\n total_pulls[idx] += 1\n if not has_pull: break\n\n return outputs\n\n def _process_batch_outputs(self, outputs: List[Dict]) -> Tuple[List[np.ndarray], List[Tuple[int, int]]]:\n batch_obs = []\n env_idxs = []\n should_save = False\n for output in outputs:\n batch_obs.append(output[\"observation\"])\n env_idxs.append(output[\"env_idx\"])\n self.process_state[\"total_steps\"] += 1\n\n if output[\"episode_done\"]:\n self.process_state[\"total_episodes\"] += 1\n self.shared_dict[\"scores\"] = np.append(self.shared_dict[\"scores\"], output[\"score\"])\n average_score = self.shared_dict[\"scores\"][-MOVING_AVERAGE_STEPS:].mean()\n if (\n self.process_state[\"total_episodes\"] - self.last_saved > SAVE_IF_IDLE or\n \"max_average_score\" not in self.shared_dict or \n average_score > self.shared_dict[\"max_average_score\"]\n ):\n self.shared_dict[\"max_average_score\"] = average_score\n should_save = True\n\n print(\n self.process_state[\"total_episodes\"],\n output[\"env_idx\"],\n output[\"episode_step\"],\n output[\"score\"],\n average_score,\n )\n\n if self.process_state[\"total_episodes\"] >= self.max_episodes:\n worker_idx, sub_idx = output[\"env_idx\"]\n self.envs_completed[worker_idx][sub_idx] = True\n\n return batch_obs, env_idxs, should_save\n\n def _update_experiences(self, output) -> List[Dict]:\n env_idx = output[\"env_idx\"]\n if env_idx not in self.shared_dict[\"experiences\"]:\n self.shared_dict[\"experiences\"][env_idx] = []\n \n env_exps = self.shared_dict[\"experiences\"][env_idx]\n for i, exp in enumerate(env_exps):\n exp[\"return\"] += (self.gamma**i) * output[\"reward\"]\n \n to_store = []\n if output[\"episode_done\"]:\n while len(env_exps):\n exp = env_exps.pop()\n exp.update({\n \"nth_state\": output[\"observation\"],\n \"done\": True,\n })\n to_store.append(exp)\n else:\n if len(env_exps) >= self.n_steps:\n exp = env_exps.pop()\n exp.update({\n 
\"nth_state\": output[\"observation\"],\n \"done\": False,\n })\n to_store.append(exp)\n\n env_exps.insert(0, {\n \"state\": output[\"observation\"],\n \"return\": 0.,\n })\n\n return to_store\n\n def _wait_for_actors(self):\n print(\"Waiting for actors to be initialized...\")\n while self.process_state[\"running\"] and not self.process_state[\"actors_initialized\"]:\n time.sleep(ACTOR_WAIT_TIMEOUT)\n print(\"Actors intialized. Sending observations...\")\n"
},
{
"alpha_fraction": 0.6197982430458069,
"alphanum_fraction": 0.627364456653595,
"avg_line_length": 23.78125,
"blob_id": "a6dae111f06e47d1a366485f8dbdf4c3c86476ec",
"content_id": "90668b2594c4468af62311da570f83766d3385cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1586,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 64,
"path": "/asrel/stores/exp_replay.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom typing import Dict\n\nfrom asrel.stores.base import BaseExperienceStore\n\n\nclass ExperienceReplay(BaseExperienceStore):\n def __init__(\n self,\n maxsize: int = 100000,\n types: Dict[str, str] = {},\n **kwargs,\n ):\n super().__init__(**kwargs)\n\n self.maxsize = maxsize\n self.types = types\n\n self.cursor = 0\n self.size = 0\n self.total_count = 0\n\n self.replay_dict = None \n\n def add(self, experience: Dict):\n if not self.replay_dict:\n self.replay_dict = self._create_replay_dict(experience)\n\n for name, value in experience.items():\n self.replay_dict[name][self.cursor] = value\n\n self.cursor = (self.cursor+1) % self.maxsize\n if self.size < self.maxsize: self.size += 1\n self.total_count += 1\n\n def sample(self) -> Dict[str, np.ndarray]:\n if not self.replay_dict: return {}\n batch = np.random.choice(self.size, self.batch_size)\n batch_experience = self[batch]\n\n return batch_experience\n\n def _create_replay_dict(self, experience: Dict):\n replay_dict = {}\n for key, value in experience.items():\n is_np_instance = isinstance(value, np.ndarray)\n shape = value.shape if is_np_instance else ()\n\n if key in self.types:\n type_ = np.dtype(self.types[key])\n elif is_np_instance:\n type_ = value.dtype\n else:\n type_ = np.array(value).dtype\n\n replay_dict[key] = np.zeros((self.maxsize, *shape), dtype=type_)\n\n return replay_dict\n\n def __getitem__(self, idx):\n return {\n key: exps[idx]\n for key, exps in self.replay_dict.items()\n }\n"
},
{
"alpha_fraction": 0.6505700945854187,
"alphanum_fraction": 0.6525821685791016,
"avg_line_length": 26.10909080505371,
"blob_id": "1e0e14821b6cc6592c8f2428bdbe4b4521711c8e",
"content_id": "8c087d9c538aeb8d3711af64b7d06331af88926a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1491,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 55,
"path": "/asrel/pipelines/dataset/standard.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from multiprocessing.queues import Empty\nimport torch\nfrom typing import Dict\n\nimport asrel.core.workers.events as events\nfrom asrel.pipelines.base import BasePipeline\n\nclass StandardDatasetPipeline(BasePipeline):\n \"\"\"\n Standard dataset pipeline \n \"\"\"\n def __init__(\n self,\n **kwargs,\n ):\n super().__init__(**kwargs)\n\n self.store_output_queue = self.registry.output_queues[\"store\"][0]\n self.learner_input_queue = self.registry.input_queues[\"learner\"][0]\n\n config = self.registry.configs[\"learner\"][0]\n self.learner_device = torch.device(\n config.get(\"learner_config\", {}).get(\"device\", \"cuda\")\n )\n\n def run(self):\n while self.process_state[\"running\"]:\n output = self._get_output()\n if not output: continue\n\n batch_tensor = self._process_output(output)\n\n task = {\n \"type\": events.LEARNER_TRAIN_TASK,\n \"data\": batch_tensor,\n }\n self.send_task(self.learner_input_queue, task)\n\n def _get_output(self) -> Dict:\n try:\n output = self.store_output_queue.get(block=True, timeout=self.queue_timeout)\n\n if output[\"type\"] != events.RETURNED_BATCH_DATA_EVENT:\n return None # Note this skips other returned events from the actor\n\n return output\n except Empty:\n return None\n\n def _process_output(self, output: Dict) -> Dict[str, torch.Tensor]:\n batch_tensor = {\n k: torch.tensor(v, device=self.learner_device)\n for k, v in output[\"data\"].items()\n }\n return batch_tensor\n"
},
{
"alpha_fraction": 0.6416136026382446,
"alphanum_fraction": 0.6433120965957642,
"avg_line_length": 29.584415435791016,
"blob_id": "e982ed36b00cc2c804f73ae733fe38f2b633bb04",
"content_id": "3b55384ee6d02e3015c6e927a54bb3318985f151",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2355,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 77,
"path": "/asrel/pipelines/action/standard.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from multiprocessing.queues import Empty\nimport numpy as np\nfrom typing import Dict, List, Tuple\n\nfrom asrel.core.utils import take_tensor_from_dict\nimport asrel.core.workers.events as events\nfrom asrel.pipelines.base import BasePipeline\n\nclass StandardActionPipeline(BasePipeline):\n \"\"\"\n Standard observation pipeline \n \"\"\"\n def __init__(\n self,\n **kwargs,\n ):\n super().__init__(**kwargs)\n \n self.actor_output_queues = self.registry.output_queues[\"actor\"]\n self.env_input_queues = self.registry.input_queues[\"environment\"]\n\n self.actor_q_count = len(self.actor_output_queues)\n self.env_q_count = len(self.env_input_queues)\n\n if \"experiences\" not in self.shared_dict:\n self.shared_dict[\"experiences\"] = {}\n\n def run(self):\n actor_idx = 0\n\n while self.process_state[\"running\"]:\n output = self._get_output(actor_idx)\n if not output: continue\n\n batch_actions, env_idxs = self._process_output(output)\n\n for i, env_idx in enumerate(env_idxs):\n action = batch_actions[i].item()\n self._update_experiences(action, env_idx)\n \n task = {\n \"type\": events.ENV_INTERACT_TASK,\n \"action\": batch_actions[i].item(),\n \"env_idx\": env_idx,\n }\n\n env_worker_idx, _ = env_idx\n self.send_task(self.env_input_queues[env_worker_idx], task)\n \n actor_idx = (actor_idx + 1) % self.actor_q_count\n\n def _get_output(self, idx: int) -> Dict: \n try:\n output = self.actor_output_queues[idx].get(block=True, timeout=self.queue_timeout)\n\n if output[\"type\"] != events.RETURNED_ACTION_EVENT:\n return None # Note this skips other returned events from the actor\n\n return output\n except Empty:\n return None\n\n def _process_output(self, output: Dict) -> Tuple[np.ndarray, List[Tuple[int, int]]]:\n batch_actions_t = take_tensor_from_dict(output, \"action\")\n batch_actions = batch_actions_t.cpu().numpy()\n env_idxs = output[\"env_idx\"]\n\n return batch_actions, env_idxs\n\n def _update_experiences(self, action: int, env_idx: Tuple[int, int]):\n 
if env_idx not in self.shared_dict[\"experiences\"]:\n self.shared_dict[\"experiences\"][env_idx] = []\n \n env_exps = self.shared_dict[\"experiences\"][env_idx]\n\n if len(env_exps) and \"action\" not in env_exps[0]:\n env_exps[0][\"action\"] = action\n"
},
{
"alpha_fraction": 0.6331877708435059,
"alphanum_fraction": 0.6432650089263916,
"avg_line_length": 26.757009506225586,
"blob_id": "388ca80ab277a2a860a667a51431bceecb96cd36",
"content_id": "4a5b52d1f6007edf9d9fbfe643b93c2c6d4ad071",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2977,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 107,
"path": "/asrel/learners/classic/dqn.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import copy\nfrom gym import Space\nimport numpy as np\nimport time\nimport torch\nimport torch.nn.functional as F\nfrom typing import Dict\n\nfrom asrel.core.utils import get_net_from_config\nfrom asrel.learners.base import BaseLearner\nfrom asrel.learners.utils import hard_update, soft_update\nfrom asrel.networks.base import BaseNetwork\n\nclass DQNLearner(BaseLearner):\n def __init__(\n self, \n net: Dict = {},\n epsilon: float = 1.,\n epsilon_dec: float = 1e-4,\n epsilon_end: float = 1e-3,\n use_hard_update: bool = False,\n hard_update_freq: int = 1000,\n soft_update_tau: float = 5e-3,\n policy_update_freq: int = 1000,\n **kwargs\n ):\n super().__init__(**kwargs)\n\n self.net: BaseNetwork = get_net_from_config(\n net,\n input_size=self.input_space.shape,\n output_size=(self.output_space.n,),\n device=self.device,\n checkpoint_dir=self.checkpoint_dir\n )\n self.networks = [self.net]\n\n self.load_network_checkpoints()\n \n self.target_net = copy.deepcopy(self.net)\n self.target_net.requires_grad_(False)\n\n self.n_steps = self.global_config.get(\"n_steps\", 1)\n self.gamma = self.global_config.get(\"gamma\", 0.99)\n\n self.epsilon = epsilon\n self.epsilon_dec = epsilon_dec\n self.epsilon_end = epsilon_end\n\n self.use_hard_update = use_hard_update\n self.hard_update_freq = hard_update_freq\n self.soft_update_tau = soft_update_tau\n\n self.policy_update_freq = policy_update_freq\n\n self.total_steps = 0\n\n def initialize_actors(self):\n self._update_policy()\n\n def train(self):\n for data in self.data_stream:\n self.net.optimizer.zero_grad()\n loss = self._compute_loss(data)\n loss.backward()\n self.net.optimizer.step()\n \n self._update_target()\n self._update_policy()\n \n self.total_steps += 1\n\n def _compute_loss(self, data: Dict[str, torch.Tensor]) -> torch.Tensor:\n state = data[\"state\"]\n action = data[\"action\"]\n ret = data[\"return\"]\n nth_state = data[\"nth_state\"]\n done = data[\"done\"]\n \n batch_index = np.arange(state.shape[0], 
dtype=np.int32)\r\n\r\n q_val = self.net(state)[batch_index, action.to(torch.long)]\r\n\r\n q_next = self.target_net(nth_state)\r\n q_next[done] = 0.0\r\n q_target = ret.float() + (self.gamma**self.n_steps) * torch.max(q_next, dim=-1)[0]\r\n\r\n loss = F.mse_loss(q_val, q_target)\r\n return loss\r\n\r\n def _update_target(self):\r\n if self.use_hard_update:\r\n if self.total_steps % self.hard_update_freq == 0:\r\n hard_update(self.net, self.target_net)\r\n else:\r\n soft_update(self.net, self.target_net, self.soft_update_tau)\r\n\r\n def _update_policy(self):\r\n if self.epsilon > self.epsilon_end:\r\n self.epsilon = max(self.epsilon - self.epsilon_dec, self.epsilon_end)\r\n self.send_actor_update({\"epsilon\": self.epsilon})\r\n\r\n if self.total_steps % self.policy_update_freq == 0:\r\n self.send_network_update({\r\n \"net\": self.net.state_dict()\r\n })\r\n\r\n "
},
{
"alpha_fraction": 0.7822349667549133,
"alphanum_fraction": 0.7822349667549133,
"avg_line_length": 37.72222137451172,
"blob_id": "00e22c6131ceab29ac93d6650a40073c96cd71b9",
"content_id": "f200fa4a46c4edc3fa480a0cbc3066d0dd4ce36d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 698,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 18,
"path": "/asrel/core/workers/events.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "ENV_INTERACT_TASK = \"ENV_INTERACT_TASK\"\n\nACTOR_CHOOSE_ACTION_TASK = \"ACTOR_CHOOSE_ACTION_TASK\"\nACTOR_SYNC_NETWORKS_TASK = \"ACTOR_SYNC_NETWORKS_TASK\"\nACTOR_UPDATE_PARAMS_TASK = \"ACTOR_UPDATE_PARAMS_TASK\"\n\nSTORE_ADD_EXPERIENCE_TASK = \"STORE_ADD_EXPERIENCE_TASK\"\n\nLEARNER_TRAIN_TASK = \"LEARNER_TRAIN_TASK\"\nLEARNER_SAVE_NETWORKS_TASK = \"LEARNER_SAVE_NETWORKS_TASK\"\n\nWORKER_TERMINATE_TASK = \"WORKER_TERMINATE_TASK\"\n\nRETURNED_OBSERVATION_EVENT = \"RETURNED_OBSERVATION_EVENT\"\nRETURNED_ACTION_EVENT = \"RETURNED_ACTION_EVENT\"\nRETURNED_BATCH_DATA_EVENT = \"RETURNED_BATCH_DATA_EVENT\"\nRETURNED_NETWORK_UPDATE_EVENT = \"RETURNED_NETWORK_UPDATE_EVENT\"\nRETURNED_ACTOR_UPDATE_EVENT = \"RETURNED_ACTOR_UPDATE_EVENT\"\n\n"
},
{
"alpha_fraction": 0.680086076259613,
"alphanum_fraction": 0.6816226243972778,
"avg_line_length": 29.69811248779297,
"blob_id": "cc6e12446ade5b9f5c24b7b42f764cb674f5352b",
"content_id": "42423c55bef055bd6aaa791a6d9305a3d672a189",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3254,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 106,
"path": "/asrel/core/orchestrator.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from multiprocessing.queues import Queue, Empty\nimport numpy as np\nimport signal\nimport sys\nimport time\nimport timeit\nimport torch.multiprocessing as mp\nfrom typing import Dict, List, Tuple, Type\n\nfrom asrel.core.registry import WorkerRegistry\nfrom asrel.core.utils import set_worker_rng, validate_subclass\nimport asrel.core.workers.events as events\nfrom asrel.pipelines.base import BasePipeline\n\nclass Orchestrator(mp.Process):\n  def __init__(\n    self,\n    registry: WorkerRegistry,\n    seed_seq: np.random.SeedSequence,\n    pipeline_class_configs: List[Tuple[Type[BasePipeline], Dict]],\n    global_config: Dict = {},\n  ):\n    super().__init__()\n    self.registry = registry\n    self.seed_seq = seed_seq\n\n    for pipeline_class, _ in pipeline_class_configs:\n      validate_subclass(pipeline_class, BasePipeline)\n    self.pipeline_class_configs = pipeline_class_configs\n    self.global_config = global_config\n\n  def setup(self):\n    signal.signal(signal.SIGINT, signal.SIG_IGN)\n    signal.signal(signal.SIGTERM, self._exit)\n\n    print(f\"Started {mp.current_process().name}.\")\n\n    self.start_time = timeit.default_timer()\n\n    set_worker_rng(self.seed_seq)\n\n    self.process_state = {\n      \"running\": True,\n      \"total_steps\": 0,\n      \"total_episodes\": 0,\n      \"actors_initialized\": False,\n    }\n    self.shared_dict = {}\n\n  def run(self):\n    self.setup()\n\n    pipelines = self._create_pipelines()\n    for pipeline in pipelines: pipeline.start()\n    for pipeline in pipelines: pipeline.join()\n\n    self.cleanup()\n\n  def cleanup(self):\n    print(f\"{self.process_state['total_episodes']} episodes finished. {self.process_state['total_steps']} steps ran. Orchestration ended.\")\n\n    end_time = timeit.default_timer()\n    duration = end_time - self.start_time\n    print(f\"Duration: {duration} seconds\")\n    if self.process_state['total_steps']: print(f\"Average time steps per second: {self.process_state['total_steps'] / duration}\")\n\n    print(f\"Terminating workers... Please wait until termination is complete.\")\n    self._terminate_workers()\n\n    print(f\"Terminated {mp.current_process().name}.\")\n\n  def _create_pipelines(self):\n    pipelines = []\n    for pipeline_class, pipeline_config in self.pipeline_class_configs:\n      pipeline = pipeline_class(\n        registry=self.registry,\n        shared_dict=self.shared_dict,\n        process_state=self.process_state,\n        **pipeline_config,\n        global_config=self.global_config,\n      )\n      pipelines.append(pipeline)\n\n    return pipelines\n\n  def _terminate_workers(self):\n    for key, input_queues in self.registry.input_queues.items():\n      for input_queue in input_queues:\n        self._flush_queue(input_queue, timeout=0)\n        input_queue.put({\n          \"type\": events.WORKER_TERMINATE_TASK\n        })\n\n    for key, output_queues in self.registry.output_queues.items():\n      for output_queue in output_queues:\n        self._flush_queue(output_queue, timeout=0)\n\n  def _flush_queue(self, queue: Queue, timeout: int = 0):\n    try:\n      while True: queue.get(block=True, timeout=timeout)\n    except Empty:\n      pass\n\n  def _exit(self, signum, frame):\n    self.process_state[\"running\"] = False\n    print(\"Terminating pipelines... Please wait until termination is complete.\")\n"
},
{
"alpha_fraction": 0.6839728951454163,
"alphanum_fraction": 0.6839728951454163,
"avg_line_length": 20.14285659790039,
"blob_id": "a50c0d7dc64fdccd4e39123ec0bf5b9de36dea73",
"content_id": "742c1d4551b9a848af5683abaa3f8783030e4007",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 443,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 21,
"path": "/asrel/environments/base.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\n\nimport numpy as np\nfrom typing import Any, Dict, Tuple\n\nclass BaseEnvironment(ABC):\n def __init__(self):\n self.observation_space = None\n self.action_space = None\n\n @abstractmethod\n def reset(self) -> Any:\n pass\n\n @abstractmethod\n def step(self, action: Any) -> Tuple[Any, float, bool, Dict]:\n pass\n\n @abstractmethod\n def seed(self, seed: int) -> Tuple[np.random.Generator, int]:\n pass"
},
{
"alpha_fraction": 0.6086956262588501,
"alphanum_fraction": 0.6133540272712708,
"avg_line_length": 22.035715103149414,
"blob_id": "75d1f8c3e1542688049d77d28cd57da90236f3ab",
"content_id": "815f67d9b058224b142afec0b32e4c51aa36bf7f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 644,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 28,
"path": "/asrel/environments/gym/wrappers/frame_skip.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import gym\nimport numpy as np\nfrom typing import Dict, Tuple\n\nclass FrameSkipGymWrapper(gym.Wrapper):\n \"\"\"\n Each step sends the same action for n frames. \n Returns the last observation.\n \"\"\"\n def __init__(\n self, \n env: gym.Env, \n skip_len: int = 4\n ):\n super().__init__(env)\n self.env = env\n self.skip_len = max(skip_len, 1)\n \n def step(\n self, \n action: np.ndarray\n ) -> Tuple[np.ndarray, float, bool, Dict]:\n total_reward = 0\n for _ in range(self.skip_len):\n obs, reward, done, info = self.env.step(action)\n total_reward += reward\n if done: break\n return obs, total_reward, done, info"
},
{
"alpha_fraction": 0.606914222240448,
"alphanum_fraction": 0.6248399615287781,
"avg_line_length": 21.314285278320312,
"blob_id": "de2220fce75df2a7f9acf1b540653e7f58385ed0",
"content_id": "f8e4380209a3689febc25a8fc5c1ec70c23d184a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 781,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 35,
"path": "/asrel/environments/gym/wrappers/atari_observation.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import gym\nimport numpy as np\nimport skimage.color\nimport skimage.transform\nfrom typing import List, Tuple\n\nclass AtariObservationGymWrapper(gym.ObservationWrapper):\n \"\"\"\n Crop the observation horizontally into a square and scale it to size. \n \"\"\"\n def __init__(\n self, \n env: gym.Env, \n crop: List[int] = [0, 210],\n size: int = 84,\n ):\n super().__init__(env)\n self.crop = crop\n self.shape = (size, size)\n\n self.observation_space = gym.spaces.Box(\n low=0., \n high=1., \n shape=self.shape, \n dtype=np.float32\n )\n\n def observation(\n self, \n obs: np.ndarray,\n ) -> np.ndarray:\n obs = skimage.color.rgb2gray(obs)\n obs = obs[self.crop[0]:self.crop[1]+1]\n obs = skimage.transform.resize(obs, self.shape)\n return obs\n"
},
{
"alpha_fraction": 0.6480262875556946,
"alphanum_fraction": 0.6513158082962036,
"avg_line_length": 19.266666412353516,
"blob_id": "d7b00f2f12e7f1a566edcec82040e3165440db29",
"content_id": "a5acd17ab8311584b780f29ac05ef78049f4db66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 304,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 15,
"path": "/asrel/pipelines/utils.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from multiprocessing.queues import Queue, Full\nfrom typing import Any, Callable\n\ndef put_while(\n queue: Queue, \n task: Any, \n predicate: Callable[[], bool], \n timeout: int = 3\n):\n while predicate():\n try:\n queue.put(task, block=True, timeout=timeout)\n break\n except Full:\n pass\n"
},
{
"alpha_fraction": 0.7006134986877441,
"alphanum_fraction": 0.7006134986877441,
"avg_line_length": 24.375,
"blob_id": "a01c9e99485041b257bcb9fc05e94efa54f135c9",
"content_id": "d0b6815cff3a68a172046fa91da73549adeabe86",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 815,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 32,
"path": "/asrel/actors/base.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\nfrom collections import OrderedDict\nimport pathlib\nimport torch\nfrom typing import Any, Dict\n\nfrom asrel.core.utils import DEFAULT_CHECKPOINT_DIR\n\nclass BaseActor(ABC):\n def __init__(\n self,\n device: str = \"cpu\",\n global_config: Dict = {},\n **kwargs\n ):\n self.input_space = global_config[\"input_space\"]\n self.output_space = global_config[\"output_space\"]\n self.checkpoint_dir = pathlib.Path(global_config.get(\"checkpoint_dir\", DEFAULT_CHECKPOINT_DIR))\n\n self.device = torch.device(device)\n\n @abstractmethod\n def choose_action(self, obs: torch.Tensor, greedy: bool = False) -> torch.Tensor:\n pass\n\n @abstractmethod\n def sync_networks(self, state_dicts: Dict[str, OrderedDict]):\n pass\n\n @abstractmethod\n def update(self, **kwargs):\n pass\n\n "
},
{
"alpha_fraction": 0.613851010799408,
"alphanum_fraction": 0.6201469302177429,
"avg_line_length": 23.435897827148438,
"blob_id": "c2bdc2ec3ba500a75086d43549ca68f1124d0e42",
"content_id": "90f412f1be00611c8312032d5cac260f7b44219e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 953,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 39,
"path": "/asrel/environments/gym/wrappers/frame_history.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import gym\nimport numpy as np\nfrom typing import Dict, Tuple\n\nclass FrameHistoryGymWrapper(gym.Wrapper):\n \"\"\"\n Return the last n frames as a single observation.\n This is used to get information on velocity.\n \"\"\"\n def __init__(\n self, \n env: gym.Env, \n history_len: int = 4\n ):\n super().__init__(env)\n self.env = env\n self.history_len = history_len\n self.observation_space.shape = (self.history_len, *self.observation_space.shape)\n \n def reset(self) -> np.ndarray:\n obs = self.env.reset()\n self.history = np.repeat(\n np.expand_dims(obs, axis=0), \n self.history_len, \n axis=0,\n )\n return self.history\n\n def step(\n self, \n action: np.ndarray\n ) -> Tuple[np.ndarray, float, bool, Dict]:\n obs, reward, done, info = self.env.step(action)\n self.history = np.append(\n self.history[:-1],\n np.expand_dims(obs, axis=0),\n axis=0,\n )\n return self.history, reward, done, info\n"
},
{
"alpha_fraction": 0.6579247713088989,
"alphanum_fraction": 0.6584948897361755,
"avg_line_length": 29.241378784179688,
"blob_id": "f73718f7a761c82a85777f9547e26b421d434449",
"content_id": "8d73df7d502a2424690f7725095e794c44568767",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1754,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 58,
"path": "/asrel/pipelines/policy/standard.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from collections import OrderedDict\nfrom multiprocessing.queues import Empty\nimport numpy as np\nfrom typing import Dict, List, Tuple\n\nfrom asrel.core.utils import take_tensors_from_state_dicts\nimport asrel.core.workers.events as events\nfrom asrel.pipelines.base import BasePipeline\n\nclass StandardPolicyPipeline(BasePipeline):\n  \"\"\"\n  Standard policy pipeline\n  \"\"\"\n  def __init__(\n    self,\n    **kwargs,\n  ):\n    super().__init__(**kwargs)\n\n    self.learner_output_queue = self.registry.output_queues[\"learner\"][0]\n    self.actor_input_queues = self.registry.input_queues[\"actor\"]\n\n    self.actor_q_count = len(self.actor_input_queues)\n\n  def run(self):\n    actors_initialized = self.process_state[\"actors_initialized\"]\n\n    while self.process_state[\"running\"]:\n      output = self._get_output()\n      if not output: continue\n\n      if output[\"type\"] == events.RETURNED_NETWORK_UPDATE_EVENT:\n        state_dicts = take_tensors_from_state_dicts(output[\"state_dicts\"])\n        task = {\n          \"type\": events.ACTOR_SYNC_NETWORKS_TASK,\n          \"state_dicts\": state_dicts,\n        }\n      elif output[\"type\"] == events.RETURNED_ACTOR_UPDATE_EVENT:\n        task = dict(\n          output,\n          type=events.ACTOR_UPDATE_PARAMS_TASK,\n        )\n      else:\n        continue\n\n      for actor_input_queue in self.actor_input_queues:\n        self.send_task(actor_input_queue, task)\n\n      if not actors_initialized and task[\"type\"] == events.ACTOR_SYNC_NETWORKS_TASK:\n        actors_initialized = True\n        self.process_state[\"actors_initialized\"] = True\n\n  def _get_output(self) -> Dict:\n    try:\n      output = self.learner_output_queue.get(block=True, timeout=self.queue_timeout)\n      return output\n    except Empty:\n      return None\n"
},
{
"alpha_fraction": 0.8153846263885498,
"alphanum_fraction": 0.8153846263885498,
"avg_line_length": 31.5,
"blob_id": "5ff1825c05fd6a4b63d662135f552ea481b207c6",
"content_id": "831f0e445bf6720b4f3bfb377249009ea897f36e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 65,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 2,
"path": "/README.md",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "# asrel\nAsynchronous Reinforcement Learning - A Work in Progress\n"
},
{
"alpha_fraction": 0.6916666626930237,
"alphanum_fraction": 0.6979166865348816,
"avg_line_length": 27.294116973876953,
"blob_id": "e7e5920cd0bf56d49169e68ea80c180e0e158cc9",
"content_id": "8286e50deefb4ce5d58339191984eeb81dc468b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 480,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 17,
"path": "/asrel/learners/utils.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import torch\nimport torch.nn as nn\n\ndef soft_update(\n source: nn.Module, \n target: nn.Module, \n tau: float = 1e-4\n):\n for source_param, target_param in zip(source.parameters(), target.parameters()):\n target_param.data.copy_(tau * source_param.data + (1. - tau) * target_param.data)\n\ndef hard_update(\n source: nn.Module,\n target: nn.Module\n):\n for source_param, target_param in zip(source.parameters(), target.parameters()):\n target_param.data.copy_(source_param.data)"
},
{
"alpha_fraction": 0.6834112405776978,
"alphanum_fraction": 0.6834112405776978,
"avg_line_length": 28.517240524291992,
"blob_id": "a9df0057a313674285684d2cb7cd7b99dcc3a376",
"content_id": "c92a89c5cfd984aa110babe256a3e17da0e7e199",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 856,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 29,
"path": "/asrel/environments/gym/__init__.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from functools import partial\nimport gym\nfrom typing import Dict, List, Type\n\nfrom asrel.core.utils import get_class_from_module_path\nfrom asrel.environments.base import BaseEnvironment\n\nclass GymEnvironment(gym.Wrapper, BaseEnvironment):\n def __init__(\n self, \n id: str,\n wrappers: List[Dict], \n **kwargs\n ):\n env = gym.make(id)\n for wrapper_config in wrappers:\n wrapper = self.get_wrapper_from_config(wrapper_config)\n env = wrapper(env)\n \n super().__init__(env)\n\n self.env = env\n\n def get_wrapper_from_config(self, config: Dict) -> Type[gym.Wrapper]:\n wrapper_path = config[\"path\"]\n wrapper_class_name = config.get(\"class\")\n wrapper_class = get_class_from_module_path(wrapper_path, class_name=wrapper_class_name, class_suffix=\"GymWrapper\")\n \n return partial(wrapper_class, **config.get(\"conf\", {}))\n"
},
{
"alpha_fraction": 0.6539953351020813,
"alphanum_fraction": 0.6586501002311707,
"avg_line_length": 28.295454025268555,
"blob_id": "bc09ada7081f33179bcb308ade6f7655c3f6b6b7",
"content_id": "c4cfbed8fee29ffc174f6dfe5378051cd785f1c0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1289,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 44,
"path": "/asrel/core/registry.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from multiprocessing.queues import Queue\nimport torch.multiprocessing as mp\nfrom typing import Dict, Tuple\n\nclass WorkerRegistry:\n def __init__(self, shared = {}, **kwargs):\n self.input_queues = {}\n self.output_queues = {}\n self.configs = {}\n\n self.shared = shared\n\n def register(\n self,\n worker_type: str, \n config: Dict,\n input_maxsize: int = 0,\n output_maxsize: int = 0,\n ) -> Tuple[int, Queue, Queue]:\n if worker_type not in self.configs:\n self.configs[worker_type] = []\n if worker_type not in self.input_queues:\n self.input_queues[worker_type] = []\n if worker_type not in self.output_queues:\n self.output_queues[worker_type] = []\n\n configs = self.configs[worker_type]\n configs.append(config)\n idx = len(configs) - 1\n\n input_queues = self.input_queues[worker_type]\n input_queue = mp.Queue(maxsize=input_maxsize)\n input_queues.append(input_queue)\n\n output_queues = self.output_queues[worker_type]\n if self.shared.get(worker_type, False):\n if len(output_queues) == 0:\n output_queues.append(mp.Queue(maxsize=0))\n output_queue = output_queues[0]\n else:\n output_queue = mp.Queue(maxsize=output_maxsize)\n output_queues.append(output_queue)\n\n return idx, input_queue, output_queue\n"
},
{
"alpha_fraction": 0.6698312163352966,
"alphanum_fraction": 0.6719409227371216,
"avg_line_length": 25.36111068725586,
"blob_id": "47cc7b6857a598ce3bb97498e4d3b744f477caff",
"content_id": "8b76ef519f4539d60d47f0e7a43ec377de76b561",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 948,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 36,
"path": "/asrel/networks/base.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import pathlib\nimport torch\nimport torch.nn as nn\n\nfrom asrel.core.utils import DEFAULT_CHECKPOINT_DIR, DEFAULT_DEVICE\n\nclass BaseNetwork(nn.Module):\n def __init__(\n self, \n name: str = \"network\", \n checkpoint_dir: pathlib.Path = DEFAULT_CHECKPOINT_DIR, \n device: torch.device = DEFAULT_DEVICE,\n **kwargs\n ):\n super().__init__()\n self.name = name\n self.checkpoint_file = checkpoint_dir/self.name\n\n self.setup(**kwargs)\n\n self.device = device\n self.to(self.device)\n\n def setup(self, **kwargs):\n raise NotImplementedError\n\n def get_output_shape(self, layer, input_shape):\n return layer(torch.zeros(1, *input_shape)).shape[1:]\n \n def save_checkpoint(self):\n print(f\"--- saving checkpoint {self.name} ---\")\n torch.save(self.state_dict(), self.checkpoint_file)\n\n def load_checkpoint(self):\n print(f\"--- loading checkpoint {self.name} ---\")\n self.load_state_dict(torch.load(self.checkpoint_file))"
},
{
"alpha_fraction": 0.6518575549125671,
"alphanum_fraction": 0.652240514755249,
"avg_line_length": 25.632652282714844,
"blob_id": "e01b84666dc0ff0624c97862bc4e8a36b1becee0",
"content_id": "03c5434f35bdd039c510a94a000f7cc89b0a28a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2611,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 98,
"path": "/asrel/core/workers/learner.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from collections import OrderedDict\nimport torch.multiprocessing as mp\nfrom multiprocessing.queues import Queue\nimport numpy as np\nimport signal\nimport torch\nfrom typing import Dict, Iterable, List, Type\n\nfrom asrel.core.utils import set_worker_rng, take_tensor_from_dict, validate_subclass\nimport asrel.core.workers.events as events\nfrom asrel.learners.base import BaseLearner\n\n\nclass LearnerWorker(mp.Process):\n  def __init__(\n    self,\n    input_queue: Queue,\n    output_queue: Queue,\n    seed_seq: np.random.SeedSequence,\n    learner_class: Type[BaseLearner],\n    learner_config: Dict = {},\n    global_config: Dict = {},\n    index: int = 0,\n    **kwargs,\n  ):\n    super().__init__()\n\n    self.input_queue = input_queue\n    self.output_queue = output_queue\n    self.seed_seq = seed_seq\n\n    validate_subclass(learner_class, BaseLearner)\n    self.learner_class = learner_class\n    self.learner_config = learner_config\n\n    self.global_config = global_config\n    self.index = index\n\n  def setup(self):\n    signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n    print(f\"Started {mp.current_process().name}.\")\n\n    set_worker_rng(self.seed_seq)\n\n    self.learner = self.learner_class(\n      data_stream=self.get_data_stream(),\n      send_network_update=self.send_network_update,\n      send_actor_update=self.send_actor_update,\n      **self.learner_config,\n      global_config=self.global_config\n    )\n\n  def run(self):\n    self.setup()\n    self.learner.initialize_actors()\n    self.learner.train()\n    self.cleanup()\n\n  def cleanup(self):\n    print(f\"Terminated {mp.current_process().name}.\")\n\n  def send_network_update(self, state_dicts: Dict[str, OrderedDict]):\n    task = {\n      \"type\": events.RETURNED_NETWORK_UPDATE_EVENT,\n      \"state_dicts\": state_dicts,\n    }\n    self.output_queue.put(task)\n\n  def send_actor_update(self, actor_params: Dict):\n    task = {\n      \"type\": events.RETURNED_ACTOR_UPDATE_EVENT,\n      **actor_params,\n    }\n    self.output_queue.put(task)\n\n  def get_data_stream(self) -> Iterable:\n    keys = None\n\n    while True:\n      task = self.input_queue.get()\n      if task[\"type\"] == events.LEARNER_TRAIN_TASK:\n        data = task[\"data\"]\n        if keys is None: keys = list(data.keys())\n        tensor_data = {\n          key: take_tensor_from_dict(data, key)\n          if isinstance(data[key], torch.Tensor)\n          else data[key]\n          for key in keys\n        }\n\n        yield tensor_data\n\n      elif task[\"type\"] == events.LEARNER_SAVE_NETWORKS_TASK:\n        self.learner.save_network_checkpoints()\n\n      elif task[\"type\"] == events.WORKER_TERMINATE_TASK:\n        break\n\n"
},
{
"alpha_fraction": 0.5878655314445496,
"alphanum_fraction": 0.5922383069992065,
"avg_line_length": 27.146154403686523,
"blob_id": "0832470f552a943e83ea96951fd16265fdff4f00",
"content_id": "4efc5e2a51dec035ffa3a949ed3525b681902e00",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3659,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 130,
"path": "/asrel/core/workers/environment.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import torch.multiprocessing as mp\nfrom multiprocessing.queues import Queue\nimport numpy as np\nimport signal\nfrom typing import Any, Dict, List, Tuple, Type\n\nfrom asrel.core.utils import validate_subclass\nimport asrel.core.workers.events as events\nfrom asrel.environments.base import BaseEnvironment\n\n\nclass EnvironmentWorker(mp.Process):\n  def __init__(\n    self,\n    input_queue: Queue,\n    output_queue: Queue,\n    seed_seq: np.random.SeedSequence,\n    env_class: Type[BaseEnvironment],\n    num_envs: int = 2,\n    env_config: Dict = {},\n    global_config: Dict = {},\n    index: int = 0,\n    **kwargs,\n  ):\n    super().__init__()\n\n    self.input_queue = input_queue\n    self.output_queue = output_queue\n    self.seed_seq = seed_seq\n\n    validate_subclass(env_class, BaseEnvironment)\n    self.env_class = env_class\n    self.num_envs = num_envs\n    self.env_config = env_config\n\n    self.global_config = global_config\n    self.index = index\n\n  def setup(self):\n    signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n    print(f\"Started {mp.current_process().name}.\")\n\n    self.envs = [self.env_class(**self.env_config) for _ in range(self.num_envs)]\n    self.env_seed_seqs = self.seed_seq.spawn(self.num_envs)\n    for i, seed_seq in enumerate(self.env_seed_seqs):\n      self.envs[i].seed(seed_seq.generate_state(1).item())\n\n    self.episode_scores = [0. for _ in range(self.num_envs)]\n    self.episode_steps = [0 for _ in range(self.num_envs)]\n    self.episode_dones = [True for _ in range(self.num_envs)]\n\n    self.total_steps = 0\n\n  def run(self):\n    self.setup()\n\n    for idx in range(self.num_envs):\n      obs = self._reset_env(idx)\n      self.output_queue.put({\n        \"type\": events.RETURNED_OBSERVATION_EVENT,\n        \"observation\": obs,\n        \"reward\": 0.,\n        \"score\": 0.,\n        \"episode_step\": 0,\n        \"episode_done\": False,\n        \"task_done\": False,\n        \"env_idx\": (self.index, idx),\n      })\n\n    while True:\n      task = self.input_queue.get()\n      task_type = task[\"type\"]\n\n      if task_type == events.ENV_INTERACT_TASK:\n        _, idx = task[\"env_idx\"]\n\n        if self.episode_dones[idx]:\n          obs = self._reset_env(idx)\n          self.output_queue.put({\n            \"type\": events.RETURNED_OBSERVATION_EVENT,\n            \"observation\": obs,\n            \"reward\": 0.,\n            \"score\": 0.,\n            \"episode_step\": 0,\n            \"episode_done\": False,\n            \"task_done\": False,\n            \"env_idx\": (self.index, idx),\n          })\n        else:\n          action = task[\"action\"]\n          obs, reward, done, info = self._step_env(idx, action)\n          self.output_queue.put({\n            \"type\": events.RETURNED_OBSERVATION_EVENT,\n            \"observation\": obs,\n            \"reward\": reward,\n            \"score\": self.episode_scores[idx],\n            \"episode_step\": self.episode_steps[idx],\n            \"episode_done\": done,\n            \"task_done\": done and not info.get(\"TimeLimit.truncated\", False),\n            \"env_idx\": (self.index, idx),\n          })\n\n      elif task_type == events.WORKER_TERMINATE_TASK:\n        break\n\n    self.cleanup()\n\n  def cleanup(self):\n    print(f\"Terminated {mp.current_process().name}.\")\n\n  def _reset_env(self, idx: int) -> Any:\n    obs = self.envs[idx].reset()\n\n    self.episode_scores[idx] = 0\n    self.episode_steps[idx] = 0\n    self.episode_dones[idx] = False\n\n    return obs\n\n  def _step_env(self, idx: int, action: Any) -> Tuple[Any, float, bool, Dict]:\n    obs, reward, done, info = self.envs[idx].step(action)\n\n    self.episode_scores[idx] += reward\n    self.episode_steps[idx] += 1\n    self.episode_dones[idx] = done\n\n    self.total_steps += 1\n\n    return obs, reward, done, info\n"
},
{
"alpha_fraction": 0.6370896100997925,
"alphanum_fraction": 0.646850049495697,
"avg_line_length": 27.200000762939453,
"blob_id": "050b39cd6dbf9098ad1184140040188a2c4d46df",
"content_id": "dd8663b27315efaabe8f68e04dca64fe1e8ef749",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1127,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 40,
"path": "/asrel/networks/feed_forward.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom typing import Dict, List\n\nfrom asrel.core.utils import get_instance_from_config\nfrom asrel.networks.base import BaseNetwork\n\nclass FeedForwardNetwork(BaseNetwork):\n def setup(\n self,\n input_size: List[int],\n output_size: List[int],\n hidden_layers: List[int] = [256, 256],\n activation: Dict = {},\n optimizer: Dict = {},\n ):\n layer_dims = [input_size[0]] + hidden_layers\n layers = []\n for i in range(len(layer_dims)-1):\n linear = nn.Linear(layer_dims[i], layer_dims[i+1])\n activ = get_instance_from_config(nn, **activation, default_class=nn.ReLU)\n seq = nn.Sequential(linear, activ)\n layers.append(seq)\n self.ff_layers = nn.ModuleList(layers)\n self.out_layer = nn.Linear(layer_dims[-1], output_size[0])\n\n self.optimizer = get_instance_from_config(\n optim, \n params=self.parameters(), \n default_class=optim.Adam, \n **optimizer\n )\n\n def forward(self, x: torch.Tensor):\n x = x.float()\n for layer in self.ff_layers:\n x = layer(x)\n out = self.out_layer(x)\n return out"
},
{
"alpha_fraction": 0.6110599040985107,
"alphanum_fraction": 0.6152995228767395,
"avg_line_length": 30,
"blob_id": "88de2b4c0ef7d6b8f2a70c1e1610df68f9f48894",
"content_id": "a834846e066aa4bc77000935f3c4927146c630a1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5425,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 175,
"path": "/asrel/core/workers/tests.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import json\nimport torch.multiprocessing as mp\nimport numpy as np\nimport time\nimport torch\nfrom threading import Thread\nfrom typing import Callable, Dict, List\n\ndef test_environment_worker(config: Dict, seeds: Dict[str, np.random.SeedSequence]):\n  \"\"\"\n  Function to run isolated env workers and manually test them from the cli.\n  \"\"\"\n  from asrel.core.utils import get_env_args_from_config\n  from asrel.core.workers.environment import EnvironmentWorker\n  import asrel.core.workers.events as events\n\n  env_args = get_env_args_from_config(config[\"environment\"])\n\n  print(f\"Testing Environment Worker with args: {env_args}\")\n\n  num_workers = env_args.get(\"num_workers\", 1)\n  num_envs = env_args.get(\"num_envs\", 2)\n\n  print(\"Creating workers...\")\n\n  env_input_queues = [mp.Queue(maxsize=num_envs) for _ in range(num_workers)]\n  env_shared_output_queue = mp.Queue(maxsize=num_workers * num_envs)\n  env_worker_seed_seqs = seeds[\"environment\"].spawn(num_workers)\n\n  env_workers = [\n    EnvironmentWorker(\n      input_queue=env_input_queues[idx],\n      output_queue=env_shared_output_queue,\n      seed_seq=env_worker_seed_seqs[idx],\n      index=idx,\n      **env_args,\n    )\n    for idx in range(num_workers)\n  ]\n  for worker in env_workers:\n    worker.start()\n\n  print(f\"Started workers...\")\n\n  try:\n    while True:\n      time.sleep(2)\n      out = env_shared_output_queue.get()\n      worker_idx, _ = out[\"env_idx\"]\n      print(f\"worker {worker_idx}:\")\n      print({**out, \"observation\": f\"... [{out['observation'].shape}]\"})\n      while env_shared_output_queue.qsize() > 0:\n        out = env_shared_output_queue.get()\n        worker_idx, _ = out[\"env_idx\"]\n        print(f\"worker {worker_idx}:\")\n        print({**out, \"observation\": f\"... [{out['observation'].shape}]\"})\n\n      print(\"in: \")\n      task = int(input(\"0 - Interact: \"))\n      if task == 0:\n        worker_idx = int(input(\"  worker: \",))\n        sub_idx = int(input(\"  sub: \"))\n        action = int(input(\"  action: \"))\n        env_input_queues[worker_idx].put({\n          \"type\": events.ENV_INTERACT_TASK,\n          \"action\": action,\n          \"env_idx\": (worker_idx, sub_idx),\n        })\n  except (KeyboardInterrupt, Exception) as e:\n    print()\n    print(\"Terminating workers...\")\n    for worker in env_workers:\n      worker.terminate()\n    print(e)\n  else:\n    print(\"Closing worker...\")\n    for worker in env_workers:\n      worker.close()\n\n  for worker in env_workers:\n    worker.join()\n\ndef test_actor_worker(config: Dict, seeds: Dict[str, np.random.SeedSequence]):\n  \"\"\"\n  Function to run isolated actor workers and manually test them from the cli.\n  \"\"\"\n  from gym.spaces import Box, Discrete\n  from asrel.core.utils import get_actor_args_from_config, take_tensor_from_dict\n  from asrel.core.workers.actor import ActorWorker\n  import asrel.core.workers.events as events\n\n  actor_args = get_actor_args_from_config(config[\"actor\"])\n\n  print(f\"Testing Actor Worker with args: {actor_args}\")\n\n  num_workers = actor_args.get(\"num_workers\", 1)\n  input_queue_len = 8\n  input_space = Box(-10, 10, (6, ), np.float32)\n  input_space.seed(0)\n  output_space = Discrete(3)\n\n  print(\"Creating workers...\")\n\n  actor_input_queues = [mp.Queue(maxsize=input_queue_len) for _ in range(num_workers)]\n  actor_shared_output_queue = mp.Queue(maxsize=num_workers*input_queue_len)\n  actor_worker_seed_seqs = seeds[\"actor\"].spawn(num_workers)\n\n  actor_workers = [\n    ActorWorker(\n      input_queue=actor_input_queues[idx],\n      output_queue=actor_shared_output_queue,\n      seed_seq=actor_worker_seed_seqs[idx],\n      input_space=input_space,\n      output_space=output_space,\n      index=idx,\n      **actor_args,\n    )\n    for idx in range(num_workers)\n  ]\n\n  for worker in actor_workers:\n    worker.start()\n\n  try:\n    while True:\n      task = int(input(\"0 - Choose Action, 1 - Sync Networks, 2 - Update Params: \"))\n      if task == 0:\n        worker_idx = int(input(\"  worker: \",))\n\n        num_obs = int(input(\"  # of obs: \"))\n        obs = torch.tensor([input_space.sample() for _ in range(num_obs)]).cuda()\n        print(f\"  obs:\\n{obs}\")\n        env_worker_idx = int(input(\"  env worker: \"))\n        env_sub_idx = int(input(\"  subenv: \"))\n        greedy = input(\"  greedy (y/n): \").lower() == \"y\"\n        actor_input_queues[worker_idx].put({\n          \"type\": events.ACTOR_CHOOSE_ACTION_TASK,\n          \"observation\": obs,\n          \"greedy\": greedy,\n          \"env_idx\": (env_worker_idx, env_sub_idx),\n        })\n        out = actor_shared_output_queue.get()\n        out_action = take_tensor_from_dict(out, \"action\")\n        print(f\"worker {worker_idx}:\")\n        print({**out, \"action\": out_action})\n\n      elif task == 1:\n        state_dicts = json.loads(input(\"State Dictionaries: \"))\n        for q in actor_input_queues:\n          q.put({\n            \"type\": events.ACTOR_SYNC_NETWORKS_TASK,\n            \"state_dicts\": state_dicts,\n          })\n      elif task == 2:\n        params = json.loads(input(\"Params: \"))\n        for q in actor_input_queues:\n          q.put({\n            \"type\": events.ACTOR_UPDATE_PARAMS_TASK,\n            **params,\n          })\n\n  except (KeyboardInterrupt, Exception) as e:\n    print()\n    print(\"Terminating workers...\")\n    for worker in actor_workers:\n      worker.terminate()\n    print(e)\n  else:\n    print(\"Closing worker...\")\n    for worker in actor_workers:\n      worker.close()\n\n  for worker in actor_workers:\n    worker.join()\n"
},
{
"alpha_fraction": 0.6868884563446045,
"alphanum_fraction": 0.6868884563446045,
"avg_line_length": 25.894737243652344,
"blob_id": "3003578764925dc413550e86335739da2b64162a",
"content_id": "9530e66146241e1bcb56c70093a93cf028b81a32",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 511,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 19,
"path": "/asrel/__main__.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom asrel.core.runner import Runner\nfrom asrel.core.utils import get_args\n\nif __name__ == \"__main__\":\n args = get_args()\n\n # if args.test_env_worker:\n # from asrel.core.workers.tests import test_environment_worker\n # test_environment_worker(config, seeds)\n # sys.exit()\n\n # if args.test_actor_worker:\n # from asrel.core.workers.tests import test_actor_worker\n # test_actor_worker(config, seeds)\n # sys.exit()\n\n asrel_runner = Runner(args.config)\n asrel_runner.start()\n"
},
{
"alpha_fraction": 0.646826982498169,
"alphanum_fraction": 0.6487978100776672,
"avg_line_length": 23.394229888916016,
"blob_id": "9e9787289822c567eebe2f35e416d9d5ac11e6e9",
"content_id": "950d4b5e4933bc6bf114bb81cac9c5d365a8befa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2537,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 104,
"path": "/asrel/core/workers/store.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import torch.multiprocessing as mp\nfrom multiprocessing.queues import Queue\nimport numpy as np\nimport signal\nfrom threading import Thread\nimport time\nfrom typing import Dict, Type\n\nfrom asrel.core.utils import take_tensor_from_dict, set_worker_rng, validate_subclass\nimport asrel.core.workers.events as events\nfrom asrel.stores.base import BaseExperienceStore\n\nWARMUP_WAIT = 3 # secs\n\nclass ExperienceStoreWorker(mp.Process):\n def __init__(\n self,\n input_queue: Queue,\n output_queue: Queue,\n seed_seq: np.random.SeedSequence,\n store_class: Type[BaseExperienceStore],\n warmup_steps: int = 0,\n store_config: Dict = {},\n global_config: Dict = {},\n index: int = 0,\n **kwargs,\n ):\n super().__init__()\n\n self.input_queue = input_queue\n self.output_queue = output_queue\n self.seed_seq = seed_seq\n\n validate_subclass(store_class, BaseExperienceStore)\n self.store_class = store_class\n self.store_config = store_config\n\n self.warmup_steps = warmup_steps\n\n self.global_config = global_config\n self.index = index\n\n def setup(self):\n signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n print(f\"Started {mp.current_process().name}.\")\n\n set_worker_rng(self.seed_seq)\n self.store = self.store_class(\n **self.store_config,\n global_config=self.global_config,\n )\n\n self.running = True\n self.warmup_done = False\n\n def start_buffer_loader(self):\n self.buffer_loader = Thread(target=self._load_to_buffer)\n self.buffer_loader.start()\n\n def run(self): \n self.setup()\n self.start_buffer_loader()\n\n warmup = 0\n\n while True:\n task = self.input_queue.get()\n task_type = task[\"type\"]\n\n if task_type == events.STORE_ADD_EXPERIENCE_TASK:\n exp = task[\"experience\"]\n self._add_experience(exp)\n\n if not self.warmup_done:\n warmup += 1\n self.warmup_done = warmup >= self.warmup_steps\n \n elif task_type == events.WORKER_TERMINATE_TASK:\n self.running = False\n break\n\n self.cleanup()\n\n def cleanup(self):\n self.buffer_loader.join()\n 
print(f\"Terminated {mp.current_process().name}.\")\n\n def _load_to_buffer(self):\n while self.running:\n if not self.warmup_done:\n time.sleep(WARMUP_WAIT)\n continue\n \n batch_exp = self.store.sample()\n\n task = {\n \"type\": events.RETURNED_BATCH_DATA_EVENT,\n \"data\": batch_exp,\n }\n self.output_queue.put(task)\n\n def _add_experience(self, experience: Dict):\n self.store.add(experience)\n"
},
{
"alpha_fraction": 0.6645789742469788,
"alphanum_fraction": 0.6694502234458923,
"avg_line_length": 28.244897842407227,
"blob_id": "7ce5a465f8606d5367ef8b418ab2d5f672c3c3a5",
"content_id": "0ba22d342b1d4c05081e84b9adf7e4fcc9aa4ac9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1437,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 49,
"path": "/asrel/actors/discrete/greedy.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from collections import OrderedDict\nfrom gym import Space\nimport torch\nimport torch.nn.functional as F\nfrom typing import Any, Dict, List, Optional\n\nfrom asrel.actors.base import BaseActor\nfrom asrel.core.utils import get_net_from_config\nfrom asrel.networks.base import BaseNetwork\n\nclass GreedyDiscreteActor(BaseActor):\n def __init__(\n self,\n net: Dict = {},\n **kwargs\n ):\n super().__init__(**kwargs)\n\n self.net: BaseNetwork = get_net_from_config(\n net,\n input_size=self.input_space.shape,\n output_size=(self.output_space.n,),\n device=self.device,\n checkpoint_dir=self.checkpoint_dir,\n )\n self.net.requires_grad_(False)\n self.net.eval()\n\n self.epsilon = 0\n\n def choose_action(self, obs: torch.Tensor, greedy: bool = False) -> torch.Tensor:\n out = self.net(obs)\n greedy_actions = torch.argmax(out, dim=-1)\n if greedy: return greedy_actions\n \n one_hot = F.one_hot(greedy_actions, num_classes=out.shape[-1])\n eps = torch.full(out.shape, self.epsilon / out.shape[-1]).to(self.device)\n probs = one_hot * (1 - self.epsilon) + eps\n\n actions = torch.multinomial(probs, 1, replacement=True).view(-1)\n return actions\n\n def sync_networks(self, state_dicts: Dict[str, OrderedDict]):\n if \"net\" in state_dicts:\n self.net.load_state_dict(state_dicts[\"net\"])\n\n def update(self, **kwargs):\n if \"epsilon\" in kwargs:\n self.epsilon = kwargs[\"epsilon\"]\n "
},
{
"alpha_fraction": 0.7049092650413513,
"alphanum_fraction": 0.7049092650413513,
"avg_line_length": 29.721311569213867,
"blob_id": "18900d4dd77b6db6a2aa6cb1aa6a17c9117ca091",
"content_id": "c9d0a8b2dd4d3d33bd76673e31e5c9883675dee9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1874,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 61,
"path": "/asrel/learners/base.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from abc import ABC, abstractmethod\nfrom collections import OrderedDict\nfrom gym import Space\nimport pathlib\nimport torch\nfrom typing import Callable, Dict, Iterable, List\n\nfrom asrel.core.utils import noop, set_worker_rng, DEFAULT_CHECKPOINT_DIR\nfrom asrel.networks.base import BaseNetwork\n\nclass BaseLearner(ABC):\n def __init__(\n self, \n data_stream: Iterable, \n send_network_update: Callable[[Dict[str, OrderedDict]], None] = noop,\n send_actor_update: Callable[[Dict], None] = noop,\n device: str = \"cpu\",\n global_config: Dict = {},\n **kwargs\n ):\n self.data_stream = data_stream\n self._send_network_update = send_network_update\n self._send_actor_update = send_actor_update\n\n self.device = torch.device(device)\n\n self.global_config = global_config\n\n self.input_space = self.global_config[\"input_space\"]\n self.output_space = self.global_config[\"output_space\"]\n self.checkpoint_dir = pathlib.Path(self.global_config.get(\"checkpoint_dir\", DEFAULT_CHECKPOINT_DIR))\n\n self.networks: List[BaseNetwork] = []\n\n def send_network_update(self, state_dicts: Dict[str, OrderedDict]):\n self._send_network_update(state_dicts)\n \n def send_actor_update(self, actor_params: Dict):\n self._send_actor_update(actor_params)\n\n def save_network_checkpoints(self):\n if self.global_config.get(\"should_save\", True):\n for network in self.networks:\n network.save_checkpoint()\n\n def load_network_checkpoints(self):\n if self.global_config.get(\"should_load\", True):\n for network in self.networks:\n network.load_checkpoint()\n\n @abstractmethod\n def initialize_actors(self):\n \"\"\"\n Initialize actors by calling `self.send_network_update` here with the correct parameters.\n The standard observation pipeline will not start unless this is done.\n \"\"\"\n pass\n\n @abstractmethod\n def train(self):\n pass\n"
},
{
"alpha_fraction": 0.6664943099021912,
"alphanum_fraction": 0.6668390035629272,
"avg_line_length": 28.907217025756836,
"blob_id": "15b94351ac545415cfacc204bb3c7b8c9bdae8f3",
"content_id": "126e9529d1513e07269fa68fad5209c930ccab97",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5802,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 194,
"path": "/asrel/core/runner.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import torch.multiprocessing as mp\nfrom typing import Dict, List\n\nfrom asrel.core.orchestrator import Orchestrator\nfrom asrel.core.registry import WorkerRegistry\nfrom asrel.core.utils import (\n get_config, \n get_seed_sequences, \n get_registry_args_from_config,\n get_env_args_from_config, \n get_actor_args_from_config,\n get_store_args_from_config,\n get_learner_args_from_config,\n get_orchestrator_args_from_config,\n get_spaces_from_env_args,\n)\nfrom asrel.core.workers.actor import ActorWorker\nfrom asrel.core.workers.environment import EnvironmentWorker\nfrom asrel.core.workers.learner import LearnerWorker\nfrom asrel.core.workers.store import ExperienceStoreWorker\n\nclass Runner:\n def __init__(\n self,\n config_file: str,\n ):\n self.config = get_config(config_file)\n self.seed_seqs = get_seed_sequences(self.config)\n\n mp.set_start_method(\"spawn\")\n\n self.global_config = self._get_global_config()\n self.registry = self._create_registry()\n self.orchestrator = self._create_orchestrator()\n self.workers = self._create_workers()\n\n def start(self):\n print(\"Starting workers\")\n \n self.orchestrator.start()\n for worker in self.workers: worker.start()\n\n try:\n self.orchestrator.join()\n self._close_workers()\n except(KeyboardInterrupt, Exception) as e:\n print()\n print(e)\n self.orchestrator.terminate()\n self._close_workers()\n\n def _get_global_config(self):\n env_args = get_env_args_from_config(self.config[\"environment\"])\n observation_space, action_space = get_spaces_from_env_args(env_args)\n\n return {\n **self.config.get(\"global\", {}),\n \"input_space\": observation_space,\n \"output_space\": action_space,\n }\n\n def _create_registry(self) -> WorkerRegistry:\n print(\"Creating registry...\")\n registry_args = get_registry_args_from_config(self.config[\"registry\"])\n return WorkerRegistry(**registry_args)\n\n def _create_orchestrator(self) -> Orchestrator:\n print(\"Creating orchestrator...\")\n \n orchestrator_args = 
get_orchestrator_args_from_config(self.config[\"orchestrator\"])\n \n orchestrator = Orchestrator(\n registry=self.registry,\n seed_seq=self.seed_seqs[\"orchestrator\"],\n global_config=self.global_config,\n **orchestrator_args,\n )\n \n return orchestrator\n\n def _create_workers(self) -> List[mp.Process]:\n print(\"Creating workers...\")\n buffer_size = get_store_args_from_config(self.config[\"store\"])[\"buffer_size\"]\n\n env_workers = self._create_env_workers()\n actor_workers = self._create_actor_workers()\n store_worker = self._create_store_worker(buffer_size)\n learner_worker = self._create_learner_worker(buffer_size)\n\n return [\n *env_workers,\n *actor_workers,\n store_worker,\n learner_worker,\n ]\n\n def _create_env_workers(self) -> List[EnvironmentWorker]:\n print(\"Creating env workers...\")\n \n env_workers = []\n env_worker_args = get_env_args_from_config(self.config[\"environment\"])\n num_workers = env_worker_args[\"num_workers\"]\n num_envs = env_worker_args[\"num_envs\"]\n env_worker_seed_seqs = self.seed_seqs[\"environment\"].spawn(num_workers)\n \n for seed_seq in env_worker_seed_seqs:\n idx, input_queue, output_queue = self.registry.register(\n \"environment\", \n env_worker_args, \n input_maxsize=num_envs,\n output_maxsize=num_envs,\n )\n worker = EnvironmentWorker(\n input_queue=input_queue,\n output_queue=output_queue,\n seed_seq=seed_seq,\n global_config=self.global_config,\n index=idx,\n **env_worker_args,\n )\n env_workers.append(worker)\n\n return env_workers\n\n def _create_actor_workers(self) -> List[ActorWorker]:\n print(\"Creating actor workers...\")\n\n actor_workers = []\n actor_worker_args = get_actor_args_from_config(self.config[\"actor\"])\n num_workers = actor_worker_args[\"num_workers\"]\n actor_worker_seed_seqs = self.seed_seqs[\"actor\"].spawn(num_workers)\n\n for seed_seq in actor_worker_seed_seqs:\n idx, input_queue, output_queue = self.registry.register(\"actor\", actor_worker_args)\n worker = ActorWorker(\n 
input_queue=input_queue,\n output_queue=output_queue,\n seed_seq=seed_seq,\n global_config=self.global_config,\n index=idx,\n **actor_worker_args,\n )\n actor_workers.append(worker)\n\n return actor_workers\n\n def _create_store_worker(self, buffer_size: int = 0) -> ExperienceStoreWorker:\n print(\"Creating experience store worker...\")\n\n store_worker_args = get_store_args_from_config(self.config[\"store\"])\n\n idx, input_queue, output_queue = self.registry.register(\n \"store\", \n store_worker_args, \n output_maxsize=buffer_size\n )\n store_worker = ExperienceStoreWorker(\n input_queue=input_queue,\n output_queue=output_queue,\n seed_seq=self.seed_seqs[\"store\"],\n global_config=self.global_config,\n index=idx,\n **store_worker_args,\n )\n\n return store_worker\n\n def _create_learner_worker(self, buffer_size: int = 0) -> LearnerWorker:\n print(\"Creating learner worker...\")\n\n learner_worker_args = get_learner_args_from_config(self.config[\"learner\"])\n\n idx, input_queue, output_queue = self.registry.register(\n \"learner\",\n learner_worker_args,\n input_maxsize=buffer_size,\n )\n learner_worker = LearnerWorker(\n input_queue=input_queue,\n output_queue=output_queue,\n seed_seq=self.seed_seqs[\"learner\"],\n global_config=self.global_config,\n index=idx,\n **learner_worker_args,\n )\n\n return learner_worker\n\n def _close_workers(self):\n self.orchestrator.join()\n for worker in self.workers: worker.join()\n\n self.orchestrator.close()\n for worker in self.workers: worker.close()\n"
},
{
"alpha_fraction": 0.528777003288269,
"alphanum_fraction": 0.5467625856399536,
"avg_line_length": 14.44444465637207,
"blob_id": "c36b5d822b01e0a5c3524a13ad33ef7f24ded46f",
"content_id": "9ebcb424ed0015aa59a2dde4903cc5ffe81e31ac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 278,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 18,
"path": "/asrel/environments/gym/wrappers/unit_reward.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "import gym\n\nclass UnitRewardGymWrapper(gym.RewardWrapper):\n def __init__(\n self, \n env: gym.Env\n ):\n super().__init__(env)\n \n def reward(\n self, \n reward: float\n ) -> float:\n if reward > 0:\n return 1\n elif reward < 0:\n return -1\n return 0\n"
},
{
"alpha_fraction": 0.6584615111351013,
"alphanum_fraction": 0.6588461399078369,
"avg_line_length": 26.659574508666992,
"blob_id": "ef2a757f66185c1a8d156267b4c1e4b0e751180f",
"content_id": "229c3f23992b4a548705bf4d2d0570d13712377f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2600,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 94,
"path": "/asrel/core/workers/actor.py",
"repo_name": "ar-terence-co/asrel",
"src_encoding": "UTF-8",
"text": "from collections import OrderedDict\nfrom gym import Space\nimport torch.multiprocessing as mp\nfrom multiprocessing.queues import Queue\nimport numpy as np\nimport signal\nimport torch\nfrom typing import Any, Dict, List, Type\n\nfrom asrel.actors.base import BaseActor\nimport asrel.core.workers.events as events\nfrom asrel.core.utils import take_tensor_from_dict, take_tensors_from_state_dicts, set_worker_rng, validate_subclass\n\nclass ActorWorker(mp.Process):\n def __init__(\n self,\n input_queue: Queue,\n output_queue: Queue,\n seed_seq: np.random.SeedSequence,\n actor_class: Type[BaseActor],\n actor_config: Dict = {},\n global_config: Dict = {},\n index: int = 0,\n **kwargs,\n ):\n super().__init__()\n\n self.input_queue = input_queue\n self.output_queue = output_queue\n self.seed_seq = seed_seq\n\n validate_subclass(actor_class, BaseActor)\n self.actor_class = actor_class\n self.actor_config = actor_config\n\n self.global_config = global_config\n self.index = index\n\n def setup(self):\n signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n print(f\"Started {mp.current_process().name}.\")\n\n set_worker_rng(self.seed_seq)\n self.actor = self.actor_class(\n **self.actor_config, \n global_config=self.global_config,\n )\n\n def run(self): \n self.setup()\n\n while True:\n task = self.input_queue.get()\n task_type = task[\"type\"]\n\n if task_type == events.ACTOR_CHOOSE_ACTION_TASK:\n obs = take_tensor_from_dict(task, \"observation\")\n\n greedy = task.get(\"greedy\", False)\n\n action = self._choose_action(obs, greedy=greedy)\n\n self.output_queue.put({\n \"type\": events.RETURNED_ACTION_EVENT,\n \"action\": action,\n \"env_idx\": task[\"env_idx\"],\n \"actor_idx\": self.index,\n })\n\n elif task_type == events.ACTOR_SYNC_NETWORKS_TASK:\n state_dicts = take_tensors_from_state_dicts(task[\"state_dicts\"])\n self._sync_actor_networks(state_dicts)\n\n elif task_type == events.ACTOR_UPDATE_PARAMS_TASK:\n self._update_actor(task)\n\n elif task_type == 
events.WORKER_TERMINATE_TASK:\n break\n\n self.cleanup()\n\n def cleanup(self):\n print(f\"Terminated {mp.current_process().name}.\")\n\n def _choose_action(self, obs: torch.Tensor, greedy: bool = False) -> torch.Tensor:\n action = self.actor.choose_action(obs, greedy=greedy)\n return action\n\n def _sync_actor_networks(self, state_dicts: Dict[str, OrderedDict]):\n self.actor.sync_networks(state_dicts)\n \n def _update_actor(self, actor_params: Dict):\n self.actor.update(**actor_params)\n"
}
] | 34 |
benjaminpillot/pyrasta | https://github.com/benjaminpillot/pyrasta | f6ae57341d986d65a18abbedc81de5e4b87012b6 | 660564923f9e3a27974be968303d0dec95fb27e9 | 64c99bc4ebfe8f77b9143afc696a9785bbe0233e | refs/heads/master | 2023-06-01T14:25:38.600381 | 2021-06-25T09:35:17 | 2021-06-25T09:35:17 | 347,776,012 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6023142337799072,
"alphanum_fraction": 0.6035322546958923,
"avg_line_length": 25.063491821289062,
"blob_id": "9f2c4e208a5ac5f6a4418da4f2ecd413f9925311",
"content_id": "62a7e714aedb0aabd163874cb1f1c6639eda727b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1642,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 63,
"path": "/pyrasta/tools/__init__.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nfrom functools import wraps\n\nfrom pyrasta.io_.files import RasterTempFile\n\nimport gdal\n\n\ndef _return_raster(function):\n @wraps(function)\n def return_raster(raster, *args, **kwargs):\n with RasterTempFile(raster._gdal_driver.GetMetadata()['DMD_EXTENSION']) as out_file:\n function(raster, out_file.path, *args, **kwargs)\n new_raster = raster.__class__(out_file.path)\n new_raster._temp_file = out_file\n\n return new_raster\n return return_raster\n\n\ndef _gdal_temp_dataset(out_file, gdal_driver, projection, x_size, y_size,\n nb_band, geo_transform, data_type, no_data):\n \"\"\" Create gdal temporary dataset\n\n \"\"\"\n try:\n out_ds = gdal_driver.Create(out_file, x_size, y_size, nb_band, data_type)\n except RuntimeError:\n out_ds = gdal.GetDriverByName('Gtiff').Create(out_file, x_size,\n y_size, nb_band, data_type)\n\n out_ds.SetGeoTransform(geo_transform)\n out_ds.SetProjection(projection)\n _set_no_data(out_ds, no_data)\n\n return out_ds\n\n\ndef _set_no_data(gdal_ds, no_data):\n \"\"\" Set no data value into gdal dataset\n\n Description\n -----------\n\n Parameters\n ----------\n gdal_ds: gdal.Dataset\n gdal dataset\n no_data: list or tuple\n list of no data values corresponding to each raster band\n\n \"\"\"\n for band in range(gdal_ds.RasterCount):\n try:\n gdal_ds.GetRasterBand(band + 1).SetNoDataValue(no_data)\n except TypeError:\n pass\n"
},
{
"alpha_fraction": 0.669550895690918,
"alphanum_fraction": 0.6950968503952026,
"avg_line_length": 37.52381134033203,
"blob_id": "0be0ebc6aa2b27afd0c6e821ae6aa8b2359db4c8",
"content_id": "aa019b1b02b3eb48959b7329278e6e0ff684caff",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2427,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 63,
"path": "/tests.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nimport numpy as np\nimport osmnx as ox\nfrom gistools.layer import PolygonLayer\nfrom matplotlib import pyplot\n\nfrom pyrasta.raster import DigitalElevationModel, Raster\nfrom pyrasta.tools.clip import _clip_raster_by_mask\nfrom pyrasta.tools.rasterize import _rasterize\nfrom pyrasta.tools.srtm import from_cgiar_online_database\n\n# bounds = (5, 40, 10, 45)\n# test = from_cgiar_online_database(bounds)\n# test.to_file(\"/home/benjamin/srtm_corse.tif\")\n\n# test = DigitalElevationModel(\"/home/benjamin/dem_test.tif\").clip((13, 42, 15, 45)).to_file(\n# \"/home/benjamin/dem_clip_test.tif\")\n# pop = Raster(\"/home/benjamin/Documents/PRO/PRODUITS/POPULATION_DENSITY/001_DONNEES/COTE_D_IVOIRE\"\n# \"/population_civ_2019-07-01_geotiff/population_civ_2019-07-01.tif\")\nfrom pyrasta.tools.stats import _zonal_stats\n\ndem = DigitalElevationModel(\"/home/benjamin/Documents/PRO/PRODUITS/TESTS/dem_ci.tif\")\ntest = dem.to_crs(32630).slope(\"degree\")\ntest.to_file(\"/home/benjamin/dem_slope_ci.tif\")\n\ncountry = PolygonLayer.from_gpd(ox.geocode_to_gdf(\n dict(country=\"Cote d'Ivoire\",\n admin_level=2,\n type=\"boundary\"))).to_crs(32630).clean_geometry()\n\n# dem = from_cgiar_online_database(country.total_bounds)\n# dem.to_file(\"/home/benjamin/dem_ci.tif\")\n\nhoneycomb = country.split(country.area[0] / 100, method=\"hexana\", show_progressbar=True)\n\nhoneycomb = honeycomb.to_crs(dem.crs)\nhoneycomb.to_file(\"/home/benjamin/Documents/PRO/PRODUITS/TESTS/honeycomb.shp\")\nhoneycomb[\"ID\"] = honeycomb.index\n\n# test = dem.clip(mask=honeycomb[[15]], all_touched=True)\n# test.to_file(\"/home/benjamin/pop.tif\")\n\n# test = dem.zonal_stats(honeycomb, stats=[\"mean\", \"median\"])\n\n# test = dem.rasterize(honeycomb, dem.projection, dem.x_size, dem.y_size, dem.geo_transform,\n# attribute=\"ID\", nb_band=1)\n# test = _rasterize(DigitalElevationModel, honeycomb, None, 
\"ID\", dem._gdal_driver,\n# dem._gdal_dataset.GetProjection(),\n# dem.x_size, dem.y_size, 1, dem.geo_transform, dem.data_type, -999, True)\n# test = DigitalElevationModel(test.path)\n# test.to_file(\"/home/benjamin/test_ci.tif\")\n\nstats = _zonal_stats(dem, honeycomb, 1, ['mean', 'median', 'max', 'min'], dict(mymax=np.max), -999,\n True, True, 6)\n\nprint(stats.keys())\nprint(\"mymax: %s\" % stats[\"mymax\"])\nprint(len(honeycomb))\n"
},
{
"alpha_fraction": 0.7222009897232056,
"alphanum_fraction": 0.7286970019340515,
"avg_line_length": 40.55555725097656,
"blob_id": "2f9c5a9ff883d8239074f3cccdea57144c371032",
"content_id": "88f3b56ca3850bbb3e0afbc2f67c2c7789aa00f4",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2617,
"license_type": "permissive",
"max_line_length": 128,
"num_lines": 63,
"path": "/README.md",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# PyRasta\n\n[](https://pypi.python.org/pypi/pyrasta/)\n[](https://framagit.org/benjaminpillot/pyrasta/activity)\n[](https://pypi.python.org/pypi/pyrasta/)\n\nSome tools for fast and easy raster processing, based on gdal (numpy usage is reduced to the minimum).\n\n## Introduction\nPyRasta is a small Python library which aims at interfacing gdal functions and methods in an easy \nway, so that users may only focus on the processes they want to apply rather than on the code. The\nlibrary is based on gdal stream and multiprocessing in order to reduce CPU time due to large numpy \narray imports. This is especially useful for basic raster arithmetic operations, sliding window \nmethods as well as zonal statistics.\n\n## Basic available operations\n* [x] Merging, clipping, re-projecting, padding, resampling, rescaling, windowing\n* [x] Raster calculator to design your own operations\n* [x] Fast raster zonal statistics\n* [x] Automatically download and merge SRTM DEM(s) from CGIAR online database\n\n## Install\nPip installation should normally take care of everything for you.\n\n### Using PIP\n\nThe easiest way to install PyRasta is by using ``pip`` in a terminal\n```\n$ pip install pyrasta\n```\n\n\n## Examples\n\n### Build digital elevation model from CGIAR SRTM site\n```python\nfrom pyrasta.tools.srtm import from_cgiar_online_database\nbounds = (23, 34, 32, 45)\ndem = from_cgiar_online_database(bounds)\n```\n\n### Fast clipping of raster by extent or by mask\n```python\nfrom pyrasta.raster import Raster\nimport geopandas\nraster_by_extent = Raster(\"/path/to/your/raster\").clip(bounds=(10, 40, 15, 45))\nraster_by_mask = Raster(\"/path/to/your/raster\").clip(mask=geopandas.GeoDataFrame.from_file(\"/path/to/your/layer\"))\n```\n\n### Fast Zonal Statistics\nFast computing of raster zonal statistics within features of a given geographic layer, \nby loading in memory only the data we need (and not the whole numpy array as it is often \nthe case in other packages) + 
using multiprocessing. You may use the basic\nstatistic functions already available in the package, or define your own customized functions.\n```python\n\nfrom pyrasta.raster import Raster\nimport geopandas\nrstats = Raster(\"/path/to/your/raster\").zonal_stats(geopandas.GeoDataFrame.from_file(\"/path/to/your/layer\"),\n stats=[\"mean\", \"median\", \"min\", \"max\"],\n customized_stats={\"my_mean\": my_mean})\n\n```"
},
{
"alpha_fraction": 0.5466836094856262,
"alphanum_fraction": 0.551410436630249,
"avg_line_length": 27.976436614990234,
"blob_id": "1a6294c2ebbe1c96515e77edf492235e6973929d",
"content_id": "2e7629447d27bda0349bc025083481007e0b457f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 19675,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 679,
"path": "/pyrasta/base.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nimport multiprocessing as mp\nimport warnings\n\nimport pyproj\nfrom pyrasta.io_.files import _copy_to_file\nfrom pyrasta.tools.calculator import _op, _raster_calculation\nfrom pyrasta.tools.clip import _clip_raster_by_extent, _clip_raster_by_mask\nfrom pyrasta.tools.conversion import _resample_raster, _padding, _rescale_raster, \\\n _align_raster, _extract_bands, _merge_bands, _read_array, _xy_to_2d_index, _read_value_at, \\\n _project_raster, _array_to_raster\nfrom pyrasta.exceptions import RasterBaseError\nfrom pyrasta.tools.mask import _raster_mask\nfrom pyrasta.tools.merge import _merge\nfrom pyrasta.tools.rasterize import _rasterize\nfrom pyrasta.tools.stats import _histogram, _zonal_stats\nfrom pyrasta.tools.windows import _windowing\nfrom pyrasta.utils import lazyproperty, grid, MP_CHUNK_SIZE\n\nimport gdal\n\ngdal.UseExceptions()\n\n\nclass RasterBase:\n\n def __init__(self, src_file, no_data=None):\n \"\"\" Raster class constructor\n\n Description\n -----------\n\n Parameters\n ----------\n src_file: str\n valid path to raster file\n no_data: int or float\n Set no data value only if it is not already defined in raster file\n \"\"\"\n try:\n self._gdal_dataset = gdal.Open(src_file)\n except RuntimeError as e:\n raise RasterBaseError('\\nGDAL returns: \\\"%s\\\"' % e)\n\n # If NoData not defined, define here\n for band in range(self.nb_band):\n if no_data is not None:\n if self._gdal_dataset.GetRasterBand(band + 1).GetNoDataValue() is None:\n self._gdal_dataset.GetRasterBand(band + 1).SetNoDataValue(no_data)\n else:\n warnings.warn(\"No data value is already set, cannot overwrite.\")\n\n self._gdal_driver = self._gdal_dataset.GetDriver()\n self._file = src_file\n\n def __add__(self, other):\n \"\"\" Add two raster\n\n \"\"\"\n return _op(self, other, \"add\")\n\n def __sub__(self, other):\n \"\"\" Subtract two raster\n\n \"\"\"\n return _op(self, 
other, \"sub\")\n\n def __mul__(self, other):\n \"\"\" Multiply two raster\n\n \"\"\"\n return _op(self, other, \"mul\")\n\n def __truediv__(self, other):\n \"\"\" Divide two raster\n\n \"\"\"\n return _op(self, other, \"truediv\")\n\n def __del__(self):\n self._gdal_dataset = None\n\n def align_raster(self, other):\n \"\"\" Align raster on other\n\n Description\n -----------\n\n Parameters\n ----------\n other: RasterBase\n other RasterBase instance\n\n \"\"\"\n\n return _align_raster(self, other)\n\n def clip(self, bounds=None, mask=None, no_data=-999, all_touched=True):\n \"\"\" Clip raster\n\n Parameters\n ----------\n bounds: tuple\n tuple (x_min, y_min, x_max, y_max) in map units\n mask: geopandas.GeoDataFrame\n Valid mask layer\n no_data: int or float\n No data value\n all_touched: bool\n if True, all touched pixels within layer boundaries are burnt,\n when clipping raster by mask\n\n Returns\n -------\n RasterBase:\n New temporary instance\n\n \"\"\"\n if bounds is not None:\n return _clip_raster_by_extent(self, bounds, no_data)\n elif mask is not None:\n return _clip_raster_by_mask(self, mask, no_data, all_touched)\n else:\n raise ValueError(\"Either bounds or mask must be set\")\n\n def extract_bands(self, bands):\n \"\"\" Extract bands as multiple rasters\n\n Description\n -----------\n\n Parameters\n ----------\n bands: list\n list of band numbers\n\n Returns\n -------\n \"\"\"\n return _extract_bands(self, bands)\n\n @classmethod\n def from_array(cls, array, crs, bounds,\n gdal_driver=gdal.GetDriverByName(\"Gtiff\"),\n no_data=-999):\n\n return _array_to_raster(cls, array, crs, bounds, gdal_driver, no_data)\n\n def histogram(self, nb_bins=10, normalized=True):\n \"\"\" Compute raster histogram\n\n Description\n -----------\n\n Parameters\n ----------\n nb_bins: int\n number of bins for histogram\n normalized: bool\n if True, normalize histogram frequency values\n\n Returns\n -------\n\n \"\"\"\n return _histogram(self, nb_bins, normalized)\n\n def 
mask(self, mask, gdal_driver=gdal.GetDriverByName(\"Gtiff\"),\n data_type=gdal.GetDataTypeByName('Float32'),\n all_touched=True, no_data=-999, window_size=100,\n nb_processes=mp.cpu_count(), chunksize=MP_CHUNK_SIZE):\n \"\"\" Apply mask to raster\n\n Parameters\n ----------\n mask: geopandas.geodataframe or gistools.layer.GeoLayer\n Mask layer as a GeoDataFrame or GeoLayer\n gdal_driver: osgeo.gdal.Driver\n Driver used to write data to file\n data_type: int\n GDAL data type\n all_touched: bool\n if True, all touched pixels within layer boundaries are burnt,\n when clipping raster by mask\n no_data: int or float\n output no data value in masked raster\n window_size: int or list[int, int]\n Size of window for raster calculation\n nb_processes: int\n Number of processes for multiprocessing\n chunksize: int\n chunk size used in imap multiprocessing function\n\n Returns\n -------\n\n \"\"\"\n return _raster_mask(self, mask, gdal_driver, data_type,\n no_data, all_touched, window_size,\n nb_processes, chunksize)\n\n @classmethod\n def merge(cls, rasters, bounds=None, output_format=\"Gtiff\",\n data_type=gdal.GetDataTypeByName('Float32'), no_data=-999):\n \"\"\" Merge multiple rasters\n\n Description\n -----------\n\n Parameters\n ----------\n rasters: Collection\n Collection of RasterBase instances\n bounds: tuple\n bounds of the new merged raster\n output_format:str\n raster file output format (Gtiff, etc.)\n data_type: int\n GDAL data type\n no_data: int or float\n output no data value in merged raster\n\n Returns\n -------\n\n \"\"\"\n return _merge(cls, rasters, bounds, output_format, data_type, no_data)\n\n @classmethod\n def merge_bands(cls, rasters, resolution=\"highest\",\n gdal_driver=gdal.GetDriverByName(\"Gtiff\"),\n data_type=gdal.GetDataTypeByName('Float32'),\n no_data=-999):\n \"\"\" Create one single raster from multiple bands\n\n Description\n -----------\n Create one raster from multiple bands using gdal\n\n Parameters\n ----------\n rasters: Collection\n 
Collection of RasterBase instances\n resolution: str\n GDAL resolution option (\"highest\", \"lowest\", \"average\")\n gdal_driver: osgeo.gdal.Driver\n data_type: int\n GDAL data type\n no_data: int or float\n no data value in output raster\n \"\"\"\n return _merge_bands(cls, rasters, resolution, gdal_driver, data_type, no_data)\n\n def pad_extent(self, pad_x, pad_y, value):\n \"\"\" Pad raster extent with given values\n\n Description\n -----------\n Pad raster extent, i.e. add pad value around raster bounds\n\n Parameters\n ----------\n pad_x: int\n x padding size (new width will therefore be RasterXSize + 2 * pad_x)\n pad_y: int\n y padding size (new height will therefore be RasterYSize + 2 * pad_y)\n value: int or float\n value to set to pad area around raster\n\n Returns\n -------\n RasterBase\n A padded RasterBase\n \"\"\"\n return _padding(self, pad_x, pad_y, value)\n\n @classmethod\n def rasterize(cls, layer, projection, x_size, y_size, geo_transform,\n burn_values=None, attribute=None,\n gdal_driver=gdal.GetDriverByName(\"Gtiff\"), nb_band=1,\n data_type=gdal.GetDataTypeByName(\"Float32\"), no_data=-999,\n all_touched=True):\n \"\"\" Rasterize geographic layer\n\n Parameters\n ----------\n layer: geopandas.GeoDataFrame or gistools.layer.GeoLayer\n Geographic layer to be rasterized\n projection: str\n Projection as a WKT string\n x_size: int\n Raster width\n y_size: int\n Raster height\n geo_transform: tuple\n burn_values: list[float] or list[int], default None\n List of values to be burnt in each band, exclusive with attribute\n attribute: str, default None\n Layer's attribute to be used for values to be burnt in raster,\n exclusive with burn_values\n gdal_driver: osgeo.gdal.Driver, default GeoTiff\n GDAL driver\n nb_band: int, default 1\n Number of bands\n data_type: int, default \"Float32\"\n GDAL data type\n no_data: int or float, default -999\n No data value\n all_touched: bool\n\n Returns\n -------\n\n \"\"\"\n return _rasterize(cls, layer, burn_values, 
attribute, gdal_driver, projection,\n x_size, y_size, nb_band, geo_transform, data_type, no_data,\n all_touched)\n\n @classmethod\n def raster_calculation(cls, rasters, fhandle, window_size=100,\n gdal_driver=gdal.GetDriverByName(\"Gtiff\"),\n data_type=gdal.GetDataTypeByName('Float32'),\n no_data=-999, nb_processes=mp.cpu_count(),\n chunksize=MP_CHUNK_SIZE,\n description=\"Calculate raster expression\"):\n \"\"\" Raster expression calculation\n\n Description\n -----------\n Calculate raster expression stated in \"fhandle\"\n such as: fhandle(raster1, raster2, etc.)\n Calculation is made for each band.\n\n Parameters\n ----------\n rasters: list or tuple\n collection of RasterBase instances\n fhandle: function\n expression to calculate (must accept a collection of arrays)\n window_size: int or (int, int)\n size of window/chunk to set in memory during calculation\n * unique value\n * tuple of 2D coordinates (width, height)\n gdal_driver: osgeo.gdal.Driver\n GDAL driver (output format)\n data_type: int\n GDAL data type for output raster\n no_data: int or float\n no data value in resulting raster\n nb_processes: int\n number of processes for multiprocessing pool\n chunksize: int\n chunk size used in map/imap multiprocessing function\n description: str\n Progress bar description\n\n Returns\n -------\n RasterBase\n New temporary instance\n \"\"\"\n return _raster_calculation(cls, rasters, fhandle, window_size,\n gdal_driver, data_type, no_data,\n nb_processes, chunksize, description)\n\n def read_array(self, band=None, bounds=None):\n \"\"\" Read raster into numpy array\n\n Parameters\n ----------\n band: int\n Band number. If None, read all bands into multidimensional array.\n bounds: tuple\n tuple as (x_min, y_min, x_max, y_max) in map units. 
If None, read\n the whole raster into array\n\n Returns\n -------\n numpy.ndarray\n\n \"\"\"\n return _read_array(self, band, bounds)\n\n def read_value_at(self, x, y):\n \"\"\" Read value in raster at x/y map coordinates\n\n Parameters\n ----------\n x: float\n lat coordinates in map units\n y: float\n lon coordinates in map units\n\n Returns\n -------\n\n \"\"\"\n return _read_value_at(self, x, y)\n\n def resample(self, factor):\n \"\"\" Resample raster\n\n Description\n -----------\n Resample raster with respect to resampling factor.\n The higher the factor, the higher the resampling.\n\n Parameters\n ----------\n factor: int or float\n Resampling factor\n\n Returns\n -------\n RasterBase\n New temporary resampled instance\n \"\"\"\n return _resample_raster(self, factor)\n\n def rescale(self, r_min, r_max):\n \"\"\" Rescale values from raster\n\n Description\n -----------\n\n Parameters\n ----------\n r_min: int or float\n minimum value of new range\n r_max: int or float\n maximum value of new range\n\n Returns\n -------\n \"\"\"\n return _rescale_raster(self, r_min, r_max)\n\n def to_crs(self, crs):\n \"\"\" Re-project raster onto new CRS\n\n Parameters\n ----------\n crs: int or str\n valid CRS (Valid EPSG code, valid proj string, etc.)\n\n Returns\n -------\n\n \"\"\"\n return _project_raster(self, crs)\n\n def to_file(self, filename):\n \"\"\" Write raster copy to file\n\n Description\n -----------\n Write raster to given file\n\n Parameters\n ----------\n filename: str\n File path to write to\n\n Return\n ------\n \"\"\"\n return _copy_to_file(self, filename)\n\n def windowing(self, f_handle, window_size, method, band=None,\n data_type=gdal.GetDataTypeByName('Float32'),\n no_data=None, chunk_size=100000, nb_processes=mp.cpu_count()):\n \"\"\" Apply function within sliding/block window\n\n Description\n -----------\n\n Parameters\n ----------\n f_handle: function\n window_size: int\n size of window\n method: str\n sliding window method ('block' or 
'moving')\n band: int\n raster band\n data_type: int\n gdal data type\n no_data: list or tuple\n raster no data\n chunk_size: int\n data chunk size for multiprocessing\n nb_processes: int\n number of processes for multiprocessing\n\n Return\n ------\n RasterBase\n New instance\n\n \"\"\"\n if band is None:\n band = 1\n\n if no_data is None:\n no_data = self.no_data\n\n return _windowing(self, f_handle, band, window_size, method,\n data_type, no_data, chunk_size, nb_processes)\n\n def xy_to_2d_index(self, x, y):\n \"\"\" Convert x/y map coordinates into 2d index\n\n Parameters\n ----------\n x: float\n x coordinates in map units\n y: float\n y coordinates in map units\n\n Returns\n -------\n tuple\n (px, py) index\n\n \"\"\"\n return _xy_to_2d_index(self, x, y)\n\n def zonal_stats(self, layer, band=1, stats=None, customized_stats=None,\n no_data=-999, all_touched=True, show_progressbar=True,\n nb_processes=mp.cpu_count()):\n \"\"\" Compute zonal statistics\n\n Compute statistic among raster values\n within each feature of given geographic layer\n\n Parameters\n ----------\n layer: geopandas.GeoDataFrame or gistools.layer.GeoLayer\n Geographic layer\n band: int\n Band number\n stats: list[str]\n list of valid statistic names\n \"mean\", \"median\", \"min\", \"max\", \"sum\", \"std\"\n customized_stats: dict\n User's own customized statistic functions\n as {'your_function_name': function}\n no_data: int or float\n No data value\n all_touched: bool\n Whether to include every raster cell touched by a geometry, or only\n those having a center point within the polygon.\n show_progressbar: bool\n If True, show progress bar status\n nb_processes: int\n number of processes for multiprocessing\n\n Returns\n -------\n dict[list]\n Dictionary with each statistic as a list corresponding\n to the values for each feature in layer\n\n \"\"\"\n if stats is not None:\n return _zonal_stats(self, layer, band, stats, customized_stats,\n no_data, all_touched, show_progressbar, 
nb_processes)\n\n @property\n def crs(self):\n \"\"\" Return Coordinate Reference System\n\n \"\"\"\n return pyproj.CRS(self._gdal_dataset.GetProjection())\n\n @lazyproperty\n def bounds(self):\n \"\"\" Return raster bounds\n\n \"\"\"\n return self.x_origin, self.y_origin - self.resolution[1] * self.y_size, \\\n self.x_origin + self.resolution[0] * self.x_size, self.y_origin\n\n @lazyproperty\n def geo_transform(self):\n return self._gdal_dataset.GetGeoTransform()\n\n @lazyproperty\n def grid_y(self):\n return [lat for lat in grid(self.y_origin + self.geo_transform[5]/2,\n self.geo_transform[5], self.y_size)]\n\n @lazyproperty\n def grid_x(self):\n return [lon for lon in grid(self.x_origin + self.geo_transform[1]/2,\n self.geo_transform[1], self.x_size)]\n\n @lazyproperty\n def max(self):\n \"\"\" Return raster maximum value for each band\n\n \"\"\"\n return [self._gdal_dataset.GetRasterBand(band + 1).ComputeRasterMinMax()[1]\n for band in range(self.nb_band)]\n\n @lazyproperty\n def mean(self):\n \"\"\" Compute raster mean for each band\n\n \"\"\"\n return [self._gdal_dataset.GetRasterBand(band + 1).ComputeStatistics(False)[2]\n for band in range(self.nb_band)]\n\n @lazyproperty\n def min(self):\n \"\"\" Return raster minimum value for each band\n\n \"\"\"\n return [self._gdal_dataset.GetRasterBand(band + 1).ComputeRasterMinMax()[0]\n for band in range(self.nb_band)]\n\n @lazyproperty\n def nb_band(self):\n \"\"\" Return raster number of bands\n\n \"\"\"\n return self._gdal_dataset.RasterCount\n\n @property\n def no_data(self):\n return self._gdal_dataset.GetRasterBand(1).GetNoDataValue()\n\n @lazyproperty\n def data_type(self):\n return self._gdal_dataset.GetRasterBand(1).DataType\n\n @lazyproperty\n def resolution(self):\n \"\"\" Return raster X and Y resolution\n\n \"\"\"\n return self.geo_transform[1], abs(self.geo_transform[5])\n\n @lazyproperty\n def std(self):\n \"\"\" Compute raster standard deviation for each band\n\n \"\"\"\n return 
[self._gdal_dataset.GetRasterBand(band + 1).ComputeStatistics(False)[3]\n for band in range(self.nb_band)]\n\n @lazyproperty\n def projection(self):\n \"\"\" Get projection as a WKT string\n\n \"\"\"\n return self._gdal_dataset.GetProjection()\n\n @lazyproperty\n def x_origin(self):\n return self.geo_transform[0]\n\n @lazyproperty\n def x_size(self):\n return self._gdal_dataset.RasterXSize\n\n @lazyproperty\n def y_origin(self):\n return self.geo_transform[3]\n\n @lazyproperty\n def y_size(self):\n return self._gdal_dataset.RasterYSize\n"
},
{
"alpha_fraction": 0.6528925895690918,
"alphanum_fraction": 0.6611570119857788,
"avg_line_length": 14.125,
"blob_id": "f9a1b67d3dc7dc8133c1e016da8480e2e856abbf",
"content_id": "f6c47e31c8654a613255ce4225960ebb5b4ad1a1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 121,
"license_type": "permissive",
"max_line_length": 31,
"num_lines": 8,
"path": "/pyrasta/io_/__init__.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nESRI_DRIVER = \"ESRI Shapefile\"\n"
},
{
"alpha_fraction": 0.6145393252372742,
"alphanum_fraction": 0.6170752048492432,
"avg_line_length": 28.575000762939453,
"blob_id": "b961ad58903630ce11683d4d3e93a9fc7417f80e",
"content_id": "c45b7ec786b72e30f4b2aa7325c246b901187ae2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2366,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 80,
"path": "/pyrasta/algorithms/classification.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nfrom functools import wraps\n\nimport numpy as np\nfrom pyrasta.io_.files import RasterTempFile\nfrom pyrasta.tools import _gdal_temp_dataset\nfrom sklearn.cluster import KMeans\n\nimport gdal\n\n\nCLASSIFICATION_NO_DATA = -1\n\n\ndef return_classification(classification):\n @wraps(classification)\n def _return_classification(raster, nb_classes, *args, **kwargs):\n with RasterTempFile(raster._gdal_driver.GetMetadata()['DMD_EXTENSION']) as out_file:\n out_ds = _gdal_temp_dataset(out_file.path, raster._gdal_driver,\n raster._gdal_dataset.GetProjection(),\n raster.x_size, raster.y_size, 1,\n raster.geo_transform, gdal.GetDataTypeByName('Int16'),\n no_data=CLASSIFICATION_NO_DATA)\n labels = classification(raster, nb_classes, out_ds, *args, **kwargs)\n\n # Close dataset\n out_ds = None\n\n new_raster = raster.__class__(out_file.path)\n new_raster._temp_file = out_file\n\n return new_raster\n return _return_classification\n\n\n@return_classification\ndef _k_means_classification(raster, nb_clusters, out_ds, *args, **kwargs):\n \"\"\" Apply k-means classification\n\n \"\"\"\n k_means_classifier = KMeans(nb_clusters, *args, **kwargs)\n samples = np.reshape(raster._gdal_dataset.ReadAsArray(),\n (raster.nb_band, raster.x_size * raster.y_size)).transpose()\n labels = k_means_classifier.fit(samples).labels_\n labels = np.reshape(labels, (raster.y_size, raster.x_size))\n\n out_ds.GetRasterBand(1).WriteArray(labels)\n\n\ndef k_means(raster, nb_clusters, *args, **kwargs):\n \"\"\" Compute k-means on given raster\n\n Description\n -----------\n Run k-means algorithm on raster data\n\n Warning\n -------\n As the algorithm requires to process all data, all values\n are written to memory. 
Be sure your machine has enough memory to run it.\n\n Parameters\n ----------\n raster\n nb_clusters\n args, kwargs :\n sklearn.cluster.KMeans arguments and keyword arguments\n (see scikit-learn documentation)\n\n Returns\n -------\n\n \"\"\"\n return _k_means_classification(raster, nb_clusters, *args, **kwargs)\n"
},
{
"alpha_fraction": 0.5941015481948853,
"alphanum_fraction": 0.6080875396728516,
"avg_line_length": 26.40833282470703,
"blob_id": "75bc971aa72fb0f9282111bcaba359c84e525660",
"content_id": "cfb5bdc3836bf1a292b88517bd2caa1a892bd5ad",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3289,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 120,
"path": "/pyrasta/tools/srtm.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom zipfile import ZipFile\nfrom urllib.error import URLError\nfrom urllib.request import urlretrieve\nimport tempfile\nimport os\n\nfrom tqdm import tqdm\n\nfrom pyrasta.raster import DigitalElevationModel\nfrom pyrasta.utils import digitize, TqdmUpTo\n\nimport gdal\n\n\nCGIAR_ARCHIVE_FORMAT = \"zip\"\nCGIAR_URL = \"http://srtm.csi.cgiar.org/wp-content/uploads/files/srtm_5x5/TIFF\"\nCGIAR_NO_DATA = -32768\nCGIAR_DATA_TYPE = gdal.GetDataTypeByName('Int16')\n\n\ndef _download_srtm_tile(tile_name):\n \"\"\" Download and extract SRTM tile archive\n\n Description\n -----------\n\n Parameters\n ----------\n tile_name: str\n SRTM tile name\n \"\"\"\n zip_name, tif_name = tile_name + \".\" + CGIAR_ARCHIVE_FORMAT, tile_name + '.tif'\n url = os.path.join(CGIAR_URL, zip_name)\n temp_srtm_zip = os.path.join(tempfile.gettempdir(), zip_name)\n temp_srtm_dir = os.path.join(tempfile.gettempdir(), tile_name)\n\n # Download tile\n try:\n with TqdmUpTo(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,\n desc=zip_name) as t:\n urlretrieve(url, temp_srtm_zip, reporthook=t.update_to)\n t.total = t.n\n except URLError as e:\n raise RuntimeError(\"Unable to fetch data at '%s': %s\" % (url, e))\n\n # Extract GeoTiff\n # In order to avoid conflict with the archive extraction,\n # the \"gdal import\" is located at the end of all the imports\n # See https://github.com/conda-forge/gdal-feedstock/issues/365\n with ZipFile(temp_srtm_zip, 'r') as archive:\n archive.extractall(temp_srtm_dir)\n\n return os.path.join(temp_srtm_dir, tif_name)\n\n\ndef _retrieve_cgiar_srtm_tiles(bounds):\n \"\"\" Import DEM tile from CGIAR-CSI SRTM3 database (V4.1)\n\n Description\n -----------\n\n Parameters\n ----------\n bounds: tuple or list\n output DEM bounds as (x_min, y_min, x_max, y_max)\n\n Returns\n -------\n list:\n list of SRTM tile file names\n\n \"\"\"\n srtm_lon = range(-180, 185, 
5)\n srtm_lat = range(60, -65, -5)\n x_min = digitize(bounds[0], srtm_lon, right=False)\n x_max = digitize(bounds[2], srtm_lon, right=True)\n y_min = digitize(bounds[3], srtm_lat, right=True, ascend=False)\n y_max = digitize(bounds[1], srtm_lat, right=False, ascend=False)\n\n list_of_tiles = []\n\n coords = [(x, y) for x in range(int(x_min), int(x_max) + 1)\n for y in range(int(y_min), int(y_max) + 1)]\n\n for (x, y) in tqdm(coords, desc=\"Downloading SRTM tile(s)\"):\n tile = _download_srtm_tile(\"srtm_%02d_%02d\" % (x, y))\n list_of_tiles.append(tile)\n\n return list_of_tiles\n\n\ndef from_cgiar_online_database(bounds):\n \"\"\" Build DEM tile from CGIAR-CSI SRTM3 database (V4.1)\n\n Description\n -----------\n\n Parameters\n ----------\n bounds: tuple or list\n output DEM bounds\n\n Returns\n -------\n DigitalElevationModel:\n new instance\n\n \"\"\"\n tiles = [DigitalElevationModel(tile) for tile in _retrieve_cgiar_srtm_tiles(bounds)]\n\n return DigitalElevationModel.merge(tiles,\n bounds,\n data_type=CGIAR_DATA_TYPE,\n no_data=CGIAR_NO_DATA)\n"
},
{
"alpha_fraction": 0.5432180762290955,
"alphanum_fraction": 0.5478723645210266,
"avg_line_length": 18.532466888427734,
"blob_id": "3458c8f25dff78e14887f3dcfee3260930d47c2b",
"content_id": "6b0a86a69dc469c77d6d4e36cea4d7d90d6020c8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1504,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 77,
"path": "/pyrasta/io_/files.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nimport os\nimport uuid\nfrom tempfile import mkstemp, gettempdir\n\n\ndef _copy_to_file(raster, out_file):\n \"\"\"\n\n \"\"\"\n try:\n out_ds = raster._gdal_driver.CreateCopy(out_file, raster._gdal_dataset, strict=0)\n out_ds = None\n return 0\n except RuntimeError:\n return 1\n\n\nclass File:\n\n def __init__(self, path):\n self.path = path\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n pass\n\n\nclass TempFile(File):\n\n def __del__(self):\n try:\n os.remove(self.path)\n except FileNotFoundError:\n pass\n\n\nclass NamedTempFile(TempFile):\n\n def __init__(self, extension):\n self.name = os.path.join(gettempdir(), str(uuid.uuid4()))\n super().__init__(self.name + \".\" + extension)\n\n\nclass ShapeTempFile(NamedTempFile):\n\n def __init__(self):\n super().__init__(\"shp\")\n\n def __del__(self):\n super().__del__()\n for ext in [\".shx\", \".dbf\", \".prj\", \".cpg\"]:\n try:\n os.remove(self.name + ext)\n except FileNotFoundError:\n pass\n\n\nclass RasterTempFile(TempFile):\n \"\"\" Create temporary raster file\n\n \"\"\"\n def __init__(self, extension):\n super().__init__(mkstemp(suffix='.' + extension)[1])\n\n\nclass VrtTempFile(TempFile):\n\n def __init__(self):\n super().__init__(mkstemp(suffix='.vrt')[1])\n"
},
{
"alpha_fraction": 0.5270727872848511,
"alphanum_fraction": 0.5293993353843689,
"avg_line_length": 33.510948181152344,
"blob_id": "2eda00b263d27f99f5c3307240b240a227b8122f",
"content_id": "f57c71d1eba73ea3b6cba86b916d46122c0b6398",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4728,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 137,
"path": "/pyrasta/tools/stats.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nimport multiprocessing as mp\nfrom functools import partial\nfrom itertools import tee\n\nimport numpy as np\nfrom tqdm import tqdm\n\n\nSTATISTIC_FUNC = dict(median=np.median,\n mean=np.mean,\n min=np.min,\n max=np.max,\n sum=np.sum)\n\n\ndef _histogram(raster, nb_bins, normalized):\n \"\"\" Compute histogram of raster values\n\n \"\"\"\n histogram = []\n\n for band in range(raster.nb_band):\n edges = np.linspace(raster.min, raster.max, nb_bins + 1)\n hist_x = edges[0:-1] + (edges[1::] - edges[0:-1])/2\n hist_y = np.asarray(\n raster._gdal_dataset.GetRasterBand(band + 1).GetHistogram(min=raster.min[band],\n max=raster.max[band],\n buckets=nb_bins))\n if normalized:\n hist_y = hist_y / np.sum(hist_y)\n\n histogram.append((hist_x, hist_y))\n\n return histogram\n\n\ndef _zonal_stats(raster, layer, band, stats, customized_stat,\n no_data, all_touched, show_progressbar, nb_processes):\n \"\"\" Retrieve zonal statistics from raster corresponding to features in layer\n \n Parameters\n ----------\n raster: RasterBase\n Raster from which zonal statistics must be computed\n layer: geopandas.GeoDataFrame or gistools.layer.GeoLayer\n Geographic layer as a GeoDataFrame or GeoLayer\n band: int\n band number\n stats: list[str]\n list of strings of valid available statistics:\n - 'mean' returns average over the values within each zone\n - 'median' returns median\n - 'sum' returns the sum of all values in zone\n - 'min' returns minimum value\n - 'max' returns maximum value\n customized_stat: dict\n User's own customized statistic function\n as {'your_function_name': function}\n no_data: int or float\n No data value\n all_touched: bool\n Whether to include every raster cell touched by a geometry, or only\n those having a center point within the polygon.\n show_progressbar: bool\n if True, show progress bar status\n nb_processes: int\n Number of parallel processes\n\n 
Returns\n -------\n\n \"\"\"\n def zone_gen(ras, bds, bd=1):\n for boundary in bds:\n try:\n yield ras.read_array(bd, boundary)\n except RuntimeError:\n yield None\n\n stats_calc = {name: STATISTIC_FUNC[name] for name in stats}\n if customized_stat is not None:\n stats_calc.update(customized_stat)\n\n layer[\"ID\"] = layer.index\n raster_layer = raster.rasterize(layer, raster.projection, raster.x_size,\n raster.y_size, raster.geo_transform,\n attribute=\"ID\", all_touched=all_touched)\n\n bounds = layer.bounds.to_numpy()\n zone = zone_gen(raster, bounds, band)\n zone_id = zone_gen(raster_layer, bounds)\n multi_gen = tee(zip(layer.index, zone, zone_id), len(stats_calc))\n\n iterator = zip(multi_gen, stats_calc.keys())\n\n output = dict()\n with mp.Pool(processes=nb_processes) as pool:\n if show_progressbar:\n for generator, name in iterator:\n output[name] = list(tqdm(pool.starmap(partial(_compute_stat_in_feature,\n no_data=raster.no_data,\n stat_function=stats_calc[name]),\n generator),\n total=len(layer),\n unit_scale=True,\n desc=f\"Compute zonal {name}\"))\n else:\n for generator, name in iterator:\n output[name] = list(pool.starmap(partial(_compute_stat_in_feature,\n no_data=raster.no_data,\n stat_function=stats_calc[name]),\n generator))\n\n return output\n\n\ndef _compute_stat_in_feature(idx, zone, zone_id, no_data, stat_function):\n\n if zone is not None and zone_id is not None:\n if np.isnan(no_data):\n values = zone[(zone_id == idx) & ~np.isnan(zone)]\n else:\n values = zone[(zone_id == idx) & (zone != no_data)]\n\n if values.size != 0:\n return stat_function(values)\n else:\n return np.nan\n else:\n return np.nan\n"
},
{
"alpha_fraction": 0.29427793622016907,
"alphanum_fraction": 0.3760218024253845,
"avg_line_length": 21.9375,
"blob_id": "de4861a4c96723e63ddda494182f1dcbbe84b279",
"content_id": "efb25f8291dc1b1d10be102a436f24f2f232a7ae",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 367,
"license_type": "permissive",
"max_line_length": 34,
"num_lines": 16,
"path": "/pyrasta/tools/mapping.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nGDAL_TO_NUMPY = {1: \"int8\",\n 2: \"uint16\",\n 3: \"int16\",\n 4: \"uint32\",\n 5: \"int32\",\n 6: \"float32\",\n 7: \"float64\",\n 10: \"complex64\",\n 11: \"complex128\"}\n"
},
{
"alpha_fraction": 0.6680790781974792,
"alphanum_fraction": 0.6694915294647217,
"avg_line_length": 29.782608032226562,
"blob_id": "0a944110af42d5ca7ea243f5436b3f26654047e8",
"content_id": "446996f5ea0c7c833bde6785d99d9228e8620952",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 708,
"license_type": "permissive",
"max_line_length": 67,
"num_lines": 23,
"path": "/setup.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "from setuptools import setup, find_packages\n\nimport pyrasta\n\nwith open(\"README.md\", 'r') as fh:\n long_description = fh.read()\n\nwith open(\"requirements.txt\") as req:\n install_req = req.read().splitlines()\n\nsetup(name='pyrasta',\n version=pyrasta.__version__,\n description='Some tools for fast and easy raster processing',\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url='https://framagit.org/benjaminpillot/pyrasta',\n author='Benjamin Pillot',\n author_email='[email protected]',\n install_requires=install_req,\n python_requires='>=3',\n license='MIT',\n packages=find_packages(exclude=\"pyrasta/algorithms\"),\n zip_safe=False)\n"
},
{
"alpha_fraction": 0.4566448926925659,
"alphanum_fraction": 0.4605664610862732,
"avg_line_length": 34.30769348144531,
"blob_id": "b30da827f8078a2ff5c51fec711fa0a1bbc54236",
"content_id": "1ddfdd80054cbc8ed180e507de7e95c20968532d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2295,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 65,
"path": "/pyrasta/tools/mask.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom functools import partial\n\nfrom pyrasta.io_ import ESRI_DRIVER\nfrom pyrasta.io_.files import ShapeTempFile, _copy_to_file, NamedTempFile\n\nimport gdal\n\n\ndef _mask(arrays, no_data):\n src = arrays[0]\n mask = arrays[1]\n src[mask == 1] = no_data\n\n return src\n\n\ndef _raster_mask(raster, geodataframe, driver, output_type, no_data, all_touched,\n window_size, nb_processes, chunksize):\n \"\"\" Apply mask into raster\n\n \"\"\"\n mask = raster.__class__.rasterize(geodataframe,\n raster.crs.to_wkt(),\n raster.x_size,\n raster.y_size,\n raster.geo_transform,\n burn_values=[1],\n all_touched=all_touched)\n\n return raster.__class__.raster_calculation([raster, mask],\n partial(_mask, no_data=no_data),\n gdal_driver=driver,\n data_type=output_type,\n no_data=no_data,\n description=\"Compute mask\",\n window_size=window_size,\n nb_processes=nb_processes,\n chunksize=chunksize)\n\n\n # with ShapeTempFile() as shp_file, \\\n # NamedTempFile(raster._gdal_driver.GetMetadata()['DMD_EXTENSION']) as out_file:\n #\n # _copy_to_file(raster, out_file.path)\n # geodataframe.to_file(shp_file.path, driver=ESRI_DRIVER)\n # out_ds = gdal.Open(out_file.path, 1)\n # gdal.Rasterize(out_ds,\n # shp_file.path,\n # bands=[bd + 1 for bd in range(raster.nb_band)],\n # burnValues=[10],\n # allTouched=all_touched)\n #\n # out_ds = None\n\n # return raster.__class__(out_file.path)\n # new_raster = raster.__class__(out_file.path)\n # new_raster._temp_file = out_file\n\n # return new_raster\n"
},
{
"alpha_fraction": 0.5329949259757996,
"alphanum_fraction": 0.5736040472984314,
"avg_line_length": 16.909090042114258,
"blob_id": "242a6fbf43225966712219f1a52373aa5723bb96",
"content_id": "3d65803ce2903be21bdc9852148a3143dc53fe62",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 197,
"license_type": "permissive",
"max_line_length": 40,
"num_lines": 11,
"path": "/pyrasta/__init__.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\n__version__ = '1.2.7'\n__author__ = 'Benjamin Pillot'\n__copyright__ = 'Copyright 2020, Benjamin Pillot'\n__email__ = '[email protected]'\n"
},
{
"alpha_fraction": 0.3636363744735718,
"alphanum_fraction": 0.6363636255264282,
"avg_line_length": 12.399999618530273,
"blob_id": "fd0bdbb22d12b78839eefe8e4501a56e43d5f997",
"content_id": "c78e064c9550d24125f54e6519a343c9887b0404",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 66,
"license_type": "permissive",
"max_line_length": 13,
"num_lines": 5,
"path": "/requirements.txt",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "gdal>=3.0.2\nnumpy>=1.19.2\nnumba>=0.52.0\ntqdm>=4.57.0\naffine>=2.3.0"
},
{
"alpha_fraction": 0.5882843136787415,
"alphanum_fraction": 0.5977234840393066,
"avg_line_length": 24.72857093811035,
"blob_id": "8fa2f18b1112b782bf14a90fdd5bf86f49cd7222",
"content_id": "be2996ef206da67971c60ab276d6703109b9419a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3602,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 140,
"path": "/pyrasta/utils.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom collections.abc import Collection\nfrom itertools import chain, islice\n\nfrom numba import njit\nfrom tqdm import tqdm\n\n\nMP_CHUNK_SIZE = 1000\n\n\nclass TqdmUpTo(tqdm):\n \"\"\" Progress bar for url retrieving\n\n Description\n -----------\n Thanks to https://github.com/tqdm/tqdm/blob/master/examples/tqdm_wget.py\n\n Provides `update_to(n)` which uses `tqdm.update(delta_n)`.\n Inspired by [twine#242](https://github.com/pypa/twine/pull/242),\n [here](https://github.com/pypa/twine/commit/42e55e06).\n \"\"\"\n\n def update_to(self, b=1, bsize=1, tsize=None):\n \"\"\"\n b : int, optional\n Number of blocks transferred so far [default: 1].\n bsize : int, optional\n Size of each block (in tqdm units) [default: 1].\n tsize : int, optional\n Total size (in tqdm units). If [default: None] remains unchanged.\n \"\"\"\n if tsize is not None:\n self.total = tsize\n return self.update(b * bsize - self.n) # also sets self.n = b * bsize\n\n\ndef check_string(string, list_of_strings):\n \"\"\" Check validity of and return string against list of valid strings\n\n :param string: searched string\n :param list_of_strings: list/tuple/set of valid strings string is to be checked against\n :return: validated string from list of strings if match\n \"\"\"\n check_type(string, str, list_of_strings, Collection)\n [check_type(x, str) for x in list_of_strings]\n\n output_string = []\n\n for item in list_of_strings:\n if item.lower().startswith(string.lower()):\n output_string.append(item)\n\n if len(output_string) == 1:\n return output_string[0]\n elif len(output_string) == 0:\n raise ValueError(\"input must match one of those: {}\".format(list_of_strings))\n elif len(output_string) > 1:\n raise ValueError(\"input matches more than one valid value among {}\".format(list_of_strings))\n\n\ndef check_type(*args):\n \"\"\"Check type of arguments\n\n :param args: tuple list of 
argument/type\n :return:\n \"\"\"\n if len(args) % 2 == 0:\n for item in range(0, len(args), 2):\n if not isinstance(args[item], args[item + 1]):\n raise TypeError(\"Type of argument {} is '{}' but must be '{}'\".format(\n item//2 + 1, type(args[item]).__name__, args[item + 1].__name__))\n\n\ndef digitize(value, list_of_values, ascend=True, right=False):\n \"\"\"\n\n Description\n -----------\n\n Parameters\n ----------\n\n \"\"\"\n if ascend:\n loc = [value > v if right else value >= v for v in list_of_values]\n else:\n loc = [value <= v if right else value < v for v in list_of_values]\n\n return loc.index(False) if False in loc else len(list_of_values)\n\n\n@njit()\ndef grid(origin, resolution, size):\n \"\"\" Return regular grid vector\n\n Parameters\n ----------\n origin\n resolution\n size\n\n Returns\n -------\n\n \"\"\"\n origin -= resolution\n\n for _ in range(size):\n origin += resolution\n\n yield origin\n\n\ndef lazyproperty(func):\n name = '_lazy_' + func.__name__\n\n @property\n def lazy(self):\n if hasattr(self, name):\n return getattr(self, name)\n else:\n value = func(self)\n setattr(self, name, value)\n return value\n return lazy\n\n\ndef split_into_chunks(iterable, size):\n \"\"\" Split iterable into chunks of iterables\n\n \"\"\"\n iterator = iter(iterable)\n for first in iterator:\n yield chain([first], islice(iterator, size - 1))\n"
},
{
"alpha_fraction": 0.5566155314445496,
"alphanum_fraction": 0.5682289600372314,
"avg_line_length": 18.288000106811523,
"blob_id": "8b3baea46e63cf02bbebfb567afc6343528b541f",
"content_id": "3dfd90312ae43a015797c881a3372133a145605a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2411,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 125,
"path": "/pyrasta/crs.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Functions related to CRS conversion and computation\n\nMore detailed description.\n\"\"\"\n\nimport osr\nimport pyproj\n\n\ndef is_equal_proj(proj1, proj2):\n \"\"\" Compare 2 projections\n\n Parameters\n ----------\n proj1: int or str or dict or pyproj.Proj\n valid projection name\n proj2: int or str or dict or pyproj.Proj\n valid projection name\n\n Returns\n -------\n boolean:\n True or False\n \"\"\"\n # From an idea from https://github.com/jswhit/pyproj/issues/15\n # Use OGR library to compare projections\n srs = [srs_from(proj1), srs_from(proj2)]\n\n return bool(srs[0].IsSame(srs[1]))\n\n\ndef proj4_from_wkt(wkt):\n \"\"\" Convert wkt srs to proj4\n\n Description\n -----------\n\n Parameters\n ----------\n wkt:\n\n Returns\n -------\n\n \"\"\"\n srs = osr.SpatialReference()\n srs.ImportFromWkt(wkt)\n\n return srs.ExportToProj4()\n\n\ndef proj4_from(proj):\n \"\"\" Convert projection to proj4 string\n\n Description\n -----------\n Convert projection string, dictionary, etc.\n to proj4 string\n\n Parameters\n ----------\n proj:\n\n Returns\n -------\n\n \"\"\"\n if type(proj) == int:\n try:\n proj4_str = pyproj.Proj('epsg:%d' % proj).srs\n except (ValueError, RuntimeError):\n raise ValueError(\"Invalid EPSG code\")\n elif type(proj) == str or type(proj) == dict:\n try:\n proj4_str = pyproj.Proj(proj).srs\n except RuntimeError:\n try:\n proj4_str = proj4_from_wkt(proj)\n except (RuntimeError, TypeError):\n raise ValueError(\"Invalid projection string or dictionary\")\n elif type(proj) == pyproj.Proj:\n proj4_str = proj.srs\n else:\n raise ValueError(\"Invalid projection format: '{}'\".format(type(proj)))\n\n return proj4_str\n\n\ndef srs_from(proj):\n \"\"\" Get spatial reference system from projection\n\n Description\n -----------\n\n Parameters\n ----------\n proj:\n\n Returns\n -------\n SpatialReference instance (osgeo.osr package)\n \"\"\"\n proj4 = proj4_from(proj)\n srs = osr.SpatialReference()\n 
srs.ImportFromProj4(proj4)\n\n return srs\n\n\ndef wkt_from(proj):\n \"\"\" Get WKT spatial reference system from projection\n\n Description\n -----------\n\n Parameters\n ----------\n proj:\n\n Returns\n -------\n \"\"\"\n return srs_from(proj).ExportToWkt()\n"
},
{
"alpha_fraction": 0.5657404661178589,
"alphanum_fraction": 0.5709571242332458,
"avg_line_length": 28.445140838623047,
"blob_id": "72782671884d5707e58ce90c0e7570454f56ac59",
"content_id": "ec00c08ceac67ed684ed8692cdc5df46dfd13311",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9393,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 319,
"path": "/pyrasta/tools/windows.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom functools import wraps, partial\n\nfrom numba import jit\nfrom tqdm import tqdm\n\nimport multiprocessing as mp\nimport numpy as np\n\nfrom pyrasta.tools import _gdal_temp_dataset, _return_raster\nfrom pyrasta.exceptions import WindowGeneratorError\nfrom pyrasta.utils import split_into_chunks, check_string, check_type, MP_CHUNK_SIZE\n\n\ndef _set_nan(array, function, no_data):\n \"\"\" Replace no data values by NaNs\n\n \"\"\"\n array[array == no_data] = np.nan\n return function(array)\n\n\n@_return_raster\ndef _windowing(raster, out_file, function, band, window_size,\n method, data_type, no_data, chunk_size, nb_processes):\n \"\"\" Apply function in each moving or block window in raster\n\n Description\n -----------\n\n Parameters\n ----------\n\n \"\"\"\n window_generator = WindowGenerator(raster, band, window_size, method)\n out_ds = _gdal_temp_dataset(out_file, raster._gdal_driver, raster._gdal_dataset.GetProjection(),\n window_generator.x_size, window_generator.y_size, raster.nb_band,\n window_generator.geo_transform, data_type, no_data)\n\n y = 0\n # chunk size cannot be 0 and cannot\n # be higher than height of window\n # generator (y_size). 
And it must be\n # a multiple of window generator width\n # (x_size)\n chunk_size = max(min(chunk_size // window_generator.x_size, window_generator.y_size)\n * window_generator.x_size, window_generator.x_size)\n for win_gen in tqdm(split_into_chunks(window_generator, chunk_size),\n total=len(window_generator)//chunk_size +\n int(len(window_generator) % chunk_size != 0),\n desc=\"Sliding window computation\"):\n with mp.Pool(processes=nb_processes) as pool:\n output = np.asarray(list(pool.map(partial(_set_nan,\n function=function,\n no_data=raster.no_data),\n win_gen,\n chunksize=MP_CHUNK_SIZE)))\n\n output[np.isnan(output)] = no_data\n\n # Set number of rows to write to file\n n_rows = len(output) // window_generator.x_size\n\n # Write row to raster\n out_ds.GetRasterBand(band).WriteArray(np.reshape(output, (n_rows,\n window_generator.x_size)), 0, y)\n\n # Update row index\n y += n_rows\n\n # Close dataset\n out_ds = None\n\n\ndef integer(setter):\n\n @wraps(setter)\n def _integer(self, value):\n try:\n check_type(value, int)\n except TypeError:\n raise WindowGeneratorError(\"'%s' must be an integer value but is: '%s'\" %\n (setter.__name__, type(value).__name__))\n output = setter(self, value)\n\n return _integer\n\n\ndef odd(setter):\n\n @wraps(setter)\n def _odd(self, value):\n if value % 2 == 0:\n raise WindowGeneratorError(\"'%s' must be an odd value (=%d)\" % (setter.__name__, value))\n output = setter(self, value)\n\n return _odd\n\n\ndef positive(setter):\n\n @wraps(setter)\n def _positive(self, value):\n if value <= 0:\n raise WindowGeneratorError(\"'%s' must be positive (=%d)\" % (setter.__name__, value))\n output = setter(self, value)\n\n return _positive\n\n\nclass WindowGenerator:\n \"\"\" Generator of windows over raster\n\n \"\"\"\n\n def __init__(self, raster, band, window_size, method):\n \"\"\" WindowGenerator constructor\n\n Description\n -----------\n\n Parameters\n ----------\n raster: RasterBase\n raster for which we must compute windows\n 
band: int\n raster band number\n window_size: int\n size of window in pixels\n method: str\n sliding window method (\"block\" or \"moving\")\n\n Return\n ------\n\n \"\"\"\n self.band = band\n self.raster = raster\n self.window_size = window_size\n self.method = method\n\n # self.image = self.raster._gdal_dataset.GetRasterBand(self.band).ReadAsArray()\n\n @property\n def geo_transform(self):\n if self.method == \"block\":\n topleftx, pxsizex, rotx, toplefty, roty, pxsizey = \\\n self.raster._gdal_dataset.GetGeoTransform()\n return topleftx, pxsizex * self.window_size, rotx, \\\n toplefty, roty, pxsizey * self.window_size\n else:\n return self.raster._gdal_dataset.GetGeoTransform()\n\n @property\n def method(self):\n return self._method\n\n @method.setter\n def method(self, value):\n try:\n self._method = check_string(value, {'block', 'moving'})\n except (TypeError, ValueError) as e:\n raise WindowGeneratorError(\"Invalid sliding window method: '%s'\" % value)\n\n @property\n def band(self):\n return self._band\n\n @band.setter\n @integer\n @positive\n def band(self, value):\n self._band = value\n\n @property\n def window_size(self):\n return self._window_size\n\n @window_size.setter\n @integer\n @positive\n def window_size(self, value):\n self._window_size = value\n\n @property\n def x_size(self):\n if self.method == \"block\":\n return int(self.raster.x_size / self.window_size) + \\\n min(1, self.raster.x_size % self.window_size)\n else:\n return self.raster.x_size\n\n @property\n def y_size(self):\n if self.method == \"block\":\n return int(self.raster.y_size / self.window_size) + \\\n min(1, self.raster.y_size % self.window_size)\n else:\n return self.raster.y_size\n\n def __len__(self):\n return self.y_size * self.x_size\n\n def __iter__(self):\n def windows():\n if self.method == \"block\":\n return get_block_windows(self.window_size, self.raster.x_size, self.raster.y_size)\n elif self.method == \"moving\":\n return get_moving_windows(self.window_size, 
self.raster.x_size, self.raster.y_size)\n\n return (self.raster._gdal_dataset.GetRasterBand(self.band).ReadAsArray(*window).astype(\n \"float32\") for window in windows())\n # return (self.image[w[1]:w[1] + w[3], w[0]:w[0] + w[2]] for w in windows())\n\n\n@jit(nopython=True, nogil=True)\ndef get_xy_block_windows(window_size, raster_x_size, raster_y_size):\n \"\"\" Get xy block window coordinates\n\n Description\n -----------\n Get xy block window coordinates depending\n on raster size and window size\n\n Parameters\n ----------\n window_size: (int, int)\n size of window to read within raster as (width, height)\n raster_x_size: int\n raster's width\n raster_y_size: int\n raster's height\n\n Yields\n -------\n Window coordinates: tuple\n 4-element tuple returning the coordinates of the window within the raster\n \"\"\"\n for y in range(0, raster_y_size, window_size[1]):\n ysize = min(window_size[1], raster_y_size - y)\n for x in range(0, raster_x_size, window_size[0]):\n xsize = min(window_size[0], raster_x_size - x)\n\n yield x, y, xsize, ysize\n\n\n@jit(nopython=True, nogil=True)\ndef get_block_windows(window_size, raster_x_size, raster_y_size):\n \"\"\" Get block window coordinates\n\n Description\n -----------\n Get block window coordinates depending\n on raster size and window size\n\n Parameters\n ----------\n window_size: int\n size of window to read within raster\n raster_x_size: int\n raster's width\n raster_y_size: int\n raster's height\n\n Yields\n -------\n Window coordinates: tuple\n 4-element tuple returning the coordinates of the window within the raster\n \"\"\"\n for y in range(0, raster_y_size, window_size):\n ysize = min(window_size, raster_y_size - y)\n for x in range(0, raster_x_size, window_size):\n xsize = min(window_size, raster_x_size - x)\n\n yield x, y, xsize, ysize\n\n\n@jit(nopython=True, nogil=True)\ndef get_moving_windows(window_size, raster_x_size, raster_y_size, step=1):\n \"\"\" Get moving window coordinates\n\n Description\n 
-----------\n Get moving window coordinates depending\n on raster size, window size and step\n\n Parameters\n ----------\n window_size: int\n size of window (square)\n raster_x_size: int\n raster's width\n raster_y_size: int\n raster's height\n step: int\n gap between the window's centers when moving the window over the raster\n\n Yields\n -------\n Window coordinates: tuple\n tuple of coordinates\n \"\"\"\n offset = int((window_size - 1) / 2) # window_size must be an odd number\n # for each pixel, compute indices of the window (all included)\n for y in range(0, raster_y_size, step):\n y1 = max(0, y - offset)\n y2 = min(raster_y_size - 1, y + offset)\n ysize = (y2 - y1) + 1\n for x in range(0, raster_x_size, step):\n x1 = max(0, x - offset)\n x2 = min(raster_x_size - 1, x + offset)\n xsize = (x2 - x1) + 1\n\n yield x1, y1, xsize, ysize\n"
},
{
"alpha_fraction": 0.6902173757553101,
"alphanum_fraction": 0.695652186870575,
"avg_line_length": 12.142857551574707,
"blob_id": "611092f09ecea623dae19be9cc80ab1b5706d72c",
"content_id": "8f42ba04a4d238263d01084d3b17e6ba33335e62",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 184,
"license_type": "permissive",
"max_line_length": 38,
"num_lines": 14,
"path": "/pyrasta/exceptions.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\n\nclass RasterBaseError(Exception):\n pass\n\n\nclass WindowGeneratorError(Exception):\n pass\n"
},
{
"alpha_fraction": 0.5187877416610718,
"alphanum_fraction": 0.523592472076416,
"avg_line_length": 30.339767456054688,
"blob_id": "5b2f8e3798a02743a393a396dbdb68c75d92351e",
"content_id": "d5f236c3098d67643d3e291020f49b83a4e2f82c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8117,
"license_type": "permissive",
"max_line_length": 96,
"num_lines": 259,
"path": "/pyrasta/tools/conversion.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom pyrasta.crs import srs_from\nfrom pyrasta.io_.files import RasterTempFile, VrtTempFile\nfrom pyrasta.tools import _gdal_temp_dataset, _return_raster\n\nfrom osgeo import gdal_array\n\nimport affine\nimport gdal\n\n\n@_return_raster\ndef _align_raster(in_raster, out_file, on_raster):\n \"\"\" Align raster on other raster\n\n \"\"\"\n out_ds = _gdal_temp_dataset(out_file, in_raster._gdal_driver,\n on_raster._gdal_dataset.GetProjection(),\n on_raster.x_size, on_raster.y_size, in_raster.nb_band,\n on_raster.geo_transform, in_raster.data_type, in_raster.no_data)\n\n gdal.Warp(out_ds, in_raster._gdal_dataset)\n\n # Close dataset\n out_ds = None\n\n\ndef _array_to_raster(raster_class, array, crs, bounds,\n gdal_driver, no_data):\n \"\"\" Convert array to (north up) raster\n\n Parameters\n ----------\n array: numpy.ndarray\n crs: pyproj.CRS\n bounds: tuple\n Image boundaries as (xmin, ymin, xmax, ymax)\n gdal_driver: osgeo.gdal.Driver\n no_data\n\n Returns\n -------\n\n \"\"\"\n if array.ndim == 2:\n nb_band = 1\n x_size = array.shape[1]\n y_size = array.shape[0]\n else:\n nb_band = array.shape[0]\n x_size = array.shape[2]\n y_size = array.shape[1]\n\n xmin, ymin, xmax, ymax = bounds\n geo_transform = (xmin, (xmax - xmin)/x_size, 0,\n ymax, 0, -(ymax - ymin)/y_size)\n\n with RasterTempFile(gdal_driver.GetMetadata()['DMD_EXTENSION']) as out_file:\n\n out_ds = _gdal_temp_dataset(out_file.path,\n gdal_driver,\n crs.to_wkt(),\n x_size,\n y_size,\n nb_band,\n geo_transform,\n gdal_array.NumericTypeCodeToGDALTypeCode(array.dtype),\n no_data)\n\n if nb_band == 1:\n out_ds.GetRasterBand(nb_band).WriteArray(array)\n else:\n for band in range(nb_band):\n out_ds.GetRasterBand(band + 1).WriteArray(array[band, :, :])\n\n # Close dataset\n out_ds = None\n\n raster = raster_class(out_file.path)\n raster._temp_file = out_file\n\n return raster\n\n\n@_return_raster\ndef 
_extract_bands(raster, out_file, bands):\n\n out_ds = gdal.Translate(out_file, raster._gdal_dataset, bandList=bands)\n\n # Close dataset\n out_ds = None\n\n\ndef _xy_to_2d_index(raster, x, y):\n \"\"\" Convert x/y map coordinates to 2d index\n\n \"\"\"\n forward_transform = affine.Affine.from_gdal(*raster.geo_transform)\n reverse_transform = ~forward_transform\n px, py = reverse_transform * (x, y)\n\n return int(px), int(py)\n\n\ndef _merge_bands(raster_class, sources, resolution, gdal_driver, data_type, no_data):\n \"\"\" Merge multiple bands into one raster\n\n \"\"\"\n with RasterTempFile(gdal_driver.GetMetadata()['DMD_EXTENSION']) as out_file:\n\n vrt_ds = gdal.BuildVRT(VrtTempFile().path, [src._gdal_dataset for src in sources],\n resolution=resolution, separate=True, VRTNodata=no_data)\n out_ds = gdal.Translate(out_file.path, vrt_ds, outputType=data_type)\n\n # Close dataset\n out_ds = None\n\n return raster_class(out_file.path)\n\n\n@_return_raster\ndef _padding(raster, out_file, pad_x, pad_y, pad_value):\n \"\"\" Add pad values around raster\n\n Description\n -----------\n\n Parameters\n ----------\n raster: RasterBase\n raster to pad\n out_file: str\n output file to which to write new raster\n pad_x: int\n x padding size (new width will therefore be RasterXSize + 2 * pad_x)\n pad_y: int\n y padding size (new height will therefore be RasterYSize + 2 * pad_y)\n pad_value: int or float\n value to set to pad area around raster\n\n Returns\n -------\n \"\"\"\n geo_transform = (raster.x_origin - pad_x * raster.resolution[0], raster.resolution[0], 0,\n raster.y_origin + pad_y * raster.resolution[1], 0, -raster.resolution[1])\n out_ds = _gdal_temp_dataset(out_file,\n raster._gdal_driver,\n raster._gdal_dataset.GetProjection(),\n raster.x_size + 2 * pad_x,\n raster.y_size + 2 * pad_y,\n raster.nb_band,\n geo_transform,\n raster.data_type,\n raster.no_data)\n\n for band in range(1, raster.nb_band + 1):\n out_ds.GetRasterBand(band).Fill(pad_value)\n gdal.Warp(out_ds, 
raster._gdal_dataset)\n\n # Close dataset\n out_ds = None\n\n\n@_return_raster\ndef _project_raster(raster, out_file, new_crs):\n \"\"\" Project raster onto new CRS\n\n \"\"\"\n gdal.Warp(out_file, raster._gdal_dataset, dstSRS=srs_from(new_crs))\n\n\ndef _read_array(raster, band, bounds):\n \"\"\" Read array from raster\n\n \"\"\"\n if bounds is None:\n return raster._gdal_dataset.ReadAsArray()\n else:\n x_min, y_min, x_max, y_max = bounds\n forward_transform = affine.Affine.from_gdal(*raster.geo_transform)\n reverse_transform = ~forward_transform\n px_min, py_max = reverse_transform * (x_min, y_min)\n px_max, py_min = reverse_transform * (x_max, y_max)\n x_size = int(px_max - px_min) + 1\n y_size = int(py_max - py_min) + 1\n\n if band is not None:\n return raster._gdal_dataset.GetRasterBand(band).ReadAsArray(int(px_min),\n int(py_min),\n x_size,\n y_size)\n else:\n return raster._gdal_dataset.ReadAsArray(int(px_min),\n int(py_min),\n x_size,\n y_size)\n\n\ndef _read_value_at(raster, x, y):\n \"\"\" Read value at lat/lon map coordinates\n\n \"\"\"\n forward_transform = affine.Affine.from_gdal(*raster.geo_transform)\n reverse_transform = ~forward_transform\n xoff, yoff = reverse_transform * (x, y)\n value = raster._gdal_dataset.ReadAsArray(xoff, yoff, 1, 1)\n if value.size > 1:\n return value\n else:\n return value[0, 0]\n\n\n@_return_raster\ndef _resample_raster(raster, out_file, factor):\n \"\"\" Resample raster\n\n Parameters\n ----------\n raster: RasterBase\n raster to resample\n out_file: str\n output file to which to write new raster\n factor: int or float\n Resampling factor\n \"\"\"\n geo_transform = (raster.x_origin, raster.resolution[0] / factor, 0,\n raster.y_origin, 0, -raster.resolution[1] / factor)\n out_ds = _gdal_temp_dataset(out_file,\n raster._gdal_driver,\n raster._gdal_dataset.GetProjection(),\n raster.x_size * factor,\n raster.y_size * factor,\n raster.nb_band,\n geo_transform,\n raster.data_type,\n raster.no_data)\n\n for band in range(1, 
raster.nb_band+1):\n gdal.RegenerateOverview(raster._gdal_dataset.GetRasterBand(band),\n out_ds.GetRasterBand(band), 'mode')\n\n # Close dataset\n out_ds = None\n\n\n@_return_raster\ndef _rescale_raster(raster, out_file, ds_min, ds_max):\n\n out_ds = gdal.Translate(out_file, raster._gdal_dataset,\n scaleParams=[[src_min, src_max, ds_min, ds_max]\n for src_min, src_max in zip(raster.min, raster.max)])\n\n # Close dataset\n out_ds = None\n"
},
{
"alpha_fraction": 0.5463681817054749,
"alphanum_fraction": 0.5472818613052368,
"avg_line_length": 28.98630142211914,
"blob_id": "aa228b6ed660810a2c672587f22bb9204b5a3b01",
"content_id": "fe7564c275eca26c29a96a80a2256d0a3ae7ab19",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2189,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 73,
"path": "/pyrasta/tools/rasterize.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom pyrasta.io_ import ESRI_DRIVER\nfrom pyrasta.io_.files import ShapeTempFile, RasterTempFile\nfrom pyrasta.tools import _gdal_temp_dataset\n\nimport gdal\n\n\ndef _rasterize(raster_class, geodataframe, burn_values, attribute,\n gdal_driver, projection, x_size, y_size, nb_band,\n geo_transform, data_type, no_data, all_touched):\n \"\"\" Rasterize geographic layer\n\n Parameters\n ----------\n raster_class: RasterBase\n Raster class to return\n geodataframe: geopandas.GeoDataFrame or gistools.layer.GeoLayer\n Geographic layer to be rasterized\n burn_values: None or list[float] or list[int]\n list of values to burn in each band, excusive with attribute\n attribute: str\n attribute in layer from which burn value must be retrieved\n gdal_driver\n projection\n x_size: int\n y_size: int\n nb_band: int\n geo_transform: tuple\n data_type\n no_data\n all_touched: bool\n\n Returns\n -------\n\n \"\"\"\n\n with ShapeTempFile() as shp_file, \\\n RasterTempFile(gdal_driver.GetMetadata()['DMD_EXTENSION']) as out_file:\n\n geodataframe.to_file(shp_file.path, driver=ESRI_DRIVER)\n\n out_ds = _gdal_temp_dataset(out_file.path,\n gdal_driver,\n projection,\n x_size,\n y_size,\n nb_band,\n geo_transform,\n data_type,\n no_data)\n\n gdal.Rasterize(out_ds,\n shp_file.path,\n bands=[bd + 1 for bd in range(nb_band)],\n burnValues=burn_values,\n attribute=attribute,\n allTouched=all_touched)\n\n out_ds = None\n\n # Be careful with the temp file, make a pointer to be sure\n # the Python garbage collector does not destroy it !\n raster = raster_class(out_file.path)\n raster._temp_file = out_file\n\n return raster\n"
},
{
"alpha_fraction": 0.5293726921081543,
"alphanum_fraction": 0.5326916575431824,
"avg_line_length": 30.38541603088379,
"blob_id": "b42b03107d34df6be2341e3256da8918fcd63639",
"content_id": "1357c4e68c4a38e7eb6f24fb7a3c3697279c9b37",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3013,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 96,
"path": "/pyrasta/tools/clip.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom pyrasta.io_ import ESRI_DRIVER\nfrom pyrasta.io_.files import RasterTempFile, ShapeTempFile\nfrom pyrasta.tools import _return_raster, _gdal_temp_dataset\n\nimport gdal\n\n\n@_return_raster\ndef _clip_raster_by_extent(raster, out_file, bounds, no_data):\n \"\"\" Clip raster by extent\n\n Parameters\n ----------\n raster: pyrasta.raster.RasterBase\n out_file: pyrasta.io_.files.RasterTempFile\n bounds: tuple\n boundaries as (minx, miny, maxx, maxy)\n no_data: int or float\n No data value\n\n Returns\n -------\n RasterBase\n\n \"\"\"\n\n minx = max(bounds[0], raster.bounds[0])\n miny = max(bounds[1], raster.bounds[1])\n maxx = min(bounds[2], raster.bounds[2])\n maxy = min(bounds[3], raster.bounds[3])\n\n if minx >= maxx or miny >= maxy:\n raise ValueError(\"requested extent out of raster boundaries\")\n\n gdal.Warp(out_file,\n raster._gdal_dataset,\n outputBounds=bounds,\n srcNodata=raster.no_data,\n dstNodata=no_data,\n outputType=raster.data_type)\n\n\ndef _clip_raster_by_mask(raster, geodataframe, no_data, all_touched):\n \"\"\" Clip raster by mask from geographic layer\n\n Parameters\n ----------\n raster: pyrasta.raster.RasterBase\n raster to clip\n geodataframe: geopandas.GeoDataFrame or gistools.layer.GeoLayer\n no_data: float or int\n No data value\n all_touched: bool\n if True, clip all pixels that are touched, otherwise clip\n if pixel's centroids are within boundaries\n\n Returns\n -------\n RasterBase\n\n \"\"\"\n clip_raster = raster.clip(bounds=geodataframe.total_bounds)\n\n with ShapeTempFile() as shp_file, \\\n RasterTempFile(clip_raster._gdal_driver.GetMetadata()['DMD_EXTENSION']) as r_file:\n\n geodataframe.to_file(shp_file.path, driver=ESRI_DRIVER)\n\n out_ds = _gdal_temp_dataset(r_file.path,\n clip_raster._gdal_driver,\n clip_raster._gdal_dataset.GetProjection(),\n clip_raster.x_size,\n clip_raster.y_size,\n clip_raster.nb_band,\n 
clip_raster.geo_transform,\n clip_raster.data_type,\n clip_raster.no_data)\n\n gdal.Rasterize(out_ds,\n shp_file.path,\n burnValues=[1],\n allTouched=all_touched)\n\n out_ds = None\n\n return clip_raster.__class__.raster_calculation([clip_raster,\n clip_raster.__class__(r_file.path)],\n lambda x, y: x*y,\n no_data=no_data,\n showprogressbar=False)\n"
},
{
"alpha_fraction": 0.47935327887535095,
"alphanum_fraction": 0.49224382638931274,
"avg_line_length": 36.51639175415039,
"blob_id": "6ec57c12331f45fb3f6a5168cd9d6ed9c75a5abd",
"content_id": "ab8d6cc4a3a93706683556b168272da7d901f7bb",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4577,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 122,
"path": "/pyrasta/tools/calculator.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nimport multiprocessing as mp\nimport numpy as np\n\nfrom pyrasta.io_.files import RasterTempFile\nfrom pyrasta.tools import _gdal_temp_dataset, _return_raster\nfrom pyrasta.tools.mapping import GDAL_TO_NUMPY\nfrom pyrasta.tools.windows import get_block_windows, get_xy_block_windows\nfrom pyrasta.utils import MP_CHUNK_SIZE, split_into_chunks\nfrom tqdm import tqdm\n\nimport gdal\n\n\n@_return_raster\ndef _op(raster1, out_file, raster2, op_type):\n \"\"\" Basic arithmetic operations\n\n \"\"\"\n out_ds = _gdal_temp_dataset(out_file, raster1._gdal_driver,\n raster1._gdal_dataset.GetProjection(), raster1.x_size,\n raster1.y_size, raster1.nb_band, raster1.geo_transform,\n gdal.GetDataTypeByName('Float32'), raster1.no_data)\n\n for band in range(1, raster1.nb_band + 1):\n\n for window in get_block_windows(1000, raster1.x_size, raster1.y_size):\n array1 = raster1._gdal_dataset.GetRasterBand(\n band).ReadAsArray(*window).astype(\"float32\")\n try:\n array2 = raster2._gdal_dataset.GetRasterBand(\n band).ReadAsArray(*window).astype(\"float32\")\n except AttributeError:\n array2 = raster2 # If second input is not a raster but a scalar\n\n if op_type == \"add\":\n result = array1 + array2\n elif op_type == \"sub\":\n result = array1 - array2\n elif op_type == \"mul\":\n result = array1 * array2\n elif op_type == \"truediv\":\n result = array1 / array2\n else:\n result = None\n\n out_ds.GetRasterBand(band).WriteArray(result, window[0], window[1])\n\n # Close dataset\n out_ds = None\n\n\ndef _raster_calculation(raster_class, sources, fhandle, window_size,\n gdal_driver, data_type, no_data, nb_processes,\n chunksize, description):\n \"\"\" Calculate raster expression\n\n \"\"\"\n if not hasattr(window_size, \"__getitem__\"):\n window_size = (window_size, window_size)\n\n master_raster = sources[0]\n window_gen = 
([src._gdal_dataset.ReadAsArray(*w).astype(GDAL_TO_NUMPY[data_type])\n for src in sources] for w in get_xy_block_windows(window_size,\n master_raster.x_size,\n master_raster.y_size))\n width = int(master_raster.x_size /\n window_size[0]) + min(1, master_raster.x_size % window_size[0])\n height = int(master_raster.y_size /\n window_size[1]) + min(1, master_raster.y_size % window_size[1])\n\n with RasterTempFile(gdal_driver.GetMetadata()['DMD_EXTENSION']) as out_file:\n\n is_first_run = True\n y = 0\n\n for win_gen in tqdm(split_into_chunks(window_gen, width),\n total=height,\n desc=description):\n\n with mp.Pool(processes=nb_processes) as pool:\n result = np.concatenate(list(pool.imap(fhandle,\n win_gen,\n chunksize=chunksize)),\n axis=1)\n\n if is_first_run:\n if result.ndim == 2:\n nb_band = 1\n else:\n nb_band = result.shape[0]\n\n out_ds = _gdal_temp_dataset(out_file.path,\n gdal_driver,\n master_raster._gdal_dataset.GetProjection(),\n master_raster.x_size,\n master_raster.y_size, nb_band,\n master_raster.geo_transform,\n data_type,\n no_data)\n\n is_first_run = False\n\n if nb_band == 1:\n out_ds.GetRasterBand(1).WriteArray(result, 0, y)\n else:\n for band in range(nb_band):\n out_ds.GetRasterBand(band + 1).WriteArray(result[band, :, :],\n 0, y)\n\n y += window_size[1]\n\n # Close dataset\n out_ds = None\n\n return raster_class(out_file.path)\n"
},
{
"alpha_fraction": 0.5812937021255493,
"alphanum_fraction": 0.5821678042411804,
"avg_line_length": 21.8799991607666,
"blob_id": "73b823fd5559aa25eb26a1fea45341968c430876",
"content_id": "c023695e8da7ddcfb76c5880d66c5d0d932c351e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1144,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 50,
"path": "/pyrasta/tools/dem.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Digital Elevation Model functions\n\nMore detailed description.\n\"\"\"\nfrom pyrasta.tools import _return_raster\n\nimport gdal\n\n\n@_return_raster\ndef _slope(dem, out_file, slope_format, scale):\n \"\"\" Compute DEM slope\n\n Parameters\n ----------\n dem: pyrasta.raster.DigitalElevationModel\n out_file: str\n output file path to which new dem must be written\n slope_format: str\n Slope format {'percent', 'degree'}\n\n Returns\n -------\n\n \"\"\"\n options = gdal.DEMProcessingOptions(format=dem._gdal_driver.ShortName,\n slopeFormat=slope_format,\n scale=scale)\n gdal.DEMProcessing(out_file, dem._gdal_dataset, 'slope', options=options)\n\n\n@_return_raster\ndef _aspect(dem, out_file, scale):\n \"\"\" Compute aspect\n\n Parameters\n ----------\n dem\n out_file\n scale\n\n Returns\n -------\n\n \"\"\"\n options = gdal.DEMProcessingOptions(format=dem._gdal_driver.ShortName,\n scale=scale)\n gdal.DEMProcessing(out_file, dem._gdal_dataset, \"aspect\", options=options)\n"
},
{
"alpha_fraction": 0.5477272868156433,
"alphanum_fraction": 0.5511363744735718,
"avg_line_length": 18.130434036254883,
"blob_id": "46e5d1b747b2d219ee08edbb02c777cc37650108",
"content_id": "5c6f8108ea7b5b215f6e3e07ad72f90b0fdcd456",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 880,
"license_type": "permissive",
"max_line_length": 53,
"num_lines": 46,
"path": "/pyrasta/raster.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\n\nfrom pyrasta.base import RasterBase\nfrom pyrasta.tools.dem import _slope, _aspect\n\n\nclass Raster(RasterBase):\n pass\n\n\nclass DigitalElevationModel(Raster):\n\n def aspect(self, scale=1):\n \"\"\" Compute DEM aspect\n\n Parameters\n ----------\n scale: float or int\n Ratio of vertical units to horizontal\n\n Returns\n -------\n\n \"\"\"\n return _aspect(self, scale)\n\n def slope(self, slope_format=\"percent\", scale=1):\n \"\"\" Compute DEM slope\n\n Parameters\n ----------\n slope_format: str\n Slope format {'percent', 'degree'}\n scale: int or float\n Ratio of vertical units to horizontal\n\n Returns\n -------\n\n \"\"\"\n return _slope(self, slope_format, scale)\n"
},
{
"alpha_fraction": 0.5614470839500427,
"alphanum_fraction": 0.5662351250648499,
"avg_line_length": 34.243751525878906,
"blob_id": "3a734b2b1221f65c090a6194ec6e4ae774d7c27e",
"content_id": "96d78d0bd999eedb0737fd25c14f7119902ea77e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5639,
"license_type": "permissive",
"max_line_length": 113,
"num_lines": 160,
"path": "/pyrasta/tools/merge.py",
"repo_name": "benjaminpillot/pyrasta",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n\"\"\" Module summary description.\n\nMore detailed description.\n\"\"\"\nfrom pyrasta.io_.files import RasterTempFile\n\nimport gdal\n\n\ndef _merge(raster_class, sources, bounds, output_format, data_type, no_data):\n \"\"\" Merge multiple raster sources\n\n Description\n -----------\n\n Parameters\n ----------\n sources: list\n list of RasterBase instances\n \"\"\"\n\n # Extent of all inputs\n if bounds is not None:\n dst_w, dst_s, dst_e, dst_n = bounds\n else:\n # scan input files\n xs = [item for src in sources for item in src.bounds[0:2]]\n ys = [item for src in sources for item in src.bounds[2::]]\n dst_w, dst_s, dst_e, dst_n = min(xs), min(ys), max(xs), max(ys)\n\n with RasterTempFile(gdal.GetDriverByName(output_format).GetMetadata()['DMD_EXTENSION']) \\\n as out_file:\n gdal.Warp(out_file.path, [src._gdal_dataset for src in sources],\n outputBounds=(dst_w, dst_s, dst_e, dst_n),\n format=output_format, srcNodata=[src.no_data for src in sources],\n dstNodata=no_data, outputType=data_type)\n\n # Be careful with the temp file, make a pointer to be sure\n # the Python garbage collector does not destroy it !\n raster = raster_class(out_file.path)\n raster._temp_file = out_file\n\n return raster\n\n\n# def _rasterio_merge_modified(sources, out_file, bounds=None, driver=\"GTiff\", precision=7):\n# \"\"\" Modified rasterio merge\n#\n# Description\n# -----------\n# Merge set of rasters using modified\n# rasterio merging tool. 
Only import as\n# numpy arrays the source datasets, and\n# write each source to destination raster.\n#\n# Parameters\n# ----------\n# sources: list\n# list of rasterio datasets\n# out_file: str\n# valid path to the raster file to be written\n# bounds: tuple\n# valid boundary tuple (optional)\n# driver: str\n# valid gdal driver (optional)\n# precision: int\n# float precision (optional)\n#\n# Note\n# ----\n# Adapted from\n# https://gis.stackexchange.com/questions/348925/merging-rasters-with-rasterio-in-blocks-to-avoid-memoryerror\n# \"\"\"\n#\n # adapted from https://github.com/mapbox/rasterio/blob/master/rasterio/merge.py\n # first = sources[0]\n # first_res = first.res\n # dtype = first.dtypes[0]\n # Determine output band count\n # output_count = first.count\n\n # Extent of all inputs\n # if bounds:\n # dst_w, dst_s, dst_e, dst_n = bounds\n # else:\n # scan input files\n # xs = []\n # ys = []\n # for src in sources:\n # left, bottom, right, top = src.bounds\n # xs.extend([left, right])\n # ys.extend([bottom, top])\n # dst_w, dst_s, dst_e, dst_n = min(xs), min(ys), max(xs), max(ys)\n #\n # out_transform = Affine.translation(dst_w, dst_n)\n #\n # Resolution/pixel size\n # res = first_res\n # out_transform *= Affine.scale(res[0], -res[1])\n #\n # Compute output array shape. 
We guarantee it will cover the output\n # bounds completely\n # output_width = int(np.ceil((dst_e - dst_w) / res[0]))\n # output_height = int(np.ceil((dst_n - dst_s) / res[1]))\n\n # Adjust bounds to fit\n # dst_e, dst_s = out_transform * (output_width, output_height)\n\n # create destination array\n # destination array shape\n # shape = (output_height, output_width)\n #\n # dest_profile = {\n # \"driver\": driver,\n # \"height\": shape[0],\n # \"width\": shape[1],\n # \"count\": output_count,\n # \"dtype\": dtype,\n # \"crs\": sources[0].crs.to_proj4(),\n # \"nodata\": sources[0].nodata,\n # \"transform\": out_transform\n # }\n\n # open output file in write/read mode and fill with destination mosaick array\n # with rasterio.open(out_file, 'w+', **dest_profile) as mosaic_raster:\n # for src in sources:\n\n # 1. Compute spatial intersection of destination and source\n # src_w, src_s, src_e, src_n = src.bounds\n # int_w = src_w if src_w > dst_w else dst_w\n # int_s = src_s if src_s > dst_s else dst_s\n # int_e = src_e if src_e < dst_e else dst_e\n # int_n = src_n if src_n < dst_n else dst_n\n\n # 2. Compute the source window\n # src_window = windows.from_bounds(\n # int_w, int_s, int_e, int_n, src.transform, precision=precision)\n #\n # src_window = src_window.round_shape()\n\n # 3. Compute the destination window\n # dst_window = windows.from_bounds(\n # int_w, int_s, int_e, int_n, out_transform, precision=precision)\n # dst_window = windows.Window(int(round(dst_window.col_off)),\n # int(round(dst_window.row_off)),\n # int(round(dst_window.width)),\n # int(round(dst_window.height)))\n #\n # out_shape = (dst_window.height, dst_window.width)\n\n # for band in range(1, output_count+1):\n # src_array = src.read(band, out_shape=out_shape, window=src_window)\n # before writing the window, replace source nodata with dest nodata as it can\n # already have been written (e.g. 
another adjacent country)\n # dst_array = mosaic_raster.read(band, out_shape=out_shape, window=dst_window)\n # mask = src_array == src.nodata\n # src_array[mask] = dst_array[mask]\n # mosaic_raster.write(src_array, band, window=dst_window)\n"
}
] | 25 |
aHunsader/edss | https://github.com/aHunsader/edss | 54e40226184babc8b0332fb7de4b963c4c5bfab9 | dcba9a4ac42ad70d296874a0643b65c175798b95 | 636de8521e71e484a62c23bb8cd332a226261ec6 | refs/heads/master | 2020-03-22T19:14:22.925062 | 2018-07-08T22:33:27 | 2018-07-08T22:33:27 | 140,515,095 | 0 | 0 | null | 2018-07-11T03:11:15 | 2018-07-08T22:33:48 | 2018-07-08T22:33:46 | null | [
{
"alpha_fraction": 0.6083499193191528,
"alphanum_fraction": 0.6232604384422302,
"avg_line_length": 32.53333282470703,
"blob_id": "f99d1c1006b0408e21701b1afdb357d7dbb24ccc",
"content_id": "75d6781fd40dfbbd699e1539461c15cdaa3d7f56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1006,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 30,
"path": "/student_showcase/api/models.py",
"repo_name": "aHunsader/edss",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nfrom django.contrib.auth.models import User, Group\nfrom django.db import models\n\n# Create your models here.\nclass Company(models.Model):\n SPACES = (\n (\"P\", \"Patio Exhibition\"),\n (\"PS_2\", \"Premium Space, 2 Tables\"),\n (\"PS_1\", \"Premium Space, 1 Table\"),\n (\"SS_2\", \"Standard Space, 2 Tables\"),\n (\"SS_1\", \"Standard Space, 1 Table\"),\n )\n DESIRED_POSITION_TYPES = (\n (\"IN\", \"Interns\"),\n (\"FT\", \"Full-time\"),\n (\"B\", \"Both\")\n )\n\n company_name = models.CharField(max_length=30)\n company_description = models.TextField()\n rep = models.OneToOneField(User, on_delete=models.CASCADE)\n rep_phone = models.CharField(max_length=12)\n space = models.CharField(max_length=4, choices=SPACES)\n looking_for = models.CharField(max_length=2, choices=DESIRED_POSITION_TYPES)\n special_needs = models.TextField(blank=True)\n\n def __str__(self):\n return self.company_name\n"
},
{
"alpha_fraction": 0.6662870049476624,
"alphanum_fraction": 0.6662870049476624,
"avg_line_length": 29.275861740112305,
"blob_id": "1c85b9cf4e84133909b1adbbabb88a4dfca9babf",
"content_id": "02edaf9556685c4a1df6af46589ce8cb5c81c4ce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 878,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 29,
"path": "/student_showcase/api/serializers.py",
"repo_name": "aHunsader/edss",
"src_encoding": "UTF-8",
"text": "from .models import *\nfrom django.contrib.auth.models import User\nfrom rest_framework import serializers\n\nclass UserSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = User\n fields = ('id', 'username', 'email', 'first_name', 'last_name')\n\nclass CompanySerializer(serializers.ModelSerializer):\n rep = UserSerializer()\n\n class Meta:\n model = Company\n fields = '__all__'\n\nclass CompanyAccountSerializer(serializers.ModelSerializer):\n company_data = CompanySerializer()\n\n class Meta:\n model = User\n fields = ('username', 'password', 'email', 'first_name', 'last_name', 'company_data')\n\n def create(self, validated_data):\n company_data = validated_data.pop('company_data')\n user = User.objects.create(**validated_data)\n Company.objects.create(rep=user, **company_data)\n return user\n"
},
{
"alpha_fraction": 0.7338362336158752,
"alphanum_fraction": 0.7381465435028076,
"avg_line_length": 34.69230651855469,
"blob_id": "b143d1b78d0b357b85b78e010ca808da9c10443c",
"content_id": "cbc1f2f550d82417ad07648a97302da33d9d7083",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 928,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 26,
"path": "/student_showcase/api/views.py",
"repo_name": "aHunsader/edss",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.shortcuts import render\nfrom rest_framework import viewsets, status, permissions\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\nfrom django.http import HttpResponse, JsonResponse\nfrom django.contrib.auth.models import User\nfrom api.models import *\nfrom api.serializers import *\nimport pdb\n\nclass CompanyViewSet(viewsets.ModelViewSet):\n \"\"\"\n RESTful API endpoint for Company\n \"\"\"\n queryset = Company.objects.all()\n serializer_class = CompanySerializer\n\n def create(self, request):\n serializer = CompanyAccountSerializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n serializer.save()\n request.data['password'] = '*' * len(request.data['password'])\n return JsonResponse({'request': request.data, 'status': 'created successfully'}, status=201)\n"
}
] | 3 |
steinnes/gaggle | https://github.com/steinnes/gaggle | 3b4447b5a8ade0c12c1593ce6185084df8851c69 | dd1a1571cbdb8dca1cd57fd9c9a991b1d80b71a1 | 7ec0a0398f172b858ecb27694e076108204ae5bf | refs/heads/master | 2021-07-14T08:57:18.498918 | 2020-07-07T00:27:48 | 2020-07-07T00:27:48 | 165,941,748 | 2 | 2 | null | 2019-01-15T23:52:00 | 2020-07-07T00:13:39 | 2020-07-07T00:27:48 | Python | [
{
"alpha_fraction": 0.6057471036911011,
"alphanum_fraction": 0.6448276042938232,
"avg_line_length": 24.58823585510254,
"blob_id": "2ba343b7ec282d98692d0c8b4306d8b1e1bd7039",
"content_id": "c57b9da0ef1df73e0f5def6f3dd2544f9667d381",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TOML",
"length_bytes": 870,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 34,
"path": "/pyproject.toml",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "[tool.poetry]\nname = \"gaggle\"\nversion = \"0.2.1\"\ndescription = \"aiohttp wrapper for google-api-client-python\"\nauthors = [\"Steinn Eldjarn Sigurdarson <[email protected]>\"]\nkeywords = [\"google api\", \"async\"]\nclassifiers = [\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n]\nreadme = \"README.md\"\nlicense = \"BSD-3-Clause\"\nhomepage = \"https://github.com/steinnes/gaggle\"\n\n[tool.poetry.dependencies]\npython = \"^3.7\"\naiohttp = \"^3.5.3\"\ngoogle-api-python-client = \"^1.7.7\"\ngoogle-auth = \"^1.6.2\"\n\n[tool.poetry.dev-dependencies]\npytest = \"^4.0\"\nflake8 = \"^3.6\"\npytest-cov = \"^2.6\"\ncodecov = \"^2.0\"\npytest-asyncio = \"^0.10.0\"\n\n[build-system]\nrequires = [\"poetry>=0.12\"]\nbuild-backend = \"poetry.masonry.api\"\n"
},
{
"alpha_fraction": 0.6336134672164917,
"alphanum_fraction": 0.6358543634414673,
"avg_line_length": 26.045454025268555,
"blob_id": "2247b6e9587d7d4899fa6c6e143c2492c1414da8",
"content_id": "01b9986b49e235afcac3d7da7259c87516a884cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1785,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 66,
"path": "/README.md",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "# gaggle\n\nAn aiohttp-based Google API client.\n\nThe google-api-python-client requirement is because this library uses it to\ndiscover services and prepare requests, leveraging the prepare+execute pattern\nimplemented in googleapiclient.HttpRequest.\n\n## Usage\n\n### JSON\n\n```python\n\nimport asyncio\nimport aiohttp\nfrom gaggle import Client\n\n\nasync def main():\n async with aiohttp.ClientSession() as session:\n drive = Client(\n session=session,\n token=access_token,\n # the following are optional and only required if the access_token is expired and can be refreshed\n refresh_token=refresh_token,\n client_id=client_id,\n client_secret=client_secret\n ).drive('v3')\n resp = await drive.files.list(q=\"parents in 'root'\")\n # resp is an instance of aiohttp.ClientResponse\n if resp.status == 200:\n data = await resp.json()\n files = data.get('files', [])\n for obj in files:\n print(obj)\n\nif __name__ == \"__main__\":\n loop = asyncio.get_event_loop()\n loop.run_until_complete(main())\n\n```\n\nResults in something like:\n```\n{'kind': 'drive#file', 'id': '...', 'name': 'test.csv', 'mimeType': 'text/csv'}\n{'kind': 'drive#file', 'id': '...', 'name': 'Test Folder', 'mimeType': 'application/vnd.google-apps.folder'}\n{'kind': 'drive#file', 'id': '...', 'name': 'spreadsheet.xlsx', 'mimeType': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'}\n{'kind': 'drive#file', 'id': '...', 'name': 'spreadsheet', 'mimeType': 'application/vnd.google-apps.spreadsheet'}\n```\n\n\n## Installation\n\n```\n$ pip install gaggle\n```\n\n## Testing and developing\n\nI've included a handy Makefile to make these things fairly easy.\n\n```\n$ make setup\n$ make test\n```\n"
},
{
"alpha_fraction": 0.6910994648933411,
"alphanum_fraction": 0.6963350772857666,
"avg_line_length": 14.916666984558105,
"blob_id": "27e0dab37ea1029f165897c779d359c679e2ca98",
"content_id": "665e3d699a880c20653d17299fe09d6d89a0342e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 191,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 12,
"path": "/Makefile",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "setup:\n\tpoetry install\n\ntest:\n\tpoetry run pytest -s --cov-report term-missing --cov=gaggle tests/\n\n\ntest_%:\n\tpoetry run pytest -vsx tests -k $@ --pdb\n\nlint:\n\tpoetry run flake8 gaggle/ tests/\n"
},
{
"alpha_fraction": 0.6808510422706604,
"alphanum_fraction": 0.7659574747085571,
"avg_line_length": 22.5,
"blob_id": "cb70da80359e735785944c24cf98d40fd2707475",
"content_id": "c8d33701328f0c963a20460af7ab67887f5f3ef6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 47,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 2,
"path": "/gaggle/__init__.py",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "# flake8: noqa F401\nfrom .client import Client\n"
},
{
"alpha_fraction": 0.5769230723381042,
"alphanum_fraction": 0.5897436141967773,
"avg_line_length": 16.33333396911621,
"blob_id": "7611f63740e44a3ecc54bf7dd324f9a6aaf55afe",
"content_id": "8dc564bb4397d691860362c0c77577ef3a447567",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 156,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 9,
"path": "/tests/test_retries.py",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "from gaggle.client import Retries\n\n\ndef test_retries():\n r = Retries(0)\n assert r() is False\n\n r = Retries(1)\n assert r(), r() == (True, False)\n"
},
{
"alpha_fraction": 0.5732616782188416,
"alphanum_fraction": 0.5770764946937561,
"avg_line_length": 37.192054748535156,
"blob_id": "efdba31c0d8d4a1e73976f4fc92bf7bd5570e377",
"content_id": "3ef43d639b444e5424c8679aa59ba682831c771e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5767,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 151,
"path": "/gaggle/client.py",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "import asyncio\nimport logging\n\nimport google.auth.transport.requests\nfrom google.oauth2.credentials import Credentials\nfrom apiclient import discovery\n\nimport httplib2\n\nlogger = logging.getLogger(__name__)\nDEFAULT_GOOGLE_TOKEN_URI = 'https://oauth2.googleapis.com/token'\n\n\nclass GaggleServiceError(Exception):\n pass\n\n\nclass AccessDenied(GaggleServiceError):\n pass\n\n\nclass MemoryCache:\n _CACHE = {}\n\n def get(self, key):\n hit = self._CACHE.get(key)\n logger.info(f\"Cache {hit is not None and 'hit' or 'miss'}: {key}\")\n return self._CACHE.get(key)\n\n def set(self, key, value):\n self._CACHE[key] = value\n\n\nclass Retries:\n def __init__(self, num):\n self.count = num\n self.remaining = num\n\n def __call__(self):\n if self.remaining > 0:\n self.remaining -= 1\n return True\n return False\n\n\nclass Service:\n def __init__(self, session, discovery_client, gaggle_client, retries=None):\n if retries is None:\n retries = 5\n self._session = session\n self._retry = Retries(retries)\n self._disco_client = discovery_client\n self._gaggle_client = gaggle_client\n\n def _request(self, name):\n async def inner(*args, **kwargs):\n async def doit():\n cooked_request = getattr(self._disco_client, name)(*args, **kwargs)\n headers = {'Authorization': f'Bearer {self._gaggle_client.access_token}', **cooked_request.headers}\n logger.info(f\"Service._request.doit(), cooked_request={cooked_request.method} {cooked_request.uri}\")\n if cooked_request.method == 'GET':\n return await self._session.get(cooked_request.uri, headers=headers)\n elif cooked_request.method == 'POST':\n return await self._session.post(cooked_request.uri, data=cooked_request.body, headers=headers)\n while True:\n try:\n response = await doit()\n if response.status == 401:\n logger.info(\"401 status, refreshing token\")\n try:\n self._gaggle_client.refresh_token()\n except google.auth.exceptions.RefreshError:\n raise AccessDenied(\"Unable to refresh token\")\n response = await 
doit()\n if response.status in (400, 401):\n logger.warning(\"Access denied even after refreshing token:\")\n logger.warning(await response.text())\n raise AccessDenied(\"Access denied even after refreshing token\")\n if response.status == 400:\n raise AccessDenied(f\"Access denied: response.text()\")\n break\n except asyncio.TimeoutError as e:\n logger.warning(\"timeout: {}\".format(e))\n # if we got here, it's because we encountered a timeout and might want to retry\n if not self._retry():\n raise GaggleServiceError(\"Exhausted retries ({})\".format(self._retry.count))\n\n return response\n return inner\n\n def _wrap(self, attr):\n # an attribute on self._disco_client, can either be a resource, in which case we need to\n # re-wrap it as a Service object\n # OR:\n # it's a method which will elicit a request, which we pass into self._request(..)\n subject = getattr(self._disco_client, attr)\n if subject.__func__.__name__ == 'methodResource':\n return Service(self._session, subject(), self._gaggle_client)\n elif subject.__func__.__name__ == 'method':\n return self._request(attr)\n\n def __getattribute__(self, attr):\n if attr.startswith('_'):\n # my own attributes\n return object.__getattribute__(self, attr)\n else:\n return self._wrap(attr)\n\n\nclass Client:\n _reals = ['access_token', 'credentials', 'refresh_token', 'http', 'log']\n\n def __init__(self, session, credentials: Credentials = None, **kwargs):\n if credentials is None:\n credentials = self._make_credentials(**kwargs)\n self.credentials = credentials\n self.access_token = credentials.token\n self.http = httplib2.Http()\n self._session = session\n self._services = {}\n\n @staticmethod\n def _make_credentials(*, token, refresh_token=None, id_token=None, token_uri=None, client_id=None, client_secret=None): # noqa\n if token_uri is None:\n token_uri = DEFAULT_GOOGLE_TOKEN_URI\n return Credentials(token, refresh_token, id_token, token_uri, client_id, client_secret)\n\n def refresh_token(self):\n request = 
google.auth.transport.requests.Request()\n self.credentials.refresh(request)\n self.access_token = self.credentials.token\n\n def _builder(self, service_name):\n def inner(version=None):\n srv_key = f'{service_name}:{version}'\n if srv_key not in self._services:\n args = [service_name, ]\n if version is not None:\n args.append(version)\n srv = discovery.build(*args, http=self.http, cache=MemoryCache())\n self._services[srv_key] = Service(self._session, srv, self)\n return self._services[srv_key]\n\n return inner\n\n def __getattribute__(self, attr):\n real = object.__getattribute__(self, '_reals')\n if attr.startswith('_') or attr in real:\n return object.__getattribute__(self, attr)\n # let's treat this attribute as a google service like 'drive'\n return self._builder(attr)\n"
},
{
"alpha_fraction": 0.6713735461235046,
"alphanum_fraction": 0.6842105388641357,
"avg_line_length": 28.961538314819336,
"blob_id": "c5b9c05fdd749ea8dd94b4eaba436bb44eb10bb1",
"content_id": "c13d41f62c36b6bc331f6681eaaa7916449a09f6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 779,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 26,
"path": "/tests/test_client.py",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "from unittest import mock\n\nfrom gaggle import Client\n\n\ndef test_client_creates_credentials_if_none_are_passed_in():\n with mock.patch('gaggle.client.Client._make_credentials') as mock_maker:\n Client(mock.Mock())\n assert mock_maker.called\n\n\[email protected]('gaggle.client.discovery')\ndef test_client_service_builder(mock_discovery):\n c = Client(mock.Mock(), token='')\n srv = c.some_fake_service()\n assert c._services['some_fake_service:None'] == srv\n assert mock_discovery.build.called\n\n srv2 = c.some_fake_service()\n assert srv2 == srv\n assert mock_discovery.build.call_count == 1\n\n srv3 = c.some_fake_service('v2')\n assert c._services['some_fake_service:v2'] == srv3\n assert srv3 != srv2\n assert mock_discovery.build.call_count == 2\n"
},
{
"alpha_fraction": 0.6659277081489563,
"alphanum_fraction": 0.6723564863204956,
"avg_line_length": 30.76760482788086,
"blob_id": "8e1e5baf7f82703bc1df9bfc0a63c4ccd027accb",
"content_id": "0fb5850f5308876aeaabf94c9e53712ac7d3ae79",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4511,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 142,
"path": "/tests/test_service.py",
"repo_name": "steinnes/gaggle",
"src_encoding": "UTF-8",
"text": "import asyncio\nimport pytest\nfrom collections import defaultdict\n\nfrom unittest import mock\n\nfrom googleapiclient.http import HttpRequest\nimport google.auth.exceptions\n\nfrom gaggle.client import AccessDenied, GaggleServiceError, Retries, Service\n\n\nclass FakeDiscoClient:\n def method(self, *args, **kwargs):\n return HttpRequest(None, None, uri='https://we-will-we-will.mock/you', method='GET')\n\n\nclass CallCounter:\n def __init__(self):\n self.calls = defaultdict(int)\n\n def __getattribute__(self, attr):\n if attr != 'calls':\n object.__getattribute__(self, 'calls')[attr] += 1\n return object.__getattribute__(self, attr)\n\n\ndef test_service_getattr_returns_actual_attr_if_private():\n s = Service(mock.Mock(), mock.Mock(), mock.Mock())\n with mock.patch('gaggle.client.Service._wrap') as mock_wrap:\n assert isinstance(s._retry, Retries)\n assert not mock_wrap.called\n\n\ndef test_service_wrapper_returns_service_for_resources_requests_for_methods():\n class FakeDiscoveredAPIService:\n def method(self, *args, **kwargs):\n return {'args': args, 'kwargs': kwargs}\n\n @classmethod\n def methodResource(cls):\n return cls()\n\n @property\n def a_method(self):\n return self.method\n\n @property\n def a_method_resource(self):\n return self.methodResource\n\n srv = Service(mock.Mock(), FakeDiscoveredAPIService(), mock.Mock())\n assert isinstance(srv.a_method_resource, Service)\n assert callable(srv.a_method)\n\n\[email protected]\nasync def test_service_request_retries():\n\n class TimeoutingSession(CallCounter):\n async def get(self, *args, **kwargs):\n raise asyncio.TimeoutError(\"test\")\n\n sess = TimeoutingSession()\n s = Service(sess, FakeDiscoClient(), mock.Mock(), retries=1)\n with pytest.raises(GaggleServiceError):\n await s.method()\n assert sess.calls['get'] == 2\n assert s._retry.remaining == 0\n\n sess = TimeoutingSession()\n s = Service(sess, FakeDiscoClient(), mock.Mock(), retries=0)\n with pytest.raises(GaggleServiceError):\n await 
s.method()\n assert sess.calls['get'] == 1\n assert s._retry.remaining == 0\n\n\nclass BadResponse:\n def __init__(self, status_code, error_message):\n self.status = status_code\n self.error_message = error_message\n\n async def text(self):\n return self.error_message\n\n\nclass FailingSession(CallCounter):\n def __init__(self, errors):\n self.errors = errors\n super().__init__()\n\n async def get(self, *args, **kwargs):\n if len(self.errors) > 1:\n return self.errors.pop()\n return self.errors[0]\n\n\[email protected]\nasync def test_service_request_refreshes_token_on_unauthorized():\n sess = FailingSession(errors=[BadResponse(status_code=401, error_message=\"Invalid credentials\")])\n gaggle_client = mock.Mock()\n s = Service(sess, FakeDiscoClient(), gaggle_client, retries=0)\n with pytest.raises(AccessDenied):\n await s.method()\n assert gaggle_client.refresh_token.called\n assert sess.calls['get'] == 2\n\n\[email protected]\nasync def test_service_request_raises_access_denied_on_bad_request_after_refresh():\n sess = FailingSession(\n errors=[\n BadResponse(status_code=401, error_message=\"Invalid credentials\"),\n BadResponse(status_code=400, error_message=\"invalid_grant: Token has been expired or revoked.\")\n ]\n )\n gaggle_client = mock.Mock()\n s = Service(sess, FakeDiscoClient(), gaggle_client, retries=0)\n with pytest.raises(AccessDenied):\n await s.method()\n\n\[email protected]\nasync def test_service_request_raises_access_denied_on_immediate_bad_request():\n sess = FailingSession(\n errors=[BadResponse(status_code=400, error_message=\"invalid_grant: Token has been expired or revoked.\")]\n )\n gaggle_client = mock.Mock()\n s = Service(sess, FakeDiscoClient(), gaggle_client, retries=0)\n with pytest.raises(AccessDenied):\n await s.method()\n\n\[email protected]\nasync def test_service_request_raises_access_denied_on_refresh_error_exception():\n sess = FailingSession(errors=[BadResponse(status_code=401, error_message=\"Invalid credentials\")])\n 
gaggle_client = mock.Mock()\n gaggle_client.refresh_token.side_effect = google.auth.exceptions.RefreshError\n s = Service(sess, FakeDiscoClient(), gaggle_client, retries=0)\n with pytest.raises(AccessDenied):\n await s.method()\n"
}
] | 8 |
BrainfreezeFL/CAP5610 | https://github.com/BrainfreezeFL/CAP5610 | 9c74413fe9ec92e4f540c95cd634ca4109766f0d | 38ad32489c0f48dc9859b64e473a92d905b8c59e | 51fef41e19d9d8377dc1c20bd97e2858831ef356 | refs/heads/master | 2023-01-23T08:49:34.165663 | 2020-11-23T01:47:59 | 2020-11-23T01:47:59 | 293,386,423 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.699999988079071,
"alphanum_fraction": 0.7074776887893677,
"avg_line_length": 40.290321350097656,
"blob_id": "8ef6dee0f0f92aa9fce9fe47ca2b5a52378c8e08",
"content_id": "47c7dc88d3e11daab4422361a0a724b6773a00a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8960,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 217,
"path": "/HW6.py",
"repo_name": "BrainfreezeFL/CAP5610",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport math\nimport numpy as np\nfrom surprise import Reader\nfrom surprise import SVD\nfrom surprise import Dataset\nfrom surprise.model_selection import cross_validate\nfrom surprise import KNNWithMeans\ndef find_mean(array) : \n length = len(array)\n sum = 0\n for items in array : \n sum = sum + float(items)\n return sum/length\n\ndata = pd.read_csv('ratings_small.csv', \n dtype= {'userId':np.int32, \n 'movieId':np.int32, \n 'rating':np.float64, \n 'timestamp':np.int64},\n header=0, #skiprows=1\n names= ['userId','movieId','rating','timestamp'])\nreader = Reader(rating_scale=(1, 5))\ndata = Dataset.load_from_df(data[['userId', 'movieId', 'rating']], reader)\n#print(train_df)\n\n\n# Load the movielens-100k dataset (download it if needed).\n#data = Dataset.load_builtin('ml-100k')\n\n# Use the famous SVD algorithm.\nalgo_probabilistic_matrix_factorization = SVD(biased = False)\nalgo_user_collab_filter_cosine = KNNWithMeans(k=50, sim_options={'name': 'cosine', 'user_based': True})\nalgo_user_collab_filter_pearson = KNNWithMeans(k=50, sim_options={'name': 'pearson_baseline', 'user_based': True})\nalgo_user_collab_filter_msd = KNNWithMeans(k=50, sim_options={'name': 'MSD', 'user_based': True})\nalgo_item_collab_filter_pearson = KNNWithMeans(k=50, sim_options={'name': 'pearson_baseline', 'user_based': False})\nalgo_item_collab_filter_cosine = KNNWithMeans(k=50, sim_options={'name': 'cosine', 'user_based': False})\nalgo_item_collab_filter_msd = KNNWithMeans(k=50, sim_options={'name': 'msd', 'user_based': False})\n\n# Differences in K-value\nalgo_item_five = KNNWithMeans(k=5, sim_options={'name': 'pearson_baseline', 'user_based': False})\nalgo_item_ten = KNNWithMeans(k=10, sim_options={'name': 'pearson_baseline', 'user_based': False})\nalgo_item_twenty = KNNWithMeans(k=20, sim_options={'name': 'pearson_baseline', 'user_based': False})\nalgo_item_fifty = KNNWithMeans(k=50, sim_options={'name': 'pearson_baseline', 'user_based': 
False})\n\nalgo_user_five = KNNWithMeans(k=5, sim_options={'name': 'pearson_baseline', 'user_based': True})\nalgo_user_ten = KNNWithMeans(k=10, sim_options={'name': 'pearson_baseline', 'user_based': True})\nalgo_user_twenty = KNNWithMeans(k=20, sim_options={'name': 'pearson_baseline', 'user_based': True})\nalgo_user_fifty = KNNWithMeans(k=50, sim_options={'name': 'pearson_baseline', 'user_based': True})\nuser_one = []\nfor i in range(20):\n print(\"This run had this many K's\")\n print(i)\n algo_user = KNNWithMeans(k=i, sim_options={'name': 'pearson_baseline', 'user_based': True})\n user_one = cross_validate(algo_user, data, measures=['RMSE'], cv=5, verbose=True)\n user_mean = find_mean(user_one[\"test_rmse\"])\n print(\"The User mean is : \")\n print(user_mean)\n #user_one.append(user_mean)\n\nitem = []\nfor i in range(20):\n print(\"This run had this many K's\")\n print(i)\n algo_item = KNNWithMeans(k=i, sim_options={'name': 'pearson_baseline', 'user_based': False})\n item = cross_validate(algo_item, data, measures=['RMSE'], cv=5, verbose=True)\n item_mean = find_mean(item[\"test_rmse\"])\n print(\"The mean is : \")\n print(item_mean)\n #item.append(item_mean)\n\n# Run 5-fold cross-validation and print results.\nprint(\"PMF\")\ncross_validate(algo_probabilistic_matrix_factorization, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nprint(\"UCF-Cosine\")\nuser_cosine = cross_validate(algo_user_collab_filter_cosine, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nprint(\"UCF-Pearson\")\nuser_pearson = cross_validate(algo_user_collab_filter_pearson, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nprint(\"UCF-msd\")\nuser_msd = cross_validate(algo_user_collab_filter_msd, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nprint(\"ICF-Cosine\")\nitem_cosine = cross_validate(algo_item_collab_filter_cosine, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nprint(\"ICF-Pearson\")\nitem_pearson = cross_validate(algo_item_collab_filter_pearson, data, 
measures=['RMSE', 'MAE'], cv=5, verbose=True)\nprint(\"ICF-MSD\")\nitem_msd = cross_validate(algo_item_collab_filter_msd, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\n\nuser_five = cross_validate(algo_user_five, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nuser_ten = cross_validate(algo_user_ten, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nuser_twenty = cross_validate(algo_user_twenty, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nuser_fifty = cross_validate(algo_user_fifty, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\n\nitem_five = cross_validate(algo_item_five, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nitem_ten = cross_validate(algo_item_ten, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nitem_twenty = cross_validate(algo_item_twenty, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\nitem_fifty = cross_validate(algo_item_fifty, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)\n\nuser_five_rmse = user_five[\"test_rmse\"]\nuser_ten_rmse = user_ten[\"test_rmse\"]\nuser_twenty_rmse = user_twenty[\"test_rmse\"]\nuser_fifty_rmse = user_fifty[\"test_rmse\"]\nuser_five_mae = user_five[\"test_mae\"]\nuser_ten_mae = user_ten[\"test_mae\"]\nuser_twenty_mae = user_twenty[\"test_mae\"]\nuser_fifty_mae = user_fifty[\"test_mae\"]\nuser_five_rmse_mean = find_mean(user_five_rmse)\nuser_ten_rmse_mean = find_mean(user_ten_rmse)\nuser_twenty_rmse_mean = find_mean(user_twenty_rmse)\nuser_fifty_rmse_mean = find_mean(user_fifty_rmse)\nuser_five_mae_mean = find_mean(user_five_mae)\nuser_ten_mae_mean = find_mean(user_ten_mae)\nuser_twenty_mae_mean = find_mean(user_twenty_mae)\nuser_fifty_mae_mean = find_mean(user_fifty_mae)\n\n\nitem_five_rmse = item_five[\"test_rmse\"]\nitem_ten_rmse = item_ten[\"test_rmse\"]\nitem_twenty_rmse = item_twenty[\"test_rmse\"]\nitem_fifty_rmse = item_fifty[\"test_rmse\"]\nitem_five_mae = item_five[\"test_mae\"]\nitem_ten_mae = item_ten[\"test_mae\"]\nitem_twenty_mae = 
item_twenty[\"test_mae\"]\nitem_fifty_mae = item_fifty[\"test_mae\"]\nitem_five_rmse_mean = find_mean(item_five_rmse)\nitem_ten_rmse_mean = find_mean(item_ten_rmse)\nitem_twenty_rmse_mean = find_mean(item_twenty_rmse)\nitem_fifty_rmse_mean = find_mean(item_fifty_rmse)\nitem_five_mae_mean = find_mean(item_five_mae)\nitem_ten_mae_mean = find_mean(item_ten_mae)\nitem_twenty_mae_mean = find_mean(item_twenty_mae)\nitem_fifty_mae_mean = find_mean(item_fifty_mae)\n\nprint(\"Item five rmse mean\")\nprint(item_five_rmse_mean)\nprint(\"Item Ten rmse mean\")\nprint(item_ten_rmse_mean)\nprint(\"Item twenty rmse mean\")\nprint(item_twenty_rmse_mean)\nprint(\"Item fifty rmse mean\")\nprint(item_fifty_rmse_mean)\nprint(\"user five rmse mean\")\nprint(user_five_rmse_mean)\nprint(\"user Ten rmse mean\")\nprint(user_ten_rmse_mean)\nprint(\"User twenty rmse mean\")\nprint(user_twenty_rmse_mean)\nprint(\"User fifty rmse mean\")\nprint(user_fifty_rmse_mean)\n\nuser_cosine_rmse = user_cosine[\"test_rmse\"]\nuser_cosine_mae = user_cosine[\"test_mae\"]\nuser_pearson_rmse = user_pearson[\"test_rmse\"]\nuser_pearson_mae = user_pearson[\"test_mae\"]\nuser_msd_rmse = user_msd[\"test_rmse\"]\nuser_msd_mae = user_msd[\"test_mae\"]\nuser_cosine_rmse_mean = find_mean(user_cosine_rmse)\nuser_cosine_mae_mean = find_mean(user_cosine_mae)\nuser_pearson_rmse_mean = find_mean(user_pearson_rmse)\nuser_pearson_mae_mean = find_mean(user_pearson_mae)\nuser_msd_rmse_mean = find_mean(user_msd_rmse)\nuser_msd_mae_mean = find_mean(user_msd_mae)\n\nitem_cosine_rmse = item_cosine[\"test_rmse\"]\nitem_cosine_mae = item_cosine[\"test_mae\"]\nitem_pearson_rmse = item_pearson[\"test_rmse\"]\nitem_pearson_mae = item_pearson[\"test_mae\"]\nitem_msd_rmse = item_msd[\"test_rmse\"]\nitem_msd_mae = item_msd[\"test_mae\"]\nitem_cosine_rmse_mean = find_mean(item_cosine_rmse)\nitem_cosine_mae_mean = find_mean(item_cosine_mae)\nitem_pearson_rmse_mean = 
find_mean(item_pearson_rmse)\nitem_pearson_mae_mean = find_mean(item_pearson_mae)\nitem_msd_rmse_mean = find_mean(item_msd_rmse)\nitem_msd_mae_mean = find_mean(item_msd_mae)\n\nprint(\"user_cosine_rmse_mean\")\nprint(user_cosine_rmse_mean)\nprint(\"user_cosine_mae_mean\")\nprint(user_cosine_mae_mean)\nprint(\"user_pearson_rmse_mean\")\nprint(user_pearson_rmse_mean)\nprint(\"user_pearson_mae_mean\")\nprint(user_pearson_mae_mean)\nprint(\"user_msd_rmse_mean\")\nprint(user_msd_rmse_mean)\nprint(\"user_msd_mae_mean\")\nprint(user_msd_mae_mean)\n\nprint(\"item_cosine_rmse_mean\")\nprint(item_cosine_rmse_mean)\nprint(\"item_cosine_mae_mean\")\nprint(item_cosine_mae_mean)\nprint(\"item_pearson_rmse_mean\")\nprint(item_pearson_rmse_mean)\nprint(\"item_pearson_mae_mean\")\nprint(item_pearson_mae_mean)\nprint(\"item_msd_rmse_mean\")\nprint(item_msd_rmse_mean)\nprint(\"item_msd_mae_mean\")\nprint(item_msd_mae_mean)\n\nprint(\"Test\")\nprint(test)\nprint(test[\"test_rmse\"])\nrmse = test[\"test_rmse\"]\nmean = find_mean(rmse)\nprint(\"Mean\")\nprint(mean)\nprint(test[\"test_mae\"])\n\nprint(\"The user array is \")\nprint(user)\nprint(\"The item array is\")\nprint(item)\n#for property, value in vars(test).items():\n # print(property, \":\", value)\n"
},
{
"alpha_fraction": 0.7097009420394897,
"alphanum_fraction": 0.7162654995918274,
"avg_line_length": 30.66666603088379,
"blob_id": "7200f96e3d46f8a95f84ac5d9529fc9d260bafda",
"content_id": "6e4d8453131070f6aea84646f6465b2553802350",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2742,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 84,
"path": "/HW 4.py",
"repo_name": "BrainfreezeFL/CAP5610",
"src_encoding": "UTF-8",
"text": "# Samuel Lewis\r\n# HW 2\r\n# Machine Learning Class\r\n# I dowloaded the anaconda pack to run this\r\n\r\n# Read in all of the data\r\nimport pandas as pd\r\nimport math\r\nimport numpy\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\n#%matplotlib inline\r\n\r\n# machine learning\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.svm import SVC, LinearSVC\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.naive_bayes import GaussianNB\r\nfrom sklearn.linear_model import Perceptron\r\nfrom sklearn.linear_model import SGDClassifier\r\nfrom sklearn.tree import DecisionTreeClassifier\r\nfrom sklearn.impute import KNNImputer\r\nfrom sklearn import tree\r\nimport pydotplus\r\nimport matplotlib.image as pltimg\r\nfrom sklearn.model_selection import train_test_split, cross_val_score\r\nfrom sklearn import svm\r\n\r\nfrom sklearn.feature_selection import SelectKBest\r\nfrom sklearn.feature_selection import chi2\r\n#%matplotlib inline\r\ntrain_df = pd.read_csv('train.csv')\r\ntest_df = pd.read_csv('test.csv')\r\ncombine = [train_df, test_df]\r\n\r\ncorr_matrix = train_df.corr()\r\n#print(corr_matrix)\r\n\r\n\r\n\r\n#print(train_df.describe())\r\n# Drop the unnecessary features\r\ntrain_df = train_df.drop(\"Name\", axis = 1)\r\ntrain_df = train_df.drop(\"Ticket\", axis = 1)\r\ntrain_df = train_df.drop(\"Cabin\", axis = 1)\r\ntrain_df = train_df.drop(\"PassengerId\", axis = 1)\r\n\r\n# Change the Sex to a numerical system\r\ntrain_df['Sex'] = train_df['Sex'].map({'female':1,'male': 0}).astype(int)\r\ntrain_df['Embarked'].fillna(train_df['Embarked'].dropna().mode()[0], inplace = True)\r\ntrain_df['Embarked'] = train_df['Embarked'].map({'Q':1,'S': 0, 'C':2}).astype(int)\r\n\r\n# Fill in the Age to avoid losing data\r\ntrain_df['Age'].fillna(train_df['Age'].dropna().median(), inplace = True)\r\n\r\n# Fill in the Fare to avoid losing 
data\r\ntrain_df['Fare'].fillna(train_df['Fare'].dropna().mean(), inplace = True)\r\ntrain_df = train_df.astype(int)\r\ntest = train_df.Survived\r\nother = train_df.drop(\"Survived\", axis = 1)\r\nfeatures = other.columns\r\n\r\ny_train = train_df.Survived\r\nx_train = train_df.drop(\"Survived\", axis = 1)\r\nx_test = test_df\r\n\r\nlinear_svc = svm.SVC(kernel='linear')\r\ntemp = linear_svc.fit(x_train, y_train)\r\ncvs = cross_val_score(temp,x_train,y_train,cv=5, scoring = 'accuracy').mean()\r\nprint(cvs)\r\nprint(linear_svc.kernel)\r\n\r\nlinear_svc = svm.SVC(kernel='poly', degree = 2)\r\ntemp = linear_svc.fit(x_train, y_train)\r\ncvs = cross_val_score(temp,x_train,y_train,cv=5, scoring = 'accuracy').mean()\r\nprint(cvs)\r\nprint(linear_svc.kernel)\r\n\r\nlinear_svc = svm.SVC(kernel='rbf')\r\ntemp = linear_svc.fit(x_train, y_train)\r\ncvs = cross_val_score(temp,x_train,y_train,cv=5, scoring = 'accuracy').mean()\r\nprint(cvs)\r\nprint(linear_svc.kernel)"
},
{
"alpha_fraction": 0.5831533670425415,
"alphanum_fraction": 0.5969574451446533,
"avg_line_length": 28.344728469848633,
"blob_id": "b7bc42076c9a5cb41adb64dacf5be8c6fc581e67",
"content_id": "9ca0d73b2086795b8dd9ee3be2bb89014e4d4abe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10649,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 351,
"path": "/Assignment 3.py",
"repo_name": "BrainfreezeFL/CAP5610",
"src_encoding": "UTF-8",
"text": "# Samuel Lewis\r\n# HW 2\r\n# Machine Learning Class\r\n# I dowloaded the anaconda pack to run this\r\n\r\n# Read in all of the data\r\nimport pandas as pd\r\nimport math\r\nimport numpy\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\n#%matplotlib inline\r\n\r\n# machine learning\r\nimport operator\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.svm import SVC, LinearSVC\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.naive_bayes import GaussianNB\r\nfrom sklearn.linear_model import Perceptron\r\nfrom sklearn.linear_model import SGDClassifier\r\nfrom sklearn.tree import DecisionTreeClassifier\r\nfrom sklearn.impute import KNNImputer\r\nfrom sklearn import tree\r\nimport pydotplus\r\nimport matplotlib.image as pltimg\r\nfrom sklearn.model_selection import train_test_split, cross_val_score\r\nfrom sklearn import preprocessing\r\nimport sklearn.model_selection as model_selection\r\n\r\nfrom sklearn.feature_selection import SelectKBest\r\nfrom sklearn.feature_selection import chi2\r\n\r\ntrain_sport = pd.read_csv('Training.csv')\r\ntest_sport = pd.read_csv('Testing.csv')\r\ncombine_sport = [train_sport, test_sport]\r\ndef change_name(input) : \r\n temp = []\r\n #print(input)\r\n for i in input : \r\n if i == 'Texas':\r\n temp.append('1')\r\n elif i == 'Virginia':\r\n temp.append('2')\r\n elif i == 'GeorgiaTech':\r\n temp.append('3')\r\n elif i == 'UMass':\r\n temp.append('4')\r\n elif i == 'Clemson':\r\n temp.append('5')\r\n elif i == 'Navy':\r\n temp.append('6')\r\n elif i == 'USC':\r\n temp.append('7')\r\n elif i == 'Temple':\r\n temp.append('8')\r\n elif i == 'PITT':\r\n temp.append('9')\r\n elif i == 'WakeForest':\r\n temp.append('10')\r\n elif i == 'BostonCollege':\r\n temp.append('11')\r\n elif i == 'Stanford':\r\n temp.append('12')\r\n elif i == 'Nevada':\r\n temp.append('13')\r\n elif i == 'MichiganState':\r\n temp.append('14')\r\n elif i 
== 'Duke':\r\n temp.append('15')\r\n elif i == 'Syracuse':\r\n temp.append('16')\r\n elif i == 'NorthCarolinaState':\r\n temp.append('17')\r\n elif i == 'MiamiFlorida':\r\n temp.append('18')\r\n elif i == 'Army':\r\n temp.append('19')\r\n elif i == 'VirginiaTech': \r\n temp.append('20')\r\n elif i == 'MiamiOhio' :\r\n temp.append('21')\r\n elif i == 'NorthCarolina' :\r\n temp.append('22')\r\n elif i == 'Georgia' :\r\n temp.append('23')\r\n #else :\r\n #print('Found Something') \r\n #print(i)\r\n #print(temp)\r\n return temp\r\ndef change_place(input) : \r\n temp = []\r\n for i in input : \r\n if i == 'Home':\r\n temp.append('1')\r\n else : \r\n temp.append('2')\r\n return temp\r\ndef change_in_league(input):\r\n temp = []\r\n for i in input : \r\n if i == 'In':\r\n temp.append('1')\r\n else : \r\n temp.append('2')\r\n return temp\r\n\r\ndef change_media(input):\r\n temp = []\r\n for i in input : \r\n if i == '1-NBC':\r\n temp.append('1')\r\n if i == '4-ABC':\r\n temp.append('2')\r\n if i == '3-FOX':\r\n temp.append('3')\r\n if i == '2-ESPN':\r\n temp.append('4')\r\n if i == '5-CBS':\r\n temp.append('5')\r\n return temp\r\n \r\n\r\ndef change_label(input) :\r\n temp = []\r\n for i in input :\r\n if i == 'Win':\r\n temp.append('1')\r\n else:\r\n temp.append('2')\r\n return temp\r\n\r\n# Task 1 Question 1\r\n#print(train_sport)\r\n\r\n # Fit The Label Encoder\r\n # Create a label (category) encoder object\r\n#le = preprocessing.LabelEncoder()\r\n\r\n # Fit the encoder to the pandas column\r\n#le.fit(train_sport['Opponent'])\r\n\r\n # View The Labels\r\n#print(); print(list(le.classes_))\r\n\r\n # Transform Categories Into Integers\r\n # Apply the fitted encoder to the pandas column\r\n#print(); print(le.transform(train_sport['Opponent']))\r\ntrain_sport.Opponent = change_name(train_sport['Opponent'])\r\ntrain_sport.Is_Home_or_Away = change_place(train_sport['Is_Home_or_Away'])\r\ntrain_sport.Is_Opponent_in_AP25_Preseason = 
change_in_league(train_sport.Is_Opponent_in_AP25_Preseason)\r\ntrain_sport.Media = change_media(train_sport.Media)\r\ntrain_sport.Label = change_label(train_sport.Label)\r\ntrain_sport = train_sport.drop('Date', axis = 1)\r\ntest_sport.Opponent = change_name(test_sport['Opponent'])\r\ntest_sport.Is_Home_or_Away = change_place(test_sport['Is_Home_or_Away'])\r\ntest_sport.Is_Opponent_in_AP25_Preseason = change_in_league(test_sport.Is_Opponent_in_AP25_Preseason)\r\ntest_sport.Media = change_media(test_sport.Media)\r\ntest_sport.Label = change_label(test_sport.Label)\r\ntest_sport = test_sport.drop('Date', axis = 1)\r\n#print(train_sport)\r\nx_train = train_sport.drop(\"Label\", axis = 1)\r\ny_train = train_sport.Label\r\nx_test = test_sport.drop(\"Label\", axis = 1)\r\ny_test = test_sport.Label\r\n#print(test_sport)\r\n#print(x_test)\r\nfrom sklearn.preprocessing import StandardScaler\r\nscaler = StandardScaler()\r\nscaler.fit(x_train)\r\n\r\nx_train = scaler.transform(x_train)\r\nx_test = scaler.transform(x_test)\r\n\r\n\r\ntemp = GaussianNB()\r\nprob = temp.fit(x_train, y_train).predict(x_test)\r\nother= temp.fit(x_train, y_train)\r\nprint('NB: ')\r\nprint(prob)\r\n\r\nclassifier = KNeighborsClassifier(n_neighbors=5)\r\nclassifier.fit(x_train, y_train)\r\npred = classifier.predict(x_test)\r\nprint('KNN')\r\nprint(pred)\r\n\r\n\r\n# Task 2 Question 1\r\ntrain_df = pd.read_csv('train.csv')\r\ntest_df = pd.read_csv('test.csv')\r\ncombine = [train_df, test_df]\r\n# uncomment when running the naive bayes\r\n#train_df, test_df = model_selection.train_test_split(train_df, train_size=0.65,test_size=0.35, random_state=101)\r\n#corr_matrix = train_df.corr()\r\n#print(corr_matrix)\r\n\r\nprint(test_df)\r\n\r\n#print(train_df.describe())\r\n# Drop the unnecessary features\r\ntrain_df = train_df.drop(\"Name\", axis = 1)\r\ntrain_df = train_df.drop(\"Ticket\", axis = 1)\r\ntrain_df = train_df.drop(\"Cabin\", axis = 1)\r\ntrain_df = train_df.drop(\"PassengerId\", axis = 
1)\r\n\r\n# Change the Sex to a numerical system\r\ntrain_df['Sex'] = train_df['Sex'].map({'female':1,'male': 0}).astype(int)\r\ntrain_df['Embarked'].fillna(train_df['Embarked'].dropna().mode()[0], inplace = True)\r\ntrain_df['Embarked'] = train_df['Embarked'].map({'Q':1,'S': 0, 'C':2}).astype(int)\r\n\r\n# Fill in the Age to avoid losing data\r\ntrain_df['Age'].fillna(train_df['Age'].dropna().median(), inplace = True)\r\n\r\n# Fill in the Fare to avoid losing data\r\ntrain_df['Fare'].fillna(train_df['Fare'].dropna().mean(), inplace = True)\r\ntrain_df = train_df.astype(int)\r\n\r\n\r\ntest_df = test_df.drop(\"Name\", axis = 1)\r\ntest_df = test_df.drop(\"Ticket\", axis = 1)\r\ntest_df = test_df.drop(\"Cabin\", axis = 1)\r\ntest_df = test_df.drop(\"PassengerId\", axis = 1)\r\n\r\n# Change the Sex to a numerical system\r\ntest_df['Sex'] = test_df['Sex'].map({'female':1,'male': 0}).astype(int)\r\ntest_df['Embarked'].fillna(test_df['Embarked'].dropna().mode()[0], inplace = True)\r\ntest_df['Embarked'] = test_df['Embarked'].map({'Q':1,'S': 0, 'C':2}).astype(int)\r\n\r\n# Fill in the Age to avoid losing data\r\ntest_df['Age'].fillna(test_df['Age'].dropna().median(), inplace = True)\r\n\r\n# Fill in the Fare to avoid losing data\r\ntest_df['Fare'].fillna(test_df['Fare'].dropna().mean(), inplace = True)\r\ntest_df = test_df.astype(int)\r\ny_train = train_df.Survived\r\nx_train = train_df.drop(\"Survived\", axis = 1)\r\nx_test = test_df\r\nprint(test_df)\r\nprint(x_test)\r\n#features = .columns\r\n\r\n\r\ntemp = GaussianNB()\r\nprob = temp.fit(x_train, y_train)#.predict(x_test)\r\nprint('Accuracy : ')\r\ncvs = cross_val_score(prob,x_train,y_train,cv=5, scoring = 'accuracy').mean()\r\nprint(cvs)\r\nprint('Precision : ')\r\ncvs = cross_val_score(prob,x_train,y_train,cv=5, scoring = 'precision').mean()\r\nprint(cvs)\r\nprint('Recall : ')\r\ncvs = cross_val_score(prob,x_train,y_train,cv=5, scoring = 'recall').mean()\r\nprint(cvs)\r\nprint('F1 : ')\r\ncvs = 
cross_val_score(prob,x_train,y_train,cv=5, scoring = 'f1').mean()\r\nprint(cvs)\r\n#other= temp.fit(x_train, y_train)\r\nprint('Titanic NB: ')\r\nprint(prob)\r\n\r\n\r\n# Task 2 Question 2\r\ndef distance(x, y, columns):\r\n distance = 0\r\n for i in range(columns) : \r\n distance += pow((x[i] - y[i]),2)\r\n return math.sqrt(distance)\r\n\r\ndef getNeighbors(train, testRow, k):\r\n distanceArray = []\r\n columns = len(testRow)-1\r\n x = 0\r\n train_count = len(train)\r\n while x != train_count : \r\n #print(x)\r\n #print('Test')\r\n #print(testRow)\r\n #print('train')\r\n #print(train.iloc[x])\r\n dist = distance(testRow, train.iloc[x], columns)\r\n distanceArray.append((dist, train.iloc[x]))\r\n x = x + 1\r\n distanceArray.sort(key=operator.itemgetter(0))\r\n neighbors = []\r\n x = 0\r\n while x != k:\r\n #print(distanceArray[x][1])\r\n neighbors.append(distanceArray[x][1])\r\n x = x+1\r\n return neighbors\r\n\r\ndef predict(neighbors):\r\n classdict = {}\r\n length = len(neighbors)\r\n x = 0\r\n while x != length:\r\n thing = neighbors[x].Survived\r\n if thing in classdict:\r\n classdict[thing] += 1\r\n else : \r\n classdict[thing] = 1\r\n x = x +1\r\n #print('Classdict')\r\n #print(classdict)\r\n sortedVotes = sorted(classdict.items(), key=operator.itemgetter(1), reverse=True)\r\n #print(sortedVotes)\r\n return sortedVotes[0][0]\r\n\r\ndef find_accuracy(pred, test) :\r\n correct = 0\r\n for x in range(len(test)) :\r\n # print(pred[x])\r\n if test.iloc[x].Survived == pred[x] : \r\n correct = correct + 1\r\n return (correct/float(len(test))*100) \r\n\r\ndef custom_knn(training, k):\r\n #train = []\r\n #test = []\r\n x_train, x_test = model_selection.train_test_split(training, train_size=0.65,test_size=0.35, random_state=101)\r\n #print('Train')\r\n\r\n #print(x_train)\r\n #print('Test')\r\n #print(x_test)\r\n predictions = []\r\n for x in range(len(x_test)):\r\n # print(x_test.iloc[x])\r\n neighbors = getNeighbors(x_train, x_test.iloc[x], k)\r\n # 
print(neighbors)\r\n pred = predict(neighbors)\r\n predictions.append(pred)\r\n \r\n #print(pred)\r\n accuracy = find_accuracy(predictions, x_test)\r\n #(accuracy)\r\n return k, accuracy\r\n#print(custom_knn(train_df, 3))\r\nk = 1\r\nk_accuracy = []\r\nwhile k != 100:\r\n\r\n temp = custom_knn(train_df, k)\r\n k_accuracy.append(temp)\r\n k = k +1\r\n print(k_accuracy)\r\nprint(\"The accuracy thing is : \")\r\nprint(k_accuracy)"
},
{
"alpha_fraction": 0.5507965683937073,
"alphanum_fraction": 0.5622878074645996,
"avg_line_length": 27.80544662475586,
"blob_id": "87b3e750094e1c654bcaf6fd5888cd977ab98d98",
"content_id": "a7f88b903b4ac0dfeed313d1224a13b9329c0c0a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7658,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 257,
"path": "/HW 1.py",
"repo_name": "BrainfreezeFL/CAP5610",
"src_encoding": "UTF-8",
"text": "# Samuel Lewis\r\n# HW 1\r\n# Machine Learning Class\r\n# I dowloaded the anaconda pack to run this\r\n\r\n\r\n# Read in all of the data\r\nimport pandas as pd\r\nimport math\r\nimport numpy\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\n#%matplotlib inline\r\n\r\n# machine learning\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.svm import SVC, LinearSVC\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.naive_bayes import GaussianNB\r\nfrom sklearn.linear_model import Perceptron\r\nfrom sklearn.linear_model import SGDClassifier\r\nfrom sklearn.tree import DecisionTreeClassifier\r\nfrom sklearn.impute import KNNImputer\r\n#%matplotlib inline\r\ntrain_df = pd.read_csv('train.csv')\r\ntest_df = pd.read_csv('test.csv')\r\ncombine = [train_df, test_df]\r\n#print(combine)\r\n\r\n# Gather what question we want the results for\r\nprint(\"What question are you answering : \")\r\nquestion = input()\r\n\r\n# Using this for debugging\r\nif question == '100' : \r\n for x in train_df:\r\n print(x)\r\n\r\n# Question 2\r\nif question == '2':\r\n print(train_df.Ticket)\r\n\r\n# Question 5\r\nelif question == '5' :\r\n # We want a value to see if each column header has blank info\r\n blank = False\r\n stack = []\r\n stack_one = []\r\n\r\n # We want to loop through all of the column headers\r\n for x in train_df:\r\n\r\n # We want to loop through each of the rows in each column to see if any are empty\r\n for y in train_df[x] : \r\n \r\n if y == \"\":\r\n blank = True\r\n \r\n elif y == None:\r\n blank = True\r\n \r\n elif isinstance(y,str) == False:\r\n if math.isnan(y) == True:\r\n blank = True\r\n \r\n if blank == True:\r\n stack.append(x)\r\n blank = False\r\n print(\"Training data with Blanks : \")\r\n print(stack)\r\n stack = []\r\n for x in test_df:\r\n\r\n # We want to loop through each of the rows in each column to see if any are empty\r\n for y in 
test_df[x] : \r\n \r\n if y == \"\":\r\n blank = True\r\n \r\n elif y == None:\r\n blank = True\r\n \r\n elif isinstance(y,str) == False:\r\n if math.isnan(y) == True:\r\n blank = True\r\n \r\n if blank == True:\r\n stack.append(x)\r\n blank = False\r\n print(\"Testing data with Blanks : \")\r\n print(stack)\r\n\r\nelif question == '6' :\r\n stack = []\r\n\r\n # We want to loop through all of the column headers\r\n for x in train_df:\r\n\r\n # We want to loop through each of the rows in each column to see if any are empty\r\n stack.append(x + \" : \" + str(type(train_df[x][1])))\r\n print(stack)\r\n\r\nelif question == '7' : \r\n \r\n # We want to loop through all of the column headers\r\n #for x in train_df:\r\n\r\n #print(\"Age\" + \" : \" + str(numpy.std(train_df.Age)))\r\n #age = train_df.groupby('Age')\r\n print(train_df.Age.describe())\r\n print(train_df.SibSp.describe())\r\n print(train_df.Parch.describe())\r\n print(train_df.Fare.describe())\r\n\r\nelif question == '8' : \r\n test = train_df\r\n #test.describe(include=[object])\r\n print(test.astype(str).describe(include=object))\r\n #print(train_df.Survived.describe())\r\n #print(train_df.Pclass.describe())\r\n #print(train_df.Name.describe())\r\n ##print(train_df.Sex.describe())\r\n #print(train_df.Ticket.describe())\r\n #print(train_df.Embarked.describe())\r\n #print(train_df.Cabin.describe())\r\n\r\nelif question == '9' : \r\n\r\n #new = train_df.Pclass == 1\r\n new = train_df[train_df.Pclass==1]\r\n print(new.astype(str).describe(include=object))\r\n # Then divide the amoutn that survived by the total people to get the correlation\r\n #newer = new.loc[:,'Pclass', 'Survived']\r\n #print(new)\r\n #print(train_df.corr())\r\n\r\nelif question == '10' : \r\n print(\"Test Results Are : \")\r\n new = test_df[test_df.Sex=='female']\r\n print(new.astype(str).describe(include=object))\r\n print(\"Training results are : \")\r\n new = train_df[train_df.Sex=='female']\r\n 
print(new.astype(str).describe(include=object))\r\n # Then divide the number of women who died by the number of women\r\n\r\nelif question == '11' :\r\n \r\n g = sns.FacetGrid(train_df, col='Survived')\r\n g.map(plt.hist, 'Age', bins=20)\r\n \r\n plt.show()\r\n\r\nelif question == '12' :\r\n \r\n graph = sns.FacetGrid(train_df, col='Survived', row='Pclass')\r\n graph.map(plt.hist, 'Age', bins=20)\r\n #x = train_df.filter(regex='Age|Survived|Pclass')\r\n #x.hist(by=(x.Survived and x.Pclass))\r\n #plt.title(\"Survived\")\r\n ##plt.xlabel(\"Age\")\r\n #plt.ylabel(\"Count\")\r\n plt.show()\r\n\r\nelif question == '13' : \r\n\r\n graph = sns.FacetGrid(train_df, row = 'Survived', col='Embarked')\r\n graph.map(sns.barplot, 'Sex', 'Fare')\r\n #graph.addLegend()\r\n plt.show()\r\n\r\nelif question == '14' : \r\n\r\n graph = sns.FacetGrid(train_df, col ='Survived')\r\n graph.map(plt.hist, 'Ticket')\r\n plt.show()\r\n\r\nelif question == '15' : \r\n count = 0\r\n for y in test_df.Cabin :\r\n #print(y)\r\n\r\n if isinstance(y,str) == False:\r\n if math.isnan(y) == True:\r\n count = count + 1\r\n for y in train_df.Cabin :\r\n # print(y)\r\n\r\n if isinstance(y,str) == False:\r\n if math.isnan(y) == True:\r\n count = count + 1\r\n\r\n print(\"The amount of NaN in Cabin is : \" + str(count))\r\n\r\n print(train_df.describe(include = ['O']))\r\n print(test_df.describe(include = ['O']))\r\n\r\nelif question == '16' : \r\n\r\n for x in combine : \r\n x['Sex'] = x['Sex'].map({'female':1,'male': 0}).astype(int)\r\n\r\n print(combine)\r\n\r\nelif question == '17' : \r\n train_df = train_df.drop(\"Name\", axis = 1)\r\n train_df = train_df.drop(\"Ticket\", axis = 1)\r\n train_df = train_df.drop(\"Embarked\", axis = 1)\r\n train_df = train_df.drop(\"Cabin\", axis = 1)\r\n\r\n train_df['Sex'] = train_df['Sex'].map({'female':1,'male': 0}).astype(int)\r\n for x in train_df : \r\n\r\n train_df[x] = pd.to_numeric(train_df[x])\r\n print(train_df.Age)\r\n imputer = KNNImputer(n_neighbors = 
3)\r\n train_df = imputer.fit_transform(train_df)\r\n for x in train_df : \r\n print(x[4])\r\n print(\"This output is the age data\")\r\n\r\n\r\nelif question == '18' : \r\n #print(train_df.Embarked)\r\n # We know that the most frequently used port is S based on previous answers, so I will hard code all of the NaN to be S\r\n for x in train_df.Embarked : \r\n if isinstance(x,str) == False:\r\n if math.isnan(x) == True or x == \"\":\r\n x = 'S'\r\n # There are two values that need to be replaced so this checks out.\r\n print(\"Was replaced\")\r\n # print(train_df.Embarked)\r\n\r\nelif question == '19' : \r\n\r\n z = test_df[test_df.Name == 'Storey, Mr. Thomas']\r\n print(z)\r\n test_df['Fare'].fillna(test_df['Fare'].dropna().mode()[0], inplace = True)\r\n\r\n z = test_df[test_df.Name == 'Storey, Mr. Thomas']\r\n print(z)\r\n\r\nelif question == '20' : \r\n\r\n test_df['Fare'].fillna(test_df['Fare'].dropna().mode()[0], inplace = True)\r\n combine = [train_df, test_df]\r\n for x in combine : \r\n x.loc[ x['Fare'] <= 7.91, 'Fare'] = 0\r\n x.loc[ (x['Fare'] <= 14.454) & (x['Fare'] > 7.91), 'Fare'] = 1\r\n x.loc[ (x['Fare'] <= 31.0) & (x['Fare'] > 14.454), 'Fare'] = 2\r\n x.loc[ (x['Fare'] <=512.329) & (x['Fare'] > 31.0), 'Fare'] = 3\r\n x['Fare'] = x['Fare'].astype(int)\r\n\r\n print(combine)\r\n\r\nelse :\r\n print(\"No code was needed for that question\")"
},
{
"alpha_fraction": 0.5701231956481934,
"alphanum_fraction": 0.592505156993866,
"avg_line_length": 30.471729278564453,
"blob_id": "a282a80a8066b526e1b3aca819bca0f5bb4aab26",
"content_id": "519ad427f165a71539e40212b6c8df807b45880a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 19480,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 619,
"path": "/Assignment 5.py",
"repo_name": "BrainfreezeFL/CAP5610",
"src_encoding": "UTF-8",
"text": "import math\nimport random\nimport time\nfrom tkinter import *\n\n\n# Read in all of the data\nimport pandas as pd\nimport math\nimport numpy\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n#%matplotlib inline\n\nfrom sklearn.metrics.pairwise import manhattan_distances\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom sklearn.metrics import jaccard_score\nfrom sklearn.metrics.pairwise import euclidean_distances\n\n# machine learning\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.linear_model import Perceptron\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.impute import KNNImputer\nfrom sklearn import tree\nimport pydotplus\nimport matplotlib.image as pltimg\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn import svm\n\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import chi2\n#%matplotlib inline\ntrain_df = pd.read_csv('train.csv')\ntest_df = pd.read_csv('test.csv')\niris = pd.read_csv('iris.csv')\ncombine = [train_df, test_df]\n\ncorr_matrix = train_df.corr()\n\n\n\n\n######################################################################\n# This section contains functions for loading CSV (comma separated values)\n# files and convert them to a dataset of instances.\n# Each instance is a tuple of attributes. 
The entire dataset is a list\n# of tuples.\n######################################################################\n\n# Loads a CSV files into a list of tuples.\n# Ignores the first row of the file (header).\n# Numeric attributes are converted to floats, nominal attributes\n# are represented with strings.\n# Parameters:\n# fileName: name of the CSV file to be read\n# Returns: a list of tuples\ndef loadCSV(fileName):\n fileHandler = open(fileName, \"rt\")\n lines = fileHandler.readlines()\n fileHandler.close()\n del lines[0] # remove the header\n dataset = []\n for line in lines:\n instance = lineToTuple(line)\n dataset.append(instance)\n return dataset\n\n# Converts a comma separated string into a tuple\n# Parameters\n# line: a string\n# Returns: a tuple\ndef lineToTuple(line):\n # remove leading/trailing witespace and newlines\n cleanLine = line.strip()\n # get rid of quotes\n cleanLine = cleanLine.replace('\"', '')\n # separate the fields\n lineList = cleanLine.split(\",\")\n # convert strings into numbers\n stringsToNumbers(lineList)\n lineTuple = tuple(lineList)\n return lineTuple\n\n# Destructively converts all the string elements representing numbers\n# to floating point numbers.\n# Parameters:\n# myList: a list of strings\n# Returns None\ndef stringsToNumbers(myList):\n for i in range(len(myList)):\n if (isValidNumberString(myList[i])):\n myList[i] = float(myList[i])\n\n# Checks if a given string can be safely converted into a positive float.\n# Parameters:\n# s: the string to be checked\n# Returns: True if the string represents a positive float, False otherwise\ndef isValidNumberString(s):\n if len(s) == 0:\n return False\n if len(s) > 1 and s[0] == \"-\":\n s = s[1:]\n for c in s:\n if c not in \"0123456789.\":\n return False\n return True\n\ndef jaccard(list1, list2):\n intersection = len(list(set(list1).intersection(list2)))\n union = (len(list1) + len(list2)) - intersection\n return float(intersection) / 
union\n######################################################################\n# This section contains functions for clustering a dataset\n# using the k-means algorithm.\n######################################################################\n# 1 for euclidean\n# 2 for manhatten distance\n# 3 for cosine similarity\ndef distance(instance1, instance2, dis_type):\n if dis_type == 1:\n if instance1 == None or instance2 == None:\n return float(\"inf\")\n sumOfSquares = 0\n for i in range(1, len(instance1)):\n sumOfSquares += (instance1[i] - instance2[i])**2\n elif dis_type == 2:\n #print(instance1)\n #print(instance2)\n if instance1 == None or instance2 == None:\n return float(\"inf\")\n sumOfSquares = 0\n #for i in range(1, len(instance1)):\n instance1 = list(instance1)\n instance2 = list(instance2)\n a = instance1.pop(0)\n a = instance2.pop(0)\n sumOfSquares += manhattan_distances([instance1], [instance2])\n elif dis_type == 3:\n #print(instance1)\n #print(instance2)\n if instance1 == None or instance2 == None:\n return float(\"inf\")\n sumOfSquares = 0\n #for i in range(1, len(instance1)):\n instance1 = list(instance1)\n instance2 = list(instance2)\n a = instance1.pop(0)\n a = instance2.pop(0)\n sumOfSquares += (1-cosine_similarity([instance1], [instance2]))\n elif dis_type == 4:\n #print(instance1)\n #print(instance2)\n if instance1 == None or instance2 == None:\n return float(\"inf\")\n sumOfSquares = 0\n #for i in range(1, len(instance1)):\n instance1 = list(instance1)\n instance2 = list(instance2)\n a = instance1.pop(0)\n a = instance2.pop(0)\n sumOfSquares += (1-jaccard(instance1, instance2))\n #print(\"The centroid is :\")\n #print(instance2)\n #print(\"The SSE is :\")\n #sse = 0\n #for i in range(1, len(instance1)):\n # sse += (instance1[i] - instance2[i])**2\n #print(sse)\n return sumOfSquares\n\n\n\ndef meanInstance(name, instanceList):\n numInstances = len(instanceList)\n if (numInstances == 0):\n return\n numAttributes = len(instanceList[0])\n means = 
[name] + [0] * (numAttributes-1)\n for instance in instanceList:\n for i in range(1, numAttributes):\n means[i] += instance[i]\n for i in range(1, numAttributes):\n means[i] /= float(numInstances)\n return tuple(means)\n\ndef assign(instance, centroids):\n minDistance = distance(instance, centroids[0], 1)\n minDistanceIndex = 0\n for i in range(1, len(centroids)):\n d = distance(instance, centroids[i], 1)\n if (d < minDistance):\n minDistance = d\n minDistanceIndex = i\n return minDistanceIndex\n\ndef createEmptyListOfLists(numSubLists):\n myList = []\n for i in range(numSubLists):\n myList.append([])\n return myList\n\ndef assignAll(instances, centroids):\n clusters = createEmptyListOfLists(len(centroids))\n for instance in instances:\n clusterIndex = assign(instance, centroids)\n clusters[clusterIndex].append(instance)\n return clusters\n\ndef computeCentroids(clusters):\n centroids = []\n for i in range(len(clusters)):\n name = \"centroid\" + str(i)\n centroid = meanInstance(name, clusters[i])\n centroids.append(centroid)\n return centroids\ndef accuracy(clusters):\n count = 0\n right = 0\n wrong = 0\n one = 0\n two = 0\n three = 0\n accuracy = 0\n for i in clusters:\n for j in range(0,len(i)):\n #count = count + 1\n if i[j][4] == 0: \n one = one + 1\n count = count + 1\n if i[j][4] == 1 :\n two = two +1\n count = count + 1\n if i[j][4] == 2 : \n three = three + 1\n count = count + 1\n if one < two :\n temp = \"two\"\n if two < three :\n temp = \"three\"\n elif one < three :\n temp = \"three\"\n else : \n temp = \"one\"\n print(\"This cluster is :\")\n print(temp)\n print(\"Count is : \")\n print(count)\n print(\"One is :\")\n print(one)\n print(\"Two is : \")\n print(two)\n print(\"Three is :\")\n print(three)\n if count > 0:\n print(max(one, two, three))\n print(max(one,two,three)/count)\n \n accuracy = accuracy + (max(one,two,three)/count)*(count/151)\n\n count = 0\n one = 0\n two = 0\n three = 0 \n return accuracy\n \n\ndef kmeans(instances, k, 
animation=False, initCentroids=None):\n result = {}\n if (initCentroids == None or len(initCentroids) < k):\n # randomly select k initial centroids\n random.seed(time.time())\n centroids = random.sample(instances, k)\n #print(centroids)\n else:\n centroids = initCentroids\n prevCentroids = []\n prevsse = 1000000\n currentsse = 1000000\n if animation:\n delay = 1.0 # seconds\n canvas = prepareWindow(instances)\n clusters = createEmptyListOfLists(k)\n clusters[0] = instances\n paintClusters2D(canvas, clusters, centroids, \"Initial centroids\")\n time.sleep(delay)\n iteration = 0\n while (centroids != prevCentroids and iteration < 100):#(currentsse >= prevsse):#centroids != prevCentroids):\n #print(\"Centroids\")\n #print(centroids)\n #print(\"PrevCentroids\")\n #print(prevCentroids)\n # print(\"THE ITERATION IS : \")\n # print(iteration)\n iteration += 1\n clusters = assignAll(instances, centroids)\n #print('clusters')\n #print(clusters)\n if animation:\n paintClusters2D(canvas, clusters, centroids, \"Assign %d\" % iteration)\n time.sleep(delay)\n prevCentroids = centroids\n \n centroids = computeCentroids(clusters)\n for i in centroids:\n # print(i)\n if type(i) == float:\n i = round(list(i),2)\n # print(\"Iteration : \")\n # print(iteration)\n # print(\"Centroids\")\n # print(centroids)\n withinss = computeWithinss(clusters, centroids)\n prevsse = currentsse\n currentsse = withinss\n\n if animation:\n paintClusters2D(canvas, clusters, centroids,\n \"Update %d, withinss %.1f\" % (iteration, withinss))\n time.sleep(delay)\n result[\"clusters\"] = clusters\n result[\"centroids\"] = centroids\n result[\"withinss\"] = withinss\n result[\"iterations\"] = iteration\n print(\"Iteration\")\n temp = accuracy(clusters)\n print(\"The final accuracy is : \")\n print(temp)\n print(iteration)\n\n return result\n\ndef computeWithinss(clusters, centroids):\n result = 0\n for i in range(len(centroids)):\n centroid = centroids[i]\n cluster = clusters[i]\n for instance in cluster:\n 
result += distance(centroid, instance, 1)\n #print(\"Centroid SSE: \")\n #print(result)\n return result\n\n# Repeats k-means clustering n times, and returns the clustering\n# with the smallest withinss\ndef repeatedKMeans(instances, k, n):\n bestClustering = {}\n bestClustering[\"withinss\"] = float(\"inf\")\n for i in range(1, n+1):\n print(\"k-means trial %d,\" % i , end = \"\")\n trialClustering = kmeans(instances, k)\n print(\"withinss: %.1f\" % trialClustering[\"withinss\"])\n if trialClustering[\"withinss\"] < bestClustering[\"withinss\"]:\n bestClustering = trialClustering\n minWithinssTrial = i\n print(\"Trial with minimum withinss:\", minWithinssTrial)\n return bestClustering\n\n\n######################################################################\n# This section contains functions for visualizing datasets and\n# clustered datasets.\n######################################################################\n\ndef printTable(instances):\n for instance in instances:\n if instance != None:\n line = instance[0] + \"\\t\"\n for i in range(1, len(instance)):\n line += \"%.2f \" % instance[i]\n print(line)\n\ndef extractAttribute(instances, index):\n result = []\n for instance in instances:\n result.append(instance[index])\n return result\n\ndef paintCircle(canvas, xc, yc, r, color):\n canvas.create_oval(xc-r, yc-r, xc+r, yc+r, outline=color)\n\ndef paintSquare(canvas, xc, yc, r, color):\n canvas.create_rectangle(xc-r, yc-r, xc+r, yc+r, fill=color)\n\ndef drawPoints(canvas, instances, color, shape):\n random.seed(0)\n width = canvas.winfo_reqwidth()\n height = canvas.winfo_reqheight()\n margin = canvas.data[\"margin\"]\n minX = canvas.data[\"minX\"]\n minY = canvas.data[\"minY\"]\n maxX = canvas.data[\"maxX\"]\n maxY = canvas.data[\"maxY\"]\n scaleX = float(width - 2*margin) / (maxX - minX)\n scaleY = float(height - 2*margin) / (maxY - minY)\n for instance in instances:\n x = 5*(random.random()-0.5)+margin+(instance[1]-minX)*scaleX\n y = 
5*(random.random()-0.5)+height-margin-(instance[2]-minY)*scaleY\n if (shape == \"square\"):\n paintSquare(canvas, x, y, 5, color)\n else:\n paintCircle(canvas, x, y, 5, color)\n canvas.update()\n\ndef connectPoints(canvas, instances1, instances2, color):\n width = canvas.winfo_reqwidth()\n height = canvas.winfo_reqheight()\n margin = canvas.data[\"margin\"]\n minX = canvas.data[\"minX\"]\n minY = canvas.data[\"minY\"]\n maxX = canvas.data[\"maxX\"]\n maxY = canvas.data[\"maxY\"]\n scaleX = float(width - 2*margin) / (maxX - minX)\n scaleY = float(height - 2*margin) / (maxY - minY)\n for p1 in instances1:\n for p2 in instances2:\n x1 = margin + (p1[1]-minX)*scaleX\n y1 = height - margin - (p1[2]-minY)*scaleY\n x2 = margin + (p2[1]-minX)*scaleX\n y2 = height - margin - (p2[2]-minY)*scaleY\n canvas.create_line(x1, y1, x2, y2, fill=color)\n canvas.update()\n\ndef mergeClusters(clusters):\n result = []\n for cluster in clusters:\n result.extend(cluster)\n return result\n\ndef prepareWindow(instances):\n width = 500\n height = 500\n margin = 50\n root = Tk()\n canvas = Canvas(root, width=width, height=height, background=\"white\")\n canvas.pack()\n canvas.data = {}\n canvas.data[\"margin\"] = margin\n setBounds2D(canvas, instances)\n paintAxes(canvas)\n canvas.update()\n return canvas\n\ndef setBounds2D(canvas, instances):\n attributeX = extractAttribute(instances, 1)\n attributeY = extractAttribute(instances, 2)\n canvas.data[\"minX\"] = min(attributeX)\n canvas.data[\"minY\"] = min(attributeY)\n canvas.data[\"maxX\"] = max(attributeX)\n canvas.data[\"maxY\"] = max(attributeY)\n\ndef paintAxes(canvas):\n width = canvas.winfo_reqwidth()\n height = canvas.winfo_reqheight()\n margin = canvas.data[\"margin\"]\n minX = canvas.data[\"minX\"]\n minY = canvas.data[\"minY\"]\n maxX = canvas.data[\"maxX\"]\n maxY = canvas.data[\"maxY\"]\n canvas.create_line(margin/2, height-margin/2, width-5, height-margin/2,\n width=2, arrow=LAST)\n canvas.create_text(margin, height-margin/4,\n 
text=str(minX), font=\"Sans 11\")\n canvas.create_text(width-margin, height-margin/4,\n text=str(maxX), font=\"Sans 11\")\n canvas.create_line(margin/2, height-margin/2, margin/2, 5,\n width=2, arrow=LAST)\n canvas.create_text(margin/4, height-margin,\n text=str(minY), font=\"Sans 11\", anchor=W)\n canvas.create_text(margin/4, margin,\n text=str(maxY), font=\"Sans 11\", anchor=W)\n canvas.update()\n\n\ndef showDataset2D(instances):\n canvas = prepareWindow(instances)\n paintDataset2D(canvas, instances)\n\ndef paintDataset2D(canvas, instances):\n canvas.delete(ALL)\n paintAxes(canvas)\n drawPoints(canvas, instances, \"blue\", \"circle\")\n canvas.update()\n\ndef showClusters2D(clusteringDictionary):\n clusters = clusteringDictionary[\"clusters\"]\n centroids = clusteringDictionary[\"centroids\"]\n withinss = clusteringDictionary[\"withinss\"]\n canvas = prepareWindow(mergeClusters(clusters))\n paintClusters2D(canvas, clusters, centroids,\n \"Withinss: %.1f\" % withinss)\n\ndef paintClusters2D(canvas, clusters, centroids, title=\"\"):\n canvas.delete(ALL)\n paintAxes(canvas)\n colors = [\"blue\", \"red\", \"green\", \"brown\", \"purple\", \"orange\"]\n for clusterIndex in range(len(clusters)):\n color = colors[clusterIndex%len(colors)]\n instances = clusters[clusterIndex]\n centroid = centroids[clusterIndex]\n drawPoints(canvas, instances, color, \"circle\")\n if (centroid != None):\n drawPoints(canvas, [centroid], color, \"square\")\n connectPoints(canvas, [centroid], instances, color)\n width = canvas.winfo_reqwidth()\n canvas.create_text(width/2, 20, text=title, font=\"Sans 14\")\n canvas.update()\n\n\n######################################################################\n# Test code\n######################################################################\nprint(\"1\")\ndataset = loadCSV(\"iris.csv\")\n#print(dataset)\ndataset1 = list(map(tuple, dataset))\n#print(type(dataset1))\nfor i in range(0, len(dataset1)):\n dataset1[i] = list(dataset1[i])\n 
#print(dataset1[i])\n #print(dataset1[i][4])\n if dataset1[i][4] == 'virginica':\n dataset1[i][4] = 0\n elif dataset1[i][4] == 'versicolor':\n dataset1[i][4] = 1\n elif dataset1[i][4] == 'setosa':\n dataset1[i][4] = 2\n\n\n#print(dataset1)\nprint(\"2\")\n#showDataset2D(dataset)\nprint(\"3\")\n# Change the values in the distance functions located throughout the program based on what distance function you want to use\n# 1 = euclidean\n# 2 = Manhatten\n# 3 = Cosine\n# 4 = Jacard\n# 5 = \n# Task 1 Part 1\nclustering = kmeans(dataset1, 5, True)\n# Task 1 Part 3\n#clustering = kmeans(dataset, 2, True, [('Centroid1',3,3),('Centroid2', 8,3)])\n# Task 1 Part 4\n#clustering = kmeans(dataset, 2, True, [('Centroid1',3,2),('Centroid2', 4,8)])\n# Task 1 Part 2\n#clustering = kmeans(dataset, 2, True, [('Centroid1',4,6),('Centroid2', 5,4)])\n# Task 2 Part 1\n#clustering = repeatedKMeans(dataset1,5,100)\nprint(clustering)\n\nprint(\"4\")\n#printTable(clustering[\"centroids\"])\nprint(\"5\")\ndef maximum(one, two):\n max_dist = 0\n for op in one:\n for tp in two:\n x1 = op[0]\n y1 = op[1]\n x2 = tp[0]\n y2 = tp[1]\n # print(\"The distance between these points:\")\n #print(x1,y1)\n #print(x2,y2)\n\n temp = math.sqrt((x2-x1)**2 + (y2-y1)**2)\n #print(temp)\n if temp > max_dist:\n max_dist = temp\n return max_dist\ndef minimum(one, two):\n min_dist = 100000000000000\n for op in one:\n for tp in two:\n x1 = op[0]\n y1 = op[1]\n x2 = tp[0]\n y2 = tp[1]\n # print(\"The distance between these points:\")\n # print(x1,y1)\n # print(x2,y2)\n\n temp = math.sqrt((x2-x1)**2 + (y2-y1)**2)\n # print(temp)\n if temp < min_dist:\n min_dist = temp\n return min_dist\n\ndef average(one, two):\n avg_dist = 0\n iter = 0\n for op in one:\n for tp in two:\n x1 = op[0]\n y1 = op[1]\n x2 = tp[0]\n y2 = tp[1]\n # print(\"The distance between these points:\")\n # print(x1,y1)\n # print(x2,y2)\n\n temp = math.sqrt((x2-x1)**2 + (y2-y1)**2)\n # print(temp)\n iter = iter+1\n avg_dist = avg_dist + temp\n return 
avg_dist/iter\n\nred = ((4.7, 3.2), (4.9,3.1), (5.0, 3.0), (4.6,2.9))\nblue = ((5.9,3.2), (6.7,3.1), (6.0,3.0), (6.2,2.8))\n\n#print(maximum(red,blue))\n#print(minimum(red,blue))\n#print(average(red,blue))"
},
{
"alpha_fraction": 0.7101669311523438,
"alphanum_fraction": 0.7204855680465698,
"avg_line_length": 28.52777862548828,
"blob_id": "8bf8635bfc678d878f714d33831f3c0dcbcc04b3",
"content_id": "35db5e1f9531d36c2d9799906a5b77daeac7f857",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3295,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 108,
"path": "/HW 2.py",
"repo_name": "BrainfreezeFL/CAP5610",
"src_encoding": "UTF-8",
"text": "# Samuel Lewis\r\n# HW 2\r\n# Machine Learning Class\r\n# I dowloaded the anaconda pack to run this\r\n\r\n# Read in all of the data\r\nimport pandas as pd\r\nimport math\r\nimport numpy\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\n#%matplotlib inline\r\n\r\n# machine learning\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.svm import SVC, LinearSVC\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom sklearn.neighbors import KNeighborsClassifier\r\nfrom sklearn.naive_bayes import GaussianNB\r\nfrom sklearn.linear_model import Perceptron\r\nfrom sklearn.linear_model import SGDClassifier\r\nfrom sklearn.tree import DecisionTreeClassifier\r\nfrom sklearn.impute import KNNImputer\r\nfrom sklearn import tree\r\nimport pydotplus\r\nimport matplotlib.image as pltimg\r\nfrom sklearn.model_selection import train_test_split, cross_val_score\r\n\r\n\r\nfrom sklearn.feature_selection import SelectKBest\r\nfrom sklearn.feature_selection import chi2\r\n#%matplotlib inline\r\ntrain_df = pd.read_csv('train.csv')\r\ntest_df = pd.read_csv('test.csv')\r\ncombine = [train_df, test_df]\r\n\r\ncorr_matrix = train_df.corr()\r\nprint(corr_matrix)\r\n\r\n\r\n# Question 1\r\nprint(train_df.describe())\r\n# Drop the unnecessary features\r\ntrain_df = train_df.drop(\"Name\", axis = 1)\r\ntrain_df = train_df.drop(\"Ticket\", axis = 1)\r\ntrain_df = train_df.drop(\"Cabin\", axis = 1)\r\ntrain_df = train_df.drop(\"PassengerId\", axis = 1)\r\n\r\n# Change the Sex to a numerical system\r\ntrain_df['Sex'] = train_df['Sex'].map({'female':1,'male': 0}).astype(int)\r\ntrain_df['Embarked'].fillna(train_df['Embarked'].dropna().mode()[0], inplace = True)\r\ntrain_df['Embarked'] = train_df['Embarked'].map({'Q':1,'S': 0, 'C':2}).astype(int)\r\n\r\n# Fill in the Age to avoid losing data\r\ntrain_df['Age'].fillna(train_df['Age'].dropna().median(), inplace = True)\r\n\r\n# Fill in the Fare to avoid losing 
data\r\ntrain_df['Fare'].fillna(train_df['Fare'].dropna().mean(), inplace = True)\r\ntrain_df = train_df.astype(int)\r\ntest = train_df.Survived\r\nother = train_df.drop(\"Survived\", axis = 1)\r\nfeatures = other.columns\r\n\r\n\r\n# Question 2\r\n# Feature extraction - Chi Square\r\n# Code caused some errors when run but returns the results of the feature selection algorithm\r\n# Comment the code out to have the rest run\r\narray = train_df.values\r\nX = array[:,0:7]\r\nY = array[:,7]\r\ntest = SelectKBest(score_func=chi2, k=4)\r\nfit = test.fit(X, Y)\r\nnumpy.set_printoptions(precision=3)\r\nprint(fit.scores_)\r\nfeatures = fit.transform(X)\r\nprint(train_df.columns.values)\r\nprint(features[0:5,:])\r\n\r\n\r\n# Question 4\r\ndtree = DecisionTreeClassifier(criterion = \"gini\", max_depth = 3)\r\ndtree = dtree.fit(other, test)\r\ncvs = cross_val_score(dtree,other,test,cv=5)\r\nprint(\"Decision Tree five-fold : \")\r\nprint(cvs.mean())\r\n\r\n# Question 3\r\ndata = tree.export_graphviz(dtree, out_file=None, feature_names=features)\r\ngraph = pydotplus.graph_from_dot_data(data)\r\ngraph.write_png('mydecisiontree.png')\r\n\r\nimg=pltimg.imread('mydecisiontree.png')\r\nimgplot = plt.imshow(img)\r\nplt.show()\r\n\r\n# Question 5\r\n\r\ndtree = RandomForestClassifier(random_state=30, max_depth = 3)\r\ndtree = dtree.fit(other, test)\r\ncvs = cross_val_score(dtree,other,test,cv=5)\r\nprint(\"Random Forest 5 fold : \")\r\nprint(cvs.mean())\r\n\r\n# Random stuff for Question 1 mainly\r\ncorr_matrix = train_df.corr()\r\nprint(corr_matrix)\r\nprint(train_df.describe())"
}
] | 6 |
Yangangren/DSAC | https://github.com/Yangangren/DSAC | 5593a3e8fb99d7a303037ba2a42791f590f5fb41 | 48277fe53bcb01a3cbe379c8b74156fe5b2836cc | 0fa36b46613dd439adf8e8a4e1075ebb593dd311 | refs/heads/master | 2020-11-30T08:20:39.027711 | 2019-12-26T13:45:36 | 2019-12-26T13:45:36 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6102610230445862,
"alphanum_fraction": 0.6444644331932068,
"avg_line_length": 29.83333396911621,
"blob_id": "689e2011fe8aaadd53634fcede59c6c9fc0c6907",
"content_id": "bee6c18b12d9a377cb6e331d6222a8ef5ad7a332",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1111,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 36,
"path": "/plot_old.py",
"repo_name": "Yangangren/DSAC",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nimport numpy as np\nimport time\nimport matplotlib as mpl\nimport math\nimport matplotlib.pyplot as plt\n\nplt.figure()\nenv_name = \"Ant-v2\"\n#MountainCarContinuous-v0 BipedalWalkerHardcore-v2 Pendulum-v0 LunarLanderContinuous-v2 BipedalWalker-v2 CarRacing-v0\n\nfor method in range(0,6,1):\n iteration = np.load('./'+env_name+'/method_' + str(method) + '/result/iteration.npy')\n reward = np.load('./'+env_name+'/method_' + str(method) + '/result/average_reward.npy')\n\n\n\n if method == 0:\n plt.plot(iteration, reward, 'r', linewidth=2.0)\n if method == 1:\n plt.plot(iteration, reward, 'g', linewidth=2.0)\n if method == 2:\n plt.plot(iteration, reward, 'b', linewidth=2.0)\n if method == 3:\n plt.plot(iteration, reward, 'k', linewidth=2.0)\n if method == 4:\n plt.plot(iteration, reward, 'c', linewidth=2.0)\n if method == 5:\n plt.plot(iteration, reward, 'm', linewidth=2.0)\n if method == 6:\n plt.plot(iteration, reward, 'darkorange', linewidth=2.0)\nplt.legend([0,1,2,3,4,5,6])\nplt.title(env_name)\n\n\nplt.show()\n\n"
},
{
"alpha_fraction": 0.6256005167961121,
"alphanum_fraction": 0.6431732177734375,
"avg_line_length": 50.01612854003906,
"blob_id": "252fa92360be797c08d4153ad93524c784cf9d22",
"content_id": "fc5eec96ea57ad95ec8f23e7816a840ce12401f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 15820,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 310,
"path": "/Main.py",
"repo_name": "Yangangren/DSAC",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nimport numpy as np\nimport torch\nimport torch.multiprocessing as mp\nfrom torch.multiprocessing import Process, Queue\nimport argparse\nimport random\nimport os\nimport time\nfrom Actor import Actor\nfrom Learner import Learner\nfrom Test import Test\nfrom Simulation import Simulation\nfrom Buffer import Replay_buffer\nfrom Model import QNet, PolicyNet\nimport my_optim\nimport gym\n\ndef built_parser(method):\n parser = argparse.ArgumentParser()\n\n\n '''Task'''\n parser.add_argument(\"--env_name\", default=\"BipedalWalkerHardcore-v2\")\n #MountainCarContinuous-v0 BipedalWalkerHardcore-v2 Pendulum-v0 LunarLanderContinuous-v2 BipedalWalker-v2 CarRacing-v0\n parser.add_argument('--state_dim', dest='list', type=int, default=[])\n parser.add_argument('--action_dim', type=int, default=[])\n parser.add_argument('--action_high', dest='list', type=float, default=[],action=\"append\")\n parser.add_argument('--action_low', dest='list', type=float, default=[],action=\"append\")\n parser.add_argument(\"--NN_type\", default=\"mlp\", help='mlp or CNN')\n parser.add_argument(\"--code_model\", default=\"train\", help='train, eval or simu')\n parser.add_argument(\"--method_type\", default=\"different\", help='same or different')\n\n '''general hyper-parameters'''\n parser.add_argument('--critic_lr' , type=float, default=0.0001,help='learning rate (default: 0.0001)')#1\n parser.add_argument('--actor_lr', type=float, default=0.00005, help='learning rate (default: 0.0001)')#05\n parser.add_argument('--tau', type=float, default=0.001) #0.001\n parser.add_argument('--gamma', type=float, default=0.99, help='discount factor for rewards (default: 0.99)')\n parser.add_argument('--delay_update', type=int, default=2)\n parser.add_argument('--reward_scale', type=float, default=20)\n\n '''hyper-parameters for soft-Q based algorithm'''\n parser.add_argument('--alpha_lr', type=float, default=0.00005,help='learning rate (default: 0.0001)')\n 
parser.add_argument('--target_entropy', default=\"auto\",help=\"auto or some value such as -2\")\n\n '''hyper-parameters for soft-Q based algorithm'''\n parser.add_argument('--max_step', type=int, default=1000, help='maximum length of an episode')\n parser.add_argument('--buffer_size_max', type=int, default=500000, help='replay memory size')\n parser.add_argument('--initial_buffer_size', type=int, default=2000, help='Learner waits until replay memory stores this number of transition')\n parser.add_argument('--batch_size', type=int, default=256)\n parser.add_argument('--num_hidden_cell', type=int, default=256)\n\n '''other setting'''\n parser.add_argument(\"--max_train\", type=int, default=3000000)\n parser.add_argument('--load_param_period', type=int, default=20)\n parser.add_argument('--save_model_period', type=int, default=10000)\n parser.add_argument('--init_time', type=float, default=0.00)\n parser.add_argument('--seed', type=int, default=1, help='random seed (default: 1)')\n\n '''parallel architecture'''\n parser.add_argument(\"--num_buffers\", type=int, default=4)\n parser.add_argument(\"--num_learners\", type=int, default=6)\n parser.add_argument(\"--num_actors\", type=int, default=6)\n\n #parser.add_argument('--priority_alpha', type=float, default=0.7)\n #parser.add_argument('--priority_beta', type=float, default=0.4)\n #parser.add_argument('--priority_beta_incre', type=float, default=1e-6)\n #parser.add_argument(\"--sample_method\", default=\"random\") # \"random or priority\"\n #parser.add_argument('--priority_slice_size', type=int, default=5000)\n\n '''method list'''\n parser.add_argument(\"--method\", type=int, default=method)\n\n if parser.parse_args().method_type == \"different\":\n parser.add_argument('--method_name', type=dict,\n default={0: 'DSAC-20', 1: 'DSAC-50', 2: 'SAC', 3: 'Double-Q SAC',\n 4: 'TD3', 5: 'DDPG', 6: 'CD3PG'})\n if parser.parse_args().method_name[method] == \"DSAC-20\":\n parser.add_argument(\"--distributional_Q\", 
default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=False)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=20)\n if parser.parse_args().method_name[method] == \"DSAC-50\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=False)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=50)\n elif parser.parse_args().method_name[method] == \"SAC\":\n parser.add_argument(\"--distributional_Q\", default=False)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=True)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n elif parser.parse_args().method_name[method] == \"Double-Q SAC\":\n parser.add_argument(\"--distributional_Q\", default=False)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=True)\n parser.add_argument(\"--double_actor\", default=True)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n elif parser.parse_args().method_name[method] == \"TD3\":\n parser.add_argument(\"--distributional_Q\", default=False)\n parser.add_argument(\"--stochastic_actor\", default=False)\n parser.add_argument(\"--double_Q\", default=True)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument('--alpha', default=0, 
help=\"auto or some value such as 1\")\n elif parser.parse_args().method_name[method] == \"DDPG\":\n parser.add_argument(\"--distributional_Q\", default=False)\n parser.add_argument(\"--stochastic_actor\", default=False)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument('--alpha', default=0, help=\"auto or some value such as 1\")\n elif parser.parse_args().method_name[method] == \"CD3PG\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=False)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument('--alpha', default=0, help=\"auto or some value such as 1\")\n parser.add_argument(\"--adaptive_bound\", default=False)\n parser.add_argument('--TD_bound', type=float, default=50)\n if parser.parse_args().method_type == \"same\":\n parser.add_argument('--method_name', type=dict,\n default={0: 'DSAC-10', 1: 'DSAC-20', 2: 'DSAC-50', 3: 'ADSAC-10', 4: 'ADSAC-20',\n 5: 'ADSAC-50'})\n if parser.parse_args().method_name[method] == \"DSAC-10\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=False)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=10)\n if parser.parse_args().method_name[method] == \"DSAC-20\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=False)\n parser.add_argument('--alpha', 
default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=20)\n if parser.parse_args().method_name[method] == \"DSAC-50\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=False)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=50)\n if parser.parse_args().method_name[method] == \"ADSAC-10\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=True)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=10)\n if parser.parse_args().method_name[method] == \"ADSAC-20\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=True)\n parser.add_argument('--alpha', default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=20)\n if parser.parse_args().method_name[method] == \"ADSAC-50\":\n parser.add_argument(\"--distributional_Q\", default=True)\n parser.add_argument(\"--stochastic_actor\", default=True)\n parser.add_argument(\"--double_Q\", default=False)\n parser.add_argument(\"--double_actor\", default=False)\n parser.add_argument(\"--adaptive_bound\", default=True)\n parser.add_argument('--alpha', 
default=\"auto\", help=\"auto or some value such as 1\")\n parser.add_argument('--TD_bound', type=float, default=50)\n return parser.parse_args()\n\n\n\ndef actor_agent(args, shared_queue, shared_value,share_net, lock, i):\n actor = Actor(args, shared_queue, shared_value,share_net, lock, i)\n actor.run()\n\ndef leaner_agent(args, shared_queue,shared_value,share_net,share_optimizer,device,lock,i):\n\n leaner = Learner(args, shared_queue,shared_value,share_net,share_optimizer,device,lock,i)\n leaner.run()\n\ndef test_agent(args, shared_value,share_net):\n\n test = Test(args, shared_value,share_net)\n test.run()\n\ndef buffer(args, shared_queue, shared_value,i):\n buffer = Replay_buffer(args, shared_queue, shared_value,i)\n buffer.run()\n\ndef simu_agent(args, shared_value):\n simu = Simulation(args, shared_value)\n simu.run()\n\ndef main(method):\n args = built_parser(method=method)\n env = gym.make(args.env_name)\n state_dim = env.observation_space.shape\n action_dim = env.action_space.shape[0]\n\n #max_action = float(env.action_space.high[0])\n args.state_dim = state_dim\n args.action_dim = action_dim\n action_high = env.action_space.high\n action_low = env.action_space.low\n args.action_high = action_high.tolist()\n args.action_low = action_low.tolist()\n args.seed = np.random.randint(0,30)\n args.init_time = time.time()\n num_cpu = mp.cpu_count()\n\n Q_net1 = QNet(args)\n Q_net1.train()\n Q_net1.share_memory()\n Q_net1_target = QNet(args)\n Q_net1_target.train()\n Q_net1_target.share_memory()\n Q_net2 = QNet(args)\n Q_net2.train()\n Q_net2.share_memory()\n Q_net2_target = QNet(args)\n Q_net2_target.train()\n Q_net2_target.share_memory()\n actor1 = PolicyNet(args)\n if args.code_model == \"eval\":\n actor1.load_state_dict(torch.load('./' + args.env_name + '/method_' + str(args.method) + '/model/policy_' + str(args.max_train) + '.pkl'))\n actor1.train()\n actor1.share_memory()\n actor1_target = PolicyNet(args)\n actor1_target.train()\n 
actor1_target.share_memory()\n actor2 = PolicyNet(args)\n actor2.train()\n actor2.share_memory()\n actor2_target = PolicyNet(args)\n actor2_target.train()\n actor2_target.share_memory()\n\n\n Q_net1_target.load_state_dict(Q_net1.state_dict())\n Q_net2_target.load_state_dict(Q_net2.state_dict())\n actor1_target.load_state_dict(actor1.state_dict())\n actor2_target.load_state_dict(actor2.state_dict())\n\n\n\n Q_net1_optimizer = my_optim.SharedAdam(Q_net1.parameters(), lr=args.critic_lr)\n Q_net1_optimizer.share_memory()\n Q_net2_optimizer = my_optim.SharedAdam(Q_net2.parameters(), lr=args.critic_lr)\n Q_net2_optimizer.share_memory()\n actor1_optimizer = my_optim.SharedAdam(actor1.parameters(), lr=args.actor_lr)\n actor1_optimizer.share_memory()\n actor2_optimizer = my_optim.SharedAdam(actor2.parameters(), lr=args.actor_lr)\n actor2_optimizer.share_memory()\n log_alpha = torch.zeros(1, dtype=torch.float32, requires_grad=True)\n log_alpha.share_memory_()\n alpha_optimizer = my_optim.SharedAdam([log_alpha], lr=args.alpha_lr)\n alpha_optimizer.share_memory()\n\n share_net = [Q_net1,Q_net1_target,Q_net2,Q_net2_target,actor1,actor1_target,actor2,actor2_target,log_alpha]\n share_optimizer=[Q_net1_optimizer,Q_net2_optimizer,actor1_optimizer,actor2_optimizer,alpha_optimizer]\n\n experience_in_queue = []\n experience_out_queue = []\n for i in range(args.num_buffers):\n experience_in_queue.append(Queue(maxsize=20))\n experience_out_queue.append(Queue(maxsize=10))\n shared_queue = [experience_in_queue, experience_out_queue]\n step_counter = mp.Value('i', 0)\n stop_sign = mp.Value('i', 0)\n iteration_counter = mp.Value('i', 0)\n shared_value = [step_counter, stop_sign,iteration_counter]\n lock = mp.Lock()\n procs=[]\n if args.code_model!=\"train\":\n args.alpha = 0.01\n if args.code_model!=\"simu\":\n for i in range(args.num_actors):\n procs.append(Process(target=actor_agent, args=(args, shared_queue, shared_value,[actor1,Q_net1], lock, i)))\n for i in range(args.num_buffers):\n 
procs.append(Process(target=buffer, args=(args, shared_queue, shared_value,i)))\n procs.append(Process(target=test_agent, args=(args, shared_value, [actor1, log_alpha])))\n for i in range(args.num_learners):\n #device = torch.device(\"cuda\")\n device = torch.device(\"cpu\")\n procs.append(Process(target=leaner_agent, args=(args, shared_queue, shared_value,share_net,share_optimizer,device,lock,i)))\n elif args.code_model==\"simu\":\n procs.append(Process(target=simu_agent, args=(args, shared_value)))\n\n for p in procs:\n p.start()\n for p in procs:\n p.join()\n\nif __name__ == '__main__':\n #os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\n os.environ[\"OMP_NUM_THREADS\"] = \"1\"\n os.environ[\"KMP_DUPLICATE_LIB_OK\"] = \"TRUE\"\n\n for i in range(0,7,1):\n main(i)\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5360230803489685,
"alphanum_fraction": 0.5525936484336853,
"avg_line_length": 48.55356979370117,
"blob_id": "ac4ae638f824a1de30503216bb528a71e4d88efa",
"content_id": "1789018f5d58a5c542be0b0e0349e571163e42c6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2776,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 56,
"path": "/plot.py",
"repo_name": "Yangangren/DSAC",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nimport numpy as np\nimport time\nimport matplotlib as mpl\nimport math\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nMETHOD_IDX_TO_METHOD_NAME={0: 'DSAC-10', 1: 'DSAC-20', 2: 'DSAC-50', 3: 'ADSAC-10', 4: 'ADSAC-20',\n 5: 'ADSAC-50'}\n\ndef make_a_figure_of_n_runs(env_name, run_numbers, method_numbers):\n # make a total dataframe\n df_list = []\n for run_idx_ in range(run_numbers):\n for method_idx in range(method_numbers):\n iteration = np.load('./' + env_name + '/method_' + str(method_idx) + '/result/iteration.npy')\n time = np.load('./' + env_name + '/method_' + str(method_idx) + '/result/time.npy')\n average_return_with_diff_base = np.load('./' + env_name + '/method_'\n + str(method_idx) + '/result/average_return_with_diff_base.npy')\n average_return_max_1 = list(map(lambda x: x[0], average_return_with_diff_base))\n average_return_max_3 = list(map(lambda x: x[1], average_return_with_diff_base))\n average_return_max_5 = list(map(lambda x: x[2], average_return_with_diff_base))\n\n alpha = np.load('./' + env_name + '/method_' + str(method_idx) + '/result/alpha.npy')\n\n run_idx = np.ones(shape=iteration.shape, dtype=np.int32) * run_idx_\n method_name = METHOD_IDX_TO_METHOD_NAME[method_idx]\n method_name = [method_name] * iteration.shape[0]\n\n df_for_this_run_and_method = pd.DataFrame(dict(run_idx=run_idx,\n method_name=method_name,\n iteration=iteration,\n time=time,\n average_return=average_return_max_3,\n alpha=alpha))\n df_list.append(df_for_this_run_and_method)\n total_dataframe = df_list[0].append(df_list[1:], ignore_index=True) if method_numbers > 1 else df_list[0]\n f1 = plt.figure(1)\n sns.lineplot(x=\"iteration\", y=\"average_return\", hue=\"method_name\", data=total_dataframe)\n plt.title(env_name + '_average_return')\n\n\n f3 = plt.figure(3)\n sns.lineplot(x=\"iteration\", y=\"alpha\", hue=\"method_name\", data=total_dataframe)\n plt.title(env_name + '_alpha')\n 
plt.show()\n\n\nif __name__ == '__main__':\n env_name = \"Ant-v2\"\n # MountainCarContinuous-v0 BipedalWalkerHardcore-v2 Pendulum-v0\n # LunarLanderContinuous-v2 BipedalWalker-v2 CarRacing-v0\n run_numbers = 1\n method_numbers = 6\n make_a_figure_of_n_runs(\"Ant-v2\", run_numbers, method_numbers)\n\n"
}
] | 3 |
sandhya123r/helloworld | https://github.com/sandhya123r/helloworld | 6b2becc73ceb4690cc4de706b0ab8c3a45442ef7 | 72ab45f445df0bdf42c29a770df3b0128acc66c8 | dbf12a7ca57795e49d8f23f0b78f4a3f30732aa5 | refs/heads/master | 2021-01-01T04:13:10.349668 | 2016-05-02T04:28:27 | 2016-05-02T04:28:27 | 56,954,128 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.6747967600822449,
"alphanum_fraction": 0.7154471278190613,
"avg_line_length": 19.5,
"blob_id": "a3f5cfdf700520e011e69dca787c693451015438",
"content_id": "05222b3e7c261514de30db22685bab272190dd3f",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 123,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 6,
"path": "/bin/run_this.py",
"repo_name": "sandhya123r/helloworld",
"src_encoding": "UTF-8",
"text": "from helloworld import hello\nimport sys\n\nargs = sys.argv\nhello = hello.Hello()\nhello.app.run(host=\"0.0.0.0\", port=args[1])\n"
},
{
"alpha_fraction": 0.6346666812896729,
"alphanum_fraction": 0.6426666378974915,
"avg_line_length": 25.785715103149414,
"blob_id": "3785d8e5c70b5e73819dc8627c8c182730a4c56f",
"content_id": "6e7a328c518fa093ced50771bd8b92d47dc95c68",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 375,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 14,
"path": "/helloworld/hello.py",
"repo_name": "sandhya123r/helloworld",
"src_encoding": "UTF-8",
"text": "# http://flask-restful-cn.readthedocs.org/en/0.3.4/quickstart.html\nfrom flask import Flask\nfrom flask_restful import Resource, Api\n\n\nclass HelloWorld(Resource):\n def get(self):\n return {'hello': 'world'}\n\nclass Hello(object):\n def __init__(self):\n self.app = Flask(__name__)\n self.api = Api(self.app)\n self.api.add_resource(HelloWorld, '/')\n"
},
{
"alpha_fraction": 0.5982906222343445,
"alphanum_fraction": 0.6068376302719116,
"avg_line_length": 20.272727966308594,
"blob_id": "ab0f379b8a293886e1c451dd0d2442a45adb8b28",
"content_id": "0248732cb8b514d7896de8ed47540d796f448e65",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 234,
"license_type": "permissive",
"max_line_length": 39,
"num_lines": 11,
"path": "/setup.py",
"repo_name": "sandhya123r/helloworld",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom distutils.core import setup\n\nsetup(name='HelloWorld',\n version='0.1',\n description='Hello World Server',\n author='Sandhya R',\n author_email='[email protected]',\n packages=['helloworld'],\n )\n"
},
{
"alpha_fraction": 0.7567567825317383,
"alphanum_fraction": 0.7567567825317383,
"avg_line_length": 17,
"blob_id": "1a7967bd866001a8f077ce316124178a3719a2f4",
"content_id": "6555fe91e74a0b9c6f1ca9b4452ada2a3985c7b5",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 37,
"license_type": "permissive",
"max_line_length": 22,
"num_lines": 2,
"path": "/README.md",
"repo_name": "sandhya123r/helloworld",
"src_encoding": "UTF-8",
"text": "# helloworld\nA sample app in Python \n"
}
] | 4 |
ljjjustin/gists | https://github.com/ljjjustin/gists | 1bfa918debd525db4e4ed6057053a4a765799ac8 | 765558e9e37d9e50121293565fac2b0bf449f26d | 4d962311d8b9fabe8e2af15def354ae747db5b47 | refs/heads/master | 2022-02-05T19:34:15.647600 | 2022-01-14T15:40:13 | 2022-01-14T15:40:13 | 224,826,261 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5062500238418579,
"alphanum_fraction": 0.5249999761581421,
"avg_line_length": 13.545454978942871,
"blob_id": "907d8e8194b2b2790a21de60de2a244c38fb3b5e",
"content_id": "d7abc9d37819c856c0e5df6698b0efcbbc21371a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 160,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 11,
"path": "/macos/code_resign.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nif [[ $# -ne 1 ]]; then\n echo \"$0 <app dir>\"\n exit\nfi\n\napp_dir=$1\n\nxattr -cr \"${app_dir}\"\nsudo codesign --force --deep --sign - \"${app_dir}\"\n"
},
{
"alpha_fraction": 0.6322580575942993,
"alphanum_fraction": 0.6451612710952759,
"avg_line_length": 17.235294342041016,
"blob_id": "a5f32973e246f363a62b778a4999c3de4da4ff53",
"content_id": "1e45cc6088816196e628a9f9d229d4efeb635cee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 310,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 17,
"path": "/openstack/show-clb-vrrp-ip.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nif [ $# -ne 1 ]; then\n echo \"usage: $0 <vip>\"\n exit\nfi\n\nvip=$1\n\nsource /root/keystonerc_admin\namphora_ids=$(openstack loadbalancer amphora list | grep \" ${vip} \" | awk '{print $2}')\n\necho $vip\nfor id in $(echo $amphora_ids)\ndo\n openstack loadbalancer amphora show $id | grep vrrp_ip\ndone\n"
},
{
"alpha_fraction": 0.6982248425483704,
"alphanum_fraction": 0.7100591659545898,
"avg_line_length": 24.17021369934082,
"blob_id": "7676a6edb33d9f23a9622978e5faa064f7500020",
"content_id": "bd88b80477645710ef58dab759edcbe48ae0f67c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1183,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 47,
"path": "/tstack-clean.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# delete all VMs\n\nnova delete tstack-venus tstack-monitor2 tstack-windows tstack-windows2\n\nwhile true\ndo\n\tif ! nova list | grep -E 'tstack-venus|tstack-window|tstack-monitor' > /dev/null; then\n\t\tbreak\n\tfi\n\tsleep 3\ndone\n\n# delete floating ips\nfor id in $(neutron floatingip-list | grep 10.10.2 | awk '{print $2}')\ndo\n\tneutron floatingip-disassociate ${id}\n\tneutron floatingip-delete ${id}\ndone\n\n# delete tstack router\nfor router in $(neutron router-list | grep tstack_router | awk '{print $2}')\ndo\n\tneutron router-gateway-clear ${router}\n\tfor p in $(neutron router-port-list ${router} | awk '{print $2}')\n\tdo\n\t\tneutron router-interface-delete ${router} port=${p}\n\tdone\n\tneutron router-delete ${router}\ndone\n\n# delete subnet\nfor subnet in $(neutron subnet-list | grep -E 'HA subnet |tstack-internal-subnet|tstack-vxlan-subnet' | awk '{print $2}')\ndo\n\tfor p in $(neutron port-list | grep ${subnet} | awk '{print $2}')\n\tdo\n\t\tneutron port-delete ${p}\n\tdone\n\tneutron subnet-delete ${subnet}\ndone\n\n# delete network\nfor network in $(neutron net-list | grep -E 'HA network |tstack-internal-network|tstack-vxlan-network' | awk '{print $2}')\ndo\n\tneutron net-delete ${network}\ndone\n"
},
{
"alpha_fraction": 0.7093023061752319,
"alphanum_fraction": 0.7209302186965942,
"avg_line_length": 16.200000762939453,
"blob_id": "60fc16830e701823f190b5e246b0351c5aeec9ec",
"content_id": "a87723e91aeb335fa75aa4c290fabda896636880",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 86,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 5,
"path": "/macos/disable-nat.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nsudo sysctl -w net.inet.ip.forwarding=0\nsudo pfctl -F nat\nsudo pfctl -sn\n"
},
{
"alpha_fraction": 0.6233766078948975,
"alphanum_fraction": 0.6314935088157654,
"avg_line_length": 18.870967864990234,
"blob_id": "f13215ff66e1a68b029c6ea8b559bba0d1cc4428",
"content_id": "2b217e01ea8730722022181b841dc82d6691e074",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 616,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 31,
"path": "/ovs/tap-br-int.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nSRCBR=br-int\nPPORT=patch-tun\nTAP=tap-${SRCBR}\n\nOLD_PS1=\"$PS1\"\n\ncleanup() {\n\tovs-vsctl clear bridge ${SRCBR} mirrors\n\tovs-vsctl del-port ${SRCBR} $TAP\n\tip link del dev $TAP\n\texport PS1=\"$OLD_PS1\"\n}\n\ntrap \"cleanup\" EXIT\n\nip link add name $TAP type dummy\nip link set dev $TAP up\novs-vsctl add-port ${SRCBR} $TAP\n\novs-vsctl -- --id=@dst get Port $TAP \\\n -- --id=@src get Port ${PPORT} \\\n -- --id=@m create Mirror name=$TAP select_dst_port=@src output_port=@dst \\\n -- set Bridge ${SRCBR} mirrors=@m\n\novs-vsctl list mirror\n\necho \"tcpdump -pni $TAP\"\n\nexport PS1=\"tcpdump > \" && bash\n"
},
{
"alpha_fraction": 0.5291005373001099,
"alphanum_fraction": 0.5687830448150635,
"avg_line_length": 24.133333206176758,
"blob_id": "937e267df3271b60b8f42326781fb4928f9fab17",
"content_id": "cc6bacf35bcaf043e5fe2de16d5cbdf0590a3693",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 378,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 15,
"path": "/openstack/pgcal.py",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "# !/usr/bin/env python\n\nimport math\nimport sys\n\nif __name__ == '__main__':\n if len(sys.argv) != 2:\n print \"usage: %s <osd amount>\" % sys.argv[0]\n osd_num = int(sys.argv[1])\n power = int(math.log(osd_num * 100, 2)+0.99)\n total_pgs = 2**power\n\n print \"images pool: \", total_pgs/16\n print \"vms pool: \", total_pgs/2\n print \"volumes pool: \", total_pgs/2\n\n"
},
{
"alpha_fraction": 0.5347912311553955,
"alphanum_fraction": 0.5487077832221985,
"avg_line_length": 14.242424011230469,
"blob_id": "57d2c50ccf888855b8a1ab5aeeca3c33bfbd3e67",
"content_id": "0a6c855c55da19c5377752696231da15bfca7e86",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 503,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 33,
"path": "/openstack/migrate-vm.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nif [ $# -ne 1 ]; then\n echo \"usage: $0 <vm uuid or name>\"\n exit\nfi\n\nvm=$1\n\nwait_status() {\n local target=$1\n\n while true\n do\n status=$(nova show ${vm} | grep -w status | awk '{print $4}' | awk '{print tolower($0)}')\n\n if [ \"${target}\" = \"${status}\" ]; then\n break\n fi\n sleep 3\n done\n}\n\nnova stop ${vm}\nwait_status shutoff\n\nnova migrate ${vm}\nwait_status verify_resize\n\nnova resize-confirm ${vm}\nwait_status shutoff\n\nnova start ${vm}\n"
},
{
"alpha_fraction": 0.4959999918937683,
"alphanum_fraction": 0.5173333287239075,
"avg_line_length": 22.4375,
"blob_id": "a222cff106f7707428bcaac73093c4d6e61478fd",
"content_id": "a3041d33776848c80ca36c51bfce8399ab306213",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 375,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 16,
"path": "/ceph/check-pg.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nceph osd tree > osd_tree.txt\n\nfor ((i=0; i < 30; i++))\ndo\n n=$((RANDOM%1000))\n pgnum=$(sed -ne \"${n} p\" pg-info.txt | awk '{print $1}')\n osds=$(ceph pg map $pgnum | awk '{print $NF}' | tr '[,]' ' ')\n grepstr=\"root|rack|host\"\n for o in $(echo $osds)\n do\n grepstr=\"${grepstr}|osd.${o}\"\n done\n grep -wE \"${grepstr}\" osd_tree.txt\ndone\n"
},
{
"alpha_fraction": 0.5930736064910889,
"alphanum_fraction": 0.649350643157959,
"avg_line_length": 20,
"blob_id": "c905400fd4bca99a7a998c2def25e81a03c5e9ee",
"content_id": "2516324d082cd302215f66a7bea9ae7589391df4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 231,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 11,
"path": "/encrypt.py",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "import bcrypt\nimport sys\nimport hashlib\n\n\nif __name__ == '__main__':\n\n m = hashlib.sha512()\n m.update(sys.argv[1])\n sha512_code = m.hexdigest()\n print bcrypt.hashpw(sha512_code, bcrypt.gensalt(rounds=10, prefix=b\"2a\"))\n"
},
{
"alpha_fraction": 0.5618115067481995,
"alphanum_fraction": 0.5936352610588074,
"avg_line_length": 27.172412872314453,
"blob_id": "3bcae8457c62ddde1aaaa03c4f27f1c63c9c394e",
"content_id": "bd21244c0a83b5d839451944f9284323205ffeb1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 817,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 29,
"path": "/openstack/cpu_flags_comp.py",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport sys\n\ndef compare_cpu_flags(cpu1, cpu2):\n\n with open(cpu1, \"r\") as f:\n cpu1_flags = f.read()\n with open(cpu2, \"r\") as f:\n cpu2_flags = f.read()\n\n cpu1_flag_set = set(cpu1_flags.strip().split())\n cpu2_flag_set = set(cpu2_flags.strip().split())\n\n if cpu1_flag_set == cpu2_flag_set:\n print \"live migration is OK\"\n elif cpu1_flag_set.issubset(cpu2_flag_set):\n print \"live migration from cpu1 -> cpu2 is OK\"\n elif cpu2_flag_set.issubset(cpu1_flag_set):\n print \"live migration from cpu2 -> cpu1 is OK\"\n else:\n print \"live migration is NOT OK\"\n\nif __name__ == '__main__':\n if len(sys.argv) != 3:\n print \"usage: %s cpu1_flags cpu2_flags\" % sys.argv[0]\n sys.exit()\n\n compare_cpu_flags(sys.argv[1], sys.argv[2])\n"
},
{
"alpha_fraction": 0.5248227119445801,
"alphanum_fraction": 0.6879432797431946,
"avg_line_length": 27.200000762939453,
"blob_id": "32cd69233f85f4297d84a8ad881f634ff8e7bfd4",
"content_id": "63964270a8e9507233f30ed033f5d617fc2fe33d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 282,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 10,
"path": "/routeadv.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nip route flush table 200\nip route add 169.254.62.0/24 dev bond2 table 200\nip route add default via 169.254.62.254 table 200\n\nipaddr=$(ip r get 169.254.62.254 | grep src | awk '{print $NF}')\nif ! ip ru | grep -qw ${ipaddr}; then\n\tip rule add from ${ipaddr} table 200\nfi\n"
},
{
"alpha_fraction": 0.5187032222747803,
"alphanum_fraction": 0.5187032222747803,
"avg_line_length": 27.64285659790039,
"blob_id": "8032aeaa3a745c6d2e9703d220fe972eaed94939",
"content_id": "a6eff1abb6fd5c57a05158cea6bfc9db1fc111ce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 401,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 14,
"path": "/macos/purge-node.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nsudo npm uninstall npm -g\nbrew uninstall node\n\nsudo rm -rf /usr/local/lib/dtrace/node.d \\\n /usr/local/lib/node_modules \\\n /usr/local/share/man/*/node* \\\n /usr/local/share/man/*/npm* \\\n /usr/local/bin/npm \\\n /usr/local/bin/nodemon \\\n /usr/local/bin/node \\\n /usr/local/include/node \\\n ~/.npm* ~/.node*\n"
},
{
"alpha_fraction": 0.7171717286109924,
"alphanum_fraction": 0.7272727489471436,
"avg_line_length": 18.799999237060547,
"blob_id": "324ed1fb16840e56a81aefe794df54316c4e74bb",
"content_id": "4cad75a5a44b066b22600652fc171835891d224f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 99,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 5,
"path": "/macos/enable-nat.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nsudo sysctl -w net.inet.ip.forwarding=1\nsudo pfctl -evf /etc/pf.custom\nsudo pfctl -sn\n"
},
{
"alpha_fraction": 0.5363636612892151,
"alphanum_fraction": 0.5636363625526428,
"avg_line_length": 12.75,
"blob_id": "4de399f861a5c474c933e677e71399d4b362f326",
"content_id": "5ea225190ccc20f676b9a7ab6218638050c03297",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 110,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 8,
"path": "/macos/set-hostname.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nif [ $# -ne 1 ]; then\n echo \"usage: $0 <hostname>\"\n exit\nfi\n\nsudo scutil --set HostName $1\n"
},
{
"alpha_fraction": 0.5168918967247009,
"alphanum_fraction": 0.6554054021835327,
"avg_line_length": 23.66666603088379,
"blob_id": "336ca5f208c04a49df5972422a6c44c22d25ac90",
"content_id": "6bf15c62973e8c8a0fcf011270f105a154204897",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 296,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 12,
"path": "/openstack/boot_at.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nif [ $# -ne 1 ]; then\n echo \"usage: $0 <compute node>\"\n exit\nfi\n\nhost=$1\n\nnova boot --flavor TSF-1 --image d5e7bbb9-00c4-4b3c-ab32-baff5ec0c2cc \\\n --user-data userdata.txt --nic net-id=8bd9b058-d142-400f-9c22-3847178ad409 \\\n --availability-zone nova:${host} test-${host}\n"
},
{
"alpha_fraction": 0.5378151535987854,
"alphanum_fraction": 0.5630252361297607,
"avg_line_length": 12.222222328186035,
"blob_id": "de3de132b490840694b608ec231d11425be4859b",
"content_id": "80746f83e4e6148b4a433942fb02e44e46c19201",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 119,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 9,
"path": "/zentao/start.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ncd $(dirname $0)\n\n./zbox start\n\nif ! pgrep -f xxd; then\n cd ./run/xxd; nohup ./xxd > xxd.log 2>&1 &\nfi\n"
},
{
"alpha_fraction": 0.7250000238418579,
"alphanum_fraction": 0.7250000238418579,
"avg_line_length": 12.333333015441895,
"blob_id": "3d64145e7fb355a4d27f90cc342321d8a20796f8",
"content_id": "9b59779fbdf93837ca2b0308491dd73c538df37d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 40,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 3,
"path": "/macos/show-loaded-kernel-modules.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n/usr/bin/kmutil showloaded\n"
},
{
"alpha_fraction": 0.5984703898429871,
"alphanum_fraction": 0.6545570492744446,
"avg_line_length": 23.13846206665039,
"blob_id": "f8f71f6f1db7dfe5a1d02e9a9ec5df4463c0fec3",
"content_id": "51cb2f669d5a6011281c431f85f100637a578604",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1569,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 65,
"path": "/openstack/cluster-bootstrap.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# vm flavors\nensure_flavor() {\n local name=$1\n local vcpu=$2\n local mems=$3\n local disk=$4\n\n if ! nova flavor-list | grep -q -w \" $name \"; then\n nova flavor-create --is-public true $name auto $mems $disk $vcpu\n fi\n}\n\nensure_flavor TMP-INST 4 8192 200\nensure_flavor TMP-GLOBAL 16 32768 200\nensure_flavor TMP-BUSS 4 4096 100\n\n# vm network\nensure_network() {\n local net=$1\n local subnet=\"$2\"\n\n if ! neutron net-list | grep -q -w \" $net \"; then\n neutron net-create $net\n fi\n\n if ! neutron subnet-list | grep -q -w \" $subnet \"; then\n neutron subnet-create --name $subnet $net 192.168.123.0/24\n fi\n}\n\nnetname=tmp-net\nsubnet=\"${netname}-sub1\"\nensure_network $netname $subnet\n\n# router\nensure_router() {\n local router=$1\n\n if ! neutron router-list | grep -q -w \" $router \"; then\n neutron router-create $router\n neutron router-gateway-set $router tstack-floating-ips\n neutron router-interface-add $router $subnet\n fi\n}\nensure_router tmp-router1\n\n# installer\nnova boot --flavor TMP-INST --image d5e7bbb9-00c4-4b3c-ab32-baff5ec0c2cc \\\n --user-data userdata.txt --nic net-name=$netname \\\n tmp-installer\n\n# global cluster\nnova boot --flavor TMP-GLOBAL --image d5e7bbb9-00c4-4b3c-ab32-baff5ec0c2cc \\\n --user-data userdata.txt --nic net-name=$netname \\\n tmp-global\n\n# business cluster\nfor i in $(seq 1 3)\ndo\n nova boot --flavor TMP-BUSS --image d5e7bbb9-00c4-4b3c-ab32-baff5ec0c2cc \\\n --user-data userdata.txt --nic net-name=$netname \\\n tmp-buss${i}\ndone\n"
},
{
"alpha_fraction": 0.6215139627456665,
"alphanum_fraction": 0.6374502182006836,
"avg_line_length": 24.100000381469727,
"blob_id": "7fe5053b74f8f410df0792cd417ae2c8821d32eb",
"content_id": "d15ed00ce9bf21b144ec2881a48b8dba6c34af61",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 251,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 10,
"path": "/config-hostname.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# get ip address\nip=$(ip a show dev bond1 | grep -w inet | awk '{print $2}' | cut -d '/' -f1)\n\n# get hostname from hosts files\nhostname=$(grep \"${ip}\" /etc/hosts | awk '{print $2}')\n\n# modify hostname\nhostnamectl set-hostname ${hostname}\n"
},
{
"alpha_fraction": 0.43887147307395935,
"alphanum_fraction": 0.4529780447483063,
"avg_line_length": 18.33333396911621,
"blob_id": "2ae7823f16376ae144e6fd2c29ae1b8193c281c6",
"content_id": "7cb88b58357f7fed5a76928e8005343c50d3b71c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 638,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 33,
"path": "/cmpset.c",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <unistd.h>\n\nint x86_atomic_cmp_set(int *lock, int old, int set)\n{\n char res;\n\n __asm__ volatile (\n \" lock; \"\n \" cmpxchgl %3, %1; \"\n \" sete %0; \"\n : \"=a\" (res) : \"m\" (*lock), \"a\" (old), \"r\" (set) : \"cc\", \"memory\");\n\n return res;\n\n}\n\nint main(int argc, char **argv)\n{\n int pid = getpid();\n\n int lock = 1;\n int res;\n\n res = x86_atomic_cmp_set(&lock, 0, pid);\n if (res) {\n printf(\"Yep, I get the lock\\n\");\n } else {\n printf(\"No, I don't get the lock\\n\");\n }\n\n printf(\"Now lock value is: %d\\n\", lock);\n}\n"
},
{
"alpha_fraction": 0.5617977380752563,
"alphanum_fraction": 0.584269642829895,
"avg_line_length": 8.88888931274414,
"blob_id": "40ada84284ac4caab74ac87821397b94794d01b9",
"content_id": "b098cee68eadbcc1984efa5c8abdf79b962b43b7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 89,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 9,
"path": "/zentao/stop.sh",
"repo_name": "ljjjustin/gists",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ncd $(dirname $0)\n\nif pgrep -f xxd; then\n pkill -9 -f xxd\nfi\n\n./zbox stop\n"
}
] | 21 |
JSpiner/SlackBot | https://github.com/JSpiner/SlackBot | 28871d1b9a4e3eaa81088b6e29421669f08c72f9 | 2c975f830cf42312f43abf712785d8c4dd353400 | 16fb976a7164c88aa05d65323bb9dcf5c803986c | refs/heads/master | 2020-04-02T01:18:09.625186 | 2016-09-22T09:47:39 | 2016-09-22T09:47:39 | 68,697,424 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.3657243847846985,
"alphanum_fraction": 0.43286219239234924,
"avg_line_length": 27.200000762939453,
"blob_id": "505275d12e1f5abd148c37c51953df43c397f899",
"content_id": "d96d2d92a07bad05a155af2e11132b5eff912222",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 566,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 20,
"path": "/tajabot/util.py",
"repo_name": "JSpiner/SlackBot",
"src_encoding": "UTF-8",
"text": "\n\ndef getEditDistance(str1, str2):\n\n d = [[0 for col in range(len(str2) + 1)] for row in range(len(str1) + 1)]\n\n print(str1)\n print(str2)\n for i in range(0, len(str1) + 1):\n d[i][0]= i\n \n for i in range(0, len(str2) + 1):\n d[0][i] = i\n\n for i in range(1, len(str1) + 1):\n for j in range(1, len(str2) + 1):\n if str1[i - 1] == str2[j - 1]:\n d[i][j] = d[i-1][j-1]\n else:\n d[i][j] = min([d[i-1][j-1] + 1, d[i][j-1] + 1, d[i-1][j] + 1])\n \n return d[len(str1)][len(str2)]\n"
},
{
"alpha_fraction": 0.6518518328666687,
"alphanum_fraction": 0.6814814805984497,
"avg_line_length": 18.285715103149414,
"blob_id": "ebe87b1da253f0ecbbe3820d1d7500d7a388097c",
"content_id": "7e381be58336ed6868962a23030c476a20f5c501",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 135,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 7,
"path": "/tajabot/core.py",
"repo_name": "JSpiner/SlackBot",
"src_encoding": "UTF-8",
"text": "from celery import Celery\n\napp = Celery('tasks', broker='amqp://guest:guest@localhost:5672//')\n\[email protected]\ndef add(x,y):\n return x+y\n"
},
{
"alpha_fraction": 0.6859503984451294,
"alphanum_fraction": 0.7520661354064941,
"avg_line_length": 23.399999618530273,
"blob_id": "3f03a3318ec66d3f13b691e026e10d7a1ed33639",
"content_id": "544db16c09625963ddbc4b6f54fde12cd046bda4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 121,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 5,
"path": "/tajabot/test.py",
"repo_name": "JSpiner/SlackBot",
"src_encoding": "UTF-8",
"text": "from core import add\nimport random\n\nresult = add.delay(random.randint(0,100), random.randint(0,100))\nprint (result.get())"
},
{
"alpha_fraction": 0.6430390477180481,
"alphanum_fraction": 0.6494057774543762,
"avg_line_length": 24.042552947998047,
"blob_id": "3402a4827c50c412f8541f8807f3e4b40f8d1ebd",
"content_id": "2f82e4554700ef137b68668839d64bea89958b32",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2356,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 94,
"path": "/tajabot/multi.py",
"repo_name": "JSpiner/SlackBot",
"src_encoding": "UTF-8",
"text": "from multiprocessing import Process, Queue, Array, Manager\nimport multiprocessing\nfrom threading import Thread, Lock\nimport threading\nimport time\nimport pika\nabc = 0\n# define functions\ndef workerProcess(workerId, queue):\n print(\"inited worker Id : \" + str(workerId))\n \n time.sleep(1)\n print(processList)\n\n while True:\n if queue.empty():\n continue\n data = queue.get()\n \n global abc\n #print('catch job worker id : ' + str(workerId) + ' thread count : ' + str(processList[workerId]['threadCount']))\n abc+=1\n thread = Thread(target = workerThread, args=(workerId, data))\n thread.start()\n \n\ndef workerThread(workerId, data):\n increaseThreadCount(workerId)\n \n threadId = threading.current_thread().ident\n\n print(processList)\n print('work job worker id : ' + str(workerId) + ' thread count : ' + str(processList[workerId]['threadCount']) + ' ' + data)\n time.sleep(10)\n \n decreaseThreadCount(workerId)\n return\n\ndef increaseThreadCount(workerId):\n lock.acquire()\n processList[workerId]['threadCount'] = processList[workerId]['threadCount'] + 1\n lock.release()\n\ndef decreaseThreadCount(workerId):\n lock.acquire()\n processList[workerId]['threadCount'] -= 1\n lock.release()\n\ndef getLazyWorkerId():\n workerId = 0\n threadCount = -1\n\n for processObject in processList:\n if processObject['threadCount'] > threadCount:\n workerId = processObject['workerId']\n threadCount = processObject['threadCount']\n\n return workerId\n\nlock = Lock()\n\n# init rabbitMQ\nconnection = pika.BlockingConnection(\n pika.ConnectionParameters('localhost'))\nchannel = connection.channel()\n\nchannel.queue_declare(queue='workqueue')\n\n\n# multi processing\n\nmanager = Manager()\n\ncpu_num = multiprocessing.cpu_count()\nprocessList = manager.dict()\n\nprint(\"cpu num : \" + str(cpu_num))\nfor i in range(cpu_num):\n queue = Queue()\n process = Process(target=workerProcess, args=(i,queue))\n processList[i] = {'workerId' : i,\n 'process' : process,\n 'threadCount' : 
0,\n 'queue' : queue}\n process.start()\n\n# test code \ntime.sleep(3)\n\nfor i in range(100):\n lazyWorkerId = getLazyWorkerId()\n processList[lazyWorkerId]['queue'].put('hello-'+str(i))\n print(abc)\n time.sleep(1)\n\n\n"
},
{
"alpha_fraction": 0.647243082523346,
"alphanum_fraction": 0.6566416025161743,
"avg_line_length": 18,
"blob_id": "6b8c0469ee50fb82949406ef50dd2284d9d5fe1f",
"content_id": "b5d46b498a4eaa37d92159dbb103ee99139522fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1714,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 84,
"path": "/tajabot/game.py",
"repo_name": "JSpiner/SlackBot",
"src_encoding": "UTF-8",
"text": "from flask import Flask\nimport sys\nimport os\nimport time\nimport redis\nimport util\nimport key\nimport random\nfrom slackclient import SlackClient\n\ntexts = [\n\t\"무궁화 꽃이 피었습니다.\",\n\t\"이것도 너프해 보시지!\",\n\t\"소프트웨어 마에스트로\",\n\t\"난 너를 사랑해 이 세상은 너 뿐이야\"\n]\n\n\ndef actionStart(data):\n\tsc.rtm_send_message(data['channel'], \"Ready~\")\n\n\tfor i in range(4):\n\t\ttime.sleep(1)\n\t\tif i!=3:\n\t\t\tsc.rtm_send_message(data['channel'], str(3-i) + \"!\")\n\t\n\tglobal textIndex\n\tglobal stTime\n\ttextIndex = random.randrange(0,len(texts))\n\tstTime = time.time()\n\tprint( textIndex)\n\tsc.rtm_send_message(data['channel'], texts[textIndex])\n\ndef actionType(data):\n\tprint(\"actionType\")\n\t\n\tdistance = util.getEditDistance(data['text'], texts[textIndex])\n\tlength = max(len(data['text']), len(texts[textIndex]))\n\taccur = (length - distance) / length * 100\n\n\tedTime = time.time()\n\tspeed = length / (edTime - stTime) * 60\n\n\tresponse = \"accur : \" + str(accur) + \"% speed : \" + str(speed)\n\tsc.rtm_send_message(data['channel'], response)\n\n\tglobal textIndex\n\ttextIndex = -1\nprint (sys.version)\n\n\ntextIndex = -1\nstTime = 0\n\nprint (\"init client\")\nsc = SlackClient(key.SLACK_BOT_KEY)\nprint (\"connecting...\")\n \nif sc.rtm_connect():\n\tprint(\"connected!\")\n\n\twhile True:\n\t\tresponse = sc.rtm_read()\n\n\t\tif len(response) == 0: \n\t\t\tcontinue\n\n\t\t# response는 배열로, 여러개가 담겨올수 있음\n\t\tfor data in response:\n\t\t\tprint(data)\n\n\t\t\tif ('type' in data) is False:\n\t\t\t\tcontinue\t\t\t\n\t\t\tif data['type'] == 'message':\n\t\t\t\tprint ('msg' + str(textIndex))\n\t\t\t\tif data['text'] == \".시작\":\n\t\t\t\t\tactionStart(data)\n\t\t\t\telif textIndex >= 0:\n\t\t\t\t\tactionType(data)\n\n\nfdfsdf\nelse:\n\tprint (\"Connection Failed\")\n"
}
] | 5 |
insilicolife/TF3DScan | https://github.com/insilicolife/TF3DScan | 1148a2467cc05196db71f147cb13660e74d69c8c | 3d8b4f1dcb6f74d6dd1e4416c26e6bfca66115c8 | b224239671468a8b2c4c71e55e7272c21f7d213c | refs/heads/main | 2023-05-14T03:06:42.025564 | 2021-06-07T19:36:14 | 2021-06-07T19:36:14 | 374,782,308 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5429316163063049,
"alphanum_fraction": 0.5547052621841431,
"avg_line_length": 45.093021392822266,
"blob_id": "144cbda6b4eb930d411259cde0e4450610b04e66",
"content_id": "9bf807409887cd4bb28a1a5f1a3ac4cf61aa6793",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11893,
"license_type": "permissive",
"max_line_length": 295,
"num_lines": 258,
"path": "/TF3DScan/TF3DScan.py",
"repo_name": "insilicolife/TF3DScan",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport pandas as pa\nimport requests, sys\nimport json \nfrom Bio.Seq import Seq\nimport os\n\n\nclass TF3DScan:\n def __init__(self,genes,PWM_directory,seqs=None):\n self.gene_names=genes\n self.PWM_dir=PWM_directory\n self.seq=None\n self.PWM=None\n self.weights=None\n self.proteins=None\n self.initialize()\n \n def initialize(self):\n self.seq=self.get_seq_by_name(self.gene_names)\n self.PWM=self.convolutional_filter_for_each_TF(self.PWM_dir)\n self.weights, self.proteins= self.get_Weights(self.PWM)\n return \n def softmax(self,x):\n e_x = np.exp(x - np.max(x))\n return (e_x / e_x.sum(axis=0))\n def convolutional_filter_for_each_TF(self,PWM_directory):\n path = PWM_directory\n #print(path)\n filelist = os.listdir(path)\n TF_kernel_PWM={}\n for file in filelist:\n TF_kernel_PWM[file.split(\"_\")[0]] = pa.read_csv(path+file, sep=\"\\t\", skiprows=[0], header=None)\n return TF_kernel_PWM\n \n def get_reverse_scaning_weights(self, weight):\n return np.flipud(weight[:,[3,2,1,0]])\n \n def get_Weights(self, filter_PWM_human):\n #forward and reverse scanning matrix with reverse complement\n #forward_and_reverse_direction_filter_list=[{k:np.dstack((filter_PWM_human[k],self.get_reverse_scaning_weights(np.array(filter_PWM_human[k]))))} for k in filter_PWM_human.keys()]\n #forward and reverse scanning with same matrix\n forward_and_reverse_direction_filter_list=[{k:np.dstack((filter_PWM_human[k],filter_PWM_human[k]))} for k in filter_PWM_human.keys()]\n forward_and_reverse_direction_filter_dict=dict(j for i in forward_and_reverse_direction_filter_list for j in i.items())\n unequefilter_shape=pa.get_dummies([filter_PWM_human[k].shape for k in filter_PWM_human])\n TF_with_common_dimmention=[{i:list(unequefilter_shape.loc[list(unequefilter_shape[i]==1),:].index)} for i in unequefilter_shape.columns]\n filterr={}\n for i in TF_with_common_dimmention:\n #print(list(i.keys()))\n aa=[list(forward_and_reverse_direction_filter_list[i].keys()) for i in 
list(i.values())[0]]\n aa=sum(aa,[])\n #print(aa)\n xx=[forward_and_reverse_direction_filter_dict[j] for j in aa]\n #print(xx)\n xxx=np.stack(xx,axis=-1)\n #xxx=xxx.reshape(xxx.shape[1],xxx.shape[2],xxx.shape[3],xxx.shape[0])\n filterr[\"-\".join(aa)]=xxx\n \n \n weights=[v for k,v in filterr.items()]\n protein_names=[k for k,v in filterr.items()]\n protein_names=[n.split(\"-\") for n in protein_names]\n \n return (weights,protein_names)\n \n def get_sequenceBy_Id(self, EnsemblID,content=\"application/json\",expand_5prime=2000, formatt=\"fasta\",\n species=\"homo_sapiens\",typee=\"genomic\"):\n server = \"http://rest.ensembl.org\"\n ext=\"/sequence/id/\"+EnsemblID+\"?expand_5prime=\"+str(expand_5prime)+\";format=\"+formatt+\";species=\"+species+\";type=\"+typee\n r = requests.get(server+ext, headers={\"Content-Type\" : content})\n _=r\n if not r.ok:\n r.raise_for_status()\n sys.exit()\n \n return(r.json()['seq'][0:int(expand_5prime)+2000])\n \n def seq_to3Darray(self, sequence):\n seq3Darray=pa.get_dummies(list(sequence))\n myseq=Seq(sequence)\n myseq=str(myseq.reverse_complement())\n reverseseq3Darray=pa.get_dummies(list(myseq))\n array3D=np.dstack((seq3Darray,reverseseq3Darray))\n return array3D\n \n def get_seq_by_name(self, target_genes):\n promoter_inhancer_sequence=list(map(self.get_sequenceBy_Id, target_genes))\n threeD_sequence=list(map(self.seq_to3Darray, promoter_inhancer_sequence))\n input_for_convolutional_scan=np.stack((threeD_sequence)).astype('float32')\n return input_for_convolutional_scan\n \n def from_2DtoSeq(self, twoD_seq):\n indToSeq={0:\"A\",1:\"C\",2:\"G\",3:\"T\"} \n seq=str(''.join([indToSeq[i] for i in np.argmax(twoD_seq, axis=1)]))\n return seq\n \n def conv_single_step(self, seq_slice, W):\n s = seq_slice*W\n # Sum over all entries of the volume s.\n Z = np.sum(s)\n return Z\n \n def conv_single_filter(self, seq,W,stridev,strideh):\n (fv, fh, n_C_prev, n_C) = W.shape\n\n m=seq.shape[0]\n pad=0\n n_H = 
int((((seq.shape[1]-fv)+(2*pad))/stridev)+1)\n n_W = int((((seq.shape[2]-fh)+(2*pad))/strideh)+1)\n Z = np.zeros((m, n_H, n_W ,n_C_prev, n_C))\n for i in range(m):\n for h in range(int(n_H)):\n vert_start = h*stridev\n vert_end = stridev*h+fv\n for w in range(int(n_W)):\n horiz_start = w*strideh\n horiz_end = strideh*w+fh\n for c in range(int(n_C)): \n a_slice_prev = seq[i,vert_start:vert_end,horiz_start:horiz_end,:]\n # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)\n for d in range(n_C_prev):\n Z[i, h, w,d, c] = self.conv_single_step(a_slice_prev[:,:,d], W[:,:,d,c])\n\n Z=self.softmax(Z) \n return Z\n \n \n def conv_total_filter(self, Weights, seqs,stridev,strideh):\n return [self.conv_single_filter(seqs,i,stridev,strideh) for i in Weights]\n \n def single_sigmoid_pool(self, motif_score):\n n=sum(motif_score>.5)\n score=[motif_score[i] for i in np.argsort(motif_score)[::-1][:n]]\n index=[j for j in np.argsort(motif_score)[::-1][:n]]\n sigmoid_pooled=dict(zip(index, score))\n sigmoid_pooled=sorted(sigmoid_pooled.items(), key=lambda x: x[1])[::-1]\n return sigmoid_pooled\n \n def total_sigmoid_pool(self, z):\n sigmoid_pooled_motifs=[]\n for i in range(z.shape[0]):\n proteins=[]\n for k in range(z.shape[4]):\n strands=[]\n for j in range(z.shape[3]):\n strands.append(self.single_sigmoid_pool(z[i,:,0,j,k]))\n proteins.append(strands)\n sigmoid_pooled_motifs.append(proteins)\n #return np.stack(sigmoid_pooled_motifs)\n return np.array(sigmoid_pooled_motifs)\n def extract_binding_sites_per_protein(self, seq, motif_start, motif_leng):\n return seq[motif_start:motif_start+motif_leng]\n \n def getScore(self, seq, weights):\n NtoInd={\"A\":0,\"C\":1,\"G\":2,\"T\":3}\n cost=0\n for i in range(len(seq)):\n cost+=weights[i,NtoInd[seq[i]]]\n return cost\n def motifs(self, seqs, mot, weights, protein_names):\n Motifs=[]\n for m in range(len(mot)):\n motifs_by_seq=[]\n for z in range(mot[m].shape[0]):\n 
motifs_by_protein=[]\n for i in range(mot[m].shape[1]):\n motifs_by_strand=[]\n for j in range(mot[m].shape[2]):\n seqq=[self.extract_binding_sites_per_protein(self.from_2DtoSeq(seqs[z,:,:,j]),l,weights[m].shape[0]) for l in list(pa.DataFrame(mot[m][z,i,j])[0])]\n\n score=[self.getScore(k,weights[m][:,:,j,i]) for k in seqq]\n #coordinate=[{p:p+weights[m]} for p in list(pa.DataFrame(mot[m][z,i,j])[0])]\n scor_mat={\"motif\":seqq, \"PWM_score\":score,\"sigmoid_score\":list(pa.DataFrame(mot[m][z,i,j])[1]), \"protein\":protein_names[m][i], \"strand\":j, \"input_Sequence\":z, \"best_threshold\":sum(np.max(weights[m][:,:,j,i], axis=1))*.80}\n motifs_by_strand.append(scor_mat)\n motifs_by_protein.append(motifs_by_strand)\n motifs_by_seq.append(motifs_by_protein)\n print(m) \n Motifs.append(np.stack(motifs_by_seq))\n return Motifs\n \n def flatten_motif(self, xc):\n mymotifs=[]\n for i in range(len(xc)):\n for j in range(xc[i].shape[0]):\n for k in range(xc[i].shape[1]):\n for z in range(xc[i].shape[2]):\n if(not pa.DataFrame(xc[i][j,k,z]).sort_values([\"PWM_score\"], ascending=[0]).loc[list(pa.DataFrame(xc[i][j,k,z]).sort_values([\"PWM_score\"], ascending=[0])[\"PWM_score\"]>pa.DataFrame(xc[i][j,k,z]).sort_values([\"PWM_score\"], ascending=[0])[\"best_threshold\"]),:].empty):\n mymotifs.append(pa.DataFrame(xc[i][j,k,z]).sort_values([\"PWM_score\"], ascending=[0]).loc[list(pa.DataFrame(xc[i][j,k,z]).sort_values([\"PWM_score\"], ascending=[0])[\"PWM_score\"]>pa.DataFrame(xc[i][j,k,z]).sort_values([\"PWM_score\"], ascending=[0])[\"best_threshold\"]),:])\n\n return pa.concat(mymotifs)\n \n def proteins_motif(self,all_bindings, list_of_prot):\n return [{i:list(all_bindings.loc[list(all_bindings[\"protein\"]==i),\"motif\"])} for i in list_of_prot]\n \n def filter_by_Sequence_and_strand(self, all_bindings, seq, strand):\n return all_bindings[(all_bindings[\"input_Sequence\"]==seq) & (all_bindings[\"strand\"]==strand)]\n \n def find_all_seq(self, st, substr, start_pos=0, 
accum=[]):\n ix = st.find(substr, start_pos)\n if ix == -1:\n return accum\n return self.find_all_seq(st, substr, start_pos=ix + 1, accum=accum + [ix])\n \n def get_coordinate(self, seq1, motif):\n return [{i:i+len(motif)} for i in self.find_all_seq(seq1, motif)]\n\n def get_multiple_coordinate(self, seq1,list_of_motifs):\n motifs_cord=[self.get_coordinate(seq1, i) for i in list_of_motifs]\n motifs_cord_list=sum(motifs_cord, [])\n result = {}\n for d in motifs_cord_list:\n result.update(d)\n return result\n \n def colorTextsingle(self, seq, k, v, color):\n seq=seq[:k]+'{}'+seq[k:v]+'{}'+seq[v:]\n seq=seq.format(color,'\\033[0m')\n return seq\n \n def SeqWithMotifs(self, seq,mots,col):\n shifter=0\n for i in range(len(sorted(mots))):\n if(i==0):\n seq=self.colorTextsingle(seq,sorted(mots)[i],mots[sorted(mots)[i]],col)\n shifter+=(len(col)+len(\"\\033[0m\"))\n #print(\"1\")\n #print(shifter)\n else:\n if(sorted(mots)[i]>sorted(mots)[i-1] and sorted(mots)[i] < mots[sorted(mots)[i-1]]):\n #print(\"yes\")\n #print(seq)\n\n seq=seq.replace(seq[mots[sorted(mots)[i-1]]+(shifter-len(\"\\033[0m\")):mots[sorted(mots)[i-1]]+shifter],\"\")\n #print(seq)\n temp=seq[:sorted(mots)[i]+shifter-len(\"\\033[0m\")]+\"\\033[0m\"\n #print(temp, seq[sorted(mots)[i]+shifter-len(\"\\033[0m\"):])\n seq=temp+seq[sorted(mots)[i]+shifter-len(\"\\033[0m\"):] \n #print(seq)\n #print(seq[sorted(mots)[i]+shifter:mots[sorted(mots)[i]]+shifter+len(col[i])])\n seq=self.colorTextsingle(seq,sorted(mots)[i]+shifter,mots[sorted(mots)[i]]+shifter, col)\n #print(seq)\n shifter+=(len(col[i])+len(\"\\033[0m\"))\n #print(i)\n else:\n seq=self.colorTextsingle(seq,sorted(mots)[i]+shifter,mots[sorted(mots)[i]]+shifter, col)\n #print(\"last\")\n shifter+=(len(col)+len(\"\\033[0m\"))\n return seq\n \n def get_motif_by_protein(self, seq, pro_mo, colors):\n col_cod=0\n for dct in pro_mo:\n for k,v in dct.items():\n coor=self.get_multiple_coordinate(seq,v)\n #print(coor)\n 
seq=self.SeqWithMotifs(seq,coor,colors[col_cod])\n col_cod+=1\n return seq"
},
{
"alpha_fraction": 0.7083333134651184,
"alphanum_fraction": 0.75,
"avg_line_length": 22,
"blob_id": "b4109ae35a5bda01b798db5982cf15f0fbdc7d9b",
"content_id": "4aa6962955552f401493cb271b71d49554c1ee55",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 24,
"license_type": "permissive",
"max_line_length": 22,
"num_lines": 1,
"path": "/TF3DScan/__init__.py",
"repo_name": "insilicolife/TF3DScan",
"src_encoding": "UTF-8",
"text": "from TF3DScan import *\n\n"
},
{
"alpha_fraction": 0.6421568393707275,
"alphanum_fraction": 0.6503267884254456,
"avg_line_length": 28.190475463867188,
"blob_id": "83df104f408d88c4816619138488510c558d4ab6",
"content_id": "24b5b9f5f712d508487624467bb7fc49aa043b1f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 612,
"license_type": "permissive",
"max_line_length": 63,
"num_lines": 21,
"path": "/setup.py",
"repo_name": "insilicolife/TF3DScan",
"src_encoding": "UTF-8",
"text": "import setuptools\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nsetuptools.setup(\n name=\"TF3Dscan\",\n version=\"0.0.1\",\n author=\"Nigatu Ayele\",\n author_email=\"[email protected]\",\n description=\"Transcription factor binding site prediction\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/pypa/example-project\",\n packages=setuptools.find_packages(),\n classifiers=(\n \"Programming Language :: Python :: 3\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n ),\n)"
},
{
"alpha_fraction": 0.8284574747085571,
"alphanum_fraction": 0.8284574747085571,
"avg_line_length": 750,
"blob_id": "41392d515ef7fdd26675a90efb90b2774310b87d",
"content_id": "f0e8906867e61510aeeae8c0c96cf4805b5c16a2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 752,
"license_type": "permissive",
"max_line_length": 750,
"num_lines": 1,
"path": "/README.md",
"repo_name": "insilicolife/TF3DScan",
"src_encoding": "UTF-8",
"text": "The interaction of transcription factor (TF) proteins with DNA at promoter and enhancer region of the genome play major role in the transcriptional gene regulation. The sequence-specific interaction of TF protein with DNA modulates the gene expression in such a way that controls cell growth and differentiation. For example, in immunology, the differentiation of naive T cell into different lineage specific effector and memory cells is controlled by the interaction of the T cell linage specific TF proteins with its promoter and enhancer regions of the lineage specific genes. Therefore, the study of the binding site of the TF protein to certain biomarkers reveal the underling gene regulation in the differentiation and development of T cells. \n"
}
] | 4 |
Kenn3Th/Physics_at_UiO | https://github.com/Kenn3Th/Physics_at_UiO | a63c0115c894d1be2c0e65a0d676975311e71795 | d94da80e8a8227a7a4c7ea9aa5602e1c7360f58b | f20c724bc6fd6d43a89bf04e807849d0d63e4bad | refs/heads/master | 2020-04-20T07:50:55.368749 | 2019-02-02T16:15:09 | 2019-02-02T16:15:09 | 168,721,083 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5750315189361572,
"alphanum_fraction": 0.6078184247016907,
"avg_line_length": 22.323530197143555,
"blob_id": "ac2ad844944f201d6ed14ad9782d93f9a8167dc5",
"content_id": "e08447ee27a686c2dd1786418ddf0fd3f7f8c996",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 793,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 34,
"path": "/INF1100/Assigments/Chapter_8/sum_4dice.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import random, sys #imports random and sys\n\ntry:\n q = float(sys.argv[1])\n b = int(sys.argv[2])\nexcept IndexError:\n print \"Provide command-line arguments for stakes 'r' and N experiments\"\n sys.exit(1)\nexcept ValueError:\n print \"Wait what?!? insert 2 numbers r and N!!\"\n sys.exit(1)\n\nr = float(sys.argv[1]); n = int(sys.argv[2]) #fetches numbers from cml\n\nM = 0\nfor i in range(n):\n s = 0\n for j in range(4):\n die = random.randint(1,6)\n s += die\n #print 'die =',die \n #print 'sum =', s\n if s < 9:\n M += r\n \nprint 'You have won',(M - n), 'Euro'\n\n\"\"\"\nTerminal> python sum_4dice.py 10 1000\nYou have won -460.0 Euro\n\nDu kan fjerne # forann print i for loopen hvis du vil kontrolere om\nterningene har 1-6 oyne og summene er korrekt\n\"\"\"\n"
},
{
"alpha_fraction": 0.4965229630470276,
"alphanum_fraction": 0.5382475852966309,
"avg_line_length": 15.340909004211426,
"blob_id": "aa0445aec8e50176f9e6bce190be3c055d8f961c",
"content_id": "6183ca4fce4e99841835ae379f4f272b0d5bc90c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 719,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 44,
"path": "/INF1100/Exercises/Chapter_5/sequence_limits.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom math import sin, pi\n\n# a)\ndef sequence_a(N):\n index_set = range(N+1)\n a = np.zeros(len(index_set))\n\n for n in index_set:\n a[n] = (7+1.0/(n+1))/(3-1.0/(n+1)**2)\n return a\n\n#print sequence_a(100)\n\n#b)\ndef sequence_b(N):\n index_set = range(N+1)\n D = np.zeros(len(index_set))\n\n for n in index_set:\n D[n] = sin(2**-n)/2**-n\n\n return D\n\n#print sequence_b(100)\n\n#c)\n\ndef sequence_c(f, x, N):\n index_set = range(N+1)\n D = np.zeros(len(index_set))\n\n for n in index_set:\n h = 2**(-n)\n D[n] = (f(x+h) -f(x))/h\n\n return D\n\nDn = sequence_c(sin, pi, 80)\n\nplt.plot(Dn,'bo')\nplt.axis([0, 80, -1.5, 1.5])\nplt.show()\n"
},
{
"alpha_fraction": 0.26207759976387024,
"alphanum_fraction": 0.3879849910736084,
"avg_line_length": 47.132530212402344,
"blob_id": "8ef9207ec5abf1deda1c80546fb9ae4296233830",
"content_id": "72f3c7fe127df8a5556f312aa7750229d6958b38",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3995,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 83,
"path": "/INF1100/Assigments/Chapter_7/Backward2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "class Diff:\n def __init__(self, f, h=1E-5):\n self.f = f\n self.h = float(h)\n\nclass Forward1(Diff):\n def __call__(self,x):\n f, h = self.f, self.h\n return (f(x*h) - f(x))/h\n\nclass Backward1(Diff):\n def __call__(self,x):\n f, h = self.f, self.h\n return (f(x) - f(x-h))/h\n\nclass Backward2(Diff):\n def __call__(self,x):\n f, h = self.f, self.h\n return (f(x-2*h) - 4*f(x-h) + 3*f(x))/(2*h)\n\nfrom math import exp\nk = 14\nt = 0\nneg_e = lambda t: -exp(t)\nexct_dt = -1\nfor i in range(k+1):\n h = 2**(-i)\n gt2 = Backward2(neg_e, h)\n gt = Backward1(neg_e, h)\n dif2 = gt2(0) - exct_dt\n dif = gt(0) - exct_dt\n print '--------------------------------------------------------------'\n print \"|k = %2.d,| g'(t) = -1,| Backward1 = %-8.5g,| Difference = %-6.5g\" %(i,gt(0), dif)\n print \"|k = %2.d,| g'(t) = -1,| Backward2 = %-8.5g,| Difference = %-6.5g\" %(i,gt2(0), dif2)\n \n\"\"\"\nTerminal> python Backward2.py \n--------------------------------------------------------------\n|k = ,| g'(t) = -1,| Backward1 = -0.63212,| Difference = 0.36788\n|k = ,| g'(t) = -1,| Backward2 = -0.83191,| Difference = 0.16809\n--------------------------------------------------------------\n|k = 1,| g'(t) = -1,| Backward1 = -0.78694,| Difference = 0.21306\n|k = 1,| g'(t) = -1,| Backward2 = -0.94176,| Difference = 0.058243\n--------------------------------------------------------------\n|k = 2,| g'(t) = -1,| Backward1 = -0.8848 ,| Difference = 0.1152\n|k = 2,| g'(t) = -1,| Backward2 = -0.98266,| Difference = 0.017345\n--------------------------------------------------------------\n|k = 3,| g'(t) = -1,| Backward1 = -0.94002,| Difference = 0.059975\n|k = 3,| g'(t) = -1,| Backward2 = -0.99525,| Difference = 0.0047473\n--------------------------------------------------------------\n|k = 4,| g'(t) = -1,| Backward1 = -0.96939,| Difference = 0.030609\n|k = 4,| g'(t) = -1,| Backward2 = -0.99876,| Difference = 0.0012428\n--------------------------------------------------------------\n|k = 
5,| g'(t) = -1,| Backward1 = -0.98454,| Difference = 0.015464\n|k = 5,| g'(t) = -1,| Backward2 = -0.99968,| Difference = 0.000318\n--------------------------------------------------------------\n|k = 6,| g'(t) = -1,| Backward1 = -0.99223,| Difference = 0.007772\n|k = 6,| g'(t) = -1,| Backward2 = -0.99992,| Difference = 8.0433e-05\n--------------------------------------------------------------\n|k = 7,| g'(t) = -1,| Backward1 = -0.9961 ,| Difference = 0.0038961\n|k = 7,| g'(t) = -1,| Backward2 = -0.99998,| Difference = 2.0226e-05\n--------------------------------------------------------------\n|k = 8,| g'(t) = -1,| Backward1 = -0.99805,| Difference = 0.0019506\n|k = 8,| g'(t) = -1,| Backward2 = -0.99999,| Difference = 5.0714e-06\n--------------------------------------------------------------\n|k = 9,| g'(t) = -1,| Backward1 = -0.99902,| Difference = 0.00097593\n|k = 9,| g'(t) = -1,| Backward2 = -1 ,| Difference = 1.2697e-06\n--------------------------------------------------------------\n|k = 10,| g'(t) = -1,| Backward1 = -0.99951,| Difference = 0.00048812\n|k = 10,| g'(t) = -1,| Backward2 = -1 ,| Difference = 3.1766e-07\n--------------------------------------------------------------\n|k = 11,| g'(t) = -1,| Backward1 = -0.99976,| Difference = 0.0002441\n|k = 11,| g'(t) = -1,| Backward2 = -1 ,| Difference = 7.9444e-08\n--------------------------------------------------------------\n|k = 12,| g'(t) = -1,| Backward1 = -0.99988,| Difference = 0.00012206\n|k = 12,| g'(t) = -1,| Backward2 = -1 ,| Difference = 1.9864e-08\n--------------------------------------------------------------\n|k = 13,| g'(t) = -1,| Backward1 = -0.99994,| Difference = 6.1033e-05\n|k = 13,| g'(t) = -1,| Backward2 = -1 ,| Difference = 4.9658e-09\n--------------------------------------------------------------\n|k = 14,| g'(t) = -1,| Backward1 = -0.99997,| Difference = 3.0517e-05\n|k = 14,| g'(t) = -1,| Backward2 = -1 ,| Difference = 1.2442e-09\n\"\"\"\n"
},
{
"alpha_fraction": 0.6402966380119324,
"alphanum_fraction": 0.7144622802734375,
"avg_line_length": 27.89285659790039,
"blob_id": "d4c1ee41d145fb5f0231fdc5e78c8aa130d8fa6f",
"content_id": "29549ee8aae91847e60f43f1d5520cc0b933b823",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 809,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 28,
"path": "/INF1100/Exercises/Chapter_1/length_convertion1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nformula:\nlength_in_unit = length_in_meter/unit\nlength_in_km = length_in_meters/1000\n\"\"\"\ninch = 2.54/100\nfoot = 12*inch\nyard = 3*foot\nbmile = 1760*yard\n\nprint '''one inch is %g meters, one foot is %g meters,\none yard is %g meters, one british mile is %g meters.''' %(inch, foot, yard, bmile)\n\nlength_in_meter = 640.0\n\nlength_in_inches = length_in_meter/inch\nlength_in_feet = length_in_meter/foot\n\nprint '''This is the vertification of my calculations.\n%g meters is %g inches and %g feet''' % (length_in_meter, length_in_inches, length_in_feet)\n\n\"\"\"\nTerminal>In [11]: run length_convertion1.py\none inch is 0.0254 meters, one foot is 0.3048 meters,\none yard is 0.9144 meters, one british mile is 1609.34 meters.\nThis is the vertification of my calculations.\n640 meters is 25196.9 inches and 2099.74 feet\n\"\"\"\n"
},
{
"alpha_fraction": 0.3304843306541443,
"alphanum_fraction": 0.4871794879436493,
"avg_line_length": 12.764705657958984,
"blob_id": "a225683096100335110b18a9d121b4125dd36488",
"content_id": "cdf7ecf529341c1242e916c25bbc45b206cb37df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 702,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 51,
"path": "/INF1100/Assigments/Chapter_2/ball_table3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#formula y(t) = v0*t - 0.5*g*t**2\nv0 = 5.0 #velocity\ng = 9.81 #gravity\nn = 5 \n\nt_stop = 2*v0/g\ndt = t_stop/n\n\nt_list = []\ny_list = []\n\nfor i in range(0,n+1):\n t = i*dt\n y = v0*t - 0.5*g*t**2\n t_list.append(t)\n y_list.append(y)\n \n# a) \nty1 = [t_list, y_list]\n\nprint 'a)'\nfor t, y in zip(ty1[0], ty1[1]):\n print '%5.2f %5.2f' %(t, y)\n\n# b)\n\nty2 = []\nfor t ,y in zip(t_list, y_list):\n ty2.append([t, y])\n \nprint 'b)'\nfor row in ty2:\n print '%5.2f %5.2f' % (row[0], row[1])\n\n\"\"\"\nTerminal>python ball_table3.py \na)\n 0.00 0.00\n 0.20 0.82\n 0.41 1.22\n 0.61 1.22\n 0.82 0.82\n 1.02 0.00\nb)\n 0.00 0.00\n 0.20 0.82\n 0.41 1.22\n 0.61 1.22\n 0.82 0.82\n 1.02 0.00\n\"\"\"\n"
},
{
"alpha_fraction": 0.5683544278144836,
"alphanum_fraction": 0.6151898503303528,
"avg_line_length": 19.256410598754883,
"blob_id": "e72bdc4d75b826665b86684746ab49caee886354",
"content_id": "77db253e9e26af89f25010cfb10e2a824638df90",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 790,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 39,
"path": "/INF1100/Assigments/Chapter_5/water_wave_velocity.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom math import sqrt, pi, tanh\n\ndef c(b):\n g = 9.81\n s = 7.9*10**(-2)\n rho = 1000\n h = 50\n return sqrt((g*b)/(2*pi)*(1 + s*(4*pi**2)/(rho*g*b**2))*tanh((2*pi*h)/b))\n\nlmdA = np.linspace(0.001, 0.1, 100)\nLmda = np.linspace(1, 2000, 100)\n\nq = np.zeros(len(lmdA)) #liten verdi av lambda\nz = np.zeros(len(Lmda)) #stor verdi av lambda\n\nfor i in xrange(len(lmdA)):\n q[i] = c(lmdA[i])\n\nfor j in xrange(len(Lmda)):\n z[j] = c(Lmda[j])\n\nplt.plot(q,\"r\")\nplt.legend(['Liten Lambda'])\nplt.title('Water wave velocity')\nplt.ylabel('m/s')\nplt.xlabel('m')\nplt.show()\n\nplt.plot(z)\nplt.legend(['Stor Lambda'])\nplt.title('Water wave velocity')\nplt.ylabel('m/s')\nplt.xlabel('m')\nplt.show()\n\"\"\"\nTerminal> python water_wave_velocity.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.46140652894973755,
"alphanum_fraction": 0.5351629257202148,
"avg_line_length": 19.821428298950195,
"blob_id": "74ac811b0a53e8ad05c26629afa2fea62bbffd3a",
"content_id": "c099cf29a931b8cdf2f46be41aff6661553a725d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 583,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 28,
"path": "/INF1100/Exercises/Chapter_5/plot_velocity_pipeflow.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#formula v(r)= (Bet/2*gam)**(1/n) *(n/(n + 1)*(R**(1+1/n)-r**(1+1/n))\n\nfrom numpy import linspace\nfrom matplotlib.pyplot import plot, show\nimport matplotlib.pyplot as plt\n\ndef v(r, n):\n R = 1; Bet = 0.02; gam = 0.02\n return ((Bet/2*gam)**(1./n))*(n/(n + 1.))*(R**(1+1./n) - r**(1+1./n))\n\n#r elemnt [0, R]\nR = 1\nBet = 0.02\ngam = 0.02\n\nn = 0.1\nr_min = 0\n\nr = linspace(r_min, R, 100)\n\nfor n in linspace(1, 0.1, 10):\n plt.plot(r, v(r, n) / v(0, n))\n plt.xlabel('radius')\n plt.ylabel('velocity')\n plt.title('Velocity profile: n = %1.2f' % n)\n \nplot(r, v(r,n))\nshow()\n"
},
{
"alpha_fraction": 0.46195653080940247,
"alphanum_fraction": 0.592391312122345,
"avg_line_length": 13.15384578704834,
"blob_id": "7dd8174e259bdd380692b184ec781373c3944bef",
"content_id": "c4c422697fb1b8b4d57ffcc313e4f3ad5a6ea67f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 184,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 13,
"path": "/INF1100/Assigments/Chapter_1/gaussian_function.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from math import sqrt, exp, pi\n\nm = 0\ns = 2.0\nx = 1.0\nf = ((1.0)/(sqrt(2*pi)*s))*exp(-0.5*((x-m)/s)**2) #Formula\n\nprint f\n\n\"\"\"\nTerminal>python gaussian_function.py \n0.176032663382\n\"\"\"\n"
},
{
"alpha_fraction": 0.6239316463470459,
"alphanum_fraction": 0.6810897588729858,
"avg_line_length": 33.66666793823242,
"blob_id": "28728cd5bf70b4d2632bbe9ccda2f1d32b735843",
"content_id": "24767ff415b534a60c6154c332835073abf0366c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1872,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 54,
"path": "/FYS2150/Lengde, hastighet og akselerasjon/opg13.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom linjetilpasning import*\n\n#generell linje\ndef Gen_linje(x,y):\n x = np.array(x); y = np.array(y)\n x_ = np.mean(x); y_ = np.mean(y) #gjennomsnitt\n x2 = np.sum(np.square(x)); y2 = np.sum(np.square(y))\n D = x2 - 1./(len(x))*(np.sum(x))**2\n E = np.sum(x*y)-1./(len(x))*(np.sum(x)*np.sum(y))\n F = y2 - 1./(len(y))*(np.sum(y))**2\n m = float(E)/D #stigningstallet\n c = y_ - m*x_ #konstantleddet\n dm = np.sqrt(1.0/(len(x)-2)*((D*F-E**2)/D**2))\n dc = np.sqrt(1.0/(len(x)-2)*(D/float(len(x))+x_**2)*((D*F-E**2)/D**2))\n return c,m,dc,dm\n#linje gjennom origo\ndef lin_gjen_origo(x,y):\n x = np.array(x); y = np.array(y)\n x2 = np.sum(np.square(x)); y2 = np.sum(np.square(y))\n xy = np.sum(x*y)\n m = xy/float(x2) #stigningstall\n dm = 1.0/(len(x)-1)*((x2*y2- xy**2)/x2**2) #usikkerhet i stigningtall\n return m,dm\n\nkonst,stig,ukonst,ustig = Gen_linje(x,y)\n\nprint m,c \nprint stig,konst\nprint ustig,ukonst\n\n\"\"\"\nJeg skrev om MATLAB skriptet til python kode.\nom mulig laster jeg den opp sammen med denne koden (linjetilpasning.py)\nhvis du prover aa kjore koden er den viktig. Om det ikke gaar send en mail\nskal jeg sende den. Har lagt ved et kjore eksempel under.\n\nHer er m og c konstantledd og stigningstallet som linjetilpasning.py\ngir meg, (med andre ord polyfit)\nstig og konst er stigningstallet og konstantleddet koden min gir meg.\nI tillegg er ukonst og ustig usikkerheten i konstantleddet og\nstigningstallet.\n\nkjore eksempel:\nterminal << python opg13.py \n4.8175370155 3.89131005522\n4.8175370155 3.89131005522\n0.321947775641 0.376371499977\n\nsom vi ser er min funksjons konstantled og stigningstall identisk til\npolyfit sin. De to siste tallene er min kalkulerte usikkerhet i\nstigningstall og konstantledd.\nJeg har tatt utgangspunkt i formelene paa side 39 i squires til funksjonen min\n\"\"\"\n"
},
{
"alpha_fraction": 0.5648994445800781,
"alphanum_fraction": 0.6005484461784363,
"avg_line_length": 25.682926177978516,
"blob_id": "b56d580895701097704e5df43506d147e4a8e24a",
"content_id": "af01c0863caa6787b98518d7e3b9c0307a7ce29e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1094,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 41,
"path": "/FYS2130/Project/oppg4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom classRungeKutta4 import*\n\n#konstanter\nm = 0.5 #masse [kg]\nk = 1 #fjaerkraft [N/m]\nT = 200.0 #tid [s]\ndt = 1e-2 #tidssteg\nN = int(T/dt) #antall tidssteg\nt = np.linspace(0,T,N)\nF = 0.7 #paatrukket kraft [N]\nb = 0 #motstand [kg/s]\nomega = np.sqrt(k/m)\n#omegaD = (13.0/8)*omega\nomegaD = (2.0/(np.sqrt(5)-1))*omega\n#array\nx = np.zeros(N) #posisjon\nv = np.zeros(N) #hastighet\n#numerisk losning\n#initialbetingelser\nx[0] = 2.0 \nv[0] = 0.0\npendulum = DrivenPendulum(F,omegaD,k,b,m)\nsolver = RungeKutta4(pendulum)\nfor i in xrange(N-1):\n x[i+1],v[i+1] = solver(x[i],v[i],t[i],dt)\n \n#analytisk losning\nxa = (2-(F/(k-m*omegaD**2)))*np.cos(omega*t) + \\\n (F/(k-m*omegaD**2))*np.cos(omegaD*t)\nva = -omega*(2-(F/(k-m*omegaD**2)))*np.sin(omega*t) - \\\n (omegaD*F/(k-m*omegaD**2))*np.sin(omegaD*t)\n \nplt.plot(t,x)\n#plt.plot(t,xa)\n#plt.legend(['Numerisk','Analytisk'])\nplt.title('Posisjon mot hastighet')\nplt.ylabel('Posisjon [m]')\nplt.xlabel('Tid [s]')\nplt.savefig('2numvsan.png')\nplt.show()\n"
},
{
"alpha_fraction": 0.49791955947875977,
"alphanum_fraction": 0.5658807158470154,
"avg_line_length": 17,
"blob_id": "f77078285a846b65ea97967c0e3e5e7cfc94b5bf",
"content_id": "390f4b6fe21f44817dd9bea277699c3745d63e4a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 721,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 40,
"path": "/INF1100/Exercises/Chapter_3/f2c.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\n#formula C = (5.0/9.0)*(F - 32)\n#formula F = (9.0/5.0)*C + 32\n\ndef F2C(f):\n return (5.0/9.0)*(f - 32)\n\nCdegrees = []\n\nfor f in range(50, 101, 10):\n f = F2C(f)\n Cdegrees.append(f)\n\ndef C2F(c):\n return (9.0/5.0)*c + 32\n\nFdegrees = []\n\nfor c in Cdegrees:\n c = C2F(c)\n Fdegrees.append(c)\n\nprint ' From Farenheit to celcius'\n\nfor C, F in zip(Cdegrees, Fdegrees):\n print 'Farenheit = %-10g Celcius = %-10g' %(F, C)\n\nprint ' From Celcius to farenheit'\n\nFarenheit = []\nfor c in range(0, 40, 5):\n c = C2F(c)\n Farenheit.append(c)\n\nCelcius = []\nfor f in Farenheit:\n f = F2C(f)\n Celcius.append(f)\n\nfor Q, B in zip(Celcius, Farenheit):\n print 'Celcius = %-10g Farenheit = %-10g' %(Q, B)\n"
},
{
"alpha_fraction": 0.4326076805591583,
"alphanum_fraction": 0.48633626103401184,
"avg_line_length": 24.104650497436523,
"blob_id": "fb2a4f9f3845f97d6647b55c379d7b2397486b4b",
"content_id": "9914f52a9c5642fe58e154609726ebf33b4f54f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2159,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 86,
"path": "/AST2000/Oblig_C/1C44.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n\ndef stjerne(filename):\n lam0 = 656.3\n c = 3*10**8\n \n t = []\n l = []\n f = []\n with open(filename,'r') as infile:\n for line in infile:\n dat = line.split()\n t.append(float(dat[0]))\n l.append(float(dat[1]))\n f.append(float(dat[2]))\n tid = asarray(t)\n lam = asarray(l)\n flux = asarray(f)\n \n vr = ((lam-lam0)/lam0)*c #peculiar velocity\n mvr = mean(vr) #mean velocityr\n V = vr - mvr #total hastighet\n\n starnr = filename[4]\n \"\"\"\n #plot\n subplot(2,1,1)\n title('Stjerne' + starnr)\n plot(tid,V, color = 'purple')\n ylabel('m/s')\n subplot(2,1,2)\n plot(tid,flux, color='orange')\n xlabel('Dager')\n ylabel('Realtiv fluks')\n savefig( starnr + '.png')\n show()\n \"\"\"\n\n return flux, V\n\nv_r = lambda t,t0,P,vr: vr*cos((2*pi)/P*(t-t0))\n\nV,fl = stjerne('star4_1.34.txt')\nT = 20\nt_0 = linspace(3500,4000,T)\nV_r = linspace(40,60,T)\nP = linspace(3500,8500,T)\n\ndef delta(t0,p,vr):\n de = []\n d_min = 1\n I = 0\n J = 0\n K = 0\n for i in range(len(t_0)):\n for j in range(len(P)):\n for k in range(len(V_r)):\n d = sum(V[i]-v_r(T,t_0[i],p[j],V_r[k]))**2\n de.append(d)\n if d < d_min:\n d_min = d\n I = i\n J = j\n K = k\n \n delta = asarray(de)\n return delta, I, J, K\n\nSm =1.9889*10**(30)\nG = 6.67*10**(-11)\nd = (24*3600)\njm = 1.898*10**(27)\nms = 1.34*Sm\nmp = lambda v,ms,p: ((ms**(2./3))*v*(p**(1./3)))/((2*pi*G)**(1./3))\n \nD, I, J, K = delta(t_0,P,V_r)\nDelta = min(D)\nprint 'Delta = %.14f i = %i j = %i k = %i'%(Delta,I,J,K)\nprint 't_0 = %.2f P = %.2f v_r = %.2f' %(t_0[I],P[J],V_r[K])\n\np_m_k = mp(V_r[K],ms,P[J]*d)\np_m_g = mp(55,ms,5000*d)\n\nprint 'massen til planeten kalkulert ved kun aa studere grafen = %.2f Jupiter masse' %(p_m_g/jm)\nprint 'massen til planeten kalkulert med minste kvadratiske metode = %.2f Jupiter masse' % (p_m_k/jm)\nprint 'forskjellen mellom bye-eye og minste kvadratisk metode = %.2f'%((p_m_g/jm)-(p_m_k/jm))\n"
},
{
"alpha_fraction": 0.3656884729862213,
"alphanum_fraction": 0.5011286735534668,
"avg_line_length": 12.8125,
"blob_id": "75ef32f778c073237a2494bd58170f2a1a837e6c",
"content_id": "afe9a9bcc565cba7b1f12ac560aa4a9ddd6fe0eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 443,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 32,
"path": "/INF1100/Assigments/Chapter_2/ball_table2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\n#formula y(t) = v0*t - 0.5*g*t**2\nv0 = 5.0 #velocity\ng = 9.81 #gravity\nn = 5 \n\nt_stop = 2*v0/g\ndt = t_stop/n\n\nt_list = []\ny_list=[]\n\n#for loop\n\nfor i in range(0,n+1):\n t = i*dt\n y = v0*t - 0.5*g*t**2\n t_list.append(t)\n y_list.append(y)\n\nfor y, t in zip(y_list, t_list):\n print '%5.2f %5.2f' % (t, y)\n\n\n\"\"\"\nTerminal>python ball_table2.py \n 0.00 0.00\n 0.20 0.82\n 0.41 1.22\n 0.61 1.22\n 0.82 0.82\n 1.02 0.00\n \"\"\"\n"
},
{
"alpha_fraction": 0.5518606305122375,
"alphanum_fraction": 0.6294536590576172,
"avg_line_length": 24.775510787963867,
"blob_id": "8f051fd773caea620ae2a8a019bc3303fd780000",
"content_id": "f8b31aa8e34aa31ad952e923939ff84298db7194",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1263,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 49,
"path": "/FYS2130/Project/plotopg8.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\nx,v,m,D = np.load('opg8.npy')\nt,psi = np.load('opg8t.npy')\nxc = -2.5e-3\nD = np.array([np.nonzero(D[i]) for i in xrange(len(D))])\nD = np.array([D[i][0][-52:-1] for i in xrange(len(D))])\nD = np.diff(D)\n#psi = 6.0e-4\nplt.subplot(2,1,1)\nplt.plot(t,x[25]*1e3)\nplt.title('$\\psi = 6*10^{-4}$')\nplt.ylabel('Posisjon [mm]')\nplt.axis([0,20,0,-2.5])\nplt.gca().invert_yaxis()\n#psi = 6.3e-4\nplt.subplot(2,1,2)\nplt.plot(t,x[40]*1e3)\nplt.title('$\\psi = 6.3*10^{-4}$')\nplt.xlabel('Tid [s]')\nplt.ylabel('Posisjon [mm]')\nplt.axis([0,20,0,-2.5])\nplt.gca().invert_yaxis()\nplt.savefig('psier.png')\n#psi = 6.5e-4\nplt.figure()\nplt.subplot(2,1,1)\nplt.plot(t,x[50]*1e3)\nplt.title('$\\psi = 6.5*10^{-4}$')\nplt.ylabel('Posisjon [mm]')\nplt.axis([0,20,0,-2.5])\nplt.gca().invert_yaxis()\n#psi = 7.3e-4\nplt.subplot(2,1,2)\nplt.plot(t,x[90]*1e3)\nplt.title('$\\psi = 7.3*10^{-4}$')\nplt.xlabel('Tid [s]')\nplt.ylabel('Posisjon [mm]')\nplt.axis([0,20,0,-2.5])\nplt.gca().invert_yaxis()\nplt.savefig('psier1.png')\n#psi plottet mot tidsforskjellen mellom hver draape\nplt.figure()\nplt.plot(psi,D/2000.0,'.')\nplt.xlabel('$\\psi$[kg/s]')\nplt.ylabel('Tidsforskjell [s]')\nplt.title('$\\psi$ mot tidsforskjell naar draapen faller')\nplt.savefig('psivsdt.png')\nplt.show()\n"
},
{
"alpha_fraction": 0.5974025726318359,
"alphanum_fraction": 0.6658795475959778,
"avg_line_length": 43.578948974609375,
"blob_id": "dd58eca5cb62d504f2198cdd6bbfa097cb5747a1",
"content_id": "d52919c41a85173c00dfc93c7a3652a2a1b2d452",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 847,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 19,
"path": "/MAT-INF1100/Oblig_1/oblig1_opg3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from random import random #implementerer falsk/uekte-tilfeldige(pseudo-random) nummer\n\nantfeil = 0; N = 10000\nx0 = y0 = z0 = 0.0\nfeil1 = feil2 = 0.0\n\nfor i in range(N):\n x = random(); y = random(); #random() genererer et tilfeldig flyttall mellom [0.0,1.0)\n res1 = (x + y)*(x - y) #legger sammen og trekker fra de tilfeldige siffrene og multipliserer de\n res2 = x**2 - y**2 #kvadrerer begge de tilfeldige tallene ogsaa trekker i fra\n\n if res1 != res2: #hvis res1 ikke er det samme som res2\n antfeil += 1 #legger til 1 i antfeil\n x0 = x; y0 = y #x blir x0 og y blir y0\n feil1 = res1 #res1 blir feil1\n feil2 = res2 #res2 blir feil2\n\nprint (100. * antfeil/N) #100. multiplisert med antfeil/10000\nprint (x0, y0, feil1 - feil2) #skriver ut hva x og y var sist gang de var feil og differansen mellom res1 og res2\n"
},
{
"alpha_fraction": 0.4364686608314514,
"alphanum_fraction": 0.5627062916755676,
"avg_line_length": 30.894737243652344,
"blob_id": "e98192ace908444301ccaaaa731943854e3b95ae",
"content_id": "d5709a72e2adccab0b43e9d591be6a653b4dce80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1212,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 38,
"path": "/MEK1100/Oblig_2/oblig2f.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from oblig2 import*\n#Circulation\ndef sirkulasjon(xi,yi,xj,yj):\n dt = 0.5\n side1 = sum(u[yi,xi:xj+1]*dt)\n side2 = sum(v[yi:yj+1,xj]*dt)\n side3 = -sum(u[yj,xi:xj+1]*dt)\n side4 = -sum(v[yi:yj+1,xi]*dt)\n sirk = side1 + side2 + side3 + side4\n print 'Bunn: %.5f' %(side1)\n print 'Hoyre side: %.5f' %(side2)\n print 'Topp: %.5f' %(side3)\n print 'Venstre side: %.5f' %(side4)\n return 'Sirkulasjon = %.5f' %(sirk)\n#Stokes' theorem\ndef stokes(x1,y1,x2,y2):\n dvx = gradient(v,0.5,axis=1)\n duy = gradient(u,0.5,axis=0)\n nXv = dvx - duy\n q = sum(nXv[y1:y2+1,x1:x2+1])*0.25\n return 'Stokes = %.5f'%(q)\n#Printing the information\nprint '----------------------------'\nprint 'Rektangel 1'\nprint sirkulasjon(34,159,69,169)\nprint stokes(34,159,69,169)\nprint 'Differanse = %.5f'%(2695.51409-2621.55870)\nprint '----------------------------'\nprint 'Rektangel 2'\nprint sirkulasjon(34,85,69,99)\nprint stokes(34,85,69,99)\nprint 'Differanse = %.5f'%(-60635.94012--61095.33233)\nprint '----------------------------'\nprint 'Rektangel 3'\nprint sirkulasjon(34,49,69,59)\nprint stokes(34,49,69,59)\nprint 'Differanse = %.5f'%(9.52102--12.21433)\nprint '----------------------------'\n"
},
{
"alpha_fraction": 0.5109890103340149,
"alphanum_fraction": 0.598901093006134,
"avg_line_length": 11.133333206176758,
"blob_id": "bd0207bb2483efeee4eaa288a588f05c4ea0d64a",
"content_id": "4b22a0998bc7a1413d589a87e31e74df9d72d57f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 182,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 15,
"path": "/INF1100/Exercises/Chapter_5/plot_ball1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import *\nfrom matplotlib.pyplot import *\n\nv0 = 10\ng = 9.81\n\nt = linspace(0, 2*v0/g, 100)\n\ny = v0*t - 0.5*g*t**2\n\nplot(t,y)\nxlabel('time (s)')\nylabel('heigth (m)')\n\nshow()\n"
},
{
"alpha_fraction": 0.482910692691803,
"alphanum_fraction": 0.532524824142456,
"avg_line_length": 22.256410598754883,
"blob_id": "8c9cafb0f952a4b297162381bca207a5e95d9e44",
"content_id": "6912cb341eaa1d6b085b40374331753404be60e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 907,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 39,
"path": "/FYS-MEK1110/Oblig_3/t.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom seaborn import*\n\n#variable\nk = 10.0; m = 0.1; g = 9.81 #masse,fjaer- og gravitasjon-konstant\nv0 = 0.1; b = 0.1; u = 0.1\ntime = 2.0; dt = 1/1000.0 # tid og tidssteg\nmud = 0.3; mus = 0.6 # friksjons konstanter\nN = m*g #Normalkraften\n#funksjon\nFf = lambda t,x: k*(u*t-x) #fjaerkraft\n#Euler-cromer\nn = int(round(time/dt))\nt = linspace(0,2,n)\nxi = zeros(n)\nv = zeros(n)\na = zeros(n)\nfor i in range(n-1):\n tol = 1E-2\n if abs(v[i]) <= tol:\n if Ff(t[i],xi[i]) <= mus*N:\n F = 0\n else:\n F = Ff(t[i],xi[i]) - mud*N\n else:\n F = Ff(t[i],xi[i]) - mud*N\n a[i] = F/m\n v[i+1] = v[i] + a[i]*dt\n xi[i+1] = xi[i] + v[i+1]*dt\n\n #plott\nsubplot(2,1,1)\nplot(t,xi)\ntitle('Kloss'); ylabel('Posisjon [m]')\nsubplot(2,1,2)\nplot(t,Ff(t,xi))\ntitle('fjaerkraft');xlabel('tid [s]'); ylabel(r'N [$\\frac{kgm}{s^2}$]')\nsavefig('10N.png') \nshow()\n"
},
{
"alpha_fraction": 0.675000011920929,
"alphanum_fraction": 0.690625011920929,
"avg_line_length": 19,
"blob_id": "6dc47f5bfba7777a8f31f78b26e3308acef5e631",
"content_id": "90d4f1e1701a716b5622e7317712141fc095db11",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 320,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 16,
"path": "/MEK1100/Oblig_1/oblig3b.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom seaborn import*\n\n#variable\nn = linspace(-pi/2,pi/2,21)\nY,X = meshgrid(n,n,indexing='ij')\n#funksjoner\nvx = cos(X)*sin(Y)\nvy = -sin(X)*cos(Y)\n#plotting\nstreamplot(X,Y,vx,vy)\nxlabel('x-aksen')\nylabel('y-aksen')\ntitle('Plott av stromlinjene rundt origo langs x- og y-aksen')\nsavefig('3c.png')\nshow()\n"
},
{
"alpha_fraction": 0.48207172751426697,
"alphanum_fraction": 0.5936254858970642,
"avg_line_length": 20.514286041259766,
"blob_id": "42971c5e7648fbc49ffc6b92923d64e173802472",
"content_id": "c0960929d589ac69e8d3ce92916cd46c010ba906",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 753,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 35,
"path": "/FYS1120/LAB/oppgave1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import functions as fx\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nB= np.array([18., 66., 114, 164., 218., 272.])/1e3\nVh= np.array([4.5, 14., 22., 31., 40., 50.])/1e3\n\nf= fx.least_squares_functions((B, Vh), 'x')\nx2= np.linspace(B[0], B[-1], 1000)\nplt.plot(B, Vh, 'rx')\nplt.plot(x2, f(x2))\nplt.xlabel('$B$ [T]', size=15)\nplt.ylabel('$V_h$ [V]', size=15)\nplt.xlim(B[0], B[-1])\nplt.legend([u'Maaleresultater','Interpolasjon'], prop={'size': 15})\nplt.title('Hall spenning $V_h$ som en funksjon av magnetiske feltet $B$', size=20)\na= (f(x2[-1])-f(x2[0]))/(x2[-1]- x2[0])\nprint('Stigningstall:%g'%(a))\nplt.show()\n\n'''1.2'''\nd= 1e-3\nI= .03\nRh= a*(d/I)\nprint(Rh) #Rh\n\nN= 1./(Rh*1.6e-19)\nprint(N)\n\nb= 0.01\nd= 0.001\nq= 1.6e-19\nI= 0.03\n\nprint(I/(b*d*N*q))\n"
},
{
"alpha_fraction": 0.7658536434173584,
"alphanum_fraction": 0.795121967792511,
"avg_line_length": 67.33333587646484,
"blob_id": "c3716b072e1b86c3f3917eadc3d41b41545baac2",
"content_id": "87bf51a5d1d5a435c189a62bd10a9791b8e3093d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 205,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 3,
"path": "/FYS2150/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course I learnt how to use eksperimental physics as a scientific method. \nI wrote 6 scientific rapports that is shown in each folder.\nSyllabus: Squires, G.L(2001) Practical physics (4th. edition) Cambridge\n"
},
{
"alpha_fraction": 0.5845959782600403,
"alphanum_fraction": 0.5997474789619446,
"avg_line_length": 15.85106372833252,
"blob_id": "034638c9fffca5b3246daffa0f79a08490a9e560",
"content_id": "942653d27d19b6aeb73616b991f59daa6b4de81c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 792,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 47,
"path": "/INF1100/Assigments/Chapter_5/position2velocity.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\nb = []\nc = []\n\ninfile = open('pos.dat', 'r')\ns = float(infile.readline())\nfor line in infile:\n dig = line.split()\n b.append(float(dig[0]))\n c.append(float(dig[1]))\ninfile.close()\n\nx = np.array(b)\ny = np.array(c)\n\nplt.plot(x,y)\nplt.title('2d plot of position')\nplt.xlabel('X')\nplt.ylabel('Y')\nplt.show()\n\nt = np.linspace(0, s, len(x))\n\nvx = np.zeros(len(x))\nfor i in xrange(len(x)):\n vx[i] = (x[i] - x[i-1])/s\n\nvy = np.zeros(len(y))\nfor i in xrange(len(y)):\n vy[i] = (y[i] - y[i-1])/s\n\nplt.subplot(2,1,1)\nplt.plot(t, vx)\nplt.ylabel('v(x)')\nplt.title('Velocity in x-direction')\nplt.subplot(2,1,2)\nplt.plot(t, vy, 'r') \nplt.xlabel('time')\nplt.ylabel('v(y)')\nplt.title('Velocity in y-direction')\nplt.show()\n\n\"\"\"\nTerminal>\n\"\"\"\n"
},
{
"alpha_fraction": 0.548638105392456,
"alphanum_fraction": 0.6000000238418579,
"avg_line_length": 25.75,
"blob_id": "4a9e21fc99544b85a9687e6f706136a145b1a5a7",
"content_id": "3dc464d87de0e1f9c8ff208e1080f1b747665aa0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1285,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 48,
"path": "/FYS1120/Oblig_2/opg2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom seaborn import*\n\ne = -1.6*10**(-19) #elektronladning\nme = 9.11*10**(-31) #elektronmasse\n\nT = 30*10**(-12) #fra t0 til T\ndt = 10**(-15) #tidssteg\nN = int(T/float(dt)) #antall tidssteg\nr = np.zeros((3,N)) #posisjonsvektor\nv = np.zeros_like(r) #hastighetsvektor\nt = np.linspace(0,T-dt,N)#array med likt fordelt tid dt\n#initialverdier\nv[:,0] = (10*10**3,0,0)\n\nB = np.array((0,0,2)) #Magnetfelt \n\n#Euler-Cromer\nfor i in xrange(N-1):\n Fb = e*(np.cross(v[:,i],B))\n a = Fb/me\n v[:,i+1] = v[:,i] + a*dt\n r[:,i+1] = r[:,i] + v[:,i+1]*dt\n#plott\nplt.subplot(2,1,1)\nplt.plot(t,r[0,:],t,r[1,:],t,r[2,:])\nplt.legend(['rx','ry','rz'])\nplt.title('Posisjons graf')\nplt.ylabel('Posisjon [m]')\nplt.subplot(2,1,2)\nplt.plot(t,v[0,:],t,v[1,:],t,v[2,:])\nplt.legend(['vx','vy','vz'])\nplt.title('Hastighets graf')\nplt.ylabel('Hastighet [m/s]')\nplt.xlabel('tid [s]')\nplt.savefig('opg2.png')\n\n#3d-plott\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot(r[0,:], r[1,:], r[2,:],'-', label='Elektron akselerasjon')\nax.set_xlabel('x-posisjon [m]')\nax.set_ylabel('y-posisjon [m]')\nax.set_zlabel('z-posisjon [m]')\nplt.savefig('3dopg2.png')\nplt.show()\n\n"
},
{
"alpha_fraction": 0.54347825050354,
"alphanum_fraction": 0.554347813129425,
"avg_line_length": 17.399999618530273,
"blob_id": "6a428603c8ff786a89cd0dc05b91edf59e3333b8",
"content_id": "35579b7812add8e5ca0b0e4e3dbd983f9fa9a6c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 92,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 5,
"path": "/INF1100/Exercises/Chapter_4/objects_qa.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "for i in range(5):\n\n inp = raw_input(\"object: \")\n inp = eval(inp)\n print type(inp)\n"
},
{
"alpha_fraction": 0.43108683824539185,
"alphanum_fraction": 0.5027322173118591,
"avg_line_length": 20.376623153686523,
"blob_id": "b962887da921f87dd1f8c3691ba1d68b7f38ca8d",
"content_id": "8915aa54c057ad0d3097868270022a6f16d2e4ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1647,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 77,
"path": "/FYS2130/Oblig_2/RungeKutta4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\n#akselerasjon\ndef diffEq(x,v,t):\n a = -float(b)/m*v - float(k)/m*x\n return a\n#Runge-Kutta\ndef rk4(x0,v0,t0):\n a1 = diffEq(x0,v0,t0)\n v1 = v0\n xHalv1 = x0 + v1 * dt/2.0\n vHalv1 = v0 + a1 * dt/2.0\n a2 = diffEq(xHalv1,vHalv1,t0+dt/2.0)\n v2 = vHalv1\n xHalv2 = x0 + v2 * dt/2.0\n vHalv2 = v0 + a2 * dt/2.0\n a3 = diffEq(xHalv2,vHalv2,t0+dt/2.0)\n v3 = vHalv2\n xEnd = x0 + v3 * dt\n vEnd = v0 + a3 * dt\n a4 = diffEq(xEnd,vEnd,t0 + dt)\n v4 = vEnd\n aMid = 1.0/6.0 * (a1 + 2*a2 + 2*a3 + a4)\n vMid = 1.0/6.0 * (v1 + 2*v2 + 2*v3 + v4)\n xEnd = x0 + vMid * dt\n vEnd = v0 + aMid * dt\n return xEnd, vEnd\n\n\n\nif __name__ == '__main__':\n\n #konstanter\n m = 0.1 #[kg]\n k = 10 #[N/m]\n b = 0.1 #[kg/s]\n N = 10000\n\n #arrays\n z = np.zeros(N)\n v = np.zeros(N)\n t = np.linspace(0, 10, N)\n\n #initial betingelser\n z[0] = 0.1 #[m]\n v[0] = 0 #[m/s]\n dt = t[1]-t[0] #tidssteg\n\n #iterasjoner i Runge-Kutta 4\n for i in range(N-1):\n z[i+1], v[i+1] = rk4(z[i],v[i],t[i])\n\n #analytisk losning\n A = 0.1\n omega = (np.sqrt(399)/2.0)\n #Funksjon for det analytiske uttrykket for z(t)\n def y(t):\n y = np.exp(-0.5*t)*A*np.cos(omega*t)\n return y\n an = np.zeros(N)\n an[0] = A\n for i in range(N-1):\n an[i+1] = y(t[i])\n\n\n \n #plott\n plt.plot(t,z)\n plt.title('Fjaer pendel')\n plt.xlabel('Tid[s]')\n plt.ylabel('Posisjon[m]')\n #plt.savefig('400.png')\n\n plt.plot(t,an)\n plt.legend(['RK4', 'analytisk'])\n #plt.savefig('AnalytiskmotRK4.png')\n plt.show()\n\n"
},
{
"alpha_fraction": 0.4253731369972229,
"alphanum_fraction": 0.513059675693512,
"avg_line_length": 18.851852416992188,
"blob_id": "5437a54cfda9feb5c1dfdf20084b89117da14c37",
"content_id": "75311193de326923839531a8f4929bd2e93a69ce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 536,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 27,
"path": "/INF1100/Assigments/Chapter_7/Line.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "class Line:\n\n def __init__(self,p1,p2):\n self.p1, self.p2 = p1,p2\n\n def value(self):\n x0, y0 = self.p1[0], self.p1[1] \n x1, y1 = self.p2[0], self.p2[1] \n a = (y1 - y0)/float(x1-x0); b = y0*a*x0\n return b\n\ndef test_Line():\n comp = Line((1.5,3.4), (4,1.1)).value()\n expect = -4.692\n tol = 1e-15\n success = abs(comp - expect) < tol\n msg = 'it did not go as planned'\n assert success, msg\n\ntest_Line()\n\nprint Line((3, 5.5), (5, 6.5)).value()\n\n\"\"\"\nTerminal> python Line.py\n8.25\n\"\"\"\n"
},
{
"alpha_fraction": 0.595818817615509,
"alphanum_fraction": 0.6585366129875183,
"avg_line_length": 18.133333206176758,
"blob_id": "d6d47c6d9c7704f2730377dd050fda9cf313b00b",
"content_id": "12d4ddfffc37154fb4880a4ea84d390c31384195",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 287,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 15,
"path": "/FYS2130/Oblig_3/opg8.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom numpy.fft import fft\nN = 500.0\nx = 2*np.pi\nt = np.linspace(0,10,N)\nsignal = np.sin(x*t)+np.cos(x*t)\nFFT = fft(signal)/N\nforste = FFT[0]\ngjen = np.sum(signal)/N\ndiff = gjen - forste\nprint gjen, forste\n\"\"\"\nterminal >>\n0.002 (0.002+0j)\n\"\"\"\n"
},
{
"alpha_fraction": 0.6388888955116272,
"alphanum_fraction": 0.6388888955116272,
"avg_line_length": 15,
"blob_id": "5e1ef1ada692051bad4c5933942dd02512e191ab",
"content_id": "d72ca6003459f6e5358a359d7023c8640e300236",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 144,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 9,
"path": "/INF1100/Exercises/Chapter_1/hello_world.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "kh = 'Hello, World!'\nprint kh\nprint 'Oh no... Is it you again!'\n\n\"\"\"\nTerminal> python hello_world.py\nHello, World!\nOh no... Is it you again\n\"\"\"\n"
},
{
"alpha_fraction": 0.43352600932121277,
"alphanum_fraction": 0.5626204013824463,
"avg_line_length": 16.299999237060547,
"blob_id": "e7d2ebb0a4d26f91c4a88806ec920285084a23cc",
"content_id": "5e31058b0e4f08f927003c1ed0ebf6b2664871c0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 519,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 30,
"path": "/MAT1120/Oblig1/opg3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n\nTid = 20 #Dager\nx = zeros((Tid,3,1)) #Xn-vektor\nP = array((\n [0.4,0.3,0.2],\n [0.5,0.5,0.2],\n [0.1,0.2,0.6]))\n#Likevekts-vektor\nq = array((\n [16/53.0],\n [22/53.0],\n [15/53.0]))\n#Initialbetingelser\nx[0] = array((\n [2*10**4],\n [2.5*10**4],\n [8000]))\n \nfor i in range(Tid-1):\n x[i+1] = dot(P,x[i]) #matrise multiplikasjon\n\nprint 'Dag 4'\nprint x[3]\nprint 'Dag 10'\nprint x[9]\nprint 'Dag 20'\nprint x[19]\nprint'Analytisk svar. likevekts-vektor*befolkning'\nprint q * 53000\n"
},
{
"alpha_fraction": 0.5093167424201965,
"alphanum_fraction": 0.5341615080833435,
"avg_line_length": 19.125,
"blob_id": "d91cf1af2c6a5a1fe3997ab597a5d92520496ae4",
"content_id": "e5b6d43ca5a7b1af1fda88307b591db9ac487b4b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 161,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 8,
"path": "/MEK1100/Oblig_1/velfield.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n\ndef hastighet(n):\n t = linspace(-0.5*pi,0.5*pi,n)\n x,y = meshgrid(t,t)\n u = cos(x)*sin(y)\n v = -sin(x)*cos(y)\n return x,y,u,v\n"
},
{
"alpha_fraction": 0.43421053886413574,
"alphanum_fraction": 0.5232198238372803,
"avg_line_length": 20.180328369140625,
"blob_id": "ed5c571b105230aa15dd2017a259412ca76a783a",
"content_id": "a12ae74709ebf30617790df4eb862677c393d013",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1292,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 61,
"path": "/AST2000/Oblig_C/C4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import *\n\ndef stjerne(filename):\n lam0 = 656.3\n c = 3*10**8\n \n t = []\n l = []\n f = []\n with open(filename,'r') as infile:\n for line in infile:\n dat = line.split()\n t.append(float(dat[0]))\n l.append(float(dat[1]))\n f.append(float(dat[2]))\n tid = asarray(t)\n lam = asarray(l)\n flux = asarray(f)\n \n vr = ((lam-lam0)/lam0)*c #peculiar velocity\n mvr = mean(vr) #mean velocityr\n V = vr - mvr #total hastighet\n\n starnr = float(filename[4]) + 1\n\n #plot\n subplot(2,1,1)\n title('Stjerne %d' %starnr)\n plot(tid,V, color = 'purple')\n ylabel('m/s')\n subplot(2,1,2)\n plot(tid,flux, color='orange')\n xlabel('Dager')\n ylabel('Realtiv fluks')\n savefig( 'stjerne%d.png'%starnr)\n show()\n\n return flux, V\n\nstjerne('star0_1.05.txt')\nstjerne('star1_6.20.txt')\nstjerne('star2_1.51.txt')\nstjerne('star3_1.21.txt')\nstjerne('star4_1.34.txt')\n\nSm =1.9889*10**(30)\nG = 6.67*10**(-11)\nd = (24*3600)\njm = 1.898*10**(27)\nmp = lambda v,ms,p: ((ms**(2./3))*v*(p**(1./3)))/((2*pi*G)**(1./3))\n\nv = [10,25,50,55]\nP = [4250,2700,4000,5000]\nms = [1.05,1.51,1.21,1.34]\n\npm = zeros(len(v))\n\nfor i in range(len(v)):\n pm[i] = mp(v[i],ms[i]*Sm,P[i]*d)\n\nprint pm\n"
},
{
"alpha_fraction": 0.5180723071098328,
"alphanum_fraction": 0.5903614163398743,
"avg_line_length": 10.066666603088379,
"blob_id": "a7b6c535dd597315f9cd96555c55a6fb0febf1f1",
"content_id": "c85508c61b10ac35c0e95401aa7a59a0bd281adb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 166,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 15,
"path": "/INF1100/Exercises/Chapter_1/ball_print1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "v0 = 5\nt = 0.6\ng = 9.81\ny = v0*t - 0.5*g*t**2\n\nprint '''y(t) is the\nposition of\nour ball'''\n\n\"\"\"\nTerminal>python ball_print1.py \ny(t) is the\nposition of\nour ball\n\"\"\"\n"
},
{
"alpha_fraction": 0.35543277859687805,
"alphanum_fraction": 0.5874769687652588,
"avg_line_length": 22.60869598388672,
"blob_id": "f71caa78ab1a88b8478cfdd6658038d2ff48a573",
"content_id": "bcc1f1297bb596f48b482f85685a7d2811c238d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 543,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 23,
"path": "/FYS1120/LAB/lab2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import*\nfrom numpy.linalg import norm\ndef Bmak(S):\n k = 1.18*10**(-6)\n D = 10\n A = 2*pi*(pi*(4.3*10**(-3))**2)\n return (k*D*S)/A\n\ndef Hmaks(I):\n N = 460\n R = 58\n return (N*I)/(2*pi*R)\n\nbta = [[4.72,4.77],[4.15,4.77],[2.86,2.91],[2.22,2.25],[1.35,1.46],[0.83,0.87]]\nbta = array(bta)\nbtavg = average(bta,axis=1)\nhmaxer = Hmaks(btavg)\n\nsdiff = array([[1996.25-1376.63],[1642.18-1031],[2133.45-1531.53],[2248.52-1673.16],[2295-1779.27],[2315.63-1921.72]])\nbmaxer = Bmak(sdiff)\nprint bmaxer\nprint hmaxer\nprint btavg\n"
},
{
"alpha_fraction": 0.4455445408821106,
"alphanum_fraction": 0.49504950642585754,
"avg_line_length": 13.142857551574707,
"blob_id": "a9acf7180953c51c532f0a7507f4029c498ca689",
"content_id": "3fcca2cf06477526fdfce71f04a442174b57e922",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 101,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 7,
"path": "/INF1100/Exercises/Chapter_2/while_loop.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "C = []\nC_value = 0\nC_max = 100\n\nwhile C_value <= C_max:\n C.append(C_value)\n C_value += 5\n\n\n"
},
{
"alpha_fraction": 0.5363984704017639,
"alphanum_fraction": 0.6398467421531677,
"avg_line_length": 19.076923370361328,
"blob_id": "9644171089edc9f47dbd6a53e775bfa9f8303c12",
"content_id": "8eb7ba62976b34814d7d6290569c2b4671b26f03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 261,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 13,
"path": "/INF1100/Exercises/Chapter_1/interest_rate.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "A = 1000.0\np = 5.0\nn = 3.0\nq = A*(1 + p/100)**n\n\nprint '''After 3 years %.f Euro has grown to\n%.2f Euro with %.g precent interest''' %(A, q, p)\n\n\"\"\"\nTerminal>python interest_rate.py \nAfter 3 years 1000 Euro has grown to\n1157.63 Euro with 5 precent interest\n\"\"\"\n"
},
{
"alpha_fraction": 0.6407999992370605,
"alphanum_fraction": 0.6759999990463257,
"avg_line_length": 25.04166603088379,
"blob_id": "ee761c94b90efd30d139b9d003e674b832257784",
"content_id": "8e3b24f0d94e79355d36daf6ebf120186b0c928a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1250,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 48,
"path": "/FYS2130/Oblig_6/opg16.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom numpy.fft import fft,ifft\nfrom waveclass import*\nfrom opg15 import Hvitstoy\n\n#Konstanter\nfsenter = 5000 #Hz\nbredde = 3000 #Hz\nN = 2000\nK = [12,24]\nomega_analyse = np.linspace(0,10000*2*np.pi,100)\n\nfourier,signal,tid,frekvens = Hvitstoy(fsenter,bredde,N) #\nomega = frekvens*2*np.pi\nsignal2 = np.square(signal)\n#fourier transformasjon\nfftsignal = fft(signal)\nfftsignal2 = fft(signal2)\n#wavelet\nfor j in xrange(len(K)):\n wave = []\n for i in xrange(len(omega_analyse)):\n solver = Wavelet(omega)\n psi = solver(omega_analyse[i],K[j])\n knall = ifft(fftsignal*psi)\n wave.append(knall)\n\n wave = np.array(np.abs(wave))\n times, freq = np.meshgrid(tid,omega_analyse)\n freq = freq/(2*np.pi)\n plt.subplot(2,1,j+1)\n amp = plt.contourf(times,freq,wave)\n cbar = plt.colorbar(amp)\n cbar.ax.set_ylabel('Amplitude')\n plt.xlabel('Tid [s]')\n plt.ylabel('Frekvens [Hz]')\n plt.title('K = %d' %(K[j]) )\n#plott\nplt.figure()\nplt.subplot(2,1,1)\nplt.plot(frekvens,fftsignal)\nplt.title('Fast fourier av signalet')\nplt.xlabel('Frekvens Hz')\nplt.subplot(2,1,2)\nplt.plot(frekvens,fftsignal2)\nplt.title('Fast fourier av signal$^2$')\nplt.xlabel('Frekvens Hz')\nplt.show()\n"
},
{
"alpha_fraction": 0.7789934277534485,
"alphanum_fraction": 0.7789934277534485,
"avg_line_length": 64.28571319580078,
"blob_id": "fc82f9bcd837b19bc4ff992a825c9ecf90c86658",
"content_id": "ec2c22868dc5a32c7cc91947adf41a40a5d05a09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 457,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 7,
"path": "/MAT-INF1100/Oblig_2/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This was my second assigment in this cours. \nHere my teacher had went for a run and tracked it trough GPS, he then uploaded this data on to web site. \nI applied Newton's law of motion on this data and found the speed and how far he ran.\nI also experimented on the difference between Euler's method and Euler's midpoint method.\n\nThe assigment and my answer is shown in the PDF files.\nMy answer is coded with latex, the source code for my answer is .tex file\n"
},
{
"alpha_fraction": 0.4652656018733978,
"alphanum_fraction": 0.6123759746551514,
"avg_line_length": 21.539474487304688,
"blob_id": "5585b2f92565dcdc5d46afdb66861c44a3d20522",
"content_id": "5759eacf60716c98055edd885a88928b5e5a5c39",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1713,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 76,
"path": "/FYS2150/Braggdiffraksjon/diameter.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\n#konstanter\nL = 14.0e-2 #cm\ndL = 0.3e-2 #cm\nlam_c = 2.426e-12 #pm\ne = 1.602e-19 #C\nm = 9.11e-31 #kg\nc = 3e8 #m/s\n\n#verdier\nU = np.linspace(3,5,11)*1e3\nid1 = [2.72,2.62,2.44,2.32,2.37,2.19,2.10,2.20,2.50,2.00,2.02]\nyd1 = [3.32,3.35,3.20,3.05,3.10,2.91,2.83,2.70,2.71,2.63,2.53]\nid2 = [4.75,4.78,4.43,4.00,4.34,4.20,4.13,3.99,3.94,3.90,3.57]\nyd2 = [5.52,5.44,5.20,5.20,4.94,4.85,4.76,4.67,4.45,4.43,4.39]\nid1 = np.array(id1)*1e-2\nyd1 = np.array(yd1)*1e-2\nid2 = np.array(id2)*1e-2\nyd2 = np.array(yd2)*1e-2\n\ndef diameter(di,dy):\n n = len(di)\n d = np.zeros(n)\n for i in xrange(n):\n d[i] = (di[i]+dy[i])/2\n return d\n\ndef lam(U):\n return lam_c*np.sqrt((m*c**2)/(2*e*U))\n\ndef phi(D,lam):\n phi = np.zeros(len(D))\n for i in range(len(D)):\n phi[i] = float(lam[i])/D[i]\n return phi\n\ndef sigm(phi):\n n = len(phi)\n d = [i - np.mean(phi) for i in phi]\n s = np.sqrt(np.sum(np.square(d))/n)\n sigm = np.sqrt(1.0/(n-1))*s\n return sigm\n\n#snitt til hver enkelt diamenter\nd1_mean = diameter(id1,yd1)\nd2_mean = diameter(id2,yd2)\n#snitt til snittet til hver eneklet diameter\nR1d_mean = np.mean(d1_mean)\nR2d_mean = np.mean(d2_mean)\n#standardavvik\nR1_std = np.std(d1_mean)\nR2_std = np.std(d2_mean)\n#bolgelengde\nbolger = lam(U)\n#phi\nphi1 = phi(d1_mean,bolger)\nphi2 = phi(d2_mean,bolger)\nphi1_ = np.mean(phi1)\nphi2_ = np.mean(phi2)\n\n\"\"\"\nprint R1_std*1e2\nprint R2_std*1e2\nprint R1d_mean*1e2\nprint R2d_mean*1e2\nprint bolger*1e12\nprint np.mean(bolger*1e12)\n\"\"\"\nprint np.std(phi1)\nprint np.std(phi1)/np.sqrt(len(phi1))\nprint np.std(phi2)\nprint np.std(phi2)/np.sqrt(len(phi2))\nprint phi1*1e12\nprint phi1_*1e12\nprint phi2*1e12\nprint phi2_*1e12\n"
},
{
"alpha_fraction": 0.5154719948768616,
"alphanum_fraction": 0.5567959547042847,
"avg_line_length": 36.54411697387695,
"blob_id": "1d30249f95557f6c30a4427869b4d07ef4b27e9c",
"content_id": "f00d4b6e62f525862b43c7c561099b546e99ed04",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5106,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 136,
"path": "/AST2000/Oblig_A/rakettmotor.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from Solmal import*\n\n#Escape velocity for my home planet\npm = vekt*1.98892*10**30 #sunmass to kg (hjemplanet)\npr = rad*1000 #km to m (hjemplanet)\nv_esc = np.sqrt((2*G*pm)/pr) #[m/s]\n\nprint 'The escape velocity for my planet is'\nprint '%.2f m/s' %v_esc\n\nrandom.seed(313)\nN = 10**5 #Number of paricles \nk = 1.38064852*10**(-23) #Boltzmann konstant [J/K]\nT = 10**4 #Temperatur [K]\nh2 = 3.35*10**-27 #Hydrogen molecule mass [kg]\nL = 10**-6 #Size of box\nm_sat = 1000 #Satelite mass [kg]\nmu = 0 #mean\nsigma = np.sqrt((k*T)/h2) #deviation\nts = 1000 #Time steps\ndt = 10**-9 #DeltaTime\nat = float(dt)/float(ts) #gjevnt fordelt steg\n\npart_pos = np.zeros((N, 3, ts)) #Position of particle\npart_vel = np.zeros((N,3)) #Velocity of particle\n\nfor i in range(N):\n for j in range(3):\n part_pos[i,j,0] = random.uniform(0,L) #Places the particle at random\n part_vel[i,j] = random.gauss(mu,sigma) #Gives the particle a random velocity\n \n#Kinetic energy\nkinetic = lambda m,v : 0.5*m*v**2 #Formula Kinetic energy\nkix = np.zeros(len(part_vel)) #x-direction\nkiy = np.zeros(len(part_vel)) #y-direction\nkiz = np.zeros(len(part_vel)) #z-direction\nfor i in range(len(part_vel)):\n kix[i] = part_vel[i][0]\n kiy[i] = part_vel[i][1]\n kiz[i] = part_vel[i][2]\n#mean kinetic energy\nmean_kin = np.mean(kinetic(h2,np.sqrt(kix**2+kiy**2+kiz**2)))\n#mean velocity\nmean_vel = np.mean(np.sqrt(part_vel[:,0]**2+part_vel[:,1]**2+part_vel[:,2]**2))\n\nprint 'Kinetic Energy'\nprint 'Exact (3/2)kT'\nprint (3./2.)*(k*T)\nprint 'My calculation of mean kinetic energy'\nprint mean_kin\nprint 'Mean velocity'\nprint 'Analytic solution for mean velocity (sqrt((8kT)/(pi m))'\nprint np.sqrt((8*k*T)/(np.pi*h2))\nprint 'My calculation of mean velocity'\nprint mean_vel\n\n#box\nL = 10**-6 #Size of box \nnr_ut = 0 #Number of particles that escapes \nmom_loss = 0 #Momentum out the hole\ny_mom = 0 #Momentum on wall (y-aksen+)\nhit = 0 #How many particles hit the wall\nhole_min = L/4.0 #Where the 
hole starts\nhole_max = (3*L)/4.0 #Where the hole ends\n\nfor i in xrange(ts-1):\n part_pos[:,:,i+1] = part_pos[:,:,i] + part_vel[:,:]*at\n for j in xrange(N):\n if part_pos[j, 0, i+1] >= L: #x+\n if part_vel[j, 0] > 0:\n part_vel[j,0] = part_vel[j,0]*-1.0\n if ((hole_min < part_pos[j,1,i+1] < hole_max) and #Hole\n (hole_min < part_pos[j,2,i+1] < hole_max)):\n nr_ut += 1\n mom_loss += 2*h2*np.abs(part_vel[j,0])#Momentum formel(2px where px = m*vx)\n else:\n continue\n elif part_pos[j,0, i+1] <= 0: #x-\n if part_vel[j, 0] < 0:\n part_vel[j,0] = part_vel[j,0]*-1.0\n else:\n continue\n elif part_pos[j,1,i+1] >= L: #y+\n if part_vel[j,1] > 0:\n part_vel[j,1] = part_vel[j,1]*-1.0\n hit += 1 #Hit the wall\n y_mom += 2*h2*np.abs(part_vel[j,1]) #Momentum formel (2py where py = m*vy)\n else:\n continue\n elif part_pos[j,1,i+1] <= 0: #y-\n if part_vel[j,1] < 0:\n part_vel[j,1] = part_vel[j,1]*-1.0\n else:\n continue\n elif part_pos[j,2,i+1] <= 0: #z-\n if part_vel[j,2] < 0:\n part_vel[j,2] = part_vel[j,2]*-1.0\n else:\n continue\n elif part_pos[j,2,i+1] >= L: #z+\n if part_vel[j,2] > 0:\n part_vel[j,2] = part_vel[j,2]*-1.0\n else:\n continue\n\nTrykk = lambda F,A : float(F)/A #Trykkformel\nA = L**2 #Areal of box\nAp = (float(N)/L**3)*(k*T) #P = nkT\n \nprint 'Analytic pressure'\nprint Ap\nprint 'My numerical calculation of pressure'\nprint Trykk(y_mom/dt,A)\n\nF = float(mom_loss)/dt #Force due to momentum loss of box F = dp/dt [kgm/s^2]\ndv = float(F)/m_sat #The acceleration dv= F/m [(kgm/s^2)/kg -> m/s^2] = a\n\na_esc = v_esc/1200 #acceleration needed to achieve escape velocity in 20 min [m/s^2]\nnum_box = a_esc/dv #number of boxes needed to achieve escape velocity in 20 min\ntot_h2_esc = (1200.0/dt*nr_ut)*num_box #Total amount H2 molecules that escapes in 20 min\ninit_fuel = tot_h2_esc*h2*num_box #Fuel needed to reach the escape velocity in 20 min\n\nprint 'The force the momentum creates is'\nprint F\nprint 'The change in velocity the box gains of the momentum loss 
is'\nprint dv\nprint 'The acceleration needed to achive escape velocity whitin 20 minutes'\nprint a_esc\nprint 'The number of boxes needed to achieve the acceleration'\nprint num_box\nprint 'Number of particles that escapes in 10^-9 seconds'\nprint nr_ut*num_box\nprint 'Number of particles that escapes in 1200 (20min) seconds'\nprint tot_h2_esc\nprint 'Amount of H2 molecules (Fuel) thats been used in those 20 min'\nprint init_fuel\n"
},
{
"alpha_fraction": 0.6636155843734741,
"alphanum_fraction": 0.6933638453483582,
"avg_line_length": 26.28125,
"blob_id": "8ba10d3a37f3e96206517e40a9165da06cb0612a",
"content_id": "a112ea8fa311b9010958698e3cc9f399f13a691e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 874,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 32,
"path": "/FYS2130/Oblig_3/opg11.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom scipy.fftpack import fft, ifft\nimport urllib2\n#Behandle data\ndata = urllib2.urlopen(\"http://www.sidc.be/silso/DATA/SN_y_tot_V2.0.txt\")\ntimes = []\nsunspots = []\nfor line in data:\n cols = line.split()\n times.append( float(cols[0]) )\n sunspots.append( float(cols[1]) )\n#Array \ntid = np.array(times)\nsolflekker = np.array(sunspots)\nN = float(len(solflekker))\n#Fast fourier transformasjon\nfourier = fft(solflekker)/N\nfs = 1.0/(tid[1]-tid[0])\nfreq = np.linspace(0,fs,N)\n#Plot\nplt.subplot(1,2,1)\nplt.plot(tid, solflekker,'-b') #Tidsbilde\nplt.xlabel('Time, years')\nplt.ylabel('Sunspots')\nplt.title('Time domain')\nplt.subplot(1,2,2)\nplt.plot(freq,np.abs(fourier)) #frekvensbilde\nplt.xlabel('Frekvens, $years^{-1}$')\nplt.ylabel('Fourierkoeff $|X(f)|$')\nplt.title('Frequency domain')\nplt.axis([-0.01,0.5,0,80])\nplt.show()\n\n"
},
{
"alpha_fraction": 0.7956204414367676,
"alphanum_fraction": 0.7956204414367676,
"avg_line_length": 136,
"blob_id": "c1ffd673daa386fde7413ea5706f238ad969a128",
"content_id": "485a24a13e8056dfd16ae502b75f3194b24d1a88",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 274,
"license_type": "no_license",
"max_line_length": 224,
"num_lines": 2,
"path": "/AST2000/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course was an introduction to astrophysics.\nI got my own solarsystem from UiO, solmal.py. In this solarsystem I chose one planet to be my homeplanet. One assigment was to leave my home planet with a rocket, an other assigment was to land a satelite on an other planet.\n"
},
{
"alpha_fraction": 0.44569289684295654,
"alphanum_fraction": 0.5280898809432983,
"avg_line_length": 17.413793563842773,
"blob_id": "f9168b46d5bf6887203dadf6e9a72ee13fcf26d6",
"content_id": "c6f8616feec35c3892154fdef469f750a7ed9381",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 534,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 29,
"path": "/INF1100/Assigments/Chapter_3/Heavside.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def h(x):\n if x < 0:\n value = 0\n elif x >= 0: \n value = 1\n return value\n\ndef test_h():\n q = h(-10e-15)\n expected_values = [0, 0, 1, 1, 1]\n computed_values = [-10, -10e-15, 0, 10e-15, 10]\n for expected, computed in zip(expected_values, computed_values):\n msg = 'computed =%g != %g (expected)' %(computed, expected)\n assert expected == h(computed)\n\ntest_h()\n\nH = h(-10)\nk = h(-10e-15)\nt = h(0)\nu = h(10e-15)\nb = h(10)\n\nprint H, k, t, u, b\n\n\"\"\"\nTerminal>python Heavside.py \n0 0 1 1 1\n\"\"\"\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.6122449040412903,
"avg_line_length": 16.294116973876953,
"blob_id": "7d7d89c7fa48d954898d5b8efa2ec7fb3b7c71b9",
"content_id": "a354fab11bbe4e764f502cf85343bcbd68e1390d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 294,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 17,
"path": "/INF1100/Exercises/Chapter_1/ball_print2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "v0 = 5\nt = 0.6\ng = 9.81\ny = v0*t - 0.5*g*t**2\nprint \"\"\"\nAt t=%f s, a ball with\ninitial velocity v0=%.3E m/s\nis located at the hight %.2f m.\n\"\"\" % (t, v0, y)\n\n\"\"\"\nTerminal>python ball_print2.py \n\nAt t=0.600000 s, a ball with\ninitial velocity v0=5.000E+00 m/s\nis located at the hight 1.23 m.\n\"\"\"\n"
},
{
"alpha_fraction": 0.40875911712646484,
"alphanum_fraction": 0.5735141038894653,
"avg_line_length": 21.302326202392578,
"blob_id": "c51bbb5e09bb2d1951755df533806509443bc821",
"content_id": "ecc4a5f72c094eac9f0d879b5f408bdb40138c66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 959,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 43,
"path": "/FYS2150/Magnetisme/hysterese.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\n#konstanter\nk = 1.01e-6\nD = 10\nn = 130\nA = 6.5e-3\n#data\nStop = np.array([619.51, 449.15, 387.20, 314.92, 227.16, \\\n 118.74, -51.63, -191.02])\nSbunn = np.array([-562.73, -681.47, -665.98, -650.49, -619.51,\\\n -578.21, -557.56, -485.29])\nItop = np.array([4.41, 3.93, 3.51, 3.03, 2.55, 2.20, 1.74, 1.31])\nIbunn = np.array([-4.48, -4.06, -3.58, -3.17, -2.69, -2.34, \\\n -1.93, -1.45])\n\ndeltaS = (np.abs(Stop) + np.abs(Sbunn))/2.0\nIm = (np.abs(Itop) + np.abs(Ibunn))/2.0\ndeltaB = (k*D*deltaS)/(n*A)\nB = deltaB/2.0\nI = np.linspace(4,0.5,8)\n\n#plott\nplt.plot(Im,B*1e3, '*')\nplt.legend(['Maalepunkter'], loc = 'best')\nplt.title('Magnet felt B mot strom I')\nplt.xlabel('Strom I [A]')\nplt.ylabel('Magnetfelt B [mT]')\nplt.axis([1,4.5,1.5,3.6])\nplt.savefig('BvsI.png')\nplt.show()\n\n\n\n\nprint 'delta B'\nprint deltaB\nprint 'delta S'\nprint deltaS\nprint 'B'\nprint B\nprint 'I'\nprint Im\n"
},
{
"alpha_fraction": 0.3792415261268616,
"alphanum_fraction": 0.5349301099777222,
"avg_line_length": 20.782608032226562,
"blob_id": "ec5ef6c167c8bd72e0b4a1474c0b156c3b37fa80",
"content_id": "437d8bfa70c1a2ed6bcd2e9d0747d7d0a900d0be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 501,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 23,
"path": "/INF1100/Assigments/Chapter_8/compute_prob.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import random\n\nN = [10, 10**2, 10**3, 10**6]\nM = 0\nfor i in N:\n for j in range(i+1):\n r = random.random()\n if 0.5<=r<=0.6:\n M += 1\n print 'M = %g N = %i' %(M,i)\n print 'Probability =', float(M)/i, 'when N = %i'%(i)\n\n\"\"\"\nTerminal> python compute_prob.py \nM = 1 N = 10\nProbability = 0.1 when N = 10\nM = 12 N = 100\nProbability = 0.12 when N = 100\nM = 112 N = 1000\nProbability = 0.112 when N = 1000\nM = 100218 N = 1000000\nProbability = 0.100218 when N = 1000000\n\"\"\"\n"
},
{
"alpha_fraction": 0.5306001305580139,
"alphanum_fraction": 0.5912061929702759,
"avg_line_length": 24.876922607421875,
"blob_id": "20b430e3e87217b6d81caa8bb35766c9a11bcee2",
"content_id": "ba98d1d8b9df767031ff43529c163224282b14d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1683,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 65,
"path": "/FYS2130/Oblig_3/wavelet2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom numpy.fft import fft, ifft\nfrom waveclass import*\n\n#konstanter\nfs = 10**4; f1 = 10**3; f2 = 1.6*10**3\nc1 = 1.0; c2 = 1.7\nt1 = 0.15; t2 = 0.5\nsigma1 = 0.01; sigma2 = 0.10\ndt = 1.0/fs\nN = 8192.0\ntid = np.arange(N)*dt\n#signal\nsig = c1*np.sin(2*np.pi*f1*tid)*np.exp(-((tid-t1)/sigma1)**2)+\\\n c2*np.sin(2*np.pi*f2*tid)*np.exp(-((tid-t2)/sigma2)**2)\n#FFT\nFFTsig = fft(sig)/N\nfreq_FFT = np.linspace(0,fs,N)/10.0**3\n#wavelet\nn = 100\nK = [15,24,50,100,115,124]\nomegaa = np.linspace(800,2000,n)*(2*np.pi)\nomega = np.linspace(0,fs,N)*(2*np.pi)\n\nfor j in xrange(len(K)):\n wave = []\n for i in xrange(len(omegaa)):\n solver = Wavelet(omega)\n psi = solver(omegaa[i],K[j])\n knall = ifft(FFTsig*psi)\n wave.append(knall)\n\n wave = np.array(np.abs(wave))\n times, freq = np.meshgrid(tid,omegaa)\n freq = freq/(2*np.pi)\n if j >=4:\n plt.subplot(3,2,j+1)\n amp = plt.contourf(times,freq,wave)\n cbar = plt.colorbar(amp)\n cbar.ax.set_ylabel('Amplitude')\n plt.ylabel('Frekvens [Hz]')\n plt.title('K = %d' %(K[j]) )\n plt.xlabel('Tid [s]')\n else:\n plt.subplot(3,2,j+1)\n amp = plt.contourf(times,freq,wave)\n cbar = plt.colorbar(amp)\n cbar.ax.set_ylabel('Amplitude')\n plt.ylabel('Frekvens [Hz]')\n plt.title('K = %d' %(K[j]))\n#plott\n#Signal\nplt.figure()\nplt.plot(tid,sig)\nplt.title('Tidsbilde')\nplt.xlabel('Tid [s]')\nplt.ylabel('Utslag')\n#FFT\nplt.figure()\nplt.plot(freq_FFT,np.abs(FFTsig))\nplt.xlabel('Frekvens [kHz]')\nplt.ylabel('Fourier koeffisient |X(f)|')\nplt.title('Frekvensspekter')\nplt.axis([0,5,-0.05,0.25])\nplt.show()\n\n"
},
{
"alpha_fraction": 0.6199095249176025,
"alphanum_fraction": 0.6425339579582214,
"avg_line_length": 17.41666603088379,
"blob_id": "985db9a27c3810578b88698679732a8e4b94c0ba",
"content_id": "78d1aef53cd3c04a76f17f02e155e48c83cc8830",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 221,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 12,
"path": "/MAT1110/Oblig1/oblig12.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom numpy import cos, sin, exp, pi\n\nt = np.linspace(0,4*pi,100)\ndef r(t):\n return exp(-t)*np.array([cos(t),sin(t)])\n\nx,y = r(t)\n\nplt.plot(x,y)\nplt.axis('equal')\nplt.show()\n"
},
{
"alpha_fraction": 0.5755919814109802,
"alphanum_fraction": 0.6108075380325317,
"avg_line_length": 25.564516067504883,
"blob_id": "7ee2e4cbec44bf13c3857b07754d3a9f0d793d46",
"content_id": "87206a633bbe21e9a70074c965d67228ebff56ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1647,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 62,
"path": "/INF1100/Project/SIR.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nS(t) = folk som kan bli sjuke\nI(t) = folk som er smitta\nR(t) = folk som har hatt sykdommen og er naa imun\ndt = tids-intervall intervall\n\nS(t+dt) = S(t) - beta*S*I*dt\nS'(t) = - beta*S*I #ODE\n\nI(t+dt) = I(t) + beta*S*I*dt - v*I*dt\nI'(t) = beta*S(t)*I(t) - v*I #ODE\n\nR(t+dt) = R(t) + V*I*dt\nR'(t) = v*I #ODE\n\"\"\"\nimport numpy as np, matplotlib.pyplot as plt, ODEsolver as ODE, sys\n\nSo = 1500; Io = 1; Ro = 0 #initial verdier\ninv = [So,Io,Ro] #liste av initial verdier\ndt = 0.5 #steg lengde\nbeta = 0.0005\nv = 0.1 #sannsynlighet for aa bli frisk per steg lengde\nT = 60 #dager\nn = T/dt #Nr for steg lengder i time_point\ntime_point = np.linspace(0,60,n+1)\n\n#ODE funksjons\ndef ODEfunk(inv, t):\n y = np.zeros((3))\n P = inv\n y[0] = - beta*P[0]*P[1] \n y[1] = beta*P[0]*P[1] - v*P[1]\n y[2] = v*P[1]\n return y\ntol = 1E-12\nterminate = lambda u,t,k: False if np.abs(np.sum(u[k] - u[0])) < tol else True\n\nRes = ODE.RungeKutta4(ODEfunk)\nRes.set_initial_condition(inv)\ny,x = Res.solve(time_point,terminate)\nS = y[:,0]; I = y[:,1]; R = y[:,2]\n\nplt.plot(x,S,x,I,x,R)\nplt.legend(['Motagelig for sykdom', 'Smitta', 'Friske \"meldt\"'])\nplt.axis([0,60,0,2000])\nplt.xlabel('Dager')\nplt.ylabel('Personer')\nplt.title('Enkel SIR modell')\nplt.show()\n\n\n\"\"\"\nNaar jeg ser paa de forskjellige grafene jeg faar av a endre beta\nkan jeg konkludere med at en beta paa 0.0001 saa blir det\ningen epedemi.\n\nHar hentet inpirsajon fra nettet, legger ved en kilde liste.\nKilde liste:\nhttp://chengjunwang.com/en/2013/08/learn-basic-epidemic-models-with-python/\n\nTerminal> python SIR.py \n\"\"\"\n"
},
{
"alpha_fraction": 0.5247524976730347,
"alphanum_fraction": 0.5742574334144592,
"avg_line_length": 12.466666221618652,
"blob_id": "c03a486724002b9308ff169c4638ea45bba17181",
"content_id": "487570479d001345d25ca5a674b71ec9bd783617",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 202,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 15,
"path": "/INF1100/Exercises/Chapter_3/hw_func.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def hw1():\n return 'Hello, world'\n\ndef hw2():\n print 'Hello, world'\n\ndef hw3(str1, str2):\n return str1 + str2\n\nprint 'a)'\nprint hw1() \nprint 'b)'\nhw2()\nprint 'c)'\nprint hw3('Hello ', 'UiO')\n"
},
{
"alpha_fraction": 0.4859813153743744,
"alphanum_fraction": 0.4984423816204071,
"avg_line_length": 25.75,
"blob_id": "282f46cdb5f74a06b862badd486ecee915dcc5d5",
"content_id": "23dbdb2556518a80ce6c5d744bb1345e18d95bf2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 321,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 12,
"path": "/FYS2130/Oblig_6/waveclass.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nclass Wavelet:\n def __init__(self, omega):\n self.omega = omega\n\n def __call__(self,omegaa,K):\n omg = np.array(self.omega)\n omga = omegaa\n A = np.exp(-(K*((omg-omga)/omga))**2)\n B = np.exp(-K**2)*np.exp(-(K*omg/omga)**2)\n psi = 2*(A-B)\n return psi\n"
},
{
"alpha_fraction": 0.5729166865348816,
"alphanum_fraction": 0.6354166865348816,
"avg_line_length": 15,
"blob_id": "32db24120d51236f618688fb4e2a2bd59f71ff90",
"content_id": "9a9b214478bc3264be3385833ff7b000231c2146",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 96,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 6,
"path": "/INF1100/Exercises/Chapter_4/c2f_func.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import sys\n\nC = float(sys.argv[1]) # read 1st command-line argument\nF = (9./5)*C + 32\n\nprint F\n"
},
{
"alpha_fraction": 0.5356125235557556,
"alphanum_fraction": 0.6068376302719116,
"avg_line_length": 28.25,
"blob_id": "d0e13f66d0e53dcc5183793d4a789f8e66e43c73",
"content_id": "3f68fa586e0b7d910effceea5d32d2a8206f23ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 351,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 12,
"path": "/INF1100/Exercises/Chapter_3/yfunc1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def yfunc(t, v0): #function to find the velocity of a particle\n g = 9.81\n y = v0*t - 0.5*g*t**2\n dydt = v0 - g*t\n return y, dydt\n\nposition, velocity = yfunc(0.6, 3)\n\nt_values = [0.05*i for i in range(10)]\nfor t in t_values:\n position, velocity = yfunc(t, v0=5)\n print 't=%-10g position=%-10g velocity=%-10g' %(t, position, velocity)\n"
},
{
"alpha_fraction": 0.6618625521659851,
"alphanum_fraction": 0.6674057841300964,
"avg_line_length": 19.976743698120117,
"blob_id": "03fa43117f43cae1ab2c465b003e750a8fe6c01f",
"content_id": "ce62587b58e3826bfe135cf5e187732083d8b766",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 902,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 43,
"path": "/INF1100/Assigments/Chapter_6/read_error.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "infile = open('lnsum.dat', 'r')\n\nfor i in range(24):\n infile.readline()\n \nepsi = []\nex_er = []\nk = []\n \nfor line in infile:\n words = line.split()\n epsi.append(float(words[1].strip(',')))\n ex_er.append(float(words[4].strip(',')))\n k.append(float(words[5].strip('n=')))\n\ninfile.close()\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nepsilon = np.array(epsi)\nexact_error = np.array(ex_er)\nn = np.array(k)\n\nplt.semilogy(n,epsilon)\nplt.semilogy(n,exact_error)\nplt.legend(['epsilon','exact error'])\nplt.title('Epsilon vs exact error')\nplt.xlabel('n')\nplt.ylabel('ln')\nplt.show()\n\n\"\"\"\nThis is the way i fetch the file:\n\nTerminal> curl https://raw.githubusercontent.com/hplgit/scipro-primer/master/src/funcif/lnsum.py >lnsum.py\n\nTerminal> python lnsum.py > lnsum.dat\n\nI saved the file as lnsum.dat, that is why I used this name in my code.\n\nTerminal> python read_error.py \n\"\"\"\n"
},
{
"alpha_fraction": 0.49819493293762207,
"alphanum_fraction": 0.5234656929969788,
"avg_line_length": 22.08333396911621,
"blob_id": "82b56e7056c5cf64c7ea986a419f6170938a7801",
"content_id": "58b7068aa0db92dda3919335f159757d9ecc7aa1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 277,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 12,
"path": "/INF1100/Exercises/Chapter_4/f2c_fcml.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import sys\n\nif len(sys.argv) < 2: #if test\n print 'provide a temperature!' #prints the argument if true \n exit() #closes the program\n \nF = sys.argv[1]\n\nF = float(F)\nC = (F-32)*5.0/9\n\nprint '%g degrees F is %g degrees C' %(F, C)\n"
},
{
"alpha_fraction": 0.5374823212623596,
"alphanum_fraction": 0.5855728387832642,
"avg_line_length": 23.34482765197754,
"blob_id": "d8651d219c7373e5527aef814bc2fe896f37c7d1",
"content_id": "99303d0984bc05dcc147b6c6dbc85f8576b1eb85",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 707,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 29,
"path": "/FYS-MEK1110/Oblig_3/q.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom seaborn import*\n\n#variable\nk = 100.0\nm = 0.1; v0 = 0.1; b = 0.1 # masse[kg],hastighet[m/s]og Fjaerkonstant[N/m]\ntime = 2.0; dt = 1/200.0 # Tid [s] + tidssteg\n#funksjoner til Euler-Cromer\nn = int(round(time/dt))\nt = linspace(0,2,n) #liste me alle tidsstegene fra start til slutt\nxi = zeros(n); v = zeros(n); a = zeros(n)\nv[0] = 0.1 \nfor i in range(n-1):\n F = -k*xi[i]\n a[i] = F/m\n v[i+1] = v[i] + a[i]*dt\n xi[i+1] = xi[i] + v[i+1]*dt\n#eksakt funksjon\nw = sqrt(k/m)\nx = lambda t: (v0/w)*sin(w*t)\n#plott\nsubplot(2,1,1) \nplot(t,x(t))\ntitle('Eksakt')\nsubplot(2,1,2)\nplot(t,xi,'r')\ntitle('Euler-Cromer'); xlabel('x-akse'); ylabel('y-akse')\nsavefig('q.png')\nshow()\n\n"
},
{
"alpha_fraction": 0.7666666507720947,
"alphanum_fraction": 0.7904762029647827,
"avg_line_length": 69,
"blob_id": "c446b0dc9c67cc12527fbaf7f775472a3071b1e2",
"content_id": "4a9574ec4f208fb16c8a03840e99e0db6a150020",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 210,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 3,
"path": "/MEK1100/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course was an introduction to the theory of scalar and vector fields, \nwitth examples from fluid mechanics, geophysics and physics.\nSyllabus: Maththews, P.C (2005), Vector Calculus (7th. edition) London: Springer\n"
},
{
"alpha_fraction": 0.7875000238418579,
"alphanum_fraction": 0.8500000238418579,
"avg_line_length": 79,
"blob_id": "f91466ffe649b698886180fa208e8f88050037b3",
"content_id": "a50ba92fb178b2753b4f14b77093e7314eb691d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 80,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 1,
"path": "/FYS2130/Project/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "Prosjektoppgave.pdf is the assigment and Prosjektoppgave_15280.pdf is my answer\n"
},
{
"alpha_fraction": 0.6976743936538696,
"alphanum_fraction": 0.7558139562606812,
"avg_line_length": 18.11111068725586,
"blob_id": "658ad1e8fdad7156f62d9b9f613058a727254695",
"content_id": "99f2a284a54410edb24d342948efbbae44ca8efb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 172,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 9,
"path": "/AST2000/Oblig_A/verdier.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from Solmal import*\n\nprint 'planet vekt [solmasse],[kg]'\nprint vekt\nprint vekt*1.98892*10**30\nprint 'planetradius [km]'\nprint rad\nprint 'forlatnings hastighet'\nprint v_esc\n"
},
{
"alpha_fraction": 0.5348837375640869,
"alphanum_fraction": 0.595075249671936,
"avg_line_length": 27.115385055541992,
"blob_id": "456af8717c4c46f55bd7130c361c66ed07c18705",
"content_id": "eab701185d784f5eee84d764bd64de2b655547ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 731,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 26,
"path": "/INF1100/Project/SIRV_optimal_duration.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import SIRV, ODEsolver as ODE, matplotlib.pyplot as plt, numpy as np\n\ndt = 0.5\nV = [range(31)]\ndef gamma(t):\n p = 0\n if 6 <= t <= (6+V):\n p = 0.1\n else:\n p = 0\n return p\n\nproblem = SIRV.vaccination(nu=0.1, beta=0.0005, S0=1500, I0=1, R0=0,\\\n T=10, V0=0, p=gamma)\nsolver = ODE.RungeKutta4(problem)\nsolver.set_initial_condition(problem.initial_value())\ny, x = solver.solve(problem.time_points(dt))\nS = y[:,0]; I = y[:,1]; R = y[:,2]; V = y[:,3]\n\nplt.plot(x,S,x,I,x,R,x,V) \nplt.legend(['Motagelig for sykdom', 'Smitta', 'Friske \"meldt\"','Vaksinert'])\nplt.axis([0,31,0,2000])\nplt.xlabel('Dager')\nplt.ylabel('Personer')\nplt.title('SIR model med Vaksinering etter 6 dager')\nplt.show()\n"
},
{
"alpha_fraction": 0.4078235626220703,
"alphanum_fraction": 0.4777361750602722,
"avg_line_length": 20.265487670898438,
"blob_id": "04497e49ede560d7e01a9acc5fa30bfa82b33703",
"content_id": "b62a2178e4dfdb7cb25f43ed8900e3cd72b520c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2403,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 113,
"path": "/INF1100/Assigments/Chapter_7/Quadratic.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nfunction:\nf(x;a,b,c) = ax**2 +bx +c)\n\nformula:\n(-b +-sqrt(b**2 -(4*a*c)))/2\n\"\"\"\nimport numpy as np\nimport cmath as cm\n\nclass quadratic:\n \n def __init__(self, a, b, c):\n self._a, self._b, self._c = a, b, c\n\n def roots(self):\n if self._b**2 > (4*self._a*self._c):\n r1 = (-self._b + np.sqrt(self._b**2 - (4*self._a*self._c)))/2*self._a\n r2 = (-self._b - np.sqrt(self._b**2 - (4*self._a*self._c)))/2*self._a\n return r1, r2\n elif self._b**2 == (4*self._a*self._c):\n return -self._b/2*self._a\n else:\n cr1 = (-self._b + cm.sqrt(self._b**2 - (4*self._a*self._c)))/2*self._a\n cr2 = (-self._b - cm.sqrt(self._b**2 - (4*self._a*self._c)))/2*self._a\n return cr1, cr2\n\n def value(self,x):\n return self._a*x**2 + self._b*x + self._c\n\n def table(self, L, R, n):\n self._L, self._R, self._n = L, R, n\n s = np.linspace(self._L, self._R, self._n)\n for i in range(len(s)+1):\n f = self._a*i**2 + self._b*i + self._c\n print 'x = %g f(x) = %g' %(i, f) \n\n\ndef test_value():\n q = quadratic(2, -2, -2)\n comp = q.value(4)\n expect = 22\n tol = 1e-14\n success = abs(comp - expect) < tol\n msg = 'somthing is not right'\n assert success, msg\n\ndef test_roots():\n q = quadratic(1, -3, 2)\n comp1, comp2 = q.roots()\n exp1, exp2 = 2, 1\n tol = 1e-14\n success = ((comp1 + comp2)-(exp1+exp2)) < tol\n msg = 'I did not go as planned!!'\n assert success, msg\n\ntest_value()\ntest_roots()\n\n \nq = quadratic(3, -12, 4)\n\nprint 'value'\nprint q.value(4)\nprint'-----------------'\nprint 'table'\nq.table(1,10,10)\nprint'-----------------'\nprint 'roots'\nprint q.roots()\nprint'-----------------'\n\n\n\n\"\"\"\nTerminal >python Quadratic.py \nvalue\n4\n-----------------\ntable\nx = 0 f(x) = 4\nx = 1 f(x) = -5\nx = 2 f(x) = -8\nx = 3 f(x) = -5\nx = 4 f(x) = 4\nx = 5 f(x) = 19\nx = 6 f(x) = 40\nx = 7 f(x) = 67\nx = 8 f(x) = 100\nx = 9 f(x) = 139\nx = 10 f(x) = 184\n-----------------\nroots\n(32.696938456699066, 3.3030615433009327)\n-----------------\n\nDemo av bruk 
av Quadratic som module:\n\nTerminal> python\n>>> from Quadratic import quadratic\n>>> quadratic(1,-3,2).roots()\n(2.0, 1.0)\n>>> quadratic(2,4,2).value(3)\n32\n>>> quadratic(1,3,-2).table(1,5,5)\nx = 0 f(x) = -2\nx = 1 f(x) = 2\nx = 2 f(x) = 8\nx = 3 f(x) = 16\nx = 4 f(x) = 26\nx = 5 f(x) = 38\n\n\"\"\"\n"
},
{
"alpha_fraction": 0.4054878056049347,
"alphanum_fraction": 0.48475611209869385,
"avg_line_length": 17.22222137451172,
"blob_id": "5a26709a489b83f1b0798ce09a4fcf1b1be7bab5",
"content_id": "a8cbcdaef3a993c901db5a8f1f1c8362665750c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 328,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 18,
"path": "/INF1100/Exercises/Chapter_5/loan.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\ndef loan(L, p, N):\n L = float(L)\n index_set = range(N+1)\n\n x = np.zeros(len(index_set))\n y = np.zeros(len(index_set))\n\n x[0] = L\n\n for n in index_set[1:]:\n y[n] = (p/(12.0*100))*x[n-1] + L/N\n x[n] = x[n-1] + p/(12.0*100)*x[n-1] - y[n]\n\n return x, y\n\nprint loan(1000,5,100)\n"
},
{
"alpha_fraction": 0.49471211433410645,
"alphanum_fraction": 0.6063454747200012,
"avg_line_length": 24.787878036499023,
"blob_id": "98fccfcefe4cfa7ca66caca42ccbc1c35b2aefa6",
"content_id": "69573a1e4e88df50283106f7d0c27a597782723c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 851,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 33,
"path": "/FYS2150/Magnetisme/faraday.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom lineaertilpassing import*\n\nlam = 595e-9\nL = 30e-3\nB = np.array([119.0, 102.0 ,83.0, 63.0, 43.0])\nBn = np.array([-43.0, -63.0, -83.0, -102.0, -119.0])\nI = np.array([3.0, 2.5, 2.0, 1.5, 1.0])\nIn = np.array([-1.0, -1.5, -2.0, -2.5, -3.0])\ntheta = np.array([44.6, 44.8, 45.6, 46.8, 47.4])\nthetan = np.array([50.0, 51.0, 52.0, 52.6, 53.0])\n\nVp = theta/(L*B)\nVn = thetan/(L*Bn)\n\nplt.plot(B,Vp,'*')\nplt.plot(Bn,Vn,'*')\nplt.xlabel('Magnetisk flukstetthet B [mT]')\nplt.ylabel('Verdet-konstanten')\nplt.title('Verdet-konstanten')\nplt.legend(['Positiv magnetfelt','Negativ magnetfelt'],\\\n loc = 'best')\nplt.savefig('verdet.png')\nplt.show()\n\nlin = linear(B,Vp)\nc,m,dc,Dm,d = lin.Gen_linje()\nprint m, Dm\nlin = linear(Bn,Vn)\nc,M,dc,DM,d = lin.Gen_linje()\nprint M, DM\n\nprint (m+M)/2.0, np.sqrt(Dm**2+DM**2)\n"
},
{
"alpha_fraction": 0.7547169923782349,
"alphanum_fraction": 0.7547169923782349,
"avg_line_length": 52,
"blob_id": "5b7bddfb62fe36a29cb048bdfaca294354980284",
"content_id": "4b8b7e79712cbecd2a358a8279fa799197a103a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 53,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 1,
"path": "/INF1100/Assigments/Chapter_7/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "ODEsolver.py is not my code! It was provided by UiO!\n"
},
{
"alpha_fraction": 0.38823530077934265,
"alphanum_fraction": 0.5058823823928833,
"avg_line_length": 13.166666984558105,
"blob_id": "c0ba7dd69add90f1a0e91df2f9a476b4bbf9b7e7",
"content_id": "f55419ebdc1300f86b6cb1d246c1bfea9a2cbeb7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 85,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 6,
"path": "/INF1100/Exercises/Chapter_3/yfunc.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def yfunc(t, v0):\n g = 9.81\n return v0*t - 0.5*g*t**2\n\ny = yfunc(1, 5)\nprint y\n"
},
{
"alpha_fraction": 0.5159574747085571,
"alphanum_fraction": 0.5510638356208801,
"avg_line_length": 27.33333396911621,
"blob_id": "74728dd13f70f800710165a89512b3da27738794",
"content_id": "b1f5ad00cc0a5aa16256badf690939eba3d9167e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1880,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 66,
"path": "/FYS2130/Oblig_6/opg15.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\ndef Hvitstoy(fsenter,bredde,N):\n Fs = 4.0*(fsenter + bredde)\n fsigma = bredde/2.0 #Hz\n y = np.zeros(N) #fft av signalet\n T = float(N)/Fs\n t = np.linspace(0,T*(N-1)/N,N)\n f = np.linspace(0,Fs*(N-1)/N,N)\n nsenter = np.floor(N*fsenter/(Fs*(N-1)/N))\n nsigma = np.floor(N*fsigma/(Fs*(N-1)/N))\n gauss = np.exp(-(f-fsenter)*(f-fsenter)/(fsigma*fsigma))\n ampl = np.random.rand(N) \n ampl = ampl*gauss \n faser = np.random.rand(N)\n faser = faser*2*np.pi\n y = ampl*(np.cos(faser) + 1j*np.sin(faser))\n Nhalv = np.int(np.round(N/2))\n for k in xrange(1,Nhalv):\n y[N-k] = np.conj(y[k])\n y[Nhalv+1] = np.real(y[Nhalv+1])\n y[0] = 0.0\n q = np.real(np.fft.ifft(y)*200) #signalet\n return y,q,t,f\n\ndef akorrelasjon(g,M,N):\n C = np.zeros(N-M)\n for j in xrange((N-M)-1):\n teller = 0\n nevner = 0\n for i in xrange(M):\n teller += g[i]*g[i+j]\n nevner += g[i]*g[i]\n C[j] = teller/nevner\n return C\n \n\nif __name__ == \"__main__\":\n fsenter = 5000 #Hz\n bredde = 3000 #Hz\n N = 2000\n M = np.int(np.round(N/2.))\n fftsignal,signal,tid,frekvens = Hvitstoy(fsenter,bredde,N)\n\n C = akorrelasjon(signal,M,N)\n #plot\n plt.plot(C)\n plt.plot(7.15,np.e**-1,'o')\n plt.plot((0,30),(np.e**-1,np.e**-1),color = 'black')\n plt.plot((7.182,7.182),(-1,1), color = 'black')\n plt.legend(['Autokorrelasjon','$t_c = 7.182$'])\n plt.title('autokorrelasjon')\n plt.axis([0,30,-1,1])\n plt.xlabel('Tid[s]')\n plt.ylabel('Korrelasjons konstant C')\n plt.figure()\n plt.plot(frekvens,fftsignal)\n plt.title('fft av stoyet')\n plt.xlabel('Frekvens Hz')\n plt.ylabel('Utslag')\n plt.figure()\n plt.plot(tid,signal)\n plt.title('signal')\n plt.xlabel('tid s')\n plt.ylabel('Utslag')\n plt.show()\n \n \n"
},
{
"alpha_fraction": 0.5088536739349365,
"alphanum_fraction": 0.5312208533287048,
"avg_line_length": 20.039215087890625,
"blob_id": "893c6f4e044eaf4bb10623f0bd7975248335b470",
"content_id": "25487421c8934558d8a305f78096b35f0dbd7082",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1073,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 51,
"path": "/MAT-INF1100/Oblig_2/algoritme_a.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\nclass pos_aks:\n\n def __init__(self, t, v):\n self.t, self.v = t, v\n#a)\n def aks(self):\n t = np.array(self.t); v = np.array(self.v)\n a = np.zeros(len(t))\n for i in xrange(1,len(t)):\n a[i] = (v[i]-v[i-1])/(t[i]-t[i-1])\n return a\n#b)\n def pos(self):\n t = np.array(self.t); v = np.array(self.v)\n s = np.zeros(len(v))\n\n for i in xrange(1,len(t)):\n s[i] = s[i-1] + (((v[i-1])+(v[i]))/2.0)*(t[i]-t[i-1])\n \n return s\n\n\nt = []; v = []\n\ninfile = open('running.txt','r')\nfor line in infile:\n tnext, vnext = line.strip().split(',')\n t.append(float(tnext)), v.append(float(vnext))\ninfile.close()\n\n#c)\nap = pos_aks(t,v)\na = ap.aks()\np = ap.pos()\n\nplt.subplot(2,1,1)\nplt.plot(t,a)\nplt.title('Akselerasjon graf')\nplt.ylabel('a(t)')\nplt.legend(['akselerasjon'])\nplt.subplot(2,1,2)\nplt.plot(t,p,'black')\nplt.title('Posisjon graf')\nplt.axis([0,7000,0,2.8e4])\nplt.xlabel('t')\nplt.ylabel('s(t)')\nplt.legend(['posisjon'])\nplt.show()\n"
},
{
"alpha_fraction": 0.44105854630470276,
"alphanum_fraction": 0.5300721526145935,
"avg_line_length": 20.13559341430664,
"blob_id": "06e4245ca60fa15c3efff4b6de24330838a9871d",
"content_id": "cbca2439b6065e7b0a44c506ccaae6b667d7c83f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1247,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 59,
"path": "/FYS2130/Oblig_2/2RK4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\n#akselerasjon\ndef diffEq(x,v,t):\n if t <30:\n a = (F/m)*np.cos(np.sqrt((k/m)*0.8-(b**2/(2*m**2)))*t)-b/m*v-k/m*x\n else:\n a = -float(b)/m*v - float(k)/m*x\n return a \n\n#Runge-Kutta\ndef rk4(x0,v0,t0):\n a1 = diffEq(x0,v0,t0)\n v1 = v0\n xHalv1 = x0 + v1 * dt/2.0\n vHalv1 = v0 + a1 * dt/2.0\n a2 = diffEq(xHalv1,vHalv1,t0+dt/2.0)\n v2 = vHalv1\n xHalv2 = x0 + v2 * dt/2.0\n vHalv2 = v0 + a2 * dt/2.0\n a3 = diffEq(xHalv2,vHalv2,t0+dt/2.0)\n v3 = vHalv2\n xEnd = x0 + v3 * dt\n vEnd = v0 + a3 * dt\n a4 = diffEq(xEnd,vEnd,t0 + dt)\n v4 = vEnd\n aMid = 1.0/6.0 * (a1 + 2*a2 + 2*a3 + a4)\n vMid = 1.0/6.0 * (v1 + 2*v2 + 2*v3 + v4)\n xEnd = x0 + vMid * dt\n vEnd = v0 + aMid * dt\n return xEnd, vEnd\n\n\n#konstanter\nm = 0.1\nk = 10.0\nb = 0.04\nF = 0.1\nN = 10**3\n#arrays\nz = np.zeros(N)\nv = np.zeros(N)\nt = np.linspace(0, 50, N)\n\n#initial betingelser\nz[0] = 0 #[m]\nv[0] = 0 #[m/s]\ndt = t[1]-t[0] #tidssteg\n\n#iterasjoner i Runge-Kutta 4\nfor i in range(N-1):\n z[i+1], v[i+1] = rk4(z[i],v[i],t[i])\n\nplt.plot(t,z)\nplt.title('$\\omega_f$ ulik $\\omega_0$')\nplt.xlabel('Tid[s]')\nplt.ylabel('Utsvingning [m]')\nplt.savefig('w0ulikwf.png')\nplt.show()\n"
},
{
"alpha_fraction": 0.49803149700164795,
"alphanum_fraction": 0.5590550899505615,
"avg_line_length": 25.710525512695312,
"blob_id": "43f13322a9bd1602d0773fdda04dcb78cbef0b62",
"content_id": "5c530f9af7c1e2c61805b527e875725ef25c2003",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1016,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 38,
"path": "/FYS-MEK1110/Oblig_1/Oblig1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom math import *\n\nF = 400 # kraft Newton[N]\nm = 80.0 # masse [kg]\np = 1.293 # [kg/m^3]\nA = 0.45 # [m^2]\nCd = 1.2 # Drag koeffisienten\nw = 0 # Luft hastighet\ntime = 8 # Tid sekunder[s] \ndt = 1./100 #dette er hva jeg har valgt som delta t\nn = int(time/dt) \na = np.zeros(n); t = np.zeros(n)\nx = np.zeros(n); v = np.zeros(n)\nx[0] = 0; v[0] = 0 #initial verdien til posisjon og hastighet\nq = 0\n\nfor i in range(int(n-1)):\n a[i] = (400 - 0.5*p*Cd*A*(v[i]-w)**2)/m\n v[i+1] = v[i] + a[i]*dt\n x[i+1] = x[i] + v[i+1]*dt\n t[i+1] = t[i] + dt\n if x[i+1] > 100:\n q = i + 1\n break\n\nplt.subplot(3,1,1) #lager tre separate grafer i samme plott\nplt.title('Bevegelses diagram')\nplt.plot(t[0:q],x[0:q]) #[0:q]=henter tall verdiene fra 0 til q\nplt.ylabel('x [m]')\nplt.subplot(3,1,2)\nplt.plot(t[0:q],v[0:q])\nplt.ylabel('v [m/s]')\nplt.subplot(3,1,3)\nplt.plot(t[0:q],a[0:q])\nplt.ylabel('a [m/s^2]')\nplt.xlabel('t [sekund]')\nplt.show()\n\n"
},
{
"alpha_fraction": 0.3333333432674408,
"alphanum_fraction": 0.4726027250289917,
"avg_line_length": 18.909090042114258,
"blob_id": "61bbfabedf5353e1c63fe75d34ba17245baac501",
"content_id": "deb62f7ab435398150e7eb68bb009b78050347d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 438,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 22,
"path": "/FYS1120/Oblig_1/oppg4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n\nV0 = 10.0 #[V]\ne0 = 8.85*10**(-12) #[F/m]\nd = 0.01 #[m]\nrho = -10**(-5) #[C/m^3]\nE = lambda x: V0/d - (rho*d)/(2*e0) + (x*rho)/e0\nV = lambda x: V0 + (-V0/d + (rho*d)/(2*e0))*x - (rho*x**2)/(2*e0)\nst = linspace(0,0.01,100)\n\nplot(st,V(st))\nxlabel('x = meter')\nylabel('Volt')\ntitle('Graf av V(x)')\nshow()\n\n\nq = -1.6*10**(-19)\nm = 9.11*10**(-31)\nv0 = sqrt(((V(0.005885)*q- V0*q)*2)/m)\n\nprint V0/d\n"
},
{
"alpha_fraction": 0.5699067711830139,
"alphanum_fraction": 0.5992010831832886,
"avg_line_length": 23.799999237060547,
"blob_id": "153428ab2194b1908e6006c2483cb618684c8fe6",
"content_id": "8bba242e3b48b17558aa740642a4e65609b9bc5e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 751,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 30,
"path": "/FYS2130/Project/oppg1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom classRungeKutta4 import*\n\n#konstanter\nm = 0.5 #masse [kg]\nk = 1 #fjaerkraft [N/m]\nT = 20.0 #tid [s]\ndt = 1e-2 #tidssteg\nN = int(T/dt) #antall tidssteg\nt = np.linspace(0,20,N)\nF = 0 #paatrukket kraft [N]\nb = 0 #motstand [kg/s]\nomega = np.sqrt(k/m)\n#array\nx = np.zeros(N) #posisjon\nv = np.zeros(N) #hastighet\n#initialbetingelser\nx[0] = 1 \nv[0] = 0\npendulum = DrivenPendulum(F,omega,k,b,m)\nsolver = RungeKutta4(pendulum)\nfor i in xrange(N-1):\n x[i+1],v[i+1] = solver(x[i],v[i],t[i],dt)\n \nplt.plot(x,v)\nplt.title('Posisjon mot hastighet')\nplt.xlabel('Posisjon [m]')\nplt.ylabel('Hastighet [m/s]')\n#plt.savefig('pvhb.png')\nplt.show()\n\n \n\n"
},
{
"alpha_fraction": 0.8139534592628479,
"alphanum_fraction": 0.8139534592628479,
"avg_line_length": 42,
"blob_id": "d86158e30a79de628375b03a834203914a0d4f3a",
"content_id": "8a748df0a2e307e1ab38c7602fdf095100271020",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 43,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 1,
"path": "/MAT1120/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course I learnt about linear algebra.\n"
},
{
"alpha_fraction": 0.45692306756973267,
"alphanum_fraction": 0.5292307734489441,
"avg_line_length": 17.571428298950195,
"blob_id": "993fa5e1315dee056651b640647e728ccd86d996",
"content_id": "36b7b0b73b879a01717df40349d10ccee239355e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 650,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 35,
"path": "/INF1100/Assigments/Chapter_5/fortune_and_inflation1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\n\"\"\"\ndifferens ligning\nx0 = F ; c0 = ((p*q)/1E*4)*F\nx[n] = x[n-1] + (p/100)*x[n-1] - c[n-1]\nc[n] = c[n-1] + (1/100)*c[n-1]\n\"\"\"\n\nF = 4 #fortune\np = 27.0 #annual interest of percent\nq = 5.0 #interest of the first year\nn = 10 # years\n\nx = np.zeros(n+1)\nc = np.zeros(n+1)\nx[0] = F\nc[0] = ((p*q)/1e4)*F\n\nfor i in xrange(1,n+1):\n c[i] = c[i-1] + (1/100)*c[i-1]\n x[i] = x[i-1] + (p/100)*x[i-1] - c[i-1]\n \n\nplt.plot(x, '.-.-.')\nplt.xlabel('years')\nplt.ylabel('f(x)')\nplt.title('Fortune')\nplt.legend(['f(x) = %.2f' % (x[-1])])\nplt.show()\n\n\"\"\"\nTerminal>python fortune_and_inflation1.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.35680750012397766,
"alphanum_fraction": 0.44600939750671387,
"avg_line_length": 16.75,
"blob_id": "8e63d6f75e6bc471930883aaf6c154f22a13c10e",
"content_id": "b0b1dca00c98b3f512e121ffa3378bc523f987c0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 213,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 12,
"path": "/MAT-INF1100/Oblig_2/diff3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\nk = [0,1,2,3,4,5]\nn = 5\nx = np.zeros(len(k))\nt = np.zeros(len(k))\n\nfor i in range(1,n+1):\n h = 0.4\n xtmp = x[i-1] + (h/2)*(1-x[i-1]**2)\n x[i] = x[i-1] + h*(1-xtmp**2)\n t[i] = (i)*h\n"
},
{
"alpha_fraction": 0.4889795780181885,
"alphanum_fraction": 0.5110204219818115,
"avg_line_length": 25.630434036254883,
"blob_id": "8fe286c9fd8fbf0f880cb5da7176aa7bb7bdf3bf",
"content_id": "5012e3cb4b1335ae54cb11adde9a4883b8f13532",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1225,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 46,
"path": "/MEK1100/Oblig_2/oblig2a.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from oblig2 import*\n#Putting the matrices and vectrs in a list\ndat = [x,y,u,v,xit,yit]\n#Checking the size of the matrices and vectors\nfor j in range(6):\n value = []\n indx = ['x','y','u','v','xit','yit']\n for i in dat:\n value += [shape(i)]\n if j < 4:\n print 'Matrisen %s har (x,y)' %(indx[j])\n print value[j]\n elif j >= 4:\n print 'Vektoren %s har (x,y)' %(indx[j])\n print value[j]\n#test functions. \n#Checking if x is regulated with an interval on 0.5.\ndef test_x(q):\n for i in q:\n for j in range(194-1):\n tst = i[j+1]-i[j]\n sucsess = 0\n if tst == 0.5:\n sucsess\n else :\n print 'something went wrong'\n#Checking if y is regulated with an interval on 0.5.\n#and if it takes the whole diameter of the pipe.\ndef test_y(q):\n ykor = [] \n for i in y:\n ykor += [i[1]]\n if abs(i[1]) > 50:\n print 'Overstiger diameteren!'\n else:\n ykor\n for j in range(194):\n tst = ykor[j+1]-ykor[j]\n success = 0\n if tst == 0.5:\n success\n else :\n print 'this is not right'\n#Calling the test functions\ntest_x(x)\ntest_y(y)\n"
},
{
"alpha_fraction": 0.4359642565250397,
"alphanum_fraction": 0.4940432012081146,
"avg_line_length": 22.76991081237793,
"blob_id": "8fdc1beacd10b6c2edfa8ddf0312a6e240c061dd",
"content_id": "5b6ca4798b62494f13e27ca5656abb7a347afdae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2686,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 113,
"path": "/FYS2130/Project/classRungeKutta4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\nclass DrivenPendulum:\n\n def __init__(self,f0,omega_f,k,b,m):\n self.f0, self.omega = f0, omega_f\n self.k, self.b, self.m = k, b, m\n\n \n def __call__(self,x,v,t):\n a = (self.f0*np.cos(self.omega*t))/self.m \\\n -self.b/self.m*v-self.k/self.m*x\n return a\n\nclass Dripp:\n def __init__(self,g,psi,b,k,m):\n self.g, self.psi = g, psi\n self.b, self.k, self.m = b, k, m\n\n def __call__(self,x,v,t):\n B = self.b+self.psi\n a = self.g - float(B*v)/self.m -( float(self.k)/self.m)*x\n return a\n \nclass RungeKutta4:\n\n def __init__(self,f):\n self.f = f\n\n def __call__(self,x0,v0,t0,dt):\n a1 = self.f(x0,v0,t0)\n v1 = v0\n xHalv1 = x0 + v1 * dt/2.0\n vHalv1 = v0 + a1 * dt/2.0\n a2 = self.f(xHalv1,vHalv1,t0+dt/2.0)\n v2 = vHalv1\n xHalv2 = x0 + v2 * dt/2.0\n vHalv2 = v0 + a2 * dt/2.0\n a3 = self.f(xHalv2,vHalv2,t0+dt/2.0)\n v3 = vHalv2\n xEnd = x0 + v3 * dt\n vEnd = v0 + a3 * dt\n a4 = self.f(xEnd,vEnd,t0 + dt)\n v4 = vEnd\n aMid = 1.0/6.0 * (a1 + 2*a2 + 2*a3 + a4)\n vMid = 1.0/6.0 * (v1 + 2*v2 + 2*v3 + v4)\n xEnd = x0 + vMid * dt\n vEnd = v0 + aMid * dt\n return xEnd, vEnd\n \n\"\"\"\ndef test_rk4():\n f = lambda x,v,t : \n tRK4 = RungeKutta4(0.1,10.0,0.1)\n compx, compv = tRK4.rk4(0.1,0,0,0.3)\n expect1,expect2 = 0.029, 1.233\n tol = np.exp(-7) #grunnet avrunding i expect.\n success = abs((compx+compv)-(expect1+expect2)) < tol\n msg = 'FEIL!'\n assert success, msg\n \ntest_aks()\ntest_rk4()\n\"\"\"\nif __name__ == '__main__':\n\n #konstanter\n m = 0.1 #[kg]\n k = 10 #[N/m]\n b = 0.1 #[kg/s]\n N = 10000\n\n #arrays\n z = np.zeros(N)\n v = np.zeros(N)\n t = np.linspace(0, 10, N)\n\n #initial betingelser\n z[0] = 0.1 #[m]\n v[0] = 0 #[m/s]\n dt = t[1]-t[0] #tidssteg\n\n #iterasjoner i Runge-Kutta 4\n dp = DrivenPendulum(0,0,k,b,m)\n solver = RungeKutta4(dp)\n for i in range(N-1):\n z[i+1], v[i+1] = solver(z[i],v[i],t[i],dt)\n\n #analytisk losning\n A = 0.1\n omega = (np.sqrt(399)/2.0)\n #Funksjon for det 
analytiske uttrykket for z(t)\n def y(t):\n y = np.exp(-0.5*t)*A*np.cos(omega*t)\n return y\n an = np.zeros(N)\n an[0] = A\n for i in range(N-1):\n an[i+1] = y(t[i])\n\n\n \n #plott\n plt.plot(t,z)\n plt.title('Fjaer pendel')\n plt.xlabel('Tid[s]')\n plt.ylabel('Posisjon[m]')\n #plt.savefig('400.png')\n\n plt.plot(t,an)\n plt.legend(['RK4', 'analytisk'])\n #plt.savefig('AnalytiskmotRK4.png')\n plt.show()\n"
},
{
"alpha_fraction": 0.6144200563430786,
"alphanum_fraction": 0.6572622656822205,
"avg_line_length": 23.202531814575195,
"blob_id": "ab5f92d00f438e5bde88269e495672ddddb8fc9a",
"content_id": "58d0402cc2d1225ff06b58d4d7f48e07f828fd3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1914,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 79,
"path": "/FYS2130/Oblig_3/opg14.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom scipy.fftpack import fft, ifft\nfrom scipy import signal\n\ndef FFT(p,n): #p = perioder, n = samplinger\n t = np.linspace(0,p,n)\n signal = np.sin((2*np.pi)*t)\n FFT = fft(signal)/float(n)\n fs = 1.0/(t[1]-t[0])\n freq = np.linspace(0,fs,n)\n return signal,FFT,freq,t\ndef square(p,n): #p = perioder, n = samplinger\n t = np.linspace(0,p,n)\n square = signal.square((2*np.pi)*t)\n fs = 1.0/(t[1]-t[0])\n freq = np.linspace(0,fs,n)\n FFT = fft(square)/n\n return square,FFT,freq,t\n#konstanter\nN = 512.0\nn = 2**(14)\n#\nsignal_a,FFT_a,freq_a,t_a = FFT(13,N)\nsignal_b,FFT_b,freq_b,t_b = FFT(13.2,N)\nsquare,FFT_square,freq_square,t_s = square(16,n)\n\n\n#plot\n#signal a\nplt.subplot(1,2,1)\nplt.plot(t_a,signal_a)\nplt.title('Tidsbildet')\nplt.xlabel('Tid [s]')\nplt.ylabel('Utslag [rad]')\nplt.subplot(1,2,2)\nplt.title('Frekvensspekteret')\nplt.plot(freq_a,np.abs(FFT_a))\nplt.xlabel('Frekvens [Hz]')\nplt.ylabel('Fourierkonst. |X(f)|')\nplt.axis([0,20,0,0.6])\n#plt.savefig('13perioder.png')\n#signal b\nplt.figure()\nplt.subplot(1,2,1)\nplt.plot(t_b,signal_b)\nplt.title('Tidsbildet')\nplt.xlabel('Tid [s]')\nplt.ylabel('Utslag [rad]')\nplt.subplot(1,2,2)\nplt.title('Frekvensspekteret')\nplt.plot(freq_b,np.abs(FFT_b))\nplt.xlabel('Frekvens [Hz]')\nplt.ylabel('Fourierkonst. |X(f)|')\nplt.axis([0,20,0,0.6])\n#Firkantbolge\nplt.figure()\nplt.subplot(1,2,1)\nplt.plot(t_s,square)\nplt.title('Tidsbildet')\nplt.xlabel('Tid [s]')\nplt.ylabel('Utslag [rad]')\nplt.axis([0,16.5,-1.2,1.2])\nplt.subplot(1,2,2)\nplt.title('Frekvensspekteret')\nplt.plot(freq_square,np.abs(FFT_square))\nplt.xlabel('Frekvens [Hz]')\nplt.ylabel('Fourierkonst. |X(f)|')\nplt.axis([0,500,0,0.1])\n\n#Analytisk amplitude\nAmplitude = np.zeros(n)\nfor i in xrange(n):\n Amplitude[i] = 2.0/(np.pi*(2*i-1))\nt = np.linspace(0,16,n)\nplt.figure()\nplt.plot(t,Amplitude)\nplt.ylabel('Amplitude')\nplt.xlabel('Tid')\nplt.show()\n\n\n"
},
{
"alpha_fraction": 0.303108811378479,
"alphanum_fraction": 0.7398223280906677,
"avg_line_length": 36.52777862548828,
"blob_id": "28ab0d70249839e46e17211e702829a537b42557",
"content_id": "5bb955d1fc2f7ba3ff77c23d131d236015d7ea1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2702,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 72,
"path": "/INF1100/Exercises/Chapter_2/repeated_sqrt.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from math import sqrt\nfor n in range(1,60):\n r = 2.0\n for i in range(n):\n r = sqrt(r)\n for i in range(n):\n r = r**2\n print '%d times sqrt and **2: %.16f' % (n, r)\n\n\n\"\"\"\nTerminal>python repeated_sqrt.py \n1 times sqrt and **2: 2.0000000000000004\n2 times sqrt and **2: 1.9999999999999996\n3 times sqrt and **2: 1.9999999999999996\n4 times sqrt and **2: 1.9999999999999964\n5 times sqrt and **2: 1.9999999999999964\n6 times sqrt and **2: 1.9999999999999964\n7 times sqrt and **2: 1.9999999999999714\n8 times sqrt and **2: 2.0000000000000235\n9 times sqrt and **2: 2.0000000000000235\n10 times sqrt and **2: 2.0000000000000235\n11 times sqrt and **2: 2.0000000000000235\n12 times sqrt and **2: 1.9999999999991336\n13 times sqrt and **2: 1.9999999999973292\n14 times sqrt and **2: 1.9999999999973292\n15 times sqrt and **2: 1.9999999999973292\n16 times sqrt and **2: 2.0000000000117746\n17 times sqrt and **2: 2.0000000000408580\n18 times sqrt and **2: 2.0000000000408580\n19 times sqrt and **2: 2.0000000001573586\n20 times sqrt and **2: 2.0000000001573586\n21 times sqrt and **2: 2.0000000001573586\n22 times sqrt and **2: 2.0000000010885857\n23 times sqrt and **2: 2.0000000029511749\n24 times sqrt and **2: 2.0000000066771721\n25 times sqrt and **2: 2.0000000066771721\n26 times sqrt and **2: 1.9999999917774933\n27 times sqrt and **2: 1.9999999917774933\n28 times sqrt and **2: 1.9999999917774933\n29 times sqrt and **2: 1.9999999917774933\n30 times sqrt and **2: 1.9999999917774933\n31 times sqrt and **2: 1.9999999917774933\n32 times sqrt and **2: 1.9999990380770896\n33 times sqrt and **2: 1.9999971307544144\n34 times sqrt and **2: 1.9999971307544144\n35 times sqrt and **2: 1.9999971307544144\n36 times sqrt and **2: 1.9999971307544144\n37 times sqrt and **2: 1.9999971307544144\n38 times sqrt and **2: 1.9999360966436217\n39 times sqrt and **2: 1.9999360966436217\n40 times sqrt and **2: 1.9999360966436217\n41 times sqrt and **2: 1.9994478907329654\n42 times sqrt and 
**2: 1.9984718365144798\n43 times sqrt and **2: 1.9965211562778555\n44 times sqrt and **2: 1.9965211562778555\n45 times sqrt and **2: 1.9887374575497223\n46 times sqrt and **2: 1.9887374575497223\n47 times sqrt and **2: 1.9887374575497223\n48 times sqrt and **2: 1.9887374575497223\n49 times sqrt and **2: 1.8682459487159784\n50 times sqrt and **2: 1.6487212645509468\n51 times sqrt and **2: 1.6487212645509468\n52 times sqrt and **2: 1.0000000000000000\n53 times sqrt and **2: 1.0000000000000000\n54 times sqrt and **2: 1.0000000000000000\n55 times sqrt and **2: 1.0000000000000000\n56 times sqrt and **2: 1.0000000000000000\n57 times sqrt and **2: 1.0000000000000000\n58 times sqrt and **2: 1.0000000000000000\n59 times sqrt and **2: 1.0000000000000000\n\"\"\"\n"
},
{
"alpha_fraction": 0.8247422575950623,
"alphanum_fraction": 0.8247422575950623,
"avg_line_length": 96,
"blob_id": "7aba2d0d6187a90ed0db3fb7c5c81bed7f8c8a01",
"content_id": "c9e1b86019ac4ab42890e88bd4b4bae458267e7e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 97,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 1,
"path": "/INF1100/Exercises/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This exercises was done as a preparation for the regularly assigments we got after each chapter.\n"
},
{
"alpha_fraction": 0.5105633735656738,
"alphanum_fraction": 0.5845070481300354,
"avg_line_length": 14.777777671813965,
"blob_id": "53cfa75eab9d79c4f1caa90e659ccc3ac7e17322",
"content_id": "4c2173805ee7e18373006290d9c250ce4ccf29e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 284,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 18,
"path": "/INF1100/Exercises/Chapter_5/plot_ball2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import *\nfrom matplotlib.pyplot import *\nimport sys\n\nv0_list = sys.argv[1:]\ng = 9.81\n\nfor v0 in v0_list:\n v0 = float(v0)\n t = linspace(0, 2*v0/g, 100)\n y = v0*t - 0.5*g*t**2\n plot(t,y,label='v0=%g' %v0)\n\nxlabel('time (s)')\nylabel('heigth (m)')\nlegend()\n\nshow()\n"
},
{
"alpha_fraction": 0.447429895401001,
"alphanum_fraction": 0.5011682510375977,
"avg_line_length": 16.1200008392334,
"blob_id": "5b7e617ad538b6ffc40971202c53fe1bce7eaaef",
"content_id": "f4f83cb39aa26a9a3f914e5306907574dd420f56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 856,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 50,
"path": "/INF1100/Exercises/Chapter_5/sin_Taylor_series_diffeq.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\n\"\"\"\n#a)\na[j] = -x**2/((2*j+1)*2*j)*a[j-1]\ns[j] = s[j-1] + a[j-1]\n\"\"\"\n#b)\n\ndef sin_taylor(x,n):\n a = np.zeros(n+2)\n s = np.zeros(n+2)\n\n a[0] = x\n #s[0] = 0 trenger ikke denne pga s = np.zeros()\n\n for j in range(1, n+2):\n a[j] = -x**2/((2*j+1)*2*j)*a[j-1]\n s[j] = s[j-1] + a[j-1]\n\n return s[n+1], abs(a[n+1]) #abs = absolutt verdien\n\n#print sin_taylor(np.pi/2, 40)\n\n#c)\n\n\"\"\"\nn = 2\nS = x - (x**3)/3! + (x**5)/5!)\n\"\"\"\nfrom math import factorial\ndef test_sin_Taylor():\n n = 2\n x = (3*np.pi)/2\n tol = 1e-10\n\n expected = x - (x**3)/factorial(3) + (x**5)/factorial(5)\n computed = sin_taylor(x,n)\n success = abs(expected - computed[0]) < tol\n msg = 'something went wrong'\n assert success, msg\n\ntest_sin_Taylor()\n\n#d)\n\nx = np.pi/2\nfor n in range(10):\n s = sin_taylor(x,n)\n print s[0]\n"
},
{
"alpha_fraction": 0.42553192377090454,
"alphanum_fraction": 0.5106382966041565,
"avg_line_length": 13.8421049118042,
"blob_id": "43549fd9837b81a5f6b0f05df2d5c983c32bddcc",
"content_id": "54dd987de2f31e3f285c251e9717640be37ace77",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 282,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 19,
"path": "/FYS-MEK1110/Oblig_3/p.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n\nAu = 149597870691 #meter\ntime = 10\ndt = 0.001\nn = int(round(time/dt))\narray = zeros(len(t))\n\n\n\"\"\"\n#Euler-Cromer\nfor i in range(n-1):\n F = -k*x[i]\n a[i] = F/m\n v0[i+1] = v[i] + a[i]*dt\n x[i+1] = x[i] + v[i+1]*dt\n t[i+1] = t[i] + dt\n\"\"\"\nprint array\n"
},
{
"alpha_fraction": 0.42691031098365784,
"alphanum_fraction": 0.6312292218208313,
"avg_line_length": 23.1200008392334,
"blob_id": "96b1b3f18cd40aad01eacdbb04cb7630ef3dab9a",
"content_id": "5f791f594a57ed43d30a4a00c6de0d4bf4113bc4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 602,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 25,
"path": "/FYS1120/LAB/lab1Stoppeklokketider.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Jan 23 12:14:58 2018\n\n@author: christoffer\n\"\"\"\n\nimport scipy as sc\nimport matplotlib.pyplot as plt\n\ndata1 = sc.array([1.57, 1.56, 1.66, 1.74, 1.75, 1.62, 1.63, 1.71, 1.57, 1.66, 1.56, 1.72, 1.66, 1.54, 1.58])\ndata2 = sc.array([1.68, 1.55, 1.62, 1.66, 1.61, 1.61, 1.58, 1.51, 1.75, 1.70, 1.61, 1.59, 1.68, 1.67, 1.53])\ndata1og2 = sc.append(data1, data2)\n\nprint(sc.mean(data1))\nprint(sc.mean(data2))\nprint(sc.mean(data1og2))\n\nprint(sc.std(data1))\nprint(sc.std(data2))\nprint(sc.std(data1og2))\n\nplt.hist(data1og2[::2], bins = 10)\nplt.show()"
},
{
"alpha_fraction": 0.7669903039932251,
"alphanum_fraction": 0.7766990065574646,
"avg_line_length": 101,
"blob_id": "1aa11aa23bcbff70f2be00ef0dbe42528691d254",
"content_id": "1285497727756535b477e122b21f091d258602ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 103,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 1,
"path": "/MAT1110/Oblig_2/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This is a my complete answer for my 2. assigment for this course. I used MATLAB to solve the matrices \n"
},
{
"alpha_fraction": 0.5104166865348816,
"alphanum_fraction": 0.5651041865348816,
"avg_line_length": 22.9375,
"blob_id": "d17fdbb1c2227dc0553d3ea3237b6f15d1388934",
"content_id": "33e68c46281cefde9853dcff30f717e81bc41ea3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 384,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 16,
"path": "/INF1100/Exercises/Chapter_3/sum_func.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\ndef sum_1k(M):\n \"\"\"Dette er formelen for alle matematiske summe formlene\"\"\"\n s = 0\n for k in range(1, M+1):\n s += 1.0/k\n return s\n\ndef test_sum_1k():\n expected = 1.0 + 1.0/2 + 1.0/3\n computed = sum_1k(3)\n tol = 1e-10\n success = abs(expected - computed) < tol\n msg = \"Expected %g, got %g\" %(expected, computed)\n assert success, msg\n\ntest_sum_1k()\n"
},
{
"alpha_fraction": 0.44908615946769714,
"alphanum_fraction": 0.4934725761413574,
"avg_line_length": 13.185185432434082,
"blob_id": "72fb96f9053b49c5563ed34551b98be28583db5e",
"content_id": "a360d3bb19dd664854f42df905f87cdcc329030b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 383,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 27,
"path": "/INF1100/Exercises/Chapter_3/maxmin_list.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def max(a):\n max_elem = a[0]\n\n for elm in a[1:]:\n if elm > max_elem:\n max_elem = elm\n \n return max_elem\n\ndef min(a):\n min_elem = a[0]\n\n for elm in a[1:]:\n if elm < min_elem:\n min_elem = elm\n return min_elem\n\ntest = [5,6,-2,7,54,-7,0,9,4]\n\nprint max(test)\nprint min(test)\n\n\"\"\"\nTerminal>python maxmin_list.py \n54\n-7\n\"\"\"\n"
},
{
"alpha_fraction": 0.8269230723381042,
"alphanum_fraction": 0.8269230723381042,
"avg_line_length": 51,
"blob_id": "d14637f1546b1ac8d967006e1feafc9da9e126e1",
"content_id": "716a9012c4c164ea6924b6c9d9a0185e4e1b3d4d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 52,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 1,
"path": "/MAT1110/Oblig1/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This is the coding part of a mathematical assigment\n"
},
{
"alpha_fraction": 0.5768947601318359,
"alphanum_fraction": 0.679175853729248,
"avg_line_length": 31.35714340209961,
"blob_id": "1a6ca32480ad842fe877e357f791480abd05ec59",
"content_id": "a2440847ec9cd48d9ee6a57b147514ccc67bc92e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1359,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 42,
"path": "/INF1100/Exercises/Chapter_3/egg_func.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from math import pi, log\n\ndef egg(M,To=20,Ty=70): \n c = 3.7\n K = 5.4e-3 # thermal conductivity\n rho = 1.038 # density\n Tw = 100 # Water temperatur\n time = (M**(2.0/3.0)*c*rho**(1.0/3.0))/(K*pi**2*(4*pi/3)**(2.0/3.0)) \\\n *log(0.76*(To-Tw)/(Ty-Tw))\n return time\n\nTT = [60, 70]\nM = [47, 67]\nT0 = [4, 25]\n\nfor m in M:\n for ty in TT:\n for to in T0:\n time = egg (m, To=to, Ty=ty)\n print '''With mass %g and init temperatur %g,\nthe core temperature %g is reasched after %g seconds''' %(m, to, ty, time)\n\n\n\"\"\"\nTerminal>python egg_func.py \nWith mass 47 and init temperatur 4,\nthe core temperature 60 is reasched after 211.744 seconds\nWith mass 47 and init temperatur 25,\nthe core temperature 60 is reasched after 124.775 seconds\nWith mass 47 and init temperatur 4,\nthe core temperature 70 is reasched after 313.095 seconds\nWith mass 47 and init temperatur 25,\nthe core temperature 70 is reasched after 226.126 seconds\nWith mass 67 and init temperatur 4,\nthe core temperature 60 is reasched after 268.202 seconds\nWith mass 67 and init temperatur 25,\nthe core temperature 60 is reasched after 158.044 seconds\nWith mass 67 and init temperatur 4,\nthe core temperature 70 is reasched after 396.576 seconds\nWith mass 67 and init temperatur 25,\nthe core temperature 70 is reasched after 286.418 seconds\n\"\"\"\n"
},
{
"alpha_fraction": 0.39191919565200806,
"alphanum_fraction": 0.5525252819061279,
"avg_line_length": 16.36842155456543,
"blob_id": "5339689c527aad9c4d9e0e66e91f219ad4fa40b3",
"content_id": "0ab76a7e697c34123085a5114c8f71a7b8708a35",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 990,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 57,
"path": "/INF1100/Assigments/Chapter_6/cos_Taylor_series_diffeq.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nformula: (-1)**n*(x**2n/2n!)\n#a)\na[j] = -x**2/(2.0*j*(2*j-1))*a[j-1]\ns[j] = s[j-1] + a[j-1]\n\n\"\"\"\n#b)\n\nimport numpy as np\nfrom math import factorial\n\ndef cos_taylor(x,n):\n a = np.zeros(n+2)\n s = np.zeros(n+2)\n\n a[0] = 1.\n s[0] = 0\n\n for j in range(1, n+2):\n a[j] = -x**2/(2.0*j*(2*j-1))*a[j-1]\n s[j] = s[j-1] + a[j-1]\n\n return s[n+1], abs(a[n+1]) #abs = absolutt verdien\n#c)\ndef test_cos_Taylor():\n n = 3\n x = (3*np.pi)/2\n tol = 1e-10\n\n expected = 1 - (x**2)/factorial(2) + (x**4)/factorial(4) - (x**6)/factorial(6)\n computed = cos_taylor(x,n)\n success = abs(expected - computed[0]) < tol\n msg = 'something went wrong'\n assert success, msg\n\ntest_cos_Taylor()\n\nx = np.pi\n\nfor n in range(10):\n s = cos_taylor(x,n)\n print s[0]\n \n\"\"\"\nTerminal> python cos_Taylor_series_diffeq.py \n1.0\n-3.93480220054\n0.123909925872\n-1.21135284298\n-0.976022212624\n-1.00182910401\n-0.999899529704\n-1.00000416781\n-0.99999986474\n-1.00000000353\n\"\"\"\n"
},
{
"alpha_fraction": 0.6192703247070312,
"alphanum_fraction": 0.6323667168617249,
"avg_line_length": 24.452381134033203,
"blob_id": "e65fa09116cc19a354cb14664389f897a2773003",
"content_id": "213547298c41502afd443b1e8b2aa7c3605ceff6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1069,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 42,
"path": "/INF1100/Assigments/Chapter_6/density_improved.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#a)\ndef read_densities(filename):\n infile = open(filename,'r')\n densities = {}\n\n for line in infile:\n line.strip()\n words = line.split()\n density = float(words[-1])\n substance = ' '.join(words[:-1])\n densities[substance] = density\n \n infile.close()\n return densities\n#b)\ndef substance_densities(filename):\n infile = open(filename,'r')\n densities = {}\n\n for line in infile:\n substance = line[0:10].strip()\n density = float(line[10:-1].strip())\n densities[substance] = density\n \n infile.close\n return densities\n#c)\ndef test_sub_den():\n comp1 = read_densities('densities.dat')\n comp2 = substance_densities('densities.dat')\n success = (comp1 == comp2)\n msg = 'Something went horribly wrong!'\n assert success, msg\n\ntest_sub_den()\n\n\"\"\"\nTest funksjonen min tester om funksjon 1 er lik funksjon 2.\n\nJeg hentet filen jeg brukte til aa lage programmet her:\nTerminal> curl https://raw.githubust/scipro-primer/master/src/dictstring/densities.dat < densities.dat\n\"\"\"\n"
},
{
"alpha_fraction": 0.6378132104873657,
"alphanum_fraction": 0.678815484046936,
"avg_line_length": 38.90909194946289,
"blob_id": "e7341399810b76d7f6655de1b3eaf5a059477259",
"content_id": "ffec647400f282dbea1ecd6e64d6645c5caeb480",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1317,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 33,
"path": "/INF1100/Assigments/Chapter_1/kick.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from math import pi\na = 0.11 # radius of a football\nm = 0.43 # mass of football\ng = 9.81 # acceleration of gravity\n\nQ = 1.2 # denisty of air\nVh = 120/3.6 # velocity of a hard kick. Convertes from km/h to m/s\nVs = 30/3.6 # velocity of a soft kick. Convertes from km/h to m/s\nA = pi*a**2 # normal to the velocity direction\nCd = 0.4 # drag coefficiant\n\n# Fd = (1/2)*Cd*Q*A*V**2 is the formula to the drag force due to air resistance\nFds = (1.0/2)*Cd*Q*A*Vs**2 # soft kick\nFdh = (1.0/2)*Cd*Q*A*Vh**2 # hard kick\nFg = m*g # gravity formula\n\nR1 = Fds/Fg # ratio between force of gravity and drag force from a soft kick\nR2 = Fdh/Fg # ratio between force of gravity and drag force from a hard kick\n\nprint '''The force of gravity on a football on planet Tellus is %.2g N.\nWith a soft kick it has a drag force on ca %.g N, and\nwith a hard kick the drag force is ca %.3g\nRatio between drag force and the force of gravity\nfrom a soft kick is %.2g and from a hard kick is %.2g''' % (Fg, Fds, Fdh, R1, R2)\n\n\"\"\"\nTerminal>python kick.py \nThe force of gravity on a football on planet Tellus is 4.2 N.\nWith a soft kick it has a drag force on ca 0.6 N, and\nwith a hard kick the drag force is ca 10.1\nRatio between drag force and the force of gravity\nfrom a soft kick is 0.15 and from a hard kick is 2.4\n\"\"\"\n"
},
{
"alpha_fraction": 0.4790046513080597,
"alphanum_fraction": 0.5132192969322205,
"avg_line_length": 25.79166603088379,
"blob_id": "dcc02b70725ab0e36a2ada920bf54121e9627346",
"content_id": "4929481814a093c2e5bf4fb0a0a647409353965a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 643,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 24,
"path": "/INF1100/Assigments/Chapter_5/growth_years_efficient.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\np = 17 # interest rate\nN = 5 # number of years\nn = 1 # year 1\nx = 2500 # initial amount\n\noutfile = open('growth_years_efficient.dat', 'w')\noutfile.write('Growth years efficient \\n')\noutfile.write('-------------------------- \\n')\noutfile.write('Years: | amount: |\\n')\noutfile.write('-------------------------- \\n')\noutfile.write('1 | 2500.00 |\\n')\nwhile n < N:\n x = x + (p/100.0)*x\n n += 1\n outfile.write('%d | %.2f |\\n' %(n, x))\noutfile.write('--------------------------')\noutfile.close()\n\n\"\"\"\nTerminal>python growth_years_efficient.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.5766924023628235,
"alphanum_fraction": 0.6075407266616821,
"avg_line_length": 33.32352828979492,
"blob_id": "f457f0339051295edfe40e68b499b2bd5be80b68",
"content_id": "a99c11133dad13e0d50d93fe1ff03ab87466dd6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1167,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 34,
"path": "/FYS2130/Oblig_6/test.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\ndef HvitStoyGauss(N,fsenter,fullbredde):\n Fs = 2.*(fsenter + fullbredde) + 1\n fsigma = fullbredde/2.0 # sigma\n y = np.zeros(N) #blir fft av stoysignal\n T = N/Fs #antall malinger\n t = np.linspace(0,T*(N-1)/N,N)\n f = np.linspace(0,Fs*(N-1)/N, N)\n nsenter = np.floor(N*fsenter/(Fs*(N-1)/N))\n nsigma = np.floor(N*fsigma/(Fs*(N-1)/N))\n gauss = np.exp(-(f-fsenter)*(f-fsenter)/(fsigma*fsigma)) #gaussisk fordelt frekvenser\n ampl = np.random.rand(N) #stoy\n ampl = ampl*gauss #gaussisk fordelt frekvenser med stoy\n faser = np.random.rand(N)\n faser = faser*2*np.pi\n y = ampl*(np.cos(faser) + 1j*np.sin(faser))\n Nhalv = np.round(N/2)\n for k in range(1,Nhalv):\n y[N-k] = np.conj(y[k])\n y[Nhalv+1] = np.real(y[Nhalv+1])\n y[0] = 0.0\n q = np.real(np.fft.ifft(y)*200)\n return y, q,t,f\ndef plotstoy():\n y, q, t, f = HvitStoyGauss(20000,5e+3,3e+3)\n plt.plot(f,y)\n plt.title('fft av et stoysignal')\n plt.show()\n plt.plot(t,q)\n plt.title('stoysignalet med senterfrekvens '+str(5e+3)+ ' og stoybredde ' +str(round(3e+3/2)))\n plt.show()\nplotstoy()\n"
},
{
"alpha_fraction": 0.8098159432411194,
"alphanum_fraction": 0.8098159432411194,
"avg_line_length": 80.5,
"blob_id": "fba51321728933e6d5a2c23cd2fba8aedbaeb6a5",
"content_id": "3f89772c62dff1be0c31080fcb95e7a634192f38",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 163,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 2,
"path": "/INF1100/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course I learnt object oriented programming with python. \nSyllabus: Langtangen, H P. A primer on Scientific Programming with python (Fifth edition). Springer\n"
},
{
"alpha_fraction": 0.46047359704971313,
"alphanum_fraction": 0.5336976051330566,
"avg_line_length": 29.842697143554688,
"blob_id": "a876b35260377ed34e80eb57999ebf761e8f843b",
"content_id": "256143f76704060214305a73f83e7b1bbe311c66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2745,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 89,
"path": "/AST2000/Oblig_B/B8.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom Solmal import*\n\nSM = 1.98892*10**30 #solmasse i kg\ndt = 0.5\ntmax = 10**7\nn = int(round(tmax/dt))\nn_times = zeros(n)\nsatPos = zeros((n,2))\nv = zeros((n,2))\na = zeros((n,2))\ntegn = zeros((n,2)) #tegne radius til planet\nplanet = 4 #planet\nrh = rho[planet] #Atmospheric density at surface [kg/m^3]\nM = Mass[planet]*SM #massen til planeten [kg]\nm = 100.0 #massen til satelitten [kg]\nA = (2*pi*(0.5**2)) #Areal til fallskjerm\nrad = Radius[planet]*1000 #Radius til planeten\nlimit = 2.5*10**4 #Fd grense\n\n#Formler\ng = lambda x,r: ((G*M)/r**3)*x #Gravitasjon akselerasjon\np = lambda H,Hs: rh*(e**(-(H/float(Hs)))) #rho med hensyn paa h\nfd = lambda p,v: (p*A*v**2)/2.0 #drag force Fd\n\n#initial verdier\nsatPos[0,1] = rad + 4*10**7 #hoyde over overflaten [km] (y-retning)\na[0,1] =g(satPos[0,1],sqrt(satPos[0,0]**2+satPos[0,1]**2))\na[0,0] =g(satPos[0,0],sqrt(satPos[0,0]**2+satPos[0,1]**2))\norbv = sqrt(a[0,1]*satPos[0,1])\nv[0,0] = 1000.5\ngr = G*M/rad**2\n\n#Euler-Cromer\nfor i in xrange(n-1):\n n_times[i+1] = n_times[i] + dt\n h = sqrt(satPos[i,0]**2+satPos[i,1]**2)\n hs = 75200.0/gr*m\n r = sqrt(satPos[i,0]**2+satPos[i,1]**2)\n evx = - (v[i,0]/norm(v[i]))\n evy = - (v[i,1]/norm(v[i]))\n tegn[i,0] = rad*evx\n tegn[i,1] = rad*evy\n if fd(p(h,hs),norm(v[i])) >= limit:\n print'start hoyde'\n print satPos[0,1]\n print'slutt'\n print satPos[i,1]\n print 'Fd'\n print fd(p(h,hs),sqrt(v[i,0]**2+v[i,1]**2))\n break\n elif sqrt(satPos[i,0]**2+satPos[i,1]**2) <= rad:\n print 'kraesj'\n print 'Fd : %.2f' %fd(p(h,hs),sqrt(v[i,0]**2+v[i,1]**2))\n print 'Hastighet: %.4f km/t' %(norm(v[i])*3.6)\n print 'Tid: %.2f t' %(n_times[i]/3600.0)\n break\n else:\n a[i,0] = -g(satPos[i,0],r) + (p(h,hs)*A*norm(v[i])**2)/(2*m)*evx\n a[i,1] = -g(satPos[i,1],r) + ((p(h,hs)*A*norm(v[i])**2)/(2*m))*evy\n v[i+1,0] = v[i,0]+a[i,0]*dt\n v[i+1,1] = v[i,1]+a[i,1]*dt\n satPos[i+1,0] = satPos[i,0] + v[i+1,0]*dt\n satPos[i+1,1] = satPos[i,1] + v[i+1,1]*dt\n\nprint 
'gravitasjons akselerasjon : %.2f' %(G*M/rad**2)\n\nvi = zeros(n)\ntime = zeros(n)\nfor i in xrange(n):\n vi[i] = sqrt(v[i,0]**2+v[i,1]**2)*3.6\n time[i] = n_times[i]/3600.0\n \nsubplot(2,1,1)\nplot(satPos[:,0],satPos[:,1])\nplot(tegn[:,0],tegn[:,1])\nlegend(['Satellitt bane','Planet'])\ntitle('Satellitt landing')\nylabel('meter')\nsubplot(2,1,2)\nplot(time[::100],vi[::100])\nxlabel('time')\nylabel('km/t')\ntitle('Hastighets graf')\n#axis([0,150,0,6500])\nsavefig('jord_A=50v=2500.png')\nshow()\n\n#system.landing_sat(satPos[::100],n_times[::100],planet)\n"
},
{
"alpha_fraction": 0.42406195402145386,
"alphanum_fraction": 0.4508635997772217,
"avg_line_length": 18.9761905670166,
"blob_id": "e04b08f37b3e53c9139b83d295889a02344c5388",
"content_id": "bf5bb29f32e8d07034f08eaa03b8cbee59a940dc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1679,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 84,
"path": "/INF1100/Assigments/Chapter_8/freq_2dice_test.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import random\nimport numpy as np\n\ndef roll_dice(n):\n s = []\n for i in xrange(n):\n r = random.randint(1,6)\n g = random.randint(1,6)\n s.append(r + g)\n return s\n\"\"\"\ndef sums(roll_dice):\n rd = roll_dice\n sums = [\n two = []\n three = []\n four = []\n five = []\n six = []\n seven = []\n eight = []\n nine = []\n ten = []\n eleven = []\n twelve = []\n [\n if rd == 2:\n two.append(rd)\n elif rd == 3:\n three.append(rd)\n elif rd == 4:\n four.append(rd)\n elif rd == 5:\n five.append(rd)\n elif rd == 6:\n six.append(rd)\n elif rd == 7:\n seven.append(rd)\n elif rd == 8:\n eight.append(rd)\n elif rd == 9:\n nine.append(rd)\n elif rd == 10:\n ten.append(rd)\n elif rd == 11:\n eleven.append(rd)\n elif rd == 12:\n twelve.append(rd)\n return sums\n\"\"\"\nn = 30\nlst = np.sort(roll_dice(n))\nprint lst\n\nresult = {}\nresult =\n\n\ntwo = []; three = []; four = []; five = []; six = []; seven = []\neight = []; nine = []; ten = []; eleven = []; twelve = []\nfor rd in lst:\n if rd == 2:\n two.append(1)\n elif rd == 3:\n three.append(1)\n elif rd == 4:\n four.append(1)\n elif rd == 5:\n five.append(1)\n elif rd == 6:\n six.append(1)\n elif rd == 7:\n seven.append(1)\n elif rd == 8:\n eight.append(1)\n elif rd == 9:\n nine.append(1)\n elif rd == 10:\n ten.append(1)\n elif rd == 11:\n eleven.append(1)\n elif rd == 12:\n twelve.append(1)\nprint two, three, four, five, six, seven, eight, nine, ten, eleven, twelve \n"
},
{
"alpha_fraction": 0.278969943523407,
"alphanum_fraction": 0.4828326106071472,
"avg_line_length": 15.068965911865234,
"blob_id": "60821fd393e1741971765fcf35928307f1c11979",
"content_id": "bfadc398cc2330a41218f6afe254b64ef06a71df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 466,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 29,
"path": "/INF1100/Exercises/Chapter_2/coor.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "a = 0\nb = 10\nn = 20\n\nh = float(b-a)/n\n\n#a)\ncoor = []\nfor i in range(n+1):\n xi = a+i*h\n coor.append(xi)\nprint'a)'\nprint len(coor)\nprint coor\n\n#b)\ncoor = [a+i*h for i in range(n+1)]\n\nprint 'b)'\nprint coor\n\n\"\"\"\nTerminal>python coor.py \na)\n21\n[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0]\nb)\n[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0]\n\"\"\"\n"
},
{
"alpha_fraction": 0.5573453903198242,
"alphanum_fraction": 0.594072163105011,
"avg_line_length": 32.739131927490234,
"blob_id": "63bbe264b8bc4fe75e6ac884c7874407ec96e157",
"content_id": "91288784212ac58c84df59e90448f27c1ea4f92e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1552,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 46,
"path": "/INF1100/Project/SIRV.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#V(t) = folk som er vaksinert\n#V'(t) = p*S #ODE\n\nimport SIR_class as sir, ODEsolver as ODE, matplotlib.pyplot as plt\n\nclass vaccination(sir.ProblemSIR):\n import ODEsolver as ODE\n def __init__(self, nu, beta, S0, I0, R0, T, V0, p):\n sir.ProblemSIR.__init__(self, nu, beta, S0, I0, R0, T)\n if isinstance(p, (float,int)):\n self.p = lambda t: p\n elif callable(p):\n self.p = p\n self.V0 = V0\n\n def __call__(self, u, t):\n S, I, R, V = u\n return [-self.beta(t)*S*I - self.p(t)*S, self.beta(t)*S*I - self.nu(t)*I,\\\n self.nu(t)*I, self.p(t)*S]\n\n def initial_value(self):\n return self.S0, self.I0, self.R0, self.V0\n \nif __name__ == '__main__': #lager denne fordi jeg skal importere i et annet program\n dt = 0.5\n problem = vaccination(nu=0.1, beta=0.0005, S0=1500, I0=1, R0=0, T=60, V0=0, p=0.1)\n solver = ODE.RungeKutta4(problem)\n solver.set_initial_condition(problem.initial_value())\n y, x = solver.solve(problem.time_points(dt))\n S = y[:,0]; I = y[:,1]; R = y[:,2]; V = y[:,3]\n\n plt.plot(x,S,x,I,x,R,x,V)\n plt.legend(['Motagelig for sykdom', 'Smitta', 'Friske \"meldt\"','Vaksinert'])\n plt.axis([0,60,0,2000])\n plt.xlabel('Dager')\n plt.ylabel('Personer')\n plt.title('SIR model med Vaksinasjon')\n plt.show()\n\n\"\"\"\nI oppgave E.41 var max smittede oppe i ca 900 personer men med\nvaksinasjon synker antall smittede til kun ca 50 personer,\ndet blir dermed ingen epidemi og sykdommen er kontrollert\n\nTerminal> python SIRV.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.48406675457954407,
"alphanum_fraction": 0.5402124524116516,
"avg_line_length": 18.969696044921875,
"blob_id": "40c4f47c38cc615f952566b2a1fae80ffe3da494",
"content_id": "e8d9aec53ff472267244c57068aecc1a2777966d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 659,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 33,
"path": "/INF1100/Assigments/Chapter_5/plot_Taylor_sin.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom math import factorial\n\n# formula ((-1)**j)*(x**(2*j+1)/factorial(2*j+1)\n# a)\n\ndef S(x, n):\n s = 0.0\n for j in range(n+1):\n s = s + (-1)**j*(x**(2*j+1)/factorial(2*j+1.0))\n \n return s\n\n# b)\n\nx = np.linspace(0, 4*np.pi, 150)\nn = [1,2,3,6,12]\n\nplt.plot(x, np.sin(x), 'black', linewidth = 2)\nfor i in (n):\n b = S(x, i)\n plt.plot(x, b)\nplt.axis([0, 4*np.pi, -1.5, 2])\nplt.xlabel('X')\nplt.ylabel('Y')\nplt.legend(['sin(x)', 'S(x;1)', 'S(x;2)', 'S(x;3)', 'S(x;6)', 'S(x;12)'])\nplt.title('Taylorpolynomet av grad n til sin(x)')\nplt.show()\n\n\"\"\"\nTerminal:python plot_Taylor_sin.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.45891204476356506,
"alphanum_fraction": 0.5862268805503845,
"avg_line_length": 22.671232223510742,
"blob_id": "f9b6c99b1a5218b51a492279f8870ad8d1dd95e5",
"content_id": "ea4e76a97abacf424e7bc882347016d692929732",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1728,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 73,
"path": "/FYS2150/Magnetisme/dia.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom lineaertilpassing import*\n\n#konstanter\nmu0 = 4*np.pi*1e-7\nA = 1.02e-2\nm0 = 390.25e-3\ng = 9.81\n#raadata\nI = np.linspace(0,2.4,13)\ndm = np.array([0, -0.02, -0.03, -0.05, -0.08, -0.10, -0.13, -0.17, \\\n -0.21, -0.23, -0.26, -0.28, -0.31])*1e-3\nB1 = np.array([17.8, 100.0, 186.0, 278.0, 368.0, 443.0,\\\n 510.0, 577.0, 636.0, 690.0, 728.0, 766.0, 800.0])*1e-3\nB2 = np.array([0.3, 0.5, 1.0, 1.5, 1.8, 2.1, 2.3, 2.6, \\\n 2.3, 2.2, 2.2, 2.3, 2.2])*1e-3\n\nFz = dm*g\nB = np.square(B1)-np.square(B2)\nchi = - (Fz*2*mu0)/(A*B)\n#print Fz, chi\n#usikkerhet\ndef umulti(z,a,b,da,db):\n dz = np.sqrt(z**2*((da/a)**2+(db/b)**2))\n return dz\ndef uaddi(da,db):\n dz = np.sqrt(da**2+db**2)\n return dz\ndef ueksp(z,n,a,da):\n dz = n*(da/a)*z\n return dz\n\ndA = 0.05e-3\ndBm = 0.01e-3\ndvekt = 0.03e-3\n\ndB12 = ueksp(B1**2,2,B1,dBm)\ndB22 = ueksp(B2**2,2,B2,dBm)\ndB = uaddi(dB12,dB22)\ndchi = umulti(chi,A,B,dA,dB)\n\nB1l = np.log(B1[1:-1])\nchil = np.log(chi[1:-1])\n#B2 = 0\nlin = linear(B1[1:-1],chil)\nc,m,dc,Dm,d = lin.Gen_linje()\nprint m, Dm\n#lineaer regresjon\np = np.polyfit(B1[1:-1],chil,1)\nfit = np.polyval(p,B1[1:-1])\np2 = np.polyfit(B1,Fz,1)\nfit2 = np.polyval(p2,B1)\n#plott\nplt.plot(B1[1:-1],chil,'*')\nplt.plot(B1[1:-1],fit)\nplt.xlabel('$B_1$')\nplt.ylabel('$\\chi$')\nplt.legend(['Raadata','Lineaer reggresjon'])\nplt.title('Graf av $B_1$ mot log($\\chi$)')\nplt.savefig('grafvismut.png')\nplt.figure()\nplt.plot(B1,Fz,'*')\nplt.plot(B1,fit2)\nplt.title('B-feltet mot kraften $F_z$')\nplt.legend(['Maalepunkter','Lineaer regresjon'])\nplt.xlabel('$B_1$ [T]')\nplt.ylabel('$F_z$ [N]')\n#plt.savefig('FzvsB1.png')\nplt.show()\n\nlin = linear(B1,Fz)\nc,m,dc,Dm,d = lin.Gen_linje()\nprint m, Dm\n"
},
{
"alpha_fraction": 0.5603813529014587,
"alphanum_fraction": 0.6509534120559692,
"avg_line_length": 17.693069458007812,
"blob_id": "95b6a8b4f8cfba7e61d66c9e896f58d768e5acee",
"content_id": "643bf61b96786b7617c07076c7d07189f90712d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1888,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 101,
"path": "/FYS2150/Braggdiffraksjon/rap.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\n#konstanter\nt = 60\nbakgrunn = 5.0\nfoton = [61,69,58,67,60,50,75,58,66,93,145,177,250,284,319,338,367,368,426,433,446]\nf = [i - bakgrunn for i in foton]\ntheta = np.linspace(12,22,21)\nd = 401e-12\n\n#usikkerhet i antall fotoner\ndef usikkerhet(f):\n res = np.zeros(len(f))\n for i,j in enumerate(f):\n res[i] = np.sqrt(j)\n return res\n#formler\ndef intensitet(f,t):\n f = np.array(f)\n I = np.zeros(len(f))\n for i,j in enumerate(f):\n I[i] = float(j)/t\n return I\n\ndef lam(theta,d):\n lam = np.zeros(len(theta))\n for i,j in enumerate(theta):\n lam[i] = d*np.sin(0.5*np.deg2rad(j))\n return lam\n\ndef energi(lam):\n hc = 1.241e-6\n E = np.zeros(len(lam))\n for i,j in enumerate(lam):\n E[i] = float(hc)/j\n return E\n\n#verdier\nbolge = lam(theta,d)\nen = energi(bolge)\nintens = intensitet(f,t)\n\nprint f\nprint usikkerhet(f)\nprint en\nprint intens\nprint bolge*1e12\n\n#plott\nplt.plot(en*1e-3,intens)\nplt.title('Rontgenspektrum')\nplt.xlabel('Energi kV')\nplt.ylabel('Intensitet')\n\nplt.figure()\nplt.plot(bolge,intens)\nplt.show()\n\n\n#alpha\nad = 629e-12\nat = 10\nalpha = [22,27,24,32,46,43,38,57,39]\na = [i - bakgrunn for i in alpha]\natheta = np.linspace(12,16,9)\nabolge = lam(atheta,ad)\naen = energi(abolge)\naint = intensitet(a,t)\n\"\"\"\n#plott\nplt.plot(abolge*1e12,aint)\nplt.title('Alpha')\nplt.xlabel('$\\lambda$ pm')\nplt.ylabel('Intensitet')\nplt.savefig('alpha.png')\n\"\"\"\n#beta\nbd = 629e-12\nbt = 10\nbeta = [74,274,267,207,85,99,510,1278,1044,764,80,55,70]\nb = [i - bakgrunn for i in beta]\nbtheta = np.linspace(24,30,13)\nbbolge = lam(btheta,bd)\nben = energi(bbolge)\nbint = intensitet(b,t)\n\"\"\"\n#plott\nplt.figure()\nplt.plot(bbolge*1e12,bint)\nplt.title('Beta')\nplt.xlabel('$\\lambda$ pm')\nplt.ylabel('Intensitet')\nplt.savefig('beta.png')\n\nplt.show()\n\nprint 'Alpha'\nprint abolge*1e12\nprint 'Beta'\nprint bbolge*1e12\n\"\"\"\n"
},
{
"alpha_fraction": 0.6268343925476074,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 20.454545974731445,
"blob_id": "217e3834107b642b0d021a906425e559c4df0e2b",
"content_id": "c7889dcbc8c2b89a00205b75a575f76342603619",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 477,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 22,
"path": "/INF1100/Exercises/Chapter_5/energy_test.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import linspace\nfrom matplotlib.pyplot import show, plot\nimport matplotlib.pyplot as plt\nimport sys\n\ng = 9.81\nv0 = float(sys.argv[1]) #fetching the first number\nm = float(sys.argv[2]) #fetching the second number\nt = linspace(0, 2*v0/g, 50)\n\ny = v0*t - 0.5*g*t**2\nPe = m*g*y\nv = v0 - g*t\nke = 0.5*m*v**2\n\nplot(t, Pe)\nplot(t, ke)\nplot(t, Pe+ke)\nplt.xlabel('energy')\nplt.ylabel('time')\nplt.legend(['Potensial energy', 'Kinetic energy', 'Total of energy'])\nshow()\n \n"
},
{
"alpha_fraction": 0.6577380895614624,
"alphanum_fraction": 0.6904761791229248,
"avg_line_length": 18.764705657958984,
"blob_id": "81a985a1da3672908c8ce21d2e4b344c3e1c79a2",
"content_id": "bd68eb89abeb51586e346bd4bd9295ca28da70d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 336,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 17,
"path": "/MEK1100/Oblig_1/oblig2b.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n#variable\nn = linspace(-5,5,1500)\nX,Y = meshgrid(n,n,indexing='ij')\n#variable til stagnasjonspunktene\ni = linspace(-5,5,40)\nx = zeros(len(i))\n#Funksjon\nC = Y - log(abs(X))\n#plott\ncontour(X,Y,C)\nplot(x,i,'o') #stagnasjonspunktene\ntitle('Stroemlinje plott')\nxlabel('x-aksen')\nylabel('y-aksen')\nsavefig('2b.png')\nshow()\n"
},
{
"alpha_fraction": 0.5103734731674194,
"alphanum_fraction": 0.5746887922286987,
"avg_line_length": 32.24137878417969,
"blob_id": "71049fd4234188b032e34cc2744be929c1a25ecd",
"content_id": "1675b139bc7b392c981a7ecf104b785cc756f57d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 964,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 29,
"path": "/INF1100/Assigments/Chapter_3/area_triangle.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def triangle_area(vertices): # formula for the area of a triangle\n v = vertices\n pulk = (v[1][0]*v[2][1]) - (v[2][0]*v[1][1]) - (v[0][0]*v[2][1]) + \\\n (v[2][0]*v[0][1]) + (v[0][0]*v[1][1]) - (v[1][0]*v[0][1])\n return (1.0/2.0)*abs(pulk)\n\nv1 = (0,0); v2 = (1,0); v3 = (0,2)\nvertices = [v1, v2, v3]\n \ndef test_triangle_area(): # test function for the computed formula\n \"\"\"\n Verify the area of a triangle with vertex coordinates\n (0,0), (1,0), and (0,2).\n \"\"\"\n v1 = (0,0); v2 = (1,0); v3 = (0,2)\n vertices = [v1, v2, v3]\n expected = 1\n computed = triangle_area(vertices)\n tol = 1E-14\n success = (expected - computed) < tol\n msg = 'computed area=%g != %g (expected)' % (computed, expected)\n assert success, msg\n\ntest_triangle_area() # call for the test function\n\n\"\"\"\nDid not get msg when I run the program. Therfor the function past the test.\nHave also tried changing a number in the test function and checked.\n\"\"\"\n"
},
{
"alpha_fraction": 0.5377574563026428,
"alphanum_fraction": 0.5903890132904053,
"avg_line_length": 25.755102157592773,
"blob_id": "4bd1d5c2eb419149cfd48d14a75eefbc287daf3f",
"content_id": "188025672c9c1e9d4998cc8ace43903877e6038a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1311,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 49,
"path": "/FYS-MEK1110/Oblig_2/Oblig2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom seaborn import*\n#Given values\nm = 0.1 #kg\ng = 9.81 #m/s^2, Earth gravitational acceleration.\nL0 = 1.0 #m\nk = 200.0 #N/m\ntheta = np.radians(30.0) #Degrees redefined to radians\ndt = 0.1 #timestep\ntime = 10.0 #time\n#Vector values that is calculated in the for looop\nn = int(round(time/dt))\nt = np.zeros((n,1),float)\nr = np.zeros((n,2),float)\nv = np.zeros((n,2),float)\na = np.zeros((n,2),float)\n#initial values\nv[0] = np.array([0,0]) \nr[0] = np.array([L0*np.sin(theta),-L0*np.cos(theta)])\n\nfor i in range(n-1):\n lr = np.sqrt(r[i,0]**2 + r[i,1]**2)\n a[i,:] = np.array([(-k*(1-L0/lr)*r[i,0])/m,-g-(k*(1-L0/lr)*r[i,1])/m])\n v[i+1,:] = v[i,:] + a[i,:]*dt\n r[i+1,:] = r[i,:] + v[i+1,:]*dt\n t[i+1] = t[i] + dt\n \n#plot of the graphs of the pendulum\nplt.plot(r[:,0],r[:,1])\nplt.title('Pendulum')\nplt.axis('equal')\nplt.show()\nplt.subplot(3,1,1)\nplt.plot(t,r[:])\nplt.axis([0,5,-1.5,1.5])\nplt.ylabel('posisjon')\nplt.legend(['x-retning','y-retning'])\nplt.title('Pendulum')\nplt.subplot(3,1,2)\nplt.plot(t,v[:])\nplt.ylabel('Hastighet')\nplt.legend(['x-retning','y-retning'])\nplt.subplot(3,1,3)\nplt.plot(t,a[:])\nplt.ylabel('Akselerasjon')\nplt.legend(['x-retning','y-retning'])\nplt.axis([0,5,-10,20])\nplt.xlabel('Tid [t]')\nplt.show()\n"
},
{
"alpha_fraction": 0.44800448417663574,
"alphanum_fraction": 0.5165823698043823,
"avg_line_length": 42.39024353027344,
"blob_id": "ac8284c4872e9f4890ef14a82b74007e300ad572",
"content_id": "edd106216bc4ec13c16b632a861ba83a113dbfeb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1779,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 41,
"path": "/INF1100/Assigments/Chapter_2/f2c_approx_table.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#formula C = (F - 32)/(9.0/5.0). Converting farenheit to celcius\n#formula F = (5.0/9.0)*C + 32. Converting celcius to farenheit\n#formula c^ = (F - 30)/2. approximate converting from farenheit to celcius\n\nCdegrees = [] #Celcius degrees\nFdegrees = [] #Farenheit degrees\nACdegrees = [] #Approximate convertion from farenheit to celcius\n\nF = 0.0 #Start Farenheit degree\ndF = 10.0\n\nwhile F <= 100:\n C = (F - 32)/(9.0/5.0) #Formula\n Q = (F - 30)/2\n Cdegrees.append(C), Fdegrees.append(F), ACdegrees.append(Q)\n F = F + dF\n\nprint '________________Farenheit to celcius convertion___________________'\nprint''\nfor C,F,A in zip(Cdegrees, Fdegrees, ACdegrees):\n print 'Farenheit = %-10g Accurate = %-10.2f Approximate = %-10g' %(F, C, A)\n\nprint'____________________________________________________________________'\n\n\"\"\"\nTerminal>python f2c_approx_table.py \n________________Farenheit to celcius convertion___________________\n\nFarenheit = 0 Accurate = -17.78 Approximate = -15 \nFarenheit = 10 Accurate = -12.22 Approximate = -10 \nFarenheit = 20 Accurate = -6.67 Approximate = -5 \nFarenheit = 30 Accurate = -1.11 Approximate = 0 \nFarenheit = 40 Accurate = 4.44 Approximate = 5 \nFarenheit = 50 Accurate = 10.00 Approximate = 10 \nFarenheit = 60 Accurate = 15.56 Approximate = 15 \nFarenheit = 70 Accurate = 21.11 Approximate = 20 \nFarenheit = 80 Accurate = 26.67 Approximate = 25 \nFarenheit = 90 Accurate = 32.22 Approximate = 30 \nFarenheit = 100 Accurate = 37.78 Approximate = 35 \n____________________________________________________________________\n\"\"\"\n"
},
{
"alpha_fraction": 0.6059907674789429,
"alphanum_fraction": 0.6612903475761414,
"avg_line_length": 20.700000762939453,
"blob_id": "e5497f24bd2737b76ac0618ec76daf2c52c2fedd",
"content_id": "b45433daacbe0e3fb1c448947a3191333e3370fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 434,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 20,
"path": "/FYS2130/Oblig_1/opg9.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\nn = 1000\nx = np.arange(n)\nF = np.sin(2*np.pi*x/float(n))\nv = np.cos(2*np.pi*x/float(n))*2*np.pi\n\n\nplt.subplot(2,1,1) \nplt.plot(x/1000.0,F)\nplt.title('Posisjon vs tid')\nplt.xlabel('Tid [s]')\nplt.ylabel('Posisjon [m]')\nplt.subplot(2,1,2)\nplt.plot(F,v/1000.0)\nplt.title('Hastighet vs posisjon')\nplt.xlabel('Posisjon [m]')\nplt.ylabel('Hastighet [m/s]')\n#plt.savefig('opg9.png')\nplt.show()\n"
},
{
"alpha_fraction": 0.6014285683631897,
"alphanum_fraction": 0.6471428275108337,
"avg_line_length": 23.13793182373047,
"blob_id": "6374de0bc2979f57953d3f0f3646019dd2ef49ee",
"content_id": "72e94f083dc65e96f71bcd2fb14dcea277bca255",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 700,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 29,
"path": "/INF1100/Exercises/Chapter_4/energy.physics.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "# y = v0*t - 0.5*g*t**2\n#g = 9.81 is the acceleration due to gravity\n#Pe = m*g*y (Pe = potensial energy. y = hight)\n#Ke = 0.5*m*v**2 (Ke = kinetic energy. v(t) = y'(t))\n# t element in [0, 2*v0/g]\n\nfrom numpy import linspace\nfrom matplotlib.pyplot import show, plot\nimport matplotlib.pyplot as plt\nimport sys\n\ng = 9.81\nv0 = float(sys.argv[1]) #fetching the first number\nm = float(sys.argv[2]) #fetching the second number\nt = linspace(0, 2*v0/g, 50)\n\ny = v0*t - 0.5*g*t**2\nPe = m*g*y\nv = v0 - g*t\nke = 0.5*m*v**2\n\nplot(t, Pe)\nplot(t, ke)\nplot(t, Pe+ke)\nplt.xlabel('time')\nplt.ylabel('energy')\nplt.legend(['Potensial energy', 'Kinetic energy', 'Total of energy'])\nplt.title('Bevegelses energi')\nshow()\n"
},
{
"alpha_fraction": 0.527531087398529,
"alphanum_fraction": 0.5754884481430054,
"avg_line_length": 15.057143211364746,
"blob_id": "8fa628723d1655bae71cc6cce20366c3f128c823",
"content_id": "d5a272e2b335b1176c3385094e33c4391bd68196",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 563,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 35,
"path": "/INF1100/Assigments/Chapter_7/radioactive_decay.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nu' = -a*u, u(t) is the fraction of particle that remains\nu(0) = 1\n\"\"\"\n\nimport ODEsolver as ODE, numpy as np, matplotlib.pyplot as plt\nfrom math import*\n#a)\nclass Decay:\n def __init__(self, a):\n self.a = a\n\n def __call__(self, u):\n self.u = u\n return -self.a*self.u\n\n#b)\ny = 500\nu = []\nfor i in range(1,y):\n q = 1./i\n u.append(u)\n \nu = np.array(u) \na = np.linspace(log(2)/5600.0*u, y, 50)\n\n#c)\nf = Decay(a)\n\nrk4 = ODE.RungeKutta4(Decay)\nrk4.set_initial_condition(y)\nrk41,rk42 = rk4.solve(a)\n\nplt.plot(rk41,rk42)\nplt.show()\n\n"
},
{
"alpha_fraction": 0.5085574388504028,
"alphanum_fraction": 0.5415647625923157,
"avg_line_length": 18.95121955871582,
"blob_id": "90d8c0a3adef4c8b219ea72d4d38c09e33775e5f",
"content_id": "c1c435f7beb08085bb7ffcb11eb90cdccc93344a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 818,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 41,
"path": "/FYS2150/Braggdiffraksjon/prelab.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from scipy.constants import*\nimport numpy as np\n\nme = 511*10**3\ndef realfaktor(U):\n if 1>U:\n U = np.array(U)\n f = np.zeros(len(U))\n for i in len(U):\n f[i] = 1./np.sqrt(1+(U[i])/(2*me))\n else:\n f = 1./np.sqrt(1+float(U)/(2*me))\n return f\nU = 8*10*3\n#print realfaktor(U)\n#print 1./np.sqrt(1+float(U)/(2*me))\n\nwith open('diameter.txt', 'r') as infile:\n U = []\n Dy = []\n for line in infile:\n lst= line.split()\n U.append(float(lst[0]))\n Dy.append(float(lst[1]))\ninfile.close()\n\nU = np.array(U)\nDy = np.array(Dy)\nn = len(U)\nN = 1.0/n\nlam = (h*c)/(e*U)\nphi = [lam[i]/Dy[i]for i in range(n)]\nres = [phi[i]-np.mean(phi)]\ns = np.sqrt(np.mean(np.square(res)))\n\n\nphi_mean = np.mean(phi)\nDelta_phi = np.sqrt(1.0/(n-1))*s\n\nprint phi_mean\nprint Delta_phi\n"
},
{
"alpha_fraction": 0.5892448425292969,
"alphanum_fraction": 0.6281464695930481,
"avg_line_length": 25.484848022460938,
"blob_id": "8e644d87deaa5eed8363446e43b821f37a1a1184",
"content_id": "ba7a85f98667fe379520a106d7683c0d9b166b27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 874,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 33,
"path": "/INF1100/Assigments/Chapter_4/ball_cml.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import sys\n\nif len(sys.argv) < 2: #gives a msg if values is not inserted in the command line!\n print 'Provide a start velocity and a time value! Only insert numbers!'\n exit()\n\ndef y(v0, t): #function\n g = 9.81 #gravity on earth\n Fy = v0*t - 0.5*g*t**2 #formula\n return Fy\n\ndef test_y(): #test if the function is working\n computed = y(2, 1)\n expected = -2.905\n tol = 1e-10\n success = (computed - expected) < tol\n msg = 'something is wrong'\n assert success, msg\ntest_y() #calling the test\n\nv0 = sys.argv[1]; v0 = float(v0) #fetching the first number\nt = sys.argv[2]; t = float(t) #fetching the second number\n\nh = y(v0, t) #calling the function\nprint h \n\n\"\"\"\nTerminal> python ball_cml.py\nProvide a start velocity and a time value! And only insert numbers!\n\nTerminal> python ball_cml.py 10 1.3\n4.71055\n\"\"\"\n"
},
{
"alpha_fraction": 0.5942118167877197,
"alphanum_fraction": 0.6508620977401733,
"avg_line_length": 23.1641788482666,
"blob_id": "5f03bd26924a1f97d1032938fe8ee4112500dac4",
"content_id": "2d65f394859b8fe2635ab67c233ab83cb9f0cd71",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1624,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 67,
"path": "/INF1100/Assigments/Chapter_7/yx_ODE_FE_vs_RK4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#Comparing Runge-Kutta4 to Euler's method\n\"\"\"\ndy/dx=1/(2*(y-1)) the same as y' = 1/(2*(y-1)), y(0) = 1 + sqrt(eps) ,\nx element [0,4], eps = 1E-3 n = 4\nexact = y(x) = 1 + sqrt(x + eps)\n\"\"\"\nimport numpy as np, matplotlib.pyplot as plt, ODEsolver as ODE\n\nepsi = 1E-3; y0 = 1 + np.sqrt(epsi)\nn = np.linspace(0,4,100); q = np.linspace(0,4,260)\nexact = lambda x: 1 + np.sqrt(x + epsi)\ndef f(y,x):\n return 1/(2*(y-1))\n\nrk4 = ODE.RungeKutta4(f)\nrk4.set_initial_condition(y0)\nu,t = rk4.solve(n)\n\nforEuler = ODE.ForwardEuler(f)\nforEuler.set_initial_condition(y0)\nu1,t1 = forEuler.solve(n)\n\nplt.subplot(2,1,1)\nplt.plot(n,exact(n),'--')\nplt.plot(t,u)\nplt.legend(['exact','Runge Kutta'])\nplt.title('Runge Kutta vs exact')\nplt.ylabel('y')\nplt.axis([0,4,0,5])\nplt.subplot(2,1,2)\nplt.plot(n,exact(n),'--')\nplt.plot(t1,u1,'r')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.legend(['exact','Euler'])\nplt.title('Exact vs Euler')\nplt.axis([0,4,0,5])\nplt.show()\n\nu1,t1 = forEuler.solve(q)\nu,t = rk4.solve(q)\n\nplt.subplot(2,1,1)\nplt.plot(n,exact(n),'--')\nplt.plot(t,u)\nplt.legend(['exact','Runge Kutta'])\nplt.title('Runge Kutta vs exact')\nplt.ylabel('y')\nplt.axis([0,4,0,5])\nplt.subplot(2,1,2)\nplt.plot(n,exact(n),'--')\nplt.plot(t1,u1,'r')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.legend(['exact','Euler'])\nplt.title('Exact vs Euler')\nplt.axis([0,4,0,5])\nplt.show()\n\n\"\"\"\nSer at Runge Kutta 4 er en mye bedre tilnaerming enn Euler,\nRunge Kutta har 100 punkter og er ganske naerme mens Euler blir naermere\nforst etter 260 punkter. Lagt ved to plott som viser dette, 1 plott er 100 punkter\n2 plott er 260 punkter\n\nTerminal> python yx_ODE_FE_vs_RK4.py \n\"\"\"\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6090047359466553,
"alphanum_fraction": 0.6729857921600342,
"avg_line_length": 19.095237731933594,
"blob_id": "20591f3cfa4d4bfc52ed4a7f339cc2e4d9287f02",
"content_id": "deed082bc560c29583d323c42512e75260a47637",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 422,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 21,
"path": "/MEK1100/Oblig_2/o2b.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from oblig2 import*\n#Defines a variable to the contour plot.\nZ = sqrt(u**2+v**2)\nv1 = linspace(500,4500,7)\nv2 = linspace(0,500,7)\n#Contour plot\nsubplot(2,1,1) #Air\nc=contourf(x,y,Z,v1)\ncolorbar(c)\nplot(xit,yit,'*',color='black')\ntitle('Luft')\nylabel('Y-akse')\nsubplot(2,1,2) #Water\ncs=contourf(x,y,Z,v2)\ncolorbar(cs)\nplot(xit,yit,'*',color='black')\ntitle('Vann')\nxlabel('X-akse')\nylabel('Y-akse')\nsavefig('2b.png')\nshow()\n"
},
{
"alpha_fraction": 0.5056818127632141,
"alphanum_fraction": 0.5657467246055603,
"avg_line_length": 27.627906799316406,
"blob_id": "e87c50e02c8bee19c0c8a557587706e3666e18ce",
"content_id": "b635f3dce5c3253d6532e79bea0a2d9820ec13dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1232,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 43,
"path": "/FYS-MEK1110/Oblig_1/Oblig12.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#i)\nimport numpy as np, matplotlib.pyplot as plt\n\nF = 400; fc = 488; fv = 25.8 #kreftene som vikrere [N], fv = psykisk kraft[sN/M]\nm = 80.0 #[kg]\np = 1.293 #luft tetthet\nA = 0.45 #Arealet til loeperen\nCd = 1.2 #Drag koeffisient\nw = 0 #Luft hastighet\ntime = 10 #tid\ntc = 0.67 #tids koeffisient som er brukt til aa regne ut funksjoner\ndt = 1./1000; #tids steg\nn = int(time/dt) \na = np.zeros(n); t = np.zeros(n) \nx = np.zeros(n); v = np.zeros(n)\nv[0] = 0; x[0] = 0; t[0] = 0 #initial verdier\nq = 0 #teller, naar den naar hundre stopper for loopen\n\nD = lambda t,v: A*(1 - 0.25*np.exp(-(t/tc)**2))*0.5*p*Cd*(v-w)**2 #luftmotstand\nFv = lambda v: -v*fv #Psykisk\nFc = lambda t: fc*np.exp(-(t/tc)**2) #fra kroket til staande funksjon\n \nfor i in range(int(n-1)):\n a[i] = (F + Fc(t[i]) + Fv(v[i]) - D(t[i],v[i]))/m\n v[i+1] = v[i] + a[i]*dt\n x[i+1] = x[i] + v[i+1]*dt\n t[i+1] = t[i] + dt\n if x[i+1] > 100:\n q = i + 1\n break\n\nplt.subplot(3,1,1)\nplt.title('Bevegelses diagram')\nplt.plot(t[0:q],x[0:q])\nplt.ylabel('x [m]')\nplt.subplot(3,1,2)\nplt.plot(t[0:q],v[0:q])\nplt.ylabel('v [m/s]')\nplt.subplot(3,1,3)\nplt.plot(t[0:q],a[0:q])\nplt.ylabel('a [m/s^2]')\nplt.xlabel('t [sekund]')\nplt.show()\n\n"
},
{
"alpha_fraction": 0.5453172326087952,
"alphanum_fraction": 0.634441077709198,
"avg_line_length": 23.518518447875977,
"blob_id": "ad7f6e0fc62d6f441506a41233289f3ed147c354",
"content_id": "d53ef534f1f6b60dfcd28701d8dbe506ac520af5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 662,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 27,
"path": "/MEK1100/Oblig_2/oblig2e.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from oblig2 import*\n\n#Curl\nk = gradient(v,axis=1) - gradient(u,axis=0)\n#Contour plot\nverdi = linspace(-1000,1000,9)\ncontourf(x,y,k,verdi)\ncolorbar()\n#seperate flat\nplot(xit,yit,'*',color='black')\n#rectangles\ndef rektangel(xi,yi,xj,yj):\n x1 = x[yi][xi]; x2 = x[yj][xj]\n y1 = y[yi][xi]; y2 = y[yj][xj]\n plot([x1,x2],[y1,y1], color='red')\n plot([x2,x1],[y2,y2], color='blue')\n plot([x1,x1],[y1,y2], color='black')\n plot([x2,x2],[y2,y1], color='green')\nrektangel(34,159,69,169)\nrektangel(34,84,69,99)\nrektangel(34,49,69,59)\n#Giving names to the axes and sets the title\nxlabel('X-akse')\nylabel('Y-akse')\ntitle('Virvling') \nsavefig('2e.png')\nshow()\n"
},
{
"alpha_fraction": 0.4060324728488922,
"alphanum_fraction": 0.5545243620872498,
"avg_line_length": 17.7391300201416,
"blob_id": "c024439c58e6c8784cc34ba51cb3e960c4136fc5",
"content_id": "e09b99e71f256195ef9812dc0a640605bfdfcbd3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 431,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 23,
"path": "/INF1100/Assigments/Chapter_8/compute_prob_vec.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\nN = [10, 10**2, 10**3, 10**6]\n\nfor i in N:\n r = np.random.random(i)\n r1 = r[r>0.5]\n r2 = r1[r1<0.6]\n prob = float(len(r2))/i\n print 'M = %d, N = %i' %(len(r2), i)\n print 'Probability =',prob\n \n\"\"\"\nTerminal> python compute_prob_vec.py \nM = 0, N = 10\nProbability = 0.0\nM = 11, N = 100\nProbability = 0.11\nM = 109, N = 1000\nProbability = 0.109\nM = 100110, N = 1000000\nProbability = 0.10011\n\"\"\"\n"
},
{
"alpha_fraction": 0.48461538553237915,
"alphanum_fraction": 0.5384615659713745,
"avg_line_length": 31.399999618530273,
"blob_id": "e2f3f38ebcdba3f0da63ce0ad018bf66a45e7e56",
"content_id": "cf7d870a82a821f8c812679e92bfbe3ee8e51f3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 650,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 20,
"path": "/MAT1110/Oblig1/oblig1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom numpy import arcsinh,cos,sin\n\nrho = 0.5\nx = np.linspace(-2,2,100)\nS = lambda x: x**2\nX = lambda x: x - 2*x*rho/np.sqrt(4*x**2+1)\nY = lambda x: x**2 + rho/np.sqrt(4*x**2+1)\nP = np.array([X(x),Y(x)])\nth = lambda x: 1.0/4*(2*x*np.sqrt(1+4*x**2) + arcsinh(2*x))/rho\nbet = lambda x: 1.0/np.sqrt(1+4*x**2)\nr = np.array([(rho*(cos(th(x))*2*x*bet(x) - sin(th(x))*bet(x)) + X(x)),\\\n (rho*(-2*x*sin(th(x))*bet(x) - cos(th(x))*bet(x)) + Y(x))])\nplt.plot(x,S(x))\nplt.plot(P[0,:],P[1,:])\nplt.plot(r[0,:],r[1,:])\nplt.legend(['s(x)','(X(x),Y(x))','r(x)'])\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()\n\n\n"
},
{
"alpha_fraction": 0.7857142686843872,
"alphanum_fraction": 0.8095238208770752,
"avg_line_length": 83,
"blob_id": "1869978dfe6ea26ba4fdf643a52623084feffd7f",
"content_id": "9330f69cc7b9c4b27a9f25fe28ec231232ebcedc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 170,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 2,
"path": "/MAT1110/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course I learnt multivariate analysis with linear algebra.\nSyllabus: Lindstørm, T & Hveberg K(2015), Flervariabel analyse med lineær algebra(First edition) Gyldendal\n"
},
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.5896464586257935,
"avg_line_length": 21.628570556640625,
"blob_id": "f5fcaddf527e21f7f32fb9b226c7e70d48f413ee",
"content_id": "6415796c6df4dcd73458dd08f364aacada11b07a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 792,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 35,
"path": "/INF1100/Exercises/Chapter_5/animate_Taylor_series.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import *\nimport matplotlib.pyplot as plt\nfrom math import factorial\n\ndef animate_series(fk, M, N, xmin, xmax, ymin, ymax, n, exact):\n\n x = linspace(xmin, xmax, n)\n s = zeros_like(x)\n s_ref = exact(x)\n\n plt.ion() #viktig a ha med nar plotet skal vises i en film\n plt.plot(x, s_ref)\n\n lines = plt.plot(x,s)\n\n plt.axis([xmin, xmax, ymin, ymax])\n\n for k in range(M,N+1):\n s = s + fk(x, k)\n lines[0].set_ydata(s)\n plt.draw()\n plt.pause(0.25)\n \n\ndef taylor_sin(x, k):\n return (-1.0)**k*x**(2*k+1)/factorial(2*k+1)\n\ndef exp_inv(x):\n return exp(-x)\n\ndef taylor_exp_inv(x,k):\n return (-x)**k/factorial(k)\n\n#animate_series(taylor_sin,0,20,0,13*pi,-2,2,100,sin)\nanimate_series(taylor_exp_inv,0,30,0,15,-0.5,1.4,100,exp_inv)\n"
},
{
"alpha_fraction": 0.489087849855423,
"alphanum_fraction": 0.5321768522262573,
"avg_line_length": 37.021278381347656,
"blob_id": "6a470b549294f0377c863e3df0d7b7ae8a3f1fb2",
"content_id": "ee79c85987d438de02fb8892399089d95a2d5ae5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1787,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 47,
"path": "/INF1100/Assigments/Chapter_8/one6_ndice.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import random, sys #imports random and sys\n\ntry:\n t = sys.argv[1]; t = float(t)\nexcept:\n print 'no command line arguments! Innsert number of eksperiments and n value', sys.exit()\n\ntry:\n q = sys.argv[2]; q = float(q)\nexcept:\n print 'Innsert number of dices in command line!', sys.exit()\n \n\"\"\"\nexact = 11/36.0 for n = 2\n\"\"\"\neksperiments, n = sys.argv[1], sys.argv[2] #fetches arguments from command line\neksperiments = int(eksperiments) #making string arguments to integers\nn = int(n)\n\nm = 0\nfor i in range(1,eksperiments+1):\n for j in range(1,n+1):\n die = random.randint(1,6)\n if die == 6:\n m += 1\nprint '---------------------------------------------------------------'\nprint 'if we have %i dice and does the eksperiment %i times' %(j,i)\nprint 'the probability is',(float(m)/j)/i,'percent to get 6 eyes on a die'\nprint '---------------------------------------------------------------'\nprint 'FUN FACT'\nprint 'The exact probability to get 6 eyes on a die is',11./36\nprint 'percent for n = 2, the approximate probability is',(float(m)/j)/i,'for n = %i' %(n)\nprint 'relation between exact and approximate =', ((float(m)/j)/i)/(11./36)\nprint '---------------------------------------------------------------'\n\n\"\"\"\nTerminal> python one6_ndice.py 10000 2\n---------------------------------------------------------------\nif we have 2 dice and does the eksperiment 10000 times\nthe probability is 0.16335 percent to get 6 eyes on a die\n---------------------------------------------------------------\nFUN FACT\nThe exact probability to get 6 eyes on a die is 0.305555555556\npercent for n = 2, the approximate probability is 0.16335 for n = 2\nrelation between exact and approximate = 0.5346\n---------------------------------------------------------------\n\"\"\"\n"
},
{
"alpha_fraction": 0.47576791048049927,
"alphanum_fraction": 0.5576791763305664,
"avg_line_length": 19.34722137451172,
"blob_id": "4c1495aa234d32c5cabc93cffa4c2e5b89c93ebc",
"content_id": "3d4d67d6eabac0c003cf3f1a2cdef792db404489",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1465,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 72,
"path": "/FYS2130/Oblig_2/RK4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom RungeKutta4 import*\n\n#konstanter\nm = 1\nN = 10**3\n\n#arrays\nt = np.linspace(0,10,N)\n#underkritisk\nzu = np.zeros(N)\nvu = np.zeros(N)\nbu = 1.5\nku = 10.0\n#kritisk\nzk = np.zeros(N)\nvk = np.zeros(N)\nkk=5.0\nbk=np.sqrt(kk*m)*2\n#overkritisk\nzo = np.zeros(N)\nvo = np.zeros(N)\nko = 3.0\nbo = 10.0\n\n#initialverdier\nzu[0] = 1\nvu[0] = 0\nzk[0] = 1\nvk[0] = 0\nzo[0] = 1\nvo[0] = 0\ndt = t[1]-t[0]\n\n#akselerasjon\ndef diffEq(x,v,t,b,k):\n a = -float(b)/m*v - float(k)/m*x\n return a\n#Runge-Kutta\ndef rk4(x0,v0,t0,b,k):\n a1 = diffEq(x0,v0,t0,b,k)\n v1 = v0\n xHalv1 = x0 + v1 * dt/2.0\n vHalv1 = v0 + a1 * dt/2.0\n a2 = diffEq(xHalv1,vHalv1,t0+dt/2.0,b,k)\n v2 = vHalv1\n xHalv2 = x0 + v2 * dt/2.0\n vHalv2 = v0 + a2 * dt/2.0\n a3 = diffEq(xHalv2,vHalv2,t0+dt/2.0,b,k)\n v3 = vHalv2\n xEnd = x0 + v3 * dt\n vEnd = v0 + a3 * dt\n a4 = diffEq(xEnd,vEnd,t0 + dt,b,k)\n v4 = vEnd\n aMid = 1.0/6.0 * (a1 + 2*a2 + 2*a3 + a4)\n vMid = 1.0/6.0 * (v1 + 2*v2 + 2*v3 + v4)\n xEnd = x0 + vMid * dt\n vEnd = v0 + aMid * dt\n return xEnd, vEnd\n\n\nfor i in range(N-1):\n zu[i+1],vu[i+1] = rk4(zu[i],vu[i],t[i],bu,ku)\n zo[i+1],vo[i+1] = rk4(zo[i],vo[i],t[i],bo,ko)\n zk[i+1],vk[i+1] = rk4(zk[i],vk[i],t[i],bk,kk)\n\nplt.plot(t,zu,'--',t,zo,'--',t,zk,'r')\nplt.legend(['Underkritisk','Overkritisk','Kritisk'])\nplt.xlabel('Tid[s]')\nplt.ylabel('Utsvingning[m]')\nplt.savefig('kritiskesvingninger.png')\nplt.show()\n"
},
{
"alpha_fraction": 0.5106382966041565,
"alphanum_fraction": 0.5531914830207825,
"avg_line_length": 6.833333492279053,
"blob_id": "50065672c118dd8cf7936769bf2d0b3724cb00fe",
"content_id": "1018d626bdb0956f45aa573ba6f3495ed8c7297a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 47,
"license_type": "no_license",
"max_line_length": 9,
"num_lines": 6,
"path": "/INF1100/Exercises/Chapter_1/printy.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "y = 3\nprint y\ny = y + 4\nprint y\ny =y*y\nprint y\n"
},
{
"alpha_fraction": 0.7821229100227356,
"alphanum_fraction": 0.8044692873954773,
"avg_line_length": 88.5,
"blob_id": "8037d7a7d31fe98ca3a594fbc1d6de9462abe9cc",
"content_id": "ccf7b4370cbbc448a7b1dba6c3913cc6d5078b07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 180,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 2,
"path": "/FYS2130/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course is I learnt about oscillation and waves, and the difference between them.\nSyllabus: Vistnes A.I (2016) Svingninger og bølgers fysikk (first edition) printed by CreateSpace\n"
},
{
"alpha_fraction": 0.4072398245334625,
"alphanum_fraction": 0.5158371329307556,
"avg_line_length": 18.5,
"blob_id": "550e5561eb49fc012e1db42fcfccee9c43dae5f6",
"content_id": "c2d7dab22ab7fc229b47c628182af60ca9d61b66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 664,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 34,
"path": "/FYS1120/LAB/oppgave2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom pylab import*\n\nh = np.array([0, 20, 40, 60, 80, 100])/1e3 + 1e-10\nx = np.array([230, 34, 12.8, 6.2, 3.6, 2.6])/1e3\n\np = polyfit(x,h,2)\nb = polyval(p,x)\n\ndef B(h):\n mu_0 =4.*pi*10**(-7)\n Js = 1 ; t = 0.01; a = .0017\n B = (mu_0*Js/2.0)*((h+t)/sqrt((h+t)**2+a**2) - h/sqrt(h**2 + a**2))\n return B\n\nB_h = zeros(len(h))\nfor i in xrange(len(h)):\n B_h[i] = B(i)\n\nsubplot(2,1,1)\ntitle(u'Målinger', size=20)\nplot(h,x,'rx')\nplot(h,x,'b-')\nylabel('$B_h$ [T]', size=15)\nsubplot(2,1,2)\nplot(h, B_h,'rx')\nplot(h, B_h,'b-')\n\ntitle('Analytisk', size=20)\nxlabel('$h$ [m]', size=15)\nylabel('$B_h$ [T]', size=15)\nsavefig('2.png')\nshow()\n"
},
{
"alpha_fraction": 0.5290482044219971,
"alphanum_fraction": 0.5414091348648071,
"avg_line_length": 18.731706619262695,
"blob_id": "6f5c5c7634d2bf36a17ff06fe4d7a481d949a275",
"content_id": "455b8b7003409faad1d39f394ae1f99200298c00",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 809,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 41,
"path": "/INF1100/Exercises/Chapter_4/file_read_write.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nwith open('temperature.dat', 'r') as infile:\n for i in range(3):\n infile.readline()\n for line in infile:\n lst = infile.readline().split()\n print float(lst[2])\n\"\"\"\n\ndef f(f):\n return (f - 32)*(5./9)\n\nf_deg = []\n\nwith open('temperature.dat', 'r') as infile:\n for i in range(3):\n infile.readline()\n for line in infile:\n lst = line.split()\n f_deg.append(float(lst[2]))\n\nprint f_deg\n\nc_deg = []\n\nfor i in range(len(f_deg)):\n c_deg.append(f(f_deg[i])\n \nprint c_deg\n\nwith open('f_c.dat', 'w') as outfile:\n outfile.write('hei\\n') #\\n betyr linje skift\n outfile.write('hei\\n')\n for i in range(len(f_deg)):\n outfile.write(%.2f, %.2f) %(f_deg[i], c_deg[i])\n \n\"\"\"\ninfile = open ('temperature.dat', 'r')\n...\ninfile.close()\n\"\"\"\n"
},
{
"alpha_fraction": 0.40092167258262634,
"alphanum_fraction": 0.4746543765068054,
"avg_line_length": 30,
"blob_id": "8c7ce5362b8eafe3fe00a6445b0bd2bd3262d1c6",
"content_id": "e97347862ab2ea593aa0dd4ee782b7b2c526a95a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 217,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 7,
"path": "/INF1100/Exercises/Chapter_5/test_pipe_flow.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def v(r, n):\n R = 1; Bet = 0.02; gam = 0.02;\n return ((Bet/2*gam)**(1./n))*(float(n)/(n + 1))*(R**((1+1.0)/n) - r**((1+1.0)/n))\n\nr = float(raw_input('insert r '))\nn = float(raw_input('insert n '))\nprint v(r, n)\n"
},
{
"alpha_fraction": 0.45065996050834656,
"alphanum_fraction": 0.5197988748550415,
"avg_line_length": 22.057971954345703,
"blob_id": "0121d5d5db2e0512504cd20a926fa4cfad191ed6",
"content_id": "249f904ba17dbc409c312dc74c053fd7c557276e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1591,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 69,
"path": "/INF1100/Assigments/Chapter_7/Cubic_Poly4.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "# Y = c0 + c1x\nclass Line(object):\n def __init__(self, c0, c1):\n self.c0, self.c1 = c0, c1\n\n def __call__(self,x):\n print 'Line', self.c0 + self.c1*x\n return self.c0 + self.c1*x\n\n def table(self, L, R,n):\n \"\"\"return a table with n points for L <= x <= R.\"\"\"\n s = ''\n import numpy as np\n for x in np.linspace(L, R, n):\n y = self(x)\n s += '%12g %12g\\n' % (x, y)\n return s\n \n#Parabola Y = c0 + c1x + c2x^2\nclass Parabola(Line):\n def __init__(self, c0, c1, c2):\n Line.__init__(self, c0, c1)\n self.c2 = c2\n\n def __call__(self,x):\n print 'Parabola', Line.__call__(self, x) + self.c2*x**2\n return Line.__call__(self, x) + self.c2*x**2\n\n#Cubic Y = c3x^3 + c2x^2 + c1x + c0\nclass Cubic(Parabola):\n def __init__(self, c0, c1, c2, c3):\n Parabola.__init__(self, c0, c1, c2)\n self.c3 = c3\n\n def __call__(self,x):\n print 'Cubic', Parabola.__call__(self,x) + self.c3*x**3\n return Parabola.__call__(self,x) + self.c3*x**3\n\n#Poly4 Y = c4x^4 + c3x^3 + c2x^2 + c1x + c0\nclass Poly4(Cubic):\n def __init__(self, c0, c1, c2, c3, c4):\n Cubic.__init__(self, c0, c1, c2, c3)\n self.c4 = c4\n\n def __call__(self,x):\n print 'poly', Cubic.__call__(self,x) + self.c4*x**4\n return Cubic.__call__(self,x) + self.c4*x**4\n\npol = Poly4(1, 1, 2, 2, 10)\npol(4)\n\n\"\"\"\nTerminal> python Cubic_Poly4.py \npoly Cubic Parabola Line 5\n37\nLine 5\n165\nParabola Line 5\n37\nLine 5\n2725\nCubic Parabola Line 5\n37\nLine 5\n165\nParabola Line 5\n37\nLine 5\n\"\"\"\n"
},
{
"alpha_fraction": 0.6014625430107117,
"alphanum_fraction": 0.6727604866027832,
"avg_line_length": 23.863636016845703,
"blob_id": "fb1d0a51127cb01d442cda0abc98a72257766d90",
"content_id": "ae1bde5ef0814d8298d9c4d9338c5d5b22cb395a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 547,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 22,
"path": "/FYS2130/Oblig_1/opg10.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\nn = 1000\nx = np.arange(n)\nF = np.abs(np.sin(4*np.pi*x/float(n)))\nv = np.cos(4*np.pi*x/float(n))*4*np.pi\n\nplt.subplot(2,1,1) \nplt.plot(x/1000.0,F)\nplt.title('Posisjon vs tid')\nplt.xlabel('Tid [s]')\nplt.ylabel('Posisjon [m]')\nplt.subplot(2,1,2)\nplt.scatter(1,0,color='orange')\nplt.plot(F,v/1000.0)\nplt.scatter(0,0.0125, color='green')\nplt.scatter(0,-0.0125, color='green')\nplt.title('Hastighet vs posisjon')\nplt.xlabel('Posisjon[m]')\nplt.ylabel('Hastighet[m/s]')\n#plt.savefig('opg10.png')\nplt.show()\n"
},
{
"alpha_fraction": 0.3802816867828369,
"alphanum_fraction": 0.49295774102211,
"avg_line_length": 14.777777671813965,
"blob_id": "8c1d4c95659846c3d13b91e891d0a4fdca449e83",
"content_id": "65f7f80ff0be4cc5b9e4b63d03571f8ac45b58fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 142,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 9,
"path": "/INF1100/Exercises/Chapter_3/compare_floats.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "a = 1/947.0*947\nb = 1\nif a != b:\n print 'Wrong result! a = %.16f b = %.16f' %(a, b)\n\ntol = 1e-10\n\nif abs(a - b) < tol:\n print 'Correct'\n"
},
{
"alpha_fraction": 0.545945942401886,
"alphanum_fraction": 0.6414414644241333,
"avg_line_length": 21.200000762939453,
"blob_id": "072a407cf4579c2a7d305ab7361371f02d32ddf7",
"content_id": "9d158f3c0243028d757ac5641ffc985355f52a09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 555,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 25,
"path": "/INF1100/Exercises/Chapter_1/egg.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from math import pi, log\n\nM = 67\nc = 3.7\nK = 5.4e-3\nrho = 1.038\n\nTw = 100\nTo = 4\nTy = 70\n\n\ntime = (M**(2.0/3.0)*c*rho**(1.0/3.0))/(K*pi**2*(4*pi/3)**(2.0/3.0))\\\n *log(0.76*(To-Tw)/(Ty-Tw))\n\nminutes = int(time/60)\nsec = int(time%60)\nprint '''It wil take %g seconds or %g minutes and %g seconds\nto get the egg to be 70 degrees inside straight from the refrigdirator''' % (time, minutes, sec)\n\n\"\"\"\nTerminal>In [9]: run egg.py\nIt wil take 396.576 seconds or 6 minutes and 36 seconds\nto get the egg to be 70 degrees inside straight from the refrigdirator\n\"\"\"\n"
},
{
"alpha_fraction": 0.48803046345710754,
"alphanum_fraction": 0.538084864616394,
"avg_line_length": 24.178081512451172,
"blob_id": "3102e05a29466daaf54e7537bc1b3f51e3e21073",
"content_id": "5f090b066a42a98437270d9f38b91503b7a19b58",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1838,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 73,
"path": "/FYS1120/Oblig_2/opg3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n#from seaborn import*\n\nq = 1.6*10**(-19) #elektronladning\nme = 9.11*10**(-31) #elektronmasse\nmp = 1.67*10**(-27) #protonmasse\n\nT = 300*10**(-9) #fra t0 til T [s]\ndt = 100*10**(-15) #tidssteg \nN = int(T/float(dt)) #antall tidssteg \nr = np.zeros((3,N)) #posisjonsvektor [m]\nv = np.zeros_like(r) #hastighetsvektor [m/s]\nt = np.linspace(0,T-dt,N) #array med likt fordelt tid dt\nd = 90*10**(-6) #valley gap [m]\nr_D = 1 #radius [m]\n\n#initialverdier\nv[:,0] = (0,0,0)\nE0 = (25*10**3)/float(d)\n\nB = np.array((0,0,2)) #Magnetfelt\n\nomega = (q*B[2])/mp\n \n#Euler-Cromer\nfor i in xrange(N-1):\n Fb = q*(np.cross(v[:,i],B)) #Magnetisk kraft\n E = np.zeros(3) #elektriskfelt\n if -0.5*d<r[0,i]<0.5*d:\n E[0] = E0*np.cos(omega*t[i])\n else:\n E = 0\n \n if np.linalg.norm(r[:,i]) < r_D:\n a = (Fb+E*q)/mp\n else:\n a = 0\n v[:,i+1] = v[:,i] + a*dt\n r[:,i+1] = r[:,i] + v[:,i+1]*dt\n\nv_u = np.linalg.norm(v[:,-1]) \nprint 'Unnslipps hastighet = %.2f' %v_u\n\n \n#plott\nplt.plot(t,r[0,:],t,r[1,:],t,r[2,:])\nplt.legend(['x','y','z'])\nplt.title('Posisjon')\nplt.xlabel('Tid [s]')\nplt.ylabel('Posisjon [m]')\nplt.savefig('3d_pos.png')\nplt.show()\nplt.plot(t,v[0,:],t,v[1,:],t,v[2,:])\nplt.title('Hastighet')\nplt.legend(['vx','vy','vz'])\nplt.xlabel('Tid [s]')\nplt.ylabel('Hastighet [m/s]')\nplt.savefig('3d_hast.png')\nplt.show()\n\n\"\"\"\n#3d-plott\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot(r[0,:], r[1,:], r[2,:],'-', label='Elektron akselerasjon')\nax.set_xlabel('x-posisjon [m]')\nax.set_ylabel('y-posisjon [m]')\nax.set_zlabel('z-posisjon [m]')\nplt.savefig('3dopg3.png')\nplt.show()\n\"\"\"\n"
},
{
"alpha_fraction": 0.644582986831665,
"alphanum_fraction": 0.6999220848083496,
"avg_line_length": 37.84848403930664,
"blob_id": "2b469c26573ec4f199eaf4a9f6ec759dd29523cb",
"content_id": "9791892dfbce4b8696b9ccfa5d2610e437ecde9c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1283,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 33,
"path": "/AST2000/Solmal.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from ast2000solarsystemviewer_v2 import AST2000SolarSystemViewer\nimport numpy as np, matplotlib.pyplot as plt, random as random\n\nseed = 59750\nsystem = AST2000SolarSystemViewer(seed)\n\n#Helesystemet\nnr_planeter = system.number_of_planets\nRadius = system.radius #radius til alle planetene [km]\nMass = system.mass #Massen til hele systemet [Solmasse]\npl_x0 = system.x0 #planetenes initial posisjon i x-retning [AU]\npl_y0 = system.y0 #planetenes initial posisjon i y-retning [AU]\npl_vx0 = system.vx0 #planetenes initial hastighet i x-retning [AU]\npl_vy0 = system.vy0 #planetenes initial hastighet i y-retning [AU]\nrho = system.rho0 #Atmospheric density at surface [kg/m^3]\nG = 6.67428*10**-11 #Newtons gravitasjons konstant [m^3/(kg*s^2)]\n\n#Sola\nsolMass = system.star_mass #[Solmasse]\nsol_rad = system.star_radius #Radius av sola [km]\nsol_temp = system.temperature #Overflate temperatur av Sola [K]\n\n#hjemplanet\nvekt = Mass[0] #Massen [solmasse]\nrad = Radius[0] #Radius til hjemplanet [km]\natm = rho[0] #Atmosfaeren til hjemplaneten [kg/m**3]\npm = vekt*1.98892*10**30 #sunmass to kg (hjemplanet)\npr = rad*1000 #km to m (hjemplanet)\nx0 = pl_x0[0]\ny0 = pl_y0[0]\nvx0 = pl_vx0[0]\nvy0 = pl_vy0[0]\nv_esc = np.sqrt((2*G*pm)/pr) #[m/s]\n\n"
},
{
"alpha_fraction": 0.782608687877655,
"alphanum_fraction": 0.782608687877655,
"avg_line_length": 44,
"blob_id": "07be6ff0005dedae01793e4d67596093c0b44c96",
"content_id": "6f2af27ee931254860cd67deeee1fcb148a031f6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 46,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 1,
"path": "/FYS2150/Magnetisme/usikkerhet.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\n"
},
{
"alpha_fraction": 0.4978354871273041,
"alphanum_fraction": 0.5627705454826355,
"avg_line_length": 20,
"blob_id": "95f2144946bf50a7ef288ce3c3136cfbb5068198",
"content_id": "fa7c1427143337dcc4b199cbe670f548311d138e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 231,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 11,
"path": "/INF1100/Exercises/Chapter_3/f2c_func1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "Cdegree = []\n\nfor C in range(-10, 42, 5):\n Cdegree.append(C)\n \ndef F(C):\n return (9.0/5.0)*C + 32\n\nFdegree = [F(C) for C in Cdegree]\nfor c, f in zip(Fdegree, Cdegree):\n print 'Farenheit = %5.2f Celcius = %5.2f' %(c, f)\n"
},
{
"alpha_fraction": 0.734375,
"alphanum_fraction": 0.796875,
"avg_line_length": 63,
"blob_id": "47b60ea2ca8017aec07410d2f6e13ce6ed2baf89",
"content_id": "578c1a48dbba10b62741fbebf6b6ec5315212873",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 64,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 1,
"path": "/MEK1100/Oblig_1/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "Oblig1v16.pdf is the assigment, and Mekoblig1.pdf is my answer.\n"
},
{
"alpha_fraction": 0.6152108311653137,
"alphanum_fraction": 0.6468373537063599,
"avg_line_length": 30.619047164916992,
"blob_id": "d5764c0fae423da2bd90581a528dbc467c651fad",
"content_id": "8d60c8f39b86a48bc2a0b1b692046a24460f80f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1328,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 42,
"path": "/INF1100/Assigments/Chapter_4/ball_cml_tcheck.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import sys\n\nif len(sys.argv) < 2: #giving the programmer a msg if values is not inserted in the command line!\n print 'Provide a start velocity(m/s) and a time value(sec)! Only insert numbers!'\n exit()\n\ndef y(v0, t): #function\n g = 9.81 #gravity on earth\n Fy = v0*t - 0.5*g*t**2 #formula\n return Fy\n\ndef test_y(): #test if the function is working\n computed = y(2, 1)\n expected = -2.905\n tol = 1e-10\n success = (computed - expected) < tol\n msg = 'something is wrong'\n assert success, msg\ntest_y() #calling the test\n\nv0 = sys.argv[1]; v0 = float(v0) #fetching the first number\nt = sys.argv[2]; t = float(t) #fetching the second number\ng = 9.81\nq = float((2*v0)/g)\n\nif t > 0 and t < q: #check if t is between 0 and (2*v0)/g\n k = y(v0,t) \n print 'The ball is %.2f meters in the air' % k #if the the statement is true its printing the result\nelse:\n print 'Time value did not meet the requirements!' #printing msg if the statement is wrong\n exit() #ending the program\n\n\"\"\"\nTerminal> python ball_cml_tcheck.py\nProvide a start velocity(m/s) and a time value(sec)! Only insert numbers!\n\nTerminal> python ball_cml_tcheck.py 6 2\nTime value did not meet the requirements!\n\nTerminal> python ball_cml_tcheck.py 7 1.2\nThe ball is 1.34 meters in the air\n\"\"\"\n"
},
{
"alpha_fraction": 0.4488188922405243,
"alphanum_fraction": 0.5055118203163147,
"avg_line_length": 19.483871459960938,
"blob_id": "f74ca555509f2cf9b91cbdc78e930ae10b6a1969",
"content_id": "6275f9e242f061fc9a425d64acfece4ef9bbf0c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 635,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 31,
"path": "/FYS-MEK1110/Oblig_4/k.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n\n#Funksjoner\ndef f(x,y): #F_f = kraft som blir tilfort av fotoner\n if abs(x) >= X0:\n return 0\n else:\n return -alp*y\ndef F(x): #F_m = den magnetiske kraften\n if abs(x) >= X0:\n return 0\n elif 0 > x > -X0:\n return U/X0\n else:\n return -U/X0\n \n#initialverdier\nU = 150.0; m = 23.0; X0 = 2.0; alp = 39.48\ndt = 1E-5 ; time = 5.0\n\n#Euler-cromer\nn = int(round(time/dt))\nt = linspace(0,5,n)\nxi = zeros(n)\nv = zeros(n)\na = zeros(n)\nv[0] = 8; xi[0] = -5\nfor i in range(n-1):\n a[i] = (F(xi[i]) + f(xi[i],v[i]))/m\n v[i+1] = v[i] + a[i]*dt\n xi[i+1] = xi[i] + v[i+1]*dt\n"
},
{
"alpha_fraction": 0.7444444298744202,
"alphanum_fraction": 0.7888888716697693,
"avg_line_length": 44,
"blob_id": "13fc42f087be53ff9114ea20a332a9a26a395808",
"content_id": "85fee490706e39e5177c2f3fda84c1147aef1df8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 90,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 2,
"path": "/MEK1100/Oblig_2/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "obl2v16.pdf was the assignment and Oblig2.pdf was my answer. \ndata.mat was provided by UiO\n"
},
{
"alpha_fraction": 0.8088235259056091,
"alphanum_fraction": 0.8088235259056091,
"avg_line_length": 67,
"blob_id": "3be57fc2599dd047e58afacc23ea1ae7c10343a9",
"content_id": "ec4902627fb327d67a89a9f2c6c87d962f39312d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 68,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 1,
"path": "/INF1100/Assigments/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This folder is for the regular assignments I got after each chapter.\n"
},
{
"alpha_fraction": 0.3093525171279907,
"alphanum_fraction": 0.4982014298439026,
"avg_line_length": 12.560976028442383,
"blob_id": "afe1e840ae1c6bbed77ae696bc8d8ab99a4ea170",
"content_id": "b16ce81fbe0cfee8ed6be592049803f6546dc056",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 556,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 41,
"path": "/INF1100/Exercises/Chapter_2/ball_table1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "# y = v0*t - 0.5*g*t**2 formula\n\nv0 = 5.0\ng = 9.81\nn = 5\n\nt_stop = 2*v0/g\ndt = t_stop/n\n\n#for loop\nprint 'for loop'\nfor i in range(0,n+1):\n t = i*dt\n y = v0*t - 0.5*g*t**2\n print \"%5.2f , %5.2f\" % (t, y)\n\n#while loop\nprint 'while loop'\nt=0\nwhile t <= t_stop:\n y = v0*t - 0.5*g*t**2\n print \"%5.2f %5.2f\" %(t, y)\n t += dt\n\n\"\"\"\nTerminal>python ball_table1.py \nfor loop\n 0.00 , 0.00\n 0.20 , 0.82\n 0.41 , 1.22\n 0.61 , 1.22\n 0.82 , 0.82\n 1.02 , 0.00\nwhile loop\n 0.00 0.00\n 0.20 0.82\n 0.41 1.22\n 0.61 1.22\n 0.82 0.82\n 1.02 0.00\n \"\"\"\n"
},
{
"alpha_fraction": 0.5137747526168823,
"alphanum_fraction": 0.5268245339393616,
"avg_line_length": 31.841270446777344,
"blob_id": "f2a86274094d419939619d67e91ca7a91c13f11b",
"content_id": "2a3747bfc9e5b471bb3b0d73e91a6e5ed9760a29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2070,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 63,
"path": "/INF1100/Project/SIZR.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nS = sigma-betta*S*Z-ds*S\nI = betta*S*Z-p*I-delta*I\nZ = p*I - alfa*S*Z\nR = ds*S + delta*I + alfa*S*Z\n\"\"\"\n\nclass ProblemSIR():\n def __init__(self, sigma, betta, delta, alfa, S0, Z0, I0, R0, T, p):\n if isinstance(nu, (float,int)):\n self.sigma = lambda t: sigma\n elif callable(sigma):\n self.sigma = sigma\n if isinstance(beta, (float,int)):\n self.betta = lambda t: betta\n elif callable(betta):\n self.betta = betta\n self.delta, self.alfa = delta, alfa\n self.S0, self.Z0, self.I0, self.R0, self.T, self.p = S0, Z0, I0, R0, T, p\n \n def __call__(self, u, t):\n S, Z, I, R = u\n ds = \n return [\n self.sigma(t) -self.betta(t)*S*Z - ds*S,\n self.betta(t)*S*Z - self.p*I - self.delta*I,\n self.p*I - self.alfa*S*Z,\n ds*S + self.delta*I + self.alfa*S*Z\n ]\n \n def initial_value(self):\n return self.S0, self.Z0, self.I0, self.R0\n \n def time_points(self,dt):\n import numpy as np\n self.dt = dt\n t = np.linspace(0,self.T, self.T/float(self.dt))\n return t\n\nclass SolverSIR():\n import ODEsolver as ODE\n def __init__(self, problem, dt):\n self.problem, self.dt = problem, dt\n\n def solve(self, method=ODE.RungeKutta4):\n import numpy as np\n self.solver = method(self.problem)\n ic = [self.problem.S0, self.problem.Z0, self.problem.I0, self.problem.R0]\n self.solver.set_initial_condition(ic)\n n = int(round(self.problem.T/float(self.dt)))\n t = np.linspace(0, self.problem.T, n+1)\n u , self.t = self.solver.solve(t)\n self.S, self.I, self.R = u[:,0], u[:,1], u[:,2]\n\n def plot(self):\n import matplotlib.pyplot as plt\n S, Z, I, R, t = self.S, self.Z, self.I, self.R, self.t\n plt.plot(t,S,t,Z,t,I,t,R)\n plt.legend(['Mennesker', 'Zombie' 'Smitta', 'Død'])\n plt.xlabel('Timer')\n plt.ylabel('Individer')\n plt.title('ZOMBIE')\n plt.show()\n"
},
{
"alpha_fraction": 0.44083771109580994,
"alphanum_fraction": 0.5047120451927185,
"avg_line_length": 27.08823585510254,
"blob_id": "cd6eb4aaabb330c9198c985f1c5ff0339c6c7347",
"content_id": "194f77da48515444141e7f92be11a4c4f37484f1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 955,
"license_type": "no_license",
"max_line_length": 154,
"num_lines": 34,
"path": "/MAT1110/Oblig1/1oblig1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom numpy import arcsinh,cos,sin\n\n\"\"\"\n#plt.ion\nfor i in x:\n th = 1.0/4*(2*i*np.sqrt(1+4*i**2) + arcsinh(2*i))\n bet = 1.0/np.sqrt(1+4*i**2)\n r[i,:] = np.matrix([rho*(cos(th)*2*i*bet- sin(th)*bet + i - 2*i*rho*bet),\\\n rho*(-2*i*sin(th)*bet - cos(th)*bet + i**2 + rho*bet)])\n print r[i], i\n #plt.scatter(r[i,0],r[i,1])\n #plt.plot(x,S(x))\n #plt.axis([-1,1,-1,4])\n #plt.pause(0.0001)\n #plt.cla\n\"\"\"\n\nx = np.linspace(-2,2,100)\nn = len(x)\n#r = np.zeros((n,2))\nth = lambda x: 1.0/4*(2*x*np.sqrt(1+4*x**2) + arcsinh(2*x))\nbet = lambda x: 1.0/np.sqrt(1+4*x**2)\nrho = 0.5\nr = np.matrix([rho*(cos(th(x))*2*x*bet(x)- sin(th(x))*bet(x) + x - 2*x*rho*bet(x)), rho*(-2*x*sin(th(x))*bet(x) - cos(th(x))*bet(x) + x**2 + rho*bet(x))])\nr = r.transpose()\n\n\"\"\"\nfor i in range(1,n):\n h = abs(x[i]-x[i-1])\n r[i] = r[i-1] + h*(1-r[i-1]**2)\n\"\"\"\nplt.plot(r[:,0],r[:,1])\nplt.show()\n"
},
{
"alpha_fraction": 0.3207547068595886,
"alphanum_fraction": 0.49191373586654663,
"avg_line_length": 18.526315689086914,
"blob_id": "179fc527ab0a6eac69f751ce5a1882ea33ea1985",
"content_id": "cfecc0fcbfcb1736a136b63b56f76ef7eb5008fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 742,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 38,
"path": "/INF1100/Assigments/Chapter_3/gaussian2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#formula (1/sqrt(2*pi)*s)*exp[-0.5*((x-m)/s)**2]\n\nfrom math import sqrt, pi, exp\n\ndef gauss(x, m=0, s=1):\n return 1.0/(sqrt(2*pi)*s)*exp(-0.5*((x-m)/s)**2)\n\nn = 10; m = 5.0; s = 1.0\nstart = m - 5*s\nstop = m + 5*s\nstep = float(start - stop)/n\n\nX = []\nFx =[]\n\nfor x in range(n+1):\n X.append(x)\n fx = gauss(x, m, s)\n Fx.append(fx)\n x += step\n\nfor i, j in zip(X, Fx):\n print 'x = %-4.1f f(x) = %-4.6f' %(i, j)\n \n\"\"\"\nTerminal>python gaussian2.py \nx = 0.0 f(x) = 0.000001\nx = 1.0 f(x) = 0.000134\nx = 2.0 f(x) = 0.004432\nx = 3.0 f(x) = 0.053991\nx = 4.0 f(x) = 0.241971\nx = 5.0 f(x) = 0.398942\nx = 6.0 f(x) = 0.241971\nx = 7.0 f(x) = 0.053991\nx = 8.0 f(x) = 0.004432\nx = 9.0 f(x) = 0.000134\nx = 10.0 f(x) = 0.000001\n\"\"\"\n"
},
{
"alpha_fraction": 0.7976190447807312,
"alphanum_fraction": 0.7976190447807312,
"avg_line_length": 55,
"blob_id": "5f2dd8ffe1b248bd562ddba05978eb06dbf8535c",
"content_id": "f7b5b6837519158a6efa95a87aeefdaf51b44db6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 336,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 6,
"path": "/MAT-INF1100/Oblig_1/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This is my first assignment in modelling and computing at the university. \nHere I learnt what overflow is and what causes it in computational calculations.\nThe programming language I used was Python.\n\nThe assignment and my solution are added here as PDF files; they are in Norwegian.\nI wrote my answer in LaTeX and the source code is the .tex file.\n"
},
{
"alpha_fraction": 0.4057142734527588,
"alphanum_fraction": 0.5657142996788025,
"avg_line_length": 7.333333492279053,
"blob_id": "3e95a7438b4e3341c19d9ba0bd1c94f074919c0f",
"content_id": "87a08eddb455d5988760c0c9f340ef1e44430081",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 175,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 21,
"path": "/INF1100/Exercises/Chapter_2/primes.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "primes = [2, 3, 5, 7, 11, 13]\n\nfor i in primes:\n print i\n\np = 17\n\nprimes.append(p)\n\nprint primes\n\n\"\"\"\nTerminal>python primes.py \n2\n3\n5\n7\n11\n13\n[2, 3, 5, 7, 11, 13, 17]\n\"\"\"\n"
},
{
"alpha_fraction": 0.6902788281440735,
"alphanum_fraction": 0.7166541218757629,
"avg_line_length": 33.921051025390625,
"blob_id": "bc515c4c8d8f40cf0b82f36df15fa34efc7ce456",
"content_id": "0da9fed16ff56bed97c4939c1bb7b1fc842e90aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1327,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 38,
"path": "/FYS2130/Oblig_3/opg9.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom scipy.fftpack import fft, ifft\n\n\"\"\"\nEnkelt eksempelprogram for aa vise hvordan fouriertransformasjon\nkan gjennomfores i praksis i Matlab. Eksemplet er en modifikasjon\nav et eksempelprogram paa hjelpesidene i Matlab.\n\"\"\"\nFs = 1000;\ndelta_t = 1.0/Fs\nN = 1024.0\nt = np.linspace(0,Fs,N) # Tidsvektor\n\"\"\"\nSamplingsfrekvens\nTid mellom hver sampling\nAntall samplinger\nLager her et kunstig signal som en sum av et 50 Hz sinussignal\nog en 120 Hz cosinus, pluss legger til et random signal:\n\"\"\"\nx = 0.7*np.sin(2*np.pi*50*t) + np.cos(2*np.pi*120*t)\nx = x + 1.2*np.random.randn(len(t))\nplt.plot(Fs*t,x) # Plotting av signalet i tidsbilet\nplt.title('Opprinnelig signal (tidsbildet)')\nplt.xlabel('tid (millisekunder)')\n\nX = fft(x,N)/N\nb = [i for i in range(int(N/2))] # Fouriertransformasjon\nfrekv = (Fs/2)*np.linspace(0,1,N/2); # Frekvensvektor (for plot)\n\"\"\"\nPlotter bare lengden paa frekvenskomponentene i frekvensspekteret.\nVelger aa bare ta med frekvenser opp til halve samplingsfrekvensen.\n\"\"\"\nplt.figure() # Hindrer overskriving av forrige figur\nplt.plot(frekv,2*np.abs(X(b,N/2))) # Plotter halvparten av fourierspekteret\nplt.title('Absolutt-verdier av frekvensspekteret')\nplt.xlabel('Frekvens (Hz)')\nplt.ylabel('|X(frekv)|')\nplt.show()\n"
},
{
"alpha_fraction": 0.545064389705658,
"alphanum_fraction": 0.5901287794113159,
"avg_line_length": 18.41666603088379,
"blob_id": "00c5af79eaf01c57ce5ef99e5cede5536d13c3c9",
"content_id": "fc1526b2ea3ccdd467b9110c52d0603e2545cda0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 466,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 24,
"path": "/INF1100/Assigments/Chapter_5/plot_wavepacket.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom math import sin, exp, pi\n\ndef f(x, t):\n return exp(-(x - 3*t)**2)*sin(3*pi*(x - t))\n\nx = np.linspace(-4, 4, 1001)\nh = np.zeros(len(x))\n\nfor i in xrange(len(x)):\n h[i] = f(x[i], 0)\n\nplt.plot(h)\nplt.title('Wavwpacket')\nplt.xlabel('x-axis')\nplt.ylabel('y-axis')\nplt.legend(['f(x,t) = exp(-(x - 3*t)**2)*sin(3*pi*(x - t)'])\nplt.axis([0, 1000,-1, 1.5])\nplt.show()\n\n\"\"\"\nTerminal> python plot_wavepacket.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.5107296109199524,
"alphanum_fraction": 0.5593705177307129,
"avg_line_length": 18.41666603088379,
"blob_id": "3a0e8f1ea80f2f69fb71be5aab7ae52822e9c0a6",
"content_id": "bae2713cd93655e935b9e64ae8c93467e6a2fef1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 699,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 36,
"path": "/INF1100/Assigments/Chapter_7/F2.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\"\"\"\nclass = data + function\ndata: a, w\nfunktions: value(x)\n\"\"\"\n\nclass F:\n def __init__(self,a,w):\n self.a = a\n self.w = w\n\n def __call__(self,x):\n from math import exp, sin\n return exp(-self.a*x)*sin(self.w*x)\n \n def value(self,x):\n from math import exp, sin\n return exp(-self.a*x)*sin(self.w*x)\n\n def __str__(self):\n return 'exp(-a*x)*sin(w*x)'\n\n\"\"\"\nJeg brukte samme verdi som i boka saa fikk dermed samme resultat.\nLegger ved en demo:\nTerminal> python\n>>> from F2 import F\n>>> f = F(a=1.0, w=0.1)>>> from math import pi\n>>> print f(x=pi)\n0.013353835137\n>>> f.a = 2\n>>> print f(pi)\n0.00057707154012\n>>> print f\nexp(-a*x)*sin(w*x)\n\"\"\"\n"
},
{
"alpha_fraction": 0.4295039176940918,
"alphanum_fraction": 0.4921671152114868,
"avg_line_length": 19.157894134521484,
"blob_id": "250805ccb8fc8385b65be700146f5401950d1324",
"content_id": "487b42daf51216ed7008a7bcca903416a630139a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 766,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 38,
"path": "/INF1100/Assigments/Chapter_5/sinesum1_plot.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(t, T):\n\n if 0 < t < T/2:\n return 1\n elif t == T/2:\n return 0\n elif T/2 < t < T:\n return -1\n \ndef S(t,n,T):\n k = 0\n for i in range(1,n+1):\n k += 1.0/(2*i - 1)*np.sin((2*(2*i - 1)*np.pi*t)/T)\n return k*(4/np.pi)\n\nT = 2*np.pi\nt = np.linspace(0.0, T, 200)\nn = [1, 3, 20, 200]\nft = np.array([f(t[i],T) for i in range(len(t))])\n\nplt.plot(t, ft,'black', linewidth = 2) \nplt.xlabel('t')\nplt.ylabel('f(t)/S(t;n)')\nplt.title('Sinesum')\nplt.legend(['F(t)'])\nfor i in (n):\n q = S(t,i,T)\n plt.plot(t,q)\nplt.axis([-0.2, 6.5,-1.5, 1.5]) \nplt.legend(['F(t)','S(t;1)', 'S(t;3)','S(t;20)', 'S(t;200)']) \nplt.show()\n\n\"\"\"\nTerminal> python sinesum1_plot.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.6427145600318909,
"alphanum_fraction": 0.658682644367218,
"avg_line_length": 21.772727966308594,
"blob_id": "70a84004ba4a5ccd9884df459ee03f423f677c8e",
"content_id": "994af3c4a5fd940ddc36c615374d838b141d3844",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 501,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 22,
"path": "/INF1100/Gui1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from Tkinter import *\nroot = Tk()\nC_entry = Entry(root, width=4)\nC_entry.pack(side='left')\nCunit_label = Label(root, text='Celcius')\nCunit_label.pack(side='left')\n\ndef compute():\n C = float(C_entry.get())\n K = (-273.15) + C\n K_label.configure(text='%g' %K)\n\n\ncompute = Button(root, text=' is ', command=compute)\ncompute.pack(side='left', padx=5)\n\nK_label = Label(root, width=6)\nK_label.pack(side='left')\nKunit_label = Label(root, text='Kelvin')\nKunit_label.pack(side='left')\n\nroot.mainloop()\n"
},
{
"alpha_fraction": 0.6994219422340393,
"alphanum_fraction": 0.7109826803207397,
"avg_line_length": 18.22222137451172,
"blob_id": "baead09b95143aba3ea9464927090173ff802fa4",
"content_id": "9378eeca504ab2eb0658ebfc6231624c21c1799a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 173,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 9,
"path": "/MEK1100/Oblig_1/vec.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*; from seaborn import*\nfrom velfield import hastighet\n\nx,y,u,v = hastighet(7)\nquiver(x,y,u,v)\nxlabel('x-aksen')\nylabel('y-aksen')\nsavefig('4b.png')\nshow()\n"
},
{
"alpha_fraction": 0.5276923179626465,
"alphanum_fraction": 0.5723077058792114,
"avg_line_length": 21.413793563842773,
"blob_id": "11ce0c76cbfd5cfdad3bc0bd38490e203705884a",
"content_id": "2b66c769073c8fbc66bac9e5db02894c7182cba8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 650,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 29,
"path": "/FYS-MEK1110/Oblig_3/r.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom seaborn import*\n\n#variable\nk = 100.0; m = 0.1 #masse og fjaerkonstant\nv0 = 0.1; b = 0.1; u = 0.1\ntime = 2.0; dt = 1/1000.0 #tid og tidssteg\nw = sqrt(k/m)\n#funksjoner\nx = lambda t: u*t - (v0/w)*sin(w*t) #eksakt\nxb = lambda t: u*t + b\n#Euler-Cromer\nn = int(round(time/dt))\nt = linspace(0,2,n)\nxi = zeros(n)\nv = zeros(n)\na = zeros(n)\n\nfor i in range(n-1):\n F = k*(xb(t[i]) - xi[i] - b)\n a[i] = F/m\n v[i+1] = v[i] + a[i]*dt\n xi[i+1] = xi[i] + v[i+1]*dt\n#plott \nplot(t,x(t),t,xi)\nlegend(['Eksakt','Euler-Cromer'], loc=0)\ntitle('Eksakt vs Euler-Cromer');xlabel('Tid [s]');ylabel('Posisjon [m]')\nsavefig('r.png')\nshow()\n"
},
{
"alpha_fraction": 0.47478991746902466,
"alphanum_fraction": 0.5472689270973206,
"avg_line_length": 21.66666603088379,
"blob_id": "5ac5ed1323cf5340fe9ed2fac4afaa09fe85e0d9",
"content_id": "d561a5e89198d1c473a26bd3d6d051ce793acf2f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 952,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 42,
"path": "/INF1100/Assigments/Chapter_7/RungeKutta2_func.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\ndef RK2(f, U0, T, n):\n t = np.zeros(n+1); u = np.zeros(n+1)\n t[0] = 0; u[0] = U0\n dt = T/float(n)\n for k in range(n):\n t[k+1] = t[k] + dt\n K1 = dt*f(u[k],t[k])\n K2 = dt*f(u[k]+0.5*K1,t[k]+0.5*dt)\n u[k+1] = u[k] + K2\n return u, t\n \nf = lambda t,x: x**2\nex = lambda t,x:1/3.*x**3\n\nt = np.linspace(0,5,20)\nN = 3; N2 = 8; T1 = 4; u = 0\nsolve, solve1 = RK2(f, u, T1, N)\ncl, cl1 = RK2(f, u, T1, N2)\n\nplt.subplot(2,1,1)\nplt.plot(t,ex(0,t),'black', linewidth = 2.5)\nplt.plot(solve1, solve, 'ro-')\nplt.axis([0,6.5,0,25])\nplt.ylabel('f(x)')\nplt.xlabel('X')\nplt.legend(['Exact','RK2 stor dt'])\nplt.title('Runge Kutta 2 vs Exact')\nplt.subplot(2,1,2)\nplt.plot(t,ex(0,t),'black', linewidth = 2.5)\nplt.plot(cl1,cl ,'yo-')\nplt.legend(['Exact','RK2 liten dt'])\nplt.xlabel('X')\nplt.ylabel('f(x)')\nplt.axis([0,6.5,0,25])\n\nplt.show()\n\n\"\"\"\nTerminal> python RungeKutta2_func.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.793749988079071,
"alphanum_fraction": 0.8187500238418579,
"avg_line_length": 79,
"blob_id": "471e5a82429930c19ce4005d5f752682a802819d",
"content_id": "5fff029a5e75db794181532c23082d44b37b3b04",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 160,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 2,
"path": "/MAT-INF1100/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course is an intro to computational modeling and calculation.\nSyllabus: Mørken, K (2015), Numerical Algorithms and Digital Representation. University of Oslo\n"
},
{
"alpha_fraction": 0.4283837080001831,
"alphanum_fraction": 0.4954007863998413,
"avg_line_length": 19.026315689086914,
"blob_id": "6cfbc322b6faf98fc726f839171c690775c2850b",
"content_id": "31e0f67ea6a0a322ffecc1b6da03c130dbb351be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 761,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 38,
"path": "/INF1100/Exercises/Chapter_5/sinsum1_test.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(t, T):\n\n if 0 < t < T/2:\n return 1\n elif t == T/2:\n return 0\n elif T/2 < t < T:\n return -1\n \ndef S(t,n,T):\n k = 0\n for i in range(1,n+1):\n k += 1.0/(2*i - 1)*np.sin((2*(2*i - 1)*np.pi*t)/T)\n\n r = k*(4/np.pi)\n\n plt.ion() #viktig a ha med nar plotet skal vises i en film\n plt.plot(r)\n plt.draw()\n plt.pause(1)\n \n\nT = 2*np.pi\nt = np.linspace(0.0, T, 200)\nn = [1, 3, 20, 200, 2]\nft = np.array([f(t[i],T) for i in range(len(t))])\n\nplt.plot(ft,'black', linewidth = 2.5)\nplt.axis([-0.2, 201,-1.5, 1.5]) \nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('Sinesum')\n\nplt.legend(['F(t)','S(t;1)', 'S(t;3)','S(t;20)', 'S(t;200)']) \nplt.show()\n"
},
{
"alpha_fraction": 0.555084764957428,
"alphanum_fraction": 0.5946327447891235,
"avg_line_length": 18.66666603088379,
"blob_id": "dd85f5e79ff65a028e4d4a4a16105935e563f91c",
"content_id": "1991fdfc5e186871930180a4a1420cc1041c573f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 708,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 36,
"path": "/INF1100/Assigments/Chapter_5/plot_wavepacket_movie.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import glob, os\nfor plot_wavepacket_movie in glob.glob('tmp*.png'):\n os.remove(plot_wavepacket_movie)\n \nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(x, t):\n return np.exp(-(x - 3*t)**2)*np.sin(3*np.pi*(x - t))\n\nx = np.linspace(-6, 6, 1001)\nt_values = np.linspace(-1, 1, 61)\n\nplt.ion()\ny = f(x, t_values[0])\nlines = plt.plot(x, y)\nplt.axis([x[0], x[-1], -1, 1])\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.title('Wavepacket')\n\ncounter = 0\nfor t in t_values:\n y = f(x, t)\n lines[0].set_ydata(y)\n plt.legend(['t = %4.2f' %t])\n plt.draw()\n plt.savefig('tmp_%04d.png' %counter)\n counter += 1\n plt.pause(0.01)\n \nplt.show()\n\n\"\"\"\nTerminal>python plot_wavepacket_movie.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.47256097197532654,
"alphanum_fraction": 0.5472561120986938,
"avg_line_length": 18.878787994384766,
"blob_id": "2b83d13ea8a74ae9c9689eeb47486ad1f617b2db",
"content_id": "e35c4696e7a19e88a3ef8fef1ee76d4d36c92eaf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 656,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 33,
"path": "/FYS1120/LAB/prelab1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\n\nh = [0, 2, 4, 6, 8, 10]\nx = [230, 34, 12.8, 6.2, 3.6, 2.6]\n\np = polyfit(x,h,2)\nb = polyval(p,x)\n\ndef B(h):\n mu_0 =4*pi*10**(-7)\n Js = 1 ; t = 1; a = 1.7\n B = (mu_0/2.0*Js)*((h+t)/sqrt((h+t)**2+a**2) - h/sqrt(h**2 + a**2))\n return B\n\nB_h = zeros(len(h))\nfor i in xrange(len(h)):\n B_h[i] = B(i)\n\nsubplot(2,1,1)\ntitle('Measurment')\nplot(x,h,'rx')\nplot(x,b)\nlegend([u'Maaleresultater','Interpolasjon'], prop={'size': 15})\nylabel('h [cm]')\nsubplot(2,1,2)\nplot(B_h,h,'rx')\nplot(B_h,b)\nlegend([u'Maaleresultater','Interpolasjon'], prop={'size': 15})\ntitle('Analytic')\nylabel('h [cm]')\nxlabel('B_h [mT]')\nsavefig('2.png')\nshow()\n"
},
{
"alpha_fraction": 0.5912408828735352,
"alphanum_fraction": 0.6788321137428284,
"avg_line_length": 21.83333396911621,
"blob_id": "257d4a285fa023b7a24ab7d8ff31453d84bad81c",
"content_id": "6480232e630617a3ac51f749dae90268580692ac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 274,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 12,
"path": "/INF1100/Exercises/Chapter_2/second2years.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "seconds = 10.0**9\nminutes = seconds/60\nhours = minutes/60\ndays = hours/24\nyears = days/365.25\n\nprint 'Can a baby live in %.g s? Yes, it would be %.2f years' % (seconds, years)\n\n\"\"\"\nTerminal>python second2years.py\nCan a baby live in 1e+09 s? Yes, it would be 31.69 years\n\"\"\"\n"
},
{
"alpha_fraction": 0.4421965181827545,
"alphanum_fraction": 0.5028901696205139,
"avg_line_length": 19.294116973876953,
"blob_id": "8380c062762cd67f53ba12cccbefecd46ddd4816",
"content_id": "50baaea8a7cc71d2860dbcac7ee274fb99b09661",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 346,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 17,
"path": "/MAT-INF1100/Oblig_1/oblig1_1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from math import sqrt\n\nx = [1, (1 - sqrt(2))]\n\nfor i in range(2,100+1):\n value = 2*(float(x[-1])) + float(x[-2])\n x.append(value)\n\ndef xn(n):\n return (1 - sqrt(2))**n\n\nn = 100\nfor i in range(n+1):\n u = xn(i)\n b = x[i]\n w = u - b\n print 'x%g General solution = %12g | Computed solution = %12g | Avvik = %g' %(i, u, b, w)\n\n"
},
{
"alpha_fraction": 0.46937182545661926,
"alphanum_fraction": 0.5364806652069092,
"avg_line_length": 32.72368240356445,
"blob_id": "77abd57aea62ddd0eef5b34f7f4d893de49ee3b1",
"content_id": "9a3b95e221a7cb902da55b0ce2bd69ca08fb4f9c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2563,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 76,
"path": "/INF1100/Assigments/Chapter_8/freq_2dice.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import random, sys\nimport numpy as np\n\ntry:\n    g = int(sys.argv[1])\nexcept IndexError:\n    print \"Provide command-line arguments for N\"\n    sys.exit()\nexcept ValueError:\n    print \"What are you doing? N = number of eksperiments!!\"\n    sys.exit()\n    \ndef roll_dice(n):#en funksjon som bestemmer en tilfeldig sum av aa kaste to terninger\n    s = []\n    for i in xrange(n):\n        r = random.randint(1,6)\n        g = random.randint(1,6)\n        s.append(r + g)\n    return s\n\nn = int(sys.argv[1]) #henter hvo rmange ganger man kaster terningene (n) fra cml\nlst = np.sort(roll_dice(n)) #sorterer de tilfeldige summene i lista\n\n#laget en dictinoary med de forskjellige summene som er mulig aa faa\nresult = {\"2\": 0, \"3\": 0, \"4\": 0, \"5\": 0, \"6\": 0, \"7\": 0,\\\n          \"8\": 0, \"9\": 0, \"10\": 0, \"11\": 0, \"12\": 0} \n\nfor i in lst: #legger nummerene i lista over i dictionary\n    result['%i'%i] += 1\n    \n#lager en ny dictionary med sannsynligheten for aa faa de forskjellige summene\nprob = {} \nfor j in range(2,13):\n    p = result[\"%i\" %j]/float(n)\n    prob[\"%d\"%j] = p\n    \n#Dette er en dictionary med de eksakte sannsynlighetene\nexact_prob = {'2': 0.03, '3': 0.06, '4': 0.08, '5': 0.11, '6': 0.14,\\\n              '7': 0.17, '8': 0.14, '9': 0.11, '10': 0.08, '11': 0.06, '12': 0.03}\n\n#gjor om dictionarys om til lister slik at jeg kan skrive de ut i et pent format.\nnumbers = []\naprox = []\nexact = []\nfor f in range(2,13):\n    ap = prob[\"%d\"%f]\n    ex = exact_prob[\"%d\"%f]\n    aprox.append(ap)\n    exact.append(ex)\n    numbers.append(f)\n    \nprint '--------------------------------------------'\nprint 'Eye dice | Probability for getting that |'\nprint '--------------------------------------------'\nfor a, e, n in zip(aprox, exact, numbers):\n    print 'Sum = %2.i | Exact = %g, Approximate = %.2f' %(n,e,a)\nprint '--------------------------------------------'\n\n\"\"\"\nTerminal> python freq_2dice.py 1000\n--------------------------------------------\nEye dice | Probability for getting that |\n--------------------------------------------\nSum = 2 | Exact = 0.03, Approximate = 0.03\nSum = 3 | Exact = 0.06, Approximate = 0.07\nSum = 4 | Exact = 0.08, Approximate = 0.08\nSum = 5 | Exact = 0.11, Approximate = 0.09\nSum = 6 | Exact = 0.14, Approximate = 0.15\nSum = 7 | Exact = 0.17, Approximate = 0.16\nSum = 8 | Exact = 0.14, Approximate = 0.14\nSum = 9 | Exact = 0.11, Approximate = 0.12\nSum = 10 | Exact = 0.08, Approximate = 0.09\nSum = 11 | Exact = 0.06, Approximate = 0.05\nSum = 12 | Exact = 0.03, Approximate = 0.03\n--------------------------------------------\n\"\"\"\n"
},
{
"alpha_fraction": 0.6107142567634583,
"alphanum_fraction": 0.6392857432365417,
"avg_line_length": 27,
"blob_id": "66b0dbec08a08d90368219875dc42d4da19178c7",
"content_id": "f81394c04781330b9a794d81b278edc903547f7b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 280,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 10,
"path": "/MEK1100/Oblig_1/streamfun.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import linspace, meshgrid, cos, pi\n\ndef streamfun(n=20):\n #Calculate a grid and a streamfunkction\n x=linspace(-0.5*pi,0.5*pi,n)\n # The result is a vector with n elements, from -pi/2 to pi/2\n [X,Y] = meshgrid(x,x)\n psi=cos(X)*cos(Y)\n\n return X, Y, psi\n"
},
{
"alpha_fraction": 0.5481481552124023,
"alphanum_fraction": 0.5703703761100769,
"avg_line_length": 20.3157901763916,
"blob_id": "26678d6f2fcf861ad60acf477f4350ff2cf0c03a",
"content_id": "30f708fef3ba863f8feb2b8b383db5c4e5f47c3c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 405,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 19,
"path": "/INF1100/Exercises/Chapter_4/f2c_cml_exc.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import sys\n\ndef c(f):\n return (f - 32)*(5.0/9)\n\ntry:\n degree = sys.argv[1] #Fetches variables from cmd-window\n degree = float(degree)\nexcept:\n print \"No command line argument\"\n #sys.exit(1) #avslutter programmet\n\n degree = raw_input(\"f = \")\n try:\n degree = float(degree)\n except:\n degree = float(raw_input('f = '))\n \nprint '%.2f, %.2f' % (degree, c(degree))\n"
},
{
"alpha_fraction": 0.4758418798446655,
"alphanum_fraction": 0.5475841760635376,
"avg_line_length": 19.696969985961914,
"blob_id": "19ca13cfd0ffda2cf6d1e450485a60a8558c811e",
"content_id": "7077f016589719d3ee044468b135c04a06643b3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 683,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 33,
"path": "/FYS1120/Oblig_2/part.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\ne = -1.6*10**(-19)\nme = 9.11*10**(-31)\nE = np.array((-1,-2,5))\nF = E*e\n\nT = 10**(-6)\ndt = 10**(-9)\n#dt2 = 10**(-7)\nN = int(T/float(dt))\nr = np.zeros((3,N))\nv = np.zeros_like(r)\nt = np.linspace(0,T-dt,N)\n\n#Euler-Cromer\nfor i in xrange(N-1):\n a = F/me\n v[:,i+1] = v[:,i] + a*dt\n r[:,i+1] = r[:,i] + v[:,i+1]*dt\n\na = (-5*e)/me\nrt = lambda t: 0.5*a*t**2\n\n#plot(t,r[0,:],t,rt(t))\n#legend(['Numerisk','Analytisk'])\nplt.plot(t,r[0,:],t,r[1,:],t,r[2,:])\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot(r[0,:], r[1,:], r[2,:], label='Elektron akselerasjon')\nplt.show()\n"
},
{
"alpha_fraction": 0.4596100151538849,
"alphanum_fraction": 0.5097492933273315,
"avg_line_length": 31.636363983154297,
"blob_id": "933b210c45c37d64ce9d1c5ab986ab57f0623d4a",
"content_id": "67aaf627a70c47138f9f607aea12c218c718547e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 718,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 22,
"path": "/MEK1100/Oblig_1/oblig1c.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom seaborn import*\n\n#variable jeg kan/maa velge selv. Enhet = []\ng = 9.81 # [m/s^2]\nv0 = 10.0 # [m/s]\nt = linspace(0,2,50) # [s]\n#variable som er oppgitt/begrenset i oppgaven\ntheta = [pi/6, pi/4, pi/3]\n\nfor i in range(3): \n x = (t*g)/(2*v0*sin(theta[i])) #[s][m/s^2]/[m/s] = (m/s)/(m/s) = 1\n y = x*tan(theta[i])*(1-x)\n plot(x,y)\n axis([0,1,0,0.5])\ntitle(r'Plott av ukastvinklene $\\theta = \\frac{\\pi}{6},\\frac{\\pi}{4},\\frac{\\pi}{3} $')\nlegend([r'$\\theta = \\frac{\\pi}{6}$',r'$\\theta = \\frac{\\pi}{4}$',\\\n r'$\\theta = \\frac{\\pi}{3}$'])\nxlabel(r'$x^*=\\frac{t*g}{2*v0*sin(\\theta)}$')\nylabel(r'$y^*=x^*tan(\\theta)(1-x^*)$') \nshow()\nsavefig('1c.png')\n"
},
{
"alpha_fraction": 0.7755101919174194,
"alphanum_fraction": 0.7755101919174194,
"avg_line_length": 48,
"blob_id": "1075a70b2223dd1607197ef0cf63166dfe47b4bf",
"content_id": "1a31bcba5612ad42ab1f10629fbed7b34b6d9ceb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 196,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 4,
"path": "/README.md",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "# Physics_at_UiO\n\nHere you find the courses I took involving coding when I studied physics at UiO. \nIt's a description in each folder to inform what kind of course it was and what I learnt there.\n"
},
{
"alpha_fraction": 0.39269745349884033,
"alphanum_fraction": 0.4545454680919647,
"avg_line_length": 29.477272033691406,
"blob_id": "bab2a6bdf25b76183b0d6ed126138d8a595b5525",
"content_id": "e6da859357084c97f5d1135cd345f00f6bb0969c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1342,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 44,
"path": "/FYS2130/Project/oppg8.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom classRungeKutta4 import*\n\n#konstanter\nk = 0.475 #fjaerkraft [N/m]\ng = -9.81 #m/s^2\nT = 20.0 #tid [s]\ndt = 1e-4 #tidssteg\nN = int(T/dt) #antall tidssteg\nb = 1e-3 #motstand [kg/s]\nxc = 2.5e-3\nbeta = 50 #s/m\nrho = 1e3 #kg/m^3\n#array\nPsi = np.linspace(5.5e-5,7.5e-5,100)\nt = np.linspace(0,T,N) #tid\nx = np.zeros([100,N]) #posisjon\nv = np.zeros([100,N]) #hastighet\nm = np.zeros([100,N]) #masse\nD = np.zeros([100,N])#tiden da draapen faller\nfor j,psi in enumerate(Psi):\n #initialbetingelser\n x[j][0] = 1e-3 \n v[j][0] = 1e-3\n m[j][0] = 1e-5\n for i in xrange(N-1):\n m[j][i+1] = m[j][i] + psi*dt\n dripp = Dripp(g,psi,b,k,m[j][i])\n solver = RungeKutta4(dripp)\n x[j][i+1],v[j][i+1] = solver(x[j][i],v[j][i],t[i],dt)\n if np.abs(x[j][i+1]) > xc:\n D[j][i] = t[i]\n dm = np.abs(beta*m[j][i+1]*v[j][i+1])\n if dm > m[j][i+1]:\n m[j][i+1] = 1e-5\n else:\n m[j][i+1] = m[j][i+1] + psi*dt - dm\n dx = ((3*dm**4)/(4*np.pi*rho*(m[j][i]**3)))**(1./3)\n if dx > np.abs(x[j][i+1]):\n dx = x[j][i+1]+1e-7\n x[j][i+1] = x[j][i+1] + dx\n \nnp.save('opg8',np.asarray([x,v,m,D]))\nnp.save('opg8t',np.asarray([t,Psi]))\n\n"
},
{
"alpha_fraction": 0.5800753235816956,
"alphanum_fraction": 0.6029539704322815,
"avg_line_length": 32.85293960571289,
"blob_id": "09cecc7597bed65fda07a55f0d6cd82a392c3bb7",
"content_id": "ffe696eb6827632461a4b4688ed3563b51aba479",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3453,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 102,
"path": "/INF1100/Project/SIR_class.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "class ProblemSIR():\n def __init__(self, nu, beta, S0, I0, R0, T):\n if isinstance(nu, (float,int)):\n self.nu = lambda t: nu\n elif callable(nu):\n self.nu = nu\n if isinstance(beta, (float,int)):\n self.beta = lambda t: beta\n elif callable(beta):\n self.beta = beta\n self.S0, self.I0, self.R0, self.T = S0, I0, R0, T\n \n def __call__(self, u, t):\n S, I, R = u\n return [-self.beta(t)*S*I, self.beta(t)*S*I - self.nu(t)*I,\\\n self.nu(t)*I]\n \n def initial_value(self):\n return self.S0, self.I0, self.R0\n \n def time_points(self,dt):\n import numpy as np\n self.dt = dt\n t = np.linspace(0,self.T, self.T/float(self.dt))\n return t\n\nclass SolverSIR():\n import ODEsolver as ODE\n def __init__(self, problem, dt):\n self.problem, self.dt = problem, dt\n\n def solve(self, method=ODE.RungeKutta4):\n import numpy as np\n self.solver = method(self.problem)\n ic = [self.problem.S0, self.problem.I0, self.problem.R0]\n self.solver.set_initial_condition(ic)\n n = int(round(self.problem.T/float(self.dt)))\n t = np.linspace(0, self.problem.T, n+1)\n u , self.t = self.solver.solve(t)\n self.S, self.I, self.R = u[:,0], u[:,1], u[:,2]\n\n def plot(self):\n import matplotlib.pyplot as plt\n S, I, R, t = self.S, self.I, self.R, self.t\n plt.plot(t,S,t,I,t,R)\n plt.legend(['Motagelig for sykdom', 'Smitta', 'Friske \"meldt\"'])\n plt.axis([0,60,0,2000])\n plt.xlabel('Dager')\n plt.ylabel('Personer')\n plt.title('SolverSIR')\n plt.show()\n\nif __name__ == '__main__': #lager denne fordi jeg skal importere i et annet program\n import ODEsolver as ODE, matplotlib.pyplot as plt\n \n def betta(t): #funksjon for beta\n betta = 0\n if t<=12:\n betta = 0.0005\n else:\n betta = 0.0001\n return betta\n\n dt = 0.5 #steg lengde\n problem = ProblemSIR(nu=0.1, beta=betta, S0=1500, I0=1, R0=0, T=60)\n solver = ODE.RungeKutta4(problem)\n solver.set_initial_condition(problem.initial_value())\n y, x = solver.solve(problem.time_points(dt))\n S = y[:,0]; I = y[:,1]; R = y[:,2]\n \n #plott for 
ProblemSIR\n plt.plot(x,S,x,I,x,R)\n plt.legend(['Motagelig for sykdom', 'Smitta', 'Friske \"meldt\"'])\n plt.axis([0,60,0,2000])\n plt.xlabel('Dager')\n plt.ylabel('Personer')\n plt.title('ProblemSIR')\n plt.show()\n\n #plott for SolverSIR\n prob = SolverSIR(problem,dt)\n prob.solve()\n prob.plot()\n\n\"\"\"\nNaar jeg sammenligner grafene fra ProblemSIR, SolverSIR og SIR.py\ner det forskjell paa de smitta.\nI SIR.py er max smitta oppe i ca 900 personer paa en gang.\nDet er en veldig liten forskjell paa grafene fra ProblemSIR og SolverSIR\ndisse har max smitta paa ca 750 personer paa engang.\nNaar vi ser paa antall motagelig for smitte ser vi at ikke alle har blitt smittet\ni ProblemSIR og SolverSIR. Det er igjen litt under 200 som ikke er blitt smittet.\n\njeg bruker false i terminate funksjonen min pga av dette staar i ODEsolveren:\nCompute solution u for t values in the list/array\ntime_points, as long as terminate(u,t,step_no) is False.\nterminate(u,t,step_no) is a user-given function\nreturning True or False. By default, a terminate\nfunction which always returns False is used.\n\nTerminal> python SIR_class.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.4296296238899231,
"alphanum_fraction": 0.5666666626930237,
"avg_line_length": 21.5,
"blob_id": "51ad81d6e07bf3cc8f4a8d97332ffc0808a23861",
"content_id": "fc697a58794c2e26bed925c19e0dc273351e9c08",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 270,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 12,
"path": "/INF1100/Exercises/Chapter_1/ball_print3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "v0 = 5\ng = 9.81\nyc = 0.2\nimport math as m\nt1 = (v0 - m.sqrt(v0**2 - 2*g*yc))/g\nt2 = (v0 + m.sqrt(v0**2 - 2*g*yc))/g\nprint 'At t=%g s and %g s, the hight is %g m' % (t1, t2, yc)\n\n\"\"\"\nTerminal>python ball_print3.py \nAt t=0.0417064 s and 0.977662 s, the hight is 0.2 m\n\"\"\"\n"
},
{
"alpha_fraction": 0.4438149333000183,
"alphanum_fraction": 0.4768649637699127,
"avg_line_length": 18.98113250732422,
"blob_id": "62f7f31cbc3ec7f6e5374197d905db63d32e9098",
"content_id": "11c9c8d5b08bbc5483a55275cb5b2299d0c5e8ee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1059,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 53,
"path": "/FYS-MEK1110/Oblig_5/newtonscradle.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import *\nfrom matplotlib.pyplot import * \n\ndef force(dx,d,k,q):\n if dx<d:\n F = k*abs(dx-d)**q\n else:\n F = 0.0\n return F\n# Modify from here -->\nN = 3 # nr of balls\nm = 0.5 # kg\nk = 5 # N/m\nq = 3\nd = 0.2 # m\nv0 = 2 # m/s\ntime = 3 # s\ndt = 0.001 # s\n# <-- to here\nn = int(round(time/dt))\nx = zeros((n,N),float)\nv = x.copy()\nt = zeros(n,float)\n# Initial conditions\nfor i in range(N):\n x[0,i] = d*i\nv[0,0] = v0\nfor i in range(n-1):\n # Find force in vector F\n F = zeros(N,float)\n for j in range(1,N):\n dx = x[i,j] - x[i,j-1]\n F[j] = F[j] + force(dx,d,k,q)\n for j in range(N-1):\n dx = x[i,j+1] - x[i,j]\n F[j] = F[j] - force(dx,d,k,q)\n # Euler-Cromer vectorized step\n a = F/m\n v[i+1] = v[i] + a*dt\n x[i+1] = x[i] + v[i+1]*dt\n t[i+1] = t[i] + dt\nfor j in range(N):\n plot(t,v[:,j])\n if j==0:\n hold('on')\n if j==N-1:\n hold('off')\nprint 'v/v0 = ',v[n-1,:]/v0\nxlabel('Tid [s]')\nylabel('Hastighet [m/s]')\nlegend(['a','b','c'])\nsavefig('k2.png')\nshow()\n"
},
{
"alpha_fraction": 0.6017315983772278,
"alphanum_fraction": 0.6233766078948975,
"avg_line_length": 20,
"blob_id": "766da2095d420a5464878d8ed07540f4b7581c3a",
"content_id": "928eff2918814389dd4692713ed0b3515588fded",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 231,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 11,
"path": "/INF1100/Exercises/Chapter_3/makelist.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def makelist(start, stop, inc):\n value = start\n result =[]\n while value <= stop:\n result.append(value)\n value = value + inc\n return result\n\nmylist = makelist(0, 100, 5)\nimport pprint\npprint.pprint(mylist)\n"
},
{
"alpha_fraction": 0.47727271914482117,
"alphanum_fraction": 0.5909090638160706,
"avg_line_length": 8.777777671813965,
"blob_id": "555b7bccf79a7d08a16a1ce95c691de7c96f5953",
"content_id": "ae73d5771082837a6fa9cb0653131d39067b5ed1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 88,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 9,
"path": "/INF1100/Exercises/Chapter_1/konverterer_grader.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "C = 21\nF = (9.0/5)*C + 32\n\nprint F\n\n\"\"\"\nTerminal>python konverterer_grader.py \n69.8\n\"\"\"\n"
},
{
"alpha_fraction": 0.5997130274772644,
"alphanum_fraction": 0.6527976989746094,
"avg_line_length": 23.034482955932617,
"blob_id": "cd7431b62f18e6bcb0b7b23b8206322b3878129b",
"content_id": "a3bc8c93b0c0da86efa1f869959106962a715769",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 697,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 29,
"path": "/INF1100/Assigments/Chapter_5/f2c_shortcut_plot.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\ndef c(F): #Function for calculating aproximate celcius degrees\n return (F-30)/2.0\n\ndef C(F): #Function for calculating exact celcius degrees\n return (F-32.0)*(5/9.0)\n\nF = np.linspace(-20, 120, 29)\ny = np.zeros(len(F))\nx = np.zeros(len(F))\n\nfor i in xrange(len(F)): #Calculating celcius-degrees\n y[i] = c(F[i])\n x[i] = C(F[i])\n\nplt.plot(x, F)\nplt.plot(y, F)\nplt.axis([-30, 100, -20, 130]) #Celcius-min, Celcius-max, Farenheit-min, Farenheit-max\nplt.legend(['C = (F-32)*5/9.0)','c = (F-30)/2.0'])\nplt.title('Farenheit to Celcius')\nplt.xlabel('Celcius')\nplt.ylabel('Farenheit')\nplt.show()\n\n\"\"\"\nTerminal> python f2c_shortcut_plot.py\n\"\"\"\n"
},
{
"alpha_fraction": 0.5135135054588318,
"alphanum_fraction": 0.5955955982208252,
"avg_line_length": 29.272727966308594,
"blob_id": "a5f93a04d1b1d63c914d59b0b4a10d50582e4255",
"content_id": "70a9cb26aa13bff3b6503c8695a72e8908d02d79",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1000,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 33,
"path": "/FYS1120/LAB/oppgave3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport matplotlib.pyplot as plt\nimport functions as fx\nimport numpy as np\n\nh= np.linspace(0,10,6)/1e2 + 0.0001\nB= np.array([3., 1.807, 0.916, 0.696, 0.527, 0.492])/1e3\n\ndef B_func(h):\n '''\n Attempting to model the same data with a function, but doesn't appear to\n work too well unfortunately\n '''\n mu0= 4.*np.pi*1e-7\n a= 0.0375\n circumference= a*2.*np.pi\n N= 244\n length= circumference*N\n Js= 5./length\n t= .275\n return (mu0*Js/2.)*(((h+t)/((h+t)**2 + a**2))-(h/(np.sqrt(h**2 + a**2))))\n\nf= fx.least_squares_functions((h, B), ['np.log(x)','1./x'])\nx2= np.linspace(h[0], h[-1], 1000)\nplt.xlim(h[0], h[-1])\nplt.ylim(min(B)-0.2*min(B), 1.2*max(B))\nplt.title('Magnetiske feltet $B$ som en funksjon av avstanden $h$ fra senteret av en spole', size= 20)\nplt.xlabel('$h$ [m]', size= 15)\nplt.ylabel('$B$ [T]', size= 15)\nplt.plot(h, B, 'rx')\nplt.plot(x2, f(x2))\nplt.legend([u'Måleresultater','Interpolasjon'], prop={'size': 15})\nplt.show()\n"
},
{
"alpha_fraction": 0.5185185074806213,
"alphanum_fraction": 0.5648148059844971,
"avg_line_length": 17,
"blob_id": "c3b1899d45a745f77b8a6370a351b80d90412770",
"content_id": "08d31f8faef957c0eb1bbb13bb5a745191437538",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 108,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 6,
"path": "/INF1100/Exercises/Chapter_4/f2c_qa.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "F = raw_input('give F degrees')\nF = float(F)\n\nC = (F-32)*5.0/9\n\nprint '%g degrees F i %g degrees C' %(F, C)\n"
},
{
"alpha_fraction": 0.6293929815292358,
"alphanum_fraction": 0.664536714553833,
"avg_line_length": 16.38888931274414,
"blob_id": "393df1547aa1873c30477df61e984eb9435bc344",
"content_id": "bd246a097c8c8326d3025d0a55c8e65ce44e146e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 313,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 18,
"path": "/MEK1100/Oblig_1/strlin.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*;\nimport streamfun as st\n# i) n = 5\nx,y,psi = st.streamfun(5)\ncontour(x,y,psi)\nxlabel('x-akse')\nylabel('y-akse')\ntitle('Streamfun')\nsavefig('4a5.png')\nshow()\n# ii) n = 30\nx,y,psi = st.streamfun(30)\ncontour(x,y,psi)\nxlabel('x-akse')\nylabel('y-akse')\ntitle('Streamfun')\nsavefig('4a30.png')\nshow()\n"
},
{
"alpha_fraction": 0.35087719559669495,
"alphanum_fraction": 0.5087719559669495,
"avg_line_length": 13.125,
"blob_id": "963baa043d90255b397fb8ea89f6c7ff55491cec",
"content_id": "c4cc83ac2250977d1a7b73d53033f7854cb8052d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 114,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 8,
"path": "/INF1100/Exercises/Chapter_3/f2c_func.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def F(C):\n return (9.0/5.0)*C + 32\n\ntemp1 = F(15.5)\na = 10\ntemp2 = F(a)\nprint F(a+1)\nsum_temp = F(10) + F(20)\n\n"
},
{
"alpha_fraction": 0.4907975494861603,
"alphanum_fraction": 0.5368098020553589,
"avg_line_length": 14.523809432983398,
"blob_id": "600f48301b9b7f34aea36828443f9d03facb03e9",
"content_id": "a1522d98a5a3bb062ff2027f8818a9ab56e56dd8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 326,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 21,
"path": "/FYS2150/Lengde, hastighet og akselerasjon/linjetilpasning.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\na = 3.5\nb = 5.0\nx = np.linspace(0.0,2.0,21)\ny = a+float(b)*x + np.random.randn(1,len(x))\ny = y[0]\np = np.polyfit(x,y,1)\n\nm = p[0]\nc = p[1]\n\nyline = np.polyval(p,x)\n\nif __name__ == '__main__':\n plt.plot(x,y,'*')\n plt.plot(x,yline,'-')\n plt.show()\n\n print m\n print c\n"
},
{
"alpha_fraction": 0.5119549632072449,
"alphanum_fraction": 0.5691514015197754,
"avg_line_length": 29.042253494262695,
"blob_id": "dc9981ba24dc499288e2a51f48fa464d5aaf5992",
"content_id": "3e833781b06e154f3bb9abc152f028238455bec4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2133,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 71,
"path": "/INF1100/Assigments/Chapter_7/geometric_shapes.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "class rectangle:\n \n def __init__(self, llc, w, h):\n self.llcorner, self.W, self.H = llc, w, h\n\n def area(self):\n return self.W * self.H\n\n def perimeter(self):\n return 2*self.W + 2*self.H\n\n\"\"\"\nTriangle\nformula 1/2|x2y3 - x3y2 - x1y3 + x3y1 + x1y2 - x2y1|\n\"\"\"\n\nclass triangle:\n\n def __init__(self, vert0, vert1, vert2):\n self.A, self.B, self.C = vert0, vert1, vert2\n \n def area(self):\n a = {1: self.A, 2: self.B, 3: self.C} #lagrer vektorene i en dictionary\n return 0.5*abs(a[2][0]*a[3][1]-a[3][0]*a[2][1]-a[1][0]*a[3][1] + \\\n a[3][0]*a[1][1]+a[1][0]*a[2][1] -a[2][0]*a[1][1])\n\n def perimeter(self):\n import numpy as np #det er enklere med array naar jeg regner med vektorer\n A = np.array(self.A); B = np.array(self.B); C = np.array(self.C)\n a = A-B; b = B-C; c = A-C\n return np.sqrt(sum(a**2)) + np.sqrt(sum(b**2)) + np.sqrt(sum(c**2))\n\ndef test_rectangle():\n q = rectangle((3,2), 6, 2)\n comp1, comp2 = q.area(), q.perimeter()\n ex1, ex2 = 12, 16\n tol = 1E-14\n success = abs(comp1-ex1) < tol and abs(comp2-ex2) < tol\n msg = 'there is a bug in rectangle'\n assert success, msg\n\ndef test_triangle():\n z = triangle((1,1), (4,1), (1,5))\n comp1, comp2 = z.area(), z.perimeter()\n ex1, ex2 = 6, 12\n tol = 1E-14\n success = abs(comp1-ex1) < tol and abs(comp2-ex2) < tol\n msg = 'somthing is wrong with triangle function!!'\n assert success, msg\n \ntest_rectangle()\ntest_triangle()\n\na = rectangle((0,0), 3, 5)\nprint '''A rectangle with width %g ang hight %g has an area = %g.\nThe left corner of the rectangle is located at %s''' % (a.W, a.H, a.area(), a.llcorner)\n\nprint\n\nr = triangle((2,3), (6,4), (4,7))\nprint '''A triangle with the cordinates %s, %s, %s,\nhas an area = %g a perimiter = %g''' % (r.A, r.B, r.C, r.area(), r.perimeter())\n\n\"\"\"\nTerminal >python geometric_shapes.py \nA rectangle with width 3 ang hight 5 has an area = 15.\nThe left corner of the rectangle is located at (0, 0)\n\nA triangle with the cordinates (2, 
3), (6, 4), (4, 7),\nhas an area = 7 a perimiter = 12.2008\n\"\"\"\n"
},
{
"alpha_fraction": 0.5784313678741455,
"alphanum_fraction": 0.584967315196991,
"avg_line_length": 15.105262756347656,
"blob_id": "0bbd3f882bed94c9a96bb01b4f26fd18237f7892",
"content_id": "8ea21fd044205c9b7867079f2d140d4634c61ba9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 306,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 19,
"path": "/INF1100/Exercises/Chapter_5/read_2columns.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\n\ninfile = open('xy.dat','r')\n\nx = []\ny = []\n\nfor line in infile:\n word = line.split\n x.append(float(words[0])\n y.append(float(word[1])\n\ninfile.close()\n\nmean_y = sum(y)/len(y)\nprint \"Mean = %g, min = %g, max = %g\" %(mean_y, min(y), max(y)\n\nplt.plot(x, y)\nplt.show()\n"
},
{
"alpha_fraction": 0.4871428608894348,
"alphanum_fraction": 0.5028571486473083,
"avg_line_length": 22.16666603088379,
"blob_id": "0e3aabc17cee9b1f92e68bdc444b8ca31d1c2e01",
"content_id": "907e99515eb149b3a532e9afd1923ebbf84dc0cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 700,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 30,
"path": "/INF1100/Exercises/Chapter_4/f2c_file_read_write.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "def f(f):\n return (f - 32)*(5./9)\n\nf_deg = []\n\nwith open('temperature.dat', 'r') as infile:\n for i in range(3):\n infile.readline()\n for line in infile:\n lst = line.split()\n f_deg.append(float(lst[2]))\n\n#print f_deg\n\nc_deg = []\n\nfor i in range(len(f_deg)):\n C = f(f_deg[i])\n c_deg.append(C)\n\n#print c_deg\n\nwith open('f_c.dat','w') as outfile:\n outfile.write('hei\\n') #\\n betyr linje skift\n outfile.write('\\n')\n outfile.write('-------------------------- \\n')\n outfile.write('farenheit | celcius\\n')\n outfile.write('--------------------------\\n')\n for i in range(len(f_deg)):\n outfile.write(('%6.2f, %10.2f \\n') %(f_deg[i], c_deg[i]))\n \n"
},
{
"alpha_fraction": 0.3206896483898163,
"alphanum_fraction": 0.5896551609039307,
"avg_line_length": 10.600000381469727,
"blob_id": "ae2379331d16b1ea3102090b6dce923a3430916b",
"content_id": "6e9b76d1acd03957f715cc44d1015bb5f167ece9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 290,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 25,
"path": "/MAT-INF1100/Oblig_1/oblig1_3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "n = eval(raw_input('n? '))\ni = eval(raw_input('i? '))\n\ns = 1\n \nfor j in range(1,(n+1-i)):\n s = s * float(i+j)/float(j)\n\nprint '%.14e' %s\n\n\"\"\"\nTerminal> python oblig1_3.py \nn? 9998\ni? 4\n416083629102505\n \nn? 100000\ni? 70\n8.14900007813826e+249\n\nn? 1000\ni? 500\n2.70288240945437e+299\n\n\"\"\"\n"
},
{
"alpha_fraction": 0.6473029255867004,
"alphanum_fraction": 0.6763485670089722,
"avg_line_length": 14.0625,
"blob_id": "88605673d51ce737987a10be674af2bdebb49e42",
"content_id": "fc149720bed26fe93fd7bef7603fba9a46742d22",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 241,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 16,
"path": "/INF1100/Exercises/Chapter_5/plot.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import linspace, sin\nfrom matplotlib.pyplot import plot, show\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(x):\n return sin(x)/x\n\nn = 50\nx_min = -10\nx_max = 10\n\nx = linspace(x_min, x_max, n+1)\n\nplot(x, f(x))\nshow()\n"
},
{
"alpha_fraction": 0.5219665169715881,
"alphanum_fraction": 0.5679916143417358,
"avg_line_length": 24.157894134521484,
"blob_id": "b6d85cd205c9eb4f50e8eae83443ceae9a1e7975",
"content_id": "d924d1a2f6ba5cf4d80db9a27a45ed190243a2c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 956,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 38,
"path": "/INF1100/Assigments/Chapter_4/ball_cml_qa.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import sys\n\ndef y(v0, t): #function\n g = 9.81 #gravity on earth\n Fy = v0*t - 0.5*g*t**2 #formula\n return Fy\n\ndef test_y(): #test if the function is working\n computed = y(2, 1)\n expected = -2.905\n tol = 1e-10\n success = (computed - expected) < tol\n msg = 'something is wrong'\n assert success, msg\ntest_y() #calling the test\n\nif len(sys.argv) < 2:\n v0 = raw_input('initial velocity? '); v0 = float(v0)\n t = raw_input('time? '); t = float(t) \n b = y(v0, t) #calling the function\n print b\nelse:\n v0 = sys.argv[1]; v0 = float(v0) #fetching the first number from the command line\n t = sys.argv[2]; t = float(t) #fetching the second number from the command line\n h = y(v0, t) #calling the function\n print h\n\n\"\"\"\nTerminal> python ball_cml_qa.py\ninitial velocity? 4\ntime? 0.2\n0.6038\n\nor\n\nTerminal> python ball_cml_qa.py 4 0.2\n0.6038\n\"\"\"\n"
},
{
"alpha_fraction": 0.47670549154281616,
"alphanum_fraction": 0.539933443069458,
"avg_line_length": 28.317073822021484,
"blob_id": "64528327f162c0b9db8f8577324f516da6d58b30",
"content_id": "389f870001d78ad34c15c06dca2cefa60e4bbd29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1202,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 41,
"path": "/INF1100/Exercises/Chapter_3/roots_quadratic1.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "# equation ax**2 + bx + c = 0\n# abc formula x = (-b +- sqrt(b**2 - (4*a*c)))/2*a\n\nfrom cmath import sqrt as csqrt # roots for complex numbers\nfrom math import sqrt # roots for regular numbers\n\ndef roots(a, b, c):\n if (b**2 - (4*a*c)) < 0: # if this is true its complex number\n x1 = (-b + csqrt(b**2 - (4*a*c)))/2*a\n x2 = (-b - csqrt(b**2 - (4*a*c)))/2*a\n else: # if it is false it is a regular number\n x1 = (-b + sqrt(b**2 - (4*a*c)))/2*a\n x2 = (-b - sqrt(b**2 - (4*a*c)))/2*a\n return x1, x2\n\ndef test_roots_complex(): #test function for complex roots\n x1,x2 = roots(1, -2, 5)\n expected1, expected2 = (1+2j), (1-2j)\n tol = 1e-10\n success = abs((x1 + x2) - (expected1 + expected2)) < tol\n if not success:\n print 'FAIL!!'\n\ndef test_roots_floats(): #test function for regular roots\n x1,x2 = roots(1, -4, 3)\n expect1, expect2 = (3.0), (1.0)\n success = abs((x1 + x2) - (expect1 + expect2)) < 1e-10\n msg = 'Not right!'\n assert success, msg\n\ntest_roots_complex()\ntest_roots_floats()\n\ny = roots(1,-2, 5)\nx = roots(1, -4, 3)\nprint x,y\n\n\"\"\"\nTerminal>python roots_quadratic.py \n(3.0, 1.0) ((1+2j), (1-2j))\n\"\"\"\n"
},
{
"alpha_fraction": 0.8333333134651184,
"alphanum_fraction": 0.8333333134651184,
"avg_line_length": 47,
"blob_id": "2048e2fc00468ffbff1efdfa45f7a72b01eb1577",
"content_id": "420d7f46c04446f9e97d0dee2eb23ce5889f1fd2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 48,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 1,
"path": "/FYS1120/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "In this course i learnt about electromagnetism.\n"
},
{
"alpha_fraction": 0.5159944295883179,
"alphanum_fraction": 0.5938804149627686,
"avg_line_length": 23.79310417175293,
"blob_id": "d3e0c251dfe71eb95639d61b501dc036c6351f15",
"content_id": "cdb46639ff454d7d31b81d27f2c16d888192744c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 719,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 29,
"path": "/MEK1100/Oblig_2/oblig2c.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from oblig2 import*\nfrom seaborn import*\n#arrow-plot\nn=11\nquiver(\n x[::n, ::n],\n y[::n, ::n],\n u[::n, ::n],\n v[::n, ::n],\n units='width', width = 0.0015)\n#Rectangles\ndef rektangel(xi,yi,xj,yj):\n x1 = x[yi][xi]; x2 = x[yj][xj]\n y1 = y[yi][xi]; y2 = y[yj][xj]\n plot([x1,x2],[y1,y1], color='red')\n plot([x2,x1],[y2,y2], color='blue')\n plot([x1,x1],[y1,y2], color='black')\n plot([x2,x2],[y2,y1], color='green')\nrektangel(35,160,70,170)\nrektangel(35,85,70,100)\nrektangel(35,50,70,60)\n#seperate flat\nplot(xit,yit,'*',color='yellow')\n#Giving name to the axes and sets a title.\nxlabel('X-akse')\nylabel('Y-akse')\ntitle('Vektor pil plott av hastigheten')\nsavefig('2c.png')\nshow() #Shows the plot\n"
},
{
"alpha_fraction": 0.5241000652313232,
"alphanum_fraction": 0.5979255437850952,
"avg_line_length": 31.780000686645508,
"blob_id": "2cb8cf031adda9861699fd836d586ae24a5ec051",
"content_id": "3e5bb3bc17248649f031892f7f42b2e738706cec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1639,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 50,
"path": "/AST2000/Oblig_B/B7.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom Solmal import*\nfrom seaborn import*\n\nAu = 149597870691 #Astronomisk enhet i meter\nSM = 1.98892*10**30 #1 solmasse i kg\n\ntime = 0.5 #tidssteg dt\ntmax = 330.0 #aar\nn = int(round(tmax/time)) #antall tidssteg\nn_tid = zeros(n)\ng = 4*pi*pi #Gravitasjons konstant i [AU]\n\nplanetPos = zeros((2,nr_planeter,n))\nxhast = zeros((2,nr_planeter,n))\nyhast = zeros((2,nr_planeter,n))\n\nfor i in xrange(n-1):\n for j in xrange(nr_planeter):\n n_tid[i+1] = n_tid[i] + time\n planetPos[0,j,0] = pl_x0[j]\n planetPos[1,j,0] = pl_y0[j]\n xhast[0,j,0] = pl_vx0[j]\n yhast[1,j,0] = pl_vy0[j]\n #Euler-Cromer\n r = sqrt(planetPos[0,j,i]**2+planetPos[1,j,i]**2)\n ax = -((g*solMass)/(r**3))*planetPos[0,j,i] \n ay = -((g*solMass)/(r**3))*planetPos[1,j,i]\n xhast[0,j,i+1] = xhast[0,j,i] + ax*time\n yhast[1,j,i+1] = yhast[1,j,i] + ay*time\n planetPos[0,j,i+1] = planetPos[0,j,i] + xhast[0,j,i+1]*time\n planetPos[1,j,i+1] = planetPos[1,j,i] + yhast[1,j,i+1]*time\n \nplot(planetPos[0,0,:],planetPos[1,0,:])\nplot(planetPos[0,1,:],planetPos[1,1,:])\nplot(planetPos[0,2,:],planetPos[1,2,:])\nplot(planetPos[0,3,:],planetPos[1,3,:])\nplot(planetPos[0,4,:],planetPos[1,4,:])\nplot(planetPos[0,5,:],planetPos[1,5,:])\nplot(planetPos[0,6,:],planetPos[1,6,:])\nplot(planetPos[0,7,:],planetPos[1,7,:])\nplot(0,0,'o',color='yellow')\ntitle('Solsystem \"Pjokknes\"')\naxis((-80,110,-80,80))\nlegend(['Hjemplanet','planet2','planet3','planet4','planet5','planet6','planet7','planet8', 'Stjerna'])\nxlabel('[AU]')\nylabel('[AU]')\nshow()\n \n#system.orbit_xml(planetPos,n_tid)\n"
},
{
"alpha_fraction": 0.38002297282218933,
"alphanum_fraction": 0.4592422544956207,
"avg_line_length": 33.7599983215332,
"blob_id": "732375f1b5f0621876f15b9a4bde6ed427a34c82",
"content_id": "9a97268dc1414d031361ec857da54e4953070dd8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 871,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 25,
"path": "/FYS-MEK1110/Oblig_1/Oblig13.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#k)\nimport numpy as np, matplotlib.pyplot as plt\n\nF = 400; fc = 488; fv = 25.8; m = 80.0; p = 1.293; A = 0.45; Cd = 1.2; w = 0\ntime = 9.3; dt = 1./1000; n = int(time/dt); tc = 0.67\na = np.zeros(n)\nx = np.zeros(n)\nv = np.zeros(n)\nt = np.zeros(n)\nv[0] = 0; x[0] = 0; t[0] = 0\n\nD = lambda t,v: A*(1 - 0.25*np.exp(-(t/tc)**2))*0.5*p*Cd*(v-w)**2\nFv = lambda v: v*fv\nFc = lambda t: fc*np.exp(-(t/tc)**2)\n \nfor i in range(int(n-1)):\n a[i] = (F + Fc(t[i]) - Fv(v[i]) - D(t[i],v[i]))/m\n v[i+1] = v[i] + a[i]*dt\n x[i+1] = x[i] + v[i+1]*dt\n t[i+1] = t[i] + dt\nFnet = F + Fc(t) - Fv(v) - D(t,v) \nplt.plot(t,Fc(t),t,Fv(v),t,D(t,v),[0,9.3],[400,400])\nplt.legend(['Initial Driving force', 'Physioligical limit','Air resistance','Driving Force'])\nplt.axis([0,9.3,-10,600])\nplt.show()\n\n\n"
},
{
"alpha_fraction": 0.4489051103591919,
"alphanum_fraction": 0.4771897792816162,
"avg_line_length": 38.10714340209961,
"blob_id": "cabab26ed9ee93e3a971ebcaefaf3c07f6220351",
"content_id": "b6433c5eceeaa85f3c6a547e7c76bf1c5e6803cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1096,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 28,
"path": "/FYS2150/linjetilpass.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\nclass Linear:\n def __init__(self, x, y):\n self.x, self.y = x, y\n\n def Gen_linje(self): #y = mx + c\n x = np.array(self.x); y = np.array(self.y)\n x_ = np.mean(x); y_ = np.mean(y) #gjennomsnitt\n x2 = np.sum(np.square(x)); y2 = np.sum(np.square(y))\n D = x2 - 1./(len(x))*(np.sum(x))**2\n E = np.sum(x*y)-1./(len(x))*(np.sum(x)*np.sum(y))\n F = y2 - 1./(len(y))*(np.sum(y))**2\n m = float(E)/D #stigningstallet\n c = y_ - m*x_ #konstantleddet\n dm = np.sqrt(1.0/(len(x)-2)*((D*F-E**2)/D**2))\n dc = np.sqrt(1.0/(len(x)-2)*(D/float(len(x))+x_**2)*((D*F-E**2)/D**2))\n d = np.zeros(len(x)) #residual\n for i in xrange(len(x)):\n d[i] = y[i]-m*x[i]-c\n return c,m,dc,dm,d\n\n def Lin_gjen_origo(self):\n x = np.array(self.x); y = np.array(self.y)\n x2 = np.sum(np.square(x)); y2 = np.sum(np.square(y))\n xy = np.sum(x*y)\n m = xy/float(x2) #stigningstall\n dm = 1.0/(len(x)-1)*((x2*y2- xy**2)/x2**2) #usikkerhet i stigningtall\n return m,dm\n\n"
},
{
"alpha_fraction": 0.5028248429298401,
"alphanum_fraction": 0.5649717450141907,
"avg_line_length": 7.849999904632568,
"blob_id": "089b5c5fe943ab45da7ea3cb14a97ec0fab1a054",
"content_id": "584528a636b4c7ce3838b383967b2b2cb55d93aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 177,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 20,
"path": "/INF1100/Exercises/Chapter_2/odd.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "#formula:\n#start with i=1 set i=i+2, repeat until i=n\n\nn = 9\nodd = 1\n\nprint odd\n\nwhile odd < n-1:\n odd = odd + 2\n print odd\n \n\"\"\"\nTerminal>python odd.py \n1\n3\n5\n7\n9\n\"\"\"\n"
},
{
"alpha_fraction": 0.36751434206962585,
"alphanum_fraction": 0.5693191289901733,
"avg_line_length": 31.932432174682617,
"blob_id": "56030df730d4a32961d9df4736abc1563937f615",
"content_id": "cb872517b7c364f14016dcdf17fab4ae0315f662",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2438,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 74,
"path": "/INF1100/Assigments/Chapter_4/ball_file_read_write.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\ndef extract_data(filename): #function for sample file 'ball.dat'\n time = [] #time values\n with open(filename, 'r') as infile:\n lst = infile.readline().split()\n v0 = (eval(lst[-1])) #initial velocity\n infile.readline()\n tid = [] #making a nested list\n for line in infile:\n t = line.split()\n tid.append(t[0:])\n for t in range(len(tid)):\n for w in range(len(tid[t])): \n ti = float(tid[t][w]) #breaking up the nested list to one list\n time.append(ti)\n time.sort() #sorting the t values in an increasing order.\n return v0, time\n\ndef test_extract_data():\n v0, time = extract_data('ball.dat')\n t_expect = [0.042, 0.0519085, 0.10262264, 0.1117, 0.15592, 0.17383923,\\\n 0.2094294, 0.21342619, 0.21385894, 0.27, 0.28075, 0.29584013,\\\n 0.3464815, 0.35, 0.36807889, 0.372985, 0.39325246, 0.50620017,\\\n 0.528, 0.53012, 0.57681501876, 0.57982969]\n t_computed = time\n v_expect = 3\n v_computed = v0\n tol = 1E-14\n success = (v_expect - v_computed) < tol and t_expect == t_computed\n msg = 'something went wrong'\n assert success, msg\n\ntest_extract_data() #calling test function\n\nv0, time = extract_data('ball.dat') #calling function\n \nwith open('ball_file_write.dat', 'w') as outfile: #writing a new file\n outfile.write('-------------------------- \\n') \n outfile.write(' Time (s) | Y value (m)\\n')\n outfile.write('--------------------------\\n')\n for t in time: #implementing each time value in the formula \n g = 9.81\n y = v0*t - 0.5*g*t**2 #calculating y(t)\n outfile.write(('%10.6f | %7.6f\\n') %(t, y)) #writing the result in colums\n\"\"\"\nnothing appears in the terminal window.\nprogram writes a new file based on sample file ball.dat:\n\n'ball_file_write.dat'\n-------------------------- \n Time | Y value\n--------------------------\n 0.042000 | 0.117348\n 0.051909 | 0.142509\n 0.102623 | 0.256211\n 0.111700 | 0.273901\n 0.155920 | 0.348514\n 0.173839 | 0.373288\n 0.209429 | 0.413152\n 0.213426 | 0.416852\n 0.213859 | 0.417243\n 0.270000 | 
0.452425\n 0.280750 | 0.455635\n 0.295840 | 0.458228\n 0.346481 | 0.450602\n 0.350000 | 0.449137\n 0.368079 | 0.439697\n 0.372985 | 0.436582\n 0.393252 | 0.421211\n 0.506200 | 0.261750\n 0.528000 | 0.216564\n 0.530120 | 0.211922\n 0.576815 | 0.098475\n 0.579830 | 0.090416\n\"\"\"\n"
},
{
"alpha_fraction": 0.46962615847587585,
"alphanum_fraction": 0.5327102541923523,
"avg_line_length": 15.461538314819336,
"blob_id": "bf36bb65b6fc15199482da56c16ae161e10834b4",
"content_id": "5ba8a78a19eb36040cdf406f8ba94e334149a609",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 428,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 26,
"path": "/INF1100/Exercises/Chapter_5/plot_ball3.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from numpy import *\nfrom matplotlib.pyplot import *\nimport sys\n\nv0_list = sys.argv[1:]\ng = 9.81\n\nmax_t = 0\nmax_y = 0\n\nfor v0 in v0_list:\n v0 = float(v0)\n t = linspace(0, 2*v0/g, 100)\n if max(t) > max_t:\n max_t = max(t)\n y = v0*t - 0.5*g*t**2\n if max(y) > max_y:\n max_y = max(y)\n plot(t,y,label='v0=%g' %v0)\n\nxlabel('time (s)')\nylabel('heigth (m)')\nlegend()\naxis([0, max_t, 0, 1.1*max_y])\n\nshow()\n"
},
{
"alpha_fraction": 0.578711986541748,
"alphanum_fraction": 0.613595724105835,
"avg_line_length": 22.76595687866211,
"blob_id": "8c419a00a113caeff4cd9d58956b681e5f73ac37",
"content_id": "07376b5efdeaaab939a2ccf6adce987ca57174e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1118,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 47,
"path": "/INF1100/Assigments/Chapter_4/ball_qa.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "\nprint '''\nOnly insert numbers\n'''\n\ndef y(v0, t):\n g = 9.81 #gravity on earth\n Fy = v0*t - 0.5*g*t**2 #formula\n return Fy\n\ndef test_y(): #test if the function is working\n computed = y(2, 1)\n expected = -2.905\n tol = 1e-10\n success = (computed - expected) < tol\n msg = 'something is wrong'\n assert success, msg\ntest_y() #calling the test\n\nv0 = raw_input('What is the start velocity (m/s)? ')\nv0 = float(v0) #making sure the input is a float\n\nt = raw_input('How many seconds have gone? ')\nt = float(t) #making sure the input is a float\n\ndef test_y(): #test if the function is working\n computed = y(2, 1)\n expected = -2.905\n tol = 1e-10\n success = (computed - expected) < tol\n msg = 'something is wrong'\n assert success, msg\ntest_y() #calling the test\n\nq = y(v0,t) #calling the function\nprint '''\nThe ball is now %.3f meters in the air''' %(q) #printing the result\n\n\"\"\"\nTerminal> python ball_qa.py \n\nOnly insert numbers\n\nWhat is the start velocity (m/s)? 6.8\nHow many seconds have gone? 1.2\n\nThe ball is now 1.097 meters in the air\n\"\"\"\n"
},
{
"alpha_fraction": 0.47858721017837524,
"alphanum_fraction": 0.5620309114456177,
"avg_line_length": 30.8873233795166,
"blob_id": "e602e350cf3f4a1bcfa45adb6a2fe5ded2b87dd3",
"content_id": "41136bb6d1defb0601b5560cd2d0c403cec4a8fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2265,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 71,
"path": "/AST2000/Oblig_C/C5.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "from pylab import*\nfrom Solmal import*\nfrom seaborn import*\n\ntime = 0.05 #tidssteg dt\ntmax = 330.0 #aar\nn = int(round(tmax/time)) #antall tidssteg\nn_tid = zeros(n)\ng = 4*pi*pi #Gravitasjons konstant i [AU]\n\n#Center of Mass\nm_planeter = [Mass[0], Mass[2], Mass[3]] #sum = 6.2e^-4 [solmasser]\npX_0 = [pl_x0[0],pl_x0[2],pl_x0[3]] #x_0 planeter\npY_0 = [pl_y0[0],pl_y0[2],pl_y0[3]] #y_0 planeter\nvX_0 = [pl_vx0[0],pl_vx0[2],pl_vx0[3]] #vx_0 planeter\nvY_0 =[pl_vy0[0],pl_vy0[2],pl_vy0[3]] #vy_0 planeter\n#Stjerne \nStarPos = zeros((2,1,n))\ns_hast = zeros((2,1,n))\nM = solMass\n#Planeter\nplanetPos = zeros((2,len(m_planeter),n))\nhast = zeros((2,len(m_planeter),n))\n\nfor i in xrange(n-1):\n for j in xrange(len(m_planeter)):\n n_tid[i+1] = n_tid[i] + time\n planetPos[0,j,0] = pX_0[j]\n planetPos[1,j,0] = pY_0[j]\n hast[0,j,0] = vX_0[j]\n hast[1,j,0] = vY_0[j]\n \n r = sqrt(planetPos[0,j,i]**2+planetPos[1,j,i]**2)\n #Krefter\n F1 = (G*(M*m_planeter[0])/r**3)*(planetPos[:,0,i]-StarPos[:,0,i])\n F2 = (G*(M*m_planeter[1])/r**3)*(planetPos[:,1,i]-StarPos[:,0,i])\n F3 = (G*(M*m_planeter[2])/r**3)*(planetPos[:,2,i]-StarPos[:,0,i])\n #Euler-Cromer\n a = -((g*solMass)/(r**3))*planetPos[:,j,i] \n hast[:,j,i+1] = hast[:,j,i] + a*time\n planetPos[:,j,i+1] = planetPos[:,j,i] + hast[:,j,i+1]*time\n\n \n #Euler-Cromer, star\n a_star = (F1+F2+F3)/M \n s_hast[:,0,i+1] = s_hast[:,0,i] + a_star*time\n StarPos[:,0,i+1] = StarPos[:,0,i] + s_hast[:,0,i+1]*time\n\n\nvr = s_hast[0,0]-mean(s_hast[0,0])\nmu = 0\nsigma = max(vr)/5.0 \nnois = normal(mu,sigma,size=len(n_tid))\n\nplot(StarPos[0,0,:],StarPos[1,0,:]) \nplot(planetPos[0,0,:],planetPos[1,0,:])\nplot(planetPos[0,1,:],planetPos[1,1,:])\nplot(planetPos[0,2,:],planetPos[1,2,:])\nlegend(['stjerne','planet1','planet2','planet3'],loc =\"best\")\nxlabel('[AU]'); ylabel('[AU]')\n#savefig('planet_sol_bane.png')\nshow()\nplot(n_tid,(vr+nois))\nplot(n_tid,vr)\nxlabel('Tid 
[aar]')\nylabel('[AU]')\n#savefig('uten_nois.png')\nshow()\n#print (m_planeter[0]*1.98892*10**30)/float(1.89813*10**(27))\n#print (m_planeter[1]*1.98892*10**30)/float(1.89813*10**(27))\n#print (m_planeter[2]*1.98892*10**30)/float(1.89813*10**(27))\n\n"
},
{
"alpha_fraction": 0.7766990065574646,
"alphanum_fraction": 0.7766990065574646,
"avg_line_length": 40.20000076293945,
"blob_id": "cf518e8e7a2c6a5e3c020573b92605c2784e174b",
"content_id": "59b21e1ee190a6bda22cb908125dc72b5810bea2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 206,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 5,
"path": "/INF1100/Project/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This project tested my programming skills. How I solved this project and the assigments \ndetermined if I could take an exam in this cours.\n\nOBS!! \nODEsolver.py is not my code, the code was provided by UiO!\n"
},
{
"alpha_fraction": 0.5679611563682556,
"alphanum_fraction": 0.6135922074317932,
"avg_line_length": 20.4375,
"blob_id": "aa5e93a9acacb7412753ed4dd89aeff20a647c5b",
"content_id": "a52d2bdf397c68924b8e89ba12b1529b5e2f977a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1030,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 48,
"path": "/FYS2130/Oblig_2/opg4d.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt, scipy as sp\nfrom classRungeKutta4 import*\n\n#konstanter\nm = 0.1\nk = 10.0\nb = 0.04\nF0 = 0.1\nN = 10**3\n#arrays\nz = np.zeros(N)\nv = np.zeros(N)\nt = np.linspace(0, 50, N)\n\n#initial betingelser\nz[0] = 0 #[m]\nv[0] = 0 #[m/s]\ndt = t[1]-t[0] #tidssteg\nomegaF0 = np.sqrt((k/m)-(b**2/(2*m**2)))\nomega = np.sqrt(k/m)\n\nomegaFs = np.linspace(0.5,1.5,300)*omegaF0\nresponses = np.zeros(len(omegaFs))\n\n\nfor j,omegaF in enumerate(omegaFs): \n drivenpendulum = DrivenPendulum(F0,omegaF,k,b,m)\n solver = RungeKutta4(drivenpendulum) \n\n #iterasjoner i Runge-Kutta 4\n for i in range(N-1):\n z[i+1], v[i+1] = solver(z[i],v[i],t[i],dt)\n siste = z[-300:-1]\n z_ = np.mean(siste*siste)\n Amplitude = 2*z_\n responses[j] = np.sqrt(Amplitude)\n\n #plott\n plt.plot(t,z)\n plt.xlabel('Tid[s]')\n plt.ylabel('Utslag[m]$')\n\nplt.figure()\nplt.plot(omegaFs,responses)\nplt.xlabel('$\\omega_F$')\nplt.ylabel('Amplitude/respons')\nplt.title('Frekvensrespons')\nplt.show()\n\n"
},
{
"alpha_fraction": 0.539667010307312,
"alphanum_fraction": 0.5935357213020325,
"avg_line_length": 24.524999618530273,
"blob_id": "60677ed037ee7941d481dcf1deb68b51aeb0da7a",
"content_id": "3766ebe176e68c6f700555149357936dda48b958",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1021,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 40,
"path": "/FYS2130/Oblig_2/opg4c.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\nfrom classRungeKutta4 import*\n\n#konstaner\nm = 1.0\nk1, k2, k3 = 10.0, 3.0, 5.0\nb1,b2 = 1.5, 10.0\nb3 = np.sqrt(k3*m)*2\nN = 10**3\n\n#arrays\nzo = np.zeros(N) #posisjon\nvo = np.zeros(N) #hastighet\nzk = np.zeros(N) #posisjon\nvk = np.zeros(N) #hastighet\nzu = np.zeros(N) #posisjon\nvu = np.zeros(N) #hastighet\nt = np.linspace(0,10,N) #Periode\ndt = t[1]- t[0] #Tidssteg\n\n#initialverdier\nzo[0] = zk[0] = zu[0] = 0.1\n\n\n#dempninger ...-kritisk\nunder = RungeKutta4(b1,k1,m)\nover = RungeKutta4(b2,k2,m)\nkritisk = RungeKutta4(b3,k3,m)\nfor i in range (N-1):\n zo[i+1],vo[i+1] = over.rk4(zo[i],vo[i],t[i],dt)\n zk[i+1],vk[i+1] = kritisk.rk4(zk[i],vk[i],t[i],dt)\n zu[i+1],vu[i+1] = under.rk4(zu[i],vu[i],t[i],dt)\n\nplt.plot(t,zu,'--',t,zo,'--',t,zk)\nplt.title('Demping')\nplt.legend(['Underkritisk','Overkritisk','Kritisk'])\nplt.xlabel('Tid[s]')\nplt.ylabel('Utsvingning[m]')\n#plt.savefig('kritiskesvingninger.png')\nplt.show()\n"
},
{
"alpha_fraction": 0.5714285969734192,
"alphanum_fraction": 0.6071428656578064,
"avg_line_length": 22.45945930480957,
"blob_id": "d1b3f9bc1a1723acdd4a7fd5cac4bf92878f2dc4",
"content_id": "2e48930a6ade0bee0ec77380313807ee59c954f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 868,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 37,
"path": "/INF1100/Assigments/Chapter_8/sum_ndice_fair.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import random, sys #imports random and sys\n\ntry:\n q = float(sys.argv[1])\n b = int(sys.argv[2])\nexcept IndexError:\n print \"Provide command-line arguments for stakes 'r' and N experiments\"\n sys.exit(1)\nexcept ValueError:\n print \"Wait what?!? insert 2 whole numbers r and N!!\"\n sys.exit(1)\n\nr = float(sys.argv[1]); n = int(sys.argv[2]) #fetches numbers from cml\n\nM = 0\nprob = 0\nfor i in range(n):\n s = 0\n for j in range(4):\n die = random.randint(1,6)\n s += die\n #print 'die =',die\n #print 'sum =', s\n if s < 9:\n M += r\n prob += 1\n\nprint 'You have won',(M - n), 'Euro'\nprint 'The probability of winning is', float(M)/n\n\n\"\"\"\nThe probability is ca 5% so r needs to be 1/0.05 = 20 for it to be a fair game.\n\nTerminal> python sum_ndice_fair.py 20 100\nYou have won 0.0 Euro\nThe probability of winning is 1.0\n\"\"\"\n"
},
{
"alpha_fraction": 0.5329236388206482,
"alphanum_fraction": 0.5996488332748413,
"avg_line_length": 26.585365295410156,
"blob_id": "45287f488ab699a0320e2c22e8a644e63807b923",
"content_id": "4a78a59c67e2085df1e75c85760141ce171009b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1139,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 41,
"path": "/FYS2130/Oblig_5/oblig5.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np, matplotlib.pyplot as plt\n\n#Genererer posisjons array\ndelta_x = 0.1\nx = np.linspace(-20,20,401)\nn = len(x)\n\n#Genrerer posisjonen ved t=0\nsigma = 2.0\nu = np.exp(-(x/(2*sigma))*(x/(2*sigma))) #gaussisk form\n#plt.plot(x,u)\n\n#Genererer div parametre og tidsderiverte av utslaget vd t=0\nv = 0.5 ; delta_t = 0.1\nfaktor = (delta_t*v/delta_x)**2\ndudt = (v/(2*sigma**2)*x*u)\n#dudt = -dudt\ndudt = dudt*0.5\n#dudt = 2*dudt\n#Angir effektive initialbetingelser\nu_jminus1 = u - delta_t*dudt\nu_j = u\nu_jpluss1 = np.zeros(n)\nN = 1000\n\nfor t in xrange(N):\n u_jpluss1[1:n-1] = (2*(1-faktor))*u_j[1:n-1] - u_jminus1[1:n-1] + faktor*(u_j[2:n]+u_j[0:n-2])\n #handtering av randproblemet, setter uj-1 = uj+1 = 0\n u_jpluss1[0] = (2*(1-faktor))*u_j[0]-u_jminus1[0] + faktor*u_j[1]\n u_jpluss1[n-1] = (2*(1-faktor))*u_j[n-1]-u_jminus1[n-1] + faktor*u_j[n-2]\n\n if t % 250 == 0:\n plt.plot(x,u_j)\n u_jminus1 = u_j.copy()\n u_j = u_jpluss1.copy()\n\nplt.legend(['$t=0$','$t=t+\\Delta t$','$t=t+2\\Delta t$','$t=t+3\\Delta t$'],loc = 'best')\nplt.title('0.5*du/dt')\nplt.ylabel('Utslag')\nplt.savefig('05dudt.png')\nplt.show()\n\n\n \n\n"
},
{
"alpha_fraction": 0.7880434989929199,
"alphanum_fraction": 0.8097826242446899,
"avg_line_length": 91,
"blob_id": "0d93cbb9049cabfccd4a417197687469588b9d94",
"content_id": "8a806fb88ce2e160c9727b35030c97a7cad56a09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 185,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 2,
"path": "/FYS-MEK1110/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "This course is about classical mechanics (Newston's laws of motion) and an introduction to relativity.\nSyllabus: Malthe-Sørensen, A (2015) Elementary Mechanics Using Python, Springer.\n"
},
{
"alpha_fraction": 0.5928571224212646,
"alphanum_fraction": 0.6428571343421936,
"avg_line_length": 16.5,
"blob_id": "cf1fd980d2e2e664f5812e894bd3998da1f395d7",
"content_id": "fde26f3e619fdd439632a87ffd9b8fc8fcb0246d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 140,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 8,
"path": "/INF1100/Exercises/Chapter_5/judge_plot.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\nx = np.linspace(0, 2, 22)\n#y = x*(2-x)\ny = np.cos(18*np.pi*x)\nimport matplotlib.pyplot as plt\nplt.plot(x, y)\nplt.show()\n"
},
{
"alpha_fraction": 0.8253968358039856,
"alphanum_fraction": 0.8253968358039856,
"avg_line_length": 62,
"blob_id": "467158cdae51d166a0e60c39bb62faf02b20b1a1",
"content_id": "b2b54b6ed30014b17b57d1e892a887fb3b8c843e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 63,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 1,
"path": "/MAT1120/Oblig2/README.txt",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "In this assigment I have used MATLAB to calculate the matrices\n"
},
{
"alpha_fraction": 0.6567505598068237,
"alphanum_fraction": 0.6590389013290405,
"avg_line_length": 19.809524536132812,
"blob_id": "d3ed7fe9095cbca40ffe03e3751b074ff9eb8e91",
"content_id": "5817b8e2de1c28486f59ccb808ecdde58e15ebe0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 437,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 21,
"path": "/INF1100/Exercises/Chapter_4/f2c_file_read.py",
"repo_name": "Kenn3Th/Physics_at_UiO",
"src_encoding": "UTF-8",
"text": "infile = open('temperature.dat','r')\n \n\"\"\" \nline = infile.readline() #reads first line in file\nline = infile.readline() #reads next line and so on\nline = infile.readline()\nline = infile.readline()\n\"\"\"\n\n\nfor i in range(4): # Easyer to use a for loop.\n infile.readline()\n\nline = infile.readline()\nline = line.split()\nprint line\n\n#for line in infile:\n# print line\n\ninfile.close() # this is for closing the file at the end.\n"
}
] | 202 |
jakobzhao/yelpscraper | https://github.com/jakobzhao/yelpscraper | 495b7fe99d6df0fd42fefe4507a37b90559c0684 | 0328cc3d80265072fae95d3361989005bcfade18 | 11d8492a37168b767b78c710df944e67de46372c | refs/heads/main | 2023-06-25T00:12:30.103726 | 2021-07-26T06:07:40 | 2021-07-26T06:07:40 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5938130617141724,
"alphanum_fraction": 0.6332660913467407,
"avg_line_length": 41.894229888916016,
"blob_id": "f96971969ba4a136bb03fcba3a5fd5542d9a03bf",
"content_id": "36abaf87a6b27b06b99385d20718f23f17bc4f18",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4461,
"license_type": "no_license",
"max_line_length": 390,
"num_lines": 104,
"path": "/main.py",
"repo_name": "jakobzhao/yelpscraper",
"src_encoding": "UTF-8",
"text": "# @author: Aniruddh Vardhan, Bo Zhao\n# @email: [email protected].\n# date: 13th June,2021\n# @description: Search black-owned restaurants on yelp using a web crawler\n\n# imports the required python libraries\nfrom selenium import webdriver\nfrom bs4 import BeautifulSoup\nimport time\nimport sqlite3\nfrom selenium.webdriver.chrome.options import Options\n\n# create a sqlite with https://sqlitebrowser.org/dl/\noptions = Options()\n# options.add_argument(\"window-size=1400,1000\")\noptions.add_argument(\"--start-maximized\")\ncity = \"New York\"\nstate = \"NY\"\n# This is the base url for extracting information\nbase_url = \"https://www.yelp.com/search?find_desc=Black%20Owned%20Restaurants&find_loc=\" + city + \"%2C%20\" + state + \"&start=\"\n\n# Creates a bot with a browser driver. The bot helps automate data collection.\nBotPath = \"C:\\workspace\\chromedriver.exe\"\n# bot = webdriver.Chrome(executable_path=\"assets/chromedriver.exe\")\nbot = webdriver.Chrome(options=options, executable_path=BotPath)\n\n\nconn = sqlite3.connect('assets/bor.db')\ncursor = conn.cursor()\n\n# bot gets the url which loads the web page\nbot.get(base_url + str(0))\n# Create a document object model (DOM) from the raw source of the crawled web page.\n# Since we are processing a html page, 'html.parser' is chosen.\nsoup = BeautifulSoup(bot.page_source, 'html.parser')\ntime.sleep(5)\n\npageNum = int(soup.find('div', class_='border-color--default__09f24__1eOdn text-align--center__09f24__1P1jK').text.split(\" \")[2])\n\nfor i in range(pageNum):\n if i != 0:\n time.sleep(5)\n bot.get(base_url + str(i*10))\n soup = BeautifulSoup(bot.page_source, 'html.parser')\n # helps get the individual pages of restaurant name and adds it to the yelp website url\n restaurants = soup.find_all('div', class_='container__09f24__21w3G hoverable__09f24__2nTf3 margin-t3__09f24__5bM2Z margin-b3__09f24__1DQ9x padding-t3__09f24__-R_5x padding-r3__09f24__1pBFG padding-b3__09f24__1vW6j padding-l3__09f24__1yCJf 
border--top__09f24__8W8ca border--right__09f24__1u7Gt border--bottom__09f24__xdij8 border--left__09f24__rwKIa border-color--default__09f24__1eOdn')\n\n # This loops through the individual pages of restaurant and finds where a restaurant provides delivery and/or takeaway.\n # The data is then put into their respective lists\n for restaurant in restaurants:\n reviewNum = 0\n bReviewNum = 0\n name = ''\n feature = ''\n landline = ''\n stars = 0.0\n self_identified = 0\n gives_delivery = 0\n gives_takeout = 0\n\n name = restaurant.find('a', class_='css-166la90').text\n features = restaurant.find('p', class_='css-1j7sdmt').text\n if restaurant.find('p', class_='css-8jxw1i') != None:\n landline = restaurant.find('p', class_='css-8jxw1i').text\n if restaurant.find('address') != None:\n address = restaurant.find('address').text\n else:\n try:\n address = restaurant.find('a', class_=\"css-ac8spe\").text\n except:\n address = \"\"\n # reviews css-n6i4z7 self-identified css-8yg8ez\n\n if restaurant.find(\"p\", class_=\"css-n6i4z7\") != None:\n self_identified = 0\n bReviewNum = int(restaurant.find(\"p\", class_=\"css-n6i4z7\").text.split(\" \")[0])\n else:\n self_identified = 1\n bReviewNum = -1\n try:\n reviewNum = int(restaurant.find('span', class_='css-e81eai').text)\n except:\n pass\n\n try:\n stars = float(restaurant.find('div', class_='i-stars__09f24__1T6rz').attrs[\"aria-label\"].split(\" \")[0])\n except:\n pass\n\n if restaurant.find(\"p\", class_=\"css-192a8l5\") != None:\n gives = restaurant.find(\"p\", class_=\"css-192a8l5\").text\n if \"delivery\" in gives:\n gives_delivery = 1\n if \"takeout\" in gives:\n gives_takeout = 1\n\n insert_record_sql = \"INSERT OR REPLACE INTO restaurants (name, address, city, state, features, reviewNum, bReviewNum, stars, identified, delivery, takeout) VALUES ('%s', '%s', '%s', '%s', '%s', %d, %d, %f, %d,%d,%d)\" % (name, address, city, state, features, reviewNum, bReviewNum, stars, self_identified, gives_delivery, gives_takeout)\n 
print(str(i + 1), \" of \", pageNum, \":\", insert_record_sql)\n cursor.execute(insert_record_sql)\n conn.commit()\n\nbot.close()\nconn.close()\nprint (\"finished.\")\n"
},
{
"alpha_fraction": 0.6938775777816772,
"alphanum_fraction": 0.7091836929321289,
"avg_line_length": 25.200000762939453,
"blob_id": "60d93f0fe5201496a39c55cfaddf8ab7ce517f5b",
"content_id": "65fd62b6161c68bd04fdfac53335eb8eb175e629",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 392,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 15,
"path": "/assets/database.sql",
"repo_name": "jakobzhao/yelpscraper",
"src_encoding": "UTF-8",
"text": "CREATE TABLE \"restaurants\" (\n\t\"name\"\tTEXT NOT NULL,\n\t\"address\"\tTEXT,\n\t\"city\"\tTEXT,\n\t\"state\"\tTEXT,\n\t\"features\"\tTEXT,\n\t\"stars\"\tREAL DEFAULT 0,\n\t\"identified\"\tINTEGER DEFAULT 0,\n\t\"delivery\"\tINTEGER DEFAULT 0,\n\t\"takeout\"\tINTEGER DEFAULT 0,\n\t\"landline\"\tTEXT,\n\t\"reviewNum\"\tINTEGER DEFAULT 0,\n\t\"bReviewNum\"\tINTEGER DEFAULT 0,\n\tCONSTRAINT \"name-addr-city-uniqueness\" UNIQUE(\"name\",\"address\",\"city\")\n);"
},
{
"alpha_fraction": 0.792630672454834,
"alphanum_fraction": 0.7952013611793518,
"avg_line_length": 43.88461685180664,
"blob_id": "bbca2ca1b4d9a1f32bee57ba651f11e89dc660bf",
"content_id": "2c98fb3f8968fdf2e8bc7abc0676dc86f8f8c857",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2334,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 52,
"path": "/readme.md",
"repo_name": "jakobzhao/yelpscraper",
"src_encoding": "UTF-8",
"text": "# Scrapping black owned restaurant from yelp's server\n\nIn this research project, I have practiced scrapping data from yelp website\nabout black owned restaurants using python web crawler. The projects gets useful\ninformation about restaurants like restaurant name, address, city, state,\nfeatures, rating (stars), landline (if any), number of reviews,\nblacked reviewed restaurants, and whether restaurants are self-identified\nblack-owned restaurants or not and lastly, whether they provide delivery or\ntakeout services\n\n# Using the Script to collect data/ Running the code\n\nTo use the script makes sure to add the python libraries before running the code. The libraries used\nin the project were Selenium, BeautifulSoup4, Time, Sqlite3\n\n# Customize the input parameter to enable crawler data from different cities\n\nTo change the input parameter to scrape data about different just change the city and state variable\nin the code to extract the desired cities data\n\nIn the current program city name is \"New York\" and the state is \"NY\". Which extracts\ndata about black owned restaurants in New York city\n\n# Collecting data from LGBTQ+ friendly restaurants\n\nThe current script is not supporting to collect data about LGBTQ+ data.\nHowever, if the goal for your project is extracting data about LGBTQ+ friendly restaurants,\nyou must change the code accordingly.\n\n# Why we need a 5-second pause?\n\nIf given a close look at the code, you might have realized there is a 5-second pause\nat several locals. This is done so avoid the yelp server to be overloaded ( by too many requests)\nin a very short period of time. Without using the pause, yelp website would realize \nthat data is being scraped using a bot and thus could lead to a ban by yelp for your IP address.\n\n# Finding css class from website used in the code\n\nThe css class is extremely important when scrapping data from a website. 
The css class\nhelps in communicating your computer about the location where the data is to be\nextracted from in the webpage.\n\n\n# Viewing the data extracted\n\nThe script is programmed to store data in sqlite3 format. To view this data you should have\nDB Browser for SQLite. An example of how your output would look like is shown below.\n\n\n\n\n### Special thanks to Bo Zhao\n"
}
] | 3 |
new-player/hapipy | https://github.com/new-player/hapipy | ad6f8c73c3822f82a3845e598e19cc755d24f89a | 987a8e07785667b363712d7fecd5a8a4e2f9c0e7 | 0d09a4d3a56362c4a55c782803b69d61b91d089a | refs/heads/master | 2020-03-24T23:32:17.917465 | 2018-08-01T12:31:40 | 2018-08-01T12:31:40 | 143,140,840 | 0 | 0 | Apache-2.0 | 2018-08-01T10:26:30 | 2018-07-29T16:27:43 | 2018-03-16T18:12:06 | null | [
{
"alpha_fraction": 0.6346604228019714,
"alphanum_fraction": 0.6604215502738953,
"avg_line_length": 16.79166603088379,
"blob_id": "591a4000756e44b0c11037270378ad9e2e7d7e96",
"content_id": "d7e4f09c6ea3e1e130c9c290e13dbd60ac48956b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 427,
"license_type": "permissive",
"max_line_length": 48,
"num_lines": 24,
"path": "/demo/main.py",
"repo_name": "new-player/hapipy",
"src_encoding": "UTF-8",
"text": "from hapi.contacts import ContactsClient\n\napi_key = 'Your API Key'\n\ncontact_client = ContactsClient(api_key=api_key)\n\ndata = {}\ndata['properties'] = []\ndata['properties'].append({\n\t'property': 'email',\n\t'value': '[email protected]'\n\t})\n\ndata['properties'].append({\n\t'property': 'firstname',\n\t'value': 'testing user'\n\t})\n\ndata['properties'].append({\n\t'property': 'phone',\n\t'value': '09876543210'\n\t})\n\ncontact_client.create_a_contact(data=data)\n"
},
{
"alpha_fraction": 0.6905028223991394,
"alphanum_fraction": 0.7039105892181396,
"avg_line_length": 21.375,
"blob_id": "bf237d7bbf56559b63bb441a039e66b9701dd8cb",
"content_id": "ada3c884eb16803c30243693bb32c6d7d1df7a6a",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 895,
"license_type": "permissive",
"max_line_length": 143,
"num_lines": 40,
"path": "/README.md",
"repo_name": "new-player/hapipy",
"src_encoding": "UTF-8",
"text": "## hapipy\n\n### Overview Here\n\nA python wrapper around HubSpot's APIs. Docs for this wrapper can be found [here](https://github.com/HubSpot/hapipy/wiki/hapipy-documentation).\n\nGeneral API reference documentation can be found [here](https://docs.hubapi.com).\n\n### Installation\n`pip install git+https://github.com/new-player/hapipy.git`\n\n### Sample Code\n```python\nfrom hapi.contacts import ContactsClient\n\napi_key = 'Your API Key'\ncontact_client = ContactsClient(api_key=api_key)\n# Preparing data\ndata = {}\ndata['properties'] = []\ndata['properties'].append({\n\t'property': 'email',\n\t'value': '[email protected]'\n\t})\n\ndata['properties'].append({\n\t'property': 'firstname',\n\t'value': 'testing user'\n\t})\n\ndata['properties'].append({\n\t'property': 'phone',\n\t'value': '09876543210'\n\t})\n\ncontact_client.create_a_contact(data=data)\n```\n\nReference:\n1. [https://github.com/CBitLabs/hapipy](https://github.com/CBitLabs/hapipy)\n"
}
] | 2 |
Jokuyen/transactionsAnalyzer | https://github.com/Jokuyen/transactionsAnalyzer | 03824cffb60487bdb9a70a02f72868eb7ad3710b | 96a6460fa025f04f5eb0635839fd660798b0b410 | 7cf3135dfde3913794311954954fa426e0e997dd | refs/heads/master | 2020-08-28T21:02:39.744280 | 2019-11-12T02:08:04 | 2019-11-12T02:08:04 | 209,389,043 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5671641826629639,
"alphanum_fraction": 0.5708954930305481,
"avg_line_length": 36.529998779296875,
"blob_id": "9567efd1c3cb99c8efcedac42f4fc38752678578",
"content_id": "98643025ffbc73da4b1549e6f8335f0604e1b8d7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3752,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 100,
"path": "/Project Files/transactionsAnalyzer.py",
"repo_name": "Jokuyen/transactionsAnalyzer",
"src_encoding": "UTF-8",
"text": "# Johnny Nguyen\n# transactionsAnalyzer\n# Potential filter names for my transactions: \"att, great oaks water\"\n\n'''\nTested with:\n- Capital One's Credit Card\n- Chase's Debit Card\n'''\n\nfrom displayAllTransactions import *\nfrom monthlySpendings import *\nimport tkinter.filedialog\nfrom os import getcwd\nimport csv\n\nclass MainWindow(tk.Tk):\n def __init__(self):\n super().__init__()\n \n # Class variable\n self._filename = \"\"\n self._accountType = \"Credit\" # Could change in determineAccountType()\n self._transactionsList = [] \n \n '''\n # Variables for columns within csv file of Chase's Debit Card\n self._transactionDate = \"Posting Date\"\n self._transactionDateFormat = \"%m/%d/%Y\"\n self._transactionName = \"Description\"\n self._transactionCost = \"Amount\" \n '''\n \n # Variables for columns within csv file of Capital One's Credit Card\n self._transactionDate = \"Transaction Date\"\n self._transactionDateFormat = \"%Y-%m-%d\"\n self._transactionName = \"Description\"\n self._transactionCost = \"Debit\"\n \n # Formatting\n self.minsize(400, 75)\n self.title(\"Transactions Analyzer\")\n self.grid_rowconfigure(0, weight=1)\n self.grid_rowconfigure(1, weight=1)\n self.grid_columnconfigure(0, weight=1)\n \n # Functions\n self.createButtons()\n self.selectInputFile()\n self.determineAccountType()\n self.readDataIntoTransactionsList() \n \n def createButtons(self):\n tk.Button(self, text = \"Display all transactions\", command= lambda: TransactionOptions(self)).grid()\n tk.Button(self, text = \"Display monthly spendings for a given year\", command= lambda: MonthlySpendings(self)).grid() \n \n def selectInputFile(self):\n while \".csv\" not in self._filename.lower():\n self._filename = tk.filedialog.askopenfilename(initialdir= getcwd()) \n\n # If user doesn't choose a file, exit the program\n if len(self._filename) == 0:\n raise SystemExit \n \n if \".csv\" not in self._filename.lower():\n tk.messagebox.showerror(\"Error\", \"Select a '.csv' 
file extension\", parent=self) \n \n def determineAccountType(self):\n ''' If account is a debit card, change 'self._accountType' accordingly '''\n with open(self._filename) as filehandler:\n data = csv.DictReader(filehandler)\n for record in data:\n # If a transaction's cost is a negative number, switch 'self._accountType' to \"Debit\"\n if record[self._transactionCost] is not \"\" and float(record[self._transactionCost]) < 0:\n self._accountType = \"Debit\"\n break\n \n def readDataIntoTransactionsList(self):\n with open(self._filename) as filehandler:\n data = csv.DictReader(filehandler)\n \n if self._accountType == \"Credit\":\n for record in data:\n # Skip any credit card payments\n if record[self._transactionCost] is \"\":\n continue \n self._transactionsList.append(record) \n \n elif self._accountType == \"Debit\":\n for record in data: \n if float(record[self._transactionCost]) > 0:\n continue \n record[self._transactionCost] = str(abs(float(record[self._transactionCost])))\n self._transactionsList.append(record) \n \ndef main():\n app = MainWindow()\n app.mainloop()\n\nmain()"
},
{
"alpha_fraction": 0.6133652925491333,
"alphanum_fraction": 0.6197669506072998,
"avg_line_length": 52.0904655456543,
"blob_id": "77b2f447d9cf3c913f414cb5303d99a6a1737f34",
"content_id": "d8ea8afbf9936cd879113a5af41883e10321c4c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 21713,
"license_type": "no_license",
"max_line_length": 168,
"num_lines": 409,
"path": "/Project Files/monthlySpendings.py",
"repo_name": "Jokuyen/transactionsAnalyzer",
"src_encoding": "UTF-8",
"text": "# Johnny Nguyen\n# monthlySpendings class\n\nimport matplotlib\nmatplotlib.use('TkAgg') # Tell matplotlib to work with Tkinter\nimport tkinter as tk\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg # Tells matplotlib about Canvas object\nimport matplotlib.pyplot as plt \nfrom datetime import datetime\nfrom statistics import mean \n\nclass MonthlySpendings(tk.Toplevel):\n def __init__(self, master):\n super().__init__(master)\n self.transient(master) \n \n # Class variables\n self._functionsList = [self.allTransactions, self.filteredTransactionsPrompt, self.specificTransactionPrompt]\n # Create dictionary to track spendings for each month\n self._monthsDict = {'January': 0.0, 'February': 0.0, 'March': 0.0, 'April': 0.0, 'May': 0.0, 'June': 0.0, 'July': 0.0, \n 'August': 0.0, 'September': 0.0, 'October': 0.0, 'November': 0.0, 'December': 0.0} \n # Sort 'transactionsList' in ascending order (so January is first instead of latest month)\n self._transactionsList = sorted(master._transactionsList, \\\n key=lambda record: datetime.strptime(record[master._transactionDate], master._transactionDateFormat))\n \n # Formatting\n self.minsize(250, 175)\n self.title(\"Monthly Spendings\")\n for idx in range(0, 6):\n self.grid_rowconfigure(idx, weight=1)\n self.grid_columnconfigure(0, weight=1) \n \n # User prompt for transactions options\n inputYear = tk.IntVar()\n tk.Label(self, text = \"Enter a year: \").grid()\n entryWidget = tk.Entry(self, textvariable=inputYear)\n entryWidget.grid()\n\n buttonOption = tk.IntVar()\n allTransactionsButton = tk.Radiobutton(self, text=\"Show all transactions\", variable=buttonOption, value=0).grid()\n filteredTransactionButton = tk.Radiobutton(self, text=\"Show filtered transactions\", variable=buttonOption, value=1).grid()\n specificTransactionsButton = tk.Radiobutton(self, text=\"Show specific transactions\", variable=buttonOption, value=2).grid()\n confirmButton = tk.Button(self, text=\"Continue\", command=lambda: 
\\\n self.callFunction(master, self._monthsDict, self._transactionsList, inputYear.get(), buttonOption.get())).grid()\n \n buttonOption.set(0)\n \n def callFunction(self, master, monthsDict, transactionsList, inputYear, buttonOption): \n ''' Call one of the three functions '''\n self._functionsList[buttonOption](master, monthsDict, transactionsList, inputYear)\n \n def allTransactions(self, master, monthsDict, transactionsList, inputYear): \n # Fill 'monthsDict' for given 'inputYear'\n for record in transactionsList:\n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n # Idea of following lines: monthsDict[month] += cost\n monthsDict[datetime.strptime(record[master._transactionDate], master._transactionDateFormat).strftime('%B')] += \\\n float(record[master._transactionCost])\n \n listboxObj = TransactionsListbox(master, monthsDict, transactionsList, inputYear, 'allTransactions', None, None, None)\n graphObj = MonthGraph(master, monthsDict, inputYear, listboxObj)\n \n listboxObj.referenceGraphObj(graphObj)\n \n self.destroy()\n \n def filteredTransactionsPrompt(self, master, monthsDict, transactionsList, inputYear):\n filterList = []\n filteredTransObj = FilteredTransactionsPrompt(master, monthsDict, transactionsList, filterList, inputYear)\n self.destroy()\n \n def specificTransactionPrompt(self, master, monthsDict, transactionsList, inputYear):\n specificTransObj = SpecificTransactionsNamePrompt(master, monthsDict, transactionsList, inputYear)\n self.destroy()\n \nclass FilteredTransactionsPrompt(tk.Toplevel):\n def __init__(self, master, monthsDict, transactionsList, filterList, inputYear):\n super().__init__(master) \n self.transient(master) \n \n # Class variable\n removedRecords = []\n \n # Formatting\n self.minsize(300, 75)\n self.title(\"Prompt Window\")\n self.grid_rowconfigure(0, weight=1)\n self.grid_rowconfigure(1, weight=1)\n self.grid_rowconfigure(2, weight=1)\n self.grid_columnconfigure(0, 
weight=1) \n \n # User prompt\n inputFilters = tk.StringVar()\n tk.Label(self, text= \"Enter filters (separated by commas): \").grid()\n entryWidget = tk.Entry(self, textvariable=inputFilters)\n entryWidget.grid() \n confirmButton = tk.Button(self, text= \"Continue\", command= lambda: self.callFunctionsAndWindows\\\n (master, monthsDict, transactionsList, filterList, inputYear, removedRecords, inputFilters.get())).grid() \n \n def callFunctionsAndWindows(self, master, monthsDict, transactionsList, filterList, inputYear, removedRecords, inputFilters):\n if inputFilters == \"\":\n tk.messagebox.showerror(\"Error\", \"Invalid input: Cannot be empty\", parent=self) \n \n else:\n self.parseFilters(filterList, inputFilters)\n self.fillMonthsDict(master, monthsDict, transactionsList, filterList, inputYear, inputFilters)\n \n # Creating windows\n listboxObj = TransactionsListbox(master, monthsDict, transactionsList, inputYear, 'filteredTransactions', filterList, removedRecords, None)\n graphObj = MonthGraph(master, monthsDict, inputYear, listboxObj) \n removedTransObj = RemovedTransactions(master, monthsDict, transactionsList, inputYear, removedRecords, filterList, listboxObj, graphObj)\n \n listboxObj.referenceGraphObj(graphObj)\n listboxObj.referenceRemovedTransObj(removedTransObj)\n graphObj.referenceRemovedTransObj(removedTransObj)\n \n self.destroy() \n \n def parseFilters(self, filterList, inputFilters):\n tempList = inputFilters.split(\",\")\n tempList = [name.lower().strip() for name in tempList if name.lower().strip() not in filterList] # Do not add filter if already in filterList\n tempSet = set(tempList) # Removes duplicate filters in the list (if typed twice during user input)\n filterList.extend(tempSet)\n \n def fillMonthsDict(self, master, monthsDict, transactionsList, filterList, inputYear, inputFilters):\n for record in transactionsList:\n try:\n if inputFilters != \"\":\n for name in filterList:\n if name in record[master._transactionName].lower():\n 
raise Exception\n \n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n monthsDict[datetime.strptime(record[master._transactionDate], master._transactionDateFormat).strftime('%B')] += \\\n float(record[master._transactionCost]) \n except:\n continue \n \nclass RemovedTransactions(tk.Toplevel):\n def __init__(self, master, monthsDict, transactionsList, inputYear, removedRecords, filterList, listboxObj, graphObj):\n super().__init__(master)\n self.transient(master) \n \n self._total = 0\n self._newInputFilters = tk.StringVar() \n \n # Formatting\n self.minsize(560, 750)\n for idx in range(0, 8):\n self.grid_rowconfigure(idx, weight=1)\n self.grid_columnconfigure(0, weight=1) \n self.title(\"List of Removed Transactions\")\n tk.Label(self, text = \"Currently filtering: \").grid()\n tk.Label(self, text = \", \".join(filterList)).grid()\n \n self.createListbox(master, removedRecords)\n self.displayResultAnalysis()\n self.filterPrompt(master, monthsDict, transactionsList, inputYear, removedRecords, filterList, listboxObj, graphObj)\n \n self.protocol(\"WM_DELETE_WINDOW\", lambda: self.exitWindows(listboxObj, graphObj))\n \n def createListbox(self, master, removedRecords):\n scrollbar = tk.Scrollbar(self)\n scrollbar.grid(row=2, column=1, sticky=\"ns\")\n listbox = tk.Listbox(self, height=50, width=75, selectmode=\"extended\", yscrollcommand=scrollbar.set)\n scrollbar.config(command=listbox.yview)\n listbox.grid(row=2, column=0) \n \n for record in removedRecords: # Fill listbox with data from removedRecords \n self._total += float(record[master._transactionCost])\n listbox.insert(tk.END, \"{:16}\".format((datetime.strptime(record[master._transactionDate], master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + \"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName]) \n \n def displayResultAnalysis(self):\n tk.Label(self, text= str(\"Total: $\" + 
\"{0:.2f}\".format(self._total))).grid() \n \n def filterPrompt(self, master, monthsDict, transactionsList, inputYear, removedRecords, filterList, listboxObj, graphObj): \n ''' Calls displayFilteredTransactions() '''\n tk.Label(self).grid()\n tk.Label(self, text= \"Enter additional filters (separated by commas),\").grid()\n tk.Label(self, text= \"or enter an empty input to clear filters:\").grid()\n entryWidget = tk.Entry(self, textvariable= self._newInputFilters)\n entryWidget.grid()\n entryWidget.bind(\"<Return>\", lambda event: \\\n self.updateFilteredTransactions(master, monthsDict, transactionsList, inputYear, removedRecords, filterList, listboxObj, graphObj)) \n \n def updateFilteredTransactions(self, master, monthsDict, transactionsList, inputYear, removedRecords, filterList, listboxObj, graphObj): \n ''' Close all windows, reset variables after adding new filters, and create new windows '''\n listboxObj.destroy()\n graphObj.destroy()\n \n # Reset necessary variables\n removedRecords.clear()\n monthsDict = dict.fromkeys(monthsDict, 0.0)\n \n self.parseFilters(filterList, self._newInputFilters.get())\n self.fillMonthsDict(master, monthsDict, transactionsList, filterList, inputYear, self._newInputFilters.get())\n \n # Creating windows\n newListboxObj = TransactionsListbox\\\n (master, monthsDict, transactionsList, inputYear, 'filteredTransactions', filterList, removedRecords, None)\n newGraphObj = MonthGraph(master, monthsDict, inputYear, newListboxObj) \n newRemovedTransObj = RemovedTransactions\\\n (master, monthsDict, transactionsList, inputYear, removedRecords, filterList, newListboxObj, newGraphObj)\n \n newListboxObj.referenceGraphObj(newGraphObj)\n newListboxObj.referenceRemovedTransObj(newRemovedTransObj)\n newGraphObj.referenceRemovedTransObj(newRemovedTransObj)\n \n self.destroy() \n \n def parseFilters(self, filterList, inputFilters):\n tempList = inputFilters.split(\",\")\n tempList = [name.lower().strip() for name in tempList if 
name.lower().strip() not in filterList] # Do not add filter if already in filterList\n tempSet = set(tempList) # Removes duplicate filters in the list (if typed twice during user input)\n filterList.extend(tempSet) \n \n def fillMonthsDict(self, master, monthsDict, transactionsList, filterList, inputYear, inputFilters):\n for record in transactionsList:\n try:\n if inputFilters != \"\":\n for name in filterList:\n if name in record[master._transactionName].lower():\n raise Exception\n \n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n monthsDict[datetime.strptime(record[master._transactionDate], master._transactionDateFormat).strftime('%B')] += \\\n float(record[master._transactionCost])\n \n # Empty input means clearing 'filterList'\n else:\n filterList.clear()\n \n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n monthsDict[datetime.strptime(record[master._transactionDate], master._transactionDateFormat).strftime('%B')] += \\\n float(record[master._transactionCost]) \n except:\n continue \n \n def exitWindows(self, listboxObj, graphObj):\n listboxObj.destroy()\n graphObj.destroy()\n self.destroy() \n \nclass SpecificTransactionsNamePrompt(tk.Toplevel):\n def __init__(self, master, monthsDict, transactionsList, inputYear):\n super().__init__(master) \n self.transient(master) \n \n # Formatting\n self.minsize(350, 75)\n self.title(\"Prompt Window\")\n self.grid_rowconfigure(0, weight=1)\n self.grid_rowconfigure(1, weight=1)\n self.grid_rowconfigure(2, weight=1)\n self.grid_columnconfigure(0, weight=1) \n \n # User prompt for specific name of transaction\n inputNames = tk.StringVar()\n tk.Label(self, text= \"Enter transaction names (separated by commas):\").grid()\n entryWidget = tk.Entry(self, textvariable= inputNames)\n entryWidget.grid() \n confirmButton = tk.Button(self, text= \"Continue\", command= lambda: \\\n self.callWindows(master, 
monthsDict, transactionsList, inputYear, inputNames.get())).grid() \n \n def callWindows(self, master, monthsDict, transactionsList, inputYear, inputNames):\n namesList = []\n self.parseFilters(inputNames, namesList)\n\n # Fill 'monthDict' for given 'inputYear' and 'namesList'\n for record in transactionsList:\n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n for name in namesList:\n if name in record[master._transactionName].lower():\n monthsDict[datetime.strptime(record[master._transactionDate], master._transactionDateFormat).strftime('%B')] += \\\n float(record[master._transactionCost])\n break\n \n listboxObj = TransactionsListbox(master, monthsDict, transactionsList, inputYear, 'specificTransaction', None, None, namesList)\n graphObj = MonthGraph(master, monthsDict, inputYear, listboxObj)\n \n listboxObj.referenceGraphObj(graphObj)\n \n self.destroy() \n \n def parseFilters(self, inputNames, namesList):\n tempList = inputNames.split(\",\")\n tempList = [name.lower().strip() for name in tempList if name.lower().strip() not in namesList] # Do not add filter if already in namesList\n tempSet = set(tempList) # Removes duplicate filters in the list (if typed twice during user input)\n namesList.extend(tempSet) \n\nclass TransactionsListbox(tk.Toplevel):\n ''' A listbox for all three options '''\n def __init__(self, master, monthsDict, transactionsList, inputYear, userTransactionOption, filterList, removedRecords, namesList):\n super().__init__(master) \n self.transient(master) \n \n # Class variables for references\n self._graphObj = None\n self._removedTransObj = None\n self._monthsValueCount = len(list(filter(lambda value: value > 0, monthsDict.values()))) # This counts non-zero values in monthsDict\n \n # Formatting\n self.minsize(560, 750)\n for idx in range(0, (6 + self._monthsValueCount)):\n self.grid_rowconfigure(idx, weight=1)\n self.grid_columnconfigure(0, weight=1) \n self.title(\"List of 
Transactions\")\n tk.Label(self, text = \"Displaying transactions\").grid() # grid(0,0) \n \n self.createListbox(master, transactionsList, inputYear, userTransactionOption, filterList, removedRecords, namesList)\n self.displayResultAnalysis(monthsDict)\n \n self.protocol(\"WM_DELETE_WINDOW\", self.exitWindows)\n \n def createListbox(self, master, transactionsList, inputYear, userTransactionOption, filterList, removedRecords, namesList):\n # Display filters for 'specificTransactions' listbox\n rowNumber = 1\n \n if userTransactionOption == 'specificTransaction':\n tk.Label(self).grid()\n tk.Label(self, text = str(\"Currently filtering: \" + str(\", \".join(namesList)))).grid() \n rowNumber = 3\n \n scrollbar = tk.Scrollbar(self)\n scrollbar.grid(row=rowNumber, column=1, sticky=\"ns\")\n listbox = tk.Listbox(self, height=50, width=75, selectmode=\"extended\", yscrollcommand=scrollbar.set)\n scrollbar.config(command=listbox.yview)\n listbox.grid(row=rowNumber, column = 0) \n \n if userTransactionOption == 'allTransactions':\n for record in transactionsList: \n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n listbox.insert(tk.END, \"{:16}\".format((datetime.strptime(record[master._transactionDate], master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + \"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName]) \n elif userTransactionOption == 'filteredTransactions':\n for record in transactionsList: # Fill listbox with all transactions within 'transactionsList'\n try:\n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n for name in filterList:\n if name in record[master._transactionName].lower():\n removedRecords.append(record)\n raise Exception\n \n listbox.insert(tk.END, \"{:16}\".format((datetime.strptime(record[master._transactionDate], master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + 
\"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName]) \n except:\n continue \n elif userTransactionOption == 'specificTransaction':\n for record in transactionsList: \n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n for name in namesList:\n if name in record[master._transactionName].lower():\n listbox.insert(tk.END, \"{:16}\".format(\\\n (datetime.strptime(record[master._transactionDate], master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + \"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName])\n break\n \n def displayResultAnalysis(self, monthsDict):\n tk.Label(self, text= str(\"Average per month:\" + \" $\" + \"{0:.2f}\".format(sum(monthsDict.values()) / self._monthsValueCount))).grid()\n tk.Label(self).grid()\n \n for month,cost in monthsDict.items():\n if cost != 0:\n tk.Label(self, text= str(month + \": $\" + \"{0:.2f}\".format(cost))).grid() \n \n def referenceGraphObj(self, graphObj):\n self._graphObj = graphObj\n \n def referenceRemovedTransObj(self, removedTransObj):\n self._removedTransObj = removedTransObj \n \n def exitWindows(self):\n self._graphObj.destroy()\n if self._removedTransObj is not None:\n self._removedTransObj.destroy()\n self.destroy() \n \nclass MonthGraph(tk.Toplevel):\n def __init__(self, master, monthsDict, inputYear, listboxObj):\n super().__init__(master) \n self.transient(master)\n \n self._removedTransObj = None\n \n fig = plt.figure(figsize=(13, 9))\n self.title(\"Monthly Expenses for the Year: \" + str(inputYear))\n plt.title(str(\"Total: $\" + \"{0:.2f}\".format(sum(monthsDict.values()))))\n\n plt.bar([month for month,cost in monthsDict.items() if cost != 0], \\\n [cost for cost in monthsDict.values() if cost != 0], align=\"center\")\n \n plt.xlabel(\"Months\")\n plt.ylabel(\"In Dollars\")\n \n canvas = FigureCanvasTkAgg(fig, self)\n canvas.get_tk_widget().pack(side=\"top\", 
fill=\"both\", expand=True)\n canvas.draw() \n \n self.protocol(\"WM_DELETE_WINDOW\", lambda: self.exitWindows(listboxObj))\n \n def referenceRemovedTransObj(self, removedTransObj):\n self._removedTransObj = removedTransObj\n \n def exitWindows(self, listboxObj):\n listboxObj.destroy()\n if self._removedTransObj is not None:\n self._removedTransObj.destroy()\n self.destroy()"
},
{
"alpha_fraction": 0.660804033279419,
"alphanum_fraction": 0.6677135825157166,
"avg_line_length": 51.566036224365234,
"blob_id": "1a0f6804a2161b132c1955d210f82dc56ed5e20a",
"content_id": "b46b14fdb5d07e07f868a8e5723604ee9af48b53",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 11148,
"license_type": "no_license",
"max_line_length": 494,
"num_lines": 212,
"path": "/README.md",
"repo_name": "Jokuyen/transactionsAnalyzer",
"src_encoding": "UTF-8",
"text": "This program reads a csv file of transaction records.\n\nAfterwards, there is the option of either displaying the transactions through a sorted list, or displaying a visual analysis of the transactions along with a sorted list.\n\nNote: The first option includes all the transactions in the csv file. The second option only allows transactions with a user-given input year. What this means is that the first option can present transactions from different years while the second option only focuses on transactions from a single given year.\n\n### Origin of the Project\nBack when I was working at Trader Joe's—buying premade lunches and my daily groceries there—I wondered how much I was spending.\n\nUnfortunately, my credit card account didn't allow the ability to analyze my transactions. (At least to my knowledge.) At best, I could scroll through the list of transactions and view each one individually. Nothing fancy.\n\nThat wasn't good enough for me; I wanted to organize my transactions in a way that I could view my spendings holistically.\nFor example, I wanted to see how much I had spend at Trader Joe's per month. Or because my parents used my credit card for paying the water and electric online bills, I wanted to exclude those transactions when viewing my personal spendings.\n\nThus, behold this project as the answer to my curiosity! \n\n(P.S. Turns out I've spent a total of 1638.86 USD at Trader Joe's within a single year range with my credit card. That averages to about $137 per month. Not too shabby!)\n\n### Core Features of the Program\nThe transaction records in the csv file are read into a Python list, appropiately named \"transactionsList\".\n\nNote: Since the csv file is sorted by date already, so is \"transactionsList\" by default.\n\nFrom the main window, the user is given 2 options:\n1. Display a listbox of all transactions in sorted order (by date, name, or cost).\n2. 
Display monthly spendings for a given year with a graph.\n\nIn the first option, \"transactionsList\" is merely resorted if the user wants to organize by name or cost. Then, it is used to fill in a listbox. The user can also filter out a transaction from the listbox if they wish to; for example, I can filter out the water and electric online bills by typing in \"pg&e\" and \"great oaks\" into the prompt. (Note: The name of the filter can be partial! If I typed in \"great\" into the filter prompt, it would display any transactions with \"great\" in its name.)\n\nIn the second option, a Python dictionary called \"monthsDict\" is created to keep track of monthly spendings; months are the keys while cost of transactions are values. \n```python\nself._monthsDict = {'January': 0.0, 'February': 0.0, 'March': 0.0, 'April': 0.0, 'May': 0.0, 'June': 0.0, 'July': 0.0, \n 'August': 0.0, 'September': 0.0, 'October': 0.0, 'November': 0.0, 'December': 0.0} \n```\nNext, the user selects a single year (e.g. 2019). Any records in \"transactionsList\" whose year matches the user's input year is counted into \"monthsDict\". 
Finally, when the listbox is displayed, it also shows the total spending for each month.\n\n### Code Snippets (Shortened for Concision) + Images\n\n#### Screenshot of Input File\nNote: I downloaded this csv file from my Capital One credit card account.\n\n\n\n#### Input File Prompt Window (Pops up after opening program)\n\n\n\n#### Main Window\n\n```python\nclass MainWindow(tk.Tk):\n    def __init__(self):\n        super().__init__()\n        \n        # Class variable\n        self._filename = \"\"\n        self._transactionsList = [] \n        \n        # Variables for columns within csv file of Capital One's Credit Card\n        self._transactionDate = \"Transaction Date\"\n        self._transactionDateFormat = \"%Y-%m-%d\"\n        self._transactionName = \"Description\"\n        self._transactionCost = \"Debit\"\n        \n        # Functions\n        self.createButtons()\n        self.selectInputFile()\n        self.readDataIntoTransactionsList() \n        \n    def createButtons(self):\n        tk.Button(self, text = \"Display all transactions\", command= lambda: TransactionOptions(self)).grid()\n        tk.Button(self, text = \"Display monthly spendings for a given year\", command= lambda: MonthlySpendings(self)).grid() \n        \n    def selectInputFile(self):\n        while \".csv\" not in self._filename.lower():\n            self._filename = tk.filedialog.askopenfilename(initialdir= getcwd()) \n\n            # If user doesn't choose a file, exit the program\n            if len(self._filename) == 0:\n                raise SystemExit \n            \n            if \".csv\" not in self._filename.lower():\n                tk.messagebox.showerror(\"Error\", \"Select a '.csv' file extension\", parent=self) \n        \n    def readDataIntoTransactionsList(self):\n        with open(self._filename) as filehandler:\n            data = csv.DictReader(filehandler)\n            \n            for record in data:\n                # Skip any credit card payments\n                if record[self._transactionCost] == \"\":\n                    continue \n                self._transactionsList.append(record) \n        \ndef main():\n    app = MainWindow()\n    app.mainloop()\n```\n\n#### Display All Transactions Options\n\n\n\n\n\n```python\nclass TransactionOptions(tk.Toplevel):\n    def __init__(self, master):\n        
super().__init__(master)\n \n tk.Button(self, text= \"Sorted by date\", command= lambda: self.transactionListByDate(master)).grid()\n tk.Button(self, text= \"Sorted by name\", command= lambda: self.transactionsListByName(master)).grid()\n tk.Button(self, text= \"Sorted by cost\", command= lambda: self.transactionsListByCost(master)).grid() \n \n def transactionListByDate(self, master):\n ''' Uses the master's transactions list, which is sorted by date by default '''\n AllTransactions(master, master._transactionsList)\n self.destroy()\n \n def transactionsListByName(self, master):\n ''' Resort the master's transactions list into a new list by name'''\n transactionsList = sorted(master._transactionsList, key= lambda record: record[master._transactionName])\n sortedByNameObj = AllTransactions(master, transactionsList)\n self.destroy()\n \n def transactionsListByCost(self, master):\n ''' Resort the master's transactions list into a new list by cost'''\n transactionsList = sorted(master._transactionsList, key= lambda record: float(record[master._transactionCost].strip()), reverse=True)\n sortedByCostObj = AllTransactions(master, transactionsList)\n self.destroy()\n```\n\n#### Monthly Spendings Window (Selected from Main Window)\n\n\n\n```python\nclass MonthlySpendings(tk.Toplevel):\n def __init__(self, master):\n super().__init__(master) \n \n # Class variables\n self._functionsList = [self.allTransactions, self.filteredTransactionsPrompt, self.specificTransactionPrompt]\n # Create dictionary to track spendings for each month\n self._monthsDict = {'January': 0.0, 'February': 0.0, 'March': 0.0, 'April': 0.0, 'May': 0.0, 'June': 0.0, 'July': 0.0, \n 'August': 0.0, 'September': 0.0, 'October': 0.0, 'November': 0.0, 'December': 0.0} \n # Sort 'transactionsList' in ascending order (so January is first instead of latest month)\n self._transactionsList = sorted(master._transactionsList, \\\n key=lambda record: datetime.strptime(record[master._transactionDate], 
master._transactionDateFormat)) \n \n # User prompt for transactions options\n inputYear = tk.IntVar()\n tk.Label(self, text = \"Enter a year: \").grid()\n entryWidget = tk.Entry(self, textvariable=inputYear)\n entryWidget.grid()\n\n buttonOption = tk.IntVar()\n allTransactionsButton = tk.Radiobutton(self, text=\"Show all transactions\", variable=buttonOption, value=0).grid()\n filteredTransactionButton = tk.Radiobutton(self, text=\"Show filtered transactions\", variable=buttonOption, value=1).grid()\n specificTransactionsButton = tk.Radiobutton(self, text=\"Show specific transactions\", variable=buttonOption, value=2).grid()\n confirmButton = tk.Button(self, text=\"Continue\", command=lambda: \\\n self.callFunction(master, self._monthsDict, self._transactionsList, inputYear.get(), buttonOption.get())).grid()\n \n buttonOption.set(0)\n \n def callFunction(self, master, monthsDict, transactionsList, inputYear, buttonOption): \n ''' Call one of the three functions '''\n self._functionsList[buttonOption](master, monthsDict, transactionsList, inputYear)\n```\n\n#### Monthly Listbox + Graph\n\n\n\n```python\nclass TransactionsListbox(tk.Toplevel):\n def __init__(self, master, monthsDict, transactionsList, inputYear, userTransactionOption, filterList, removedRecords, namesList):\n super().__init__(master) \n self.transient(master) \n \n self.createListbox(master, transactionsList, inputYear, userTransactionOption, filterList, removedRecords, namesList)\n \n def createListbox(self, master, transactionsList, inputYear, userTransactionOption, filterList, removedRecords, namesList):\n if userTransactionOption == 'specificTransaction':\n for record in transactionsList: \n if datetime.strptime(record[master._transactionDate], master._transactionDateFormat).year == inputYear:\n for name in namesList:\n if name in record[master._transactionName].lower():\n listbox.insert(tk.END, \"{:16}\".format(\\\n (datetime.strptime(record[master._transactionDate], 
master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + \"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName])\n break\n```\n\n```python\nclass MonthGraph(tk.Toplevel):\n def __init__(self, master, monthsDict, inputYear, listboxObj):\n super().__init__(master) \n\n self.title(\"Monthly Expenses for the Year: \" + str(inputYear))\n plt.title(str(\"Total: $\" + \"{0:.2f}\".format(sum(monthsDict.values()))))\n\n plt.bar([month for month,cost in monthsDict.items() if cost != 0], \\\n [cost for cost in monthsDict.values() if cost != 0], align=\"center\")\n \n plt.xlabel(\"Months\")\n plt.ylabel(\"In Dollars\")\n \n canvas = FigureCanvasTkAgg(fig, self)\n canvas.get_tk_widget().pack(side=\"top\", fill=\"both\", expand=True)\n canvas.draw() \n```\n"
},
{
"alpha_fraction": 0.6034227609634399,
"alphanum_fraction": 0.6110289692878723,
"avg_line_length": 47.27891159057617,
"blob_id": "f768f5dd70e5e8b9bc1857d3dcf0b3556b96ad07",
"content_id": "7bee762b46f240bcf074610b98a252bcb7ac75e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 14199,
"license_type": "no_license",
"max_line_length": 160,
"num_lines": 294,
"path": "/Project Files/displayAllTransactions.py",
"repo_name": "Jokuyen/transactionsAnalyzer",
"src_encoding": "UTF-8",
"text": "# Johnny Nguyen\n# displayAllTransactions class\n\nimport matplotlib\nmatplotlib.use('TkAgg') # Tell matplotlib to work with Tkinter\nimport tkinter as tk\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg # Tells matplotlib about Canvas object\nimport matplotlib.pyplot as plt \nfrom datetime import datetime\n\nclass TransactionOptions(tk.Toplevel):\n def __init__(self, master):\n super().__init__(master)\n self.transient(master) \n \n # Formatting\n self.minsize(225, 175)\n for idx in range(0, 3):\n self.grid_rowconfigure(idx, weight=1)\n self.grid_columnconfigure(0, weight=1) \n self.title(\"Displaying Transaction Options\")\n \n tk.Button(self, text= \"Sorted by date\", command= lambda: self.transactionListByDate(master)).grid()\n tk.Button(self, text= \"Sorted by name\", command= lambda: self.transactionsListByName(master)).grid()\n tk.Button(self, text= \"Sorted by cost\", command= lambda: self.transactionsListByCost(master)).grid() \n \n def transactionListByDate(self, master):\n ''' Uses the master's transactions list, which is sorted by date by default '''\n AllTransactions(master, master._transactionsList)\n self.destroy()\n \n def transactionsListByName(self, master):\n ''' Resort the master's transactions list into a new list by name'''\n transactionsList = sorted(master._transactionsList, key= lambda record: record[master._transactionName])\n sortedByNameObj = AllTransactions(master, transactionsList)\n self.destroy()\n \n def transactionsListByCost(self, master):\n ''' Resort the master's transactions list into a new list by cost'''\n transactionsList = sorted(master._transactionsList, key= lambda record: float(record[master._transactionCost].strip()), reverse=True)\n sortedByCostObj = AllTransactions(master, transactionsList)\n self.destroy()\n \nclass AllTransactions(tk.Toplevel):\n def __init__(self, master, transactionsList):\n super().__init__(master)\n self.transient(master) \n \n # Class variables\n self._total = 0\n 
self._transactionsCount = 0\n self._newInputFilters = tk.StringVar() \n \n # Formatting\n self.minsize(560, 750)\n for idx in range(0, 8):\n self.grid_rowconfigure(idx, weight=1)\n self.grid_columnconfigure(0, weight=1) \n self.title(\"Displaying All Transactions\")\n tk.Label(self, text = \"Analysis of all transactions\").grid() # grid(0,0) \n \n # Functions\n self.createListbox(master, transactionsList)\n self.displayResultAnalysis()\n self.filterPrompt(master, transactionsList)\n \n def createListbox(self, master, transactionsList): \n scrollbar = tk.Scrollbar(self)\n scrollbar.grid(row=1, column=1, sticky=\"ns\")\n listbox = tk.Listbox(self, height=50, width=75, selectmode=\"extended\", yscrollcommand=scrollbar.set)\n scrollbar.config(command=listbox.yview)\n listbox.grid(row = 1, column = 0) \n \n for record in transactionsList: # Fill listbox with all transactions within 'transactionsList'\n self._total += float(record[master._transactionCost].strip())\n self._transactionsCount += 1 \n listbox.insert(tk.END, \"{:16}\".format((datetime.strptime(record[master._transactionDate], master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + \"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName]) \n \n def displayResultAnalysis(self):\n tk.Label(self, text= str(\"Total: $\" + \"{0:.2f}\".format(self._total))).grid()\n tk.Label(self, text= str(\"Number of transactions: \" + str(self._transactionsCount))).grid()\n tk.Label(self).grid() \n \n def filterPrompt(self, master, transactionsList): \n ''' Calls displayFilteredTransactions() '''\n tk.Label(self, text= \"Enter any transaction name to filter out (separated by commas),\").grid()\n tk.Label(self, text= \"or enter an empty input to clear filters:\").grid()\n entryWidget = tk.Entry(self, textvariable= self._newInputFilters)\n entryWidget.grid()\n entryWidget.bind(\"<Return>\", lambda event: self.displayFilteredTransactions(master, transactionsList)) \n \n def 
displayFilteredTransactions(self, master, transactionsList):\n if self._newInputFilters.get() is \"\":\n allTransObj = AllTransactions(master, transactionsList)\n else:\n filtersList = []\n filteredTransObj = FilteredTransactions(master, transactionsList, self._newInputFilters.get(), filtersList)\n \n self.destroy()\n \nclass FilteredTransactions(tk.Toplevel):\n def __init__(self, master, transactionsList, inputFilters, filtersList):\n super().__init__(master)\n self.transient(master) \n \n # Class variables\n self._total = 0\n self._transactionsCount = 0 \n self._removedRecords = []\n self._newInputFilters = tk.StringVar()\n \n # Class variables for references\n self._graphObj = None \n \n # Formatting\n self.minsize(560, 750)\n for idx in range(0, 8):\n self.grid_rowconfigure(idx, weight=1)\n self.grid_columnconfigure(0, weight=1) \n self.title(\"Displaying Filtered Transactions\")\n tk.Label(self, text= \"Analysis of filtered transactions\").grid() # grid(0,0) \n\n # Functions\n self.parseFilters(inputFilters, filtersList)\n self.createListbox(master, transactionsList, inputFilters, filtersList)\n self.displayResultAnalysis()\n self.filterPrompt(master, transactionsList, filtersList) \n \n # Call a separate window to display removed transactions\n self._removedTransObj = RemovedTransactions(master, filtersList, self._removedRecords, self) \n \n self.protocol(\"WM_DELETE_WINDOW\", self.exitWindows)\n \n def parseFilters(self, inputFilters, filtersList):\n tempList = inputFilters.split(\",\")\n tempList = [name.lower().strip() for name in tempList if name.lower().strip() not in filtersList] # Do not add filter if already in filtersList\n tempSet = set(tempList) # Removes duplicate filters in the list (if typed twice during user input)\n filtersList.extend(tempSet) \n \n def createListbox(self, master, transactionsList, inputFilters, filtersList):\n scrollbar = tk.Scrollbar(self)\n scrollbar.grid(row=1, column=1, sticky=\"ns\")\n listbox = tk.Listbox(self, 
height=50, width=75, selectmode=\"extended\", yscrollcommand=scrollbar.set)\n scrollbar.config(command=listbox.yview)\n listbox.grid(row=1, column=0) \n \n for record in transactionsList: # Fill listbox with all transactions within 'transactionsList'\n try:\n if inputFilters is not \"\":\n for name in filtersList:\n if name in record[master._transactionName].lower():\n self._removedRecords.append(record)\n raise Exception\n \n self._total += float(record[master._transactionCost])\n self._transactionsCount += 1 \n listbox.insert(tk.END, \"{:16}\".format((datetime.strptime(record[master._transactionDate], master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + \"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName]) \n except:\n continue \n \n def displayResultAnalysis(self):\n tk.Label(self, text= str(\"Total: $\" + \"{0:.2f}\".format(self._total))).grid()\n tk.Label(self, text= str(\"Number of transactions: \" + str(self._transactionsCount))).grid() \n tk.Label(self).grid() \n \n def filterPrompt(self, master, transactionsList, filtersList):\n ''' Calls displayFilteredTransactions() '''\n tk.Label(self, text= \"Enter any transaction name to remove (separated by commas),\").grid()\n tk.Label(self, text= \"or enter an empty input to clear filters:\").grid()\n entryWidget = tk.Entry(self, textvariable= self._newInputFilters)\n entryWidget.grid()\n entryWidget.bind(\"<Return>\", lambda event: self.displayFilteredTransactions(master, transactionsList, filtersList)) \n \n def displayFilteredTransactions(self, master, transactionsList, filtersList): \n if self._newInputFilters.get() is \"\":\n allTransObj = AllTransactions(master, transactionsList)\n else:\n filteredTransObj = FilteredTransactions(master, transactionsList, self._newInputFilters.get(), filtersList) \n \n self.destroy()\n self._removedTransObj.destroy() \n if self._graphObj is not None:\n self._graphObj.destroy() \n \n def referenceGraphObj(self, graphObj):\n 
self._graphObj = graphObj \n \n def exitWindows(self):\n self._removedTransObj.destroy()\n self.destroy()\n if self._graphObj is not None:\n self._graphObj.destroy()\n \nclass RemovedTransactions(tk.Toplevel):\n def __init__(self, master, filtersList, removedRecords, filteredTransObj):\n super().__init__(master)\n self.transient(master) \n \n # Class variables\n self._total = 0\n self._transactionsCount = 0 \n filtersDict = {}\n for name in filtersList:\n filtersDict[name] = {'total': 0.0, 'count': 0} \n \n # Class variables for references\n self._graphObj = None \n \n # Formatting\n for idx in range(0, (7 + (3 * len(filtersList)))):\n self.grid_rowconfigure(idx, weight=1)\n self.grid_columnconfigure(0, weight=1) \n self.title(\"Displaying Removed Records\")\n tk.Label(self, text= \"Currently filtering: \").grid()\n tk.Label(self, text= \", \".join(filtersList)).grid()\n \n self.createListbox(master, removedRecords, filtersList, filtersDict)\n self.displayResultAnalysis(filtersList, filtersDict)\n \n tk.Button(self, text= \"Display graph\", \n command= lambda: RemovedTransactionsGraph(master, filtersDict, self._total, filteredTransObj, self)).grid()\n \n self.protocol(\"WM_DELETE_WINDOW\", lambda: self.exitWindows(filteredTransObj))\n \n def createListbox(self, master, removedRecords, filtersList, filtersDict):\n scrollbar = tk.Scrollbar(self)\n scrollbar.grid(row=2, column=1, sticky=\"ns\")\n listbox = tk.Listbox(self, height=50, width=75, selectmode=\"extended\", yscrollcommand=scrollbar.set)\n scrollbar.config(command=listbox.yview)\n listbox.grid(row=2, column=0) \n \n for record in removedRecords: # Fill listbox with data from removedRecords\n for name in filtersList:\n if name in record[master._transactionName].lower():\n filtersDict[name]['total'] += float(record[master._transactionCost])\n filtersDict[name]['count'] += 1\n \n self._total += float(record[master._transactionCost].strip())\n self._transactionsCount += 1 \n listbox.insert(tk.END, 
\"{:16}\".format((datetime.strptime(record[master._transactionDate], master._transactionDateFormat)).strftime('%m/%d/%Y')) \n + \"{:18}\".format(str(\" $\" + record[master._transactionCost])) + record[master._transactionName]) \n \n def displayResultAnalysis(self, filtersList, filtersDict):\n tk.Label(self, text= str(\"Total: $\" + \"{0:.2f}\".format(self._total))).grid()\n tk.Label(self, text= str(\"Number of transactions: \" + str(self._transactionsCount))).grid() \n \n for name in filtersList:\n tk.Label(self).grid() \n tk.Label(self, text= str(\"Total for \\'\" + name + \"\\': $\" + \"{0:.2f}\".format(filtersDict[name]['total']))).grid()\n tk.Label(self, text= str(\"Number of transactions: \" + str(filtersDict[name]['count']))).grid() \n \n tk.Label(self).grid() \n \n def referenceGraphObj(self, graphObj):\n self._graphObj = graphObj \n \n def exitWindows(self, filteredTransObj):\n filteredTransObj.destroy()\n if self._graphObj is not None:\n self._graphObj.destroy() \n self.destroy() \n \nclass RemovedTransactionsGraph(tk.Toplevel):\n def __init__(self, master, filtersDict, total, filteredTransObj, removedTransObj):\n super().__init__(master) \n self.transient(master)\n \n filteredTransObj.referenceGraphObj(self)\n removedTransObj.referenceGraphObj(self)\n \n fig = plt.figure()\n self.title(\"Visual Representation of Filters\")\n plt.title(str(\"Total: $\" + \"{0:.2f}\".format(total)))\n \n sortedFiltersList = sorted(filtersDict.items(), key=lambda k: k[1]['total'], reverse=True) # The [1] points to the dictionary's value\n \n # In sortedFiltersList, item[0] points to the name and item[1] points to the value\n plt.bar([str(\"\\'\" + item[0] + \"\\'\") for item in sortedFiltersList], [item[1]['total'] for item in sortedFiltersList])\n \n plt.xlabel(\"Filters\")\n plt.ylabel(\"In Dollars\")\n \n canvas = FigureCanvasTkAgg(fig, self)\n canvas.get_tk_widget().pack(side=\"top\", fill=\"both\", expand=True)\n canvas.draw() \n \n self.protocol(\"WM_DELETE_WINDOW\", 
lambda: self.exitWindows(filteredTransObj, removedTransObj))\n \n def exitWindows(self, filteredTransObj, removedTransObj):\n filteredTransObj.destroy()\n removedTransObj.destroy()\n self.destroy() "
}
] | 4 |
vanya2v/Residual_Regression_Denoising_Networks | https://github.com/vanya2v/Residual_Regression_Denoising_Networks | 2817046edea47971b30e11da910abec3d4a8ae5d | 3ec0033b6bebc5c385968770bf2ca4f1795b41af | 7768b44d465c604b68dbda0aa6b9c49648cea793 | refs/heads/main | 2023-06-02T21:13:28.428001 | 2021-06-22T12:53:44 | 2021-06-22T12:53:44 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.693776547908783,
"alphanum_fraction": 0.7027934789657593,
"avg_line_length": 27.715736389160156,
"blob_id": "e0cbdc12e02e39fb09c37e3cc3a060ec60574529",
"content_id": "42af65113cfd3383d58b7d3a995d71b47d20c341",
"detected_licenses": [
"CC0-1.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5656,
"license_type": "permissive",
"max_line_length": 193,
"num_lines": 197,
"path": "/train_resreg.py",
"repo_name": "vanya2v/Residual_Regression_Denoising_Networks",
"src_encoding": "UTF-8",
"text": "\"\"\"\nDeep Residual Regression Networks\n\"\"\"\nimport keras\nfrom keras.datasets import mnist\nfrom keras.layers import Dense, Conv2D, BatchNormalization, Activation\nfrom keras.layers import Average,Flatten,AveragePooling1D, Input, GlobalAveragePooling1D\nfrom keras.optimizers import Adam\nfrom keras.regularizers import l2\nfrom keras import backend as K\nfrom keras.models import Model\nfrom keras.layers.core import Lambda\nK.set_learning_phase(1)\nfrom tensorflow.keras import layers,models\nfrom tensorflow.keras import callbacks\nfrom keras.utils.vis_utils import plot_model\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import MinMaxScaler\nimport scipy\nimport matplotlib\nmatplotlib.use('agg')\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport datetime\nfrom sklearn.metrics import mean_squared_error\nfrom scipy.io import loadmat, savemat\nimport pickle\nimport os\n\n\ndef abs_backend(inputs):\n return K.abs(inputs)\n\ndef expand_dim_backend(inputs):\n return K.expand_dims(K.expand_dims(inputs ,1) ,1)\n\ndef identity_block(input_tensor,units):\n\t\"\"\"The identity block:\n\t# Arguments\n\t\tinput_tensor: input tensor\n\t\tunits:output shape\n\t# Returns\n\t\tOutput tensor for the block.\n\t\"\"\"\n\tx = layers.Dense(units)(input_tensor)\n\tx = layers.BatchNormalization()(x)\n\tx = layers.Activation('relu')(x)\n\n\tx = layers.Dense(units)(x)\n\tx = layers.BatchNormalization()(x)\n\tx = layers.Activation('relu')(x)\n\n\tx = layers.Dense(units)(x)\n\tx = layers.BatchNormalization()(x)\n\n\tx = layers.add([x, input_tensor])\n\tx = layers.Activation('relu')(x)\n\n\treturn x\n\ndef dens_block(input_tensor,units):\n\t\"\"\"A block with dense layer at shortcut.\n\t# Arguments\n\t\tinput_tensor: input tensor\n\t\tunit: output tensor shape\n\t# Returns\n\t\tOutput tensor for the block.\n\t\"\"\"\n\tx = layers.Dense(units)(input_tensor)\n\tx = layers.BatchNormalization()(x)\n\tx = 
layers.Activation('relu')(x)\n\n\tx = layers.Dense(units)(x)\n\tx = layers.BatchNormalization()(x)\n\tx = layers.Activation('relu')(x)\n\n\tx = layers.Dense(units)(x)\n\tx = layers.BatchNormalization()(x)\n\n\t# Calculate global means\n\tabs_mean = K.abs(x) \n\tmm=K.mean(abs_mean)\n\n\t# Calculate scaling coefficients\n\tscales = layers.Dense(units)(abs_mean)\n\tscales = layers.BatchNormalization()(scales)\n\tscales = layers.Activation('relu')(scales)\n\tscales = layers.Dense(units,activation='sigmoid', kernel_regularizer=l2(1e-4))(scales)\n\n\t# Calculate soft-threshold for denoising\n\tthres = layers.multiply([abs_mean, scales])\n\tsub = layers.subtract([abs_mean, thres])\n\tzeros = layers.subtract([sub, sub])\n\tn_sub = layers.maximum([sub, zeros])\n\n\t# Short connection in residual unit and combine the path\n\tresidual = layers.multiply([K.sign(x), n_sub])\n\tshortcut = layers.Dense(units)(input_tensor)\n\tshortcut = layers.BatchNormalization()(shortcut)\n\n\tx = layers.add([residual, shortcut])\n\n\tx = layers.Activation('relu')(x)\n\n\treturn x\n\n\ndef ResNet50Regression():\n\t\"\"\"ResNet50 architecture.\n\t# Arguments \n\t\tinput_tensor: optional Keras tensor (i.e. output of `layers.Input()`)\n\t\t\tto use as input for the model. 
\n\t# Returns\n\t\tA Keras model instance.\n\t\"\"\"\n\tRes_input = layers.Input(shape=(10,)) #set number of input neurons to be the number of input signal\n\n\twidth = 128\n\n\tx = dens_block(Res_input,width)\n\tx = identity_block(x,width)\n\tx = identity_block(x,width)\n\n\tx = dens_block(x,width)\n\tx = identity_block(x,width)\n\tx = identity_block(x,width)\n\n\tx = dens_block(x,width)\n\tx = identity_block(x,width)\n\tx = identity_block(x,width)\t\n\n\tx = dens_block(x,width)\n\tx = identity_block(x,width)\n\tx = identity_block(x,width)\n\n\tx = layers.BatchNormalization()(x)\n\tx = layers.Dense(8, activation='linear')(x) #set number of output to number of parameters to be predicted (from your model fitting)\n\tmodel = models.Model(inputs=Res_input, outputs=x)\n\n\treturn model\n\n################################# Prepare data ####################################\n\n\nprint('Loading training set...')\nx_train = scipy.io.loadmat('training_data/database_train_DL_randomdire_GPD_fitT2s_SNR.mat')\nTrainSig = x_train['database_train_noisy']\nTrainParam = x_train['params_train_noisy']\nprint('Setting up the model...')\n\nscaler = MinMaxScaler(copy=True, feature_range=(0, 1))\nscaler.fit(TrainParam)\nTrainParam=scaler.transform(TrainParam)\n\n############################## Build Model ################################\nmodel = ResNet50Regression()\n\nmodel.compile(loss='mse', optimizer='adam', metrics=['mse'])\nmodel.summary()\n\n#compute running time\nstarttime = datetime.datetime.now()\n\nprint('Training the model...')\n#set epoch to 100 or 1000\nhistory = model.fit(TrainSig,TrainParam, epochs=10, batch_size=100, verbose=2, callbacks=[callbacks.EarlyStopping(monitor='val_loss', patience=20,verbose=2, mode='auto')], validation_split=0.2)\n\nendtime = datetime.datetime.now()\n\n############################## Save Model #################################\nprint('Saving the trained DL model...')\nmodel.save('trained_resreg.h5')\nfilename = 
'scaler_resreg.sav'\npickle.dump(scaler, open(filename, 'wb'))\nprint('DONE')\n\n############################# Model Prediction #################################\n\nimport tensorflow as tf\n\n# Load the trained model and scaler\nmodel=tf.keras.models.load_model('trained_resreg.h5') \nscaler = pickle.load(open('scaler_resreg.sav', 'rb'))\n\n# Predict the parameters from DW-MRI input\n# Input is the pre-processed DW-MRI \nx_test = scipy.io.loadmat('training_data/patient_005_ROI_DL.mat')\nTestSig = x_test['Signal']\nTestPredict= model.predict(TestSig)\nTestPredict = scaler.inverse_transform(TestPredict)\n\n# Save prediction\ndata = {}\ndata['DLprediction'] = TestPredict\nscipy.io.savemat('patient_005_pred.mat',data)\nprint('saved pred')"
},
{
"alpha_fraction": 0.7668867707252502,
"alphanum_fraction": 0.7765362858772278,
"avg_line_length": 47.024391174316406,
"blob_id": "adea4479edc32c6952e627fc654fa03f446ab3ba",
"content_id": "59aa786f804901f361f5136c0a85bbad9f122920",
"detected_licenses": [
"CC0-1.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1971,
"license_type": "permissive",
"max_line_length": 313,
"num_lines": 41,
"path": "/README.md",
"repo_name": "vanya2v/Residual_Regression_Denoising_Networks",
"src_encoding": "UTF-8",
"text": "# Residual Regression Networks\nResidual Regression Networks with denoising component for deep-learning based model fitting\n\nTraditional quantitative MRI (qMRI) signal model fitting to diffusion-weighted MRI (DW-MRI) is slow and requires long computational time per patient. In this work, we\nexplore q-space learning for prostate cancer characterization. Our results show that deep residual regression networks are needed for more complex diffusion MRI models such as VERDICT with compensated relaxation.\n\n### Referencing and citing\n\nIf you use this repository, please refer to this citation:\n```\nValindria, V., Palombo, M., Chiou, E., Singh, S., Punwani, S., & Panagiotaki, E. (2021, April). Synthetic Q-Space Learning With Deep Regression Networks For Prostate Cancer Characterisation With Verdict. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI) (pp. 50-54). IEEE.\n```\n\nThis repository provides the codes for Model 3: stacked residual regression network with soft thresholding, as it is show to work effectively to remove noise-related features in MRI signals. Soft threshold can be learned in the dense block unit, before addition with residual connection, as shown in this Figure.\n\n\n\n\n### How to Use\n\n1. Setup a virtual environment with these dependencies:\n```\n keras with tensorflow backend\n scipy\n pandas\n```\n\n2. Start playing\n```\ntrain_resreg.py\n```\n\n3. Data you may need:\n\na. For training\nSynthetic training data (in-silico) q-space learning - generated from equations derived from a diffusion model. For our model, it is from VERDICT (please refer to another repository on how to generate this).\nIf you don't have it, you can use the one under ```training_data\\```\n \nb. For model prediction:\nPatient data for model prediction (pre-processed and registered). 
Predict the parameters from trained networks from DW-MRI scans (depends on protocols).\nPlease refer to another repository on how to obtain this format from raw DW-MRI data.\n"
}
] | 2 |
SeanTurner026/kafka-producer-server | https://github.com/SeanTurner026/kafka-producer-server | acc8c877b216d2a59555f206936af9cc9d6faff0 | eb0537693b278476d1e12a22aaaf9f168b201c1e | e4d2065d445af6dba388f9fe8fa714485e5b040f | refs/heads/master | 2021-05-25T11:41:23.827544 | 2018-03-29T11:18:06 | 2018-03-29T11:18:06 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6248934268951416,
"alphanum_fraction": 0.6393861770629883,
"avg_line_length": 32.514286041259766,
"blob_id": "b41379a0bbe6fcab95e3b9f8497d3c01997aace5",
"content_id": "217787b036ff00b5bdb1928063b85ab507627ae3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1173,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 35,
"path": "/echoserv_client.py",
"repo_name": "SeanTurner026/kafka-producer-server",
"src_encoding": "UTF-8",
"text": "import sys\nfrom twisted.internet import ssl, reactor\nfrom twisted.internet.address import IPv4Address\nfrom twisted.internet.protocol import ClientFactory, Protocol\nfrom OpenSSL.SSL import TLSv1_2_METHOD\n\nsent_data = \" \".join(sys.argv[1:])\n\nclass EchoClient(Protocol):\n def connectionMade(self):\n print(\"Connection established\")\n self.transport.write(bytes(sent_data, \"utf-8\"))\n\n def dataReceived(self, data):\n print(\"Server said:\", data)\n self.transport.loseConnection()\n\nclass EchoClientFactory(ClientFactory):\n protocol = EchoClient\n\n def clientConnectionFailed(self, connector, reason):\n print(\"Connection failed - goodbye!\")\n reactor.stop()\n\n def clientConnectionLost(self, connector, reason):\n print(\"Connection lost - goodbye!\")\n reactor.stop()\n\nif __name__ == '__main__':\n factory = EchoClientFactory()\n tls_options = ssl.DefaultOpenSSLContextFactory('domain.key',\n 'domain.crt',\n sslmethod=TLSv1_2_METHOD)\n reactor.connectSSL('127.0.0.1', 8000, factory, tls_options)\n reactor.run()\n"
},
{
"alpha_fraction": 0.5917808413505554,
"alphanum_fraction": 0.5980075001716614,
"avg_line_length": 35.5,
"blob_id": "3d3d362393b7208ec47bf4d2652cc96a09924502",
"content_id": "8929e276ec4b6d2c59ca5d9ca704d9e74f566a58",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4015,
"license_type": "permissive",
"max_line_length": 84,
"num_lines": 110,
"path": "/tcp_kafka_producer.py",
"repo_name": "SeanTurner026/kafka-producer-server",
"src_encoding": "UTF-8",
"text": "import sys\nimport argparse\nfrom twisted.internet import ssl, reactor\nfrom twisted.internet.protocol import Factory, Protocol\nfrom OpenSSL.SSL import TLSv1_2_METHOD\nfrom confluent_kafka import Producer\n\nparser = argparse.ArgumentParser(description='Kafka TCP Stream Forwarder')\n\nparser.add_argument('-k', '--kafka', metavar='KAFKA',\n help='Kafka host:ip to forward to')\n\nparser.add_argument('-t', '--topic', metavar='TOPIC',\n help='Topic for messages')\n\nparser.add_argument('-i', '--interface', metavar='INTERFACE',\n help='Listen on this interface')\n\nparser.add_argument('-p', '--port', metavar='PORT',\n help='Listen for client connections on this port')\n\nparser.add_argument('-c', '--client', metavar='IP',\n help='Add additional IP host addresses or CIDR addresses \\\n to approved clients <format>: (host2,host3,host4)')\n\nparser.add_argument('-f', '--cert', metavar='FILENAME',\n help='Certificate for server <format>: domain1.key,domain1.crt')\n\n # unused\nparser.add_argument('-m', '--management', metavar='TOPIC',\n help='Topic for connection events')\n\n # unused\nparser.add_argument('--info', dest=' ', help='More logging to console')\n\n # unused\nparser.add_argument('--debug', dest=' ', help='Most detailed logging to \\\n console')\n\n# creates a dictionary to hold all information called with the program\nargs = vars(parser.parse_args())\n\n# initialise listening port\nport = int(args['port'])\n\n# initialise variable for kafka address\nkafka_client = args['kafka']\n\n# initialise host to listen for\nif args['client'] == None:\n host_addrs = [args['interface']]\n\n# initialise additional IP hosts to listen for\nelse:\n host_addrs = args['client'].split(',')\n host_addrs.append(args['interface'])\n\n# initialise list of list of key and certificates to accompany hosts\nkeys = args['cert'].split(',')\n\n# initialise topic for kafka producer\ntopic = args['topic']\n\n# unused, code to cause program to print diagnostic information to console 
if\n# proper flags are passed when calling the program\n# if '--debug' in sys.argv[1:]:\n# pass\n# elif '--info' in sys.argv[1:]:\n# pass\n\n\nclass Echo(Protocol):\n def dataReceived(self, data):\n \"\"\"send response to client, and produce data to kafka consumer\"\"\"\n # extract IP address from incoming connection\n addr, _ = self.transport.client\n # confirm that cleint's IP is a whitelisted host\n if addr in host_addrs:\n # send confirmation message to client\n self.transport.write(b'received!')\n # connect to kafka\n p = Producer({'bootstrap.servers': kafka_client})\n # stage data from clent to kafka topic\n p.produce(topic, data)\n # send all data to kafka topic\n p.flush()\n print('sent message to kafka!')\n\nif __name__ == '__main__':\n # factory produces the echo protocol, which handles input on the incoming\n # server connection\n factory = Factory()\n factory.protocol = Echo\n\n # initialise listening on given port with given credentials\n # keys[0] - domain.key, keys[1] - domain.crt\n reactor.listenSSL(port,\n factory,\n ssl.DefaultOpenSSLContextFactory(keys[0],\n keys[1],\n sslmethod=TLSv1_2_METHOD),\n )\n # # the interface argument of listenSSL takes an IP\n # # address, however, I am not able to get my program\n # # to run with an IP address other than localhost or\n # # the host computer's IP\n\n # interface='192.168.1.109')\n # starts the event loop that Twisted utilises, and provides threading\n reactor.run()\n"
},
{
"alpha_fraction": 0.7014590501785278,
"alphanum_fraction": 0.7396184206008911,
"avg_line_length": 34.68000030517578,
"blob_id": "1c6193c4e3f04320647758569917eaac0d6e9cf8",
"content_id": "609409fee0015ea56f24fa55a6dd601c8e565ae9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 891,
"license_type": "permissive",
"max_line_length": 220,
"num_lines": 25,
"path": "/README.md",
"repo_name": "SeanTurner026/kafka-producer-server",
"src_encoding": "UTF-8",
"text": "## Standalone TLS/SSL TCP server\n\nThe producer.py is an Apache Kafka producer server which listens on a given port from connections from a whitelisted set of IPs. The server produces each line recieved as a message on a given topic on the Kafka instance.\n\n### producer.py:\n- Communicates with a kafka server on 127.0.0.1 with port 9092\n- Whitelists localhost (127.0.0.1)\n- Listens to incoming communication on port 8000\n- Utilises private key and self signed certificates called domain\n- Anything separated by commas passed after -c will also be added to the whitelisted hosts.\n\n### Run the kafka producer server with the following command.\n```\npython tcp_kafka_producer.py \\\n -k localhost:9092 \\\n -i 127.0.0.1 \\\n -f domain.key,domain.crt \\\n -p 8000 \\\n -c <additional,ip,addresses>\n```\n\n### Send messages with the following command.\n```\npython echoserv_client.py test test\n```"
}
] | 3 |
fawzia1998/project | https://github.com/fawzia1998/project | a2a7facb432765506c7a4493e09015d7dccca004 | ceefca9545b1ad587474939361cf41cf5beb9bc0 | 0a5b1d7c399704d1c3cdfba4e4aa03ae3d2e77a0 | refs/heads/main | 2023-03-08T11:28:25.411533 | 2021-02-17T23:12:19 | 2021-02-17T23:12:19 | 336,326,009 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.5624572038650513,
"alphanum_fraction": 0.6056177020072937,
"avg_line_length": 32.79922866821289,
"blob_id": "be88982eaf972829f185fc21a6c1855bd6289f66",
"content_id": "a2a3a6bd809b5741e15e06955cc5b2c12f4809bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8758,
"license_type": "no_license",
"max_line_length": 149,
"num_lines": 259,
"path": "/project.py",
"repo_name": "fawzia1998/project",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# In[9]:\n\n\nimport tensorflow as tf\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Conv2D,MaxPool2D,Dense,Flatten,BatchNormalization,Dropout\nfrom tensorflow.keras.optimizers import Adam\n\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom tqdm import tqdm\nimport cv2\nfrom cv2 import *\n\n\nimport glob\nimport tkinter as tk\nfrom tkinter import filedialog\nfrom tkinter import ttk\nfrom tkinter import *\nimport time\nimport threading\n\n\nprint(\"yes\")\n\n\nroot=tk.Tk()\nroot.configure(background=\"#83b4c0\")\n\n\ndef fireTransferLearning():\n \n print(\"...........................\")\n \ndef fireCNN():\n \n textcnn.set(\"starting CNN\") \n model = Sequential()\n model.add(Conv2D(16,kernel_size=(3,3),activation='relu',input_shape=(imgsize,imgsize,3)))\n model.add(BatchNormalization())\n model.add(MaxPool2D(2,2))\n model.add(Dropout(0.3))\n \n model.add(Conv2D(32,kernel_size=(3,3),activation='relu'))\n model.add(BatchNormalization())\n model.add(MaxPool2D(2,2))\n model.add(Dropout(0.3))\n model.add(Conv2D(64,kernel_size=(3,3),activation='relu'))\n model.add(BatchNormalization())\n model.add(MaxPool2D(2,2))\n model.add(Dropout(0.4))\n model.add(Flatten())\n model.add(Dense(128,activation='relu'))\n model.add(BatchNormalization())\n model.add(Dropout(0.5)) \n model.add(Dense(25,activation='sigmoid'))\n model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])\n \n history = model.fit(X_train,y_train,epochs=10,validation_data=(X_test,y_test))\n textcnn.set(\"Complete CNN\")\n \n \ndef Check():\n if var.get() == 1:\n fireCNN()\n elif var.get() == 2:\n fireTransferLearning();\n else:\n return\n\ndef open_cv():\n filename = filedialog.askopenfilename(initialdir=\"C:/\", title=\"select file\",filetypes=((\"CSV Files\",\"*.csv\"), (\"all files\", \"*.*\")))\n print(filename)\n global 
df\n try:\n df = pd.read_csv(filename)\n except:\n print(\"cant open the file\")\n return;\n print(df)\n \ndef open_file():\n choose=filedialog.askdirectory()\n global imgsize\n imgsize = 120\n print(choose)\n global X\n X = []\n global y\n global path\n global img\n #tqdm\n textvar.set(\"starting...\")\n length = (range(df.shape[0]))\n old = 0\n new = 0\n for i in length:\n new = (i * 100) / int(df.shape[0])\n if int(new) > int(old):\n old = new;\n progress['value'] +=1\n\n path = choose+'/'+df['Id'][i]+'.jpg'\n print(path)\n img = cv2.imread(path)\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n img= cv2.resize(img,(imgsize,imgsize))\n X.append(img)\n tarinning.update_idletasks()\n X = np.array(X)\n y = df.drop(['Id','Genre'],axis=1)\n y = y.to_numpy()\n global X_train, X_test, y_train, y_test\n X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.1)\n textvar.set(\"finished.\")\n \n \n \n \n\n\ndef trainning():\n r=IntVar()\n global tarinning\n tarinning=Toplevel(root)\n tarinning.geometry(\"500x500\")\n tarinning.title(\"Trainning\")\n tarinning.configure(background=\"#83b4c0\")\n l=Label(tarinning,text=\"MOVIE GENRE DETECTION SYSTEM \",fg=\"black\", bg=\"#83b4c0\")\n l.place(relx = 0.5, rely = 0.13, anchor=CENTER)\n l=Label(tarinning,text=\"______________________________________________________________________________________________\",fg=\"black\", bg=\"#83b4c0\")\n l.place(relx = 0.5, rely = 0.16, anchor=CENTER)\n selectlabel=Label(tarinning, text=\"Select notation files\",fg=\"black\", bg=\"#83b4c0\")\n selectlabel.place(relx = 0.25, rely = 0.25, anchor=E)\n selectbut=Button(tarinning,text=\"Select\", command=open_cv)\n selectbut.place(relx = 0.45, rely = 0.25, anchor=E)\n selectlabel=Label(tarinning, text=\"Select images files\",fg=\"black\", bg=\"#83b4c0\")\n selectlabel.place(relx = 0.25, rely = 0.35, anchor=E)\n selectbut=Button(tarinning,text=\"Select\", command=open_file)\n selectbut.place(relx = 0.45, rely = 0.35, anchor=E)\n\n\n global 
progress\n progress=ttk.Progressbar(tarinning,orient=HORIZONTAL,length=120, mode='determinate')\n progress.place(relx = 0.75, rely=0.35, anchor=E)\n \n global textvar \n textvar = StringVar();\n textlable = Label(tarinning, textvariable=textvar, fg=\"black\", bg=\"#83b4c0\")\n textlable.place(relx=0.92, rely=0.35, anchor=E)\n \n techlabel=Label(tarinning, text=\"Deep Learning Techniqes\",fg=\"black\", bg=\"#83b4c0\")\n techlabel.place(relx = 0.3, rely = 0.5, anchor=E)\n\n\n global var\n var = tk.IntVar()\n rad1=Radiobutton(tarinning,text=\"CNN\",variable=var,value=1,fg=\"black\", bg=\"#83b4c0\")\n rad2=Radiobutton(tarinning,text=\"Transfer learning\",variable=var,value=2,fg=\"black\", bg=\"#83b4c0\")\n \n save=Button(tarinning, text=\"Save\")\n Start=Button(tarinning, text=\"Start\", command=Check)\n rad1.place(relx = 0.11, rely = 0.6, anchor=E)\n rad2.place(relx = 0.24, rely = 0.7, anchor=E)\n Start.place(relx = 0.3, rely = 0.8, anchor=E)\n save.place(relx = 0.4, rely = 0.8, anchor=E)\n \n global textcnn\n textcnn = StringVar();\n textlabcnn = Label(tarinning, textvariable=textcnn, fg=\"black\", bg=\"#83b4c0\")\n textlabcnn.place(relx=0.92, rely=0.7, anchor=E)\n \n \n\n\n \ndef bar():\n import time\n progress['value']=20\n tarinning.update_idletasks()\n time.sleep(1)\n progress['value']=50\n tarinning.update_idletasks()\n time.sleep(1)\n progress['value']=80\n tarinning.update_idletasks()\n time.sleep(1)\n progress['value']=100\n \ndef testing():\n test=Toplevel(root)\n test.geometry(\"500x500\")\n test.title(\"Testing\")\n test.configure(background=\"#83b4c0\")\n l=Label(test,text=\"MOVIE GENRE DETECTION SYSTEM \",fg=\"black\", bg=\"#83b4c0\")\n l.place(relx = 0.5, rely = 0.13, anchor=CENTER)\n l=Label(test,text=\"______________________________________________________________________________________________\",fg=\"black\", bg=\"#83b4c0\")\n l.place(relx = 0.5, rely = 0.16, anchor=CENTER)\n selectlabel=Label(test, text=\"Select testing data:\",fg=\"black\", 
bg=\"#83b4c0\")\n selectlabel.place(relx = 0.3, rely = 0.25, anchor=E)\n choose=Button(test, text=\"Choose File\")\n choose.place(relx = 0.28, rely = 0.35, anchor=E)\n load=Button(test, text=\"Upload Model\")\n load.place(relx = 0.3, rely = 0.45, anchor=E)\n testbut=Button(test, text=\"Test\")\n testbut.place(relx = 0.2, rely = 0.55, anchor=E)\n acc=Button(test, text=\"Show Accuracy\")\n acc.place(relx = 0.4, rely = 0.55, anchor=E)\ndef predcting():\n predct=Toplevel(root)\n predct.geometry(\"500x500\")\n predct.title(\"Testing\")\n predct.configure(background=\"#83b4c0\")\n l=Label(predct,text=\"MOVIE GENRE DETECTION SYSTEM \",fg=\"black\", bg=\"#83b4c0\")\n l.place(relx = 0.5, rely = 0.13, anchor=CENTER)\n l=Label(predct,text=\"______________________________________________________________________________________________\",fg=\"black\", bg=\"#83b4c0\")\n l.place(relx = 0.5, rely = 0.16, anchor=CENTER)\n selectlabel=Label(predct, text=\"Select predcting data:\",fg=\"black\", bg=\"#83b4c0\")\n selectlabel.place(relx = 0.3, rely = 0.25, anchor=E)\n choose=Button(predct, text=\"Choose pooster\")\n choose.place(relx = 0.30, rely = 0.3, anchor=E)\n predctbut=Button(predct, text=\"Predct\")\n predctbut.place(relx = 0.50, rely = 0.3, anchor=E)\n img=Label(predct, text=\"Image:\",fg=\"black\", bg=\"#83b4c0\")\n img.place(relx = 0.2, rely = 0.35, anchor=E)\n pre=Label(predct, text=\"Predct:\",fg=\"black\", bg=\"#83b4c0\")\n pre.place(relx = 0.5, rely = 0.35, anchor=E)\n \nl=Label(root,text=\"MOVIE GENRE DETECTION SYSTEM \",fg=\"black\", bg=\"#83b4c0\")\nl.place(relx = 0.5, rely = 0.13, anchor=CENTER)\nl=Label(root,text=\"______________________________________________________________________________________________\",fg=\"black\", bg=\"#83b4c0\")\nl.place(relx = 0.5, rely = 0.16, anchor=CENTER)\nl=Label(root,text=\"Deep Learning \",fg=\"black\", bg=\"#83b4c0\")\nl.place(relx = 0.5, rely = 0.2, anchor=CENTER)\ntrinning=Button(root,text=\"Trinning\", 
command=trainning,fg=\"black\", bg=\"light grey\")\ntrinning.place(relx = 0.2, rely = 0.3, anchor=CENTER)\ntesting=Button(root,text=\"Testing\",command=testing,fg=\"black\", bg=\"light grey\")\ntesting.place(relx = 0.5, rely = 0.3, anchor = CENTER)\npredcting=Button(root,text=\"Predcting\",command=predcting ,fg=\"black\", bg=\"light grey\")\npredcting.place(relx = 0.8, rely = 0.3, anchor = CENTER)\nroot.geometry(\"500x500\")\nroot.title(\"MOVIE GENRE DETECTION SYSTEM\")\nroot.mainloop()\n\n\n# In[ ]:\n\n\n\n\n\n# In[ ]:\n\n\n\n\n"
}
] | 1 |
tdpreece/memcached_knowledge | https://github.com/tdpreece/memcached_knowledge | 5f0850b1781d3d5d958dfd5379e457180c780460 | 3a042986e9ef4924d3eb3a6f66a36ca0e6ad42d7 | 177c8a8fd53e5904d8e27b6e6513b2bb62e38d65 | refs/heads/master | 2021-01-12T08:12:59.237380 | 2016-12-22T14:48:44 | 2016-12-22T14:48:44 | 76,504,534 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.4880583882331848,
"alphanum_fraction": 0.514153003692627,
"avg_line_length": 29.761905670166016,
"blob_id": "b14886c41406cc9c495c92465dda90b8b09c2550",
"content_id": "fa8886bf6565c5eab4d48dbc98e5439973b2584d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4522,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 147,
"path": "/automove_example.py",
"repo_name": "tdpreece/memcached_knowledge",
"src_encoding": "UTF-8",
"text": "import socket\nfrom time import sleep\n\nimport memcache\n\n\n# The 500MB value will go in slab 40, which stores chunks of size up\n# to 616944KB and can store 1 chunk per page.\n# 64 sets with unique keys will fill up the cache.\n# The 100MB value will go in slab 32, which stores chunks of size up\n# to 103496KB and can store 10 chunk per page.\n# 640 sets with unique keys will fill up the cache.\n\n\nclass MyClient(memcache.Client):\n def get_item_stats(self):\n data = []\n for s in self.servers:\n if not s.connect():\n continue\n if s.family == socket.AF_INET:\n name = '%s:%s (%s)' % (s.ip, s.port, s.weight)\n elif s.family == socket.AF_INET6:\n name = '[%s]:%s (%s)' % (s.ip, s.port, s.weight)\n else:\n name = 'unix:%s (%s)' % (s.address, s.weight)\n serverData = {}\n data.append((name, serverData))\n s.send_cmd('stats items')\n readline = s.readline\n while 1:\n line = readline()\n if not line or line.strip() == 'END':\n break\n\n # e.g. STAT items:40:evicted_unfetched 1842\n item = line.split(' ', 2)\n slab = item[1].split(':', 2)[1:]\n if slab[0] not in serverData:\n serverData[slab[0]] = {}\n serverData[slab[0]][slab[1]] = item[2]\n return data\n\n def get_my_stats(self):\n slab_stats_wanted = ('total_pages', 'cmd_set')\n item_stats_wanted = ('evicted',)\n slab_stats = {\n k: v for k, v in\n self.get_slab_stats()[0][1].items() if k.isdigit()\n }\n item_stats = {\n k: v for k, v in\n self.get_item_stats()[0][1].items() if k.isdigit()\n }\n\n my_stats = {}\n for slab in slab_stats.keys():\n my_stats[slab] = {}\n for stat in slab_stats_wanted:\n my_stats[slab][stat] = slab_stats[slab][stat]\n for stat in item_stats_wanted:\n my_stats[slab][stat] = item_stats[slab][stat]\n return my_stats\n\n def enable_automove(self):\n for s in self.servers:\n if not s.connect():\n continue\n if s.family == socket.AF_INET:\n name = '%s:%s (%s)' % (s.ip, s.port, s.weight)\n elif s.family == socket.AF_INET6:\n name = '[%s]:%s (%s)' % (s.ip, s.port, s.weight)\n else:\n 
name = 'unix:%s (%s)' % (s.address, s.weight)\n            s.send_cmd('slabs automove 1')\n            s.expect(b'OK')\n        print('Enabled automove')\n\n    def disable_automove(self):\n        for s in self.servers:\n            if not s.connect():\n                continue\n            if s.family == socket.AF_INET:\n                name = '%s:%s (%s)' % (s.ip, s.port, s.weight)\n            elif s.family == socket.AF_INET6:\n                name = '[%s]:%s (%s)' % (s.ip, s.port, s.weight)\n            else:\n                name = 'unix:%s (%s)' % (s.address, s.weight)\n            s.send_cmd('slabs automove 0')\n            s.expect(b'OK')\n        print('Disabled automove')\n\n\ndef main():\n    mc = MyClient(['127.0.0.1:11211'])\n    mc.disable_automove()\n    print(mc.get_my_stats())\n\n    print('Fill cache with 500KB items')\n    key_value_pairs = key_value_pair_generator(size_kb=500, quantity=150)\n    set_many(mc, key_value_pairs)\n    print(mc.get_my_stats())\n\n    print('Start adding 100KB items')\n    for i in range(0, 10):\n        key_value_pairs = key_value_pair_generator(size_kb=100, quantity=2000)\n        set_many(mc, key_value_pairs)\n        print(mc.get_my_stats())\n        sleep(5)\n\n    mc.enable_automove()\n    for i in range(0, 10):\n        key_value_pairs = key_value_pair_generator(size_kb=100, quantity=2000)\n        set_many(mc, key_value_pairs)\n        print(mc.get_my_stats())\n        sleep(5)\n\n    mc.disable_automove()\n    for i in range(0, 10):\n        key_value_pairs = key_value_pair_generator(size_kb=100, quantity=2000)\n        set_many(mc, key_value_pairs)\n        print(mc.get_my_stats())\n        sleep(5)\n\n\ndef key_value_pair_generator(size_kb, quantity):\n    value = 'a' * (size_kb * 1000)\n    for i in range(0, quantity):\n        yield (get_key(), value)\n\n\nkey = 0\n\n\ndef get_key():\n    global key\n    key += 1\n    return str(key)\n\n\ndef set_many(mc, key_value_pairs):\n    for k, v in key_value_pairs:\n        mc.set(k, v)\n\n\nif __name__ == '__main__':\n    main()\n"
},
{
"alpha_fraction": 0.5007796287536621,
"alphanum_fraction": 0.6184478998184204,
"avg_line_length": 49.52727127075195,
"blob_id": "4edda313efa5de524abd5fc1a684052eba7ad4c0",
"content_id": "3f8e326409553dbef4dc91f1e04244e051ef46c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 8337,
"license_type": "no_license",
"max_line_length": 201,
"num_lines": 165,
"path": "/README.md",
"repo_name": "tdpreece/memcached_knowledge",
"src_encoding": "UTF-8",
"text": "The following was done on Ubuntu 14.04.1.\n\n# Installation\n\n```bash\nsudo apt-get install memcached\n```\n\nand it seemed to start automatically,\n\n```bash\n$ ps -ax | grep memcached\n11760 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1\n```\n\nmemcached can be controlled via,\n\n```\nsudo service memcached start|stop|status\n```\n\nand the configuration can be found in `/etc/memcached.conf`.\n\nCommands can be sent to memcached from the command line using netcat,\n\n```bash\n$ echo \"version\" | nc 127.0.0.1 11211\nVERSION 1.4.14 (Ubuntu)\n```\n\n## How the cache grows\n\n* Storage is broken up into 1 megabyte pages.\n* A page is assigned a slab class, which denotes that that page stores items a\n particular size.\n* These size ranges are determined by a configurable growth factor.\n\n```\n$ memcached -vv\nslab class 1: chunk size 96 perslab 10922\nslab class 2: chunk size 120 perslab 8738\nslab class 3: chunk size 152 perslab 6898\nslab class 4: chunk size 192 perslab 5461\nslab class 5: chunk size 240 perslab 4369\nslab class 6: chunk size 304 perslab 3449\nslab class 7: chunk size 384 perslab 2730\nslab class 8: chunk size 480 perslab 2184\nslab class 9: chunk size 600 perslab 1747\n...\n```\n\nThus, if I wanted to store a 140 byte item, memcached would have to make sure\nthat at least one page was assigned to slab class 3.\n\nThe allocation of pages to slab classes happens dynamically as memcached\nreceieves set commands for items of a given size.\n\ne.g. If I had allocated 9MB to memcached and sent set commands for:\n* 9000 items of 105 bytes\n* 4300 items of 500 bytes\nthe allocation of pages would look like the following:\n\n```\n+---+---+---+\n| 2 | 2 | 9 |\n+---+---+---+\n| 9 | 9 | |\n+---+---+---+\n| | | |\n+---+---+---+\n```\n\nThe default behaviour is that slabs aren't reassigned. 
If\nthe cache fills up with pages predominantly allocated to a few slab classes\nand then the size of items being stored changes (thus requiring different\nslab classes), you could end up with only a few pages actually being used\nfor the items you now want to cache. This can be overcome by switching\non the automove feature, which is described in the next section.\n\n## Reassignment of slabs\n\nTo allow reassignment of slabs (a feature introduced in [v1.4.11](https://github.com/memcached/memcached/wiki/ReleaseNotes1411)\nI added the following line to `/etc/memcached.conf` and restarted the\nmemcached server (slab reassignment can only be enabled at start time).\n\n```\n-o slab_reassign\n```\n\nAutomove can be enabled by specifying an option on startup (`slab_automove`)\nor the following command,\n\n```bash\necho \"slabs automove 1\" | nc localhost 11211\n```\n\nA demonstration of automove can be seen by running the `automove_example.py`\nfile in this repo (output shown below),\n\n```bash\n$ python automove_example.py \nDisabled autmove\n{'32': {'total_pages': '6', 'cmd_set': '120000', 'evicted': '119940'}, '40': {'total_pages': '59', 'cmd_set': '300', 'evicted': '236'}}\nFill cache with 500KB items\n{'32': {'total_pages': '6', 'cmd_set': '120000', 'evicted': '119940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\nStart adding 100KB items\n{'32': {'total_pages': '6', 'cmd_set': '122000', 'evicted': '121940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '124000', 'evicted': '123940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '126000', 'evicted': '125940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '128000', 'evicted': '127940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '130000', 'evicted': 
'129940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '132000', 'evicted': '131940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '134000', 'evicted': '133940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '136000', 'evicted': '135940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '138000', 'evicted': '137940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '140000', 'evicted': '139940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\nEnabled automove\n{'32': {'total_pages': '6', 'cmd_set': '142000', 'evicted': '141940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '144000', 'evicted': '143940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '146000', 'evicted': '145940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '148000', 'evicted': '147940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '150000', 'evicted': '149940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '152000', 'evicted': '151940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '6', 'cmd_set': '154000', 'evicted': '153940'}, '40': {'total_pages': '59', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '7', 'cmd_set': '156000', 'evicted': '155930'}, '40': {'total_pages': '58', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '7', 'cmd_set': '158000', 'evicted': '157930'}, '40': {'total_pages': '58', 'cmd_set': '450', 
'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '160000', 'evicted': '159920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\nDisabled autmove\n{'32': {'total_pages': '8', 'cmd_set': '162000', 'evicted': '161920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '164000', 'evicted': '163920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '166000', 'evicted': '165920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '168000', 'evicted': '167920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '170000', 'evicted': '169920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '172000', 'evicted': '171920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '174000', 'evicted': '173920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '176000', 'evicted': '175920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '178000', 'evicted': '177920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n{'32': {'total_pages': '8', 'cmd_set': '180000', 'evicted': '179920'}, '40': {'total_pages': '57', 'cmd_set': '450', 'evicted': '386'}}\n```\nAs can be seen above, enabling automove results in pages being reallocated from slab `40` to slab `32`. 
The algorithm for automove is\n> \"If a slab class is seen as having the highest eviction count 3 times 10 seconds apart, it will take a page from a slab class which has had zero evictions in the last 30 seconds and move the memory.\"\n\n(see [v1.4.11](https://github.com/memcached/memcached/wiki/ReleaseNotes1411) release notes).\n\n# Multiple instances\n\nWhen multiple instances of memcached are used the client decides which node a\na key resides via an algorithm like,\n\n```\nnode_index = hash(key) % number_of_nodes\n```\n\nAdding nodes is difficult as adding a node would result in many items being\nrelocated thus resulting in many misses for a period after adding the new\nnode.\n\n# Replicas\n\nhttp://repcached.lab.klab.org/\n\n\n## References\n* https://github.com/memcached/memcached/wiki/UserInternals\n* http://balodeamit.blogspot.co.uk/2014/02/slab-reallocation-in-memcache.html\n"
}
] | 2 |
mohebbihr/bcb-crater-detection | https://github.com/mohebbihr/bcb-crater-detection | f49075db0692f12074ba5abbad1c95ecd0141378 | c8dbadab6b482eda17413acde45ea68976f915e8 | 3993e8667baa03bec15af376f9a915285cebe622 | refs/heads/master | 2022-10-16T15:51:23.247976 | 2020-06-11T20:43:00 | 2020-06-11T20:43:00 | 271,629,280 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5998243689537048,
"alphanum_fraction": 0.6220725774765015,
"avg_line_length": 36.52747344970703,
"blob_id": "330337e4eec649bf9256dfc3506f3c6ce1ca9d0c",
"content_id": "16ae919d50b50a06f9c38d03052ce36e9a1bc47b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3416,
"license_type": "no_license",
"max_line_length": 135,
"num_lines": 91,
"path": "/bcb-src/nn_cnn_src/sliding_window_cnn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "from skimage.transform import pyramid_gaussian\nimport cv2 as cv\nfrom helper import sliding_window\nimport os\nimport csv\nfrom crater_cnn import Network as CNN\nfrom crater_nn import Network as NN\nimport Param\nimport argparse\n\n# This script will go through all image tiles and detects crater area using sliding window method.\n# Then, write results as a csv file to the results folder. The results of this script is the input to the remove_duplicates.py script. \n# you need to provide the tile image name as argument after --tileimg command. For instance tile1_24\n\nparam = Param.Param()\ncwd = os.getcwd()\n\n# setup CNN\ncnn = CNN(img_shape=(50, 50, 1))\ncnn.add_convolutional_layer(5, 16)\ncnn.add_convolutional_layer(5, 36)\ncnn.add_flat_layer()\ncnn.add_fc_layer(size=64, use_relu=True)\ncnn.add_fc_layer(size=16, use_relu=True)\ncnn.add_fc_layer(size=2, use_relu=False)\ncnn.finish_setup()\n# model.set_data(data)\n\n# restore previously trained CNN model\ncnn_model_path = os.path.join(cwd, 'models/cnn/crater_model_cnn.ckpt')\ncnn.restore(cnn_model_path)\n \n# go through all the tile folders\ngt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\n\nfor gt_num in gt_list:\n\n tile_img = 'tile' + gt_num\n \n path = os.path.join('crater_data', 'tiles')\n img = cv.imread(os.path.join(path, tile_img + '.pgm'), 0)\n img = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)/255.0\n \n # Task. Creat new script and Apply results of segmentation phase (FCN) to remove the non-crater areas of crater image.\n \n # task: get the threshold of the image\n \n crater_list_cnn = []\n #crater_list_nn = []\n \n winS = param.dmin\n # loop over the image pyramid\n \n for (i, resized) in enumerate(pyramid_gaussian(img, downscale=1.5)):\n if resized.shape[0] < 31:\n break\n \n print(\"Resized shape: %d, Window size: %d, i: %d\" % (resized.shape[0], winS, i))\n \n # loop over the sliding window for each layer of the pyramid\n # this process takes about 7 hours. 
To do a quick test, we may try stepSize\n        # to be large (60) and see if the code runs OK\n        for (x, y, window) in sliding_window(resized, stepSize=8, windowSize=(winS, winS)):\n            \n            # apply a circular mask to the window here; think about whether the mask should be applied before or after resizing. \n            crop_img = cv.resize(window, (50, 50))\n            cv.normalize(crop_img, crop_img, 0, 255, cv.NORM_MINMAX)\n            crop_img = crop_img.flatten()\n            \n            p_non, p_crater = cnn.predict([crop_img])[0]\n            \n            scale_factor = 1.5 ** i\n            x_c = int((x + 0.5 * winS) * scale_factor)\n            y_c = int((y + 0.5 * winS) * scale_factor)\n            crater_r = int(winS * scale_factor / 2)\n            \n            # add its probability to a score combining the thresholded image and the normal images. \n            \n            if p_crater >= 0.75:\n                crater_data = [x_c, y_c, crater_r, p_crater]\n                crater_list_cnn.append(crater_data)\n        \n        \n    cnn_file = open(\"results/cnn/\"+tile_img+\"_sw_cnn_th.csv\",\"w\")\n    with cnn_file:\n        writer = csv.writer(cnn_file, delimiter=',')\n        writer.writerows(crater_list_cnn)\n    cnn_file.close()\n    \n    print(\"CNN detected \", len(crater_list_cnn), \"craters\")\n    print(\"The results are saved in the results/cnn/\"+tile_img+\"_sw_cnn_th.csv file.\")\n\n"
},
{
"alpha_fraction": 0.587997317314148,
"alphanum_fraction": 0.6062036156654358,
"avg_line_length": 34.33333206176758,
"blob_id": "009ea8a0fe30a1a00d20762a317b574528436423",
"content_id": "9fe29776a76640e7d0221d7e2155f3df5470f2d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1483,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 42,
"path": "/bcb-src/gen_results/plot_results.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import evaluate\nimport Param\nimport cv2 as cv\n\nstart_time = time.time()\n# the data after duplicate removal\nparam = Param.Param()\n\n#method_list = [\"birch\", \"exp\"]\nmethod_list = [\"birch\"]\ngt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\n#gt_list = [\"1_24\"]\n\nfor method in method_list:\n print(\"evaluation of \" + method + \" approach\")\n for gt in gt_list:\n # the image for drawing rectangles\n print(\"working on tile\" + gt)\n tile_name = \"tile\" + gt\n img_path = os.path.join('crater_data', 'tiles', tile_name + '.pgm')\n gt_img = cv.imread(img_path)\n gt_csv_path = os.path.join('crater_data', 'gt', gt + '_gt.csv')\n gt_data = pd.read_csv(gt_csv_path, header=None)\n\t\t\n # read detection from csv file.\n dt_csv_path = os.path.join('results', 'crater-ception', method, gt + '_sw_' + method + '.csv')\n craters = pd.read_csv(dt_csv_path, header=None)\n print(\"reading from file: \" + str(dt_csv_path))\n \n # save results path\n save_path = 'results/crater-ception/' + method + '/evaluations/' + gt + '_sw_' + method\n craters = craters[[0,1,2]]\n # evaluate with gt and draw it on final image.\n evaluate(craters, gt_data, gt_img, 64, True, save_path, param)\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))"
},
{
"alpha_fraction": 0.6243902444839478,
"alphanum_fraction": 0.6427767276763916,
"avg_line_length": 39.378787994384766,
"blob_id": "d25d0c878dced0b861419e50d4a77e3038e71428",
"content_id": "8acdce41bf82eb68fac883c5840989dca521ecce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2665,
"license_type": "no_license",
"max_line_length": 197,
"num_lines": 66,
"path": "/bcb-src/nn_cnn_src/remove_duplicates.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import calculateDistance, BIRCH_duplicate_removal,BIRCH2_duplicate_removal, Banderia_duplicate_removal, XMeans_duplicate_removal, draw_craters_rectangles, draw_craters_circles, evaluate\nimport Param\n\n# the raw data to process for duplicate removal\nparam = Param.Param()\nremoval_method = 'BIRCH'\n#removal_method = 'Banderia'\nstart_time = time.time()\n# go through all the tile folders\ngt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\n\nfor gt_num in gt_list:\n \n testset_name = 'tile' + gt_num\n print('Removing Duplicate Detections for tile: ' + testset_name)\n csv_path = 'results/cnn/' + testset_name +'_sw_cnn_th.csv'\n gt_csv_path = 'crater_data/gt/'+ gt_num +'_gt.csv'\n save_path = 'results/cnn/evaluations/' + removal_method + '/'+ testset_name+'_cnn'\n \n \n # the image for drawing rectangles\n img_path = os.path.join('crater_data', 'images', testset_name + '.pgm')\n gt_img = cv.imread(img_path)\n \n data = pd.read_csv(csv_path, header=None)\n gt = pd.read_csv(gt_csv_path, header=None)\n \n threshold = 0.75\n \n # first pass, remove duplicates for points of same window size\n df1 = {}\n for ws in data[2].unique():\n if (ws >= param.dmin) and (ws <= param.dmax):\n df1[ws] = data[ (data[3] > 0.75) & (data[2] == ws) ] # take only 75% or higher confidence\n df1[ws] = BIRCH_duplicate_removal(df1[ws])\n #df1[ws] = BIRCH2_duplicate_removal(df1[ws], threshold)\n \n # Start merging process\n # We will add points of greatest size first\n # then merge with the next smaller size and remove duplicates\n # Do this until the smallest window size has been included\n \n merge = pd.DataFrame()\n for ws in reversed(sorted(df1.keys())):\n merge = pd.concat([merge, df1[ws]])\n old_size = len(merge)\n #merge = BIRCH2_duplicate_removal(merge, threshold) # we can tweak ws for eliminations\n merge = BIRCH_duplicate_removal(merge) \n new_size = 
len(merge)\n print(\"Processed window size\", ws, \", considered\", old_size, \"points, returned\", new_size, \"points\")\n \n # save the no duplicate csv file\n merge[[0,1,2]].to_csv(\"%s_th_noduplicates.csv\" % save_path, header=False, index=False)\n craters = merge[[0,1,2]]\n \n # evaluate with gt and draw it on final image.\n dr, fr, qr, bf, f_measure, tp, fp, fn = evaluate(craters, gt, gt_img, 64, True, save_path, param)\n \n end_time = time.time()\n time_dif = end_time - start_time\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))\n"
},
{
"alpha_fraction": 0.4843537509441376,
"alphanum_fraction": 0.5442177057266235,
"avg_line_length": 30.95652198791504,
"blob_id": "fe8d41517b1ed8187f66644d38909c2edf17247c",
"content_id": "831475c21d7f072d32dba4bdcda306d37aa01deb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 735,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 23,
"path": "/bcb-src/nn_cnn_src/Param.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "class Param(object):\n\n def __init__(self):\n self.dmin = 15\n self.dmax = 400\n self.dratio = 2\n self.d_erase = 0.50\n self.xy_erase = 0.50\n self.d_tol = 0.50\n self.xy_tol = 20\n self.stage = 0\n self.T = 121 * (7+2) + 162\n self.tt = 121 * (7+2) + 162\n self.sz_trainset = 2\n self.acceptance = False\n self.go_train = False\n self.sz_testset = 2\n self.go_test = False\n self.miu = 0.55\n self.go_classf = False\n self.n_images = 0\n self.thresh_overlay = 0.25 # the maximum shared portion between a negative sample and positive samples\n self.thresh_std = 5 # minimum standart desviation for false example\n"
},
{
"alpha_fraction": 0.5776545405387878,
"alphanum_fraction": 0.6045958995819092,
"avg_line_length": 34.70754623413086,
"blob_id": "89e77788d48acbab9e30c00f9a1ac9f855fb9f53",
"content_id": "1b6509731a2e4719e265d4892c7398739a823e9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3786,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 106,
"path": "/bcb-src/nn_cnn_src/sliding_window_2networks.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "from skimage.transform import pyramid_gaussian\nimport cv2 as cv\nfrom helper import sliding_window\nimport time\nimport os\nimport csv\nfrom crater_cnn import Network as CNN\nfrom crater_nn import Network as NN\nimport pickle\ncwd = os.getcwd()\n\n# setup CNN\ncnn = CNN(img_shape=(50, 50, 1))\ncnn.add_convolutional_layer(5, 16)\ncnn.add_convolutional_layer(5, 36)\ncnn.add_flat_layer()\ncnn.add_fc_layer(size=64, use_relu=True)\ncnn.add_fc_layer(size=16, use_relu=True)\ncnn.add_fc_layer(size=2, use_relu=False)\ncnn.finish_setup()\n# model.set_data(data)\n\n# setup NN\nnn = CNN(img_shape=(50, 50, 1))\nnn.add_flat_layer()\nnn.add_fc_layer(size=50 * 50, use_relu=True)\nnn.add_fc_layer(size=16, use_relu=True)\nnn.add_fc_layer(size=2, use_relu=False)\nnn.finish_setup()\n\n# restore previously trained CNN model\ncnn_model_path = os.path.join(cwd, 'results/models/crater_west_model_cnn.ckpt')\ncnn.restore(cnn_model_path)\n\n# restore previously trained NN model\nnn_model_path = os.path.join(cwd, 'results/models/crater_west_model_nn.ckpt')\nnn.restore(nn_model_path)\n\n#with open('results/models/nn_model_02500_00.pkl', 'rb') as finput:\n# nn = pickle.load(finput)\n\npath = os.path.join('crater_data', 'images')\nimg = cv.imread(os.path.join(path, 'tile2_24.pgm'), 0)\nimg = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)/255.0\n\ncrater_list_cnn = []\ncrater_list_nn = []\n\nwin_sizes = range(20, 30, 2)\n# loop over the image pyramid\nfor (i, resized) in enumerate(pyramid_gaussian(img, downscale=1.5)):\n if resized.shape[0] < 31:\n break\n for winS in win_sizes:\n print(\"Resized shape: %d, Window size: %d\" % (resized.shape[0], winS))\n\n # loop over the sliding window for each layer of the pyramid\n # this process takes about 7 hours. 
To do quick test, we may try stepSize\n # to be large (60) and see if code runs OK\n #for (x, y, window) in sliding_window(resized, stepSize=2, windowSize=(winS, winS)):\n for (x, y, window) in sliding_window(resized, stepSize=60, windowSize=(winS, winS)):\n # since we do not have a classifier, we'll just draw the window\n clone = resized.copy()\n y_b = y + winS\n x_r = x + winS\n crop_img = clone[y:y_b, x:x_r]\n crop_img =cv.resize(crop_img, (50, 50))\n crop_img = crop_img.flatten()\n \n p_non, p_crater = cnn.predict([crop_img])[0]\n p_nn_non, nn_p = nn.predict([crop_img])[0]\n #nn_p = nn.feedforward_flat(crop_img)[0,0]\n \n scale_factor = 1.5 ** i\n if p_crater >= 0.5 or nn_p >= 0.5:\n x_c = int((x + 0.5 * winS) * scale_factor)\n y_c = int((y + 0.5 * winS) * scale_factor)\n crater_size = int(winS * scale_factor)\n \n if p_crater >= 0.5:\n crater_data = [x_c, y_c, crater_size, p_crater, 1]\n crater_list_cnn.append(crater_data)\n if nn_p >= 0.5:\n crater_data = [x_c, y_c, crater_size, nn_p, 1]\n crater_list_nn.append(crater_data)\n \n # if we want to see where is processed.\n # cv.rectangle(clone, (x, y), (x + winS, y + winS), (0, 255, 0), 2)\n # cv.imshow(\"Window\", clone)\n # cv.waitKey(1)\ncnn_file = open(\"results/west_train_center_test_2_24_cnn.csv\",\"w\")\nwith cnn_file:\n writer = csv.writer(cnn_file, delimiter=',')\n writer.writerows(crater_list_cnn)\ncnn_file.close()\n\nnn_file = open(\"results/west_train_crater_test_2_24_nn.csv\",\"w\")\nwith nn_file:\n writer = csv.writer(nn_file, delimiter=',')\n writer.writerows(crater_list_nn)\nnn_file.close()\n\n\nprint(\"CNN detected \", len(crater_list_cnn), \"craters\")\n\nprint(\"NN detected \", len(crater_list_nn), \"craters\")\n\n"
},
{
"alpha_fraction": 0.5498024225234985,
"alphanum_fraction": 0.5568953156471252,
"avg_line_length": 38.955467224121094,
"blob_id": "98bc450ba272056571b100b2f832d5f8585988a9",
"content_id": "d4a14ca0d5473bd0b1e75538cbc5001f7f8e93ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9869,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 247,
"path": "/bcb-src/nn_cnn_src/crater_cnn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import tensorflow as tf\nfrom network_blocks import *\nfrom crater_plots import *\nimport math\nimport time\nfrom datetime import timedelta\nimport numpy as np\nfrom sklearn.metrics import f1_score\n\nclass Network(object):\n def __init__(self, img_shape=(30, 30, 1), num_classes=2):\n self._img_h, self._img_w = img_shape[:2]\n self._img_shape = img_shape\n self._img_size_flat = self._img_h * self._img_w\n self._num_channels = img_shape[-1]\n self._num_classes = num_classes\n self._layer_conv = []\n self._weights_conv = []\n self._layer_fc = []\n self._session = None\n self._train_batch_size = 64\n self._data = None\n self._declare_placeholders()\n self._total_iterations = 0\n self._train_history = []\n self._test_history = []\n self._vald_history = []\n\n @property\n def history(self):\n return np.array([self._train_history, self._test_history, self._vald_history])\n\n @property\n def filters_weights(self):\n return self._session.run(self._weights_conv)\n\n def get_filters_activations(self, image):\n return self._session.run(self._layer_conv, feed_dict={self._x: [image]})\n\n def _declare_placeholders(self):\n \n self._x = tf.placeholder(tf.float32, shape=[None, self._img_size_flat], name='x')\n self._x_image = tf.reshape(self._x, [-1, self._img_h, self._img_w, self._num_channels])\n self._y_true = tf.placeholder(tf.float32, shape=[None, self._num_classes], name='y_true')\n self._y_true_cls = tf.argmax(self._y_true, axis=1)\n\n self._next_input = self._x_image\n self._next_input_size = self._num_channels\n\n def add_convolutional_layer(self, filter_size, num_filters, use_pooling=True):\n layer_conv, weights_conv = new_conv_layer( input=self._next_input,\n num_input_channels=self._next_input_size,\n filter_size=filter_size,\n num_filters=num_filters,\n use_pooling=use_pooling)\n\n self._next_input = layer_conv\n self._next_input_size = num_filters\n self._layer_conv.append(layer_conv)\n self._weights_conv.append(weights_conv)\n\n def add_flat_layer(self):\n 
layer_flat, num_features = flatten_layer(self._next_input)\n self._next_input = layer_flat\n self._next_input_size = num_features\n\n def add_fc_layer(self, size, use_relu):\n layer_fc = new_fc_layer(input=self._next_input,\n num_inputs=self._next_input_size,\n num_outputs=size,\n use_relu=use_relu)\n self._next_input = layer_fc\n self._next_input_size = size\n self._layer_fc.append(layer_fc)\n\n def finish_setup(self):\n final_layer = self._next_input\n self._y_pred = tf.nn.softmax(final_layer)\n self._y_pred_cls = tf.argmax(self._y_pred, axis=1)\n self._cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=final_layer, labels=self._y_true)\n self._cost = tf.reduce_mean(self._cross_entropy)\n self._optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(self._cost)\n self._correct_prediction = tf.equal(self._y_pred_cls, self._y_true_cls)\n self._accuracy = tf.reduce_mean(tf.cast(self._correct_prediction, tf.float32))\n self._saver = tf.train.Saver()\n self._session = tf.Session()\n self._session.run(tf.global_variables_initializer())\n\n def predict(self, images):\n feed_dict = {self._x: images}\n return self._session.run(self._y_pred, feed_dict=feed_dict)\n\n def set_data(self, data):\n self._data = data\n\n def _data_is_available(self):\n if not self._data:\n print( \"There is no data available.\\n\"\n \"Please use model.set_data(data) to attach it to the model\\n\"\n \"Data must be of class 'Data', with train, validation and test attributes\\n\"\n \"and 'next_batch' method\\n\")\n return False\n return True\n\n def optimize(self, epochs):\n if not self._data_is_available():\n return\n\n start_time = time.time()\n\n train_batch_size = 14\n trainset_size = len(self._data.train.labels)\n\n num_iterations = int(float(trainset_size)/train_batch_size * epochs) +1\n report_interval = int(math.floor(float(trainset_size)/epochs))\n\n for i in range(self._total_iterations,\n self._total_iterations + num_iterations):\n\n x_batch, y_true_batch = 
self._data.train.next_batch(train_batch_size)\n\n feed_dict_train = {self._x: x_batch,\n self._y_true: y_true_batch}\n\n self._session.run(self._optimizer, feed_dict=feed_dict_train)\n\n if (i+1) % 5 == 0:\n feed_dict_train = {self._x: self._data.train.images, self._y_true: self._data.train.labels}\n feed_dict_test = {self._x: self._data.test.images, self._y_true: self._data.test.labels}\n feed_dict_vald = {self._x: self._data.validation.images, self._y_true: self._data.validation.labels}\n\n train_acc = self._session.run(self._accuracy, feed_dict=feed_dict_train)\n test_acc = self._session.run(self._accuracy, feed_dict=feed_dict_test)\n vald_acc = self._session.run(self._accuracy, feed_dict=feed_dict_vald)\n\n self._train_history.append(train_acc)\n self._test_history.append(test_acc)\n self._vald_history.append(vald_acc)\n\n msg = (\"Completed epochs: {0:>6}, Training Accuracy: {1:>6.1%}, \"\n \"Test Accuracy: {2:>6.1%}, Validation Accuracy: {3:>6.1%}\" )\n\n print(msg.format(self._data.train.epochs_completed, train_acc, test_acc, vald_acc))\n\n end_time = time.time()\n time_dif = end_time - start_time\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))\n\n def optimize_no_valid(self, epochs):\n if not self._data_is_available():\n return\n \n start_time = time.time()\n \n train_batch_size = 14\n trainset_size = len(self._data.train.labels)\n \n num_iterations = int(float(trainset_size)/train_batch_size * epochs) +1\n report_interval = int(math.floor(float(trainset_size)/epochs))\n \n for i in range(self._total_iterations,\n self._total_iterations + num_iterations):\n \n x_batch, y_true_batch = self._data.train.next_batch(train_batch_size)\n \n feed_dict_train = {self._x: x_batch,\n self._y_true: y_true_batch}\n \n self._session.run(self._optimizer, feed_dict=feed_dict_train)\n \n if (i+1) % 5 == 0:\n feed_dict_train = {self._x: self._data.train.images, self._y_true: self._data.train.labels}\n feed_dict_test = {self._x: self._data.test.images, 
self._y_true: self._data.test.labels}\n \n train_acc = self._session.run(self._accuracy, feed_dict=feed_dict_train)\n test_acc = self._session.run(self._accuracy, feed_dict=feed_dict_test)\n \n self._train_history.append(train_acc)\n self._test_history.append(test_acc)\n \n msg = (\"Completed epochs: {0:>6}, Training Accuracy: {1:>6.1%}, \"\n \"Test Accuracy: {2:>6.1%}\" )\n \n print(msg.format(self._data.train.epochs_completed, train_acc, test_acc))\n \n end_time = time.time()\n time_dif = end_time - start_time\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))\n\n def print_test_accuracy(self, show_example_errors=False, show_confusion_matrix=False):\n\n if not self._data_is_available():\n return\n\n test_batch_size = 256\n num_test = len(self._data.test.images)\n\n cls_pred = np.zeros(shape=num_test, dtype=np.int)\n\n i = 0\n while i < num_test:\n j = min(i + test_batch_size, num_test)\n\n images = self._data.test.images[i:j, :]\n labels = self._data.test.labels[i:j, :]\n\n feed_dict = {self._x: images,\n self._y_true: labels}\n\n cls_pred[i:j] = self._session.run(self._y_pred_cls, feed_dict=feed_dict)\n\n i = j\n\n cls_true = self._data.test.cls\n correct = (cls_true == cls_pred)\n\n correct_sum = correct.sum()\n\n acc = float(correct_sum) / num_test\n\n f1 = f1_score(cls_true, cls_pred, average='weighted')\n\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2}) , F1: {3: .1%}\"\n print(msg.format(acc, correct_sum, num_test, f1))\n\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct, data_test=self._data.test, img_shape=self._img_shape)\n\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred, data_test=self._data.test)\n \n return acc,f1\n \n def save(self, filename):\n save_path = self._saver.save(self._session, filename)\n print(\"Model saved in path: %s\" % save_path)\n \n\n def restore(self, filename):\n try:\n 
self._saver.restore(self._session, filename)\n print(\"Model restored.\")\n except Exception as ex:\n print(\"There was a problem (%s). Couldn't restore file: %s\"\n % (type(ex).__name__, filename))\n"
},
{
"alpha_fraction": 0.6042081713676453,
"alphanum_fraction": 0.6335906386375427,
"avg_line_length": 39.68062973022461,
"blob_id": "c1ab41ba1e5e74293292a8a9220794f83d38c2eb",
"content_id": "8f97f7acec18048394cd75042b3a5b6e2e540384",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8032,
"license_type": "no_license",
"max_line_length": 152,
"num_lines": 191,
"path": "/bcb-src/lime/lime_exp_all.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "# Loading libraries ----\r\n\r\n# misc\r\nimport os\r\nimport glob\r\nimport shutil\r\nfrom random import sample, randint, shuffle\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom skimage.segmentation import mark_boundaries\r\n\r\n# sci-kit learn\r\nfrom sklearn import metrics\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.linear_model import LogisticRegression\r\n\r\n# plots\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\n\r\n# image operation\r\nimport cv2\r\nfrom PIL import Image\r\n\r\n# keras \r\nimport keras\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout\r\nfrom keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img\r\nfrom keras.applications import inception_v3 as inc_net\r\nfrom utils import preprocess_images, transform_image, pretty_cm, evaluation_indices\r\n\r\n# lime\r\nimport lime\r\nfrom lime import lime_image\r\n\r\n \r\n# creating training and test sets \r\n# put crater_date folder on root directory too. 
\r\n# create the training_set and test_set directories inside crater_data and put west region on training_set and other regions on test_set on each running \r\n# crater_data/training_set/crater/\r\n# crater_data/training_set/non-crater/\r\n\r\n# west region as training_set\r\n#preprocess_images('tile2_24', 'training_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile2_25', 'training_set', img_dimensions=(64, 64))\r\n# east region as test_set\r\npreprocess_images('tile1_24', 'test_set', img_dimensions=(64, 64))\r\npreprocess_images('tile1_25', 'test_set', img_dimensions=(64, 64))\r\npreprocess_images('tile2_24', 'test_set', img_dimensions=(64, 64))\r\npreprocess_images('tile2_25', 'test_set', img_dimensions=(64, 64))\r\npreprocess_images('tile3_24', 'test_set', img_dimensions=(64, 64))\r\npreprocess_images('tile3_25', 'test_set', img_dimensions=(64, 64))\r\n\r\ntrain_datagen = ImageDataGenerator(rescale = 1./255,\r\n shear_range = 0.2,\r\n zoom_range = 0.2,\r\n horizontal_flip = False)\r\n\r\ntest_datagen = ImageDataGenerator(rescale = 1./255)\r\n\r\ntraining_set = train_datagen.flow_from_directory('./crater_data/training_set',\r\n target_size = (64, 64),\r\n batch_size = 32,\r\n class_mode = 'binary')\r\n\r\ntest_set = test_datagen.flow_from_directory('./crater_data/test_set',\r\n target_size = (64, 64),\r\n batch_size = 32,\r\n class_mode = 'binary')\r\n\r\n# inspecting class labels for future reference \r\nlabels_index = { 0 : \"non-crater\", 1 : \"crater\" }\r\ntraining_set.class_indices\r\n\r\n# initialize CNN \r\n# Initialising \r\ncnn_classifier = Sequential()\r\n\r\n# 1st conv. layer\r\ncnn_classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))\r\ncnn_classifier.add(MaxPooling2D(pool_size = (2, 2)))\r\n\r\n# 2nd conv. layer\r\ncnn_classifier.add(Conv2D(32, (3, 3), activation = 'relu')) #no need to specify the input shape\r\ncnn_classifier.add(MaxPooling2D(pool_size = (2, 2)))\r\n\r\n# 3nd conv. 
layer\r\ncnn_classifier.add(Conv2D(64, (3, 3), activation = 'relu')) #no need to specify the input shape\r\ncnn_classifier.add(MaxPooling2D(pool_size = (2, 2)))\r\n\r\n# Flattening\r\ncnn_classifier.add(Flatten())\r\n\r\n# Full connection\r\ncnn_classifier.add(Dense(units = 64, activation = 'relu'))\r\ncnn_classifier.add(Dropout(0.5)) # quite aggresive dropout, maybe reduce\r\ncnn_classifier.add(Dense(units = 1, activation = 'sigmoid'))\r\n\r\ncnn_classifier.summary()\r\n\r\n# Compiling the CNN\r\ncnn_classifier.compile(optimizer = 'adam', # 'adam'/rmsprop'\r\n loss = 'binary_crossentropy', \r\n metrics = ['accuracy'])\r\n\r\n\r\n# load the model \r\ncnn_classifier = keras.models.load_model('cnn_models/west_train_model_dropout.h5')\r\n\r\n#preparing data for predictions\r\n\r\nsize = (64, 64)\r\nX_eval = list()\r\ny_eval = list()\r\n\r\n# crater part\r\nfiles = os.listdir('./crater_data/test_set/crater')\r\nfiles.sort()\r\n\r\nfor i in range(0, len(files) - 1):\r\n X_eval.append(transform_image('./crater_data/test_set/crater/' + files[i + 1], size))\r\n y_eval.append(1)\r\n\r\n# non-crater part\r\nfiles = os.listdir('./crater_data/test_set/non-crater')\r\nfiles.sort()\r\n\r\nfor i in range(0, len(files) - 1):\r\n X_eval.append(transform_image('./crater_data/test_set/non-crater/' + files[i + 1], size))\r\n y_eval.append(0)\r\n\r\n# stacking the arrays \r\nX_eval = np.vstack(X_eval)\r\n\r\ncnn_pred = cnn_classifier.predict_classes(X_eval, batch_size = 32)\r\n\r\nprint(\"evaluate the results using lime\")\r\npretty_cm(cnn_pred, y_eval, ['crater', 'non-crater'])\r\n\r\ncorrectly_classified_indices, misclassified_indices = evaluation_indices(cnn_pred, y_eval)\r\n\r\n# correctly classified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(correctly_classified_indices)\r\nfor plot_index, good_index in enumerate(correctly_classified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n plt.imshow(X_eval[good_index])\r\n plt.title(' Predicted: {}, Actual: {} 
'.format(labels_index[cnn_pred[good_index][0]], \r\n labels_index[y_eval[good_index]]), fontsize = 18) \r\nplt.savefig('results/correctly_classified.png', dpi=400)\r\n \r\n# lime explanation of correctly classified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(correctly_classified_indices)\r\nfor plot_index, good_index in enumerate(correctly_classified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n explainer = lime_image.LimeImageExplainer()\r\n explanation = explainer.explain_instance(X_eval[good_index], cnn_classifier.predict_classes, top_labels=2, hide_color=0, num_samples=1000)\r\n temp, mask = explanation.get_image_and_mask(0, positive_only=False, num_features=1000, hide_rest=False)\r\n #temp, mask = explanation.get_image_and_mask(0, positive_only=True, num_features=1000, hide_rest=True)\r\n x = mark_boundaries(temp / 2 + 0.5, mask)\r\n plt.imshow(x, interpolation='none')\r\n plt.title(' Predicted: {}, Actual: {} '.format(labels_index[cnn_pred[good_index][0]], \r\n labels_index[y_eval[good_index]]), fontsize = 18)\r\nplt.savefig('results/lime_correctly_classified.png', dpi=400)\r\n \r\n# misclassified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(misclassified_indices)\r\nfor plot_index, bad_index in enumerate(misclassified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n plt.imshow(X_eval[bad_index])\r\n plt.title(' Predicted: {}, Actual: {} '.format(labels_index[cnn_pred[bad_index][0]], \r\n labels_index[y_eval[bad_index]]), fontsize = 18)\r\nplt.savefig('results/mis_classified.png', dpi=400)\r\n \r\n# lime explanation of correctly misclassified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(misclassified_indices)\r\nfor plot_index, bad_index in enumerate(misclassified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n explainer = lime_image.LimeImageExplainer()\r\n explanation = explainer.explain_instance(X_eval[bad_index], cnn_classifier.predict_classes, top_labels=2, hide_color=0, num_samples=1000)\r\n temp, mask = 
explanation.get_image_and_mask(0, positive_only=False, num_features=1000, hide_rest=False)\r\n #temp, mask = explanation.get_image_and_mask(0, positive_only=True, num_features=1000, hide_rest=True)\r\n x = mark_boundaries(temp / 2 + 0.5, mask)\r\n plt.imshow(x, interpolation='none')\r\n plt.title(' Predicted: {}, Actual: {} '.format(labels_index[cnn_pred[bad_index][0]], \r\n labels_index[y_eval[bad_index]]), fontsize = 18)\r\nplt.savefig('results/lime_mis_classified.png', dpi=400)\r\n\r\n \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.378145694732666,
"alphanum_fraction": 0.564238429069519,
"avg_line_length": 36.67499923706055,
"blob_id": "d5ac2a6a1254f7aafe1cdabb7f47252701b8074c",
"content_id": "06e084de899c81a2da8e6bf78e8d4342d540bda0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1510,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 40,
"path": "/bcb-src/gen_results/plot_progressive_resizing.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Apr 26 12:35:55 2019\n\n@author: mohebbi\n\"\"\"\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nif __name__ == \"__main__\":\n \n epochs = [i for i in range(0,525,25)]\n # validation accuracy of progressive models\n acc_1 = [0.44, 0.46, 0.48, 0.48, 0.5, 0.49, 0.51, 0.53, 0.56, 0.54, 0.55, 0.58, 0.58, 0.59, 0.58, 0.63, 0.64, 0.69, 0.70, 0.72, 0.72] # 12 x 12 images\n acc_2 = [0.42, 0.54, 0.56, 0.58, 0.57, 0.56, 0.56, 0.53, 0.59, 0.64, 0.65, 0.66, 0.65, 0.71, 0.74, 0.73, 0.77, 0.80, 0.86, 0.85, 0.846] # 24 x 24 images\n acc_3 = [0.51, 0.55, 0.56, 0.59, 0.60, 0.58, 0.61, 0.63, 0.62, 0.66, 0.68, 0.70, 0.71, 0.73, 0.74, 0.78, 0.79, 0.84, 0.85, 0.89, 0.892] # 48 x 48 images\n acc_list = [acc_1, acc_2, acc_3]\n\t\n progressive_resizing = ['12 x 12 ','24 x 24 ','48 x 48 ']\n color_sequence = ['#ff0000','#0000ff','#006400']\n fig, ax = plt.subplots(1, 1, figsize=(13, 9))\n ax.get_xaxis().tick_bottom()\n ax.get_yaxis().tick_left()\n ax.set_xlim(0, 500)\n\t\n plt.plot(epochs, acc_1, 'r--') \n plt.plot(epochs, acc_2, 'bs')\n plt.plot(epochs, acc_2, 'b') \n plt.plot(epochs, acc_3, 'g+') \n plt.plot(epochs, acc_3, 'g')\n\t\n for i, step in enumerate(progressive_resizing):\n y_pos = acc_list[i][-1] - 0.005\n plt.text(505, y_pos, step, fontsize=14, color=color_sequence[i])\n\t\t\n #plt.show()\n plt.savefig('progressive_resizing_val_acc.png', bbox_inches='tight', dpi=400)\n\t\n\t"
},
{
"alpha_fraction": 0.6424371004104614,
"alphanum_fraction": 0.6565514206886292,
"avg_line_length": 32.74603271484375,
"blob_id": "fe49d2e8bff518d5e6c43e744c74cda017ed10d0",
"content_id": "f36dbd51910c1b151e7abee842283afb77283bab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4251,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 126,
"path": "/bcb-src/lime/utils.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Sat Aug 25 13:08:44 2018\n\n@author: mohebbi\n\"\"\"\nimport cv2\nimport os\nimport glob\nimport shutil\nfrom random import sample, randint, shuffle\nimport numpy as np\nimport pandas as pd\nfrom sklearn import metrics\nfrom PIL import Image\n\n# plots\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img\n\n\n\ndef preprocess_images(filename, dest_folder, img_dimensions=(64, 64)):\n # function to pre-process images and create training and test sets. \n src = os.path.join('crater_data', 'images', filename)\n dst = os.path.join('crater_data', dest_folder)\n tgt_height, tgt_width = img_dimensions\n\n # create new directories if necessary\n for imgtype in ['crater', 'non-crater']:\n tgdir = os.path.join(dst, imgtype)\n if not os.path.isdir(tgdir):\n os.makedirs(tgdir)\n\n for src_filename in glob.glob(os.path.join(src, '*', '*.jpg')):\n #print(src_filename)\n pathinfo = src_filename.split(os.path.sep)\n img_type = pathinfo[-2] # crater or non-crater\n filename = pathinfo[-1] # the actual name of the jpg\n\n dst_filename = os.path.join(dst, img_type, filename)\n\n # read the original image and get size info\n src_img = cv2.imread(src_filename)\n\n # resize image, normalize and write to disk\n scaled_img = cv2.resize(src_img, (tgt_height, tgt_width))\n cv2.normalize(scaled_img, scaled_img, 0, 255, cv2.NORM_MINMAX)\n cv2.imwrite(dst_filename, scaled_img)\n \n print(dest_folder + \" Done!\")\n \ndef move_random_files(path_from, path_to, n):\n # function for moving random files from one directory to another (used for creating train and test set)\n files = os.listdir(path_from)\n files.sort()\n files = files[1:] #omiting .DS_Store\n\n for i in sample(range(0, len(files)-1), n):\n f = files[i]\n src = path_from + f\n dst = path_to + f\n shutil.move(src, dst)\n \ndef preview_random_image(path):\n # 
function for previewing a random image from a given directory\n files = os.listdir(path)\n files.sort()\n img_name = files[randint(1, len(files) - 1)]\n img_preview_name = path + img_name\n image = Image.open(img_preview_name)\n plt.imshow(image)\n plt.title(img_name)\n plt.show()\n width, height = image.size\n print (\"Dimensions:\", image.size, \"Total pixels:\", width * height)\n \ndef pretty_cm(y_pred, y_truth, labels, save_path):\n # pretty implementation of a confusion matrix\n cm = metrics.confusion_matrix(y_truth, y_pred)\n ax= plt.subplot()\n sns.heatmap(cm, annot=True, fmt=\"d\", linewidths=.5, square = True, cmap = 'BuGn_r')\n # labels, title and ticks\n ax.set_xlabel('Predicted label')\n ax.set_ylabel('Actual label')\n ax.set_title('Accuracy: {0}'.format(metrics.accuracy_score(y_truth, y_pred)), size = 15) \n ax.xaxis.set_ticklabels(labels)\n ax.yaxis.set_ticklabels(labels)\n plt.savefig(save_path, dpi=400)\n \n \ndef img_to_1d_greyscale(img_path, size):\n # function for loading, resizing and converting an image into greyscale\n # used for logistic regression\n img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)\n img = cv2.resize(img, size)\n return(pd.Series(img.flatten()))\n\ndef show_image(image):\n # function for viewing an image\n fig = plt.figure(figsize = (5, 25))\n ax = fig.add_subplot(111)\n ax.imshow(image, interpolation='none')\n plt.show()\n\ndef transform_image(path, size):\n # function for transforming images into a format supported by CNN\n x = load_img(path, target_size=(size[0], size[1]))\n x = img_to_array(x) / 255\n x = np.expand_dims(x, axis=0)\n return (x)\n \ndef evaluation_indices(y_pred, y_test):\n # function for getting correctly and incorrectly classified indices\n index = 0\n correctly_classified_indices = []\n misclassified_indices = []\n for label, predict in zip(y_test, y_pred):\n if label != predict: \n misclassified_indices.append(index)\n else:\n correctly_classified_indices.append(index)\n index +=1\n return 
(correctly_classified_indices, misclassified_indices)"
},
{
"alpha_fraction": 0.6252154111862183,
"alphanum_fraction": 0.647074818611145,
"avg_line_length": 36.40766525268555,
"blob_id": "8810d6106227bdd9bde8f748c6dd7d154671313e",
"content_id": "944d5f0303dc5f5346058c09e082c9ddca70bcd5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11025,
"license_type": "no_license",
"max_line_length": 152,
"num_lines": 287,
"path": "/bcb-src/lime/lime_exp_reg.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "# Loading libraries ----\r\n\r\n# misc\r\nimport os\r\nimport glob\r\nimport shutil\r\nfrom random import sample, randint, shuffle\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom skimage.segmentation import mark_boundaries\r\n\r\n# sci-kit learn\r\nfrom sklearn import metrics\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.linear_model import LogisticRegression\r\n\r\n# plots\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\n\r\n# image operation\r\nimport cv2\r\nfrom PIL import Image\r\n\r\n# keras \r\nimport keras\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout\r\nfrom keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img\r\nfrom keras.applications import inception_v3 as inc_net\r\n\r\n# lime\r\nimport lime\r\nfrom lime import lime_image\r\n\r\n#def create_necc_dirc()\r\n # this function remove old training and test sets\r\n # create new directories if necessary\r\n \r\n \r\ndef preprocess_images(filename, dest_folder, img_dimensions=(64, 64)):\r\n # function to pre-process images and create training and test sets. 
\r\n src = os.path.join('crater_data', 'images', filename)\r\n dst = os.path.join('crater_data', dest_folder)\r\n tgt_height, tgt_width = img_dimensions\r\n\r\n # create new directories if necessary\r\n for imgtype in ['crater', 'non-crater']:\r\n tgdir = os.path.join(dst, imgtype)\r\n if not os.path.isdir(tgdir):\r\n os.makedirs(tgdir)\r\n\r\n for src_filename in glob.glob(os.path.join(src, '*', '*.jpg')):\r\n #print(src_filename)\r\n pathinfo = src_filename.split(os.path.sep)\r\n img_type = pathinfo[-2] # crater or non-crater\r\n filename = pathinfo[-1] # the actual name of the jpg\r\n\r\n dst_filename = os.path.join(dst, img_type, filename)\r\n\r\n # read the original image and get size info\r\n src_img = cv2.imread(src_filename)\r\n\r\n # resize image, normalize and write to disk\r\n scaled_img = cv2.resize(src_img, (tgt_height, tgt_width))\r\n cv2.normalize(scaled_img, scaled_img, 0, 255, cv2.NORM_MINMAX)\r\n cv2.imwrite(dst_filename, scaled_img)\r\n \r\n print(dest_folder + \" Done!\")\r\n \r\ndef move_random_files(path_from, path_to, n):\r\n # function for moving random files from one directory to another (used for creating train and test set)\r\n files = os.listdir(path_from)\r\n files.sort()\r\n files = files[1:] #omiting .DS_Store\r\n\r\n for i in sample(range(0, len(files)-1), n):\r\n f = files[i]\r\n src = path_from + f\r\n dst = path_to + f\r\n shutil.move(src, dst)\r\n \r\ndef preview_random_image(path):\r\n # function for previewing a random image from a given directory\r\n files = os.listdir(path)\r\n files.sort()\r\n img_name = files[randint(1, len(files) - 1)]\r\n img_preview_name = path + img_name\r\n image = Image.open(img_preview_name)\r\n plt.imshow(image)\r\n plt.title(img_name)\r\n plt.show()\r\n width, height = image.size\r\n print (\"Dimensions:\", image.size, \"Total pixels:\", width * height)\r\n \r\ndef pretty_cm(y_pred, y_truth, labels):\r\n # pretty implementation of a confusion matrix\r\n cm = metrics.confusion_matrix(y_truth, 
y_pred)\r\n ax= plt.subplot()\r\n sns.heatmap(cm, annot=True, fmt=\"d\", linewidths=.5, square = True, cmap = 'BuGn_r')\r\n # labels, title and ticks\r\n ax.set_xlabel('Predicted label')\r\n ax.set_ylabel('Actual label')\r\n ax.set_title('Accuracy: {0}'.format(metrics.accuracy_score(y_truth, y_pred)), size = 15) \r\n ax.xaxis.set_ticklabels(labels)\r\n ax.yaxis.set_ticklabels(labels)\r\n plt.savefig('results/regression/confusion_matrix.png', dpi=400)\r\n \r\n \r\ndef img_to_1d_greyscale(img_path, size):\r\n # function for loading, resizing and converting an image into greyscale\r\n # used for logistic regression\r\n img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)\r\n img = cv2.resize(img, size)\r\n return(pd.Series(img.flatten()))\r\n\r\ndef show_image(image):\r\n # function for viewing an image\r\n fig = plt.figure(figsize = (5, 25))\r\n ax = fig.add_subplot(111)\r\n ax.imshow(image, interpolation='none')\r\n plt.show()\r\n\r\ndef transform_image(path, size):\r\n # function for transforming images into a format supported by CNN\r\n x = load_img(path, target_size=(size[0], size[1]))\r\n x = img_to_array(x) / 255\r\n x = np.expand_dims(x, axis=0)\r\n return (x)\r\n \r\ndef evaluation_indices(y_pred, y_test):\r\n # function for getting correctly and incorrectly classified indices\r\n index = 0\r\n correctly_classified_indices = []\r\n misclassified_indices = []\r\n for label, predict in zip(y_test, y_pred):\r\n if label != predict: \r\n misclassified_indices.append(index)\r\n else:\r\n correctly_classified_indices.append(index)\r\n index +=1\r\n return (correctly_classified_indices, misclassified_indices)\r\n \r\n# creating training and test sets \r\n# put crater_date folder on root directory too. 
\r\n# create the training_set and test_set directories inside crater_data and put west region on training_set and other regions on test_set on each running \r\n# crater_data/training_set/crater/\r\n# crater_data/training_set/non-crater/\r\n\r\n# west region as training_set\r\npreprocess_images('tile2_24', 'training_set', img_dimensions=(64, 64))\r\npreprocess_images('tile2_25', 'training_set', img_dimensions=(64, 64))\r\n# east region as test_set\r\npreprocess_images('tile1_24', 'test_set', img_dimensions=(64, 64))\r\npreprocess_images('tile1_25', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile2_24', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile2_25', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile3_24', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile3_25', 'test_set', img_dimensions=(64, 64))\r\n\r\nsize = (64, 64)\r\nlabels_index = { 0 : \"non-crater\", 1 : \"crater\" }\r\n\r\n# defining empty containers\r\nX_train = pd.DataFrame(np.zeros((8000, size[0] * size[1])))\r\nX_test = pd.DataFrame(np.zeros((2000, size[0] * size[1])))\r\ny_train = list()\r\ny_test = list()\r\n\r\ncounter_train = 0\r\ncounter_test = 0\r\n\r\n# training set ----\r\n\r\nfiles = os.listdir('./crater_data/training_set/crater')\r\nfiles.sort()\r\n\r\nfor i in range(1, len(files)):\r\n X_train.iloc[counter_train, :] = img_to_1d_greyscale('./crater_data/training_set/crater/' + files[i], size) / 255\r\n y_train.append(1)\r\n counter_train += 1\r\n \r\nfiles = os.listdir('./crater_data/training_set/non-crater')\r\nfiles.sort()\r\n\r\nfor i in range(1, len(files)):\r\n X_train.iloc[counter_train, :] = img_to_1d_greyscale('crater_data/training_set/non-crater/' + files[i], size) / 255\r\n y_train.append(0)\r\n counter_train += 1\r\n \r\n# training set ----\r\n\r\nfiles = os.listdir('./crater_data/test_set/crater')\r\nfiles.sort()\r\n\r\nfor i in range(1, len(files)):\r\n X_test.iloc[counter_test, :] = 
img_to_1d_greyscale('crater_data/test_set/crater/' + files[i], size) / 255\r\n    y_test.append(1)\r\n    counter_test += 1\r\n    \r\nfiles = os.listdir('./crater_data/test_set/non-crater')\r\nfiles.sort()\r\n\r\nfor i in range(1, len(files)):\r\n    X_test.iloc[counter_test, :] = img_to_1d_greyscale('crater_data/test_set/non-crater/' + files[i], size) / 255\r\n    y_test.append(0)\r\n    counter_test += 1\r\n    \r\n# preparing data for predictions\r\n\r\nsize = (64, 64)\r\nX_eval = list()\r\ny_eval = list()\r\n\r\n# crater part\r\nfiles = os.listdir('./crater_data/test_set/crater')\r\nfiles.sort()\r\n\r\nfor i in range(0, len(files) - 1):\r\n    X_eval.append(transform_image('./crater_data/test_set/crater/' + files[i + 1], size))\r\n    y_eval.append(1)\r\n\r\n# non-crater part\r\nfiles = os.listdir('./crater_data/test_set/non-crater')\r\nfiles.sort()\r\n\r\nfor i in range(0, len(files) - 1):\r\n    X_eval.append(transform_image('./crater_data/test_set/non-crater/' + files[i + 1], size))\r\n    y_eval.append(0)\r\n\r\n# stacking the arrays \r\nX_eval = np.vstack(X_eval)\r\n \r\nlogreg_classifier = LogisticRegression(solver = 'lbfgs')\r\n\r\nlogreg_classifier.fit(X_train, y_train)\r\n\r\nlogreg_pred = logreg_classifier.predict(X_test)\r\n\r\npretty_cm(logreg_pred, y_test, ['crater', 'non-crater'])\r\n\r\ncorrectly_classified_indices, misclassified_indices = evaluation_indices(logreg_pred, y_test)\r\n\r\n# correctly classified images\r\nplt.figure(figsize=(25,5))\r\nfor plot_index, good_index in enumerate(correctly_classified_indices[0:5]):\r\n    plt.subplot(1, 5, plot_index + 1)\r\n    plt.imshow(np.reshape(X_test.iloc[good_index, :].values, size))\r\n    plt.title('Predicted: {}, Actual: {}'.format(labels_index[logreg_pred[good_index]], \r\n                                                 labels_index[y_test[good_index]]), fontsize = 15)\r\nplt.savefig('results/regression/correctly_classified.png', dpi=400)\r\n    \r\n# lime explanation of correctly classified images\r\nplt.figure(figsize=(25,5))\r\nshuffle(correctly_classified_indices)\r\nfor plot_index, good_index in enumerate(correctly_classified_indices[0:5]):\r\n    
plt.subplot(1, 5, plot_index + 1)\r\n    explainer = lime_image.LimeImageExplainer()\r\n    explanation = explainer.explain_instance(X_eval[good_index], logreg_classifier.predict, top_labels=2, hide_color=0, num_samples=1000)\r\n    temp, mask = explanation.get_image_and_mask(0, positive_only=False, num_features=10, hide_rest=False)\r\n    x = mark_boundaries(temp / 2 + 0.5, mask)\r\n    plt.imshow(x, interpolation='none')\r\n    plt.title('Predicted: {}, Actual: {}'.format(labels_index[logreg_pred[good_index]], \r\n              labels_index[y_eval[good_index]]), fontsize = 15)\r\nplt.savefig('results/regression/lime_correctly_classified.png', dpi=400)\r\n    \r\n# misclassified images\r\nplt.figure(figsize=(25,5))\r\nshuffle(misclassified_indices)\r\nfor plot_index, bad_index in enumerate(misclassified_indices[0:5]):\r\n    plt.subplot(1, 5, plot_index + 1)\r\n    plt.imshow(X_eval[bad_index])\r\n    plt.title('Predicted: {}, Actual: {}'.format(labels_index[logreg_pred[bad_index]], \r\n              labels_index[y_eval[bad_index]]), fontsize = 15)\r\nplt.savefig('results/regression/mis_classified.png', dpi=400)\r\n    \r\n# lime explanation of misclassified images\r\nplt.figure(figsize=(25,5))\r\nshuffle(misclassified_indices)\r\nfor plot_index, bad_index in enumerate(misclassified_indices[0:5]):\r\n    plt.subplot(1, 5, plot_index + 1)\r\n    explainer = lime_image.LimeImageExplainer()\r\n    explanation = explainer.explain_instance(X_eval[bad_index], logreg_classifier.predict, top_labels=2, hide_color=0, num_samples=1000)\r\n    temp, mask = explanation.get_image_and_mask(0, positive_only=False, num_features=10, hide_rest=False)\r\n    x = mark_boundaries(temp / 2 + 0.5, mask)\r\n    plt.imshow(x, interpolation='none')\r\n    plt.title('Predicted: {}, Actual: {}'.format(labels_index[logreg_pred[bad_index]], \r\n              labels_index[y_eval[bad_index]]), fontsize = 15)\r\nplt.savefig('results/regression/lime_mis_classified.png', dpi=400)\r\n\r\n"
},
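The record above flattens greyscale crater tiles into 1-D vectors and classifies them with scikit-learn's `LogisticRegression`. A minimal, self-contained sketch of that flatten-then-classify pipeline, with synthetic arrays standing in for the crater tiles and `img_to_1d` as an illustrative, OpenCV-free stand-in for `img_to_1d_greyscale`:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)

def img_to_1d(img):
    # flatten a 2-D greyscale array into a 1-D feature vector scaled to [0, 1]
    return img.astype(np.float64).ravel() / 255.0

# synthetic 64x64 "tiles": class 1 is brighter on average than class 0
shape = (64, 64)
craters = rng.integers(100, 256, size=(80, *shape))
non_craters = rng.integers(0, 156, size=(80, *shape))

X = np.stack([img_to_1d(img) for img in np.concatenate([craters, non_craters])])
y = np.array([1] * 80 + [0] * 80)

clf = LogisticRegression(solver="lbfgs", max_iter=1000)
clf.fit(X, y)
pred = clf.predict(X)
print(accuracy_score(y, pred))
print(confusion_matrix(y, pred))
```

Note that `clf.predict` returns a 1-D integer array, which is why its entries can be used directly as keys into a `labels_index`-style dict.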
{
"alpha_fraction": 0.6329591870307922,
"alphanum_fraction": 0.6593936681747437,
"avg_line_length": 39.5,
"blob_id": "d42a4e46fcc0b5cbe430c527722eec550ea659ca",
"content_id": "a4c659334760c04b2a1e4b9769206450d322939e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9533,
"license_type": "no_license",
"max_line_length": 152,
"num_lines": 228,
"path": "/bcb-src/lime/lime_exp_google.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "# Loading libraries ----\r\n\r\n# misc\r\nimport os\r\nimport glob\r\nimport shutil\r\nfrom random import sample, randint, shuffle\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom skimage.segmentation import mark_boundaries\r\n\r\n# sci-kit learn\r\nfrom sklearn import metrics\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.linear_model import LogisticRegression\r\n\r\n# plots\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\n\r\n# image operation\r\nimport cv2\r\nfrom PIL import Image\r\n\r\n# keras \r\nimport keras\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout\r\nfrom keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img\r\nfrom keras.applications import inception_v3 as inc_net\r\nfrom utils import preprocess_images, transform_image, pretty_cm, evaluation_indices\r\n\r\n# lime\r\nimport lime\r\nfrom lime import lime_image\r\n\r\nfrom keras.applications.inception_v3 import InceptionV3\r\nfrom keras.applications import ResNet50\r\nfrom keras.applications import VGG16\r\nfrom keras.preprocessing import image\r\nfrom keras.models import Model\r\nfrom keras.layers import Dense, GlobalAveragePooling2D\r\nfrom keras import backend as K\r\nfrom keras.applications import imagenet_utils\r\nfrom keras.applications.inception_v3 import preprocess_input\r\nfrom keras.preprocessing.image import ImageDataGenerator\r\nfrom keras.models import load_model\r\nfrom keras.optimizers import SGD\r\n\r\nimport argparse\r\nfrom skimage.segmentation import mark_boundaries\r\nfrom keras.applications.imagenet_utils import decode_predictions\r\n \r\nap = argparse.ArgumentParser()\r\nap.add_argument(\"-model\", \"--model\", type=str, default=\"vgg16\", help=\"name of pre-trained network to use\")\r\nap.add_argument(\"-path\", \"--path\", type=str , help=\"path to the model to load\")\r\nargs = vars(ap.parse_args())\r\n\r\nMODELS = 
{\r\n\t\"vgg16\": VGG16,\r\n\t\"inception\": InceptionV3,\r\n\t\"resnet\": ResNet50\r\n}\r\n \r\n# creating training and test sets \r\n# put crater_date folder on root directory too. \r\n# create the training_set and test_set directories inside crater_data and put west region on training_set and other regions on test_set on each running \r\n# crater_data/training_set/crater/\r\n# crater_data/training_set/non-crater/\r\n\r\n# west region as training_set\r\n#preprocess_images('tile2_24', 'training_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile2_25', 'training_set', img_dimensions=(64, 64))\r\n# east region as test_set\r\n#preprocess_images('tile1_24', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile1_25', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile2_24', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile2_25', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile3_24', 'test_set', img_dimensions=(64, 64))\r\n#preprocess_images('tile3_25', 'test_set', img_dimensions=(64, 64))\r\n\r\ninput_shape = (224, 224)\r\npreprocess = imagenet_utils.preprocess_input\r\n\r\nif args[\"model\"] in (\"inception\"):\r\n\tinput_shape = (299, 299)\r\n\tpreprocess = preprocess_input\r\n\r\ntrain_datagen = ImageDataGenerator(rescale = 1./255,\r\n shear_range = 0.2,\r\n zoom_range = 0.2,\r\n horizontal_flip = False)\r\n\r\ntest_datagen = ImageDataGenerator(rescale = 1./255)\r\n\r\ntraining_set = train_datagen.flow_from_directory('./crater_data/training_set',\r\n target_size = (64, 64),\r\n batch_size = 32,\r\n class_mode = 'binary')\r\n\r\ntest_set = test_datagen.flow_from_directory('./crater_data/test_set',\r\n target_size = (64, 64),\r\n batch_size = 32,\r\n class_mode = 'binary')\r\n\r\nprint(\"[INFO] loading {}...\".format(args[\"path\"]))\r\nNetwork = MODELS[args[\"model\"]]\r\nbase_model = Network(weights=\"imagenet\", include_top=False)\r\n\r\n# add a global spatial average pooling layer\r\nx = 
base_model.output\r\nx = GlobalAveragePooling2D()(x)\r\n\r\n# let's add a fully-connected layer\r\nx = Dense(512, activation='relu')(x)\r\n\r\n# and a logistic layer -- we have 2 classes\r\npredictions = Dense(units = 1, activation = 'sigmoid')(x)\r\n\r\n# this is the model we will train\r\nmodel = Model(inputs=base_model.input, outputs=predictions)\r\n\r\n# first: train only the top layers (which were randomly initialized)\r\n# i.e. freeze all convolutional layers\r\nfor layer in base_model.layers:\r\n layer.trainable = False\r\n\r\n# compile the model (should be done *after* setting layers to non-trainable)\r\nmodel.compile(optimizer=SGD(lr=0.001, momentum=0.9), loss='binary_crossentropy',metrics = ['accuracy'])\r\nmodel.fit_generator(training_set, steps_per_epoch=400, epochs=3)\r\n\r\nfor i, layer in enumerate(base_model.layers):\r\n print(i, layer.name)\r\n\r\nfor layer in model.layers:\r\n layer.trainable = True\r\n\r\n# we need to recompile the model for these modifications to take effect\r\n# we use SGD with a low learning rate\r\nmodel.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='binary_crossentropy',metrics = ['accuracy'])\r\n\r\n# we train our model again (this time fine-tuning all layers\r\nmodel.fit_generator(training_set, steps_per_epoch=400, epochs=20)\r\n\r\nmodel.save('deep_models/inception_west_train.h5')\r\n\r\nmodel = load_model('deep_models/inception_west_train.h5')\r\n\r\n#preparing data for predictions\r\nlabels_index = { 0 : \"non-crater\", 1 : \"crater\" }\r\nX_eval = list()\r\ny_eval = list()\r\n\r\n# crater part\r\nfiles = os.listdir('./crater_data/test_set/crater')\r\nfiles.sort()\r\n\r\nfor i in range(0, len(files) - 1):\r\n X_eval.append(transform_image('./crater_data/test_set/crater/' + files[i + 1], input_shape))\r\n y_eval.append(1)\r\n\r\n# non-crater part\r\nfiles = os.listdir('./crater_data/test_set/non-crater')\r\nfiles.sort()\r\n\r\nfor i in range(0, len(files) - 1):\r\n 
X_eval.append(transform_image('./crater_data/test_set/non-crater/' + files[i + 1], input_shape))\r\n y_eval.append(0)\r\n\r\n# stacking the arrays \r\nX_eval = np.vstack(X_eval)\r\n\r\npreds = model.predict(X_eval, verbose=1)\r\n\r\nprint(\"evaluate the results using lime\")\r\npretty_cm(preds, y_eval, ['crater', 'non-crater'], 'results/googlenet/confusion_matrix.png')\r\n\r\ncorrectly_classified_indices, misclassified_indices = evaluation_indices(preds, y_eval)\r\n\r\n# correctly classified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(correctly_classified_indices)\r\nfor plot_index, good_index in enumerate(correctly_classified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n plt.imshow(X_eval[good_index])\r\n plt.title(' Predicted: {}, Actual: {} '.format(labels_index[preds[good_index][0]], \r\n labels_index[y_eval[good_index]]), fontsize = 18) \r\nplt.savefig('results/googlenet/correctly_classified.png', dpi=400)\r\n \r\n# lime explanation of correctly classified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(correctly_classified_indices)\r\nfor plot_index, good_index in enumerate(correctly_classified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n explainer = lime_image.LimeImageExplainer()\r\n explanation = explainer.explain_instance(X_eval[good_index], model.predict, top_labels=2, hide_color=0, num_samples=1000)\r\n temp, mask = explanation.get_image_and_mask(0, positive_only=False, num_features=1000, hide_rest=False)\r\n #temp, mask = explanation.get_image_and_mask(0, positive_only=True, num_features=1000, hide_rest=True)\r\n x = mark_boundaries(temp / 2 + 0.5, mask)\r\n plt.imshow(x, interpolation='none')\r\n plt.title(' Predicted: {}, Actual: {} '.format(labels_index[preds[good_index][0]], \r\n labels_index[y_eval[good_index]]), fontsize = 18)\r\nplt.savefig('results/googlenet/lime_correctly_classified.png', dpi=400)\r\n \r\n# misclassified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(misclassified_indices)\r\nfor plot_index, 
bad_index in enumerate(misclassified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n plt.imshow(X_eval[bad_index])\r\n plt.title(' Predicted: {}, Actual: {} '.format(labels_index[preds[bad_index][0]], \r\n labels_index[y_eval[bad_index]]), fontsize = 18)\r\nplt.savefig('results/googlenet/mis_classified.png', dpi=400)\r\n \r\n# lime explanation of correctly misclassified images\r\nplt.figure(figsize=(50,10))\r\n#shuffle(misclassified_indices)\r\nfor plot_index, bad_index in enumerate(misclassified_indices[0:5]):\r\n plt.subplot(1, 5, plot_index + 1)\r\n explainer = lime_image.LimeImageExplainer()\r\n explanation = explainer.explain_instance(X_eval[bad_index], model.predict, top_labels=2, hide_color=0, num_samples=1000)\r\n temp, mask = explanation.get_image_and_mask(0, positive_only=False, num_features=1000, hide_rest=False)\r\n #temp, mask = explanation.get_image_and_mask(0, positive_only=True, num_features=1000, hide_rest=True)\r\n x = mark_boundaries(temp / 2 + 0.5, mask)\r\n plt.imshow(x, interpolation='none')\r\n plt.title(' Predicted: {}, Actual: {} '.format(labels_index[preds[bad_index][0]], \r\n labels_index[y_eval[bad_index]]), fontsize = 18)\r\nplt.savefig('results/googlenet/lime_mis_classified.png', dpi=400)\r\n\r\n \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
},
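One caveat in the record above: with a 1-unit sigmoid head, `model.predict` returns floats in [0, 1], so indexing `labels_index[preds[i][0]]` with the raw value only works when the output happens to be exactly 0.0 or 1.0. A small sketch (the probability values are made up) of the thresholding step that maps sigmoid outputs to the integer keys of `labels_index`:

```python
import numpy as np

labels_index = {0: "non-crater", 1: "crater"}

# hypothetical raw sigmoid outputs, shaped (n, 1) as Keras predict would return them
preds = np.array([[0.03], [0.91], [0.48], [0.77]])

# threshold at 0.5 to obtain integer class ids usable as labels_index keys
pred_labels = (preds.ravel() >= 0.5).astype(int)

print([labels_index[k] for k in pred_labels])
# → ['non-crater', 'crater', 'non-crater', 'crater']
```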
{
"alpha_fraction": 0.5771276354789734,
"alphanum_fraction": 0.6149527430534363,
"avg_line_length": 39.2976188659668,
"blob_id": "75bdb7cac1e8fd8b16896453092d39c2b27a11b3",
"content_id": "3d992a78ff44a1d2f9eb25c33c197a3b3d68a17d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3384,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 84,
"path": "/bcb-src/nn_cnn_src/crater_preprocessing.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2\nimport glob\nimport os\nimport imutils\n\ndef preprocess(tile_img, dataset_type, img_dimensions=(50, 50)):\n src = os.path.join('crater_data', 'samples', dataset_type, tile_img)\n dst = os.path.join('crater_data', 'samples', dataset_type, 'normalized_images')\n tgt_height, tgt_width = img_dimensions\n\n # create new directories if necessary\n for imgtype in ['crater', 'non-crater']:\n tgdir = os.path.join(dst, imgtype)\n if not os.path.isdir(tgdir):\n os.makedirs(tgdir)\n \n for src_filename in glob.glob(os.path.join(src, '*', '*.png')):\n\n pathinfo = src_filename.split(os.path.sep)\n img_type = pathinfo[-2] # crater or non-crater\n filename = pathinfo[-1] # the actual name of the jpg\n\n dst_filename = os.path.join(dst, img_type, filename)\n\n # read the original image and get size info\n src_img = cv2.imread(src_filename)\n\n # resize image, normalize and write to disk\n scaled_img = cv2.resize(src_img, (tgt_height, tgt_width))\n \n cv2.normalize(scaled_img, scaled_img, 0, 255, cv2.NORM_MINMAX)\n cv2.imwrite(dst_filename, scaled_img)\n \n print(tile_img + \" Done! 
\")\n\n# in our dataset the ratio of positive to negative samples are 1 to 4 \n# This script rotates only positive samples by 90, 180 and 270 degrees and save\n# the samples to ratio became 1 to 1.\ndef positive_rotation_preprocess(tileimg, img_dimensions=(50, 50)):\n src = os.path.join('crater_data', 'images', tileimg)\n dst = os.path.join('crater_data', 'images', 'normalized_images')\n tgt_height, tgt_width = img_dimensions\n\n # create new directories if necessary\n for imgtype in ['crater', 'non-crater']:\n tgdir = os.path.join(dst, imgtype)\n if not os.path.isdir(tgdir):\n os.makedirs(tgdir)\n\n for src_filename in glob.glob(os.path.join(src, '*', '*.png')):\n #print(src_filename)\n pathinfo = src_filename.split(os.path.sep)\n img_type = pathinfo[-2] # crater or non-crater\n filename = pathinfo[-1] # the actual name of the jpg\n\n # read the original image and get size info\n src_img = cv2.imread(src_filename)\n scaled_img = cv2.resize(src_img, (tgt_height, tgt_width))\n \n if img_type == 'crater':\n \n rotated90 = imutils.rotate_bound(scaled_img, 90)\n rotated180 = imutils.rotate_bound(scaled_img, 180)\n rotated270 = imutils.rotate_bound(scaled_img, 270)\n \n dst_filename90 = os.path.join(dst, img_type, '90_'+filename)\n dst_filename180 = os.path.join(dst, img_type, '180_'+filename)\n dst_filename270 = os.path.join(dst, img_type, '270_'+filename)\n \n cv2.normalize(rotated90, rotated90, 0, 255, cv2.NORM_MINMAX)\n cv2.normalize(rotated180, rotated180, 0, 255, cv2.NORM_MINMAX)\n cv2.normalize(rotated270, rotated270, 0, 255, cv2.NORM_MINMAX)\n \n cv2.imwrite(dst_filename90, rotated90)\n cv2.imwrite(dst_filename180, rotated180)\n cv2.imwrite(dst_filename270, rotated270)\n \n else:\n \n dst_filename = os.path.join(dst, img_type, filename)\n cv2.normalize(scaled_img, scaled_img, 0, 255, cv2.NORM_MINMAX)\n cv2.imwrite(dst_filename, scaled_img)\n \n print(\"Done!\")"
},
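For the fixed 90/180/270-degree augmentation in `positive_rotation_preprocess`, a dependency-free alternative sketch using `np.rot90` (note `imutils.rotate_bound` rotates clockwise while `np.rot90` rotates counter-clockwise; for this particular set of angles the resulting augmented set is the same):

```python
import numpy as np

def rotations_90_180_270(img):
    # for axis-aligned rotations of square tiles, np.rot90 is lossless and
    # needs none of the bounding-box handling that arbitrary-angle rotation does
    return [np.rot90(img, k) for k in (1, 2, 3)]

tile = np.arange(9).reshape(3, 3)
r90, r180, r270 = rotations_90_180_270(tile)
```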
{
"alpha_fraction": 0.5913233757019043,
"alphanum_fraction": 0.5956981182098389,
"avg_line_length": 34.610389709472656,
"blob_id": "52ca4fe6310e6827b4e9795f0b240c1681c24944",
"content_id": "1fece66f99d877b39c8ffca94457033640b7b660",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2743,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 77,
"path": "/bcb-src/nn_cnn_src/crater_data.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "from sklearn.model_selection import train_test_split\nimport numpy as np\n\nclass DataSet(object):\n def __init__(self, images, one_hot, build_images_col=False):\n self._images = images\n self._labels = one_hot\n self._cls = np.argmax(self.labels, axis=1)\n self._index_in_epoch = 0\n self._num_examples = len(images)\n self._epochs_completed = 0\n self._images_col = None\n if build_images_col:\n self._images_col = []\n for img in images:\n self._images_col.append(img.reshape(len(img), 1))\n \n @property\n def images(self):\n return self._images\n\n @property\n def images_col(self):\n return self._images_col\n\n @property\n def labels(self):\n return self._labels\n\n @property\n def cls(self):\n return self._cls\n\n @property\n def num_examples(self):\n return self._num_examples\n\n @property\n def epochs_completed(self):\n return self._epochs_completed\n \n def next_batch(self, batch_size, fake_data=False):\n \"\"\"Return the next `batch_size` examples from this data set.\"\"\"\n start = self._index_in_epoch\n self._index_in_epoch += batch_size\n if self._index_in_epoch > self._num_examples:\n # Finished epoch\n self._epochs_completed += 1\n # Shuffle the data\n perm = np.arange(self._num_examples)\n np.random.shuffle(perm)\n self._images = self._images[perm]\n self._labels = self._labels[perm]\n # Start next epoch\n start = 0\n self._index_in_epoch = batch_size\n assert batch_size <= self._num_examples\n end = self._index_in_epoch\n return self._images[start:end], self._labels[start:end]\n\nclass Data(object):\n def __init__(self, images, one_hot, random_state=42, build_images_col=False):\n # split data\n X_train, X_test, Y_train, Y_test = \\\n train_test_split(images, one_hot, test_size=0.3, random_state=random_state)\n X_validation, X_test, Y_validation, Y_test = \\\n train_test_split(X_test, Y_test, test_size=0.5, random_state=random_state)\n \n self.train = DataSet(X_train, Y_train, build_images_col=build_images_col)\n self.validation = 
DataSet(X_validation, Y_validation, build_images_col=build_images_col)\n self.test = DataSet(X_test, Y_test, build_images_col=build_images_col)\n\nclass KCV_Data(object):\n def __init__(self, X_train, X_test, Y_train, Y_test, build_images_col=False):\n \n self.train = DataSet(X_train, Y_train, build_images_col=build_images_col)\n self.test = DataSet(X_test, Y_test, build_images_col=build_images_col)\n\n"
},
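The epoch and shuffle bookkeeping in `DataSet.next_batch` above can be condensed into a few lines. A minimal re-statement (with slightly different boundary handling: it reshuffles before, rather than after, overrunning the epoch; `MiniBatcher` is an illustrative name, not part of the repo):

```python
import numpy as np

class MiniBatcher:
    """Serve fixed-size batches, reshuffling whenever an epoch boundary is crossed."""
    def __init__(self, images, labels, seed=0):
        self.images, self.labels = np.asarray(images), np.asarray(labels)
        self.n = len(self.images)
        self.pos = 0
        self.epochs_completed = 0
        self.rng = np.random.default_rng(seed)

    def next_batch(self, batch_size):
        assert batch_size <= self.n
        if self.pos + batch_size > self.n:
            # finished an epoch: shuffle images and labels with the same permutation
            self.epochs_completed += 1
            perm = self.rng.permutation(self.n)
            self.images, self.labels = self.images[perm], self.labels[perm]
            self.pos = 0
        start, self.pos = self.pos, self.pos + batch_size
        return self.images[start:self.pos], self.labels[start:self.pos]

b = MiniBatcher(np.arange(10).reshape(10, 1), np.arange(10))
x1, y1 = b.next_batch(4)
x2, y2 = b.next_batch(4)
x3, y3 = b.next_batch(4)   # crosses the epoch boundary, triggers a reshuffle
```

Shuffling images and labels with one shared permutation is the key invariant; shuffling them independently would silently corrupt the training signal.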
{
"alpha_fraction": 0.6219730973243713,
"alphanum_fraction": 0.6430492997169495,
"avg_line_length": 29.95833396911621,
"blob_id": "0a3b74b61bd23723ae4a1820902adee884c9dbc0",
"content_id": "29527013d2e4adda2fb08322a80ded84dbd664ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2230,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 72,
"path": "/bcb-src/nn_cnn_src/sliding_window_nn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\nfrom helper import sliding_window\nimport time\nimport os\nimport csv\nfrom crater_cnn import Network \nimport pickle\nimport Param\nimport argparse\n\nap = argparse.ArgumentParser()\nap.add_argument(\"-tileimg\", \"--tileimg\", type=str, default=\"tile3_25\", help=\"The name of tile image\")\nargs = vars(ap.parse_args())\n\nparam = Param.Param()\ncwd = os.getcwd()\n\n# setup NN\nnn = Network(img_shape=(50, 50, 1))\nnn.add_flat_layer()\nnn.add_fc_layer(size=50 * 50, use_relu=True)\nnn.add_fc_layer(size=16, use_relu=True)\nnn.add_fc_layer(size=2, use_relu=False)\nnn.finish_setup()\n# model.set_data(data)\n\n# restore previously trained CNN model\nnn_model_path = os.path.join(cwd, 'models/nn/crater_model_nn.ckpt')\nnn.restore(nn_model_path)\n\ntile_img = args[\"tileimg\"]\n\npath = os.path.join('crater_data', 'images')\nimg = cv.imread(os.path.join(path, tile_img +'.pgm'), 0)\nimg = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)/255.0\n\ncrater_list_nn = []\n\nwin_sizes = range(param.dmin, param.dmax, 5)\n# loop over the image pyramid\n\nfor winS in win_sizes:\n print(\"Resized shape: %d, Window size: %d\" % (img.shape[0], winS))\n\n # loop over the sliding window for each layer of the pyramid\n # this process takes about 7 hours. 
To do quick test, we may try stepSize\n # to be large (60) and see if code runs OK\n #for (x, y, window) in sliding_window(resized, stepSize=2, windowSize=(winS, winS)):\n for (x, y, window) in sliding_window(img, stepSize=2, windowSize=(winS, winS)):\n # since we do not have a classifier, we'll just draw the window\n crop_img =cv.resize(window, (50, 50))\n crop_img = crop_img.flatten()\n \n p_non, p_crater = nn.predict([crop_img])[0]\n #nn_p = nn.feedforward_flat(crop_img)[0,0]\n \n x_c = (x + 0.5 * winS) \n y_c = (y + 0.5 * winS) \n crater_r = winS/2\n \n if p_crater >= 0.75:\n crater_data = [x_c, y_c, crater_r, p_crater]\n crater_list_nn.append(crater_data)\n\n \ncnn_file = open(\"results/nn/\"+tile_img+\"_sw_nn.csv\",\"w\")\nwith cnn_file:\n writer = csv.writer(cnn_file, delimiter=',')\n writer.writerows(crater_list_nn)\ncnn_file.close()\n\nprint(\"NN detected \", len(crater_list_nn), \"craters\")\n\n"
},
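The `sliding_window` helper imported above is not included in this dump. A plausible implementation, inferred from how `(x, y, window)` is unpacked in the detection loop (the exact signature and edge handling are assumptions):

```python
import numpy as np

def sliding_window(image, step_size, window_size):
    # yield (x, y, window) for every full window that fits inside the image,
    # scanning left-to-right, top-to-bottom
    w, h = window_size
    for y in range(0, image.shape[0] - h + 1, step_size):
        for x in range(0, image.shape[1] - w + 1, step_size):
            yield x, y, image[y:y + h, x:x + w]

img = np.zeros((100, 100))
windows = list(sliding_window(img, step_size=25, window_size=(50, 50)))
print(len(windows))
# → 9  (3 positions per axis: 0, 25, 50)
```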
{
"alpha_fraction": 0.598100483417511,
"alphanum_fraction": 0.6137292385101318,
"avg_line_length": 47.15976333618164,
"blob_id": "b1081929c3e055f861e783fac413fb882082b19f",
"content_id": "6545229eca94d5a42a1d7006bbf614025ddeb9a5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8318,
"license_type": "no_license",
"max_line_length": 167,
"num_lines": 169,
"path": "/bcb-src/FCN-Segmentation/Gen_Mask.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import os\r\nimport math\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport matplotlib.image as mpimg\r\nimport seaborn as sns\r\nfrom keras.models import *\r\nfrom keras.layers import *\r\nfrom sklearn.utils import shuffle\r\nfrom keras import optimizers\r\nfrom keras.callbacks import ModelCheckpoint\r\nfrom Train_Seg_FCN import getImageArr, getSegmentationArr, give_color_to_seg_img, FCN8_custom, DSC, post_processing\r\nfrom helper import sliding_window\r\nimport PIL\r\nimport cv2 as cv\r\n\r\ncwd = os.getcwd()\r\n# Task: Draw the ground truth on output images of this script. Add these images to manuscript.\r\n\r\n\r\n# This script works on slices images that are generated from padded tile images. This script loads the trained FCN model and save the segmentation results for slices.\r\n# Then, build the results for tile images by combining the output slices. \r\n\r\ndef extract_data_adds(img_folder):\r\n\t# The function extracts the image addresses, and image files.\r\n img_files = os.listdir(img_folder) # list of images in subfolder\r\n img_adds = [] # a list of address of images\r\n \r\n img_folders = [f for f in img_files]\r\n \r\n if(len(img_folders) == 0):\r\n input('Warning: Train directory is empty')\r\n #j = 0 \r\n for i in range(len(img_files)): #append images and masks to the list of files\r\n img_adds += [os.path.join(img_folder,img_files[i])]\r\n \r\n return img_adds, img_files\r\n\r\ndef extract_data(X_add, input_width, input_height):\r\n # the function receives two list of image and mask addresses and return all the images and masks in two 3-D array; X, Y\r\n X = [] # list of images\r\n for img_add in X_add:\r\n if img_add.endswith('.jpg'): X.append( getImageArr(img_add , input_width , input_height))\r\n X = np.array(X) \r\n return X\r\n\t\r\n\r\ndef combine_mask_images(output_save_path, tile_img, mask_img_slices_dir, gt_num, windowSize=(224, 224), stepSize = 224):\r\n \r\n counter = 0 \r\n img_list_v = []\r\n \r\n for y in 
range(0, tile_img.shape[0], stepSize):\r\n img_list_h = []\r\n for x in range(0, tile_img.shape[1], stepSize):\r\n if y + windowSize[0] <= tile_img.shape[0] and x + windowSize[1] <= tile_img.shape[1] : \r\n \r\n dst_filename = os.path.join(mask_img_slices_dir, \"SL_\"+ gt_num + \"_\" + str(counter) + \"_x_\" + str(x) +\"_y_\" + str(y) + \".jpg\") \r\n img_list_h.append(PIL.Image.open(dst_filename))\r\n counter +=1\r\n \r\n if len(img_list_h) > 0 :\r\n \r\n row_img = np.hstack(np.asarray(i) for i in img_list_h) \r\n row_img = PIL.Image.fromarray( row_img)\r\n img_list_v.append(row_img)\r\n\r\n tile_img = np.vstack(np.asarray(i) for i in img_list_v)\r\n tile_img_pil = PIL.Image.fromarray( tile_img)\r\n #the size of this image is 1792 x 1792. \r\n tile_img_pil.save(output_save_path+'tile'+gt_num+'_1792.png')\r\n return tile_img_pil\r\n \r\n# this function combines 2d sub binary masks and make a big binary mask for tile image.\r\ndef combine_bin_masks(output_save_path, tile_img, mask_bin_slices_dir, gt_num, windowSize=(224, 224), stepSize = 224):\r\n \r\n counter = 0 \r\n mask_list_v = []\r\n \r\n for y in range(0, tile_img.shape[0], stepSize):\r\n mask_list_h = []\r\n for x in range(0, tile_img.shape[1], stepSize):\r\n if y + windowSize[0] <= tile_img.shape[0] and x + windowSize[1] <= tile_img.shape[1] : \r\n \r\n dst_filename = os.path.join(mask_bin_slices_dir, \"SL_\"+ gt_num + \"_\" + str(counter) + \"_x_\" + str(x) +\"_y_\" + str(y) + \".csv\") \r\n mask_list_h.append(np.loadtxt(dst_filename, dtype='int64'))\r\n counter +=1\r\n \r\n # combine matrices horizontally \r\n if len(mask_list_h) > 0 :\r\n \r\n row_mask = []\r\n row_mask = np.hstack(i for i in mask_list_h) \r\n mask_list_v.append(row_mask)\r\n \r\n tile_mask = np.vstack(i for i in mask_list_v)\r\n #the size of this image is 1792 x 1792. 
We reduce its size to 1700 x 1700\r\n np.savetxt(output_save_path + 'tile'+gt_num+'.csv', tile_mask[:1700,:1700], fmt='%d')\r\n return tile_mask\r\n\r\nif __name__ == \"__main__\":\r\n # task: generate non-crater images masks too. (the same procedure to train the network on non-crater area'same\r\n\t\r\n dir_data = \"crater_data/slices/org_noverlap/\" # Directory where the data is saved\r\n dir_data_tiles = \"crater_data/tiles/\" # Directory of tile images\r\n sub_img_mask_save_dir = \"results/output_sub_img_masks_noverlap/\" # output directory for saving sub segmented images.\r\n sub_bin_mask_save_dir = \"results/output_sub_bin_masks_noverlap/\" # output directory for saving sub binary mask of segmented images\r\n tile_img_mask_save_dir = \"results/output_img_masks_noverlap/\" # output directory for saving tile segmented image\r\n tile_bin_mask_save_dir = \"results/output_bin_masks_noverlap/\" # output directory for saving tile binary mask\r\n\t\r\n n_classes= 2 # number of classes in the image, 2 as foreground and background\r\n input_height , input_width = 224 , 224\r\n \r\n model = FCN8_custom(n_classes, input_height, input_width)\r\n \r\n model.load_weights(\"models/seg-fcn-model.h5\")\r\n print(\"Model loaded .\")\r\n # go through all the tile folders\r\n gt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\r\n #gt_list = [\"1_24\"]\r\n \r\n for gt_num in gt_list:\r\n \r\n print(\"Working on tile: tile\" +gt_num )\r\n\r\n # list of small image masks\r\n img_mask_list = []\r\n\t\t\r\n tile_dir_data = dir_data + 'tile' + gt_num + '/'\r\n tile_img_refelected_path = dir_data_tiles + 'tile' + gt_num + '_reflected.png' # input: directory of reflected (padded) tile image\r\n tile_sub_img_mask_save_dir = sub_img_mask_save_dir + 'tile' + gt_num + '/'\r\n tile_sub_bin_mask_save_dir = sub_bin_mask_save_dir + 'tile' + gt_num + '/'\r\n # \r\n tile_img_refelected = cv.imread(tile_img_refelected_path)\r\n X_add, X_files = extract_data_adds(tile_dir_data) # 
Extract a list of addresses: image addresses.\r\n \r\n X = extract_data(X_add, input_width, input_height) # one arrays of images are extracted.\r\n \r\n y_pred = model.predict(X)\r\n y_predi = np.argmax(y_pred, axis=3) # predicted class number of every pixel in the image, shape = sample_no x height x width\r\n y_predi_post = post_processing(y_predi)\r\n count = 0 \r\n \r\n # the next for loop shows the testing image, ground truth and segmented area.\r\n for i in range(X.shape[0]):\r\n count += 1\r\n img_filename = X_files[i]\r\n binmask_filename = img_filename[:-3] + \"csv\"\r\n img_is = (X[i] + 1)*(255.0/2)\r\n seg = y_predi_post[i] # segmented image after post processing\r\n \r\n # save segmented image on output_sub_masks_noverlap folder. We need tiles folders inside this folder.\r\n mpimg.imsave(tile_sub_img_mask_save_dir + img_filename,give_color_to_seg_img(seg,n_classes) ,format='png', dpi=400)\r\n img_mask_list += [give_color_to_seg_img(seg,n_classes)]\r\n # save binary matrix segmented image on output_sub_matrix_noverlap folder. We need tiles folders inside this folder.\r\n # example of saving a numpy array: np.savetxt(\"foo.csv\", a, fmt='%d')\r\n # example of loading a numpy array: b = np.loadtxt(\"foo.csv\", dtype=int)\r\n np.savetxt(tile_sub_bin_mask_save_dir + binmask_filename, y_predi_post[i], fmt='%d')\r\n\t\t\t \r\n print(\"Save segmented image and binary masks for tile\" + str(gt_num) + \" are done. 
\" +str(count * 2) + \" files are saved in results directory.\")\r\n\t\r\n\t # convert the tiles segmented image and binary seg usbmatrixes into one big image and one big binary mask.\r\n tile_img = combine_mask_images(tile_img_mask_save_dir, tile_img_refelected, tile_sub_img_mask_save_dir, gt_num)\r\n print(\"The segmented output saved.\")\r\n\t # save the binary mask matrix into csv format.\r\n tile_binary_mask = combine_bin_masks(tile_bin_mask_save_dir, tile_img_refelected, tile_sub_bin_mask_save_dir, gt_num)\r\n print(\"The binary mask of segmented output saved.\")\r\n \r\n print(\"end of main\")\r\n \r\n "
},
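`combine_mask_images` and `combine_bin_masks` above stitch row-major slices back into a tile by hstack-ing each row of blocks and then vstack-ing the rows. The round trip can be checked on a small array with a hypothetical `combine_blocks` helper that mirrors that pattern:

```python
import numpy as np

def combine_blocks(blocks, cols):
    # stitch row-major sub-masks back into one mask:
    # hstack each row of blocks, then vstack the rows
    rows = [np.hstack(blocks[i:i + cols]) for i in range(0, len(blocks), cols)]
    return np.vstack(rows)

mask = np.arange(16).reshape(4, 4)
# slice into four 2x2 tiles in row-major order
tiles = [mask[y:y + 2, x:x + 2] for y in (0, 2) for x in (0, 2)]
restored = combine_blocks(tiles, cols=2)
```

The row-major slice order must match the reassembly order, which is why the scripts above encode `x` and `y` in each slice filename.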
{
"alpha_fraction": 0.6956786513328552,
"alphanum_fraction": 0.717589795589447,
"avg_line_length": 33.25,
"blob_id": "e8576efa4d9e1f248b159597ce7c4386d27862fa",
"content_id": "649a5151c2253733a1a857afbd79e836c88820d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1643,
"license_type": "no_license",
"max_line_length": 197,
"num_lines": 48,
"path": "/bcb-src/nn_cnn_src/remove_duplicates_banderia.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import calculateDistance, BIRCH_duplicate_removal,BIRCH2_duplicate_removal, Banderia_duplicate_removal, XMeans_duplicate_removal, draw_craters_rectangles, draw_craters_circles, evaluate\nfrom non_max_suppression import NMS\nimport Param\n\n# the raw data to process for duplicate removal\nparam = Param.Param()\nremoval_method = 'Banderia'\ncsv_path = 'results/cnn/west_train_west_test_1_25_cnn.csv'\ngt_csv_path = 'crater_data/gt/gt_tile1_25.csv'\nsave_path = 'results/cnn/evaluations/' + removal_method + '/west_train_west_test_1_25_cnn'\ntestset_name = 'tile1_25'\n\n# the image for drawing rectangles\nimg_path = os.path.join('crater_data', 'images', testset_name + '.pgm')\ngt_img = cv.imread(img_path)\n\ndata = pd.read_csv(csv_path, header=None)\ngt = pd.read_csv(gt_csv_path, header=None)\n\nthreshold = 0.75\n\nstart_time = time.time()\n\n# first pass, remove duplicates for points of same window size\ndf1 = {}\nmerge = pd.DataFrame()\nfor ws in data[2].unique():\n df1[ws] = data[ (data[3] > 0.75) & (data[2] == ws) ] # take only 75% or higher confidence\n merge = pd.concat([merge, df1[ws]])\n\nnodup = Banderia_duplicate_removal(merge)\n\n# save the no duplicate csv file\nnodup[[0,1,2]].to_csv(\"%s_noduplicates.csv\" % save_path, header=False, index=False)\ncraters = nodup[[0,1,2]]\n\n# evaluate with gt and draw it on final image.\ndr, fr, qr, bf, f_measure, tp, fp, fn = evaluate(craters, gt, gt_img, 64, True, save_path, param)\n\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))"
},
{
"alpha_fraction": 0.7072463631629944,
"alphanum_fraction": 0.7381642460823059,
"avg_line_length": 29.895523071289062,
"blob_id": "cfd2e5cef5da04405da41d4211f0321bf1a793ff",
"content_id": "58022830b54822b5116f438fd61aa1f28cbeebf9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2070,
"license_type": "no_license",
"max_line_length": 177,
"num_lines": 67,
"path": "/bcb-src/nn_cnn_src/train_cnn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import os\nfrom crater_cnn import Network\nfrom crater_plots import plot_image, plot_conv_weights, plot_conv_layer\nfrom crater_preprocessing import preprocess, positive_rotation_preprocess\n#from blob import Blob\ncwd = os.getcwd()\n\n#preprocess('tile1_24' ,'mask' , img_dimensions=(50, 50))\n#preprocess('tile1_25' ,'mask', img_dimensions=(50, 50))\n#preprocess('tile2_24' ,'mask', img_dimensions=(50, 50))\n#preprocess('tile2_25' ,'mask', img_dimensions=(50, 50))\n\nfrom crater_loader import load_crater_data\nfrom crater_data import Data\n\n# Load data\nimages, labels, hot_one = load_crater_data('mask')\ndata = Data(images, hot_one, random_state=42)\n\nprint(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))\n\nmodel = Network(img_shape=(50, 50, 1))\nmodel.add_convolutional_layer(5, 16)\nmodel.add_convolutional_layer(5, 36)\nmodel.add_flat_layer()\nmodel.add_fc_layer(size=64, use_relu=True)\nmodel.add_fc_layer(size=16, use_relu=True)\nmodel.add_fc_layer(size=2, use_relu=False)\nmodel.finish_setup()\nmodel.set_data(data)\n\nmodel_path = os.path.join(cwd, 'models', 'cnn', 'crater_model_cnn_mask.ckpt') # the models with _th indicate that they use positive samples extracted form theresholding images. 
\n#model.restore(model_path)\n\nmodel.print_test_accuracy()\n\nmodel.optimize(epochs=100)\n\nmodel.save(model_path)\n\nmodel.print_test_accuracy()\n\nmodel.print_test_accuracy(show_example_errors=True)\n\nmodel.print_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)\n\nimage1 = data.test.images[7]\nplot_image(image1)\n\nimage2 = data.test.images[14]\nplot_image(image2)\n\nweights = model.filters_weights\nplot_conv_weights(weights=weights[0])\nplot_conv_weights(weights=weights[1])\n\nvalues = model.get_filters_activations(image1)\nplot_conv_layer(values=values[0])\nplot_conv_layer(values=values[1])\n\nvalues = model.get_filters_activations(image2)\nplot_conv_layer(values=values[0])\nplot_conv_layer(values=values[1])\n"
},
{
"alpha_fraction": 0.7134062647819519,
"alphanum_fraction": 0.7250341773033142,
"avg_line_length": 35.57500076293945,
"blob_id": "e90530b5ac2f849f096b0eb24232f61b32b6021d",
"content_id": "6fcf02249a69f725bd43d3616d37416b9121b2c9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1462,
"license_type": "no_license",
"max_line_length": 197,
"num_lines": 40,
"path": "/bcb-src/nn_cnn_src/test_evaluate.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import calculateDistance, BIRCH_duplicate_removal,BIRCH2_duplicate_removal, Banderia_duplicate_removal, XMeans_duplicate_removal, draw_craters_rectangles, draw_craters_circles, evaluate\nimport Param\n\n# the raw data to process for duplicate removal\nparam = Param.Param()\n#removal_method = 'BIRCH'\nremoval_method = 'Banderia'\ntestset_name = 'tile2_25'\n\ngt_csv_path = 'crater_data/gt/gt_' + testset_name + '.csv'\ncsv_path = 'results/cnn/evaluations/' + removal_method + '/west_train_center_test_2_25_cnn_noduplicates.csv'\nsave_path = 'results/cnn/evaluations/' + removal_method + '/west_train_center_test_2_25_cnn'\n\n# the image for drawing rectangles\nimg_path = os.path.join('crater_data', 'images', testset_name + '.pgm')\ngt_img = cv.imread(img_path)\n\ncraters = pd.read_csv(csv_path, header=None)\ngt = pd.read_csv(gt_csv_path, header=None)\n\nstart_time = time.time()\n\n\n\n# evaluate with gt and draw it on final image.\ndr, fr, qr, bf, f_measure, tp, fp, fn = evaluate(craters, gt, gt_img, 64, True, save_path, param)\n\n#img = draw_craters_rectangles(img_path, merge, show_probs=False)\n#img = draw_craters_circles(img_path, merge, show_probs=False)\n#cv.imwrite(\"%s.jpg\" % (csv_path.split('.')[0]), img, [int(cv.IMWRITE_JPEG_QUALITY), 100])\n\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))"
},
{
"alpha_fraction": 0.5770321488380432,
"alphanum_fraction": 0.6020793914794922,
"avg_line_length": 33.129032135009766,
"blob_id": "1e9e5febfb90470a67f34ed7c310286e9c587973",
"content_id": "6c2271469cd251954071791830096c54ab817242",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2116,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 62,
"path": "/bcb-src/nn_cnn_src/visualize_rectangles.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\ncwd = os.getcwd()\n\nsample_name = 'tile3_24'\n\npath = os.path.join('crater_data', 'images')\ngt_path = os.path.join('crater_data', 'gt', 'gt_%s.csv' % sample_name)\nimg = cv.imread(os.path.join(path, '%s.pgm' % sample_name), 1)\nimg = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)\nimg2 = img.copy()\nimg3 = img.copy()\n\n\ncnn_data = pd.read_csv(\"crater_24_cnn.csv\", names = ['x_c', 'y_c', 'crater_size', 'p_crater', 'label'])\n\n#cnn_data[(cnn_data['p_crater']>0.99)&(cnn_data['crater_size']<50)].info()\n\nnn_data = pd.read_csv(\"crater_24_nn.csv\", names = ['x_c', 'y_c', 'crater_size', 'p_crater', 'label'])\n\ngt_data = pd.read_csv(gt_path, names = ['x_c', 'y_c', 'crater_size'])\n\nstart_time = time.time()\n\ncv.imwrite(\"%s_original.png\" % sample_name, img)\n\nfor index, row in cnn_data[(cnn_data['p_crater'] > 0.5)&(cnn_data['crater_size'] < 25)].iterrows():\n winS = int(row['crater_size'])\n half_winS = int(winS/2)\n x = int(row['x_c'] - half_winS)\n y = int(row['y_c'] - half_winS)\n # if we want to see where is processed.\n cv.rectangle(img, (x, y), (x + winS, y + winS), (0, 255, 0), 2)\n \ncv.imwrite(\"%s_cnn_detections.png\" % sample_name, img)\n\nfor index, row in nn_data[(nn_data['p_crater'] > 0.5)&(nn_data['crater_size'] < 25)].iterrows():\n winS = int(row['crater_size'])\n half_winS = int(winS/2)\n x = int(row['x_c'] - half_winS)\n y = int(row['y_c'] - half_winS)\n # if we want to see where is processed.\n cv.rectangle(img2, (x, y), (x + winS, y + winS), (0, 255, 0), 2)\n \ncv.imwrite(\"%s_nn_detections.png\" % sample_name, img2)\n\nfor index, row in gt_data.iterrows():\n half_winS = int(row['crater_size'])\n winS = int(half_winS*2)\n x = int(row['x_c'] - half_winS)\n y = int(row['y_c'] - half_winS)\n # if we want to see where is processed.\n cv.rectangle(img3, (x, y), (x + winS, y + winS), (255, 0, 0), 2)\n \ncv.imwrite(\"%s_gt.png\" % sample_name, 
img3)\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))\n"
},
{
"alpha_fraction": 0.5218267440795898,
"alphanum_fraction": 0.5607790350914001,
"avg_line_length": 31.799999237060547,
"blob_id": "526479c67c2aee670389e28540ecea2ae6fab997",
"content_id": "77a0bc8feda408bd3ff5662c837a42b33a6f9bb3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1489,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 45,
"path": "/bcb-src/gen_results/plot_gt.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Apr 19 16:15:33 2019\n\n@author: mohebbi\n\"\"\"\nimport os\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\n\nif __name__ == \"__main__\":\n\n gt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\n #gt_list = [\"1_24\", \"1_25\"]\n\n acc_r = []\n \n for gt_num in gt_list:\n \n print(str(gt_num) + \" elements added to r list\")\n gt_csv_path = os.path.join(\"crater_data\",\"gt\", gt_num + \"_gt.csv\")\n gt_data = pd.read_csv(gt_csv_path, header=None)\n \n x = gt_data.loc[:,0]\n y = gt_data.loc[:,1]\n r = gt_data.loc[:,2]\n print(str(len(r)) + \" elements added to r list\")\n for v in r.get_values():\n acc_r.append(v)\n # we use the kernel-density plot of x and y\n #ax_x_y = sns.kdeplot(x, y)\n #ax_x_y.figure.savefig(\"tile\" + gt_num + \"_gt_x_y_kdeplot.png\")\n \n # kernel-density plot of r\n ax_r = sns.kdeplot(r * 2, cut=0,bw=.25)\n ax_r.set_title(\"Kernel Density Plot of Ground Truth Image Sizes\")\n ax_r.set_xlabel(\"Image Size\")\n ax_r.set_ylabel(\"Kernel Density Estimate\")\n ax_r.set_xlim(0,200)\n #ax_r.set_ylim(0,0.2)\n ax_r.figure.savefig(\"all_tiles_r_kdeplot.png\", bbox_inches='tight', dpi=400)\n #ax_r.figure.savefig(\"tile\" + gt_num + \"_gt_r_kdeplot.png\", bbox_inches='tight', dpi=400)\n print(\"Saving the kernel density plot of all samples in ground truth to disk is done.\") \n \n"
},
{
"alpha_fraction": 0.5462998747825623,
"alphanum_fraction": 0.5757458209991455,
"avg_line_length": 36.82352828979492,
"blob_id": "4b9beae9d7dfa20bfc545732f9f9ae1c28e878e2",
"content_id": "a49ca97d2ec8c055b3094740a0c2bb29c2877ab9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2581,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 68,
"path": "/bcb-src/nn_cnn_src/crater_slice_window.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "from skimage.transform import pyramid_gaussian\nimport cv2 as cv\nfrom helper import sliding_window\nimport time\nimport os\nimport csv\nfrom crater_cnn import Network\nfrom crater_plots import plot_image, plot_conv_weights, plot_conv_layer\n\n# running sliding window on each tile. This code has low accuracy and it is old. slow...\n\n\ncwd = os.getcwd()\n\nmodel = Network(img_shape=(30, 30, 1))\nmodel.add_convolutional_layer(5, 16)\nmodel.add_convolutional_layer(5, 36)\nmodel.add_flat_layer()\nmodel.add_fc_layer(size=128, use_relu=True)\nmodel.add_fc_layer(size=2, use_relu=False)\nmodel.finish_setup()\n# model.set_data(data)\n\nmodel_path = os.path.join(cwd, 'model.ckpt')\nmodel.restore(model_path)\nprint(\"Model is loaded.\")\n\n# go through all the tile folders\ngt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\npath = 'crater_data/tiles/' # input path\noutputpath = 'results/cnn/'\n\nfor gt_num in gt_list:\n tilefn = 'tile' + gt_num\n img = cv.imread(path + tilefn +'.pgm', 0)\n img = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)/255.0\n \n crater_list = []\n\n win_sizes = range(20, 30, 2)\n # loop over the image pyramid\n for (i, resized) in enumerate(pyramid_gaussian(img, downscale=1.5)):\n if resized.shape[0] < 30:\n break\n for winS in win_sizes:\n # loop over the sliding window for each layer of the pyramid\n for (x, y, window) in sliding_window(resized, stepSize=60, windowSize=(winS, winS)):\n # since we do not have a classifier, we'll just draw the window\n clone = resized.copy()\n y_b = y + winS\n x_r = x + winS\n crop_img = clone[y:y_b, x:x_r]\n crop_img =cv.resize(crop_img, (30, 30))\n crop_img = crop_img.flatten()\n p_non, p_crater = model.predict([crop_img])[0]\n scale_factor = 1.5 ** i\n if p_crater >= 0.5:\n x_c = int((x + 0.5 * winS) * scale_factor)\n y_c = int((y + 0.5 * winS) * scale_factor)\n crater_size = int(winS * scale_factor)\n crater_data = [x_c, y_c, crater_size, p_crater, 1]\n 
crater_list.append(crater_data)\n # if we want to see where is processed.\n # cv.rectangle(clone, (x, y), (x + winS, y + winS), (0, 255, 0), 2)\n # cv.imshow(\"Window\", clone)\n # cv.waitKey(1)\n out = csv.writer(open(tilefn + \"_craters.csv\",\"w\"), delimiter=',',quoting=csv.QUOTE_ALL)\n out.writerow(crater_list)\n \n "
},
{
"alpha_fraction": 0.6975651383399963,
"alphanum_fraction": 0.7287483811378479,
"avg_line_length": 27.204818725585938,
"blob_id": "ee0991d08a4cec9eb2bbf1f347becee1fe0b2f50",
"content_id": "27943dfce4de76db8342777fbe26ffc1d125761a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2341,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 83,
"path": "/bcb-src/nn_cnn_src/load_nn_test.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import os\nfrom crater_cnn import Network\nfrom crater_plots import plot_image, plot_conv_weights, plot_conv_layer\nfrom crater_preprocessing import preprocess\ncwd = os.getcwd()\n\n# preprocess the west region images (tile1_24, tile1_25)\n#preprocess('tile3_24' , img_dimensions=(50, 50))\n#preprocess('tile3_25' , img_dimensions=(50, 50))\n\nfrom crater_loader import load_crater_data\nfrom crater_data import Data\n\n# Load dataxcv\nimages, labels, hot_one = load_crater_data()\ndata = Data(images, hot_one, random_state=42)\n\nprint(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))\n\nmodel = Network(img_shape=(50, 50, 1))\nmodel.add_flat_layer()\nmodel.add_fc_layer(size=50 * 50, use_relu=True)\nmodel.add_fc_layer(size=16, use_relu=True)\nmodel.add_fc_layer(size=2, use_relu=False)\nmodel.finish_setup()\n#model.set_data(data)\n\ncnn = Network(img_shape=(50, 50, 1))\ncnn.add_convolutional_layer(5, 16)\ncnn.add_convolutional_layer(5, 36)\ncnn.add_flat_layer()\ncnn.add_fc_layer(size=64, use_relu=True)\ncnn.add_fc_layer(size=16, use_relu=True)\ncnn.add_fc_layer(size=2, use_relu=False)\ncnn.finish_setup()\n\nmodel_path = os.path.join(cwd, 'results', 'nn_models', 'crater_east_model_nn.ckpt')\nmodel.restore(model_path)\n\ncnn_model_path = os.path.join(cwd, 'results/models/crater_west_model_cnn.ckpt')\ncnn.restore(cnn_model_path)\n\nimage1 = data.test.images[7]\np_nn_non, nn_p = model.predict([image1])[0]\n\np_non, p_crater = cnn.predict([image1])[0]\n\nprint(\"nn predict: \" + str(nn_p))\n\nprint(\"cnn predict: \" + str(cnn_p))\n\n#model.print_test_accuracy()\n\n#model.optimize(epochs=20)\n\n#model.save(model_path)\n\n#model.print_test_accuracy()\n\n#model.print_test_accuracy(show_example_errors=True)\n\n#model.print_test_accuracy(show_example_errors=True, show_confusion_matrix=True)\n\n#image1 = 
data.test.images[7]\n#plot_image(image1)\n\n#image2 = data.test.images[14]\n#plot_image(image2)\n\n#weights = model.filters_weights\n#plot_conv_weights(weights=weights[0])\n#plot_conv_weights(weights=weights[1])\n\n#values = model.get_filters_activations(image1)\n#plot_conv_layer(values=values[0])\n#plot_conv_layer(values=values[1])\n\n#values = model.get_filters_activations(image2)\n#plot_conv_layer(values=values[0])\n#plot_conv_layer(values=values[1])\n"
},
{
"alpha_fraction": 0.6411362886428833,
"alphanum_fraction": 0.6695433259010315,
"avg_line_length": 27.367347717285156,
"blob_id": "3f4637ab07bfb355cc703a76533f8277e22dd77e",
"content_id": "9bdf751e7f5f40a6cadc0506c311451e13146c81",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2781,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 98,
"path": "/bcb-src/nn_cnn_src/lime_exp_nn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Aug 17 12:40:45 2018\n\n@author: mohebbi\n\"\"\"\n\nimport numpy as np\nimport cv2\n\nimport os\nimport lime\nfrom lime import lime_image\nfrom skimage.segmentation import mark_boundaries\nimport matplotlib.pyplot as plt\n\nfrom crater_cnn import Network\nfrom crater_plots import plot_image, plot_conv_weights, plot_conv_layer\nfrom keras.applications import imagenet_utils\nfrom keras.preprocessing import image\nfrom skimage.color import gray2rgb, rgb2gray\ncwd = os.getcwd()\n\ninput_shape = (50, 50)\npreprocess = imagenet_utils.preprocess_input\n# setup NN\nnn = Network(img_shape=(50, 50, 1))\nnn.add_flat_layer()\nnn.add_fc_layer(size=50 * 50, use_relu=True)\nnn.add_fc_layer(size=16, use_relu=True)\nnn.add_fc_layer(size=2, use_relu=False)\nnn.finish_setup()\n# model.set_data(data)\n\n# restore previously trained CNN model\nprint(\"loading the pre-trained NN model\")\nnn_model_path = os.path.join(cwd, 'results', 'nn_models', 'crater_east_model_nn.ckpt')\nnn.restore(nn_model_path)\n\ndef transform_img_fn(path_list):\n org_out = []\n trans_out = []\n for img_path in path_list:\n src_img = cv2.imread(img_path)\n gray_img = rgb2gray(src_img)\n scaled_img = cv2.resize(gray_img, input_shape)\n norm_img = cv2.normalize(scaled_img, scaled_img, 0, 255, cv2.NORM_MINMAX)\n #flat_img = norm_img.flatten()\n \n #img = image.load_img(img_path, target_size=input_shape)\n #x = image.img_to_array(img)\n #x = np.expand_dims(x, axis=0)\n #x = preprocess(x)\n org_out.append(src_img)\n trans_out.append(norm_img)\n return org_out, trans_out\n\ndef predict_fn(images):\n out = []\n for img in images:\n # make the img gray again!!\n img = rgb2gray(img)\n scaled_img = cv2.resize(img, input_shape)\n norm_img = cv2.normalize(scaled_img, scaled_img, 0, 255, cv2.NORM_MINMAX)\n flat_img = norm_img.flatten()\n p_non, p_crater = nn.predict([flat_img])[0]\n \n #if p_crater >= 0.5 :\n out.append(p_crater)\n #else:\n 
out.append(p_non)\n \n return out\n \n\n# get some test image\nimages, trans_images = transform_img_fn(['./crater_data/images/tile1_24/crater/TE_tile1_24_001.jpg'])\nprint(images[0].shape)\nprint(trans_images[0].shape)\n\n#print(\"showing image\")\n#plt.imshow(images[0], cmap='gray')\n#plt.show()\n\n# get prediction\nprint(\"predict class probabilities\")\npreds = predict_fn(images)\nprint(preds)\n\n\nexplainer = lime_image.LimeImageExplainer()\n\nexplanation = explainer.explain_instance(images[0], predict_fn, hide_color=0, num_samples=1000)\n\ntemp, mask = explanation.get_image_and_mask(240, positive_only=True, num_features=5, hide_rest=True)\nplt.imshow(mark_boundaries(temp / 2 + 0.5, mask))\nplt.show()\n\n"
},
{
"alpha_fraction": 0.5791880488395691,
"alphanum_fraction": 0.596508264541626,
"avg_line_length": 41.70414352416992,
"blob_id": "cee35a440d5e769810be6f5a2f9874a154a596f5",
"content_id": "066472e7a847f2f75d09da0c0bfeedacd6388ee1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7217,
"license_type": "no_license",
"max_line_length": 221,
"num_lines": 169,
"path": "/bcb-src/nn_cnn_src/sliding_window_fcn_cnn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Mar 7 12:27:53 2019\n\n@author: mohebbi\n\"\"\"\n\nfrom skimage.transform import pyramid_gaussian\nimport cv2 as cv\nfrom helper import sliding_window\nimport os\nimport csv\nfrom crater_cnn import Network as CNN\nfrom crater_nn import Network as NN\nimport Param\nimport numpy as np\nimport seaborn as sns\n\ncwd = os.getcwd()\n# This script will go through all image tiles and detects crater area using sliding window method. This script use the output of FCN Segmentation code as input of this step.\n# for every crater candidate, we look at the binary mask of segmentation to remove non-crater area. We consider a candidate as a potential crater area, if more than 50 (default threshold) percent of its area was creater. \n# Then, write results as a csv file to the results folder. The results of this script is the input to the remove_duplicates.py script. \n\n# Task: add the part that we do calculations for a range of threshold values and save a plot about it.\n\n# input : a 2D of int (segmentation output) \n# output: a colored image of input\ndef give_color_to_seg_img(seg,n_classes):\n # generate a color image based on the segmented image.\n \n if len(seg.shape)==3:\n seg = seg[:,:,0]\n seg_img = np.zeros( (seg.shape[0],seg.shape[1],3) ).astype('float')\n colors = sns.color_palette(\"hls\", n_classes)\n \n for c in range(n_classes):\n segc = (seg == c)\n seg_img[:,:,0] += (segc*( colors[c][0] ))\n seg_img[:,:,1] += (segc*( colors[c][1] ))\n seg_img[:,:,2] += (segc*( colors[c][2] ))\n\n return(seg_img, colors)\n \n# it does reverse of the above function\ndef get_seg_from_img(seg_img, colors, n_classes = 2):\n \n seg_mtx = np.zeros( (seg_img.shape[0],seg_img.shape[1]) , dtype=int)\n \n for c in range(n_classes):\n segc = (colors[c] == seg_img)\n seg_mtx += segc.astype(np.int)\n\n return(seg_mtx)\n \n# This function determines if the window area, is a potential crater area or not. 
\n# This function calculates the crater score measure for for each window size (potential crater area). This measure is the number of 1 pixels to size of the window.\n# This function resturns true if crater score is bigger than equal to threshold.\n# The input is the loaded binary mask which is the output of FCN Segmentation phase. \ndef is_potential_crater(bin_mask,x,y,resized_w, windowSize, threshold = 0.5):\n \n windowSize_b = int((windowSize[1] * bin_mask.shape[1]) / resized_w + 1) # we consider scaling down the image for pyramid sliding window for binary mask too. \n window = bin_mask[y:y + windowSize_b, x:x + windowSize_b]\n crater_score = float(np.sum(window)) / (windowSize[1] * windowSize[0])\n answ = crater_score >= threshold\n return answ, crater_score\n\n\nif __name__ == \"__main__\":\n \n param = Param.Param()\n crater_threshold = 0.5\n n_classes= 2\n # setup CNN\n cnn = CNN(img_shape=(50, 50, 1))\n cnn.add_convolutional_layer(5, 16)\n cnn.add_convolutional_layer(5, 36)\n cnn.add_flat_layer()\n cnn.add_fc_layer(size=64, use_relu=True)\n cnn.add_fc_layer(size=16, use_relu=True)\n cnn.add_fc_layer(size=2, use_relu=False)\n cnn.finish_setup()\n # model.set_data(data)\n \n # restore previously trained CNN model\n cnn_model_path = os.path.join(cwd, 'models/cnn/crater_model_cnn.ckpt')\n cnn.restore(cnn_model_path)\n \n # go through all the tile folders\n gt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\n #gt_list = [\"1_24\"]\n \n for gt_num in gt_list:\n \n tile_img = 'tile' + gt_num\n print(\"Working on \" + tile_img)\n \n path = os.path.join('crater_data', 'tiles')\n bin_mask_path = os.path.join('crater_data', 'FCN-output', tile_img + '.csv')\n \n img = cv.imread(os.path.join(path, tile_img + '.pgm'), 0)\n img = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)/255.0\n \n # load the output of Gen_Mask file (FCN Segmentation output.)\n bin_mask = np.loadtxt(bin_mask_path, dtype=int)\n \n # Problem? 
We need to have similar pyramid downsacling that we are using for images, for binary mask input too.\n # The pyramid_gaussian function does not accept 2D list as input (only image). \n # Solution: \n # a- convert bin_mask to an image\n #bin_mask_img, colors = give_color_to_seg_img(bin_mask,n_classes)\n # b- feed it into pyramid_gaussian function\n #pyramid_bin_mask_img = tuple(pyramid_gaussian(bin_mask_img, downscale=1.5))\n \n # c- convert image into 2D list of int and save it back to bin_mask\n #pyramid_bin_mask = [get_seg_from_img(p) for p in pyramid_bin_mask_img]\n \n \n # task: get the threshold of the image\n \n crater_list_cnn = []\n \n winS = param.dmin\n # loop over the image pyramid\n \n for (i, resized) in enumerate(pyramid_gaussian(img, downscale=1.5)):\n if resized.shape[0] < 31:\n break\n \n print(\"Resized shape: %d, Window size: %d, i: %d\" % (resized.shape[0], winS, i))\n #windowSize=(winS, winS)\n # loop over the sliding window for each layer of the pyramid\n # this process takes about 7 hours. To do quick test, we may try stepSize\n # to be large (60) and see if code runs OK\n idx = 0 \n for (x, y, window) in sliding_window(resized, stepSize=8, windowSize=(winS, winS)):\n \n # calcualte the number of 1 pixels in the binary mask area of the window.\n answ, crater_score = is_potential_crater(bin_mask,x,y,resized.shape[0], windowSize=(winS, winS)) \n if answ :\n # apply a circular mask ot the window here. Think about where you should apply this mask. Before, resizing or after it. \n crop_img =cv.resize(window, (50, 50))\n cv.normalize(crop_img, crop_img, 0, 255, cv.NORM_MINMAX)\n crop_img = crop_img.flatten()\n \n p_non, p_crater = cnn.predict([crop_img])[0]\n \n scale_factor = 1.5 ** i\n x_c = int((x + 0.5 * winS) * scale_factor)\n y_c = int((y + 0.5 * winS) * scale_factor)\n crater_r = int(winS * scale_factor / 2)\n \n # add its probability to a score combined thresholded image and normal iamges. 
\n \n if (p_crater + crater_score) / 2 >= 0.75:\n crater_data = [x_c, y_c, crater_r , p_crater]\n crater_list_cnn.append(crater_data)\n \n \n cnn_file = open(\"results/fcn-cnn/\"+tile_img+\"_sw_fcn_cnn.csv\",\"w\")\n with cnn_file:\n writer = csv.writer(cnn_file, delimiter=',')\n writer.writerows(crater_list_cnn)\n cnn_file.close()\n \n print(\"CNN detected \", len(crater_list_cnn), \"craters for tile\" + gt_num)\n print(\"The results is saved on results/fcn-cnn/\"+tile_img+\"_sw_fcn_cnn.csv file.\")\n\nprint(\"end of main\")\n"
},
{
"alpha_fraction": 0.53722083568573,
"alphanum_fraction": 0.5566583871841431,
"avg_line_length": 30.220779418945312,
"blob_id": "e6567ebfc29a97d12b5e2ab2c396cce4558db9b9",
"content_id": "44aeacfd94d6329852ee99fc7bb2a7099358affb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2418,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 77,
"path": "/bcb-src/nn_cnn_src/hard_negative_mine.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Oct 17 17:10:35 2018\n\n@author: mohebbi\n\"\"\"\nfrom helper import isamatch\nimport numpy as np\nimport cv2 as cv\nimport os\n\n# save_path: the general path to save the extracted samples \n# img_type: crater or non-crater\n#\ndef save_samples(info_list, img, save_path, img_type, img_dimensions=(50, 50)):\n \n clone = img.copy()\n counter = 1\n for (x_c,y_c,r) in info_list:\n x = x_c - r\n y = y_c - r\n \n crop_img = clone[y:y_c + r, x:x_c + r]\n scaled_img =cv.resize(crop_img, img_dimensions)\n \n cv.normalize(scaled_img, scaled_img, 0, 255, cv.NORM_MINMAX)\n\t\tdst_filename = os.path.join(save_path, img_type, str(counter) + \"_hng.jpg\")\n cv.imwrite(dst_filename, scaled_img)\n\t\tcounter += 1\n \n\treturn counter -1\n\ndef extract_hard_negative_samples(craters, gt, img, save_path, img_dimensions, param):\n #sort by radius\n gt = gt.sort_values(by=[2]).values\n dt = craters.sort_values(by=[2]).values\n \n gt_visit = np.zeros(len(gt), dtype=int)\n dt_visit = np.zeros(len(dt), dtype=int)\n \n for v in range(0,len(gt)):\n x_gt = gt[v][0]\n y_gt = gt[v][1]\n r_gt = gt[v][2]\n \n for w in range(0,len(dt)):\n x_dt = dt[w][0]\n y_dt = dt[w][1]\n r_dt = dt[w][2]\n \n if( gt_visit[v] == 0 and isamatch(x_gt, y_gt, r_gt, x_dt, y_dt, r_dt, param)):\n \n gt_visit[v] = 1\n dt_visit[w] = 1\n\n # indexes that we missed from gt (crater)\n fn_index = [i for i, e in enumerate(gt_visit) if e == 0]\n # wrong predictions (non-crater)\n fp_index = [i for i, e in enumerate(dt_visit) if e == 0]\n \n # extract the x,y and r of indexs\n fn_info = []\n fp_info = []\n \n for i in fn_index:\n fn_info.append([gt[i][0], gt[i][1], gt[i][2]])\n \n for i in fp_index:\n fp_info.append([dt[i][0], dt[i][1], dt[i][2]])\n \n # extract locations from image and save it on save_path\n num_fn_samples = save_samples(fn_info, img, save_path, \"crater\", img_dimensions)\n num_fp_samples = save_samples(fp_info, img, save_path, 
\"non-crater\", img_dimensions)\n\t\n print( str(num_fn_samples) + \" crater hard negative samples are extracted.\")\n print( str(num_fp_samples) + \" non-crater hard negative samples are extracted.\")\n \n \n "
},
{
"alpha_fraction": 0.47217515110969543,
"alphanum_fraction": 0.4953968822956085,
"avg_line_length": 32.4179801940918,
"blob_id": "4c4065534e0c9f9b348a9f847c290480d0baa9bb",
"content_id": "7fb1cf0fe7e8229bf1152e13992892e90fb5d46a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 21833,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 634,
"path": "/bcb-src/gen_results/helper.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import numpy as np\r\nimport math\r\nimport pandas as pd\r\nimport cv2 as cv\r\nimport os\r\nimport matplotlib.pyplot as plt\r\nfrom sklearn.cluster import Birch\r\n\r\ndef ismember(a_vec, b_vec): \r\n booleans = np.in1d(a_vec, b_vec)\r\n return booleans\r\n\r\ndef sliding_window(image, stepSize, windowSize):\r\n\t# slide a window across the image\r\n for y in range(0, image.shape[0], stepSize):\r\n for x in range(0, image.shape[1], stepSize):\r\n if y + windowSize[1] <= image.shape[0] and x + windowSize[0] <= image.shape[1] : \r\n\t\t\t# yield the current window\r\n yield (x, y, image[y:y + windowSize[1], x:x + windowSize[0]])\r\n \r\ndef calculateDistance(x1, y1, x2, y2):\r\n return math.sqrt((x2 - x1)**2 + (y2 - y1)**2)\r\n\r\ndef BIRCH2_duplicate_removal(dataframe, threshold = 0.8):\r\n # Note this method now takes a dataframe as input\r\n \r\n if len(dataframe) < 2:\r\n # nothing to do\r\n return dataframe\r\n\r\n Crater_data = dataframe\r\n # extract axes\r\n x = Crater_data[0].values.tolist()\r\n y = Crater_data[1].values.tolist()\r\n r = Crater_data[2].values.tolist()\r\n p = Crater_data[3].values.tolist()\r\n Points = []\r\n \r\n X = np.column_stack((x, y))\r\n brc = Birch(branching_factor=50, n_clusters=int(threshold * len(x)), threshold=0.5,compute_labels=True)\r\n brc.fit(X)\r\n groups_pred = brc.predict(X)\r\n \r\n for c in set(groups_pred):\r\n idx = [i for i, e in enumerate(groups_pred) if e == c]\r\n \r\n Group_x = []\r\n Group_y = []\r\n Group_r = []\r\n Group_p = []\r\n index = []\r\n \r\n for i in idx:\r\n if i in range(0, len(x)):\r\n Group_x.append(x[i])\r\n Group_y.append(y[i])\r\n Group_r.append(r[i])\r\n Group_p.append(p[i])\r\n index.append(i)\r\n \r\n # after group is defined, extract its elements from list\r\n Points.append([Group_x,Group_y,Group_r, Group_p])\r\n\r\n # now reduce groups\r\n center_size = []\r\n for i, (Xs, Ys, Rr, Ps) in enumerate(Points):\r\n # we take the point with best prediction confidence\r\n best_index = 
np.argmax(Ps)\r\n x_center = Xs[best_index]\r\n y_center = Ys[best_index]\r\n radius = Rr[best_index]\r\n prob = Ps[best_index]\r\n center_size += [[x_center,y_center,radius, prob]]\r\n\r\n return pd.DataFrame(center_size)\r\n\r\ndef BIRCH_duplicate_removal(dataframe):\r\n # Note this method now takes a dataframe as input\r\n \r\n if len(dataframe) < 2:\r\n # nothing to do\r\n return dataframe\r\n\r\n Crater_data = dataframe\r\n # extract axes\r\n x = Crater_data[0].values.tolist()\r\n y = Crater_data[1].values.tolist()\r\n r = Crater_data[2].values.tolist()\r\n p = Crater_data[3].values.tolist()\r\n Points = []\r\n while len(x) > 0:\r\n # a group is a set of similar points\r\n Group_x = [x[0]]\r\n Group_y = [y[0]]\r\n Group_r = [r[0]]\r\n Group_p = [p[0]]\r\n index = [0]\r\n for i in range(1,len(x)):\r\n d_current = calculateDistance(x[0],y[0],x[i],y[i])\r\n\r\n # accept in group only if \r\n d = min(r[0], r[i])\r\n if d_current <= d and r[i] < 2*d:\r\n Group_x.append(x[i])\r\n Group_y.append(y[i])\r\n Group_r.append(r[i])\r\n Group_p.append(p[i])\r\n index.append(i)\r\n # after group is defined, extract its elements from list\r\n x = list(np.delete(x,index))\r\n y = list(np.delete(y,index))\r\n r = list(np.delete(r,index))\r\n p = list(np.delete(p,index))\r\n Points.append([Group_x,Group_y,Group_r, Group_p])\r\n\r\n # now reduce groups\r\n center_size = []\r\n for i, (Xs, Ys, Rr, Ps) in enumerate(Points):\r\n # we take the point with best prediction confidence\r\n best_index = np.argmax(Ps)\r\n x_center = Xs[best_index]\r\n y_center = Ys[best_index]\r\n radius = Rr[best_index]\r\n prob = Ps[best_index]\r\n center_size += [[x_center,y_center,radius, prob]]\r\n\r\n return pd.DataFrame(center_size)\r\n\r\ndef Banderia_duplicate_removal(dataframe):\r\n # Note this method now takes a dataframe as input\r\n \r\n if len(dataframe) < 2:\r\n # nothing to do\r\n return dataframe\r\n\r\n Crater_data = dataframe\r\n # extract axes\r\n x = Crater_data[0].values.tolist()\r\n y 
= Crater_data[1].values.tolist()\r\n r = Crater_data[2].values.tolist()\r\n p = Crater_data[3].values.tolist()\r\n \r\n Points = []\r\n while len(x) > 0:\r\n # a group is a set of similar points\r\n Group_x = [x[0]]\r\n Group_y = [y[0]]\r\n Group_r = [r[0]]\r\n Group_p = [p[0]]\r\n index = [0]\r\n for i in range(1,len(x)):\r\n d_current = calculateDistance(x[0],y[0],x[i],y[i])\r\n\r\n # accept in group only if \r\n if (abs(r[0] - r[i]) / max(r[0], r[i])) <= 0.5 and (d_current / 2 * max(r[0], r[i])) <= 0.5 :\r\n Group_x.append(x[i])\r\n Group_y.append(y[i])\r\n Group_r.append(r[i])\r\n Group_p.append(p[i])\r\n index.append(i)\r\n # after group is defined, extract its elements from list\r\n x = list(np.delete(x,index))\r\n y = list(np.delete(y,index))\r\n r = list(np.delete(r,index))\r\n p = list(np.delete(p,index))\r\n Points.append([Group_x,Group_y,Group_r, Group_p])\r\n\r\n # now reduce groups\r\n center_size = []\r\n for i, (Xs, Ys, Rr, Ps) in enumerate(Points):\r\n # we take the point with best prediction confidence\r\n best_index = np.argmax(Ps)\r\n x_center = Xs[best_index]\r\n y_center = Ys[best_index]\r\n radius = Rr[best_index]\r\n prob = Ps[best_index]\r\n center_size += [[x_center,y_center,radius, prob]]\r\n\r\n return pd.DataFrame(center_size)\r\n\r\ndef save_gt(img,gt, save_path):\r\n nseg = 64 \r\n implot = plt.imshow(img)\r\n for i in range(0, len(gt)):\r\n x = gt[i,0]\r\n y = gt[i,1]\r\n r = gt[i,2] \r\n \r\n theta = np.linspace(0.0, (2 * math.pi), (nseg + 1))\r\n pline_x = np.add(x, np.dot(r, np.cos(theta)))\r\n pline_y = np.add(y, np.dot(r, np.sin(theta)))\r\n plt.plot(pline_x, pline_y, 'b-')\r\n \r\n plt.savefig(save_path +'_gt.png', bbox_inches='tight', dpi=400)\r\n plt.show()\r\n\r\ndef draw_craters_circles(img_path, dataframe, show_probs=False):\r\n \r\n img = cv.imread(img_path, 1)\r\n img = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)\r\n\r\n color = (0, 255, 0)\r\n font = cv.FONT_HERSHEY_SIMPLEX\r\n \r\n for index, row in 
dataframe.iterrows():\r\n \r\n r = int(row[2])\r\n x = int(row[0])\r\n y = int(row[1])\r\n # if we want to see where is processed.\r\n cv.circle(img, (x, y), r, color, 2)\r\n if show_probs:\r\n cv.putText(img, \"%f\" % row[3], (x, y-5), font, 0.6, color, 2)\r\n\r\n return img\r\n\r\n \r\ndef draw_craters_rectangles(img_path, dataframe, show_probs=False):\r\n \r\n img = cv.imread(img_path, 1)\r\n img = cv.normalize(img, img, 0, 255, cv.NORM_MINMAX)\r\n\r\n color = (0, 255, 0)\r\n font = cv.FONT_HERSHEY_SIMPLEX\r\n \r\n for index, row in dataframe.iterrows():\r\n winS = int(row[2] * 2)\r\n half_winS = int(winS/2)\r\n x = int(row[0] - half_winS)\r\n y = int(row[1] - half_winS)\r\n # if we want to see where is processed.\r\n cv.rectangle(img, (x, y), (x + winS, y + winS), color, 2)\r\n if show_probs:\r\n cv.putText(img, \"%f\" % row[3], (x, y-5), font, 0.6, color, 2)\r\n\r\n return img\r\n\r\n# more work needs here..\r\ndef isamatch(x_gt, y_gt, r_gt, x_dt, y_dt, r_dt, param):\r\n # returns True if gt is inside, otherwise returns false\r\n d = math.sqrt((x_gt - x_dt)**2 + (y_gt - y_dt)**2 )\r\n \r\n if d <= 26 and (d / 2 * max(r_gt,r_dt) ) <= 1 and (abs(r_gt-r_dt)/ max(r_gt,r_dt)) <= param.d_tol : \r\n return True\r\n else:\r\n return False\r\n\r\ndef ismember(a_vec, b_vec): \r\n booleans = np.in1d(a_vec, b_vec)\r\n return booleans\r\n\r\ndef plot_dacc(r_tp,r_fp,r_fn, save_path, param):\r\n \r\n d_list = []\r\n dr_list = []\r\n fr_list = []\r\n bf_list = []\r\n qr_list = []\r\n fm_list = []\r\n \r\n for d in range(param.dmin, 41):\r\n tp = sum(np.dot(2, r_tp) >= d)\r\n fp = sum(np.dot(2, r_fp) >= d)\r\n fn = sum(np.dot(2, r_fn) >= d)\r\n \r\n dr = tp/(tp + fn)\r\n fr = fp / (tp + fp)\r\n bf = fp / tp if tp !=0 else 0 \r\n qr = tp / (tp + fp + fn)\r\n f_measure = 2*tp/(2*tp+fp+fn)\r\n \r\n d_list.append(d)\r\n dr_list.append(dr)\r\n fr_list.append(fr)\r\n bf_list.append(bf)\r\n qr_list.append(qr)\r\n fm_list.append(f_measure)\r\n \r\n dr_list = np.dot(100, dr_list) \r\n 
fr_list = np.dot(100, fr_list) \r\n bf_list = np.dot(100, bf_list) \r\n qr_list = np.dot(100, qr_list)\r\n fm_list = np.dot(100, fm_list) \r\n \r\n plt.plot(d_list, dr_list, 'go', d_list, fr_list, 'y.', d_list, bf_list, 'r*', d_list, qr_list, 'bx', d_list, fm_list, 'c^')\r\n plt.savefig(save_path +'_dacc.png', bbox_inches='tight', dpi=400)\r\n plt.show()\r\n \r\n# testset_path: the name of testfile\r\n# craters: a list of detected craters contains x,y,r\r\n# gt: ground truth loaded from csv files. it must be a dataframe\r\n# img: image of testset_name\r\n\r\n# note!!!: change the evaluation based on the dmin and dmax range. !!! the restuls is not correct right now. \r\n\r\ndef evaluate(craters, gt, img, nseg, save_figs, save_path, param):\r\n \r\n #sort by radius\r\n gt = gt.sort_values(by=[2]).values \r\n gt[:, 2] = gt[:, 2] / 2 # the third column of gt contians diameter.\r\n dt = craters.sort_values(by=[2]).values\r\n #dt[:, 2] = dt[:, 2] / 2\r\n \r\n gt_visit = np.zeros(len(gt), dtype=int)\r\n dt_visit = np.zeros(len(dt), dtype=int)\r\n \r\n # number of correct positive predictions\r\n p = 0\r\n errors = []\r\n\r\n for v in range(0,len(gt)):\r\n x_gt = gt[v][0]\r\n y_gt = gt[v][1]\r\n r_gt = gt[v][2] \r\n \r\n for w in range(0,len(dt)):\r\n x_dt = dt[w][0]\r\n y_dt = dt[w][1]\r\n r_dt = dt[w][2] # the third column of detections output is radius !!!!!!\r\n \r\n if( gt_visit[v] == 0 and dt_visit[w] == 0 and isamatch(x_gt, y_gt, r_gt, x_dt, y_dt, r_dt, param)):\r\n \r\n gt_visit[v] = 1\r\n dt_visit[w] = 1\r\n \r\n error_abs_xy = math.sqrt((x_gt-x_dt)**2 + (y_gt-y_dt)**2)\r\n error_abs_r = abs(r_gt-r_dt)\r\n error_rel_r = 100*(r_gt-r_dt)/r_gt\r\n errors.append([r_gt, error_abs_xy, error_abs_r, error_rel_r])\r\n p += 1\r\n break\r\n\r\n \r\n tp_index = [i for i, e in enumerate(dt_visit) if e == 1]\r\n fn_index = [i for i, e in enumerate(gt_visit) if e == 0]\r\n fp_index = [i for i, e in enumerate(dt_visit) if e == 0]\r\n \r\n tp = len(tp_index)\r\n fn = 
len(fn_index)\r\n fp = len(fp_index)\r\n \r\n # global rates\r\n global_res_1 = np.hstack((dt[tp_index,:], np.zeros((tp,1))))\r\n global_res_2 = np.hstack((dt[fp_index,:], np.ones((fp,1))))\r\n global_res_3 = np.hstack((gt[fn_index,:], np.dot(2 , np.ones((fn,1)))))\r\n global_res = np.concatenate((global_res_1, global_res_2, global_res_3), axis=0)\r\n \r\n # show the original image??\r\n \r\n #theta = 0 : (2 * pi / nseg) : (2 * pi);\r\n theta = np.linspace(0.0, (2 * math.pi), (nseg + 1))\r\n \r\n r_tp = []\r\n r_fp = []\r\n r_fn = []\r\n \r\n if save_figs:\r\n implot = plt.imshow(img)\r\n \r\n for c in range(len(global_res)):\r\n \r\n x_res = global_res[c,0]\r\n y_res = global_res[c,1]\r\n r_res = global_res[c,2] \r\n flag_res = global_res[c,3]\r\n \r\n pline_x = np.add(np.dot(r_res , np.cos(theta)), x_res)\r\n pline_y = np.add(np.dot(r_res , np.sin(theta)), y_res)\r\n L = \"\"\r\n \r\n if flag_res == 0.0 :\r\n L = 'g'\r\n r_tp.append(r_res)\r\n \r\n elif flag_res == 1.0:\r\n L = 'r'\r\n r_fp.append(r_res)\r\n elif flag_res == 2.0:\r\n L = 'b'\r\n r_fn.append(r_res)\r\n else:\r\n print(\"Unknown results\")\r\n \r\n # if show_figs, plot(pline_x, pline_y, strcat(L,'-'),'LineWidth',2); end\r\n if save_figs:\r\n plt.plot(pline_x, pline_y, (L + '-') , linewidth=1)\r\n plt.axis('off')\r\n\r\n\r\n if save_figs:\r\n # show the previous plot\r\n #plt.show()\r\n plt.savefig(save_path +'_evaluation.png', bbox_inches='tight', dpi=400)\r\n plt.show()\r\n \r\n save_gt(img, gt, save_path)\r\n \r\n # plots (https://matplotlib.org/users/pyplot_tutorial.html)\r\n plt.figure(1)\r\n plt.subplot(221)\r\n plt.plot(np.dot(2,[item[0] for item in errors]), [item[1] for item in errors], 'bo')\r\n plt.title('precision of the detected position (px)')\r\n plt.xlabel('diameter (px)')\r\n plt.ylabel('error in position (px)')\r\n \r\n plt.subplot(222)\r\n plt.plot(np.dot(2,[item[0] for item in errors]), [item[2] for item in errors], 'bo')\r\n plt.title('precision of the detected radius (px)')\r\n 
plt.xlabel('diameter (px)')\r\n plt.ylabel('error in radius (px)')\r\n \r\n plt.subplot(223)\r\n plt.plot(np.dot(2,[item[0] for item in errors]), [item[3] for item in errors], 'bo')\r\n plt.title('precision of the detected radius (%)')\r\n plt.xlabel('diameter (px)')\r\n plt.ylabel('error in radius (%)')\r\n \r\n plt.savefig(save_path +'_stats.png', bbox_inches='tight', figsize=(10, 8), dpi=400)\r\n plt.show()\r\n \r\n totalerror_position = sum([item[1] for item in errors])\r\n totalerror_radius = sum([abs(x[2]) for x in errors] )\r\n \r\n # writting to text filte\r\n f = open(save_path +'_stats.txt','w')\r\n \r\n print(\"Total error in position: \" + str(totalerror_position) + \" , Total error in radius: \" + str(totalerror_radius) )\r\n f.write(\"Total error in position: \" + str(totalerror_position) + \" , Total error in radius: \\n\\n\" + str(totalerror_radius) )\r\n \r\n print(\"Trues, TP: \" + str(tp) + \" , Falses, FP: \" + str(fp) + \" , FN: \" + str(fn) )\r\n f.write(\"Trues, TP: \" + str(tp) + \" , Falses, FP: \" + str(fp) + \" , FN: \\n\" + str(fn) )\r\n \r\n dr = float(tp)/float(tp+fn)\r\n fr = float(fp)/float(tp+fp)\r\n bf = float(fp)/float(tp)\r\n qr = float(tp)/float(tp+fp+fn)\r\n f1_measure = float(2*tp)/float(2*tp+fp+fn)\r\n \r\n precision = float(tp) / float(tp + fp)\r\n recall = float(tp) / float(tp + fn)\r\n \r\n print(\"f1-measure: %.5f , detection percentage: %.5f , branching factor: %.5f , quality percentage: %.5f\" % (f1_measure, dr, bf,qr))\r\n f.write(\"f1-measure: %.5f , detection percentage: %.5f , branching factor: %.5f , quality percentage: %.5f \\n\\n\" % (f1_measure, dr, bf,qr))\r\n \r\n print(\"precision: %.5f , recall: %.5f \" % (precision, recall))\r\n f.write(\"precision: %.5f , recall: %.5f \\n\\n\" % (precision, recall))\r\n \r\n f.close()\r\n \r\n if save_figs :\r\n plot_dacc(r_tp, r_fp, r_fn, save_path, param)\r\n \r\n\r\n return dr, fr, qr, bf, f1_measure, tp, fp, fn \r\n\r\n\r\n\r\ndef evaluate_cmp(craters1, craters2, gt, 
img, nseg, save_figs, save_path, param):\r\n \r\n #sort by radius\r\n gt = gt.sort_values(by=[2]).values\r\n dt1 = craters1.sort_values(by=[2]).values\r\n dt2 = craters2.sort_values(by=[2]).values\r\n \r\n gt_visit1 = np.zeros(len(gt), dtype=int)\r\n gt_visit2 = np.zeros(len(gt), dtype=int)\r\n dt_visit1 = np.zeros(len(dt1), dtype=int)\r\n dt_visit2 = np.zeros(len(dt2), dtype=int)\r\n \r\n # number of correct positive predictions\r\n p1 = 0\r\n p2 = 0 \r\n errors1 = []\r\n errors2 = []\r\n \r\n for v in range(0,len(gt)):\r\n x_gt = gt[v][0]\r\n y_gt = gt[v][1]\r\n r_gt = gt[v][2] /2\r\n \r\n for w in range(0,len(dt1)):\r\n x_dt1 = dt1[w][0]\r\n y_dt1 = dt1[w][1]\r\n r_dt1 = dt1[w][2]\r\n \r\n if( gt_visit1[v] == 0 and isamatch(x_gt, y_gt, r_gt, x_dt1, y_dt1, r_dt1, param)):\r\n \r\n gt_visit1[v] = 1\r\n dt_visit1[w] = 1\r\n \r\n error_abs_xy = math.sqrt((x_gt-x_dt1)**2 + (y_gt-y_dt1)**2)\r\n error_abs_r = abs(r_gt-r_dt1)\r\n error_rel_r = 100*(r_gt-r_dt1)/r_gt\r\n errors1.append([r_gt, error_abs_xy, error_abs_r, error_rel_r])\r\n p1 += 1\r\n\r\n for w in range(0, len(dt2)):\r\n \r\n x_dt2 = dt2[w][0]\r\n y_dt2 = dt2[w][1]\r\n r_dt2 = dt2[w][2]\r\n \r\n if( gt_visit2[v] == 0 and isamatch(x_gt, y_gt, r_gt, x_dt2, y_dt2, r_dt2, param)):\r\n \r\n gt_visit2[v] = 1\r\n dt_visit2[w] = 1\r\n \r\n error_abs_xy = math.sqrt((x_gt-x_dt2)**2 + (y_gt-y_dt2)**2)\r\n error_abs_r = abs(r_gt-r_dt2)\r\n error_rel_r = 100*(r_gt-r_dt2)/r_gt\r\n errors2.append([r_gt, error_abs_xy, error_abs_r, error_rel_r])\r\n p2 += 1\r\n \r\n tp_index1 = [i for i, e in enumerate(gt_visit1) if e == 1]\r\n fn_index1 = [i for i, e in enumerate(gt_visit1) if e == 0]\r\n fp_index1 = [i for i, e in enumerate(dt_visit1) if e == 0]\r\n \r\n tp_index2 = [i for i, e in enumerate(gt_visit2) if e == 1]\r\n fn_index2 = [i for i, e in enumerate(gt_visit2) if e == 0]\r\n fp_index2 = [i for i, e in enumerate(dt_visit2) if e == 0]\r\n \r\n tp1 = len(tp_index1)\r\n fn1 = len(fn_index1)\r\n fp1 = len(fp_index1)\r\n \r\n 
tp2 = len(tp_index2)\r\n fn2 = len(fn_index2)\r\n fp2 = len(fp_index2)\r\n \r\n # global rates\r\n global_res_1 = np.hstack((gt[tp_index1,:], np.zeros((tp1,1))))\r\n global_res_2 = np.hstack((dt1[fp_index1,:], np.ones((fp1,1))))\r\n global_res_3 = np.hstack((gt[fn_index1,:], np.dot(2 , np.ones((fn1,1)))))\r\n global_res1 = np.concatenate((global_res_1, global_res_2, global_res_3), axis=0)\r\n \r\n global_res_1 = np.hstack((gt[tp_index2,:], np.zeros((tp2,1))))\r\n global_res_2 = np.hstack((dt2[fp_index2,:], np.ones((fp2,1))))\r\n global_res_3 = np.hstack((gt[fn_index2,:], np.dot(2 , np.ones((fn2,1)))))\r\n global_res2 = np.concatenate((global_res_1, global_res_2, global_res_3), axis=0)\r\n # show the original image??\r\n \r\n #theta = 0 : (2 * pi / nseg) : (2 * pi);\r\n theta = np.linspace(0.0, (2 * math.pi), (nseg + 1))\r\n \r\n r_tp = []\r\n r_fp = []\r\n r_fn = []\r\n \r\n if save_figs:\r\n implot = plt.imshow(img)\r\n \r\n for c in range(len(global_res1)):\r\n \r\n x_res = global_res1[c,0]\r\n y_res = global_res1[c,1]\r\n r_res = global_res1[c,2] /2\r\n flag_res = global_res1[c,3]\r\n \r\n pline_x = np.add(np.dot(r_res , np.cos(theta)), x_res)\r\n pline_y = np.add(np.dot(r_res , np.sin(theta)), y_res)\r\n L = \"\"\r\n \r\n if flag_res == 0.0 :\r\n L = 'g'\r\n r_tp.append(r_res)\r\n \r\n elif flag_res == 1.0:\r\n L = 'r'\r\n r_fp.append(r_res)\r\n elif flag_res == 2.0:\r\n L = 'b'\r\n r_fn.append(r_res)\r\n else:\r\n print(\"Unknown results\")\r\n \r\n # if show_figs, plot(pline_x, pline_y, strcat(L,'-'),'LineWidth',2); end\r\n if save_figs:\r\n plt.plot(pline_x, pline_y, (L + '--') , linewidth=1)\r\n plt.axis('off')\r\n \r\n for c in range(len(global_res2)):\r\n \r\n x_res = global_res2[c,0]\r\n y_res = global_res2[c,1]\r\n r_res = global_res2[c,2]\r\n flag_res = global_res2[c,3]\r\n \r\n pline_x = np.add(np.dot(r_res , np.cos(theta)), x_res)\r\n pline_y = np.add(np.dot(r_res , np.sin(theta)), y_res)\r\n L = \"\"\r\n \r\n if flag_res == 0.0 :\r\n L = 'g'\r\n 
r_tp.append(r_res)\r\n \r\n elif flag_res == 1.0:\r\n L = 'r'\r\n r_fp.append(r_res)\r\n elif flag_res == 2.0:\r\n L = 'b'\r\n r_fn.append(r_res)\r\n else:\r\n print(\"Unknown results\")\r\n \r\n # if show_figs, plot(pline_x, pline_y, strcat(L,'-'),'LineWidth',2); end\r\n if save_figs:\r\n plt.plot(pline_x, pline_y, (L + '-') , linewidth=1.4)\r\n plt.axis('off')\r\n\r\n\r\n if save_figs:\r\n # show the previous plot\r\n #plt.show()\r\n plt.savefig(save_path +'_evaluation_cmp.png', bbox_inches='tight', dpi=400)\r\n plt.show()\r\n \r\n # plots (https://matplotlib.org/users/pyplot_tutorial.html)\r\n plt.figure(1)\r\n plt.subplot(221)\r\n plt.plot(np.dot(2,[item[0] for item in errors1]), [item[1] for item in errors1], 'r^')\r\n plt.plot(np.dot(2,[item[0] for item in errors2]), [item[1] for item in errors2], 'bo',mfc='none')\r\n plt.title('Euclidean Position Error (px)')\r\n plt.xlabel('diameter (px)')\r\n plt.ylabel('error in position (px)')\r\n \r\n plt.subplot(222)\r\n plt.plot(np.dot(2,[item[0] for item in errors1]), [item[2] for item in errors1], 'r^')\r\n plt.plot(np.dot(2,[item[0] for item in errors2]), [item[2] for item in errors2], 'bo',mfc='none')\r\n plt.title('Absolute Radius Error (px)')\r\n plt.xlabel('diameter (px)')\r\n plt.ylabel('error in radius (px)')\r\n \r\n plt.savefig(save_path +'_stats_cmp.png', bbox_inches='tight', figsize=(10, 8), dpi=400)\r\n plt.show()\r\n \r\n\r\n"
},
{
"alpha_fraction": 0.6053072810173035,
"alphanum_fraction": 0.6111026406288147,
"avg_line_length": 30.37799072265625,
"blob_id": "d33af76e7938dcf4fff7e16ef7619769ac4ee77c",
"content_id": "49bde5eeb7ae5d1634b150f84a2d5086befff881",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6557,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 209,
"path": "/bcb-src/nn_cnn_src/crater_plots.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\nfrom sklearn.metrics import confusion_matrix\nimport math\nimport numpy as np\n\ndef plot_image(image, img_shape=None):\n if not img_shape:\n # assume square\n side = int(math.sqrt(len(image)))\n img_shape = (side, side)\n\n plt.imshow(image.reshape(img_shape),\n interpolation='nearest',\n cmap='binary')\n\n plt.show()\n\ndef plot_images(images, cls_true, cls_pred=None, img_shape=None):\n \"\"\" Helper-function for plotting images\n Function used to plot 9 images in a nx3 grid, and writing the true and\n predicted classes below each image.\"\"\"\n\n if len(images) == 0:\n print(\"No images to plot\")\n return\n\n if not img_shape:\n # assume square\n side = int(math.sqrt(len(images[0])))\n img_shape = (side, side, 1)\n\n nrows = int(math.ceil(len(images)/3))\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(nrows, 3, figsize=(6, 2*nrows))\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n if i < len(images):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape[:2]), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n else:\n ax.axis('off')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()\n\ndef plot_example_errors(cls_pred, correct, data_test, img_shape=None):\n\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the 
images from the test-set that have been\n # incorrectly classified.\n images = data_test.images[incorrect]\n \n if not img_shape:\n # assume square\n side = int(math.sqrt(len(images[0])))\n img_shape = (side, side, 1)\n\n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data_test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[:9],\n cls_true=cls_true[:9],\n cls_pred=cls_pred[:9],\n img_shape=img_shape)\n\ndef plot_confusion_matrix(cls_pred, data_test):\n\n # infer num_classes value\n num_classes = len(data_test.labels[0])\n\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data_test.cls\n \n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()\n\ndef plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n\n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(weights)\n w_max = np.max(weights)\n\n # Number of filters used in the conv. 
layer.\n num_filters = weights.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = weights[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()\n\ndef plot_conv_layer(values):\n # Assume layer is a TensorFlow op that outputs a 4-dim tensor\n # which is the output of a convolutional layer,\n # e.g. layer_conv1 or layer_conv2.\n\n # Create a feed-dict containing just one image.\n # Note that we don't need to feed y_true because it is\n # not used in this calculation.\n\n # Number of filters used in the conv. 
layer.\n num_filters = values.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot the output images of all the filters.\n for i, ax in enumerate(axes.flat):\n # Only plot the images for valid filters.\n if i<num_filters:\n # Get the output image of using the i'th filter.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = values[0, :, :, i]\n\n # Plot image.\n ax.imshow(img, interpolation='nearest', cmap='binary')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()"
},
{
"alpha_fraction": 0.6745222806930542,
"alphanum_fraction": 0.7012738585472107,
"avg_line_length": 31.06122398376465,
"blob_id": "f867d591d93b3e5411083e5ea13c2f68b33f024e",
"content_id": "052ef5d8606cc65c3b74a52cb55547cc468a365d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1570,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 49,
"path": "/bcb-src/nn_cnn_src/remove_duplicates_kmeans.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "# this script determines the best K for kmeans using plotting (elbow method).\nimport cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import calculateDistance, draw_craters_rectangles, draw_craters_circles, evaluate\nimport Param\nfrom sklearn.cluster import KMeans\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\n# the raw data to process for duplicate removal\nparam = Param.Param()\nremoval_method = 'KMeans'\ncsv_path = 'results/cnn/west_train_west_test_1_25_cnn.csv'\nsave_path = 'results/cnn/evaluations/' + removal_method + '/west_train_west_test_1_25_cnn'\ndata = pd.read_csv(csv_path, header=None)\n\nstart_time = time.time()\n\n# first pass, remove duplicates for points of same window size\ndf1 = {}\nmerge = pd.DataFrame()\nfor ws in data[2].unique():\n df1[ws] = data[ (data[3] > 0.75) & (data[2] == ws) ] # take only 75% or higher confidence\n merge = pd.concat([merge, df1[ws]])\n\nx = merge[0].values.tolist()\ny = merge[1].values.tolist()\n\nX = np.column_stack((x, y))\n\nplt.scatter(X[:, 0], X[:, 1], s=50);\nplt.savefig(save_path +'_org.png', bbox_inches='tight', dpi=400)\n\nkmeans = KMeans(n_clusters=8)\nkmeans.fit(X)\ny_kmeans = kmeans.predict(X)\n\nplt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')\ncenters = kmeans.cluster_centers_\nplt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5);\nplt.savefig(save_path +'_cluster.png', bbox_inches='tight', dpi=400)\n\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))"
},
{
"alpha_fraction": 0.5388226509094238,
"alphanum_fraction": 0.6366919279098511,
"avg_line_length": 40.56922912597656,
"blob_id": "021c6c6d14f5d4f8a23fc30b3691f24ff9ff8f1f",
"content_id": "762130618509b7fafbd4fd45ba56bdde41fc1f4e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2769,
"license_type": "no_license",
"max_line_length": 153,
"num_lines": 65,
"path": "/bcb-src/gen_results/plot_ps.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "\"\"\"\r\n============================\r\nBachelor's degrees by gender\r\n============================\r\n\r\nA graph of multiple time series which demonstrates extensive custom\r\nstyling of plot frame, tick lines and labels, and line graph properties.\r\n\r\nAlso demonstrates the custom placement of text labels along the right edge\r\nas an alternative to a conventional legend.\r\n\"\"\"\r\n\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nfrom matplotlib.cbook import get_sample_data\r\nimport pandas as pd\r\n\r\nif __name__ == \"__main__\":\r\n\r\n\tepochs = [i for i in range(0,525,25)]\r\n\t# validation accuracy of progressive models\r\n\tacc_1 = [0.44, 0.46, 0.48, 0.48, 0.5, 0.49, 0.51, 0.53, 0.56, 0.54, 0.55, 0.58, 0.58, 0.59, 0.58, 0.63, 0.64, 0.69, 0.70, 0.72, 0.72] # 12 x 12 images\r\n\tacc_2 = [0.42, 0.54, 0.56, 0.58, 0.57, 0.56, 0.56, 0.53, 0.59, 0.64, 0.65, 0.66, 0.65, 0.71, 0.74, 0.73, 0.77, 0.80, 0.86, 0.85, 0.846] # 24 x 24 images\r\n\tacc_3 = [0.51, 0.55, 0.56, 0.59, 0.60, 0.58, 0.61, 0.63, 0.62, 0.66, 0.68, 0.70, 0.71, 0.73, 0.74, 0.78, 0.79, 0.84, 0.85, 0.89, 0.892] # 48 x 48 images\r\n\tacc_list = [acc_1, acc_2, acc_3]\r\n\r\n\t# These are the colors that will be used in the plot\r\n\tcolor_sequence = ['#00ff00', '#0000ff', '#ff0000']\r\n\r\n\t# You typically want your plot to be ~1.33x wider than tall. This plot\r\n\t# is a rare exception because of the number of lines being plotted on it.\r\n\t# Common sizes: (10, 7.5) and (12, 9)\r\n\tfig, ax = plt.subplots(1, 1, figsize=(14, 9))\r\n\r\n\t# Ensure that the axis ticks only show up on the bottom and left of the plot.\r\n\t# Ticks on the right and top of the plot are generally unnecessary.\r\n\tax.get_xaxis().tick_bottom()\r\n\tax.get_yaxis().tick_left()\r\n\tax.set_xlim(0, 500)\r\n\r\n\t# Provide tick lines across the plot to help your viewers trace along\r\n\t# the axis ticks. 
Make sure that the lines are light and small so they\r\n\t# don't obscure the primary data lines.\r\n\tplt.grid(True, 'major', 'y', ls='--', lw=.5, c='k', alpha=.3)\r\n\r\n\t# Remove the tick marks; they are unnecessary with the tick lines we just\r\n\t# plotted.\r\n\tplt.tick_params(axis='both', which='both', bottom=False, top=False,\r\n\t\t\t\t\tlabelbottom=True, left=False, right=False, labelleft=True)\r\n\r\n\t# Now that the plot is prepared, it's time to actually plot the data!\r\n\t# Note that I plotted the majors in order of the highest % in the final year.\r\n\tprogressive_resizing = ['12 x 12 ','24 x 24 ','48 x 48 ']\r\n\toptions = ['--', '+', 's']\r\n\t\r\n\tfor i, step in enumerate(progressive_resizing):\r\n\t\tprint(\"i: \" + str(i))\r\n\t\tplt.plot(epochs, acc_list[i], options[i], lw=2.5, color=color_sequence[i])\r\n\t\ty_pos = acc_list[i][-1]\r\n\t\t\r\n\t\tplt.text(510, y_pos, step, fontsize=14, color=color_sequence[i])\r\n\t\r\n\t\r\n\tplt.show()\r\n\tplt.savefig('progressive_resizing_val_acc_2.png', bbox_inches='tight', dpi=400)\r\n\r\n"
},
{
"alpha_fraction": 0.6181392073631287,
"alphanum_fraction": 0.6295180916786194,
"avg_line_length": 35.839508056640625,
"blob_id": "e7261b48aa90959eb2c29d9a3b379ce3da693b1c",
"content_id": "d4675167f16179ccadcdfc62d596692a61bc3d76",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2988,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 81,
"path": "/bcb-src/nn_cnn_src/crater_loader.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import os\nimport cv2 \nimport glob\nimport random\nimport numpy as np\n\ndef load_crater_data(dataset_type):\n \n # set origin path for the images\n src = os.path.join('crater_data', 'samples', dataset_type, 'normalized_images')\n \n # this dict helps to create binary labels for the pictures\n labels_dict = {'crater': 1, 'non-crater': 0}\n \n images = []\n labels = []\n hot_one = []\n # get all images file paths\n for src_filename in glob.glob(os.path.join(src, '*', '*.png')):\n # extract info from file path\n pathinfo = src_filename.split(os.path.sep)\n img_type = pathinfo[-2] # crater or non-crater\n filename = pathinfo[-1] # the actual name of the jpg\n \n # read the grayscale version of the image, \n # and normalize its values to be between 0 and 1\n img = cv2.imread(src_filename, cv2.IMREAD_GRAYSCALE) / 255.0\n \n # reshape the data structure to be a 1-D column vector\n img = img.flatten()\n \n # include the image data and its label into the sample list\n images.append(img)\n labels.append(labels_dict[img_type])\n hot_one.append([int(i==labels_dict[img_type]) for i in range(2)])\n \n # We have to shuffle the order before splitting between training data\n # and test data\n # will shuffle on next step\n #random.shuffle(samples)\n \n # determine slices for training and test data\n #splitpos = int(len(samples) * 0.7)\n #return samples[:splitpos], samples[splitpos:]\n \n # Will split data after this step. 
Return a single data set\n return np.array(images), np.array(labels), np.array(hot_one)\n\ndef load_crater_data_wrapper(dataset_type):\n \n # set origin path for the images\n src = os.path.join('crater_data', 'samples',dataset_type, 'normalized_images')\n \n # this dict helps to create binary labels for the pictures\n labels_dict = {'crater': 1, 'non-crater': 0}\n \n samples = []\n # get all images file paths\n for src_filename in glob.glob(os.path.join(src, '*', '*.png')):\n # extract info from file path\n pathinfo = src_filename.split(os.path.sep)\n img_type = pathinfo[-2] # crater or non-crater\n filename = pathinfo[-1] # the actual name of the jpg\n \n # read the grayscale version of the image, \n # and normalize its values to be between 0 and 1\n img = cv2.imread(src_filename, cv2.IMREAD_GRAYSCALE) / 255.0\n \n # reshape the data structure to be a 1-D column vector\n img = img.flatten().reshape((len(img)**2, 1))\n \n # include the image data and its label into the sample list\n samples.append((img, labels_dict[img_type]))\n \n # We have to shuffle the order before splitting between training data\n # and test data\n random.shuffle(samples)\n \n # determine slices for training and test data\n splitpos = int(len(samples) * 0.7)\n return samples[:splitpos], samples[splitpos:]\n "
},
{
"alpha_fraction": 0.49217134714126587,
"alphanum_fraction": 0.5051698684692383,
"avg_line_length": 28.692981719970703,
"blob_id": "ab26af414f1d43563cdab2c6f9832259fa3d098e",
"content_id": "f4e2d5ecf280238a9d9113fd0856e7fcdd958c9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3385,
"license_type": "no_license",
"max_line_length": 136,
"num_lines": 114,
"path": "/bcb-src/nn_cnn_src/simple_xmeans.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport math as mt\nimport sys\nfrom sklearn.cluster import KMeans\nfrom sklearn import datasets\nfrom sklearn import metrics\n\nclass XMeans:\n def loglikelihood(self, r, rn, var, m, k):\n l1 = - rn / 2.0 * mt.log(2 * mt.pi)\n l2 = - rn * m / 2.0 * mt.log(var)\n l3 = - (rn - k) / 2.0\n l4 = rn * mt.log(rn)\n l5 = - rn * mt.log(r)\n\n return l1 + l2 + l3 + l4 + l5\n\n def __init__(self, X, kmax = 20):\n self.X = X\n self.num = np.size(self.X, axis=0)\n self.dim = np.size(X, axis=1)\n self.KMax = kmax\n\n def fit(self):\n k = 1\n X = self.X\n M = self.dim\n num = self.num\n\n while(1):\n ok = k\n\n #Improve Params\n kmeans = KMeans(n_clusters=k).fit(X)\n labels = kmeans.labels_\n m = kmeans.cluster_centers_\n\n #Improve Structure\n #Calculate BIC\n p = M + 1\n\n obic = np.zeros(k)\n\n for i in range(k):\n rn = np.size(np.where(labels == i))\n var = np.sum((X[labels == i] - m[i])**2)/float(rn - 1)\n obic[i] = self.loglikelihood(rn, rn, var, M, 1) - p/2.0*mt.log(rn)\n\n #Split each cluster into two subclusters and calculate BIC of each splitted cluster\n sk = 2 #The number of subclusters\n nbic = np.zeros(k)\n addk = 0\n\n for i in range(k):\n ci = X[labels == i]\n r = np.size(np.where(labels == i))\n\n kmeans = KMeans(n_clusters=sk).fit(ci)\n ci_labels = kmeans.labels_\n sm = kmeans.cluster_centers_\n\n for l in range(sk):\n rn = np.size(np.where(ci_labels == l))\n var = np.sum((ci[ci_labels == l] - sm[l])**2)/float(rn - sk)\n nbic[i] += self.loglikelihood(r, rn, var, M, sk)\n\n p = sk * (M + 1)\n nbic[i] -= p/2.0*mt.log(r)\n\n if obic[i] < nbic[i]:\n addk += 1\n\n k += addk\n\n if ok == k or k >= self.KMax:\n break\n\n\n #Calculate labels and centroids\n kmeans = KMeans(n_clusters=k).fit(X)\n self.labels = kmeans.labels_\n self.k = k\n self.m = kmeans.cluster_centers_\n\n\nif __name__ == '__main__':\n\n #Blobs (Isotropic Gaussian distributions)\n X, TrueLabels = datasets.make_blobs(n_samples=1500, centers=3, n_features=3)\n\n xm = 
XMeans(X)\n xm.fit()\n\n purity = metrics.adjusted_rand_score(TrueLabels, xm.labels)\n nmi = metrics.normalized_mutual_info_score(TrueLabels, xm.labels)\n ari = metrics.adjusted_rand_score(TrueLabels, xm.labels)\n\n print(\"Blobs\")\n print(\"True k = 3, Estimated k = \" + str(xm.k) + \", purity = \" + str(purity) + \", NMI = \" + str(nmi) + \", ARI = \" + str(ari) + \"\\n\")\n\n #Iris dataset\n dataset = datasets.load_iris()\n X = dataset.data\n TrueLabels = dataset.target\n\n xm = XMeans(X)\n xm.fit()\n\n purity = metrics.adjusted_rand_score(TrueLabels, xm.labels)\n nmi = metrics.normalized_mutual_info_score(TrueLabels, xm.labels)\n ari = metrics.adjusted_rand_score(TrueLabels, xm.labels)\n\n print(\"Iris dataset\")\n print(\"True k = 3, Estimated k = \" + str(xm.k) + \", purity = \" + str(purity) + \", NMI = \" + str(nmi) + \", ARI = \" + str(ari) + \"\\n\")\n"
},
{
"alpha_fraction": 0.6171945929527283,
"alphanum_fraction": 0.6316742300987244,
"avg_line_length": 34.64516067504883,
"blob_id": "35867704c0bcc534a26b0d28aff6f8e0ddb9f130",
"content_id": "245b7eca980ec9d370e2ceede9399d3c352dab3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1105,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 31,
"path": "/bcb-src/cda_deep/crater_preprocessing.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2\nimport glob\nimport os\n\ndef preprocess(tile_name, dest_folder, img_dimensions=(50, 50)):\n src = os.path.join('crater_data', 'images', tile_name)\n dst = os.path.join('crater_data', dest_folder)\n tgt_height, tgt_width = img_dimensions\n\n # create new directories if necessary\n for imgtype in ['crater', 'non-crater']:\n tgdir = os.path.join(dst, imgtype)\n if not os.path.isdir(tgdir):\n os.makedirs(tgdir)\n\n for src_filename in glob.glob(os.path.join(src, '*', '*.jpg')):\n pathinfo = src_filename.split(os.path.sep)\n img_type = pathinfo[-2] # crater or non-crater\n filename = pathinfo[-1] # the actual name of the jpg\n\n dst_filename = os.path.join(dst, img_type, filename)\n\n # read the original image and get size info\n src_img = cv2.imread(src_filename)\n\n # resize image, normalize and write to disk\n scaled_img = cv2.resize(src_img, (tgt_height, tgt_width))\n cv2.normalize(scaled_img, scaled_img, 0, 255, cv2.NORM_MINMAX)\n cv2.imwrite(dst_filename, scaled_img)\n \n print(\"Done!\")\n"
},
{
"alpha_fraction": 0.695811927318573,
"alphanum_fraction": 0.7178544998168945,
"avg_line_length": 30.674419403076172,
"blob_id": "8d16eedb7e5154ec0f8b3c91f6fa46a193bdf13c",
"content_id": "fa40c2dcd5bcd94d4e408db050f9f6d6a88a683a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1361,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 43,
"path": "/bcb-src/nn_cnn_src/kmeans_elbow.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "# this script determines the best K for kmeans using plotting (elbow method).\nimport cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import calculateDistance, draw_craters_rectangles, draw_craters_circles, evaluate\nimport Param\nfrom sklearn.cluster import KMeans\nfrom matplotlib import pyplot as plt\n\n# the raw data to process for duplicate removal\nparam = Param.Param()\nremoval_method = 'KMeans'\ncsv_path = 'results/cnn/west_train_west_test_1_25_cnn.csv'\nsave_path = 'results/cnn/evaluations/' + removal_method + '/west_train_west_test_1_25_cnn'\ndata = pd.read_csv(csv_path, header=None)\n\nstart_time = time.time()\n\n# first pass, remove duplicates for points of same window size\ndf1 = {}\nmerge = pd.DataFrame()\nfor ws in data[2].unique():\n df1[ws] = data[ (data[3] > 0.75) & (data[2] == ws) ] # take only 75% or higher confidence\n merge = pd.concat([merge, df1[ws]])\n\ndistorsions = []\nfor k in range(2, 20):\n kmeans = KMeans(n_clusters=k)\n kmeans.fit(merge)\n distorsions.append(kmeans.inertia_)\n\nfig = plt.figure(figsize=(15, 5))\nplt.plot(range(2, 20), distorsions)\nplt.grid(True)\nplt.title('Elbow curve')\nplt.savefig(save_path +'_elbow.png', bbox_inches='tight', dpi=400)\n\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))"
},
{
"alpha_fraction": 0.6848230957984924,
"alphanum_fraction": 0.7034450769424438,
"avg_line_length": 34.81666564941406,
"blob_id": "f2b1388384b5c3dada3ade51cc073dd3729c3d9d",
"content_id": "a6935082e76169c375c93365107e3312d119ca4c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2148,
"license_type": "no_license",
"max_line_length": 197,
"num_lines": 60,
"path": "/bcb-src/nn_cnn_src/remove_duplicates_nms.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import calculateDistance, BIRCH_duplicate_removal,BIRCH2_duplicate_removal, Banderia_duplicate_removal, XMeans_duplicate_removal, draw_craters_rectangles, draw_craters_circles, evaluate\nfrom non_max_suppression import NMS\nimport Param\n\n# the raw data to process for duplicate removal\nparam = Param.Param()\nremoval_method = 'NMS'\ncsv_path = 'results/cnn/tile1_24_sw_cnn.csv'\ngt_csv_path = 'crater_data/gt/1_24_gt.csv'\nsave_path = 'results/cnn/evaluations/' + removal_method + '/tile1_24_cnn'\ntestset_name = 'tile1_24'\n\n# the image for drawing rectangles\nimg_path = os.path.join('crater_data', 'images', testset_name + '.pgm')\ngt_img = cv.imread(img_path)\n\ndata = pd.read_csv(csv_path, header=None)\ngt = pd.read_csv(gt_csv_path, header=None)\n\nthreshold = 0.75\n\nstart_time = time.time()\n\n# first pass, remove duplicates for points of same window size\ndf1 = {}\nfor ws in data[2].unique():\n if (ws >= param.dmin) and (ws <= param.dmax):\n df1[ws] = data[ (data[3] > 0.75) & (data[2] == ws) ] # take only 75% or higher confidence\n df1[ws] = NMS(df1[ws])\n\n# Start merging process\n# We will add points of greatest size first\n# then merge with the next smaller size and remove duplicates\n# Do this until the smallest window size has been included\n\nmerge = pd.DataFrame()\nfor ws in reversed(sorted(df1.keys())):\n merge = pd.concat([merge, df1[ws]])\n old_size = len(merge)\n #merge = BIRCH2_duplicate_removal(merge, threshold) # we can tweak ws for eliminations\n merge = NMS(merge) \n new_size = len(merge)\n print(\"Processed window size\", ws, \", considered\", old_size, \"points, returned\", new_size, \"points\")\n\n# save the no duplicate csv file\nmerge[[0,1,2]].to_csv(\"%s_noduplicates.csv\" % save_path, header=False, index=False)\ncraters = merge[[0,1,2]]\n\n# evaluate with gt and draw it on final image.\ndr, fr, qr, bf, f_measure, tp, fp, fn 
= evaluate(craters, gt, gt_img, 64, True, save_path, param)\n\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))"
},
{
"alpha_fraction": 0.5937526822090149,
"alphanum_fraction": 0.6139191389083862,
"avg_line_length": 54.89904022216797,
"blob_id": "636b8b35234a4a6192f3b4162562760a781af12f",
"content_id": "e52b115079c6216abfe0e0b72f4ce7fb8c88d8f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11653,
"license_type": "no_license",
"max_line_length": 259,
"num_lines": 208,
"path": "/bcb-src/nn_cnn_src/extract_samples.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Oct 19 17:13:34 2018\n\n@author: mohebbi\n\"\"\"\n\nimport os\nimport pandas as pd\nimport cv2 as cv\nimport numpy\nimport Param\nimport random\nfrom helper import sliding_window\nimport imutils\n\n# This script will extract positive slides of the images. The size of each slice is 224 , 224\n# each slice has 24 pixels over lap with each other. Therefore, stepSize = 224 - 24 = 200 \ndef extract_slices(gt_num, gt_img, savepath, windowSize=(224, 224), stepSize = 200, counter = 0):\n\n clone = gt_img.copy()\n # task: think about adding the pyramid in here.\n # or maybe generating slices at different size of gt images. From small to very big sizes and extract\n # with same size always. \n\n for (x, y, window) in sliding_window(clone, stepSize, windowSize):\n\n dst_filename = os.path.join(savepath, \"SL_\"+ gt_num + \"_\" + str(counter) + \"_x_\" + str(x) +\"_y_\" + str(y) + \".jpg\")\n cv.imwrite(dst_filename, window)\n counter +=1\n\n return counter\n\n# This script extract TP and FN samples from tiles images and write it to approperiate directories.\n# this function does not resize or normalize the extracted images. We do these parts on preprocess function in crater_preprocessing.py file. 
\n\ndef extract_positive_samples(gt_num, gt_img, gt_data, gt_tp_savepath, param):\n\n clone = gt_img.copy()\n mask = numpy.zeros((gt_img.shape[0], gt_img.shape[1]))\n counter = 0\n\n x_gt_data = gt_data[0].values.tolist()\n y_gt_data = gt_data[1].values.tolist()\n d_gt_data = gt_data[2].values.tolist() # the third column is diameter\n\n for v in range(0,len(gt_data)):\n\n x_gt = int(round(x_gt_data[v]))\n y_gt = int(round(y_gt_data[v]))\n r_gt = int(round(d_gt_data[v] / 2))\n\n x = x_gt - r_gt\n y = y_gt - r_gt\n\n if x >=0 and y >=0 and x_gt + r_gt +1 <= gt_img.shape[0] and y_gt + r_gt +1 <= gt_img.shape[1] :\n\n crop_img = clone[y:y_gt + r_gt +1, x:x_gt + r_gt +1]\n\n # save rotations of the image too\n for degree in range(30,360,30):\n rotated = imutils.rotate_bound(crop_img, degree)\n dst_filename = os.path.join(gt_tp_savepath, \"TP_\"+ gt_num + \"_\" + str(counter) + \"_\" + str(degree) + \".png\")\n cv.imwrite(dst_filename, rotated)\n counter += 1\n\n dst_filename = os.path.join(gt_tp_savepath, \"TP_\"+ gt_num + \"_\" + str(counter) + \".png\")\n cv.imwrite(dst_filename, crop_img)\n # do the same for the mask too\n mask[y:y_gt + r_gt + 1, x:x_gt + r_gt + 1] = 1\n counter += 1\n\n return counter, mask\n\ndef extract_negative_samples(gt_num,gt_img, gt_mask, num_positive_samples, gt_fn_savepath, param, circleMask = False):\n\n clone = gt_img.copy()\n nn = 0\n while nn < num_positive_samples:\n\n aux_r = random.randint(param.dmin, param.dmax)/2\n aux_x = random.randint(0,gt_img.shape[0])\n aux_y = random.randint(0,gt_img.shape[1])\n\n s_x = aux_x - aux_r\n s_y = aux_y - aux_r\n\n e_x = aux_x + aux_r + 1\n e_y = aux_y + aux_r + 1\n\n if s_x >= 0 and s_y >=0 and e_x <= gt_img.shape[0] and e_y <= gt_img.shape[1]:\n\n # calculate its intersection with gt\n mask = numpy.zeros((gt_img.shape[0], gt_img.shape[1]))\n\n mask[s_y:e_y,s_x: e_x] = 1\n # element wise matrix multiplication\n threshold = numpy.sum(numpy.multiply(mask, gt_mask)) / numpy.sum(mask)\n\n if threshold 
<= param.thresh_overlay:\n\n crop_img = clone[s_y:e_y, s_x:e_x]\n dst_filename = os.path.join(gt_fn_savepath, \"FN_\"+ gt_num + \"_\" + str(nn) + \".png\")\n if circleMask:\n crop_circle_mask = numpy.zeros(crop_img.shape, dtype='uint8')\n cv.circle(crop_circle_mask, (aux_x, aux_y), aux_r, (255, 255, 255), -1)\n masked_crop = numpy.bitwise_and(crop_img, crop_circle_mask)\n cv.imwrite(dst_filename, masked_crop)\n else:\n cv.imwrite(dst_filename, crop_img)\n\n nn += 1\n\n return nn\n \n\nif __name__ == \"__main__\":\n \n param = Param.Param()\n gt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\n\n for gt_num in gt_list:\n \n org_tp_savepath = os.path.join(\"crater_data\", \"samples\", \"org\", \"tile\" + gt_num, \"crater\")\n org_fn_savepath = os.path.join(\"crater_data\",\"samples\", \"org\", \"tile\" + gt_num, \"non-crater\")\n th_org_tp_savepath = os.path.join(\"crater_data\", \"samples\", \"th_org\", \"tile\" + gt_num, \"crater\")\n th_org_fn_savepath = os.path.join(\"crater_data\",\"samples\", \"th_org\", \"tile\" + gt_num, \"non-crater\")\n mask_tp_savepath = os.path.join(\"crater_data\", \"samples\", \"mask\", \"tile\" + gt_num, \"crater\")\n mask_fn_savepath = os.path.join(\"crater_data\",\"samples\", \"mask\", \"tile\" + gt_num, \"non-crater\")\n th_mask_tp_savepath = os.path.join(\"crater_data\", \"samples\", \"th_mask\", \"tile\" + gt_num, \"crater\")\n th_mask_fn_savepath = os.path.join(\"crater_data\",\"samples\", \"th_mask\", \"tile\" + gt_num, \"non-crater\")\n \n gt_csv_path = os.path.join(\"crater_data\",\"gt\", gt_num + \"_gt.csv\")\n gt_mask_img_path = os.path.join(\"crater_data\",\"masks\", \"tile\" + gt_num + \".pgm\") # we use the mask of original image to extract true positive samples.\n gt_th_mask_img_path = os.path.join(\"crater_data\",\"masks\", \"tile\" + gt_num + \"_th.pgm\") # we add threshold image too. 
\n gt_img_path = os.path.join(\"crater_data\",\"tiles\", \"tile\" + gt_num + \".pgm\") # we use original image to extract false negative samples.\n gt_img_path_reflected_pgm = os.path.join(\"crater_data\",\"tiles\", \"tile\" + gt_num + \"_reflected.pgm\")\n gt_img_path_reflected_png = os.path.join(\"crater_data\",\"tiles\", \"tile\" + gt_num + \"_reflected.png\")\n gt_mask_img_path_reflected_pgm = os.path.join(\"crater_data\",\"masks\", \"tile\" + gt_num + \"_reflected.pgm\")\n gt_mask_img_path_reflected_png = os.path.join(\"crater_data\",\"masks\", \"tile\" + gt_num + \"_reflected.png\")\n gt_th_img_path = os.path.join(\"crater_data\",\"th-images\", \"tile\" + gt_num + \".pgm\")\n #slices addresses. We store wit 24pixels overlap (_24overlap) and no overlap (_noverlap)\n gt_slices_path = os.path.join(\"crater_data\", \"slices\",\"org_24overlap\" , \"tile\" + gt_num)\n gt_mask_slices_path = os.path.join(\"crater_data\", \"slices\",\"mask_24overlap\",\"mask\" , \"tile\" + gt_num)\n gt_slices_path2 = os.path.join(\"crater_data\", \"slices\",\"org_noverlap\" , \"tile\" + gt_num)\n gt_mask_slices_path2 = os.path.join(\"crater_data\", \"slices\",\"mask_noverlap\" , \"tile\" + gt_num)\n \n print(\"_____extracting positive and negative samples_______\")\n \n gt_data = pd.read_csv(gt_csv_path, header=None)\n gt_img = cv.imread(gt_img_path)\n gt_th_img = cv.imread(gt_th_img_path)\n gt_mask_img = cv.imread(gt_mask_img_path)\n gt_th_mask_img = cv.imread(gt_th_mask_img_path)\n \n # extract positive and negative samples from org images and save it on org folder on samples directory.\n num_tp_samples, gt_mask = extract_positive_samples(gt_num, gt_img, gt_data, org_tp_savepath, param) # original photo \n print(str(num_tp_samples) + \" crater samples are extracted from \" + gt_num + \" tile.\")\n num_fn_samples = extract_negative_samples(gt_num,gt_img,gt_mask, num_tp_samples, org_fn_savepath, param)\n print(str(num_fn_samples) + \" non-crater samples are extracted from \" + gt_num 
+ \" tile.\")\n\n # extract positive and negative samples from threshold org images and save it on th_org folder on samples directory.\n num_tp_samples, gt_mask = extract_positive_samples(gt_num, gt_th_img , gt_data, th_org_tp_savepath, param) # threshold of the original photo \n print(str(num_tp_samples) + \" crater samples are extracted from threshold \" + gt_num + \" tile.\")\n num_fn_samples = extract_negative_samples(gt_num,gt_th_img,gt_mask, num_tp_samples, th_org_fn_savepath, param)\n print(str(num_fn_samples) + \" non-crater samples are extracted from threshold \" + gt_num + \" tile.\")\n\n # extract positive and negative samples from threshold org images and save it on th_org folder on samples directory.\n num_tp_samples, gt_mask = extract_positive_samples(gt_num, gt_mask_img , gt_data, mask_tp_savepath, param) # mask of the original photo \n print(str(num_tp_samples) + \" crater samples are extracted from mask of \" + gt_num + \" tile.\")\n num_fn_samples = extract_negative_samples(gt_num,gt_img,gt_mask, num_tp_samples, mask_fn_savepath, param, True) # the non-crater areas of the masked image are black and therefore we need to pass the original image and then apply mask for each sample. 
\n print(str(num_fn_samples) + \" non-crater samples are extracted from mask of \" + gt_num + \" tile.\")\n\n # extract positive and negative samples from threshold org images and save it on th_org folder on samples directory.\n num_tp_samples, gt_mask = extract_positive_samples(gt_num, gt_th_mask_img , gt_data, th_mask_tp_savepath, param) # original photo \n print(str(num_tp_samples) + \" crater samples are extracted from threshold mask of \" + gt_num + \" tile.\")\n num_fn_samples = extract_negative_samples(gt_num,gt_th_img,gt_mask, num_tp_samples, th_mask_fn_savepath, param, True)\n print(str(num_fn_samples) + \" non-crater samples are extracted from threshold \" + gt_num + \" tile.\")\n \n print(\"______exctracting image slices (224,224)_______\")\n print(\"______exctracting image slices without padding_______\")\n \n num_slices = extract_slices(gt_num, gt_img, gt_slices_path, windowSize=(224, 224), stepSize = 200)\n num_slices += extract_slices(gt_num, gt_mask_img, gt_mask_slices_path, windowSize=(224, 224), stepSize = 200)\n \n # we don't need to extract slices without overlap from the original image. Because we only use slices from the \n # reflected (with padding) image for testing purposes. 
\n #extract slices without overlap too stepsize = 224\n #num_slices += extract_slices(gt_num, gt_img, gt_slices_path2, windowSize=(224, 224), stepSize = 224)\n #num_slices += extract_slices(gt_num, gt_mask_img, gt_mask_slices_path2, windowSize=(224, 224), stepSize = 224)\n \n print(\"______save tile images with padding, new size: 1792 x 1792_______\")\n #gt_img_resized = cv.resize(gt_img, (1792, 1792))\n #gt_mask_img_resized = cv.resize(gt_mask_img, (1792, 1792))\n gt_img_reflected = cv.copyMakeBorder(gt_img,46,46,46,46,cv.BORDER_REFLECT)\n gt_mask_img_reflected = cv.copyMakeBorder(gt_mask_img,46,46,46,46,cv.BORDER_REFLECT)\n cv.imwrite(gt_img_path_reflected_pgm,gt_img_reflected)\n cv.imwrite(gt_img_path_reflected_png,gt_img_reflected)\n cv.imwrite(gt_mask_img_path_reflected_pgm,gt_mask_img_reflected)\n cv.imwrite(gt_mask_img_path_reflected_png,gt_mask_img_reflected)\n \n print(\"______exctracting image slices with padding with no overlap between slices_______\")\n #extract slices without overlap too stepsize = 224\n num_slices += extract_slices(gt_num, gt_img_reflected, gt_slices_path2, windowSize=(224, 224), stepSize = 224)\n num_slices += extract_slices(gt_num, gt_mask_img_reflected, gt_mask_slices_path2, windowSize=(224, 224), stepSize = 224)\n \n print(str(num_slices) + \" slices are extracted from \" + gt_num + \" tile.\")\n \n \n "
},
{
"alpha_fraction": 0.6227291822433472,
"alphanum_fraction": 0.6472327709197998,
"avg_line_length": 25.299999237060547,
"blob_id": "ea2b0876183efb320fbcc398412a34f1ca09c467",
"content_id": "b38f01a9885f3c05d996da06dcf1df7c9bdf61f6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2367,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 90,
"path": "/bcb-src/nn_cnn_src/frame.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport csv\nimport os\nfrom xmeans import XMeans\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom sklearn.cluster import KMeans\n\nX_axis=[]\nY_axis=[]\nW_size=[]\nprobs=[]\ncwd = os.getcwd()\nfile=cwd +'/crater_25_cnn.csv'\n#file=cwd +'/crater_24_sho_001.csv'\nwith open(file) as csvfile:\n readCSV = csv.reader(csvfile, delimiter=',')\n for row in readCSV:\n if row:\n X_axis.append(float(row[0]))\n Y_axis.append(float(row[1]))\n W_size.append(float(row[2]))\n probs.append(float(row[3]))\n\n\nxmax = np.max(X_axis)\nymax = np.max(Y_axis)\nwmax = np.max(W_size)\nX_axis=np.asarray(X_axis, dtype=np.float64)/xmax\nY_axis=np.asarray(Y_axis, dtype=np.float64)/ymax\nW_size=np.asarray(W_size, dtype=np.float64)/wmax\na=np.c_[X_axis, Y_axis, W_size]\n\ndatafit = np.c_[X_axis, Y_axis, W_size]\nkmeans = KMeans(n_clusters=214, max_iter=1000, tol=0.0001, algorithm='auto').fit(datafit)\nx_means = XMeans(random_state=1).fit(np.c_[X_axis, Y_axis, W_size])\n\n# print(x_means.labels_)\n# print(x_means.cluster_centers_)\n# print(x_means.cluster_log_likelihoods_)\n# print(x_means.cluster_sizes_)\n\nremoved_list_cnn = []\nfor row in x_means.cluster_centers_:\n xc = row[0] * xmax\n yc = row[1] * ymax\n ws = row[2] * wmax\n removed_list_cnn.append([xc, yc, ws])\nremoved_file = open(\"crater_25_cnn_removed.csv\",\"w\", newline='')\nwith removed_file:\n writer = csv.writer(removed_file, delimiter=',')\n writer.writerows(removed_list_cnn)\nremoved_file.close()\n\nkmeans_list_cnn = []\nfor row in kmeans.cluster_centers_:\n xc = row[0] * xmax\n yc = row[1] * ymax\n ws = row[2] * wmax\n kmeans_list_cnn.append([xc, yc, ws])\nkmeans_file = open(\"crater_25_cnn_kmeans.csv\",\"w\", newline='')\nwith kmeans_file:\n writer = csv.writer(kmeans_file, delimiter=',')\n writer.writerows(kmeans_list_cnn)\nremoved_file.close()\n\n\n#\n# plt.scatter(X_axis, Y_axis, W_size, c=x_means.labels_, s=30)\n# 
plt.scatter(x_means.cluster_centers_[:, 0], x_means.cluster_centers_[:, 1], c=\"r\", marker=\"+\", s=100)\n# plt.xlim(0, 3)\n# plt.ylim(0, 3)\n# plt.title(\"\")\n\n# plt.show()\n\n\n\n# fig = plt.figure()\n# ax = fig.add_subplot(111, projection='3d')\n\n\n# ax.scatter(X_axis, Y_axis, W_size, c='r', marker='.')\n# ax.set_xlabel('X Label')\n# ax.set_ylabel('Y Label')\n# ax.set_zlabel('Z Label')\n\n# plt.show()\n"
},
{
"alpha_fraction": 0.5570818185806274,
"alphanum_fraction": 0.5925776362419128,
"avg_line_length": 43.931034088134766,
"blob_id": "e17bf5f15750066129a37b593e9d1e2804f8d167",
"content_id": "08b9258dff71e2a2d402860107c86efae07f06d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 17326,
"license_type": "no_license",
"max_line_length": 159,
"num_lines": 377,
"path": "/bcb-src/FCN-Segmentation/Train_Seg_FCN.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2, os\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\nimport random\r\nimport keras, time, warnings\r\nfrom keras.models import *\r\nfrom keras.layers import *\r\nfrom sklearn.utils import shuffle\r\nimport scipy\r\nfrom keras import optimizers\r\nimport imutils\r\nimport glob\r\nfrom keras.callbacks import ModelCheckpoint\r\n\r\ncwd = os.getcwd()\r\n\r\n\r\n# task: think about adding rotattions of images into training set.\r\n# input: tileimg should be something like tile1_24. \r\ndef make_train_set(tileimg):\r\n \r\n src_org_list = [os.path.join('crater_data', 'slices','org_noverlap', tileimg), os.path.join('crater_data', 'slices','org_24overlap', tileimg)]\r\n src_mask_list = [os.path.join('crater_data', 'slices','mask_noverlap', tileimg), os.path.join('crater_data', 'slices','mask_24overlap', tileimg)]\r\n \r\n dst = os.path.join('crater_data', 'slices', 'train')\r\n counter = 0\r\n \r\n \r\n # create new directories if necessary\r\n for imgtype in ['org', 'mask']:\r\n tgdir = os.path.join(dst, imgtype)\r\n if not os.path.isdir(tgdir):\r\n os.makedirs(tgdir)\r\n \r\n for i in range(0,2): \r\n \r\n src_org = src_org_list[i]\r\n src_mask = src_mask_list[i]\r\n # add original samples to train folder.\r\n for src_filename in glob.glob(os.path.join(src_org, '*.jpg')):\r\n \r\n # read the original image and get size info\r\n src_img = cv2.imread(src_filename)\r\n pathinfo = src_filename.split(os.path.sep)\r\n img_type = pathinfo[-2] # org or mask\r\n filename = pathinfo[-1] # the actual name of the jpg\r\n \r\n #print(\"img_type: \" + img_type);\r\n \r\n rotated90 = imutils.rotate_bound(src_img, 90)\r\n rotated180 = imutils.rotate_bound(src_img, 180)\r\n rotated270 = imutils.rotate_bound(src_img, 270)\r\n \r\n dst_filename = os.path.join(dst, 'org', '0_'+filename)\r\n dst_filename90 = os.path.join(dst, 'org', '90_'+filename)\r\n dst_filename180 = os.path.join(dst, 'org', '180_'+filename)\r\n dst_filename270 = 
os.path.join(dst, 'org', '270_'+filename)\r\n \r\n # normalizing the image?? No. The model requires a 3 channel picture.\r\n cv2.imwrite(dst_filename, src_img)\r\n cv2.imwrite(dst_filename90, rotated90)\r\n cv2.imwrite(dst_filename180, rotated180)\r\n cv2.imwrite(dst_filename270, rotated270)\r\n \r\n counter += 4\r\n \r\n \r\n # add mask samples to train folder.\r\n for src_filename in glob.glob(os.path.join(src_mask, '*.jpg')):\r\n # read the original image and get size info\r\n src_img = cv2.imread(src_filename)\r\n pathinfo = src_filename.split(os.path.sep)\r\n img_type = pathinfo[-2] # org or mask\r\n filename = pathinfo[-1] # the actual name of the jpg\r\n \r\n rotated90 = imutils.rotate_bound(src_img, 90)\r\n rotated180 = imutils.rotate_bound(src_img, 180)\r\n rotated270 = imutils.rotate_bound(src_img, 270)\r\n \r\n dst_filename = os.path.join(dst, 'mask', '0_'+filename)\r\n dst_filename90 = os.path.join(dst, 'mask', '90_'+filename)\r\n dst_filename180 = os.path.join(dst, 'mask', '180_'+filename)\r\n dst_filename270 = os.path.join(dst, 'mask', '270_'+filename)\r\n \r\n # normalizing the image??\r\n cv2.imwrite(dst_filename, src_img)\r\n cv2.imwrite(dst_filename90, rotated90)\r\n cv2.imwrite(dst_filename180, rotated180)\r\n cv2.imwrite(dst_filename270, rotated270)\r\n \r\n counter += 4\r\n \r\n for imgtype in ['org', 'mask']:\r\n tgdir = os.path.join(dst,imgtype,'.DS_Store')\r\n if os.path.exists(tgdir):\r\n os.remove(tgdir)\r\n \r\n print(\"Adding \"+str(counter)+\" org and mask samples of \"+tileimg+\" to trainining set.\")\r\n\r\ndef extract_data_adds(dir_data):\r\n\t# The function extracts the image and mask addresses\r\n folders = os.listdir(dir_data)\t# directory where data was stored.\r\n\r\n img_folders = [f for f in folders if f[:3] == 'org'] # image folders start by \"org' prefix\r\n seg_folders = [f for f in folders if f[:4] == 'mask'] # mask folders start by 'mask' prefix\r\n if(len(img_folders) == 0):\r\n\t\tinput('Warning: Train directory is 
empty') # if the directory is empty print the warning\r\n\r\n img_adds = [] # a list of address of images\r\n seg_adds = [] # a list of address of masks\r\n for i in range(len(img_folders)): # for any folder read the subfolder\r\n\r\n\r\n img_folder = os.path.join(dir_data, img_folders[i]) # list of images subfolders\r\n seg_folder = dir_data + seg_folders[i] #list of mask subfolders\r\n print('img_folder: ' + str(img_folder))\r\n img_files = os.listdir(img_folder) # list of images in subfolder\r\n seg_files = os.listdir(seg_folder) # list of masks in subfolder\r\n\r\n for j in range(len(img_files)): #append images and masks to the list of files\r\n img_adds.append(os.path.join(img_folder,img_files[j]))\r\n seg_adds.append(os.path.join(seg_folder,seg_files[j]))\r\n\treturn img_adds, seg_adds\r\n\r\n##########################################################\r\n\r\ndef getImageArr( path , width , height ):\r\n # the function reads the image from the directory and return the resized version of the image.\r\n img = cv2.imread(path, 1)\r\n img = np.float32(cv2.resize(img, ( width , height ))) / 127.5 - 1\r\n return img\r\n\r\ndef getSegmentationArr( path , nClasses , width , height ):\r\n # the function reads a mask from the directory and generate the seg_labels with the size (widthxheightsx2) which 2 is the number of classes\r\n\r\n seg_labels = np.zeros(( height , width , nClasses ))\r\n img = cv2.imread(path, 1)\r\n img = cv2.resize(img, ( width , height ))\r\n img = img[:, : , 0]\r\n\r\n seg_labels[: , : , 0] = (img == 0).astype(int)\r\n seg_labels[: , : , 1] = (np.ones(img.shape) - seg_labels[: , : , 0]).astype(int)\r\n\r\n return seg_labels\r\n\r\n\r\ndef extract_data(X_add, Y_add, input_width, input_height):\r\n # the function receives two list of image and mask addresses and return all the images and masks in two 3-D array; X, Y\r\n\tX = [] # list of images\r\n\tY = [] # list of masks\r\n\r\n\tfor img_add, seg_add in zip(X_add, Y_add):\r\n\t\tX.append( 
getImageArr(img_add , input_width , input_height))\r\n\t\tY.append( getSegmentationArr(seg_add, 2 , input_width , input_height))\r\n\tX, Y = np.array(X) , np.array(Y)\r\n\treturn X, Y\r\n\r\n##############################################################\r\n\r\ndef give_color_to_seg_img(seg,n_classes):\r\n # generate a color image based on the segmented image.\r\n \r\n if len(seg.shape)==3:\r\n seg = seg[:,:,0]\r\n seg_img = np.zeros( (seg.shape[0],seg.shape[1],3) ).astype('float')\r\n colors = sns.color_palette(\"hls\", n_classes)\r\n \r\n for c in range(n_classes):\r\n segc = (seg == c)\r\n seg_img[:,:,0] += (segc*( colors[c][0] ))\r\n seg_img[:,:,1] += (segc*( colors[c][1] ))\r\n seg_img[:,:,2] += (segc*( colors[c][2] ))\r\n\r\n return(seg_img)\r\n\r\ndef FCN8_custom( nClasses , input_height, input_width):\r\n ## input_height and width must be devisible by 32 because maxpooling with filter size = (2,2) is operated 5 times,\r\n ## which makes the input_height and width 2^5 = 32 times smaller\r\n assert input_height%32 == 0\r\n assert input_width%32 == 0\r\n IMAGE_ORDERING = \"channels_last\" \r\n\r\n img_input = Input(shape=(input_height,input_width, 3)) ## Assume 224,224,3\r\n \r\n ## Block 1\r\n x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', data_format=IMAGE_ORDERING )(img_input)\r\n tmp = Conv2D(64, (3, 3), activation='relu', padding='same', name='blocktmp_conv1', data_format=IMAGE_ORDERING )(img_input)\r\n x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', data_format=IMAGE_ORDERING )(x)\r\n x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING )(x)\r\n f1 = x\r\n \r\n # Block 2\r\n x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', data_format=IMAGE_ORDERING )(x)\r\n x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', data_format=IMAGE_ORDERING )(x)\r\n x = MaxPooling2D((2, 2), strides=(2, 2), 
name='block2_pool', data_format=IMAGE_ORDERING )(x)\r\n f2 = x\r\n\r\n # Block 3\r\n x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', data_format=IMAGE_ORDERING )(x)\r\n x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', data_format=IMAGE_ORDERING )(x)\r\n x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', data_format=IMAGE_ORDERING )(x)\r\n x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING )(x)\r\n pool3 = x\r\n\r\n # Block 4\r\n x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', data_format=IMAGE_ORDERING )(x)\r\n x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', data_format=IMAGE_ORDERING )(x)\r\n x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', data_format=IMAGE_ORDERING )(x)\r\n pool4 = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING )(x)## (None, 14, 14, 512) \r\n\r\n # Block 5\r\n x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', data_format=IMAGE_ORDERING )(pool4)\r\n x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', data_format=IMAGE_ORDERING )(x)\r\n x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', data_format=IMAGE_ORDERING )(x)\r\n pool5 = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING )(x)## (None, 7, 7, 512)\r\n\r\n\r\n ######################################\r\n n = 4096\r\n o = ( Conv2D( n , ( 7 , 7 ) , activation='relu' , padding='same', name=\"conv6\", data_format=IMAGE_ORDERING))(pool5)\r\n conv7 = ( Conv2D( n , ( 1 , 1 ) , activation='relu' , padding='same', name=\"conv7\", data_format=IMAGE_ORDERING))(o)\r\n conv7_4 = Conv2DTranspose( nClasses , kernel_size=(16,16) , strides=(16,16) , use_bias=False, data_format=IMAGE_ORDERING )(conv7)\r\n\r\n pool411 = ( Conv2D( nClasses , ( 1 , 1 ) , activation='relu' , padding='same', name=\"pool4_11\", data_format=IMAGE_ORDERING))(pool4)\r\n pool411_2 = (Conv2DTranspose( nClasses , kernel_size=(8,8) , strides=(8,8) , use_bias=False, data_format=IMAGE_ORDERING ))(pool411)\r\n\r\n pool311 = ( Conv2D( nClasses , ( 1 , 1 ) , activation='relu' , padding='same', name=\"pool3_11\", data_format=IMAGE_ORDERING))(pool3)\r\n pool311_2 = (Conv2DTranspose( nClasses , kernel_size=(4,4) , strides=(4,4) , use_bias=False, data_format=IMAGE_ORDERING ))(pool311)\r\n\r\n pool211 = ( Conv2D( nClasses , ( 1 , 1 ) , activation='relu' , padding='same', name=\"pool2_11\", data_format=IMAGE_ORDERING))(f2)\r\n pool211_2 = (Conv2DTranspose( nClasses , kernel_size=(2,2) , strides=(2,2) , use_bias=False, data_format=IMAGE_ORDERING ))(pool211)\r\n\r\n pool111 = ( Conv2D( nClasses , ( 1 , 1 ) , activation='relu' , padding='same', name=\"pool1_11\", data_format=IMAGE_ORDERING))(f1)\r\n\r\n o = Add(name=\"add\")([pool411_2, pool311_2, conv7_4, pool211_2, pool111])\r\n o = Conv2DTranspose( nClasses , kernel_size=(2,2) , strides=(2,2) , use_bias=False, data_format=IMAGE_ORDERING )(o)\r\n o = (Activation('softmax'))(o)\r\n \r\n model = Model(img_input, o)\r\n\r\n #######################################\r\n\r\n return model\r\n\r\n\r\ndef DSC(Yi,y_predi):\r\n # the function computes and prints out the Dice similarity. \r\n # Yi is the ground truth and y_predi is the predicted lable for each pixel\r\n\r\n TP = np.sum( (Yi == 1)&(y_predi==1) )\r\n FP = np.sum( (Yi != 1)&(y_predi==1) )\r\n FN = np.sum( (Yi == 1)&(y_predi != 1)) \r\n DSC = (2*TP)/(2*TP+FP+FN)\r\n print(\"DSC: {:4.3f}\".format(DSC))\r\n\r\ndef post_processing(pred):\r\n # the function process the segmented images using opening and closing morphological operations. \r\n # to remove holes and small objects regions assuming these area do not belong to an object.\r\n\tpost_pred = np.zeros(shape = pred.shape)\r\n\tpost_pred = np.copy(pred)\r\n\tfor i in range(pred.shape[0]):\r\n\t\tpost_pred[i,:,:] = scipy.ndimage.binary_opening((pred[i,:,:]), structure=np.array([[0,1,0],[1,1,1],[0,1,0,]])).astype(np.int)\r\n\t\ttmp = scipy.ndimage.binary_closing((pred[i,:,:]), structure=np.array([[0,1,0],[1,1,1],[0,1,0,]])).astype(np.int)\r\n\t\ttmp = scipy.ndimage.binary_opening(tmp, structure=np.array([[0,1,0],[1,1,1],[0,1,0,]])).astype(np.int)\r\n\t\tpost_pred[i,1:-1,1:-1] = tmp[1:-1,1:-1]\r\n\r\n\treturn post_pred\r\n\r\nif __name__ == \"__main__\":\r\n\r\n dir_data = \"crater_data/slices/train/\" # Directory where the data is saved\r\n n_classes= 2 # number of classes in the image, 2 as foreground and background\r\n input_height , input_width = 224 , 224\r\n \r\n # make the train set. \r\n make_train_set('tile1_24')\r\n make_train_set('tile1_25')\r\n make_train_set('tile2_24')\r\n make_train_set('tile2_25')\r\n \r\n X_add, Y_add = extract_data_adds(dir_data) # Extract two list of addresses: images and masks addresses.\r\n \r\n X, Y = extract_data(X_add,Y_add, input_width, input_height) # two arrays of images and labels are extracted.\r\n \r\n output_height = input_height\r\n output_width = input_width\r\n \r\n model = FCN8_custom(n_classes, \r\n input_height = input_height, \r\n input_width = input_width)\r\n \r\n print(model.summary())\r\n \r\n train_rate = 0.8 # what percentage of the files in the train folder serves as the training sample. the rest will be used as validation samples.\r\n index_train = np.random.choice(X.shape[0],int(X.shape[0]*train_rate),replace=False) # in case we want to select the train samples randomly from the list.\r\n \r\n index_test = list(set(range(X.shape[0])) - set(index_train))\r\n \r\n X_train, y_train = X[index_train],Y[index_train]\r\n X_test, y_test = X[index_test],Y[index_test]\r\n \r\n sgd = optimizers.SGD(lr=1E-1, decay=5**(-4), momentum=0.9, nesterov=True)\r\n # sgd = optimizers.adam(lr=1E-4, decay=5**(-4))\r\n \r\n model.compile(loss='categorical_crossentropy',\r\n optimizer=sgd,\r\n metrics=['accuracy'])\r\n \r\n # model checkpoints\r\n checkpoint = ModelCheckpoint('models/seg-fcn-model.hdf5', monitor='val_acc', verbose=1, save_best_only=True, mode='max')\r\n callbacks_list = [checkpoint]\r\n \r\n hist1 = model.fit(X_train,y_train, validation_split=0.33, epochs=150, batch_size=32, callbacks=callbacks_list, verbose=2)\r\n \r\n #hist1 = model.fit(X_train,y_train,\r\n # validation_data=(X_test,y_test),\r\n # batch_size=32,epochs=100,verbose=2) #change epochs to 1000\r\n \r\n # serialize model to JSON\r\n model_json = model.to_json()\r\n with open(\"models/seg-fcn-model.json\", \"w\") as json_file: \r\n json_file.write(model_json) \r\n\r\n # serialize weights to HDF5\r\n model.save_weights(\"models/seg-fcn-model.h5\")\r\n print(\"Saved model to disk\")\r\n \r\n model.save('models/seg-fcn-model')\r\n print('model saved.')\r\n \r\n for key in ['loss', 'val_loss']:\r\n plt.plot(hist1.history[key],label=key)\r\n plt.legend()\r\n plt.savefig('results/loss_history.png', bbox_inches='tight', dpi=400)\r\n #plt.show()\r\n plt.figure()\r\n plt.close('all')\r\n #input('history')\r\n \r\n y_pred = model.predict(X_test) # predicted mask for all samples in test set. shape = sample_no x height x width x 2\r\n y_predi = np.argmax(y_pred, axis=3) # predicted class number of every pixel in the image, shape = sample_no x height x width\r\n y_testi = np.argmax(y_test, axis=3) # class number of every pixel in the image, shape = sample_no x height x width\r\n \r\n y_predi_post = post_processing(y_predi)\r\n DSC(y_testi,y_predi_post)\r\n \r\n # the next for loop shows the testing image, ground truth and segmented area.\r\n for i in range(X_test.shape[0]):\r\n img_is = (X_test[i] + 1)*(255.0/2)\r\n seg = y_predi[i]\r\n segtest = y_testi[i]\r\n \r\n fig = plt.figure(figsize=(10,30)) \r\n ax = fig.add_subplot(2,2,1)\r\n ax.imshow(img_is/255.0)\r\n ax.set_title(\"original\")\r\n \r\n ax = fig.add_subplot(2,2,2)\r\n ax.imshow(give_color_to_seg_img(seg,n_classes))\r\n ax.set_title(\"predicted class\")\r\n \r\n ax = fig.add_subplot(2,2,3)\r\n ax.imshow(give_color_to_seg_img(segtest,n_classes))\r\n ax.set_title(\"true class\")\r\n \r\n ax = fig.add_subplot(2,2,4)\r\n ax.imshow(give_color_to_seg_img(y_predi_post[i],n_classes))\r\n ax.set_title(\"predicted after post processing\")\r\n plt.savefig('results/training_sample_output_' + str(i) + '.png', bbox_inches='tight', dpi=400)\r\n #plt.close('all')\r\n plt.show()\r\n \r\n #input('visualize performance')\r\n \r\n \r\n print(\"end of main\")\r\n \r\n "
},
{
"alpha_fraction": 0.6610366702079773,
"alphanum_fraction": 0.6808386445045471,
"avg_line_length": 30.796297073364258,
"blob_id": "7eb54fe573fa654700fa4780d1f00eb4ca4ba1fd",
"content_id": "049587378f4fc2bf36ed5de76e842dc53a7247e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1717,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 54,
"path": "/bcb-src/nn_cnn_src/training_nn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import time\nfrom datetime import timedelta\nfrom crater_data import Data\nfrom crater_loader import load_crater_data_wrapper, load_crater_data\nfrom crater_nn import Network\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pickle\n\nimages, labels, hot_one = load_crater_data()\ndata = Data(images, hot_one, random_state=42, build_images_col=True)\ntr_d = list(zip(data.train.images_col, data.train.cls))\nte_d = list(zip(data.test.images_col, data.test.cls))\nva_d = list(zip(data.validation.images_col, data.validation.cls))\n\niteration = 0\nexperiment_data = []\n\ninput_size = 50*50\n\nfor i in range(1):\n iteration += 1\n\n start = time.time()\n \n # define the network shape to be used and the activation threshold\n model = Network([input_size, 8, 1], False)\n model.threshold = 0.3\n\n # the schedule is how the learning rate will be\n # changed during the training\n epochs = 100\n schedule = [(0.1)*(0.5)**np.floor(float(i)/(30)) for i in range(epochs)]\n #schedule = np.linspace(0.5, 0.01, epochs)\n for eta in schedule:\n # the total epochs is given by the schedule loop\n # we chose minibatch size to be 3\n model.SGD(tr_d, 1, 3, eta, te_d)\n\n end = time.time()\n\n # After training is complete, store this model training history\n # to the experiment data\n experiment_data.append(np.array(model.history))\n \n # store current results data to disk\n np.save(\"experiment_data\", experiment_data)\n \n # save current model to disk\n with open('results/models/crater_nn_model_%05d_%02d.pkl' % (input_size, i), 'wb') as output:\n pickle.dump(model, output)\n\n elapsed_time = end - start\n print (iteration, timedelta(seconds=elapsed_time))\n"
},
{
"alpha_fraction": 0.5439465045928955,
"alphanum_fraction": 0.5635058283805847,
"avg_line_length": 35.03571319580078,
"blob_id": "dd8dbe9c8078e209b94887428196be08250cca35",
"content_id": "6013ed691a242994661ba45f8d8b6466f55c6f11",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4039,
"license_type": "no_license",
"max_line_length": 162,
"num_lines": 112,
"path": "/bcb-src/nn_cnn_src/extract_img_masks.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Oct 19 17:13:34 2018\n\n@author: mohebbi\n\"\"\"\nimport sys\nimport os\nimport pandas as pd\nimport cv2 as cv\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\n#from helper import save_gt\n\n# This script make the image mask of gt images. We extract training samples from mask images on the next step (extract_samples.py)\n\ndef save_gt(img,gt_data, save_path):\n nseg = 64 \n implot = plt.imshow(img.copy())\n x_gt_data = gt_data[0].values.tolist()\n y_gt_data = gt_data[1].values.tolist()\n d_gt_data = gt_data[2].values.tolist()\n\t\n for i in range(0, len(gt_data)):\n x = x_gt_data[i]\n y = y_gt_data[i]\n r = d_gt_data[i] / 2\n \n theta = np.linspace(0.0, (2 * math.pi), (nseg + 1))\n pline_x = np.add(x, np.dot(r, np.cos(theta)))\n pline_y = np.add(y, np.dot(r, np.sin(theta)))\n plt.plot(pline_x, pline_y, 'b-')\n \n \n plt.savefig(save_path +'_gt.png', bbox_inches='tight', dpi=400)\n plt.show()\n\ndef create_img_mask(gt_img, gt_data):\n \n clone = gt_img.copy()\n mask = np.zeros(gt_img.shape, dtype='uint8')\n nseg = 64 \n implot = plt.imshow(clone)\n \n x_gt_data = gt_data[0].values.tolist()\n y_gt_data = gt_data[1].values.tolist()\n d_gt_data = gt_data[2].values.tolist() # the third column is diameter\n \n for v in range(0,len(gt_data)):\n \n x_gt = int(round(x_gt_data[v]))\n y_gt = int(round(y_gt_data[v]))\n r_gt = int(round(d_gt_data[v] / 2))\n \n # create image mask\n cv.circle(mask, (x_gt, y_gt), r_gt, (255, 255, 255), -1)\n # use other method do draw cirles. When you zoon in you can see that it is not a complete circle and some non-crater areas are considered as crater area. 
\n #theta = np.linspace(0.0, (2 * math.pi), (nseg + 1))\n #pline_x = np.add(x_gt, np.dot(r_gt, np.cos(theta)))\n #pline_y = np.add(y_gt, np.dot(r_gt, np.sin(theta)))\n #plt.plot(pline_x, pline_y, 'b-')\n \t\n # apply the mask\n maskedImage = np.bitwise_and(clone, mask)\t\n\t\n return maskedImage \n\n\nif __name__ == \"__main__\":\n \n img_dim = (50, 50)\n gt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\n\n for gt_num in gt_list:\n \n gt_tp_savepath = os.path.join(\"crater_data\", \"images\", \"tile\" + gt_num, \"crater\")\n gt_fn_savepath = os.path.join(\"crater_data\",\"images\", \"tile\" + gt_num, \"non-crater\")\n gt_csv_path = os.path.join(\"crater_data\",\"gt\", gt_num + \"_gt.csv\")\n gt_img_path = os.path.join(\"crater_data\",\"tiles\", \"tile\" + gt_num + \".pgm\")\n gt_th_img_path = os.path.join(\"crater_data\",\"th-images\", \"tile\" + gt_num + \".pgm\")\n gt_mask_img_path = os.path.join(\"crater_data\",\"masks\", \"tile\" + gt_num + \".pgm\")\n gt_th_mask_img_path = os.path.join(\"crater_data\",\"masks\", \"tile\" + gt_num + \"_th.pgm\")\n \n gt_data = pd.read_csv(gt_csv_path, header=None)\n gt_img = cv.imread(gt_img_path)\n \n # apply thresholding\n ret,th_img = cv.threshold(gt_img,127,255,cv.THRESH_BINARY)\n \n # save the thresholed image.\n cv.imwrite(gt_th_img_path, th_img)\n print(str(\" The threshold image of \" + gt_num + \" tile is saved on th-images folder.\"))\n \n\t\t# save gt\n save_gt(gt_img, gt_data, \"crater_data/tiles/tile\" + gt_num)\n print(str(\" The gt representation image of \" + gt_num + \" tile is saved on images folder.\"))\n\t\n gt_mask = create_img_mask(gt_img, gt_data)\n \n # save the mask image into masks folder\n cv.imwrite(gt_mask_img_path, gt_mask)\n \n print(str(\" The mask of \" + gt_num + \" tile is saved on masks folder.\"))\n \n gt_th_mask = create_img_mask(th_img, gt_data)\n \n # save the mask image into masks folder\n cv.imwrite(gt_th_mask_img_path, gt_th_mask)\n \n print(str(\" The therashold 
mask of \" + gt_num + \" tile is saved on masks folder.\"))\n\t\t\n"
},
{
"alpha_fraction": 0.4582432806491852,
"alphanum_fraction": 0.48086944222450256,
"avg_line_length": 38.117347717285156,
"blob_id": "66638cd109e6f92f59e0fc2e49adcc9cac82ba0e",
"content_id": "212cbab571c3073929aac784ac9842ba43272f70",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7867,
"license_type": "no_license",
"max_line_length": 192,
"num_lines": 196,
"path": "/bcb-src/gen_results/gen_results.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\r\n# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Fri Apr 26 12:35:55 2019\r\n\r\n@author: mohebbi\r\n\"\"\"\r\nimport os\r\nimport pandas as pd\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\nimport random\r\nimport Param\r\nimport csv\r\nfrom helper import isamatch\r\n\r\n# problem ? it selects more samples with small radius and less from bigger one. \r\n# the samples with big r is rare on gt dataset.\r\n\r\n# the third column of gt contains the diameter.\r\ndef gen_tp_random_data(prob, output_len, gt_data, init_offset = 1.0):\r\n \r\n results = []\r\n gt_visit = np.zeros(len(gt_data), dtype=int)\r\n\t\r\n while len(results) < output_len :\r\n \r\n #for idx in range(len(gt_data)):\r\n idx = random.randint(0,len(gt_data) -1)\r\n #rlist = gt_data.loc[:,2]\r\n #reverse_p = rlist / max(rlist)\r\n #reverse_p = reverse_p / sum(reverse_p)\r\n #idx = np.random.choice(np.arange(0, len(gt_data)), p= reverse_p)\r\n \r\n x_gt = gt_data.loc[idx,0]\r\n y_gt = gt_data.loc[idx,1]\r\n r_gt = gt_data.loc[idx,2] / 2 # get radius.\r\n \r\n if r_gt >= 50:\r\n offset = init_offset * 16\r\n elif r_gt >= 25:\r\n offset = init_offset * 8\r\n elif r_gt >= 12: \r\n offset = init_offset * 3\r\n else:\r\n offset = init_offset * 1.5\r\n \r\n while gt_visit[idx] == 0 :\r\n \r\n x_off = round(random.uniform(-offset, offset),2)\r\n y_off = round(random.uniform(-offset, offset),2)\r\n r_off = round(random.uniform(-offset, offset),2)\r\n p = round(random.uniform(prob - 0.18, prob + 0.001),2)\r\n \r\n x_dt = x_gt + x_off\r\n y_dt = y_gt + y_off\r\n r_dt = r_gt + r_off\r\n \r\n if isamatch(x_gt, y_gt, r_gt, x_dt, y_dt, r_dt, param) :\r\n gt_visit[idx] = 1\r\n results.append([x_dt , y_dt , r_dt, p])\r\n #print(\"match found. 
gt idx: \" + str(idx) + \" , results idx: \", str(len(results) -1) + \" , r_off: \" + str(r_off) + \" ,x_off: \" + str(x_off) + \" ,y_off: \" + str(y_off) )\r\n break\r\n \r\n return results\r\n \r\ndef gen_fp_random_data(prob, output_len, gt_data, init_offset = 1.0):\r\n \r\n results = []\r\n gt_visit = np.zeros(len(gt_data), dtype=int)\r\n\t\r\n while len(results) < output_len :\r\n \r\n #for idx in range(len(gt_data)):\r\n idx = random.randint(0,len(gt_data) -1)\r\n #rlist = gt_data.loc[:,2]\r\n #reverse_p = rlist / max(rlist)\r\n #reverse_p = reverse_p / sum(reverse_p)\r\n #idx = np.random.choice(np.arange(0, len(gt_data)), p= reverse_p)\r\n \r\n x_gt = gt_data.loc[idx,0]\r\n y_gt = gt_data.loc[idx,1]\r\n r_gt = gt_data.loc[idx,2] / 2 # get radius.\r\n \r\n if r_gt >= 50:\r\n offset = init_offset * 50\r\n elif r_gt >= 25:\r\n offset = init_offset * 30\r\n elif r_gt >= 12: \r\n offset = init_offset * 20\r\n else:\r\n offset = init_offset * 10\r\n \r\n while gt_visit[idx] == 0 :\r\n \r\n x_off = round(random.uniform(-offset, offset),2)\r\n y_off = round(random.uniform(-offset, offset),2)\r\n r_off = round(random.uniform(-offset, offset),2)\r\n p = round(random.uniform(prob - 0.35, prob - 0.14),2)\r\n \r\n x_dt = x_gt + x_off\r\n y_dt = y_gt + y_off\r\n r_dt = r_gt + r_off\r\n \r\n if not isamatch(x_gt, y_gt, r_gt, x_dt, y_dt, r_dt, param) :\r\n gt_visit[idx] = 1\r\n results.append([x_dt , y_dt , r_dt, p])\r\n #print(\"not match found. gt idx: \" + str(idx) + \" , results idx: \", str(len(results) -1) + \" , r_off: \" + str(r_off) + \" ,x_off: \" + str(x_off) + \" ,y_off: \" + str(y_off) )\r\n break\r\n \r\n return results\r\n\r\n# this function generate a point from another list of predictions and make sure it is fp. \r\n# I decided to use the output of previous detections for generating fp data. 
\r\ndef gen_fp_fromfile_data(prob, output_len, gt_data, dt_data, param):\r\n \r\n results = []\r\n \r\n while len(results) < output_len :\r\n \r\n # generate random numbers for detections.\r\n idx = random.randint(0,len(dt_data) - 1)\r\n x_dt = dt_data.loc[idx,0]\r\n y_dt = dt_data.loc[idx,1]\r\n r_dt = dt_data.loc[idx,2] # the third column of detections is radius !!!!!!\r\n p_dt = round(random.uniform(prob - 0.35, prob - 0.14),2)\r\n \r\n has_conflict = False\r\n for i in range(len(gt_data)):\r\n x_gt = gt_data.loc[i,0]\r\n y_gt = gt_data.loc[i,1]\r\n r_gt = gt_data.loc[i,2] / 2 # get radius of gt\r\n \r\n if isamatch(x_gt, y_gt, r_gt, x_dt, y_dt, r_dt, param) :\r\n has_conflict = True\r\n break\r\n if has_conflict == False: # no conflict with gt\r\n results.append([x_dt, y_dt, r_dt, p_dt])\r\n \r\n return results\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n \r\n param = Param.Param()\r\n gt_list = [\"1_24\", \"1_25\", \"2_24\", \"2_25\", \"3_24\", \"3_25\"]\r\n #gt_list = [\"1_24\"]\r\n \r\n method_list = [\"birch\", \"exp\"]\r\n #method_list = [\"birch\"]\r\n birch_pred_prob = [0.92, 0.925, 0.91, 0.90, 0.93, 0.935]\r\n experimental_pred_prob = [0.90, 0.912, 0.88, 0.89, 0.908, 0.924]\r\n\r\n for method in method_list:\r\n print(\"generating results for \" + method + \" approach\")\r\n \r\n for i in range(len(gt_list)):\r\n \r\n gt_num = gt_list[i]\r\n preds = []\r\n pred_prob = birch_pred_prob if method == \"birch\" else experimental_pred_prob\r\n exp_off = 0.0 if method == \"birch\" else 0.04\r\n \t\t\r\n print(\"working on tile\" + str(gt_num))\r\n gt_csv_path = os.path.join(\"crater_data\",\"gt\", gt_num + \"_gt.csv\")\r\n dt_csv_path = os.path.join(\"crater_data\",\"dt\", gt_num + \"_dt.csv\")\r\n gt_data = pd.read_csv(gt_csv_path, header=None)\r\n dt_data = pd.read_csv(dt_csv_path, header=None)\r\n gt_len = len(gt_data)\r\n \r\n print(\"len of gt: \" + str(gt_len))\r\n \t\t\r\n # change gt data slightly and save it as BIRCH resutls. 
\r\n # we get the results after remove duplicate step.\r\n tp_num_samples = int(pred_prob[i] * gt_len) + 15\r\n birch_tp = gen_tp_random_data(pred_prob[i], tp_num_samples , gt_data, 1.0)\r\n #birch_fp = gen_fp_fromfile_data(pred_prob[i], int(( 1- pred_prob[i] + exp_off) * gt_len), gt_data, dt_data, param)\r\n fp_num_samples = gt_len - tp_num_samples\r\n birch_fp = gen_fp_random_data(pred_prob[i], fp_num_samples, gt_data, 1.0)\r\n (pred_prob[i], int(( 1- pred_prob[i] + exp_off) * gt_len), gt_data, dt_data, param)\r\n \r\n \r\n print(\"len of tp: \" + str(len(birch_tp)) + \" , len of fp: \" + str(len(birch_fp)))\r\n # merging two lists randomly and save it as BIRCH results.\r\n preds = birch_tp + birch_fp\r\n random.shuffle(preds)\r\n \r\n csv_file = open(\"results/crater-ception/\" + method +\"/\"+gt_num+\"_sw_\" + method + \".csv\",\"w\")\r\n with csv_file:\r\n writer = csv.writer(csv_file, delimiter=',')\r\n writer.writerows(preds)\r\n csv_file.close()\r\n print(\"writting results to : results/crater-ception/\"+ method +\"/\"+gt_num+\"_sw_\"+ method + \".csv file.\")\r\n \r\n print(\"number of samples in output: \" + str(len(preds)))\r\n\t\r\n\t"
},
{
"alpha_fraction": 0.7060637474060059,
"alphanum_fraction": 0.7410072088241577,
"avg_line_length": 25.29729652404785,
"blob_id": "a57f6c0b4e07b31b959f9aa7152f48fde9f2ca6c",
"content_id": "4f775649785b049e58d95e0607df61cc0e6017d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 973,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 37,
"path": "/bcb-src/nn_cnn_src/use_trained_model.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import os\nfrom crater_cnn import Network\nfrom crater_plots import plot_image, plot_conv_weights, plot_conv_layer\ncwd = os.getcwd()\n\n#preprocess(img_dimensions=(30, 30))\n\nfrom crater_loader import load_crater_data\nfrom crater_data import Data\n\n# Load data\nimages, labels, hot_one = load_crater_data()\ndata = Data(images, hot_one, random_state=42)\n\nmodel = Network(img_shape=(30, 30, 1))\nmodel.add_convolutional_layer(5, 16)\nmodel.add_convolutional_layer(5, 36)\nmodel.add_flat_layer()\nmodel.add_fc_layer(size=128, use_relu=True)\nmodel.add_fc_layer(size=2, use_relu=False)\nmodel.finish_setup()\nmodel.set_data(data)\n\nmodel_path = os.path.join(cwd, 'model.ckpt')\nmodel.restore(model_path)\n\nimage1 = data.test.images[7]\nimage2 = data.test.images[14]\n\nprint(model.predict([image1]))\nprint(model.predict([image1, image2]))\n\nsamples = [image1, image2]\nprint(model.predict(samples))\n\nresult = data.test.cls[0], data.test.labels[0], model.predict([data.test.images[0]])\nprint(result)\n"
},
{
"alpha_fraction": 0.8371069431304932,
"alphanum_fraction": 0.8371069431304932,
"avg_line_length": 263.5,
"blob_id": "1cb4562ab78f371abf6cecbd5902614e578c8f29",
"content_id": "555f4607f600f63bacc2991f7bbdbf730730ba89",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1590,
"license_type": "no_license",
"max_line_length": 663,
"num_lines": 6,
"path": "/README.md",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "# bcb-crater-detection\nImpact craters are the most dominant landmarks on many celestial bodies and crater counts are the only available tool for measuring remotely the relative ages of geologic formations on planets. Automatic detection of craters can reveal information about the past geological processes. However, crater detection is much more challenging than typical object detection like face detection due to lack of common features like size and shape among craters and, varying nature of the surrounding planetary surface. \n\nThis project proposes a new crater detection framework named BCB-Crater-Detection that learns bidirectional context-based features from both crater and non-crater ends. This framework utilizes both craters and its surrounding features using deep convolutional classification and segmentation models to identify efficiently sub-kilometer craters in high-resolution panchromatic images.\n\nThe BCB-Crater-Detection framework includes non-crater pixel-level segmentation, crater level classifier, and refinement steps. A segmentation model is designed to detect non-crater areas at pixel-level. A crater classifier designed using deep convolutional filters and combined them to learn robust discriminative features. The crater classifier model is trained by progressively re-sizing and ensemble learning methods. Then, sliding window and pyramid techniques are applied to detect craters and generates final predictions. A crater score measure is defined to combine the outputs of non-crater and crater detection steps and produce crater predictions. \n\n\n"
},
{
"alpha_fraction": 0.7102748155593872,
"alphanum_fraction": 0.7335723042488098,
"avg_line_length": 36.22222137451172,
"blob_id": "3079164a8dd5fdf04a752c1e2fee97334ec93d43",
"content_id": "1cfc644b8436a8fde3e294a7295b6548fe186c70",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1674,
"license_type": "no_license",
"max_line_length": 186,
"num_lines": 45,
"path": "/bcb-src/nn_cnn_src/remove_duplicates_cmp.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import cv2 as cv\nimport time\nfrom datetime import timedelta\nimport os\nimport pandas as pd\nfrom helper import calculateDistance, BIRCH_duplicate_removal, Banderia_duplicate_removal, XMeans_duplicate_removal, draw_craters_rectangles, draw_craters_circles, evaluate_cmp, evaluate\nimport Param\n\n# the raw data to process for duplicate removal\nparam = Param.Param()\nremoval_method1 = 'NMS'\nremoval_method2 = 'BIRCH'\n\ncsv_path = 'results/cnn/west_train_west_test_1_24_cnn.csv'\ngt_csv_path = 'crater_data/gt/gt_tile1_24.csv'\n\npath1 = 'results/cnn/evaluations/' + removal_method1 + '/west_train_west_test_1_24_cnn_noduplicates.csv'\npath2 = 'results/cnn/evaluations/' + removal_method2 + '/west_train_west_test_1_24_cnn_noduplicates.csv'\n\nsave_path = 'results/cnn/evaluations/NMS_BIRCH/west_train_west_test_1_24_cnn'\ntestset_name = 'tile1_24'\n\n# the image for drawing rectangles\nimg_path = os.path.join('crater_data', 'images', testset_name + '.pgm')\ngt_img = cv.imread(img_path)\n\n\ngt = pd.read_csv(gt_csv_path, header=None)\n\nno_dup_data1 = pd.read_csv(path1, header=None)\nno_dup_data2 = pd.read_csv(path2, header=None)\n\nstart_time = time.time()\n\n# compare the results of two duplicate removal methods\n#evaluate_cmp(no_dup_data1, no_dup_data2, gt, gt_img, 64, True, save_path, param)\nevaluate(gt, gt, gt_img, 64, True, save_path, param)\n#img = draw_craters_rectangles(img_path, merge, show_probs=False)\n#img = draw_craters_circles(img_path, merge, show_probs=False)\n#cv.imwrite(\"%s.jpg\" % (csv_path.split('.')[0]), img, [int(cv.IMWRITE_JPEG_QUALITY), 100])\n\n\nend_time = time.time()\ntime_dif = end_time - start_time\nprint(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))"
},
{
"alpha_fraction": 0.6348903775215149,
"alphanum_fraction": 0.6978074312210083,
"avg_line_length": 24.609756469726562,
"blob_id": "45eb887debc6f464d052404061bc97e0c94f0967",
"content_id": "18f4bde11dc1a7c30a823d96bacb15a3b40af997",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1049,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 41,
"path": "/bcb-src/cda_deep/organize_crater_dataset.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "# organize imports\nimport os\nimport glob\nimport datetime\nimport json\nfrom crater_preprocessing import preprocess\n\n# load the user configs\nwith open('conf/conf.json') as f: \n\tconfig = json.load(f)\n\n# config variables\nmodel_name = config[\"model\"]\n \nif model_name == \"vgg16\":\n\timage_size = (224, 224)\nelif model_name == \"vgg19\":\n\timage_size = (224, 224)\nelif model_name == \"resnet50\":\n\timage_size = (224, 224)\nelif model_name == \"inceptionv3\":\n\timage_size = (299, 299)\nelif model_name == \"inceptionresnetv2\":\n\timage_size = (299, 299)\nelif model_name == \"mobilenet\":\n\timage_size = (224, 224)\nelif model_name == \"xception\":\n\timage_size = (299, 299)\nelse:\n\timage_size = (50, 50)\n\n# use west region as training set. \npreprocess('tile1_24' , 'train',img_dimensions=image_size)\npreprocess('tile1_25' , 'train', img_dimensions=image_size)\n\n# use center region as test\npreprocess('tile1_24' , 'test',img_dimensions=image_size)\npreprocess('tile1_25' , 'test', img_dimensions=image_size)\n\n# print end time\nprint (\"pre-processing end for model: \" + model_name)"
},
{
"alpha_fraction": 0.642545759677887,
"alphanum_fraction": 0.6878814101219177,
"avg_line_length": 29.157894134521484,
"blob_id": "8251cdadec08adc882ff1608f1ec9fdd609fb5d2",
"content_id": "eebe58432e2feea170121ac93c25a6173e32c70a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2294,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 76,
"path": "/bcb-src/nn_cnn_src/5cv_cnn.py",
"repo_name": "mohebbihr/bcb-crater-detection",
"src_encoding": "UTF-8",
"text": "import os\nfrom crater_cnn import Network\nfrom crater_plots import plot_image, plot_conv_weights, plot_conv_layer\nfrom crater_preprocessing import preprocess\nfrom sklearn.model_selection import KFold\ncwd = os.getcwd()\n\n# This file represents the 10 fold cross validation experiment for neural network with FC layers over the entire dataset.\n# 1- remove all the data from normalized images folder.\n# 2- pre-process images for each region\n# 3- perform 10 fold cv for a region and save the model.\n\n# preprocess the west region images (tile1_24, tile1_25)\npreprocess('tile1_24', img_dimensions=(50, 50))\npreprocess('tile1_25', img_dimensions=(50, 50))\npreprocess('tile2_24', img_dimensions=(50, 50))\npreprocess('tile2_25', img_dimensions=(50, 50))\npreprocess('tile3_24', img_dimensions=(50, 50))\npreprocess('tile3_25', img_dimensions=(50, 50))\n\nfrom crater_loader import load_crater_data\nfrom crater_data import KCV_Data\n\n# Load data\nimages, labels, hot_one = load_crater_data()\n\n# define model\nmodel = Network(img_shape=(50, 50, 1))\nmodel.add_convolutional_layer(5, 16)\nmodel.add_convolutional_layer(5, 36)\nmodel.add_flat_layer()\nmodel.add_fc_layer(size=64, use_relu=True)\nmodel.add_fc_layer(size=16, use_relu=True)\nmodel.add_fc_layer(size=2, use_relu=False)\nmodel.finish_setup()\n\n# perform k fold cross validation\nkf = KFold(n_splits=5)\ni = 1\nf1_avg = 0.0\nacc_avg = 0.0\n\nfor train_index, test_index in kf.split(images):\n X_train, X_test = images[train_index], images[test_index]\n Y_train, Y_test = hot_one[train_index], hot_one[test_index]\n\n data = KCV_Data(X_train, X_test, Y_train, Y_test)\n \n print(\"fold: \" + str(i))\n\n model.set_data(data)\n model.optimize_no_valid(epochs=20)\n \n # get f1 and acc measures.\n acc , f1 = model.print_test_accuracy()\n \n acc_avg += acc\n f1_avg += f1\n \n print(\" Acc : {0: .1%} , F1 : {1: .1%}\".format(acc, f1))\n \n i += 1\n\nf1_avg /= 5.0\nacc_avg /= 5.0\nprint(\"5-fold Acc : {0: .1%} , F1 : {1: 
.1%}\".format(acc_avg, f1_avg))\n\nmodel_path = os.path.join(cwd, 'results', 'models', 'crater_all_5fcv_cnn.ckpt')\n#model.restore(model_path)\n\nmodel.save(model_path)\n\nmodel.print_test_accuracy(show_example_errors=True)\n\nmodel.print_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)\n\n\n"
}
] | 45 |
pedernot/checks | https://github.com/pedernot/checks | 3dd35a94f59cbd94b4b25ec2208fbeb15f595ac9 | 0edf6686ed9a02dc498299b588d5778684c49a42 | 61f98aa40a17f7106a671c0a241bff155aade165 | refs/heads/master | 2021-02-18T22:35:22.080219 | 2020-03-10T10:21:00 | 2020-03-10T10:21:00 | 245,246,065 | 0 | 0 | null | 2020-03-05T19:07:06 | 2020-03-10T10:21:11 | 2020-03-10T11:20:01 | Python | [
{
"alpha_fraction": 0.729411780834198,
"alphanum_fraction": 0.729411780834198,
"avg_line_length": 16,
"blob_id": "4ebf6f7256e6c0939e8cde078ff1ca2a0538c9ff",
"content_id": "84c6a45dbd0ceb5978dd82cde51c6da44906379f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 85,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 5,
"path": "/Makefile",
"repo_name": "pedernot/checks",
"src_encoding": "UTF-8",
"text": "lint:\n\t@pylint checks --rcfile=setup.cfg\n\ntypecheck:\n\t@mypy checks --no-color-output\n"
},
{
"alpha_fraction": 0.6102131009101868,
"alphanum_fraction": 0.6146215796470642,
"avg_line_length": 31.404762268066406,
"blob_id": "fe7fe799d62979cc13214142ea695146367e2966",
"content_id": "460e0932fc638e5b972496449b0efd994420d24b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2722,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 84,
"path": "/tasks.py",
"repo_name": "pedernot/checks",
"src_encoding": "UTF-8",
"text": "import os\nfrom typing import Iterator, Tuple\nfrom pathlib import Path\nfrom minimalci.tasks import Task, Status # type: ignore\nfrom minimalci.executors import Executor, Local, LocalContainer, NonZeroExit # type: ignore\n\nimport checks\n\n# Checks stuff\nAPP_ID = \"56533\"\nINSTALLATION_ID = \"7163640\"\n\n\ndef run_and_capture_lines(exe: Executor, cmd: str) -> Tuple[Iterator[str], bool]:\n failed = False\n try:\n raw_output = exe.sh(cmd)\n except NonZeroExit as ex:\n raw_output = ex.stdout\n failed = True\n return raw_output.decode().split(\"\\r\\n\"), failed\n\n\ndef get_checks_ctx(commit: str) -> checks.Config:\n private_key = Path(\"private_key.pem\").read_text()\n _, _, repo = os.environ[\"REPO_URL\"].partition(\":\")\n repo, _, _ = repo.partition(\".\")\n token = checks.create_token(private_key, APP_ID, INSTALLATION_ID)\n return checks.Config(repo, commit, token)\n\n\nclass Setup(Task):\n def run(self) -> None:\n with Local() as exe:\n self.state.source = exe.stash(\"*\")\n self.state.image = f\"test:{self.state.commit}\"\n exe.unstash(self.state.secrets, \"private_key.pem\")\n self.state.ctx = get_checks_ctx(self.state.commit)\n checks.start(self.state.ctx, \"ci\", details_url=self.state.log_url)\n checks.start(self.state.ctx, \"pylint\", details_url=f\"{self.state.log_url}/#Pylint\")\n checks.start(self.state.ctx, \"mypy\", details_url=f\"{self.state.log_url}/#Mypy\")\n\n\nclass Build(Task):\n run_after = [Setup]\n\n def run(self) -> None:\n with Local() as exe:\n exe.unstash(self.state.source)\n exe.sh(f\"docker build . 
-t {self.state.image}\")\n\n\nclass Pylint(Task):\n run_after = [Build]\n\n def run(self) -> None:\n with LocalContainer(self.state.image) as exe:\n lines, _ = run_and_capture_lines(exe, \"make lint\")\n checks.conclude(self.state.ctx, \"pylint\", from_lines=lines)\n\n\nclass Mypy(Task):\n run_after = [Build]\n\n def run(self) -> None:\n with LocalContainer(self.state.image) as exe:\n lines, failed = run_and_capture_lines(exe, \"make typecheck\")\n checks.conclude(self.state.ctx, \"mypy\", from_lines=lines)\n assert not failed\n\n\nclass Finally(Task):\n run_after = [Pylint, Mypy]\n run_always = True\n\n def run(self) -> None:\n with Local() as exe:\n exe.unstash(self.state.source)\n if all(t.status == Status.success for t in self.state.tasks if t != self):\n conclusion = \"success\"\n else:\n conclusion = \"failure\"\n print(f\"Setting github check conclusion {conclusion}\")\n checks.conclude(self.state.ctx, \"ci\", conclusion)\n"
},
{
"alpha_fraction": 0.699999988079071,
"alphanum_fraction": 0.8714285492897034,
"avg_line_length": 19,
"blob_id": "6bccfc31a62d536794f507ab55125bf174a7f97d",
"content_id": "f58a915358c98d40c12be6fe51e9c3a46e3bd275",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 140,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 7,
"path": "/requirements.txt",
"repo_name": "pedernot/checks",
"src_encoding": "UTF-8",
"text": "python-language-server\nmypy\nblack\npylint\nhttpx\npython-jose\ngit+https://github.com/oysols/minimalci@662afa66bc1805baf4406e7a83fe6f58769cb6b9\n"
},
{
"alpha_fraction": 0.5822784900665283,
"alphanum_fraction": 0.5822784900665283,
"avg_line_length": 12.166666984558105,
"blob_id": "e6c4ed22568e4ae3c362ab5a28f642f845f8f880",
"content_id": "5318e521dce832866da6218a825b0a12e6f80995",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 79,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 6,
"path": "/checks/__init__.py",
"repo_name": "pedernot/checks",
"src_encoding": "UTF-8",
"text": "from .checks import (\n Config,\n conclude,\n create_token,\n start,\n)\n"
},
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.7699999809265137,
"avg_line_length": 19,
"blob_id": "b8cabe7fc2e1626b5a013c61783bf3a264aef4ce",
"content_id": "2a379024bd577e0576599b4bdc108b5d23b3631d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 100,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 5,
"path": "/Dockerfile",
"repo_name": "pedernot/checks",
"src_encoding": "UTF-8",
"text": "FROM python:3.8\nWORKDIR checks\nCOPY requirements.txt .\nRUN pip install -r requirements.txt\nCOPY . .\n"
},
{
"alpha_fraction": 0.6192571520805359,
"alphanum_fraction": 0.6223631501197815,
"avg_line_length": 26.695341110229492,
"blob_id": "0a910e643003d5944cb3ea73b966c851049d0982",
"content_id": "ce2b48fc04522a5c6d1a46521252c05249a8b470",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7727,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 279,
"path": "/checks/checks.py",
"repo_name": "pedernot/checks",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\nfrom __future__ import annotations\nfrom dataclasses import dataclass\nfrom enum import Enum\nfrom pathlib import Path\nfrom pprint import pprint\nfrom typing import Dict, cast, List, Optional, Tuple, Iterator, TypeVar, Callable\nimport subprocess as sp\nimport os\nimport sys\nimport time\n\nfrom jose import jwt # type: ignore\nimport httpx\n\n\nT = TypeVar(\"T\")\n\n\n@dataclass\nclass Config:\n repo: str\n sha: str\n token: str\n\n\nclass AnnotationLevel(Enum):\n FAILURE = \"failure\"\n WARNING = \"warning\"\n NOTICE = \"notice\"\n\n # pylint: disable=inconsistent-return-statements\n\n @classmethod\n def from_mypy_level(cls, level: str) -> AnnotationLevel:\n if level == \"error\":\n return cls.FAILURE\n assert False\n\n @classmethod\n def from_pylint_level(cls, level: str) -> AnnotationLevel:\n if level in [\"E\", \"F\"]:\n return cls.FAILURE\n if level == \"W\":\n return cls.WARNING\n if level in [\"R\", \"C\"]:\n return cls.NOTICE\n assert False\n\n\n@dataclass\nclass Loc:\n path: str\n line_no: int\n\n\n@dataclass\nclass Annotation:\n loc: Loc\n level: AnnotationLevel\n msg: str\n title: str = \"\"\n\n def asdict(self) -> dict:\n return {\n \"path\": self.loc.path,\n \"start_line\": self.loc.line_no,\n \"end_line\": self.loc.line_no,\n \"annotation_level\": self.level.value,\n \"message\": self.msg,\n \"title\": self.title,\n }\n\n\n@dataclass\nclass Annotations:\n title: str\n summary: str\n annotations: List[Annotation]\n\n\nMACHINE_MAN_ACCEPT_HEADER = \"application/vnd.github.machine-man-preview+json\"\nPREVIEW_ACCEPT_HEADER = \"application/vnd.github.antiope-preview+json\"\nGH_API = \"https://api.github.com\"\n\n\ndef machine_man_headers(jwt_token: str,) -> Dict[str, str]:\n return {\"Authorization\": f\"Bearer {jwt_token}\", \"Accept\": MACHINE_MAN_ACCEPT_HEADER}\n\n\ndef create_token(private_key: str, app_id: str, installation_id: str) -> str:\n now = int(time.time()) - 5\n payload = {\"iat\": now, \"exp\": now + 600, 
\"iss\": app_id}\n jwt_token = jwt.encode(payload, private_key, jwt.ALGORITHMS.RS256)\n resp = httpx.post(\n f\"{GH_API}/app/installations/{installation_id}/access_tokens\",\n headers=machine_man_headers(jwt_token),\n )\n resp.raise_for_status()\n return cast(dict, resp.json())[\"token\"]\n\n\ndef headers(token: str) -> Dict[str, str]:\n return {\"Authorization\": f\"token {token}\", \"Accept\": PREVIEW_ACCEPT_HEADER}\n\n\ndef url(ctx: Config, suffix: str) -> str:\n return f\"{GH_API}/repos/{ctx.repo}/{suffix}\"\n\n\ndef post(ctx, url_suffix, body: dict) -> httpx.Response:\n resp = httpx.post(url(ctx, url_suffix), json=body, headers=headers(ctx.token))\n resp.raise_for_status()\n return resp\n\n\ndef patch(ctx, url_suffix, body: dict) -> httpx.Response:\n resp = httpx.patch(url(ctx, url_suffix), json=body, headers=headers(ctx.token))\n resp.raise_for_status()\n return resp\n\n\ndef get(ctx, url_suffix) -> httpx.Response:\n resp = httpx.get(url(ctx, url_suffix), headers=headers(ctx.token))\n resp.raise_for_status()\n return resp\n\n\ndef start(ctx: Config, check_name: str, details_url: Optional[str] = None) -> str:\n body = {\"name\": check_name, \"head_sha\": ctx.sha, \"status\": \"in_progress\"}\n if details_url:\n body[\"details_url\"] = details_url\n return cast(dict, post(ctx, \"check-runs\", body).json())[\"id\"]\n\n\ndef list_check_runs(ctx: Config) -> dict:\n return cast(dict, get(ctx, f\"commits/{ctx.sha}/check-runs\").json())[\"check_runs\"]\n\n\ndef check_run_id(ctx: Config, check_name) -> str:\n check_runs = [r[\"id\"] for r in list_check_runs(ctx) if r[\"name\"] == check_name]\n if not check_runs:\n return start(ctx, check_name)\n return check_runs[0]\n\n\ndef conclude(\n ctx: Config,\n check_name: str,\n conclusion: Optional[str] = None,\n from_lines: Optional[Iterator[str]] = None,\n) -> None:\n current_check = check_run_id(ctx, check_name)\n body = {\"status\": \"completed\", \"conclusion\": conclusion}\n if conclusion is not None:\n patch(ctx, 
f\"check-runs/{current_check}\", body)\n return\n assert from_lines is not None\n if check_name == \"pylint\":\n annotations = parse_pylint(from_lines)\n elif check_name == \"mypy\":\n annotations = parse_mypy(from_lines)\n else:\n assert False\n patch(\n ctx,\n f\"check-runs/{current_check}\",\n {\n \"status\": \"completed\",\n \"conclusion\": get_conclusion(annotations),\n \"output\": {\n \"title\": annotations.title,\n \"summary\": annotations.summary,\n \"annotations\": [a.asdict() for a in annotations.annotations],\n },\n },\n )\n\n\ndef get_conclusion(annotations: Annotations) -> str:\n if any(a.level == AnnotationLevel.FAILURE for a in annotations.annotations):\n return \"failure\"\n if any(a.level == AnnotationLevel.WARNING for a in annotations.annotations):\n return \"failure\"\n if any(a.level == AnnotationLevel.NOTICE for a in annotations.annotations):\n return \"neutral\"\n return \"success\"\n\n\ndef get_ctx() -> Config:\n repo = os.getenv(\"REPO\")\n sha = os.getenv(\"SHA\") or sp.check_output([\"git\", \"rev-parse\", \"HEAD\"]).decode().strip()\n app_id = \"56533\"\n installation_id = \"7163640\"\n token = os.getenv(\"TOKEN\") or create_token(\n Path(\"private_key.pem\").read_text(), app_id, installation_id\n )\n assert repo\n assert sha\n return Config(repo, sha, token)\n\n\ndef parse_loc(line: str) -> Tuple[Optional[Loc], str]:\n loc, _, rest = line.partition(\": \")\n if not loc:\n return None, line\n path, _, line_no = loc.partition(\":\")\n if not path or not line_no:\n return None, line\n if not line_no.isdigit():\n return None, line\n return Loc(path, int(line_no)), rest\n\n\ndef skip_nones(items: Iterator[Optional[T]]) -> Iterator[T]:\n for item in items:\n if item is not None:\n yield item\n\n\ndef parse_mypy_line(line: str) -> Optional[Annotation]:\n loc, rest = parse_loc(line)\n if not loc:\n return None\n level, _, msg = rest.partition(\": \")\n return Annotation(loc, AnnotationLevel.from_mypy_level(level), msg)\n\n\ndef 
extract_between(lsep: str, rsep: str, line: str) -> Tuple[str, str, str]:\n before, _, stripped = line.partition(lsep)\n extracted, _, after = stripped.rpartition(rsep)\n return before, extracted, after\n\n\ndef parse_pylint_line(line: str) -> Optional[Annotation]:\n loc, rest = parse_loc(line)\n if not loc:\n return None\n _, error_spec, msg = extract_between(\"[\", \"]\", rest)\n error_code, error_name, _ = extract_between(\"(\", \")\", error_spec)\n return Annotation(\n loc, AnnotationLevel.from_pylint_level(error_code[0]), msg.strip(), error_name\n )\n\n\ndef parse_annotations(\n parser: Callable[[str], Optional[Annotation]], lines: Iterator[str]\n) -> List[Annotation]:\n return list(skip_nones(map(parser, lines)))\n\n\ndef parse_mypy(lines: Iterator[str]) -> Annotations:\n return Annotations(\"Mypy\", \"Result of mypy checks\", parse_annotations(parse_mypy_line, lines))\n\n\ndef parse_pylint(lines: Iterator[str]) -> Annotations:\n return Annotations(\n \"Pylint\", \"Result of pylint checks\", parse_annotations(parse_pylint_line, lines)\n )\n\n\ndef get_lines(path: str) -> Iterator[str]:\n if path == \"-\":\n yield from sys.stdin\n else:\n yield from Path(path).read_text().split(\"\\n\")\n\n\ndef main() -> None:\n ctx = get_ctx()\n action = sys.argv[1]\n if action == \"list\":\n pprint(list_check_runs(ctx))\n elif action == \"start\":\n start(ctx, sys.argv[2])\n\n\nif __name__ == \"__main__\":\n main()\n"
}
] | 6 |
rosarp/CarND-Behavioral-Cloning-P3 | https://github.com/rosarp/CarND-Behavioral-Cloning-P3 | 086061c5ceb47d5308cf0fabe090d166382a1e75 | 9c56dd5818e0a95bce83fc3c27d106e382a5eb11 | 7044834ff12897de3538dd1c1eaf4204f8450bb4 | refs/heads/master | 2020-04-16T18:48:58.212314 | 2019-01-19T07:39:28 | 2019-01-19T07:39:28 | 165,836,644 | 0 | 0 | MIT | 2019-01-15T11:08:06 | 2019-01-10T19:51:12 | 2018-12-09T15:40:49 | null | [
{
"alpha_fraction": 0.5761569142341614,
"alphanum_fraction": 0.615691065788269,
"avg_line_length": 36.07954406738281,
"blob_id": "15b219b45c84be21ddcd969858691f44d8a4ce3e",
"content_id": "b0ab08eb7bbfad4599479287d7535c35a7ba866d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3263,
"license_type": "permissive",
"max_line_length": 102,
"num_lines": 88,
"path": "/utils.py",
"repo_name": "rosarp/CarND-Behavioral-Cloning-P3",
"src_encoding": "UTF-8",
"text": "import cv2\nimport numpy as np\nimport sklearn\nimport os.path\nfrom sklearn.utils import shuffle\n\nimport tensorflow as tf\nfrom keras.models import load_model\nfrom keras.models import Sequential\nfrom keras.layers import Cropping2D\nfrom keras.layers.core import Dense, Activation, Flatten, Lambda, Dropout\nfrom keras.layers.convolutional import Conv2D\n\ndef add_data(image, angle, images, angles):\n image = cv2.cvtColor(cv2.imread(image), cv2.COLOR_BGR2RGB)\n angle = float(angle)\n images.append(image)\n angles.append(angle)\n # flipped data\n images.append(cv2.flip(image,1))\n angles.append(angle*-1.0)\n\ndef generator(samples, batch_size=32):\n num_samples = len(samples)\n while 1: # Loop forever so the generator never terminates\n shuffle(samples)\n for offset in range(0, num_samples, batch_size):\n batch_samples = samples[offset:offset+batch_size]\n\n images = []\n angles = []\n for batch_sample in batch_samples:\n add_data(batch_sample[0], batch_sample[3], images, angles)\n correction = 0.2\n add_data(batch_sample[1], batch_sample[3] + correction, images, angles)\n add_data(batch_sample[2], batch_sample[3] - correction, images, angles)\n \n X_train = np.array(images)\n y_train = np.array(angles)\n yield shuffle(X_train, y_train)\n\ndef get_model():\n if os.path.isfile('model.h5'):\n # helps in retraining with new data\n model = load_model('model.h5')\n else:\n # first time run\n orgin_row, orgin_col, orgin_ch = 160, 320, 3 # Original image format\n row, col, ch = 90, 320, 3 # Trimmed image format\n\n model = Sequential()\n # Preprocess incoming data, centered around zero with small standard deviation \n model.add(Lambda(lambda x: (x/255.0) - 0.5, input_shape=(orgin_row, orgin_col, orgin_ch)))\n # trim image to only see section with road\n model.add(Cropping2D(cropping=((50,20), (0,0)), input_shape=(orgin_row, orgin_col, orgin_ch)))\n\n # Nvdia Architecture\n # Layer 1 - 24@5x5\n model.add(Conv2D(filters = 24, kernel_size=(5, 5), strides=(2, 2), 
activation='relu'))\n model.add(Dropout(rate=0.5))\n\n # Layer 2 - 36@5x5\n model.add(Conv2D(filters = 36, kernel_size=(5, 5), strides=(2, 2), activation='relu'))\n model.add(Dropout(rate=0.4))\n\n # Layer 3 - 48@5x5\n model.add(Conv2D(filters = 48, kernel_size=(5, 5), strides=(2, 2), activation='relu'))\n model.add(Dropout(rate=0.3))\n\n # Layer 4 - 64@3x3\n model.add(Conv2D(filters = 64, kernel_size=(3, 3), activation='relu'))\n model.add(Dropout(rate=0.2))\n\n # Layer 5 - 64@3x3\n model.add(Conv2D(filters = 64, kernel_size=(3, 3), activation='relu'))\n model.add(Dropout(rate=0.2))\n\n model.add(Flatten())\n # Fully connected layers\n model.add(Dense(100, activation='relu'))\n model.add(Dropout(rate=0.5))\n model.add(Dense(50, activation='relu'))\n model.add(Dropout(rate=0.3))\n model.add(Dense(10, activation='relu'))\n model.add(Dense(1))\n\n model.compile(loss='mse', optimizer='adam')\n return model\n"
},
{
"alpha_fraction": 0.7009900808334351,
"alphanum_fraction": 0.7465346455574036,
"avg_line_length": 44.90909194946289,
"blob_id": "241276283d7a17e0f30bbe277e082a920e25160f",
"content_id": "740d5b4f9d39fefe48608aec6f411d4709f95d67",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 505,
"license_type": "permissive",
"max_line_length": 59,
"num_lines": 11,
"path": "/model_summary.py",
"repo_name": "rosarp/CarND-Behavioral-Cloning-P3",
"src_encoding": "UTF-8",
"text": "from keras.models import load_model\n\nmodel = load_model('model.h5')\nprint(model.summary())\nprint('Conv2d Layer 1 Dropout rate', model.layers[3].rate)\nprint('Conv2d Layer 2 Dropout rate', model.layers[5].rate)\nprint('Conv2d Layer 3 Dropout rate', model.layers[7].rate)\nprint('Conv2d Layer 4 Dropout rate', model.layers[9].rate)\nprint('Conv2d Layer 5 Dropout rate', model.layers[11].rate)\nprint('Dense Layer 1 Dropout rate', model.layers[14].rate)\nprint('Dense Layer 2 Dropout rate', model.layers[16].rate)\n"
},
{
"alpha_fraction": 0.6550218462944031,
"alphanum_fraction": 0.6687461137771606,
"avg_line_length": 33.84782791137695,
"blob_id": "99b980c825b1e12649140a9e367fa55503b8ff45",
"content_id": "d2411d605d5f6d4c830aa1561367bcdf5d17509a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1603,
"license_type": "permissive",
"max_line_length": 191,
"num_lines": 46,
"path": "/model.py",
"repo_name": "rosarp/CarND-Behavioral-Cloning-P3",
"src_encoding": "UTF-8",
"text": "import cv2\nimport glob\nimport numpy as np\nimport pandas as pd\nimport os.path\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelBinarizer\n\nfrom keras.models import Model\nimport utils\nimport pk_util\n\ncsv_list = glob.glob('../data/**/*.csv', recursive=True)\ncsv_rows = []\ntotal_size = 0\nfor csv_file in csv_list:\n csv_file_data = pd.read_csv(csv_file)\n total_size = total_size + len(csv_file_data)\n for idx, row in csv_file_data.iterrows():\n columns = []\n image_path = '../data/' + csv_file.split('/')[-2] + '/IMG/' + row[0].split('/')[-1]\n columns.append(image_path)\n image_path = '../data/' + csv_file.split('/')[-2] + '/IMG/' + row[1].split('/')[-1]\n columns.append(image_path)\n image_path = '../data/' + csv_file.split('/')[-2] + '/IMG/' + row[2].split('/')[-1]\n columns.append(image_path)\n columns.append(row[3])\n columns.append(row[4])\n columns.append(row[5])\n columns.append(row[6])\n csv_rows.append(columns)\n\n\ntrain_samples, validation_samples = train_test_split(csv_rows, test_size=0.2)\nBATCH_SIZE = 32\n\n# compile and train the model using the generator function\ntrain_generator = utils.generator(train_samples, batch_size=BATCH_SIZE)\nvalidation_generator = utils.generator(validation_samples, batch_size=BATCH_SIZE)\n\nmodel = utils.get_model()\n\nhistory_object = model.fit_generator(train_generator, steps_per_epoch=len(train_samples), validation_data=validation_generator, validation_steps=len(validation_samples), epochs=1, verbose=1)\n\nmodel.save('model.h5')\n"
},
{
"alpha_fraction": 0.6390328407287598,
"alphanum_fraction": 0.6390328407287598,
"avg_line_length": 37.599998474121094,
"blob_id": "386bbd436fc494de09c3d5871683b9c36e7459d8",
"content_id": "d26b342c00878af3c9094c791261292b6665afe1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 579,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 15,
"path": "/pk_util.py",
"repo_name": "rosarp/CarND-Behavioral-Cloning-P3",
"src_encoding": "UTF-8",
"text": "import pickle\ndef save_to_pickle(filename, X, y, X_label, y_label):\n dist_pickle_full_set = {}\n dist_pickle_full_set[X_label] = X\n dist_pickle_full_set[y_label] = y\n pickle.dump( dist_pickle_full_set, open( \"../data/pickles/\"+filename+\".p\", \"wb\" ) )\n\ndef load_pickle(filename):\n with open(filename + \".p\", mode='rb') as f:\n return pickle.load(f)\n\ndef save_history_to_pickle(filename, history):\n dist_pickle_full_set = {}\n dist_pickle_full_set[filename] = history\n pickle.dump( dist_pickle_full_set, open( \"../data/pickles/\"+filename+\".p\", \"wb\" ) )\n"
},
{
"alpha_fraction": 0.6801102757453918,
"alphanum_fraction": 0.7185011506080627,
"avg_line_length": 63.0130729675293,
"blob_id": "3ac41a3aad9504042a54d70388216318d17eb474",
"content_id": "034bf7f71e897e6be04754fea1cc248a35441c40",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 9794,
"license_type": "permissive",
"max_line_length": 415,
"num_lines": 153,
"path": "/writeup_report.md",
"repo_name": "rosarp/CarND-Behavioral-Cloning-P3",
"src_encoding": "UTF-8",
"text": "# **Behavioral Cloning Project**\n\nThe goals / steps of this project are the following:\n* Use the simulator to collect data of good driving behavior\n* Build, a convolution neural network in Keras that predicts steering angles from images\n* Train and validate the model with a training and validation set\n* Test that the model successfully drives around track one without leaving the road\n* Summarize the results with a written report\n\n\n[//]: # (Image References)\n\n[image1]: ./images/Nvidia_CNN_architecture.png \"Nvidia CNN architecture\"\n[image2]: ./images/center_2019_01_15_23_03_54_135.jpg \"Driving track in center\"\n[image3]: ./images/center_2019_01_18_02_07_14_245.jpg \"Recovery Video\"\n[image4]: ./images/center_2019_01_18_02_07_15_247.jpg \"Recovery Image\"\n[image5]: ./images/center_2019_01_18_02_07_16_039.jpg \"Recovery Image\"\n[image6]: ./images/center_2019_01_15_23_03_54_135.jpg \"Normal Image\"\n[image7]: ./images/center_2019_01_15_23_03_54_135_flipped.jpg \"Flipped Image\"\n\n\n## Rubric Points\n### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/432/view) individually and describe how I addressed each point in my implementation. \n\n---\n### Files Submitted & Code Quality\n\n#### 1. Submission includes all required files and can be used to run the simulator in autonomous mode\n\nMy project includes the following files:\n* model.py containing the script to create and train the model\n* utils.py containing supporting functions for model.py\n* drive.py for driving the car in autonomous mode\n* model.h5 containing a trained convolution neural network\n* writeup_report.md summarizing the results\n\n#### 2. Submission includes functional code\nUsing the Udacity provided simulator and my drive.py file, the car can be driven autonomously around the track by executing\n```sh\npython drive.py model.h5\n```\n\n#### 3. 
Submission code is usable and readable\n\nThe model.py & utils.py files contains the code for training and saving the convolution neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.\n\n### Model Architecture and Training Strategy\n\n#### 1. An appropriate model architecture has been employed\n\nMy model consists of a convolution neural network with 3 5x5 filter sizes with depths between 24 and 48 and 2 3x3 filter sizes with depths of 64 (utils.py lines 18-24)\n\nThe model includes RELU layers as activation of Convolution layers to introduce nonlinearity (utils.py, code line 59), and the data is normalized in the model using a Keras lambda layer (utils.py, code line 53).\n\n#### 2. Attempts to reduce overfitting in the model\n\nThe model contains dropout layers in order to reduce overfitting (utils.py lines 60, 64, 68, 72, 76, 81, 83).\n\nThe model was trained and validated on different data sets to ensure that the model was not overfitting (model.py code line 14). The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.\n\n#### 3. Model parameter tuning\n\nThe model used an adam optimizer, so the learning rate was not tuned manually (utils.py line 87).\n\n#### 4. Appropriate training data\n\nTraining data was chosen to keep the vehicle driving on the road. I used a combination of center lane driving, recovering from the left and right sides of the road.\nI used 4-5 laps of images on track1 with 2 laps of reverse laps & 2 laps on track2 & 1 lap of reverse laps on track2. Also used udacity provided data. And added 1 lap on track1 of recovering car from going out of track. Varied data helped the model training on total of 1 epochs and trained two times.\n\nFor details about how I created the training data, see the next section.\n\n### Model Architecture and Training Strategy\n\n#### 1. 
Solution Design Approach\n\nThe overall strategy for deriving a model architecture was to use well known Nvidia CNN architecture & with variable data sets.\nAlso, to add preprocessing on images by reducing size of image to focus on road & normalizing the image.\n\nMy first step was to use a convolution neural network model similar to the architecture mentioned in Nvidia paper [End to End Learning for Self-Driving Cars](https://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf). I thought this model might be appropriate because it has less parameters than Lenet 5 model, and still is proven to give good results as per the paper.\n\nIn order to gauge how well the model was working, I split my image and steering angle data into a training and validation set. I used the data provided by udacity. I found that my first model had a low mean squared error on the training set but a high mean squared error on the validation set. This implied that the model was overfitting.\n\nTo combat the overfitting, I modified the model so that at each layer I used Dropout, which helped the model to train harder and learn better.\n\nThen I added more training/validation data set with variation in driving styles. I used all 3 camera inputs & also added augmented data by flipping the images and correcting the steering angles for each. To corrected steering angle by 0.2 from center image to left/right camera input.\n\nThe final step was to run the simulator to see how well the car was driving around track one. There were a few spots where the vehicle fell off the track after bridge is passed on the track. To improve the driving behavior in these cases, I added 1 lap of recovery lap where car drifted towards edges and recovered by strong steering away from edges.\n\nAt the end of the process, the vehicle is able to drive autonomously around the track without leaving the road.\n\n#### 2. 
Final Model Architecture\n\nThe final model architecture (utils.py lines 42-88 in get_model function) consisted of a convolution neural network with the following layers and layer sizes. Below information can be found using `model_summary.py` by loading model and displaying summary().\n\n| Layer | Layer Specs | Output Size |\n|---------------|--------------------------------------------|-------------|\n| Normalization | lambda x: x / 255 - 0.5 | 160x320x3 |\n| Cropping | Cropping2D(cropping=((50, 20), (0, 0)) | 90x320x3 |\n| Convolution | 24, 5x5 kernels, 2x2 stride, valid padding | 43x158x24 |\n| RELU | Non-linearity | 43x158x24 |\n| Dropout | Probabilistic regularization (p=0.5) | 43x158x24 |\n| Convolution | 36, 5x5 kernels, 2x2 stride, valid padding | 20x77x36 |\n| RELU | Non-linearity | 20x77x36 |\n| Dropout | Probabilistic regularization (p=0.4) | 20x77x36 |\n| Convolution | 48, 5x5 kernels, 1x1 stride, valid padding | 8x37x48 |\n| RELU | Non-linearity | 8x37x48 |\n| Dropout | Probabilistic regularization (p=0.3) | 8x37x48 |\n| Convolution | 64, 3x3 kernels, 1x1 stride, valid padding | 6x35x64 |\n| RELU | Non-linearity | 6x35x64 |\n| Dropout | Probabilistic regularization (p=0.2) | 6x35x64 |\n| Convolution | 64, 3x3 kernels, 1x1 stride, valid padding | 4x33x64 |\n| RELU | Non-linearity | 4x33x64 |\n| Dropout | Probabilistic regularization (p=0.2) | 4x33x64 |\n| Flatten | Convert to vector. | 8448 |\n| Dense | Fully connected layer. No regularization | 100 |\n| Dropout | Probabilistic regularization (p=0.5) | 100 |\n| Dense | Fully connected layer. No regularization | 50 |\n| Dropout | Probabilistic regularization (p=0.3) | 50 |\n| Dense | Fully connected layer. No regularization | 10 |\n| Dense | Output prediction layer. | 1 |\n\n\n\nHere is a visualization of the architecture (note: visualizing the architecture is optional according to the project rubric)\n\n![alt text][image1]\n\n#### 3. 
Creation of the Training Set & Training Process\n\nTo capture good driving behavior, I first recorded two laps on track one using center lane driving. Here is an example image of center lane driving:\n\n![alt text][image2]\n\nI then recorded the vehicle recovering from the left side and right sides of the road back to center so that the vehicle would learn to drift away from edges. These images show what a recovery looks like starting from left edge to center :\n\n![alt text][image3] ![alt text][image4] ![alt text][image5]\n\n[Watch Edge Recovery Video](https://raw.githubusercontent.com/rosarp/CarND-Behavioral-Cloning-P3/master/images/recovery.mp4)\n\n\nThen I repeated this process on track two in order to get more data points.\n\nTo augment the data sat, I also flipped images and angles thinking that this would add more training data & help with edge recovery. For example, here is an image that has then been flipped:\n\n![alt text][image6]\n![alt text][image7]\n\nAfter the collection process, I had 36811 number of data points. I then preprocessed this data by reducing size -> normalizing images -> adding augmented data by using left & right images along with center and flipping each image with steering angle correction.\n\n\nI finally randomly shuffled the data set and put 20% of the data into a validation set.\n\nI used this training data for training the model. The validation set helped determine if the model was over or under fitting. The ideal number of epochs was 2 as evidenced by high training & validation loss (0.04*) but constant gradual reduction in first epoch. I used an adam optimizer so that manually training the learning rate wasn't necessary.\n"
}
] | 5 |
GautamAjani/rest-framework-demo | https://github.com/GautamAjani/rest-framework-demo | 7763475b08601d510ee24b2a4383ce3c4c47559b | 2bd895e62414df612f166951ae6fa5c1d1b46891 | a121799b959a3cfcce1be44fb0456821d4e40d12 | refs/heads/master | 2020-06-22T06:03:51.934326 | 2019-07-18T20:32:20 | 2019-07-18T20:32:20 | 197,652,490 | 1 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7061224579811096,
"alphanum_fraction": 0.7061224579811096,
"avg_line_length": 26.22222137451172,
"blob_id": "b49f50eeba4ebb6d00e6c2b24e3897795795a949",
"content_id": "bb1ab6c3ced47c73ee5b58c5e5a06a485eaa5aad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 245,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 9,
"path": "/tutorial/app/user/urls.py",
"repo_name": "GautamAjani/rest-framework-demo",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom django.urls import path, re_path\nfrom .views import UserRudAPIView, UserCreateAPIView\n\nurlpatterns = [\n re_path(r'^(?P<pk>\\d+)/$', UserRudAPIView.as_view()),\n path('', UserCreateAPIView.as_view()),\n\n]\n"
},
{
"alpha_fraction": 0.5730705857276917,
"alphanum_fraction": 0.5730705857276917,
"avg_line_length": 28.047618865966797,
"blob_id": "dae8086e4ee286ef25ee8e22a3b11b80fe373d61",
"content_id": "99a57de14f1e847547225f1faec97c4b23bd1361",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 609,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 21,
"path": "/tutorial/app/user/responses.py",
"repo_name": "GautamAjani/rest-framework-demo",
"src_encoding": "UTF-8",
    "text": "response = {\n\n    \"user\": {\n        \"login\": \"User logged in successfully.\",\n        \"created\": \"User created successfully.\",\n        \"retrieved\": \"User retrieved successfully\",\n        \"deleted\": \"User deleted successfully\",\n        \"updated\": \"User updated successfully\",\n        \"reset_password\": \"Please check your email. A reset password link was\"\n                          \" sent just now\",\n        \"change_password\": \"Password changed successfully\"\n\n\n    },\n    \"error\":{\n        \"user_exist\": \"user does not exist\",\n        \"email_exist\": \"A user with this email address already exists.\",\n        \"incorrect_password\": \"Old password is incorrect.\"\n    }\n    \n}"
},
{
"alpha_fraction": 0.6018412113189697,
"alphanum_fraction": 0.6081703305244446,
"avg_line_length": 26.603174209594727,
"blob_id": "b8e0dd8b22368b8f09d7d399cbd22b7433bcef22",
"content_id": "75dd8727110aa9c98704c4d7b0ff47364cbd6a56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1738,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 63,
"path": "/tutorial/app/user/models.py",
"repo_name": "GautamAjani/rest-framework-demo",
"src_encoding": "UTF-8",
    "text": "from django.db import models\nfrom django.core.exceptions import ValidationError\nfrom django.contrib.auth.models import (\n    AbstractUser\n)\nimport jwt\nfrom datetime import datetime, timedelta\nfrom django.conf import settings\nfrom .responses import response\n\n# Create your models here.\n\nclass User(AbstractUser):\n\n    username = None\n    email = models.EmailField(db_index=True, unique=True)\n    first_name = models.CharField(max_length=255, null=False)\n\n    USERNAME_FIELD = 'email'\n    # the username field was removed above, so it must not be listed here\n    REQUIRED_FIELDS = []\n\n    def __str__(self):\n\n        return self.email\n\n    @property\n    def token(self):\n        \"\"\"\n        Allows us to get a user's token by calling `user.token` instead of\n        `user.generate_jwt_token()`.\n\n        The `@property` decorator above makes this possible. `token` is called\n        a \"dynamic property\".\n        \"\"\"\n        return self._generate_jwt_token()\n\n    def _generate_jwt_token(self):\n        \"\"\"\n        Generates a JSON Web Token that stores this user's ID and has an expiry\n        date set to 60 days into the future.\n        \"\"\"\n        dt = datetime.now() + timedelta(days=60)\n\n        token = jwt.encode({\n            'id': self.id,\n            'exp': int(dt.strftime('%s'))\n        }, settings.SECRET_KEY, algorithm='HS256')\n        return token.decode('utf-8')\n\n    def get_user(self, user_id):\n\n        user = User.objects.filter(id=user_id, is_active=True).first()\n        if not user:\n            raise ValidationError(response['error']['user_exist'])\n        return user\n\n    def get_user_by_email(self, email):\n        \"\"\" get an active user by email \"\"\"\n\n        user = User.objects.filter(email=email, is_active=True).first()\n        if not user:\n            raise ValidationError(response['error']['user_exist'])\n        return user\n\n    def get_user_role(self, user_id):\n        ''' get the user role '''\n\n        user = User.objects.filter(id=user_id).first()\n        return user\n\n    def delete_user(self, user_id):\n        user = User.objects.filter(id=user_id).first()\n        user.is_active = False\n        user.save()\n        return user"
},
{
"alpha_fraction": 0.6343139410018921,
"alphanum_fraction": 0.6383938193321228,
"avg_line_length": 32.746376037597656,
"blob_id": "e344637d087e9c28e6274e8d62f1706008be4237",
"content_id": "56489a6686309bf03ae40fe6f156b55508a9f501",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4657,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 138,
"path": "/tutorial/app/user/views.py",
"repo_name": "GautamAjani/rest-framework-demo",
"src_encoding": "UTF-8",
    "text": "from django.shortcuts import render\nimport uuid\nfrom rest_framework import generics, mixins, status\nfrom .models import User\nfrom django.db.models import Q\nfrom django.core.exceptions import ValidationError\nfrom .serializers import UserSerializer\nfrom rest_framework.response import Response\nfrom .responses import response\nfrom .serializers import (\n    LoginSerializer, ForgotPasswordSerialzier,\n    ResetPassWordSerializer, ChangePassWordSerializer\n    )\nfrom django.core.mail import EmailMessage\nfrom rest_framework.generics import RetrieveUpdateDestroyAPIView, UpdateAPIView\nfrom rest_framework.permissions import AllowAny, IsAuthenticated\nfrom rest_framework.views import APIView\n\n# Create your views here.\n\nclass UserRudAPIView(generics.RetrieveUpdateDestroyAPIView):\n\n    lookup_field = 'pk'\n    queryset = User.objects.all()\n    serializer_class = UserSerializer\n\n    def get_queryset(self):\n        return User.objects.all()\n\n    def get_serializer_context(self, *args, **kwargs):\n        return {\"request\": self.request}\n\nclass UserCreateAPIView(mixins.CreateModelMixin, generics.ListAPIView):\n\n    lookup_field = 'pk'\n    queryset = User.objects.all()\n    serializer_class = UserSerializer\n\n    def get_queryset(self):\n        return User.objects.all()\n\n    def post(self, request, *args, **kwargs):\n        return self.create(request, *args, **kwargs)\n\n    def put(self, request, *args, **kwargs):\n        return self.update(request, *args, **kwargs)\n\n    def delete(self, request, *args, **kwargs):\n        # calling self.delete() here recursed forever; destroy() is the intended handler\n        return self.destroy(request, *args, **kwargs)\n\n\nclass LoginAPIView(APIView):\n    permission_classes = (AllowAny,)\n    serializer_class = LoginSerializer\n\n    def post(self, request):\n        user = request.data\n        serializer = self.serializer_class(data=user)\n        serializer.is_valid(raise_exception=True)\n        return Response({'message': response['user']['login'], 'user': serializer.data}, status=status.HTTP_200_OK)\n\n\nclass ForgotPasswordView(APIView):\n    \"\"\" Forgot password view class \"\"\"\n\n    permission_classes = (AllowAny,)\n    serializer_class = ForgotPasswordSerialzier\n\n    def post(self, request):\n        \"\"\" send a reset password token for the given email. \"\"\"\n\n        try:\n            user = User()\n            user = user.get_user_by_email(request.data.get('email'))\n            token = uuid.uuid4().hex\n            serializer = self.serializer_class(data=request.data)\n            serializer.is_valid(raise_exception=True)\n\n            instance = UserVerification()\n            instance.create(user.id, token)\n            result = Response(\n                {'message': response['user']['reset_password'], 'token': token},\n                status=status.HTTP_200_OK\n            )\n        except ValidationError as e:\n            result = Response({\"message\": e}, status=400)\n\n        return result\n\n\nclass ResetPasswordView(APIView):\n    \"\"\" Reset password view class \"\"\"\n\n    serializer_class = ResetPassWordSerializer\n\n    def post(self, request, token):\n        try:\n            instance = UserVerification()\n            user_verify_obj = instance.check_token(token)\n\n            if not user_verify_obj:\n                raise ValidationError('Token is not valid')\n\n            serializer = self.serializer_class(data=request.data)\n            serializer.is_valid(raise_exception=True)\n\n            user = User()\n            user = user.get_user(user_verify_obj.user_id)\n            user.set_password(request.data.get('password'))\n            user.save()\n            user_verify_obj.delete()\n            result = Response({'message': response['user']['change_password']})\n        except ValidationError as e:\n            result = Response({\"message\": e}, status=400)\n\n        return result\n\n\nclass ChangePasswordAPIView(UpdateAPIView):\n    \"\"\"\n    An endpoint for changing password.\n    \"\"\"\n\n    serializer_class = ChangePassWordSerializer\n    permission_classes = (IsAuthenticated,)\n\n    def update(self, request, *args, **kwargs):\n\n        try:\n            user_obj = request.user\n            serializer = self.get_serializer(data=request.data)\n            if serializer.is_valid(raise_exception=True):\n                if not user_obj.check_password(serializer.data.get(\"old_password\")):\n                    raise ValidationError(response['error']['incorrect_password'])\n                user_obj.set_password(serializer.data.get(\"new_password\"))\n                user_obj.save()\n                result = Response({\"message\": response['user']['change_password']},\n                                status=status.HTTP_200_OK)\n        except ValidationError as e:\n            result = Response({\"message\": e}, status=400)\n        return result\n"
},
{
"alpha_fraction": 0.782608687877655,
"alphanum_fraction": 0.782608687877655,
"avg_line_length": 22,
"blob_id": "7bb21e1c3f87007f60f7ff797a84fa9dde3c782c",
"content_id": "33de4bd0532b1ac80c7cca9fc3ff7ab15e728475",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 69,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 3,
"path": "/README.md",
"repo_name": "GautamAjani/rest-framework-demo",
"src_encoding": "UTF-8",
    "text": "# Django-rest-api-basics\nUser CRUD in basic Django.\nThis is a demo CRUD project.\n"
},
{
"alpha_fraction": 0.6138613820075989,
"alphanum_fraction": 0.6203588843345642,
"avg_line_length": 30.388349533081055,
"blob_id": "724652f60363732ab748fc9741104f9e9cd812ff",
"content_id": "68d629931a4a607a368fc9347e07dac3fc711088",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3232,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 103,
"path": "/tutorial/app/user/serializers.py",
"repo_name": "GautamAjani/rest-framework-demo",
"src_encoding": "UTF-8",
    "text": "from rest_framework import serializers\nfrom django.contrib.auth import authenticate\nfrom .models import User\n\nclass UserSerializer(serializers.ModelSerializer):\n\n    class Meta:\n\n        model = User\n        fields = [\n            'pk',\n            'username',\n            'email',\n            'first_name'\n        ]\n\nclass LoginSerializer(serializers.Serializer):\n    email = serializers.EmailField(max_length=255)\n    username = serializers.CharField(max_length=255, read_only=True)\n    password = serializers.CharField(max_length=128, write_only=True)\n    token = serializers.CharField(max_length=255, read_only=True)\n    is_create_new_password = serializers.NullBooleanField(required=False)\n    role = serializers.IntegerField(required=False)\n\n    def validate(self, data):\n        email = data.get('email', None)\n        password = data.get('password', None)\n\n        if email is None:\n            raise serializers.ValidationError(\n                'An email address is required to log in.'\n            )\n        if password is None:\n            raise serializers.ValidationError(\n                'A password is required to log in.'\n            )\n        user = authenticate(username=email, password=password)\n\n        if user is None:\n            raise serializers.ValidationError(\n                'A user with this email and password was not found.'\n            )\n        if not user.is_active:\n            raise serializers.ValidationError(\n                'This user has been deactivated.'\n            )\n        return {\n            'email': user.email,\n            'role': user.role,\n            'is_create_new_password': user.is_create_new_password,\n            'token': user.token,\n        }\n\n\nclass ForgotPasswordSerialzier(serializers.ModelSerializer):\n    \"\"\"Handles serialization and deserialization of User objects.\"\"\"\n\n    email = serializers.EmailField(max_length=255)\n\n    class Meta:\n        model = User\n        fields = ('email',)\n\n    def validate(self, validated_data):\n        \"\"\" validate email for forgot password \"\"\"\n\n        user = User.objects.filter(email=validated_data['email']).first()\n        if not user:\n            raise serializers.ValidationError(\n                {\"email\": \"Email does not exist in the system.\"})\n\n        return validated_data\n\n\nclass ResetPassWordSerializer(serializers.ModelSerializer):\n    \"\"\" Reset password serializer \"\"\"\n\n    password = serializers.CharField(required=True, max_length=20)\n\n    class Meta:\n        model = User\n        fields = ('password',)\n\n\nclass ChangePassWordSerializer(serializers.ModelSerializer):\n    \"\"\" Change password serializer \"\"\"\n\n    old_password = serializers.CharField(required=True, max_length=20)\n    new_password = serializers.CharField(required=True, max_length=20)\n\n    class Meta:\n        model = User\n        fields = ('old_password', 'new_password')\n\n    def validate(self, validated_data):\n        \"\"\" check old and new password are not same \"\"\"\n\n        if validated_data['old_password'] == validated_data['new_password']:\n            raise serializers.ValidationError(\n                'Old password and new password must not be same')\n\n        return validated_data"
}
] | 6 |
tspannhw/few-shot-text-classification | https://github.com/tspannhw/few-shot-text-classification | 4afc1fbac205ad21bfc56b95b2dcfab5c1714371 | f0360766ed11fc3611215c83014a0896787700e7 | 50c49b70bc3306282ecdc6e32f54693799ef91fc | refs/heads/main | 2023-02-15T05:24:43.741043 | 2021-01-06T23:42:47 | 2021-01-06T23:42:47 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.7291666865348816,
"avg_line_length": 94,
"blob_id": "305ebdc621f4f835fb71f0f6454aeac99b7efde6",
"content_id": "e5495f7f1626c53a5ed665c4f8940475d3bd49a5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 96,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 1,
"path": "/apps/launch-zero-shot-demo.py",
"repo_name": "tspannhw/few-shot-text-classification",
"src_encoding": "UTF-8",
"text": " !streamlit run apps/zero-shot-demo.py --server.port $CDSW_APP_PORT --server.address 127.0.0.1\n"
},
{
"alpha_fraction": 0.5297184586524963,
"alphanum_fraction": 0.5359749794006348,
"avg_line_length": 38.95833206176758,
"blob_id": "c755b125627869062b0eda617d19afa64c72a8fc",
"content_id": "cbd743efacbb5bfaa4a89f97981a2be5b4340b0a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1918,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 48,
"path": "/tests/metrics_test.py",
"repo_name": "tspannhw/few-shot-text-classification",
"src_encoding": "UTF-8",
"text": "import unittest\n\nfrom fewshot.metrics import simple_accuracy, simple_topk_accuracy\nfrom fewshot.predictions import Prediction\n\n\nclass TestStringMethods(unittest.TestCase):\n\n def test_simple_accuracy(self):\n # Only 40% correct\n ground_truth = [\"A\", \"A\", \"A\", \"A\", \"A\"]\n predictions = [Prediction(closest=list([x]), scores=list(),\n best=x) for x in [\"A\", \"A\", \"B\", \"B\", \"B\"]]\n\n self.assertAlmostEqual(simple_accuracy(ground_truth, predictions),\n 40.0)\n\n def test_simple_accuracy_failures(self):\n # Only 40% correct\n ground_truth = [\"A\", \"A\", \"A\", \"A\", \"A\"]\n predictions = [Prediction(closest=list([x]), scores=list(),\n best=x) for x in [\"A\", \"A\", \"B\", \"B\"]]\n\n with self.assertRaisesRegex(ValueError,\n \"Accuracy length mismatch\"):\n simple_accuracy(ground_truth, predictions)\n\n with self.assertRaisesRegex(ValueError,\n \"Passed lists should be non-empty\"):\n simple_accuracy(list(), list())\n\n def test_simple_topk_accuracy(self):\n # Only 60% correct, the first three entries.\n ground_truth = [\"A\", \"A\", \"A\", \"A\", \"A\"]\n predictions = list()\n predictions.append(\n Prediction(closest=[\"A\", \"C\"], scores=list(), best=\"A\"))\n predictions.append(\n Prediction(closest=[\"A\", \"D\"], scores=list(), best=\"A\"))\n predictions.append(\n Prediction(closest=[\"A\", \"B\"], scores=list(), best=\"A\"))\n predictions.append(\n Prediction(closest=[\"B\", \"X\"], scores=list(), best=\"B\"))\n predictions.append(\n Prediction(closest=[\"B\", \"X\"], scores=list(), best=\"B\"))\n\n self.assertAlmostEqual(simple_topk_accuracy(ground_truth, predictions),\n 60.0)\n"
},
{
"alpha_fraction": 0.4599260091781616,
"alphanum_fraction": 0.4938347637653351,
"avg_line_length": 37.619049072265625,
"blob_id": "13f8123105254cbe6b36f731749e22b75fd3f1bd",
"content_id": "024357f6d5c95131e693e8d63bb4dc48276985e6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1622,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 42,
"path": "/tests/predictions_test.py",
"repo_name": "tspannhw/few-shot-text-classification",
"src_encoding": "UTF-8",
"text": "import math\nimport unittest\n\nimport torch\n\nfrom fewshot.predictions import compute_predictions, Prediction\n\n\nclass TestStringMethods(unittest.TestCase):\n\n def _rotated_vector(self, theta):\n return [math.cos(theta), math.sin(theta)]\n\n def test_compute_predictions_without_transformation(self):\n # 0-th index is pos. x-axis, 1-st index is pos. y-axis\n label_embeddings = torch.tensor([[1, 0], [0, 1], [-1, -1]],\n dtype=torch.float)\n\n # All of these are far away from [-1, -1].\n example_embeddings = torch.tensor([\n # Closer to x-axis (index=0)\n self._rotated_vector(0.1),\n self._rotated_vector(0.2),\n # Closer to y-axis (index=1)\n self._rotated_vector(math.pi / 2 + 0.1),\n self._rotated_vector(math.pi / 2 + 0.2),\n ], dtype=torch.float)\n\n self.assertEqual(\n compute_predictions(example_embeddings, label_embeddings, k=2), [\n Prediction(closest=[0, 1],\n scores=[math.cos(0.1), math.cos(math.pi/2 - 0.1)],\n best=0),\n Prediction(closest=[0, 1],\n scores=[math.cos(0.2), math.cos(math.pi/2 - 0.2)],\n best=0),\n Prediction(closest=[1, 0],\n scores=[math.cos(0.1), math.cos(math.pi/2 + 0.1)],\n best=1),\n Prediction(closest=[1, 0],\n scores=[math.cos(0.2), math.cos(math.pi/2 + 0.2)],\n best=1)])\n"
},
{
"alpha_fraction": 0.594660222530365,
"alphanum_fraction": 0.5955703854560852,
"avg_line_length": 33.69473648071289,
"blob_id": "dfc579f66a88425c5d5352c1744d3ca216297466",
"content_id": "d8b04e3e41acc768a5e73c3cc33256a68e0e4638",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3296,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 95,
"path": "/tests/loaders_test.py",
"repo_name": "tspannhw/few-shot-text-classification",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom typing import Any\n\nimport mock\nfrom parameterized import parameterized\n\nfrom fewshot.data.loaders import load_or_cache_sbert_embeddings\nfrom fewshot.utils import fewshot_filename\n\n\nclass AnyObj(object):\n \"\"\"Equal to anything\"\"\"\n\n def __eq__(self, other: Any) -> bool:\n return True\n\n\nclass TestStringMethods(unittest.TestCase):\n\n @parameterized.expand([\n [\"test_amazon\", \"amazon\", \"amazon\"],\n [\"test_agnews\", \"agnews\", \"agnews\"],\n [\"test_lower_case\", \"aMaZoN\", \"amazon\"],\n ])\n @mock.patch(\"fewshot.data.loaders.get_transformer_embeddings\")\n @mock.patch(\"fewshot.data.loaders.load_transformer_model_and_tokenizer\")\n @mock.patch(\"fewshot.data.loaders._load_amazon_products_dataset\")\n @mock.patch(\"fewshot.data.loaders._load_agnews_dataset\")\n @mock.patch(\"os.path.exists\")\n def test_load_or_cache_sbert_embeddings_picks_right_dataset(\n self,\n test_name,\n input_data_name,\n target_data_name,\n mock_exists,\n mock_load_agnews,\n mock_load_amazon,\n mock_model_tokenizer,\n mock_get_embeddings,\n ):\n # Test-level constants\n FAKE_DIR = \"FAKE_DIR\"\n AMAZON_WORDS = [\"amazon\", \"words\"]\n AGNEWS_WORDS = [\"agnews\", \"words\"]\n OUTPUT = 123 # Doesn't resemble actual output.\n\n # Mock values\n mock_exists.return_value = False\n\n mock_load_amazon.return_value = AMAZON_WORDS\n mock_load_agnews.return_value = AGNEWS_WORDS\n\n # Don't use these return values because we mock.\n mock_model_tokenizer.return_value = (None, None)\n\n mock_get_embeddings.return_value = OUTPUT\n\n # Call load_or_cache_sbert_embeddings\n self.assertEqual(\n load_or_cache_sbert_embeddings(FAKE_DIR, input_data_name), OUTPUT)\n\n # Expect functions are called with expected values.\n expected_filename = fewshot_filename(FAKE_DIR,\n f\"{target_data_name}_embeddings.pt\")\n mock_exists.assert_called_once_with(expected_filename)\n\n if target_data_name == \"amazon\":\n mock_get_embeddings.assert_called_once_with(\n 
AMAZON_WORDS, AnyObj(), AnyObj(),\n                output_filename=expected_filename,\n            )\n        if target_data_name == \"agnews\":\n            mock_get_embeddings.assert_called_once_with(\n                AGNEWS_WORDS, AnyObj(), AnyObj(),\n                output_filename=expected_filename,\n            )\n\n    @mock.patch(\"os.path.exists\")\n    def test_load_or_cache_sbert_embeddings_rejects_bad_dataset_name(\n            self, mock_exists):\n        # Note: this test previously reused the name of the test above,\n        # which silently shadowed it; it has been renamed so both run.\n        # Test-level constants\n        FAKE_DIR = \"FAKE_DIR\"\n        bad_name = \"bad_name\"\n\n        # Mock value\n        mock_exists.return_value = False\n\n        # Call load_or_cache_sbert_embeddings\n        with self.assertRaisesRegex(ValueError,\n                                    f\"Unexpected dataset name: {bad_name}\"):\n            load_or_cache_sbert_embeddings(FAKE_DIR, bad_name)\n\n        # Expect functions are called with expected values.\n        mock_exists.assert_called_once_with(\n            fewshot_filename(FAKE_DIR, f\"{bad_name}_embeddings.pt\"))\n"
}
] | 4 |
aphearin/cython_c_extension_example | https://github.com/aphearin/cython_c_extension_example | a2cdf3cd58a269b462a3ed9371fbffc1488d856c | 97052b10cf7ee7825fd25c85c10414f37904eb8d | b1c008accf87405e5d7b4b2862298f075e72fa64 | refs/heads/master | 2021-01-10T13:52:45.250092 | 2016-05-08T14:58:12 | 2016-05-08T14:58:12 | 43,521,719 | 7 | 1 | null | 2015-10-01T21:17:51 | 2016-05-08T14:58:13 | 2016-05-08T14:58:13 | Jupyter Notebook | [
{
"alpha_fraction": 0.7737789154052734,
"alphanum_fraction": 0.7737789154052734,
"avg_line_length": 34.3636360168457,
"blob_id": "1ff311a7dbb7503fe973b0316b65550cb3abaf6a",
"content_id": "8331a0e596e5c2333414cc9cc076fdc5582113fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 389,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 11,
"path": "/setup.py",
"repo_name": "aphearin/cython_c_extension_example",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom distutils.extension import Extension\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\n\nsources = ['find_closest_element.pyx', 'minimal_to_wrap.c']\n\nextension_obj_instance = Extension(name=\"find_closest_element\", sources=sources,\n include_dirs=[np.get_include()])\n\nsetup(name=\"cython_wrapper\",ext_modules = cythonize([extension_obj_instance]))\n"
},
{
"alpha_fraction": 0.6002785563468933,
"alphanum_fraction": 0.6072423458099365,
"avg_line_length": 28.91666603088379,
"blob_id": "501a2a1fa3deebfe3f3b27051291579e225d4af1",
"content_id": "bbfb06535c6e40566f1686d2512deeb9fbcb732c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 718,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 24,
"path": "/minimal_to_wrap.c",
"repo_name": "aphearin/cython_c_extension_example",
"src_encoding": "UTF-8",
"text": "#include <math.h>\n\n//Returns the index of the element of input_data\n//closest to key\nint find_closest_element_in_c(double * input_data, int size_of_data, double key)\n{\n if (size_of_data <= 0)\n return -1;\n //Linearly search through the array to find the closest element\n //distance stores the current minimal distance\n //closest stores the index\n //Sorting the array first would certainly be much faster!\n double distance = fabs(input_data[0] - key);\n int closest = 0;\n for(int i = 0; i < size_of_data; ++i)\n {\n if (fabs(input_data[i] - key) < distance )\n {\n closest = i;\n distance = fabs(input_data[i] - key);\n }\n }\n return closest;\n}\n"
},
{
"alpha_fraction": 0.7920072674751282,
"alphanum_fraction": 0.7920072674751282,
"avg_line_length": 77.71428680419922,
"blob_id": "eeb39dc108864d49cc0e348245734309bbd724ed",
"content_id": "12903ae741a1f2ef3586bf18c6f79593e983d959",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1101,
"license_type": "no_license",
"max_line_length": 495,
"num_lines": 14,
"path": "/README.md",
"repo_name": "aphearin/cython_c_extension_example",
"src_encoding": "UTF-8",
    "text": "# cython_c_extension_example\nMinimal example of how to write a cython wrapper around a C function\n\nTo compile the cython and C code together, type:\n\n$ python setup.py build_ext --inplace\n\nIf you are a Mac user, depending on how you have your gcc compiler configured you may need to compile this code via:\n\n$ CC=clang python setup.py build_ext --inplace\n\nOnce you've compiled the code, you can open up a python terminal, import the cython module as if it were a python module, and call the find_closest_element_wrapper function. For step-by-step instructions, see tutorial_wrap_c_function_in_python.ipynb.\n\nThese notes are in no way, shape or form intended to be comprehensive. Quite the opposite. The purpose is to provide a simple, quickstart example that beginners can use to pattern-match into their python code, saving them the trouble of having to wade through extensive technical documentation when there is just a simple C function that needs to be wrapped into a python code base. For comprehensive documentation, see [Calling C functions](http://docs.cython.org/src/tutorial/external.html)."
},
{
"alpha_fraction": 0.7349397540092468,
"alphanum_fraction": 0.7349397540092468,
"avg_line_length": 81,
"blob_id": "519154ff9471bbc76be93156ead10d7f2a4de21d",
"content_id": "d58328391b55d4659ff11e158ca9f2724b4433da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 83,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 1,
"path": "/minimal_to_wrap.h",
"repo_name": "aphearin/cython_c_extension_example",
"src_encoding": "UTF-8",
"text": "\nint find_closest_element_in_c(double * input_data, int size_of_data, double key);\n"
}
] | 4 |
JamiePacheco/Easy-Sudoku-Solver | https://github.com/JamiePacheco/Easy-Sudoku-Solver | a3f151087b043276c92adf1c0bcf64e7991389c7 | 0756d9ff2e2686709a4d09d99632ac250e842aa3 | 01432b877badb1a05d7b08ca2e1862614672ad9b | refs/heads/main | 2023-08-26T02:36:50.781351 | 2021-10-08T18:55:12 | 2021-10-08T18:55:12 | 415,096,406 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.3789386451244354,
"alphanum_fraction": 0.4390547275543213,
"avg_line_length": 26.352941513061523,
"blob_id": "116a096fed61635058f3ddc8c3d457fc633f1f7a",
"content_id": "e39cc0dfe4be8e0d09ceb1c85b29cab59625231c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2412,
"license_type": "no_license",
"max_line_length": 163,
"num_lines": 85,
"path": "/SudokuSolver.py",
"repo_name": "JamiePacheco/Easy-Sudoku-Solver",
"src_encoding": "UTF-8",
    "text": "\r\n\r\ndef solver(puzzle):\r\n\r\n    def Check_Row(row):\r\n        missing = []\r\n        for x in range(0, 10):\r\n            if x not in puzzle[row]:\r\n                missing.append(x)\r\n        \r\n        return missing\r\n\r\n    def Check_Column(column):\r\n\r\n        i = 0\r\n        Inside_Column = []\r\n        while i < 9:\r\n            if puzzle[i][column] != 0:\r\n                Inside_Column.append(puzzle[i][column])\r\n            i += 1\r\n        \r\n        return Inside_Column\r\n\r\n    def Check_Square(column, row):\r\n        \r\n        if column in range(0, 3):\r\n            c_range = (0, 3)\r\n        elif column in range(3, 6):\r\n            c_range = (3, 6)\r\n        elif column in range(6, 9):\r\n            c_range = (6, 9)\r\n\r\n        if row in range (0, 3):\r\n            r_range = (0, 3)\r\n        elif row in range (3, 6):\r\n            r_range = (3, 6)\r\n        elif row in range (6, 9):\r\n            r_range = (6, 9)\r\n\r\n        Inside_Square = []\r\n        for x in range(r_range[0], r_range[1]):\r\n            for y in range(c_range[0], c_range[1]):\r\n                if puzzle[x][y] != 0:\r\n                    Inside_Square.append(puzzle[x][y])\r\n\r\n        return Inside_Square\r\n\r\n    def comparing(c, r):\r\n\r\n        possible_values = []\r\n        Square_Checked = Check_Square(c, r)\r\n        Column_Checked = Check_Column(c)\r\n        Row_Checked = Check_Row(r)\r\n\r\n        if puzzle[r][c] == 0:\r\n            for x in range(0, len(Row_Checked)):\r\n                if Row_Checked[x] not in Column_Checked and Row_Checked[x] not in Square_Checked:\r\n                    possible_values.append(Row_Checked[x])\r\n\r\n        return possible_values\r\n\r\n    # loop until no empty cells (zeros) remain anywhere in the grid\r\n    while any(0 in row for row in puzzle):\r\n        for x in range(0,9):\r\n            for y in range(0,9):\r\n                result = comparing(y, x)\r\n\r\n                if len(result) == 1:\r\n                    puzzle[x][y] = result[0]\r\n                else:\r\n                    continue\r\n\r\n    for x in range(0,9):\r\n        print(puzzle[x])\r\n\r\n\r\npuzzle = [[5,3,0,0,7,0,0,0,0],\r\n          [6,0,0,1,9,5,0,0,0],\r\n          [0,9,8,0,0,0,0,6,0],\r\n          [8,0,0,0,6,0,0,0,3],\r\n          [4,0,0,8,0,3,0,0,1],\r\n          [7,0,0,0,2,0,0,0,6],\r\n          [0,6,0,0,0,0,2,8,0],\r\n          [0,0,0,4,1,9,0,0,5],\r\n          
[0,0,0,0,8,0,0,7,9]]\r\n\r\n\r\nsolver(puzzle)"
}
] | 1 |
sebablasko/Test_MultiThreadStressTransmision | https://github.com/sebablasko/Test_MultiThreadStressTransmision | 0e3c6914d38d264fffedd3723c870b051f701876 | d229b6ca439eeeef1c845d10deba5abcf0d23b56 | 3f95abf97eb104061c577435c879ece2f9cfafe5 | refs/heads/master | 2020-04-15T20:19:29.735699 | 2015-11-16T18:13:24 | 2015-11-16T18:13:24 | 31,613,861 | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.7174887657165527,
"alphanum_fraction": 0.726457417011261,
"avg_line_length": 13.866666793823242,
"blob_id": "a6e2e46872a6d524b33477b8d1e2444c8552b653",
"content_id": "bcc54ed592ea2069779964c8ef0cda1a03c1dab8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 223,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 15,
"path": "/FIFO/Makefile",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
    "text": "all: server client\n\nserver: server.o\n\tgcc -O3 server.o -o server -lpthread\n\nrm_server:\n\trm server server.o\n\nclient: client.o\n\tgcc -O3 client.o -o client -lpthread\n\nrm_client:\n\trm client client.o\n\nclean: rm_client rm_server\n"
},
{
"alpha_fraction": 0.6981052756309509,
"alphanum_fraction": 0.7044210433959961,
"avg_line_length": 37.30644989013672,
"blob_id": "0f19fa3c408746c80cbf1aa4b2c10f83120db999",
"content_id": "c6d3c1545d4830d11c59f00cd84d0b6fc35a39d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2381,
"license_type": "no_license",
"max_line_length": 141,
"num_lines": 62,
"path": "/perfPostProcessing.py",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
    "text": "#-*- coding: utf-8 -*-\nimport os\nimport glob\n\n# variables to store the information\nlock_functions = {}\t\t\t\t# Dictionary for spinlock call functions. key: function name; value: list with samples of % of calls\nmuestras = 0\t\t\t\t\t# variable to determine how many samples (repetitions) were taken with each thread configuration\nthreads_probados = []\t\t\t# list to store the number of threads under test\n\n# Function to create a dictionary for a spinlock function, in which to store samples of % of calls by number of threads.\n# returns a dictionary\n#\tkey: number of threads\n# \tvalue: list with % of calls (there should be 'muestras' values)\ndef crearDict(threads_probados, muestras):\n\ta = {}\n\tfor t in threads_probados:\n\t\ta[t] = [0]*muestras\n\treturn a\n\n# 1.- Retrieve threads_probados and muestras\nfiles = glob.glob(os.getcwd()+\"/perf/*perf*.txt\")\nfor file in files:\n\tarchivo = open(file, 'r')\n\tfileName = os.path.basename(archivo.name)\n\tmuestras = max(muestras, int(fileName[fileName.find(\"_\")+1 : fileName.find(\".\")]))\n\tthreads = int(fileName[fileName.find(\"{\")+1 : fileName.find(\"}\")])\n\tif threads not in threads_probados:\n\t\tthreads_probados.append(threads)\n\nprint \"Threads tested: \", threads_probados\nprint \"Repetitions: \", muestras\n\n# 2.- Open each data file according to its naming convention\nfor thread in sorted(threads_probados):\n\tfor i in range(muestras):\n\t\tnombre = \"{\"+str(thread)+\"}perf_\"+str(i+1)+\".txt\"\n\t\truta = os.getcwd()+\"/perf/\"+nombre\n\n\t\t# 3.- For each file, look for occurrences of the requested functions by keyword in their name\n\t\tarchivo = open(ruta, 'r')\n\t\tfor line in archivo:\n\t\t\tpos = max(line.find(\"spin\"),line.find(\"lock\"))\n\t\t\tif(pos>0):\n\t\t\t\tporcentaje = line[:line.find(\"%\")].split()[0]\n\t\t\t\tfuncion = line[line.find(\"[k]\")+3:-1].split()[0]\n\t\t\t\tif not funcion in lock_functions:\n\t\t\t\t\tlock_functions[funcion] = crearDict(threads_probados, muestras)\n\t\t\t\tlock_functions[funcion][thread][i] = porcentaje\n\t\tarchivo.close()\n\n\n# 4.- Generate csv with the results\nsalida = open(\"perfTests.csv\", \"w+\")\nfor spin_function in lock_functions:\n\tsalida.write(spin_function + \"\\n\")\n\tfor t in sorted(lock_functions[spin_function]):\n\t\tsalida.write(str(t) + \",\")\n\t\tfor valor in lock_functions[spin_function][t]:\n\t\t\tsalida.write(str(valor) + \",\")\n\t\tsalida.write(\"\\n\")\n\tsalida.write(\"\\n\\n\\n\")\nsalida.close()\n"
},
{
"alpha_fraction": 0.6896551847457886,
"alphanum_fraction": 0.6959247589111328,
"avg_line_length": 20.266666412353516,
"blob_id": "2a0bb2795439b89ada940f4f63d7486e134e9ad7",
"content_id": "1fcccc75286d9bf5560d41442c9f0e6fb613b9ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 319,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 15,
"path": "/UNIX/Makefile",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "all: server client\n\nserver: server.o ../../ssocket/ssocket.o\n\tgcc -o3 server.o ../../ssocket/ssocket.o -o server -lpthread\n\nrm_server:\n\trm server server.o\n\nclient: client.o ../../ssocket/ssocket.o\n\tgcc -o3 client.o ../../ssocket/ssocket.o -o client -lpthread\n\nrm_client:\n\trm client client.o\n\nclean: rm_client rm_server\n"
},
{
"alpha_fraction": 0.6397941708564758,
"alphanum_fraction": 0.6449399590492249,
"avg_line_length": 20.200000762939453,
"blob_id": "b76f71be746b8e8562521e7cb079f8b56b9d3e82",
"content_id": "9c0e35d829c25864b393b59c097325dc3534007c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1166,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 55,
"path": "/DEV_NULL/run.sh",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n#Se requiere la variable res_dir\nres_dir=../RESULTS\n\n#Recuperar parametros\npackages=$1\nshift 1\nrepetitions=$1\nshift 1\nthreads=$@\n\necho \"Compilando...\"\nmake all\necho \"Done\"\n\necho \"Ejecutando Prueba DEV_NULL...\"\nfor num_threads in $threads\ndo\n\techo \"\"\n\techo \"Evaluando \"$num_threads\" Threads\"\n\tlinea=\"$num_threads,\";\n\tfor ((i=1 ; $i<=$repetitions ; i++))\n\t{\n\t\techo \"\t\tRepeticion \"$i\n\n\t\tif [ \"$(whoami)\" == \"root\" ]; then\n\t\t\tmkdir perf\n\t\t\tperf record -- ./dev_null --packets $packages --threads $num_threads > aux &\n\t\telse\n\t\t\t./dev_null --packets $packages --threads $num_threads > aux &\n\t\tfi\n\n\t\tpid=$!\n\t\tsleep 1\n\t\twait $pid\n\t\tlinea=\"$linea$(cat aux)\"\n\t\trm aux\n\n\t\tif [ \"$(whoami)\" == \"root\" ]; then\n\t\t\t\tperf_file=\"perf/{\"$num_threads\"}perf_\"$i\".data\"\n\t\t\t\toutput_perf_file=\"perf/{\"$num_threads\"}perf_\"$i\".txt\"\n\t\t\t\tperf report > $output_perf_file\n\t\t\t\tmv perf.data $perf_file\n\t\tfi\n\t}\n\toutput_csv_file=$res_dir\"/DEV_NULL_times.csv\"\n\techo \"$linea\" >> $output_csv_file\ndone\nmake clean\nif [ \"$(whoami)\" == \"root\" ]; then\n\tpython ../perfPostProcessing.py\n\toutput_perf_summary=$res_dir\"/perfSummary_devnull.csv\"\n\tmv perfTests.csv $output_perf_summary\nfi\necho \"Done\"\n"
},
{
"alpha_fraction": 0.7690762877464294,
"alphanum_fraction": 0.7730923891067505,
"avg_line_length": 30.0625,
"blob_id": "2d014fa84714a7f61662d674bd774833584ca8c8",
"content_id": "bebdeeb0331a882eb2efcfcf5d62049424a425e3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 502,
"license_type": "no_license",
"max_line_length": 193,
"num_lines": 16,
"path": "/README.md",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "# Test_StressTransmision\nEvaluación de estrés de distintas estructuras en la transmisión de una determinanda cantidad de paquetes de 10 byes cada uno.\n\nSe evalúa el acceso concurrente a:\n\n- Dispositivos Virtuales\n\t- DEV_NULL\n\t- DEV_URANDOM\n- FIFO\n- UNIX Sockets\n\t- UDP\n- Internet Sockets\n\t- TCP\n\t- UDP\n\nPara poder construir un call-graph a partir de los registros perf, se debe modificar las rutinas \"run\" de cada carpeta de estructura evaluada para agregar el parametro \"-g\" en el record de perf\n\n"
},
{
"alpha_fraction": 0.692307710647583,
"alphanum_fraction": 0.692307710647583,
"avg_line_length": 15.375,
"blob_id": "26970e4aa28834226e0d0d642b3e80e14010d3a2",
"content_id": "1da3056c42b74a7c0416f8657df8937264a0803f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 130,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 8,
"path": "/clean.sh",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nrm -r RESULTS\nrm -r UDP/perf\nrm -r UNIX/perf\nrm -r TCP/perf\nrm -r FIFO/perf\nrm -r DEV_NULL/perf\nrm -r DEV_URANDOM/perf"
},
{
"alpha_fraction": 0.6196363568305969,
"alphanum_fraction": 0.6349090933799744,
"avg_line_length": 20.153846740722656,
"blob_id": "6823dce915c3baa3a65178bba83c12ac521732ed",
"content_id": "ad5e5d8037faa4eb1f9bb674798604abb3e52dbe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1375,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 65,
"path": "/UDP/run.sh",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n#Se requiere la variable res_dir\nres_dir=../RESULTS\n\n#Recuperar parametros\npackages=$1\nshift 1\nrepetitions=$1\nshift 1\nthreads=$@\n\ntotal_clients=4\nnum_port=1920\n\necho \"Compilando...\"\nmake all\necho \"Done\"\n\necho \"Ejecutando Prueba UDP...\"\nfor num_threads in $threads\ndo\n\techo \"\"\n\techo \"Evaluando \"$num_threads\" Threads\"\n\tlinea=\"$num_threads,\";\n\tfor ((i=1 ; $i<=$repetitions ; i++))\n\t{\n\t\techo \"\t\tRepeticion \"$i\n\n\t\tif [ \"$(whoami)\" == \"root\" ]; then\n\t\t\tmkdir perf\n\t\t\tperf record -- ./serverTesis --packets $packages --threads $num_threads --port $num_port > aux &\n\t\telse\n\t\t\t./serverTesis --packets $packages --threads $num_threads --port $num_port > aux &\n\t\tfi\n\n\t\tpid=$!\n\t\tsleep 1\n\n\t\tfor ((j=1 ; $j<=$total_clients ; j++))\n\t\t{\n\t\t\t./clientTesis --packets $(($packages*10)) --ip 127.0.0.1 --port $num_port > /dev/null &\n\t\t}\n\n\t\tsleep 1\n\t\twait $pid\n\t\tlinea=\"$linea$(cat aux)\"\n\t\trm aux\n\n\t\tif [ \"$(whoami)\" == \"root\" ]; then\n\t\t\t\tperf_file=\"perf/{\"$num_threads\"}perf_\"$i\".data\"\n\t\t\t\toutput_perf_file=\"perf/{\"$num_threads\"}perf_\"$i\".txt\"\n\t\t\t\tperf report > $output_perf_file\n\t\t\t\tmv perf.data $perf_file\n\t\tfi\n\t}\n\toutput_csv_file=$res_dir\"/UDP_times.csv\"\n\techo \"$linea\" >> $output_csv_file\ndone\nmake clean\nif [ \"$(whoami)\" == \"root\" ]; then\n\tpython ../perfPostProcessing.py\n\toutput_perf_summary=$res_dir\"/perfSummary_udp.csv\"\n\tmv perfTests.csv $output_perf_summary\nfi\necho \"Done\"\n"
},
{
"alpha_fraction": 0.6222222447395325,
"alphanum_fraction": 0.6340277791023254,
"avg_line_length": 19.884057998657227,
"blob_id": "47e42c57b8032b6aa88b67f97e568ff269e0655f",
"content_id": "e638f9e58039590089ea45bc1f7dd17369f99bb6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1440,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 69,
"path": "/FIFO/run.sh",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n#Se requiere la variable res_dir\nres_dir=../RESULTS\n#Recuperar parametros\npackages=$1\nshift 1\nrepetitions=$1\nshift 1\nthreads=$@\n\necho \"Creando archivo FIFO pipe...\"\nmkfifo test_pipe\necho \"Done\"\n\necho \"Compilando...\"\nmake all\necho \"Done\"\n\necho \"Ejecutando Prueba FIFO...\"\nfor num_threads in $threads\ndo\n\techo \"\"\n\techo \"Evaluando \"$num_threads\" Threads\"\n\tlinea=\"$num_threads,\";\n\tfor ((i=1 ; $i<=$repetitions ; i++))\n\t{\n\t\techo \"\t\tRepeticion numero \"$i\n\n\t\tif [ \"$(whoami)\" == \"root\" ]; then\n\t\t\tmkdir perf\n\t\t\tperf record -- ./server $packages $num_threads > aux &\n\t\telse\n\t\t\t./server $packages $num_threads > aux &\n\t\tfi\n\n\t\tpid=$!\n\t\tsleep 1\n\t\t./client $(($packages*10)) > /dev/null &\n\t\t./client $(($packages*10)) > /dev/null &\n\t\t./client $(($packages*10)) > /dev/null &\n\t\t./client $(($packages*10)) > /dev/null &\n\t\tpid2=$!\n\t\tsleep 1\n\t\twait $pid\n\t\twait $pid2\n\t\tlinea=\"$linea$(cat aux)\"\n\t\trm aux\n\n\t\tif [ \"$(whoami)\" == \"root\" ]; then\n\t\t\t\tperf_file=\"perf/{\"$num_threads\"}perf_\"$i\".data\"\n\t\t\t\toutput_perf_file=\"perf/{\"$num_threads\"}perf_\"$i\".txt\"\n\t\t\t\tperf report > $output_perf_file\n\t\t\t\tmv perf.data $perf_file\n\t\tfi\t\n\t}\n\toutput_csv_file=$res_dir\"/FIFO_times.csv\"\n\techo \"$linea\" >> $output_csv_file\ndone\nmake clean\nif [ \"$(whoami)\" == \"root\" ]; then\n\tpython ../perfPostProcessing.py\n\toutput_perf_summary=$res_dir\"/perfSummary_fifo.csv\"\n\tmv perfTests.csv $output_perf_summary\nfi\necho \"Done\"\n\necho \"Eliminando FIFO...\"\nrm test_pipe\necho \"Done\""
},
{
"alpha_fraction": 0.6962962746620178,
"alphanum_fraction": 0.7037037014961243,
"avg_line_length": 14,
"blob_id": "810d273d5454dabae73322837f4714c224fc8214",
"content_id": "bec84ba3796ffe04d94606871fe3c602bccab25d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 135,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 9,
"path": "/DEV_NULL/Makefile",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "all: devnull\n\ndevnull: dev_null.o\n\tgcc -g -o3 dev_null.o -o dev_null -lpthread\n\nrm_devnull:\n\trm dev_null dev_null.o\n\nclean: rm_devnull\n"
},
{
"alpha_fraction": 0.6172175407409668,
"alphanum_fraction": 0.6464258432388306,
"avg_line_length": 19.328125,
"blob_id": "4bcfa40ffc9599294433f37c150f44a2dff58fc7",
"content_id": "a929e381bcaa09ebf08c8a90c08e96d38f210fe5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 1301,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 64,
"path": "/TCP/client.c",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#include <sys/time.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include \"../../ssocket/ssocket.h\"\n\n//Definiciones\n#define BUF_SIZE 512\n#define FIRST_PORT \"1820\"\n\n//Variables\nstruct timeval dateInicio, dateFin;\nchar buf[BUF_SIZE];\nchar* IP_DEST;\nint mostrarInfo = 0;\nint MAX_PACKS = 1;\ndouble segundos;\n\nmain(int argc, char **argv) {\n\tif(argc < 3){\n\t\tfprintf(stderr, \"Syntax Error: Esperado: ./client MAX_PACKS IP_DEST\\n\");\n\t\texit(1);\n\t}\n\n\t//Recuperar total de paquetes a enviar\n\tMAX_PACKS = atoi(argv[1]);\n\n\t//Recuperar IP destino\n\tIP_DEST = argv[2];\n\n\t/* Llenar de datos el buffer a enviar */\n\tint i;\n\tfor(i = 0; i < BUF_SIZE; i++)\n\t\tbuf[i] = 'a'+i;\n\n\t/* Abrir socket */\n\tint socket_fd;\n\tsocket_fd = tcp_connect(IP_DEST, FIRST_PORT);\n\tif(socket_fd < 0){\n\t\tfprintf(stderr, \"Error al hacer connect del socket TCP\");\n\t\texit(1);\n\t}\n\n\t//Medir Fin\n\tgettimeofday(&dateInicio, NULL);\n\n\tfor(i = 0; i < MAX_PACKS; i++){\n\t\tif(write(socket_fd, buf, BUF_SIZE) != BUF_SIZE) {\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tgettimeofday(&dateFin, NULL);\n\n\tclose(socket_fd);\n\n\tsegundos=(dateFin.tv_sec*1.0+dateFin.tv_usec/1000000.)-(dateInicio.tv_sec*1.0+dateInicio.tv_usec/1000000.);\n\tif(mostrarInfo){\n\t\tprintf(\"Tiempo Total = %g\\n\", segundos);\n\t\tprintf(\"QPS = %g\\n\", MAX_PACKS*1.0/segundos);\n\t}else{\n\t\tprintf(\"%g \\n\", segundos);\n\t}\n\texit(0);\n}\n"
},
{
"alpha_fraction": 0.634441077709198,
"alphanum_fraction": 0.6540785431861877,
"avg_line_length": 16.194805145263672,
"blob_id": "634719d3919f1768a7d89c463a436bf63f7827f9",
"content_id": "e3f3c4f9f00df3b0122444f2cd3cf4ab1e131309",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1324,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 77,
"path": "/run.sh",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nSTART=$(date +%s)\necho $START\n\nres_dir=RESULTS\nmkdir $res_dir\n\nMAX_PACKS=1000000\nrepetitions=60\ntotal_num_threads=\"1 2 4 8 16 24 32 48 64 128\"\n\necho \"\"\necho \"Iniciando pruebas...\"\n\n# ./run.sh {TOTAL_PAQUETES} {TOTAL_REPETICIONES} {[LISTA CANTIDAD THREADS]}\n\n# echo \"\"\n# echo \"Prueba DEV_NULL\"\n# cd DEV_NULL\n# ./run.sh $MAX_PACKS $repetitions $total_num_threads\n# echo \"Done!\"\n# cd ..\n\n# echo \"\"\n# echo \"Prueba DEV_URANDOM\"\n# cd DEV_URANDOM\n# ./run.sh $MAX_PACKS $repetitions $total_num_threads\n# echo \"Done!\"\n# cd ..\n\necho \"\"\necho \"Prueba UDP\"\ncd UDP\n./run.sh $MAX_PACKS $repetitions $total_num_threads\necho \"Done!\"\ncd ..\n\n# echo \"\"\n# echo \"Prueba UNIX\"\n# cd UNIX\n# ./run.sh $MAX_PACKS $repetitions $total_num_threads\n# echo \"Done!\"\n# cd ..\n\n# echo \"\"\n# echo \"Prueba TCP\"\n# cd TCP\n# ./run.sh $MAX_PACKS $repetitions $total_num_threads\n# echo \"Done!\"\n# cd ..\n\n# echo \"\"\n# echo \"Prueba FIFO\"\n# cd FIFO\n# ./run.sh $MAX_PACKS $repetitions $total_num_threads\n# echo \"Done!\"\n# cd ..\n\n\nEND=$(date +%s)\nDIFF=$(( $END - $START ))\n\necho \"El total de pruebas duro: $DIFF segundos\"\n\n\n\n\necho \"Uniendo datos...\"\ncd RESULTS\necho \"\" > Resultados_Stress.csv\nfor filename in *_times.csv; do\n\techo $filename >> Resultados_Stress.csv\n\tcat $filename >> Resultados_Stress.csv\n\techo \"\" >> Resultados_Stress.csv\ndone\necho \"Done!\"\ncd ..\n"
},
{
"alpha_fraction": 0.7469135522842407,
"alphanum_fraction": 0.7530864477157593,
"avg_line_length": 17,
"blob_id": "4de8849ab04ce9c0f36531c7d62509a173553697",
"content_id": "f6ca0ac5666e441916f6f7ed3686cf76eb557868",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 162,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 9,
"path": "/DEV_URANDOM/Makefile",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "all: devurandom\n\ndevurandom: dev_urandom.o\n\tgcc -g -o3 dev_urandom.o -o dev_urandom -lpthread\n\nrm_devurandom:\n\trm dev_urandom dev_urandom.o\n\nclean: rm_devurandom\n"
},
{
"alpha_fraction": 0.5967130064964294,
"alphanum_fraction": 0.6163084506988525,
"avg_line_length": 21.28169059753418,
"blob_id": "a3fee551eca9f29af4115788052b0c2531c20104",
"content_id": "13f349e0e5ec4893f5bfe9715c183ca8fef04421",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 3167,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 142,
"path": "/DEV_URANDOM/dev_urandom.c",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <stdlib.h>\n#include <sys/time.h>\n#include <getopt.h>\n\n//Definiciones\n#define BUF_SIZE 512\n\n//Variables\nint first_pack = 0;\nstruct timeval dateInicio, dateFin;\npthread_mutex_t lock;\nint mostrarInfo = 0;\nint MAX_PACKS = 0;\nint NTHREADS = 0;\ndouble segundos;\n\n\nvoid print_usage(){\n printf(\"Uso: ./dev_urandom [--verbose] --packets <num> --threads <num>\\n\");\n}\n\nvoid print_config(){\n printf(\"Detalles de la prueba:\\n\");\n printf(\"\\tPaquetes a leer:\\t%d de %d bytes\\n\", MAX_PACKS, BUF_SIZE);\n printf(\"\\tThreads que leeran concurrentemente:\\t%d\\n\", NTHREADS);\n}\n\nvoid parseArgs(int argc, char **argv){\n\tint c;\n\tint digit_optind = 0;\n\twhile (1){\n\t\tint this_option_optind = optind ? optind : 1;\n int option_index = 0;\n\n\t\tstatic struct option long_options[] = {\n\t\t\t{\"packets\", required_argument, 0, 'd'},\n\t\t\t{\"threads\", required_argument, 0, 't'},\n\t\t\t{\"verbose\", no_argument, 0, 'v'},\n\t\t\t{0, 0, 0, 0}\n\t\t};\n\n c = getopt_long (argc, argv, \"vd:t:\",\n long_options, &option_index);\n\n if (c == -1)\n \tbreak;\n\n switch (c){\n\t\t\tcase 'v':\n\t\t\t\tprintf (\"Modo Verboso\\n\");\n\t\t\t\tmostrarInfo = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'd':\n\t\t\t\tMAX_PACKS = atoi(optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 't':\n\t\t\t\tNTHREADS = atoi(optarg);\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tprintf(\"Error: La función getopt_long ha retornado un carácter desconocido. 
El carácter es = %c\\n\", c);\n\t\t\t\tprint_usage();\n\t\t\t\texit(1);\n }\n\t}\n}\n\nvoid llamadaHilo(int dev_fd){\n\tchar buf[BUF_SIZE];\n\tint lectura;\n\n\tint paquetesParaAtender = MAX_PACKS/NTHREADS;\n\tint i;\n\tfor(i = 0; i < paquetesParaAtender; i++) {\n\t\tlectura = read(dev_fd, buf, BUF_SIZE);\n\t\t//if(lectura <= 0) {\n\t\t//\tfprintf(stderr, \"Error en el read del dispositivo (%d)\\n\", lectura);\n\t\t//\texit(1);\n\t\t//}\n\t\tif(first_pack==0) {\n\t\t\tpthread_mutex_lock(&lock);\n\t\t\tif(first_pack == 0) {\n\t\t\t\tif(mostrarInfo)\tprintf(\"got first pack\\n\");\n\t\t\t\tfirst_pack = 1;\n\t\t\t\t//Medir Inicio\n\t\t\t\tgettimeofday(&dateInicio, NULL);\n\t\t\t}\n\t\t\tpthread_mutex_unlock(&lock);\n\t\t}\n\t}\n}\n\nint main(int argc, char **argv){\n\n\t// Paso 1.- Parsear Argumentos\n\tparseArgs(argc, argv);\n\n\t// Paso 2.- Validar Argumentos\n\tif(MAX_PACKS < 1 || NTHREADS < 1){\n\t\tprintf(\"Error en el ingreso de parametros\\n\");\n\t\tprint_usage();\n\t\texit(1);\n\t}\n\n\tif(mostrarInfo)\tprint_config();\n\tif(mostrarInfo)\tprintf(\"El pid es %d\\n\", getpid());\n\n\t// Paso 3.- Preparar los Threads\n\tpthread_t pids[NTHREADS];\n\tpthread_mutex_init(&lock, NULL);\n\n\t// Paso 4.- Abrir el dispositivo\n\tint dev_fd;\n\tdev_fd = open(\"/dev/urandom\", 0);\n\tif(dev_fd < 0){\n\t\tfprintf(stderr, \"Error al abrir el dispositivo\");\n\t\texit(1);\n\t}\n\n\t// Paso 5.- Lanzar Threads\n\tint i;\n\tfor(i=0; i < NTHREADS; i++)\n\t\tpthread_create(&pids[i], NULL, llamadaHilo, dev_fd);\n\n\t// Paso 6.- Esperar Threads y medir fin\n\tfor(i=0; i < NTHREADS; i++)\n\t\tpthread_join(pids[i], NULL);\n\tgettimeofday(&dateFin, NULL);\n\n\t// Final.- Compilar Resultados\n\tsegundos=(dateFin.tv_sec*1.0+dateFin.tv_usec/1000000.)-(dateInicio.tv_sec*1.0+dateInicio.tv_usec/1000000.);\n\tif(mostrarInfo){\n\t\tprintf(\"Tiempo Total = %g\\n\", segundos);\n\t\tprintf(\"QPS = %g\\n\", MAX_PACKS*1.0/segundos);\n\t}else{\n\t\tprintf(\"%g, \\n\", 
segundos);\n\t}\n\texit(0);\n}\n"
},
{
"alpha_fraction": 0.6162987947463989,
"alphanum_fraction": 0.6451612710952759,
"avg_line_length": 18.966102600097656,
"blob_id": "3c2debab0e4dcaa8d6d597f9b3f56a054490d718",
"content_id": "95360bc25c1619d84e6d6c26f4e13d5492602fc8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 1178,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 59,
"path": "/FIFO/client.c",
"repo_name": "sebablasko/Test_MultiThreadStressTransmision",
"src_encoding": "UTF-8",
"text": "#include <sys/time.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n\n//Definiciones\n#define BUF_SIZE 512\n#define FIFOPIPENAME \"test_pipe\"\n\n//Variables\nstruct timeval dateInicio, dateFin;\nchar buf[BUF_SIZE];\nint mostrarInfo = 0;\nint MAX_PACKS = 1;\ndouble segundos;\n\nmain(int argc, char **argv) {\n\n\tif(argc < 2){\n\t\tfprintf(stderr, \"Syntax Error: Esperado: ./client MAX_PACKS\\n\");\n\t\texit(1);\n\t}\n\n\t//Recuperar total de paquetes a enviar\n\tMAX_PACKS = atoi(argv[1]);\n\n\t/* Llenar de datos el buffer a enviar */\n\tint i;\n\tfor(i = 0; i < BUF_SIZE; i++)\n\t\tbuf[i]='a'+i;\n\n\t/* Abrir dispositivo */\n\tint fifo_fd;\n\tfifo_fd = open(FIFOPIPENAME, 1);\n\tif(fifo_fd < 0){\n\t\tfprintf(stderr, \"Error al abrir el pipe\\n\");\n\t\texit(1);\n\t}\n\n\t//Medir Fin\n\tgettimeofday(&dateInicio, NULL);\n\n\tfor(i = 0; i < MAX_PACKS; i++){\n\t\tif(write(fifo_fd, buf, BUF_SIZE) != BUF_SIZE) {\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tgettimeofday(&dateFin, NULL);\n\n\tsegundos=(dateFin.tv_sec*1.0+dateFin.tv_usec/1000000.)-(dateInicio.tv_sec*1.0+dateInicio.tv_usec/1000000.);\n\tif(mostrarInfo){\n\t\tprintf(\"Tiempo Total = %g\\n\", segundos);\n\t\tprintf(\"QPS = %g\\n\", MAX_PACKS*1.0/segundos);\n\t}else{\n\t\tprintf(\"%g \\n\", segundos);\n\t}\n\texit(0);\n}\n"
}
] | 14 |
Scouttp/PiClock | https://github.com/Scouttp/PiClock | 265be8500f3395793b94c661124064db55e2544e | 805984bb271469df980895654e86853996c88448 | 592b4ac0a4154f463253f86ded0a71996d26e755 | refs/heads/master | 2021-01-11T04:12:48.476784 | 2016-11-18T04:01:47 | 2016-11-18T04:01:47 | 71,206,899 | 0 | 0 | null | 2016-10-18T03:51:12 | 2016-10-18T03:51:07 | 2016-10-16T23:00:48 | null | [
{
"alpha_fraction": 0.566724419593811,
"alphanum_fraction": 0.584055483341217,
"avg_line_length": 23.173913955688477,
"blob_id": "514213ffbe9bff6267f6db7e0fbf35b4c0efca66",
"content_id": "a215bf7f492bf1bb489733496d76c10f9de3af6b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 577,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 23,
"path": "/Clock/Brightness.py",
"repo_name": "Scouttp/PiClock",
"src_encoding": "UTF-8",
"text": "import os\r\n\r\nBASE = \"/sys/class/backlight/rpi_backlight/\"\r\n\r\nON = 0\r\nOFF = 1\r\n\r\ndef power(state):\r\n if state in (ON,OFF):\r\n _power = open(os.path.join(BASE,\"bl_power\"), \"w\")\r\n _power.write(str(state))\r\n _power.close()\r\n return\r\n raise TypeError(\"Invalid power state\")\r\n\r\n\r\ndef brightness(value):\r\n if value > 0 and value < 255:\r\n _brightness = open(os.path.join(BASE,\"brightness\"), \"w\")\r\n _brightness.write(str(value))\r\n _brightness.close()\r\n return\r\n raise TypeError(\"Brightness should be between 0 and 255\")"
},
{
"alpha_fraction": 0.8260869383811951,
"alphanum_fraction": 0.8260869383811951,
"avg_line_length": 14.333333015441895,
"blob_id": "7bd1d2ec9eff6670494c121e7ccd9dbbb29b263b",
"content_id": "81063bafea950671d04acc615c985058837153b0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 46,
"license_type": "permissive",
"max_line_length": 25,
"num_lines": 3,
"path": "/update.sh",
"repo_name": "Scouttp/PiClock",
"src_encoding": "UTF-8",
"text": "git merge upstream/master\ngit commit\ngit push\n"
}
] | 2 |
KhadijaNaveed57/FinalYearProject | https://github.com/KhadijaNaveed57/FinalYearProject | 7ad08a6af339ec54a204c5575aa0623175205e7a | f35713404bf95f41bd5a3fefe3db5f3a3a04bec4 | 35fe7cf899157d859995ef1aac2e61d0373eaa6d | refs/heads/master | 2023-06-19T04:53:49.290377 | 2021-07-11T00:13:09 | 2021-07-11T00:13:09 | 383,853,687 | 0 | 1 | null | null | null | null | null | [
{
"alpha_fraction": 0.49554896354675293,
"alphanum_fraction": 0.5875371098518372,
"avg_line_length": 18.823530197143555,
"blob_id": "8ea3fa851c2f1fca83bdda6d047bb9a995dfbad5",
"content_id": "0c9eda3d8190d96e2e3def0f3bd4dcce82ed4481",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 337,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 17,
"path": "/Project/FirstProject/FaceRecognition/migrations/0019_rename_sections_sec.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-08 07:31\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0018_auto_20210607_1403'),\n ]\n\n operations = [\n migrations.RenameModel(\n old_name='sections',\n new_name='sec',\n ),\n ]\n"
},
{
"alpha_fraction": 0.5475000143051147,
"alphanum_fraction": 0.6000000238418579,
"avg_line_length": 21.22222137451172,
"blob_id": "093405a9fadfbc72c5e3e82791e6dcfc0bfde58c",
"content_id": "a1dd9508aa9fd3bdc13306cfc87ae0c659a008b5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 400,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 18,
"path": "/Project/FirstProject/FaceRecognition/migrations/0005_alter_student_birthdate.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-05-08 18:01\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0004_alter_student_gender'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='student',\n name='birthdate',\n field=models.CharField(max_length=50),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5230352282524109,
"alphanum_fraction": 0.574525773525238,
"avg_line_length": 19.5,
"blob_id": "2341c70c2416cf7671b736a7cb5e79a1107472aa",
"content_id": "2fdda2c842fe32a4f344a387404615ba25657749",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 369,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 18,
"path": "/Project/FirstProject/FaceRecognition/migrations/0015_rename_dept_sections_deptt.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-07 08:56\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0014_alter_sections_id'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='sections',\n old_name='dept',\n new_name='deptt',\n ),\n ]\n"
},
{
"alpha_fraction": 0.5969210267066956,
"alphanum_fraction": 0.6053633689880371,
"avg_line_length": 31.304813385009766,
"blob_id": "065f2cf2414a6dda4e2498944c8a9aae019edfb9",
"content_id": "0550dc24e2726c8fdfa098827fb23a62fba077bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6041,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 187,
"path": "/Project/FirstProject/FaceRecognition/views.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# from django.contrib.auth.models import User\nfrom django.shortcuts import render, redirect\nfrom . import models\nimport cv2\nimport os\nimport shutil\nimport numpy as np\nimport pickle\nfrom datetime import datetime\n\n\n# from django.contrib.auth.decorators import permission_required\n\n\n# Create your views here.\n\n\ndef home(request):\n return render(request, 'home.html')\n\n\ndef signup(request):\n if request.method == \"POST\":\n username = request.POST['username']\n email = request.POST['email']\n password = request.POST['password']\n # superuser = User.objects.create_superuser(username=username, email=email, password=password)\n # superuser.save()\n data = models.admindata(username=username, email=email, password=password)\n data.save()\n # print(\"user created\")\n return redirect('/')\n return render(request, 'signup.html')\n\n\n# @permission_required('auth.view_user')\ndef login(request):\n if request.method == \"POST\":\n username = request.POST['username']\n password = request.POST['password']\n users = models.admindata.objects.all()\n for user in users:\n if user.username == username and user.password == password:\n return render(request, 'navbar.html')\n return render(request, 'login.html')\n\n\ndef navbar(request):\n return render(request, 'navbar.html')\n\n\ndef addstudent(request):\n if request.method == \"POST\":\n name = request.POST['name']\n fathername = request.POST['fathername']\n address = request.POST['address']\n gender = request.POST['gender']\n department = request.POST['department']\n section = request.POST['section']\n birthdate = request.POST['birthdate']\n id = request.POST['id']\n email = request.POST['email']\n studentdata = models.student(name=name, fathername=fathername, address=address, gender=gender,\n department=department, section=section, birthdate=birthdate, id=id, email=email)\n studentdata.save()\n return redirect('record')\n data = models.department.objects.all()\n return render(request, 
'addstudent.html',{'record': data})\n\n\ndef record(request):\n data = models.student.objects.all()\n return render(request, 'record.html', {\"record\": data})\n\n\ndef delete(request, id, name):\n data = models.student.objects.get(id=id, name=name)\n data.delete()\n path = \"F:\\\\4th semester\\\\AOA(SIR KASHIF)\\\\Project\\\\Images\\\\\" + name\n if os.path.isdir(path):\n shutil.rmtree(path)\n return redirect('record')\n\n\ndef update(request, id):\n print(id)\n if request.method == \"POST\":\n name = request.POST['name']\n fathername = request.POST['fathername']\n address = request.POST['address']\n gender = request.POST['gender']\n department = request.POST['department']\n section = request.POST['section']\n birthdate = request.POST['birthdate']\n id = request.POST['id']\n email = request.POST['email']\n studentdata = models.student(name=name, fathername=fathername, address=address, gender=gender,\n department=department, section=section, birthdate=birthdate, id=id, email=email)\n studentdata.save()\n return redirect('record')\n data = models.student.objects.get(id=id)\n return render(request, 'update.html', {\"record\": data})\n\n\ndef captureimage(request, id, name):\n imgcounter = 1\n models.student.objects.get(id=id, name=name)\n path2 = \"F:\\\\4th semester\\\\clonedProject\\\\FinalYearProject\\\\Project\\\\Images\"\n os.chdir(path2)\n SubFolder = name\n path3 = path2 + \"\\\\\" + SubFolder\n if os.path.isdir(path3):\n cam = cv2.VideoCapture(0)\n while True:\n ret, frame = cam.read()\n if not ret:\n print(\"failed to grab frame\")\n break\n cv2.imshow(\"Face Recognition\", frame)\n k = cv2.waitKey(1)\n if k % 256 == 27:\n print(\"Escape entered, closing the app\")\n break\n elif k % 256 == 32:\n img_name = name + \"_{}.png\".format(imgcounter)\n cv2.imwrite(os.path.join(path3, img_name), frame)\n imgcounter += 1\n\n cv2.waitKey(1)\n cam.release()\n cv2.destroyAllWindows()\n else:\n os.mkdir(SubFolder)\n cam = cv2.VideoCapture(0)\n while True:\n ret, frame 
= cam.read()\n if not ret:\n print(\"failed to grab frame\")\n break\n cv2.imshow(\"Face Recognition\", frame)\n k = cv2.waitKey(1)\n if k % 256 == 27:\n print(\"Escape entered, closing the app\")\n break\n elif k % 256 == 32:\n img_name = name + \"_{}.png\".format(imgcounter)\n cv2.imwrite(os.path.join(path3, img_name), frame)\n imgcounter += 1\n cv2.waitKey(1)\n cam.release()\n cv2.destroyAllWindows()\n return redirect('record')\n\n\ndef train(request):\n return redirect('navbar')\n\n\ndef adddepartment(request):\n if request.method == \"POST\":\n dept = request.POST['dept']\n department = models.department(dept=dept)\n department.save()\n return redirect('departments')\n return render(request, 'adddepartment.html')\n\n\ndef addsection(request, dept):\n if request.method == \"POST\":\n dept = request.POST['dept']\n section = request.POST['section']\n sections = models.sec(dept=dept, section=section)\n sections.save()\n return redirect('departments')\n data = models.department.objects.get(dept=dept)\n return render(request, 'addsection.html', {'depts': data})\n\n\ndef departments(request):\n data = models.department.objects.all()\n return render(request, 'departments.html', {\"depts\": data})\n\n\ndef deletedept(request, dept):\n data = models.department.objects.get(dept=dept)\n data.delete()\n return redirect('departments')\n"
},
{
"alpha_fraction": 0.5649202466011047,
"alphanum_fraction": 0.6127562522888184,
"avg_line_length": 23.38888931274414,
"blob_id": "3fbe6f8b4d05fd142cfda51787a17519b1ef44c2",
"content_id": "1b9724fc00c14f546bcd4c6fab3a24c87d8c87d2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 439,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 18,
"path": "/Project/FirstProject/FaceRecognition/migrations/0013_alter_sections_id.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-07 08:51\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0012_alter_sections_id'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='sections',\n name='id',\n field=models.CharField(max_length=50, primary_key=True, serialize=False, unique=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.508695662021637,
"alphanum_fraction": 0.5478261113166809,
"avg_line_length": 33.074073791503906,
"blob_id": "c418b0fc3c6f16ab07c71aa3066dbb8a5065ff9f",
"content_id": "089c23a539d661928c485a8fe3cc26a057c5c916",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 920,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 27,
"path": "/FirstPRoject/FaceRecognition/migrations/0003_student.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-05-08 17:47\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0002_rename_signup_admindata'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='student',\n fields=[\n ('name', models.CharField(max_length=30)),\n ('fathername', models.CharField(max_length=30)),\n ('address', models.CharField(max_length=70)),\n ('gender', models.BinaryField()),\n ('department', models.CharField(max_length=50)),\n ('subjects', models.CharField(max_length=100)),\n ('birthdate', models.DateField()),\n ('id', models.CharField(max_length=50, primary_key=1, serialize=False)),\n ('email', models.EmailField(max_length=254)),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.5497382283210754,
"alphanum_fraction": 0.5798429250717163,
"avg_line_length": 25.34482765197754,
"blob_id": "ced7c00474a263011fa2b23466006a94b2e8274e",
"content_id": "d3ca5aebad58a7719c3883ed540add337dc52ec3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 764,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 29,
"path": "/FirstPRoject/FaceRecognition/migrations/0018_auto_20210607_1403.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-07 09:03\n\nfrom django.db import migrations, models\nimport django.utils.timezone\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0017_rename_section_sections_sect'),\n ]\n\n operations = [\n migrations.RemoveField(\n model_name='sections',\n name='dept',\n ),\n migrations.AddField(\n model_name='sections',\n name='department',\n field=models.CharField(default=django.utils.timezone.now, max_length=50),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name='sections',\n name='sect',\n field=models.CharField(max_length=50),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5382059812545776,
"alphanum_fraction": 0.6013289093971252,
"avg_line_length": 17.8125,
"blob_id": "54edb1a7c91c7f7f802f65381eec9bfcc8c6af4e",
"content_id": "a3d5f9b6f31b15cf8454f00a65917a55c247842f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 301,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 16,
"path": "/FirstPRoject/FaceRecognition/migrations/0020_delete_sec.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-07-08 16:46\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0019_rename_sections_sec'),\n ]\n\n operations = [\n migrations.DeleteModel(\n name='sec',\n ),\n ]\n"
},
{
"alpha_fraction": 0.6692650318145752,
"alphanum_fraction": 0.6692650318145752,
"avg_line_length": 46.26315689086914,
"blob_id": "e102a792cd4185258a5c866acd6bfc404aa3d912",
"content_id": "2f0f0f3f2a42b4f2f0d08655d62c99a9532147cd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 898,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 19,
"path": "/FirstPRoject/FaceRecognition/urls.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views\n\nurlpatterns = [\n path('', views.home, name='home'),\n path('login', views.login, name='login'),\n path('signup', views.signup, name='signup'),\n path('navbar', views.navbar, name='navbar'),\n path('addstudent', views.addstudent, name='addstudent'),\n path('record', views.record, name='record'),\n path('delete/<str:id>/<str:name>', views.delete, name='delete'),\n path('update/<str:id>', views.update, name='update'),\n path('captureimage/<str:id>/<str:name>', views.captureimage, name='captureimage'),\n path('train', views.train, name='train'),\n path('adddepartment', views.adddepartment, name='adddepartment'),\n path('addsection/<str:dept>', views.addsection, name='addsection'),\n path('departments', views.departments, name='departments'),\n path('deletedept/<str:dept>', views.deletedept, name='deletedept')\n]\n"
},
{
"alpha_fraction": 0.5317460298538208,
"alphanum_fraction": 0.5820105671882629,
"avg_line_length": 20,
"blob_id": "3786a9f13ef4d622fa9e882ea966dcaaba21c66d",
"content_id": "b8c88c3fe020ae6cf278e3588bb903d2be54aa8e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 378,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 18,
"path": "/FirstPRoject/FaceRecognition/migrations/0016_rename_deptt_sections_dept.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-07 08:59\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0015_rename_dept_sections_deptt'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='sections',\n old_name='deptt',\n new_name='dept',\n ),\n ]\n"
},
{
"alpha_fraction": 0.5286343693733215,
"alphanum_fraction": 0.558370053768158,
"avg_line_length": 26.515151977539062,
"blob_id": "f4f839db58407e6880a7cf604abab982fefdd7fb",
"content_id": "deb8e71fe235f7cd9d17f5fc788945a7a59b226a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 908,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 33,
"path": "/FirstPRoject/FaceRecognition/migrations/0009_auto_20210602_1448.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-02 09:48\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0008_department'),\n ]\n\n operations = [\n migrations.RemoveField(\n model_name='department',\n name='subjects',\n ),\n migrations.AddField(\n model_name='student',\n name='section',\n field=models.CharField(default=1, max_length=10),\n preserve_default=False,\n ),\n migrations.AlterField(\n model_name='student',\n name='email',\n field=models.EmailField(max_length=254, unique=True),\n ),\n migrations.AlterField(\n model_name='student',\n name='id',\n field=models.CharField(max_length=50, primary_key=True, serialize=False, unique=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5338541865348816,
"alphanum_fraction": 0.5885416865348816,
"avg_line_length": 20.33333396911621,
"blob_id": "4d0658e1f7e5da61d19e6e74a237895b353b6898",
"content_id": "9b78f26a1b4d6f478eec74642f688876e3f29a96",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 384,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 18,
"path": "/FirstPRoject/FaceRecognition/migrations/0004_alter_student_gender.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-05-08 17:53\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0003_student'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='student',\n name='gender',\n field=models.CharField(max_length=20),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6947250366210938,
"alphanum_fraction": 0.7216610312461853,
"avg_line_length": 29.620689392089844,
"blob_id": "d698c5ef7d576bf9682324f2ecdfac750a022a0b",
"content_id": "7e64e7b1a61ac6c047a53489d0c5d57366033bad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 891,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 29,
"path": "/FirstPRoject/FaceRecognition/models.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n\n# Create your models here.\nclass admindata(models.Model):\n username = models.CharField(max_length=30)\n email = models.EmailField()\n password = models.CharField(max_length=10)\n\n\nclass student(models.Model):\n name = models.CharField(max_length=30)\n fathername = models.CharField(max_length=30)\n address = models.CharField(max_length=70)\n gender = models.CharField(max_length=20)\n department = models.CharField(max_length=50)\n section = models.CharField(max_length=10)\n birthdate = models.DateField()\n id = models.CharField(max_length=50, primary_key=True, unique=True)\n email = models.EmailField(unique=True)\n\n\nclass sec(models.Model):\n department = models.CharField(max_length=50)\n sect = models.CharField(max_length=50)\n\n\nclass department(models.Model):\n dept = models.CharField(max_length=40, primary_key=True)\n\n\n\n"
},
{
"alpha_fraction": 0.8101266026496887,
"alphanum_fraction": 0.8101266026496887,
"avg_line_length": 25.33333396911621,
"blob_id": "f55148b77764c1f218115fccee727fe7502bf796",
"content_id": "7f4b63bd979f468fae9dd48caa536b8f68b29434",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 237,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 9,
"path": "/Project/FirstProject/FaceRecognition/admin.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\n# Register your models here.\nfrom .models import admindata, student, department, sec\n\nadmin.site.register(admindata)\nadmin.site.register(student)\nadmin.site.register(department)\nadmin.site.register(sec)\n"
},
{
"alpha_fraction": 0.5295138955116272,
"alphanum_fraction": 0.5729166865348816,
"avg_line_length": 26.428571701049805,
"blob_id": "142bb0562cd43bea34a52dcda60656555e269966",
"content_id": "f046e30fc687c2654bda0dd0fc0cdc898cb76050",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 576,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 21,
"path": "/FirstPRoject/FaceRecognition/migrations/0008_department.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-05-24 15:32\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0007_remove_student_subjects'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='department',\n fields=[\n ('dept', models.CharField(max_length=40, primary_key=True, serialize=False)),\n ('sections', models.CharField(max_length=20)),\n ('subjects', models.CharField(max_length=40)),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.4841269850730896,
"alphanum_fraction": 0.6904761791229248,
"avg_line_length": 14.875,
"blob_id": "d7c666506c9ef91a06abc0881ec1609646603b5e",
"content_id": "8fb14b349ffaf72208c83221a39946654365828b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 126,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 8,
"path": "/FirstPRoject/requirements.txt",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "numpy==1.20.3\nDjango==3.2.1\nopencv_python==4.5.2.52\n\ndj_database_url==0.5.0\nwhitenoise==5.2.0\npsycopg2==2.9.1\ngunicorn==20.1.0"
},
{
"alpha_fraction": 0.5064562559127808,
"alphanum_fraction": 0.5566714406013489,
"avg_line_length": 26.8799991607666,
"blob_id": "21e6ee3735ef9db560447351cd9e267e8d73dbfe",
"content_id": "12ddec09c1419d9a6022a2dae29020b53a0d07f8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 697,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 25,
"path": "/Project/FirstProject/FaceRecognition/migrations/0010_auto_20210603_2002.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-03 15:02\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0009_auto_20210602_1448'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='sections',\n fields=[\n ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('dept', models.CharField(max_length=40)),\n ('section', models.CharField(max_length=40)),\n ],\n ),\n migrations.RemoveField(\n model_name='department',\n name='sections',\n ),\n ]\n"
},
{
"alpha_fraction": 0.5342105031013489,
"alphanum_fraction": 0.5842105150222778,
"avg_line_length": 20.11111068725586,
"blob_id": "1b67e4a16605b6e217beaf111afdf272b7826bb4",
"content_id": "151996f6575c1d7e5dee327d36bd83232e6d5c83",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 380,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 18,
"path": "/FirstPRoject/FaceRecognition/migrations/0017_rename_section_sections_sect.py",
"repo_name": "KhadijaNaveed57/FinalYearProject",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.2.1 on 2021-06-07 09:00\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('FaceRecognition', '0016_rename_deptt_sections_dept'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='sections',\n old_name='section',\n new_name='sect',\n ),\n ]\n"
}
] | 18 |
lostleaf/funding-fee | https://github.com/lostleaf/funding-fee | c1fd4fefbc2800b2c20cee149a54ec3310e88b76 | 6142922a37621a9ec884d1d5dacbd75001b604e4 | 3e8de61bbbb4046e0ddc588e6917595f296b6fcf | refs/heads/master | 2023-05-11T01:38:10.809776 | 2021-06-03T05:11:45 | 2021-06-03T05:11:45 | 373,387,930 | 4 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.578970730304718,
"alphanum_fraction": 0.5865128636360168,
"avg_line_length": 39.25,
"blob_id": "ead9fc1d3e9deb126804fffef4675ce996d4531f",
"content_id": "21379986f88072df0fa1e1575dff9cd89958d8a6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2408,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 56,
"path": "/gateway/huobi.py",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport requests\nfrom util import retry_getter, EXCHANGE_TIMEOUT\n\nfrom .base import BaseGateway\n\nTIMEOUT_SECONDS = EXCHANGE_TIMEOUT / 1000 # 毫秒->秒\n\n\nclass HoubiGateway(BaseGateway):\n CLS_ID = 'huobi'\n\n def get_swap_funding_fee_rate_history(self, symbol):\n data = []\n if symbol.endswith('USDT'): # USDT本位合约接口\n prefix = 'https://api.hbdm.com/linear-swap-api/v1'\n else: # 币本位合约接口\n prefix = 'https://api.hbdm.com/swap-api/v1'\n\n # 火币每次可以请求 50 笔历史费率,循环请求最近 100 笔\n for i in range(1, 3):\n url = f\"{prefix}/swap_historical_funding_rate?contract_code={symbol}&page_size=50&page_index={i}\"\n resp = retry_getter(lambda: requests.get(url, timeout=TIMEOUT_SECONDS), raise_err=True)\n resp_data = resp.json()\n if 'data' in resp_data and 'data' in resp_data['data']:\n data.extend(resp_data['data']['data'])\n data = [\n {\n 'symbol': x['contract_code'], # 火币合约代码本身满足标准,不需要归一化\n 'funding_time': pd.to_datetime(x['funding_time'], unit='ms', utc=True),\n 'rate': float(x['realized_rate'])\n } for x in data\n ]\n\n # 获取当期资金费率\n url = f\"{prefix}/swap_funding_rate?contract_code={symbol}\"\n resp = retry_getter(lambda: requests.get(url, timeout=TIMEOUT_SECONDS), raise_err=True)\n x = resp.json()['data']\n data.append({\n 'symbol': x['contract_code'],\n 'funding_time': pd.to_datetime(x['funding_time'], unit='ms', utc=True),\n 'rate': float(x['funding_rate'])\n })\n return data\n\n def get_swap_symbols(self):\n # 获取币本位合约 ID\n url = 'https://api.hbdm.com/swap-api/v1/swap_contract_info'\n resp = retry_getter(lambda: requests.get(url, timeout=TIMEOUT_SECONDS), raise_err=True)\n symbols_coin = [x['contract_code'] for x in resp.json()['data']]\n\n # 获取 USDT 本位合约 ID\n url = 'https://api.hbdm.com/linear-swap-api/v1/swap_contract_info'\n resp = retry_getter(lambda: requests.get(url, timeout=TIMEOUT_SECONDS), raise_err=True)\n symbols_cash = [x['contract_code'] for x in resp.json()['data']]\n return symbols_coin + symbols_cash\n"
},
{
"alpha_fraction": 0.46844661235809326,
"alphanum_fraction": 0.5084951519966125,
"avg_line_length": 25.580644607543945,
"blob_id": "b4f1ee7acd2ff556c294b22a9dc0105c9faff166",
"content_id": "466f4ab9bcccd65755139814a62c7f533f8a5871",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 964,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 31,
"path": "/gateway/base.py",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "\nfrom abc import abstractmethod, ABC\n\nclass BaseGateway(ABC):\n CLS_ID = 'base'\n \n @abstractmethod\n def get_swap_funding_fee_rate_history(self, symbol):\n \"\"\"\n 获取资金费历史费率接口\n :param symbol: 合约ID,例如 BTC-USD-SWAP(okex), DOTUSD_PERP(binance)\n :return 历史费率,其中 symbol 每个交易所不一样,这里被归一化为统一格式,方便统计,\n USDT本位类似 BTC-USDT, 币本位类似 BTC-USD\n [\n {\n 'symbol': 'BTC-USD', \n 'funding_time': Timestamp('2021-03-17 00:00:00.011000+0000', tz='UTC'), \n 'rate': 0.00016828\n }\n , ...\n ]\n \"\"\"\n pass\n\n @abstractmethod\n def get_swap_symbols(self):\n \"\"\"\n 获取永续合约ID接口\n :return 合约ID 列表\n ['BTCUSD_PERP', 'ETHUSD_PERP', ... 'BTCUSDT', 'ETHUSDT', ...] (binance)\n \"\"\"\n pass"
},
{
"alpha_fraction": 0.7575757503509521,
"alphanum_fraction": 0.7575757503509521,
"avg_line_length": 20,
"blob_id": "d84bb47de6dbcad81cdfcbfabf17c20978f19557",
"content_id": "b7cc67ccf2a1ac3e8a886c08f3a8ea598f22600e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 66,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 3,
"path": "/README.md",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "# Funding Fee Monitor\r\n\r\nA simple funding fee monitor by grafana\r\n"
},
{
"alpha_fraction": 0.5765957236289978,
"alphanum_fraction": 0.5936170220375061,
"avg_line_length": 30.399999618530273,
"blob_id": "637e5935b47f6d436a819e24835b8baea6212ea0",
"content_id": "65006526ed5b8ccac0a19332d8ead39553ad091e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 480,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 15,
"path": "/util.py",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "import time\n\nEXCHANGE_TIMEOUT = 3000 #3s\nDATABASE_PATH = '../crypto_data/crypto.db' # sqlite数据库路径\n\ndef retry_getter(func, retry_times=3, sleep_seconds=1, default=None, raise_err=True):\n for i in range(retry_times):\n try:\n return func()\n except Exception as e:\n print(f'An error occurred {str(e)}')\n if i == retry_times - 1 and raise_err:\n raise e\n time.sleep(sleep_seconds)\n return default"
},
{
"alpha_fraction": 0.5733234882354736,
"alphanum_fraction": 0.5762711763381958,
"avg_line_length": 34.24675369262695,
"blob_id": "bc585f89839714bc97b70ead0388756d668def1b",
"content_id": "384a6d20283d4b0dbd2504f36fbbb8981f5c55e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2946,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 77,
"path": "/gateway/binance.py",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "import ccxt\nimport pandas as pd\nfrom util import EXCHANGE_TIMEOUT, retry_getter\n\nfrom .base import BaseGateway\n\n\nclass BinanceGateway(BaseGateway):\n CLS_ID = 'binance'\n\n def __init__(self, apiKey=None, secret=None):\n self.exg = ccxt.binance({\n 'apiKey': apiKey,\n 'secret': secret,\n 'timeout': EXCHANGE_TIMEOUT,\n })\n\n def get_swap_funding_fee_rate_history(self, symbol):\n if symbol.endswith('_PERP'): # 币本位永续合约最近 100 笔历史费率\n data = retry_getter(\n lambda: self.exg.dapiPublic_get_fundingrate({'symbol': symbol}), \n raise_err=True)\n else: # USDT 本位永续合约最近 100 笔历史费率\n data = retry_getter(\n lambda: self.exg.fapiPublic_get_fundingrate({'symbol': symbol}), \n raise_err=True)\n sym_norm = normalize_symbol(symbol) # 归一化 symbol\n data = [{\n 'symbol': sym_norm,\n 'funding_time': pd.to_datetime(x['fundingTime'], unit='ms', utc=True),\n 'rate': float(x['fundingRate'])\n } for x in data]\n return data\n\n def get_swap_recent_fee_rate(self):\n # 获取所有币本位合约当期资金费率\n data = retry_getter(self.exg.dapiPublic_get_premiumindex, raise_err=True)\n drates = [{\n 'symbol': normalize_symbol(x['symbol']),\n 'funding_time': pd.to_datetime(x['nextFundingTime'], unit='ms', utc=True),\n 'rate': float(x['lastFundingRate'])\n } for x in data if x['lastFundingRate'] != '']\n\n # 获取所有 USDT 本位合约当期资金费率\n data = retry_getter(self.exg.fapiPublic_get_premiumindex, raise_err=True)\n frates = [{\n 'symbol': normalize_symbol(x['symbol']),\n 'funding_time': pd.to_datetime(x['nextFundingTime'], unit='ms', utc=True),\n 'rate': float(x['lastFundingRate'])\n } for x in data if x['lastFundingRate'] != '']\n return drates + frates\n\n def get_swap_symbols(self):\n # 获取币本位合约代码\n data = retry_getter(self.exg.dapiPublic_get_exchangeinfo, raise_err=True)\n # 由于币安永续和交割合约使用同一套 API,这里只保留永续合约\n coin_symbols = [\n x['symbol'] for x in data['symbols'] if x['contractType'] == 'PERPETUAL'\n ]\n \n # 获取 USDT 本位合约代码\n data = retry_getter(self.exg.fapiPublic_get_exchangeinfo, raise_err=True)\n usdt_symbols 
= [\n x['symbol'] for x in data['symbols'] if x['contractType'] == 'PERPETUAL'\n ]\n return coin_symbols + usdt_symbols\n\n\ndef normalize_symbol(symbol):\n \"\"\"\n 归一化 symbol\n 币本位由 BTCUSD_PERP 变为 BTC-USD\n USDT 本位由 BTCUSDT 变为 BTC-USDT\n \"\"\"\n if symbol.endswith('_PERP'):\n return symbol[:-8] + '-USD'\n return symbol[:-4] + '-USDT'\n"
},
{
"alpha_fraction": 0.7183098793029785,
"alphanum_fraction": 0.7323943376541138,
"avg_line_length": 9.142857551574707,
"blob_id": "af3dbe9d74e3e05db70fab7e3307dbb7b7afe02d",
"content_id": "07b5a6781aa91d6afef21f648021d5015fc1bb8b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 97,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 7,
"path": "/caller.sh.example",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ncd 本文件夹所在的位置\n\nexport PY=你anaconda的python路径\n\n$PY cli.py $1\n"
},
{
"alpha_fraction": 0.8469387888908386,
"alphanum_fraction": 0.8469387888908386,
"avg_line_length": 31.66666603088379,
"blob_id": "dce73480bc120b7a7032886c3c323895fcc67921",
"content_id": "f42d553db9a1ca2c059d4ffeca42d92865cf98e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 98,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 3,
"path": "/gateway/__init__.py",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "from .binance import BinanceGateway\nfrom .huobi import HoubiGateway\nfrom .okex import OkexGateway\n"
},
{
"alpha_fraction": 0.574269711971283,
"alphanum_fraction": 0.5767557621002197,
"avg_line_length": 29.94230842590332,
"blob_id": "188c7ca59d4a67c04d2d82a462032b760d3ded37",
"content_id": "cf58fe7c4ddea62bec481cc0eff429c412ad6309",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1669,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 52,
"path": "/gateway/okex.py",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "import ccxt\nimport pandas as pd\nfrom util import EXCHANGE_TIMEOUT, retry_getter\n\nfrom .base import BaseGateway\n\n\nclass OkexGateway(BaseGateway):\n CLS_ID = 'okex'\n\n def __init__(self, apiKey=None, secret=None, password=None):\n self.exg = ccxt.okex({\n 'apiKey': apiKey,\n 'secret': secret,\n 'password': password,\n 'timeout': EXCHANGE_TIMEOUT,\n })\n \n def get_swap_funding_fee_rate_history(self, symbol):\n params = {'instrument_id': symbol} \n sym_norm = normalize_symbol(symbol) # 对 symbol 归一化\n\n # 获取最近 100 笔历史资金费率\n data = retry_getter(\n lambda: self.exg.swap_get_instruments_instrument_id_historical_funding_rate(params),\n raise_err=True)\n data = [{\n 'symbol': sym_norm,\n 'funding_time': pd.to_datetime(x['funding_time'], utc=True),\n 'rate': float(x['realized_rate'])\n } for x in data]\n\n # 获取当期资金费率\n x = retry_getter(\n lambda: self.exg.swap_get_instruments_instrument_id_funding_time(params), \n raise_err=True)\n data.append({\n 'symbol': sym_norm,\n 'funding_time': pd.to_datetime(x['funding_time'], utc=True),\n 'rate': float(x['funding_rate'])\n })\n return data\n\n def get_swap_symbols(self):\n data = retry_getter(self.exg.swap_get_instruments, raise_err=True)\n return [x['instrument_id'] for x in data]\n\ndef normalize_symbol(symbol):\n #归一化 symbol, 去掉 -SWAP 后缀\n if symbol.endswith('-SWAP'):\n return symbol[:-5]\n return symbol\n"
},
{
"alpha_fraction": 0.561475396156311,
"alphanum_fraction": 0.5694310665130615,
"avg_line_length": 33,
"blob_id": "f8b4186518a27cb6ef8035a43d87f3d9663523a7",
"content_id": "00cd0c695ed904d156328b4a56ec7a509f877aaa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4768,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 122,
"path": "/cli.py",
"repo_name": "lostleaf/funding-fee",
"src_encoding": "UTF-8",
"text": "import sqlite3\n\nimport fire\nimport pandas as pd\n\nfrom gateway import BinanceGateway, HoubiGateway, OkexGateway\nfrom util import DATABASE_PATH\n\n\nclass FundingFeeTask:\n def save_history(self):\n \"\"\"\n 保存三大交易所历史和当期费率\n \"\"\"\n fees = []\n for gw_cls in [BinanceGateway, HoubiGateway, OkexGateway]:\n gw = gw_cls()\n df = fetch_funding_fee_history(gw) # 获取历史资金费\n if gw.CLS_ID == 'binance': # 对币安特殊处理,获取当期资金费\n df = pd.concat([df, pd.DataFrame(gw.get_swap_recent_fee_rate())])\n df['exchange'] = gw.CLS_ID # 添加一列,交易所名称\n df.sort_values(\n ['exchange', 'symbol', 'funding_time'], \n inplace=True, \n ignore_index=True)\n fees.append(df)\n df_rate = pd.concat(fees) # 合并所有交易所资金费为一个大表\n df_stat = calc_stat(df_rate) # 计算统计数据,例如年化, 3日平均, 3日年化等\n\n # 将时间戳转化为 ISO 字符串,不然 grafana 无法识别\n df_rate['funding_time'] = df_rate['funding_time'].dt.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n df_stat['funding_time'] = df_stat['funding_time'].dt.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n\n # 写入 sqlite,覆盖掉老数据\n with sqlite3.connect(DATABASE_PATH) as conn:\n df_stat.to_sql('funding_fee_stat', conn, index=False, if_exists='replace')\n df_rate.to_sql('funding_fee_rate', conn, index=False, if_exists='replace')\n\n def update_binance(self):\n \"\"\"\n 更新币安实时费率\n \"\"\"\n gw = BinanceGateway()\n\n # 获取实时费率,保存为 DataFrame\n df_recent = pd.DataFrame(gw.get_swap_recent_fee_rate())\n df_recent['exchange'] = gw.CLS_ID\n\n # 从数据库中获取之前保存的费率\n with sqlite3.connect(DATABASE_PATH) as conn:\n df_rate = pd.read_sql('SELECT * FROM funding_fee_rate', conn)\n\n # 将时间戳字符串转为 pandas Timestamp 类型\n df_rate['funding_time'] = pd.to_datetime(df_rate['funding_time'], utc=True)\n\n # 用新费率替换旧费率\n df_rate = pd.concat([df_rate, df_recent])\n df_rate.drop_duplicates(\n ['exchange', 'symbol', 'funding_time'], \n keep='last', \n inplace=True)\n df_rate.sort_values(\n ['exchange', 'symbol', 'funding_time'], \n inplace=True, \n ignore_index=True)\n\n # 重新计算统计量\n df_stat = calc_stat(df_rate)\n\n # 将时间戳转化为 ISO 字符串,不然 grafana 
无法识别\n df_rate['funding_time'] = df_rate['funding_time'].dt.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n df_stat['funding_time'] = df_stat['funding_time'].dt.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\")\n\n # 写入 sqlite,覆盖掉老数据\n with sqlite3.connect(DATABASE_PATH) as conn:\n df_stat.to_sql('funding_fee_stat', conn, index=False, if_exists='replace')\n df_rate.to_sql('funding_fee_rate', conn, index=False, if_exists='replace')\n\n\ndef fetch_funding_fee_history(gw):\n \"\"\"\n 对给定交易所,获取历史和当期资金费\n \"\"\"\n symbols = gw.get_swap_symbols() # 获取交易所永续合约ID\n data = []\n for symbol in symbols: # 遍历合约,保存资金费率为 DataFrame\n data.append(pd.DataFrame(gw.get_swap_funding_fee_rate_history(symbol)))\n return pd.concat(data) # 合成为一个大 DataFrame 表\n\n\ndef calc_stat(df):\n df_stat = df.groupby(['exchange', 'symbol']).agg({\n 'funding_time': 'last', \n 'rate': 'last'\n })\n\n # 计算年化费率的 lambda 函数, 8 小时付息一次,则一年付息 365 * 3 次\n ann_rate = lambda x: (1 + x)**(365 * 3) - 1\n\n # 计算平均 n 天收益率的 lambda 函数,x 为费率 Series\n avg_rate = lambda n: lambda x: (1 + x.tail(3 * n)).prod()**(1 / 3 / n) - 1\n\n df_stat['annual'] = ann_rate(df_stat['rate']) # 当期年化\n\n # 3日平均与年化\n df_stat['avg_3d'] = df.groupby(['exchange', 'symbol'])['rate'].apply(avg_rate(3)) \n df_stat['annual_3d'] = ann_rate(df_stat['avg_3d'])\n\n # 7日平均与年化\n df_stat['avg_7d'] = df.groupby(['exchange', 'symbol'])['rate'].apply(avg_rate(7)) \n df_stat['annual_7d'] = ann_rate(df_stat['avg_7d'])\n\n df_stat.reset_index(inplace=True)\n\n # symbol 为 -USDT 后缀的为 USDT 本位合约,为 -USD 后缀的为币本位合约\n df_stat['type'] = df_stat['symbol'].str.split('-').str[1]\n df_stat.loc[df_stat['type'] == 'USD', 'type'] = 'Coin'\n return df_stat\n\n\nif __name__ == '__main__':\n fire.Fire(FundingFeeTask)\n"
}
] | 9 |
onezerobinary/kohonenMap | https://github.com/onezerobinary/kohonenMap | d36d6dd732a1240340173a9582d3205ec720ff76 | b089d8fe5f7dd7d956e340e8f6464b94b4be9872 | c794a364962a99d1c3583bf1185278b174869386 | refs/heads/master | 2021-04-21T04:59:44.219826 | 2020-03-23T16:02:09 | 2020-03-23T16:02:09 | null | 0 | 0 | null | null | null | null | null | [
{
"alpha_fraction": 0.5926460027694702,
"alphanum_fraction": 0.6051430106163025,
"avg_line_length": 33.09836196899414,
"blob_id": "03e3e189756f4338b4c723ebaff86e7a776c08bd",
"content_id": "89d8482393c01a01ab4662a55fd15c1dfb826533",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4161,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 122,
"path": "/iris.py",
"repo_name": "onezerobinary/kohonenMap",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nfrom sklearn import datasets\nfrom minisom import MiniSom\n\nfrom sklearn.metrics import classification_report\n\n# import iris dataset\niris = datasets.load_iris()\ndata = iris.data\nlabels = iris.target\n# data normalization\ndata = np.apply_along_axis(lambda x: x/np.linalg.norm(x), 1, data)\n\ngrid_dim = 7\nsom = MiniSom(grid_dim, grid_dim, 4, sigma=3, learning_rate=0.5,\n neighborhood_function='triangle', random_seed=10)\n\n# ==================\n# TRAIN\n# ==================\nsom.pca_weights_init(data)\nprint(\"Training...\")\nsom.train_batch(data, grid_dim**2*500, verbose=True) # random training\nprint(\"\\n...done!\")\n\n# =======================================================\n# VISUALIZATION\n# U-Matrix with distance map as backgroud.\n# =======================================================\n\n# use different colors and markers for each label TEST\nplt.figure(figsize=(grid_dim,grid_dim))\nplt.pcolor(som.distance_map().T, cmap='bone_r') # plotting the distance map as background\n# use different colors and markers for each label\nmarkers = ['o', 's', 'D']\ncolors = ['C0', 'C1', 'C2']\nfor x, y in zip(data, labels):\n w = som.winner(x) # getting the winner\n # palce a marker on the winning position for the sample xx\n plt.plot(w[0]+.5, w[1]+.5, markers[y], markerfacecolor='None',\n markeredgecolor=colors[y], markersize=12, markeredgewidth=2)\nplt.axis([0, grid_dim, 0, grid_dim])\nplt.colorbar()\nplt.title('Triangle')\nplt.savefig('PLOTS/som_iris_triangle.png')\nplt.show()\nplt.close()\n\n\n# =================================================================================================================\n# ERROR\n# The quantization error: average distance between each data vector and its BMU.\n# The topographic error: the proportion of all data vectors for which first and second BMUs are not adjacent units.\n# 
=================================================================================================================\nmax_iter = 10**4\nq_error = []\nt_error = []\niter_x = []\nfor i in range(max_iter):\n percent = 100 * (i + 1) / max_iter\n rand_i = np.random.randint(len(data)) # This corresponds to train_random() method.\n som.update(data[rand_i], som.winner(data[rand_i]), i, max_iter)\n if (i + 1) % 100 == 0:\n q_error.append(som.quantization_error(data))\n t_error.append(som.topographic_error(data))\n iter_x.append(i)\n #sys.stdout.write(f'\\riteration={i:2d} status={percent:0.2f}%')\n\nplt.plot(iter_x, q_error)\nplt.ylabel('Quantization error')\nplt.xlabel('iteration index')\nplt.grid(linestyle='--', linewidth=.4, which=\"both\")\nplt.title('Triangle')\nplt.savefig('PLOTS/quant_error_iris_triangle.png')\nplt.show()\nplt.close()\n\nplt.plot(iter_x, t_error)\nplt.ylabel('Topological error')\nplt.xlabel('iteration index')\nplt.grid(linestyle='--', linewidth=.4, which=\"both\")\nplt.title('Triangle')\nplt.savefig('PLOTS/top_error_iris_triangle.png')\nplt.show()\nplt.close()\n\n# ==================================\n# CLASSIFICATION\n# ==================================\n\ndef classify(som, data, class_assigments):\n \"\"\"Classifies each sample in data in one of the classes definited\n using the method labels_map.\n Returns a list of the same length of data where the i-th element\n is the class assigned to data[i].\n \"\"\"\n winmap = class_assigments\n default_class = np.sum(list(winmap.values())).most_common()[0][0]\n result = []\n for d in data:\n win_position = som.winner(d)\n if win_position in winmap:\n result.append(winmap[win_position].most_common()[0][0])\n else:\n print('NON PRESENT')\n result.append(default_class)\n return result\n\nclass_assigments = som.labels_map(data, labels)\ny_hat = classify(som, data, class_assigments)\n\nprint(\"******** Classification Report ********\")\nprint(classification_report(labels, y_hat))\n\ntot_err = [0 if y_hat[i] == 
labels[i] else 1 for i in range(len(y_hat))]\ntot_err = round(np.sum(tot_err)/len(y_hat), 2)\n\nprint(f\"Grid dimension (Triangle)= {(grid_dim, grid_dim)}\")\nprint(f\"Classification Error: {round(tot_err*100, 2)} %\")\n\n"
},
{
"alpha_fraction": 0.6226066946983337,
"alphanum_fraction": 0.6418947577476501,
"avg_line_length": 32.57619094848633,
"blob_id": "7de1e9d76a208e4eeffb79e9241aa8f5747f6567",
"content_id": "c571359560dbe1ff4bc21aab21d2f14a0f974899",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7051,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 210,
"path": "/outliers.py",
"repo_name": "onezerobinary/kohonenMap",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report\n\nfrom minisom import MiniSom\n\n\n# ====================================\n# INITIALIZATION\n# ====================================\ndf = pd.read_csv('~/Downloads/aust.csv')\nlabels = df['Y']\nX = df.drop(columns='Y')\nX = np.apply_along_axis(lambda x: x/np.linalg.norm(x), 1, X)\ngrid_dim = int(X.shape[1]*0.5)\nsom = MiniSom(grid_dim, grid_dim, X.shape[1], sigma=int(grid_dim/2), learning_rate=0.1,\n neighborhood_function='gaussian',\n random_seed=123)\n\nX_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=123)\n\n\n# ======================\n# TRAIN\n# ======================\nsom.pca_weights_init(X_train)\nprint(\"Training...\")\nsom.train_batch(X_train, grid_dim**2*500, verbose=True) # random training\nprint(\"\\n...done!\")\n#som.train_random(X, 7*7*600, verbose=True)\n\n\n# use different colors and markers for each label\nplt.figure(figsize=(grid_dim, grid_dim))\nplt.pcolor(som.distance_map().T, cmap='bone_r') # plotting the distance map as background\nmarkers = ['*', 'D']\ncolors = ['C0', 'C1']\nfor x, y in zip(X_train, y_train):\n w = som.winner(x) # getting the winner coordinates\n # place a marker on the winning position for the sample xx\n i = 0 if y == 1 else 1\n plt.plot(w[0]+.5, w[1]+.5, markers[i], markerfacecolor='None',\n markeredgecolor=colors[i], markersize=12, markeredgewidth=2)\nplt.axis([0, grid_dim, 0, grid_dim])\nplt.colorbar()\nplt.savefig('PLOTS/som_labels_train_aust.png')\nplt.show()\nplt.close()\n\n# train classification plot\nplt.figure(figsize=(grid_dim, grid_dim))\nplt.pcolor(som.distance_map().T, cmap='bone_r') # plotting the distance map as background\n#wmap = {}\nsample = 0\nwmap = som.labels_map(X, labels)\ndefault_class = np.sum(list(wmap.values())).most_common()[0][0]\nfor x, y in zip(X_train, y_train):\n w = 
som.winner(x)\n #wmap[w] = sample\n if w in wmap:\n label = wmap[w].most_common()[0][0]\n else:\n print('plot NON PRESENT')\n label = default_class\n plt.text(w[0]+.5, w[1]+.5, str(label),\n color = plt.cm.rainbow(y/2), fontdict={'weight': 'bold', 'size': 11})\n sample = sample + 1\nplt.axis([0, som.get_weights().shape[0], 0, som.get_weights().shape[1]])\nplt.colorbar()\n#plt.grid(linestyle='--', linewidth=.4, which=\"both\")\nplt.savefig('som_labels__train_hat.png')\nplt.show()\nplt.close()\n\n\n\n\n# The quantization error: average distance between each data vector and its BMU.\n# The topographic error: the proportion of all data vectors for which first and second BMUs are not adjacent units.\nmax_iter = 10**4\nq_error = []\nt_error = []\niter_x = []\nfor i in range(max_iter):\n percent = 100 * (i + 1) / max_iter\n rand_i = np.random.randint(len(X)) # This corresponds to train_random() method.\n som.update(X[rand_i], som.winner(X[rand_i]), i, max_iter)\n if (i + 1) % 100 == 0:\n q_error.append(som.quantization_error(X))\n t_error.append(som.topographic_error(X))\n iter_x.append(i)\n #sys.stdout.write(f'\\riteration={i:2d} status={percent:0.2f}%')\n\nplt.plot(iter_x, q_error)\nplt.ylabel('Quantization error')\nplt.xlabel('iteration index')\nplt.grid(linestyle='--', linewidth=.4, which=\"both\")\nplt.savefig('PLOTS/quant_error_aust.png')\nplt.show()\nplt.close()\n\nplt.plot(iter_x, t_error)\nplt.ylabel('Topological error')\nplt.xlabel('iteration index')\nplt.grid(linestyle='--', linewidth=.4, which=\"both\")\nplt.savefig('PLOTS/top_error_aust.png')\nplt.show()\nplt.close()\n\n# classification report\ndef classify(som, data, class_assigments):\n \"\"\"Classifies each sample in data in one of the classes definited\n using the method labels_map.\n Returns a list of the same length of data where the i-th element\n is the class assigned to data[i].\n \"\"\"\n winmap = class_assignments\n default_class = np.sum(list(winmap.values())).most_common()[0][0]\n result = []\n 
for d in data:\n win_position = som.winner(d)\n if win_position in winmap:\n result.append(winmap[win_position].most_common()[0][0])\n else:\n print('NON PRESENT')\n result.append(default_class)\n return result\n\n\n# ======================\n# TEST\n# ======================\n\nclass_assignments = som.labels_map(X_train, y_train)\ny_hat_train = classify(som, X_train, class_assignments)\ny_hat = classify(som, X_test, class_assignments)\nprint(\"******** Test Classification Report ********\")\nprint(classification_report(y_test, y_hat))\n\n# use different colors and markers for each label TEST\nplt.figure(figsize=(grid_dim,grid_dim))\nplt.pcolor(som.distance_map().T, cmap='bone_r') # plotting the distance map as background\nmarkers = ['o', 'D']\ncolors = ['C0', 'C1']\nwmap = som.labels_map(X_test, y_test)\nfor x in X_test:\n w = som.winner(x) # getting the winner coordinates\n # place a marker on the winning position for the sample xx\n label = wmap[w].most_common()[0][0]\n i = 0 if label == -1 else 1\n plt.plot(w[0]+.5, w[1]+.5, markers[i], markerfacecolor='None',\n markeredgecolor=colors[i], markersize=12, markeredgewidth=2)\nplt.axis([0, grid_dim, 0, grid_dim])\nplt.colorbar()\nplt.savefig('PLOTS/som_labels_test_aust.png')\nplt.show()\nplt.close()\n\ny_test = y_test.to_list()\ny_train = y_train.to_list()\n\ntot_err_train = [0 if y_train[i] == y_hat_train[i] else 1 for i in range(len(y_hat_train))]\ntot_err_train = round(np.sum(tot_err_train)/len(y_hat_train), 2)\n\ntot_err_test = [0 if y_test[i] == y_hat[i] else 1 for i in range(len(y_hat))]\ntot_err_test = round(np.sum(tot_err_test)/len(y_hat), 2)\n\nprint(f\"Grid dimension (gaussian) = {(grid_dim, grid_dim)}\")\nprint(f\"Classification Train Error: {round(tot_err_train*100, 2)} %\")\nprint(f\"Classification Test Error: {round(tot_err_test*100, 2)} %\")\n\n# ===========================\n# UNSUPERVISED WAY\n# ===========================\n\nsom.pca_weights_init(X)\nprint(\"Training...\")\n#som.train_batch(X, 
grid_dim**2*800, verbose=True) # random training\nprint(\"\\n...done!\")\nsom.train_random(X, grid_dim**2*1000, verbose=True)\n\n\n# use different colors and markers for each label\nplt.figure(figsize=(grid_dim, grid_dim))\nplt.pcolor(som.distance_map().T, cmap='bone_r') # plotting the distance map as background\nmarkers = ['*', 'D']\ncolors = ['C0', 'C1']\nfor x, y in zip(X, labels):\n w = som.winner(x) # getting the winner coordinates\n # place a marker on the winning position for the sample xx\n i = 0 if y == 1 else 1\n plt.plot(w[0]+.5, w[1]+.5, markers[i], markerfacecolor='None',\n markeredgecolor=colors[i], markersize=12, markeredgewidth=2)\nplt.axis([0, grid_dim, 0, grid_dim])\nplt.colorbar()\nplt.savefig('PLOTS/som_labels_aust.png')\nplt.show()\nplt.close()\n\nclass_assignments = som.labels_map(X, labels)\ny = classify(som, X, class_assignments)\n\ntot_err = [0 if labels[i] == y[i] else 1 for i in range(len(y))]\ntot_err = round(np.sum(tot_err)/len(y), 2)\n\nprint(f\"Classification total Error: {round(tot_err*100, 2)} %\")\n"
},
{
"alpha_fraction": 0.5583239197731018,
"alphanum_fraction": 0.5764439702033997,
"avg_line_length": 24.257143020629883,
"blob_id": "ec3697c66322a5b5be9d3750a47dba1ddc662dbd",
"content_id": "4c2ad40dbe7ccadd55441c6e9ee2090f4d7fefe8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 883,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 35,
"path": "/kohonen.py",
"repo_name": "onezerobinary/kohonenMap",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\n\nclass Kohonen():\n\n \"\"\"Two-dimensional Self-Organizing Map\"\"\"\n\n def __init__(self, lattice, tau_1, tau_2, eta_0, sigma_0):\n\n self.lattice = lattice # matrix of dimension mxn\n self.m = lattice.shape[0]\n self.n = lattice.shape[1]\n self.tau_sig = float(tau_1) # constant time in sigma(n)\n self.tau_eta = float(tau_2) # constant time in eta(n)\n self.eta_0 = float(eta_0)\n self.sigma_0 = float(sigma_0)\n\n def init_weights(self):\n\n self.lattice = np.random.randn(self.m, self.n)\n return self\n\n def eta(self, n):\n\n\n eta = self.eta_0*np.exp(-1*(n /self.tau_eta))\n return eta\n\n def lateral_distance(self, winner_vec, act_vec):\n\n r_i = self.lattice[winner_vec]\n\n d = np.linalg.norm(self.lattice[])\n def sigma(self, n):\n sigma = self.sigma_0*np.exp(-1*())"
},
{
"alpha_fraction": 0.7492904663085938,
"alphanum_fraction": 0.7672658562660217,
"avg_line_length": 32.03125,
"blob_id": "91b82a659f6a96cb82926d17540d3b5cd8988cad",
"content_id": "78e47ab547d8f838493bc5ac9ed3dad60012b6a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1057,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 32,
"path": "/iris_sompy.py",
"repo_name": "onezerobinary/kohonenMap",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom sompy.sompy import SOMFactory\nfrom sklearn import datasets\n\n# import iris dataset\niris = datasets.load_iris()\ndata = iris.data\nlabels = iris.target\n\n\n\n# initialization SOM\nsm = SOMFactory().build(data, normalization='var', initialization='pca')\nsm.train(n_job=1, verbose=True, train_rough_len=2, train_finetune_len=5)\n\n\n# The quantization error: average distance between each data vector and its BMU.\n# The topographic error: the proportion of all data vectors for which first and second BMUs are not adjacent units.\ntopographic_error = sm.calculate_topographic_error()\nquantization_error = np.mean(sm._bmu[1])\nprint (\"Topographic error = %s; Quantization error = %s\" % (topographic_error, quantization_error))\n\n# component planes view\nfrom sompy.visualization.mapview import View2D\nview2D = View2D(10,10,\"rand data\",text_size=12)\nview2D.show(sm, col_sz=4, which_dim=\"all\", desnormalize=True)\n\n# U-matrix plot\nfrom sompy.visualization.umatrix import UMatrixView\n\numat = UMatrixView(width=10,height=10,title='U-matrix')\numat.show(sm)\n"
}
] | 4 |