Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
43,781,633 | 2017-05-04T11:28:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tensorflow,scipy,64-bit | 43,782,019 | 2 | false | 0 | 0 | Did you try using Anaconda or similar for the installation? - if this is an option in your case I would highly recommend it under Windows. | 2 | 1 | 1 | I'm trying to install Python 3.5 both 32 & 64 bit and also be able to transfer between the two as needed, but am not having any luck. Scipy will only install when I use the 32bit (Various issues when trying to install 64bit version even with physical .whl files).
Meanwhile Tensorflow only works on x64.
I'm using windows 7, and have tried various solutions I've found on Stackoverflow, but have had no luck.
Also, was thinking of just dual installing linux mint and running python off there. Would any of you recommend?
Thank you! | Python 3.5 32 & 64 bit -- Scipy & Tensorflow issues | 0 | 0 | 0 | 384 |
43,781,633 | 2017-05-04T11:28:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,tensorflow,scipy,64-bit | 43,782,112 | 2 | false | 0 | 0 | Installing Linux (highly recommended)
As for your issue with Scipy & Tensorflow:
Why don't you use the Anaconda installer for Windows x64 (it contains almost all major packages for Data Science already, Scipy too), and then install Tensorflow | 2 | 1 | 1 | I'm trying to install Python 3.5 both 32 & 64 bit and also be able to transfer between the two as needed, but am not having any luck. Scipy will only install when I use the 32bit (Various issues when trying to install 64bit version even with physical .whl files).
Meanwhile Tensorflow only works on x64.
I'm using windows 7, and have tried various solutions I've found on Stackoverflow, but have had no luck.
Also, was thinking of just dual installing linux mint and running python off there. Would any of you recommend?
Thank you! | Python 3.5 32 & 64 bit -- Scipy & Tensorflow issues | 0 | 0 | 0 | 384 |
43,782,851 | 2017-05-04T12:24:00.000 | 0 | 1 | 0 | 0 | python,email,imap | 43,785,616 | 3 | false | 0 | 0 | Avoid reacting to messages whose Return-Path isn't in From, or that contain an Auto-Submitted or X-Loop header field, or that have a bodypart with type multipart/report.
You may also want to specify Auto-Submitted: auto-generated on your outgoing mail. I expect that if you do as Max says that'll take care of the problem, but Auto-Submitted isn't expensive. | 1 | 2 | 0 | How do you detect "bounced" email replies and other automated responses for failed delivery attempts in Python?
I'm implementing a simple server to relay messages between email and comments inside a custom web application. Because my comment model supports a "reply to all" feature, if two emails in a comment thread become invalid, there would possibly be an infinite email chain where my system would send out an email, get a bounceback email, relay this to the other invalid email, get a bounceback email, relay this back to the first, ad infinitum.
I want to avoid this. Is there a standard error code used for bounced or rejected emails that I could check for, ideally with Python's imaplib package? | How to detect bounce emails | 0 | 0 | 1 | 2,370 |
43,785,311 | 2017-05-04T14:12:00.000 | 0 | 0 | 1 | 0 | python-2.7,floating-point,decimal,odoo-8 | 44,043,217 | 1 | false | 0 | 0 | I have'nt found a proper answer for this issue anywhere.
Either we need to use double precision which it is not possible in odoo or we have to convert to string and get the exponential length.
I have chosen second one.
Thank you | 1 | 1 | 0 | I have a floating point field in one of my form, consider it as field_x. Based on that field_x i have some computation.
After all if field_x have n digits after decimal result also should have n digits.
For example:
field_x = 0.00000001(n digits after decimal)
result = some calculations
if result = 22
i have to display it as 22.00000000(n digits after decimal)
len(str(number - int(number))[1:]) gives the answer.
Here the number can be 0.00101, 0.110, 0.787, etc.
But for some values like 0.000001 it gives an incorrect answer | Python:computing the number of digits after decimal gives wrong answer | 0 | 0 | 0 | 326 |
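Illustrative sketch for the record above (not part of the original question or answer): counting decimal places from the string form of the value with the decimal module sidesteps the binary-float artifact the asker hits (str(0.000001) becomes '1e-06'). It assumes the field value is available as a string.

```python
# Hypothetical helpers, not from the original answer: count decimal places
# from the string form of the value, avoiding float artifacts like
# str(0.000001) == '1e-06'.
from decimal import Decimal

def decimal_places(value_str):
    """Number of digits after the decimal point in value_str."""
    exponent = Decimal(value_str).as_tuple().exponent
    return max(0, -exponent)

def format_like(result, field_x_str):
    """Format result with as many decimals as field_x_str carries."""
    return "{:.{prec}f}".format(result, prec=decimal_places(field_x_str))

print(decimal_places("0.00000001"))   # 8
print(format_like(22, "0.00000001"))  # 22.00000000
```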
43,785,929 | 2017-05-04T14:39:00.000 | 0 | 0 | 0 | 0 | python,linux,selenium,background-process | 43,824,917 | 1 | false | 0 | 0 | I met a similar situation recently. It looks like the antivirus software on my laptop interrupted the geckodriver in the middle. I turned off that software and now it keeps running. | 1 | 2 | 0 | I am running a python script on a linux server 4.9 kernel. This script uses selenium to open a firefox instance and load some website. The script is supposed to be running for days. Since I am running the process over ssh, I have tried both screen and nohup, but the process just stops after a few hours.
I can see the python process using top but its terminal output is just paused. I am unable to understand why this is happening. | Python selenium script stops working after a few hours | 0 | 0 | 1 | 228 |
43,787,228 | 2017-05-04T15:33:00.000 | 1 | 0 | 0 | 0 | python,ssis,web-scraping | 43,787,287 | 1 | true | 0 | 0 | You can run a Python script from within SSIS by calling the .py script file from an Execute Process Task. That being said, the server where this is run needs to have Python installed. | 1 | 0 | 0 | Can we run Python web-scraping code inside SSIS? If yes, what is the effect of using Beautiful Soup & Selenium? Which one is preferred? Is there a better way to run this?
My requirement is to get the data from the website using a Python script and store it in a table every time I run the package. | Python WebScraping Script inside SSIS Package during ETL | 1.2 | 0 | 1 | 507 |
43,787,699 | 2017-05-04T15:56:00.000 | 1 | 1 | 0 | 1 | python,amazon-web-services,amazon-ec2,oauth-2.0,google-analytics-api | 48,089,379 | 1 | false | 0 | 0 | I am not sure why this is happening, But I have a list of steps which might help you.
1. Check if this issue is caused by the Google Analytics API version; Google generally deprecates previous versions of their API.
2. I am guessing that you are running this code with cron on your EC2 server; make sure that you include the path to the folder where the .dat file is.
3. Check whether you have the latest credentials in the .dat file.
Authentication to the API will happen through the .dat file.
Hope this solves your issue. | 1 | 6 | 0 | I have an AWS EC2 machine that has been running nightly google analytics scripts to load into a database. It has been working fine for months until this weekend. I have not made any changes to the code.
These are the two errors that are showing up in my logs:
/venv/lib/python3.5/site-packages/oauth2client/_helpers.py:256: UserWarning: Cannot access analytics.dat: No such file or directory
warnings.warn(_MISSING_FILE_MESSAGE.format(filename))
Failed to start a local webserver listening on either port 8080
or port 8090. Please check your firewall settings and locally
running programs that may be blocking or using those ports.
Falling back to --noauth_local_webserver and continuing with
authorization.
It looks like it is missing my analytics.dat file but I have checked and the file is in the same folder as the script that calls the GA API. I have been searching for hours trying to figure this out but there are very few resources on the above errors for GA.
Does anyone know what might be going on here? Any ideas on how to troubleshoot more? | Google analytics .dat file missing, falling back to noauth_local_webserver | 0.197375 | 0 | 0 | 913 |
43,789,951 | 2017-05-04T18:08:00.000 | -1 | 0 | 0 | 1 | python,postgresql,odbc,julia | 43,872,906 | 2 | false | 0 | 0 | Config SSL Mode: allow in ODBC Driver postgres, driver version: 9.3.400 | 1 | 1 | 0 | When attempting to connect to a PostgreSQL database with ODBC I get the following error:
('08P01', '[08P01] [unixODBC]ERROR: Unsupported startup parameter: geqo (210) (SQLDriverConnect)')
I get this with two different ODBC front-ends (pyodbc for Python and ODBC.jl for Julia), so it's clearly coming from the ODBC library itself. Is there a way to stop it from passing this "geqo" parameter?
An example in pyodbc would be very useful.
Thanks. | unsupported startup parameter geqo when connecting to PostgreSQL with ODBC | -0.099668 | 1 | 0 | 913 |
43,790,494 | 2017-05-04T18:41:00.000 | 0 | 0 | 1 | 0 | python-3.x | 43,790,837 | 2 | false | 0 | 0 | I would recommend not doing anything. I don't believe in editing any password submissions, except for sanitizing to prevent security risks. | 2 | 1 | 0 | From experience I know that sometimes while copying and pasting a password into the password filed a white space is copied along with the password and this is causing errors (I don't know how common this is, but it happens). Now I'm learning Python (no previous programming experience) and came across rstrip() lstrip() and strip() method. What would be the "right" way to handle such situations ?
Any insight is highly appreciated. | white spaces while copying password into password field | 0 | 0 | 0 | 367 |
43,790,494 | 2017-05-04T18:41:00.000 | 1 | 0 | 1 | 0 | python-3.x | 43,790,613 | 2 | false | 0 | 0 | I would use strip() to strip both sides :-) I think it's very annoying when you copy paste a password and it's not accepted because you mis-copied with some extra blank characters. | 2 | 1 | 0 | From experience I know that sometimes while copying and pasting a password into the password filed a white space is copied along with the password and this is causing errors (I don't know how common this is, but it happens). Now I'm learning Python (no previous programming experience) and came across rstrip() lstrip() and strip() method. What would be the "right" way to handle such situations ?
Any insight is highly appreciated. | white spaces while copying password into password field | 0.099668 | 0 | 0 | 367 |
43,791,236 | 2017-05-04T19:24:00.000 | 5 | 0 | 0 | 0 | python-3.x,amazon-s3,boto,boto3 | 43,791,579 | 1 | true | 1 | 0 | There is no way to append data to an existing object in S3. You would have to grab the data locally, add the extra data, and then write it back to S3. | 1 | 5 | 0 | I know how to write and read from a file in S3 using boto. I'm wondering if there is a way to append to a file without having to download the file and re-upload an edited version? | Appending to a text file in S3 | 1.2 | 0 | 1 | 4,234 |
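A minimal read-modify-write sketch of the answer above, using boto3 rather than the older boto the asker mentions; the bucket and key names are placeholders.

```python
# Read-modify-write sketch with boto3; bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "logs/app.txt"

# 1. Download the existing object (empty string if it does not exist yet).
try:
    existing = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
except s3.exceptions.NoSuchKey:
    existing = ""

# 2. "Append" locally, then 3. overwrite the whole object.
updated = existing + "new line of text\n"
s3.put_object(Bucket=bucket, Key=key, Body=updated.encode("utf-8"))
```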
43,792,282 | 2017-05-04T20:31:00.000 | 1 | 0 | 0 | 0 | python,eve | 43,792,616 | 1 | false | 0 | 0 | Nevermind... found it... (ALLOW_UNKNOWN) | 1 | 0 | 0 | I have some pretty big, multi level, documents with LOTS of fields (over 1500 fields). While I want to save the whole document in mongo, Ido not want to define the whole schema. Only a handful of fields are important. I also need to index those "important" fields. Is this something that can be done?
Thank you | Is it possible to define a partial schema Python-eve? | 0.197375 | 1 | 0 | 96 |
43,793,905 | 2017-05-04T22:46:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing,distributed-computing,file-sharing | 43,793,932 | 1 | true | 0 | 0 | The best you can do here, all very abstract really by the way, is to share the Queue(), and each worker ask if "has something to do, else they sleep (lock) | 1 | 1 | 0 | Master:
It has multiple files to which it needs to apply the same function, so I considered each file as a Job and put them in a Queue().
Worker:
Each process gets a job from the shared queue, processes the file in it, and returns the processed file.
My question is :
Do I have to send the file from master to worker or just share it with the Queue() ?
for information : the file here is a video sequence. | Using Queue() to share files in a distributed application with python | 1.2 | 0 | 0 | 526 |
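A single-machine multiprocessing sketch of the pattern the answer describes: the master puts file paths (not the file contents) on a shared queue and workers pull jobs from it. In a truly distributed setup the queue would live in a remote/managed service, but the job flow is the same; names and process_video() are illustrative.

```python
# Master puts file paths on a shared queue; workers pull paths and do the
# per-file work themselves. process_video() and the file names are placeholders.
import multiprocessing as mp

def process_video(path):
    return "processed:" + path  # stand-in for the real per-file work

def worker(jobs, results):
    while True:
        path = jobs.get()
        if path is None:          # sentinel: no more work
            break
        results.put(process_video(path))

if __name__ == "__main__":
    jobs, results = mp.Queue(), mp.Queue()
    files = ["clip1.avi", "clip2.avi", "clip3.avi"]
    workers = [mp.Process(target=worker, args=(jobs, results)) for _ in range(2)]
    for w in workers:
        w.start()
    for f in files:
        jobs.put(f)
    for _ in workers:             # one sentinel per worker
        jobs.put(None)
    for w in workers:
        w.join()
    while not results.empty():
        print(results.get())
```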
43,796,569 | 2017-05-05T04:32:00.000 | 0 | 0 | 1 | 0 | python-2.7,intellij-idea,pyspark | 47,936,661 | 4 | false | 0 | 0 | Click on edit configuration
Click on environment variables
Add these variables
PYTHONPATH = %SPARK_HOME%\python;%SPARK_HOME%\python\build;%PYTHONPATH%
PYSPARK_SUBMIT_ARGS = --master local[2] pyspark-shell
SPARK_HOME = <spark home path>
SPARK_CONF_DIR = %SPARK_HOME%\conf
SPARK_LOCAL_IP = 127.0.0.1 | 2 | 0 | 0 | How to set up pySpark on intellij. Even after setting the environment variables spark_home and pythonpath, import pySpark is giving error - Import error : No module named pySpark | setup pySpark on intellij | 0 | 0 | 0 | 6,672 |
43,796,569 | 2017-05-05T04:32:00.000 | 0 | 0 | 1 | 0 | python-2.7,intellij-idea,pyspark | 43,887,989 | 4 | false | 0 | 0 | Go to File -> Settings
Look for Project Structure
Click on Add Content Root and add $SPARK_HOME/python
After this, your editor will look in the Spark's python directory for source files. | 2 | 0 | 0 | How to set up pySpark on intellij. Even after setting the environment variables spark_home and pythonpath, import pySpark is giving error - Import error : No module named pySpark | setup pySpark on intellij | 0 | 0 | 0 | 6,672 |
43,796,774 | 2017-05-05T04:53:00.000 | 1 | 0 | 0 | 1 | python,python-3.x,dask | 46,379,569 | 2 | false | 0 | 0 | Network solution :
Under Windows only, it should work with a shared folder: dd.read_csv("\\server\shared_dir")
Under Unix/Linux only, it should work with HDFS: import hdfs3 and then hdfs.read_csv('/server/data_dir'...)
But if you want to use Windows AND Linux workers at the same time I don't know since dd.read_csv() with UNC does not seem to be supported under Linux (because of the file path '\server\data_dir') and HDFS with hdfs.read_csv is not working under Windows (import hdfs3 failed because the lib libhdfs3.so doesn't exist under Windows)
Does anyone have a Network solution for workers under Windows and Unix ? | 1 | 6 | 1 | A bit of a beginner question, but I was not able to find a relevant answer on this..
Essentially my data (about 7 GB) is located on my local machine. I have a distributed cluster running on the local network. How can I get this file onto the cluster?
The usual dd.read_csv() or read_parquet() fails, as the workers aren't able to locate the file in their own environments.
Would I need to manually transfer the file to each node in the cluster?
Note: Due to admin restrictions I am limited to SFTP... | Loading local file from client onto dask distributed cluster | 0.099668 | 0 | 0 | 2,868 |
43,798,377 | 2017-05-05T06:54:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn | 43,798,752 | 4 | false | 0 | 0 | Can't get your point as OneHotEncoder is used for nominal data, and StandardScaler is used for numeric data. So you shouldn't use them together for your data. | 1 | 24 | 1 | I'm confused because it's going to be a problem if you first do OneHotEncoder and then StandardScaler because the scaler will also scale the columns previously transformed by OneHotEncoder. Is there a way to perform encoding and scaling at the same time and then concatenate the results together? | One-Hot-Encode categorical variables and scale continuous ones simultaneouely | 0 | 0 | 0 | 26,114 |
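The answer above does not show it, but what the question asks for is exactly what scikit-learn's ColumnTransformer (available in scikit-learn 0.20 and later) does: apply OneHotEncoder to the categorical columns, StandardScaler to the continuous ones, and concatenate the results. A hedged sketch with placeholder column names:

```python
# Not part of the answer above: ColumnTransformer (scikit-learn >= 0.20)
# applies different preprocessing per column group and concatenates the
# outputs. Column names are placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 32, 47],
    "income": [40000.0, 52000.0, 71000.0],
    "city": ["NY", "LA", "NY"],
})

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age", "income"]),               # scale continuous
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),  # encode categorical
])

X = preprocess.fit_transform(df)
print(X.shape)  # (3, 4): 2 scaled numeric columns + 2 one-hot columns
```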
43,799,270 | 2017-05-05T07:44:00.000 | 2 | 0 | 0 | 0 | python,nlp,spacy | 43,799,855 | 1 | false | 0 | 0 | spaCy's tokenizer is non-destructive, so you can always find your way back to the original string -- text[token.idx : token.idx + len(token)] will always get you the text of the token.
So, you should never need to embed non-linguistic metadata within the text, and then tell the statistical model to ignore it.
Instead, make the metadata a standoff annotation, that holds a character start and end point. You can always make a labelled Span object after the doc is parsed for your paragraphs.
Btw, in order to keep the alignment, spaCy does have tokens for significant whitespace. This sometimes catches people out. | 1 | 0 | 0 | I'm running rather long documents through Spacy, and would like to retain position markers of paragraphs in the Spacy doc but ignore them in the parse. I'm doing this to avoid creating a lot of different docs for all the paragraphs.
Example using XPath:
\\paragraph[@id="ABC"] This is a test sentence in paragraph ABC
I'm looking for a bit of direction here. Do I need to add entities/types or implement a customized tokenizer? Can I use the matcher with a callback function to affect that specific token?
Your Environment
Installed models: en
Python version: 3.4.2
spaCy version: 1.8.1
Platform: Linux-3.16.0-4-686-pae-i686-with-debian-8.6 | Spacy: Retain position markers in string, ignore them in Spacy | 0.379949 | 0 | 1 | 837 |
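A small sketch of the standoff-annotation idea from the answer: parse only the clean text, keep the paragraph ID and its character offsets on the side, and recover the tokens afterwards via token.idx, which the answer itself references. The offsets and paragraph id below are illustrative.

```python
# Standoff annotation sketch: metadata is kept outside the text and mapped
# back onto tokens through character offsets. Offsets/id are illustrative.
import spacy

nlp = spacy.load("en")  # model name may differ by spaCy version

text = "This is a test sentence in paragraph ABC. Another sentence follows."
paragraphs = [{"id": "ABC", "start": 0, "end": 41}]  # standoff metadata

doc = nlp(text)
for para in paragraphs:
    # Collect the tokens whose character offsets fall inside the paragraph.
    tokens = [t for t in doc
              if t.idx >= para["start"] and t.idx + len(t) <= para["end"]]
    print(para["id"], "->", " ".join(t.text for t in tokens))
```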
43,804,250 | 2017-05-05T11:51:00.000 | 1 | 0 | 0 | 0 | linux,windows,python-3.x,tensorflow | 43,805,354 | 2 | true | 0 | 0 | No, the model will be exactly the same. You'll only have to make sure that your TF versions on Linux and Windows are compatible ones, but this is not made more difficult by the different OS, it's only a matter of which versoin you install on which device. | 2 | 1 | 1 | Will the performance of the model be affected if I train the data on LINUX system and then use that model in the WINDOWS application or a python script? | Tensor-flow training and Testing on different OS | 1.2 | 0 | 0 | 496 |
43,804,250 | 2017-05-05T11:51:00.000 | 0 | 0 | 0 | 0 | linux,windows,python-3.x,tensorflow | 43,805,554 | 2 | false | 0 | 0 | Akshay, as gdelab said, make sure about the version. If you are running the same model on same data(training and testing) again & again, it might affect but using on different OS might not. I faced the same issue. | 2 | 1 | 1 | Will the performance of the model be affected if I train the data on LINUX system and then use that model in the WINDOWS application or a python script? | Tensor-flow training and Testing on different OS | 0 | 0 | 0 | 496 |
43,807,305 | 2017-05-05T14:17:00.000 | 12 | 0 | 1 | 0 | python,pycharm | 43,836,532 | 11 | false | 0 | 0 | The key combination you are looking for is Ctrl + Shift + F10. This will run the current script with current being the one displayed in the viewer. | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | 1 | 0 | 0 | 40,781 |
43,807,305 | 2017-05-05T14:17:00.000 | -1 | 0 | 1 | 0 | python,pycharm | 64,573,565 | 11 | false | 0 | 0 | When installing PyCharm, select:
Add launchers dir to the PATH: Allows running this PyCharm instance from the Console without specifying the path to it.
:-) | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | -0.01818 | 0 | 0 | 40,781 |
43,807,305 | 2017-05-05T14:17:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 56,196,686 | 11 | false | 0 | 0 | To run/execute the current Python file in PyCharm, use the following keys on a Windows machine:
Shift+Ctrl+F10
OR
for selected code (specific line that are selected/marked)
Shift+Alt+E | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | 0 | 0 | 0 | 40,781 |
43,807,305 | 2017-05-05T14:17:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 52,574,004 | 11 | false | 0 | 0 | Basically, if you just need to run the current .py file in PyCharm. Right-click inside the file, and you can click the "Run file.py" button, and it also tells you the shortcut which on Mac is Control + Shift + R. | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | 0 | 0 | 0 | 40,781 |
43,807,305 | 2017-05-05T14:17:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 43,807,408 | 11 | false | 0 | 0 | File->Settings->Keymap->Run->Run and see your current keymap | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | 0.01818 | 0 | 0 | 40,781 |
43,807,305 | 2017-05-05T14:17:00.000 | 1 | 0 | 1 | 0 | python,pycharm | 43,807,403 | 11 | false | 0 | 0 | Keyboard shortcuts can be different on some machines. So you can just click right key on mouse and then "Run "(also you can select part of code and do the same) | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | 0.01818 | 0 | 0 | 40,781 |
43,807,305 | 2017-05-05T14:17:00.000 | 39 | 0 | 1 | 0 | python,pycharm | 43,857,932 | 11 | true | 0 | 0 | As it turns out, the action I was seeking is "Run context configuration" (or "Debug context configuration" for debugging). The default key binding on Windows is ctrl+shift+f10, or ctrl+option+R on Mac, as Ev. Kounis pointed out, although you can bind it to any key you like.
These settings can be found under the "Other" section in File->Settings->Keymap. The easiest way to find them is to simply use the search box. | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | 1.2 | 0 | 0 | 40,781 |
43,807,305 | 2017-05-05T14:17:00.000 | 2 | 0 | 1 | 0 | python,pycharm | 43,807,354 | 11 | false | 0 | 0 | Alt+Shift+F10 and then select the script you want to run.
After that Shift+F10 will run the last script that has been run. | 8 | 57 | 0 | How can I run the current file in PyCharm? I would like a single hotkey that will execute the current file (whether normal file, scratch file, or scratch buffer) using the project default python interpreter. I would like to NOT have to create a custom run configuration, but just launch with the default Python configuration. Is such a thing possible? | How can I run the current file in PyCharm | 0.036348 | 0 | 0 | 40,781 |
43,809,618 | 2017-05-05T16:21:00.000 | 5 | 1 | 0 | 1 | python,ubuntu,gunicorn,conda | 43,809,861 | 1 | true | 1 | 0 | So, I was right - the problem is entirely related to my own ineptitude. Rather than deleting this question, though, I'm going to answer it myself and leave it here in case any future fledgling developers run into the same problem. The issue, as it turns out, is that I was running gunicorn --bind 0.0.0.0:8000 wsgi:app in the wrong directory. After I cd into the directory containing wsgi.py, gunicorn works just fine. The takeaway: gunicorn must be run from within the directory containing wsgi.py. | 1 | 4 | 0 | I'm trying to deploy a Flask app on an EC2 instance running Ubuntu. I have my WSGI file set up, but I'm having some issues running gunicorn. At first, I installed gunicorn with sudo apt-get install gunicorn. However, it ran with the wrong version of python, and it threw import errors for each of the modules my Flask app uses. I ascertained that this was due to the fact that I use conda as an environment manager, and because installing with apt-get placed gunicorn outside of the purview virtual environment. So, I uninstalled gunicorn (sudo apt-get purge gunicorn) and reinstalled it through conda (conda install gunicorn). Now, when I run gunicorn (gunicorn --bind 0.0.0.0:8000 wsgi:app), I don't get a 50 line traceback. I do, however, get the following error: -bash: /usr/bin/gunicorn: No such file or directory. I tried uninstalling gunicorn and reinstalling with pip, but I still get the same error. I've tried searching Google and StackOverflow for solutions, but all I've discovered is that I should be installing gunicorn within a virtual environment to overcome this error (which, I beleive, I'm already doing). I'm guessing there's an easy fix to this, and that the problem is related to my ineptitude, as opposed to conda or something else. Any suggestions would be much appreciated. Thanks. | Running gunicorn on Ubuntu in a conda environment | 1.2 | 0 | 0 | 3,262 |
43,814,526 | 2017-05-05T22:20:00.000 | 0 | 0 | 0 | 0 | git,proxy,pip,python-requests,ntlm | 44,008,516 | 1 | false | 0 | 0 | I solved this by installing Fiddler and using it as my local proxy, while Fiddler itself used the corporate proxy. | 1 | 0 | 0 | Issue
I can't get the python requests library, easy_install, or pip to work behind the corporate proxy. I can, however, get git to work.
How I got git working
I set the git proxy settings
git config --global http.proxy http ://proxyuser:[email protected]:8080
The corporate proxy server I work behind requires a user name and password and is indeed in the format
http: //username:passsword@ipaddress:port
I did not have to set the https.proxy
Things I have tried
(None of it has worked)
Environment Variables - Pip and Requests library
Method 1
$ export HTTP_PROXY="http://username:passsword@ipaddress:port"
$ export HTTPS_PROXY="http://username:passsword@ipaddress:port"
Method 2
SET HTTP_PROXY="http://username:passsword@ipaddress:port"
SET HTTPS_PROXy="HTTPS_PROXY="http://username:passsword@ipaddress:port"
I have tried both restarting after setting the proxy variables, and trying them right after setting them
Checking the variables with the 'SET' command shows that both are set correctly
Using Proxy Argument - Requests library
Creating a dictionary with the proxy information and passing it to requests.get()
proxies = {
'http': 'http: //username:passsword@ipaddress:port',
'https': 'http: //username:passsword@ipaddress:port'}
requests.get('http: //example.org', proxies=proxies)
Using Proxy Argument - pip
pip install library_name --proxy=http: //username:passsword@ipaddress:port
pip install library_name --proxy=username:passsword@ipaddress:port
Results - Requests library
Response
Response [407]
Reason
'Proxy Authorization Required'
Header Information
{'Proxy-Authenticate': 'NTLM', 'Date': 'Fri, 05 May 2017 21:49:06 GMT', 'Cache-Control': 'no-cache', 'Pragma': 'no-cache', 'Content-Type': 'text/html; charset="UTF-8"', 'Content-Length': '4228', 'Accept-Ranges': 'none', 'Proxy-Connection': 'keep-alive'}
Results - Pip
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authorization Required',))'
Note: In regards to this post, I have included a space in all my authentication 'links' between "http" and "://" because stackoverflow won't let me publish this with so many 'links'.
(I set up a new Stackoverflow account as my old account was a login via facebook thing and I can't access it from work) | Requests library & Pip Ntlm Proxy settings issues. - Python | 0 | 0 | 1 | 939 |
43,816,435 | 2017-05-06T03:30:00.000 | 0 | 0 | 1 | 1 | python,anaconda,macos-sierra | 45,470,042 | 2 | false | 0 | 0 | I was able to open the navigator using 'sudo anaconda-navigator' | 1 | 2 | 0 | Anaconda 4.3.1 can't open on macOS Sierra 10.12.4
Anaconda Navigator crashes upon launching it.
Please, help me to solve this problem
Tips for layman would be appreciated. | Anaconda 4.3.1 Navigator can't open on macOS Sierra 10.12.4 | 0 | 0 | 0 | 1,424 |
43,817,988 | 2017-05-06T07:37:00.000 | 3 | 0 | 0 | 0 | python,python-imaging-library,pillow,imaging | 43,818,002 | 1 | false | 0 | 0 | Convert the image into HSV color space. Then the croma will be determined just by the H value. So you could put a threshold only on H to get red color. | 1 | 2 | 0 | currently I am trying to detect red pixels inside a small picture (about 360*360 pixels).
The images have a broad range of red-values which is why I can't just iterate over all pixels and check for a certain rgb-value.
What would some efficient ways be to analyze such a picture to get the percentage of pixels which are perceived as red by humans?
Thanks in advance. | Filter red pixels in an image with PIL/Pillow | 0.53705 | 0 | 0 | 1,331 |
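A sketch of the HSV idea from the answer, using Pillow plus NumPy. The hue/saturation/value thresholds below are rough assumptions for "red", not calibrated values, and the file name is a placeholder.

```python
# HSV thresholding sketch; thresholds are rough assumptions, not calibrated.
from PIL import Image
import numpy as np

img = Image.open("photo.png").convert("HSV")
h, s, v = [np.asarray(c, dtype=np.float32) for c in img.split()]

# In Pillow's 8-bit HSV, hue wraps around 0/255, so red sits near both ends.
red_hue = (h < 15) | (h > 240)
red_mask = red_hue & (s > 80) & (v > 50)   # ignore washed-out / dark pixels

percentage = 100.0 * red_mask.mean()
print("red pixels: %.2f%%" % percentage)
```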
43,821,167 | 2017-05-06T13:43:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,tensorflow,conv-neural-network,image-segmentation | 43,895,291 | 1 | true | 0 | 0 | You'll want to train your NN in such as way that you'll be able to use it for prediction.
If you want to just predict the classes from the image, then all you want to send to your NN is
the original image (probably color balanced) and
predict the classes from the XML (convert that into a 1 hot class encoding)
if you want to predict the mask also, then send
the original image (probably color balanced) and
predict the mask and classes from the XML (convert that into a 1 hot class encoding)
The above objectives (just classes, or classes+mask prediction) drives the decision to store the classes or the classes + mask. | 1 | 0 | 1 | So I know for a standard convolutional neural network you can provide the neural net (NN) a file with a list of labels or simply separate your classes by folders but for instance segmentation I imagine it's different right?
For example using a site like labelme2 you can annotate and segment images and then download them along with mask files and XML files for labels. Does one need to subsequently input the original image, mask image and XML file to the instance segmentation NN?
Thanks in advance. | How does one input images and labels for Semantic Instance Segmentation with neural networks? | 1.2 | 0 | 0 | 667 |
43,824,092 | 2017-05-06T18:39:00.000 | 1 | 0 | 1 | 0 | python,string,algorithm,cryptography,checksum | 43,831,767 | 1 | false | 0 | 0 | If your hashes are 32 bit integers, then you have 2^32 possible hash codes. A 20 character ASCII string has 7 x 20 = 140 bits minimum, 8 x 20 = 160 bits if you are working in bytes. Original ASCII is a 7-bit code, hence the difference.
You cannot fit 140 bits into 32 bits without duplicating some hash values.
A unique checksum for 20 ASCII character strings would need a minimum of 140 bits, probably more like 160 bits. | 1 | 0 | 0 | To save space in the executable, I want to compute checksums (or hashes) over ASCII string and later use a checksum to look the corresponding string.
This saves space, since I don't have to fill up the executable with ASCII strings; instead only, say 32-bit integers are stored instead.
Now, for this idea to work, I need a checksum algorithm that is able to compute unique checksums for strings up to N characters. Because, most of the strings are identifiers, N=20 would be acceptable.
Does anyone know of a checksum algorithm that satisfies my criteria?
Theory: Since a checksum algorithm maps {0,1}^* -> {0,1}^m an infinite number of collisions exist in general. However, here I consider only strings of up to N characters, so checksum (compress) algorithms mapping {0,1}^N -> {0,1}^m, with N<=m, are guaranteed to exist without collisions (injective). | Looking for a simple checksum (or hash) algorithm with no collisions for ASCII strings up to N characters | 0.197375 | 0 | 0 | 625 |
43,824,204 | 2017-05-06T18:51:00.000 | 0 | 0 | 1 | 1 | python,dulwich | 46,393,625 | 1 | false | 0 | 0 | You can "stage" a file that no longer exists, which will remove it. Alternatively, there is also a dulwich.porcelain.remove function that provides the equivalent of git rm (i.e. removes the file if it exists and then unversions it). | 1 | 0 | 0 | With dulwich I can stage a file using repo.stage, but how do I remove a file ?
I am looking for the equivalent of git rm | How do I remove a file from a git repository with dulwich? | 0 | 0 | 0 | 75 |
43,827,307 | 2017-05-07T02:41:00.000 | -1 | 0 | 0 | 0 | python,scrapy,scrapy-spider | 43,827,353 | 1 | false | 1 | 0 | I just found out,
in settings, set:
"RETRY_ENABLED = False" that will take care of it :) | 1 | 0 | 0 | For optimization purposes, I need to my spider to skip website that has been once timed out, and dont let scrapy que it and try it again and again.
How can this be achieved?
Thanks. | How to not allow scrapy to retry timed out websites? | -0.197375 | 0 | 1 | 117 |
43,827,536 | 2017-05-07T03:31:00.000 | 0 | 0 | 1 | 0 | python | 43,827,600 | 2 | false | 0 | 0 | If your is
1 2 3 4 5 6
7 8 9 10 11 12
....
....
then go searching line by line to find the line number and index of your number.
with open('data.txt') as f:
    content = f.readlines()
for x in range(len(content)):
    if '5' in content[x].split(' '):
        lno = x
        index = content[x].split(' ').index('5')
So now you have the index. Add the user input to the number and save it back into the file, as you have the line number and index. | 2 | 0 | 0 | I am having some trouble getting a part of my code to read a value from a text file which then can be converted to an integer and then modified by adding a user input value, then input the new value into the file. This is for a simple inventory program that keeps track of certain items.
Example:
User inputs 10 to be added to the number in the file. The number in the file is 231 so 10+231 = 241. 241 is the new number that is put in the file in place of the original number in the file. I have tried many different things and tried researching this topic, but no code I could come up with has worked. If it isn't apparent by now I am new to python. If anyone one can help it would be greatly appreciated! | Editing a value from a file in python | 0 | 0 | 0 | 50 |
43,827,536 | 2017-05-07T03:31:00.000 | 0 | 0 | 1 | 0 | python | 43,827,589 | 2 | true | 0 | 0 | the steps that you need to take are
Open the file in read mode: file = open("path/to/file", "r")
Read the file to a python string: file_str = file.read()
Convert the string to an integer: n = int(file_str)
Add 10 and convert num: num_str = str(n + 10)
Close the file: file.close()
Reopen the file in write mode: file = open("path/to/file", "w")
Write the num string to the file: file.write(num_str) | 2 | 0 | 0 | I am having some trouble getting a part of my code to read a value from a text file which then can be converted to an integer and then modified by adding a user input value, then input the new value into the file. This is for a simple inventory program that keeps track of certain items.
Example:
User inputs 10 to be added to the number in the file. The number in the file is 231 so 10+231 = 241. 241 is the new number that is put in the file in place of the original number in the file. I have tried many different things and tried researching this topic, but no code I could come up with has worked. If it isn't apparent by now I am new to python. If anyone one can help it would be greatly appreciated! | Editing a value from a file in python | 1.2 | 0 | 0 | 50 |
43,827,756 | 2017-05-07T04:15:00.000 | 0 | 0 | 1 | 0 | python,word-wrap,text-processing | 66,640,368 | 3 | false | 0 | 0 | The simplest solution might just be to use a monospace font, where each character is the same width. Obviously you can't always use one, but when you can it's much simpler. | 1 | 2 | 0 | I am drawing text atop a base image via PIL. One of the requirements is for it to overflow to the next line(s) if the combined width of all characters exceeds the width of the base image.
Currently I'm using textwrap.wrap(text, width=16) to accomplish this. Here width defines the number of characters to accommodate in one line. Now the text can be anything since it's user generated. So the problem is that hard-coding width won't take into account width variability due to font type, font size and character selection.
What do I mean?
Well imagine I'm using DejaVuSans.ttf, size 14. A W is 14 in length, whereas an 'i' is 4. For a base image of width 400, up to 100 i characters can be accommodated in a single line. But only 29 W characters. I need to formulate a smarter way of wrapping to the next line, one where the string is broken when the sum of character-widths exceeds the base image width.
Can someone help me formulate this? An illustrative example would be great! | Breaking string into multiple lines according to character width (python) | 0 | 0 | 0 | 1,696 |
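Not from the answer above (which suggests simply using a monospace font): when a proportional font is required, a greedy word-wrap that measures pixel widths with the actual font addresses what the question describes. The font path is a placeholder, and font.getsize() is the older Pillow API (newer Pillow offers font.getlength()).

```python
# Greedy pixel-width word wrap with Pillow; font path is a placeholder and
# font.getsize() is the older Pillow measuring API.
from PIL import ImageFont

def wrap_by_pixels(text, font, max_width):
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if font.getsize(candidate)[0] <= max_width:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word          # a very long single word still overflows
    if current:
        lines.append(current)
    return lines

font = ImageFont.truetype("DejaVuSans.ttf", 14)
for line in wrap_by_pixels("WWWW iiii mixed width text example", font, 400):
    print(line)
```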
43,827,784 | 2017-05-07T04:21:00.000 | 34 | 0 | 1 | 0 | ipython,jupyter-notebook,jupyter | 43,827,809 | 1 | true | 1 | 0 | You just need to execute or run the cell which is in markdown format.
If you press Ctrl + Enter will execute and convert the raw text to markdown form.
Or you can press Shift + Enter which will execute the current cell and will move to the next one. | 1 | 24 | 0 | In IPython/Jupyter notebooks, is there a clear, concise summary of the 'rules' for when markdown is rendered into that nice, rich text format that's pleasant to look at?
Here's what I've got so far:
When I create a new cell, then switch it to 'Markdown' it stays in 'raw markdown mode' (meaning: I can see the raw markdown. There's some nice, syntax-based color-coding and font-sizing, but it's clearly raw markdown)
If I save the notebook, close it (i.e., close the browser's page) and then re-open the notebook I see the nice, rich-text version of the markdown (i.e., "#Topic 1" is rendered as H1 by the browser, and the browser hides the "#" at the start - it's clearly NOT the 'raw markdown'
If I click on the markdown cell it remains in 'nice mode'
If I press the 'Enter' key I enter Jupyter's edit mode, it replaces the 'nice mode' view with the 'raw markdown mode' view, and I can edit the markdown.
What I'd love to know is:
How do I get Jupyter to render that 'raw markdown mode' cell again? (Without closing and re-opening the notebook)
(Alternately -is this the expected behavior? You get the nice view when you first load it, and you're stuck with the 'raw markdown' view for any cell you edit until you reload it?) | IPython/Jupyter Notebook: How to render a markdown cell without reloading doc? | 1.2 | 0 | 0 | 11,457 |
43,828,879 | 2017-05-07T07:23:00.000 | 15 | 0 | 1 | 0 | python,pip,dependency-management | 43,828,909 | 1 | true | 0 | 0 | A common way to manage dependencies for a python project is via a file in root of the project named "requirements.txt". An easy way to make this is:
Setup a python virtualenv for your project
Manually install the required modules via pip
Execute pip freeze > requirements.txt to generate the requirements file
You can then install all the dependencies in other locations using pip install -r requirements.txt.
If you want dependencies to be installed automatically when other people pip install your package, you can use install_requires() in your setup.py. | 1 | 15 | 0 | I am coming from Java background and completely new at Python.
Now I have got a small project with a few Python files that contain a few imports. I know I do not have the imported dependencies installed on my computer, so I try to figure out the required dependencies and run pip to install them.
I would like to do it differently. I would prefer to have the dependencies listed in a single file and install them automatically during the build process.
Does it make sense ? If it does I have a few questions:
How to list the project dependencies required to install by pip ?
How to run pip to install the dependencies from the list ? | Simple dependency management for a Python project | 1.2 | 0 | 0 | 8,100 |
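A minimal example of the two pieces mentioned in the answer above; the package name, version, and dependencies are illustrative.

```python
# Minimal setup.py illustrating install_requires, as mentioned in the answer.
# Package names and versions are illustrative. For the requirements.txt route:
#   pip freeze > requirements.txt      (record the current environment)
#   pip install -r requirements.txt    (reinstall the same set elsewhere)
from setuptools import setup, find_packages

setup(
    name="myproject",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "requests>=2.0",
        "mysql-connector-python-rf",
    ],
)
```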
43,834,976 | 2017-05-07T18:12:00.000 | 0 | 0 | 1 | 0 | python,module,pycharm | 43,843,812 | 1 | false | 0 | 0 | Update your Question with the used Version's:
OS Version, Python Version, pyCharm Version, Missing Module Version
Update your Question with the Output of
For single Python setup or Python 2.x
pip show <Name of the failed Module>
Expected Output:
# pip show pycrypto
\---
Name: pycrypto
Version: 2.6.1
Location: /opt/usr/lib/python2.7/dist-packages
Requires:
For Python 3.x
pip3 show <Name of the missing Module> | 1 | 0 | 0 | I am trying to install pycrypto to my pycharm, and every time I install it says that I have installed it correctly. I made sure to do so by going into settings and clicking the + button. But when I try to import pycrypto it is giving me an error that the module does not exist. I have been trying, and its really frustrating at this point.
I also tried adding https://pypi.python.org/pypi as a repository, but whenever I add it and click OK the repository disappears when I go back into the + window | PY Charm not loading module | 0 | 0 | 0 | 64 |
43,835,016 | 2017-05-07T18:15:00.000 | 0 | 0 | 0 | 0 | python-3.x,pandas | 43,835,158 | 3 | false | 0 | 0 | During reading a csv file:
Use dtype or converters attribute in read_csv in pandas
import pandas as pd
import numpy as np
df = pd.read_csv('data.csv', dtype={'a': np.float64, 'b': np.int32}, header=None)
Here the columns will automatically be read with the datatypes you specified.
After having read the csv file:
Use astype function to change the column types.
Check this code.
Consider you have two columns
df[['a', 'b']] = df[['a', 'b']].astype(float)
The advantage of this is you change type of multiple columns at once. | 1 | 1 | 1 | I have imported a CSV file as a Pandas dataframe. When I run df.dtypes I get most columns as "object", which is useless for taking into Bokeh for charts.
I need to change a column as int, another column as date, and the rest as strings.
I see the data types only once I import it. Would you recommend changing it during import (how?), or after import? | How to change data types "object" in Pandas dataframe after importing a CSV? | 0 | 0 | 0 | 7,317 |
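A small sketch combining both approaches from the answer for the asker's three cases (an int column, a date column, the rest as strings); the file name and column names are placeholders.

```python
# Both routes: declare types at read time, or convert after reading.
# File and column names are placeholders.
import pandas as pd

# Option 1: declare types while reading (dates via parse_dates).
df = pd.read_csv("data.csv", dtype={"count_col": "int64"}, parse_dates=["date_col"])

# Option 2: convert after reading.
df["count_col"] = df["count_col"].astype("int64")
df["date_col"] = pd.to_datetime(df["date_col"])
df["name_col"] = df["name_col"].astype(str)

print(df.dtypes)
```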
43,836,027 | 2017-05-07T19:57:00.000 | 0 | 0 | 0 | 1 | python,docker,deployment,debian,dpkg | 43,836,429 | 1 | true | 0 | 0 | A container is an isolated environment, so you have to ship all what will be needed for your program to run.
Your Dockerfile will be based on Debian, so begin with
FROM debian
and will have some
RUN apt-get update \
&& apt-get install -y mysoft mydependency1 mydependency2
and also
RUN pip install xxx
and end with something like
CMD ["python","myapp.py"]
As your Python program certainly does things like
import module1, module2
Those Python modules will need to be installed in your Dockerfile in a RUN directive | 1 | 0 | 0 | I wrote a small application for Debian linux that calls python2.7 to perform almost all of its functions.
The python functions include for example remote database access, so the app will depend on python modules that are not in every linux distribution by default.
The app is packaged in a dpkg file in order to be used on many other machines (with same linux distribution), using dpkg -i MyApp01.
But the python dependencies have to be installed separately in order for the app to work: for example pip install mysql-connector-python-rf
Now I want to use Docker to ship my dependencies with the app and make it work on other machines without having to install them as above.
Can Docker be used to do this? And how?
If not, is there a better approach to natively bundle the Python dependencies in the dpkg file (assuming target machines have a similar environment)? | A better way to deploy a Debian-python hybrid application | 1.2 | 0 | 0 | 52 |
43,836,766 | 2017-05-07T21:12:00.000 | 0 | 0 | 0 | 1 | python,c++ | 43,836,803 | 1 | true | 0 | 0 | Java has a native keyword that allows functions from c++ to be brought into java as methods. Python might have the same feature. | 1 | 4 | 0 | I need to connect to a data stream written in C++ with my current program in Python, any advice or resources on how to connect? | Connecting to a data stream with Python | 1.2 | 0 | 0 | 95 |
43,838,497 | 2017-05-08T01:33:00.000 | 0 | 0 | 0 | 0 | python,image,plot | 68,993,454 | 1 | false | 0 | 0 | Use the command fig.savefig('line plot.jpg', bbox_inches='tight', dpi=150) | 1 | 1 | 1 | I want to draw the plot result in an image and save it as an image in python.
I have a method that uses the Harris method; as a result there are a lot of points that I want to trace on a picture, but when I save the figure it gives me just the points, not the image. Please give me some advice. | Draw the plot result in an image and save it as an image in python | 0 | 0 | 0 | 40 |
43,841,023 | 2017-05-08T06:23:00.000 | 0 | 0 | 1 | 0 | python,string,split,python-internals | 43,963,239 | 3 | false | 0 | 0 | CPython internally uses NUL-terminated strings in addition to storing a length. This is a very early design choice, present since the very first version of Python, and still true in the latest version.
You can see that in Include/unicodeobject.h where PyASCIIObject says "wchar_t representation (null-terminated)" and PyCompactUnicodeObject says "UTF-8 representation (null-terminated)". (Recent CPython implementations select from one of 4 back-end string types, depending on the Unicode encoding needs.)
Many Python extension modules expect a NUL terminated string. It would be difficult to implement substrings as slices into a larger string and preserve the low-level C API. Not impossible, as it could be done using a copy-on-C-API-access. Or Python could require all extension writers to use a new subslice-friendly API. But that complexity is not worthwhile given the problems found from experience in other languages which implement subslice references, as Dietrich Epp described.
I see little in Kevin's answer which is applicable to this question. The decision had nothing do to with the lack of circular garbage collection before Python 2.0, nor could it. Substring slices are implemented with an acyclic data structure. 'Competently-implemented' isn't a relevant requirement as it would take a perverse sort of incompetence or malice to turn it into a cyclic data structure.
Nor would there necessarily be extra branch overhead in the deallocator. If the source string were one type and the substring slice another type, then Python's normal type dispatcher would automatically use the correct deallocator, with no additional overhead. Even if there were an extra branch, we know that branching overhead in this case is not "expensive". Python 3.3 (because of PEP 393) has those 4 back-end Unicode types, and decides what to do based on branching. String access occurs much more often than deallocation, so any dellocation overhead due to branching would be lost in the noise.
It is mostly true that in CPython "variable names are internally stored as strings". (The exception is that local variables are stored as indices into a local array.) However, these names are also interned into a global dictionary using PyUnicode_InternInPlace(). There is therefore no deallocation overhead because these strings are not deallocated, outside of cases involving dynamic dispatch using non-interned strings, like through getattr(). | 1 | 9 | 0 | By looking at the CPython implementation it seems the return value of a string split() is a list of newly allocated strings. However, since strings are immutable it seems one could have made substrings out of the original string by pointing at the offsets.
Am I understanding the current behavior of CPython correctly ? Are there reasons for not opting for this space optimization ? One reason I can think of is that the parent string cannot be freed until all its substrings are. | Python Strings are immutable so why does s.split( ) return a list of new strings | 0 | 0 | 0 | 1,102 |
43,842,363 | 2017-05-08T07:47:00.000 | 0 | 0 | 0 | 0 | python,simpy | 43,845,103 | 1 | false | 1 | 0 | You can pass any event to env.run(until=event). The simulation will run until this event has been triggered.
env.run(until=60) is just a shortcut for env.run(until=env.timeout(60)) (if env.now == 0). | 1 | 1 | 0 | I am using one of the examples in the tutorial as a basis to model call center simulation.
Simulation is for one hour window (max simulation time of 60 minutes). When I execute env.run(until = 60) calls arrive even few seconds before 60 minute ends, which is quite ok and realistic.
I'm trying to have the simulation terminate when the resource served this last call. Is there a way to do this?
Any guidance or advice on this is greatly appreciated. | Terminating SimPy simulation | 0 | 0 | 0 | 489 |
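A hedged sketch of the idea in the answer above: instead of env.run(until=60), run until a custom event that the model triggers once the last call has been served. The service/arrival times and call-center structure below are placeholders, not the asker's model.

```python
# Run until a custom "all calls served" event instead of a fixed 60 minutes.
# Service time, inter-arrival time, and capacity are placeholders.
import simpy

SIM_CUTOFF = 60  # stop accepting new calls after 60 minutes

def call(env, name, agent, all_done, state):
    with agent.request() as req:
        yield req
        yield env.timeout(5)               # service time (placeholder)
        print("%s finished at %.1f" % (name, env.now))
    state["open_calls"] -= 1
    if env.now >= SIM_CUTOFF and state["open_calls"] == 0:
        all_done.succeed()                 # last call served -> end simulation

def arrivals(env, agent, all_done, state):
    i = 0
    while env.now < SIM_CUTOFF:
        i += 1
        state["open_calls"] += 1
        env.process(call(env, "call %d" % i, agent, all_done, state))
        yield env.timeout(7)               # inter-arrival time (placeholder)

env = simpy.Environment()
agent = simpy.Resource(env, capacity=1)
all_done = env.event()
state = {"open_calls": 0}
env.process(arrivals(env, agent, all_done, state))
env.run(until=all_done)
print("simulation ended at %.1f" % env.now)
```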
43,844,038 | 2017-05-08T09:22:00.000 | 0 | 0 | 1 | 0 | python,spyder,deprecation-warning | 45,048,398 | 1 | false | 0 | 0 | I got the same warning in sklearn.
Instead of using from sklearn.cross_validation import train_test_split
use
from sklearn.model_selection import train_test_split | 1 | 0 | 0 | I am new to programming. I am trying to run a script I downloaded on spyder. I am getting
Deprecation Warning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
Please help me understand and resolve this | "This module will be removed in 0.20.", DeprecationWarning in spyder | 0 | 0 | 0 | 1,596 |
43,844,299 | 2017-05-08T09:35:00.000 | 1 | 0 | 1 | 0 | python | 43,844,404 | 2 | false | 0 | 0 | read() method returns empty string because you have reach end of the file and there is no more text in the file.
f = open('f.txt')
print f.read()
print f.tell()
Here f.tell() gives you the seek position; after the read() it is at the end of the file and returns the length of the file. | 1 | 0 | 0 | Why does read() return an empty string when it reaches the end of the file? This empty string shows up as a blank line. I know we can remove it using rstrip(). | Python, read() empty string at the end of a file | 0.099668 | 0 | 0 | 1,807 |
43,846,875 | 2017-05-08T11:48:00.000 | 0 | 0 | 0 | 0 | python-3.x,selenium | 43,850,009 | 3 | false | 1 | 0 | I do not know the Python syntax, but in Selenium you have to switch to the frame much like you would switch to a new tab or window before performing your click.
The syntax in Java would be: driver.switchTo().frame(); | 1 | 0 | 0 | i have written parameters in webpage & applying them by selecting apply button.
1. The apply button gets clicked for web elements outside the frame using the command below:
browser1.find_element_by_name("action").click()
2. The apply button is not getting clicked when saving the parameters inside a frame of the web page using the same command:
browser1.find_element_by_name("action").click() | click() method not working inside a frame in selenium python | 0 | 0 | 1 | 1,392 |
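The Python equivalent of the Java switchTo() call in the answer above; the URL and the frame identifier ("settings_frame") are placeholders for the asker's page and frame name/id.

```python
# Switch into the frame before clicking, then switch back out.
# URL and frame name are placeholders.
from selenium import webdriver

browser1 = webdriver.Firefox()
browser1.get("http://example.com/settings")

browser1.switch_to.frame("settings_frame")       # enter the frame first
browser1.find_element_by_name("action").click()  # now the element is found
browser1.switch_to.default_content()             # back to the main document
```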
43,849,017 | 2017-05-08T13:29:00.000 | 1 | 0 | 0 | 0 | python,selenium,webdriver | 43,851,121 | 1 | true | 0 | 0 | Selenium is designed for end to end testing. If you have to alter the content of the page, you are probably using the wrong tool.
It's possible to alter the page with Selenium with a piece of JavaScript via executeScript. However it can only be done once the page is fully loaded and probably after the execution of the original script.
One way would be to use a proxy to intercept the resource and forward a different one. | 1 | 3 | 0 | Title pretty much sums it up.
I have a website where a custom js file is located, and in order to write some tests I need to disable loading this js file and insert my custom js file instead, using the execute_script function from Chromedriver. I'm not sure about the right approach here.
I was thinking about adding rules in the NoScript add-on to prevent loading of the first js file, then turning it off and injecting my js file, but I'm not sure whether it's possible. | How to prevent Selenium from loading js file? | 1.2 | 0 | 1 | 758 |
43,849,370 | 2017-05-08T13:46:00.000 | 0 | 1 | 0 | 0 | python-3.x,networking,simplehttpserver,captivenetwork,captiveportal | 43,850,242 | 2 | false | 0 | 0 | You need to work out the document in question, with all the necessary terms and conditions that will link the acceptance of the conditions to a specific user. You can implement a simple login, and link a String (Name - ID) to a Boolean for example, "accepted = true". If you don't need to store the user data, just redirect yo your document and when checked "agree", then you allow the user connection. | 1 | 4 | 0 | I would like to serve a Captive Portal - page that would prompt the user to agree to certain terms before starting browsing the web from my wireless network (through WiFi) using http.server on my Raspberry Pi that runs the newest version of Raspbian.
I have http.server (which comes with Python 3) up and running and have a webpage portal.html locally on the Pi. This is the page that I want users to be redirected to when they connect to my Pi. Let's say that the local IP of that page is 192.168.1.5:80/portal.html
My thought is that I would then somehow allow their connection when they have connected and accepted the terms and conditions.
How would I go about that? | How do I implement a simple Captive Portal with `http.server`? | 0 | 0 | 1 | 2,404 |
43,857,305 | 2017-05-08T21:09:00.000 | 0 | 0 | 1 | 0 | python,heroku,pip | 43,857,760 | 1 | false | 1 | 0 | Okay so a workaround:
Remove the latter package in the requirements.txt file which depends on the earlier one. Deploy to heroku so the earlier one is installed. Then add back the package you removed and deploy again. Huzzah! | 1 | 0 | 0 | I have a Flask app running on Heroku.
Installing from requirements.txt is failing as later packages depend on the earlier ones being installed first.
Is there a way to stagger these installs, or to run a bash script to install one at a time when the dyno is spun up? | Staggered install of pip requirements on Heroku | 0 | 0 | 0 | 39 |
43,857,661 | 2017-05-08T21:37:00.000 | 0 | 0 | 0 | 0 | python,django | 43,857,755 | 1 | true | 1 | 0 | The Django Admin is intended as a really basic administration panel, not a staff tool or content management system. It doesn't have any notion of roles or user access levels, so any user in the Admin can edit any other users' records.
What you're trying to do goes way beyond what the Django Admin was designed to do. If you want custom behaviour or appearance, you should build actual pages using ModelViews. That way you can apply apply per-user restrictions on what they can see and modify.
If that's more work than you anticipated then you should probably just accept what the Django Admin gives you. | 1 | 0 | 0 | I want to be able to apply a list filter based on a known foreign-key value without showing the sidebar at all.
I have 3 schools with IDs 1, 2 & 3.
I have 39 programs, each having various fields, one of which being 'school' a foreign-key to schools table, and 39 records having either 1, 2, or 3 in 'school' field.
In admin.py, I create a ProgramsAdmin with list_filter = (('school')). This works perfectly, with the 3 schools appearing in sidebar. Clicking on any of them properly filters the programs.
Since user is going to log in and select the school they are working on, I want the list to be filtered without seeing the sidebar. Chosen school will be stored in database in settings table, but for now I just want to get it to work hard-coded to 1, 2 or 3 and not show sidebar.
This works SO easy in models.py, filtering a many-to-many relationship, just using limit_choices_to clause. Not so easy filtering in admin. Is it even possible to filter the admin on a hard-coded value, or a function which returns a filter value like limit_choices_to does?
Thanks... | Django Admin List Filter Without Sidebar | 1.2 | 0 | 0 | 509 |
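If staying inside the Django Admin is acceptable despite the answer above, one common workaround is to filter the change list by overriding the admin queryset instead of using list_filter. A minimal sketch, assuming a Program model with a school foreign key and a hard-coded school id (later this value would come from the settings table or the logged-in user):

# admin.py - sketch; model and field names are assumptions based on the question
from django.contrib import admin
from .models import Program

CURRENT_SCHOOL_ID = 1  # placeholder: 1, 2 or 3

class ProgramsAdmin(admin.ModelAdmin):
    def get_queryset(self, request):
        # filter the change list without showing the filter sidebar
        qs = super(ProgramsAdmin, self).get_queryset(request)
        return qs.filter(school_id=CURRENT_SCHOOL_ID)

admin.site.register(Program, ProgramsAdmin)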
43,859,002 | 2017-05-09T00:01:00.000 | 0 | 0 | 1 | 1 | python,windows,python-2.7,pycharm | 43,984,353 | 1 | true | 0 | 0 | I fixed this issue by uninstalling everything related to Python from my computer and removing all the environment variables as well; reinstalling Anaconda after that to its default location fixed the problem. | 1 | 0 | 0 | This has been very frustrating: about once every 2 days my Anaconda just stops working correctly. I am using PyCharm as my IDE and everything works fine, then suddenly I find my Anaconda terminal (command prompt, if you will) starting randomly for no obvious reason. If I hit Run in PyCharm it opens the terminal and doesn't do anything; if I navigate to my .py file's location and try to run it through the command prompt, it simply starts a new command prompt (the Python command prompt) and does nothing. No code is running, it's awaiting further commands. Is this normal? Does anyone have a solution to this?
I am using Windows 7. | python randomly opening the command prompt reinstall to get back to work | 1.2 | 0 | 0 | 147 |
43,859,133 | 2017-05-09T00:18:00.000 | 3 | 0 | 0 | 0 | python,tensorflow,keras,tensorboard | 43,879,111 | 2 | false | 0 | 0 | I debugged this and found that the problem was I was not providing any validation data when I called fit(). The TensorBoard callback will only report on the weights when validation data is provided. That seems a bit restrictive, but I at least have something that works. | 1 | 21 | 1 | I just got started with Keras and built a Q-learning example program. I created a tensorboard callback and I include it in the call to model.fit, but the only things that appear in TensorBoard are the scalar summary for the loss and the network graph. Interestingly, if I open up the dense layer in the graph, I see a little summary icon labeled "bias_0" and one labeled "kernel_0", but I don't see these appearing in the distributions or histograms tabs in TensorBoard like I did when I built a model in pure tensorflow.
Do I need to do something else to enable these in Tensorboard? Do I need to look into the details of the model that Keras produces and add my own tensor_summary() calls? | Keras - is it possible to view the weights and biases of models in Tensorboard | 0.291313 | 0 | 0 | 17,275 |
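A minimal sketch of what the answer above reports working: pass validation data to fit() alongside the TensorBoard callback so weight histograms get written. The toy model, data and log directory are assumptions.

# sketch: histograms/distributions only appear when validation data is supplied
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

x = np.random.rand(1000, 8)
y = np.random.randint(0, 2, size=(1000, 1))

model = Sequential([Dense(16, activation='relu', input_dim=8),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

tb = TensorBoard(log_dir='./logs', histogram_freq=1, write_graph=True)

# without validation data the histogram and distribution tabs stay empty
model.fit(x, y, epochs=5, validation_split=0.2, callbacks=[tb])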
43,859,988 | 2017-05-09T02:17:00.000 | 0 | 0 | 0 | 0 | python,mysql,matplotlib | 43,860,724 | 1 | true | 0 | 0 | You can use the datetime module. Although I use the now() function to extract datetimes from MySQL, I believe the format is the same.
for instance
python>import datetime as dt
I put the datetime data into a list named datelist, and now you can use the datetime.strptime function to convert the date format to what you want:
python>dates = [dt.datetime.strptime(d, '%Y-%m-%d %H:%M:%S') for d in datelist]
Finally, you can put the list named dates on the plot's X-axis.
I hope it helps you. | 1 | 0 | 1 | After doing a bit of research I am finding it difficult to find out how to use mysql timestamps in matplotlib.
Mysql fields to plot
X-axis:
Field: entered
Type: timestamp
Null: NO
Default: CURRENT TIMESTAMP
Sample: 2017-05-08 18:25:10
Y-axis:
Field: value
Type: float(12,6)
Null: NO
Sample: 123.332
What date format is matplotlib looking for? How do I convert to this format? I found out how to convert from unix timestamp to a format that is acceptable with matplotlib, is unix timestamp better than the timestamp field type I am using? Should I convert my whole table to unix timestamps instead?
Would appreciate any help! | Python - convert mysql timestamps type to matplotlib and graph | 1.2 | 1 | 0 | 202 |
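Pulling the answer's pieces together: a short sketch that parses the MySQL TIMESTAMP strings with datetime.strptime and hands the resulting datetime objects straight to matplotlib, which accepts them on the x-axis. The sample rows are made up, and if your MySQL connector already returns datetime objects the strptime step is unnecessary.

# sketch: plotting MySQL 'entered' timestamps against 'value'
import datetime as dt
import matplotlib.pyplot as plt

rows = [('2017-05-08 18:25:10', 123.332),   # (entered, value) as fetched from MySQL
        ('2017-05-08 18:30:10', 125.100),
        ('2017-05-08 18:35:10', 124.020)]

dates = [dt.datetime.strptime(r[0], '%Y-%m-%d %H:%M:%S') for r in rows]
values = [r[1] for r in rows]

plt.plot(dates, values, marker='o')
plt.gcf().autofmt_xdate()   # tilt the tick labels so they stay readable
plt.xlabel('entered')
plt.ylabel('value')
plt.show()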
43,860,814 | 2017-05-09T03:55:00.000 | 3 | 0 | 0 | 0 | python,django,ldap,django-auth-ldap | 43,929,764 | 2 | true | 1 | 0 | Turns out that AUTH_LDAP_FIND_GROUPS_PERMS doesn't actually add users to a group, but virtually adds them to it making sure their permissions respond as if they are in the groups that match names. | 1 | 1 | 0 | I'm running Django 1.8.18 and django-auth-ldap 1.2.11 authenticating against Active Directory.
My current configuration authenticates properly against the AD; however, when I enable AUTH_LDAP_FIND_GROUPS_PERMS it doesn't seem to do anything. I've previously tried AUTH_LDAP_MIRROR_GROUPS (which works without any problem), and found all of the user's groups created. The only slight issue is that it also removes any local group memberships the user had.
In any case, after having the groups auto-created by AUTH_LDAP_MIRROR_GROUPS I would expect AUTH_LDAP_FIND_GROUPS_PERMS would auto-add the user to that same group on the next login. However, this did not happen. The only change in configuration was those two lines. The AUTH_LDAP_GROUP_TYPE is set to NestedActiveDirectoryGroupType()
Any ideas why users aren't being added to the groups with matching names? | django-auth-ldap AUTH_LDAP_FIND_GROUPS_PERMS not working | 1.2 | 0 | 0 | 1,265 |
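For reference, a hedged settings.py sketch of the combination discussed above. The server URI, bind credentials and search bases are placeholders; the key point, per the answer, is that FIND_GROUPS_PERMS resolves permissions at login time rather than populating Group membership rows.

# settings.py fragment - placeholder values, adjust to your AD
import ldap
from django_auth_ldap.config import LDAPSearch, NestedActiveDirectoryGroupType

AUTH_LDAP_SERVER_URI = "ldap://ad.example.com"
AUTH_LDAP_BIND_DN = "CN=svc_django,OU=Service,DC=example,DC=com"
AUTH_LDAP_BIND_PASSWORD = "secret"

AUTH_LDAP_USER_SEARCH = LDAPSearch("OU=Users,DC=example,DC=com",
                                   ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)")
AUTH_LDAP_GROUP_SEARCH = LDAPSearch("OU=Groups,DC=example,DC=com",
                                    ldap.SCOPE_SUBTREE, "(objectClass=group)")
AUTH_LDAP_GROUP_TYPE = NestedActiveDirectoryGroupType()

# Permissions come from LDAP groups at login time; no Group rows are added,
# which is why users never "appear" inside the mirrored groups.
AUTH_LDAP_FIND_GROUPS_PERMS = True
# AUTH_LDAP_MIRROR_GROUPS = True   # alternative: materialise the groups locally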
43,861,642 | 2017-05-09T05:12:00.000 | 0 | 0 | 1 | 0 | python,dns | 43,861,751 | 1 | false | 0 | 0 | You could simply open a UDP socket and then craft the complete DNS packet by hand. This would be the obvious approach.
If you want to use a library, I would have a look at dpkt. Note that it is badly documented, but very powerful. It also allows you to set the DNS opcodes directly.
For the asynchronous part you should have a look at threading. | 1 | 0 | 0 | How to query DNS with "DNSSEC OK" (DO) bit in Python Asynchronously?
I have researched pythondns and aiodns, but they do not offer a function to set the "DNSSEC OK" (DO) bit.
Thanks in advance! | How to query DNS with "DNSSEC OK" (DO) bit in Python Asynchronously? | 0 | 0 | 0 | 116 |
43,862,233 | 2017-05-09T06:03:00.000 | 0 | 0 | 0 | 0 | python,django,forms | 43,864,634 | 2 | false | 1 | 0 | Maybe you can store the first form's values in the session and provide them as initial data for the first form when you are rendering the second form with an error. For example: data = {"f1": request.session['abc'], "f2": request.session["xyz"]}; form1 = abc(initial=data) | 1 | 0 | 0 | I have a page that I want to behave like this: First, the user only sees a single form; for the sake of example, let's say it allows the user to select the type of product. Upon submitting this form, a second form (whose contents depend on the product type) appears below it. The user can then either fill out the second form and submit it, or revise and resubmit the first form - either way I want the first form to maintain the user's input (in this case, the product type), even if the second form is submitted.
How can I do this cleanly in django? What I am struggling with is preserving the data in the first form: e.g. if the user submits the second form and it has validation errors, when the page displays the first form the product type will be rendered blank but I want the option to remain set to what the user picked. This behaviour isn't mysterious or unexpected, but is not what I want. Also, if the user submits the second form successfully, I would like to redirect so that the first form maintains the selection and the second form is cleared.
The best that I've thought of is mucking up the URL with the fields of the first form (admittedly not too many parameters) and storing its state there, or combining both forms into one form object in HTML and responding differently based on the name of the submit button (though I don't see how I could use a redirect to clear the second form and keep the first if I do this). Are there any cleaner, more obvious ways that I'm missing? Thanks. | Retain edit data for one form when submitting a second | 0 | 0 | 0 | 35 |
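A rough sketch of the session approach from the answer above, assuming two plain Django forms and a single view; the form classes, field names, submit-button names and URL name are all illustrative.

# views.py sketch - ProductTypeForm / DetailsForm are hypothetical form classes
from django.shortcuts import render, redirect
from .forms import ProductTypeForm, DetailsForm

def product_page(request):
    # the first form always re-renders with whatever was stored last time
    type_form = ProductTypeForm(initial={'product_type': request.session.get('product_type')})
    details_form = DetailsForm()

    if request.method == 'POST':
        if 'choose_type' in request.POST:
            type_form = ProductTypeForm(request.POST)
            if type_form.is_valid():
                request.session['product_type'] = type_form.cleaned_data['product_type']
                return redirect('product_page')      # clears the POST, keeps the session
        elif 'submit_details' in request.POST:
            details_form = DetailsForm(request.POST)
            if details_form.is_valid():
                # ... save the second form ...
                return redirect('product_page')      # second form comes back empty

    return render(request, 'product.html',
                  {'type_form': type_form, 'details_form': details_form})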
43,865,989 | 2017-05-09T09:22:00.000 | 153 | 0 | 1 | 0 | python,django,pycharm | 43,867,016 | 6 | false | 1 | 0 | You need to enable Django support. Go to
PyCharm -> Preferences -> Languages & Frameworks -> Django
and then check Enable Django Support | 1 | 84 | 0 | I use Community PyCharm; my Python version is 3.6.1 and Django is 1.11.1. This warning has no effect on running, but I cannot use the IDE's auto-complete. | Unresolved attribute reference 'objects' for class '' in PyCharm | 1 | 0 | 0 | 51,524 |
43,866,696 | 2017-05-09T09:56:00.000 | 0 | 0 | 0 | 0 | python-3.x,turtle-graphics | 43,875,556 | 1 | true | 0 | 1 | First, you need the bounds of the rectangle in some form -- it can be the lower left position plus a width and height or it can be the lower left position and the upper right position, etc. (It could even be the formulas of the four lines that make up the rectangle.)
Then write a predicate function that tests if an (x,y) position is fully within the rectangle or not. You can simply do a series of comparisons to make sure x is greater than the lower left x and less than the upper right x, and ditto for y. Typically returning True or False.
If the predicate returns False, indicating you've touched or crossed some line of the rectangle, then turn around and go in the opposite direction (or some other recovery technique.) You can also consider first using turtle's undo feature to eliminate the move that made you touch the line.
If you'd like example code that does the above, please indicate such. | 1 | 1 | 0 | Our task is to create a turtle that always stays within a rectangle.
It would be really great if you could show me how I can make a turtle run away from a line another turtle has created.
Please don't fix the problem for me. | How can I make a turtle not touch a line? | 1.2 | 0 | 0 | 220 |
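Since the answer offers example code on request, here is one possible sketch of the predicate-plus-turn-around idea using the standard turtle module; the rectangle bounds, step size and turn angle are arbitrary assumptions.

# sketch: keep a turtle inside a rectangle by checking a predicate after each step
import turtle
import random

LEFT, BOTTOM, RIGHT, TOP = -200, -150, 200, 150   # rectangle bounds (assumed)

def inside(x, y):
    # True only when the position is strictly within the rectangle
    return LEFT < x < RIGHT and BOTTOM < y < TOP

t = turtle.Turtle()
for _ in range(300):
    t.forward(10)
    if not inside(t.xcor(), t.ycor()):
        t.undo()                                              # take back the offending move
        t.setheading(t.heading() + 180 + random.randint(-30, 30))  # turn around
turtle.done()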
43,869,734 | 2017-05-09T12:21:00.000 | 1 | 0 | 0 | 0 | python,numpy,dynamic-arrays,numba | 43,870,685 | 3 | false | 0 | 0 | Typically the strategy I employ is to just allocate more than enough array storage to accommodate the calculation and then keep track of the final index/indices used, and then slice the array down to the actual size before returning. This assumes that you know beforehand what the maximum size you could possibly grow the array to is. The thought is that in most of my own applications, memory is cheap but resizing and switching between python and jitted functions a lot is expensive. | 2 | 2 | 1 | It seems that numpy.resize is not supported in numba.
What is the best way to use dynamically growing arrays with numba.jit in nopython mode?
So far the best I could do is define and resize the arrays outside the jitted function, is there a better (and neater) option? | dynamically growing array in numba jitted functions | 0.066568 | 0 | 0 | 2,018 |
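A small sketch of the preallocate-then-slice strategy described in the answer above, assuming the worst-case size is known in advance; written for numba's nopython mode.

# sketch: grow "dynamically" inside an njit function by over-allocating
import numpy as np
from numba import njit

@njit
def positive_values(data):
    out = np.empty_like(data)        # worst-case size: as big as the input
    n = 0
    for i in range(data.shape[0]):
        if data[i] > 0:
            out[n] = data[i]
            n += 1
    return out[:n]                   # slice down to what was actually used

print(positive_values(np.array([-1.0, 2.0, 3.0, -4.0])))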
43,869,734 | 2017-05-09T12:21:00.000 | 1 | 0 | 0 | 0 | python,numpy,dynamic-arrays,numba | 68,612,285 | 3 | false | 0 | 0 | To dynamically increase the size of an existing array (and therefore do it in-place), numpy.ndarray.resize must be used instead of numpy.resize. This method is NOT implemented in Python, and is not available in Numba, so it just cannot be done. | 2 | 2 | 1 | It seems that numpy.resize is not supported in numba.
What is the best way to use dynamically growing arrays with numba.jit in nopython mode?
So far the best I could do is define and resize the arrays outside the jitted function, is there a better (and neater) option? | dynamically growing array in numba jitted functions | 0.066568 | 0 | 0 | 2,018 |
43,871,444 | 2017-05-09T13:39:00.000 | 0 | 0 | 1 | 0 | python,csv,dataframe,merge | 43,872,949 | 1 | false | 0 | 0 | The first file is smth like:
Timestamp ; Flow1 ; Flow 2
2017/02/17 00:05 ; 540 ; 0
2017/02/17 00:10 ; 535 ; 0
2017/02/17 00:15 ; 543 ; 0
2017/02/17 00:20 ; 539 ; 0
CSV file #2:
Timestamp ; DOC ; Temperatute ; UV254;
2017/02/17 00:14 ; 668.9 ; 15,13 ; 239,23
2017/02/17 00:15 ; 669,46 ; 15,14 ; 239,31
2017/02/17 00:19 ; 668 ; 15,13 ; 239,43
2017/02/17 00:20 ; 669,9 ; 15,14 ; 239,01
he output file is supposed to be like :
Timestamp ; DOC ; Temperatute ; UV254 ; Flow1 ; Flow2
2017/02/17 00:15 ; 669,46 ; 15,14 ; 239,31 ; 543 ; 0
2017/02/17 00:20 ; 669,9 ; 15,14 ; 239,01 ; 539 ; 0 | 1 | 0 | 1 | I would like to know how can I proceed in order to concatenate two csv files, here is the composition of this two files:
The first one contains some datas related to water chemical parameters, these measurements are taken in different dates.
The second one shows the different flow values of waste water, during a certain period of time.
The problem is that I am looking to assign each value of the second file (Flow values) to the right row in the first file (water chemical parameters) in such a way that the flow and the other chemical parameters are measured in the same moments.
Any suggestions ? | Merging two DataFrames (CSV files) with different dates using Python | 0 | 0 | 0 | 37 |
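One way to do the "nearest timestamp" assignment asked about above is pandas.merge_asof (pandas 0.19+). The file names and the 5-minute tolerance are assumptions; the semicolon separator and comma decimals come from the samples, and you may need to strip whitespace from the header names.

# sketch: attach the closest flow reading to each chemistry measurement
import pandas as pd

flows = pd.read_csv('file1.csv', sep=';', parse_dates=['Timestamp'])
chem = pd.read_csv('file2.csv', sep=';', parse_dates=['Timestamp'], decimal=',')

# merge_asof requires both frames sorted on the key
flows = flows.sort_values('Timestamp')
chem = chem.sort_values('Timestamp')

merged = pd.merge_asof(chem, flows, on='Timestamp',
                       direction='nearest',
                       tolerance=pd.Timedelta('5min'))
merged.to_csv('merged.csv', sep=';', index=False)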
43,871,607 | 2017-05-09T13:46:00.000 | 0 | 0 | 0 | 0 | python,numpy,keras | 43,871,677 | 2 | false | 0 | 0 | If you want to create a 'list of numpys' you can do np.array(yourlist).
If you print result.shape you will see what the resulting shape is. Hope this helps! | 2 | 0 | 1 | I am currently aware that keras doesn't support list of list of numpys.. but I can't see other way to pass my input.
My input to my neural network is each column (45 columns in total) of 33 different images.
The way I've currently stored it is as a
list of lists in which the outer list has length 45 and the inner has length 33, and within this inner list I store a numpy.ndarray of shape (1,8,3).
I feed it this as I need to do 45 convolutions, one for each column in the image. The same convolution has to be applied on all images for their respective column number.
So convolution_1 has to be applied on every first column on all the 33 images. | How do I pass my input to keras? | 0 | 0 | 0 | 218 |
43,871,607 | 2017-05-09T13:46:00.000 | 0 | 0 | 0 | 0 | python,numpy,keras | 43,872,953 | 2 | false | 0 | 0 | You can use Input(batch_shape = (batch_size, height, width, channels)), where batch_size = 45, channels = 33 and use np.ndarray of shape (45, height, width, 33) if your backend is tensorflow | 2 | 0 | 1 | I am currently aware that keras doesn't support list of list of numpys.. but I can't see other way to pass my input.
My input to my neural network is each column (45 columns in total) of 33 different images.
The way I've currently stored it is as a
list of lists in which the outer list has length 45 and the inner has length 33, and within this inner list I store a numpy.ndarray of shape (1,8,3).
I feed it this as I need to do 45 convolutions, one for each column in the image. The same convolution has to be applied on all images for their respective column number.
So convolution_1 has to be applied on every first column on all the 33 images. | How do I pass my input to keras? | 0 | 0 | 0 | 218 |
43,878,271 | 2017-05-09T19:23:00.000 | 0 | 0 | 0 | 0 | python,c++,opencv,image-processing,surf | 43,878,521 | 1 | true | 0 | 0 | got it !
C++: SURF::SURF(double hessianThreshold, int nOctaves=4, int nOctaveLayers=2, bool extended=true, bool upright=false ) | 1 | 0 | 1 | what are the equivalent flags of SURF in opencv C++ to python SURF flags extended and upright ?
in python version upright flag decides whether to calculate orientation or not
And extended flag gives option of whether to use 64 dim or 128 dim
Is there a way to do a similar operation in the OpenCV C++ version of the SURF function?
FYI I am using opencv version 2.4.13 | extended and upright flags in SURF opencv c++ function | 1.2 | 0 | 0 | 175 |
43,881,785 | 2017-05-10T00:25:00.000 | 5 | 0 | 0 | 0 | python,django,virtualenv | 43,881,812 | 1 | true | 1 | 0 | All the apps installed (INSTALLED_APPS) within a single project run under the same python process, so it's going to be one virtualenv for all apps.
If you have an app that requires a specific python environment and the others really aren't able to run in that environment (for example, if one app requires python3 and another requires python2), then you would have to run the problem app in its own Django application server instance.
Since normally you would have nginx or Apache in front of your Django instance, you could have multiple Django instances appear to be one server. But it's a situation you'd want to avoid. | 1 | 1 | 0 | With Django,
Is it better to have a virtualenv created at the project level?
Or,
is it better to have a virtualenv per app within a single project? | Virtualenv for Django dev, better to be at Project or App level? | 1.2 | 0 | 0 | 62 |
43,881,941 | 2017-05-10T00:51:00.000 | 1 | 0 | 0 | 0 | python-2.7,lua,torch,luarocks | 44,149,840 | 1 | true | 0 | 0 | Check that the header file exists and that you have the correct path.
If the header file is missing you skipped the preprocess step. If the header file exists it's likely in your data directory and not in the same directory as the sample.lua code:
th train.lua -input_h5 data/my_data.h5 -input_json data/my_data.json | 1 | 1 | 1 | I'm following the instructions on github.com/jcjohnson/torch-rnn and have it working until the training section. When I use th train.lua -input_h5 my_data.h5 -input_json my_data.jsonI get the error Error: unable to locate HDF5 header file at /usr/local/Cellar/hdf5/1.10.0-patch1/include;/usr/include;/usr/local/opt/szip/include/hdf5.h
I'm new to luarocks and torch, so I'm not sure what's wrong. I installed torch-hd5f. Any advice would be very much appreciated. | Error using Torch RNN | 1.2 | 0 | 0 | 84 |
43,882,170 | 2017-05-10T01:22:00.000 | 0 | 0 | 0 | 1 | python,unix | 43,883,782 | 1 | false | 0 | 0 | Neither Python nor Unix has the notion of a transaction for actions on multiple files.
For movement within a disk partition, the mv command will just update the directory entries using the same inodes, so the file doesn't actually move (no risk of failure during the move).
For movement across disks, you could create a temporary directory on the target drive, copy all the files, and if that succeeds just do a mv as described above, and finally clear the source. This would provide some measure of protection. | 1 | 0 | 0 | I have a number of files that I want to move from one folder to another. If for any reason movement of one of those files fails, I want none of them moved. Basically, either all of the files should be moved, or none of them. I could write logic that approximates this myself, but before I do, is there a native Python or Unix way to do this? Figured the situation comes up often enough that a solution probably already exists and I just haven't heard of it. | Moving Files as a Transaction in Python? | 0 | 0 | 0 | 144 |
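A rough Python translation of the copy-then-rename idea from the answer: stage copies in a temporary directory on the destination filesystem, and only if every copy succeeds rename them into place and delete the sources. This is not a true transaction (a crash mid-commit can still leave partial state), just the "some measure of protection" the answer describes.

# sketch: move a batch of files with an all-or-nothing flavour
import os
import shutil
import tempfile

def move_all_or_nothing(paths, dest_dir):
    staging = tempfile.mkdtemp(dir=dest_dir)          # same filesystem as dest_dir
    try:
        for p in paths:                               # stage copies first
            shutil.copy2(p, staging)
    except Exception:
        shutil.rmtree(staging)                        # abort: destination unchanged
        raise
    for p in paths:                                   # commit: cheap same-filesystem renames
        name = os.path.basename(p)
        os.rename(os.path.join(staging, name), os.path.join(dest_dir, name))
        os.remove(p)
    os.rmdir(staging)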
43,884,375 | 2017-05-10T05:42:00.000 | 1 | 0 | 0 | 1 | python,azure,azure-table-storage | 43,884,796 | 2 | true | 0 | 0 | Returning number of entities in the table storage is for sure not available in Azure Table Storage SDK and service. You could make a table scan query to return all entities from your table but if you have millions of these entities the query will probably time out. it is also going to have pretty big perf impact on your table. Alternatively you could try making segmented queries in a loop until you reach the end of the table. | 1 | 0 | 0 | We have a table in Azure Table Storage that is storing a LOT of data in it (IoT stuff). We are attempting a simple migration away from Azure Tables Storage to our own data services.
I'm hoping to get a rough idea of how much data we are migrating exactly.
EG: 2,000,000 records for IoT device #1234.
The problem I am facing is in getting a count of all the records that are present in the table with some constraints (e.g. count all records pertaining to one IoT device #1234, etc.).
I did a fair amount of research and found posts saying that this count feature is not implemented in ATS. These posts, however, were from circa 2010 to 2014.
I assumed (hoped) that this feature has been implemented by now, since it's now 2017, and I'm trying to find docs for it.
I'm using Python to interact with our ATS.
Could someone please post the link to the docs here that show how I can get the count of records using python (or even HTTP / rest etc)?
Or if someone knows for sure that this feature is still unavailable, that would help me move on as well and figure another way to go about things!
Thanks in advance! | Counting Records in Azure Table Storage (Year: 2017) | 1.2 | 0 | 0 | 1,349 |
43,891,266 | 2017-05-10T11:32:00.000 | 0 | 0 | 1 | 0 | python,git,github,merge | 43,891,996 | 2 | false | 0 | 0 | I guess the best you could do is to check all merge commits in the newly pushed commits to check whether they have exactly two parents and the second parent is contained in the branch you want to enforce merging from.
But you cannot add arbitrary git hooks to GitHub repositories anyway, or can you? | 1 | 0 | 0 | I have a github repo that contains three protected branches; master, staging & uat. Anyone may make other branches to make changes but I would like a way make sure that people merge in this order:
users_branch -> uat -> staging -> master.
I have looked at pre-receive hooks using Python but can't seem to get the information I need on which branches are being merged to create this logic. The only arguments available in pre-receive are: base, commit & ref
Is there any way to enforce that only uat may merge into staging and only staging may merge into master? | Can I enforce the order branches are merged on GitHub | 0 | 0 | 0 | 74 |
43,893,208 | 2017-05-10T12:58:00.000 | 1 | 0 | 0 | 1 | python,flask,server | 43,896,817 | 5 | false | 1 | 0 | This error appears because your server is overloaded! Stop and start it! | 1 | 3 | 0 | I wrote a very simple Flask server. This server responds to GET and gives back my home.html.
I visit the site on 127.0.0.1:5000
everything is good till now.
However, if I keep pressing "refresh" (Command+R on my computer) a lot of times for a few seconds, as fast as I can, then my Flask app gives this error and breaks down.
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 801, in bootstrap_inner
self.run()
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/local/lib/python2.7/site-packages/werkzeug/serving.py", line 659, in inner
srv.serve_forever()
File "/usr/local/lib/python2.7/site-packages/werkzeug/serving.py", line 499, in serve_forever
HTTPServer.serve_forever(self)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 238, in serve_forever
self._handle_request_noblock()
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 297, in _handle_request_noblock
self.handle_error(request, client_address)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 655, in __init
self.handle()
File "/usr/local/lib/python2.7/site-packages/werkzeug/serving.py", line 216, in handle
rv = BaseHTTPRequestHandler.handle(self)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/BaseHTTPServer.py", line 340, in handle
self.handle_one_request()
File "/usr/local/lib/python2.7/site-packages/werkzeug/serving.py", line 251, in handle_one_request
return self.run_wsgi()
File "/usr/local/lib/python2.7/site-packages/werkzeug/serving.py", line 193, in run_wsgi
execute(self.server.app)
File "/usr/local/lib/python2.7/site-packages/werkzeug/serving.py", line 184, in execute
write(data)
File "/usr/local/lib/python2.7/site-packages/werkzeug/serving.py", line 152, in write
self.send_header(key, value)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/BaseHTTPServer.py", line 401, in send_header
self.wfile.write("%s: %s\r\n" % (keyword, value))
IOError: [Errno 32] Broken pipe
I believe this is what happened: when my server was trying to transmit the HTML to my browser, I pressed refresh and broke the pipe. So my server got confused and then gave an error.
How do I solve this problem? Or is this site not usable at all, since anyone who constantly requests my page can bring it down?
Thanks! | Flask gives "error 32 broken pipe" when being requested too often | 0.039979 | 0 | 0 | 4,808 |
43,894,608 | 2017-05-10T13:56:00.000 | 0 | 0 | 1 | 0 | python,multiprocessing,openmp | 43,899,399 | 2 | false | 0 | 0 | Resolved in comments:
OMP_NUM_THREADS is an option for OpenMP, a C/C++/Fortran API for doing multi-threading within a process.
It's unclear how that's even related to Python multiprocessing.
Is your Python program calling modules written in C that use OpenMP internally? – Wyzard | 1 | 12 | 0 | I heard that using OMP_NUM_THREADS=1 before calling a Python script that use multiprocessing make the script faster.
Is it true or not ? If yes, why so ? | Use of OMP_NUM_THREADS=1 for Python Multiprocessing | 0 | 0 | 0 | 16,051 |
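For context, a sketch of how the variable is typically applied when combining an OpenMP/BLAS-backed library (e.g. numpy built against MKL or OpenBLAS) with multiprocessing. It must be set before the library is imported, and whether it helps at all depends on whether such a library is involved, as the quoted comment points out.

# sketch: pin each worker to one thread so processes don't oversubscribe the cores
import os
os.environ['OMP_NUM_THREADS'] = '1'   # must happen before importing numpy/scipy

import numpy as np
from multiprocessing import Pool

def work(seed):
    rng = np.random.RandomState(seed)
    a = rng.rand(500, 500)
    return np.linalg.eigvals(a).real.sum()   # BLAS-heavy call

if __name__ == '__main__':
    with Pool(4) as pool:
        print(pool.map(work, range(8)))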
43,896,369 | 2017-05-10T15:07:00.000 | 1 | 0 | 0 | 0 | python-3.x,text,nlp,embedding,data-cleaning | 43,917,161 | 1 | true | 0 | 0 | I post this here just to summarise the comments in a longer form and give you a bit more commentary. No sure it will answer your question. If anything, it should show you why you should reconsider it.
Points about your question
Before I talk about your question, let me point a few things about your approaches. Word embeddings are essentially mathematical representations of meaning based on word distribution. They are the epitome of the phrase "You shall know a word by the company it keeps". In this sense, you will need very regular misspellings in order to get something useful out of a vector space approach. Something that could work out, for example, is US vs. UK spelling or shorthands like w8 vs. full forms like wait.
Another point I want to make clear (or perhaps you should do that) is that you are not looking to build a machine learning model here. You could consider the word embeddings that you could generate, a sort of a machine learning model but it's not. It's just a way of representing words with numbers.
You already have the answer to your question
You yourself have pointed out that using hunspell introduces new mistakes. It will be no doubt also the case with your other approach. If this is just a preprocessing step, I suggest you leave it at that. It is not something you need to prove. If for some reason you do want to dig into the problem, you could evaluate the effects of your methods through an external task as @lenz suggested.
How does external evaluation work?
When a task is too difficult to evaluate directly we use another task which is dependent on its output to draw conclusions about its success. In your case, it seems that you should pick a task that depends on individual words like document classification. Let's say that you have some sort of labels associated with your documents, say topics or types of news. Predicting these labels could be a legitimate way of evaluating the efficiency of your approaches. It is also a chance for you to see if they do more harm than good by comparing to the baseline of "dirty" data. Remember that it's about relative differences and the actual performance of the task is of no importance. | 1 | 0 | 1 | I am a graduate student focusing on ML and NLP. I have a lot of data (8 million lines) and the text is usually badly written and contains so many spelling mistakes.
So I must go through some text cleaning and vectorizing. To do so, I considered two approaches:
First one:
cleaning text by replacing bad words using hunspell package which is a spell checker and morphological analyzer
+
tokenization
+
convert sentences to vectors using tf-idf
The problem here is that sometimes Hunspell fails to provide the correct word and replaces the misspelled word with another word that doesn't have the same meaning. Furthermore, hunspell does not recognize acronyms or abbreviations (which are very important in my case) and tends to replace them.
Second approach:
tokenization
+
using some embedding method (like word2vec) to convert words into vectors without cleaning the text
I need to know if there is some (theoretical or empirical) way to compare these two approaches :)
Please do not hesitate to respond if you have any ideas to share, I'd love to discuss them with you.
Thank you in advance | Embeddings vs text cleaning (NLP) | 1.2 | 0 | 0 | 842 |
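A compact sketch of the external-evaluation idea from the answer: train the same classifier on the raw text and on the hunspell-cleaned text and compare the scores, assuming you have some document-level labels. The clean_text function is a placeholder for the hunspell step and the toy data is made up; only the relative difference between the two scores matters.

# sketch: compare preprocessing pipelines via a downstream classification task
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def clean_text(doc):
    return doc  # placeholder for the hunspell-based correction

raw_docs = ["exmaple text one", "another badly writen doc",
            "yet more noizy text", "a forth mispelled line"]   # your 8M lines
labels = [0, 1, 0, 1]                                          # assumed topic labels

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())

score_raw = cross_val_score(pipe, raw_docs, labels, cv=2).mean()
score_clean = cross_val_score(pipe, [clean_text(d) for d in raw_docs], labels, cv=2).mean()
print(score_raw, score_clean)   # the relative difference is what matters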
43,896,486 | 2017-05-10T15:12:00.000 | 0 | 0 | 1 | 0 | python,2d-games,pythonista | 48,909,025 | 2 | false | 0 | 1 | Instead of putting the joystick on a separate scene, you should draw it on a scene.Node. Then in your game scene, you can add it like another sprite, using Scene.add_child().
To convert the touch positions to the node's coordinate system, you can use Node.point_from_scene(), and to convert back to the scene's coordinate system, you use Node.point_to_scene().
43,897,009 | 2017-05-10T15:34:00.000 | 0 | 0 | 0 | 0 | python,google-apps-script,gspread | 65,546,184 | 2 | false | 0 | 0 | Make sure you're using the latest version of gspread. The one that is e.g. bundled with Google Colab is outdated:
!pip install --upgrade gspread
This fixed the error in gs.csv_import for me on a team drive. | 1 | 1 | 0 | I am trying to access a Spreadsheet on a Team Drive using gspread. It is not working. It works if the spreadsheet is on my Google Drive. I was wondering if gspread has the new Google Drive API v3 capability available to open spreadsheets on Team Drives. If so, how do I specify the fact I want to open a spreadsheet on a Google Team Drive and not my own Google drive? If not, when will that functionality be available? Thanks! | Does gspread Support Accessing Spreadsheets on Team Drives? | 0 | 1 | 0 | 426 |
43,898,566 | 2017-05-10T16:54:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,chalice | 44,975,845 | 2 | true | 1 | 0 | You wouldn't serve HTML from Chalice directly. It is explicitly designed to work in concert with AWS Lambda and API Gateway to serve dynamic, API-centric content. For the static parts of an SPA, you would use a web server (nginx or Apache) or S3 (with or without CloudFront).
Assuming you are interested in a purely "serverless" application model, I suggest looking into using the API Gateway "Proxy" resource type, forwarding to static resources on S3.
Worth noting that it's probably possible to serve HTML from Chalice, but from an architecture perspective, that's not the intent of the framework and you'd be swimming upstream to get all the capabilities and benefits from tools purpose-built for serving static traffic (full HTTP semantics w/ caching, conditional gets, etc) | 1 | 1 | 0 | Has anyone here ever worked with chalice? Its an aws tool for creating api's. I want to use it to create a single page application, but Im not sure how to actually serve html from it. I've seen videos where its explored, but I can't figure out how they actually built the thing. Anyone have any advice on where to go, how to start this? | Using aws chalice to build a single page application? | 1.2 | 0 | 1 | 923 |
43,903,460 | 2017-05-10T22:06:00.000 | 0 | 0 | 1 | 1 | python,windows,python-2.7 | 43,903,621 | 2 | false | 0 | 0 | If you really do not want to remove the whitespaces in your folder's name, put backslashes \ before the spaces in the variable z, to espace them. | 1 | 0 | 0 | I am relatively new to python. I am trying to call a python file "plotting.py" in another file "main.py". To execute the "plotting.py" file the path should also be given as argument.
So in the "main.py" I have executed so
z='Stream 20170424 15_20_25_856'
os.system('python plotting.py '+z)
Where the variable z is the name of the folder. This name, in general, contains whitespace, and when I execute "main.py" it gives an error. But when I replace the whitespace in the folder name with _ and change the variable z accordingly, "main.py" executes without an error. I cannot change the name of the folder every time, though. So is there any possibility to execute the code keeping the folder name unchanged and giving the variable z as mentioned? | executing the python in another python file with arguments as path | 0 | 0 | 0 | 44 |
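Besides escaping the spaces as the answer suggests, another common fix is to avoid building a shell string at all and pass the folder name as a separate argument; a minimal sketch using subprocess, with the folder name taken from the question:

# sketch: pass the folder name as one argument, spaces and all
import subprocess

z = 'Stream 20170424 15_20_25_856'

# argument-list form: no shell parsing, so the whitespace in z is preserved
subprocess.call(['python', 'plotting.py', z])

# inside plotting.py the folder name then arrives intact as sys.argv[1]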
43,903,569 | 2017-05-10T22:17:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,nose | 43,927,343 | 2 | false | 0 | 0 | I figured out what the problem was. I had a non-test function which included 'test' in its name even though it did not start with 'test_', and as a result it was treated as a test function by nose. When I modified the name of the function, the problem was solved. | 2 | 0 | 0 | I have written a python file containing some unit tests (testfile.py).
When I run 'python testfile.py' the tests run fine and there are no errors.
But when I run 'nosetests testfile.py', I get a TypeError of the form
TypeError: func_name() takes exactly 3 arguments (1 given).
Can you please help me understand what might be going on and how can I modify the python file so that it can be run using both python and nosetests.
Thanks in advance,
Ambarish. | unit test runs with python but fails with nosetests | 0 | 0 | 0 | 590 |
43,903,569 | 2017-05-10T22:17:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,nose | 43,903,654 | 2 | false | 0 | 0 | Yeah, you're getting hung on how discovery works differently on the different tools. It trips me up all the time. Go into the directory with your tests and do:
$ nosetests testfile
(no .py on the end of testfile)
You can also use the python unittest module:
$ python -m unittest testfile | 2 | 0 | 0 | I have written a python file containing some unit tests (testfile.py).
When I run 'python testfile.py' the tests run fine and there are no errors.
But when I run 'nosetests testfile.py', I get a TypeError of the form
TypeError: func_name() takes exactly 3 arguments (1 given).
Can you please help me understand what might be going on and how can I modify the python file so that it can be run using both python and nosetests.
Thanks in advance,
Ambarish. | unit test runs with python but fails with nosetests | 0 | 0 | 0 | 590 |
43,904,029 | 2017-05-10T23:09:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,gensim,word2vec,text-classification | 43,976,879 | 1 | false | 0 | 0 | If you have a trained word2vec model, you can get a word vector via the __getitem__ method:
model = gensim.models.Word2Vec(sentences)
print(model["some_word_from_dictionary"])
Unfortunately, embeddings from word2vec/doc2vec cannot be interpreted by a person (in contrast to topic vectors from LdaModel).
P.S. If the objects in your task are texts, then you should use the Doc2Vec model. | 1 | 1 | 0 | I want to analyze the vectors, looking for patterns and such, and use an SVM on them to complete a classification task between class A and B; the task should be supervised. (I know it may sound odd, but it's our homework.) So as a result I really need to know:
1- how to extract the coded vectors of a document using a trained model?
2- how to interpret them and how does word2vec code them?
I'm using gensim's word2vec. | How extract vocabulary vectors from gensim's word2vec? | 0.379949 | 0 | 0 | 1,519 |
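Building on the answer above, a sketch of turning documents into vectors from a trained model and feeding them to an SVM. Averaging word vectors per document is just one simple choice; the toy corpus and labels are placeholders, and this uses the older gensim API (model[word], as in the answer) where newer versions use model.wv.

# sketch: document vectors by averaging word2vec word vectors, then SVM
import numpy as np
import gensim
from sklearn.svm import SVC

sentences = [["cat", "sat", "mat"], ["dog", "barked", "loud"]] * 50  # toy corpus
labels = [0, 1] * 50

model = gensim.models.Word2Vec(sentences, min_count=1, size=50)

def doc_vector(tokens):
    vecs = [model[w] for w in tokens if w in model]   # __getitem__, as in the answer
    return np.mean(vecs, axis=0)

X = np.array([doc_vector(s) for s in sentences])
clf = SVC().fit(X, labels)
print(clf.predict(X[:2]))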
43,905,907 | 2017-05-11T03:12:00.000 | 2 | 0 | 0 | 1 | linux,python-2.7,subprocess,shutil | 48,022,437 | 3 | false | 0 | 0 | I'll try to complement this after I go home, but just to begin I'll tell an example code I had wrote yesterday.
You can try it yourself though.
I had made 100k copies from an empty file with shutil and with subprocess.call, using the command time to get the execution time.
The result was worse than I expected.
shutil has taken 7 seconds.
subprocess has taken 2 minutes and 30 seconds.
Depending on how you use subprocess, you can allow code injection... by configuration files or user input.
Compatibility issues. Shutil already handle it for you. | 1 | 3 | 0 | I'm a beginner in Python and from a shell scripting background. I have learned shutil and also subprocess to create files/directories.
My question is, which one is better and recommended way of manage files in my OS(Linux/Windows) ? I read some Python books that discourage the use of OS commands to do so.
I'm comfortable with Linux and mostly work in Linux environments, I have a very high tendency to use rm, mkdir, cp commands to manage files. Are there problems/benefits of using one over the other? | Python: Should I use shutil or subprocess to manipulate files and directories as a better approach? | 0.132549 | 0 | 0 | 2,641 |
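For the curious, a small sketch reproducing the kind of comparison described in the answer above, timing shutil.copy against spawning cp via subprocess. The iteration count is reduced, absolute numbers will differ by machine, and the cp variant is POSIX-only.

# sketch: rough timing of shutil.copy vs subprocess.call(['cp', ...])
import os
import shutil
import subprocess
import tempfile
import time

src = tempfile.NamedTemporaryFile(delete=False).name   # empty source file
dst_dir = tempfile.mkdtemp()

start = time.time()
for i in range(1000):
    shutil.copy(src, os.path.join(dst_dir, 'shutil_%d' % i))
print('shutil:     %.2fs' % (time.time() - start))

start = time.time()
for i in range(1000):
    subprocess.call(['cp', src, os.path.join(dst_dir, 'cp_%d' % i)])
print('subprocess: %.2fs' % (time.time() - start))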
43,909,664 | 2017-05-11T07:59:00.000 | 1 | 0 | 1 | 0 | python,class,object,initialization | 43,909,759 | 2 | true | 0 | 0 | Simply equate the the parameters that you would want to be defaulted to their default value.
For instance, __init__(a,b,c=0) would give 0 as the default value to c which you can override by passing another value when object is created. | 1 | 0 | 0 | I am new in python and please excuse me if my question is a basic question or is not clear.
I have a class in python and I want to set some attributes for my objects whenever I generate my objects from this class. I know, I must use __init__ method for this purpose.
The problem is that I want to have this ability that I can set my attributes in two ways. For some of my objects, I want to pass some values to __init__ method and set the attributes with those values. In addition, for some objects, I want to set some default values for attributes of my objects in __init__ method.
My question: How can I have an __init__ method that for some objects uses passed values for setting its attributes, and for some other objects, that are not passed, use default values? | using __init__ method with and without parameters | 1.2 | 0 | 0 | 617 |
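A tiny sketch of the accepted suggestion, with made-up class and attribute names:

# sketch: default attribute values that callers can override selectively
class Student(object):
    def __init__(self, name, age=18, grade="A"):
        self.name = name      # always required
        self.age = age        # defaults are used when nothing is passed
        self.grade = grade

s1 = Student("Ada")               # uses the defaults: age=18, grade="A"
s2 = Student("Bob", age=21)       # overrides just one default
print(s1.age, s2.age)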
43,914,835 | 2017-05-11T12:01:00.000 | 0 | 0 | 1 | 0 | python,syntax-highlighting,geany | 43,915,979 | 2 | false | 0 | 0 | In filetypes.python (can be opened from menu Tools → Configuration Files → Filetype Configuration → Scripting Languages), section [styling]. Colors are coded in hexadecimal. | 1 | 2 | 0 | I have two computers, each with Geany. One has the colour scheme that I like for Python, the other one has some sort of basic scheme with only keywords highlighted. I've looked and looked without any success at filetypes.python and filetypes.common. There's nothing in colorschemes apart from alt.conf.
Where do I find this? | Change colours of syntax highlighting in Geany | 0 | 0 | 0 | 3,619 |
43,920,802 | 2017-05-11T16:17:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 43,920,803 | 1 | false | 0 | 0 | The files path names are too long. Reducing the path length by reducing the number of folders and/or folder name lengths will solve the problem. | 1 | 0 | 1 | My Jupyter Notebook doesn't show in the Jupyter Dashboard in Windows 10.
Additionally, I get the following error in my Jupyter cmd line console:
[W 00:19:39.638 NotebookApp] C:\Users\danie\Documents\Courses\Python-Data-Science-and-Machine-Learning-Bootcamp Jupyter Notebooks\Python-Data-Science-and-Machine-Learning-Bootcamp\Machine Learning Sections\Decision-Trees-and-Random-Forests\Decision Trees and Random Forest Project - Solutions.ipynb doesn't exist | Jupyter Notebook doesn't show in Dashboard (Windows 10) | 0 | 0 | 0 | 368 |
43,920,923 | 2017-05-11T16:23:00.000 | 2 | 0 | 0 | 0 | python,ibm-cloud,ibm-watson,training-data,nl-classifier | 43,935,121 | 2 | false | 0 | 0 | For NLC it depends on the type of data, and quantity. There is no fixed time to when it completes, but I have seen a classifier run a training session for nearly a day.
That said, normally anywhere from 30 minutes to a couple of hours.
Watson conversation Intents is considerably faster (minutes). But both use different models, so I would recommend to test both and see the results. Also check how each is scoring when comparing (absolute/relative). | 2 | 3 | 1 | I have a data-set that contains about 14,700 records. I wish to train it on ibm watson and currently i'm on trial version. What is the rough estimate about the time that the classifier will take to train? Each record of dataset contains a sentence and the second column contains the class-name. | IBM Watson nl-c training time | 0.197375 | 0 | 0 | 222 |
43,920,923 | 2017-05-11T16:23:00.000 | 0 | 0 | 0 | 0 | python,ibm-cloud,ibm-watson,training-data,nl-classifier | 43,921,011 | 2 | false | 0 | 0 | If your operating system is UNIX, you can determine how long a query takes to complete and display results when executed using dbaccess. You can use the time command to report how much time is spent, from the beginning to the end of a query execution. Including the time to connect to the database, execute the query and write the results to an output device.
The time command uses another command or utility as an argument, and writes a message to standard error that lists timing statistics for that command. It reports the elapsed time between invocation of the command and its termination. The message includes the following information:
The elapsed (real) time between invocation and termination of the utility. The real time is divided in two components, based on the kind of processing:
The User CPU time, equivalent to the sum of the tms_utime and tms_cutime fields returned by the times function for the process in which utility is executed.
or,
The System CPU time, equivalent to the sum of the tms_stime and tms_cstime fields returned by the times() function for the process in which utility is executed. | 2 | 3 | 1 | I have a data-set that contains about 14,700 records. I wish to train it on ibm watson and currently i'm on trial version. What is the rough estimate about the time that the classifier will take to train? Each record of dataset contains a sentence and the second column contains the class-name. | IBM Watson nl-c training time | 0 | 0 | 0 | 222 |
43,922,344 | 2017-05-11T17:44:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,jinja2 | 43,922,380 | 1 | true | 1 | 0 | Unfortunately you can't do this directly with Django. You'll have to set up an AJAX handler (probably on keypress) in order to do this. | 1 | 1 | 0 | So I'm sending an item to my html page and put a value off this item in an input.
What i want is when i change the input, i want to dynamically print the new value next to the input.
Something like that :
<input type='text' value="{{item.qty}}"/>
{{myNewInputValue}}
I know how to do this with angular but don't know if it's possible with Python
Thanks | How to show input value next to it ? Python | 1.2 | 0 | 0 | 61 |
43,923,482 | 2017-05-11T18:51:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,tomcat,amazon-ec2,aws-lambda | 43,923,646 | 1 | false | 1 | 0 | has the security group 8080 port open to internet?
To connect Lambdas to a VPC you can't use the default VPC; you have to create one with a NAT gateway.
EDIT: Only if the Lambda function needs access to both the internet and the VPC. | 1 | 0 | 0 | On AWS, I created a new lambda function. I added a role to the lambda that has the policy AWSLambdaVPCAccessExecutionRole. I placed the lambda in the same VPC as my EC2 instance and made sure the security groups assigned to the lambda and the EC2 instance are the same default VPC security group created by AWS, which allows all traffic within the VPC. On my EC2 instance, I have a Tomcat app running on port 8080. I tried to hit the URL by two methods in my lambda function:
Using my load balancer, which has the same assigned security group
Hitting the IP address of the EC2 box with port 8080
Both of these options do not work for the lambda function. I tried it on my local computer and it is fine.
Any suggestions?
Security Group for Inbound
Type = All Traffic
Protocol = All
Port Range = All
Source = Group ID of Security Group | AWS Lambda function can't communicate with EC2 Instance | 0 | 0 | 1 | 1,713 |
43,923,747 | 2017-05-11T19:06:00.000 | 0 | 1 | 1 | 0 | python,linux,encryption,console,passwords | 45,161,150 | 3 | false | 0 | 0 | Store password with code on untrustworthy server, that definitely unsafe. You have to change the way like below if you can.
On the server you control, encrypt the password with a public key.
Generate a different pub_key/private_key pair for every request.
The client you don't trust gets an id and the encrypt_msg, along with an auth token.
The password-requiring server takes the id and the private_key, decrypts the encrypt_msg from the client, and compares the password.
Delete the auth token once the client is no longer needed. | 1 | 0 | 0 | I am trying to run a Python script on a remote server which I don't trust. The script contains a password that is kind of important.
What would be a good way to protect that code/password?
I would give it as an argument or i could prompt input on the terminal but that would be saved in history. | Protect python code including a password, encrypt? | 0 | 0 | 0 | 2,212 |
43,924,479 | 2017-05-11T19:57:00.000 | 2 | 0 | 1 | 0 | python,performance,pygame,pygame-surface | 43,940,127 | 1 | true | 0 | 1 | I've made games that have had arrays with over ten thousand surfaces in it. It ran just fine. You shouldn't be afraid to use a lot of surfaces, go crazy.
If you've ever seen the game Don't Starve, I was making a game similar to it. I had fifteen different biomes, and I had multiple ground tiles for each biome for variation. That alone pushed it over 100 surfaces. I then had a bunch of different plants, like trees, berry bushes, carnivorous happyplants, etc. There was a large variety of these, and you could have a hundred of them loaded at one time in some locations. Again, each of these had variations. I also had many moving entities, like giant spiders and stuff like that. Some types of entities required over a hundred surfaces, just for all the animations that might occur from their variety of possible actions.
All in all, I probably had a maximum of a thousand surfaces being displayed at one time, and over twenty thousand loaded. The game worked just fine, and it would even run at 60 fps; but not without the extensive use of threads, as expected.
The conclusion: Assume there is no limit, because there may as well not be any. | 1 | 2 | 0 | How many instances of pygame.Surface is it reasonable to use simultaneously?
More precisely, how costly is it to:
Hold a surface in memory?
Blit a surface onto another?
Consequently, how many surfaces can be kept in a list (or any other container), and how many surfaces can be blitted at every frame, with respect for other operations related to the application?
Since this might be too broad, here is a concrete situation. I want to make an animated background, with repeating patterns. I want every element of the pattern to move independently, so I use one surface for each of them.
If I want to display one hundred elements, is it still acceptable to use one surface per element? | How many surfaces is reasonable? | 1.2 | 0 | 0 | 47 |
43,926,478 | 2017-05-11T22:19:00.000 | 0 | 0 | 1 | 0 | python,list,types,structure | 45,784,200 | 1 | false | 0 | 0 | My understanding is that lists, tuples, and dictionaries are data structures.
A data structure is essentially a container that can hold different data types (eg int, float) and on its own has a set of operating primitives that have different properties than data types. The line between types and structures is a bit blurred though- for example, strings can be understood as both. | 1 | 1 | 0 | I'm a bit confused as to what lists, tuples and dictionaries are classified as in python. I understand that int and string are examples of primitive data types in the language but I am not sure what lists, tuples and dictionaries are. Are they data structures? | In Python are Lists, Tuples, and Dictionaries Data Types? Or Data Structures? | 0 | 0 | 0 | 738 |
43,927,525 | 2017-05-12T00:20:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,html5lib | 43,927,551 | 2 | false | 1 | 0 | I think you're on a Mac.
And it smells like you are trying to install into the system-level Python directories without being root (hence, "Operation not permitted.")
See if you have the "virtualenv" command on your system (you probably do), and then read up on that for how to use it. (You really want it.) | 1 | 0 | 0 | I am still pretty new to python, and I need html5lib for a project, but when I run pip install html5lib, here's what I get:
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/init.py', '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/init.py', "[Errno 1] Operation not permitted: '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/init.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/init.pyc', '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/init.pyc', "[Errno 1] Operation not permitted: '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/init.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/var/folders/yr/8762117x5h7_pwb9fx5f0tzr0000gn/T/pip-uiZ0aQ-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
Really long gross error, I know, but I'm not sure what is going on. I've actually had errors trying to install other python packages as well and I am not sure what the problem is. Any help or insight would be greatly aprecaited, thanks! | Error when trying to install html5lib | 0 | 0 | 0 | 1,726 |
43,928,063 | 2017-05-12T01:46:00.000 | 9 | 1 | 1 | 0 | python,pylint | 43,928,076 | 1 | true | 0 | 0 | just figured it out from some trial and error, a little hard to figure out by reading the docs only:
pylint --disable=all --enable=W0611 | 1 | 7 | 0 | I want to pylint a project against unused imports to make upgrading a module easier (W0611). | Is it possible to pylint for a specific error code? | 1.2 | 0 | 0 | 736 |
43,928,229 | 2017-05-12T02:08:00.000 | 1 | 0 | 0 | 0 | python,pygame,cursor,coordinates,python-3.2 | 43,928,789 | 2 | false | 0 | 1 | You can use map:
x, y = map(str, pygame.mouse.get_pos()) | 1 | 1 | 0 | I'm using pygame and am using the code pygame.mouse.get_pos(), but need to turn this into two seperate strings: one where x = the x coordinate, and one where y = the y coordinate. Thanks for the help. | pygame.mouse.get_pos() into two separate strings | 0.099668 | 0 | 0 | 899 |
43,928,517 | 2017-05-12T02:50:00.000 | 1 | 0 | 0 | 0 | python,django,windows-7 | 43,939,021 | 3 | true | 1 | 0 | Exprator's answer worked for me. I installed Django while inside virtualenv and did "django-admin --version", it works fine then I did the code: "django-admin startproject mysite ." (The period is included) and it worked. Finally the manage.py showed up inside the project folder. | 1 | 0 | 0 | I tried to research on all related topics here and googled it but the django-admin command is not doing anything whenever I type it in CMD (win 7).
It only opens my pycharm and shows codes inside the django-admin.py. I already added it to environment variables and tried these syntax but didn't work (I was able to create and run virtual environment):
version1:
django-admin startproject mysite
version2:
django-admin.py startproject mysite
version3:
path-to-django-admin\django-admin.py startproject mysite
I even tried to copy the django-admin.py on my project folder but didn't do any good. | Django-admin is not working Django 1.11 Python 3.6 | 1.2 | 0 | 0 | 1,335 |
43,931,149 | 2017-05-12T06:53:00.000 | 0 | 0 | 1 | 0 | python,vba | 43,934,019 | 2 | false | 0 | 0 | I think basically you want to translate the VBA to python.
If you can take a look at how the API was constructed then it is possible but you have to translate the code by yourself.
If you cannot, then you could build Python scripts on your own, based on the logic you figure out. You also have to know about the actuarial software you mentioned: whether it has an API to extract data, or any other means to get the data to process (this should be possible, since the API in VBA could do that).
I have experience in finance/banking, working with financial data in VBA and Python, and I'm somewhat familiar with APIs for accounting software, so if you want you can contact me to discuss so I can help. I think wrapping this up in a single answer is impossible. | 1 | 0 | 0 | I bought some API for VBA Excel/reference. Is it possible to use this API in Python 2.x? Maybe the question should be: is it possible to import a VBA reference into Python? This is just the idea; I do not have any clue if this is even possible. If it is not possible, is there some nice solution? Do you have any experience with this? Thanks | VBA API in Python | 0 | 0 | 0 | 498 |