Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
35,109,095
2016-01-31T00:54:00.000
0
0
0
0
python,multithreading,http,operating-system,communication
35,109,280
1
false
0
0
I think you can return x in a callback function, or, like twisted's reactor.callLater, server B could ask again and wait until A has the latest result. But since x updates every second, this may lead to the other servers constantly re-requesting.
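As a rough illustration (not part of the original answer), one common way to keep the returned copy consistent is to guard x with a lock so the updater and the HTTP handler never see a half-written value; the function and variable names below are hypothetical:

    import copy
    import threading

    _lock = threading.Lock()
    _x = {}                       # the frequently updated value (hypothetical structure)

    def update_x(new_value):      # called by the updater on computer A every second
        global _x
        with _lock:
            _x = new_value

    def snapshot_x():             # called by the HTTP GET handler when B asks for x
        with _lock:
            return copy.deepcopy(_x)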
1
0
0
Let's assume I have computer A and a variable x which is updated very frequently, and the update also takes some time (let's say: it is asked to update every second and the update takes 0.5 sec). Now, once a minute, computer B asks for x's value in an HTTP GET request. A sends it a copy of x. Because x might be in use by A, I need to make sure that nothing goes wrong. How can I assure it? What are my options for doing this?
How to send constantly updated data in http request?
0
0
1
52
35,109,757
2016-01-31T02:44:00.000
2
0
0
0
python,django,api,security,token
35,111,076
1
false
1
0
A session cookie is not a bad solution. You could also mark the cookie as "secure" to make sure that it can only be sent over https. It is far better than using e.g. localstorage.
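A minimal sketch of what that can look like in Django; the settings are real Django options, but the view and the obtain_api_token helper are assumptions for illustration:

    # settings.py -- only send the session cookie over HTTPS, keep it away from JavaScript
    SESSION_COOKIE_SECURE = True
    SESSION_COOKIE_HTTPONLY = True

    # views.py -- store the API token in the server-side session after login
    from django.http import HttpResponse

    def login_view(request):
        token = obtain_api_token(request)      # hypothetical helper that calls the backend API
        request.session["api_token"] = token   # stored server-side; only a session id travels in the cookie
        return HttpResponse("logged in")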
1
2
0
I have an application which issues a simple request with basic auth which returns a session token. I then want to use that token for subsequent calls to that same application interface. My question is, is it OK to store this token in the session/cookie of the logged in user, or should I approach this a different way? I want to ensure 100% user security at all times.
Django - Storing a users API session token securely
0.379949
0
0
1,030
35,113,104
2016-01-31T11:05:00.000
0
0
0
0
python,html,django
36,265,905
1
false
1
0
I figured out the issue; all I had to do was restart the web server after making the changes. Thanks anyway guys!
1
0
0
I am new here. I have googled for help regarding my issue to no avail. I am new to Python and the Django framework. Issue I am facing: I have a website built with the Django framework. In this website, I have a drop-down menu and I want to add a new page there. This new webpage only contains text and there is no user interaction needed. 1) I have created the new webpage, and put it in the "webapps/template" folder. 2) I updated the urls.py file with the new webpage's url in the "webapps" folder. 3) I have updated the base.html with the new webpage's url in the "webapps/template" folder. I have worked with 3 files in total: new webpage, urls.py and base.html. When I upload the files, the site breaks. What am I missing here? Do I need to update another URL file somewhere? Please advise?
Having issues adding a new web page in Django
0
0
0
117
35,113,618
2016-01-31T12:03:00.000
0
0
0
0
android,console,qpython
52,803,215
2
false
0
1
Edit the file end.sh and remove or comment out the "read" command.
1
4
0
I'm running Qpython (android) 1.2.3 (the latest as of 2016-01-31) and I can get scripts to run. I've been able to create a shortcut on my homescreen for a simple test script that puts the current date in yyyy-mm-dd format into the clipboard, and speaks the content. I have #qpy: console directing the script to execute in the console. When I click the shortcut, the clipboard is happily updated, and read out to me. But... the console window stays open, telling me to hit enter to close the window. I have tried: adding "exit" adding "exit(0)" adding import sys sys.exit() I have tried printing ctrl-d I still get the console staying open until I manually hit enter. I tried reheadering my script to run as kivy instead of console, and I could get it to exit, but it takes several seconds to load up kivy, and it's silly to load a huge amount of gui capability when I neither need nor want it. How can I close the console automatically?
How do I exit Qpython (android) console automatically after script completes?
0
0
0
1,803
35,119,959
2016-01-31T21:52:00.000
0
0
0
0
python,mongodb,pymongo,database
35,120,084
3
false
0
0
You can create a little REST API for your database with unique keys, and all the people on your team will be able to use it. If you only want to export once - just export it to JSON and there is no problem.
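For the one-off export route, a minimal sketch with pymongo; the database, collection and file names are hypothetical:

    from pymongo import MongoClient
    from bson.json_util import dumps    # serializes ObjectId, dates and other BSON types

    client = MongoClient("mongodb://localhost:27017")
    docs = client["scraper_db"]["pages"].find()
    with open("pages.json", "w") as f:
        f.write(dumps(list(docs)))      # one JSON array with every document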
1
1
0
Our current Python pipeline scrapes from the web and stores those data into the MongoDB. After that we load the data into an analysis algorithm. This works well on a local computer since mongod locates the database, but I want to upload the database on sharing platform like Google Drive so that other users can use the data without having to run the scraper again. I know that MongoDB stores data at /data/db as default, so could I upload the entire /data/db onto the Google Drive? Another option seems to be exporting MongoDB into JSON or CSV, but our current implementation for the analysis algorithm already loads directly from MongoDB.
How to share database created by MongoDB?
0
1
0
7,187
35,121,192
2016-02-01T00:13:00.000
14
0
0
0
python,r,pandas,dataframe
35,121,242
3
true
0
0
Edit: If you can install and use the {reticulate} package, then this answer is probably outdated. See the other answers below for an easier path. You could load the pickle in python and then export it to R via the python package rpy2 (or similar). Once you've done so, your data will exist in an R session linked to python. I suspect that what you'd want to do next would be to use that session to call R and saveRDS to a file or RAM disk. Then in RStudio you can read that file back in. Look at the R packages rJython and rPython for ways in which you could trigger the python commands from R. Alternatively, you could write a simple python script to load your data in Python (probably using one of the R packages noted above) and write a formatted data stream to stdout. Then that entire system call to the script (including the argument that specifies your pickle) can be used as an argument to fread in the R package data.table. Alternatively, if you wanted to keep to standard functions, you could use a combination of system(..., intern=TRUE) and read.table. As usual, there are /many/ ways to skin this particular cat. The basic steps are: Load the data in python Express the data to R (e.g., exporting the object via rpy2 or writing formatted text to stdout with R ready to receive it on the other end) Serialize the expressed data in R to an internal data representation (e.g., exporting the object via rpy2 or fread) (optional) Make the data in that session of R accessible to another R session (i.e., the step to close the loop with rpy2, or if you've been using fread then you're already done).
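A minimal sketch of the rpy2 route described above, assuming rpy2 and pandas are installed; the file names are hypothetical:

    import pandas as pd
    import rpy2.robjects as ro
    from rpy2.robjects import pandas2ri

    pandas2ri.activate()                  # automatic pandas <-> R data.frame conversion
    df = pd.read_pickle("frame.pkl")      # load the pickled DataFrame on the Python side
    ro.r["saveRDS"](df, file="frame.rds") # hand it to R's saveRDS
    # then, in R / RStudio:  df <- readRDS("frame.rds")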
1
39
1
Is there an easy way to read pickle files (.pkl) from Pandas Dataframe into R? One possibility is to export to CSV and have R read the CSV but that seems really cumbersome for me because my dataframes are rather large. Is there an easier way to do so? Thanks!
Reading a pickle file (PANDAS Python Data Frame) in R
1.2
0
0
36,680
35,122,755
2016-02-01T03:52:00.000
2
0
1
1
python,scheduled-tasks
35,123,125
3
false
0
0
The easiest way is going to be to do this in the shell, not using pure python. Just run python test1.py && python test2.py or python test1.py; python test2.py. The one with && won't run test2.py if test1 fails while the one using ; will run both regardless.
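If you would rather drive it from Python itself, a minimal sketch (assuming Python 3.5+ for subprocess.run):

    import subprocess

    # equivalent of `python test1.py && python test2.py`
    subprocess.run(["python", "test1.py"], check=True)   # raises CalledProcessError if test1 fails
    subprocess.run(["python", "test2.py"])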
1
0
0
For example, I have two python files test1.py and test2.py. At first, test1.py will run. And I want test2.py to be run when test1.py is finished. I want the two python files to run in different shells. That means test1.py should be closed when it is finished. All help is appreciated! Thank you! I want this task to be some kind of scheduler task. At 12:00 pm test1.py is executed. And after test1.py is finished, I want to execute test2.py automatically
How to run another python file when a python file is finished
0.132549
0
0
1,210
35,123,248
2016-02-01T04:59:00.000
2
0
0
0
python,machine-learning,cluster-analysis,word
35,149,501
2
false
0
0
Word clustering will be really disappointing because the computer does not understand language. You could use levenshtein distance and then do hierarchical clustering. But: dog and fog have a distance of 1, i.e. are highly similar. dog and cat have 3 out of 3 letters different. So unless you can define a good measure of similarity, don't cluster words.
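A minimal sketch of that approach - a plain edit distance plus SciPy hierarchical clustering; the word list is toy data:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def levenshtein(a, b):
        # classic dynamic-programming edit distance
        if len(a) < len(b):
            a, b = b, a
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    words = ["dog", "fog", "cat", "cats", "category"]
    # condensed pairwise distance matrix, the format linkage() expects
    dists = [levenshtein(words[i], words[j])
             for i in range(len(words)) for j in range(i + 1, len(words))]
    Z = linkage(np.array(dists, dtype=float), method="average")
    print(fcluster(Z, t=2, criterion="distance"))   # one cluster label per word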
1
0
1
How to cluster only words in a given set of data: I have been going through a few algorithms online like the k-means algorithm, but it seems they are related to document clustering instead of word clustering. Can anyone suggest me some way to only cluster words in a given set of data? Please, I am new to python.
Word clustering in python
0.197375
0
0
4,147
35,124,720
2016-02-01T07:05:00.000
0
0
0
1
python,scrapy,scrapyd
48,967,865
2
false
1
0
Facing the same issue, the solution was hastened by reviewing scrapyd's error log. The logs are possibly located in the folder /tmp/scrapydeploy-{six random letters}/. Check out stderr. Mine contained a permissions error: IOError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/binary_agilo-1.3.15-py2.7.egg/EGG-INFO/entry_points.txt'. This happens to be a package that was installed system-wide last week, thus leading to scrapyd-deploy failing to execute. Removing the package fixes the issue. (Instead, the binary_agilo package is installed in a virtualenv.)
1
1
0
Traceback (most recent call last): File "/usr/local/bin/scrapyd-deploy", line 273, in main() File "/usr/local/bin/scrapyd-deploy", line 95, in main egg, tmpdir = _build_egg() File "/usr/local/bin/scrapyd-deploy", line 240, in _build_egg retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e) File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.py", line 276, in retry_on_eintr return function(*args, **kw) File "/usr/lib/python2.7/subprocess.py", line 540, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-sV4Ws2']' returned non-zero exit status 1
Fail to scrapyd-deploy
0
0
0
1,047
35,126,954
2016-02-01T09:27:00.000
1
0
0
0
python,tensorflow,recurrent-neural-network
35,147,923
1
false
0
0
It looks like there is a difference between your dev and train data: global step 374600 learning rate 0.0069 step-time 1.92 perplexity 1.02 eval: bucket 0 perplexity 137268.32 Your training perplexity is 1.02 -- the model is basically perfect on the data it receives for training. But your dev perplexity is enormous, the model does not work at all for the dev set. How did it look in earlier epochs? I would suspect that there is some mismatch. Maybe the tokenization is different for train and dev? Maybe you loaded the wrong file? Maybe the sizes of the buckets from the original translation model are not appropriate for your dev data? It's hard to say without knowing more details. As to when to stop: the original translation model has an infinite training loop because it has a large data-set and capacity and could continue improving for many weeks of training. But it also lowers the learning rate when it's not improving any more, so if your learning rate is very low (as it seems to be in your case), it's a clear signal you can stop.
1
1
1
I am training a seq2seq model since many days on a custom parallel corpus of about a million sentences with default settings for the seq2seq model. Following is the output log which has crossed 350k steps as mentioned in the tutorial. I saw that the bucket perplexity have suddenly increased significantly the overall train perplexity is constant at 1.02 since a long time now , also the learning rate was initialized at 0.5 but now it shows about 0.007 , so the learning rate has also significantly decreased, Also the output of the system is not close to satisfactory. How can I know if the epoch point is reached and should I stop and reconfigure settings like parameter tuning and optimizer improvements? global step 372800 learning rate 0.0071 step-time 1.71 perplexity 1.02 eval: bucket 0 perplexity 91819.49 eval: bucket 1 perplexity 21392511.38 eval: bucket 2 perplexity 16595488.15 eval: bucket 3 perplexity 7632624.78 global step 373000 learning rate 0.0071 step-time 1.73 perplexity 1.02 eval: bucket 0 perplexity 140295.51 eval: bucket 1 perplexity 13456390.43 eval: bucket 2 perplexity 7234450.24 eval: bucket 3 perplexity 3700941.57 global step 373200 learning rate 0.0071 step-time 1.69 perplexity 1.02 eval: bucket 0 perplexity 42996.45 eval: bucket 1 perplexity 37690535.99 eval: bucket 2 perplexity 12128765.09 eval: bucket 3 perplexity 5631090.67 global step 373400 learning rate 0.0071 step-time 1.82 perplexity 1.02 eval: bucket 0 perplexity 119885.35 eval: bucket 1 perplexity 11166383.51 eval: bucket 2 perplexity 27781188.86 eval: bucket 3 perplexity 3885654.40 global step 373600 learning rate 0.0071 step-time 1.69 perplexity 1.02 eval: bucket 0 perplexity 215824.91 eval: bucket 1 perplexity 12709769.99 eval: bucket 2 perplexity 6865776.55 eval: bucket 3 perplexity 5932146.75 global step 373800 learning rate 0.0071 step-time 1.78 perplexity 1.02 eval: bucket 0 perplexity 400927.92 eval: bucket 1 perplexity 13383517.28 eval: bucket 2 perplexity 19885776.58 eval: bucket 3 perplexity 7053727.87 global step 374000 learning rate 0.0071 step-time 1.85 perplexity 1.02 eval: bucket 0 perplexity 46706.22 eval: bucket 1 perplexity 35772455.34 eval: bucket 2 perplexity 8198331.56 eval: bucket 3 perplexity 7518406.42 global step 374200 learning rate 0.0070 step-time 1.98 perplexity 1.03 eval: bucket 0 perplexity 73865.49 eval: bucket 1 perplexity 22784461.66 eval: bucket 2 perplexity 6340268.76 eval: bucket 3 perplexity 4086899.28 global step 374400 learning rate 0.0069 step-time 1.89 perplexity 1.02 eval: bucket 0 perplexity 270132.56 eval: bucket 1 perplexity 17088126.51 eval: bucket 2 perplexity 15129051.30 eval: bucket 3 perplexity 4505976.67 global step 374600 learning rate 0.0069 step-time 1.92 perplexity 1.02 eval: bucket 0 perplexity 137268.32 eval: bucket 1 perplexity 21451921.25 eval: bucket 2 perplexity 13817998.56 eval: bucket 3 perplexity 4826017.20 And when will this stop ?
How can I know if the epoch point is reached in seq2seq model?
0.197375
0
0
972
35,129,697
2016-02-01T11:44:00.000
3
0
0
0
python,django,django-rest-framework
37,944,880
3
false
1
0
Both of them refer to the same thing, with a slight difference. Model fields are used within the database, i.e. while creating the schema, and are visible only to the developer, while serializer fields are used when exposing the API to the client, and are visible to the client as well.
3
9
0
If we can validate the values using the conventional model fields, then why does Django REST Framework contain its own serializer fields? I know that serializer fields are used to handle converting between primitive values and internal datatypes. Apart from this, is there anything different between them?
Difference between model fields(in django) and serializer fields(in django rest framework)
0.197375
0
0
5,561
35,129,697
2016-02-01T11:44:00.000
12
0
0
0
python,django,django-rest-framework
35,133,156
3
true
1
0
Well there is a ModelSerializer that can automatically provide the serializer fields based on your model fields (given the duality you described). A ModelSerializer allows you to select which models fields are going to appear as fields in the serializer, thus allowing you to show/hide some fields. A field in a model, is conventionally tied to a data store (say a column in a database). A DRF Serializer can exist without a Django model too, as it serves to communicate between the API and the client, and its fields can be in many forms that are independent from the model and the backing database, e.g. ReadOnlyField, SerializerMethodField etc
3
9
0
If we can validate the values using the conventional model fields, then why does Django REST Framework contain its own serializer fields? I know that serializer fields are used to handle converting between primitive values and internal datatypes. Apart from this, is there anything different between them?
Difference between model fields(in django) and serializer fields(in django rest framework)
1.2
0
0
5,561
35,129,697
2016-02-01T11:44:00.000
6
0
0
0
python,django,django-rest-framework
35,133,230
3
false
1
0
Model fields are what you keep in your database. (it answers how you want your data organized) Serializer fields are what you expose to your clients. (it answers how you want your data represented) For models.ForeignKey(User) of your model, you can represent it in your serializer as an Int field, or UserSerializer(which you will define), or as http link that points to the api endpoint for the user. You can represent the user with username, it's up to how you want to represent it. With DRF, You can hide model fields, mark it as read-only/write-only. You can also add a field that is not mappable to a model field.
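A minimal sketch of the kind of flexibility described above; the Post model, its fields, and the import path are assumptions for illustration:

    from rest_framework import serializers
    from myapp.models import Post        # hypothetical app and model

    class PostSerializer(serializers.ModelSerializer):
        # expose the author ForeignKey as the related username instead of the raw integer id
        author = serializers.SlugRelatedField(slug_field="username", read_only=True)
        # a field that exists only in the API representation, not on the model
        summary = serializers.SerializerMethodField()

        class Meta:
            model = Post
            fields = ["id", "author", "summary"]

        def get_summary(self, obj):
            return obj.body[:50]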
3
9
0
If we can validate the values using the conventional model fields, then why does Django REST Framework contain its own serializer fields? I know that serializer fields are used to handle converting between primitive values and internal datatypes. Apart from this, is there anything different between them?
Difference between model fields(in django) and serializer fields(in django rest framework)
1
0
0
5,561
35,132,569
2016-02-01T14:06:00.000
0
0
0
0
python,feature-selection,supervised-learning
35,132,831
2
false
0
0
I personally am new to Python, but I would use a list. I would then do a membership check against that list. If the value is a member, run/use the RandomForest regressor; if it is not, use/run another regressor.
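A minimal sketch of that idea with scikit-learn - split the rows on the gender column and fit a different estimator to each subset; the toy DataFrame, column names and the second estimator are assumptions:

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    # toy data standing in for the real dataset
    df = pd.DataFrame({
        "gender": ["male", "female", "male", "female"],
        "age":    [30, 25, 40, 35],
        "height": [180, 165, 175, 170],
        "y":      [1.0, 2.0, 1.5, 2.5],
    })
    feature_cols = ["age", "height"]

    male = df[df["gender"] == "male"]
    other = df[df["gender"] != "male"]

    rf = RandomForestRegressor(n_estimators=10).fit(male[feature_cols], male["y"])
    lr = LinearRegression().fit(other[feature_cols], other["y"])
    # at prediction time, route each row to the estimator matching its gender value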
1
4
1
What I'm trying to do is build a regressor based on a value in a feature. That is to say, I have some columns where one of them is more important (let's suppose it is gender) (of course it is different from the target value Y). I want to say: - If the gender is Male then use the randomForest regressor - Else use another regressor Do you have any idea about if this is possible using sklearn or any other library in python?
Use a different estimator based on value
0
0
0
47
35,133,678
2016-02-01T14:59:00.000
0
0
0
0
python,adodb
35,134,057
1
true
0
0
Yes, that was the problem. Once I change the value of one of a recordset's ADODB.Field objects, I have to either update the recordset using ADODB.Recordset.Update() or call CancelUpdate(). The reason I'm going through all this rigarmarole of the ADODB.Command object is that ADODB.Recordset.Update() fails at random (or so it seems to me) times, complaining that "query-based update failed because row to update cannot be found". I've never been able to predict when that will happen or find a reliable way to keep it from happening. My only choice when that happens is to replace the ADODB.Recordset.Update() call with the construction of a complete update query and executing it using an ADODB.Connection or ADODB.Command object.
1
0
0
My Python script uses an ADODB.Recordset object. I use an ADODB.Command object with a collection of ADODB.Parameter objects to update a record in the set. After that, I check the state of the recordset, and it was 1, which is adStateOpen. But when I call MyRecordset.Close(), I get an exception complaining that the operation is invalid in the set's current state. What state could an open recordset be in that would make it invalid to close it, and what can I do to fix it? Code is scattered between a couple of files. I'll work on getting an illustration together.
Why can't I close an open ADODB.Recordset?
1.2
1
0
305
35,134,225
2016-02-01T15:28:00.000
0
0
1
1
cmd,ipython,jupyter
65,552,980
8
false
0
0
To add Jupyter as a Windows CLI command, you need to add "C:\Users\user\AppData\Roaming\Python\Python38\Scripts" to your environment path. This solved it for me.
6
22
0
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
0
0
0
60,064
35,134,225
2016-02-01T15:28:00.000
4
0
1
1
cmd,ipython,jupyter
61,023,501
8
false
0
0
This is an old question, but try using python -m notebook This was the only way I was able to get jupyter to start after installing it on the windows 10 command line using pip. I didn't try touching the path.
6
22
0
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
0.099668
0
0
60,064
35,134,225
2016-02-01T15:28:00.000
-1
0
1
1
cmd,ipython,jupyter
49,664,386
8
false
0
0
Go to Anaconda Command Prompt and type jupyter notebook and wait for 30 seconds. You can see that your local host site will automatically open.
6
22
0
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
-0.024995
0
0
60,064
35,134,225
2016-02-01T15:28:00.000
0
0
1
1
cmd,ipython,jupyter
44,598,275
8
false
0
0
For future reference: the first hurdle of starting with Python is to install it. I downloaded the Anaconda 4.4 for Windows, Python 3.6 64-bit installer. After sorting the first hurdle of updating the "path" Environmental Variable, and running (at the Python prompt) "import pip", all the instructions I found to install the IPython Notebook generated errors. Submitting the commands "ipython notebook" or "jupyther notebook" from the Windows Command Prompt or the Python prompt generated error messages. Then I found that the Anaconda installation consists of a host of applications, on of them being the "Jupyter Notebook" application accessible from the Start menu. This application launch (first a shell, then) a browser page. The application points to a shortcut in , a directory set during the Anaconda installation. The shortcut itself refers to a few locations. Ready for next hurdle.
6
22
0
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
0
0
0
60,064
35,134,225
2016-02-01T15:28:00.000
0
0
1
1
cmd,ipython,jupyter
59,917,360
8
false
0
0
If you use Python 3, try running the command from your virtual environment and or Anaconda command instead of your computer's OS CMD.
6
22
0
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
0
0
0
60,064
35,134,225
2016-02-01T15:28:00.000
8
0
1
1
cmd,ipython,jupyter
48,469,315
8
false
0
0
Try to open it using the Anaconda Prompt. Just type jupyter notebook and press Enter. Anaconda Prompt has existed for a long time and is the correct way of using Anaconda. May be you have a broken installation somehow. Try this, if the above doesn't work- In the Command Prompt type, pip3 install jupyter if you're using Python3 Else, if you are using Python2.7 then type pip install jupyter. ...Some installation should happen... Now retry typing jupyter notebook in the CMD, it should work now.
6
22
0
I cannot get jupyter running from my Command line using: jupyter notebook jupyter is not recognised as an internal or external command, operable program or batch file' But I can get it running from pycharm (slick but with issues). When I take the kernel's IP and Port from pycharm and paste it into my browser I can get it running from there. I cannot use anaconda because of Arcpy, and I have dug around the jupyter files for some hints. I'm assuming I need to add something to my path?
Jupyter From Cmd Line in Windows
1
0
0
60,064
35,136,140
2016-02-01T17:02:00.000
0
1
0
1
python,deployment,updates,beagleboneblack,yocto
35,147,597
1
false
1
0
A natural strategy would be to make use of the package manager also used for the rest of the system. The various package managers of Linux distributions are not closed systems. You can create your own package repository containing just your application/scripts and add it as a package source on your target. Your "updater" would work on top of that. This is also a route you can go when using yocto.
1
1
0
For the moment I've created a Python web application running on uwsgi with a frontend created in EmberJS. There is also a small python script running that is controlling I/O and serial ports connected to the beaglebone black. The system is running on debian, packages are managed and installed via ansible, and the applications are also updated via some ansible scripts. In other words, updates are for the moment done by manual work, launching the ansible scripts over ssh. I'm now searching for a strategy/method to update my python applications in an easy way that can also be done by our clients (ex: via webinterface). A good example is the update of a router firmware. I'm wondering how I can use a similar strategy for my python applications. I checked Yocto, where I can build my own linux, but I don't see how to include my applications in those builds, and I don't want to build a complete image in case of hotfixes. Anyone who has a similar project who would like to share some useful information on how to handle some upgrade strategies/methods?
Update strategy Python application + Ember frontend on BeagleBone
0
0
0
141
35,139,766
2016-02-01T20:32:00.000
8
1
1
0
python,python-imaging-library
52,045,391
1
false
0
0
You have to install PIL or pillow, try: pip install pillow
1
5
0
I am getting following error: raise ImportError('PILKit was unable to import the Python Imaging Library. Please confirm it's installed and available on your current Python path.') ImportError: PILKit was unable to import the Python Imaging Library. Please confirm it's installed and available on your current Python path.
PILKit was unable to import the Python Imaging Library
1
0
0
4,867
35,140,140
2016-02-01T20:55:00.000
1
0
0
0
python,macos,python-3.x,tkinter
35,140,293
1
true
0
1
Focus means that your window will receive all keyboard events until some other window gets the focus. A grab tells the window manager that your window should have the focus until you explicitly tell it that it is allowed to take it away (ungrab).
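A minimal sketch showing both on a confirmation dialog; the widget layout and texts are illustrative:

    import tkinter as tk

    root = tk.Tk()

    def confirm_exit():
        top = tk.Toplevel(root)
        tk.Label(top, text="Are you sure you want to exit?").pack()
        tk.Button(top, text="Yes", command=root.destroy).pack()
        top.focus_set()    # keyboard events now go to the dialog
        top.grab_set()     # clicks on the root window are ignored until the grab is released
        root.wait_window(top)

    tk.Button(root, text="Quit", command=confirm_exit).pack()
    root.mainloop()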
1
2
0
In tkinter I am creating a Toplevel widget to prompt the user with "Are you sure you want to exit?" every time they try to exit my application. While this is happening, I want the Toplevel widget to have full focus of my application and the user to be unable to click anything on the root window, if possible. While trying to figure out to do this, I discovered grabs and the ability to set the focus of the application. What is the difference between these two things? Thanks in advance.
Tkinter: What's the difference between grab and focus?
1.2
0
0
1,583
35,143,105
2016-02-02T00:40:00.000
1
0
0
0
python,tkinter
35,143,172
1
true
0
1
That's simply how it's designed to work. It's a very powerful feature. You worry about what widgets you want in a frame and tkinter can take care of doing all the math to make sure everything fits.
1
0
0
Quick question everyone. I apologize for how simple it is, but why is it that when I add something to a frame, like a checkbutton or label, the formatting of the frame goes away and its size snaps to whatever it is I put inside it? Thank you, Mark
TKInter Frame formatting not working
1.2
0
0
30
35,143,935
2016-02-02T02:16:00.000
1
0
1
1
python,file,directory
35,143,988
1
false
0
0
The default folder is your current working directory, likely to be where you started your python interpreter. You can check it by print(os.getcwd()) to display it. To change the current working directory, you can run os.chdir('C:/MyFolder'), where you can swap C:/MyFolder to any desired path you want.
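A small sketch - either pass a full path to os.remove, or change the working directory first; the folder and file names are hypothetical:

    import os

    folder = "C:/MyFolder"                            # hypothetical folder
    os.remove(os.path.join(folder, "first.txt"))      # full path: no need to change directories

    os.chdir(folder)                                  # or change the working directory...
    os.remove("second.txt")                           # ...and use a name relative to it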
1
0
0
I know you can use os.remove(myfile) to delete files. But what is the default folder location of this file? How do I change the folder directory?
How to delete a file in a specific folder in Python?
0.197375
0
0
2,385
35,146,444
2016-02-02T06:21:00.000
1
0
1
0
python,python-2.7,tensorflow
35,172,568
4
false
0
0
You simply can't get the value of the 0th element of [[1,2,3]] without run()-ning or eval()-ing an operation that would retrieve it, because before you 'run' or 'eval' you only have a description of how to get this inner element (TF uses symbolic graphs/calculations). So even if you used tf.gather/tf.slice, you would still have to get the values of these operations via eval/run. See @mrry's answer.
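A minimal sketch of that point, assuming TensorFlow 1.x graph mode (the API the question is about):

    import tensorflow as tf    # TensorFlow 1.x

    t = tf.constant([[1, 2, 3]])
    inner = t[0]                       # still only a graph node, no concrete value yet
    with tf.Session() as sess:
        print(sess.run(inner))         # [1 2 3] -- values exist only after run()/eval()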
2
53
1
This question is with respect to accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3] (this can be performed using .eval() or sess.run()), but it takes longer when the size of the tensor is huge. Is there any method to do the same faster? Thanks in advance.
Tensorflow python : Accessing individual elements in a tensor
0.049958
0
0
79,845
35,146,444
2016-02-02T06:21:00.000
1
0
1
0
python,python-2.7,tensorflow
35,148,137
4
false
0
0
I suspect it's the rest of the computation that takes time, rather than accessing one element. Also, the result might require a copy from whatever memory it is stored in, so if it's on the graphics card it will need to be copied back to RAM first and then you get access to your element. If this is the case you might skip it by adding a tensorflow operation to take the first element, and only return that.
2
53
1
This question is with respect to accessing individual elements in a tensor, say [[1,2,3]]. I need to access the inner element [1,2,3] (this can be performed using .eval() or sess.run()), but it takes longer when the size of the tensor is huge. Is there any method to do the same faster? Thanks in advance.
Tensorflow python : Accessing individual elements in a tensor
0.049958
0
0
79,845
35,146,632
2016-02-02T06:35:00.000
0
0
0
0
javascript,python,sockets
35,147,809
1
false
1
0
Maybe you need to deploy the website on your own server, because listening for client connections is a server-side thing, but blogger.com hosts the server; the javascript section they provide to you is just for static pages. If blogger.com provided you an API, it would have some function like app.on("connection", function(){ /*send data to your python program*/ })
1
1
0
I have a blog on blogger.com, and they have a section where you can put html/javascript code in. I'm a total beginner to javascript/html but I'm somewhat adept at python. I want to open a listening socket on python(my computer) so that everytime a guest looks at my blog the javascript sends my python socket some data, like ip or datetime for example. I looked around on the internet, and ended up with the tornado module for my listening socket, but I have a hard time figuring out the javascript code. Basically it involves no servers.
Web Socket between Javascript and Python
0
0
1
93
35,146,733
2016-02-02T06:41:00.000
0
1
0
0
python,python-requests,urllib
35,147,116
1
true
0
0
It's most likely that DNS caching sped up the requests. DNS queries might take a lot of time in corporate networks; I don't know why, but I experience the same. The first time you sent the request with urllib2, DNS was queried, which was slow, and the result was cached. The second time you sent the request with requests, DNS did not need to be queried and was just retrieved from the cache. Clear the DNS cache and change the order, i.e. request with requests first, and see if there is any difference.
1
4
0
I use python to simply call api.github.gist. I tried urllib2 at first, which cost me about 10 seconds! requests takes less than 1 second. I am on a corporate network, using a proxy. Do these two libs have different default behavior under a proxy? I used fiddler to check the network. In both situations, the http request finished in about 40ms. So where does urllib spend the time?
Is urllib2 slower than requests in python3
1.2
0
1
241
35,150,683
2016-02-02T10:19:00.000
5
1
1
0
python,autocomplete,ide,atom-editor
41,311,935
3
false
0
0
Atom is getting various modifications. The autocomplete-python package is a handy package which helps you code faster. The way to install it has changed. In the new Atom editor go to File->Settings->Install, search for autocomplete-python and click on install. Voila, it's done; restarting Atom is not required and you will see the difference the next time you edit python code. Deb
2
10
0
I am using the Atom IDE for my python projects. There are auto-complete suggestions in some cases, but I'd like to know if it's possible to have a list of all possible functions that an imported module has. For instance, if I import urllib, when I type urllib. and press (ctrl+tab) I would like to see a list with the possible functions/methods to use. Is that possible? Thanks
python - atom IDE how to enable auto-complete code to see all functions from a module
0.321513
0
0
23,531
35,150,683
2016-02-02T10:19:00.000
14
1
1
0
python,autocomplete,ide,atom-editor
35,151,184
3
false
0
0
I found the solution to my own question. Actually I had the wrong plugin installed! So, in the IDE, go to Edit->Preferences, and in the packages section just type autocomplete-python and press the install button. After restarting Atom, it should start to work :)
2
10
0
I am using the Atom IDE for my python projects. There are auto-complete suggestions in some cases, but I'd like to know if it's possible to have a list of all possible functions that an imported module has. For instance, if I import urllib, when I type urllib. and press (ctrl+tab) I would like to see a list with the possible functions/methods to use. Is that possible? Thanks
python - atom IDE how to enable auto-complete code to see all functions from a module
1
0
0
23,531
35,152,052
2016-02-02T11:21:00.000
0
0
0
0
python-2.7,theano
35,153,245
2
false
0
0
You need to do some data formatting. The input size of a NN is constant, so if the images for your CNN have different sizes you need to resize them to your input size before feeding them in. It's like a person being too close to or far away from a painting: your field of view is constant, so in order to see everything clearly you need to adjust your distance from the image.
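A minimal sketch of that preprocessing step with Pillow and NumPy; the file name and target size are hypothetical:

    import numpy as np
    from PIL import Image

    target = (64, 64)                                  # the fixed input size of the network
    img = Image.open("sample.png").resize(target)      # resize every image to the same shape
    arr = np.asarray(img, dtype="float32") / 255.0     # scale pixel values to [0, 1]
    print(arr.shape)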
1
0
1
I built a CNN network in Theano. The input is many images, but their sizes are different. The elements of a numpy.array must have the same size. How can I make them the input? Thanks a lot.
Take input of arbitrary size in theano
0
0
0
61
35,152,616
2016-02-02T11:46:00.000
0
0
0
1
python,ios,py2exe
35,152,961
2
false
0
0
Use www.pyinstaller.org. This piece of software makes executables from python scripts for Windows, Linux and OSX.
1
3
0
I have a Python script I would like to transform into an executable for Windows and a dmg file to be run on Apple computers. For the Windows systems I have found py2exe (only valid for Windows) and for the Apple ones py2app (it can only be run on Io systems). My question is whether there is some way to create a dmg file of the Python script running a program from a Windows system (even though the program cannot be run). Is it possible? Thank you in advance!
Convert python script to dmg program from windows
0
0
0
7,803
35,153,061
2016-02-02T12:06:00.000
0
0
1
0
python,windows,pyautogui
42,882,695
2
false
0
0
This has been fixed in 0.9.34, so just try installing PyAutoGUI again.
1
0
0
I've downloaded the folder and pasted it, but when I run the test it says: No module: PIL. Also, the pip install gives the same error. When the installation process is almost finished it says: No module named PIL. Thanks!
How do I install Pyautogui?
0
0
0
3,101
35,158,809
2016-02-02T16:31:00.000
1
0
1
0
python,python-2.7,pycharm,python-2.x
35,158,959
2
true
0
0
Create a new project, then create a new .py file in your project or copy your file under the project directory. A second option would be to import an existing project by selecting the directory where you have your python file.
2
3
0
New to python and PyCharm, but trying to use it for an online course. After opening an assignment .py document (attached image), I get an error message if I open the python console: Error:Cannot start process, the working directory '\c:...\python_lab.py' is not a directory. Obviously, it is not - it is a python file, but I don't know how to address the problem. How can I assign a working directory that is functional from within PyCharm, or in general, what is the meaning of the error message?
How to assign a directory to PyCharm
1.2
0
0
882
35,158,809
2016-02-02T16:31:00.000
1
0
1
0
python,python-2.7,pycharm,python-2.x
35,158,965
2
false
0
0
Looks like your default working directory is a .tmp folder. Best way to fix this is to create a new project, just make sure it's not pointing to a .tmp directory.
2
3
0
New to python and PyCharm, but trying to use it for an online course. After opening an assignment .py document (attached image), I get an error message if I open the python console: Error:Cannot start process, the working directory '\c:...\python_lab.py' is not a directory. Obviously, it is not - it is a python file, but I don't know how to address the problem. How can I assign a working directory that is functional from within PyCharm, or in general, what is the meaning of the error message?
How to assign a directory to PyCharm
0.099668
0
0
882
35,159,748
2016-02-02T17:16:00.000
1
0
1
0
python
35,159,902
2
true
0
0
Is there a way to use the def foo(*x) notation to let python know it needs a certain range of number of arguments? Nope. Also, scipy.optimize.curve_fit ultimately gets its argument count information from f.__code__.co_argcount, not co_nlocals or n_locals (which doesn't exist).
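A quick illustration of that introspection detail:

    def f(x, a, b, c):
        return a * x ** 2 + b * x + c

    def g(*params):
        return sum(params)

    print(f.__code__.co_argcount)   # 4 -- explicitly named positional parameters
    print(g.__code__.co_argcount)   # 0 -- *params is not counted, which is why curve_fit can't use it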
1
1
1
I was working on a project where I was doing regressions and I wanted to use scipy.optimize.curve_fit, which takes a function and tries to find the right parameters for it. The odd part was that it was never given how many parameters the function took. Eventually we guessed that it used foo.__code__.co_nlocals, but in the case where we would've used it I needed 33 arguments. To the question: Is there a way to use the def foo(*x) notation to let python know it needs a certain range of number of arguments? Like a def foo(*x[:32])? I'm sure I will never use this in any real code, but it would be interesting to know.
Python set number of arguments to capture
1.2
0
0
40
35,163,501
2016-02-02T20:47:00.000
1
0
1
0
python,python-2.7
35,163,645
1
true
0
0
I don't know if there are any standard tools for doing this, but it shouldn't be too difficult to mark the sections with appropriately coded remarks and then run all your files through a script that outputs a new set of files omitting the lines between those remarks.
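A minimal sketch of such a script; the BEGIN/END marker comments and file names are hypothetical:

    # strip_internal.py -- drop everything between the (hypothetical) marker comments below
    BEGIN, END = "# BEGIN INTERNAL", "# END INTERNAL"

    def strip_file(src, dst):
        keep = True
        with open(src) as fin, open(dst, "w") as fout:
            for line in fin:
                if line.strip() == BEGIN:
                    keep = False
                elif line.strip() == END:
                    keep = True
                elif keep:
                    fout.write(line)

    strip_file("mycode.py", "mycode_release.py")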
1
3
0
I want to release a subset of my code for external use. Only certain functions or methods should be used (or even seen) by the external customer. Is there a way to do this in Python? I thought about wrapping the code I want removed in an if __debug__: and then creating a .pyc file with py_compile or compileall and then recreate source code from the new byte-code using uncompyle2. The __debug__ simply creates an if False condition which gets stripped out by the "compiler". I couldn't figure out how to use those "compiler modules" with the -O option.
How to remove code from external release
1.2
0
0
75
35,163,521
2016-02-02T20:49:00.000
0
1
0
0
android,python,wifi,sl4a
36,829,122
1
false
0
0
You mentioned two questions in your statement: 1) how to communicate via the same wifi network, 2) which one should be the server. 1) I have tried communicating between two nodes using sockets and the multiprocessing manager; they're really helpful for that kind of over-network communication. You can communicate between two nodes using a manager or a socket; sockets also help you get the IP of a node over the network, while the manager simplifies the whole process. 2) If I were you, I would choose the laptop as the server, since it would listen on a certain port, binding to it and receiving data. One reason to choose the laptop as the server is that it would be more convenient if you want to add more smartphones to collect data. I do not know sl4a well, but I did some projects communicating over the network; this is just a suggestion, I hope it is helpful and not too late for you.
1
0
0
I want to collect accelerometer data on my android phone and communicate it to my laptop over wifi. A py script collects data on the phone with python for sl4a and another py script receives data on the laptop. Both devices are on the same wifi network. The principle looks pretty straightforward, but I have no clue on how to communicate between the two devices. Who should be the server, who should be the client? I'm not looking for a way to collect accelerometer data or somebody to write my script, I just can't find info on my wifi issues on the web. Can anybody provide any help? Thanks in advance
sl4a python communicate with pc over wifi
0
0
1
346
35,163,789
2016-02-02T21:04:00.000
0
0
0
0
python,numpy,theano,tensorflow
50,763,868
4
false
0
0
tf.transpose is probably what you are looking for. It takes an arbitrary permutation.
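For the specific dimshuffle(0, 'x') pattern in the question (adding a broadcastable axis), a hedged sketch using tf.expand_dims / tf.reshape rather than tf.transpose:

    import tensorflow as tf

    x = tf.constant([1.0, 2.0, 3.0])   # shape (3,)
    y = tf.expand_dims(x, 1)           # shape (3, 1) -- the effect of dimshuffle(0, 'x')
    z = tf.reshape(x, [-1, 1])         # equivalent via reshape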
1
7
1
I have seen that transpose and reshape together can help, but I don't know how to use them. E.g. dimshuffle(0, 'x') - what is its equivalent using transpose and reshape? Or is there a better way? Thank you.
Theano Dimshuffle equivalent in Google's TensorFlow?
0
0
0
4,682
35,164,413
2016-02-02T21:48:00.000
1
0
0
0
android,python,appium,python-appium
35,277,434
1
false
1
1
Update: as it turns out, that cannot be done with the appium webdriver. For those of you who are wondering, this is the answer I received from the appium support group: This cannot be done by appium, as the underlying UIAutomator framework does not allow us to do so. In the app's native context this cannot be done. In the app's webview context this will be the same as below, because a webview is nothing but a chromeless browser session inside an app: print searchBtn.value_of_css_property("background-color"). Summary: for an element inside the NATIVE CONTEXT ==>> NO; for an element inside the WEBVIEW CONTEXT ==>> YES. Hope this helps.
1
1
0
I would like to verify the style of an element i.e. the color of the text shown in a textview. Whether it is black or blue ex. textColor or textSize. This information is not listed in the uiautomatorviewer. I can get the text using elem.get_attribute("text") as the text value is seen in the Node Detail. Is there a way to check for the style attributes?( I can do this fairly easy with straight selenium.)
Appium Android UI testing - how to verify the style attribute of an element?
0.197375
0
0
1,771
35,165,461
2016-02-02T23:01:00.000
3
0
1
0
python,oop,user-interface,tkinter
35,166,083
1
false
0
1
It is definitely possible to use multiple classes in GUI apps. For example you can have one class which defines and layouts GUI elements (like buttons, text fields, scrollbars etc.) and the second class would subclass it adding some functionality on top of it.
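A small sketch of that split - one class for the non-GUI logic, one tkinter class that calls into it; the class and method names are made up for illustration:

    import tkinter as tk

    class Logic:
        """Non-GUI code lives in its own class."""
        def greet(self, name):
            return "Hello, %s!" % name

    class App(tk.Frame):
        def __init__(self, master, logic):
            super().__init__(master)
            self.logic = logic
            self.entry = tk.Entry(self)
            self.entry.pack()
            tk.Button(self, text="Greet", command=self.on_click).pack()
            self.label = tk.Label(self)
            self.label.pack()
            self.pack()

        def on_click(self):
            # the GUI class only delegates to the logic class
            self.label["text"] = self.logic.greet(self.entry.get())

    root = tk.Tk()
    App(root, Logic())
    root.mainloop()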
1
1
0
I'm new to GUI programming and am now using tkinter for python. In the past my "non-GUI" programs always consisted of a few classes, but if I look at the examples with a GUI it appears that only one class is used. All functions are included in this one class. Is this the normal way, or is it possible to write a GUI class which "calls" functions from other classes? As I look at it now it seems the concept of object oriented programming disappears by implementing the GUI in an OOP manner
Only one Class possible for GUI programming
0.53705
0
0
110
35,166,656
2016-02-03T00:57:00.000
1
0
1
0
python,vim,pycharm,ideavim
35,195,777
1
true
0
0
Have you set "Use tab character" in "File | Settings | Editor | Code Style | Python"? Does everything work as expected with IdeaVim disabled? Please give a code example and your keystrokes where you would expect tabs, but got spaces instead.
1
1
0
I'm primarily a vim user, but at work most folks are using PyCharm, so I thought I'd at least give PyCharm a serious try. To make the transition a little less... well... repetitively typed, I installed PyCharm's IdeaVim plugin. However, our coding style -requires- hard tabs, not spaces (yes, it's the opposite of what most places do today), and I'm having a hard time getting IdeaVim to insert hard tabs instead of 4 spaces. I've tried :set noexpandtab (and a :%retab), and I've tried ^V^I, but neither seems to be working. Does anyone have IdeaVim doing hard tabs? If yes, how? Thanks!
Can PyCharm's IdeaVim plugin be persuaded to use hard tabs?
1.2
0
0
433
35,168,823
2016-02-03T04:59:00.000
1
0
1
0
python-3.x
35,169,115
2
true
0
0
If the files are both sorted, or if you can produce sorted versions of the files, then this is relatively easy. Your simplest approach (conceptually speaking) would be to take one word from file A, call it a, and then read a word from file B, calling it b. Either b is alphabetically prior to a, or it is after a, or they are the same. If they are the same, add the word to a list you're maintaining. If b is prior to a, read b from file B until b >= a. If equal, collect that word. If a < b, obviously, read a from A until a >= b, and collect if equal. Since file size is a problem, you might need to write your collected words out to a results file to avoid running out of memory. I'll let you worry about that detail. If they are not sorted and you can't sort them, then it's a harder problem. The naive approach would be to take a word from A, and then scan through B looking for that word. Since you say the files are large, this is not an attractive option. You could probably do better than this by reading in chunks from A and B and working with set intersections, but this is a little more complex. Putting it as simply as I can, I would read in a reasonably-sized chunks of file A, and convert it to a set of words, call that a1. I would then read similarly-sized chunks of B as sets b1, b2, ... bn. The union of the intersections of (a1, b1), (a1, b2), ..., (a1, bn) is the set of words appearing in a1 and B. Then repeat for chunk a2, a3, ... an. I hope this makes sense. If you haven't played with sets, it might not, but then I guess there's a cool thing for you to learn about.
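A minimal sketch of the chunked set-intersection approach for the unsorted case; the file names and chunk size are arbitrary:

    def words_from(path, chunk_size=100000):
        """Yield sets of words read from a file in fixed-size chunks."""
        with open(path) as f:
            chunk = set()
            for line in f:
                for word in line.split():
                    chunk.add(word)
                    if len(chunk) >= chunk_size:
                        yield chunk
                        chunk = set()
            if chunk:
                yield chunk

    common = set()
    for a_chunk in words_from("fileA.txt"):
        for b_chunk in words_from("fileB.txt"):   # re-reads B for every chunk of A
            common |= a_chunk & b_chunk
    print(len(common))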
2
0
0
I have a file that is all strings and I want to loop through the file and check its contents against another file. Both files are too big to place in the code, so I have to open each file with the open method, then iterate over each file word for word and compare every word against every word in the other file. Any ideas how to do this?
Python: how to open a file and loop through word for word and compare to a list
1.2
0
0
69
35,168,823
2016-02-03T04:59:00.000
0
0
1
0
python-3.x
35,211,196
2
false
0
0
I found the answer. A file object keeps a read position (a pointer) while it is being read. The problem is that with a nested loop, once the inner loop has consumed the inner file, the position does not go back to the start for the next iteration of the outer loop in Python - you have to seek back to the beginning (or reopen the file) before looping over it again.
2
0
0
I have a file that is all strings and I want to loop through the file and check its contents against another file. Both files are too big to place in the code, so I have to open each file with the open method, then iterate over each file word for word and compare every word against every word in the other file. Any ideas how to do this?
Python: how to open a file and loop through word for word and compare to a list
0
0
0
69
35,174,394
2016-02-03T10:25:00.000
2
0
0
0
python,sockets,tcp
35,206,533
2
false
0
0
shutdown is useful when you have to signal the remote client that no more data is being sent. You can specify in the shutdown() parameter which half-channel you want to close. Most commonly, you want to close the TX half-channel, by calling shutdown(1). In TCP level, it sends a FIN packet, and the remote end will receive 0 bytes if blocking on read(), but the remote end can still send data back, because the RX half-channel is still open. Some application protocols use this to signal the end of the message. Some other protocols find the EOM based on data itself. For example, in an interactive protocol (where messages are exchanged many times) there may be no opportunity, or need, to close a half-channel. In HTTP, shutdown(1) is one method that a client can use to signal that a HTTP request is complete. But the HTTP protocol itself embeds data that allows to detect where a request ends, so multiple-request HTTP connections are still possible. I don't think that calling shutdown() before close() is always necessary, unless you need to explicitly close a half-channel. If you want to cease all communication, close() does that too. Calling shutdown() and forgetting to call close() is worse because the file descriptor resources are not freed. From Wikipedia: "On SVR4 systems use of close() may discard data. The use of shutdown() or SO_LINGER may be required on these systems to guarantee delivery of all data." This means that, if you have outstanding data in the output buffer, a close() could discard this data immediately on a SVR4 system. Linux, BSD and BSD-based systems like Apple are not SVR4 and will try to send the output buffer in full after close(). I am not sure if any major commercial UNIX is still SVR4 these days. Again using HTTP as an example, an HTTP client running on SVR4 would not lose data using close() because it will keep the connection open after request to get the response. An HTTP server under SVR would have to be more careful, calling shutdown(2) before close() after sending the whole response, because the response would be partly in the output buffer.
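A minimal sketch of the shutdown-then-drain pattern described above, using a plain HTTP request as the example protocol; the host and request line are illustrative:

    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    sock.shutdown(socket.SHUT_WR)    # send FIN: "I am done sending", RX half stays open
    response = b""
    while True:
        data = sock.recv(4096)
        if not data:                 # peer closed its TX half
            break
        response += data
    sock.close()                     # finally release the file descriptor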
1
2
0
I am currently working on a server + client combo on python and I'm using TCP sockets. From networking classes I know, that TCP connection should be closed step by step, first one side sends the signal, that it wants to close the connection and waits for confirmation, then the other side does the same. After that, socket can be safely closed. I've seen in python documentation function socket.shutdown(flag), but I don't see how it could be used in this standard method, theoretical of closing TCP socket. As far as I know, it just blocks either reading, writing or both. What is the best, most correct way to close TCP socket in python? Are there standard functions for closing signals or do I need to implement them myself?
Proper way to close tcp sockets in python
0.197375
0
1
5,767
35,175,209
2016-02-03T11:01:00.000
0
0
1
1
python,packaging,software-distribution,software-packaging
35,175,640
3
false
0
0
I use virtualenv for multiple Python installations, and setuptools for packaging (via pip).
1
1
0
How can I install (on Linux) a plain Python distribution to e.g. /opt/myPythonProject/python? When I afterwards install packages (e.g. pip) all packages should go in /opt/myPythonProject. It should simply ignore system python and it's packages. My ultimate goal is to place my own code in /opt/myPythonProject/mycode, then zip op the entire project root directory, to deploy it on customer machine. Does this in general work (assuming my own arch/OS/etc. is the same). So the bigger question is: can I deliver python/packages/my own code in 1 big zip? If yes, what do I need to take into account? If not: what is the easiest solution to distribute a Python application together with the runtimes/packages and to get this deployed as application user (non root).
How to install a second/third Python on my system?
0
0
0
49
35,182,926
2016-02-03T16:48:00.000
1
0
0
0
python,django
35,493,697
1
true
1
0
I went with app_name/management/helpers.py. No issues.
1
1
0
What is the generally accepted way of isolating functionality that is shared between multiple management commands in a given app? For example, I have some payload-building code that is used across multiple management commands that access a third-party API. Is the proper location app_name/management/helpers.py, which would then be imported in a management command with from ..helpers import build_api_payload? I don't want to put it at the root of the app (we typically use app_name/helpers.py for shared functionality), since it pulls in dev dependencies that wouldn't exist in production, and is never really used outside the management commands anyway.
Extracting common functionality in Django management commands
1.2
0
0
56
35,183,538
2016-02-03T17:16:00.000
0
0
0
1
python,python-2.7
36,043,728
1
true
0
0
@Magalhaes, the auxiliary files *.sub, *.mon and *.con are input files. You have to write them; PSSE doesn't generate them. Your recording shows that you defined a bus subsystem twice, generated a *.dfx from existing auxiliary files, ran an AC contingency solution, then generated an *.acc report. So when you did this recording, you must have started with already existing auxiliary files.
1
1
1
I'm using python to interact with PSS/E (siemens software) and I'm trying to create *.acc file for pss/e, from python. I can do this easily using pss/e itself: 1 - create *.sub, *.mon, *.con files 2 - create respective *.dfx file 3 - and finally create *.acc file The idea is to perform all these 3 tasks automatically, using python. So, using the record tool from pss/e I get this code: psspy.bsys(0,0,[ 230., 230.],1,[1],0,[],0,[],0,[]) psspy.bsys(0,0,[ 230., 230.],1,[1],0,[],0,[],0,[]) psspy.dfax([1,1],r"""PATH\reports.sub""",r"""PATH\reports.mon""",r"""PATH\reports.con""",r"""PATH\reports.dfx""") psspy.accc_with_dsp_3( 0.5,[0,0,0,1,1,2,0,0,0,0,0],r"""IEEE""",r"""PATH\reports.dfx""",r"""PATH\reports.acc""","","","") psspy.accc_single_run_report_4([1,1,2,1,1,0,1,0,0,0,0,0],[0,0,0,0,6000],[ 0.5, 5.0, 100.0,0.0,0.0,0.0, 99999.],r"""PATH\reports.acc""") It happens that when I run this code on python, the *.sub, *.mon, *.con and *.dfx files are not created thus the API accc_single_run_report_4() reports an error. Can anyone tell me why these files aren't being created with this code? Thanks in advance for your time
Creating Contingency Solution Output File for PSS/E using Python 2.7
1.2
0
0
1,112
35,184,815
2016-02-03T18:27:00.000
2
0
0
0
python,numpy
35,185,050
2
false
0
0
In numpy, (10, 1) and (10,) are not the same at all: (10, 1) is a two-dimensional array with a single column, while (10,) is a one-dimensional array. If you have an array a and print out len(a.shape), you'll see the difference.
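A quick illustration, including the conversion the question asks about:

    import numpy as np

    a = np.zeros((10, 1))              # two-dimensional: ten rows, one column
    b = np.zeros((10,))                # one-dimensional

    print(a.shape, len(a.shape))       # (10, 1) 2
    print(b.shape, len(b.shape))       # (10,) 1

    flat = a.ravel()                   # (10, 1) -> (10,); np.squeeze(a) or a.reshape(10) also work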
1
4
1
I am continuously getting the error "(shapes (10, 1), (10,) mismatch)" when doing a NumPy operation and I am somewhat confused. Wouldn't (10,1) and (10,) be identical shapes? And if for whatever reason this is not valid, is there a way to convert (10,1) to (10,)? I cannot seem to find it in the NumPy documentation. Thanks
Shape Mismatch Numpy
0.197375
0
0
1,521
35,184,894
2016-02-03T18:31:00.000
1
0
0
0
python-3.x,pandas
35,184,973
1
false
0
0
You can pass nrows=number_of_rows_to_read to your read_csv function to limit the lines that are read.
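A small sketch of both options mentioned in the question - nrows for a fixed number of rows, chunksize to stop after roughly a target amount of memory; the file name and limits are arbitrary:

    import pandas as pd

    # option 1: read only the first 100000 rows
    df = pd.read_csv("big.csv", nrows=100000)

    # option 2: stream in chunks and stop after roughly 10 MB has been collected
    chunks, used = [], 0
    for chunk in pd.read_csv("big.csv", chunksize=10000):
        chunks.append(chunk)
        used += chunk.memory_usage(deep=True).sum()
        if used > 10 * 1024 ** 2:
            break
    df = pd.concat(chunks)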
1
1
1
I often work with csv files that are 100s of GB in size. Is there any way to tell read_csv to only read a fixed number of MB from a csv file? Update: It looks like chunks and chunksize can be used for this, but the documentation looks a bit slim here. What would be an example of how to do this with a real csv file? (e.g. say a 100GB file, read only rows up to approximately ~10MB)
Limiting the number of GB to read in read_csv in Pandas
0.197375
0
0
779
35,185,797
2016-02-03T19:19:00.000
3
0
1
0
python,string
35,186,242
2
true
0
0
No - actually identifiers in Python are always strings, whether you keep them in a dictionary yourself (you say you are using a "big dictionary") or the object is used programmatically, with a name hard-coded into the source code. In the latter case, Python creates the name in one of its automatically handled internal dictionaries (which can be inspected as the return value of globals() or locals()). Moreover, Python does not use "utf-8" internally, it uses "unicode" - which means it is simply text, and you should not worry about how that text is represented in actual bytes.
1
1
0
I am developing a small app for managing my favourite recipes. I have two classes - Ingredient and Recipe. A Recipe consists of Ingredients and some additional data (preparation, etc.). The reason I have an Ingredient class is that I want to save some additional info in it (proper technique, etc.). Ingredients are unique, so there can not be two with the same name. Currently I am holding all ingredients in a "big" dictionary, using the name of the ingredient as the key. This is useful, as I can ask my model if an ingredient is already registered and use it (including all its other data) for a newly created recipe. But thinking back to when I started programming (Java/C++), I always read that using strings as an identifier is bad practice. "The Magic String" was a keyword that I often read (but I think that describes another problem). I really like the string approach as it is right now. I don't have problems with encoding either, because all string generation/comparison is done within my program (Python 3 uses UTF-8 everywhere if I am not mistaken), but I am not sure if what I am doing is the right way to do it. Is using strings as an object identifier bad practice? Are there differences between different languages? Can strings prove to be a performance issue if the amount of data increases? What are the alternatives?
Is using strings as an object identifier bad practice?
1.2
0
0
132
35,187,355
2016-02-03T20:46:00.000
0
0
1
0
python
35,187,459
2
false
0
0
If you have to use a dict (and can't use an OrderedDict as @Oscar Loper is suggesting), use sorted(your_dict.keys()).
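For example (the keys here are arbitrary):

```python
data = {"b": 2, "a": 1, "c": 3}

# Deterministic iteration order regardless of the dict's internal hashing:
for key in sorted(data):       # sorted(data) is equivalent to sorted(data.keys())
    print(key, data[key])      # a 1 / b 2 / c 3
```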
1
2
0
I have a fairly complex set of data (dict of keys-to-list-of-dicts of etc.) that passes through a fairly complex set of transformation functions to arrive at a final structure. Very rarely during testing, the unit tests I have fail with an error due to one of the lists coming back with the items in an unexpected order. I believe this is due to iterating over a dict without sorting the keys, but I cannot find a case where that happens by code review, and trying to inspect the intermediate values is difficult because the failures only happen on our CI server. Is it possible to instruct python to randomize the order of all dict iteration? I'm pretty sure that doing so would make it easy to debug locally (or rule out my hunch entirely). I'm open to doing hacky things like messing with the metaclass of dict or w/e, just for local testing. I cannot easily do things like "manually wrap all dicts with this function call" since I've already tried similar with sorted and it didn't fix things.
Forcing randomized dict iteration order
0
0
0
72
35,188,282
2016-02-03T21:40:00.000
0
0
0
0
python,user-interface,wxpython
35,229,531
1
false
0
1
When you create the UI, you can keep the default config in a variable. A dictionary would probably work. Then when you create the tabs, you can pass them a dictionary. Alternatively, you could just save the defaults to a config file and then use Python to read it and load it into the UI. Python can parse csv, json, xml and whatnot right out of the box after all.
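A rough sketch of the config-file route, assuming JSON; the default keys and the file name are invented for illustration, and each tab would then receive the resulting dict (or read it from a shared module):

```python
import json

DEFAULTS = {"grid_size": 10, "player_name": "Player 1", "sound": True}

def load_config(path="config.json"):
    """Return saved settings, falling back to DEFAULTS if the file is missing or invalid."""
    try:
        with open(path) as fh:
            return dict(DEFAULTS, **json.load(fh))
    except (IOError, ValueError):
        return dict(DEFAULTS)

def save_config(cfg, path="config.json"):
    with open(path, "w") as fh:
        json.dump(cfg, fh, indent=2)
```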
1
0
0
I'm really sorry if this question sounds really simple but I couldn't figure out the solution yet... I'm using wxPython in order to create a GUI. I've used wx.Notebook and created some tabs; all the default configuration is located in the last tab. My question is: how can I get these default values from the last tab and use them? I tried "pub" (wx.lib.pubsub), but I only get these default values after an event (e.g. a button click). Also, is there any magic to get these values after user modification without a button click? Thanks all, Regards,
Default values from wxPython notebook
0
0
0
47
35,188,305
2016-02-03T21:42:00.000
2
0
0
0
python,html,flask,bokeh,flask-socketio
35,204,805
2
true
1
0
You really are asking two questions in one. Really, you have two problems here. First, you need a mechanism to periodically give the client access to updated data for your tables and charts. Second, you need the client to incorporate those updates into the page. For the first problem, you have basically two options. The most traditional one is to send Ajax requests (i.e. requests that run in the background of the page) to the server at a regular interval. The alternative is to enhance your server with WebSocket; then the client can establish a permanent connection and whenever the server has new data it can push it to the client. Which option to use largely depends on your needs. If the frequency of updates is not too high, I would probably use background HTTP requests and not worry about adding Socket.IO to the mix, which has its own challenges. On the other hand, if you need a live, constantly updating page, then maybe WebSocket is a good idea. Once the client has new data, you have to deal with the second problem. The way you deal with that is specific to the tables and charts that you are using. You basically need to write Javascript code that passes these new values that were received from the server into these components, so that the page is updated. Unfortunately there is no automatic way to cause an update. You can obviously throw the current page away and rebuild it from scratch with the new data, but that is not going to look nice, so you should probably find out what kind of Javascript APIs these components expose to receive updates. I hope this helps!
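To make the first option concrete, here is a minimal sketch of a Flask JSON endpoint the page could poll with a periodic Ajax call; fetch_current_rows is a placeholder for however the table/chart data is actually produced, and the route name is arbitrary:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_current_rows():
    # placeholder: query the database / recompute the chart data here
    return [{"t": 1, "value": 42}]

@app.route("/api/latest")
def latest():
    return jsonify(rows=fetch_current_rows())

# The page would call this URL every few seconds (e.g. with setInterval in
# Javascript) and feed the JSON into the table and Bokeh plot update code.
```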
1
0
0
I have developed a Python web application using the Flask microframework. I have some interactive plots generated by Bokeh and some HTML5 tables. My question is how I can update my table and graph data on the fly. Should I use the threading class, set a timer, and then re-run my code every couple of seconds and feed updated data entries to the table and graphs? I also investigated Flask-SocketIO, but all I found is for sending and receiving messages. Is there a way to use Flask-SocketIO for this purpose? I also worked a little bit with Bokeh server - should I go that direction? Does it mean I need to run two servers: my Flask web server and the Bokeh server? I am new to this kind of work. I appreciate it if you can explain in detail what I need to do.
Streaming live data in HTML5 graphs and tables
1.2
0
0
1,747
35,188,674
2016-02-03T22:04:00.000
0
0
1
0
python,python-2.7,docstring
35,188,722
2
false
0
0
The __init__ method should return None. If you try to return anything else, Python will raise an error when the object is instantiated. Note: If you don't explicitly tell Python what a function should return, it returns None. Because of this, I have never needed to use the return statement in any of my __init__ methods.
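So a docstring along these lines is enough; the return annotation can simply be omitted (or written as None, shown here), since __init__ always returns None:

```python
class Foo(object):
    def __init__(self, bar):
        """Set up the instance.

        :param bar: whatever Foo needs to hold on to
        :rtype: None
        """
        self.bar = bar
```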
2
2
0
I've recently started using PyCharm and it supports type hinting for Python 2.x using docstrings, which I'd like to start using. What should be the :return: value for the __init__ method of a class Foo? I can't find an answer to whether it should be Foo, None, nothing, or whether I should remove the attribute that PyCharm is creating in the docstring template for me.
Return type for __init__ in docstrings
0
0
0
527
35,188,674
2016-02-03T22:04:00.000
1
0
1
0
python,python-2.7,docstring
37,261,460
2
true
0
0
(Credit for this answer should go to @ppperry.) Because all __init__ methods return None, no docstring about the return type is required.
2
2
0
I've recently started using PyCharm and it supports type hinting for Python 2.x using docstrings, which I'd like to start using. What should be the :return: value for the __init__ method of a class Foo? I can't find an answer to whether it should be Foo, None, nothing, or whether I should remove the attribute that PyCharm is creating in the docstring template for me.
Return type for __init__ in docstrings
1.2
0
0
527
35,195,348
2016-02-04T07:51:00.000
0
0
1
0
python,multithreading
35,222,210
2
false
0
0
Thanks for the response. After some thoughts, I have decided to use the approach of many queues and a router-thread (hub-and-spoke). Every 'normal' thread has its private queue to the router, enabling separate send and receive queues or 'channels'. The router's queue is shared by all threads (as a property) and used by 'normal' threads as a send-only-channel, ie they only post items to this queue, and only the router listens to it, ie pulls items. Additionally, each 'normal' thread uses its own queue as a 'receive-only-channel' on which it listens and which is shared only with the router. Threads register themselves with the router on the router queue/channel, the router maintains a list of registered threads including their queues, so it can send an item to a specific thread after its registration. This means that peer to peer communication is not possible, all communication is sent via the router. There are several reasons I did it this way: 1. There is no logic in the thread for checking if an item is addressed to 'me', making the code simpler and no constant pulling, checking and re-putting of items on one shared queue. Threads only listen on their queue, when a message arrives the thread can be sure that the message is addressed to it, including the router itself. 2. The router can act as a message bus, do vocabulary translation and has the possibility to address messages to external programs or hosts. 3. Threads don't need to know anything about other threads capabilities, ie they just speak the language of the router. In a peer-to-peer world, all peers must be able to understand each other, and since my threads are of many different classes, I would have to teach each class all other classes' vocabulary. Hope this helps someone some day when faced with a similar challenge.
1
3
0
I have read lots about python threading and the various means to 'talk' across thread boundaries. My case seems a little different, so I would like to get advice on the best option: Instead of having many identical worker threads waiting for items in a shared queue, I have a handful of mostly autonomous, non-daemonic threads with unique identifiers going about their business. These threads do not block and normally do not care about each other. They sleep most of the time and wake up periodically. Occasionally, based on certain conditions, one thread needs to 'tell' another thread to do something specific - an action - meaningful to the receiving thread. There are many different combinations of actions and recipients, so using Events for every combination seems unwieldy. The queue object seems to be the recommended way to achieve this. However, if I have a shared queue and post an item on the queue having just one recipient thread, then every other thread needs to monitor the queue, pull every item, check if it is addressed to it, and put it back in the queue if it was addressed to another thread. That seems like a lot of getting and putting items from the queue for nothing. Alternatively, I could employ a 'router' thread: one shared-by-all queue plus one queue for every 'normal' thread, shared with the router thread. Normal threads only ever put items in the shared queue, the router pulls every item, inspects it and puts it on the addressee's queue. Still, a lot of putting and getting items from queues.... Are there any other ways to achieve what I need to do? It seems a pub-sub class is the right approach, but there is no such thread-safe module in standard Python, at least to my knowledge. Many thanks for your suggestions.
Recommended way to send messages between threads in python?
0
0
1
1,923
35,196,150
2016-02-04T08:39:00.000
0
0
0
0
python,gspread
36,905,525
1
false
1
0
I don't believe there is a way to change those settings. However, you can use Python's datetime module to convert the time to the time zone you want.
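A rough sketch of the conversion, assuming the timestamps come back from gspread as strings; the cell format and the offset between the sheet's time zone and yours are assumptions to adjust (pytz or zoneinfo would handle DST correctly if needed):

```python
from datetime import datetime, timedelta

OFFSET = timedelta(hours=-3)                      # assumed difference: sheet -> local

raw = "2/4/2016 8:39:00"                          # hypothetical cell value from gspread
ts = datetime.strptime(raw, "%m/%d/%Y %H:%M:%S")  # parse the sheet's timestamp format
print(ts + OFFSET)                                # timestamp shifted to local time
```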
1
0
0
Is there any way to change Spreadsheet Settings from the API? Is there any other way I can do this from Python? I'm using gspread to pull the results of a Google Form into Python. I want to change the time zone of the results to fit my needs, since my local time and the form's time don't match. Thank you in advance
Change Spreadsheet Settings from gspread
0
0
0
110
35,202,184
2016-02-04T13:23:00.000
1
1
0
1
python,linux,python-3.x
35,202,372
5
false
0
0
You can use runit, supervisor, monit, or systemd (I think). Do not hack this with a script.
3
0
0
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron. Is there any way to check whether or not it's running?
How to check whether or not a python script is up?
0.039979
0
0
270
35,202,184
2016-02-04T13:23:00.000
1
1
0
1
python,linux,python-3.x
35,202,314
5
false
0
0
Create a script (say check_process.sh) which will: find the process id for your python script by using the ps command and save it in a variable, say pid; create an infinite loop and, inside it, search for your process. If it is found, sleep for 30 or 60 seconds and check again. If the pid is not found, exit the loop and send mail to your mail id saying that the process is not running. Now call check_process.sh with nohup so it will run in the background continuously. I implemented it a while back and remember it worked fine.
3
0
0
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron. Is there any way to check whether or not it's running?
How to check whether or not a python script is up?
0.039979
0
0
270
35,202,184
2016-02-04T13:23:00.000
1
1
0
1
python,linux,python-3.x
35,202,268
5
false
0
0
Try this and enter your script name. ps aux | grep SCRIPT_NAME
3
0
0
I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron. Is there any way to check whether or not it's running?
How to check whether or not a python script is up?
0.039979
0
0
270
35,203,097
2016-02-04T14:03:00.000
0
0
1
0
python,ipython
35,214,417
1
false
0
0
Just an idea: IPython also uses .inputrc, but things in that config file take precedence, and sometimes Ctrl+L is in there by default.
1
1
0
I updated my iPython to 4.0.3, and when I did I had a few issues. At first, it seemed I had to reinstall pyreadline to make all the syntax appear as normal again. Now, my biggest problem is that Ctrl + L no longer works to clear my screen. Typing clear does or clear-screen. I have edited the config file and uncommented c.InteractiveShell.readline_parse_and_bind which has '"\\C-l": clear-screen' but all this does is insert a new line into my terminal. It's not a huge problem, but it is an annoyance. I have tried a few combinations of different C-l commands in my config, but so far nothing has worked. Any ideas?
iPython Ctrl + L no longer working
0
0
0
112
35,203,141
2016-02-04T14:05:00.000
0
1
0
0
python,exit,raspberry-pi2
61,990,275
2
false
0
0
I had a similar problem programming a simple GPIO app on the Pi. I was using the GPIOZero library, and as their code examples suggest, I was waiting for button pushes using signal.pause(). This would cause the behavior you describe - even sys.exit() would not exit! The solution was, when it was time for the code to finish, to do this: # Send a SIGUSR1 signal; this will cause signal.pause() to finish. os.kill(os.getpid(), signal.SIGUSR1) You don't even have to define a signal handler if you don't mind the system printing out "User defined signal 1" on the console. HTH
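A compact sketch of the pattern (the handler is optional, as noted; SIGUSR1 is POSIX-only, which is fine on the Pi):

```python
import os
import signal

def _finish(signum, frame):
    print("shutting down")        # optional: do cleanup here

signal.signal(signal.SIGUSR1, _finish)

# ... in the GPIO callback, when the program should end:
os.kill(os.getpid(), signal.SIGUSR1)
# signal.pause() in the main thread returns once the signal is delivered,
# so the script falls off the end and exits normally.
```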
1
3
0
My program in Python runs on a Raspberry Pi and instantiates several objects (GPIO inputs and outputs, HTTP server, WebSocket, I2C interface, etc., each with a thread). When exiting my program, I try to release all the resources and delete all the instances. For the network objects, I close listening sockets and so on. I finish with a sys.exit() call, but the program does not exit and does not return on its own to the Linux console (I need to press Ctrl+Z). Are there some objects that are not released? How can I find out, and how can I force the exit? Best regards.
How to exit python program on raspberry
0
0
0
9,479
35,204,352
2016-02-04T14:58:00.000
1
0
0
0
python,postgresql,ubuntu,docker
35,205,633
1
true
0
0
Sorry guys, I think I found the problem. I'm using plpython3 in my stored procedure, but installed my custom module using Python 2. I just did sudo python3 setup.py install and now it's working on the native Ubuntu. I'll now try modifying my docker image and see if it works there too. Thanks
1
2
0
I created my own python module and packaged it with distutils. Now I installed it on a new system (python setup.py install) and I'm trying to call it from a plpython3u function, but I get an error saying the module does not exist. It was working on a previous Ubuntu installation, and I'm not sure what I did wrong when setting up my new system. I'm trying this on an Ubuntu 15.10 PC with PostgreSQL 9.5, everything freshly installed. I'm also trying this setup in a docker image built with the same components (Ubuntu 15.10 and PG 9.5). I get the same error in both setups. Could you please hint me about why this is failing? I wrote down my installation instructions for both systems (native and docker), so I can provide them if that helps. Thanks
Can't import own python module in Postgresql plpython function
1.2
1
0
884
35,213,592
2016-02-04T23:19:00.000
5
0
0
0
python,numpy,inner-product
35,213,773
4
false
0
0
I don't know if the performance is any good, but (a**2).sum() calculates the right value and has the non-repeated argument you want. You can replace a with some complicated expression without binding it to a variable, just remember to use parentheses as necessary, since ** binds more tightly than most other operators: ((a-b)**2).sum()
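For example (values chosen only to make the arithmetic obvious):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 1.0])

print(np.inner(a, a))               # 25.0
print((a ** 2).sum())               # 25.0 -- same value, `a` written once
print(((a - b) ** 2).sum())         # 13.0 -- works directly on expressions
print(np.linalg.norm(a - b) ** 2)   # ~13.0 -- norm then square (minor round-off)
```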
1
7
1
I have vector a. I want to calculate np.inner(a, a) But I wonder whether there is prettier way to calc it. [The disadvantage of this way, that if I want to calculate it for a-b or a bit more complex expression, I have to do that with one more line. c = a - b and np.inner(c, c) instead of somewhat(a - b)]
NumPy calculate square of norm 2 of vector
0.244919
0
0
38,727
35,216,112
2016-02-05T04:06:00.000
1
0
1
0
python,pip
35,254,517
1
true
1
0
This was a bug in pip (https://github.com/pypa/pip/pull/3258); it's now fixed. I wonder why people downvoted the question...
1
1
0
I installed one of the forks I made using -e git://github.com/pcompassion/django.js.git@bd0f7b56d8ab2ae77795797fd10812d0b76883dc#egg=django.js-fork then I created a requirements.pip using pip freeze > requirements.pip. It shows django.js==0.8.2.dev0, which is not usable in production. Why is this happening and how can I prevent it?
pip freeze > requirements.pip loses some info on github installed packages
1.2
0
0
48
35,217,322
2016-02-05T06:02:00.000
2
0
1
0
python,python-2.7
35,217,378
1
true
0
0
Python 2 is notorious for mangling unicode characters. Consider switching to Python 3 which handles all of this natively. It appears to me that given dict = {'japanese': u'japanese\xa0(r\u014dmaji)'} the characters appear the way you presented them when printed straight away (print dict), but work better if you do print dict['japanese'] or first iterate over keys and then print. Clearly, the u'xxx' format is how unicode strings are represented internally by Python. They are then converted to human-readable form when printed in isolation, but not when they exist as part of a bigger structure.
1
0
0
I am working on Python2.7 and grabbing Japanese/Chinese characters from page. It prints fine on Console but when I am storing in a list and dict it does not and print(records) displays as: u'portuguese': u'sirena abisgundecheck translation', u'japanese\xa0(r\u014dmaji)': u'm\u0101meiru - abisugunde', u'chinese': u'\u6c34\u7cbe\u9cde-\u6df1\u6e0a\u6208\u8feacheck translation',...
Python: Can't print/store Int'l characters properly
1.2
0
0
39
35,226,451
2016-02-05T14:22:00.000
0
1
0
0
python,python-2.7,rpc,pyro
35,517,121
1
false
0
0
I'd let the client report back to the server as soon as it no longer needs the proxy. I.e. don't overcomplicate your server with dependencies/knowledge about the clients.
1
0
0
Situation: A Pyro4 server gives a Pyro4 client a Pyro4 proxy. I want to detect whether the client is still indeed using this proxy, so that the server can give the proxy to other clients. My idea at the moment is to have the server periodically ping the client. To do this, the client itself needs to host a Pyro daemon and give the server a Pyro4 proxy so that the server can use this proxy to ping clients. Is there a cleaner way to do this?
How to check if Pyro4 client is still alive
0
0
1
225
35,229,136
2016-02-05T16:37:00.000
1
0
0
0
python,matplotlib,fft,spectrogram
35,238,504
1
true
0
0
The redundancy is because you input a strictly real signal to your FFT, thus the DFT result is complex-conjugate (Hermitian) symmetric. This redundancy is due to the fact that all the imaginary components of strictly real input are zero. But the output of this DFT can include non-zero imaginary components to indicate phase. Thus, this DFT result has to be conjugate symmetric so that all the imaginary components in the result will cancel out between the two DFT result halves (same magnitudes, but opposite phases), indicating strictly real input. Also, the lower 257 bins of the basis transform will have 512 degrees of (scalar) freedom, just like the input. However, a spectrogram throws away all phase information, so it can only display 257 unique values (magnitude-only). If you input a complex (quadrature, for instance) signal to a DFT, then there would likely not be Hermitian redundancy, and you would have 1024 degrees of freedom from a 512-length DFT. If you want an image height of 512 (given real input), try an FFT size of 1024.
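The same symmetry is easy to check with NumPy, which is why rfft returns exactly N/2 + 1 = 257 unique bins for a 512-sample real signal:

```python
import numpy as np

N = 512
x = np.random.randn(N)                        # strictly real signal

full = np.fft.fft(x)                          # 512 bins, conjugate-symmetric
half = np.fft.rfft(x)                         # N//2 + 1 = 257 unique bins

print(len(full), len(half))                   # 512 257
print(np.allclose(full[:N // 2 + 1], half))   # True
```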
1
1
1
I'm currently computing the spectrogram with the matplotlib. I specify NFFT=512 but the resulting image has a height of 257. I then tried to just do a STFT (short time fourier transform) which gives me 512 dimensional vectors (as expected). If I plot the result of the STFT I can see that half of the 512 values are just mirrored so really I only get 257 values (like the matplotlib). Can somebody explain to me why that is the case? I always thought of the FT as a basis transform, why would it introduce this redundancy? Thank you.
Matplotlib spectrogram versus STFT
1.2
0
0
1,806
35,229,352
2016-02-05T16:48:00.000
0
0
1
0
python,python-3.x,tkinter,twisted
49,701,788
2
false
0
1
I currently have twisted 17.9.0 and python 3.6. In reference to the answer above, tksupport for python 3 is now available with twisted, so no need to create your own tksupport module.
1
1
0
I am currently working on a Battleship game project (for learning purposes) that uses tkinter for the UI and, because I want this program to be able to run on two computers for multiplayer, twisted for data transfer. This is my first time using twisted; however, I have used tkinter many times. I know both twisted and tkinter run in loops, so it is normally not possible to have these running in the same thread. So I found out there are two ways to get around this: tksupport and running twisted's reactor in a separate thread. However, I tried to import tksupport from twisted.internet but it said that it didn't exist. I checked my twisted folder in my site-packages to be sure and it is indeed not there, but even the twisted docs claim it is. I'm assuming that this is because I am running Python 3.5 and tksupport hasn't been ported over yet, but if this is not the case, please let me know. Also, as for the solution with threading, I discovered there's some controversy over putting twisted's reactor in its own thread. Is it OK to put the reactor in its own thread, and, if so, what precautions should I take? Thanks.
Threading with Twisted with Tkinter
0
0
0
698
35,231,936
2016-02-05T19:25:00.000
2
0
1
0
python,bigdata
35,231,967
1
true
0
0
Hi: 3.9594086612e+65 elements is more than all the computer memory in the world. (And I did not even multiply by 10!)
1
1
0
I need to create and search a big list, for example a list of 3.9594086612e+65 arrays of 10x10 (or a bigger list of bigger arrays). I need to create a list of combinations. Afterwards I need to filter out some combinations according to some rule. This process should be repeated till only one combination is left. If I try to create this list it crashes because of the memory after a few minutes. I can imagine that the solution should be to store the data in a different way than a list in memory. What is a possible, correct and easy way? An SQL database? A NoSQL database? Or just multiple text files opened and closed one after another? I need to run through this list multiple times.
Python - big list issue
1.2
0
0
111
35,233,419
2016-02-05T21:01:00.000
0
0
1
1
python,shell,python-idle
35,233,580
1
false
0
0
When you double click on a .py file, it will run it using the Python Interpreter. You can right-click on the file instead and choose to open it in IDLE.
1
0
0
I am using the Python 3.4.2 IDLE on Windows. When I open the IDLE shell and then open a .py file, it works, but when I try to open the .py file by double clicking, it just doesn't open or do anything. It looks as if nothing has happened. I would like to open a .py file and then just press F5 to see what is going on, rather than individually opening each file (I am still a beginner with Python; I know I can use PyCharm, but at this point just that would be good enough).
Python shell is open but not .py file?
0
0
0
88
35,235,049
2016-02-05T23:05:00.000
0
0
1
1
python,multithreading,gdb,pexpect
35,244,385
2
false
0
0
I found a library called python-ptrace; it seems to work for now (with a few problems I never faced while using gdb).
2
0
0
I'm trying to develop a program that uses gdb for its basic debugging purposes. It executes gdb from the command line, attaches to the target process and gives some specific commands, then reads the standard output. Everything seemed good on paper at first, so I started out with Python and pexpect. But recently, while thinking about future implementations, I've encountered a problem. Since I can only execute one command at a time from the command line (there can be only one gdb instance per process), the threads that request data constantly to refresh some UI element will lead to chaos eventually. Think about it: 1) GDB stops the program to execute commands 2) blocks the other threads while executing the code 3) GDB continues the program after execution finishes 4) one of the waiting threads will try to use GDB immediately 5) go to 1 and repeat. The process we'll work on will freeze every 0.5 sec, which would be unbearable. So, the thing I want to achieve is multi-threading while executing the commands. How can I do it? I thought about using gdb libraries, but since I use Python and those are written in C, it left a question mark in my head about compatibility.
Is there a better way to control gdb other than using command-line tools/libraries such as pexpect in python?
0
0
0
137
35,235,049
2016-02-05T23:05:00.000
1
0
1
1
python,multithreading,gdb,pexpect
35,245,056
2
true
0
0
There are two main ways to script gdb. One way is to use the gdb MI ("Machine Interface") protocol. This is a specialized input and output mode that gdb has that is intended for programmatic use. It has some warts but is "usable enough" - it is what most of the gdb GUIs use. The other way to do this is to write Python scripts that run inside gdb, using gdb's Python API. This approach is often simpler to program, but on the downside the Python API is missing some useful pieces, so sometimes this can't be done, depending on exactly what you're trying to accomplish.
2
0
0
I'm trying to develop a program that uses gdb for its basic debugging purposes. It executes gdb from the command line, attaches to the target process and gives some specific commands, then reads the standard output. Everything seemed good on paper at first, so I started out with Python and pexpect. But recently, while thinking about future implementations, I've encountered a problem. Since I can only execute one command at a time from the command line (there can be only one gdb instance per process), the threads that request data constantly to refresh some UI element will lead to chaos eventually. Think about it: 1) GDB stops the program to execute commands 2) blocks the other threads while executing the code 3) GDB continues the program after execution finishes 4) one of the waiting threads will try to use GDB immediately 5) go to 1 and repeat. The process we'll work on will freeze every 0.5 sec, which would be unbearable. So, the thing I want to achieve is multi-threading while executing the commands. How can I do it? I thought about using gdb libraries, but since I use Python and those are written in C, it left a question mark in my head about compatibility.
Is there a better way to control gdb other than using command-line tools/libraries such as pexpect in python?
1.2
0
0
137
35,236,851
2016-02-06T03:00:00.000
0
0
0
0
python-2.7,tensorflow
35,237,068
1
false
0
0
@mkarlovitz Looks like /Library/Python/2.7/site-packages/ is not in the list of paths Python is looking in. To see what paths Python uses to find packages, do the below (you can use the command line for this): 1. import sys 2. sys.path (this shows the list of paths). If /Library/Python/2.7/site-packages/ is not in the above list, add it as follows in the Python file/script you are executing: 1. import sys 2. sys.path.append('/Library/Python/2.7/site-packages/')
1
1
1
I've installed tensor flow on Mac OS X. Successfully ran simple command line test. Now trying the first tutorial. Fail on the first python line: [python prompt:] import tensorflow.examples.tutorials.mnist.input_data Traceback (most recent call last): File "", line 1, in ImportError: No module named examples.tutorials.mnist.input_data But the file seems to be there: new-host-4:~ karlovitz$ ls /Library/Python/2.7/site-packages/tensorflow/examples/tutorials/mnist/ BUILD fully_connected_feed.py mnist.py mnist_with_summaries.py init.py input_data.py mnist_softmax.py
tensorflow no module named example.tutorials.mnist.input_data
0
0
0
5,529
35,237,044
2016-02-06T03:34:00.000
9
0
0
0
python,random-forest,xgboost,kaggle
35,248,119
1
false
0
0
Extra-trees (ET), a.k.a. extremely randomized trees, is quite similar to random forest (RF). Both are bagging methods aggregating fully grown decision trees. RF will only try to split on e.g. a third of the features, but evaluates every possible break point within those features and picks the best. ET, however, only evaluates a few random break points and picks the best of these. ET can bootstrap samples for each tree or use all samples; RF must use bootstrapping to work well. xgboost is an implementation of gradient boosting and works with decision trees, typically smaller trees. Each tree is trained to correct the residuals of the previously trained trees. Gradient boosting can be more difficult to train, but can achieve a lower model bias than RF. For noisy data, bagging is likely to be most promising; for low-noise, complex data structures, boosting is likely to be most promising.
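A minimal scikit-learn comparison of the two bagging variants on synthetic data (a recent scikit-learn is assumed; if xgboost is installed, xgboost.XGBClassifier could be dropped into the same loop):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = [
    ("RandomForest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("ExtraTrees", ExtraTreesClassifier(n_estimators=100, random_state=0)),
]
for name, model in models:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```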
1
8
1
I am new to all these methods and am trying to get a simple answer to that or perhaps if someone could direct me to a high level explanation somewhere on the web. My googling only returned kaggle sample codes. Are the extratree and randomforrest essentially the same? And xgboost uses boosting when it chooses the features for any particular tree i.e. sampling the features. But then how do the other two algorithms select the features? Thanks!
What is the difference between xgboost, extratreeclassifier, and randomforrestclasiffier?
1
0
0
2,360
35,237,874
2016-02-06T05:53:00.000
0
0
0
0
python-2.7,pandas
35,237,949
3
false
0
0
If the operations are done in the pydata stack (numpy/pandas), you're limited to fixed-precision numbers, up to 64 bits. Arbitrary-precision numbers stored as strings, perhaps?
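A small illustration of the 64-bit limit and of an object-dtype workaround, using the number from the question:

```python
import pandas as pd

print(1456 ** 15)                        # exact: plain Python ints are arbitrary precision

s = pd.Series([1456], dtype="int64")
print(s ** 15)                           # silently overflows 64-bit integers (can go negative)

# Work element-wise with Python ints; the result is stored with object dtype:
print(s.apply(lambda v: int(v) ** 15))   # exact again
```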
2
0
1
I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas data frame and then try to take the 15th power, I get a negative number. Example: 1456 ** 15 = 280169351358921184433812095498240410552501272576L, however when a similar operation is performed in pandas I get negative values. Is there a limit on the size of number which pandas can hold, and how can we change this limit?
Python Pandas largest number
0
0
0
112
35,237,874
2016-02-06T05:53:00.000
0
0
0
0
python-2.7,pandas
35,237,988
3
false
0
0
I was able to overcome this by changing the data type from int to float, as doing this gives the answer 290 ** 15 = 8.629189e+36, which is good enough for my exercise.
2
0
1
I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas data frame and then try to take the 15th power, I get a negative number. Example: 1456 ** 15 = 280169351358921184433812095498240410552501272576L, however when a similar operation is performed in pandas I get negative values. Is there a limit on the size of number which pandas can hold, and how can we change this limit?
Python Pandas largest number
0
0
0
112
35,240,337
2016-02-06T11:14:00.000
1
1
1
0
python,twitter,config
35,240,378
2
false
0
0
You can do from main_script import variable if the variables are not encapsulated into functions.
1
0
0
I'm writing an automated internet speed testing program and I've set up a secondary script, config.py, to make it simpler for the user to edit the configuration. The program can send tweets when the internet speed results fall below a certain point and I want to give users the ability to edit the tweet. However, the user will likely want to include the results in the tweet, which are defined in the script within which config.py is called. How can I use the variables from the main script in config.py? Edit: I should've mentioned the variables in the main script are also inside functions.
Using variables defined in main python script in imported script
0.099668
0
0
51
35,242,589
2016-02-06T15:15:00.000
2
0
0
0
python,django,django-models,django-forms
35,242,633
1
true
1
0
models.py is just a convention. You are not required to put your models in any specific module, you could put everything in one file if you wanted to. If your contact form doesn't store anything in your database, you don't need any models either. You could do everything with just a form, then email the information entered elsewhere, or write it to disk by other means. Even if you did want to put the information into a database, you could still do that without creating a model. However, creating a model just makes this task far easier and convenient, because Django can then generate a form from that, do validation, provide helpful feedback to your users when they make a mistake, handle transactions, etc.
1
2
0
I'm creating a contact form in django. I've read some tuts and some of them use models.py and some of them skip the models part. What is the role of models.py in creating a contact form?
Do I need to do something in models.py if I'm creating a contact form?
1.2
0
0
109
35,243,795
2016-02-06T17:03:00.000
0
0
0
0
python,networkx,graph-theory
45,540,101
3
false
0
0
This answer is taken from a Google Groups thread on the issue (in the context of using R) and helps clarify the maths behind the answer above: Freeman's approach measures "the average difference in centrality between the most central actor and all others". This 'centralization' is exactly captured in the formula sum(max(x)-x)/(length(x)-1), where x refers to any centrality measure! That is, if you want to calculate the degree centralization of a network, x simply has to capture the vector of all degree values in the network. To compare various centralization measures, it is best to use standardized centrality measures, i.e. the centrality values should always be smaller than 1 (best position in any possible network) and greater than 0 (worst position)... if you do so, the centralization will also be in the range [0,1]. For degree, e.g., the 'best position' is to have an edge to all other nodes (i.e. incident edges = number of nodes minus 1) and the 'worst position' is to have no incident edge at all.
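A sketch of that formula applied to in- and out-degree centrality with networkx (the toy graph is arbitrary, and the exact normalisation convention varies between definitions of centralization):

```python
import networkx as nx

def centralization(values):
    values = list(values)
    c_max = max(values)
    # average gap between the most central node and every other node
    return sum(c_max - c for c in values) / (len(values) - 1)

G = nx.DiGraph([(1, 2), (1, 3), (3, 2), (4, 1)])
print(centralization(nx.in_degree_centrality(G).values()))
print(centralization(nx.out_degree_centrality(G).values()))
```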
1
4
1
I have a graph and want to calculate its indegree and outdegree centralization. I tried to do this by using python networkx, but there I can only find a method to calculate indegree and outdegree centrality for each node. Is there a way to calculate in- and outdegree centralization of a graph in networkx?
calculate indegree centralization of graph with python networkx
0
0
1
2,646
35,246,386
2016-02-06T20:58:00.000
0
0
1
1
python,zsh,anaconda,miniconda
70,835,470
21
false
0
0
I encountered this problem lately and found a solution that worked for me. It is possible that your current user does not have permissions on the anaconda directory, so check if you can read/write there, and if not, change the owner of the files using chown.
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
0
0
0
435,658
35,246,386
2016-02-06T20:58:00.000
1
0
1
1
python,zsh,anaconda,miniconda
66,254,477
21
false
0
0
export PATH="$HOME/anaconda3/bin:$PATH"
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
0.009524
0
0
435,658
35,246,386
2016-02-06T20:58:00.000
5
0
1
1
python,zsh,anaconda,miniconda
51,863,203
21
false
0
0
I had the same issue. I just closed and reopened the terminal, and it worked. That was because I installed anaconda with the terminal open.
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
0.047583
0
0
435,658
35,246,386
2016-02-06T20:58:00.000
1
0
1
1
python,zsh,anaconda,miniconda
67,916,328
21
false
0
0
It can be a silly mistake: make sure that you use anaconda3 instead of anaconda in the export path, if that is what you installed.
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
0.009524
0
0
435,658
35,246,386
2016-02-06T20:58:00.000
0
0
1
1
python,zsh,anaconda,miniconda
70,267,089
21
false
0
0
This worked for me on CentOS and miniconda3. Find out which shell you are using: echo $0. Run conda init bash (could be conda init zsh if you are using zsh, etc.) - this adds a path to ~/.bashrc. Reload the command line: source ~/.bashrc OR . ~/.bashrc
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
0
0
0
435,658
35,246,386
2016-02-06T20:58:00.000
-1
0
1
1
python,zsh,anaconda,miniconda
65,051,111
21
false
0
0
MacOSX: cd /Users/USER_NAME/anaconda3/bin && ./activate
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
-0.009524
0
0
435,658
35,246,386
2016-02-06T20:58:00.000
28
0
1
1
python,zsh,anaconda,miniconda
44,342,045
21
false
0
0
Maybe you need to execute "source ~/.bashrc"
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
1
0
0
435,658
35,246,386
2016-02-06T20:58:00.000
23
0
1
1
python,zsh,anaconda,miniconda
46,866,740
21
false
0
0
Sometimes, if you don't restart your terminal after you have installed anaconda, it gives this error. Close your terminal window and reopen it. It worked for me!
8
135
0
I've installed Miniconda and have added the environment variable export PATH="/home/username/miniconda3/bin:$PATH" to my .bashrc and .bash_profile but still can't run any conda commands in my terminal. Am I missing another step in my setup? I'm using zsh by the way.
Conda command not found
1
0
0
435,658
35,248,476
2016-02-07T00:54:00.000
3
0
0
0
python,tensorflow
35,256,493
4
false
0
0
You're most likely using an older version of TensorFlow. I just noticed that some of our install docs still link to 0.5 -- try upgrading to 0.6 or to head. I'll fix the docs soon, but in the meantime, if you installed via pip, you can just change the 0.5 to 0.6 in the path. If you're building from source, just check out the appropriate release tag (or head).
1
3
1
Getting the following error when working through the IPython notebooks of Google's TensorFlow Udacity course: AttributeError: 'module' object has no attribute 'compat' Trying to call: tf.compat.as_str(f.read(name)).split() Running on Ubuntu 14.04 and wondering if this is an early TensorFlow bug or just me being stupid. :P
Tensorflow compat modules issues?
0.148885
0
0
4,727
35,249,741
2016-02-07T04:29:00.000
2
0
0
1
python,django,chat,tornado
35,250,150
3
false
1
0
You certainly can develop a synchronous chat app; you don't necessarily need to use an asynchronous framework. But it all comes down to what you want your app to do: how many people will use the app? Will there be multiple users and multiple chats going on at the same time?
1
3
0
I need to implement a chat application for my web service (that is written in Django + Rest api framework). After doing some google search, I found that Django chat applications that are available are all deprecated and not supported anymore. And all the DIY (do it yourself) solutions I found are using Tornado or Twisted framework. So, My question is: is it OK to make a Django-only based synchronous chat application? And do I need to use any asynchronous framework? I have very little experience in backend programming, so I want to keep everything as simple as possible.
Why do chat applications have to be asynchronous?
0.132549
0
0
1,314
35,250,175
2016-02-07T05:43:00.000
12
0
0
1
c,python-3.x
36,926,551
1
false
0
1
Maybe a little too late, but I found a workaround for the missing 'python3x_d.lib': when installing Python with the installer executable, choose the advanced setup options in the first window of the installation wizard, select the option "download debug binaries", and the file python3x_d.lib is installed automatically. I faced this error when trying to build OpenCV with Python bindings.
1
4
0
I have downloaded the 3.5 version of Python on my Windows 7 Home Premium computer with version 6.1 software. I wish to use a C main program with Python library extensions. I have added the path to the include folder and the library folder to the Dev Studio C compiler. I am testing with the supplied test program that prints out the time, but I get a compile error. While it can find Python.h, it can't find python35_d.lib. I can't either. Is it missing from the download, or is this another name for one of the libraries in the download? Thanks
I cannot find python35_d.lib
1
0
0
4,011
35,250,611
2016-02-07T06:47:00.000
0
0
0
0
python,machine-learning,neural-network,large-data
35,255,569
1
true
0
0
What you are probably looking for is minibatching. In general, many methods of training neural nets are gradient based, and as your loss function is a function of the training set, so is the gradient. As you said, it may exceed your memory. Luckily, for additive loss functions (and most you will ever use are additive) one can prove that you can substitute full gradient descent with stochastic (or minibatch) gradient descent and still converge to a local minimum. Nowadays it is very common practice to use batches of 32, 64 or 128 rows, which are rather easy to fit in your memory. Such networks can actually converge to a solution faster than the ones trained with the full gradient, as you make N / 128 moves per dataset pass instead of just one. Even if each of them is rather rough, as a combination they work pretty well.
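A toy sketch of a minibatch loop; the sizes are shrunk so it runs as-is, and in the real setting X and y would be np.memmap views of files on disk so that only the current batch is pulled into RAM:

```python
import numpy as np

rng = np.random.RandomState(0)
n_rows, n_features, batch_size = 10000, 20, 128   # tiny stand-in for the real data

X = rng.randn(n_rows, n_features).astype("float32")
y = (rng.rand(n_rows) > 0.5).astype("float32")

for epoch in range(3):
    order = rng.permutation(n_rows)               # reshuffle between epochs
    for start in range(0, n_rows, batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        # forward pass, loss, backprop and weight update on (xb, yb) go here
```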
1
0
1
I am trying to train a neural net (backprop + gradient descent) in python with features I am constructing on top of the google books 2-grams (English), it will end up being around a billion rows of data with 20 features each row. This will easily exceed my memory and hence using in-memory arrays such as numpy would not be an option as it requires loading the complete training set. I looked into memory mapping in numpy which could solve the problem for input layer (which are readonly), but I will also need to store and manipulate my internal layers in the net which requires extensive data read/write and considering the size of data, performance is extremely crucial in this process as could save days of processing for me. Is there a way to train the model without having to load the complete training set in memory for each iteration of cost (loss) minimization?
Processing array larger than memory for training a neural net in python
1.2
0
0
597
35,254,240
2016-02-07T13:44:00.000
1
0
0
0
java,python,selenium
35,255,120
2
false
0
0
It has to be in combination with a cron job. You can start the cron job 1-2 minutes earlier, open the login page and, in your Python script, sleep until 7am and then just log in.
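A sketch of the "sleep until" part (plain standard library; the button click is a placeholder for whatever Selenium call actually makes the booking):

```python
import datetime
import time

def sleep_until(hour, minute=0):
    """Block until the next occurrence of hour:minute local time."""
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)
    time.sleep((target - now).total_seconds())

# ... log in with Selenium first, then:
sleep_until(7)                # wakes at 07:00:00 local time
# booking_button.click()      # placeholder for the click that must land at 7am
```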
1
1
0
I have written a Python Selenium script to log me into a website. This website is where I need to book a court at precisely 7am. I cannot run the script using the cron scheduler alone, because that only runs the script, and by the time Selenium has logged in, 7am will have passed. I've tried time() and WebDriverWait, but these only allow me to delay hitting a web page button. I need to synchronise the click of a button at a precise time from within the Python script.
How can I run code in selenium to execute a function at a specific hour of the day within the python script not cron
0.099668
0
1
705
35,256,569
2016-02-07T17:20:00.000
0
0
0
0
python,django,csrf,django-csrf
35,256,740
1
true
1
0
You can change the cookie name and header name using CSRF_COOKIE_NAME and CSRF_HEADER_NAME. Unfortunately, you can't change the POST field name that easily; you would have to modify CsrfViewMiddleware for that. But if you're using Angular, you can rely on headers only and completely omit the POST field.
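In settings.py that would look roughly like this (the names are the ones from the question; note that Django reads the header via request.META, so the browser header X-SOMENAME is spelled HTTP_X_SOMENAME here):

```python
# settings.py
CSRF_COOKIE_NAME = "sometokenName"     # cookie Django sets / Angular reads
CSRF_HEADER_NAME = "HTTP_X_SOMENAME"   # META key for the "X-SOMENAME" request header
```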
1
2
0
I am building an app using Angular and Django. By default, Django uses X-CSRFToken as the CSRF header and csrftoken as the token (cookie) name. I want to rename the header to something like X-SOMENAME and the token to sometokenName. I know that with Angular we can change the default names with $http.defaults. Is it possible to change the token name on the Django side so that the generated token is sometokenName and the header Django looks for is X-SOMENAME? Thank you.
Is it possible to change the Django csrf token name and token header
1.2
0
0
2,189
35,259,943
2016-02-07T22:29:00.000
-1
0
1
0
python,utf-8,pyhook
35,260,023
2
false
0
1
Try adding this line at the top of your script: # -*- coding: utf-8 -*-
1
0
0
I'm making a Python app that triggers an action when the Print Screen key is pressed. I'm using the pyHook library. However, every time I press a character written in my country's language (ã, í, é and others), the character is doubled. For example: ~~a, ''e, ''i, which causes problems for normal typing. Is there any way to fix this?
Pyhook UTF-8 issue
-0.099668
0
0
208
35,260,536
2016-02-07T23:32:00.000
1
0
0
0
python,sqlalchemy,flask-sqlalchemy,alembic
35,275,008
1
true
1
0
If you know the state of the database you can just stamp the revision you were at when you created the instance: set up the instance, run create_all, run alembic heads (to determine the latest version available in the scripts dir), then alembic stamp <revision>. Here is the doc from the command line: stamp - 'stamp' the revision table with the given revision; don't run any migrations.
1
4
0
In a platform using Flask, SQLAlchemy, and Alembic, we constantly need to create new separate instances with their own set of resources, including a database. When creating a new instance, SQLAlchemy's create_all gives us a database with all the updates up to the point when the instance is created, but this means that this new instance does not have the migrations history that older instances have. It doesn't have an Alembic revisions table pointing to the latest migration. So when the time comes to update both older instances (with migrations histories) and a newer instance without migrations history we have to either give the newer instance a custom set of revisions (ignoring older migrations than the database itself) or create a fake migrations history for it and use a global set of migrations. For the couple of times that this has happened, we have done the latter. Is making a root migration that sets up the entire database as it was before the first migration and then running all migrations instead of create_all a better option for bootstrapping the database of new instances? I'm concerned for the scalability of this as migrations increase in number. Is there perhaps another option altogether?
SQLAlchemy, Alembic and new instances
1.2
1
0
591
35,261,188
2016-02-08T01:14:00.000
0
1
0
1
clang,cpython
35,265,868
1
true
0
0
There is an environment variable for that: CC=clang python setup.py build. Binaries compiled either way are compatible with CPython.
1
3
0
All's in the title: I'd like to try using clang for compiling a C extension module for CPython on Linux (CPython comes from the distro repositories, and is built with gcc). Do distutils/setuptools support this? Does the fact that CPython and the extension are built with two different compilers matter? Thanks.
Is is possible to select clang for compiling CPython extensions on Linux?
1.2
0
0
189