Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | DISCREPANCY (int64) | Tags (string) | ERRORS (int64) | A_Id (int64) | API_CHANGE (int64) | AnswerCount (int64) | REVIEW (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | DOCUMENTATION (int64) | Question (string) | Title (string) | CONCEPTUAL (int64) | Score (float64) | API_USAGE (int64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
34,291,760 | 2015-12-15T14:28:00.000 | 2 | 0 | 1 | 0 | 0 | python,c,modulo | 0 | 46,969,300 | 0 | 2 | 0 | false | 0 | 0 | easily implement a C-like modulo in python.
Since C does truncation toward zero when integers are divided, the sign of the remainder is always the sign of the first operand. There are several ways to implement that; pick one you like:
def mod(a, b): return abs(a)%abs(b)*(1,-1)[a<0]
def mod(a, b): return abs(a)%abs(b)*(1-2*(a<0)) | 1 | 2 | 0 | 0 | The modulo operator % on negative numbers is implemented in a different way in Python than in C. In C:
-4 % 3 = -1, while in Python: -4 % 3 = 2.
I know how to implement python-like modulo in C. I wonder how to do the reverse, that is: easily implement a C-like modulo in python. | How to easily implement C-like modulo (remainder) operation in python 2.7 | 0 | 0.197375 | 1 | 0 | 0 | 1,018 |
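The idea in the answer above can be written out as a small, self-contained sketch; the function name c_mod and the sample values are only for illustration, and math.fmod is mentioned as a standard-library alternative that also truncates toward zero.

```python
import math

def c_mod(a, b):
    """C-like remainder: the result takes the sign of the first operand,
    matching C's truncation-toward-zero integer division."""
    return abs(a) % abs(b) * (1 if a >= 0 else -1)

print(c_mod(-4, 3))      # -1, the C result
print(-4 % 3)            # 2, Python's floored modulo
print(math.fmod(-4, 3))  # -1.0, math.fmod follows the C convention as well
```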
34,310,736 | 2015-12-16T11:23:00.000 | 1 | 0 | 0 | 0 | 0 | python,google-app-engine,caching,server-side,multilingual | 0 | 34,314,025 | 0 | 1 | 0 | false | 1 | 0 | I assume the individual product rendering in a particular language accounts for the majority (or at least a big chunk) of the rendering effort for the entire page.
You could cache server-side the rendered product results for a particular language, prior to assembling them in a complete results page and sending them to the client, using a 2D product x language lookup scheme.
You could also render individual product info offline, on a task queue, whenever products are added/modified, and store/cache them on the server ahead of time. Maybe just for the most heavily used languages?
This way you avoid individual product rendering on the critical path - in response to the client requests, at the expense of added memcache/storage.
You just need to:
split your rendering in 2 stages (individual product info and complete results page assembly)
add logic for cleanup/update of the stored/cached rendered product info when products add/change/delete ops occur
(maybe) add logic for on-demand product info rendering when pre-rendered info is not yet available when the client request comes in (if not acceptable to simply not display the info)
You might want to check if it's preferable to cache/store the rendered product info compressed (html compresses well) - balancing memcache/storage costs vs instance runtime costs vs response time performance (I have yet to do such experiment). | 1 | 0 | 0 | 0 | Question:
What are the most efficient approaches to multi-lingual data caching on a web server, given that clients want the same base set of data but in their locale format. So 1000 data items max to be cached and then rendered on demand in specific locale format.
My current approach is as follows:
I have a multilingual python Google App Engine project. The multi-lingual part uses Babel and various language .po and .mo files for translation. This is all fine and dandy. Issues, start to arise when considering caching of data. For example, let's say I have 1000 product listings that I want clients to be able to access 100 at a time. I use memcache with a datastore backup entity if the memcache gets blasted. Again, all is fine and dandy, but not multilingual. Each product has to be mapped to match the key with the particular locale of any client, English, French, Turkish, whatever. The way I do it now is to map the products under a specific locale, say 'en_US', and render server side using jinja2 templates. Each bit of data that is multilingual specific is rendered using the locale settings for date, price formatting title etc. etc. in the 'en_US' format and placed into the datastore and memcache all nicely mapped out ready for rendering. However, I have an extra step to take for getting those multilingual data into the correct format for a clients locale, and that is by way of standard {{ }} translations and jinja2 filters, generally for stuff like price formatting and dates. Problem is this is slowing things up as this all has to be rendered on the server and then passed back to the client. The initial 100 products are always server side rendered, however, before caching I was rendering the rest client side from JSON data via ajax calls to the server. Now it's all server side rendering.
I don't want to get into a marathon discussion regarding server vs client side rendering, but I would appreciate any insights into how others have successfully handled multi-lingual caching | Issues with Multi-lingual website data caching - Python - Google App Engine | 0 | 0.197375 | 1 | 0 | 0 | 47 |
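A minimal sketch of the two-stage idea from the answer above: cache the rendered snippet per (product, locale) pair in memcache and only render on a miss. The render_product callable and the one-hour expiry are assumptions for illustration, not part of the original answer.

```python
from google.appengine.api import memcache

def get_rendered_product(product_id, locale, render_product):
    """Return the locale-formatted HTML for one product, using a 2D
    (product, locale) lookup in memcache before rendering on demand."""
    key = 'product:%s:%s' % (product_id, locale)
    html = memcache.get(key)
    if html is None:
        html = render_product(product_id, locale)   # hypothetical renderer
        memcache.set(key, html, time=3600)          # assumed one-hour lifetime
    return html
```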
34,322,297 | 2015-12-16T21:19:00.000 | 0 | 0 | 1 | 0 | 0 | python,logging | 0 | 34,322,394 | 0 | 1 | 0 | false | 0 | 0 | OK, I found the answer. The master parent for all loggers is the root logger - no matter that its name doesn't appear in the canonical name. | 1 | 0 | 0 | 0 | I understand and like the idea of a hierarchical structure of loggers with the canonical module name as the name of the logger. But I don't know how to tie everything up at the top level.
Supposing I have an application using
package1.subpackage1.module1 and
package2.subpackage2.module2.
And now I'd like to define one handler and one formatter for all. But I don't want to enumerate all module's loggers and setup them separately.
It seems that all module loggers should be automagically attached somewhere to "master" logger, where the only handler is defined.
How to achieve this? | how to gather all module's loggers under one parent? | 0 | 0 | 1 | 0 | 0 | 15 |
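A small sketch of the root-logger setup the answer above describes; the format string and level are arbitrary choices.

```python
import logging

# Configure the unnamed root logger once, near program start-up. Every logger
# created with logging.getLogger(__name__) propagates its records up to it,
# so one handler and one formatter cover package1.* and package2.* alike.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s: %(message)s'))

root = logging.getLogger()      # no name -> the root logger
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger('package1.subpackage1.module1').info('handled by the root handler')
```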
34,333,808 | 2015-12-17T11:45:00.000 | 0 | 0 | 1 | 0 | 0 | python,build,scons | 0 | 34,335,139 | 0 | 2 | 0 | false | 0 | 1 | Create two Environments, one with each compiler, use where necessary.
Then use whichever Environment you need for linking object from either Environment. | 1 | 0 | 0 | 0 | I'm trying to set up a complete build environment with SCons and I came across this problem:
My project can be compiled with two different compilers (c or cpp compilers) and the resulting object files linked with the same linker.
Because of this, I need to know how to split the compilation part from the linking part.
Also, there are cases when I only need the .o files so I want to avoid linking.
Is this possible using the same environment ? | How to compile with two different compilers using SCons? | 1 | 0 | 1 | 0 | 0 | 657 |
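A sketch of an SConstruct along the lines of the answer above; the source file names and compiler choices are made up for illustration.

```python
# SConstruct (sketch) -- two Environments, separate compilation, shared link step.
# SCons injects Environment, Object and Program into SConstruct files, so no imports are needed.
env_c = Environment(CC='gcc')     # environment for the C sources
env_cpp = Environment(CXX='g++')  # environment for the C++ sources

c_objs = env_c.Object(['foo.c', 'bar.c'])    # compile only, no linking
cpp_objs = env_cpp.Object(['baz.cpp'])

# Link the object files from both environments with whichever environment you prefer.
env_cpp.Program('app', c_objs + cpp_objs)
```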
34,337,788 | 2015-12-17T15:06:00.000 | 1 | 0 | 0 | 1 | 0 | python,windows,docker,tensorflow | 0 | 34,340,617 | 0 | 1 | 0 | false | 0 | 0 | If you're using one of the devel tags (:latest-devel or :latest-devel-gpu), the file should be in /tensorflow/tensorflow/models/image/imagenet/classify_image.py.
If you're using the base container (b.gcr.io/tensorflow/tensorflow:latest), it's not included -- that image just has the binary installed, not a full source distribution, and classify_image.py isn't included in the binary distribution. | 1 | 2 | 1 | 0 | I have installed tensorflow on Windows using Docker, I want to go to the folder "tensorflow/models/image/imagenet" that contains "classify_image.py" python file..
Can someone please explain how to reach this path? | Location of tensorflow/models.. in Windows | 0 | 0.197375 | 1 | 0 | 0 | 1,104 |
34,341,489 | 2015-12-17T18:13:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,python-3.x | 0 | 34,341,868 | 0 | 1 | 0 | true | 0 | 0 | PyMySQL is a pure-python database connector for MySQL, and can be used as a drop-in replacement using the install_as_MySQLdb() function. As a pure-python implementation, it will have some more overhead than a connector that uses C code, but it is compatible with other versions of Python, such as Jython and PyPy.
At the time of writing, Django recommends to use the mysqlclient package on Python 3. This fork of MySQLdb is partially written in C for performance, and is compatible with Python 3.3+. You can install it using pip install mysqlclient. As a fork, it uses the same module name, so you only have to install it and Django will use it in its MySQL database engine. | 1 | 0 | 0 | 0 | MySQLdb as I understand doesn't support Python 3. I've heard about PyMySQL as a replacement for this module. But how does it work in production environment?
Is there a big difference in speed between these two? I'm asking because I will be managing a very active webapp that needs to create entries in the database very often. | MySQL module for Python 3 | 0 | 1.2 | 1 | 1 | 0 | 79 |
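If PyMySQL is chosen instead of mysqlclient, the drop-in shim mentioned in the answer is a two-liner; placing it in the project package's __init__.py is one common spot, though that placement is an assumption here rather than something the answer prescribes.

```python
# e.g. in the Django project package's __init__.py, before the MySQL backend loads
import pymysql

pymysql.install_as_MySQLdb()   # PyMySQL now answers to "import MySQLdb"
```

With mysqlclient no shim is needed, since it keeps the MySQLdb module name.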
34,357,680 | 2015-12-18T14:19:00.000 | 0 | 0 | 1 | 0 | 1 | python,pdf,encryption,pdfminer | 0 | 71,596,480 | 0 | 3 | 0 | false | 0 | 0 | This is pdfminer's error, use pdfplumber.open(file_name, password="".encode()) to skip this error or TypeError: can only concatenate str (not "bytes") to str. | 1 | 11 | 0 | 0 | I'm trying to extract text from pdf-files and later try to identify the references. I'm using pdfminer 20140328. With unencrypted files its running well, but I got now a file where i get:
File "C:\Tools\Python27\lib\site-packages\pdfminer\pdfdocument.py", line 348, in _initialize_password
raise PDFEncryptionError('Unknown algorithm: param=%r' % param)
pdfminer.pdfdocument.PDFEncryptionError: Unknown algorithm: param={'CF': {'StdCF': {'Length': 16, 'CFM': /AESV2, 'AuthEvent': /DocOpen}}, 'O': '}\xe2>\xf1\xf6\xc6\x8f\xab\x1f"O\x9bfc\xcd\x15\xe09~2\xc9\\x87\x03\xaf\x17f>\x13\t^K\x99', 'Filter': /Standard, 'P': -1548, 'Length': 128, 'R': 4, 'U': 'Kk>\x14\xf7\xac\xe6\x97\xb35\xaby!\x04|\x18(\xbfN^Nu\x8aAd\x00NV\xff\xfa\x01\x08', 'V': 4, 'StmF': /StdCF, 'StrF': /StdCF}
I checked with pdfinfo, that this file seemed to be AES encrypted, but i can open it without any problems.
So I have two questions:
First: how is it possible that a document is encrypted but I can open it without a password?
And secondly: how do I make PDFMiner read that file properly? Somewhere I read to install pycrypto to get additional algorithms, but it didn't fix my problem.
Many thanks. | PDF Miner PDFEncryptionError | 1 | 0 | 1 | 0 | 0 | 5,581 |
34,376,936 | 2015-12-20T00:46:00.000 | 2 | 0 | 1 | 0 | 0 | collections,ironpython,garbage | 0 | 34,381,901 | 0 | 1 | 0 | true | 1 | 0 | In general managed enviroments relase there memory, if no reference is existing to the object anymore (from connection from the root to the object itself). To force the .net framework to release memory, the garbage collector is your only choice. In general it is important to know, that GC.Collect does not free the memory, it only search for objects without references and put the in a queue of objects, which will be released. If you want to free memory synchron, you also need GC.WaitForPendingFinalizers.
One thing to know about large objects in the .NET framework is that they are stored separately, in the Large Object Heap (LOH). From my point of view, it is not bad to free those objects synchronously; you only have to know that this can cause some performance issues. That's why, in general, the GC decides on its own when to collect and free memory and when not to.
Because gc.collect is implemented in Python as well as in IronPython, you should be able to use it. If you take a look at the implementation in IronPython, gc.collect does exactly what you want: it calls GC.Collect() and GC.WaitForPendingFinalizers. So in your case, I would use it.
Hope this helps. | 1 | 1 | 0 | 0 | I am creating a huge mesh object (some 900 megabytes in size).
Once I am done with analysing it, I would like to somehow delete it from the memory.
I did a bit of searching on stackoverflow.com, and I found out that del will only delete the reference to the mentioned mesh, not the mesh object itself.
And that after some time, the mesh object will eventually get garbage collected.
Is gc.collect() the only way by which I could instantly release the memory, and therefore somehow remove the mentioned large mesh from memory?
I've found replies here on stackoverflow.com which state that gc.collect() should be avoided (at least when it comes to regular Python, not specifically IronPython).
I've also found comments here on stackoverflow which claim that in IronPython it is not even guaranteed that the memory will be released if nothing else is holding a reference.
Any comments on all these issues?
I am using ironpython 2.7 version.
Thank you for the reply. | Delete the large object in ironpython, and instantly release the memory? | 0 | 1.2 | 1 | 0 | 0 | 499 |
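A minimal sketch of the del + gc.collect() sequence discussed above; the bytearray is only a stand-in for the large mesh object.

```python
import gc

mesh = bytearray(900 * 1024 * 1024)   # stand-in for the ~900 MB mesh object
# ... analyse the mesh here ...

del mesh       # drop the last reference
gc.collect()   # per the accepted answer, IronPython's gc.collect calls GC.Collect()
               # and GC.WaitForPendingFinalizers, so the release is synchronous
```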
34,400,788 | 2015-12-21T16:59:00.000 | 0 | 0 | 1 | 0 | 0 | python,file,3d,3dsmax | 0 | 34,401,590 | 0 | 1 | 0 | false | 0 | 1 | According to my understanding, there are some advanced graphics libraries out there for advanced usage, however Blender, (an application developed in python) supports python scripting. There are even a simple drag and drop game engine for simpler tasks. | 1 | 1 | 0 | 0 | I am new to 3d world. I would like to open 3ds files with python and visualize the objects.
I could not find any easy and straightforward way to play with 3ds Max files.
Can you let me know how I can achieve this? | how to open 3ds files with python | 0 | 0 | 1 | 0 | 0 | 717 |
34,400,922 | 2015-12-21T17:07:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,django,forms | 0 | 34,403,444 | 0 | 1 | 0 | false | 1 | 0 | There is some terminology confusion here, as SColvin points out; it's really not clear what you mean by "custom variables", and how those relates to models.
However your main confusion seems to be around forms. There is absolutely no requirement to use them: they are just one method of updating models. It is always possible to edit the models directly in code, and the data from that can of course come from Javascript if you want. The tutorial has good coverage of how to update a model from code without using a form.
If you're doing a lot of work via JS though, you probably want to look into the Django Rest Framework, which simplifies the process of converting Django model data to and from JSON to use in your client-side code. Again though DRF isn't doing anything you couldn't do manually in your own code, all without the use of forms. | 1 | 0 | 0 | 1 | I have a contract job for editing a Django application, and Django is not my main framework to use, so I have a question regarding models in it.
The application I am editing has a form that each user can submit, and every single model in the application is edited directly through the form.
From this perspective, it seems every model is directly a form object, I do not see any model fields that I could use for custom variables. Meaning instead of a "string" that I could edit with JS, I only see a TextField where the only way it could be edited is by including it on a form directly.
If I wanted to have some models that were custom variables, meaning I controlled them entirely through JS rather than form submissions, how would I do that in Django?
I know I could, for example, have some "hidden" form objects that I manipulated with JS. But this solution sounds kind of hacky. Is there an intended way that I could go about this?
Thanks!
(Edit: It seems most responses do not know what I am referring to. Basically I want to allow the client to perform some special sorting functions etc, in which case I will need a few additional lists of data. But I do not want these to be visible to the user, and they will be altered exclusively by js.
Regarding the response of SColvin, I understand that the models are a representation of the database, but from how the application I am working on is designed, it looks as if the only way the models are being used is strictly through forms.
For example, every "string" is a "TextField", and lets say we made a string called "myField", the exclusive use of this field would be to use it in templates with the syntax {{ form.myField|attr:"rows:4" }}.
There are absolutely no use of this model outside of the forms. Every place you see it in the application, there is a form object. This is why I was under the impression that is the primary way to edit the data found in the models.
I did the Django tutorial prior to accepting this project but do not remember seeing any way to submit changes to models outside of the forms.
So more specifically what I would like to do in this case: Let's say I wanted to add a string to my models file, and this string will NOT be included/edited on the form. It will be invisible to the user. It will be modified browser-side by some .js functions, and I would like it to be saved along when submitting the rest of the form. What would be the intended method for going about doing this?
If anyone could please guide me to documentation or examples on how to do this, it would be greatly appreciated! )
(Edit2: No responses ever since the first edit? Not sure if this post is not appearing for anyone else. Still looking for an answer!) | Django saving models by JS rather than form submissions? | 0 | 0 | 1 | 0 | 0 | 51 |
34,428,046 | 2015-12-23T03:13:00.000 | 2 | 0 | 0 | 1 | 1 | python,database,rest,concurrency,etag | 0 | 34,428,792 | 0 | 3 | 0 | false | 1 | 0 | This is really a question about how to use ORMs to do updates, not about ETags.
Imagine 2 processes transferring money into a bank account at the same time -- they both read the old balance, add some, then write the new balance. One of the transfers is lost.
When you're writing with a relational DB, the solution to these problems is to put the read + write in the same transaction, and then use SELECT FOR UPDATE to read the data and/or ensure you have an appropriate isolation level set.
The various ORM implementations all support transactions, so getting the read, check and write into the same transaction will be easy. If you set the SERIALIZABLE isolation level, then that will be enough to fix race conditions, but you may have to deal with deadlocks.
ORMs also generally support SELECT FOR UPDATE in some way. This will let you write safe code with the default READ COMMITTED isolation level. If you google SELECT FOR UPDATE and your ORM, it will probably tell you how to do it.
In both cases (serializable isolation level or select for update), the database will fix the problem by getting a lock on the row for the entity when you read it. If another request comes in and tries to read the entity before your transaction commits, it will be forced to wait. | 3 | 6 | 0 | 0 | Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Clients 1 and 2 both send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem. | Etags used in RESTful APIs are still susceptible to race conditions | 0 | 0.132549 | 1 | 0 | 0 | 1,416 |
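A hedged sketch of the read-check-write step under a row lock described in the answer above, using SQLAlchemy since the question mentions it; the Resource model, the updated_at-based ETag comparison and the PreconditionFailed exception are assumed names, not part of the answer.

```python
def put_resource(session, resource_id, request_etag, new_data):
    # One transaction: read with a row lock, check the ETag, then write.
    resource = (session.query(Resource)          # Resource is an assumed mapped class
                .filter_by(id=resource_id)
                .with_for_update()               # emits SELECT ... FOR UPDATE
                .one())
    if str(resource.updated_at) != request_etag:
        session.rollback()
        raise PreconditionFailed()               # assumed exception, maps to HTTP 412
    resource.data = new_data
    session.commit()                             # the row lock is held until here
```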
34,428,046 | 2015-12-23T03:13:00.000 | 1 | 0 | 0 | 1 | 1 | python,database,rest,concurrency,etag | 0 | 63,120,699 | 0 | 3 | 0 | false | 1 | 0 | You are right that you can still get race conditions if the 'check last etag' and 'make the change' aren't in one atomic operation.
In essence, if your server itself has a race condition, sending etags to the client won't help with that.
You already mentioned a good way to achieve this atomicity:
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example.
You could do something else, like using a mutex lock. Or using an architecture where two threads cannot deal with the same data.
But the database check seems good to me. What you describe about ORM checks might be an addition for better error messages, but is not by itself sufficient as you found. | 3 | 6 | 0 | 0 | Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Clients 1 and 2 both send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem. | Etags used in RESTful APIs are still susceptible to race conditions | 0 | 0.066568 | 1 | 0 | 0 | 1,416 |
34,428,046 | 2015-12-23T03:13:00.000 | 1 | 0 | 0 | 1 | 1 | python,database,rest,concurrency,etag | 0 | 34,428,187 | 0 | 3 | 0 | false | 1 | 0 | Etag can be implemented in many ways, not just last updated time. If you choose to implement the Etag purely based on last updated time, then why not just use the Last-Modified header?
If you were to encode more information into the Etag about the underlying resource, you wouldn't be susceptible to the race condition that you've outlined above.
The only fool proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
That's your answer.
Another option would be to add a version to each of your resources which is incremented on each successful update. When updating a resource, specify both the ID and the version in the WHERE. Additionally, set version = version + 1. If the resource had been updated since the last request then the update would fail as no record would be found. This eliminates the need for locking. | 3 | 6 | 0 | 0 | Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Clients 1 and 2 both send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem. | Etags used in RESTful APIs are still susceptible to race conditions | 0 | 0.066568 | 1 | 0 | 0 | 1,416 |
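The version-column idea from the last answer can be sketched with a plain DB-API cursor; the table layout and the ConflictError exception are assumptions for illustration.

```python
def update_with_version(cursor, resource_id, expected_version, new_data):
    """Optimistic locking: the UPDATE only matches if nobody bumped the version first."""
    cursor.execute(
        "UPDATE resources SET data = %s, version = version + 1 "
        "WHERE id = %s AND version = %s",
        (new_data, resource_id, expected_version))
    if cursor.rowcount == 0:
        raise ConflictError()   # assumed exception; report e.g. HTTP 409/412 to the client
```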
34,441,206 | 2015-12-23T17:54:00.000 | 1 | 0 | 0 | 0 | 0 | python,c | 0 | 34,441,467 | 0 | 1 | 0 | true | 0 | 0 | To compile your code so expression statements invoke sys.displayhook, you need to pass Py_single_input as the start parameter, and you need to provide one statement at a time. | 1 | 1 | 0 | 0 | In a python shell, if I type a = 2 nothing is printed. If I type a 2 gets printed automatically. Whereas, this doesn't happen if I run a script from idle.
I'd like to emulate this shell-like behavior using the python C api, how is it done?
For instance, executing this code PyRun_String("a=2 \na", Py_file_input, dic, dic); from C, will not print anything as the output.
I'd like to simulate a shell-like behavior so that when I execute the previous command, the value "2" is stored in a string. Is it possible to do this easily, either via python commands or from the C api? Basically, how does the python shell do it? | Simulate shell behavior (force eval of last command to be displayed) | 1 | 1.2 | 1 | 0 | 0 | 81 |
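The answer above is about the C API, but the same mechanism can be illustrated from pure Python: compiling in 'single' mode is the counterpart of Py_single_input, and bare expression statements then go through sys.displayhook, which can be redirected to capture the value instead of printing it.

```python
import sys

captured = []
sys.displayhook = captured.append      # intercept shell-style output

for stmt in ("a = 2", "a"):            # one statement at a time, as in a shell
    exec(compile(stmt, "<input>", "single"))

sys.displayhook = sys.__displayhook__  # restore the default hook
print(captured)                        # [2], the value "2" is now stored
```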
34,472,609 | 2015-12-26T15:18:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,database,postgresql,database-migration | 1 | 34,480,125 | 0 | 1 | 0 | true | 1 | 0 | Try those same steps WITHOUT running syncdb and migrate at all. So overall, your steps will be:
heroku pg:backups capture
curl -o latest.dump heroku pg:backups public-url
`scp -P latest.dump [email protected]:/home/myuser
drop database mydb;
create database mydb;
pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump | 1 | 0 | 0 | 0 | I have a Django app with a postgres backend hosted on Heroku. I'm now migrating it to Azure. On Azure, the Django application code and postgres backend have been divided over two separate VMs.
Everything's set up, I'm now at the stage where I'm transferring data from my live Heroku website to Azure. I downloaded a pg_dump to my local machine, transferred it to the correct Azure VM, ran syncdb and migrate, and then ran pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump. The data got restored (11 errors were ignored, pertaining to 2 tables that get restored, but which my code now doesn't use).
When I try to access my website, I get the kind of error that usually comes in my website if I haven't run syncdb and migrate:
Exception Type: DatabaseError Exception Value:
relation "user_sessions_session" does not exist LINE 1:
...last_activity", "user_sessions_session"."ip" FROM "user_sess...
^
Exception Location:
/home/myuser/.virtualenvs/myenv/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py
in execute, line 54
Can someone who has experienced this before tell me what I need to do here? It's acting as if the database doesn't exist and I had never run syncdb. When I use psql, I can actually see the tables and the data in them. What's going on? Please advise. | Unable to correctly restore postgres data: I get the same error I usually get if I haven't run syncdb and migrate | 1 | 1.2 | 1 | 1 | 0 | 119 |
34,483,277 | 2015-12-27T18:04:00.000 | 1 | 0 | 0 | 0 | 0 | python,theano,symbolic-computation | 0 | 34,484,383 | 0 | 2 | 0 | true | 0 | 0 | Theano variables do not have explicit shape information since they are symbolic variables, not numerical. Even dtensor3 = T.tensor3(T.config.floatX) does not have an explicit shape. When you type dtensor3.shape you'll get an object Shape.0 but when you do dtensor3.shape.eval() to get its value you'll get an error.
For both cases however, dtensor.ndim works and prints out 5 and 3 respectively. | 1 | 1 | 1 | 0 | I was wondering how to make a 5D tensor in Theano.
Specifically, I tried dtensor = T.TensorType('float32', (False,)*5). However, the only issue is that dtensor.shape returns: AttributeError: 'TensorType' object has no attribute 'shape'
Whereas if I used a standard tensor type like dtensor = T.tensor3('float32'), I don't get this issue when I call dtensor.shape.
Is there a way to have this not be an issue with a 5D tensor in Theano? | 5D tensor in Theano | 0 | 1.2 | 1 | 0 | 0 | 506 |
34,486,981 | 2015-12-28T02:26:00.000 | 0 | 0 | 1 | 0 | 0 | python-2.7 | 1 | 34,493,500 | 0 | 3 | 0 | false | 0 | 0 | If you wanna check correct in english dictionary then you can use pyenchant ... use pip to install it ... its east to use and gives true if spelling of a word is correct and false if word doesn't exist in english dictionary.
pip install pyenchant | 1 | 3 | 0 | 0 | In Python, how do I check that the user has entered a name instead of a number, when asking for user input as string? I want a string input in the form of their name, but I want to use error checking to make sure the user doesn't enter a number. | In Python, how do I check that the user has entered a name instead of a number? | 0 | 0 | 1 | 0 | 0 | 1,176 |
34,500,369 | 2015-12-28T20:33:00.000 | 1 | 0 | 0 | 1 | 0 | python,windows,cmd | 0 | 34,500,631 | 0 | 3 | 0 | true | 0 | 0 | Try something like this: runas /user:administrator regedit. | 1 | 4 | 0 | 0 | I have my own python script that manages the IP address on my computer. Mainly it executes the netsh command in the command line (windows 10) which for you must have administrator rights.
It is my own computer, I am the administrator and when running the script I am already logged in with my user (Adrian) which is of type administrator.
I can't use the right click and "run as administrator" solution because I am executing my netsh command from my Python script.
Anybody knows how to get "run as administrator" with a command from CMD ?
Thanks | open cmd with admin rights (Windows 10) | 0 | 1.2 | 1 | 0 | 0 | 7,654 |
34,502,840 | 2015-12-29T00:37:00.000 | 2 | 0 | 0 | 0 | 0 | python,pandas | 0 | 34,502,877 | 0 | 2 | 1 | false | 0 | 0 | To sort by name: df.fruit.value_counts().sort_index()
To sort by counts: df.fruit.value_counts().sort_values() | 1 | 0 | 1 | 0 | Let's say that I have pandas DataFrame with a column called "fruit" that represents what fruit my classroom of kindergartners had for a morning snack. I have 20 students in my class. Breakdown would be something like this.
Oranges = 7, Grapes = 3, Blackberries = 4, Bananas = 6
I used sort to group each of these fruit types, but it is grouping based on alphabetical order. I would like it to group based on the largest quantity of entries for that class of fruit. In this case, I would like Oranges to turn up first so that I can easily see that Oranges is the most popular fruit.
I'm thinking that sort is not the best way to go about this. I checked out groupby but could not figure out how to use that appropriately either.
Thanks in advance. | Python pandas: determining which "group" has the most entries | 0 | 0.197375 | 1 | 0 | 0 | 38 |
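A small sketch of the value_counts() approach from the answer above, using made-up snack data matching the example counts.

```python
import pandas as pd

df = pd.DataFrame({'fruit': ['orange'] * 7 + ['banana'] * 6 +
                            ['blackberry'] * 4 + ['grape'] * 3})

counts = df['fruit'].value_counts()   # counts come back sorted largest-first
print(counts)
print(counts.idxmax())                # 'orange', the most popular snack
```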
34,507,744 | 2015-12-29T08:59:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-3.x,beautifulsoup,pip | 0 | 70,827,357 | 0 | 6 | 0 | false | 1 | 0 | I had some mismatch between Python version and Beautifulsoup. I was installing this project
Th3Jock3R/LinuxGSM-Arma3-Mod-Update
to a linux centos8 dedicated Arma3 server. Python3 and Beautifulsoup4 seem to match.So I updated Python3, removed manually Beautifulsoup files and re-installed it with: sudo yum install python3-beautifulsoup4 (note the number 3). Works. Then pointing directories in Th3Jock3R:s script A3_SERVER_FOLDER = "" and A3_SERVER_DIR = "/home/arma3server{}".format(A3_SERVER_FOLDER) placing and running the script in same folder /home/arma3server with python3 update.py. In this folder is also located new folder called 'modlists' Now the lightness of mod loading blows my mind. -Bob- | 2 | 27 | 0 | 0 | I have both Python 2.7 and Python 3.5 installed. When I type pip install beautifulsoup4 it tells me that it is already installed in python2.7/site-package directory.
But how do I install it into the python3 dir? | How to install beautifulsoup into python3, when default dir is python2.7? | 0 | 0.033321 | 1 | 0 | 0 | 73,703 |
34,507,744 | 2015-12-29T08:59:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x,beautifulsoup,pip | 0 | 63,598,946 | 0 | 6 | 0 | false | 1 | 0 | If you are on windows, this works for Python3 as well
py -m pip install bs4 | 2 | 27 | 0 | 0 | I have both Python 2.7 and Python 3.5 installed. When I type pip install beautifulsoup4 it tells me that it is already installed in python2.7/site-package directory.
But how do I install it into the python3 dir? | How to install beautifulsoup into python3, when default dir is python2.7? | 0 | 0 | 1 | 0 | 0 | 73,703 |
34,514,164 | 2015-12-29T15:34:00.000 | 0 | 0 | 0 | 0 | 0 | wxpython | 0 | 44,302,082 | 0 | 1 | 0 | false | 0 | 0 | Use EVT_AUINOTEBOOK_TAB_RIGHT_DOWN to catch the event. The event.page will give you the clicked page. | 1 | 1 | 0 | 0 | I create a menu that popups after a right on a tab. The menu contains three options: close, close other and close all. Right clicking on a tabs does not display its content (it not already displayed), it just show the menu that control the clicked tab. The issue is that right clicking on another tab popups the menu but the program does not know which tab was clicked.
Are there any built-in methods to get the index of a tab in AuiNotebook after a right click event? | How to get the index of a tab in AuiNotebook after a right click on a non-active tab? | 0 | 0 | 1 | 0 | 0 | 174 |
34,520,233 | 2015-12-29T22:40:00.000 | 2 | 0 | 0 | 1 | 0 | python,api,heroku,oauth-2.0,spotify | 0 | 34,520,316 | 0 | 2 | 0 | false | 1 | 0 | I once ran into a similar issue with Google's Calendar API. The app was pretty low-importance so I botched a solution together by running through the auth locally in my browser, finding the response token, and manually copying it over into an environment variable on Heroku. The downside of course was that tokens are set to auto-expire (I believe Google Calendar's was set to 30 days), so periodically the app stopped working and I had to run through the auth flow and copy the key over again. There might be a way to automate that.
Good luck! | 1 | 6 | 0 | 0 | Working on a small app that takes a Spotify track URL submitted by a user in a messaging application and adds it to a public Spotify playlist. The app is running with the help of spotipy python on a Heroku site (so I have a valid /callback) and listens for the user posting a track URL.
When I run the app through command line, I use util.prompt_for_user_token. A browser opens, I move through the auth flow successfully, and I copy-paste the provided callback URL back into terminal.
When I run this app and attempt to add a track on the messaging application, it does not open a browser for the user to authenticate, so the auth flow never completes.
Any advice on how to handle this? Can I auth once via terminal, capture the code/token and then handle the refreshing process so that the end-user never has to authenticate?
P.S. can't add the tag "spotipy" yet but surprised it was not already available | Completing Spotify Authorization Code Flow via desktop application without using browser | 0 | 0.197375 | 1 | 0 | 1 | 1,073 |
34,547,795 | 2015-12-31T14:30:00.000 | 0 | 0 | 1 | 0 | 0 | python,anaconda,macports,caffe | 0 | 34,549,989 | 0 | 2 | 0 | false | 0 | 0 | Install pyvenv ... it is easy to do on a Mac - then you can use whatever you want to. | 2 | 0 | 0 | 0 | I installed caffe using Macports sudo port install caffe. However, Macports didn't use my anaconda python, which I would like to use for development. I can only import caffe in Macports' python2.7.
Is there a way to show anaconda python where to look or do I have to reinstall for anaconda python? Either way, I would be grateful for a hint how to do it. | Macports caffe wrong python | 0 | 0 | 1 | 0 | 0 | 132 |
34,547,795 | 2015-12-31T14:30:00.000 | 0 | 0 | 1 | 0 | 0 | python,anaconda,macports,caffe | 0 | 34,557,806 | 0 | 2 | 0 | true | 0 | 0 | MacPorts never installs modules for use with other Python versions than MacPorts' own version. As a consequence, there is no switch to select the Python version to build against.
You'll have to install caffe for your Anaconda Python yourself, e.g. in a virtualenv, or use MacPorts Python. | 2 | 0 | 0 | 0 | I installed caffe using Macports sudo port install caffe. However, Macports didn't use my anaconda python, which I would like to use for development. I can only import caffe in Macports' python2.7.
Is there a way to show anaconda python where to look or do I have to reinstall for anaconda python? Either way, I would be grateful for a hint how to do it. | Macports caffe wrong python | 0 | 1.2 | 1 | 0 | 0 | 132 |
34,570,992 | 2016-01-02T21:38:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-3.x,abstract-syntax-tree | 0 | 34,571,022 | 0 | 4 | 0 | true | 0 | 0 | You might create some hash table associating AST nodes to AST nodes and scan (recursively) your topmost AST tree to register in that hash table the parent of each node. | 2 | 16 | 0 | 0 | I'm working with Abstract Syntax Trees in Python 3. The ast library gives many ways to get children of the node (you can use iter_child_nodes() or walk()) but no ways to get parent of one. Also, every node has links to its children, but it hasn't links to its parent.
How can I get the parent of an AST node if I don't want to write some plugin for the ast library?
What is the most correct way to do this? | Getting parent of AST node in Python | 0 | 1.2 | 1 | 0 | 0 | 4,325 |
34,570,992 | 2016-01-02T21:38:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x,abstract-syntax-tree | 0 | 34,571,024 | 0 | 4 | 0 | false | 0 | 0 | It wouldn't be really be a plugin, but you can always write a function which adds a weakref to parent in every child. | 2 | 16 | 0 | 0 | I'm working with Abstract Syntax Trees in Python 3. The ast library gives many ways to get children of the node (you can use iter_child_nodes() or walk()) but no ways to get parent of one. Also, every node has links to its children, but it hasn't links to its parent.
How can I get the parent of an AST node if I don't want to write some plugin for the ast library?
What is the most correct way to do this? | Getting parent of AST node in Python | 0 | 0 | 1 | 0 | 0 | 4,325 |
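Both answers above boil down to one pass over the tree that records each child's parent; a minimal sketch:

```python
import ast

tree = ast.parse("def f(x):\n    return x + 1")

# Single walk over the tree, attaching a parent attribute to every child node.
for node in ast.walk(tree):
    for child in ast.iter_child_nodes(node):
        child.parent = node

ret = next(n for n in ast.walk(tree) if isinstance(n, ast.Return))
print(type(ret.parent).__name__)   # FunctionDef
```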
34,574,396 | 2016-01-03T07:43:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy | 0 | 34,574,445 | 0 | 1 | 0 | true | 0 | 0 | If you need to re-read it quickly into numpy you could just use the cPickle module.
This is going to be much faster than parsing it back from an ASCII dump (however, only a Python program will be able to re-read it). As a bonus, with just one instruction you can dump more than a single matrix (i.e. any data structure built with core Python and numpy arrays).
Note that parsing a floating point value from an ASCII string is a quite complex and slow operation (if implemented correctly down to ulp). | 1 | 2 | 1 | 0 | Here is my question:
I have a 3-d numpy array Data which in the shape of (1000, 100, 100).
And I want to save it as a .txt or .csv file; how do I achieve that?
My first attempt was to reshape it into a 1-d array of length 1000*100*100, convert it into a pandas.DataFrame, and then save it as a .csv file.
When I wanted to use it next time, I would reshape it back into a 3-d array.
I think there must be an easier method. | How to save the n-d numpy array data and read it quickly next time? | 0 | 1.2 | 1 | 0 | 0 | 53 |
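A sketch of the cPickle approach from the accepted answer (the file name is arbitrary); numpy's own np.save/np.load would work just as well and also preserves the shape.

```python
import numpy as np
import cPickle as pickle   # Python 2; on Python 3 use the built-in pickle module

data = np.random.rand(1000, 100, 100)

with open('data.pkl', 'wb') as f:
    pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)  # shape and dtype preserved

with open('data.pkl', 'rb') as f:
    restored = pickle.load(f)

print(restored.shape)   # (1000, 100, 100), no manual reshaping needed
```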
34,583,385 | 2016-01-04T00:51:00.000 | 4 | 0 | 0 | 1 | 0 | python-2.7,google-app-engine,google-cloud-datastore | 0 | 34,583,572 | 0 | 1 | 0 | true | 1 | 0 | Create your "file" in memory (use e.g io.BytesIO) and then use the getvalue method of the in-memory "file" to get the blob of bytes for the datastore. Do note that a datastore entity is limited to a megabyte or so, thus it's quite possible that some SVG file might not fit in that space -- in which case, you should look into Google Cloud Storage. But, that's a different issue. | 1 | 3 | 0 | 0 | i have a doubt, i need to create some svg files (in a sequence) and upload to data store. I know how to create the svg, but it save to filesystem, and i have understood that GAE cannot use it.
So, I don't know how to create it and put it in the datastore. | Create SVG and save it to datastore(GAE + Python) | 0 | 1.2 | 1 | 0 | 0 | 260 |
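A minimal sketch of the in-memory approach from the accepted answer; write_svg stands in for whatever code currently writes the SVG to a file on disk.

```python
import io

def svg_as_blob(write_svg):
    """write_svg is a hypothetical callable that writes SVG markup to a
    file-like object; an in-memory buffer replaces the filesystem file."""
    buf = io.BytesIO()
    write_svg(buf)
    return buf.getvalue()   # bytes, ready for e.g. an ndb.BlobProperty field
```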
34,592,010 | 2016-01-04T13:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,image,pdf,pdf-generation | 0 | 34,592,133 | 0 | 1 | 0 | true | 0 | 0 | I really cannot see how this is significantly slower.
The local disks bandwidth should be much larger than the internet bandwidth.
So the extra local save/load times should be extremely low overhead.
If you really want to speed stuff up, check if the file already exists locally before downloading.
unless of course if you are doing this download manually? | 1 | 0 | 0 | 0 | Is there a python pdf generator that can use an image directly from a given url? Right now I'm using ReportLab and I have to download and save the image to a file and then using filename I can add it to the PDF. Which is significatly slower I imagine, in comparison to directly downloading the image, storing it in memory somehow and write to the PDF. | Python PDF generation - image from internet | 0 | 1.2 | 1 | 0 | 0 | 124 |
34,593,129 | 2016-01-04T14:15:00.000 | 0 | 0 | 1 | 0 | 0 | pycharm,ipython-notebook | 0 | 34,593,227 | 0 | 1 | 0 | false | 0 | 0 | Simplest solution - print somewhere __doc__ or use LPM when clicking on method you want to check to go directly to the file, where method you need is declared. Then you should have docstring of it. | 1 | 0 | 0 | 1 | I am developing ipython notebook in pycharm. But I don't know how to check the python doc of the library I use in the notebook with IDE support (like mouse over the code and python api doc display automatically). Does anyone know that ? Thanks | how to see the python api doc in pycharm ipython notebook | 0 | 0 | 1 | 0 | 0 | 58 |
34,597,732 | 2016-01-04T18:45:00.000 | 0 | 0 | 0 | 0 | 0 | python,numpy,curve-fitting,algebra | 0 | 38,167,204 | 0 | 1 | 0 | false | 0 | 0 | Numpy has functions for multi-variable polynomial evaluation in the polynomial package -- polyval2d, polyval3d -- the problem is getting the coefficients. For fitting, you need the polyvander2d, polyvander3d functions that create the design matrices for the least squares fit. The multi-variable polynomial coefficients thus determined can then be reshaped and used in the corresponding evaluation functions. See the documentation for those functions for more details. | 1 | 0 | 1 | 0 | Given some coordinates in 3D (x-, y- and z-axes), what I would like to do is to get a polynomial (fifth order). I know how to do it in 2D (for example just in x- and y-direction) via numpy. So my question is: Is it possible to do it also with the third (z-) axes?
Sorry if I missed a question somewhere.
Thank you. | Create 3D- polynomial via numpy etc. from given coordinates | 1 | 0 | 1 | 0 | 0 | 260 |
34,629,913 | 2016-01-06T09:43:00.000 | 3 | 0 | 1 | 0 | 0 | ipython,spyder | 0 | 34,643,480 | 0 | 2 | 0 | false | 0 | 0 | (Spyder dev here) There is no way to start an IPython console inside Spyder with a different profile. We use the default profile to create all our consoles. | 2 | 1 | 0 | 0 | I use two installations of Spyder, one using my default python 2.7 and the other running in a python 3.4 virtualenv. However, the history of the IPython console is shared between the two. The cleanest way to have separate histories would be to define a new IPython profile for the python 3.4 installation. My question is: how to convince Spyder to run IPython with a non-default profile? I could not find any way to supply command line options. | Non-default IPython profile in Spyder console | 0 | 0.291313 | 1 | 0 | 0 | 397 |
34,629,913 | 2016-01-06T09:43:00.000 | 0 | 0 | 1 | 0 | 0 | ipython,spyder | 0 | 34,657,322 | 0 | 2 | 0 | true | 0 | 0 | As mentioned by Carlos' answer, there is no way to start an IPython console inside Spyder with a different profile. A workaround is to duplicate the ~/.ipython directory (I named mine ~/.ipython3) and set the environment variable IPYTHONDIR to the new location before running the python 3 version of Spyder. It will then use the profile_default in the new directory. | 2 | 1 | 0 | 0 | I use two installations of Spyder, one using my default python 2.7 and the other running in a python 3.4 virtualenv. However, the history of the IPython console is shared between the two. The cleanest way to have separate histories would be to define a new IPython profile for the python 3.4 installation. My question is: how to convince Spyder to run IPython with a non-default profile? I could not find any way to supply command line options. | Non-default IPython profile in Spyder console | 0 | 1.2 | 1 | 0 | 0 | 397 |
34,649,751 | 2016-01-07T07:41:00.000 | 3 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 0 | 34,653,721 | 0 | 2 | 0 | false | 0 | 0 | From the Sci-Kit Documentation
apply(X) Apply trees in the ensemble to X, return leaf indices
This function will take input data X, and each data point (x) in it will be applied to each non-linear classifier tree. After application, data point x will have associated with it the leaf it ends up at for each decision tree. This leaf will have its associated classes (1 if binary).
apply(X) returns the above information, which is of the form [n_samples, n_estimators, n_classes].
Thus, the apply(X) function doesn't really have much to do with the Gradient Boosted Decision Tree + Logistic Regression (GBDT+LR) classification and feature transform methods. It is a function for the application of data to an existing classification model.
I'm sorry if I have misunderstood you in any way, though a few grammar/syntax errors in your question made it harder to decipher. | 1 | 0 | 1 | 0 | In scikit-learn new version ,there is a new function called apply() in Gradient boosting. I'm really confused about it .
Is it like the GBDT + LR method that Facebook has used?
If so, how can we make it work like GBDT + LR? | What the function apply() in scikit-learn can do? | 0 | 0.291313 | 1 | 0 | 0 | 1,064 |
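For the GBDT + LR idea the question asks about, apply() supplies the leaf indices that become the new features; a hedged sketch using synthetic data and arbitrary hyper-parameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=500, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
leaves = gbdt.apply(X)[:, :, 0]          # (n_samples, n_estimators) leaf indices

encoder = OneHotEncoder()                # one-hot encode the leaf ids per tree
lr = LogisticRegression().fit(encoder.fit_transform(leaves), y)

print(lr.score(encoder.transform(leaves), y))
```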
34,661,669 | 2016-01-07T17:38:00.000 | 2 | 1 | 1 | 0 | 0 | python,pycharm,perforce,pyscripter | 1 | 34,661,837 | 0 | 1 | 0 | false | 0 | 0 | Go to
File --> Open
in Pycharm and select your Scripts(folder) and open it. Then the Pycharm will treat it as a project and you will be able to ctrl + click a function. | 1 | 1 | 0 | 0 | Ok, so I'm looking to switch to PyCharm from PyScripter for OS independent development. I also wanted to mention that I'm using Perforce for version control.
So what I currently do is double click a .py for editing in Perforce, and PyScripter opens up and I edit to my heart's desire. I can click on an imported function, and it'll open up the corresponding .py file and bring me right to the function. Awesome.
So I have yet to be able to achieve that on PyCharm. I'm using the community version which should be just fine for what I want, which is just an editor with some python checking & built in console.
When I set the default .py program to use in Perforce to PyCharm, I click on the .py and PyCharm fires up. Good so far. But my problem arises when I try to "ctrl + click" a function or method. I get the "Cannot find declaration to go to." I import the associated class & file.
(Just an example, not actual code). So in Transportation.py I have "import Cars", which is a .py. I do Cars.NumberOfDoors() and I get the above error. My folder structure is:
Scripts (folder)
Population.py (General support script)
Citybudget.py (General support script)
MassTransit (folder)
Transportation.py
Cars.py
So the question boils down to: how do I properly set up the root to be the Scripts folder when I click on a file from Perforce? How do I set it up so that it recognizes where it is in the folder structure? So if I'm in MassTransit, it'll set the root as the Scripts folder, and the same if I'm accessing the general support scripts like Population.py? | Directory issues within Pycharm (free version) & Perforce | 0 | 0.379949 | 1 | 0 | 0 | 207 |
34,668,997 | 2016-01-08T03:24:00.000 | 0 | 0 | 1 | 0 | 1 | python,virtualenv | 0 | 34,669,093 | 0 | 1 | 0 | false | 0 | 0 | Your default environment variables may be wrong in path, since the default path must point to the python globally.
Try this:
Start menu > Run > 'sysdm.cpl' > Enter
Search for the tab 'Advanced'
Finally 'Environment Variables'
Edit the system variable 'Path'
Carefully search for the Python path and change it to the global paths instead.
c:\python27\Lib\site-packages\PyQT4 (usually)
c:\python27
c:\python27\scripts
Hope I could help. | 1 | 0 | 0 | 0 | I have activated a python virtualenv (dev) for one of my projects. However, don't know what happened, it looks like it has changed the path permanently. I cannot access my global packages. When I print sys.path it shows me paths related to virtualenv (dev) which is no more activated. When I run pip list, it shows me packages installed for virtualenv (dev) and not the ones install globally (c:\python27\Lib\sitepackages). Any idea what must have gone wrong? And how do I reset sys.path?
I checked out RegistryKey (HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.7\PythonPath) and Environment variables, everything looks okay. Any light on this issue would be helpful... | Python virtualenv has changed sys.path permanently | 0 | 0 | 1 | 0 | 0 | 156 |
34,692,370 | 2016-01-09T10:31:00.000 | 2 | 0 | 1 | 0 | 1 | python,django,azure,pip,azure-web-app-service | 0 | 41,843,617 | 0 | 5 | 0 | false | 1 | 0 | You won't be able to upgrade the pip of your Django webapp because you will not have access to system files.
Instead you can upgrade pip of your virtualenv, which you can do by adding a line in deploy.cmd file before install requirements.txt command.
env\scripts\python -m pip install --upgrade pip
Remember not to upgrade pip with pip (env/scripts/pip) else it will uninstall global pip. | 2 | 9 | 0 | 0 | I'm pretty new to Azure and I'm trying to get a Django WebApp up and running. I uploaded the files using FTP, But Azure doesn't run my requirements.txt.
So I searched for a bit and found out that you can install the requirements.txt with pip.
Back in Azure, PIP doesn't seem to work. Neither in the Console, The KUDU CMD or the KUDU powershell. Python does work.
When I try to install PIP via Python, it first says that an older version is already installed. When Python tries to upgrade PIP, it doesn't have access to the folder that it needs to edit.
I was wondering how I could use PIP in azure.
(If you know a separate way to install the requirements.txt please tell, because this was how I originally came to this point.) | Using PIP in a Azure WebApp | 0 | 0.07983 | 1 | 0 | 0 | 7,236 |
34,692,370 | 2016-01-09T10:31:00.000 | 2 | 0 | 1 | 0 | 1 | python,django,azure,pip,azure-web-app-service | 0 | 38,240,151 | 0 | 5 | 0 | false | 1 | 0 | Have you tried upgrading pip with easy_install? The following worked for me in Azure kudu console:
python -m easy_install --upgrade --user pip | 2 | 9 | 0 | 0 | I'm pretty new to Azure and I'm trying to get a Django WebApp up and running. I uploaded the files using FTP, But Azure doesn't run my requirements.txt.
So I searched for a bit and found out that you can install the requirements.txt with pip.
Back in Azure, PIP doesn't seem to work. Neither in the Console, The KUDU CMD or the KUDU powershell. Python does work.
When I try to install PIP via Python, it first says that an older version is already installed. When Python tries to upgrade PIP, it doesn't have access to the folder that it needs to edit.
I was wondering how I could use PIP in azure.
(If you know a separate way to install the requirements.txt please tell, because this was how I originally came to this point.) | Using PIP in a Azure WebApp | 0 | 0.07983 | 1 | 0 | 0 | 7,236 |
34,700,577 | 2016-01-10T00:22:00.000 | 1 | 0 | 1 | 0 | 0 | python,lxml,pypy,pyquery | 0 | 36,188,448 | 0 | 2 | 0 | true | 0 | 0 | Pypy 5.0 and lxml 3.6 are designed to work well with each other. | 1 | 1 | 0 | 0 | I'm trying to use pyquery with pypy but it depends on lxml2, which won't build under pypy. I know there's a lxml2 build that is meant to be used with pypy but I don't know how to make pyquery use that instead of the usual one. | How can I build pyquery for pypy? | 0 | 1.2 | 1 | 0 | 0 | 89 |
34,710,059 | 2016-01-10T19:50:00.000 | 0 | 1 | 0 | 0 | 0 | java,android,python | 0 | 34,710,122 | 0 | 1 | 0 | false | 1 | 1 | Instead of running it as one app, what about running the python script as separate from the original script? I believe it would bee possible, as android is in fact a UNIX based OS. Any readers could give their input on this idea an if it would work. | 1 | 0 | 0 | 0 | I want to develop an app to track people's Whatsapp last seen and other stuff, and found out that there are APIs out there to deal with it, but the thing is they are writen in python and are normally run in Linux I think
I have Java and Android knowledge but not Python, and wonder if there's a way to develop most of the app in Java and get the info I want via calls using these Python APIs, but without having to install a Python interpreter or similar on the device, so the final user just has to download and run the Android app as he would do with any other.
I want to know if it would be very hard for someone as inexperienced as me (this is the 2nd and final year of my degree), for it's what I have in mind for the final project; thanks in advance | how to write an Android app in Java which needs to use a Python library? | 0 | 0 | 1 | 0 | 0 | 49 |
34,726,376 | 2016-01-11T16:23:00.000 | 0 | 0 | 1 | 0 | 0 | python-2.7 | 0 | 34,726,494 | 0 | 4 | 0 | false | 0 | 0 | map(int, "1 2 3 4 5".split())
This will take your string and convert to a list of ints.
Split defaults to splitting on a space, so you don't need an argument.
For raw_input(), you can do:
map(int, raw_input().split()) | 2 | 1 | 0 | 0 | I want to insert 5 integers by simply typing 3 4 6 8 9 and hitting enter. I know how to insert strings in a list by using list=raw_input().split(" ",99), but how can I insert integers using space? | how to insert integers in list/array separated by space in python | 0 | 0 | 1 | 0 | 0 | 6,406 |
34,726,376 | 2016-01-11T16:23:00.000 | 0 | 0 | 1 | 0 | 0 | python-2.7 | 0 | 34,726,835 | 0 | 4 | 0 | false | 0 | 0 | The above answer is perfect if you are looking to parse strings into a list.
Otherwise, you can parse them into a list of integers like this (catching ValueError in case a token is not a valid number):
integers = '22 33 11'
integers_list = []
try:
    integers_list = [int(i) for i in integers.split(' ')]
except ValueError:
    print "Error parsing integer"
print integers_list | 2 | 1 | 0 | 0 | I want to insert 5 integers by simply typing 3 4 6 8 9 and hitting enter. I know how to insert strings in a list by using list=raw_input().split(" ",99), but how can I insert integers using space? | how to insert integers in list/array separated by space in python | 0 | 0 | 1 | 0 | 0 | 6,406 |
34,729,149 | 2016-01-11T19:07:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,nginx,permissions,file-permissions | 0 | 36,338,260 | 0 | 1 | 0 | true | 1 | 0 | Couldn't find out who exactly it is created by; however, the permissions depend on the user (root or non-root).
This means that if you run the commands (for example: python manage.py runserver) with sudo or as root, the folder gets root permissions and can't be edited by a non-root user. | 1 | 1 | 0 | 0 | I set up django using nginx and gunicorn. I am looking at the permissions in my project folder and I see that the permission for the media folder is set to root (all others are set to debian):
-rw-r--r-- 1 root root 55K Dec 2 13:33 media
I am executing all app relevant commands like makemigrations, migrate, collectstatic, from debian, therefore everything else is debian.
But the media folder doesn't exist when I start my app. It will be created once I upload stuff.
But who creates it and how do I change the permissions to debian? | who creates the media folder in django and how to change permission rights? | 0 | 1.2 | 1 | 0 | 0 | 343 |
34,734,714 | 2016-01-12T02:41:00.000 | 12 | 0 | 1 | 0 | 0 | ipython,jupyter,jupyter-notebook | 0 | 59,724,231 | 0 | 4 | 0 | false | 0 | 0 | Maybe it is easier to just use unix to unzip the data.
Steps:
Transform the folder into a .zip file in your computer.
Upload the .zip file to jupyter home.
In jupyter notebook run
! unzip ~/yourfolder.zip -d ~/
where
! tells the jupyter notebook that you are going to give code directly to unix, not python code
unzip is the unzip command
~/yourfolder.zip tells the command where your .zip folder is (at ~/ if you uploaded to the home folder)
-d ~/ tells the command where you want to put the unzipped folder (this assumes you want to put it in the home folder, but you can also put it in any other subfolder with -d ~/my_first_level_subfolder or -d ~/my_first_level_subfolder/my_second_level_subfolder, etc.)
If you want to delete the original .zip folder, delete it manually at jupyter home or use
!rm ~/yourfolder.zip
Hope it helps somebody | 1 | 37 | 0 | 0 | Can you upload entire folders in IPython Jupyter? If so, how? I know how to upload individual files, of course, but this can get tedious if there are a large number of files and/or subdirectories. | IPython Jupyter: uploading folder | 0 | 1 | 1 | 0 | 0 | 61,675 |
34,736,964 | 2016-01-12T06:30:00.000 | 0 | 0 | 0 | 0 | 0 | python,web,flask,host | 0 | 34,775,584 | 0 | 1 | 0 | false | 1 | 0 | Enable port forwarding on your router, start flask on the 0.0.0.0 address of your computer, set the forwarded port to be the one started on your laptop. This will now allow your LAN and calls to your ISP provided address to be directed to your laptop.
To clarify, machines on the LAN can reach it without port forwarding, in my experience. | 1 | 0 | 0 | 0 | I'm developing an app using the Flask framework in Python. I want to host it on my PC for a few people to be able to visit it, similar to WAMP's "put online" feature but for Flask instead. I don't want to deploy it to the cloud just yet. How can I do it? | How to host a flask web app on my own pc? | 0 | 0 | 1 | 0 | 0 | 2,566 |
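A minimal sketch of the answer above - bind the Flask development server to 0.0.0.0 so other machines on the LAN (or, with router port forwarding, outside it) can reach the app; the route and port are placeholders:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'hello from my pc'

if __name__ == '__main__':
    # 0.0.0.0 listens on all interfaces; forward port 5000 on the router
    # if people outside your LAN should be able to reach it
    app.run(host='0.0.0.0', port=5000)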
34,737,287 | 2016-01-12T06:54:00.000 | 0 | 0 | 0 | 1 | 0 | python,celery-task | 0 | 34,738,281 | 0 | 2 | 0 | false | 0 | 0 | How can I get which worker is executing which input?
There are 2 options to use multiple workers:
You run each worker separately with separate run commands
You run them in one command using the command-line option -c, i.e. concurrency
With the first method, flower supports it and will show you all the workers, all the tasks (what you call inputs), which worker processed which task, and other information too.
With the second method, flower will show you all the tasks as being processed by a single worker. In this case you can only differentiate by viewing the logs generated by the celery worker, since the logs record which worker thread executed which task. So I think you will be better off using the first option, given your requirements.
Each worker executed how many inputs and its status?
As I mentioned, using the first approach, flower will give you this information.
If any task fails, how can I get the failed input data separately and
re-execute it with an available worker?
Flower provides filters for failed tasks and shows what status each task returned when it exited. Also, you can set how many times celery should retry a failed task. But if a task still fails after the retries, you will have to relaunch it yourself.
How can I get which worker is executing which input?
Each worker executed how many inputs and its status?
If any task fails, how can I get the failed input data separately and re-execute it with an available worker?
Is there any possible ways to customize celery based on worker specific.
We can combine celery worker limitation and flower
I am not using any framework. | Celery worker details | 0 | 0 | 1 | 0 | 0 | 834 |
34,757,084 | 2016-01-13T01:47:00.000 | 0 | 0 | 1 | 0 | 0 | python,linux,scripting,virtual-machine | 0 | 34,761,507 | 0 | 2 | 0 | false | 0 | 0 | For aws use boto.
For GCE use Google API Python Client Library
For OpenStack use the python-openstackclient and import its methods directly.
For VMWare, google it.
For Opsware, abandon all hope as their API is undocumented and has like 12 years of accumulated abandoned methods to dig through and an equally insane datamodel back ending it.
For direct libvirt control there are Python bindings for libvirt. They work very well and closely mimic the C libraries.
I could go on. | 1 | 0 | 0 | 0 | I want to manage virtual machines (any flavor) using Python scripts. Example, create VM, start, stop and be able to access my guest OS's resources.
My host machine runs Windows. I have VirtualBox installed. Guest OS: Kali Linux.
I just came across a software called libvirt. Do any of you think this would help me ?
Any insights on how to do this? Thanks for your help. | Controlling VMs using Python scripts | 0 | 0 | 1 | 0 | 0 | 1,117 |
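If libvirt fits the setup (it manages KVM/QEMU and some other hypervisors, though not VirtualBox on Windows out of the box), a minimal sketch with the libvirt Python bindings could look like this - the domain name 'kali' is an assumption:
import libvirt

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
dom = conn.lookupByName('kali')         # look up the guest by its domain name
if not dom.isActive():
    dom.create()                        # start the VM
print(dom.info())                       # [state, maxMem, memory, nrVirtCpu, cpuTime]
conn.close()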
34,763,600 | 2016-01-13T10:01:00.000 | 1 | 1 | 0 | 0 | 0 | python,node.js,ibm-cloud | 0 | 34,790,983 | 0 | 2 | 0 | false | 1 | 0 | I finally fixed this by adding an entry to the dependencies in the project's package.json, which causes npm install to be called for the linked GitHub repo. It is fairly straightforward, but I found no explanation for it in the Bluemix resources. | 1 | 1 | 0 | 0 | I'd like to run text processing Python scripts after submitting searchForms of my node.js application.
I know how the scripts can be called with child_process and spawn within js, but what should I set up on the app (probably some package.json entries?) so that it will be able to run Python after deploying to Bluemix?
Thanks for any help! | How to invoke python scripts in node.js app on Bluemix? | 0 | 0.099668 | 1 | 0 | 0 | 358 |
34,771,013 | 2016-01-13T15:47:00.000 | 2 | 1 | 0 | 0 | 0 | python,smpp | 0 | 34,810,025 | 0 | 1 | 0 | false | 0 | 0 | Take a look at jasmin sms gateway, it's pythonic and has smpp server implementation. | 1 | 1 | 0 | 1 | Does anyone know a tool to implement a Python SMPP server and some tips on how to proceed?
I found Pythomnic3k framework, but did not find material needed for me to use it as SMPP server ... | Implementing an SMPP Server in Python | 0 | 0.379949 | 1 | 0 | 0 | 491 |
34,786,665 | 2016-01-14T10:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,flask,flask-admin | 0 | 34,786,896 | 0 | 1 | 0 | false | 1 | 0 | You have to follow this steps
Javascript
Bind an on-change event to your Department select.
If the select changes you get the value selected.
When you get the value, you have to send it to the server through an AJAX request.
Flask
Implement a method that reads the value and loads the associated Subdepartments.
Send a JSON response to the view with your Subdepartments
Javascript
In your AJAX request, implement a success function. By default, its first parameter is the data received from the server. Loop over the items and append them to the desired select.
In the create form, I want that when a Department is selected, the Subdepartment select automatically loads all the corresponding subdepartments.
In the database, I have a "department" table and a "sub_department" table that has a foreign key "department_id".
Any clues on how I could achieve that?
Thanks in advance. | Load a select list when selecting another select | 0 | 0 | 1 | 1 | 0 | 44 |
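A minimal sketch of the server-side part of the steps above - a Flask endpoint the AJAX call can hit; the in-memory dict stands in for a real query on the sub_department table and all names are placeholders:
from flask import Flask, jsonify, request

app = Flask(__name__)

# placeholder data; in practice query sub_department by its department_id foreign key
SUBDEPARTMENTS = {1: ['Payroll', 'Recruiting'], 2: ['Backend', 'Frontend']}

@app.route('/subdepartments')
def subdepartments():
    department_id = request.args.get('department_id', type=int)
    return jsonify(subdepartments=SUBDEPARTMENTS.get(department_id, []))
The on-change handler on the Department select then requests /subdepartments?department_id=... and rebuilds the Subdepartment select's options from the returned JSON.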
34,801,342 | 2016-01-14T22:56:00.000 | 0 | 0 | 0 | 0 | 0 | python,tensorflow | 0 | 53,392,066 | 0 | 8 | 0 | false | 0 | 0 | For rotating an image or a batch of images counter-clockwise by multiples of 90 degrees, you can use tf.image.rot90(image,k=1,name=None).
k denotes the number of 90 degrees rotations you want to make.
In case of a single image, image is a 3-D Tensor of shape [height, width, channels] and in case of a batch of images, image is a 4-D Tensor of shape [batch, height, width, channels] | 1 | 15 | 1 | 0 | In tensorflow, I would like to rotate an image from a random angle, for data augmentation. But I don't find this transformation in the tf.image module. | tensorflow: how to rotate an image for data augmentation? | 0 | 0 | 1 | 0 | 0 | 28,208 |
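A minimal sketch of using tf.image.rot90 for augmentation by a random multiple of 90 degrees (arbitrary angles need a different op); the zeros tensor is only a stand-in for a real [height, width, channels] image:
import random
import tensorflow as tf

image = tf.zeros([32, 32, 3])   # stand-in for a real image tensor
k = random.randint(0, 3)        # number of counter-clockwise 90-degree turns
rotated = tf.image.rot90(image, k=k)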
34,825,214 | 2016-01-16T08:56:00.000 | 1 | 0 | 0 | 1 | 1 | python,freebsd,ports,unison | 1 | 36,164,028 | 0 | 1 | 0 | false | 0 | 0 | I think the message is pretty clear: unison-fsmonitor can't be run on freebsd10 because it's not supported, so you can't use Unison with the -repeat option.
Since it's just written in Python, though, I don't see why it shouldn't be supported. Maybe message the developer. | 1 | 1 | 0 | 0 | After installing unison from /usr/ports/net/unison with X11 disabled via make config, running the command unison -repeat watch /dir/mirror/1 /dir/mirror/2
Yields the message:
Fatal error: No file monitoring helper program found
From here I decided to try using pkg to install unison-nox11 and this yields the same error message.
I've also tried copying the fsmonitor.py file from unison-2.48.3.tar.gz to /usr/bin/unison-fsmonitor and I got the following error:
Fatal error: Unexpected response 'Usage: unison-fsmonitor [options] root [path] [path]...' from the filesystem watcher (expected VERSION)
Running the command unison-fsmonitor version shows the message
unsupported platform freebsd10
Anyone have any ideas on how to fix this? | Using Unison "-repeat watch" in FreeBSD (10.2) after installing from ports yields error | 0 | 0.197375 | 1 | 0 | 0 | 847 |
34,826,533 | 2016-01-16T11:41:00.000 | 0 | 1 | 1 | 0 | 0 | python,performance-testing,trace,python-asyncio | 0 | 34,839,535 | 0 | 2 | 0 | false | 0 | 0 | If you only want to measure performance of "your" code, you could used approach similar to unit testing - just monkey-patch (even patch + Mock) the nearest IO coroutine with Future of expected result.
The main drawback is that e.g. http client is fairly simple, but let's say momoko (pg client)... it could be hard to do without knowing its internals, it won't include library overhead.
The pro are just like in ordinary testing:
it's easy to implement,
it measures something ;), mostly one's implementation without overhead of third party libraries,
performance tests are isolated, easy to re-run,
it's easy to run with many payloads | 1 | 16 | 0 | 0 | I can't use normal tools and techniques to measure the performance of a coroutine because the time it takes at await should not be taken into consideration (or it should just consider the overhead of reading from the awaitable but not the IO latency).
So how do I measure the time a coroutine takes? How do I compare 2 implementations and find the more efficient one? What tools do I use? | How to measure Python's asyncio code performance? | 0 | 0 | 1 | 0 | 0 | 5,793
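A minimal sketch of the monkey-patch idea from the answer above - swap the IO coroutine for one that resolves immediately, so the timer only sees your own code (the names and the Python 3.7+ asyncio.run are assumptions):
import asyncio
import time

async def fetch(url):            # the real IO coroutine (simulated network latency)
    await asyncio.sleep(1)
    return 'payload'

async def fake_fetch(url):       # resolves immediately: no IO latency
    return 'payload'

async def handler(fetcher):      # the code whose own cost we want to measure
    data = await fetcher('http://example.com')
    return data.upper()

async def timed():
    start = time.perf_counter()
    await handler(fake_fetch)    # IO swapped out, so only "our" work is timed
    return time.perf_counter() - start

print(asyncio.run(timed()))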
34,835,172 | 2016-01-17T04:40:00.000 | 2 | 0 | 1 | 0 | 0 | python,oop,simulation | 0 | 34,835,543 | 0 | 2 | 1 | true | 0 | 0 | If you're running a simulation, it is certainly reasonable design to have a single "simulation engine", with various components. As long as you don't implement these as application-wide singletons, you will be fine. This is actually a great example of what the advice to avoid singletons is actually all about! Not having these as singletons will allow, for example, running several simulations at once within the same process.
One of the common designs for a system such as yours is an event-based design. With such a design, you'll have a single event manager component for the simulation. It will support registering functions to be called given certain conditions, e.g. a given amount of simulation time has passed. You can then register your update_age() events to be fired off at intervals for each of the Actors in your simulation.
If you go this route, remember that you will need to be able to remove registered event handlers for Actors that are no longer relevant, e.g. if they die in the simulation. This can be done by creating a unique ID for each registered event, which can be used to remove it later. | 1 | 2 | 0 | 0 | I'm making a program that simulates governments and families in the medieval ages. People (represented by objects of the class Actor) are born, grow old, have kids, and die.
This means I need to track quite a few objects, and figure out some way to e.g. call update_age() for every tracked person every year/month/week.
This brings up several problems. I need to find some way to iterate over the set of all tracked Actors. I also need to be able to dynamically add to that set, to account for births.
My first idea was to make an object Timekeeper with a method that calls update_age() for every object in the set of tracked objects. Then, in the main program loop, I would call the Timekeeper's method. However, this makes Timekeeper a singleton, a concept which is not always a good idea. Since I am only a novice programmer, I'd like to learn good design patterns now, rather than learn them wrong.
It still leaves me with the problem of how to get a set/list/dictionary of all the tracked people to update. | How to track and update a great many values efficiently? | 0 | 1.2 | 1 | 0 | 0 | 57 |
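A minimal sketch of the event-manager idea from the answer above - handlers are registered with an interval, fired from one loop, and removable via the ID returned at registration (all names are illustrative):
import itertools

class EventManager(object):
    def __init__(self):
        self._ids = itertools.count()
        self._handlers = {}                    # event_id -> (interval, callback)

    def register(self, interval, callback):
        event_id = next(self._ids)
        self._handlers[event_id] = (interval, callback)
        return event_id                        # keep this to unregister later

    def unregister(self, event_id):            # e.g. when an Actor dies
        self._handlers.pop(event_id, None)

    def tick(self, current_time):
        for interval, callback in list(self._handlers.values()):
            if current_time % interval == 0:
                callback()

class Actor(object):
    def __init__(self, name):
        self.name, self.age = name, 0
    def update_age(self):
        self.age += 1

sim = EventManager()
alice = Actor('Alice')
event_id = sim.register(interval=12, callback=alice.update_age)   # once per "year"
for month in range(1, 25):
    sim.tick(month)
print(alice.age)                               # -> 2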
34,845,704 | 2016-01-18T00:51:00.000 | 0 | 0 | 1 | 1 | 0 | python,pandas,module | 1 | 34,845,928 | 0 | 1 | 0 | false | 0 | 0 | May be you are using different Python versions in IDLE and the command line, if this is the case, you should install Pandas for the Python version that you are using in IDLE | 1 | 0 | 0 | 0 | This is a beginner question. I am using "import pandas as pd" in IDLE,
but got the following error message "ImportError: No module named 'pandas",
I don't know how to install pandas for IDLE. I ran the same code in the Mac command window and it worked. Not sure why it's not working in IDLE.
Thanks for the help! | import pandas using IDLE error | 0 | 0 | 1 | 0 | 0 | 716 |
34,864,038 | 2016-01-18T21:03:00.000 | 1 | 0 | 1 | 0 | 0 | python,django,python-2.7,python-3.x | 0 | 34,864,106 | 0 | 1 | 0 | true | 0 | 0 | I'd imagine your environment variables are set up to use the python2.7 environment variable for python and the path to the python3.3 pip for that, you either need to adjust those or use the full paths when using the tool as you require. | 1 | 0 | 0 | 0 | I have installation of python 2.7 and 3.3 side by side (C:\Python27 and C:\Python33). I am now trying to install virtualenv.
Python2.7 is my default interpreter. Whenever I open a command prompt and type 'python' it brings up "Python 2.7.10 (default, May 23 2015, 09:40:32) [MSC v.1500 32 bit (Intel)] on win32" for me. But when I am firing "pip install virtualenv", it is installing virtualenv inside python3.3 folder.
I am quite surprised that my active interpreter is python2.7, but virtualenv installation is somehow getting inside python3.3 folder instead of expected python2.7 folder. Can anyone please explain this anomaly and suggest me how to install virtualenv inside python 2.7 ? | Installing virtualenv for Python2.7 | 0 | 1.2 | 1 | 0 | 0 | 1,818 |
34,864,672 | 2016-01-18T21:48:00.000 | -1 | 0 | 1 | 0 | 0 | python,ipython,ipython-notebook,jupyter,jupyter-notebook | 0 | 34,864,812 | 0 | 2 | 0 | true | 0 | 0 | The best thing to do for repeated code you want all your notebook to access is to add it to the profile directory. The notebook will load all scripts from that directory in order, so it's recommended you name files 01-<projname>.py if you want them to load in a certain order. All files in that directory will be loaded via exec which executes the file as though it were in your context, it's not a module load so globals will squash each other and all of the model context will be in your local namespace afterwards (similar to an import * effect).
To find your profile directory, the docs recommend you use ipython locate profile <my_profile_name>. This will tell you where you can place the script. | 1 | 3 | 0 | 0 | I'd like to write a program using Python in Jupyter. To make things easy, it'd be better to write a few subroutines (functions) and probably some user-defined classes first, before writing the main script. How do I arrange them in Jupyter? Just put each sub function/class on a new line, write them sequentially, and then write the main script below to call the subroutines? I just wonder if this is the right way to use Jupyter.
I am new to Jupyter and Python, but in Matlab, for instance, I would create a folder which contains all sub functions to be used. And I will also write a script inside the same folder to call these functions to accomplish the task. However, how do I achieve this in Python using Jupyter? | In Jupyter notebook, how do I arrange subroutines in order to write a project efficiently? | 0 | 1.2 | 1 | 0 | 0 | 4,032 |
34,872,610 | 2016-01-19T09:09:00.000 | 2 | 0 | 1 | 0 | 0 | python,django,ubuntu | 0 | 34,872,786 | 0 | 2 | 0 | false | 1 | 0 | A lot of application still require Python 2.7 and are not yet compatible with Python3. So it really depends on what you do on the server (Only running Django?).
One solution would be to use virtualenv so that you do not depend on which python version is installed in your server, and you totally control all the packages.
Look for django + virtualenv, you will find a lot of tutorials. | 1 | 0 | 0 | 0 | I am on ubuntu 15.10. I notice that i have many python versions installed. Is it safe now to remove 2.7 completely? And how to make 3.5 the default one? I ask this because i think it messes up my django installation because django gets intsalled in share directory. | can python 2.7 be removed completely now? | 0 | 0.197375 | 1 | 0 | 0 | 57 |
34,881,105 | 2016-01-19T15:49:00.000 | 5 | 0 | 1 | 0 | 0 | python-3.x,anaconda,python-idle,conda | 0 | 47,986,798 | 0 | 3 | 0 | false | 0 | 0 | Type idle3 instead of idle from your conda env. | 1 | 6 | 0 | 0 | For running python2 all I do is activate the required conda environment and just type idle. It automatically opens IDLE for python 2.7. But I can't figure out how to do this for Python 3. I have python 3.5 installed in my environment.
I used conda create -n py35 anaconda for installing Python 3.5. | How to run IDLE for python 3 in a conda environment? | 1 | 0.321513 | 1 | 0 | 0 | 14,423
34,898,422 | 2016-01-20T11:10:00.000 | 0 | 0 | 1 | 0 | 1 | python,pycharm,rdkit | 0 | 68,302,546 | 0 | 2 | 0 | false | 0 | 0 | Another option is to select the existing virtual environment when you create a new project in PyCharm. Once you go through the steps that Anna laid out above, the "Previously configured interpreter" section of the "Create Project" screen should show the ~/anaconda/envs/my-rdkit-env/bin/python as an option. | 1 | 5 | 0 | 0 | So, I am trying to add RDKit to my project in PyCharm. I have found that if you are using interpreter /usr/bin/python2.7 PyCharm will try to install stuff using the pip. While, RDKit requires conda. I have tried to change the interpreter to conda, but RDKit is either not on the list or it can't open the URL with the repo. Does anyone know how to fix that?
By the way, is it possible while keeping the interpreter /usr/bin/python2.7 to make it use anything else (not pip), while installing stuff? | How to add RDKit to project in PyCharm? | 1 | 0 | 1 | 0 | 0 | 3,938 |
34,905,744 | 2016-01-20T16:46:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,postgresql,amazon-ec2 | 0 | 34,922,249 | 0 | 1 | 0 | false | 1 | 0 | Ok, thanks for your answers, I used :
find . -name "postgresql.conf" to find the configuration find, which was located into the "/etc/postgresql/9.3/main" folder. There is also pg_lsclusters if you want to show the directory data.
Then I edited that file putting the new path, restarted postgres and imported my old DB. | 1 | 1 | 0 | 0 | I have a Django website running on an Amazon EC2 instance. I want to add an EBS. In order to do that, I need to change the location of my PGDATA directory if I understand well. The new PGDATA path should be something like /vol/mydir/blabla.
I absolutely need to keep the data safe (some kind of dump could be useful).
Do you have any clues on how I can do that ? I can't seem to find anything relevant on the internet.
Thanks | Django PostgreSQL : migrating database to a different directory | 0 | 0 | 1 | 1 | 0 | 62 |
34,912,784 | 2016-01-20T23:40:00.000 | 1 | 0 | 1 | 1 | 0 | python,installation,pip,upgrade,six | 1 | 34,912,892 | 1 | 2 | 0 | false | 0 | 0 | I, too, have had some issues with installing modules, and I sometimes find that it helps just to start over. In this case, it looks like you already have some of the 'six' module, but it isn't properly set up, so if sudo pip uninstall six yields the same thing, go into your directory and manually delete anything related to six, and then try installing it. You may have to do some digging into where your modules are stored (or have been stored, as pip can find them in different locations). | 1 | 5 | 0 | 0 | When I run sudo pip install --upgrade six I run into the issue below:
2016-01-20 18:29:48|optim $ sudo pip install --upgrade six
Collecting six
Downloading six-1.10.0-py2.py3-none-any.whl
Installing collected packages: six
Found existing installation: six 1.4.1
Detected a distutils installed project ('six') which we cannot uninstall. The metadata provided by distutils does not contain a list of files which have been installed, so pip does not know which files to uninstall.
I have Python 2.7, and I'm on Mac OS X 10.11.1.
How can I make this upgrade successful?
(There are other kind of related posts, but they do not actually have a solution to this same error.)
EDIT:
I am told I can remove six manually by removing things from site-packages. These are the files in site-packages that begin with six:
six-1.10.0.dist-info, six-1.9.0.dist-info, six.py, six.py.
Are they all correct/safe to remove?
EDIT2:
I decided to remove those from site-packages, but it turns out the existing six that cannot be installed is actually in
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python.
There I see the files:
six-1.4.1-py2.7.egg-info, six.py, six.pyc
but doing rm on them (with sudo, even) gives Operation not permitted.
So now the question is, how can I remove those files, given where they are? | Python - Cannot upgrade six, issue uninstalling previous version | 0 | 0.099668 | 1 | 0 | 0 | 6,127 |
34,914,142 | 2016-01-21T02:03:00.000 | 0 | 0 | 0 | 0 | 0 | python,tkinter | 0 | 34,914,501 | 0 | 1 | 0 | true | 0 | 1 | I do not see the associated code that actually displays the current position of the snake on the screen and remove it after movement, but this is where you can change the size if you make the length of the snake variable and have it drawn and removed in an iterate fashion. When food is eaten, you can simply increase the size of the snake length variable and pause the erasing of the snake movement as it proceeds along its vector until the desired growth has occurred, at which time removal can proceed at the new length rate. Please clarify the part of the code that actually renders the snakes position. | 1 | 1 | 0 | 0 | I'm currently creating a Snake game on Python using the TKinter library.
So right now, I've implemented the movements, the food system, and the score system. I still need some help on how I can make the snake grow when it eats the food. | Need help for a Python Snake game | 0 | 1.2 | 1 | 0 | 0 | 1,681 |
34,939,193 | 2016-01-22T04:46:00.000 | 0 | 0 | 1 | 0 | 0 | python,ipython | 0 | 34,939,512 | 0 | 4 | 0 | false | 0 | 0 | By default ipython and jupyter set the top of the working tree to the current directory at the time of launching the notebook server. You can change this by setting c.NotebookApp.notebook_dir in either the .ipython/profile_XXX/ipython_notebook_config.py or the .jupyter/jupyter_notebook_config.py (*nix/Mac - not sure where these are located on Windows).
As long as the top of working tree includes the subdirectory with your scripts then you can just use the cmd/explorer to move the .ipynb to your scripts directory and then browse http://localhost:XXXX/tree to open the ipython notebook. | 1 | 0 | 0 | 1 | I am currently working on a machine learning project, and I would like to save my IPython files with the rest of my scripts. However, I have been unable to find any information on how to change the path that IPython files are saved to. "ipython locate" only gives me the location they are saved to, and does not appear to give me a way to change it, and the iPython editor does not have a file selector that I can use to change the save path. I am using Windows 10. Any help would be appreciated. | Save IPython file to a particular directory | 0 | 0 | 1 | 0 | 0 | 1,705 |
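For reference, a sketch of the config line mentioned in the answer above; the c object is supplied by Jupyter's/IPython's config machinery and the exact file name depends on your version:
# in ~/.jupyter/jupyter_notebook_config.py (or .ipython/profile_xxx/ipython_notebook_config.py)
c.NotebookApp.notebook_dir = '/path/to/top/of/working/tree'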
34,949,364 | 2016-01-22T14:44:00.000 | 1 | 0 | 0 | 1 | 0 | python,tornado,upgrade | 0 | 34,960,704 | 0 | 1 | 0 | false | 0 | 0 | Easy way, do it with nginx.
Start a new tornado server running the latest code.
Redirect all new connections to the new tornado server.(Change nginx configure file and reload with nginx -s reload)
Tell the old tornado server to shut itself down once all its connections are closed.
Hard way
If you want to change your server on the fly, maybe you could find a way by reading nginx's source code, figure out how nginx -s reload works, but I think you need to do lots of work. | 1 | 0 | 0 | 0 | I have an HTTP server created by the Tornado framework. I need to update/reload this server without any connection lost and shutdown.
I have no idea how to do it.
Could you get me any clue? | Graceful reload of python tornado server | 0 | 0.197375 | 1 | 0 | 0 | 583 |
34,970,818 | 2016-01-24T00:34:00.000 | 0 | 0 | 0 | 0 | 0 | python,neural-network,time-series,keras,recurrent-neural-network | 0 | 45,060,104 | 0 | 1 | 0 | true | 0 | 0 | I think this has more to do with your particular dataset than Bi-LSTMs in general.
You're confusing splitting a dataset for training/testing vs. splitting a sequence in a particular sample. It seems like you have many different subjects, which constitute a different sample. For a standard training/testing split, you would split your dataset between subjects, as you suggested in the last paragraph.
For any sort of RNN application, you do NOT split along your temporal sequence; you input your entire sequence as a single sample to your Bi-LSTM. So the question really becomes whether such a model is well-suited to your problem, which has multiple labels at specific points in the sequence. You can use a sequence-to-sequence variant of the LSTM model to predict which label each time point in the sequence belongs to, but again you would NOT be splitting the sequence into multiple parts. | 1 | 1 | 1 | 0 | When it comes to normal ANNs, or any of the standard machine learning techniques, I understand what the training, testing, and validation sets should be (both conceptually, and the rule-of-thumb ratios). However, for a bidirectional LSTM (BLSTM) net, how to split the data is confusing me.
I am trying to improve prediction on individual subject data that consists of monitored health values. In the simplest case, for each subject, there is one long time series of values (>20k values), and contiguous parts of that time series are labeled from a set of categories, depending on the current health of the subject. For a BLSTM, the net is trained on all of the data going forwards and backwards simultaneously. The problem then is, how does one split a time series for one subject?
I can't just take the last 2,000 values (for example), because they might all fall into a single category.
And I can't chop the time series up randomly, because then both the learning and testing phases would be made of disjointed chunks.
Finally, each of the subjects (as far as I can tell) has slightly different (but similar) characteristics. So, maybe, since I have thousands of subjects, do I train on some, test on some, and validate on others? However, since there are inter-subject differences, how would I set up the tests if I was only considering one subject to start? | Training, testing, and validation sets for bidirectional LSTM (BLSTM) | 1 | 1.2 | 1 | 0 | 0 | 1,028 |
34,979,145 | 2016-01-24T17:42:00.000 | 9 | 0 | 1 | 0 | 1 | python,django,pycharm | 1 | 34,993,725 | 0 | 3 | 0 | true | 1 | 0 | You can clean out old PyCharm interpreters that are no longer associated with a project via Settings -> Project Interpreter, click on the gear in the top right, then click "More". This gives you a listing where you can get rid of old virtualenvs that PyCharm thinks are still around. This will prevent the "(1)", "(2)" part.
You don't want to make the virtualenv into the content root. Your project's code is the content root.
As a suggestion:
Clear out all the registered virtual envs
Make a virtualenv, outside of PyCharm
Create a new project using PyCharm's Django template
You should then have a working example. | 3 | 3 | 0 | 0 | How do I remove all traces of legacy projects from PyCharm?
Background: I upgraded from PyCharm Community Edition to PyCharm Pro Edition today.
Reason was so I could work on Django projects, and in particular, a fledgling legacy project called 'deals'.
I deleted the legacy project folders.
I then opened the Pro Edition and went through the steps of creating a Django project called 'deals' with a python3.4 interpreter in a virtualenv.
It didn't work, I got an error message saying something about a missing file, and in the PyCharm project explorer, all I could see was
deals
.ideas
So I deleted it (ie. deleted the folders in both ~/.virtualenvs/deals and ~/Projects/deals).
I tried again, although this time I got an interpreter with a number suffix, ie. python3.4 (1).
I continued, and got the same empty file structure.
I deleted both folders again, went and cleaned out the interpreters in Settings > Project Interpreters.
I then tried again, getting 'new' interpreters,until I finally had python3.4 (5)
Plus, along the way I also invalidated the caches and restarted.
(ie. File > Invalidate Caches/Restart)
Then to prove if it works at all, I tried a brand new name 'junk'.
This time it worked fine, and I could see the Django folders in the PyCharm explorer. Great.
But I really want to work on a project called 'deals'.
So I deleted all the 'deal's folders again, and tried to create a deals Django project again.
Same result.
After googling, I went to the Settings > Project Structure > + Content Root, and pointed it to the folder at ~/.virtual/deals.
Ok, so now I could see the files in the virtual env, but there's no Django files, and plus, the project folder was separate to the virtualenv folder, eg
deals
deals (~/project/deals) <- separate
deals (~/.virtualenvs/deals) <- separate
deals
init.py
settings.py
urls.py
wsgi.py
manage.py
Really stuck now.
Any advice on how to get this working please?
Eg. how do I
(i) get it back to 'cleanskin' so that I can start up a Django project and get the proper folders in the project space.
(ii) get it working with virtualenv, and ensure that the interpreter doesn't have a number suffix, such as python3.4(6)
Many thanks in advance,
Chris | PyCharm & VirtualEnvs - How To Remove Legacy | 0 | 1.2 | 1 | 0 | 0 | 16,475 |
34,979,145 | 2016-01-24T17:42:00.000 | 0 | 0 | 1 | 0 | 1 | python,django,pycharm | 1 | 60,949,461 | 0 | 3 | 0 | false | 1 | 0 | In addition to the answer above, which removed the Venv from the Pycharm list, I also had to go into my ~/venvs directory and delete the associated directory folder in there.
That did the trick. | 3 | 3 | 0 | 0 | How do I remove all traces of legacy projects from PyCharm?
Background: I upgraded from PyCharm Community Edition to PyCharm Pro Edition today.
Reason was so I could work on Django projects, and in particular, a fledgling legacy project called 'deals'.
I deleted the legacy project folders.
I then opened the Pro Edition and went through the steps of creating a Django project called 'deals' with a python3.4 interpreter in a virtualenv.
It didn't work, I got an error message saying something about a missing file, and in the PyCharm project explorer, all I could see was
deals
.ideas
So I deleted it (ie. deleted the folders in both ~/.virtualenvs/deals and ~/Projects/deals).
I tried again, although this time I got an interpreter with a number suffix, ie. python3.4 (1).
I continued, and got the same empty file structure.
I deleted both folders again, went and cleaned out the interpreters in Settings > Project Interpreters.
I then tried again, getting 'new' interpreters,until I finally had python3.4 (5)
Plus, along the way I also invalidated the caches and restarted.
(ie. File > Invalidate Caches/Restart)
Then to prove if it works at all, I tried a brand new name 'junk'.
This time it worked fine, and I could see the Django folders in the PyCharm explorer. Great.
But I really want to work on a project called 'deals'.
So I deleted all the 'deal's folders again, and tried to create a deals Django project again.
Same result.
After googling, I went to the Settings > Project Structure > + Content Root, and pointed it to the folder at ~/.virtual/deals.
Ok, so now I could see the files in the virtual env, but there's no Django files, and plus, the project folder was separate to the virtualenv folder, eg
deals
deals (~/project/deals) <- separate
deals (~/.virtualenvs/deals) <- separate
deals
init.py
settings.py
urls.py
wsgi.py
manage.py
Really stuck now.
Any advice on how to get this working please?
Eg. how do I
(i) get it back to 'cleanskin' so that I can start up a Django project and get the proper folders in the project space.
(ii) get it working with virtualenv, and ensure that the interpreter doesn't have a number suffix, such as python3.4(6)
Many thanks in advance,
Chris | PyCharm & VirtualEnvs - How To Remove Legacy | 0 | 0 | 1 | 0 | 0 | 16,475 |
34,979,145 | 2016-01-24T17:42:00.000 | 0 | 0 | 1 | 0 | 1 | python,django,pycharm | 1 | 63,129,392 | 0 | 3 | 0 | false | 1 | 0 | When virtual env is enabled, there will be a 'V' symbol active in the bottom part of pycharm in the same line with terminal and TODO. When you click on the 'V' , the first one will be enabled with a tick mark. Just click on it again. Then it will get disabled. As simple as that. | 3 | 3 | 0 | 0 | How do I remove all traces of legacy projects from PyCharm?
Background: I upgraded from PyCharm Community Edition to PyCharm Pro Edition today.
Reason was so I could work on Django projects, and in particular, a fledgling legacy project called 'deals'.
I deleted the legacy project folders.
I then opened the Pro Edition and went through the steps of creating a Django project called 'deals' with a python3.4 interpreter in a virtualenv.
It didn't work, I got an error message saying something about a missing file, and in the PyCharm project explorer, all I could see was
deals
.ideas
So I deleted it (ie. deleted the folders in both ~/.virtualenvs/deals and ~/Projects/deals).
I tried again, although this time I got an interpreter with a number suffix, ie. python3.4 (1).
I continued, and got the same empty file structure.
I deleted both folders again, went and cleaned out the interpreters in Settings > Project Interpreters.
I then tried again, getting 'new' interpreters,until I finally had python3.4 (5)
Plus, along the way I also invalidated the caches and restarted.
(ie. File > Invalidate Caches/Restart)
Then to prove if it works at all, I tried a brand new name 'junk'.
This time it worked fine, and I could see the Django folders in the PyCharm explorer. Great.
But I really want to work on a project called 'deals'.
So I deleted all the 'deal's folders again, and tried to create a deals Django project again.
Same result.
After googling, I went to the Settings > Project Structure > + Content Root, and pointed it to the folder at ~/.virtual/deals.
Ok, so now I could see the files in the virtual env, but there's no Django files, and plus, the project folder was separate to the virtualenv folder, eg
deals
deals (~/project/deals) <- separate
deals (~/.virtualenvs/deals) <- separate
deals
init.py
settings.py
urls.py
wsgi.py
manage.py
Really stuck now.
Any advice on how to get this working please?
Eg. how do I
(i) get it back to 'cleanskin' so that I can start up a Django project and get the proper folders in the project space.
(ii) get it working with virtualenv, and ensure that the interpreter doesn't have a number suffix, such as python3.4(6)
Many thanks in advance,
Chris | PyCharm & VirtualEnvs - How To Remove Legacy | 0 | 0 | 1 | 0 | 0 | 16,475 |
34,988,678 | 2016-01-25T09:08:00.000 | 1 | 0 | 0 | 1 | 0 | python,linux,sockets,subprocess | 0 | 34,989,073 | 0 | 2 | 0 | false | 0 | 0 | It seems like a permission issue. The subprocess is probably running as another user and therefore you will not have access to the process. Use sudo ps xauw |grep [processname] to figure out under what user the daemon process is running.
The program itself is a web service, using port 5000 by default.
I don't know the detail of the start script of that daemon, but it seems to inherit the socket listening on port 5000.
So if I were to restart my program, I'll find that the port is already occupied by the daemon process.
Now I am considering to fine tune the subprocess function to close the inherited socket FD, but I don't know how to get the FD in the first place. | How to get a socket FD according to the port occupied in Python? | 0 | 0.099668 | 1 | 0 | 1 | 429 |
34,992,856 | 2016-01-25T12:39:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,django-rest-framework,offlineapps | 0 | 34,996,085 | 0 | 1 | 1 | false | 0 | 0 | I'm confused as to how you're approaching this. My understanding is that when the app is offline you want to "queue up" any API requests that are sent.
Your process seems fine; however, without knowing the terms around the app being "offline", it's hard to say if this is best.
Assuming you mean the server(s) hosting the application are offline, you're correct that you want a process in the Android app that will store the request until the application comes back online. However, this can be dangerous for end users. They should receive a message that the application is offline and to "try again later", as it were. The fear is that they submit a request for x new contacts to be queued and then re-submit, not realizing the application was offline.
I would suggest you have the android app built to either notify the user of the app being down or provide some very visible notification that requests are queued locally on their phone until the application becomes available and for them to view/modify/delete said locally cached requests until the application becomes available. When the API becomes available a notification can be set for users to release their queue on their device. | 1 | 3 | 0 | 0 | I have multiple api which we have provided to android developers.
Like :
1) Creating Business card API
2) Creating Contacts API
So these api working fine when app is online. So our requirement is to handle to create business card and contacts when app is offline.
We are following steps but not sure:-
1) The Android developer stores the business card when the app is offline and sends this data to the server using a separate offline business card api when the app comes back online.
2) Same we do for creating contacts offline using offline contact api.
My problem is I want do in one api call to send all data to server and do operation.
Is this approach right? Also, please suggest the best approach to handle offline data, and how to handle syncing data when the app comes back online.
Please let me know if I could provide more information. | how to design rest api which handle offline data | 1 | 0 | 1 | 0 | 1 | 1,524 |
34,998,280 | 2016-01-25T17:10:00.000 | 1 | 0 | 0 | 0 | 0 | python,apache-spark,python-3.4,pyspark | 1 | 35,013,791 | 0 | 2 | 0 | false | 0 | 0 | This is not the problem of PySpark, this is a limit of Spark implement.
Spark uses a Scala array to store the broadcast elements; since the max Integer of Scala is 2*10^9, the total string size is limited to 2 * 2*10^9 bytes = 4GB - you can view the Spark code. | 1 | 1 | 1 | 0 | In Pyspark, I am trying to broadcast a large numpy array of size around 8GB. But it fails with the error "OverflowError: cannot serialize a string larger than 4GiB". I have 15g in executor memory and 25g driver memory. I have tried using the default and Kryo serializers. Both did not work and show the same error.
Can anyone suggest how to get rid of this error and the most efficient way to tackle large broadcast variables? | Broadcast large array in pyspark (~ 8GB) | 0 | 0.099668 | 1 | 0 | 0 | 3,217 |
35,002,061 | 2016-01-25T20:48:00.000 | 3 | 0 | 0 | 0 | 1 | python,django,rest,django-views,django-rest-framework | 0 | 35,009,997 | 0 | 2 | 0 | true | 1 | 0 | You don't have to "fix" deprecation warnings as they are, well, only warnings and things still work. However, if you decide to update, they might break your app. So usually it's a good idea to rewrite the parts with warnings to the new interfaces hinted at in those warnings, if they're in your code. If they're in some side library you use, then you might want to wait and see whether the library creator updates it in the next release.
Regarding your particular warnings, unless you'll decide to update to Django 1.10, your code should work well. | 1 | 4 | 0 | 0 | I am a new user of the Django Framework. I am currently building a REST API with the django_rest_framework. When starting my server I am getting deprecation warnings that I have no idea how to fix.
RemovedInDjango110Warning: 'get_all_related_objects is an unofficial API that has been deprecated. You may be able to replace it with 'get_fields()'
for relation in opts.get_all_related_objects()
The above is the first of these. Does anyone know how to fix this issue. All I have in my API at the minute is standard rest calls using the built in ModelViewSet and I have also overwritten the default authentication & user system with my own so I have no idea why I'm getting these warnings as I have been using Django 1.9 from the start.
I also got this:
RemovedInDjango110Warning: render() must be called with a dict, not a RequestContext
From my initial research this is related to templates. I am not using any templates so I don't know why this is coming up.
Can anyone help me to fix these issues? | How to fix a Deprecation Warning in Django 1.9 | 0 | 1.2 | 1 | 0 | 0 | 2,418 |
35,004,619 | 2016-01-25T23:46:00.000 | 2 | 0 | 0 | 0 | 0 | python,deep-learning,tensorflow | 0 | 35,004,791 | 0 | 1 | 0 | true | 0 | 0 | The amount of pre-fetching depends on your queue capacity. If you use string_input_producer for your filenames and batch for batching, you will have 2 queues - filename queue, and prefetching queue created by batch. Queue created by batch has default capacity of 32, controlled by batch(...,capacity=) argument, therefore it can prefetch up to 32 images. If you follow outline in TensorFlow official howto's, processing examples (everything after batch) will happen in main Python thread, whereas filling up the queue will happen in threads created/started by batch/start_queue_runners, so prefetching new data and running prefetched data through the network will occur concurrently, blocking when the queue gets full or empty. | 1 | 3 | 1 | 1 | I am not quite sure about how file-queue works. I am trying to use a large dataset like imagenet as input. So preloading data is not the case, so I am wondering how to use the file-queue. According to the tutorial, we can convert data to TFRecords file as input. Now we have a single big TFRecords file. So when we specify a FIFO queue for the reader, does it mean the program would fetch a batch of data each time and feed the graph instead of loading the whole file of data? | reading a large dataset in tensorflow | 0 | 1.2 | 1 | 0 | 0 | 2,670 |
35,020,609 | 2016-01-26T17:57:00.000 | 2 | 0 | 1 | 0 | 0 | python | 0 | 35,020,764 | 0 | 3 | 0 | true | 0 | 0 | A quick and dirty approach might be len(str(NUMBER).strip('0')) which will trim off any trailing zeros and count the remaining digits.
To discount the decimal point then you'd need len(str(NUMBER).replace('.','').strip('0'))
However you need to bear in mind that in many cases converting a python float to a string can give you some odd behaviour, due to the way floating point numbers are handled. | 1 | 4 | 0 | 0 | For a coding exercise I'm working on, I'm trying to compare two numbers and choose the one that has the larger number of significant digits.
For example: compare 2.37e+07 and 2.38279e+07, select 2.38279e+07 because it has more significant digits.
I don't know how to implement this in Python. I considered counting the length of each number using len(str(NUMBER)), but this method returns "10" for both of the numbers above because it doesn't differentiate between zero and non-zero digits.
How can I compare the number of significant digits in Python? | Compare the number of significant digits in two numbers | 0 | 1.2 | 1 | 0 | 0 | 1,713 |
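A small sketch building on the strip('0') idea from the answer above to actually compare the two numbers; repr() of floats can produce surprising strings, so treat this as illustrative only:
def sig_digits(x):
    # drop the sign, the decimal point and trailing zeros, count what's left
    s = repr(x).lstrip('-').replace('.', '').rstrip('0')
    return len(s) or 1

a, b = 2.37e+07, 2.38279e+07
print(max(a, b, key=sig_digits))   # -> 23827900.0, the one with more significant digits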
35,045,038 | 2016-01-27T18:16:00.000 | 1 | 1 | 1 | 0 | 0 | python,virtualenv,pytest | 0 | 39,231,653 | 0 | 4 | 0 | false | 0 | 0 | In my case I was obliged to leave the venv (deactivate), remove pytest (pip uninstall pytest), enter the venv (source /my/path/to/venv), and then reinstall pytest (pip install pytest). I don't know exactly why pip refused to install pytest in the venv (it said it was already present).
I hope this helps | 2 | 75 | 0 | 0 | I installed pytest into a virtual environment (using virtualenv) and am running it from that virtual environment, but it is not using the packages that I installed in that virtual environment. Instead, it is using the main system packages. (Using python -m unittest discover, I can actually run my tests with the right python and packages, but I want to use the py.test framework.)
Is it possible that py.test is actually not running the pytest inside the virtual environment and I have to specify which pytest to run?
How to I get py.test to use only the python and packages that are in my virtualenv?
Also, since I have several version of Python on my system, how do I tell which Python that Pytest is using? Will it automatically use the Python within my virtual environment, or do I have to specify somehow? | How do I use pytest with virtualenv? | 1 | 0.148885 | 1 | 0 | 0 | 32,064 |
35,045,038 | 2016-01-27T18:16:00.000 | 95 | 1 | 1 | 0 | 0 | python,virtualenv,pytest | 0 | 54,597,424 | 0 | 4 | 0 | false | 0 | 0 | There is a bit of a dance to get this to work:
activate your venv : source venv/bin/activate
install pytest : pip install pytest
re-activate your venv: deactivate && source venv/bin/activate
The reason is that the path to pytest is set by sourcing the activate file only after pytest is actually installed in the venv. You can't set the path to something before it is installed.
Re-activating is required for any console entry points installed within your virtual environment. | 2 | 75 | 0 | 0 | I installed pytest into a virtual environment (using virtualenv) and am running it from that virtual environment, but it is not using the packages that I installed in that virtual environment. Instead, it is using the main system packages. (Using python -m unittest discover, I can actually run my tests with the right python and packages, but I want to use the py.test framework.)
Is it possible that py.test is actually not running the pytest inside the virtual environment and I have to specify which pytest to run?
How do I get py.test to use only the python and packages that are in my virtualenv?
Also, since I have several version of Python on my system, how do I tell which Python that Pytest is using? Will it automatically use the Python within my virtual environment, or do I have to specify somehow? | How do I use pytest with virtualenv? | 1 | 1 | 1 | 0 | 0 | 32,064 |
35,047,691 | 2016-01-27T20:48:00.000 | 2 | 0 | 1 | 1 | 0 | python,linux | 0 | 35,047,953 | 0 | 4 | 0 | false | 0 | 0 | Use update-alternatives --config python and choose python2.7 from the choices.
If you need to remove it use update-alternatives --remove python /usr/bin/python2.7. | 1 | 3 | 0 | 0 | The version of Linux I am working on has python 2.6 by default, and we installed 2.7 on it in a separate folder.
If I want to run a .py script, how do I tell it to use 2.7 instead of the default? | How to select which version of python I am running on Linux? | 0 | 0.099668 | 1 | 0 | 0 | 7,476 |
35,048,996 | 2016-01-27T22:06:00.000 | 0 | 0 | 1 | 1 | 1 | python,terminal | 1 | 35,049,070 | 0 | 3 | 0 | false | 0 | 0 | When you type "python", your PATH is searched and that version is run. But if you specify the absolute path of the other python, you run the one you want.
Here, on my laptop, I have /home/user/python3_4 and /home/user/python2_7. If I type python, the 3.4 version is executed, because this directory is set in my path variable. When I want to test some scripts with the 2.7 version, I type in the command line: /home/user/python2_7/bin/python script.py. (Both directories were chosen by me. They're not the defaults for python, of course.)
I hope it can help you. | 2 | 1 | 0 | 0 | I have downloaded a python program from git.
This program is python 3.
On my laptop i have both python 2.7 and python 3.4. Python 2.7 is default version.
When I want to run this program in the terminal it gives some module errors because it used the wrong version.
How can I force a name.py file to run with a (non-)default version of Python?
I have tried to search on Google but without any result, because of a lack of search tags.
I also just tried things like ./name.py python3, but with the same result (error). | Run python program from terminal | 0 | 0 | 1 | 0 | 0 | 7,570
35,048,996 | 2016-01-27T22:06:00.000 | 0 | 0 | 1 | 1 | 1 | python,terminal | 1 | 35,964,107 | 0 | 3 | 0 | true | 0 | 0 | The method from @Tom Dalton and @n1c9 worked for me!
python3 name.py | 2 | 1 | 0 | 0 | I have downloaded a python program from git.
This program is python 3.
On my laptop i have both python 2.7 and python 3.4. Python 2.7 is default version.
When I want to run this program in the terminal it gives some module errors because it used the wrong version.
How can I force a name.py file to run with a (non-)default version of Python?
I have tried to search on Google but without any result, because of a lack of search tags.
I also just tried things like ./name.py python3, but with the same result (error). | Run python program from terminal | 0 | 0 | 1 | 0 | 0 | 7,570
35,063,946 | 2016-01-28T14:17:00.000 | 1 | 0 | 0 | 0 | 0 | python-2.7,csv,pandas,pandasql | 0 | 35,064,268 | 0 | 1 | 0 | true | 0 | 0 | Reading the entire index column will still need to read and parse the whole file.
If no fields in the file are multiline, you could scan the file backwards to find the first newline (but with a check if there is a newline past the data). The value following that newline will be your last index.
Storing the last index in another file would also be a possibility, but you would have to make sure both files stay consistent.
Another way would be to reserve some (fixed amount of) bytes at the beginning of the file and write (in place) the last index value there as a comment. But your parser would have to support comments, or be able to skip rows. | 1 | 2 | 1 | 0 | I have a .csv file on disk, formatted so that I can read it into a pandas DataFrame easily, to which I periodically write rows. I need this database to have a row index, so every time I write a new row to it I need to know the index of the last row written.
There are plenty of ways to do this:
I could read the entire file into a DataFrame, append my row, and then print the entire DataFrame to memory again. This might get a bit slow as the database grows.
I could read the entire index column into memory, and pick the largest value off, then append my row to the .csv file. This might be a little better, depending on how column-reading is implemented.
I am curious if there is a way to just get that one cell directly, without having to read a whole bunch of extra information into memory. Any suggestions? | reading the last index from a csv file using pandas in python2.7 | 1 | 1.2 | 1 | 0 | 0 | 511 |
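A sketch of the scan-backwards idea from the answer above - read only the tail of the file and parse the last non-empty line; it assumes no multiline fields and that the index is the first column:
import os

def last_index(path, tail_bytes=4096):
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(-min(size, tail_bytes), os.SEEK_END)   # read only the end of the file
        tail = f.read().decode('utf-8')
    lines = [line for line in tail.splitlines() if line.strip()]
    return int(lines[-1].split(',')[0])               # index is the first column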
35,074,209 | 2016-01-28T23:33:00.000 | 37 | 0 | 0 | 0 | 0 | python,excel,google-sheets,ipython,ipython-notebook | 0 | 35,090,610 | 0 | 6 | 0 | true | 0 | 0 | Try using the to_clipboard() method. E.g., for a dataframe, df: df.to_clipboard() will copy said dataframe to your clipboard. You can then paste it into Excel or Google Docs. | 2 | 19 | 1 | 0 | I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython.
I know how to convert results to csv and save. But then I have to dig through my computer, open the results and paste them into Excel or Google Sheets. That takes too much time.
And just highlighting a resulting dataframe and copy/pasting usually completely messes up the formatting, with columns overflowing. (Not to mention the issue of long resulting dataframes being truncated when printed in iPython.)
How can I easily copy/paste an iPython result into a spreadsheet? | How to copy/paste a dataframe from iPython into Google Sheets or Excel? | 0 | 1.2 | 1 | 0 | 0 | 17,289 |
35,074,209 | 2016-01-28T23:33:00.000 | 1 | 0 | 0 | 0 | 0 | python,excel,google-sheets,ipython,ipython-notebook | 0 | 66,239,699 | 0 | 6 | 0 | false | 0 | 0 | Paste the output to an IDE like Atom and then paste in Google Sheets/Excel | 2 | 19 | 1 | 0 | I've been using iPython (aka Jupyter) quite a bit lately for data analysis and some machine learning. But one big headache is copying results from the notebook app (browser) into either Excel or Google Sheets so I can manipulate results or share them with people who don't use iPython.
I know how to convert results to csv and save. But then I have to dig through my computer, open the results and paste them into Excel or Google Sheets. That takes too much time.
And just highlighting a resulting dataframe and copy/pasting usually completely messes up the formatting, with columns overflowing. (Not to mention the issue of long resulting dataframes being truncated when printed in iPython.)
How can I easily copy/paste an iPython result into a spreadsheet? | How to copy/paste a dataframe from iPython into Google Sheets or Excel? | 0 | 0.033321 | 1 | 0 | 0 | 17,289 |
35,079,670 | 2016-01-29T08:10:00.000 | 0 | 0 | 1 | 0 | 0 | python,windows | 1 | 35,080,840 | 0 | 2 | 0 | false | 0 | 0 | I couldn't find a solution anywhere to this, so I just deleted every trace of Python from my computer and installed Anaconda.
I don't feel this was a very informed or optimal solution, but I now have consistent behavior in various places. Also, the Anaconda installer seems much more smooth than the pip installer. | 2 | 0 | 0 | 0 | I'm having some trouble tracking down where my pip modules are going, and I finally found what seems to be the root of the issue when I did a "pip list" command in two separate cmd windows.
One window was running as admin, and the other not. They showed two completely different lists of modules installed. When I ran "python" in each window, one started python 3.4.3, and the other python 3.5.0a2.
The reason I'm doing this in two separate types of windows is because I'm running into "access is denied" errors when trying to install modules with pip. (For example, requests.)
When I check my PATH variable, it points to C:\Program Files\Python 3.5. Is there an admin PATH variable somewhere that I can modify so that I can run python3.5 as admin?
Can someone help me understand how I can get around access is denied without using admin cmd, or how I can change admin Path variable, or something?
I'm running Windows 7, 64 bit, with several versions of python installed. 2.7, 3.3, 3.4.3, 3.5.0a2. I can get more refined details if I need to.
Edit Addition: I'd like to use virtualenv with python3.5, but when I try to install it with pip install virtualenv, I get Permission denied error. | Different versions of python when running cmd as admin, how do I alter admin version? | 0 | 0 | 1 | 0 | 0 | 226 |
35,079,670 | 2016-01-29T08:10:00.000 | 1 | 0 | 1 | 0 | 0 | python,windows | 1 | 35,079,769 | 0 | 2 | 0 | false | 0 | 0 | Although you are running Python on a Windows Machine - I am assuming a Client i.e. Desktop. You should go and look at Virtual Python Environments - there are lots of resources documenting how this is accomplished...
You are directly manipulating the system copy of the Python environment, and one mistake can break the whole lot. It is much better (and safer) for each project, or group of projects, to have its own virtual environment, which you can then upgrade using pip and a requirements file. | 2 | 0 | 0 | 0 | I'm having some trouble tracking down where my pip modules are going, and I finally found what seems to be the root of the issue when I did a "pip list" command in two separate cmd windows.
One window was running as admin, and the other not. They showed two completely different lists of modules installed. When I ran "python" in each window, one started python 3.4.3, and the other python 3.5.0a2.
The reason I'm doing this in two separate types of windows is because I'm running into "access is denied" errors when trying to install modules with pip. (For example, requests.)
When I check my PATH variable, it points to C:\Program Files\Python 3.5. Is there an admin PATH variable somewhere that I can modify so that I can run python3.5 as admin?
Can someone help me understand how I can get around access is denied without using admin cmd, or how I can change admin Path variable, or something?
I'm running Windows 7, 64 bit, with several versions of python installed. 2.7, 3.3, 3.4.3, 3.5.0a2. I can get more refined details if I need to.
Edit Addition: I'd like to use virtualenv with python3.5, but when I try to install it with pip install virtualenv, I get Permission denied error. | Different versions of python when running cmd as admin, how do I alter admin version? | 0 | 0.099668 | 1 | 0 | 0 | 226 |
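If the underlying goal is just an isolated environment that does not require admin rights, the standard-library venv module (available in 3.4+) can create one directly; the directory name below is only an example, and this is a sketch rather than a fix for the PATH question itself.

    import venv

    # Create an isolated environment with its own pip; the system-wide
    # site-packages is never touched, so no elevated cmd window is needed.
    venv.create("my_env", with_pip=True)

Packages installed with the pip inside my_env (on Windows, my_env\Scripts\pip.exe) then stay local to that environment.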
35,105,825 | 2016-01-30T19:03:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,tastypie | 0 | 35,167,054 | 0 | 3 | 0 | false | 1 | 0 | That'll only work for list endpoints though. My advice is to use a middleware to add X- headers; it's a cleaner, more generalized solution. | 1 | 1 | 0 | 0 | I want to measure execution time for some queries and add this data to responses, like: {"meta": {"execution_time_in_ms": 500 ...}} I know how to add fields to tastypie's responses, but I have no idea how to measure the time in it: where should I initialize the timer and where should I stop it? Any ideas? | Tastypie. How to add time of execution to responses? | 0 | 0.066568 | 1 | 0 | 0 | 107 |
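A rough sketch of the middleware idea from that answer, written as an old-style Django middleware class (the style current at the time); the header name X-Execution-Time-Ms is an arbitrary choice, not a standard.

    import time

    class ExecutionTimeMiddleware(object):
        # Start a timer on the way in, attach the elapsed time as an
        # X- header on the way out; works for any view, including Tastypie.

        def process_request(self, request):
            request._start_time = time.time()

        def process_response(self, request, response):
            start = getattr(request, "_start_time", None)
            if start is not None:
                elapsed_ms = (time.time() - start) * 1000
                response["X-Execution-Time-Ms"] = "%.1f" % elapsed_ms
            return response

The class would then be listed in MIDDLEWARE_CLASSES in settings.py.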
35,136,140 | 2016-02-01T17:02:00.000 | 0 | 1 | 0 | 1 | 0 | python,deployment,updates,beagleboneblack,yocto | 0 | 35,147,597 | 0 | 1 | 0 | false | 1 | 0 | A natural strategy would be to make use of the package manager also used for the rest of the system. The various package managers of Linux distributions are not closed systems. You can create your own package repository containing just your application/scripts and add it as a package source on your target. Your "updater" would work on top of that.
This is also a route you can take when using Yocto. | 1 | 1 | 0 | 0 | For the moment I've created a Python web application running on uWSGI with a frontend created in EmberJS. There is also a small Python script running that controls I/O and serial ports connected to the BeagleBone Black.
The system is running on Debian; packages are managed and installed via Ansible, and the applications are also updated via some Ansible scripts. In other words, updates are for the moment done manually by launching the Ansible scripts over ssh.
I'm now looking for a strategy/method to update my Python applications in an easy way, one that can also be used by our clients (e.g. via a web interface). A good example is the update of a router's firmware. I'm wondering how I can use a similar strategy for my Python applications.
I looked at Yocto, with which I can build my own Linux, but I don't see how to include my applications in those builds, and I don't want to build a complete image for every hotfix.
Does anyone have a similar project and would like to share some useful information on handling upgrade strategies/methods? | Update strategy Python application + Ember frontend on BeagleBone | 0 | 0 | 1 | 0 | 0 | 141 |
35,150,683 | 2016-02-02T10:19:00.000 | 5 | 1 | 1 | 0 | 0 | python,autocomplete,ide,atom-editor | 0 | 41,311,935 | 0 | 3 | 0 | false | 0 | 0 | Atom is getting various modifications. The autocomplete-python package is a handy package which helps you code faster. The way to install it has changed.
In the new Atom editor, go to File -> Settings -> Install,
search for autocomplete-python,
and click on Install. Voilà, it's done; restarting Atom is not required and you will see the difference the next time you edit Python code.
Deb | 2 | 10 | 0 | 0 | I am using atom IDE for my python projects.
there are auto-complete suggestions in some cases but I'd like to know if it's possible to have a list of all possible functions that a imported module has, for instance if i import
import urllib
when I type urlib. and press (ctrl+tab) would like to see a list with the possible functions/methods to use.
Is that possible?
Thanks | python - atom IDE how to enable auto-complete code to see all functions from a module | 0 | 0.321513 | 1 | 0 | 0 | 23,531 |
35,150,683 | 2016-02-02T10:19:00.000 | 14 | 1 | 1 | 0 | 0 | python,autocomplete,ide,atom-editor | 0 | 35,151,184 | 0 | 3 | 0 | false | 0 | 0 | I found the solution for my own question.
Actually I had the wrong plugin installed!
So, in the IDE, edit->preferences, and in the packages section just typed autocomplete-python and press install button.
After restart Atom, it should start work :) | 2 | 10 | 0 | 0 | I am using atom IDE for my python projects.
there are auto-complete suggestions in some cases but I'd like to know if it's possible to have a list of all possible functions that a imported module has, for instance if i import
import urllib
when I type urlib. and press (ctrl+tab) would like to see a list with the possible functions/methods to use.
Is that possible?
Thanks | python - atom IDE how to enable auto-complete code to see all functions from a module | 0 | 1 | 1 | 0 | 0 | 23,531 |
35,158,809 | 2016-02-02T16:31:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7,pycharm,python-2.x | 1 | 35,158,959 | 0 | 2 | 0 | true | 0 | 0 | Create new project
Create a new .py file in your project, or copy your file into the project directory.
A second option would be to import an existing project by selecting the directory that contains your Python file. | 2 | 3 | 0 | 0 | New to Python and PyCharm, but trying to use it for an online course.
After opening an assignment .py document (attached image), I get an error message if I open the python console:
Error:Cannot start process, the working directory '\c:...\python_lab.py' is not a directory.
Obviously, it is not - it is a python file, but I don't know how to address the problem.
How can I assign a working directory that is functional from within PyCharm, or in general, what is the meaning of the error message? | How to assign a directory to PyCharm | 0 | 1.2 | 1 | 0 | 0 | 882 |
35,158,809 | 2016-02-02T16:31:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7,pycharm,python-2.x | 1 | 35,158,965 | 0 | 2 | 0 | false | 0 | 0 | It looks like your default working directory is a .tmp folder. The best way to fix this is to create a new project; just make sure it's not pointing to a .tmp directory. | 2 | 3 | 0 | 0 | New to Python and PyCharm, but trying to use it for an online course.
After opening an assignment .py document (attached image), I get an error message if I open the python console:
Error:Cannot start process, the working directory '\c:...\python_lab.py' is not a directory.
Obviously, it is not - it is a python file, but I don't know how to address the problem.
How can I assign a working directory that is functional from within PyCharm, or in general, what is the meaning of the error message? | How to assign a directory to PyCharm | 0 | 0.099668 | 1 | 0 | 0 | 882 |
35,163,501 | 2016-02-02T20:47:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7 | 0 | 35,163,645 | 0 | 1 | 0 | true | 0 | 0 | I don't know if there are any standard tools for doing this, but it shouldn't be too difficult to mark the sections with appropriately coded remarks and then run all your files through a script that outputs a new set of files omitting the lines between those remarks. | 1 | 3 | 0 | 0 | I want to release a subset of my code for external use. Only certain functions or methods should be used (or even seen) by the external customer. Is there a way to do this in Python?
I thought about wrapping the code I want removed in an if __debug__: and then creating a .pyc file with py_compile or compileall and then recreate source code from the new byte-code using uncompyle2. The __debug__ simply creates an if False condition which gets stripped out by the "compiler". I couldn't figure out how to use those "compiler modules" with the -O option. | How to remove code from external release | 0 | 1.2 | 1 | 0 | 0 | 75 |
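A possible sketch of the marker-stripping script suggested in the answer; the # BEGIN INTERNAL / # END INTERNAL comments are an invented convention, not anything standard.

    def strip_internal(src_path, dst_path,
                       start_marker="# BEGIN INTERNAL",
                       end_marker="# END INTERNAL"):
        # Copy a source file, omitting every region between the markers.
        keep = True
        with open(src_path) as src, open(dst_path, "w") as dst:
            for line in src:
                stripped = line.strip()
                if stripped == start_marker:
                    keep = False
                    continue
                if stripped == end_marker:
                    keep = True
                    continue
                if keep:
                    dst.write(line)

    # hypothetical usage:
    # strip_internal("module.py", "release/module.py")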
35,163,789 | 2016-02-02T21:04:00.000 | 0 | 0 | 0 | 0 | 0 | python,numpy,theano,tensorflow | 0 | 50,763,868 | 0 | 4 | 0 | false | 0 | 0 | tf.transpose is probably what you are looking for. It takes an arbitrary permutation. | 1 | 7 | 1 | 0 | I have seen that transpose and reshape together can help, but I don't know how to use them.
E.g. dimshuffle(0, 'x')
What is its equivalent using transpose and reshape? Or is there a better way?
Thank you. | Theano Dimshuffle equivalent in Google's TensorFlow? | 0 | 0 | 1 | 0 | 0 | 4,682 |
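For the specific dimshuffle(0, 'x') case (appending a new length-1 axis), a sketch of what I believe the TensorFlow equivalents look like; treat it as my reading of the APIs rather than an official mapping.

    import numpy as np
    import tensorflow as tf

    x = np.arange(4, dtype=np.float32)   # shape (4,)

    # Theano's x.dimshuffle(0, 'x') turns shape (4,) into (4, 1).
    a = tf.expand_dims(x, 1)             # shape (4, 1)
    b = tf.reshape(x, [-1, 1])           # same result via reshape

    # tf.transpose only reorders existing axes, so by itself it cannot add
    # the new axis; it becomes useful once combined with reshape/expand_dims
    # for more general dimshuffle patterns.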
35,164,413 | 2016-02-02T21:48:00.000 | 1 | 0 | 0 | 0 | 0 | android,python,appium,python-appium | 0 | 35,277,434 | 0 | 1 | 0 | false | 1 | 1 | Update. As it turns out that cannot be done with appium webdriver.
For those of you who are wondering this is the answer I rec'd from the appium support group:
This cannot be done by appium as underlying UIAutomator framework does not allow us to do so.
In app's native context this cannot be done
In the app's webview context this will be the same as below, because a webview is nothing but a chromeless browser session inside an app
print searchBtn.value_of_css_property("background-color").
Summary
for element inside NATIVE CONTEXT ==>> NO
for element inside WEBVIEW CONTEXT ==>> YES
Hope this helps. | 1 | 1 | 0 | 0 | I would like to verify the style of an element i.e. the color of the text shown in a textview. Whether it is black or blue ex. textColor or textSize. This information is not listed in the uiautomatorviewer.
I can get the text using elem.get_attribute("text"), as the text value is shown in the Node Detail. Is there a way to check the style attributes? (I can do this fairly easily with straight Selenium.) | Appium Android UI testing - how to verify the style attribute of an element? | 0 | 0.197375 | 1 | 0 | 0 | 1,771 |
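For the webview half of that summary, a rough sketch with the Appium Python client; the webview name lookup and the element locator are placeholders that will differ per app, and the driver setup is assumed to exist elsewhere.

    def get_button_color(driver):
        # Switch into the app's webview context (the exact name varies).
        webview = [c for c in driver.contexts if "WEBVIEW" in c][0]
        driver.switch_to.context(webview)

        # Inside a webview the element behaves like a normal web element,
        # so CSS properties such as color become readable.
        btn = driver.find_element_by_id("search-btn")   # placeholder locator
        color = btn.value_of_css_property("background-color")

        driver.switch_to.context("NATIVE_APP")          # back to native views
        return color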
35,168,823 | 2016-02-03T04:59:00.000 | 1 | 0 | 1 | 0 | 0 | python-3.x | 0 | 35,169,115 | 0 | 2 | 0 | true | 0 | 0 | If the files are both sorted, or if you can produce sorted versions of the files, then this is relatively easy. Your simplest approach (conceptually speaking) would be to take one word from file A, call it a, and then read a word from file B, calling it b. Either b is alphabetically prior to a, or it is after a, or they are the same. If they are the same, add the word to a list you're maintaining. If b is prior to a, read b from file B until b >= a. If equal, collect that word. If a < b, obviously, read a from A until a >= b, and collect if equal.
Since file size is a problem, you might need to write your collected words out to a results file to avoid running out of memory. I'll let you worry about that detail.
If they are not sorted and you can't sort them, then it's a harder problem. The naive approach would be to take a word from A, and then scan through B looking for that word. Since you say the files are large, this is not an attractive option. You could probably do better than this by reading in chunks from A and B and working with set intersections, but this is a little more complex.
Putting it as simply as I can, I would read in a reasonably-sized chunks of file A, and convert it to a set of words, call that a1. I would then read similarly-sized chunks of B as sets b1, b2, ... bn. The union of the intersections of (a1, b1), (a1, b2), ..., (a1, bn) is the set of words appearing in a1 and B. Then repeat for chunk a2, a3, ... an.
I hope this makes sense. If you haven't played with sets, it might not, but then I guess there's a cool thing for you to learn about. | 2 | 0 | 0 | 0 | I have a file that is all strings and I want to loop through the file and check its contents against another file. Both files are too big to place in the code so I have to open each file with open method and then turn each into a loop that iterates over the file word for word (in each file) and compare every word for every word in other file. Any ideas how to do this? | Python: how to open a file and loop through word for word and compare to a list | 0 | 1.2 | 1 | 0 | 0 | 69 |
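A compact sketch of the chunked set-intersection variant described above; the chunk size and file names are arbitrary, and re-opening file B for every chunk of A is what keeps memory bounded (and also avoids the exhausted-file-pointer problem of a naive nested loop).

    def words_in_chunks(path, chunk_size=100000):
        # Yield sets of words from a file, roughly chunk_size words at a time.
        chunk = set()
        with open(path) as f:
            for line in f:
                for word in line.split():
                    chunk.add(word)
                    if len(chunk) >= chunk_size:
                        yield chunk
                        chunk = set()
        if chunk:
            yield chunk

    def common_words(path_a, path_b):
        common = set()
        for a_chunk in words_in_chunks(path_a):
            # B is re-read from the start for every chunk of A, so only one
            # chunk of each file is held in memory at any moment.
            for b_chunk in words_in_chunks(path_b):
                common |= a_chunk & b_chunk
        return common

    # print(common_words("fileA.txt", "fileB.txt"))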
35,168,823 | 2016-02-03T04:59:00.000 | 0 | 0 | 1 | 0 | 0 | python-3.x | 0 | 35,211,196 | 0 | 2 | 0 | false | 0 | 0 | I found the answer. There is a file pointer when reading files. The problem is that in a nested loop the inner file's pointer is left at the end after the first pass, so the inner loop does not run again for the next iteration of the outer loop in Python. | 2 | 0 | 0 | 0 | I have a file that is all strings and I want to loop through the file and check its contents against another file. Both files are too big to place in the code so I have to open each file with open method and then turn each into a loop that iterates over the file word for word (in each file) and compare every word for every word in other file. Any ideas how to do this? | Python: how to open a file and loop through word for word and compare to a list | 0 | 0 | 1 | 0 | 0 | 69 |
35,174,394 | 2016-02-03T10:25:00.000 | 2 | 0 | 0 | 0 | 0 | python,sockets,tcp | 0 | 35,206,533 | 0 | 2 | 0 | false | 0 | 0 | shutdown is useful when you have to signal the remote client that no more data is being sent. You can specify in the shutdown() parameter which half-channel you want to close.
Most commonly, you want to close the TX half-channel, by calling shutdown(1). In TCP level, it sends a FIN packet, and the remote end will receive 0 bytes if blocking on read(), but the remote end can still send data back, because the RX half-channel is still open.
Some application protocols use this to signal the end of the message. Some other protocols find the EOM based on data itself. For example, in an interactive protocol (where messages are exchanged many times) there may be no opportunity, or need, to close a half-channel.
In HTTP, shutdown(1) is one method that a client can use to signal that a HTTP request is complete. But the HTTP protocol itself embeds data that allows to detect where a request ends, so multiple-request HTTP connections are still possible.
I don't think that calling shutdown() before close() is always necessary, unless you need to explicitly close a half-channel. If you want to cease all communication, close() does that too. Calling shutdown() and forgetting to call close() is worse because the file descriptor resources are not freed.
From Wikipedia: "On SVR4 systems use of close() may discard data. The use of shutdown() or SO_LINGER may be required on these systems to guarantee delivery of all data." This means that, if you have outstanding data in the output buffer, a close() could discard this data immediately on a SVR4 system. Linux, BSD and BSD-based systems like Apple are not SVR4 and will try to send the output buffer in full after close(). I am not sure if any major commercial UNIX is still SVR4 these days.
Again using HTTP as an example, an HTTP client running on SVR4 would not lose data using close(), because it keeps the connection open after the request in order to read the response. An HTTP server under SVR4 would have to be more careful, calling shutdown(2) before close() after sending the whole response, because the response would be partly in the output buffer.
I've seen in python documentation function socket.shutdown(flag), but I don't see how it could be used in this standard method, theoretical of closing TCP socket. As far as I know, it just blocks either reading, writing or both.
What is the best, most correct way to close TCP socket in python? Are there standard functions for closing signals or do I need to implement them myself? | Proper way to close tcp sockets in python | 0 | 0.197375 | 1 | 0 | 1 | 5,767 |
35,184,894 | 2016-02-03T18:31:00.000 | 1 | 0 | 0 | 0 | 0 | python-3.x,pandas | 0 | 35,184,973 | 0 | 1 | 0 | false | 0 | 0 | You can pass nrows=number_of_rows_to_read to your read_csv function to limit the lines that are read. | 1 | 1 | 1 | 0 | I often work with csv files that are 100s of GB in size. Is there any way to tell read_csv to only read a fixed number of MB from a csv file?
Update:
It looks like chunks and chunksize can be used for this, but the documentation looks a bit slim here. What would be an example of how to do this with a real csv file? (e.g. say a 100GB file, read only rows up to approximately ~10MB) | Limiting the number of GB to read in read_csv in Pandas | 0 | 0.197375 | 1 | 0 | 0 | 779 |
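As far as I know there is no direct "stop after N megabytes" switch, but a sketch combining the two hints (nrows for a flat cap, chunksize plus a running memory estimate for an approximate byte limit) could look like this; the 10 MB target, chunk size and file name are examples only.

    import pandas as pd

    TARGET_BYTES = 10 * 1024 ** 2        # roughly 10 MB of in-memory data

    # Option 1: a plain row cap, if you already know how many rows fit.
    head = pd.read_csv("huge.csv", nrows=50000)

    # Option 2: read in chunks and stop once the accumulated frames reach
    # the target size; only the kept chunks are ever held in memory.
    pieces, total = [], 0
    for chunk in pd.read_csv("huge.csv", chunksize=10000):
        pieces.append(chunk)
        total += chunk.memory_usage(deep=True).sum()
        if total >= TARGET_BYTES:
            break
    df = pd.concat(pieces, ignore_index=True)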
35,188,282 | 2016-02-03T21:40:00.000 | 0 | 0 | 0 | 0 | 0 | python,user-interface,wxpython | 0 | 35,229,531 | 0 | 1 | 0 | false | 0 | 1 | When you create the UI, you can keep the default config in a variable. A dictionary would probably work. Then when you create the tabs, you can pass them a dictionary. Alternatively, you could just save the defaults to a config file and then use Python to read it and load it into the UI. Python can parse csv, json, xml and whatnot right out of the box after all. | 1 | 0 | 0 | 0 | I'm really sorry if this question sounds really simple but I couldn't figure out the solution yet...
I'm using wxPython in order to create a GUI. I've used wx.Notebook, and created some tabs, all default configuration are located in the last tab.
My question is, how can I get these default values from the last tab and use it?!? I tried "pub" (wx.lib.pubsub), but I only get these default values after an event (e.g. button click). Also there is/are any magic to get these values after user modification without a button click?
Thanks all,
Regards, | Default values from wxPython notebook | 0 | 0 | 1 | 0 | 0 | 47 |
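One hedged reading of the dictionary suggestion above; the widget layout is invented and only the idea of handing the same defaults dict to every tab is the point.

    import wx

    DEFAULTS = {"units": "metric", "autosave": True}   # example defaults

    class TabPanel(wx.Panel):
        def __init__(self, parent, defaults):
            wx.Panel.__init__(self, parent)
            self.defaults = defaults                   # every tab sees the same dict
            wx.StaticText(self, label="autosave: %s" % defaults["autosave"])

    class MainFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, title="Demo")
            notebook = wx.Notebook(self)
            notebook.AddPage(TabPanel(notebook, DEFAULTS), "Data")
            notebook.AddPage(TabPanel(notebook, DEFAULTS), "Config")

    if __name__ == "__main__":
        app = wx.App(False)
        MainFrame().Show()
        app.MainLoop()

Because the tabs share one mutable dict, a change made in the last (configuration) tab is immediately visible to the others without any event plumbing.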