Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21,712,671 | 2014-02-11T20:47:00.000 | 2 | 0 | 1 | 0 | python,multithreading,python-3.x,gil | 21,712,781 | 2 | true | 0 | 0 | As Daniel said in the comments, it depends on how you "compile" the code.
For example, running the code using Jython does indeed get around the limitations imposed by the GIL.
On the other hand, using something like py2exe makes no difference, since this effectively just packages CPython alongside your code. | 1 | 1 | 0 | As the GIL is a lock that surrounds the interpreter, does it affect compiled Python? I'm wondering whether it is possible to get past the inherent multi-threading limitations of CPython by simply compiling my Python before executing it.
Hopefully that makes sense and I'm not missing something obvious or misinterpreting how the GIL works/affects execution.
Thanks | Is compiled multi-threaded Python affected by the GIL | 1.2 | 0 | 0 | 357 |
21,715,132 | 2014-02-11T23:07:00.000 | 0 | 1 | 0 | 0 | python,unicode,flask | 21,716,642 | 1 | true | 1 | 0 | OK, after wrestling with it under the hood for a while I fixed it, but not in a very elegant way, I had to modify the source of some werkzeug things. In "http.py", I replaced str(value) with unicode(value), and replaced every instance of "latin-1" with "utf-8" in both http.py and datastructures.py. It fixed the problem, file gets downloaded fine in both the latest Firefox and Chrome. As I said before, I would rather not have to modify the source of the libraries I am using because this is a pain when deploying/testing on different systems, so if anyone has a better fix for this please share. I've seen some people recommend just making the filename part of the URL but I cannot do this as I need to keep my URLs simple and clean. | 1 | 1 | 0 | So I am using Flask to serve some files. I recently downgraded the project from Python 3 to Python 2.7 so it would work with more extensions, and ran into a problem I did not have before. I am trying to serve a file from the filesystem with a Japanese filename, and when I try return send_from_directory(new_folder_path, filename, as_attachment=True)
I get UnicodeEncodeError: 'ascii' codec can't encode characters in position 15-20: ordinal not in range(128). in quote_header_value = str(value) (that is a werkzeug thing).
I have a template set to display the filename on the page by just having {{filename}} in the HTML, and it displays just fine, so I'm assuming it is somehow reading the name from the filesystem? Only when I try send_from_directory so the user can download it does it throw this error. I tried a bunch of combinations of .encode('utf-8') and .decode('utf-8'), none of which worked at all, and I'm getting very frustrated with this. In Python 3 everything just worked seamlessly because everything was treated as unicode, and searching for a way to solve this brought up results that seem to require a degree in compsci to wrap my head around. Does anyone have a fix for this?
Thanks. | Python - how to send file from filesystem with a unicode filename? | 1.2 | 0 | 0 | 887 |
21,716,890 | 2014-02-12T01:36:00.000 | 1 | 0 | 1 | 0 | python,serialization,pickle,shelve | 21,718,777 | 2 | true | 1 | 0 | Without trying it out I'm fairly sure the answer is:
1. They can both be served at once; however, if one user is reading while the other is writing, the reading user may get strange results.
2. Probably not. Once the tree has been read from the file into memory, the other user will not see edits made by the first user. If the tree hasn't been read from the file yet, then the change will still be picked up.
3. Both changes will be made simultaneously and the file will likely be corrupted.
Also, you mentioned shelve. From the shelve documentation:
The shelve module does not support concurrent read/write access to
shelved objects. (Multiple simultaneous read accesses are safe.) When
a program has a shelf open for writing, no other program should have
it open for reading or writing. Unix file locking can be used to solve
this, but this differs across Unix versions and requires knowledge
about the database implementation used.
Personally, at this point, you may want to look into using a simple key-value store like Redis with some kind of optimistic locking. | 1 | 0 | 0 | I have data that is best represented by a tree. Serializing the structure makes the most sense, because I don't want to sort it every time, and it would allow me to make persistent modifications to the data.
On the other hand, this tree is going to be accessed from different processes on different machines, so I'm worried about the details of reading and writing. Basic searches didn't yield very much on the topic.
If two users simultaneously attempt to revive the tree and read from it, can they both be served at once, or does one arbitrarily happen first?
If two users have the tree open (assuming they can) and one makes an edit, does the other see the change implemented? (I assume they don't because they each received what amounts to a copy of the original data.)
If two users alter the object and close it at the same time, again, does one come first, or is an attempt made to make both changes simultaneously?
I was thinking of making a queue of changes to be applied to the tree, and then having the tree execute them in the order of submission. I thought I would ask what my problems are before trying to solve any of them. | Can serialized objects be accessed simultaneously by different processes, and how do they behave if so? | 1.2 | 0 | 0 | 2,682 |
21,717,013 | 2014-02-12T01:48:00.000 | 0 | 0 | 0 | 0 | javascript,python,graph,visualization,network-flow | 21,729,708 | 2 | false | 0 | 0 | d3 may be the solution to what you're trying to do, but it's good to keep in mind what it is and what it is not.
What it is: a very effective tool for creating data-based graphics. What it is not: a graphing library. That being said, you CAN use it for graphs. Most of the graphs that I do in JavaScript are built on d3, but when doing so, expect to write a lot of code to set up your plots. You can create a flow graph that will show you what you want, but d3 doesn't contain a canned flow graph that you can drop your data into. | 1 | 0 | 0 | I'm looking to implement (or use a library if one already exists) the Max Flow algorithm on a graph with directed and undirected edges, and visualize it. I am leaning toward JavaScript. I am aware that d3.js and arbor.js allow interactive graph visualization, but is there a recommended way to visualize the actual flow from node to node? This is to demonstrate some concepts in theoretical computer science.
The ideal graph would be able to show edge capacities, edge costs (different from capacities), and node names, and edges can be one-way (directed) or two-way (bidirectional, arrows pointing to both nodes, or just no arrows at all. This is not two separate directed edges).
Any advice regarding a graph visualization tool - one where you can see the flow going from edge to edge - would be appreciated.
Note: I am not opposed to using Python or some other language if someone is aware of a nice framework/library that allows this kind of visualization.
Thanks. | Max flow visualization with JavaScript api - use d3.js, or something similar? | 0 | 0 | 1 | 733 |
21,718,701 | 2014-02-12T04:30:00.000 | 2 | 0 | 1 | 0 | windows,python-2.7,ctypes | 21,718,853 | 2 | false | 0 | 1 | Try passing SPIF_SENDCHANGE (which is 2) as the last parameter. You might also need to bitwise-or it with SPIF_UPDATEINIFILE (which is 1). | 2 | 1 | 0 | I am using windows 8 (not yet updated to 8.1)
The code I am using is
import ctypes
SPI_SETDESKWALLPAPER = 20
ctypes.windll.user32.SystemParametersInfoA(SPI_SETDESKWALLPAPER, 0, "word.jpg", 0)
print "hi"
For some reason, regardless of whether I give it a valid image (in the same directory as the program) or not, and regardless of the type of image (bmp, gif, jpg), the code always ends up setting my background to a black screen.
Why is this? How can it be fixed? | How to Change Windows background using Python 2.7.3 | 0.197375 | 0 | 0 | 1,203 |
21,718,701 | 2014-02-12T04:30:00.000 | 0 | 0 | 1 | 0 | windows,python-2.7,ctypes | 26,006,082 | 2 | false | 0 | 1 | Sorry, I know this is late, but the problem is that you need to include the path. Instead of "image.jpg" do r"C:\path to file\image.jpg" Otherwise python doesn't know where to look for the image. | 2 | 1 | 0 | I am using windows 8 (not yet updated to 8.1)
The code I am using is
import ctypes
SPI_SETDESKWALLPAPER = 20
ctypes.windll.user32.SystemParametersInfoA(SPI_SETDESKWALLPAPER, 0, "word.jpg", 0)
print "hi"
For some reason, regardless of whether I give it a valid image (in the same directory as the program) or not, and regardless of the type of image (bmp, gif, jpg), the code always ends up setting my background to a black screen.
Why is this? How can it be fixed? | How to Change Windows background using Python 2.7.3 | 0 | 0 | 0 | 1,203 |
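A minimal sketch that combines the two answers above (the broadcast/update flags plus an absolute path); the filename is just an example:

```python
import ctypes
import os

SPI_SETDESKWALLPAPER = 20
SPIF_UPDATEINIFILE = 0x01
SPIF_SENDCHANGE = 0x02

# A bare relative filename often fails silently (black screen), so build an absolute path.
path = os.path.abspath("word.jpg")
ctypes.windll.user32.SystemParametersInfoA(
    SPI_SETDESKWALLPAPER, 0, path, SPIF_UPDATEINIFILE | SPIF_SENDCHANGE)
```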
21,719,461 | 2014-02-12T05:30:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine | 21,805,320 | 1 | true | 1 | 0 | I managed to discover the problem on my own.
The issue was with adding s~ before the app_id in the app.yaml file.
Despite the Google App Engine documentation stating that s~ should be before the app_id for applications using the High Replication Datastore, this apparently causes an error when uploading to the development server.
BadRequestError: Illegal string "dev~s~app_id" in dataset id.
The command I'm using the upload the data is
appcfg.py upload_data --url=http://localhost:8080/_ah/remote_api --filename=datastore_2-11-14
The command I used to download the data is
appcfg.py download_data --url=https://app_id.appspot.com/_ah/remote_api --filename=datastore_2-11-14 | Google App Engine, Illegal string in dataset id when uploading to local datastore | 1.2 | 0 | 0 | 141 |
21,719,726 | 2014-02-12T05:50:00.000 | -2 | 0 | 0 | 0 | python,django,django-models,django-admin | 21,720,470 | 2 | false | 1 | 0 | All the models (tables) are defined in models.py,
and all the queries can be written in views.py | 1 | 0 | 0 | I am creating a new web application with Django, but I have one question: if I need to create 30 or 40 tables in the database, do I need to put all the models into the single models.py file?
I think that would be very complicated to maintain, because this file may grow a lot.
So my question is: is that good practice? | In django It's correct create much models into file models.py | -0.197375 | 0 | 0 | 181 |
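One common pattern for keeping a large models.py manageable is to turn it into a package and re-export the models from its __init__.py; this is only a sketch, and the app and module names are hypothetical:

```python
# myapp/models/__init__.py
from .customers import Customer
from .orders import Order

# myapp/models/customers.py
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        app_label = "myapp"   # needed on older Django versions when models live outside models.py
```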
21,720,346 | 2014-02-12T06:29:00.000 | 1 | 0 | 0 | 1 | python,sockets,tornado | 21,735,934 | 2 | false | 0 | 0 | finish() doesn't apply here because a connection in the "keep-alive" state is not associated with a RequestHandler. In general there's nothing you can (or need to) do with a keep-alive connection except close it, since the browser isn't listening for a response.
Websockets are another story - in that case you may want to close the connections yourself before shutting down (but don't have to - your clients should be robust against the connection just going away). | 1 | 1 | 0 | I'm running a set of tornado instances that handles many requests from a small set of keep-alive connections. When I take down the server for maintenance I want to gracefully close the keep-alive requests so I can take the server down. Is there a way to tell clients "Hey this socket is closing" with Tornado? I looked around and self.finish() just flushes the connection. | Close all (keep-alive) socket connections in tornado? | 0.099668 | 0 | 1 | 1,783 |
21,721,827 | 2014-02-12T07:54:00.000 | 1 | 0 | 1 | 1 | python-2.7 | 21,722,581 | 2 | false | 0 | 0 | You should run the exe file as "Administrator".
Even if you are in the administrator account, you have to explicitly run it with administrator permission by right clicking on the exe. | 2 | 1 | 0 | I installed Python 2.7.6 Windows Installer (Windows binary) and then, I was trying to install the extension pywin32-218.win-amd64-py2.7.exe. But everytime I run this extension, I get the issue of "pywin32-218.win-amd64-py2.7.exe has stopped working". | Issue installing python windows extension | 0.099668 | 0 | 0 | 149 |
21,721,827 | 2014-02-12T07:54:00.000 | 0 | 0 | 1 | 1 | python-2.7 | 28,891,881 | 2 | false | 0 | 0 | You need to run it as administrator, anything that modifies folders it is not in or in its folder require root access which is right click -> run as admin in Windows or sudo in mac and linux. | 2 | 1 | 0 | I installed Python 2.7.6 Windows Installer (Windows binary) and then, I was trying to install the extension pywin32-218.win-amd64-py2.7.exe. But everytime I run this extension, I get the issue of "pywin32-218.win-amd64-py2.7.exe has stopped working". | Issue installing python windows extension | 0 | 0 | 0 | 149 |
21,723,830 | 2014-02-12T09:34:00.000 | 0 | 0 | 0 | 0 | python,pandas,dataframe | 59,060,580 | 2 | false | 0 | 0 | If you are replacing the entire row then you can just use an index and not need row,column slices.
...
data.loc[2]=5,6 | 1 | 14 | 1 | I want to start with an empty data frame and then add to it one row each time.
I can even start with a 0 data frame data=pd.DataFrame(np.zeros(shape=(10,2)),column=["a","b"]) and then replace one line each time.
How can I do that? | replace rows in a pandas data frame | 0 | 0 | 0 | 63,571 |
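A short sketch of the approach from the answer above; the column names are just examples:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros(shape=(10, 2)), columns=["a", "b"])
df.loc[2] = [5, 6]          # overwrite the existing row with index label 2
df.loc[len(df)] = [7, 8]    # append a brand-new row at the end
print(df)
```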
21,726,386 | 2014-02-12T11:15:00.000 | 0 | 0 | 1 | 0 | python | 31,185,952 | 2 | false | 0 | 0 | Are the keys large? If not, you can loop over the dict to determine which entries should be deleted; store the key for each such entry in a list. Then loop over those keys and delete them from the dict. | 1 | 0 | 0 | I have a Python application that performs correlation an large files. It stores those in a dict. Depending on the input files, this dict can become really large, to the point where it does not fit into memory anymore. This causes the system to hang, so I want to prevent this.
My idea is that there are always correlations which are not so relevant for the later processing. These could be deleted without changing the overall result too much. I want to do this when I have not much memory left.
Hence, I check for available memory periodically. If it becomes too low (say, less than 300MB), I delete the irrelevant correlations to gain more space. That's the theory.
Now for my problem: In Python, you cannot delete from a dict while iterating over it. But this is exactly what I need to do, since I have to check each dict entry for relevancy before deleting.
The usual solution would be to create a copy of the dict for iteration, or to create a new dict containing only the elements that I want to preserve. However, the dict might be several GBs big and there are only a few hundred MB of free memory left. So I cannot do much copying since that may again cause the system to hang.
Here I am stuck. Can anyone think of a better method to achieve what I need? If in-place deletion of dict entries is absolutely not possible while iterating, maybe there is some workaround that could save me?
Thanks in advance!
EDIT -- some more information about the dict:
The keys are tuples specifying the values by which the data is correlated.
The values are dicts containing the correlated date. The keys of these dicts are always strings, the values are numbers (int or float).
I am checking for relevancy by comparing the number values in the value-dicts with certain thresholds. If the values are below the thresholds, the particular correlation can be dropped. | Python: Deleting from a dict in-place | 0 | 0 | 0 | 219 |
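A sketch of the two-pass idea from the answer above; is_irrelevant() stands in for the threshold check described in the question:

```python
def prune(correlations, is_irrelevant):
    # Pass 1: collect only the keys to drop; the keys are tiny compared to the value dicts.
    # (On Python 2, iteritems() avoids materialising the full item list.)
    doomed = [key for key, value in correlations.items() if is_irrelevant(value)]
    # Pass 2: delete after iteration has finished, which is safe.
    for key in doomed:
        del correlations[key]
```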
21,729,761 | 2014-02-12T13:47:00.000 | 5 | 0 | 1 | 0 | python,ubuntu,bluetooth,bluez | 21,854,158 | 1 | true | 0 | 0 | Finally I could solve that problem!
Kill the Bluetooth-Applet:
sudo killall bluetooth-applet
For PIN-Pairing set sspmode to 0:
sudo hciconfig hci0 sspmode 0
I opened the simple-agent, so you can edit the code in RequestPinCode Method the if you want to:
sudo gedit /usr/local/bin/simple-agent
Start simple-agent:
su -c /usr/local/bin/simple-agent | 1 | 3 | 0 | I am trying to pair two devices without clicking on "match" on both devices for each pairing cycle. How can I set my own constant PIN? My Devices which should be connected is Notebook and a Smartphone.
I am using Python bluez on ubuntu. | Bluetooth bluez pairing without matching the autogenerated PIN on Ubuntu 12.10 | 1.2 | 0 | 0 | 5,850 |
21,730,339 | 2014-02-12T14:12:00.000 | 1 | 0 | 1 | 0 | python,pickle,nscoding,pyobjc | 21,734,013 | 2 | false | 0 | 0 | PyObjC does support writing Python objects to a (keyed) archive (that is, any object that can be pickled implements NSCoding).
That’s probably the easiest way to serialize arbitrary graphs of Python and Objective-C objects.
As I wrote in the comments for another answer I ran into problems when trying to find a way to implement pickle support for any object that implements NSCoding due to incompatibilities in how NSArchiver and pickle traverse the object graph (IIRC primarily when restoring the archive). | 2 | 2 | 1 | Title says it all. It seems like it ought be possible (somehow) to implement python-side pickling for PyObjC objects whose Objective-C classes implement NSCoding without re-implementing everything from scratch. That said, while value-semantic members would probably be straightforward, by-reference object graphs and conditional coding might be tricky. How might you get the two sides to "collaborate" on the object graph parts? | PyObjC: How can one use NSCoding to implement python pickling? | 0.099668 | 0 | 0 | 261 |
21,730,339 | 2014-02-12T14:12:00.000 | 0 | 0 | 1 | 0 | python,pickle,nscoding,pyobjc | 21,733,669 | 2 | false | 0 | 0 | Shouldn't it be pretty straightforward?
On pickling, call encodeWithCoder on the object using an NSArchiver or something. Have pickle store that string.
On unpickling, use NSUnarchiver to create an NSObject from the pickled string. | 2 | 2 | 1 | Title says it all. It seems like it ought be possible (somehow) to implement python-side pickling for PyObjC objects whose Objective-C classes implement NSCoding without re-implementing everything from scratch. That said, while value-semantic members would probably be straightforward, by-reference object graphs and conditional coding might be tricky. How might you get the two sides to "collaborate" on the object graph parts? | PyObjC: How can one use NSCoding to implement python pickling? | 0 | 0 | 0 | 261 |
21,730,906 | 2014-02-12T14:35:00.000 | 0 | 0 | 0 | 0 | javascript,python,postgresql,redis,uniqueidentifier | 21,740,467 | 2 | false | 1 | 0 | UUID version 4 is basically 122 random bits + 4 bits for UUID version + 2 bits reserved. Its uniqueness relies on low probability of generating the same 122 bits.
UUID version 5 is basically 122 hash bits + 4 bits for UUID version + 2 bits reserved. Its uniqueness relies on the low probability of collision for the truncated 122-bit SHA-1 hash.
When you replace N bits of a UUID (as long as they are not "version" or "reserved" bits), you make a tradeoff: the probability of collision becomes 2^N times higher.
For example, with plain UUID4 the probability of two given IDs colliding is negligible, on the order of 1 in 2^122. If you overwrite 8 of the random bits with a fixed type code, the collision probability becomes 2^8 = 256 times higher, roughly 1 in 2^114, which is larger but still negligible.
So probably using UUID4 with N bits replaced may be a safe option without taking extra care to guarantee uniquess. | 1 | 1 | 0 | I need to generate unique id in distributed environment. Catch is that each id has to have a group/type information that can be examined by simple script.
Details:
I have some, fixed, number of entity types (lets call them: message, resource, user, session, etc). I need to generate unique id in form: so i can know where to direct request based only on id - without db, list, or anything.
I have considered uuid in version 3 or 5 but as far as I can see it is impossible to know "namespace" provided for generating id.
I have also considered just replacing first x characters of uuid with fixed values but then i will lose uniqueness.
I have also considered Twitter snowflake or Instagram way of generating id's but I don't know the number of nodes in each group and I cannot assume anything.
I will be using them in JS, Python, Redis and Postgresql so portability of code (and representation - big integer representation is full of bugs in JavaScript) is required. So either pure "number" or string that can be formatted as uuid (binary representation) for database.
edit:
I will generate them in Python or in Postgresql and only pass them in JavaScript and Redis. | Generating entity id with easily distinguished types/groups | 0 | 0 | 0 | 115 |
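A sketch of the bit-replacement idea discussed above; the type codes are made up for illustration:

```python
import uuid

TYPE_CODES = {"message": 0x1, "resource": 0x2, "user": 0x3, "session": 0x4}

def typed_uuid4(entity_type):
    raw = uuid.uuid4().int
    # Overwrite the lowest 4 bits with the type code; collision odds rise only by 2**4.
    return uuid.UUID(int=(raw & ~0xF) | TYPE_CODES[entity_type])

def entity_type_of(u):
    codes_to_types = {v: k for k, v in TYPE_CODES.items()}
    return codes_to_types[u.int & 0xF]
```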
21,731,043 | 2014-02-12T14:40:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,input,python-2.x,raw-input | 44,509,311 | 6 | false | 0 | 0 | Explicitly load the function:
from builtins import input
Then you can use input() in python2 as well as python3.
You may have to install the dependency:
pip install future | 1 | 48 | 0 | I would like to set a user prompt with the following question:
save_flag is not set to 1; data will not be saved. Press enter to continue.
input() works in python3 but not python2. raw_input() works in python2 but not python3. Is there a way to do this so that the code is compatible with both python 2 and python 3? | Use of input/raw_input in python 2 and 3 | 0.033321 | 0 | 0 | 29,802 |
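If adding the future dependency is not an option, a small shim works on both versions (a sketch):

```python
try:
    input = raw_input        # Python 2: make input() behave like raw_input()
except NameError:
    pass                     # Python 3: input() already returns a string

input("\n\nsave_flag is not set to 1; data will not be saved. Press enter to continue.")
```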
21,736,206 | 2014-02-12T18:14:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,multiprocessing | 21,828,693 | 2 | false | 0 | 0 | Thanks for your responses.
I managed to solve the problem using
multiprocessing.Manager().dict()
Is this the best way....I'm not entirely sure. I think I need to read a lot more on Multiprocessing.
What I find challenging is that the multiprocessing module offers a lot of functionality, and as a beginner it's hard to know the right 'tools' to use for the job.
I started off with threads...then moved to Processes....then used Queues....then used Managers. But as I read more on this subject, I see Python has a lot more to offer. | 1 | 0 | 0 | From the main process, I create 3 child processes and I pass an instance of a 'common' class...the same instance is passed to all 3 child processes.
This common class has a dictionary and a queue (which contains a lot of items)
These child processes retrieve 'items' from this queue
For these items, I call a REST service to get some data about the 'item'
I add this info to the "common" dictionary
There are no errors
However, when I try to access this dictionary from the main process, its empty. | Python Multiprocessing - shared memory | 0 | 0 | 0 | 1,865 |
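A stripped-down sketch of the Manager-based setup the poster settled on; fetch_details() is a stand-in for the real REST call:

```python
import multiprocessing as mp

def fetch_details(item):
    return {"item": item}              # placeholder for the REST lookup

def worker(queue, shared):
    while True:
        item = queue.get()
        if item is None:               # sentinel: no more work
            break
        shared[item] = fetch_details(item)

if __name__ == "__main__":
    manager = mp.Manager()
    shared = manager.dict()            # proxy dict, visible to parent and children
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(queue, shared)) for _ in range(3)]
    for p in procs:
        p.start()
    for item in ["a", "b", "c"]:
        queue.put(item)
    for _ in procs:
        queue.put(None)
    for p in procs:
        p.join()
    print(dict(shared))                # the parent sees the populated dict
```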
21,736,489 | 2014-02-12T18:27:00.000 | 3 | 0 | 0 | 0 | python,django,apache,mod-wsgi | 47,601,617 | 2 | false | 1 | 0 | I solved this problem with:
python manage.py runserver --http_timeout 120 | 1 | 7 | 0 | I have Django + mod_wsgi + Apache server. I need to change default HTTP connection timeout. There is Timeout directive in apache config but it's not working.
How can I set this up? | Django http connection timeout | 0.291313 | 0 | 0 | 14,680 |
21,736,742 | 2014-02-12T18:39:00.000 | 0 | 0 | 0 | 0 | python,django,facebook,facebook-graph-api,facebook-fql | 21,740,472 | 1 | false | 1 | 0 | Find friend button is inefficient indeed, I would search the database just once at user registration. | 1 | 1 | 0 | I'm developing a Django site that has as a feature its own social network. Essentially, the site stores connections between users. I want users to be able to import their pre-existing Facebook connections (friends) to my site, so that they will automatically be connected to their existing Facebook friends who are users on my site.
The way I envision doing this is by allowing users to login with Facebook (probably with something like django-socialauth), and store the user's Facebook ID in the database. Then, each time a user clicks the "find friends from Facebook" button, I could query the Facebook API to see if any of my existing users are their friends. What's the best way to do this? I could use FQL and get a list of their friend's Facebook IDs, and then check that against my users', but that seems really inefficient at scale. Is there any way to do this without running through each of my users, one by one, and checking whether their Facebook ID is in the user's friends list?
Thanks. | Find user's Facebook friends who are also registered users on my site in Django | 0 | 0 | 0 | 220 |
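The single-query version of the lookup described above could look like this; the UserProfile model and its facebook_id field are assumptions about how the IDs are stored:

```python
from myapp.models import UserProfile   # hypothetical profile model holding each user's Facebook ID

def facebook_friends_on_site(friend_ids):
    # One IN query instead of checking every user one by one.
    return UserProfile.objects.filter(facebook_id__in=friend_ids)
```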
21,740,376 | 2014-02-12T21:44:00.000 | 2 | 0 | 0 | 0 | python,xml | 21,740,512 | 1 | true | 1 | 0 | Beautiful Soup has no streaming API that I know of. You have, however, alternatives.
The classic approach for parsing large XML streams is using an event-oriented parser, namely SAX. In python, xml.sax.xmlreader. It will not choke with malformed XML. You can avoid erroneous portions of the file and extract information from the rest.
SAX, however, is low-level and a bit rough around the edges. In the context of python, it feels terrible.
The xml.etree.cElementTree implementation, on the other hand, has a much nicer interface, is pretty fast, and can handle streaming through the iterparse() method.
ElementTree is superior, if you can find a way to manage the errors. | 1 | 3 | 0 | I have a dilemma.
I need to read very large XML files from all kinds of sources, so the files are often invalid XML or malformed XML. I still must be able to read the files and extract some info from them. I do need to get tag information, so I need XML parser.
Is it possible to use Beautiful Soup to read the data as a stream instead of the whole file into memory?
I tried to use ElementTree, but I cannot because it chokes on any malformed XML.
If Python is not the best language to use for this project please add your recommendations. | Need to read XML files as a stream using BeautifulSoup in Python | 1.2 | 0 | 1 | 1,799 |
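A sketch of the iterparse() route mentioned in the answer; note that it will still raise on malformed XML, so badly broken files need a try/except (or a recovering parser such as lxml):

```python
import xml.etree.cElementTree as ET

def stream_elements(path, wanted_tag):
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == wanted_tag:
            yield elem.attrib, elem.text
        elem.clear()   # discard the element so memory use stays flat
```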
21,740,498 | 2014-02-12T21:50:00.000 | 1 | 0 | 0 | 0 | python,csv,scipy,correlation | 21,740,743 | 1 | true | 1 | 0 | Each dataset is a column and all the datasets combined to make a CSV. It get read as a 2D array by numpy.genfromtxt() and then call numpy.corrcoef() to get correlation coefficients.
Note: you should also consider the same data layout, but using pandas. Read CSV into a dataframe by pandas.read_csv() and get the correlation coefficients by .corr() | 1 | 1 | 1 | I'm using a Java program to extract some data points, and am planning on using scipy to determine the correlation coefficients. I plan on extracting the data into a csv-style file. How should I format each corresponding dataset, so that I can easily read it into scipy? | Best format to pack data for correlation determination? | 1.2 | 0 | 0 | 73 |
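A sketch of both routes from the answer, assuming the Java program writes one column per dataset with a header row; the filename is just an example:

```python
import numpy as np
import pandas as pd

# NumPy route: rows are observations, columns are datasets.
data = np.genfromtxt("metrics.csv", delimiter=",", skip_header=1)
print(np.corrcoef(data, rowvar=False))   # coefficient matrix, one entry per pair of columns

# pandas route: same layout, labelled output.
df = pd.read_csv("metrics.csv")
print(df.corr())
```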
21,742,263 | 2014-02-12T23:35:00.000 | 2 | 0 | 1 | 0 | python,arrays,numbers | 21,742,331 | 1 | false | 0 | 0 | No you can't "continue" numbers over lines like you can statements etc...
You could really kind of cheat though...
n=int("""73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450""".replace('\n','')) | 1 | 0 | 0 | In project euler problem 8 , I cant put the number in variable . Is there any easy away
instead of spamming the backspace key? Something like an escape character, perhaps?
n=73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450 | Python project euler 8 huge number in variable | 0.379949 | 0 | 0 | 110 |
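Another option is Python's implicit concatenation of adjacent string literals, which avoids stripping whitespace at runtime; only the first two 50-digit chunks are shown here:

```python
n = int(
    "73167176531330624919225119674426574742355349194934"
    "96983520312774506326239578318016984801869478851843"
    # ...one quoted 50-digit chunk per line for the rest of the number...
)
```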
21,745,110 | 2014-02-13T04:14:00.000 | 2 | 0 | 1 | 0 | python,windows,download,installation | 21,745,507 | 1 | false | 0 | 0 | The Python 2.7.6 Windows Installer (32-bit Python) might have slightly more precompiled extension module compatability. The Python 2.7.6 Windows X86-64 Installer will give you something with a bit more memory capacity, which you probably don't need but might. | 1 | 1 | 0 | I have an HP laptop with a 64 bit operating system, and Windows 8. I have looked all over google and on the Python website itself, and I can't seem to find anything saying exactly what version of Python I should download. Is Python 2.7.6 Windows X86-64 Installer the correct version to install? The only other version that seems right is Python 2.7.6 Windows Installer, but I'm not sure what the difference is. | Should I download Python 2.7.6 Windows X86-64 Installer? | 0.379949 | 0 | 0 | 2,797 |
21,746,564 | 2014-02-13T06:12:00.000 | 0 | 0 | 0 | 0 | python,django,django-socialauth | 21,747,354 | 2 | true | 1 | 0 | A simple solution (when someone don't know the actual implementation) can be
Create a new table with user as a foreign key and one more column which will work as a flag for the type of authentication.
The flag can be 1 for a Django user and 2 for a social-auth user.
While creating the user in your system, populate this table accordingly.
Hide the option of change the password on the basis of same table. | 2 | 0 | 0 | I'm develop the site using Django and am using django social_auth API for social login authentication. Here in my website no need to display the change password option when am login using social account. So how to hide that option when am login with social account. OR If there is any possibility to know whether login using social account or website login. Kindly let me know if you have an idea to solve this issue. Thanking you. | How to know whether login using social account or website login in Django | 1.2 | 0 | 0 | 313 |
21,746,564 | 2014-02-13T06:12:00.000 | 0 | 0 | 0 | 0 | python,django,django-socialauth | 21,769,885 | 2 | false | 1 | 0 | Check the session value social_auth_last_login_backend, if it's set it will have the last social backend used to login, if it's not set, then it means that the user logged in with non-social auth. | 2 | 0 | 0 | I'm develop the site using Django and am using django social_auth API for social login authentication. Here in my website no need to display the change password option when am login using social account. So how to hide that option when am login with social account. OR If there is any possibility to know whether login using social account or website login. Kindly let me know if you have an idea to solve this issue. Thanking you. | How to know whether login using social account or website login in Django | 0 | 0 | 0 | 313 |
21,747,765 | 2014-02-13T07:26:00.000 | 0 | 0 | 0 | 0 | python,django,unicode,encoding,utf-8 | 21,777,287 | 2 | false | 1 | 0 | In your FileField definition the 'upload_to' argument might be like os.path.join(u'uploaded', 'files', '%Y', '%m', '%d')
(note that the first argument, u'uploaded', starts with u, so the joined path will be a unicode string) and this may help you. | 1 | 1 | 0 | I'm making a file-upload feature using django.db.models.FileField of Django 1.4
When I try to upload a file whose name includes non-ascii characters, it produces error below.
'ascii' codec can't encode characters in position 109-115: ordinal not
in range(128)
The actual code is like below
file = models.FileField(_("file"),
max_length=512,
upload_to=os.path.join('uploaded', 'files', '%Y', '%m', '%d'))
file.save(filename, file, save=True) #<- This line produces the error
above, if 'filename' includes non-ASCII characters
If I try to use unicode(filename, 'utf-8') instead of filename, it produces the error below
TypeError: decoding Unicode is not supported
How can I upload a file whose name has non-ascii characters?
Info of my environment:
sys.getdefaultencoding() : 'ascii'
sys.getfilesystemencoding() : 'UTF-8'
using Django-1.4.10-py2.7.egg | Django 1.4 - django.db.models.FileField.save(filename, file, save=True) produces error with non-ascii filename | 0 | 0 | 0 | 1,378 |
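A sketch of the answer's suggestion in context; the model name is hypothetical, and the point is that every path component is a unicode string on Python 2:

```python
# -*- coding: utf-8 -*-
import os
from django.db import models
from django.utils.translation import ugettext_lazy as _

class Upload(models.Model):
    file = models.FileField(
        _(u"file"),
        max_length=512,
        upload_to=os.path.join(u'uploaded', u'files', u'%Y', u'%m', u'%d'))
```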
21,755,235 | 2014-02-13T13:11:00.000 | 0 | 0 | 0 | 1 | python,macos,bundle,.app | 21,756,168 | 2 | false | 0 | 1 | I solved the problem, and in hindsight it was rather trivial. In the shell script, I need to invoke my binary with exec, so that the running bash process is replaced (a la execve()) rather than spawning a new process. The only problem is that my interpreter now replaces the icon with the stock one, but I have only one icon in the dock now, and behaves naturally. | 1 | 0 | 0 | I am trying to integrate a complex python application (with a custom python interpreter shipped along) for OSX. In order to handle a set of issues due to cross platform requirements, I created a .app bundle pointing at a shell script with its CFExecutable entry in Info.plist. This works, and the invoked shell script starts up the actual application binary. However, I have the following problems:
The .app icon bounces endlessly on the dock, never reaching the "activated" status. I guess it's because the shell script does not terminate. This dock entry has the correct "application icon"
When the binary executable is invoked by the script, a new Dock entry appears with a generic python icon. This icon successfully starts up and stops bouncing as the application starts up.
When I try to kill the first Dock entry via Force quit, the actual application still keeps running, as it's clearly controlled by the second entry on the dock.
Is there a way to have this setup behave more naturally? Do I need to ditch the shell script for an Objective-C wrapper? If I have to use an Objective-C wrapper (instead of a shell script) to spawn my application, how can I prevent the same secondary icon from appearing?
Edit: note, I am not running a python script. I am running a custom made python interpreter. py2app is not what I need. | How to have a natural MacOSX .app of a complex python application (including custom interpreter) via a shell initialization script? | 0 | 0 | 0 | 138 |
21,755,574 | 2014-02-13T13:25:00.000 | 1 | 0 | 1 | 0 | python,pdf,ipython,ipython-notebook | 55,154,772 | 6 | false | 0 | 0 | ipython nbconvert notebook.ipynb --to pdf | 3 | 3 | 0 | I'm trying to export my IPython notebook to pdf, but somehow I can't figure out how to do that. I searched through stackoverflow and already read about nbconvert, but where do I type that command? In the notebook? In the cmd prompt?
If someone can tell me, step by step, what to do? I'm using Python 3.3 and IPython 1.1.0.
Thank you in advance :) | IPython notebook - unable to export to pdf | 0.033321 | 0 | 0 | 13,981 |
21,755,574 | 2014-02-13T13:25:00.000 | 2 | 0 | 1 | 0 | python,pdf,ipython,ipython-notebook | 25,941,564 | 6 | false | 0 | 0 | open terminal
navigate to the directory of your notebook
ipython nbconvert mynotebook.ipynb --to latex --post PDF | 3 | 3 | 0 | I'm trying to export my IPython notebook to pdf, but somehow I can't figure out how to do that. I searched through stackoverflow and already read about nbconvert, but where do I type that command? In the notebook? In the cmd prompt?
If someone can tell me, step by step, what to do? I'm using Python 3.3 and IPython 1.1.0.
Thank you in advance :) | IPython notebook - unable to export to pdf | 0.066568 | 0 | 0 | 13,981 |
21,755,574 | 2014-02-13T13:25:00.000 | 1 | 0 | 1 | 0 | python,pdf,ipython,ipython-notebook | 54,918,257 | 6 | false | 0 | 0 | I was facing the same problem. I tried to use the option select File --> Download as --> Pdf via LaTeX (.pdf) in the notebook but it did not worked for me(It is not working for me). I tried other options too still not working.
I solved it by using the following very simple steps. I hope it will help you too:
You can do it by 1st converting the notebook into HTML and then into PDF format:
Following steps I have implemented on:
OS: Ubuntu, Anaconda-Jupyter notebook, Python 3
1 Save Notebook in HTML format:
Start the Jupyter notebook that you want to save in HTML format. First save the notebook properly so that the HTML file will have the latest saved version of your code/notebook.
Run the following command from the notebook itself:
!jupyter nbconvert --to html notebook_name.ipynb
Execution will create an HTML version of your notebook and save it in the current working directory. You will see an HTML file added to the current directory, named your_notebook_name.html
(notebook_name.ipynb --> notebook_name.html).
2 Save html as PDF:
Now open that notebook_name.html file (click on it). It will be opened in a new tab of your browser.
Now go to print option. From here you can save this file in pdf file format.
Note that from print option we also have the flexibility of selecting a portion of a notebook to save in pdf format. | 3 | 3 | 0 | I'm trying to export my IPython notebook to pdf, but somehow I can't figure out how to do that. I searched through stackoverflow and already read about nbconvert, but where do I type that command? In the notebook? In the cmd prompt?
If someone can tell me, step by step, what to do? I'm using Python 3.3 and IPython 1.1.0.
Thank you in advance :) | IPython notebook - unable to export to pdf | 0.033321 | 0 | 0 | 13,981 |
21,759,946 | 2014-02-13T16:30:00.000 | 1 | 0 | 1 | 0 | python | 21,759,988 | 5 | false | 0 | 0 | The input() function merely waits for you to enter a line of text (optional) till you press Enter. The sys.exit("some error message") is the correct way to terminate a program. This could be after the line with the input() function. | 1 | 1 | 0 | Michael Dawson says in his book Python Programming (Third Edition, page 14) that if I enter input("\n\nPress the enter key to exit.") when the user presses the Enter key the program will end.
I have tried this several times and it doesn't happen. I have tried using Python 3.1 and 3.3. Help would be appreciated. | How to exit program using the enter key | 0.039979 | 0 | 0 | 28,271 |
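Putting the book's prompt together with the answer above (use raw_input() on Python 2):

```python
import sys

input("\n\nPress the enter key to exit.")   # raw_input(...) on Python 2
sys.exit()                                  # end the program explicitly
```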
21,761,216 | 2014-02-13T17:25:00.000 | 0 | 1 | 0 | 0 | python,cron,raspberry-pi | 21,785,587 | 2 | false | 1 | 0 | You can always add the unix command sleep xx to the cronjob before executing your command.
Example: */15 * * * * (sleep 20; /root/crontabjob.sh)
Now the job will run 20 seconds after each 15-minute mark (00:15:20, 00:30:20, 00:45:20, ...). | 1 | 0 | 0 | I have a Python script that loads up a webpage on a Raspberry Pi.
This script MUST run at startup, and then every 15 minutes. In the future there will be many of these, maybe 1000 or even more. Currently I am doing this with a cronjob, but the problem with that is that all 1000 Raspberry Pis will connect to the webpage at the very same time (plus or minus a few seconds, given that they take the precise clock from the web). It would be good to execute the command 15 minutes after the last run, regardless of the time. I like the cronjob solution because I have nothing running in the background; it simply executes, does its job and then it's over.
On the other hand, cron only takes care of minutes, not seconds, so even if I scatter the 1000 Pis over these 15 minutes I will still end up with about 80 simultaneous requests to the webpage every single minute.
Is there a nice solution to this? | Cronjob at given interval | 0 | 0 | 0 | 52 |
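If editing 1000 crontabs individually is impractical, a variation on the sleep trick above is to add a random delay inside the Python script itself; this is only a sketch, and the URL is a placeholder:

```python
import random
import time
import urllib2               # urllib.request on Python 3

time.sleep(random.uniform(0, 300))                    # scatter the fleet over a 5-minute window
urllib2.urlopen("http://example.com/report").read()   # the page load the script normally does
```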
21,762,173 | 2014-02-13T18:09:00.000 | 0 | 0 | 1 | 0 | python,csv | 21,762,285 | 2 | false | 0 | 0 | If I were doing this I think I would add a marker line after each read - before the file is saved again , then I would read the file in as a string , split on the marker, convert back to a list and feed the list to the process. | 1 | 4 | 1 | I need to read a CSV with a couple million rows. The file grows throughout the day. After each time I process the file (and zip each row into a dict), I start the process over again, except creating the dict only for the new lines.
In order to get to the new lines though, I have to iterate over each line with CSV reader and compare the line number to my 'last line read' number (as far as I know).
Is there a way to just 'skip' to that line number? | Python CSV reader start at line_num | 0 | 0 | 0 | 1,904 |
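If the goal is simply to avoid comparing line numbers by hand, itertools.islice can do the skipping (it still reads past the earlier rows, but keeps the loop clean); a sketch:

```python
import csv
from itertools import islice

def process_new_rows(path, rows_already_seen, handle_row):
    with open(path, "rb") as f:            # "rb" for the csv module on Python 2
        reader = csv.reader(f)
        for row in islice(reader, rows_already_seen, None):
            handle_row(row)
        return reader.line_num             # remember this for the next run, as in the question
```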
21,762,574 | 2014-02-13T18:29:00.000 | 5 | 0 | 0 | 0 | python,cherrypy | 21,762,864 | 1 | true | 1 | 0 | Reloading modules is very, very hard to do in a sane way. It leads to the potential of stale objects in your code with impossible-to-interrogate state and subtle bugs. It's not something you want to do.
What real web applications tend to do is to have a server that stays alive in front of their application, such as Apache with mod_proxy, to serve as a reverse proxy. You start your new app server, change your reverse proxy's routing, and only then kill the old app server.
No downtime. No insane, undebuggable code. | 1 | 0 | 0 | Is it possible to use the python reload command (or similar) on a single module in a standalone cherrypy web app? I have a CherryPy based web application that often is under continual usage. From time to time I'll make an "important" change that only affects one module. I would like to be able to reload just that module immediately, without affecting the rest of the web application. A full restart is, admittedly, fast, however there are still several seconds of downtime that I would prefer to avoid if possible. | Reload single module in cherrypy? | 1.2 | 0 | 0 | 468 |
21,763,762 | 2014-02-13T19:29:00.000 | 0 | 0 | 1 | 1 | python,linux,eclipse,pydev,java-7 | 39,937,221 | 2 | false | 0 | 0 | Debian Jessie . Eclipse Mars 4.1 .
I installed whilst my java environment was set to /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java and after the restart no reference to the PyDev install could be found other than in the installation details.
After Changing to java 1.8 ( sudo update-alternatives --config java ) and restarting eclipse all PyDev components appeared. | 1 | 0 | 0 | Hope it helps someone else.
So the problem I had was:
I installed PyDev into Eclipse Kepler using Eclipse Marketplace. Everything goes on fine and ends successfully. But PyDev doesn't show up anywhere after restart. E.g. no Python Editor, No "PyDev" in the preferences, no PyDev perspective, ... It's as if PyDev isn't installed. The only place where it shows up is in the Eclipse Maretplace where I can see it under installed tab.
Tried to reinstall (uninstall from Marketplace) via update site. Same result.
I was using Java 1.6 with Eclipse Kepler and installing latest version of PyDev 3.3.3.
No errors reported in eclipse logs. | PyDev installation not working. No editor. No preferences | 0 | 0 | 0 | 869 |
21,763,924 | 2014-02-13T19:38:00.000 | 0 | 0 | 0 | 1 | django,python-multithreading,django-commands | 21,764,727 | 2 | false | 1 | 0 | If you don't want to implement celery (which in my opinion isn't terribly difficult to setup), then your best bet is probably implementing a very simple queue using either your database. It would probably work along the lines of this:
The system determines that an email needs to be sent and creates a row in the database with a status of 'created' or 'queued'.
On the other side there will be a process that scans your "queue" periodically. If it finds anything to send (in this case, any rows with status 'created'/'queued'), it will update the status to 'sending'. The process will then proceed to send the email and finally update the status to 'sent'.
This will take care of both asynchronously sending the objects and keeping track of the statuses of all emails should things go awry.
You could potentially go with a Redis backend for your queue if the additional updates are too taxing onto your database as well. | 1 | 0 | 0 | I'm running a django app and when some event occurs I'd like to send email to a list of recipients.
I know that using Celery would be an intelligent choice, but I'd like to know if there's another, most simple way to do it without having to install a broker server, supervisor to handle the daemon process running in the background...
I'd like to find a more simple way to do it and change it to celery when needed. I'm not in charge of the production server and I know the guy who's running it will have big troubles setting all the configuration to work. I was thinking about firing a django command which opens several processes using multiprocessing library or something like that. | Simple way to send emails asynchronously | 0 | 0 | 0 | 345 |
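A rough sketch of the queue table described in the answer above; the field names are assumptions:

```python
from django.db import models

class QueuedEmail(models.Model):
    QUEUED, SENDING, SENT = "queued", "sending", "sent"
    STATUS_CHOICES = [(s, s) for s in (QUEUED, SENDING, SENT)]

    recipient = models.EmailField()
    subject = models.CharField(max_length=255)
    body = models.TextField()
    status = models.CharField(max_length=10, choices=STATUS_CHOICES, default=QUEUED)
    created = models.DateTimeField(auto_now_add=True)
```

A management command run from cron (or a small loop) can then pick up rows with status 'queued', mark them 'sending', send them, and mark them 'sent'.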
21,764,170 | 2014-02-13T19:51:00.000 | 0 | 0 | 1 | 0 | python,codeskulptor | 21,801,491 | 1 | true | 1 | 0 | This is probably an insecure solution but it works for my current purposes. My PHP script calls shell_exec('python3 -c "X") where X is the user-supplied code appended with the code I use for testing, e.g. calling their created functions etc. | 1 | 0 | 0 | For a school project I want to make a small Codecademy-like site that teaches programming for beginners. I want the site to teach Python as it has a syntax that is suitable for beginners to learn, and for this reason I found Skulpt to be useful as it has both browser text and drawing output capabilities.
My question now though is, is there some way to integrate testing with the code the user writes, so the site can mark the code as correct or incorrect? E.g. a task could be to write a function that returns the nth fibonacci number, and the site runs the user-provided code and checks for instance that their fib(5) returns 8.
How does CodingBat do it? | Python in the browser (Skulpt/Codeskulptor) + tests? | 1.2 | 0 | 0 | 324 |
21,765,266 | 2014-02-13T20:54:00.000 | 1 | 0 | 0 | 1 | python,django,multithreading,rabbitmq,celery | 21,765,816 | 2 | true | 1 | 0 | It's impossible to really answer your question without an in-depth analysis of your actual code AND benchmark protocol, and while having some working experience with Python, Django and Celery I wouldn't be able to do such an in-depth analysis. Now there are a couple very obvious points :
if your workers are running on the same computer as your Django instance, they will compete with Django process(es) for CPU, RAM and IO.
if the benchmark "client" is also running on the same computer then you have a "heisenbench" case - bombing a server with 100s of HTTP request per second also uses a serious amount of resources...
To make a long story short: concurrent / parallel programming won't give you more processing power, it will only allow you to (more or less) easily scale horizontally. | 2 | 0 | 0 | I'm doing some metric analysis on on my web app, which makes extensive use of celery. I have one metric which measures the full trip from a post_save signal through a celery task (which itself calls a number of different celery tasks) to the end of that task. I've been hitting the server with up to 100 requests in 5 seconds.
What I find interesting is that when I hit the server with hundreds of requests (which entails thousands of celery worker processes being queued), the time it takes for the trip from post save to the end of the main celery task increases significantly, even though I never do any additional database calls, and none of the celery tasks should be blocking the main task.
Could the fact that there are so many celery tasks in the queue when I make a bunch of requests really quickly be slowing down the logic in my post_save function and main celery task? That is, could the processing associated with getting the sub-tasks that the main celery task creates onto a crowded queue be having a significant impact on the time it takes to reach the end of the main celery task? | Does Django Block When Celery Queue Fills? | 1.2 | 0 | 0 | 400 |
21,765,266 | 2014-02-13T20:54:00.000 | 0 | 0 | 0 | 1 | python,django,multithreading,rabbitmq,celery | 34,550,948 | 2 | false | 1 | 0 | I'm not sure about slowing down, but it can cause your application to hang. I've had this problem where one application would backup several other queues with no workers. My application could then no longer queue messages.
If you open up a django shell and try to queue a task. Then hit ctrl+c. I can't quite remember what the stack trace should be, but if you post it here I could confirm it. | 2 | 0 | 0 | I'm doing some metric analysis on on my web app, which makes extensive use of celery. I have one metric which measures the full trip from a post_save signal through a celery task (which itself calls a number of different celery tasks) to the end of that task. I've been hitting the server with up to 100 requests in 5 seconds.
What I find interesting is that when I hit the server with hundreds of requests (which entails thousands of celery worker processes being queued), the time it takes for the trip from post save to the end of the main celery task increases significantly, even though I never do any additional database calls, and none of the celery tasks should be blocking the main task.
Could the fact that there are so many celery tasks in the queue when I make a bunch of requests really quickly be slowing down the logic in my post_save function and main celery task? That is, could the processing associated with getting the sub-tasks that the main celery task creates onto a crowded queue be having a significant impact on the time it takes to reach the end of the main celery task? | Does Django Block When Celery Queue Fills? | 0 | 0 | 0 | 400 |
21,765,396 | 2014-02-13T21:00:00.000 | 0 | 0 | 0 | 0 | python,selenium-webdriver | 22,903,875 | 3 | false | 1 | 0 | css=span.error -- Error
css=span.warning -- Warning
css=span.critical -- Critical Error
Simple above are the CSS Selectors we can use. | 1 | 2 | 0 | I am trying to target specific CSS elements on a page, but the problem is that they have varying selector names. For instance, input#dp156435476435.textinput.wihtinnextyear.datepicker.hasDatepicker.error. I need to target the CSS because i am specifcally looking for the .error at the end of the element, and that is only in the CSS (testing error validation for fields on a website. I know if I was targeting class/name/href/id/etc, I could use xpath, but I'm not aware of a partial CSS selector in selenium webdriver. Any help would be appreciated, thanks! | Selenium webdriver, Python - target partial CSS selector? | 0 | 0 | 1 | 2,815 |
21,765,573 | 2014-02-13T21:12:00.000 | 2 | 0 | 1 | 0 | python,python-3.x | 21,765,988 | 3 | false | 0 | 0 | Actually is more like len(str(abs(n))) because -1 should probably have length 1. | 1 | 1 | 0 | In other words, if I am presented with the number 54352, I want an expression that will tell me the width of that number is 5. I could use a for each loop to do this I know, but that seems rather cumbersome. Any other ideas? | Without using module, how can I determine the width of an integer? | 0.132549 | 0 | 0 | 44 |
21,765,647 | 2014-02-13T21:16:00.000 | 4 | 0 | 0 | 0 | python,image,image-processing,numpy,pytables | 21,765,862 | 2 | true | 0 | 0 | You can use numpy.memmap and let the operating system decide which parts of the image file to page in or out of RAM. If you use 64-bit Python the virtual memory space is astronomic compared to the available RAM. | 1 | 1 | 1 | I'm trying to compute the difference in pixel values of two images, but I'm running into memory problems because the images I have are quite large. Is there way in python that I can read an image lets say in 10x10 chunks at a time rather than try to read in the whole image? I was hoping to solve the memory problem by reading an image in small chunks, assigning those chunks to numpy arrays and then saving those numpy arrays using pytables for further processing. Any advice would be greatly appreciated.
Regards,
Berk | How to read a large image in chunks in python? | 1.2 | 0 | 0 | 3,371 |
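A sketch of the memmap idea for raw, uncompressed pixel data; the shapes, dtypes and filenames are assumptions, and encoded formats such as JPEG would first need to be decoded into a raw array:

```python
import numpy as np

H, W = 20000, 20000                      # assumed image dimensions
a = np.memmap("image_a.raw", dtype=np.uint8, mode="r", shape=(H, W))
b = np.memmap("image_b.raw", dtype=np.uint8, mode="r", shape=(H, W))
diff = np.lib.format.open_memmap("diff.npy", mode="w+", dtype=np.int16, shape=(H, W))

for start in range(0, H, 1024):          # work on one horizontal band at a time
    band = slice(start, min(start + 1024, H))
    diff[band] = a[band].astype(np.int16) - b[band]
diff.flush()
```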
21,767,229 | 2014-02-13T22:48:00.000 | 1 | 0 | 0 | 0 | python,django,sqlite | 21,768,188 | 2 | true | 1 | 0 | If you care about taking control over every single aspect of how you want to render your data in HTML and serve it to others, Then for sure Django is a great tool to solve your problem.
Django's ORM models make it easier for you to read and write to your database, and they're database-agnostic. Which means that you can reuse the same code with a different database (like MySQL) in the future.
So, to wrap it up. If you're planning to do more development in the future, then use Django. If you only care about creating these HTML pages once and for all, then don't.
PS: With Django, you can easily integrate these scripts into your Django project as management commands, run them with cronjobs and integrate everything you develop together with a unified data access layer. | 1 | 0 | 0 | I am just beginning learning Django and working through the tutorial, so sorry if this is very obvious.
I have already a set of Python scripts whose ultimate result is an sqlite3 db that gets constantly updated; is Django the right tool for turning this sqlite db something like a pretty HTML table for a website?
I can see that Django is using an sqlite db for managing groups/users and data from its apps (like the polls app in the tutorial), but I'm not yet sure where my external sqlite db, driven by my other scripts, fits into the grand scheme of things?
Would I have to modify my external python scripts to write out to a table in the Django db (db.sqlite3 in the Django project dir in tutorial at least), then make a Django model based on my database structure and fields?
Basically,I think my question boils down to:
1) Do I need to create Django model based on my db, then access the one and only Django "project db", and have my external script write into it.
2) or can Django utilise somehow a seperate db driven by another script somehow?
3) Finally, is Django the right tool for such a task before I invest weeks of reading... | Django and external sqlite db driven by python script | 1.2 | 1 | 0 | 1,298 |
21,767,951 | 2014-02-13T23:35:00.000 | 6 | 0 | 0 | 0 | google-app-engine,python-2.7 | 21,767,994 | 1 | false | 1 | 0 | Explicitly setting a property to None is defining a value, and yes defaults work and the property will be indexed. This assumes None is a valid value for a particular property type.
Some issues will arise, as you pointed out, often you use None as a sentinal value, so how do you tell between no Value provided and an explicit None? | 1 | 2 | 0 | I'd like to be able to run a query like: MyModel.query(MyModel.some_property == None) and get results. I know that if I don't put a default=<some default> in a property, I won't be able to query for it, but if I set default=None will it index it?
Similarly, does setting values to None cause properties to be indexed in ndb.Model? What if you pass some_keyword_arg=None to the constructor?
I know that doing something like: ndb.StringProperty(default='') means you can query on it, just not clear on the semantics of using None. | Does NDB still index with default=None or properties set to None? | 1 | 0 | 0 | 363 |
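For reference, a small sketch of the pattern being discussed (the model and property names are made up): per the accepted answer above, explicitly setting a property to None still defines a value that gets stored and indexed, so an equality filter against None works:

    from google.appengine.ext import ndb

    class MyModel(ndb.Model):
        # per the answer above, None is stored and indexed for this property
        some_property = ndb.StringProperty(default=None)

    MyModel(some_property=None).put()
    results = MyModel.query(MyModel.some_property == None).fetch()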
21,768,416 | 2014-02-14T00:14:00.000 | 0 | 0 | 1 | 0 | c#,python,ironpython,pycharm | 21,776,226 | 1 | false | 0 | 0 | It looks like PyCharm handles Python C extensions by generating a "skeleton" module and using that for completion. The same approach would work for IronPython easily, thanks to .NET reflection, but I don't know if PyCharm supports that sort of extensibility. | 1 | 0 | 0 | I have IronPython right now working with PyCharm. Is it possible to import classes from a 3rd party .NET DLL that I have written and get code completion with it?
Currently I'm creating a .NET application where users can upload their Python scripts and interact with the application. Basically I want to create a .NET library that users can import into their Python project and use classes from it with code completion.
Is this possible? | IronPython in PyCharm 3rd Party DLLs | 0 | 0 | 0 | 703 |
21,772,673 | 2014-02-14T06:53:00.000 | 0 | 0 | 0 | 0 | python,django | 21,795,994 | 3 | false | 1 | 0 | First copy the file today.html into your project's templates folder.
Add that folder's path to TEMPLATE_DIRS in your settings.py file, i.e. your-project-path/your-templates-path (the folder containing today.html).
Then create a view function in views.py that opens/renders today.html.
Now give the URL in your urls.py file,
i.e. url(r'^today/$', 'your project-path.views.today', name='today'),
and that's it. | 1 | 0 | 0 | I have a html file in my home directory named today.html
I have to load that file on a click of a link using django.
How to add the file path in views.py and settings.py files. | How to add path for the file today.html in views.py and settings.py in django | 0 | 0 | 0 | 1,522 |
21,772,673 | 2014-02-14T06:53:00.000 | 0 | 0 | 0 | 0 | python,django | 21,772,731 | 3 | false | 1 | 0 | You could add your home directory path to the TEMPLATE_DIRS setting in your project's settings.py file. Then when you try to render the template in your view, Django will be able to find it. | 2 | 0 | 0 | I have a html file in my home directory named today.html
I have to load that file on a click of a link using django.
How to add the file path in views.py and settings.py files. | How to add path for the file today.html in views.py and settings.py in django | 0 | 0 | 0 | 1,522 |
21,773,514 | 2014-02-14T07:45:00.000 | 0 | 0 | 0 | 0 | python,pandas | 21,773,776 | 2 | false | 0 | 0 | In pandas it would be del df['columnname']. | 1 | 1 | 1 | I have a dataframe where some columns (not rows) are like ["","","",""].
I would like to delete the columns with that characteristic.
Is there an efficient way of doing that? | Detecting certain columns and deleting these | 0 | 0 | 0 | 72 |
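A short sketch of one way to do what the question asks, building on the del approach from the answer above (the dataframe df is assumed to already exist):

    # find the columns whose every value is the empty string
    empty_cols = [col for col in df.columns if (df[col] == "").all()]

    # drop them, e.g. with del as suggested above
    for col in empty_cols:
        del df[col]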
21,773,821 | 2014-02-14T08:03:00.000 | -1 | 1 | 1 | 0 | python,unit-testing,python-unittest | 21,775,708 | 5 | false | 0 | 0 | The problem is that the name of config_custom.csv should itself be a configurable parameter. Then each test can simply look for config_custom_<nonce>.csv, and any number of tests may be run in parallel.
Cleanup of the overall suite can just clear out config_custom_*.csv, since we won't be needing any of them at that point. | 2 | 7 | 1 | tl;dr - I want to write a Python unittest function that deletes a file, runs a test, and then restores the file. This causes race conditions because unittest runs multiple tests in parallel, and deleting and creating the file for one test messes up other tests that happen at the same time.
Long Specific Example:
I have a Python module named converter.py and it has associated tests in test_converter.py. If there is a file named config_custom.csv in the same directory as converter.py, then the custom configuration will be used. If there is no custom CSV config file, then there is a default configuration built into converter.py.
I wrote a unit test using unittest from the Python 2.7 standard library to validate this behavior. The unit test in setUp() would rename config_custom.csv to wrong_name.csv, then it would run the tests (hopefully using the default config), then in tearDown() it would rename the file back the way it should be.
Problem: Python unit tests run in parallel, and I got terrible race conditions. The file config_custom.csv would get renamed in the middle of other unit tests in a non-deterministic way. It would cause at least one error or failure about 90% of the time that I ran the entire test suite.
The ideal solution would be to tell unittest: Do NOT run this test in parallel with other tests, this test is special and needs complete isolation.
My work-around is to add an optional argument to the function that searches for config files. The argument is only passed by the test suite. It ignores the config file without deleting it. Actually deleting the test file is more graceful, that is what I actually want to test. | Making a Python unit test that never runs in parallel | -0.039979 | 0 | 0 | 2,425 |
21,773,821 | 2014-02-14T08:03:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,python-unittest | 34,140,669 | 5 | false | 0 | 0 | The best testing strategy would be to make sure you're testing on disjoint data sets. This will bypass any race conditions and make the code simpler. I would also mock out open or __enter__ / __exit__ if you're using the context manager. This will allow you to fake the event that a file doesn't exist. | 2 | 7 | 1 | tl;dr - I want to write a Python unittest function that deletes a file, runs a test, and then restores the file. This causes race conditions because unittest runs multiple tests in parallel, and deleting and creating the file for one test messes up other tests that happen at the same time.
Long Specific Example:
I have a Python module named converter.py and it has associated tests in test_converter.py. If there is a file named config_custom.csv in the same directory as converter.py, then the custom configuration will be used. If there is no custom CSV config file, then there is a default configuration built into converter.py.
I wrote a unit test using unittest from the Python 2.7 standard library to validate this behavior. The unit test in setUp() would rename config_custom.csv to wrong_name.csv, then it would run the tests (hopefully using the default config), then in tearDown() it would rename the file back the way it should be.
Problem: Python unit tests run in parallel, and I got terrible race conditions. The file config_custom.csv would get renamed in the middle of other unit tests in a non-deterministic way. It would cause at least one error or failure about 90% of the time that I ran the entire test suite.
The ideal solution would be to tell unittest: Do NOT run this test in parallel with other tests, this test is special and needs complete isolation.
My work-around is to add an optional argument to the function that searches for config files. The argument is only passed by the test suite. It ignores the config file without deleting it. Actually deleting the test file is more graceful, that is what I actually want to test. | Making a Python unit test that never runs in parallel | 0 | 0 | 0 | 2,425 |
21,782,897 | 2014-02-14T15:11:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,login,django-middleware | 21,784,110 | 2 | false | 1 | 0 | Maybe you should try to use the user_logged_in signal instead of middleware?
You can also check the user object from the request for is_anonymous; maybe that helps | 1 | 0 | 0 | Let's say I created a Middleware which should redirect the user after login to a view with the "next" parameter taken from LOGIN_REDIRECT_URL. But it should do it only once, directly after logging in, not with every request to LOGIN_REDIRECT_URL. At the moment I check User.last_login and compare it with datetime.datetime.now(), but it seems not to be a reasonable solution. Any better ideas? | Django Middleware, single action after login | 0 | 0 | 0 | 795
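A small sketch of the signal-based approach suggested above (the names are made up): the user_logged_in receiver sets a one-shot session flag, and the redirect logic fires only while that flag is present, instead of comparing last_login with the current time:

    from django.contrib.auth.signals import user_logged_in
    from django.dispatch import receiver

    @receiver(user_logged_in)
    def flag_fresh_login(sender, request, user, **kwargs):
        # set exactly once, at login time
        request.session['just_logged_in'] = True

    # later, in your middleware or in the LOGIN_REDIRECT_URL view:
    # if request.session.pop('just_logged_in', False):
    #     ...redirect to the "next" target exactly once...

Since session.pop removes the flag, the redirect happens only on the first request after logging in.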
21,785,951 | 2014-02-14T17:38:00.000 | 1 | 0 | 1 | 0 | python | 21,785,991 | 2 | false | 0 | 0 | It really depends what you mean by "display" the file. When we display text, we need to take the file, get all of its text, and put it onto the screen. One possible display would be to read every line and print them. There are certainly others. You're going to have to open the file and read the lines in order to display it, though, unless you make a shell command to something like vim file.txt. | 1 | 0 | 0 | I am trying to find a way to display a txt/csv file to the user of my Python script. Everytime I search how to do it, I keep finding information on how to open/read/write etc ... But I just want to display the file to the user.
Thank you in advance for your help. | Display a file to the user with Python | 0.099668 | 0 | 0 | 47 |
21,788,522 | 2014-02-14T20:01:00.000 | 0 | 0 | 0 | 0 | python,user-interface | 21,789,391 | 1 | true | 0 | 1 | The console could possibly appear if you used the 'console' parameter to setup(). Switch to 'windows' instead if that is the case. Can't say for sure without seeing your setup.py script. Possibly your app could also be opening console, but again hard to say without seeing source. One thing to check is to make sure you are not printing anything to stdout or stderr. You might want to redirect all of stdout and stderr to your log just in case, and do this right at the top of your start script so that if some 3rd party import was writing to stdout you'd be able to capture that.
The db is not part of your executable, so py2exe will not do anything with it. However, you should probably package your application with an installer, and you can make the installer include the db and install it along with the executable. | 1 | 0 | 0 | I am new to python programming and development. After much self study through online tutorials I have been able to make a GUI with wxpython. This GUI interacts with a access database in my computer to load list of teams and employees into the comboboxes.
Now my first question is while converting the whole program into a windows exe file can I also include the .accdb file with it...as in I only need to send the exe file to the users and not the database..if yes how.
My second question is... I actually tried converting the program into exe using the py2exe (excluding the database...am not sure how to do that) and I got the .exe file of my program into the "Dist" folder. But when I double click it to run it a black screen (cmd) appears for less than a second and disappears. Please help me understand the above issue and resolve it.
am not sure if I have a option of attaching files...then I could have attached my wxpython program for reference.
Thanks in advance.
Regards,
Premanshu | wxpython GUI program to exe using py2exe | 1.2 | 0 | 0 | 373 |
21,790,271 | 2014-02-14T21:59:00.000 | -1 | 0 | 0 | 1 | python,queue,popen | 21,790,418 | 2 | false | 0 | 0 | Use subprocess.call() instead of Popen, or use Popen.wait(). | 2 | 0 | 0 | I currently have a working python application, gui with wxpython. I send this application a folder which then gets processed by a command line application via Popen. Each time I run this application it take about 40 mins+ to process before it finishes. While a single job processes I would like to queue up another job, I don't want to submit multiple jobs at the same time, I want to submit one job, while it's processing I want to submit another job, so when the first one finishes it would then just process the next, and so on, but I am unsure of how to go about this and would appreciate some suggestions. | Python send jobs to queue processed by Popen | -0.099668 | 0 | 0 | 222 |
21,790,271 | 2014-02-14T21:59:00.000 | 1 | 0 | 0 | 1 | python,queue,popen | 21,790,443 | 2 | true | 0 | 0 | Presumably you either have a notification passed back to the GUI when the task has finished, or the GUI is checking the state of the task periodically. In either case you can let the user simply add to a list of directories to be processed, and when your Popen task has finished, take the first one off the list and start a new Popen task (remembering to remove the started one from the list). | 1 | 0 | 0 | I currently have a working python application, gui with wxpython. I send this application a folder which then gets processed by a command line application via Popen. Each time I run this application it take about 40 mins+ to process before it finishes. While a single job processes I would like to queue up another job, I don't want to submit multiple jobs at the same time, I want to submit one job, while it's processing I want to submit another job, so when the first one finishes it would then just process the next, and so on, but I am unsure of how to go about this and would appreciate some suggestions. | Python send jobs to queue processed by Popen | 1.2 | 0 | 0 | 222
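One way to sketch that pending-list idea in code (the folder names and command line are made up; the GUI would call submit() and a single worker thread drains the queue one job at a time):

    import subprocess
    import threading
    try:
        import queue           # Python 3
    except ImportError:
        import Queue as queue  # Python 2

    jobs = queue.Queue()

    def worker():
        while True:
            folder = jobs.get()                        # waits until a job is queued
            subprocess.call(['process.exe', folder])   # blocks ~40 min per job
            jobs.task_done()

    t = threading.Thread(target=worker)
    t.daemon = True            # don't keep the app alive after the GUI exits
    t.start()

    def submit(folder):
        jobs.put(folder)       # safe to call from the GUI thread

Jobs run strictly one after another because the single worker only starts the next subprocess.call once the previous one has returned.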
21,790,472 | 2014-02-14T22:14:00.000 | 2 | 0 | 0 | 0 | python,c++,opengl,glut,pyopengl | 21,790,746 | 1 | true | 0 | 1 | Use something like SDL2 or GLFW3 for multimonitor support. They'll let you query the number of monitors and their sizes, as well as let you create multiple windows to cover them. | 1 | 2 | 0 | I'm trying to write a program in either python or C++ using opengl that will allow me to control different displays. Currently I have three different displays, I have the computer monitor, a LCD and a DLP all hooked up to my computer. I want to control each screen separately, and I want to full screen them all so they go black. Currently when I try to use glutfullscreen() in only makes the computer monitor black, and I can't control the other two screen.
In my set up I have removed the backlight from a LCD screen and I'm projecting onto it with a DLP projector to increase my dynamic range. I'm trying to write software to align the two image. I have it all working in MATLAB with mgl. But I don't know where to go with C++.
I need to be able to control the pixels of where each image is displayed, but I can't access the other two screen. | Using different displays opengl | 1.2 | 0 | 0 | 100 |
21,790,816 | 2014-02-14T22:43:00.000 | 3 | 0 | 0 | 0 | python-2.7,pandas | 21,791,001 | 2 | false | 0 | 0 | For the 60 days you're looking to compare against, create a timedelta object of that value, timedelta(days=60), and use that in the filter. If you're already getting timedelta objects from the subtraction, recasting them to a timedelta is unnecessary.
Finally, make sure you check the signs of the timedeltas you're comparing. | 1 | 3 | 1 | I got a pandas dataframe, containing timestamps 'expiration' and 'date'.
I want to filter for rows with a certain maximum delta between expiration and date.
When doing fr.expiration - fr.date I obtain timedelta values, but don't know how
to get a filter criteria such as fr[timedelta(fr.expiration-fr.date)<=60days] | filter pandas dataframe for timedeltas | 0.291313 | 0 | 0 | 2,238 |
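A short sketch of the filter described in the answer above, assuming fr.expiration and fr.date are datetime columns so their difference is already a timedelta series:

    from datetime import timedelta

    # keep rows where expiration is at most 60 days after date
    close_to_expiry = fr[(fr.expiration - fr.date) <= timedelta(days=60)]

Watch the sign: if date can be later than expiration, the difference is negative and will also pass this test unless you add a lower bound.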
21,791,565 | 2014-02-14T23:57:00.000 | 2 | 0 | 1 | 1 | python,macos | 21,791,729 | 2 | false | 0 | 0 | I do all of my main development on OSX. I deploy on a linux box. Pycharm (CE) is your friend. | 2 | 1 | 0 | I'm new to Mac, and I have OS X 10.9.1. The main question is whether it is better to create a virtual machine with Linux and do port forwarding or set all packages directly to the Mac OS and work with it directly? If I create a virtual machine, I'm not sure how it will affect the health of SSD and ease of development. On the other hand, I also do not know how to affect the stability and performance of Mac OS installation packages directly into it. Surely there are some best practices, but I do not know them. | Python development on Mac OS X: pure Mac OS or linux in virtualbox | 0.197375 | 0 | 0 | 1,076 |
21,791,565 | 2014-02-14T23:57:00.000 | 3 | 0 | 1 | 1 | python,macos | 21,791,847 | 2 | true | 0 | 0 | On my Mac, I use Python and PyCharm and all the usual Unix tools, and I've always done just fine. Regard OS X as a Unix machine with a very nice GUI on top of it, because it basically is -- Mac OS X is POSIX-compliant, with BSD underpinnings. Why would you even consider doing VirtualBox'd Linux? Even if you don't want to relearn the hotkeys, PyCharm provides a non-OS X mapping, and in Terminal, CTRL and ALT work like you expect.
If you're used to developing on Windows but interfacing with Unix machines through Cygwin, you'll be happy to use Terminal, which is a normal bash shell and has (or can easily get through Homebrew) all the tools you're used to. Plus the slashes go the right way and line endings don't need conversion.
If you're used to developing on a Linux distro, you'll be happy with all the things that "just work" and let you move on with your life.
So in answer to your question, do straight Mac OS X. Working in a virtualized Linux environment imparts a cost and gains you nothing. | 2 | 1 | 0 | I'm new to Mac, and I have OS X 10.9.1. The main question is whether it is better to create a virtual machine with Linux and do port forwarding or set all packages directly to the Mac OS and work with it directly? If I create a virtual machine, I'm not sure how it will affect the health of SSD and ease of development. On the other hand, I also do not know how to affect the stability and performance of Mac OS installation packages directly into it. Surely there are some best practices, but I do not know them. | Python development on Mac OS X: pure Mac OS or linux in virtualbox | 1.2 | 0 | 0 | 1,076 |
21,800,806 | 2014-02-15T17:05:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine | 21,802,072 | 2 | false | 1 | 0 | You have many options:
Use a timer in your client to check periodically (i.e. every 15 seconds) if the file is ready. This is the simplest option that requires only a few lines of code.
Use the Channel API. It's elegant, but it's an overkill unless you face similar problems frequently.
Email the results to the user. | 1 | 0 | 0 | I have an app on GAE that takes csv input from a web form and stores it to a blob, does some stuff to obtain new information using input from the csv file, then uses csv.writer on self.response.out to write a new csv file and prompt the user to download it. It works well, but my problem is if it takes over 60 seconds it times out. I've tried to setup the do some stuff part as a task in task queue, and it would work, except I can't make the user wait while this is running, and there's no way of calling the post that would write out the new csv file automatically when the task queue is complete, and having the user periodically push a button to see if it is done is less than optimal.
Is there a better solution to a problem like this other than using the task queue and having the user have to manually push a button periodically to see if the task is complete? | GAE Request Timeout when user uploads csv file and receives new csv file as response | 0.197375 | 0 | 0 | 79 |
21,802,946 | 2014-02-15T20:07:00.000 | 2 | 0 | 0 | 0 | python,scikit-learn,cluster-analysis,data-mining,k-means | 21,824,056 | 2 | false | 0 | 0 | Is the data already in vector space e.g. gps coordinates? If so you can cluster on it directly, lat and lon are close enough to x and y that it shouldn't matter much. If not, preprocessing will have to be applied to convert it to a vector space format (table lookup of locations to coords for instance). Euclidean distance is a good choice to work with vector space data.
To answer the question of whether they played music in a given location, you first fit your kmeans model on their location data, then find the "locations" of their clusters using the cluster_centers_ attribute. Then you check whether any of those cluster centers are close enough to the locations you are checking for. This can be done using thresholding on the distance functions in scipy.spatial.distance.
It's a little difficult to provide a full example since I don't have the dataset, but I can provide an example given arbitrary x and y coords instead if that's what you want.
Also note KMeans is probably not ideal as you have to manually set the number of clusters "k" which could vary between people, or have some more wrapper code around KMeans to determine the "k". There are other clustering models which can determine the number of clusters automatically, such as meanshift, which may be more ideal in this case and also can tell you cluster centers. | 2 | 2 | 1 | I have a dataset of users and their music plays, with every play having location data. For every user i want to cluster their plays to see if they play music in given locations.
I plan on using the sci-kit learn k-means package, but how do I get this to work with location data, as opposed to its default, euclidean distance?
An example of it working would really help me! | Computing K-means clustering on Location data in Python | 0.197375 | 0 | 0 | 976 |
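A minimal sketch of the fit-then-threshold idea from the answer above (the coordinate array, the value of k, and the venue locations are all made up for illustration):

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial.distance import cdist

    # one user's plays as (lat, lon) rows
    coords = np.array([[53.35, -6.26], [53.34, -6.25], [48.85, 2.35]])

    km = KMeans(n_clusters=2).fit(coords)
    centers = km.cluster_centers_

    # known locations to test against, e.g. "home" and "work"
    venues = np.array([[53.35, -6.26], [40.71, -74.00]])

    # Euclidean distance in degrees; crude, but fine for a rough "is it near?" check
    near = cdist(centers, venues) < 0.05
    print(near)  # True where a cluster centre sits close to a venue

As the answer notes, k has to be chosen (or estimated) per user, and plain Euclidean distance on lat/lon is only an approximation.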
21,802,946 | 2014-02-15T20:07:00.000 | 5 | 0 | 0 | 0 | python,scikit-learn,cluster-analysis,data-mining,k-means | 21,825,022 | 2 | true | 0 | 0 | Don't use k-means with anything other than Euclidean distance.
K-means is not designed to work with other distance metrics (see k-medians for Manhattan distance, k-medoids aka. PAM for arbitrary other distance functions).
The concept of k-means is variance minimization. And variance is essentially the same as squared Euclidean distances, but it is not the same as other distances.
Have you considered DBSCAN? sklearn should have DBSCAN, and it should by now have index support to make it fast. | 2 | 2 | 1 | I have a dataset of users and their music plays, with every play having location data. For every user i want to cluster their plays to see if they play music in given locations.
I plan on using the sci-kit learn k-means package, but how do I get this to work with location data, as opposed to its default, euclidean distance?
An example of it working would really help me! | Computing K-means clustering on Location data in Python | 1.2 | 0 | 0 | 976 |
21,803,246 | 2014-02-15T20:30:00.000 | 1 | 0 | 0 | 0 | python,vim,file-io | 21,803,426 | 2 | false | 0 | 0 | You probably have your fileformats option mis-set somehow to use mac end-of-line characters, which is a single \r (used only with pre-OSX Macs; OSX uses UNIX line endings).
You can check your setting by typing :set fileformat. The default should be set to unix. | 1 | 0 | 0 | I'm writing to a txt file from python. Whenever I specify a \n write in the python file, I find a ^J in the txt file - when opened using Vi. If I open using any other text editor, I see a clean new line. The standard j and k commands don't work when trying to navigate the txt file. Any solutions?
I'm using Ubuntu 12.04 | Navigate a file containing ^J in Vim | 0.099668 | 0 | 0 | 138 |
21,807,032 | 2014-02-16T03:48:00.000 | 0 | 0 | 0 | 0 | python,amazon-s3,flask | 21,817,783 | 1 | true | 1 | 0 | I'm stupid. Right in the Flask API docs it says you can include the parameter attachment_filename in send_from_directory if it differs from the filename in the filesystem. | 1 | 0 | 0 | I am trying to serve up some user uploaded files with Flask, and have an odd problem, or at least one that I couldn't turn up any solutions for by searching. I need the files to retain their original filenames after being uploaded, so they will have the same name when the user downloads them. Originally I did not want to deal with databases at all, and solved the problem of filename conflicts by storing each file in a randomly named folder, and just pointing to that location for the download. However, stuff came up later that required me to use a database to store some info about the files, but I still kept my old method of handling filename conflicts. I have a model for my files now and storing the name would be as simple as just adding another field, so that shouldn't be a big problem. I decided, pretty foolishly after I had written the implmentation, on using Amazon S3 to store the files. Apparently S3 does not deal with folders in the way a traditional filesystem does, and I do not want to deal with the surely convoluted task of figuring out how to create folders programatically on S3, and in retrospect, this was a stupid way of dealing with this problem in the first place, when stuff like SQLalchemy exists that makes databases easy as pie. Anyway, I need a way to store multiple files with the same name on s3, without using folders. I thought of just renaming the files with a random UUID after they are uploaded, and then when they are downloaded (the user visits a page and presses a download button so I need not have the filename in the URL), telling the browser to save the file as its original name retrieved from the database. Is there a way to implement this in Python w/Flask? When it is deployed I am planning on having the web server handle the serving of files, will it be possible to do something like this with the server? Or is there a smarter solution? | Is there a way to tell a browser to download a file as a different name than as it exists on disk? | 1.2 | 1 | 0 | 143 |
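A hedged sketch of the pattern the answer describes, with made-up names: store the object under a random key, keep the original name in the database, and pass it back at download time (attachment_filename was the parameter name in Flask releases of that era; newer versions call it download_name):

    import uuid
    from flask import send_from_directory

    # at upload time
    stored_name = uuid.uuid4().hex   # name used on disk / in the bucket
    # save the upload under stored_name and record (stored_name, original_name) in the DB

    # at download time
    @app.route('/download/<int:file_id>')
    def download(file_id):
        stored_name, original_name = lookup(file_id)   # hypothetical DB helper
        return send_from_directory(UPLOAD_DIR, stored_name,
                                   as_attachment=True,
                                   attachment_filename=original_name)

The browser then saves the file as original_name even though it is stored under the random key.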
21,807,914 | 2014-02-16T06:02:00.000 | 1 | 0 | 0 | 0 | python,web-scraping | 21,807,979 | 2 | false | 0 | 0 | You can't access a page behind a paywall directly, because that page requires authentication data such as a session or cookies. So you first have to obtain and store that data, so that when you request the secure pages you send the required data (including the authenticated session) along with the request.
To get the authentication data, start with the login page. Grab the session info and cookies from it, submit your login credentials as a request (GET or POST, depending on the form's method) to the form's action page, and once you are logged in, store the authentication data and reuse it to scrape the pages behind the paywall. | 1 | 1 | 0 | I wish to scrape news articles from the local newspaper. The archive is behind a paywall and I have a paid account; how would I go about automating the input of my credentials? | How to scrape a website behind a paywall | 0.099668 | 0 | 1 | 6,129
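A minimal sketch of that flow using requests (the URL and form field names are invented; check the login form's actual action and inputs first):

    import requests

    session = requests.Session()   # keeps cookies between requests

    # submit the login form once; the session stores the auth cookies
    session.post('https://newspaper.example.com/login',
                 data={'username': 'me@example.com', 'password': 'secret'})

    # subsequent requests reuse those cookies, so paywalled pages come back
    page = session.get('https://newspaper.example.com/archive/2014/02/some-article')
    print(page.status_code)

Some sites also need hidden form tokens (CSRF fields) scraped from the login page and included in the POST data.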
21,808,913 | 2014-02-16T08:27:00.000 | 3 | 0 | 1 | 0 | python,regex,json,python-2.6 | 44,021,599 | 2 | false | 0 | 0 | Assuming what you are asking...
I believe you're asking if it's faster to obtain information from a serialized JSON string by deserializing it or searching for the relevant match via regex.
Quick answer
In my unofficial experience with looking for a single key-value pair in an activity streams object (tweet, retweet or quote) in serialized JSON, using regex scales better than parsing the entire JSON object.
Why?
This is because tweets are pretty big, and when you're working with hundreds of thousands of them, deserializing the entire JSON string and randomly accessing the resulting JSON object for a single key-value pair is like using a sledgehammer to crack a nut.
Potential pitfalls...
The problem arises, however, when keys are repeated at different levels of nesting.
For example, quotes have a root level attribute called twitter_quoted_status which contains a copy of the tweet this quote object refers to.
That means any attribute name shared by both tweets and quotes would return at least 2 matches if you searched a serialized quote object with regex.
Since you cannot and should not rely on the reliability of the order of attributes within a JSON object (dictionary keys are supposed to be unordered!), you can't even rely on the match you want being the first or second (or whatever) match if you've identified that pattern so far.
The only evidence I can share with you at the moment, is that to retrieve a single key-value pair from 100,000 original tweet objects (no quotes nor retweets), my desktop tended to take 8-14 seconds when using the deserialization method, and 0-2 when using regex.
Disclaimer
Numbers are approximate and from memory. Sorry just providing a quick answer, don't have the tools to test this and post findings at my disposal right now. | 2 | 2 | 0 | Which is faster method, using JSON parser (python 2.6) or regex for obtaining relevant data. Since the amount of data is huge, I presume there will considerable difference in time when one method is used in comparison to other. | Using JSON or regex when processing tweets | 0.291313 | 0 | 0 | 1,489 |
21,808,913 | 2014-02-16T08:27:00.000 | -1 | 0 | 1 | 0 | python,regex,json,python-2.6 | 25,631,958 | 2 | false | 0 | 0 | You can't use regex to parse JSON.
As an example, if you wanted to select an item from a JSON list, you would have to count the number of elements that come before it. This would require you to know what an element is and to be smart about matching braces and so forth. Pretty soon you'll have implemented a JSON parser, but one that depends on lots of tiny regexes that probably aren't very efficient. | 2 | 2 | 0 | Which is faster method, using JSON parser (python 2.6) or regex for obtaining relevant data. Since the amount of data is huge, I presume there will considerable difference in time when one method is used in comparison to other. | Using JSON or regex when processing tweets | -0.099668 | 0 | 0 | 1,489 |
21,814,585 | 2014-02-16T17:14:00.000 | 3 | 1 | 0 | 1 | python,apache,nginx,wsgi,uwsgi | 21,814,847 | 1 | true | 1 | 0 | They are just 2 different ways of running WSGI applications.
Have you tried googling for mod_wsgi nginx?
Any WSGI-compliant server has that entry point; that's what the WSGI specification requires.
Yes, but that's only how uwsgi communicates with Nginx. With mod_wsgi the Python code runs inside the web server process; with uwsgi you run a separate application server. | 1 | 6 | 0 | There seems to be a mod_wsgi module in Apache and a uwsgi module in Nginx. And there also seem to be a wsgi protocol and a uwsgi protocol.
I have the following questions.
Are mod_wsgi and uwsgi just different implementations to provide WSGI capabilities to the Python web developer?
Is there a mod_wsgi for Nginx?
Does uwsgi also offer the application(environ, start_response) entry point to the developers?
Is uwsgi also a separate protocol apart from wsgi? In this case, how is the uwsgi protocol different from the wsgi protocol? | What is the difference between mod_wsgi and uwsgi? | 1.2 | 0 | 0 | 5,775 |
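To make the shared entry point concrete, here is the smallest possible WSGI application; the very same callable can be served by Apache's mod_wsgi or by a uWSGI process behind Nginx, which is the point of the answer above:

    def application(environ, start_response):
        # environ: CGI-style request dict; start_response: callback for status/headers
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'Hello from any WSGI-compliant server']

WSGI is the Python-level calling convention shown here; the uwsgi protocol, by contrast, is just the wire format uWSGI and Nginx use to talk to each other.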
21,817,135 | 2014-02-16T21:47:00.000 | 2 | 0 | 1 | 0 | python,file-io | 21,818,173 | 3 | false | 0 | 0 | A few tips:
Use try/except wherever possible.
Even if it crashes, the stack trace will tell you which line was last executed. | 1 | 0 | 0 | Is there a way to programmatically find out why a Python program closed?
I'm making a game in python, and I've been using the built in open() function to create a log in a .txt file. A major problem I've come across is that when it occasionally crashes, the log doesn't realise it's crashed.
I've managed to record if the user closes the game through pressing an exit button, but I was wondering if there is a way to check how the program closed. For instance if the user presses exit, if it crashes or if it is forcefully closed(through the task manager for instance) | Is there a way to find out why a python program closed? | 0.132549 | 0 | 0 | 114 |
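As a small sketch of the try/except tip above (main_loop is a stand-in for the game's entry point): wrap the whole run, log the traceback on a crash, and log a normal-exit line otherwise. A hard kill from the task manager still can't be logged, because the process gets no chance to run this code.

    import logging
    import traceback

    logging.basicConfig(filename='game.log', level=logging.INFO)

    try:
        main_loop()                      # hypothetical game entry point
    except Exception:
        logging.error('Crashed:\n%s', traceback.format_exc())
        raise
    else:
        logging.info('Exited normally (or via the exit button)')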
21,822,306 | 2014-02-17T06:34:00.000 | 1 | 0 | 1 | 0 | python | 21,822,374 | 2 | false | 0 | 0 | When you say tuple, I think you mean list. Tuples don't have an append operation, they are fixed in size.
If you append to a list while iterating, you'll get the expected result. It's not good practice, however, to alter a collection while walking it.
A much better approach is to collect items to be appended in a second list, and concatenate the two lists when you finish iterating the first. | 1 | 0 | 0 | I have a tuple which contains multiple values which I use for searching in some file. I get this tuple as an input parameter to my method. I try to search for that value(text) in one file and returns related result. But I have a requirement that if that text is not found then search for the value 'Unknown' in that file and return corresponding value. To achieve this I am planning to append value 'Unknown' to the tuple so that if it doesn't find anything, it will return something corresponding to 'Unknown'. But my question is that if I apeend 'Unknown' at last, while looping through this tuple does it loop through in the same order which the elements were added to it? I have tried it on python shell and noticed that it loops through it in the same order. But I don't want my code to accidentally search for 'Unknown' value before desired ones. Please help. | Does tuple sort itself when looped through | 0.099668 | 0 | 0 | 46 |
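For this particular use case there is an even simpler option, sketched below with invented names: build a new list from the incoming tuple with 'Unknown' added at the end, so the fallback is always searched last and the original tuple is never modified:

    def find_value(search_terms):               # search_terms is the input tuple
        ordered_terms = list(search_terms) + ['Unknown']
        for term in ordered_terms:              # iteration follows the list order
            result = look_up_in_file(term)      # hypothetical search helper
            if result is not None:
                return result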
21,823,306 | 2014-02-17T07:39:00.000 | -2 | 0 | 1 | 0 | python-3.x,ipython-notebook | 21,826,880 | 1 | false | 0 | 0 | This error means that your ipython notebook server is not running. If you are running Ubuntu or OSX, you need to go to the command-line, cd into the directory where your notebook file is, and run ipython notebook. This will start the local notebook webserver and you can then run code inside your notebooks. The error you are getting probably means that you accidentally killed the local webserver that lets the notebooks run. | 1 | 3 | 0 | I am trying to run IPython notebook but its not execute any output,it gives error like that,Error:A WebSocket connection to could not be established. You will NOT be able to run code. Check your network connection or notebook server configuration,so what can i do for that? | How to execute the ipython notebook | -0.379949 | 0 | 1 | 521 |
21,824,413 | 2014-02-17T08:47:00.000 | 1 | 0 | 0 | 0 | python,multithreading,user-interface | 21,824,574 | 1 | true | 0 | 1 | The question is quite vague; you should probably specify exactly which GUI library you're referring to.
In most GUI toolkits I know, however, the main design is that only one thread (the main thread) should deal with the GUI, and therefore it's important that other threads never directly interact with the user interface. The only thing you're normally allowed to do from a different thread is post a message to the main GUI loop.
If, for example, you need a progress indicator, open the progress window in the main thread, start the reader thread that keeps posting messages as the reading proceeds, and then post a final message once the procedure is complete. Any interaction with the interface should be done in the main thread when handling these posted (async) messages.
If, for example, you also need to implement a cancel button, then the main thread should just set a variable for the worker thread to notice (for a simple variable assignment no mutex protection is needed in Python, because assignment is an atomic operation). | 1 | 2 | 0 | What is the proper way of using threads in a Python GUI application if the application has to read a big file at some point? There will probably be 2 threads: 1 for the GUI, 1 for reading the file.
Should I create the threads at the start up of the application, or should I create the "file read" thread when it has to read the file? | How to create threads in a python GUI application? | 1.2 | 0 | 0 | 199 |
21,824,677 | 2014-02-17T09:01:00.000 | 1 | 0 | 0 | 0 | python,rest,bigdata | 21,829,930 | 1 | false | 0 | 0 | If you need to process a long-running task, from the client's point of view it is always better to process it asynchronously, as follows.
The client sends a POST request; the server creates a new resource (or can start background processing immediately) and returns HTTP 202 Accepted with a representation of the task (e.g. status, start time, expected end time and the like) along with the task URL in the Content-Location header so that the client can track it.
The client can then send a GET request to that URL to get the status. The server can return the following responses.
Not done yet
The server returns HTTP 200 OK along with the task resource so that the client can check the status.
Done
The server returns HTTP 303 See Other and a Location header with the URL of a resource that shows the task results.
Error
The server returns HTTP 200 OK with the task resource describing the error. | 1 | 1 | 0 | We have a REST API as part of which we provide the client with several APIs to draw analytic reports. Some very large queries can take 5 to 10 minutes to complete and can return responses in the 50mb to 150mb range.
At the moment, the client is just expected to wait for the response. We are not sure if this is really the best practice or if such complex/large queries & responses should be dealt with in another manner. Any advice on current best practices would be appreciated please?
Note: The API will be called by automated processes building large reports, so we are not sure if standard pagination is efficient or desirable. | API Responses with large result sets | 0.197375 | 0 | 1 | 277 |
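A rough Flask-flavoured sketch of the 202 Accepted flow described in the answer (the routes, the task store and the enqueue/lookup helpers are all invented; any Python web framework can express the same idea):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/reports', methods=['POST'])
    def start_report():
        task_id = enqueue_report()          # hypothetical: starts the 5-10 minute job
        headers = {'Content-Location': '/reports/%s' % task_id}
        return jsonify({'status': 'pending'}), 202, headers

    @app.route('/reports/<task_id>')
    def report_status(task_id):
        task = get_task(task_id)            # hypothetical task lookup
        if task.failed:
            return jsonify({'status': 'error', 'detail': task.error}), 200
        if not task.done:
            return jsonify({'status': 'pending'}), 200
        # finished: point the client at the (large) result resource
        return '', 303, {'Location': '/reports/%s/result' % task_id}

The automated clients then poll the status URL instead of holding a connection open for ten minutes.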
21,825,122 | 2014-02-17T09:23:00.000 | 0 | 0 | 0 | 0 | automated-tests,wxpython,robotframework,python-2.6 | 51,078,298 | 3 | false | 1 | 0 | You probably have the different versions for wxPython and Python in your machine. Always make sure you should install the wxPython version same as the python version i.e. Python 2.7. | 1 | 3 | 0 | For automated testing on RIDE(Robot framework), I had already installed PYTHON 2.6 and wxPython 3.0 version,PATH had already been updated in Environment variables, and when I jumped to the last phase i.e Installing RIDE(version -"robotframework-ride-1.3.win32.exe") through Windows Installer, application is been installed when I try to through "Run as Administrator", it was unable to open the IDE. How I can resolve this issue? | Installing RIDE(Robot Framework) | 0 | 0 | 0 | 21,961 |
21,826,863 | 2014-02-17T10:42:00.000 | 3 | 0 | 0 | 0 | python,hadoop,machine-learning,bigdata,scikit-learn | 21,828,293 | 2 | false | 0 | 0 | Look out for jpype module. By using jpype you can run Mahout Algorithms and you will be writing code in Python. However I feel this won't be the best of solution. If you really want massive scalability than go with Mahout directly. I practice, do POC's, solve toy problems using scikit-learn, however when I need to do massive big data clustering and so on than I go Mahout. | 1 | 7 | 1 | I know it is possible to use python language over Hadoop.
But is it possible to use scikit-learn's machine learning algorithms on Hadoop ?
If the answer is no, is there some machine learning library for python and Hadoop ?
Thanks for your Help. | Is it possible to run Python's scikit-learn algorithms over Hadoop? | 0.291313 | 0 | 0 | 5,638 |
21,827,432 | 2014-02-17T11:09:00.000 | 0 | 0 | 0 | 0 | python,django,templates | 21,827,881 | 2 | false | 1 | 0 | The static url should point to the staticfiles directory. And why do you put templates under staticfiles? You may have it as a separate folder in the main folder (along with manage.py) | 1 | 0 | 0 | The path to my templates folder in TEMPLATE_DIRS looks like this:
os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) + '/static/templates/'
When I run my server locally and open the page everything works fine and it detects the templates at ~/Documents/projects/clupus/static/templates. Whenever I pull everything onto my server and access the URL it gives me this error:
Django tried loading these templates, in this order: Using loader
django.template.loaders.filesystem.Loader:
/home/ubuntu/public_html/clupus.com/clupus/templates/clupus/index.html
(File does not exist)
It's not following TEMPLATE_DIRS and is looking in the wrong directory. I've checked the TEMPLATE_DIRS value that's on the server and it matches that which I have locally. What's the issue?
EDIT
Rather embarrassingly there was nothing wrong with my code and I simply forgot to restart apache by doing sudo service apache2 restart. As to why my templates folder was inside static this is at the request of the front end developer. When I asked him why he said:
the reason why they are inside it is because I'm trying to reference the templates in Javascript as well, because we are using shared templates between server and client | Server doesn't follow TEMPLATE_DIRS path | 0 | 0 | 0 | 57
21,830,868 | 2014-02-17T13:51:00.000 | 4 | 0 | 0 | 0 | python,google-bigquery | 21,868,123 | 1 | true | 0 | 0 | There is no way to create a table automatically during streaming, since BigQuery doesn't know the schema. JSON data that you post doesn't have type information -- if there is a field "123" we don't know if that will always be a string or whether it should actually be an integer. Additionally, if you post data that is missing an optional field, the schema that got created would be narrower than the one you wanted.
The best way to create the table is with a tables.insert() call (no need to run a load job to load data from GCS). You can provide exactly the schema you want, and once the table has been created you can stream data to it.
In some cases, customers pre-create a month worth of tables, so they only have to worry about it every 30 days. In other cases, you might want to check on startup to see if the table exists, and if not, create it. | 1 | 3 | 0 | Maybe I got this wrong: Is there a way to automatically create the target table for a tabledata.insertAll command? If yes please point me in the right direction.
If not - what is the best approach to create the tables needed? Check for existing tables on startup and create the ones that does not exist by loading from GCS? Or can they be created directly from code without a load job?
I have a number of event classes (Python Cloud endpoints) defined and the perfect solution would be using those definitions to create matching BQ tables. | Auto-create BQ tables for streaming inserts | 1.2 | 1 | 0 | 973 |
21,833,383 | 2014-02-17T15:46:00.000 | 0 | 0 | 1 | 0 | python,dictionary,tuples,lowercase | 21,833,466 | 3 | false | 0 | 0 | You would have to change the string to lowercase before putting it into the tuple. Tuples are read-only after they are created. | 1 | 2 | 0 | I'm having troubles understanding this. I have a dictionary, where the key is a tuple consisting of two strings. I want to lowercase the first string in this tuple, is this possible? | When the key is a tuple in dictionary in Python | 0 | 0 | 0 | 528 |
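A tiny illustration with made-up data: since neither tuples nor dict keys can be changed in place, you build a new key (or a whole new dict) with the first element lowercased:

    d = {('Hello', 'World'): 1, ('Foo', 'Bar'): 2}

    # rebuild the dictionary with the first element of each key lowercased
    d = {(first.lower(), second): value
         for (first, second), value in d.items()}

    # d is now {('hello', 'World'): 1, ('foo', 'Bar'): 2}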
21,836,929 | 2014-02-17T18:47:00.000 | 2 | 0 | 0 | 0 | python,igraph,shortest-path | 21,840,597 | 1 | false | 0 | 0 | For the first question, you can find all shortest paths, and then choose between the pairs making up the longest distances.
I don't really understand the second question. If you are searching for unweighted paths, then every pair of vertices at both ends of an edge have the minimum distance (1). That is, if you don't consider paths to the vertices themselves, these have length zero, by definition. | 1 | 0 | 1 | In igraph, what's the least cpu-expensive way to find:
the two most remote vertices (in terms of shortest distance from one another) of a graph. Unlike the farthest.points() function, which chooses the first found pair of vertices with the longest shortest distance if more than one pair exists, I'd like to randomly select this pair.
same thing with the closest vertices of a graph.
Thanks! | least cpu-expensive way to find the two most (and least) remote vertices of a graph [igraph] | 0.379949 | 0 | 1 | 99 |
21,838,287 | 2014-02-17T20:03:00.000 | 0 | 0 | 1 | 1 | python,path | 21,838,385 | 2 | false | 0 | 0 | I would use your suggested method of os.chdir(r'..\..') to make sure your current working directory is in folder2. I'm not really sure what you're asking though, so maybe clarify why you think this ISN'T the right solution? | 1 | 0 | 0 | I have a script that will pull files from two directories back, so the script resides at:
/folder2/folder1/folder0/script.py
and the files that will be processed will be in folder2.
I can get back one level with "..//" (I'm making a Windows executable with cx_free) but I'm thinking this isn't the best way to do this.
I am setting an input directory and an output directory. I want to keep the paths relative to the location of the script so that "folder2" can be moved without screwing up the functionality of the script or force rewriting of it.
thanks | Pull files into script from two directories back | 0 | 0 | 0 | 2,992 |
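A small sketch of keeping the paths anchored to the script itself rather than to the current working directory (no chdir needed); the folder layout matches the one in the question, and the input/output subfolder names are examples:

    import os

    # .../folder2/folder1/folder0/script.py
    script_dir = os.path.dirname(os.path.abspath(__file__))

    # go up two levels to .../folder2, wherever folder2 happens to live
    folder2 = os.path.abspath(os.path.join(script_dir, os.pardir, os.pardir))

    input_dir = os.path.join(folder2, 'input')
    output_dir = os.path.join(folder2, 'output')

Note that inside a frozen Windows executable __file__ may not point where you expect; os.path.dirname(sys.executable) is the usual substitute for script_dir in that case.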
21,841,805 | 2014-02-18T00:04:00.000 | 1 | 0 | 1 | 0 | python,oop | 21,842,028 | 2 | false | 0 | 0 | Think of a class as nothing more than a blueprint in code. You know how big factories make one object over and over again easily? They created a blueprint of that object and added all the needed components to the blueprint, like what it should do and how it will respond to certain events.
When you create your class you create functions in there that will do a task and sometimes return a value or even another object! You do everything in that class, and the beauty of it is that you can then create an object of that class and it will carry all the functions and properties, and you can access them all from your new object of that type. Each additional object you create is unique and can be separate from the other objects.
These are the very basics. I'm more than happy to help you. And sorry for not going deeper; I'm writing this from my cellphone :) | 1 | 0 | 0 | I'm very new to Python and programming. Say I have a text file that has a bunch of people's names, articles they've written, and their assigned ID. I made a class with these attributes and put that text file through the class. No problem. Now, the user is prompted to enter someone's name from that text file and all of the articles by that person are to be printed. My question is this: Do I do this by creating a new function inside of the class? Or do I create it outside of the class?
I'm having a difficult time wrapping my head around when and when not to touch a class. I think I get that a class defines the attributes to an object. However, when I add an object to the class, I'm finding difficulty referencing that object outside of the class. Say, when asking the user to enter a name and comparing it to an object.
I get that this isn't the greatest question and I'm sure some will scold me for posting. I'm just running out of options and I'm desperate for help. I'm reading books, doing online tutorials, watching videos, and it's not clicking like it should. I understand if this gets deleted.
AHA! Thank you all for the replies! I was able to set up the function outside of the class and compare the input variable to the author attribute in that class and KA-BLAMO! It prints correctly! I still have a lot to learn but it's moments like this that turn me into a giddy little school-girl. Thank you all so much for your help and for being so nice! | Not sure what/what not to put in classes in Python | 0.099668 | 0 | 0 | 68 |
21,846,661 | 2014-02-18T07:13:00.000 | 1 | 0 | 0 | 0 | python,numpy,matplotlib | 59,014,783 | 3 | false | 0 | 0 | I would suggest first uninstalling numpy and matplotlib using pip uninstall, then installing them again using pip install from the command line, and then restarting your system. | 1 | 1 | 1 | I am unable to import the numpy and matplotlib packages in python33. I have tried to install these two packages but am unable to import them. I am getting the following error:
import numpy
Traceback (most recent call last):
File "", line 1, in
import numpy
ImportError: No module named 'numpy'
import matplotlib
Traceback (most recent call last):
File "", line 1, in
import matplotlib
ImportError: No module named 'matplotlib' | i have python 33 but unable to import numpy and matplotlib package | 0.066568 | 0 | 0 | 275 |
21,846,978 | 2014-02-18T07:32:00.000 | 0 | 0 | 0 | 0 | python,selenium,automated-tests,robotframework | 22,005,969 | 2 | false | 1 | 0 | Could you please provide a part of your code you use to get the span element and a part of your GUI application where you are trying to get the element from (HTML, or smth.)? | 2 | 0 | 0 | I am using robot framework to test a GUI application , I need to select a span element which is a list box , but I have multiple span elements with same class ID in the same page , So how can I select each span element(list box) ???
Thanks in advance | How to select a span element which is a list box ,when multiple span elements with same class ID are present in the same page? | 0 | 0 | 1 | 313 |
21,846,978 | 2014-02-18T07:32:00.000 | 0 | 0 | 0 | 0 | python,selenium,automated-tests,robotframework | 22,013,855 | 2 | false | 1 | 0 | Selenium provides various ways to locate elements in the page. If you can't use id, consider using CSS or Xpath. | 2 | 0 | 0 | I am using robot framework to test a GUI application , I need to select a span element which is a list box , but I have multiple span elements with same class ID in the same page , So how can I select each span element(list box) ???
Thanks in advance | How to select a span element which is a list box ,when multiple span elements with same class ID are present in the same page? | 0 | 0 | 1 | 313 |
21,852,518 | 2014-02-18T11:34:00.000 | 0 | 0 | 0 | 0 | python,django,django-admin,django-sites | 21,852,651 | 1 | false | 1 | 0 | 1 WAY
You can do this in your app's models.py by using a Django signal.
from django.db import models
from django.db.models.signals import post_save

class Test(models.Model):
    # ... fields here
    pass

# function that runs whenever a Test instance is saved
def update_on_test(sender, instance, **kwargs):
    # custom operation you want to perform
    pass

# register the signal
post_save.connect(update_on_test, sender=Test)
2 WAY
You can override the save_model() method of the ModelAdmin class if you are entering data into the table through the Django admin.
class TestAdmin(admin.ModelAdmin):
    fields = ['title', 'body']
    form = TestForm

    def save_model(self, request, obj, form, change):
        # your logic if you want to perform some computation on save
        # this helps if you need the request in your work
        obj.save() | 1 | 1 | 0 | I have a Django project and right now everything works fine. I have a Django admin site, and now I want a function to be called (starting a process) whenever I add a new record to my model. How can I do this? What is this action called? | call a method in Django admin site | 0 | 0 | 0 | 2,361
21,853,660 | 2014-02-18T12:20:00.000 | 0 | 0 | 0 | 0 | python,sqlalchemy,pymysql | 21,866,204 | 2 | false | 0 | 0 | Drop the : from your connection string after your username. It should instead be mysql+pymsql://root@localhost/pydb | 1 | 0 | 0 | I tried to use pymsql with sqlalchemy using this code :
from sqlalchemy import create_engine
engine = create_engine("mysql+pymsql://root:@localhost/pydb")
conn = engine.connect()
and this exception is raised here is the full stack trace :
Traceback (most recent call last):
File "D:\Parser\dal__init__.py", line 3, in
engine = create_engine("mysql+pymsql://root:@localhost/pydb")
File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine__init__.py", line 344, in create_engine
File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine\strategies.py", line 48, in create
File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine\url.py", line 163, in make_url
File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine\url.py", line 183, in _parse_rfc1738_args
File "C:\Python33\lib\re.py", line 214, in compile
return _compile(pattern, flags)
File "C:\Python33\lib\re.py", line 281, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Python33\lib\sre_compile.py", line 498, in compile
code = _code(p, flags)
File "C:\Python33\lib\sre_compile.py", line 483, in _code
_compile(code, p.data, flags)
File "C:\Python33\lib\sre_compile.py", line 75, in _compile
elif _simple(av) and op is not REPEAT:
File "C:\Python33\lib\sre_compile.py", line 362, in _simple
raise error("nothing to repeat")
sre_constants.error: nothing to repeat | Error when trying to use pymysql with sqlalchemy sre_constants.error: nothing to repeat | 0 | 1 | 0 | 1,686 |
21,855,357 | 2014-02-18T13:29:00.000 | 0 | 0 | 1 | 0 | python,django | 61,074,643 | 14 | false | 1 | 0 | Adding to Jon's answer: if timezone.now() is still not working after changing TIME_ZONE = 'Asia/Kolkata',
you can use timezone.localtime() instead of timezone.now().
Hope that solves it. :) | 6 | 56 | 0 | We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST? | How to add Indian Standard Time (IST) in Django? | 0 | 0 | 0 | 54,914
21,855,357 | 2014-02-18T13:29:00.000 | 0 | 0 | 1 | 0 | python,django | 56,727,629 | 14 | false | 1 | 0 | Simply change TIME_ZONE from 'UTC' to 'Asia/Kolkata'; remember that the K and A are capitals here. | 6 | 56 | 0 | We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST? | How to add Indian Standard Time (IST) in Django? | 0 | 0 | 0 | 54,914
21,855,357 | 2014-02-18T13:29:00.000 | 0 | 0 | 1 | 0 | python,django | 63,849,327 | 14 | false | 1 | 0 | LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Asia/Calcutta'
USE_I18N = True
USE_L10N = True
USE_TZ = True
This should work. | 6 | 56 | 0 | We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST? | How to add Indian Standard Time (IST) in Django? | 0 | 0 | 0 | 54,914 |
21,855,357 | 2014-02-18T13:29:00.000 | 0 | 0 | 1 | 0 | python,django | 46,900,867 | 14 | false | 1 | 0 | Modify settings.py and change the time zone to TIME_ZONE = 'Asia/Kolkata' | 6 | 56 | 0 | We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST? | How to add Indian Standard Time (IST) in Django? | 0 | 0 | 0 | 54,914
21,855,357 | 2014-02-18T13:29:00.000 | 6 | 0 | 1 | 0 | python,django | 48,295,467 | 14 | false | 1 | 0 | Use the settings below; they worked for me.
TIME_ZONE = 'Asia/Kolkata'
USE_I18N = True
USE_L10N = True
USE_TZ = False | 6 | 56 | 0 | We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST? | How to add Indian Standard Time (IST) in Django? | 1 | 0 | 0 | 54,914 |
21,855,357 | 2014-02-18T13:29:00.000 | 0 | 0 | 1 | 0 | python,django | 54,306,812 | 14 | false | 1 | 0 | Keep TIME_ZONE = 'Asia/Kolkata' in settings.py file and restart the service from where you are accessing the timezone (server or shell).
In my case, I restarted the python shell in which I was working and it worked fine for me. | 6 | 56 | 0 | We want to get the current time in India in our Django project. In settings.py, UTC is there by default. How do we change it to IST? | How to add Indian Standard Time (IST) in Django? | 0 | 0 | 0 | 54,914 |
21,856,559 | 2014-02-18T14:20:00.000 | 2 | 0 | 0 | 0 | python,xlwt | 22,414,279 | 2 | false | 0 | 0 | You read in the file using xlrd, and then 'copy' it to an xlwt Workbook using xlutils.copy.copy().
Note that you'll need to install both xlrd and xlutils libraries.
Note also that not everything gets copied over. Things like images and print settings are not copied, for example, and have to be reset. | 1 | 1 | 0 | I have created an excel sheet using XLWT plugin using Python. Now, I need to re-open the excel sheet and append new sheets / columns to the existing excel sheet. Is it possible by Python to do this? | How to append to an existing excel sheet with XLWT in Python | 0.197375 | 1 | 0 | 8,628 |
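A hedged sketch of that round trip (the file and sheet names are examples): xlrd reads the existing workbook, xlutils.copy turns it into a writable xlwt workbook, and you then add or edit sheets and save:

    import xlrd
    from xlutils.copy import copy

    # formatting_info=True preserves as much styling as xlrd can read
    rb = xlrd.open_workbook('report.xls', formatting_info=True)
    wb = copy(rb)                       # now a writable xlwt workbook

    ws = wb.add_sheet('NewSheet')       # append a brand-new sheet
    ws.write(0, 0, 'added later')

    existing = wb.get_sheet(0)          # or write extra columns into sheet 0
    existing.write(0, 5, 'new column header')

    wb.save('report.xls')               # overwrites the original file

This works for .xls files only; xlwt doesn't write .xlsx.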
21,857,982 | 2014-02-18T15:21:00.000 | 2 | 0 | 1 | 0 | python,mpi,mpi4py | 21,859,408 | 1 | false | 0 | 0 | There are two ways you can do this off the top of my head. I wouldn't say one is better or worse than the other, though the first probably matches your use case better.
Use the name publishing system (or some other method) to open a connection using MPI_COMM_CONNECT and MPI_COMM_ACCEPT to connect A to whoever needs to communicate with it. This might result in a bunch of communicators for A, depending on how many processes you are creating, which may not scale well, but it is probably the most direct way to make this work. You'll just have to have A do a bunch of calls to MPI_COMM_ACCEPT (unfortunately there isn't a non-blocking version of this call).
Continually merge the intercommunicators that you're creating with MPI_COMM_SPAWN to create one giant communicator containing all of the processes. Then you can just send messages as you usually would (or create new sub-communicators with A and all of the spawnees so you can do collectives among just them).
I'm trying to create a loop, using mpi4py, between several pieces of code that were written separately from one another while minimizing modifications to the codes. So, the general framework of the MPI code is going to be:
Master A (one process) spawns 8 processes of worker B, and scatters an array to them.
Each B process spawns a worker C, does some manipulation of the array, and broadcasts it to its own worker.
Each worker C manipulates the array in its own way, and then (ideally) master A gathers the resulting arrays back from all of the C processes.
I know this will involve opening an intercommunicator between existing processes, possibly using group communication. What would be the best way to accomplish this?
Thank you. | Intercommunication with spawned process in mpi4py? | 0.379949 | 0 | 0 | 563 |
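Below is a minimal mpi4py sketch of the second approach from the answer above: merging the intercommunicator returned by Spawn into one intracommunicator so the parent and its spawnees can exchange messages directly. The file names ('parent.py', 'worker.py'), the process count, and the array contents are illustrative assumptions; each B would repeat the same pattern when it spawns its own C.

# parent.py -- run with: mpiexec -n 1 python parent.py
from mpi4py import MPI
import numpy as np

# Spawn the workers; Spawn returns an intercommunicator (parent group <-> child group).
intercomm = MPI.COMM_SELF.Spawn('python', args=['worker.py'], maxprocs=8)

# Merge it into a single intracommunicator containing the parent and all children.
merged = intercomm.Merge(high=False)  # high=False gives the parent the lowest rank (0)

data = np.arange(8, dtype='d')
merged.Bcast(data, root=0)  # send the array to every worker

# Collect one value back from every process in the merged communicator.
results = np.zeros(merged.Get_size(), dtype='d')
merged.Gather(np.array([0.0]), results, root=0)
print(results)

merged.Free()
intercomm.Disconnect()

# worker.py -- each spawned B process (separate file, sketch only)
from mpi4py import MPI
import numpy as np

intercomm = MPI.Comm.Get_parent()
merged = intercomm.Merge(high=True)  # children take the higher ranks

data = np.zeros(8, dtype='d')
merged.Bcast(data, root=0)
# ... manipulate data here; a B process could Spawn its own C and repeat this same pattern ...
merged.Gather(np.array([data.sum()]), None, root=0)

merged.Free()
intercomm.Disconnect()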
21,866,780 | 2014-02-18T22:24:00.000 | 0 | 0 | 0 | 0 | python,emacs | 21,866,921 | 2 | false | 0 | 0 | The actual command is set-mark-command bound to C-SPC, so you should be able to use C-h k C-SPC to see how it's bound. (C-u just adds a prefix argument).
For my emacs (24.3.1) C-u C-SPC works exactly as you say it should. What version are you using? | 2 | 0 | 0 | I have a simple question. Normally (and currently in other modes), after setting a mark with C-SPC, C-u C-SPC will return the cursor to that mark. However, in (Python) mode, and only (Python) mode, does that behavior not work, wherein it says "C-u C-SPC" is undefined.
I tried to look up the function and rebind it myself (i.e. C-h k then the command) but that returned as soon as I typed C-u. Can someone tell me the actual command C-u C-SPC invokes,
and/or why (Python) mode seems to unbind it?
21,866,780 | 2014-02-18T22:24:00.000 | 0 | 0 | 0 | 0 | python,emacs | 21,873,256 | 2 | false | 0 | 0 | Works nicely from emacs -Q (python.el) as with python-mode.el
Also can't imagine one of the Python-IDE's out there took this key.
Maybe start from emacs -Q and load your init-file step by step. | 2 | 0 | 0 | I have a simple question. Normally (and currently in other modes), after setting a mark with C-SPC, C-u C-SPC will return the cursor to that mark. However, in (Python) mode, and only (Python) mode, does that behavior not work, wherein it says "C-u C-SPC" is undefined.
I tried to look up the function and rebind it myself (i.e. C-h k then the command) but that returned as soon as I typed C-u. Can someone tell me the actual command C-u C-SPC invokes,
and/or why (Python) mode seems to debind it? | jump-to-mark (C-u C-SPC) in emacs python mode not working | 0 | 0 | 0 | 158 |
21,866,951 | 2014-02-18T22:34:00.000 | 1 | 1 | 0 | 0 | python | 21,867,082 | 7 | false | 0 | 0 | The (effective) network speed is simply bytes transferred in a given time interval, divided by the length of the interval. Obviously there are different ways to aggregate / average the measurements, and they give you different "measures" ... but it all basically boils down to division. | 1 | 5 | 0 | I'm using the library called psutil to get system/network stats, but I can only get the total uploaded/downloaded bytes in my script.
What would be the way to natively get the network speed using Python? | Get upload/download kbps speed | 0.028564 | 0 | 0 | 12,062 |
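A small sketch of the "sample the counters twice and divide by the interval" idea from the answer above, using psutil's net_io_counters; the one-second sampling interval and KB/s units are arbitrary choices.

# Rough upload/download speed by sampling psutil's cumulative byte counters twice.
import time
import psutil

interval = 1.0  # seconds between samples (arbitrary choice)

before = psutil.net_io_counters()
time.sleep(interval)
after = psutil.net_io_counters()

upload_kbps = (after.bytes_sent - before.bytes_sent) / interval / 1024.0
download_kbps = (after.bytes_recv - before.bytes_recv) / interval / 1024.0

print('upload: %.1f KB/s, download: %.1f KB/s' % (upload_kbps, download_kbps))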
21,866,960 | 2014-02-18T22:35:00.000 | 0 | 1 | 0 | 0 | python,flask,eve | 21,874,409 | 1 | true | 0 | 0 | If performance is a concern you should consider using Redis, or something like that, to store this sort of frequently updating data. You could then reconcile with the database when appropriate (idle moments, etc.).
That being said, since you are writing to the database after the response has been sent, you aren't actively delaying the response (which you would be doing if you hooked into the on_fetch event instead).
I guess it all depends on 1) the kind of traffic your API is going to handle and 2) where you are storing these stats. If you are going to get a lot of traffic (or want to be ready for it), then consider using an alternative store (possibly in memory) separate from your main database. | 1 | 0 | 0 | I am looking for a way to increment the numOfViews field when an item is retrieved via GET. My current approach is to hook into the app.on_post_GET_items event and update the field accordingly; is this something that is typically done? My concern is that this will slow down the 'GET' (i.e. read) operation, since we always 'write' afterward. Is there a better solution in general? | In Python-Eve, what is the most efficient way to update the NumOfView field? | 1.2 | 0 | 0 | 317
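A rough sketch of the Redis-counter idea from the accepted answer, wired into the on_post_GET_items hook that the question already mentions. The Redis connection details, the key naming scheme, and the 'items' resource name are assumptions, not something prescribed by the answer.

# Count views in Redis from Eve's post-GET event; reconcile with the main database later.
import json

import redis
from eve import Eve

app = Eve()
counter_store = redis.StrictRedis(host='localhost', port=6379, db=0)  # assumed local Redis

def count_item_views(request, payload):
    # Runs after the response for GET requests on the 'items' resource has been built.
    try:
        doc = json.loads(payload.get_data(as_text=True))
    except ValueError:
        return
    item_id = doc.get('_id')  # present on single-item GETs; absent on collection GETs
    if item_id:
        counter_store.incr('views:%s' % item_id)  # cheap in-memory increment

app.on_post_GET_items += count_item_views

# A periodic job (cron, idle moments, etc.) would read the 'views:*' keys from Redis
# and write the accumulated totals back into the numOfViews field in the main database.

if __name__ == '__main__':
    app.run()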