Column schema (name: type, min to max; for string columns the range is the string length). The records below list these fields in this order, one value per line.

Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
21,867,972
2014-02-18T23:43:00.000
1
0
0
0
python,url,web,web2py
21,868,611
1
false
1
0
Well, I guess you're going to have to build the world's most remarkable single page application :) Security through obscurity is never a good design pattern. There is absolutely no security "reason" for hiding a URL if your system is designed in such a way that the use of the URLs is meaningless unless the access control layer defines permissions for such use (usually through an authentication and role/object based permission architecture). Keep in mind - anyone these days can use the Chrome inspector to see whatever you are trying to hide in the address bar. For example, say you want to load domain.com/adduser. Sure, you can make an AJAX call to that URL, and the browser address bar would never change from domain.com/ - but a quick look in the source will uncover /adduser pretty quickly. Sounds like you need to have a think about what these addresses really expose and start locking them down.
1
0
0
I am building a website using web2py. For security reasons I would like to hide the url after the domain to the visitors. For example, when a person clicks a link to "domain.com/abc", it will go to that page and the address bar shows "domain.com". I have played with the routes_in and routes_out, but it only seems to map your typed url to a destination but not hiding the url. How can I do that? Thanks!
How to hide url after domain in web2py?
0.197375
0
0
493
21,868,709
2014-02-19T00:47:00.000
1
0
0
0
python,api,flask
25,578,832
1
false
1
0
Tornado would do the trick. Flask is not designed for asynchronous handling: a Flask instance processes one request at a time in one thread. Therefore, while you hold the connection open, it will not proceed to the next request.
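A minimal sketch of the synchronous option described in the question, assuming a hypothetical Celery task named fetch_info in a tasks module: the Flask view enqueues the job and blocks on its result, holding the HTTP connection open until it finishes.

```python
# Hypothetical Celery task module "tasks" with a task named fetch_info.
from flask import Flask, jsonify
from tasks import fetch_info

app = Flask(__name__)

@app.route("/info/<item_id>")
def info(item_id):
    async_result = fetch_info.delay(item_id)   # enqueue the background job
    data = async_result.get(timeout=120)       # block this request until it finishes
    return jsonify(result=data)
```

With Flask's default setup this ties up one worker per waiting client, which is exactly the limitation the answer points out.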
1
0
0
I have an HTTP API using Flask and in one particular operation clients use it to retrieve information obtained from a 3rd party API. The retrieval is done with a Celery task. Usually, my approach would be to accept the client request for that information and return a 303 See Other response with a URI that can be polled for the response once the background job is finished. However, some clients require the operation to be done in a single request. They don't want to poll or follow redirects, which means I have to run the background job synchronously, hold on to the connection until it's finished, and return the result in the same response. I'm aware of Flask streaming, but how do I do such long-polling with Flask?
Flask request waiting for asynchronous background job
0.197375
0
1
2,430
21,871,636
2014-02-19T05:13:00.000
0
0
1
0
python,parsing,query-string
21,872,039
1
false
1
0
It depends on the website itself. If it has other values of field1 or field2, you can only know that by looking into the code or documentation (if available). That's the only accurate way of knowing it. Otherwise, you can try brute forcing (trying every possible alphanumeric value), but that doesn't guarantee anything, and in that case you'll need a way to know which values are valid and which are not. Hardly efficient.
1
0
0
I'm trying to figure out how to parse a website that doesn't have documentation available to explain the query string. I am wondering if there is a way to get all possible valid values for different fields in a query string using Python. For example, let's say I have the current URL that I wish to parse: http://www.website.com/stat?field1=a&field2=b Is there a way to find all of the possible values for field1 that return information? Let's say that field1 of the qs can take either values "a" or "z" and I do not know it can take value "z". Is there a way to figure out that "z" is the only other value that is possible in that field without any prior knowledge?
Find all possible query string values with python
0
0
1
299
21,871,784
2014-02-19T05:24:00.000
0
0
0
0
python,namespaces
21,871,871
1
true
0
0
Currently, I'm using a crude solution: I create a dict for each connection and exec the code in that dict accordingly, i.e. exec code in connection_dict[connection]. Any smarter solution? Such as the Python C API? Thanks again!
1
0
0
I want to write a simple python C/S exec code model, which will send all codes written in client to execute in server. Simply, you can think that I'm using exec(code, globals()) to run remote code. And I meet a problem about namespace : If I import something in a connection, another connection can also use this module. For example, we have two connections: A and B. I import os in connection A, then connection B can use os module also. Question : And what I want is that each connection have its own execute environment, say 'globals'.
How to set local namespace for specified connection?
1.2
0
1
23
21,871,997
2014-02-19T05:38:00.000
5
0
0
0
python,openerp,importerror,openerp-7
21,872,185
1
true
1
0
You are getting this error because psutil is not installed. Install it with sudo apt-get install python-psutil in a terminal, then restart the server. This should resolve the error.
1
2
0
I downloaded the trunk version of OpenERP from Launchpad. When I start the server it gives the following error: Traceback (most recent call last): File "./openerp-server", line 2, in <module> import openerp File "/home/jack/trunk/trunk-server/openerp/__init__.py", line 72, in <module> import http File "/home/jack/trunk/trunk-server/openerp/http.py", line 37, in <module> from openerp.service import security, model as service_model File "/home/jack/trunk/trunk-server/openerp/service/__init__.py", line 28, in <module> import server File "/home/jack/trunk/trunk-server/openerp/service/server.py", line 10, in <module> import psutil ImportError: No module named psutil
OpenERP trunk server gives import Error psutil
1.2
0
0
3,312
21,872,179
2014-02-19T05:50:00.000
2
0
1
0
python,json,scrapy
21,873,566
2
false
1
0
I think you should use scrapy crawl yourspider -o output.json -t json, where -o sets the output filename and -t the output format.
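For completeness, a hedged sketch of doing the same thing from a script rather than the command line, using the FEED_URI/FEED_FORMAT settings the question mentions (the spider import path is illustrative, and newer Scrapy versions replace these settings with FEEDS):

```python
from scrapy.crawler import CrawlerProcess
from myproject.spiders.example import ExampleSpider  # hypothetical spider

process = CrawlerProcess(settings={
    "FEED_URI": "output.json",     # like -o output.json
    "FEED_FORMAT": "json",         # like -t json
})
process.crawl(ExampleSpider)
process.start()                    # blocks until the crawl is done
```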
1
1
0
I'm able to run a scrapy spider from a script. But I want to store the output in a specific file(say output.json) in json format. I did a lot of research & also tried to override FEED_URI & FEED_FORMAT from settings. I also tried to use JsonItemExporter function but all in vain. Any help will be appreciated. Thanks!
Setting/Configuring the output file after running scrapy spider
0.197375
0
0
487
21,872,515
2014-02-19T06:12:00.000
1
0
0
0
python,url
21,872,656
1
true
0
0
You cannot do this in a platform-independent way. You need to use pywin32 on Windows (or any other suitable module which provides access to the platform API, for example pywm) to get hold of the browser window (you can find it by window name). After that you should walk through its child windows to reach the one which represents the URL string. Finally you can read the text of that control.
1
0
0
I searched the net but couldn't find anything that works. I am trying to write a Python script which will trigger a timer if a particular URL is opened in the current browser. How do I obtain the URL from the browser?
Python: Getting current URL
1.2
0
1
477
21,877,935
2014-02-19T10:37:00.000
0
0
1
0
python,notepad
21,878,937
3
false
0
0
First check which command opens Notepad, then run that command with subprocess or os.system. Alternatively, use Python's built-in open() to create and write the file directly.
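A small sketch of that suggestion, with an illustrative file path: write and save the contents directly from Python, then open the resulting file in Notepad with subprocess.

```python
import subprocess

path = r"C:\temp\automation_output.txt"   # illustrative path

# Write (i.e. "save") the contents from Python ...
with open(path, "w") as f:
    f.write("some automated content\n")

# ... then show the already-saved file in Notepad.
subprocess.Popen(["notepad.exe", path])
```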
1
0
0
I am doing an automation task in which I have to open Notepad, write some content and save that file. I know how to open it and do keyboard simulation. Is there any way I can save that opened Notepad file through the script?
Opening Notepad and saving using python
0
0
0
3,320
21,878,696
2014-02-19T11:07:00.000
0
1
0
0
python,unit-testing
21,879,161
1
true
0
0
The purpose of a unit test is to verify the implementation of a requirement. As any other piece of software, you have to distinguish what the unit test does, how it tests the requirement (roughly speaking its design), and how it is implemented. Unless the requirement itself is changed, the design of the unit test should not be changed. However, it may happen that a change from another requirement impacts its implementation (because of side effect, interface change, etc.). Then according to your process, you may let the new implementation be reviewed to make sure that the change doesn't impact the nature of the test and that the original requirement is still fulfilled.
1
0
0
I'm not sure if the language I'm using makes a difference or not, but for the record it's python (2.7.3). I recently tried to add functionality to a project I forked on GitHub. Specifically, I changed the underlying http request library from httplib2 to requests, so that I could easily add proxies to requests. The resultant function calls changed slightly (more variables passed and in a slightly different order), and the mock unit test calls failed as a result. What's the best approach to resolving this? Is it OK to just jump in and rewrite the unit test so that they pass with the new function calls? Intuitively, that would seem to be undermining the purpose of unit tests somewhat.
Changing unit tests based on added functionality
1.2
0
1
29
21,884,127
2014-02-19T14:56:00.000
3
0
1
0
python,editor,keyboard-shortcuts,pycharm
21,885,102
1
false
0
0
The shortcuts are: Next Tab = Alt+Right, Previous Tab = Alt+Left, while the cursor is in the Terminal. You can see all the shortcuts (and add more, or change, or delete them) in the menu: File -> Settings... -> [IDE Settings] Keymap
1
2
0
I recently started to use PyCharm. Its embedded terminal is really cool. We can create multiple terminal sessions using Ctrl+Shift+T and close sessions using Ctrl+Shift+W. But how do I toggle between these sessions? Is there a keyboard shortcut? Also, where can I get a list of all shortcuts? Thanks in advance.
keyboard shortcut for toggling sessions of terminals in pycharm
0.53705
0
0
410
21,884,271
2014-02-19T15:00:00.000
1
0
0
0
python,python-3.x,matplotlib
71,455,335
7
false
0
0
matplotlib by default keeps a reference to all the figures created through pyplot. If a single variable used for storing a matplotlib figure (e.g. "fig") is modified and rewritten without clearing the figure, all the plots are retained in RAM. It's important to use plt.cla() and plt.clf() instead of just modifying and reusing the fig variable. If you are plotting thousands of different plots and saving them without clearing the figure, your RAM will eventually be exhausted and the program will be terminated. Clearing the axes and figures has a significant impact on memory usage if you are plotting many figures. You can monitor your RAM consumption in Task Manager (Windows) or System Monitor (Linux). First your RAM gets exhausted, then the OS starts consuming swap; once both are exhausted, the program is automatically terminated. It's better to clear figures and axes, and close them if they are not required.
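A minimal sketch of that advice: clear each figure after saving and close it so pyplot drops its reference.

```python
import matplotlib.pyplot as plt

for i in range(1000):
    fig, ax = plt.subplots()
    ax.plot(range(10))
    fig.savefig("plot_%d.png" % i)
    fig.clf()          # clear the axes/artists held by the figure
    plt.close(fig)     # let pyplot forget the figure so memory can be reclaimed
```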
1
233
1
In a script where I create many figures with fig, ax = plt.subplots(...), I get the warning RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory. However, I don't understand why I get this warning, because after saving the figure with fig.savefig(...), I delete it with fig.clear(); del fig. At no point in my code do I have more than one figure open at a time. Still, I get the warning about too many open figures. What does that mean / how can I avoid getting the warning?
warning about too many open figures
0.028564
0
0
164,804
21,885,365
2014-02-19T15:42:00.000
0
0
0
0
python,flask,blueprint
21,885,753
2
false
1
0
Am I right that your logic should be in models and service classes, and blueprints (a.k.a. views) should only be a thin layer between the templates and those modules?
1
1
0
I'm a bit confused about separation for my flask app. Users can login, post adverts and these are available to the public. The URL structure would be something like this: User Home - www.domain.com/user User login - www.domain.com/user/login User advert List - www.domain.com/user/advert User advert add - www.domain.com/user/vacancy/add Public Advert - www.domain.com/advert/1 The issue comes from the fact that there is advert forms and logic which is required inside and outside of the user control panel. Which of these is the most correct way of laying out my application: Option 1: User Blueprint (no url prefix) Contains all user related logic Advert Blueprint (no url prefix) Contains all advert related logic, including the user posting adverts and displaying them to the public Option 2 User Blueprint (/user/ prefix) Contains user logic and advert logic (adding adverts from the user control panel) Advert Blueprint (/advert/ prefix) Contains advert logic relating only to advert tasks outside of the user control panel.
Flask python blueprint logic code separation
0
0
0
344
21,885,856
2014-02-19T16:03:00.000
1
0
0
0
python,jinja2,python-sphinx
21,909,382
2
true
1
0
I've found a good way to do this. Sphinx's configuration parameter template_bridge gives control over the TemplateBridge object, which is responsible for rendering themes. The standard sphinx.jinja2glue TemplateBridge constructs its environment attribute in the init method (it's not a constructor; an unfortunate name for the method) - this is the jinja2 environment used for template rendering. So just subclass TemplateBridge and override the init method.
1
0
0
I'd like to implement custom navigation to my sphinx docs. I use my custom theme based on basic sphinx theme. But I don't know how to create new tag for template system or use my custom sphinx plugin's directive in html templates. Any ideas where I can plug in? Update As I can see in sphinx sources, jinja2 environment constructed in websupport jinja2glue module. Though I can't understand the way it can be reconfigured besides monkey-patching.
Custom jinja2 tag in sphinx template
1.2
0
0
737
21,890,973
2014-02-19T19:54:00.000
3
1
0
1
python,openshift
21,893,287
2
true
0
0
You are looking for the add-on cartridge that is called cron. However, by default the cron cartridge only supports jobs that run every minute or every hour. You would have to write a job that runs minutely to determine if its a 10 minute interval and then execute your script. Make sense? rhc cartridge add cron -a yourAppName Then you will have a cron directory in application directory under .openshift for placing the cron job.
1
1
0
How do I create a schedule on OpenShift hosting to run a Python script that parses RSS feeds and sends the filtered information to my email? Is this feature available? Please help if you work with the free tier of this hosting. I have a script that works fine, but I don't know how to run it every 10 minutes to catch freelance jobs. Alternatively, does anyone know of free Python hosting that can schedule scripts?
OpenShift, Python Application run script every 10 min
1.2
0
0
1,892
21,892,302
2014-02-19T20:58:00.000
1
0
0
0
python,amazon-web-services,cron,queue,boto
21,899,718
1
false
1
0
Question: (1) How to use boto to pass the variable/argument to another AWS instance and start a script to work on those variable Use shared datasource, such as DynamoDB or messaging framework such as SQS and .. use boto to retrieve the result back to the master box. Again, shared datasource, or messaging. (2) What is the best way to schedule a job only on specific time period inside Python code. Say only work on 6:00pm to 6:00 am everyday... I don't think the Linux crontab will fit my need in this situation. I think crontab fits well here.
1
0
0
I have 200,000 URLs that I need to scrape from a website. This website has a very strict scraping policy and you will get blocked if the scraping frequency is 10+ /min. So I need to control my pace. And I am thinking about start a few AWS instances (say 3) to run in parallel. In this way, the estimated time to collect all the data will be: 200,000 URL / (10 URL/min) = 20,000 min (one instance only) 4.6 days (three instances) which is a legit amount of time to get my work done. However, I am thinking about building a framework using boto. That I have a paragraph of code and a queue of input (a list of URLs) in this case. Meanwhile I also don't want to do any damage to their website so I only want to scrape during the night and weekend. So I am thinking about all of this should be controlled on one box. And the code should look similar like this: class worker (job, queue) url = queue.pop() aws = new AWSInstance() result aws.scrape(url) return result worker1 = new worker() worker2 = new worker() worker3 = new worker() worker1.start() worker2.start() worker3.start() The code above is totally pseudo and my idea is to pass the work to AWS. Question: (1) How to use boto to pass the variable/argument to another AWS instance and start a script to work on those variable and .. use boto to retrieve the result back to the master box. (2) What is the best way to schedule a job only on specific time period inside Python code. Say only work on 6:00pm to 6:00 am everyday... I don't think the Linux crontab will fit my need in this situation. Sorry about that if my question is more verbally descriptive and philosophical.. Even if you can offer me any hint or throw away some package/library name that meet my need. I will be gratefully appreciated!
BOTO distribute scraping tasks among AWS
0.197375
0
1
262
21,893,973
2014-02-19T22:28:00.000
0
0
0
0
python,algorithm,graph,scipy,mathematical-optimization
21,894,459
2
false
0
0
The prohibition against self-flows makes some instances of this problem infeasible (e.g., one node that has in- and out-flows of 1). Otherwise, a reasonably sparse solution with at most one self-flow always can be found as follows. Initialize two queues, one for the nodes with positive out-flow from lowest ID to highest and one for the nodes with positive in-flow from highest ID to lowest. Add a flow from the front node of the first queue to the front node of the second, with quantity equal to the minimum of the out-flow of the former and the in-flow of the latter. Update the out- and in-flows to their residual values and remove the exhausted node(s) from their queues. Since the ID of the front of the first queue increases, and the ID of the front of the second queue decreases, the only node that self-flows is the one where the ID numbers cross. Minimizing the total flow is trivial; it's constant. Finding the sparsest solution is NP-hard; there's a reduction from subset sum where each of the elements being summed has a source node with that amount of out-flow, and two more sink nodes have in-flows, one of which is equal to the target sum. The subset sum instance is solvable if and only if no source flows to both sinks. The algorithm above is a 2-approximation. To get rid of the self-flow on that one bad node sparsely: repeatedly grab a flow not involving the bad node and split it into two, via the bad node. Stop when we exhaust the self-flow. This fails only if there are no flows left that don't use the bad node and there is still a self-flow, in which case the bad node has in- and out-flows that sum to more than the total flow, a necessary condition for the existence of a solution. This algorithm is a 4-approximation in sparsity.
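A sketch of the two-queue construction described above, in plain Python (out_flow and in_flow are hypothetical dicts mapping node id to the required out- and in-flow sums, which must have equal totals):

```python
from collections import deque

def sparse_flows(out_flow, in_flow):
    sources = deque(sorted(n for n, v in out_flow.items() if v > 0))               # lowest ID first
    sinks = deque(sorted((n for n, v in in_flow.items() if v > 0), reverse=True))  # highest ID first
    out_res, in_res = dict(out_flow), dict(in_flow)
    edges = {}  # (u, v) -> weight; at most one entry may have u == v
    while sources and sinks:
        u, v = sources[0], sinks[0]
        q = min(out_res[u], in_res[v])
        edges[(u, v)] = edges.get((u, v), 0) + q
        out_res[u] -= q
        in_res[v] -= q
        if out_res[u] == 0:
            sources.popleft()
        if in_res[v] == 0:
            sinks.popleft()
    return edges

print(sparse_flows({1: 3, 2: 2}, {1: 1, 2: 4}))  # {(1, 2): 3, (2, 2): 1, (2, 1): 1}
```

Note that the example output contains the single self-flow (2, 2) that the construction allows, which the post-processing step described above would then split away.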
1
0
1
I'm looking for a solution to the following graph problem in order to perform graph analysis in Python. Basically, I have a directed graph of N nodes where I know the following: The sum of the weights of the out-edges for each node The sum of the weights of the in-edges for each node Following from the above, the sum of the sum across all nodes of the in-edges equals the sum of the sum of out-edges No nodes have edges with themselves All weights are positive (or zero) However, I know nothing about to which nodes a given node might have an edge to, or what the weights of any edges are Represented as a weighted adjacency matrix, I know the column sums and row sums but not the value of the edges themselves. I've realized that there is not a unique solution to this problem (Does anyone how to prove that, given the above, there is an assured solution?). However, I'm hoping that I can at least arrive at a solution to this problem that minimizes the sum of the edge weights or maximizes the number of 0 edge weights or something along those lines (Basically, out of infinite choices, I'd like the most 'simple' graph). I've thought about representing it as: Min Sum(All Edge Weights) s.t. for each node, the sum of its out-edge weights equals the known sum of these, and the sum of its in-edge weights equals the known sum of these. Additionally, constrained such that all weights are >= 0 I'm primarily using this for data analysis in Scipy and Numpy. However, using their constrained minimization techniques, I'll end up with approximately 2N^2-2N constraints from the edge-weight sum portion, and N constraints from the positive portion. I'm worried this will be unfeasible for large data sets. I could have up to 500 nodes. Is this a feasible solution using SciPy's fmin_cobyla? Is there another way to layout this problem / another solver in Python that would be more efficient? Thanks so much! First post on StackOverflow.
SciPy - Constrained Minimization derived from a Directed Graph
0
0
0
200
21,895,657
2014-02-20T00:30:00.000
0
0
0
0
python,pygame
21,896,365
2
false
0
1
This one is pretty easy; it seems you forgot to add an event callback. You need something like: state = 1, then while state != 0: for event in pygame.event.get(): ...
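A minimal, runnable version of the loop the answer alludes to, pumping pygame's event queue every frame (window size and frame rate are arbitrary):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()

state = 1
while state != 0:
    for event in pygame.event.get():   # keep draining the event queue every frame
        if event.type == pygame.QUIT:
            state = 0
    screen.fill((0, 0, 0))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```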
1
5
0
I just recently learned Python and Pygame library. I noticed that the rendering and main loop is automatically paused when I click and hold on the window menu bar (the bar with the title/icon). For example, in a snake game, the snake will be moving every frame. When I click and hold (or drag) the window menu, the snake is not moving anymore and the game is "paused". When I release it, it resumes. Is there a way to let the game NOT pause when I drag the windows menu bar?
Pygame built-in automatic pause?
0
0
0
430
21,896,030
2014-02-20T01:06:00.000
5
0
0
0
python,numpy,copy-on-write
21,900,644
2
true
0
0
Copy-on-write is a nice concept, but explicit copying seems to be "the NumPy philosophy". So personally I would keep the "readonly" solution if it isn't too clumsy. But I admit having written my own copy-on-write wrapper class. I don't try to detect write access to the array. Instead the class has a method "get_array(readonly)" returning its (otherwise private) numpy array. The first time you call it with "readonly=False" it makes a copy. This is very explicit, easy to read and quickly understood. If your copy-on-write numpy array looks like a classical numpy array, the reader of your code (possibly you in 2 years) may have a hard time.
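A simplified sketch of the wrapper idea described above (this variant copies on every writable request rather than only the first, which keeps the cache untouched; class and method names are illustrative):

```python
import numpy as np

class CachedArray:
    def __init__(self, array):
        self._array = array               # private, cached data

    def get_array(self, readonly=True):
        if readonly:
            return self._array            # shared, no copy; caller agrees not to write
        return self._array.copy()         # writable copy that leaves the cache untouched

cached = CachedArray(np.arange(1_000_000))
a = cached.get_array()                    # no copy made
b = cached.get_array(readonly=False)      # explicit copy, safe to modify
```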
1
4
1
I have a class that returns large NumPy arrays. These arrays are cached within the class. I would like the returned arrays to be copy-on-write arrays. If the caller ends up just reading from the array, no copy is ever made, so no extra memory is used. However, the array is "modifiable", but modifying it does not modify the internal cached arrays. My solution at the moment is to make any cached arrays readonly (a.flags.writeable = False). This means that the caller of the function may have to make their own copy of the array if they want to modify it. Of course, if the source was not from cache and the array was already writable, then they would duplicate the data unnecessarily. So, optimally I would love something like a.view(flag=copy_on_write). There seems to be a flag for the reverse of this, UPDATEIFCOPY, which causes a copy to update the original once deallocated. Thanks!
NumPy Array Copy-On-Write
1.2
0
0
2,415
21,896,157
2014-02-20T01:19:00.000
1
0
0
0
python,image,flask,memcached
21,925,544
1
true
1
0
Yes, you can do it. Create a controller or servlet called, for example, www.yoursite.com/getImage/ID. When this URL is requested, your program should connect to memcached and return the image object that you have previously stored in it. Finally, when in your HTML you add src="www.yoursite.com/getImage/ID", the browser will request this URL, but instead of reading a file from disk the server will ask memcached for the specific ID. Be sure to set the correct content-type on your response from the server so that the browser understands that you are sending image content.
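A hedged sketch of that idea using Flask (as in the question) and the python-memcached client; the route, key scheme and image type are illustrative:

```python
from flask import Flask, Response, abort
import memcache  # python-memcached

app = Flask(__name__)
mc = memcache.Client(["127.0.0.1:11211"])

@app.route("/getImage/<image_id>")
def get_image(image_id):
    data = mc.get("image:%s" % image_id)   # raw bytes stored earlier
    if data is None:
        abort(404)
    return Response(data, mimetype="image/png")
```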
1
0
0
Im writing simple blogging platform in Flask microframework, and I'd like to allow users to change image on the front page but without actually writing it into filesystem. Is it possible to point src attribute in img tag to an object stored in memory?
Using memcached to host images
1.2
0
0
112
21,897,254
2014-02-20T03:03:00.000
0
0
1
0
python,windows,python-2.7
35,723,227
1
false
0
0
You have typed Python. You need lower case: python. (Obviously, make sure you have it installed first; unlike Mac OS X, Windows does not come with it in most default installations.)
1
0
0
I have opened PowerShell, and when I type Python I get the message: The term 'Python' is not recognized as the name of a cmdlet, function, script file, or operable program. I did what the book said and typed in the [Enviroment]::SetEnviromentVariable etc. It's still not working. What do I try next? I'm running Windows 7.
Learning Python the Hardway
0
0
0
95
21,899,681
2014-02-20T06:14:00.000
1
0
0
0
python-2.7,pyqt4,mouseover,qlineedit
21,900,144
2
false
0
1
You can use enterEvent and leaveEvent: enterEvent is triggered when the mouse enters the widget and leaveEvent is triggered when the mouse leaves the widget. These events are in the QWidget class, and since QLineEdit inherits QWidget, you can use them in QLineEdit. If you don't see these events in QLineEdit's documentation, click the link List of all members, including inherited members at the top of the page.
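A small PyQt4 sketch of that approach: subclass QLineEdit, override the inherited QWidget hover events, and re-expose them as signals (class and signal names are illustrative).

```python
from PyQt4 import QtCore, QtGui

class HoverLineEdit(QtGui.QLineEdit):
    mouseEntered = QtCore.pyqtSignal()
    mouseLeft = QtCore.pyqtSignal()

    def enterEvent(self, event):
        self.mouseEntered.emit()
        QtGui.QLineEdit.enterEvent(self, event)

    def leaveEvent(self, event):
        self.mouseLeft.emit()
        QtGui.QLineEdit.leaveEvent(self, event)

# usage: edit = HoverLineEdit(); edit.mouseEntered.connect(some_slot)
```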
1
2
0
I have a QLineEdit, and I need to know if there is a signal which can track mouse hover over that QLineEdit, and once mouse is over that QLineEdit it emits a signal. I have seen the documents, and found we have the following signals: cursorPositionChanged ( int old, int new ) editingFinished () returnPressed () selectionChanged () textChanged ( const QString & text ) textEdited ( const QString & text ) However, none of this is exactly for hover-over. Can you suggest if this can be done by any other way in PyQt4?
QLineEdit hover-over Signal - when mouse is over the QlineEdit
0.099668
0
0
1,894
21,902,861
2014-02-20T09:02:00.000
0
0
0
0
python,openerp,record,crud,access-rights
21,903,368
1
true
1
0
It seems this one works: Needed to create two rules (applies r,w,c): ['&', ('user_id','=',user.id),('state','=','stage1')] And second rule (applies r): [('stage','=','stage2')]
1
0
0
I need to apply different record rules on same object to give different access rights depending on state that record is. For example there are three stages: stage1, stage2, stage3. On first stage user with specific access rights group can do this: Read, Write, Create his own records. When he presses button to go to stage2, then he can only Read that record (if that record would go back to stage1 - not by that user, then he could do previous things). And on stage3 that user does not see any records nor his nor any others. I tried doing something like this: First rule (applies r,w,c): [('user_id','=',user.id)] This one works. But I get problems when going to other stages. I tried to create another rule2 (applies r): [('stage','=','stage2')] But it does not work, that user can still do anything that he can do in stage1. If I make rule like this (applies r,w,c): ['|', ('user_id','=',user.id),('stage','=','stage1')] Then it gives access rights error that you can't go to next stage, because you don't have read access rights on that stage. How can this be solved?..
Record rules on same object with different CRUD options?
1.2
0
0
78
21,903,246
2014-02-20T09:18:00.000
1
0
1
1
python,vim
21,903,485
2
true
0
0
Vim's Python integration (i.e. the :python[3] commands that most plugins use) does not depend on the python interpreter binary (from PATH); instead, Vim must have been compiled with the Python library(-ies), which you can check in the :version output (look for +python, and the -DDYNAMIC_PYTHON_DLL=...). To be able to use both Python versions, you need both +python/dyn and +python3/dyn, and the corresponding DLLs accessible. You can check with the :py / :py3 commands.
1
1
0
I'm using Vim and lots of Vim plugins, on a Windows machine. Some of these plugins use Python 2, and some use Python 3. I can use only one in the system %PATH% environment variable, how can I overcome this limitation?
Using both Python 2 and 3 in Vim (on Windows)
1.2
0
0
1,174
21,905,084
2014-02-20T10:33:00.000
1
0
1
0
python,vim,spf13vim
21,905,085
1
true
0
0
When folding (or anything) goes awry in Vim, whether you have spf13vim or any other elaborate .vimrc mod, begin with this... rm ~/.vimviews/* ...and restart Vim. It cleared this problem up like magic. I hope this helps someone stay young.
1
0
0
I had folding just stop working in my Python sources... without rhyme or reason. The fix wasn't immediately obvious and I did not find an answer on SO... I'm using spf13vim and I have tried set foldmethod=indent six ways to Sunday and no dice... I have nothing else futzing with folding in my .vimrc.local, and I have already tried updating spf13vim, although the problem just started willy-nilly while I was coding, after writing my buffer to disk.
What to do when vim syntax folding fails altogether?
1.2
0
0
55
21,905,560
2014-02-20T10:54:00.000
0
0
0
0
python,flask
21,905,637
1
true
1
0
You need to set your router to forward the relevant port to your laptop.
1
0
0
I have flask running on my MacBook (10.9.1 if it makes a difference). I have no problem accessing what I have hosted there over my local network, but I'm trying to see if I can access it publicly. For example, load a webpage on my iPhone over its 3G connection. It doesn't appear to be as simple as /index. With my limited knowledge, my public IP seems to be the one for our internet connection rather than my own laptop. Is that what is causing the issue? Appreciate any help!
Connect to flask over public connection
1.2
0
1
115
21,907,349
2014-02-20T12:07:00.000
0
0
0
0
python,django,amazon-web-services,django-south
21,927,635
2
false
1
0
Turns out the problem was that the git fetch on one of the front servers didn't take, which is what was causing the issue. It had nothing to do with running migrations in parallel (though I shouldn't have done that anyway).
1
0
0
I ran a code update that points at two front end servers (Amazon Web Service instances). A South migration was included as part of the update. Since the migration, the live site appears to flit between the current code revision and the previous revision, at will. Since discovering this, a previous developer (who had left the company before I turned up) said, and I quote: "never run migrations in parallel. Running migrations twice causes duplication of new objects and other errors!" My code changes did not involve any models.py changes; the migrate commands were just part of the fabric update script. Also, no errors were thrown during the migrations; they seemingly ran as normal. I have database backups, so I can roll back the database as a last resort. Is there any other way to sort the issue without doing this? Thanks for reading. Edit: I should add that I pushed the same code to a staging server and it worked fine, so the issue isn't the code.
Parrallel south migration in django causes errors
0
0
0
87
21,908,068
2014-02-20T12:38:00.000
3
0
1
0
python,json,django,postgresql
21,909,779
1
false
1
0
Storing data as json (whether in text-typed fields, or PostgreSQL's native jsontype) is a form of denormalization. Like most denormalization, it can be an appropriate choice when working with very difficult to model data, or where there are serious performance challenges with storing data fully normalized into entities. PostgreSQL reduces the impact of some of the problems caused by data denormalization by supporting some operations on json values in the database - you can iterate over json arrays or key/value pairs, join on the results of json field extraction, etc. Most of the useful stuff was added in 9.3; in 9.2, json support is just a validating data type. In 9.4, much more powerful json features will be added, including some support for indexing in json values. There's no simple one-size-fits all answer to your question, and you haven't really characterized your data or your workload. Like most database challenges "it depends" on what you're doing with the data. In general, I would tend to say it's best to relationally model the data if it is structured and uniform. If it's unstructured and non-uniform, storage with something like json may be more appropriate.
1
1
0
I'm sometimes using a TextField to store data with a structure that may change often (or very complex data) into model instances, instead of modelling everything with the relational paradigm. I could mostly achieve the same kind of things using more models, foreignkeys and such, but it sometimes feels more straightforward to store JSON directly. I still didn't delve into postgres JSON type (can be good for read-queries notably, if I understand well). And for the moment I perform some json.dumps and json.loads each time I want to access this kind of data. I would like to know what are (theoretically) the performance and caching drawbacks of doing so (using JSON type and not), compared to using models for everything. Having more knowledge about that could help me to later perform some clever comparison and profiling to enhance the overall performance.
Django & postgres - drawbacks of storing data as json in model fields
0.53705
1
0
1,302
21,909,346
2014-02-20T13:31:00.000
1
0
1
1
python,django,elasticsearch,django-haystack
21,909,665
2
true
0
0
I used Haystack in my last project. I checked my virtualenv and I have only 'pyelasticsearch==0.5'. Keep in mind that documentation can be outdated.
1
2
0
How can I install official elasticsearch binding for python instead of pyelasticsearch? Haystack documentation says: You’ll also need an Elasticsearch binding: elasticsearch-py (NOT pyes). Place elasticsearch somewhere on your PYTHONPATH (usually python setup.py install or pip install elasticsearch). But when I install elasticsearch with pip, haystack still asks for pyelasticsearch.
Install "elasticsearch" instead of "pyelasticsearch"
1.2
0
0
2,082
21,912,670
2014-02-20T15:44:00.000
1
0
1
0
python,matplotlib,operating-system
21,912,823
2
false
0
0
I would say, ensure that you're using the same backend, fonts, etc. by having identical .matplotlibrc files, and specify the dpi of your plots in your code.
2
1
0
It is easily verified that, depending on the version/operating system, plots produced with Python differ meaningfully in their appearance/resolution: how can that be solved?
Ambiguity at the time of plotting
0.099668
0
0
35
21,912,670
2014-02-20T15:44:00.000
0
0
1
0
python,matplotlib,operating-system
21,914,524
2
false
0
0
Beyond different .matplotlibrc files, there are two likely reasons for this. 1) Different fonts. For example, Arial is likely to be the default san serif on Windows, but it usually isn't available on Linux. This is the main reason why you'd see different results. The resolution shouldn't change, however. 2) The interactive backend is likely to be different on different OS-es. The displayed window (but not the saved .png, .pdf, etc) will appear very different depending on the interactive backend used. Which backends are available will depend on how matplotlib was built. TkAgg is a very common backend, but Tkinter isn't available by default on OSX (or rather, the version of Tkinter shipped with most OSX versions isn't compatible). Therefore, it's common to see the OSX interactive backend on OSX. Again, the second one mostly just affects the style of the interactive window that pops up when you call show. The contents of the window will be essentially identical. What exact differences are you seeing?
2
1
0
It is easily verified that, depending on the version/operating system, plots produced with Python differ meaningfully in their appearance/resolution: how can that be solved?
Ambiguity at the time of plotting
0
0
0
35
21,912,993
2014-02-20T15:57:00.000
0
0
0
0
python,django,sqlite,rest,orm
21,914,906
1
false
1
0
It depends on what your application is doing. If your REST application reads a piece of data from SQLite using the Django ORM and then the other app does a write, you can run into some interesting race conditions. To prevent that, it might make sense to have both these applications as Django apps in a single Django project.
1
0
0
I have a django app which provides a rest api using Django-rest-framework. The API is used by clients as expected, but I also have another process(on the same node) that uses Django ORM to read the app's database, which is sqlite3. Is it better architecture for the process to use the rest api to interact(only reads) with the app's database? Or is there a better, perhaps more efficient way than making a ton of HTTP requests from the same node? The problem with the ORM approach(besides the hacky nature) is that occasionally reads fail and must be retried. Also, I want to write to the app's db which would probably causes more sqlite concurrency issues.
SOA versus Django ORM with multiple processes
0
1
0
138
21,914,998
2014-02-20T17:17:00.000
3
0
1
0
python,list,recursion
21,915,140
2
false
0
0
Well, first, you couldn't have infinite list comprehensions, because list comprehensions actually construct real lists. Fitting the Fibonacci sequence into RAM would be tricky. You might instead ask for a recursive generator expression, because it wouldn't use infinite memory. But we already have generator functions for the use cases that would serve, and generator expressions (like list comprehensions) were designed for simple, common use cases, not to be swiss army knives of data-making.
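To illustrate the point about generator functions, a minimal Fibonacci generator covering the "infinite sequence" use case a recursive comprehension would be asked to serve:

```python
from itertools import islice

def fib():
    a, b = 0, 1
    while True:          # an "infinite" sequence, consumed lazily
        yield a
        a, b = b, a + b

print(list(islice(fib(), 10)))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```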
2
4
0
I think list comprehension is one of the most useful features of Python. I think it would be even more useful if Python allows the recursion inside list comprehension, for something like generation of Fibonacci numbers or prime numbers. I know that Python used to have locals()['_[1]'] for referencing the list that's being generated, but it has never been recommended and had been taken out. So, is there any reason that Python developers do not want users to use recursion in list comprehension?
Is there any reason why recursive list comprehension is prohibited (or not recommended) in Python?
0.291313
0
0
524
21,914,998
2014-02-20T17:17:00.000
3
0
1
0
python,list,recursion
21,915,387
2
false
0
0
I believe this is related to the general guideline that list comprehensions and generator expressions should only be used when they are more simple, clear and clean than the equivalent explicit loop, and this to the point that nested comprehensions, although supported, are generally discouraged from a stylistic point of view. A recursive comprehension would quickly cross that simplicity threshold. This also seems to fit the spirit of other, unrelated design decisions, like the lack of a form to declare generalized anonymous functions. You get lambda, which only supports one expression. If you need something more complicated than that, you have to define a regular function.
2
4
0
I think list comprehension is one of the most useful features of Python. I think it would be even more useful if Python allows the recursion inside list comprehension, for something like generation of Fibonacci numbers or prime numbers. I know that Python used to have locals()['_[1]'] for referencing the list that's being generated, but it has never been recommended and had been taken out. So, is there any reason that Python developers do not want users to use recursion in list comprehension?
Is there any reason why recursive list comprehension is prohibited (or not recommended) in Python?
0.291313
0
0
524
21,915,864
2014-02-20T17:58:00.000
0
0
1
0
python
21,916,124
3
false
0
0
StringIO has moved in Python 3. Try from io import StringIO. You also need to decide whether you want a StringIO or a BytesIO. However, it sounds as though you're trying to monkey-patch over sys.stdout in (something like) a unit test. I wouldn't recommend doing this in your tests unless you're sure you need to; it'll make for hard-to-maintain tests. I'd suggest that your code needs refactoring - consider changing your function to return a string, which clients can print (or write to a file, or display on a GUI, or...) at their leisure.
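A minimal sketch of capturing printed output on Python 3.3 by temporarily swapping sys.stdout for an io.StringIO (contextlib.redirect_stdout only arrived in 3.4, so this does it by hand):

```python
import io
import sys

def capture_output(func, *args, **kwargs):
    old_stdout = sys.stdout
    sys.stdout = buffer = io.StringIO()
    try:
        func(*args, **kwargs)
    finally:
        sys.stdout = old_stdout            # always restore, even on error
    return buffer.getvalue()

captured = capture_output(print, "hello")
print(repr(captured))                      # 'hello\n'
```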
1
2
0
I'm calling a function and trying to capture the output it prints out, but in 3.3 I don't have access to StringIO. Is there another method around this?
Capture what a Python function prints out in 3.3?
0
0
0
84
21,918,718
2014-02-20T20:23:00.000
1
0
1
0
python,matplotlib,plot,weather
21,919,317
3
false
0
0
Matplotlib xticks are your friend; they allow you to set where the ticks appear. As for date formatting, make sure you're using dateutil objects, and you'll be able to handle the formatting.
2
0
1
I want to plot weather data over the span of several days every half hour, but I only want to label the days at the start as a string in the format 'mm/dd/yy'. I want to leave the rest unmarked. I would also want to control where such markings are placed along the x axis, and control the range of the axis. I also want to plot multiple sets of measurements taken over different intervals on the same figure. Therefore being able to set the axis and plot the measurements for a given day would be best. Any suggestions on how to approach this with matplotlib?
How to label certain x values
0.066568
0
0
2,934
21,918,718
2014-02-20T20:23:00.000
2
0
1
0
python,matplotlib,plot,weather
21,919,748
3
false
0
0
You can use a DayLocator as in: plt.gca().xaxis.set_major_locator(dt.DayLocator()) And DateFormatter as in: plt.gca().xaxis.set_major_formatter(dt.DateFormatter("%d/%m/%Y")) Note: import matplotlib.dates as dt
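Putting that answer together into a runnable sketch with made-up half-hourly data: tick only at day boundaries and label them as mm/dd/yy, as the question asks.

```python
import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as dt    # same alias as in the answer above

start = datetime.datetime(2014, 2, 18)
times = [start + datetime.timedelta(minutes=30 * i) for i in range(3 * 48)]  # 3 days
values = [i % 48 for i in range(len(times))]      # made-up measurements

fig, ax = plt.subplots()
ax.plot(times, values)
ax.xaxis.set_major_locator(dt.DayLocator())                   # one major tick per day
ax.xaxis.set_major_formatter(dt.DateFormatter("%m/%d/%y"))    # label ticks as mm/dd/yy
plt.show()
```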
2
0
1
I want to plot weather data over the span of several days every half hour, but I only want to label the days at the start as a string in the format 'mm/dd/yy'. I want to leave the rest unmarked. I would also want to control where such markings are placed along the x axis, and control the range of the axis. I also want to plot multiple sets of measurements taken over different intervals on the same figure. Therefore being able to set the axis and plot the measurements for a given day would be best. Any suggestions on how to approach this with matplotlib?
How to label certain x values
0.132549
0
0
2,934
21,921,509
2014-02-20T22:56:00.000
1
0
0
0
python,sockets,network-programming
22,111,178
2
false
0
0
A firewall may be the explanation for this unexpected behaviour. Rather than assuming the remote firewall accepts the connection, using a timeout is the best option. Making a connection is normally a swift process, and within a network it won't take long, so give a sensible timeout so that you can tell the host is either down or dropping packets.
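A minimal sketch of that advice (address and timeout are illustrative): bound how long connect() may block before reporting failure.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5.0)                        # seconds to wait before giving up
try:
    s.connect(("198.51.100.7", 8080))    # illustrative host/port
except (socket.timeout, socket.error) as exc:
    print("connect failed: %s" % exc)
else:
    print("connected")
    s.close()
```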
2
1
0
I'm learning to use sockets in python and something weird is happening. I call socket.connect in a try block, and typically it either completes and I have a new socket connection, or it raises the exception. Sometimes, however, it just hangs. I don't understand why sometimes it returns (even without connecting!) and other times it just hangs. What makes it hang? I am using blocking sockets (non-blocking don't seem to work for connect...), so I've added a timeout, but I'd prefer connect to finish without needing to timeout. Perhaps, when it doesn't hang, it receives a response that tells it the requested ip/port is not available, and when it does hang there is just no response from the other end? I'm on OSX10.8 using python2.7
Python socket.connect hangs sometimes
0.099668
0
1
2,733
21,921,509
2014-02-20T22:56:00.000
3
0
0
0
python,sockets,network-programming
21,921,616
2
true
0
0
When connect() hangs it is usually because you connect to an address that is behind a firewall and the firewall just drops your packets with no response. It keeps trying to connect for around 2 minutes on Linux and then times out and return an error.
2
1
0
I'm learning to use sockets in python and something weird is happening. I call socket.connect in a try block, and typically it either completes and I have a new socket connection, or it raises the exception. Sometimes, however, it just hangs. I don't understand why sometimes it returns (even without connecting!) and other times it just hangs. What makes it hang? I am using blocking sockets (non-blocking don't seem to work for connect...), so I've added a timeout, but I'd prefer connect to finish without needing to timeout. Perhaps, when it doesn't hang, it receives a response that tells it the requested ip/port is not available, and when it does hang there is just no response from the other end? I'm on OSX10.8 using python2.7
Python socket.connect hangs sometimes
1.2
0
1
2,733
21,923,046
2014-02-21T00:56:00.000
0
1
0
1
python
21,923,164
4
false
0
0
If you can put your own programs or scripts on the remote machine there are a couple of things you can do: Write a script on the remote machine that outputs just what you want, and execute that over ssh. Use ssh to tunnel a port on the other machine and communicate with a server on the remote machine which will respond to requests for information with the data you want over a socket.
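A hedged sketch of the first suggestion using paramiko, which the asker already has: run a short remote command that prints exactly one value and parse it locally (host, username and command are illustrative).

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server.example.com", username="admin")

stdin, stdout, stderr = client.exec_command("nproc")   # prints just the CPU count
num_cpus = int(stdout.read().strip())
client.close()

print(num_cpus)
```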
1
1
0
I would like to be able to gather the values for number of CPUs on a server and stuff like storage space etc and assign them to local variables in a python script. I have paramiko set up, so I can SSH to remote Linux nodes and run arbitrary commands on them, and then have the output returned to the script. However, many commands are very verbose "such as df -h", when all I want to assign is a single integer or value. For the case of number of CPUs, there is Python functionality such as through the psutil module to get this value. Such as 'psutil.NUM_CPUS' which returns an integer. However, while I can run this locally, I can't exactly execute it on remote nodes as they don't have the python environment configured. I am wondering how common it is to manually parse output of linux commands (such as df -h etc) and then grab an integer from it (similar to how bash has a "cut" function). Or whether it is somehow better to set up an environment on each remote server (or a better way).
Reading values over ssh in python
0
0
0
1,205
21,923,479
2014-02-21T01:37:00.000
1
0
1
1
python,macos,python-2.7,python-3.x
21,923,496
1
true
0
0
In general, no, you can't do that easily. Just bite the bullet and install new copies of the modules you need for your Python 3 installation. Remember to first install a new copy of pip (or, if you must, easy_install) using your Python 3.3 and use it to install the modules you need for Python 3. One of the reasons you can't is that for many packages that support both Python 2 and 3 by using 2to3 require the source distribution to do so. The resultant Python 2 installed distribution will not necessarily have everything needed to produce a new Python 3 installation.
1
0
0
I would like to use installed Python 2 modules in Python 3. One step would be to add to the PythonPath3 the directories where the Python2 modules are installed. Of course this would work only if the modules are coded for Python3 compatibility. Is there a way that I can import modules in Python3 and have them automatically converted (using 2to3) to usable Python3 code? Specs: Mac OS 10.9.1 Python2 = python 2.7.6 Python3 = python 3.3.3
Use python 2 module in python 3 in mac OS
1.2
0
0
333
21,929,329
2014-02-21T08:40:00.000
0
0
0
1
python-3.x,operating-system,zip
21,941,266
1
false
0
0
You could check using the zipfile module. It will check for everything that is needed to run. If the OS version of zip is missing, the import of the module will either fail (because it's missing) or the module will work without the OS version (which should be fine too). I can't think of any other easy, portable approach. On UNIX systems you could just check whether "zip" is found in the PATH, but on Windows it's not guaranteed to be in the PATH.
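A small sketch combining both checks mentioned above: try to import the stdlib zipfile module, and separately look for a zip executable on PATH (shutil.which needs Python 3.3+).

```python
import shutil

try:
    import zipfile                      # stdlib support for reading/creating archives
    has_zip_module = True
except ImportError:
    has_zip_module = False

has_zip_binary = shutil.which("zip") is not None    # external tool on PATH
print(has_zip_module, has_zip_binary)
```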
1
0
0
I try to work through Byte of Python3 and there is an example script for backing up folders and creating a zip file. I would like the script to check if zip is available within the os (Windows, Linux, Mac). Is there a way you can do this? Thanks, Mark
Checking with Python3 if ZIP is installed on OS (Terminal)
0
0
0
34
21,933,555
2014-02-21T11:43:00.000
0
1
1
0
python,import
21,934,129
3
false
0
0
Firstly, putting imports inside functions goes against PEP 8. Calling import is an expensive call EVEN if the module is already loaded, so if your function is going to be called many times, the repeated import calls will outweigh any gain. Also, when you call "import test", Python does this: dataFile = __import__('test'). The only downside of imports at the top of the file is that the namespace gets polluted very fast depending on the complexity of the file, but if your file is that complex it's a sign of bad design.
1
0
0
I am writing a Python module and I import many different modules. I am a bit confused about whether I should import all the necessary modules at the top of the file or only when needed, and I would like to know the implications of both. I come from a C++ background, so I am really thrilled with this feature and don't see any reason not to use __import__(), importing the modules only when needed inside my function. Kindly throw some light on this.
import module_name Vs __import__('module_name')
0
0
0
2,473
21,933,610
2014-02-21T11:46:00.000
0
0
1
0
python,rdf,owl,ontology,rdflib
28,051,822
1
false
0
0
If I have a rdflib.Uriref that point to a resource that I do not need any more. How can i remove it safely using rdflib. An RDF graph is just a collection of triples. It doesn't contain any resources or nodes independent of those triples. If for example I just remove all the triples that refer to it may be a could broke something like a Bnode that is a list. Removing all the triples that use a URI resource is the correct way to "remove it from the graph". There's no way for this to "break" the graph. Whether it invalidates any structure in the graph is another question, but one that you'd have to answer based on the structure that you're putting in the graph. You'd need to check in advance whether the resource appears in any triples that shouldn't be removed.
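A minimal rdflib sketch of that advice, with an illustrative input file and resource URI: remove every triple in which the resource appears in any position.

```python
from rdflib import Graph, URIRef

g = Graph()
g.parse("data.ttl", format="turtle")    # hypothetical input file

node = URIRef("http://example.org/thing")
g.remove((node, None, None))    # triples where it is the subject
g.remove((None, node, None))    # triples where it is the predicate
g.remove((None, None, node))    # triples where it is the object
```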
1
0
0
If I have an rdflib.URIRef that points to a resource that I do not need any more, how can I remove it safely using rdflib? If, for example, I just remove all the triples that refer to it, I might break something such as a BNode that is part of a list.
Remove Safely Reference in rdflib
0
0
0
281
21,936,158
2014-02-21T13:45:00.000
4
0
1
0
python,version-control,virtualenv,ignore,pyc
21,936,238
1
true
1
0
That is fine, just remove them! Python auto-generates them from the corresponding .py file any time it wants to, so you needn't worry about simply deleting them all from your repository. A couple of related tips - if you don't want them generated at all on your local dev machine, set the environment variable PYTHONDONTWRITEBYTECODE=1. Python 3.2 fixed the annoyance of source folders cluttered with .pyc files with a new __pycache__ subfolder
1
2
0
I've created a virtualenv for my project and checked it into source control. I've installed a few projects into the virtualenv with pip: django, south, and pymysql. After the fact I realized that I had not set up source control for ignoring .pyc files. Could there be any subtle problems in simply removing all .pyc files from my project's repository and then putting in place the appropriate file ignore rules? Or is removing a .pyc file always a safe thing to do?
clean up .pyc files in virtualenv stored in souce repository after the fact?
1.2
0
0
1,403
21,937,072
2014-02-21T14:25:00.000
0
1
0
0
python,html,django
21,937,650
3
false
1
0
Why don't you just put a simple form on the index page when the user is not authenticated?
1
3
0
Here is the deal, how do I put the simplest password protection on an entire site. I simply want to open the site to beta testing but don't really care about elegance - just a dirty way of giving test users a username and password without recourse to anything complex and ideally i'd like to not to have to install any code or third party solutions. I'm trying to keep this simple.
Whats the smartest way to password protect an entire Django site for testing purposes
0
0
0
4,949
21,941,030
2014-02-21T17:23:00.000
0
0
0
1
python,google-app-engine,mapreduce,task-queue
21,962,823
2
false
1
0
First, writes to the datastore take milliseconds. By the time your user hits the refresh button (or whatever you offer), the data will be as "real-time" as it gets. Typically, developers become concerned with real-time when there is a synchronization/congestion issue, i.e. each user can update something (e.g. bid on an item), and all users have to get the same data (the highest bid) in real time. In your case, what's the harm if a user gets the number of check-ins which is 1 second old? Second, data in Memcache can be lost at any moment. In your proposed solution (update the datastore every 5 minutes), you risk losing all data for the 5 min period. I would rather use Memcache in the opposite direction: read data from datastore, put it in Memcache with 60 seconds (or more) expiration, serve all users from Memcache, then refresh it. This will minimize your reads. I would do it, of course, unless your users absolutely must know how many checkins happened in the last 60 seconds. The real question for you is how to model your data to optimize writes. If you don't want to lose data, you will have to record every checkin in datastore. You can save by making sure you don't have unnecessary indexed fields, separate out frequently updated fields from the rest, etc.
1
1
0
I am trying to design an app that uses Google AppEngine to store/process/query data that is then served up to mobile devices via Cloud Endpoints API in as real time as possible. It is straight forward enough solution, however I am struggling to get the right balance between, performance, cost and latency on AppEngine. Scenario (analogy) is a user checks-in (many times per day from different locations, cities, countries), and we would like to allow the user to query all the data via their device and provide as up to date information as possible. Such as: The number of check-ins over the last: 24 hours 1 week 1 month All time Where is the most checked in place/city/country over the same time periods Where is the least checked in place over the same time periods Other similar querying reports We can use Memcache to store the most recent checkins, pushing to the Datastore every 5 minutes, but this may not scale very well and is not robust! Use a Cron job to run the Task Queue/Map Reduce to get the aggregates, averages for each location every 30 mins and update the Datastore. The challenge is to use as little read/writes over the datastore because the last "24 hours" data is changing every 5 mins, and hence so is the last weeks data, last months data and so on. The data has to be dynamic to some degree, so it is not fixed points in time, they are always changing - here in lies the issue! It is not a problem to set this up, but to set it up in an efficient manner, balancing performance/latency for the user and cost/quotas for us is not so easy! The simple solution would be to use SQL, and run date range queries but this will not scale very well. We could eventually use BigTable & BigQuery for the "All time" time period querying, but in order to give the users as real-time as possible data via the API for the other time periods is proving quite the challenge! Any suggestions of AppEngine architecture/approaches would be seriously welcomed. Many thanks.
AppEngine real time querying - cost, performance, latency balancing act and quotas
0
1
0
173
21,941,503
2014-02-21T17:45:00.000
2
0
0
0
python,django,django-admin
21,943,591
5
false
1
0
You can use signals. Then use os.remove() to clean up related files on delete. This way your file system always reflects your db. No need for hitting some button.
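A hedged sketch of that signals approach; the app, model and image field names are illustrative.

```python
import os

from django.db.models.signals import post_delete
from django.dispatch import receiver

from myapp.models import Product     # hypothetical model with image = models.ImageField(...)

@receiver(post_delete, sender=Product)
def delete_product_image(sender, instance, **kwargs):
    # Remove the file from disk once the row is gone.
    if instance.image and os.path.isfile(instance.image.path):
        os.remove(instance.image.path)
```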
1
37
0
I have a django project in which the admins are able to upload media. As items sell, they are deleted from the site, thus removing their entry in the MySQL database. The images associated with the item, however, remain on the file system. This isn't necessarily bad behavior - I don't mind keeping files around in case a deletion was an accident. The problem I foresee is two years from now, when storage space is limited because of a media folder bloated with old product images. Does anyone know of a systematic/programmatic way to sort through ALL the images and compare them to the relevant MySQL fields, deleting any image which DOESN'T have a match from the filesystem? In a perfect world I'm imagining a button in the django-admin like "Clean up unused media" which executes a python script capable of this behavior. I'll be sharing whatever my eventual solution is here, but what I'm looking for right now is anyone who has ideas, knows resources, or has done this at some point themselves.
Django delete unused media files
0.07983
0
0
19,573
21,942,110
2014-02-21T18:15:00.000
1
0
0
0
python,linux,django,heroku
21,942,195
1
true
1
0
You create it in the project root.
1
0
0
I am sorry if this is too basic a question, but I spent the whole morning unsuccessfully figuring it out. I want to use the Heroku Scheduler for a Django app, and as per their documentation, I am supposed to put the python file I want to be executed by the Scheduler in the bin/ folder on Heroku. Now on my local copy of the project, where do I create the folder bin w.r.t. the project root?
Heroku: Putting a file in the bin folder
1.2
0
0
301
21,944,428
2014-02-21T20:23:00.000
1
0
1
1
python,pylint
21,945,800
1
false
0
0
set PATH=%PATH%;C:\python27\scripts is apparently what I needed to make it work... thanks for the path direction.
1
3
0
I've just installed all the dependencies (astroid and logilab-common) and pylint, ran the tests for pylint and they all passed, but I just can't get pylint to work... I keep getting 'pylint' is not recognized as an internal or external command, operable program or batch file. while running in the command prompt. I'm not sure what I'm doing wrong and I can't seem to find any explanation anywhere.
Not being able to run pylint using windows (7) command prompt
0.197375
0
0
4,751
21,947,487
2014-02-21T23:49:00.000
-1
0
1
0
python,string,pandas,extract
21,947,575
2
false
0
0
This will grab the number 10 and put it in a variable called yards. x = "(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham)." yards = (x.split("for ")[-1]).split(" yards")[0]
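Since the question is about a pandas column, a hedged sketch using str.extract is shown below; the column names are assumptions, and the expand keyword needs a reasonably recent pandas (0.18+):

    import pandas as pd

    df = pd.DataFrame({"description": [
        "(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham)."
    ]})
    # Pull the number that precedes the word "yards" into a new column
    df["yards"] = df["description"].str.extract(r"for (-?\d+) yards", expand=False).astype(float)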
1
4
1
I have an NFL dataset with a 'description' column with details about the play. Each successful pass and run play has a string that's structured like: "(12:25) (No Huddle Shotgun) P.Manning pass short left to W.Welker pushed ob at DEN 34 for 10 yards (C.Graham)." How do I locate/extract the number after "for" in the string, and place it in a new column?
Extract a certain part of a string after a key phrase using pandas?
-0.099668
0
0
4,497
21,947,723
2014-02-22T00:11:00.000
-1
0
0
0
python,session,flask,flask-login
21,947,739
4
false
1
0
Nope. That's literally impossible over pure HTTP. HTTP is a stateless protocol, which means that in order to preserve state, the client has to be able to identify itself on every request. What you might be able to do is HTTP Basic Authentication over HTTPS, then access that Authentication on the server side.
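A rough sketch of the Basic-Auth-over-HTTPS idea in Flask; the credential check is a placeholder, not a recommendation, and real code would verify against a user store:

    from flask import Flask, request, Response

    app = Flask(__name__)

    def credentials_ok(username, password):
        # Placeholder check only; replace with a real lookup
        return username == "admin" and password == "secret"

    @app.route("/protected")
    def protected():
        auth = request.authorization  # parsed from the Authorization header
        if auth is None or not credentials_ok(auth.username, auth.password):
            # Ask the browser to prompt for credentials
            return Response("Login required", 401,
                            {"WWW-Authenticate": 'Basic realm="Login"'})
        return "Hello, %s" % auth.username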
2
4
0
I'm working on a Flask app to be used by a medical client. Their IT dept is so uptight about security that they disable cookies and scripting network-wide. Luckily, wtf-forms was able to address one of these issues with server-side validation of form input. However, I'm getting hung up on the login system. I've implemented flask-login, but this apparently requires client-side data, as I'm unable to log in when testing in a browser with these features disabled. Is there any way to create a login with zero client-side data? Thanks for the help.
Possible to make flask login system which doesn't use client-side session/cookie?
-0.049958
0
0
1,749
21,947,723
2014-02-22T00:11:00.000
1
0
0
0
python,session,flask,flask-login
21,947,780
4
false
1
0
Given the restriction of having zero client-side data, you could pass a session token in the GET parameters of every link rendered in the HTML page. Or you could create only POST views with a hidden token input (which may indeed be more secure).
2
4
0
I'm working on a Flask app to be used by a medical client. Their IT dept is so uptight about security that they disable cookies and scripting network-wide. Luckily, wtf-forms was able to address one of these issues with server-side validation of form input. However, I'm getting hung up on the login system. I've implemented flask-login, but this apparently requires client-side data, as I'm unable to log in when testing in a browser with these features disabled. Is there any way to create a login with zero client-side data? Thanks for the help.
Possible to make flask login system which doesn't use client-side session/cookie?
0.049958
0
0
1,749
21,950,193
2014-02-22T05:32:00.000
1
0
0
1
python,macos,postgresql,openerp,openerp-7
22,423,411
2
true
0
0
Install PostgreSQL Create a user for OpenERP Install all dependencies for Python, using brew or MacPorts Download OpenERP and extract it Run the following command: cd openerp; python openerp-server
1
2
0
I want to install OpenERP v7 on Mac OS X. How can I install it? I tried to install it with brew install postgresql. I succeeded in installing postgresql, but when I create the user with the following command, createuser openerp, I got an error like createuser: command not found. I also got an error when I type psql.
How to install OpenERP on Mac OS X Mavericks?
1.2
0
0
7,748
21,951,190
2014-02-22T07:22:00.000
3
1
1
0
python
21,951,214
1
true
0
0
The best way would be to create something called a settings.py file, which houses all your shared variables of importance. This approach is followed by the django team for their web framework called django, which creates a settings.py file to house all the data that needs to be shared, for example database logins and static file roots.
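A minimal illustration of that pattern; the file, variable and function names are made up:

    # settings.py -- the single place where shared values live
    DB_HOST = "localhost"
    DB_NAME = "appdb"
    DEBUG = True

    # elsewhere, e.g. worker.py
    import settings

    def connect():
        # Every module reads the same values from the shared settings module
        print("connecting to %s/%s" % (settings.DB_HOST, settings.DB_NAME))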
1
1
0
In php, people often call a bootstrap file to set variables used throughout a program. I have a python program that calls methods from different modules. I want those methods from different modules to share some variables. Can I set these variables up in something like a boostrap.py? Or is this not very "pythonic" because a module should contain all of the variables it needs?
Is there an equivalent of a bootstrap.php for python?
1.2
0
0
113
21,952,650
2014-02-22T09:56:00.000
0
1
0
1
python,printing,background,raspberry-pi
21,953,931
1
false
0
0
1) You should never run a script with sudo. You could potentially destroy your system. 2) Once your SSH session is closed all processes go with it. That is unless you use nohup or screen as you have found.
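For the specific goal of keeping the print output, one common approach (a sketch, not a prescription) is to redirect it to a file when starting the script, e.g. nohup python scriptfile.py > output.log 2>&1 &, and then read it after reconnecting with tail -f output.log; the exact flags and file name are assumptions.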
1
0
0
I have learned online that there are several ways of running a python program in the background: sudo python scriptfile.py& sudo python scriptfile.py, then Control+Z, then bg Using nohup Using screen However, I would like to know whether, when using either of the first two options, I can recover what the python program is printing via its print commands after I close and reopen SSH. So I run python and I start to see the output of my print commands, but if I close the SSH session, even though the program is still running, I need to restart it in order to see my print statements again.
Raspberry Pi (python) Run in background and reopen print output
0
0
0
655
21,957,966
2014-02-22T17:45:00.000
0
0
1
0
python,multithreading
21,958,016
3
false
0
0
You could have the thread function set a boolean flag on startup, and then check that flag.
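A small sketch of that flag idea using threading.Event, which is safe to check from other threads (the worker body is a stand-in):

    import threading
    import time

    started = threading.Event()

    def worker():
        started.set()          # flag: the thread has actually begun running
        time.sleep(1)          # stand-in for real work

    t = threading.Thread(target=worker)
    print(started.is_set())    # False: not started yet
    t.start()
    started.wait(timeout=5)    # block until the thread signals it is running
    print(started.is_set())    # True once the worker has entered its function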
1
5
0
How to determine if a python thread has been started? There is a method is_alive() but this is true before and while a thread is running.
Determine if thread has been started
0
0
0
4,522
21,962,475
2014-02-23T00:15:00.000
0
0
0
1
python,django,.htaccess,subdomain,virtualenv
21,969,799
2
true
1
0
The issue was solved by contacting the support service and asking them to open the port 8000 for me.
1
0
0
I just installed Django and created a project and an app following the basic tutorial part 1. I created a virtualenv since the CentOS default python version is 2.4.3, and I also created a subdomain to work on this for the development phase. When I try to access dev.domain.com/admin/ or dev.domain.com/ I get a 404 error; it's like Django is not even there. When I run the server I get a normal response: (python2.7env)-bash-3.2# python manage.py runserver Validating models... 0 errors found February 22, 2014 - 23:54:07 Django version 1.6.2, using settings 'ct_project.settings' Starting development server at http://127.0.0.1:8000/ Any ideas what I'm missing? EDIT: after starting the server correctly (with the right IP) I tried again and as a result I got the browser hanging. Then I tried an online port scanner and found out that port 8000 is not responding. Any ideas what I can try next? Thanks
Error 404 when trying to access a Django app installed in a subdomain
1.2
0
0
718
21,963,270
2014-02-23T02:09:00.000
0
1
0
1
python,output
49,316,597
8
false
0
0
You could also do this by opening cmd at the path of the folder where you have the python script saved, then running name.py > filename.txt. It worked for me on Windows 10.
2
13
0
I'm executing a .py file, which spits out a given string. This command works fine: execfile ('file.py') But I want the output (in addition to it being shown in the shell) written into a text file. I tried this, but it's not working :( execfile ('file.py') > ('output.txt') All I get is this: tugsjs6555 False I guess "False" is referring to the output file not being successfully written :( Thanks for your help
How to execute a python script and write output to txt file?
0
0
0
92,210
21,963,270
2014-02-23T02:09:00.000
2
1
0
1
python,output
33,993,200
8
false
0
0
The simplest way to run a script and get the output to a text file is by typing the below in the terminal: PCname:~/Path/WorkFolderName$ python scriptname.py>output.txt *Make sure you have created output.txt in the work folder before executing the command.
2
13
0
I'm executing a .py file, which spits out a given string. This command works fine: execfile ('file.py') But I want the output (in addition to it being shown in the shell) written into a text file. I tried this, but it's not working :( execfile ('file.py') > ('output.txt') All I get is this: tugsjs6555 False I guess "False" is referring to the output file not being successfully written :( Thanks for your help
How to execute a python script and write output to txt file?
0.049958
0
0
92,210
21,967,398
2014-02-23T11:08:00.000
27
0
0
1
python,celery,luigi
25,704,688
2
true
1
0
Update: As Erik pointed out, Celery is the better choice for this case. Celery: What is Celery? Celery is a simple, flexible and reliable distributed system to process vast amounts of messages, while providing operations with the tools required to maintain such a system. Why use Celery? It is simple to use & has lots of features. django-celery: provides good integration with Django. flower: real-time monitor and web admin for the Celery distributed task queue. Active & large community (based on Stack Overflow activity, PyVideos, tutorials, blog posts). Luigi: What is Luigi? Luigi (Spotify's recently open-sourced Python framework) is a Python package that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, handling failures, command line integration, and much more. Why use Luigi? Built-in support for Hadoop. Generic enough to be used for everything from simple task execution and monitoring on a local workstation, to launching huge chains of processing tasks that can run in synchronization between many machines over the span of several days. Luigi's visualiser: gives a nice visual overview of the dependency graph of a workflow. Conclusion: If you need a tool just to simply schedule tasks & run them, you can use Celery. If you are dealing with big data & heavy processing, you can go for Luigi.
2
37
0
I am using django as a web framework. I need a workflow engine that can do synchronous as well as asynchronous(batch tasks) chain of tasks. I found celery and luigi as batch processing workflow. My first question is what is the difference between these two modules. Luigi allows us to rerun failed chain of task and only failed sub-tasks get re-executed. What about celery: if we rerun the chain (after fixing failed sub-task code), will it rerun the already succeed sub-tasks? Suppose I have two sub-tasks. The first one creates some files and the second one reads those files. When I put these into chain in celery, the whole chain fails due to buggy code in second task. What happens when I rerun the chain after fixing the code in second task? Will the first task try to recreate those files?
Python based asynchronous workflow modules : What is difference between celery workflow and luigi workflow?
1.2
0
0
8,450
21,967,398
2014-02-23T11:08:00.000
45
0
0
1
python,celery,luigi
34,112,320
2
false
1
0
(I'm the author of Luigi.) Luigi is not meant to be a synchronous low-latency framework. It's meant for large batch processes that run for hours or days. So I think for your use case, Celery might actually be slightly better.
2
37
0
I am using django as a web framework. I need a workflow engine that can do synchronous as well as asynchronous(batch tasks) chain of tasks. I found celery and luigi as batch processing workflow. My first question is what is the difference between these two modules. Luigi allows us to rerun failed chain of task and only failed sub-tasks get re-executed. What about celery: if we rerun the chain (after fixing failed sub-task code), will it rerun the already succeed sub-tasks? Suppose I have two sub-tasks. The first one creates some files and the second one reads those files. When I put these into chain in celery, the whole chain fails due to buggy code in second task. What happens when I rerun the chain after fixing the code in second task? Will the first task try to recreate those files?
Python based asynchronous workflow modules : What is difference between celery workflow and luigi workflow?
1
0
0
8,450
21,967,466
2014-02-23T11:14:00.000
0
0
1
0
python,dsl,xtext
22,065,084
1
false
0
0
Well, I think the answer is quite simple: I can generate the code in an XML or pyconf format and read it from Python :) thank you anyway
1
0
0
I am new to the DSL area and I am developing a DSL language. I will need to provide an editor to write code in that language, which makes Xtext a very good option. However, some of my libraries are in Python and I need to "run" the DSL in Python. Any idea how to integrate them? The perfect scenario would be: Xtext -> Pass the Tokens to Python -> Semantics in Python thank you
Xtext integration with Python
0
0
0
780
21,969,258
2014-02-23T14:08:00.000
1
0
1
0
python,matlab,ipython
21,969,604
3
false
0
0
I guess you'll have to reimport your module on every code change. And you could use from my_module import * to avoid typing the module name before every function call, though this construction works slowly.
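To illustrate the re-import point, here is a sketch of reloading an edited module inside an IPython session; the module and function names are hypothetical, reload is a builtin on Python 2 and lives in importlib on Python 3.4+:

    import my_module            # hypothetical file my_module.py on the path
    my_module.some_function()

    # ...edit my_module.py in Sublime, then pick up the changes:
    try:
        from importlib import reload   # Python 3.4+
    except ImportError:
        pass                           # Python 2: reload is already a builtin
    reload(my_module)
    my_module.some_function()          # now runs the new code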
1
4
0
If you are familiar with Matlab, you know that you are able to use any defined function from its definition file if it is in the workspace path; you do not need to call or import it. Is there any mechanism in IPython that mimics that behaviour of Matlab? My current workflow is to write the code in Sublime Text and paste it into IPython (seems stupid). Would you suggest any other way to achieve this efficiently? I am too lazy to do so, but writing some periodic auto-import code in the startup file of IPython might work. Maybe the IPython curators would consider this.
How can I use python code file as you type in ipython console?
0.066568
0
0
291
21,970,771
2014-02-23T16:12:00.000
2
0
1
0
python,django,python-2.7,ide,pycharm
27,070,676
2
false
1
0
Another option would be to place the libraries into a separate project (or go even further and place each library in its own project) and then open this project/these projects side-by-side with the main project. This way you have a clear separation between the main project and the libraries used. This comes in handy when you work on another project using some of the same libraries, as then you only need to open the already existing project containing the libraries and you are done.
1
13
0
I have an issue where I am developing a Django project which includes other libraries we are also developing. My current structure is as follows: Main Project App1 App2 Libraries Library 1 Library 2 All libraries have their own setup scripts and are in separate git repositories, and we are adding them in PyCharm to the PYTHONPATH and referencing them simply by their name. This works well, but they are not in my current project, which means no refactoring (renaming, moving etc...) and I have to use external search to find my classes from the libraries. How do I set some libraries as project-related to make them viewable and refactorable like the currently set project?
PyCharm include and modify External library in project
0.197375
0
0
15,029
21,973,249
2014-02-23T19:19:00.000
0
1
0
0
python,matplotlib,gnuplot,swig
21,977,146
2
false
0
0
You should start by reading the first few chapters of the SWIG manual and building some of its example projects for Python. The distribution has many examples that illustrate the different capabilities of SWIG, and the make files are already built, so that is one less thing to learn.
1
0
0
I want to access some functions from a large C project from Python. It seems to me that SWIG is the way to go. I'm not very used to programming in C and my experience with "make" is mostly from downloading source tars. The functions I want to access reside in a large C project (Gnuplot) and I have no idea how to use SWIG on such a large number of source files. The functions I want to access are all in a single C file but there are many recursive includes. I would like some suggestions on how to get started. What I want to access: term/emf.trm Reason: missing support for symbols and LaTeX in the EMF backend to matplotlib (this backend has even been removed from matplotlib). I'm stuck with an old version of Word at work and there is no way to get plots into this program that are suitable for my purpose without EMF. I could use Gnuplot instead of matplotlib but many of the plots are specialized for a certain purpose and matplotlib is much easier to use than Gnuplot. Any suggestions would be much appreciated.
Using SWIG to interface large C-project with Python
0
0
0
728
21,974,997
2014-02-23T21:45:00.000
4
0
1
0
python,python-2.7,python-3.x,pip
22,714,674
2
true
0
0
Try these two solutions: 1) Remove python3.3 from the path variable and try installing the library using pip now, so that pip from python27 can install things. 2) If this doesn't work, then use C:\python27\Scripts\pip.exe install
1
0
0
I am using pip to pull down libraries but didn't realize the key one is only for 2.7. So now I am working in the 2.7 directory but pip is still installing libs in 3.3, so PyCharm keeps saying the lib is missing. I have the PATH var set (this is, gasp, on Windows 8) so that Python 2.7 comes first, but I think the python exe isn't looking in the place where I had pip install things. Maybe there is a setting in pip that will install it elsewhere now? Any hints on how to make this work would be great. Maybe I just need to start over w/o Python 3.3? Thank you for your time!
Is it practical to have both Python 2.7 and 3.3 installed at the same time?
1.2
0
0
739
21,976,383
2014-02-23T23:49:00.000
11
0
0
0
python,django,shell,sqlite
23,184,956
4
true
1
0
I met the same problem today and fixed it. I think you missed some commands in tutorial 1; just do the following: ./python manage.py makemigrations polls python manage.py sql polls ./python manage.py syncdb That fixes it and creates the polls table, and you can see the table created. You should read about the "manage.py makemigrations" command.
1
5
0
Going through Django tutorial 1 using Python 2.7 and can't seem to resolve this error: OperationalError: no such table: polls_poll This happens the moment I enter Poll.objects.all() into the shell. Things I've already tried based on research through the net: 1) Ensured that 'polls' is listed under INSTALLED_APPS in settings.py Note: I've seen lots of suggestions inserting 'mysite.polls' instead of 'polls' into INSTALLED_APPS but this gives the following error: ImportError: cannot import name 'polls' from 'mysite' 2) Run python manage.py syncdb . This creates my db.sqlite3 file successfully and seemingly without issue in my mysite folder. 3) Finally, when I run python manage.py shell, the shell runs smoothly, however I do get some weird Runtime Warning when it starts and wonder if the polls_poll error is connected: \django\db\backends\sqlite3\base.py:63: RuntimeWarning: SQLite received a naive datetime (2014-02-03 17:32:24.392000) while time zone support is active. Any help would be appreciated.
Django Error: OperationalError: no such table: polls_poll
1.2
1
0
13,012
21,977,987
2014-02-24T02:41:00.000
2
0
1
1
python
21,978,072
2
true
0
0
There are plenty of ways someone can get your program, even if you remove the USB drive. They can install a program that triggers when a USB stick is inserted, search the stick for .py files, and copies them to disk. If the Python installation you're using is on the disk instead of the USB drive, they can replace the Python executable with a wrapper that saves copies of any file the Python interpreter opens. Your program is going to go into RAM, and depending on what it does and what else is using the machine, it may get swapped to disk. An attacker may be able to read your program out of RAM or reconstruct it from the swap file.
2
0
0
When I run a program from a USB memory stick and remove the USB memory, the program still goes on running (I mean without really copying the program onto the Windows PC). However, does the program make a copy of itself inside Windows in any hidden location or temporary folder while it is being run by the Python IDLE? From where does the Python IDLE receive the code it keeps running after removing the USB memory? I am going to run the python program on a public shared PC, so I do not want anyone to find out my code; I just want to run it and get the result the next day. Can someone get my code even if I remove the USB memory?
Running a Python Program Hiding the Source
1.2
0
0
121
21,977,987
2014-02-24T02:41:00.000
0
0
1
1
python
21,978,357
2
false
0
0
It sounds like you are doing something you probably shouldn't be doing. Depending on how much people want your code they could go as far as physically freezing the ram and doing a forensic IT analysis. In short, you can't prevent code cloning on a machine you don't administer.
2
0
0
When I run a program from a USB memory stick and remove the USB memory, the program still goes on running (I mean without really copying the program onto the Windows PC). However, does the program make a copy of itself inside Windows in any hidden location or temporary folder while it is being run by the Python IDLE? From where does the Python IDLE receive the code it keeps running after removing the USB memory? I am going to run the python program on a public shared PC, so I do not want anyone to find out my code; I just want to run it and get the result the next day. Can someone get my code even if I remove the USB memory?
Running a Python Program Hiding the Source
0
0
0
121
21,979,038
2014-02-24T04:43:00.000
0
0
0
1
python,google-app-engine,cron,leaderboard
21,979,231
3
false
1
0
Whether this is simpler or not is debatable. I have assumed that ranking is not just a matter of ordering an accumulation of points, in which case that's just a simple query; here I assume ranking involves other factors rather than just the current score. I would consider writing out an Event record for each update of points for a User (effectively a queue). Tasks run collecting all the current Event records. In addition you maintain a set of records representing the top of the leaderboard. Adjust this set of records based on the incoming event records. Discard event records once processed. This will limit your reads and writes to only active events in a small time window. The leaderboard could probably be a single entity, fetched by key and cached. I assume you may have different ranking schemes, like current active rank (for the current 7 days) vs all-time ranks (i.e. players not playing for a while won't have a good current rank). As the players view their rank, you can do that with two simple queries, Players.query(Players.score > somescore).fetch(5) and Players.query(Players.score < somescore).fetch(5); this shouldn't cost too much and you could cache them.
2
3
0
I want to build a backend for a mobile game that includes a "real-time" global leaderboard for all players, for events that last a certain number of days, using Google App Engine (Python). A typical usage would be as follows: - User starts and finishes a combat, acquiring points (2-5 mins for a combat) - Points are accumulated in the player's account for the duration of the event. - Player can check the leaderboard anytime. - Leaderboard will return top 10 players, along with 5 players just above and below the player's score. Now, there is no real constraint on the real-time aspect, the board could be updated every 30 seconds, to every hour. I would like for it to be as "fast" as possible, without costing too much. Since I'm not very familiar with GAE, this is the solution I've thought of: Each Player entity has a event_points attribute Using a Cron job, at a regular interval, a query is made to the datastore for all players whose score is not zero. The query is sorted. The cron job then iterates through the query results, writing back the rank in each Player entity. When I think of this solution, it feels very "brute force". The problem with this solution lies with the cost of reads and writes for all entities. If we end up with 50K active users, this would mean a sorted query of 50K+1 reads, and 50k+1 writes at regular intervals, which could be very expensive (depending on the interval) I know that memcache can be a way to prevent some reads and some writes, but if some entities are not in memcache, does it make sense to query it at all? Also, I've read that memcache can be flushed at any time anyway, so unless there is a way to "back it up" cheaply, it seems like a dangerous use, since the data is relatively important. Is there a simpler way to solve this problem?
Global leaderboard in Google App Engine
0
0
0
930
21,979,038
2014-02-24T04:43:00.000
2
0
0
1
python,google-app-engine,cron,leaderboard
21,980,623
3
true
1
0
You don't need 50,000 reads or 50,000 writes. The solution is to set a sorting order on your points property. Every time you update it, the datastore will update its order automatically, which means that you don't need a rank property in addition to the points property. And you don't need a cron job, accordingly. Then, when you need to retrieve a leader board, you run two queries: one for 6 entities with more or equal number of points with your user; second - for 6 entities with less or equal number of points. Merge the results, and this is what you want to show to your user. As for your top 10 query, you may want to put its results in Memcache with an expiration time of, say, 5 minutes. When you need it, you first check Memcache. If not found, run a query and update the Memcache. EDIT: To clarify the query part. You need to set the right combination of a sort order and inequality filter to get the results that you want. According to App Engine documentation, the query is performed in the following order: Identifies the index corresponding to the query's kind, filter properties, filter operators, and sort orders. Scans from the beginning of the index to the first entity that meets all of the query's filter conditions. Continues scanning the index, returning each entity in turn, until it encounters an entity that does not meet the filter conditions, or reaches the end of the index, or has collected the maximum number of results requested by the query. Therefore, you need to combine ASCENDING order with GREATER_THAN_OR_EQUAL filter for one query, and DESCENDING order with LESS_THAN_OR_EQUAL filter for the other query. In both cases you set the limit on the results to retrieve at 6. One more note: you set a limit at 6 entities, because both queries will return the user itself. You can add another filter (userId NOT_EQUAL to your user's id), but I would not recommend it - the cost is not worth the savings. Obviously, you cannot use GREATER_THAN/LESS_THAN filters for points, because many users may have the same number of points.
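A rough ndb sketch of the two queries described above; the model and property names are assumptions:

    from google.appengine.ext import ndb

    class Player(ndb.Model):
        points = ndb.IntegerProperty(default=0)

    def neighbours(my_points):
        # 6 entities at or above my score (includes me), lowest of those first
        above = Player.query(Player.points >= my_points).order(Player.points).fetch(6)
        # 6 entities at or below my score (includes me), highest of those first
        below = Player.query(Player.points <= my_points).order(-Player.points).fetch(6)
        return above, below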
2
3
0
I want to build a backend for a mobile game that includes a "real-time" global leaderboard for all players, for events that last a certain number of days, using Google App Engine (Python). A typical usage would be as follows: - User starts and finishes a combat, acquiring points (2-5 mins for a combat) - Points are accumulated in the player's account for the duration of the event. - Player can check the leaderboard anytime. - Leaderboard will return top 10 players, along with 5 players just above and below the player's score. Now, there is no real constraint on the real-time aspect, the board could be updated every 30 seconds, to every hour. I would like for it to be as "fast" as possible, without costing too much. Since I'm not very familiar with GAE, this is the solution I've thought of: Each Player entity has a event_points attribute Using a Cron job, at a regular interval, a query is made to the datastore for all players whose score is not zero. The query is sorted. The cron job then iterates through the query results, writing back the rank in each Player entity. When I think of this solution, it feels very "brute force". The problem with this solution lies with the cost of reads and writes for all entities. If we end up with 50K active users, this would mean a sorted query of 50K+1 reads, and 50k+1 writes at regular intervals, which could be very expensive (depending on the interval) I know that memcache can be a way to prevent some reads and some writes, but if some entities are not in memcache, does it make sense to query it at all? Also, I've read that memcache can be flushed at any time anyway, so unless there is a way to "back it up" cheaply, it seems like a dangerous use, since the data is relatively important. Is there a simpler way to solve this problem?
Global leaderboard in Google App Engine
1.2
0
0
930
21,979,134
2014-02-24T04:52:00.000
6
0
1
0
python,python-3.x
21,979,218
2
true
0
0
They are bitwise shift operators. For example, 2 has the binary equivalent 00000010, so 2 << 1 is 00000010 shifted left 1 time. This yields 00000100, which is 4. 1 >> 2 is 00000001 shifted right 2 times which is 00000000 (the 1 falls off the end after the first shift though, so 1>>1 is also 0), obviously that is 0.
2
1
0
If I do print(1 >> 2) I get 0. If I do print(2 << 1) I get 4. If I do print(9 << 3) I get 72 If I do print(3 >> 9) I get 0 What do >> and << do in python?
What do >> and << do in python
1.2
0
0
88
21,979,134
2014-02-24T04:52:00.000
3
0
1
0
python,python-3.x
21,979,149
2
false
0
0
Bitwise shift left and bitwise shift right. They're roughly equivalent to doubling (<<) or halving (>>) just like decimal shift left is roughly equivalent to multiplying by 10 and decimal shift right is roughly equivalent to dividing by 10.
2
1
0
If I do print(1 >> 2) I get 0. If I do print(2 << 1) I get 4. If I do print(9 << 3) I get 72 If I do print(3 >> 9) I get 0 What do >> and << do in python?
What do >> and << do in python
0.291313
0
0
88
21,981,387
2014-02-24T07:25:00.000
0
0
0
1
python,google-app-engine,task
21,981,710
2
false
1
0
It seems impossible to guarantee that B will be next.
2
1
0
This is about the task queue in GAE. For example, I have tasks A and B. How do I ensure that task B starts right after task A finishes? There could be other tasks, like C, in between, which is the problem I want to fix. Also, 'right after' could be loosened to just 'after'. How about a dedicated queue with max_concurrent_requests set to 1?
How to make a task start right after another one finish in google app engine?
0
0
0
159
21,981,387
2014-02-24T07:25:00.000
2
0
0
1
python,google-app-engine,task
21,982,975
2
false
1
0
If you only have two tasks, you can start task B at the end of task A. For example, a task that updates user scores can start a task to send emails after it finished updating scores. In this case, you are guaranteed that task B is executed after task A, but there is no guarantee that there is no task C in between them - unless, of course, you don't have task C - or any other tasks - at all.
2
1
0
This is about the task queue in GAE. For example, I have tasks A and B. How do I ensure that task B starts right after task A finishes? There could be other tasks, like C, in between, which is the problem I want to fix. Also, 'right after' could be loosened to just 'after'. How about a dedicated queue with max_concurrent_requests set to 1?
How to make a task start right after another one finish in google app engine?
0.197375
0
0
159
21,983,713
2014-02-24T09:33:00.000
1
0
1
0
python,twisted,pyinstaller,zope.interface
21,988,632
2
false
0
0
Finally the problem is solved. The problem was that zope.interface was not getting added to PYTHONPATH. I had actually tried different setups (like pip & the exe installer), though it was not getting added; the exact reason I don't know. But after installing zope.interface using 'easy_install ', it is added to PYTHONPATH & I am able to create the executable file. Thank you for taking an interest.
1
0
0
I am trying to create a standalone exe of a Twisted application using PyInstaller. Everything is OK, the executable file even gets built, but it's not working. I mean, if I try to execute it, it gives the error 'Import error: Twisted requires zope.interface 3.6.0 or later: no module named zope.interface." I already have zope.interface 4.1.0 installed. Also, the Twisted application runs fine with 'python ' But at the time of building the executable file PyInstaller is unable to import zope.interface. How do I solve this issue? Thank you in advance.
Pyinstaller import error: zope.interface not found
0.099668
0
0
1,987
21,986,203
2014-02-24T11:17:00.000
3
0
0
0
python,openerp,openerp-7
21,987,900
1
true
1
0
You can write a stored functional integer field on hr.employee with a function returning the month as an integer. Then you can use this field for filters.
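A sketch of that functional field in the old OpenERP 7 API; treat it as an illustration of the idea rather than drop-in code, and the field name birthday_month is made up:

    from openerp.osv import osv, fields

    class hr_employee(osv.osv):
        _inherit = 'hr.employee'

        def _get_birthday_month(self, cr, uid, ids, field_name, arg, context=None):
            res = {}
            for emp in self.browse(cr, uid, ids, context=context):
                # birthday is stored as 'YYYY-MM-DD'; keep the month as an int
                res[emp.id] = int(emp.birthday[5:7]) if emp.birthday else 0
            return res

        _columns = {
            'birthday_month': fields.function(_get_birthday_month, type='integer',
                                              string='Birthday Month', store=True),
        }

A filter could then use a domain like [('birthday_month', '=', 2)].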
1
0
0
In HR module, in Employee form, I want to create a filter which gives me list of all employees whose birthday's appear in current month. Currently I am trying with static month, as below - but gives me error. [('birthday.month','=','02')] Error: File "/usr/lib/pymodules/python2.7/openerp/osv/expression.py", line 1079, in __leaf_to_sql or left in MAGIC_COLUMNS, "Invalid field %r in domain term %r" % (left, leaf) AssertionError: Invalid field 'birthday.month' in domain term ('birthday.month', '=', '02') Is there any way out to accomplish it?
How to create a filter which compares only month in date type in OpenERP?
1.2
0
0
670
21,986,356
2014-02-24T11:24:00.000
2
0
0
0
python,opencv,image-processing
21,987,220
3
false
0
0
If you think that there will not be any change in the shape (I mean the arc won't become a line or something like that) then you can have a look at the Generalized Hough Transform (GHT), which can detect any shape you want. Cons: There is no direct function in the OpenCV library for GHT, but you can find several source codes on the internet. It is sometimes slow but can become fast if you set the parameters properly. It won't be able to detect the shape if it changes. For example, I tried to detect squares using GHT and I got good results, but when the squares were not perfect squares (i.e. rectangles or something like that), it didn't detect them.
2
1
1
I am trying to detect arcs inside an image. The information that I have for certain is the radius of the arc. I can try and maybe get the centre of the circle whose arc I want to identify. Is there any algorithm in OpenCV which can tell us whether a detected contour (or edge from Canny edge detection) is an arc or an approximation of an arc? Any help on how this would be possible in OpenCV with Python, or even a general approach, would be very helpful. Thanks
Detect an arc from an image contour or edge
0.132549
0
0
5,077
21,986,356
2014-02-24T11:24:00.000
1
0
0
0
python,opencv,image-processing
22,008,350
3
false
0
0
You can do it this way: Convert the image to edges using the Canny filter. Make the image binary using the threshold function; there is an option for a regular threshold, Otsu or adaptive. Find contours with sufficient length (findContours function). Iterate over all the contours and try to fit an ellipse (fitEllipse function). Validate the fitted ellipses by radius. Check if a detected ellipse is a good fit by checking how many of the contour pixels lie on the detected ellipse. Select the best one. You can try to increase the speed using RANSAC, each time selecting 6 points from the binarized image and trying to fit.
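A loose sketch of those steps with the Python cv2 bindings; the file name, thresholds and radius value are assumptions, and the findContours return signature varies between OpenCV versions:

    import cv2

    img = cv2.imread("arcs.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    # [-2] picks the contour list whether findContours returns 2 or 3 values
    contours = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2]

    expected_radius = 40.0          # known arc radius, in pixels (assumption)
    for cnt in contours:
        if len(cnt) < 5:            # fitEllipse needs at least 5 points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(cnt)
        radius = (w + h) / 4.0      # average semi-axis as a crude radius estimate
        if abs(radius - expected_radius) < 5:
            print("candidate arc centred at", (cx, cy))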
2
1
1
I am trying to detect arcs inside an image. The information that I have for certain is the radius of the arc. I can try and maybe get the centre of the circle whose arc I want to identify. Is there any algorithm in OpenCV which can tell us whether a detected contour (or edge from Canny edge detection) is an arc or an approximation of an arc? Any help on how this would be possible in OpenCV with Python, or even a general approach, would be very helpful. Thanks
Detect an arc from an image contour or edge
0.066568
0
0
5,077
21,986,588
2014-02-24T11:35:00.000
1
0
1
0
python,python-multithreading
21,986,753
2
false
0
0
Use a Queue (from the Queue module on Python 2, queue on Python 3) to store the tasks you want to execute. The main loop periodically checks for entries in the queue and executes them one by one when it finds something. Your GPIO monitoring threads put their tasks into the queue (only one queue is required to collect from many threads). You can model your tasks as callable objects or function objects.
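A compact sketch of that single-consumer pattern (Python 2 module name shown; the callback wiring to the GPIO library is left out, and handle_pin is a placeholder):

    import Queue          # 'queue' on Python 3
    import time

    tasks = Queue.Queue()

    def handle_pin(channel):
        print("pin %s fired" % channel)

    def gpio_callback(channel):
        # Runs in the GPIO thread: just enqueue work, touch nothing shared
        tasks.put(lambda: handle_pin(channel))

    def main_loop():
        while True:
            try:
                job = tasks.get_nowait()
            except Queue.Empty:
                time.sleep(0.05)   # idle; nothing queued
                continue
            job()                  # execute in the main thread only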
1
0
0
I am using Python with the Rasbian OS (based on Linux) on the Raspberry Pi board. My Python script uses GPIOs (hardware inputs). I have noticed when a GPIO activates, its callback will interrupt the current thread. This has forced me to use locks to prevent issues when the threads access common resources. However it is getting a bit complicated. It struck me that if the GPIO was 'queued up' until the main thread went to sleep (e.g. hits a time.sleep) it would simplify things considerably (i.e. like the way that javascript deals with things). Is there a way to implement this in Python?
Force Python to run in a single thread
0.099668
0
0
2,967
21,988,618
2014-02-24T13:02:00.000
0
0
0
0
python,google-glass
22,016,838
1
false
1
0
A couple of things that are standard debugging practices, and you may want to update the original question to clarify: Did OAuth actually fail? What information do you have that it failed? Can you verify from web server logs that the callback URL was hit and that it contained non-error return values? Can you check your web server and app server logs to see if there are any error messages or exceptions logged?
1
0
0
I went through the instructions for the Google Glass Python Quick Start. I deployed the app and the app supposedly finished deploying successfully. I then went to the main URL for the app and attempted to open the page. The page asked me which Google Account I wanted to use to access the app, and I chose one. It went through some type of redirect and then came back to my app and tried to open up the openauth2callback page, at which time nothing else happened. It just stopped on the openauth2callback page and sat there whitescreened. I assume that the app is supposed to look like the sample app that was posted, where I should see timeline cards and be able to send messages, but I don't see any of that. I checked my oauth callbacks and they look exactly like the quick start instructions said to make them. What am I missing?
OAuth fails after deploying google glass application
0
0
1
48
21,988,970
2014-02-24T13:18:00.000
3
0
1
0
python,pyobjc
22,010,598
2
true
0
1
The easiest way to ensure problem free usage of the bridge is to ensure that the delegate methods that you use don't use C arrays as arguments or return values and don't use variadic signatures ("..."). Futhermore ensure that all pass-by-reference arguments (such as the commonly used "NSError**" argument) are marked up with "in", "out" or "inout" to indicate in which direction values are passed. This ensures that the bridge can get all information it needs from the Objective-C runtime. There are two options to pass a preconstructed object to Python code: Create a class method that returns the object Use the PyObjC API to create the python proxy for the Objective-C object. The latter uses an internal PyObjC API (also used by the framework wrappers) and could break in future versions of PyObjC. That said, I don't have active plans to break the solution I describe here. First ensure that the right version of "pyobjc-api.h" and "pyobjc-compat.h" are available for the Objective-C compiler. Use #include "pyobjc-api.h" to make the API available. Call "PyObjC_ImportAPI" after initialising the Python interpreter, but before you use any other PyObjC function. Use "pyValue = PyObjC_IdToPython(objcValue)" to create a Python representation for an Objective-C object.
1
1
0
I'm embedding Python in an Objective C application using PyObjC, setting the Python environment up by hand on the ObjC side (i.e. not using py2app). I'm starting the Python in a separate thread (on the ObjC side) with a call to PyRun_File(), and then one to PyObject_CallFunction() to start it doing something (specifically update some menus, register and handle menu callbacks). The Python function starts the run loop for the secondary thread and the callback class instance hangs around. All of this works. I can pass basic data such as strings to the initial Python function without problem and menu actions work as I'd like. I'd like to provide a pre-instantiated delegate instance to the Python function for ease of configuration (from the point of view of the Objective C developer). How do I pass an Objective C instance (my delegate) to the Python function? Is it possible? Are there bridge functions I'm missing? Do I need to create a suitably configured PyObject by hand? What sort of conversion should I do on the Python side to ensure the delegate.methods() are usable from Python and proxying works as it should? Are there any memory-management issues I should be aware of (on either side of the bridge)? I'm using Python 2.7, PyObjC 2.5 and targeting OSX 10.6. Happy to consider changing any of those if the solution specifically demands it. TIA.
How do I pass an Objective C instance into a PyObjC Python function when using PyRun_File()/PyObject_CallFunction()?
1.2
0
0
853
21,993,460
2014-02-24T16:33:00.000
2
0
0
0
python,pyramid,chameleon,deform,bokeh
22,392,624
2
false
0
0
You want to use plot.create_html_snippet. This function returns the code that you want to appear in the HTML, the function also writes out an embed file. This is what an embed snippet looks like <script src="http://localhost:5006/static/dc0c7cfd-e657-4c79-8150-6a66be4dccb8.embed.js" bokeh_plottype="embeddata" bokeh_modelid="dc0c7cfd-e657-4c79-8150-6a66be4dccb8" bokeh_modeltype="Plot" async="true"></script> the following arguments control how the embed file is written out, and where the js code searches for the embed files. embed_base_url controls the url path (it can be absolute or relative) that the javascript will search for the embed file in. embed_save_loc controls the directory that python will write out the embed file in. embed_save_loc isn't necessary when server=True static_path controls the url path (it can absolute or relative) that the javascript will use to construct URLS for bokeh.js and bokeh.css. It defaults to http://localhost:5006/static/, but could just as easily point to a CDN When running the bokeh server, navigate to http://localhost:5006/bokeh/generate_embed/static . I think this requires you to be running on master because of a bug. I hope this helps.
1
3
0
I have a project with many scripts using Matplotlib. I'd like to build a web interface for this project. How do you place a Bokeh chart within a Chameleon template? I'm using Pyramid and the Deform bootstrap if that matters. Does anyone have a good example out there?
How do you place a Bokeh chart within a Chameleon template?
0.197375
0
0
608
21,995,068
2014-02-24T17:46:00.000
0
0
0
0
python,linux
21,995,095
2
false
0
0
It's not open() that it is choking on; it is the with syntax. Context managers did not exist in Python 2.4. You must also explicitly close the file.
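A 2.4-compatible sketch of the same write, with the explicit close the answer mentions (the written line is a placeholder):

    fout = open("text.new", "wt")
    try:
        fout.write("edited line\n")
    finally:
        fout.close()   # must be done by hand; no 'with' before Python 2.5/2.6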
1
0
0
When I use open() in python 2.4.3, I get the following error: File "/tmp/chgjdbcconfig.py", line 16 with open("text.new", "wt") as fout: ^ SyntaxError: invalid syntax I checked the python version and this is my output: Python 2.4.3 I was looking for advice on what the alternative could be. I am trying to edit lines in an XML file on a linux server and I have no control over upgrades of the python version. Any advice would be great!
What is the alternative for open in python 2.4.3
0
0
0
5,875
21,997,897
2014-02-24T20:07:00.000
18
0
0
0
python,matplotlib
21,998,600
2
true
0
0
To get the whiskers to appear at the min and max of the data, set the whis parameter to an arbitrarily large number. In other words: boxplots = ax.boxplot(myData, whis=np.inf). The whis kwarg is a scaling factor of the interquartile range. Whiskers are drawn to the outermost data points within whis * IQR away from the quartiles. Now that v1.4 is out: In matplotlib v1.4, you can say: boxplots = ax.boxplot(myData, whis=[5, 95]) to set the whiskers at the 5th and 95th percentiles. Similarly, you'll be able to say boxplots = ax.boxplot(myData, whis='range') to set the whiskers at the min and max. Note: you could probably modify the artists contained in the boxplots dictionary returned by the ax.boxplot method, but that seems like a huge hassle
1
11
1
I understand that the ends of whiskers in matplotlib's box plot function extend to the max value below 75% + 1.5 IQR and the minimum value above 25% - 1.5 IQR. I would like to change them to represent the max and minimum values of the data, or the 5th and 95th percentiles of the data. Is it possible to do this?
Changing what the ends of whiskers represent in matplotlib's boxplot function
1.2
0
0
8,736
21,998,474
2014-02-24T20:37:00.000
0
0
1
0
python,python-2.7,scrapy
22,020,164
1
false
1
0
The only application I see for running multiple instances of a spider at the same time is when each instance has its own part of start_urls. But each instance should be run on a different network interface, otherwise you cannot effectively control the crawling intensity for the same domain.
1
1
0
I am using scrapy 0.20 with python 2.7. I want to ask: what are the cons and pros of running the same spider twice at the same time? Please know that I am using a pipeline in order to write the results to a json file. Thanks
Running the same spider simultaniously
0
0
0
79
21,999,501
2014-02-24T21:34:00.000
2
0
1
0
python,fixed-point
22,003,552
2
true
0
0
I am assuming that you have (starting from the left end) one sign bit, the assumed binary point, and then seven bits that represent a fractional value. If that's the case then you can just take the signed integer value of the fixed-point number and divide by 128. You'll need to do this division using floating-point values, of course, because the result will be less than 1. The range of values that can be represented in this fixed-point format is -1.0 to +(127/128).
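A small sketch of that conversion for a raw byte value (0-255) read from the device:

    def fix8_7_to_float(raw_byte):
        # Interpret the byte as two's complement, then scale by 2**-7
        value = raw_byte - 256 if raw_byte > 127 else raw_byte
        return value / 128.0

    print(fix8_7_to_float(0x80))   # -1.0, the most negative representable value
    print(fix8_7_to_float(0x7F))   #  0.9921875, i.e. 127/128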
1
4
0
I would like to read an 8 bit number that comes in a 2's complement fix8_7 format (an 8 bit number with the binary point at the 7th bit). How can I do this in Python?
How to read fixed point numbers in python
1.2
0
0
1,245
21,999,711
2014-02-24T21:46:00.000
6
0
0
0
python,matplotlib
22,573,142
2
false
0
0
FuncAnimation is a subclass of TimedAnimation. It takes frames as an input for the update function, which can be a number or a generator. It takes repeat as an argument, which is inherited from TimedAnimation; by setting repeat to False you can stop the animation from repeating itself. PS: Matplotlib documentation is lame: mistakes, lazy writing style, unclear explanations. Sometimes I really have to dig into the source code to figure out what to do. For example, FuncAnimation also takes fargs as extra arguments for the update function, but its documentation doesn't say what type fargs should be. In fact, from what I found in its source code, fargs is used as a list.
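A minimal non-repeating animation along those lines (the data is purely illustrative):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    fig, ax = plt.subplots()
    x = np.linspace(0, 2 * np.pi, 200)
    line, = ax.plot(x, np.sin(x))

    def update(frame):
        line.set_ydata(np.sin(x + frame / 10.0))
        return line,

    # repeat=False stops the animation after the last frame instead of looping
    anim = FuncAnimation(fig, update, frames=100, interval=50, repeat=False)
    plt.show()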
1
4
0
Does anyone know the preferred method for stopping FuncAnimation? I am using it to record data from an oscilloscope and would like to be able to pause and restart the data on demand. Is there any way I can send a button click event to it? Thanks, Derek
Python - Stop FuncAnimation
1
0
0
6,236
22,000,918
2014-02-24T22:55:00.000
1
0
0
0
python,web-services,api,rest,amazon-web-services
22,004,122
2
false
1
0
A bit broad really, especially the Python part. Yes, this can be considered an API. Think of SOAP and REST services as an API available via the network. This question is opinion based and not suited for discussion here. A guideline is that if it works for you, it is good. Yes, you should use the REST services for the website, otherwise you will duplicate work.
1
0
0
I just decided to start working on a mobile application for fun, but it will require a back-end. So I created an EC2 instance on Amazon Web Services, with an Amazon Linux AMI installed. I have also set up a database instance and inserted some dummy data in there. Now, the next step I want to take is to write a RESTful web service that will run on my server and interface with my database (which is independent from my server). First question, would this be considered an API? Second, I am doing research on implementing this web service in Python; in your opinion, are there better choices? Third, if I make a website, would/should it also be able to use this RESTful web service to query data from the database?
A Few questions on writing a RESTful web service
0.099668
0
1
81
22,001,176
2014-02-24T23:12:00.000
1
0
0
0
python,csv,pandas
22,001,369
1
false
0
0
Just do str(int(float('2.09228E+14'))) which should give you '209228000000000'
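Applied to read_csv, the same trick can be wired in through the converters argument; the column name here is a stand-in:

    import pandas as pd

    df = pd.read_csv("data.csv",
                     converters={"account_id": lambda s: int(float(s))})
    print(df["account_id"].head())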
1
0
1
I'm trying to read a csv with pandas using the read_csv command. However, one of my columns is a 15 digit number which is read in as a float and then truncated to exponential notation. So the entries in this column become 2.09228E+14 instead of the 15 digit number I want. I've tried reading it as a string, but I get '2.09228E+14' instead of the number. Any suggestions?
Pandas read csv data type
0.197375
0
0
1,866
22,001,578
2014-02-24T23:42:00.000
0
0
0
1
python,google-app-engine,google-cloud-datastore,app-engine-ndb
22,031,727
1
false
1
0
You need to take a look at the Google Spreadsheets API: google it, try it, and come back when something specific doesn't work. Also consider using Google Forms instead, which already does what you want (save responses to a spreadsheet).
1
0
0
I am making a survey website on Google App Engine using Python. For saving the survey form data I am using the NDB Datastore. After the survey I have to export it as a spreadsheet or CSV. How can I do that? Thanks.
backing up data from app engine datastore as spreadsheet or csv
0
0
0
163
22,004,118
2014-02-25T03:43:00.000
0
0
1
0
python,python-2.7,python-3.x
22,004,184
2
false
0
0
print is a keyword, and print() is a function. For almost all purposes that I have encountered, print(x) (python 3.x) will work just like print x (python 2.x). However, you cannot use the comma to avoid going to a new line any more. Instead, you must use print(x, end=k), where k is the string that will be printed after x. For most purposes, you can use k = '' to avoid going to a new line.
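A quick illustration of the end keyword replacing the old trailing comma (the __future__ import makes it work on Python 2.6+ as well):

    from __future__ import print_function

    print("loading", end="")     # stays on the same line
    print("...")                 # default end is a newline
    for n in (1, 2, 3):
        print(n, end=" ")        # old style was: print n,
    print()                      # finish the line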
1
0
0
I've had to follow an example recently that encouraged me to use the new style python print() function, which I can only access after from __future__ import print_function. What are the major differences between the two? What was the old print, if not a function?
Why is the new python "print(x)" function better than the old "print x" approach?
0
0
0
229
22,004,386
2014-02-25T04:09:00.000
0
0
1
1
ipython,anaconda
22,021,547
5
false
0
0
Recent versions of iTerm send notifications to notification center when there is output in a non-visible tab. They fold into notification center by default, but you can change them to stay on the screen in the Notifications preferences in System Preferences.
1
21
0
How do I get IPython to notify me when a command has been executed? Can I get it to use the bell/alert, or by pop-up? I'm running Anaconda on iTerm on OS X 10.8.5.
notify when execution/command is completed
0
0
0
8,943
22,004,427
2014-02-25T04:13:00.000
0
1
1
0
javascript,python,language-agnostic,dynamic-typing,static-typing
22,004,578
1
true
0
0
I'm a C++/C# dev by training, and I found that I got better with JS after I started writing it. Try going all in on JS and write something in it. Maybe Node.js. Maybe learn to use it with a frontend framework like Angular or Knockout. Maybe both together. If you want to improve from there, check out Douglas Crockford's "JavaScript: The Good Parts". He writes some good suggestions on how to write better JS. It's not ironclad, community-proven best practices, but he offers some solid stuff.
1
2
0
I have a strong background in Java, which obviously is a statically-typed and type-safe language. I find that I am able to read through large amounts of code very quickly and easily, assuming that the programmer who wrote it followed basic conventions and best practices. I am also able to write code pretty quickly, given a good IDE like Eclipse or IntelliJ, because of the benefits of compilation and auto completion. I'd like to become more proficient, effective and efficient at reading/writing code in more dynamic languages like Python and JavaScript. The problem is that I find myself not understanding code nearly as fast as I would in Java, mainly because I comprehend code very quickly based on its types. Also when writing, there really is no auto complete available to quickly see what methods are available. Edit -- I ask this in the context of larger-scale projects where the code continues to grow and evolve. What are general strategies or caveats when reading and writing in languages like these when the project sizes are much larger and non-trivial? Or does it come with time? Much thanks!
Strategies to be more effective at programming in dynamic languages
1.2
0
0
93
22,004,809
2014-02-25T04:44:00.000
0
0
0
0
python,csv,sqlite
22,005,726
2
false
0
0
Make the headers of the columns in the CSV the same as the column names in the sqlite3 table. Then read the values directly and check each one's type using type() before inserting into the DB.
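A rough sketch of guessing each column's affinity from the first data row and creating the table on the fly; the table and file names are made up, and the open mode is Python 2 style ("r" with newline="" on Python 3):

    import csv
    import sqlite3

    def guess_affinity(value):
        for cast, affinity in ((int, "INTEGER"), (float, "REAL")):
            try:
                cast(value)
                return affinity
            except ValueError:
                pass
        return "TEXT"

    conn = sqlite3.connect("data.db")
    with open("data.csv", "rb") as f:
        reader = csv.reader(f)
        header = next(reader)
        first = next(reader)
        cols = ", ".join('"%s" %s' % (h, guess_affinity(v)) for h, v in zip(header, first))
        conn.execute("CREATE TABLE data (%s)" % cols)
        marks = ", ".join("?" * len(header))
        conn.execute("INSERT INTO data VALUES (%s)" % marks, first)
        conn.executemany("INSERT INTO data VALUES (%s)" % marks, reader)
    conn.commit()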
1
2
0
I have a csv file with about 280 columns, which are possibly changing from time to time. Is there a way to import a csv file to sqlite3 and have it 'guess' the column types? I am using a python script to import this.
csv import sqlite3 without specifying column types
0
1
0
1,025
22,008,273
2014-02-25T08:16:00.000
26
0
1
0
python,buffer,block,chunks,sector
22,009,516
1
true
0
0
Chunk is used for any (typically rather large) amount of data which still is only a part of any size of a whole, e. g. the first 1000 bytes of a file. The next 3000 bytes could be the next chunk. Block is used for a fixed amount of data (typically technically determined) which typically is only part of a whole, e. g. the first 1024 bytes of a file. The next block would then also be 1024 bytes long. Also, sometimes not all of a block is used; the second and last block of a file of 1034 bytes is still 1024 bytes large, but only 10 bytes of it will be in use. Offset is a positional distance, typically between the beginning of something and the position of interest; e. g. if the 23rd byte in a file of weather data stores the temperature, then the temperature's offset is 23 bytes. It can also be a shift of a data position, e. g. if something has gone wrong and now a file is corrupted, this can be because all bytes are shifted 32 bytes to the back (after inserting 32 zeros at the beginning or similar), then the whole file has an offset of 32 bytes. Buffer is a piece of memory in which things are collected in order to process them as a whole when the buffer is full (or nearly full). A typical example is buffered output; here single characters are buffered until a line is complete, and then the whole line is printed to the terminal in one write operation. Sometimes buffers have a fixed size, sometimes they just have an upper limit. Sector is like a block, a fixed size part of a whole, but related even more to a technical origin. The whole in this case often is a piece of hardware (like a hard drive or a CD), and typically sectors contain blocks.
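To make the "chunk" idea concrete, a tiny generator that copies a file a fixed-size chunk at a time (file names and chunk size are arbitrary):

    def read_in_chunks(source, chunk_size=64 * 1024):
        # Yield successive chunks until the file is exhausted
        while True:
            chunk = source.read(chunk_size)
            if not chunk:
                break
            yield chunk

    with open("big.bin", "rb") as src, open("copy.bin", "wb") as dst:
        for chunk in read_in_chunks(src):
            dst.write(chunk)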
1
11
0
I have seen some of the scripts which are either dealing with archive or binary data or copy files (not using python default functions) use chunk or block or offset or buffer or sector. I have created a Python application and few of the requirements have been met by external libraries (archival / extracting data) or binaries. I would like to dive deeper now to get those third party library features into my application by writing a module of my own. Now I would like to know what those terms mean and where I can get started. Is there any documentation for the subject above? Any documentation relevant to those words on the Python programming language would also be appreciated.
What do "chunk", "block", "offset", "buffer", and "sector" mean?
1.2
0
0
12,224
22,013,532
2014-02-25T11:59:00.000
0
0
0
0
javascript,jquery,python,api
25,426,295
1
false
1
0
It all depends on what you're authenticating. If you're authenticating each user that uses your API, you have to do something like the following: your site has to somehow drop a cookie in that user's browser; your API needs to support CORS (we use easyXDM.js); and upon logging in to their site, their site needs to send the user to your site to have a token passed that authenticates the user against your API (or vice versa, depending on the relationship). If you're just authenticating that a certain site is authorized to use your API, you can issue that site an API key. You check for that API key whenever your API is called. The problem with this approach is that JavaScript is visible to the end user. Anyone who really wants to use your API could simply use the same API key. It's not really authentication without some sort of server-to-server call. At best, you're simply offering a very weak line of defense against the most obvious attacks.
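For the API-key variant, the server-side check can be as small as the Flask-style sketch below (the header name, key store, and endpoint are assumptions of mine, and, as said above, a key embedded in public JavaScript only identifies the calling site and is a weak deterrent, not real security):

```python
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

# Keys issued to partner sites; in practice these would live in a database.
VALID_API_KEYS = {"site-a": "key-for-site-a", "site-b": "key-for-site-b"}

@app.before_request
def require_api_key():
    # Reject any request that does not carry a known key.
    key = request.headers.get("X-Api-Key")
    if key not in VALID_API_KEYS.values():
        abort(401)

@app.route("/data")
def data():
    return jsonify({"ok": True})
```

Rate limiting would then hang off the same key: count requests per key per time window and return 429 once the quota is exceeded.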
1
1
0
I have a JavaScript file placed on a third-party site, and this JS makes API calls to my server. The JS is publicly available, and the third party cannot save credentials in the JS. I want to authenticate API calls before sharing JSON, and I also want to rate limit. Does anyone have ideas on how I can authenticate the API?
how to do authentication of rest api from javascript, if javascript is on third party site?
0
0
1
117
22,017,118
2014-02-25T14:28:00.000
3
0
1
0
python,multiprocessing,pool
33,544,728
2
false
0
0
You can use as many workers as you have memory for. That being said, if you set up a pool without the processes argument, you'll get as many workers as the machine has CPUs. From the Pool docs: processes is the number of worker processes to use. If processes is None then the number returned by os.cpu_count() is used. If you're doing CPU-intensive work, I wouldn't want more workers in the pool than your CPU count. More workers would force the OS to context-switch your processes, which in turn lowers system performance. Even resorting to using hyperthreading cores can, depending on your work, choke the processor. On the other hand, if your task is like a webserver with many concurrent requests that individually are not maxing out your processor, go ahead and spawn as many workers as you've got memory and/or IO capacity for. maxtasksperchild is something different. This flag forces the pool to release all resources accumulated by a worker once the worker has been used/reused a certain number of times. If you imagine your workers read from a disk, and this work has some setup overhead, maxtasksperchild will clear that overhead once a worker has done this many tasks.
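A small sketch showing both settings together (the worker function and the numbers are placeholders):

```python
import os
from multiprocessing import Pool

def work(item):
    # Placeholder for a CPU-bound task.
    return item * item

if __name__ == "__main__":
    # One worker per CPU for CPU-bound work; each worker is replaced
    # after completing 100 tasks, releasing whatever it accumulated.
    with Pool(processes=os.cpu_count(), maxtasksperchild=100) as pool:
        results = pool.map(work, range(1000))
    print(results[:5])
```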
1
6
0
I am making use of Python's multiprocessing library and am wondering what the maximum number of worker processes I can use would be. E.g. I have defined async.pool = Pool(100), which would allow me to have at most 100 async processes running at the same time, but I have no clue what the real maximum value for this is. Does anyone know how to find the max value for my Pool? I'm guessing it depends on CPU or memory.
Python multiprocessing: max. number of Pool worker processes?
0.291313
0
0
6,331
22,017,835
2014-02-25T14:55:00.000
2
0
0
0
python,tcp,udp,recv,recvfrom
22,018,452
1
false
0
0
Why does TCP socket.recvfrom() not return the sender address as it does with UDP? Because once a TCP connection is established, that address does not change. That is the address that was passed to connect or received from the accept call. You can also find out the peer's address (if you lost it somehow) with getpeername. When does TCP socket.recv() return an empty string? When the peer has closed the connection and no more data will be coming in. You can still send data, though, because TCP connections can be half-closed.
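A short Python sketch illustrating both points (host and port are placeholders): the peer address comes from accept()/getpeername() once for the whole connection, and recv() returning an empty bytes object signals that the peer closed its side.

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))      # placeholder address
srv.listen(1)

conn, addr = srv.accept()          # addr is the peer address for the whole connection
print("peer:", addr, conn.getpeername())

while True:
    data = conn.recv(4096)
    if not data:                   # b"" means the peer closed its sending side
        break
    conn.sendall(data)             # echo the data back

conn.close()
srv.close()
```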
1
0
0
Why does TCP socket.recvfrom() not return the sender address as it does with UDP? When does TCP socket.recv() return an empty string? Thanks!
recv() and recvfrom() methods for TCP
0.379949
0
1
2,605
22,018,798
2014-02-25T15:30:00.000
2
0
0
1
python,django,rabbitmq
32,414,239
1
false
1
0
If anyone else bumps into this problem: the solution is running a RabbitMQ consumer in a different process (but in the same Django codebase) than Django itself (not the one running through wsgi, etc.; you have to start it by itself). The consumer connects to the appropriate RabbitMQ queues and writes the data into the Django models. The usual Django process(es) then act as a "read model" of the data inserted/updated/created/deleted as delivered by the message queue (RabbitMQ or other) from the remote process.
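A rough sketch of such a standalone consumer, started on its own (e.g. as a management command or a plain script), never under the WSGI process. The settings module, queue name, and Event model are hypothetical, and the pika 1.x-style basic_consume API is assumed:

```python
import os
import django
import pika

# Bootstrap Django outside the web process so the ORM/models are usable here.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # hypothetical
django.setup()

from myapp.models import Event  # hypothetical model written to by the consumer

def handle(channel, method, properties, body):
    # A single consumer processes messages one at a time, so DB writes stay ordered.
    Event.objects.create(payload=body.decode("utf-8"))
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)   # hypothetical queue name
channel.basic_consume(queue="events", on_message_callback=handle)
channel.start_consuming()
```

Because only this one process consumes the queue and acknowledges each message after the write, the insertion order in the database matches the delivery order, which is the synchronization the question asks for.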
1
2
0
I need to implement a quite simple Django server that serves some HTTP requests and listens to a RabbitMQ message queue that streams information into the Django app (which should be written to the db). The data must be written to the db in a synchronized order, so I can't use the obvious celery/rabbit configuration. I was told that there is no way to do this in the same Django project, since Django listens to HTTP requests in its own process and can't handle another process listening for Rabbit - forcing me to add another python/django project for the rabbit/db-writes part, working with the same models the http-bound Django project works with. You can smell the trouble with this config from here. Any ideas how to solve this? Thanks!
Django - listening to rabbitmq, in a synchronized way. without celery. in the same process of the web bound django
0.379949
0
0
1,008