Q_Id (int64: 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64: -42 to 1.15k) | Other (int64: 0 to 1) | Python Basics and Environment (int64: 0 to 1) | System Administration and DevOps (int64: 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64: 518 to 72.5M) | AnswerCount (int64: 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64: 0 to 1) | GUI and Desktop Applications (int64: 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64: 1 to 31) | Q_Score (int64: 0 to 6.79k) | Data Science and Machine Learning (int64: 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64: -1 to 1.2) | Database and SQL (int64: 0 to 1) | Networking and APIs (int64: 0 to 1) | ViewCount (int64: 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17,884,487 | 2013-07-26T14:47:00.000 | 1 | 0 | 1 | 0 | python,sublimetext2 | 17,885,345 | 2 | true | 0 | 0 | Sorry, snippets are primarily meant for reusability; you can do roughly what you describe, but you cannot insert context-specific values such as the current class name. You would have to write your own plugin to add this functionality, as it would require a specific way of selecting the text to modify and replace. | 1 | 0 | 0 | Is there any way to access the current scope (class name or function name) inside a snippet? I am trying to write a snippet for super(CurrentClassName, self).get(*args, **kwargs), but it seems I can't really replace CurrentClassName with the actual class name. Does anyone know how to do that? | Sublime text 2 snippets | 1.2 | 0 | 0 | 351 |
17,887,193 | 2013-07-26T17:10:00.000 | 0 | 1 | 1 | 0 | python,batch-file | 17,887,277 | 2 | false | 0 | 0 | Ultimately there needs to be some way to tell your script what it needs to process. There's a variety of ways to do that, but it really just needs to happen. I think the most obvious thing to do in the batch file is to copy the target file in place before running your Python script.
However, a cleaner solution might be to take a command line argument into the python script (via sys.argv) which tells the script which file it needs to process. | 1 | 0 | 0 | I have a python script "cost_and_lead_time.py" in C:\Desktop\A. This script imports 3 other scripts and a "cost_model.json" file, all of which are in folder A.
Now, I have a simulation result in say C:\Desktop\Model\Results. I have a one line batch file in this folder of "call C:\Desktop\A\cost_and_lead_time.py", but it returns an error when it tries to open the cost_model.json file. It doesn't appear to have an issue importing the 3 other scripts as those appear before the json is opened.
My question is, is there any way to keep this cost_model.json file in that directory and run the script through the batch file without copy/pasting the json file into the results folder? The only way I can think of is to hard code the full path of the file in the python script, but that isn't ideal for me. I'm looking for code to add to the batch file, not python script.
Thanks | bat file running py script in different directory | 0 | 0 | 0 | 1,303 |
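A minimal sketch of the sys.argv approach from that answer (the paths mirror the question; the fallback logic is illustrative):

```python
# The batch file in C:\Desktop\Model\Results would then contain, e.g.:
#   call python C:\Desktop\A\cost_and_lead_time.py C:\Desktop\A\cost_model.json
import json
import os
import sys

# Fall back to a cost_model.json sitting next to this script.
default = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                       "cost_model.json")
config_path = sys.argv[1] if len(sys.argv) > 1 else default

with open(config_path) as f:
    cost_model = json.load(f)
```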
17,888,244 | 2013-07-26T18:15:00.000 | 0 | 1 | 0 | 1 | python,fabric | 17,929,568 | 1 | false | 0 | 0 | Overrode the "env" variable via parameter in the function. Dumb mistake. | 1 | 0 | 0 | I'm developing a task where I need to have a few pieces of information specific to the environment.
I set up the ~/.fabricrc file, but when I run the task via the command line, the data is not in the env variable.
I don't really want to add the -c config to simplify the deployment.
in the task, I'm calling
env.cb_account
and I have in ~/.fabricrc
cb_account=foobar
it throws AttributeError
Has anybody else run into this problem?
I found the information when I view env outside of my function/task. So now the question is how do I get that information into my task? I already have 6 parameters so I don't think it would be wise to add more especially when those parameters wouldn't change. | Python Fabric config file (~/.fabricrc) is not used | 0 | 0 | 0 | 675 |
17,888,802 | 2013-07-26T18:47:00.000 | 2 | 0 | 1 | 0 | python,multithreading,python-2.7,subprocess | 17,888,827 | 2 | true | 0 | 0 | If you spawn a thread that runs a function, and then that function completes before the end of the program, then yes, the thread will get garbage collected once it is (a) no longer running, and (b) no longer referenced by anything else. | 1 | 2 | 0 | Will the resources of non-daemon thread get released back to OS once the threads completes? ie If the main thread is not calling join() on these non-daemon threads, will the python GC call join on them and release the resources once held by the thread? | Is the Python non-daemon thread a non-detached thread? When is its resource freed? | 1.2 | 0 | 0 | 1,763 |
17,893,099 | 2013-07-27T00:39:00.000 | 0 | 1 | 0 | 0 | python-2.7,configuration,flask | 17,984,932 | 1 | true | 1 | 0 | The solution is a hybrid between my initial plan and Sean's suggestion. I use multiple config files and set an environment variable before each kind of app instance. This means that you need to use
from os import environ
environ["APP_SETTINGS"] = "config.py"
before every import app call. The best approach to this problem is to use flask-script as Sean suggests and to have a python manage.py request where request could range from
run_unit_tests to run_server
and that manage script sets the environment variable (as well as building the database, setting up a profiler, or anything else you need). | 1 | 0 | 0 | I have a flask application that I would like to behave differently in production, unit testing, functional testing, and performance testing. Flask's single debug option doesn't cover what I want to do, so I was wondering if there is any way to pass parameters to flask's __init__.py.
I have several different scripts which build my app and create my data structures.
I know I can do this using environment variables, but I was hoping for a better solution. | Passing parameters to flask __init__.py | 1.2 | 0 | 0 | 236 |
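A minimal sketch of the hybrid approach described in that answer; APP_SETTINGS and config.py are the names the answer uses, while the factory wiring is illustrative:

```python
import os
from flask import Flask

# Each kind of app instance (tests, server, ...) sets this before importing.
os.environ.setdefault("APP_SETTINGS", "config.py")

def create_app():
    app = Flask(__name__)
    # Load whichever config file the environment variable points at.
    app.config.from_pyfile(os.environ["APP_SETTINGS"])
    return app
```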
17,893,360 | 2013-07-27T01:23:00.000 | 3 | 0 | 0 | 0 | python,django | 17,893,390 | 2 | true | 1 | 0 | Django only serves the responses that you explicitly create and return from your views. There is no general ability to request files from it.
Make sure your source code isn't in a directory that your web server is configured to serve from, and make sure your settings.py value for DEBUG is False, and you should be fine. Oh, and just in case - don't try to use the Django development server in production. | 2 | 0 | 0 | I am using Django to develop an API using an algorithm I wrote.
When someone requests a url, my urls.py calls a function in views.py which serves a page that returns a JSON string.
If my algorithm is in my views.py file, or in another file on my server, would it be possible for a user to view the contents of this file, and then see my algorithm?
In other words, when using Django, which files will never be served to a user, and which files will be?
Is there any way I can stop someone from viewing my algorithm if it's in a .py file? Other than Chmodding the file or encrypting the code?
Thank you for your time. | Django - Privatizing Code - Which files are served to a user? | 1.2 | 0 | 0 | 43 |
17,893,360 | 2013-07-27T01:23:00.000 | 1 | 0 | 0 | 0 | python,django | 17,893,392 | 2 | false | 1 | 0 | As long as nobody has shell access to your server, people will never see more than the actual HTML output of your page. .py files are not shown to the user that has requested an url in the browser. | 2 | 0 | 0 | I am using Django to develop an API using an algorithm I wrote.
When someone requests a url, my urls.py calls a function in views.py which serves a page that returns a JSON string.
If my algorithm is in my views.py file, or in another file on my server, would it be possible for a user to view the contents of this file, and then see my algorithm?
In other words, when using Django, which files will never be served to a user, and which files will be?
Is there any way I can stop someone from viewing my algorithm if it's in a .py file? Other than Chmodding the file or encrypting the code?
Thank you for your time. | Django - Privatizing Code - Which files are served to a user? | 0.099668 | 0 | 0 | 43 |
17,894,135 | 2013-07-27T04:07:00.000 | 2 | 0 | 1 | 1 | python | 17,894,151 | 2 | false | 0 | 0 | Run it with pythonw.exe instead of python.exe. | 2 | 0 | 0 | Is it possible to make a program in Python that, when run, does not actually open any window (including command prompt)?
For example, opening the program would appear to do nothing, but in reality, the program is running in the background somewhere.
Thanks! | How to run a Python program without a window? | 0.197375 | 0 | 0 | 89 |
17,894,135 | 2013-07-27T04:07:00.000 | 3 | 0 | 1 | 1 | python | 17,894,155 | 2 | true | 0 | 0 | Are you running the python program by double clicking *.py file in Windows?
Then, rename the *.py file to *.pyw. | 2 | 0 | 0 | Is it possible to make a program in Python that, when run, does not actually open any window (including command prompt)?
For example, opening the program would appear to do nothing, but in reality, the program is running in the background somewhere.
Thanks! | How to run a Python program without a window? | 1.2 | 0 | 0 | 89 |
17,897,988 | 2013-07-27T12:50:00.000 | 1 | 0 | 0 | 0 | python | 17,898,513 | 5 | true | 0 | 1 | You have two lists; I'll name them currentState and newChanges. Here is the workflow:
Iterate over currentState, figuring out which are newly born cells, and which ones are going to die. Do NOT add these changes to your currentState. If there is a cell to be born or a death, add it to the newChanges list. When you are finished with this step, currentState should look exactly the same as it did at the beginning.
Once you have finished all calculations in step 1 for every cell, then iterate over newChanges. For each pair in newChanges, change it in currentState from dead to alive or vice versa.
Example:
currentState has {0,0} {0,1} {0,2}. (Three dots in a line)
newChanges is calculated to be {0,0} {-1,1} {1,1} {0,2} (The two end dots die, and the spot above and below the middle are born)
currentState receives the changes and becomes {-1,1} {0,1} {1,1}, and newChanges is cleared. | 2 | 5 | 0 | Now I have read the other stackoverflow Game of Life questions and also Googled voraciously. I know what to do for my Python implementation of the Game Of Life. I want to keep track of the active cells in the grid. The problem is I'm stuck on how I should code it.
Here's what I thought up but I was kinda at my wit's end beyond that:
Maintain an ActiveCell list consisting of cell co-ordinate tuples which are active,
dead or alive.
When computing the next generation, just iterate over the ActiveCell list, compute the cell
state and check whether the state changes or not.
If the state changes, add all of the present cell's neighbours to the list
If not, remove that cell from the list
Now the problem is : (" . "--> other cell)
B C D
. A .
. . .
If A satisfies 3) then it adds B,C,D
then if B also returns true for 3), which means it will add A, C again
(Duplication)
I considered using OrderedSet or something to take care of the order and avoid duplication. But I still hit these issues. I just need a direction. | Game Of Life : How to keep track of active cells | 1.2 | 0 | 0 | 1,235 |
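A minimal sketch of the two-phase scheme from the accepted answer, using sets so that the neighbour duplication the question worries about disappears (repeated insertions into a set are no-ops):

```python
from collections import Counter

def step(alive):
    """alive: set of (x, y) tuples for live cells; returns the next generation."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# The answer's example: three cells in a line flip orientation.
print(step({(0, 0), (0, 1), (0, 2)}))  # {(-1, 1), (0, 1), (1, 1)}
```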
17,897,988 | 2013-07-27T12:50:00.000 | 0 | 0 | 0 | 0 | python | 17,898,267 | 5 | false | 0 | 1 | Did you consider using an ordered dictionary and just setting the values to None? | 2 | 5 | 0 | Now I have read the other stackoverflow Game of Life questions and also Googled voraciously. I know what to do for my Python implementation of the Game Of Life. I want to keep track of the active cells in the grid. The problem is I'm stuck on how I should code it.
Here's what I thought up but I was kinda at my wit's end beyond that:
Maintain an ActiveCell list consisting of cell co-ordinate tuples which are active,
dead or alive.
When computing the next generation, just iterate over the ActiveCell list, compute the cell
state and check whether the state changes or not.
If the state changes, add all of the present cell's neighbours to the list
If not, remove that cell from the list
Now the problem is : (" . "--> other cell)
B C D
. A .
. . .
If A satisfies 3) then it adds B,C,D
then if B also returns true for 3), which means it will add A, C again
(Duplication)
I considered using OrderedSet or something to take care of the order and avoid duplication. But I still hit these issues. I just need a direction. | Game Of Life : How to keep track of active cells | 0 | 0 | 0 | 1,235 |
17,898,306 | 2013-07-27T13:24:00.000 | 0 | 0 | 1 | 0 | python,subprocess | 17,898,532 | 3 | false | 1 | 0 | There are a couple of tricks - they basically rely on discipline on the development team's part:
Publish an API and stick to it: all V2 functions of the same name must accept the same parameters and return compatible results when so called.
Have a general interface, say fred, that just wraps your V1 functions
Make use of named and defaulted parameters rather than positional parameters; then if your version 2 code supports some additional parameters but the defaults result in the version 1 behaviour, you will be golden.
Use Namespaces
If you define your functions in the fn(*args, **kwargs) format, you can have a wrapper in your general interface that works out if the V1 or V2 functions are intended in a given call, as sketched below.
You can have an optional parameter ifversion=1 in all your functions that might be ambiguous and use it when you specifically need the version 2 stuff.
There is another really simple method, which is for the V2 front end to connect to a different port or use a specific flag in the HTTP request. | 2 | 0 | 0 | I'm working on the back end of a web-based system. My code will receive calls from our web site and perform the actions requested by the user. We would like to support multiple versions of our front end simultaneously. So, for example, I might receive a request from V1 of our front end or from V2 of it. I need to respond to either of these calls.
As you might expect, a lot of my code will be the same across versions. For example, my function *get_list_access_params()* will probably appear in both V1 and V2 (although there may be some changes to the code in it). My listener should grab the request, figure out which version of our system the call came from and, then, call the right version of *get_list_access_params()*.
My hope is to not have to duplicate and rename the function as v1_get... and v2_get... but, rather, to duplicate the function in two code files, a v1 file and a v2 file.
This must be a common need but I can't figure out where to look for the answer. Does anyone have a quick answer or can you direct me to a simple place to find it (I am a Python novice, BTW)? Thank you! | How to structure Python code to support multiple releases of MY project (i.e. not multiple versions of Python) | 0 | 0 | 0 | 133 |
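A hypothetical sketch of the fn(*args, **kwargs) wrapper trick mentioned in that answer; the module and function names are made up for illustration:

```python
import v1_handlers  # hypothetical module holding the V1 implementations
import v2_handlers  # hypothetical module holding the V2 implementations

_VERSIONS = {1: v1_handlers, 2: v2_handlers}

def get_list_access_params(*args, **kwargs):
    """Single public entry point; dispatches on a 'version' keyword."""
    version = kwargs.pop("version", 1)  # default to the V1 behaviour
    return _VERSIONS[version].get_list_access_params(*args, **kwargs)
```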
17,898,306 | 2013-07-27T13:24:00.000 | 0 | 0 | 1 | 0 | python,subprocess | 17,898,413 | 3 | false | 1 | 0 | Your subprocess offers an API to the web sites.
The trick is to make it so that API v2 of the subprocess code can handle calls from both v1 and v2 web sites. This is called backward compatibility.
Also, it's nice if the v1 web site is not too picky about the data it receives and can, for instance, handle a v2 answer from the subprocess that has more information than it used to have in v1. This is called forward compatibility. JSON and XML are good ways to achieve it, since you can add properties and attributes at will without harming the parsing of the old properties.
So the solution, I think, does not lie in a python trick, but in careful design of the API of your subprocess such that the API will not break as the subprocess's functionality increases. | 2 | 0 | 0 | I'm working on the back end of a web-based system. My code will receive calls from our web site and perform the actions requested by the user. We would like to support multiple versions of our front end simultaneously. So, for example, I might receive a request from V1 of our front end or from V2 of it. I need to respond to either of these calls.
As you might expect, a lot of my code will be the same across versions. For example, my function *get_list_access_params()* will probably appear in both V1 and V2 (although there may be some changes to the code in it). My listener should grab the request, figure out which version of our system the call came from and, then, call the right version of *get_list_access_params()*.
My hope is to not have to duplicate and rename the function as v1_get... and v2_get... but, rather, to duplicate the function in two code files, a v1 file and a v2 file.
This must be a common need but I can't figure out where to look for the answer. Does anyone have a quick answer or can you direct me to a simple place to find it (I am a Python novice, BTW)? Thank you! | How to structure Python code to support multiple releases of MY project (i.e. not multiple versions of Python) | 0 | 0 | 0 | 133 |
17,900,112 | 2013-07-27T16:38:00.000 | 0 | 0 | 0 | 0 | python,excel,csv,tree,hierarchy | 17,900,531 | 2 | false | 0 | 0 | If a spreadsheet is a must in this solution, hierarchy can be represented by indents on the Excel side (empty cells at the beginnings of rows), one row per node/leaf. On the Python side, one can parse them into a tree structure (of course, one needs to filter out empty rows and some other exceptions). Node type can be specified in its own column. For example, it could even be the first non-empty cell.
I guess the hierarchy depth is limited (say, max 8 levels); otherwise Excel is not a good idea at all.
Also, there is a library called openpyxl, which can help read Excel files directly, without the user needing to convert them to CSV (it adds usability to the overall approach).
Another approach is to put a level number in the first cell. The number should never be incremented by 2 or more.
Yet another approach is to use some IDs for each node, and each leaf would need to specify its parent's id. But this is not very user-friendly. | 1 | 14 | 1 | I have a non-technical client who has some hierarchical product data that I'll be loading into a tree structure with Python. The tree has a variable number of levels, and a variable number of nodes and leaf nodes at each level.
The client already knows the hierarchy of products and would like to put everything into an Excel spreadsheet for me to parse.
What format can we use that allows the client to easily input and maintain data, and that I can easily parse into a tree with Python's CSV? Going with a column for each level isn't without its hiccups (especially if we introduce multiple node types) | Represent a tree hierarchy using an Excel spreadsheet to be easily parsed by Python CSV reader? | 0 | 0 | 0 | 13,410 |
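A rough sketch of the indent-based layout suggested in that answer: the number of leading empty cells in a CSV row gives the depth (the node structure, file handling and the at-most-one-level-deeper assumption are illustrative):

```python
import csv

def read_tree(path):
    """Parse rows whose leading empty cells encode depth into nested dicts."""
    root = {"name": "<root>", "children": []}
    stack = [root]  # stack[d] is the most recent node seen at depth d
    with open(path) as f:
        for row in csv.reader(f):
            cells = [c for c in row if c.strip()]
            if not cells:
                continue  # skip blank rows, as the answer suggests
            depth = row.index(cells[0]) + 1  # leading empty cells + 1
            node = {"name": cells[0], "children": []}
            del stack[depth:]                 # climb back up to the parent
            stack[-1]["children"].append(node)
            stack.append(node)
    return root
```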
17,900,140 | 2013-07-27T16:41:00.000 | 3 | 0 | 1 | 0 | python,setuptools,distutils,distribute | 17,900,490 | 1 | false | 0 | 0 | You won't know whether future versions will break your app; no one can foretell the future. Future problems can be solved by then pinning versions on installation, or you can issue a new release of your project with a fix or <= requirement specification.
Use >= when a minimum version is required; e.g. when you know that you rely on a specific feature of that library that was introduced as of a specific version, or because older versions use a different API.
You generally want to avoid using == in install_requires; leave version pinning up to the installer, and you need to retain flexibility. If that specific version turns out to have a major security flaw, you need to update your setup.py and release a new version just to allow anyone who installed your package to benefit. | 1 | 2 | 0 | How should I know whether future versions of dependencies will break my app? Is >= preferred over ==, so that developers don't need to install so many old package versions? | In setup.py:install_requires, when should I use == vs. >=? | 0.53705 | 0 | 0 | 462 |
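A small illustration of that guidance (the package names and version ranges are examples):

```python
from setuptools import setup

setup(
    name="myproject",           # example project metadata
    version="1.0",
    install_requires=[
        "requests>=1.0",        # relies on an API introduced in 1.0
        "somepkg>=2.1,<3",      # known-good range, still allows bugfixes
        # avoid "otherpkg==1.4" here; pin versions at install time instead
    ],
)
```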
17,901,514 | 2013-07-27T19:15:00.000 | 1 | 0 | 1 | 0 | python,loops,for-loop,syntax,while-loop | 17,901,607 | 4 | false | 0 | 0 | while, print, for etc. are keywords. That means they are recognized by Python's tokenizer while reading the code and turned into tokens, with any redundant characters stripped; the parser then takes those tokens as input and builds a program tree, which is executed by the interpreter. That said, those constructs are only syntactic sugar over the underlying lexical machinery and as such are not visible from inside the code. | 1 | 2 | 0 | What I mean is, how is the syntax defined, i.e. how can I make my own constructs like these?
I realise in a lot of languages, things like this will be built into the compiler / spec, and so it's dealt with by the compiler (at least that's how I understand it to work).
But with Python, everything I've come across so far has been accessible to the programmer, and so you more or less have the freedom to do whatever you want.
How would I go about writing my own version of for or while? Is it even possible?
I don't have any actual application for this, so the answer to any WHY?! questions is just "because why not?" or "curiosity". | How do the for / while / print *things* work in python? | 0.049958 | 0 | 0 | 140 |
17,902,229 | 2013-07-27T20:38:00.000 | 2 | 0 | 0 | 0 | python,pyramid,middleware,tween | 17,915,346 | 2 | true | 1 | 0 | Everything is better as WSGI middleware unless you need framework-specific details. Especially if you're smart and use the webob decorators to turn the complex WSGI protocol into simple request/response objects. For example when integrating with permissions I'm not even sure a tween makes sense. From within your groupfinder you can just connect to your entitlement system. For logging there are a lot of examples of both WSGI (paste's translogger) and tween (pyramid_exclog, pyramid_debugtoolbar) loggers that you can pull ideas from. | 1 | 3 | 0 | I would like to get a clear understanding of what would be the most pythonic and cleaner way to implement:
a custom logger.
a piece of code which connects via REST to a third-party entitlement system to be combined with the internal Pyramid ACLs and permission system.
Should I rather write a WSGI middleware which gets the app as parameter or a pure Pyramid Tween for either one or both my requirements?
Also, which of wsgi middleware or tween is the most compliant with apache + mod_wsgi?
Thanks | Pyramid: Tween or WSGI middleware for custom logger and external entitlement system? | 1.2 | 0 | 0 | 792 |
17,903,025 | 2013-07-27T22:20:00.000 | 1 | 0 | 0 | 1 | google-app-engine,python-2.7 | 17,959,762 | 1 | true | 1 | 0 | Official response from Chris Ramsdale, Product Manager, Google App Engine:
while there's currently no defined date for decommissioning this API, we are committed to supporting it throughout the remainder of the year (2013). please don't hesitate to reach out to me directly [redacted], if you have further questions (this thread is fine as well). | 1 | 5 | 0 | This is a question for the App Engine team.
Last week we realized that the App Engine team had marked the file-like API for writing and reading to the blobstore as being deprecated and likely to be removed in the future. We have quite a bit of infrastructure relying on that API that now we need to port to the alternative they suggest (Google Cloud Storage) and this is not a trivial effort (especially considering our current backlog). So the question is: how soon will this file-like API be unavailable? It's fairly important for us to know as depending on the answer, we might shuffle our backlog to prioritize the porting of using the Blobstore to GCS.
Thanks. | GAE Blobstore file-like API deprecation timeline (py 2.7 runtime) | 1.2 | 0 | 0 | 677 |
17,904,600 | 2013-07-28T03:06:00.000 | 0 | 0 | 0 | 0 | python-2.7,pandas,easy-install | 61,605,458 | 3 | false | 0 | 0 | Pandas does not work with Python 2.7; you will need Python 3.6 or higher. | 1 | 3 | 1 | I tried installing pandas using easy_install and it claimed that it had successfully installed the pandas package in my Python directory.
I switch to IDLE and try import pandas and it throws me the following error -
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import pandas
File "C:\Python27\lib\site-packages\pandas-0.12.0-py2.7-win32.egg\pandas\__init__.py", line 6, in <module>
from . import hashtable, tslib, lib
File "numpy.pxd", line 157, in init pandas.hashtable (pandas\hashtable.c:20282)
ValueError: numpy.dtype has the wrong size, try recompiling
Please help me diagnose the error.
FYI: I have already installed the numpy package | Pandas import error | 0 | 0 | 0 | 26,604 |
17,906,746 | 2013-07-28T09:16:00.000 | 1 | 0 | 1 | 0 | python,types,unicode-string | 17,907,177 | 1 | true | 0 | 0 | It depends on what exactly is needed, but usually just one Unicode string is enough. If you need to take non-tiny slices, you can keep them as 3-tuples (big unicode, start pos, end pos) or just make custom objects with these 3 attributes and whatever API is needed. The point is that a lot of methods like unicode.find() or the regex pattern objects' search() support specifying start and end points. So you can do most basic things without actually needing to slice the single big unicode string. | 1 | 2 | 0 | After an initial search on this, I'm a bit lost.
I want to use a buffer object to hold a sequence of Unicode code points. I just need to scan and extract tokens from said sequence, so basically this is a read-only buffer, and we need functionality to advance a pointer within the buffer, and to extract sub-segments. The buffer object should of course support the usual regex and search ops on strings.
An ordinary Unicode string can be used for this, but the issue would be the creation of sub-string copies to simulate advancing a pointer within the buffer. This seems very inefficient, especially for larger buffers, unless there's some workaround.
I can see that there's a Memoryview object that would be suitable, but it does not support Unicode (?).
What else can I use to provide the above functionality? (Whether in Py2 or Py3). | How to implement a Unicode buffer in python | 1.2 | 0 | 0 | 553 |
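A rough sketch of the 3-attribute object the accepted answer describes: one shared unicode string plus start/end offsets, with search methods that pass the offsets through instead of slicing (the class name is made up):

```python
class TokenBuffer(object):
    def __init__(self, text, start=0, end=None):
        self.text = text
        self.start = start
        self.end = len(text) if end is None else end

    def find(self, sub):
        # str.find accepts start/end, so no copy of text is made.
        return self.text.find(sub, self.start, self.end)

    def match(self, pattern):
        # Compiled re patterns accept pos/endpos for the same reason.
        return pattern.match(self.text, self.start, self.end)

    def advance(self, n):
        self.start += n  # move the "pointer" without copying the string
```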
17,909,688 | 2013-07-28T15:15:00.000 | 0 | 1 | 0 | 1 | php,python,google-app-engine,runtime | 17,911,235 | 3 | false | 1 | 0 | Quite simply, no. You'll have to use separate modules, or pick one language and use it for both of the things you describe. | 1 | 0 | 0 | I heard that this was possible using the new modules feature for Google App Engine, but this will require two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and web application.
Is there any way to achieve this? | Run both php and python at the same time on google app engine | 0.066568 | 0 | 0 | 431 |
17,909,688 | 2013-07-28T15:15:00.000 | 0 | 1 | 0 | 1 | php,python,google-app-engine,runtime | 17,911,325 | 3 | false | 1 | 0 | Segregate your applications in different modules and communicate between the two using the GAE Data Store or Memcache.
Your applications can signal each other using a GET request with the name of the Memcache key or the url-safe data store key. | 1 | 0 | 0 | I heard that this was possible using the new modules feature for Google App Engine, but this will require two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and web application.
Is there any way to achieve this? | Run both php and python at the same time on google app engine | 0 | 0 | 0 | 431 |
17,909,688 | 2013-07-28T15:15:00.000 | 0 | 1 | 0 | 1 | php,python,google-app-engine,runtime | 17,914,860 | 3 | false | 1 | 0 | You can achieve the proxy pattern by simply making HTTP requests from one module to the other, using the URLFetch service. | 1 | 0 | 0 | I heard that this was possible using the new modules feature for Google App Engine, but this will require two different modules, which is basically like two different apps. I would like to be able to run my Python and PHP in the same application. I'm getting some results via Python and I want to parse them using PHP to get an API that is able to communicate with my other web applications online. It will be like a proxy between my Python scripts and web application.
Is there any way to achieve this? | Run both php and python at the same time on google app engine | 0 | 0 | 0 | 431 |
17,911,091 | 2013-07-28T17:54:00.000 | 23 | 0 | 1 | 0 | python,list,variables,append | 56,122,883 | 10 | false | 0 | 0 | You can use list unpacking:
a = 5
li = [1,2,3]
li = [a, *li]
=> [5, 1, 2, 3] | 1 | 680 | 0 | I have an integer and a list. I would like to make a new list of them beginning with the variable and ending with the list.
Writing a + list I get errors. The compiler handles a as an integer, thus I cannot use append or extend either.
How would you do this? | Append integer to beginning of list in Python | 1 | 0 | 0 | 1,096,126 |
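Two common alternatives to unpacking, which matter on Python 2 where [a, *li] is a syntax error (illustrative):

```python
a = 5
li = [1, 2, 3]

new_li = [a] + li   # builds a new list: [5, 1, 2, 3]
li.insert(0, a)     # or: mutate the existing list in place
```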
17,912,720 | 2013-07-28T20:46:00.000 | 0 | 0 | 0 | 1 | google-app-engine,ipython,ipython-notebook | 17,931,783 | 1 | true | 1 | 0 | iPython Notebook has profile directories in ~/.ipython, which have a startup directory for Python scripts that can be used to do the customization of sys.path and login credentials as remote_api_shell.py does. | 1 | 0 | 0 | I'd like to run modified remote_api_shell.py in an iPython notebook web interface, so that non-technical users with a basic grasp of Python could have read-only access to our production database.
Has anyone set something like this up, and what's the best way of going about it? | Running GAE remote_api_shell.py in a iPython notebook web interface | 1.2 | 0 | 0 | 259 |
17,913,982 | 2013-07-28T23:20:00.000 | 0 | 0 | 1 | 0 | python,debugging,exception,pdb | 17,915,977 | 2 | false | 0 | 0 | Can you search the code in the failing script for the text of the message that is logged (I realize that this may be difficult if the string is generated in a complex way)? If you can find the point where the message is generated/logged, then you can set an appropriate breakpoint to troubleshoot the problem.
Unfortunately, AFAIK Python pdb debugging does not offer the capability that is present in some other languages to, for example, break when an Exception is raised. | 1 | 2 | 0 | Is there a command in the Python debugger (pdb) that says something like "run until the next exception is raised?"
Seems an obvious requirement but can't seem to find it.
Update : To be clear, my problem is an exception which is being caught and turned into an inadequate message in a log file. And I can't find where the exception is raised.
I figured that if I could go into trace mode and say "run until an exception is thrown" that would be the most straightforward way of finding it. I don't think post-mortem will work here. | Python Debugging : Find where an exception is raised, go into debug mode? | 0 | 0 | 0 | 994 |
17,914,476 | 2013-07-29T00:35:00.000 | 0 | 0 | 0 | 0 | python-2.7,tkinter | 17,914,482 | 1 | false | 0 | 1 | Parse it as an int! Despite common belief, int parsing is always effective! | 1 | 0 | 0 | So I have a Tkinter program, which is a nice IDE, since none of the existing ones are up to my standards. Similar to most programs, it has a menu bar at the top. Since in HTML you use the li tag, what is the equivalent of the li tag in Python? | Li tag equivalent in python? | 0 | 0 | 0 | 32 |
17,917,066 | 2013-07-29T06:07:00.000 | 2 | 0 | 1 | 0 | python,while-loop,do-while | 17,917,329 | 1 | true | 0 | 0 | Use the control statement that best suits your needs in each situation.
Advice like "Don't use while, only use for" boils down to "If the only tool you know is a hammer, all problems look like a nail." | 1 | 0 | 0 | One of my teachers warned me not to use while in Python. It is really strange to me, as I have not found any articles on why one should do so. What do you think the grounds could be? | Are there any grounds not to use while in python | 1.2 | 0 | 0 | 68 |
17,917,794 | 2013-07-29T06:59:00.000 | 0 | 0 | 1 | 0 | json,python-2.7 | 17,929,549 | 1 | true | 0 | 0 | The basic thing to understand is that json and csv files are extremely different on a very fundamental level.
A csv file is just a series of values separated by commas; this is useful for defining data like those in relational databases where you have exactly the same fields repeated for a large number of objects.
A json file has structure to it, there is no straightforward way to represent any kind of tree structure in a csv. You can have various types of foreign key relationships, but when it comes right down to it, trees don't make any sense in a csv file.
My advice to you would be to reconsider using a csv or post your specific example because for the vast majority of cases, there is no sensible way to convert a json document into a csv. | 1 | 0 | 0 | I am trying to convert a number of .json files to .csv's using Python 2.7
Is there any general way to convert a json file to a csv?
PS: I saw various similar solutions on stackoverflow.com but they were very specific
to the json tree and don't work if the tree structure changes. I am new to this site and am sorry for my bad English and for reposting. Thank you. | json tree file,url to csv file using python | 0.197375 | 0 | 0 | 148 |
17,918,480 | 2013-07-29T07:41:00.000 | 0 | 0 | 1 | 1 | python,xcode,macos,pyobjc | 17,934,374 | 2 | false | 0 | 0 | Use py2app to create the application bundle, and do that using a separate install of Python (that is, don't use /System/Library/Frameworks/Python.framework). The Python install you use should be compiled with MACOSX_DEPLOYMENT_TARGET set to the minimum OSX release you want to support.
When you do this, it should be possible to deploy to older OSX releases. I regularly do this for building apps on a 10.8 machine that get deployed to a 10.5 machine.
You do need to take some care when including other libraries, especially when those include a configure script: sometimes the configure script detects functionality that is available on the build machine, but not on the deployment machine.
BTW. You need to link against the same version of Python as you use at runtime. CPython's ABI is not compatible between feature releases (that is, the 2.6 ABI is not necessarily compatible with the 2.7 ABI). For python 3.x there is a stable ABI that is compatible between feature releases, but AFAIK that's primarily targeting Python extensions and I don't know how useful that is for embedding Python in your application. | 1 | 0 | 0 | I am using Xcode to build a PyObjC application. The app runs fine on the build machine (running 10.8) but crashes on startup on a machine running 10.6, because it fails to find the Python 2.7 installation. Fair enough -- the preinstalled Python on 10.6 is Python 2.5. But I don't really care which Python version my app uses, I just want it to use the latest version of Python it can find.
How can I either:
A) Tell my app to use the latest version of Python available on the host system, OR
B) Bundle the entire Python source into my app?
I have been very frustrated by this issue and any help would be greatly appreciated! | Running PyObjC application (built in Xcode) on previous version of Mac OS? | 0 | 0 | 0 | 271 |
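A minimal py2app setup.py along the lines of that answer (assumes py2app is installed and main.py is your entry script):

```python
from setuptools import setup

setup(
    app=["main.py"],            # your application's entry-point script
    setup_requires=["py2app"],
)
# Build the bundle with: python setup.py py2app
```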
17,919,503 | 2013-07-29T08:42:00.000 | 0 | 0 | 0 | 0 | python,selenium | 18,606,843 | 2 | false | 0 | 0 | You can use the WebDriverWait function if you are sure that the element is in your document. You should define WebDriverWait at the beginning with from selenium.webdriver.support.ui import WebDriverWait and, if you didn't import them before, from selenium.webdriver.common.by import By and from selenium.webdriver.support import expected_conditions as EC, then use WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.ID, "blabla")))
That's all we need to do. I hope this will help you. | 1 | 1 | 0 | I'm trying to test with selenium webdriver. My version of selenium is 2.33 and the browser is Firefox. The scripting language is python.
Now when I call the method find_element_by_xpath(blabla) and the widget does not exist, the program just gets stuck there with no exception shown. It's just stuck. By the way, I have tried find_element_by_id, find_element_by_name, find_elements and changed Firefox to 3.5, 14.0, 21.0, 22.0. The problem always shows up.
Anybody ever got this problem?
I just want an exception instead of just getting stuck. Help... | Selenium Webdriver stuck when find_element method called with a non-existent widget | 0 | 0 | 1 | 1,799 |
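Putting that answer's pieces together into one runnable snippet, including the expected_conditions import the answer leaves implicit (the URL and element id are placeholders):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("http://example.com")

# Raises TimeoutException after 20 s instead of hanging forever.
element = WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.ID, "blabla"))
)
```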
17,920,065 | 2013-07-29T09:13:00.000 | 0 | 0 | 0 | 0 | python,selenium | 17,929,109 | 1 | false | 0 | 0 | Make the __del__ method return a boolean.
True on success, false otherwise.
If it returns false, close the driver in main. | 1 | 0 | 0 | I had a test script with Selenium RC. Now I don't know what could happen during the test but I want the browser to be shut down if the program is killed accidentally. BTW, the script is running in a sub-process.
Now I tried the __del__ method but it doesn't work, and I don't see any exceptions. So I have no idea where to put try --- except or with --- in.
If I run the script in main-process it works fine. Any help?
Version of Selenium is 2.33. Browser is Firefox 21.0 | How can I close the browser when program is killed accidentally in selenium? | 0 | 0 | 1 | 73 |
17,925,460 | 2013-07-29T13:32:00.000 | 0 | 0 | 1 | 0 | python,arrays,slice | 17,938,780 | 5 | false | 0 | 1 | For each range in your limits list, create an empty list (plus one for the overflow values) as a tuple with the max value and the min value for that list; the last one will have a max of None.
For each value in the values list, run through your tuples until you find the one where your value is > min and < max, or the max is None.
When you find the right list, append the value to it and go on to the next. | 1 | 0 | 1 | How do I divide a list into smaller, not evenly sized intervals, given the initial and final values of each interval?
I have a list of 16383 items. I also have a separate list of the values at which each interval should end and the next should begin.
I would need to use the given intervals to assign each element to the partition it belongs to, depending on its value.
I have tried reading stuff, but I encountered only the case when given the original list, people split it into evenly sized partitions...
Thanks
Blaise | Dividing an array into partitions NOT evenly sized, given the points where each partition should start or end, in python | 0 | 0 | 0 | 141 |
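A compact sketch of the same idea using the bisect module instead of hand-rolled tuples (the boundary values are the question's interval end points; all names are illustrative):

```python
import bisect

def partition(values, boundaries):
    """boundaries: sorted interval end points; returns len(boundaries)+1 bins,
    the last one holding the overflow values."""
    bins = [[] for _ in range(len(boundaries) + 1)]
    for v in values:
        bins[bisect.bisect_left(boundaries, v)].append(v)
    return bins

print(partition([1, 5, 9, 12], [4, 10]))  # [[1], [5, 9], [12]]
```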
17,931,476 | 2013-07-29T18:27:00.000 | 0 | 0 | 0 | 0 | python-3.x,sas-jmp,jsl | 34,890,487 | 3 | false | 1 | 0 | Make sure jmp.exe is available in your system environment so that if you type "jmp.exe" in the command line, it would launch jmp. Then have your *.jsl ready.
use a Python subprocess to run the command "jmp.exe *.jsl"; that would open JMP and run the *.jsl script, and then you can import whatever you generate from JMP back into Python. | 1 | 4 | 0 | I have a python script running. I want to call a *.jsl script from my running Python script and make use of its output in Python. May I know how I can do that? | How to call a *.jsl script from python script | 0 | 0 | 0 | 4,983 |
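A minimal sketch of that workflow (the script and output file names are made up; it assumes the JSL script writes its results somewhere Python can read them back):

```python
import subprocess

# Launch JMP on the script and wait for it to finish.
subprocess.call(["jmp.exe", "my_script.jsl"])

# Read back whatever the JSL script exported, e.g. a CSV it saved.
with open("jmp_output.csv") as f:
    results = f.read()
```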
17,931,579 | 2013-07-29T18:32:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,boto,s3cmd | 20,389,005 | 1 | false | 1 | 0 | Since your source path contains your destination path, you may actually be copying things more than once -- first into the destination path, and then again when that destination path matches your source prefix. This would also explain why copying to a different bucket is faster than within the same bucket.
If you're using s3s3mirror, use the -v option and you'll see exactly what's getting copied. Does it show the same key being copied multiple times? | 1 | 1 | 0 | I am trying to copy the entire /contentstore/ folder on a bucket to a timestamped version. Basically /contentstore/ would be copied to /contentstore/20130729/.
My entire script uses s3s3mirror first to clone my production S3 bucket to a backup. I then want to rename the backup to a timestamped copy so that I can keep multiple versions of the same.
I have a working version of this using s3cmd, but it seems to take an abnormally long time. The s3s3mirror part between the two buckets is done within minutes, possibly because it is a refresh of an existing folder. But even in the case of a clean s3s3mirror (no existing contentstore on backup) it takes around 20 minutes.
On the other hand, copying the contentstore to a timestamped copy on the backup bucket takes over an hour and 10 minutes.
Am I doing something incorrectly? Should the copy of data on the same bucket take longer than a full clone between two different buckets?
Any ideas would be appreciated.
P.S: The command I am running is s3cmd --recursive cp backupBucket/contentStore/ backupBucket/20130729/ | Copying files in the same Amazon S3 bucket | 0 | 1 | 0 | 995 |
17,931,729 | 2013-07-29T18:41:00.000 | 0 | 0 | 1 | 0 | python,algorithm | 17,931,924 | 1 | true | 0 | 0 | You can pre-process the existing data such that your data consists of non-overlapping intervals. That is if (a,b) and (c,d) intersect, you can merge them into a single interval (a,d) or (a,b) or (c,b) or (c,d) depending on the nature of the overlap.
More details on this step: sort the intervals according to their start point (end point as secondary key). The merging step can then be done in O(n) time.
Now, sort the merged intervals. For testing overlap, you can do two binary searches. | 1 | 0 | 0 | I have a list of a million pairs of integers (a,b). How can I prepare a data structure in python with the following property? When I see a new pair of integers I would like to be able to tell if it overlaps any existing pair in my list very quickly. Assuming b > a and d > c, I say that (a,b) and (c,d) overlap if (a <= c and b >= c) or (a <= d and b >= d) or both a and b are between c and d.
Can this be done somehow in log time? | Discarding isolated pairs of points | 1.2 | 0 | 0 | 91 |
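A sketch of the pre-process-then-binary-search idea from that answer; it assumes the million pairs have already been merged into the sorted, non-overlapping list shown:

```python
import bisect

merged = [(1, 3), (6, 9), (12, 20)]   # example of pre-merged, sorted data
starts = [a for a, b in merged]

def overlaps(c, d):
    """True if (c, d) intersects any interval in merged."""
    i = bisect.bisect_right(starts, d)  # intervals starting at or before d
    # Only the right-most such interval can still reach back to c.
    return i > 0 and merged[i - 1][1] >= c

print(overlaps(4, 5))   # False - falls in the gap between (1,3) and (6,9)
print(overlaps(4, 7))   # True  - touches (6, 9)
```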
17,932,150 | 2013-07-29T19:04:00.000 | 0 | 0 | 1 | 0 | python-3.x,zip | 18,062,676 | 1 | false | 0 | 0 | Yes, as danielu13 said, just unzip to a temp folder and copy. Also, you may want to include more specifics in your question (code samples, directory structures etc.).
ps @danielu13 why not post your comment as an answer? I'm new here so there could be a good reason. | 1 | 1 | 0 | The title says it all: I'm trying to transfer a list of files from one zip to another without the need to de-compress then re-compress the files.
Any suggestions? | How to transfer files from one zip to another without decompressing | 0 | 0 | 0 | 125 |
17,932,649 | 2013-07-29T19:33:00.000 | 0 | 0 | 0 | 0 | python,django-south,django-guardian | 27,130,661 | 1 | false | 1 | 0 | I used get_user_model() instead of orm['auth.User'] | 1 | 1 | 0 | I can't get django-guardian assign_perm to work in a south datamigration ... the reason it is not working seems to be because guardian is not using the frozen ORM. How can I get other apps in general and django-guardian specifically to use frozen models. | Using guardian's assign_perm in a south migration | 0 | 0 | 0 | 193 |
17,933,572 | 2013-07-29T20:27:00.000 | 0 | 0 | 1 | 0 | python,macos,ide,terminal,komodo | 17,934,503 | 2 | false | 0 | 0 | I am using the Komodo editor under Linux, so I hope I am not pointing you to a place that does not exist in the IDE, but I doubt that is the case.
So, have you tried adding the module in Komodo:
preferences --> languages --> python --> additional python import directories
This might work. | 1 | 0 | 0 | I hope it is just that I am doing something stupid, but for some reason Komodo won't see a module I have on my machine. I used pip to install paramiko and it installed fine. From a terminal, the builtin python and ipython will see and import the module fine. When I try to write a Python script and import paramiko, Komodo thinks it's not there at all. I am not sure what to do to fix this.
I did install some other items from pip and Komodo sees them right away, but for whatever reason it won't see paramiko. I will try this on Windows and Linux to make sure it's not just my setup. I hope someone can help me out on this. | Komodo IDE won't see my Python Modules | 0 | 0 | 0 | 1,234 |
17,934,427 | 2013-07-29T21:22:00.000 | 1 | 0 | 0 | 1 | python,websocket,tornado | 18,775,244 | 1 | false | 0 | 0 | Browsers may handle websocket client messages in a separate thread, which is not blocked by sleep.
Even if a thread of your custom application is not active because you forced it to sleep (like sleep(100)), the TCP connection is not closed in this case. The socket handle is still managed by the OS kernel, and the TCP server still sends messages until it overflows the TCP client's receive window. And even after this, an application on the server side can still submit new messages successfully, which are buffered at the TCP level on the server side until the TCP outgoing buffer overflows. When the outgoing buffer is full, the application should get an error code on the send request, like "no more space". I have not tried it myself, but it should behave like this.
Try to close the client (terminate the process) and you will see a totally different picture - the server will notice the disconnect.
Both cases, disconnect and overflow, are difficult to handle on the server side in highly reliable scenarios. The disconnect case can be converted to the overflow case (the websocket server can buffer messages up to some limit in user space while the client is being reconnected). However, there is no easy way to reliably handle overflow of the transmit buffer limit. I see only one solution - propagate the overflow error back to the originator of the event which raised the message that was discarded due to overflow.
What is the mechanism by which these messages are queued/buffered? Who is responsible? Why are they still delivered? Who is reconnecting the socket? My intuition is that even though websockets are not request/response like HTTP, they should still require ACK packets since they are built on TCP. Is this being done on purpose to make the protocol more robust to temporary drops in the mobile age? | WebSocket messages get queued when client disconnected | 0.197375 | 0 | 1 | 1,868 |
17,937,010 | 2013-07-30T01:39:00.000 | 1 | 1 | 0 | 0 | python,selenium-webdriver,browser-automation,pyjamas | 17,937,214 | 4 | false | 0 | 0 | I am not an expert in web scraping, but I have had some experience with both Mechanize and Selenium. I think in your case either Mechanize or Selenium will suit your needs well, but also spend some time looking into these Python libraries: Beautiful Soup, urllib and urllib2.
In my humble opinion, I would recommend Mechanize over Selenium in your case, because Selenium is not as lightweight compared to Mechanize. Selenium is used for emulating a real web browser, so you can actually perform 'click actions'.
There are some drawbacks to Mechanize. You will find Mechanize gives you a hard time when you try to click a button-type input. Also, Mechanize doesn't understand JavaScript, so many times I had to mimic what JavaScript was doing in my own Python code.
One last piece of advice: if you somehow decide to pick Selenium over Mechanize in the future, use a headless browser like PhantomJS rather than Chrome or Firefox to reduce Selenium's computation time. Hope this helps and good luck. | 1 | 1 | 0 | I'm trying to make a simple script in python that will scan a tweet for a link and then visit that link.
I'm having trouble determining which direction to go from here. From what I've researched, it seems that I can use Selenium or Mechanize, which can be used for browser automation. Would using these be considered web scraping?
Or
I can learn one of the Twitter APIs, the Requests library, and Pyjamas (which converts Python code to JavaScript) so I can make a simple script and load it into Google Chrome's/Firefox's extensions.
Which would be the better option to take? | Can anyone clarify some options for Python Web automation | 0.049958 | 0 | 1 | 1,466 |
17,937,331 | 2013-07-30T02:24:00.000 | 1 | 0 | 0 | 0 | python,linux,shell,xorg | 17,937,441 | 1 | true | 0 | 0 | There is no good way. The least bad way I am aware of is to inspect the contents of /tmp/.X11-unix; this will contain Unix domain sockets named X0 for :0, X1 for :1, and so on. If you want a TCP socket instead, attempt to connect to port 6000 and up until you get ECONNREFUSED. Beware that both of these approaches have inherent race conditions which are AFAIK unfixable. | 1 | 0 | 0 | How do I get the first unused X display in either Python (Don't think there's a way) or {,ba,z}sh? It could return <number>, :<number>, or $(hostname):<number>).
For example:
X session at :0, 1 returned.
X session at :0, VNC at :1, 2 returned.
X session at :0, QEMU at :2, 1 returned. | How do I get the first unused display in X? | 1.2 | 0 | 0 | 105 |
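A small Python sketch of the /tmp/.X11-unix approach from the accepted answer (subject to the race conditions it mentions):

```python
import os

def first_unused_display():
    used = set()
    # Each running display :N owns a socket named XN in this directory.
    for name in os.listdir("/tmp/.X11-unix"):
        if name.startswith("X") and name[1:].isdigit():
            used.add(int(name[1:]))
    n = 0
    while n in used:
        n += 1
    return ":%d" % n

print(first_unused_display())  # e.g. ":1" when only :0 is running
```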
17,938,841 | 2013-07-30T05:20:00.000 | 1 | 0 | 1 | 0 | python,audio,plot | 17,939,214 | 1 | false | 0 | 0 | FFT and, for this matter, any spectral analysis of discrete samples, will (almost) never give you a precise graph of spectra.
If you give the FFT the entire input, it will provide you with the highest resolution graph, but this graph will contain the entire input.
The fewer samples you provide, the lower the spectral resolution will be. It's a trade-off.
Nevertheless, finding the exact sample in which a certain frequency is introduced is quite meaningless.
You should feed the graphing a certain portion of the samples (a 'window'). It is trivial to calculate the playback time it represents. Finding the appropriate number of samples to use depends on your needs (transient vs spectral resolution).
I don't know what your knowledge of signal processing is, so I do not wish to get too technical at the moment, but the general method is quite trivial:
Find the appropriate number of samples that suits your needs.
Chart/analyze those windows in parallel to the playback or ahead of it.
Determine the time corresponding to the identified window. | 1 | 0 | 0 | I am looking for a way to accurately plot sound generated in real time, using Python.
Basically, I am generating a tone with the frequency varying following a noise function. When the frequency reaches certain thresholds, I need to output a visual cue (ie: a print statement)
I have been outputting the audio using pyaudio, which works fine. But I have yet to find a way to plot it, or monitor when it reaches certain levels.
EDIT: to clarify a little bit: let's say I generate 1 second of samples. The frequency reaches the desired level at 0.1 second and 0.7 second. How can I play this audio sample and print a statement precisely at the moment it reaches 0.1 second and 0.7 second? How can I fire some sort of visual cue, or any function call, precisely synced with the audio playing? | python play generated sound and accurate plot | 0.197375 | 0 | 0 | 599 |
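A rough sketch of the windowing method from steps 1-3 using numpy; the window size, sample rate and the notion of "dominant frequency" are placeholder choices:

```python
import numpy as np

RATE = 44100    # samples per second
WINDOW = 2048   # samples per analysis window (transient vs spectral trade-off)

def dominant_freqs(samples):
    """Yield (time_in_seconds, dominant_frequency) per window."""
    for start in range(0, len(samples) - WINDOW + 1, WINDOW):
        chunk = samples[start:start + WINDOW]
        spectrum = np.abs(np.fft.rfft(chunk))
        freq = np.fft.rfftfreq(WINDOW, 1.0 / RATE)[spectrum.argmax()]
        yield start / float(RATE), freq

# for t, f in dominant_freqs(samples):
#     if f > THRESHOLD: print("cue at %.2fs" % t)
```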
17,939,824 | 2013-07-30T06:33:00.000 | 0 | 1 | 0 | 0 | python,mysql,beautifulsoup,mysql-python | 17,940,205 | 2 | false | 1 | 0 | My suggestion is: instead of updating values row by row, try to use a bulk insert into a temporary table and then move the data into the actual table based on some timing key. If you have a key column, that will be good for reading the most recently added rows. | 1 | 1 | 0 | I have an html file on a network share which updates almost every minute with new rows in a table. At any point, the file contains close to 15000 rows. I want to create a MySQL table with all the data in the table, and then some more that I compute from the available data.
The said HTML table contains, say, rows from the last 3 days. I want to store all of them in my MySQL table, and update the table every hour or so (can this be done via a cron?)
For connecting to the DB, I'm using MySQLdb, which works fine. However, I'm not sure what the best practices are here. I can scrape the data using bs4 and connect to the table using MySQLdb. But how should I update the table? What logic should I use to scrape the page that uses the least resources?
I am not fetching any results, just scraping and writing.
Any pointers, please? | Update a MySQL table from an HTML table with thousands of rows | 0 | 1 | 0 | 509 |
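A sketch of the bulk-insert suggestion with MySQLdb (the table and column names are made up; the rows would come from the bs4 scrape):

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="mydb")
cur = conn.cursor()

rows = [("2013-07-29 10:00", "value1"), ("2013-07-29 10:01", "value2")]
cur.executemany("INSERT INTO staging (scraped_at, payload) VALUES (%s, %s)",
                rows)
conn.commit()
# A second statement (e.g. INSERT ... SELECT keyed on scraped_at) can then
# move only the new rows from staging into the real table.
```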
17,946,411 | 2013-07-30T11:57:00.000 | 0 | 0 | 0 | 0 | python,signals,subprocess,popen,sigint | 17,946,607 | 2 | false | 0 | 0 | If you are controlling them by using a PIPE to send keystrokes then sending chr(0x1b) should do the trick. | 2 | 2 | 0 | I'm currently designing a script that will, in the end, control a range of games with the ability to start and stop them all from the main script.
However, one of the games can only be stopped gracefully by pressing the 'ESC' key. How can I translate this into a signal or something similar?
*The games are started with gamename.Popen() and then usually stopping is done by sending SIGINT or SIGQUIT.
Ideas? | Sending 'ESC' or signal to subprocess | 0 | 0 | 0 | 1,331 |
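A sketch of the chr(0x1b) idea from that answer (the game binary name is a placeholder, and this only works if the game actually reads its stdin):

```python
import subprocess

ESC = b"\x1b"  # chr(0x1b); drop the b"" prefix on Python 2

proc = subprocess.Popen(["./gamename"], stdin=subprocess.PIPE)
# ... later, to ask the game to quit gracefully:
proc.stdin.write(ESC)
proc.stdin.flush()
```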
17,946,411 | 2013-07-30T11:57:00.000 | 1 | 0 | 0 | 0 | python,signals,subprocess,popen,sigint | 17,946,630 | 2 | true | 0 | 0 | However, one of the games can only be stopped gracefully by pressing the 'ESC' key. How can I translate this into a signal or something similar?
You can't. You're trying to cover for a design error in a child process which I suspect grabs its own input and doesn't use the stdin that would allow you to send an Esc. Let me know if this assumption is incorrect.
Have you tried using SIGTERM with this game? That's the more conventional "clean yourself up and exit" signal, and I can well imagine someone writing code to handle SIGTERM gracefully while leaving SIGINT and SIGQUIT to their defaults. | 2 | 2 | 0 | I'm currently designing a script that will, in the end, control a range of games with the ability to start and stop them all from the main script.
However, one of the games can only be stopped gracefully by pressing the 'ESC' key. How can I translate this into a signal or something similar?
*The games are started with gamename.Popen() and then usually stopping is done by sending SIGINT or SIGQUIT.
Ideas? | Sending 'ESC' or signal to subprocess | 1.2 | 0 | 0 | 1,331 |
17,950,492 | 2013-07-30T14:57:00.000 | 1 | 0 | 0 | 0 | python,optimization,scipy | 17,951,581 | 1 | false | 0 | 0 | It seems it is in fact impossible to pass a 2D list to scipy.optimize.fmin. However, flattening the input f was not that much of a problem, and while it makes the code slightly uglier, the optimisation now works.
Interestingly, I also coded the optimisation in Matlab, which does take 2D inputs to its fminsearch function. Both programs give the same output (y). | 1 | 0 | 0 | I currently have a function PushLogUtility(p,w,f) that I am looking to optimise w.r.t. f (a 2xk list) for fixed p (a 9xk list) and w (a 2xk list).
I am using the scipy.optimize.fmin function but am getting errors I believe because f is 2-dimensional. I had written a previous function LogUtility(p,q,f) passing a 1-dimensional input and it worked.
One option it seems is to write the p, w and f into 1-dimensional lists but this would be time-consuming and less readable. Is there any way to make fmin optimise a function with a 2D input? | Passing 2D argument into numpy.optimize.fmin error | 0 | 0 | 0 | 380 |
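A sketch of the flattening workaround described in the self-answer; PushLogUtility, p, w and k are the question's objects and are assumed to be defined, while ravel/reshape do the 2-D to 1-D conversion:

```python
import numpy as np
from scipy.optimize import fmin

f0 = np.zeros((2, k))  # initial guess for f

def objective(f_flat):
    # fmin passes a 1-D array, so rebuild the 2xk shape inside the wrapper.
    return PushLogUtility(p, w, f_flat.reshape(2, -1))

f_opt = fmin(objective, f0.ravel()).reshape(2, -1)
```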
17,951,872 | 2013-07-30T15:59:00.000 | 1 | 0 | 0 | 0 | python,flask,flask-login,flask-security | 28,427,169 | 1 | false | 1 | 0 | Mark Hildreth is correct.
flask-social allows you to log in via a form (username/password) or via social.
So you can use it in conjunction with flask-security, flask-login, or whatever password-based authentication you want. I have used flask-social in conjunction with flask-security and can confirm they work quite well together.
flask-social links each User object to zero or more additional social accounts, which are stored in a separate table/datastore. Thus, it does not replace the existing password infrastructure...it just augments the User model and adds additional social methods to also allow for the user to log in alternatively via social accounts. | 1 | 5 | 0 | Looking to implement social authentication in our application with LinkedIn, Google, Facebook. I'm currently using flask-security to help manage users/roles in our application. I'm looking for some guidance on best practices with Flask/Flask-Security and Social Authentication.
I've seen the flask-social plugin, but I'd like to have the option of local form-based login, too.
So far, I'm planning on writing a new login view implementation for flask-security that can determine whether I'm using a social site (via passing a query parameter when user clicks on "login with XYZ") for the login. After social authentication occurs, I was planning on running the regular flask-security login to set all the appropriate session tokens and user and roles so the @login_required decorator will continue to work.
I didn't really see any hooks for overriding the login view function in flask-security, so I'm planning on either 1) copying the existing implementation into my own app or 2) calling flask_security_views::login.
However, I'm wondering if there's some of this that's already been implemented somewhere, or a better start. It seems like I'm really going to be cutting up a lot of existing code.
Thanks | Implementing social login in Flask | 0.197375 | 0 | 0 | 2,924 |
17,952,612 | 2013-07-30T16:34:00.000 | 3 | 0 | 1 | 0 | python,list,min | 17,952,674 | 6 | false | 0 | 0 | len(list_) - list_[::-1].index(min(list_)) - 1
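A quick check against the sample list from the question:
list_ = [1, 2, 3, 4, 1, 2]
print(len(list_) - list_[::-1].index(min(list_)) - 1)  # prints 4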
Get the length of the list, subtract from that the index of the min of the list in the reversed list, and then subtract 1. | 1 | 8 | 0 | For example [1,2,3,4,1,2]
has min element 1, but it occurs for the last time at index 4. | Python: Finding the last index of min element? | 0.099668 | 0 | 0 | 2,568 |
17,953,124 | 2013-07-30T17:04:00.000 | 314 | 0 | 1 | 1 | python,cmd | 27,385,986 | 15 | false | 0 | 0 | Try "py" instead of "python" from command line:
C:\Users\Cpsa>py
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> | 7 | 124 | 0 | So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path but no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | 'python' is not recognized as an internal or external command | 1 | 0 | 0 | 662,372 |
17,953,124 | 2013-07-30T17:04:00.000 | 0 | 0 | 1 | 1 | python,cmd | 57,309,414 | 15 | false | 0 | 0 | If you uninstalled and then re-installed Python, and you're running 'python' from the CLI, make sure to open a new CMD window after the installation for 'python' to be recognized. 'py' will probably be recognized even in an old CLI window because it's not tied to any particular version. | 7 | 124 | 0 | So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path but no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | 'python' is not recognized as an internal or external command | 0 | 0 | 0 | 662,372 |
17,953,124 | 2013-07-30T17:04:00.000 | 0 | 0 | 1 | 1 | python,cmd | 58,568,261 | 15 | false | 0 | 0 | Option 1: Select the "Add Python to environment variables" option during installation.
Option 2: Go to C:\Users -> AppData (a hidden folder) -> Local\Programs\Python\Python38-32 (depends on the version installed) -> Scripts
Copy that path and add it to the PATH environment variable.
For me this path worked : C:\Users\Username\AppData\Local\Programs\Python\Python38-32\Scripts | 7 | 124 | 0 | So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path but no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | 'python' is not recognized as an internal or external command | 0 | 0 | 0 | 662,372 |
17,953,124 | 2013-07-30T17:04:00.000 | 0 | 0 | 1 | 1 | python,cmd | 45,619,149 | 15 | false | 0 | 0 | Another helpful but simple solution might be restarting your computer after doing the download if Python is in the PATH variable. This has been a mistake I usually make when downloading Python onto a new machine. | 7 | 124 | 0 | So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path but no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | 'python' is not recognized as an internal or external command | 0 | 0 | 0 | 662,372 |
17,953,124 | 2013-07-30T17:04:00.000 | 11 | 0 | 1 | 1 | python,cmd | 48,145,947 | 15 | false | 0 | 0 | Type py -V instead of python -V at the command prompt (note the capital V, which prints the version) | 7 | 124 | 0 | So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path but no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | 'python' is not recognized as an internal or external command | 1 | 0 | 0 | 662,372 |
17,953,124 | 2013-07-30T17:04:00.000 | 6 | 0 | 1 | 1 | python,cmd | 49,065,296 | 15 | false | 0 | 0 | i solved this by running CMD in administration mode, so try this. | 7 | 124 | 0 | So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path but no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | 'python' is not recognized as an internal or external command | 1 | 0 | 0 | 662,372 |
17,953,124 | 2013-07-30T17:04:00.000 | 12 | 0 | 1 | 1 | python,cmd | 53,706,895 | 15 | false | 0 | 0 | If you want to see the Python version, you should use py -V instead of python -V:
C:\Users\ghasan>py -V
Python 3.7.1
If you want to open Python's interactive environment, you should use py instead of python:
C:\Users\ghasan>py
Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Here you can run Python code, for example:
>>> print('Hello Python')
Hello Python | 7 | 124 | 0 | So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type python testloop.py I get the error:
'python' is not recognized as an internal or external command
I have tried setting the path but no avail.
Here is my path:
C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | 'python' is not recognized as an internal or external command | 1 | 0 | 0 | 662,372 |
17,953,552 | 2013-07-30T17:29:00.000 | 1 | 0 | 0 | 0 | python,sqlite,static-site | 18,099,967 | 3 | false | 1 | 0 | It looks like your needs have changed and you are heading in a direction where a static website is not sufficient any more.
Firstly, I would pick an appropriate Python framework for your needs. If a static website was sufficient until recently, Django can be perfect for you.
Next, I would suggest describing your DB schema with the ORM of the chosen framework. I see no point in querying your DB with raw SQL unless you have a specific reason to.
And finally, I would start using the static content of your website as templates, replacing the places where dynamic data is required. Django's internal template language can easily be used that way. If not, Jinja2 could also be a good fit.
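As an illustration of that direction (model, view and template names here are invented):
# models.py -- describe the existing sqlite3 schema as Django models
from django.db import models
class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
# views.py -- reuse the old static page as a template filled with dynamic data
from django.shortcuts import render
from .models import Article
def article_list(request):
    articles = Article.objects.all()  # ORM query instead of raw SQL
    return render(request, "articles.html", {"articles": articles})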
My advice is based on many assumptions, as your question is quite open-ended.
Anyway, I think this would be the best way to start the transition from static to dynamic. | 1 | 5 | 0 | Title question says it all. I was trying to figure out how I could go about integrating the database created by sqlite3 and communicate with it through Python from my website.
If any further information is required about the development environment, please let me know. | I have a static website built using HTML, CSS and Javascript. How do I integrate this with a SQLite3 database accessed with the Python API? | 0.066568 | 1 | 0 | 1,713 |
17,955,275 | 2013-07-30T19:04:00.000 | 0 | 0 | 1 | 0 | python,performance,mongodb,pymongo | 24,357,799 | 1 | false | 0 | 0 | pymongo is thread safe, so you can run multiple queries in parallel. (I assume that you can somehow partition your document space.)
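For instance, a sketch that partitions the documents across a few threads by a numeric field (database, collection and field names are made up):
import threading
from queue import Queue
from pymongo import MongoClient
results = Queue()
coll = MongoClient().mydb.docs  # hypothetical database/collection
def worker(shard, nshards):
    # each thread reads its own slice of the document space
    for doc in coll.find({'doc_id': {'$mod': [nshards, shard]}}):
        results.put(doc)
threads = [threading.Thread(target=worker, args=(i, 4)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()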
Feed the results to a local Queue if processing the result needs to happen in a single thread. | 1 | 1 | 0 | I'm currently running into an issue in integrating ElasticSearch and MongoDB. Essentially I need to convert a number of Mongo Documents into searchable documents matching my ElasticSearch query. That part is luckily trivial and taken care of. My problem though is that I need this to be fast. Faster than network time, I would really like to be able to index around 100 docs/second, which simply isn't possible with network calls to Mongo.
I was able to speed this up a lot by using ElasticSearch's bulk indexing, but that's only half of the problem. Is there any way to either bundle reads or cache a collection (a manageable part of a collection, as this collection is larger than I would like to keep in memory) to help speed this up? I was unable to really find any documentation about this, so if you can point me towards relevant documentation I consider that a perfectly acceptable answer.
I would prefer a solution that uses Pymongo, but I would be more than happy to use something that directly talks to MongoDB over requests or something similar. Any thoughts on how to alleviate this? | Bundling reads or caching collections with Pymongo | 0 | 1 | 0 | 195 |
17,957,619 | 2013-07-30T21:20:00.000 | 0 | 0 | 0 | 0 | python,ios,django,sockets,push | 17,957,913 | 1 | false | 1 | 0 | Once your app is no longer in the foreground, the only way to communicate with it at all is via push notification.
If the app is open, you could create some kind of listener socket, register it with your Django server, and have the server talk to the socket. Otherwise your best bet would be to just poll the server every minute or two. | 1 | 0 | 0 | Is it possible to push information from Django to an iOS application over a local intranet? Whenever there is a specific POST request to the Django-server, I would like to either push out some information to the devices, or just send a signal to the devices, asking them to pull from the API.
This problem would normally be solved using push notifications, but the fact is that all the devices including the server are only connected to a local network without internet connection.
I have been thinking of using some kind of a socket, but haven't been able to find something that suits this purpose, and writing my own would be a lot of work and probably not worth it.
Does anyone know of any frameworks that can help, or have another approach to the problem? | Push information from Django to iOS locally | 0 | 0 | 0 | 114 |
17,960,013 | 2013-07-31T01:14:00.000 | 1 | 0 | 0 | 0 | python,rgb,fits,pyfits | 17,984,159 | 1 | true | 0 | 0 | I don't think there is enough information for me to answer your question completely; for example, I don't know what call you are making to perform the "image" "save", but I can guess:
FITS does not store RGB data like you wish it to. FITS can store multi-band data as individual monochromatic data layers in a multi-extension data "cube". Software, including ds9 and aplpy, can read that FITS data cube and author RGB images in RGB formats (png, jpg...). The error you see comes from PIL, which has no backend to author FITS files (I think, but the validity of that point doesn't matter).
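Concretely, the two-step route suggested below would look something like this (file names are placeholders):
import aplpy
# Step 1: combine the three single-band FITS files into one 3-HDU FITS cube
aplpy.make_rgb_cube(['red.fits', 'green.fits', 'blue.fits'], 'rgb_cube.fits')
# Step 2: later, render an RGB image (png/jpeg) from the saved cube
aplpy.make_rgb_image('rgb_cube.fits', 'rgb_image.png')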
So I think that you should use aplpy.make_rgb_cube to save a 3-HDU FITS cube based on your 3 input FITS files, then import that FITS cube back into aplpy and use aplpy.make_rgb_image to output RGB-compatible formats. This way you have the saved FITS cube in near-native astronomy formats, and a means to create RGB formats from a variety of tools that can import that cube. | 1 | 0 | 0 | I am trying to make a three colour FITS image using the $aplpy.make_rgb_image$ function. I use three separate FITS images in RGB to do so and am able to save a colour image in png, jpeg.... formats, but I would prefer to save it as a FITS file.
When I try that I get the following error.
IOError: FITS save handler not installed
I've tried to find a solution on the web for a few days but was unable to get any good results.
Would anyone know how to get such a handler installed, or perhaps any other approach I could use to get this done? | Making a 3 Colour FITS file using aplpy | 1.2 | 0 | 0 | 1,054 |
17,960,261 | 2013-07-31T01:42:00.000 | 0 | 0 | 0 | 0 | python,django,deployment,django-cms,bluehost | 18,367,549 | 1 | true | 1 | 0 | As I wondered, the problem was that I was accessing the site using my temporary link from BlueHost, which the function throwing the error could not handle.
When my clients finally pointed their domain name at the server this problem and a few others (CSS inconsistencies in the Django admin, trouble with .htaccess) disappeared. Everything is up now and working fine. | 1 | 0 | 0 | I'm deploying my first Django app on a BlueHost shared server. It is a simple site powered by Django-CMS, and portions of it are working, however there are some deal-breaking quirks.
A main recurring one reads TypeError, a float is required. The exception location each time is .../python/lib/python2.7/site-packages/django/core/urlresolvers.py in _reverse_with_prefix, line 391. For example, I run into it when trying to load a page which includes {% cms_toolbar %} in the template, pressing "save and continue editing" when creating a page, or trying to delete a page through the admin interface.
I don't know if this is related, but nothing happens when I select a plugin from the "Available Plugins" drop-down while editing a page and press "Add Plugin".
Has anyone had any experience with this error, or have any ideas how to fix it? | Django-CMS Type Error "a float is required" | 1.2 | 0 | 0 | 397 |
17,960,882 | 2013-07-31T03:03:00.000 | 2 | 0 | 1 | 1 | python,file | 17,960,919 | 1 | true | 0 | 0 | That is completely dependent on the application in question. Some applications do support a mechanism for specifying a document to open via COM or DDE, some may allow you to invoke a second copy with the file as an argument which will tell the first to open that file, and some may have no provision for this at all. You will need to check the documentation of the application in question to see which, if any, it supports. | 1 | 0 | 0 | Is it possible to use Python to open a file from an existing running application? For example, I have a notepad application open. If I run os.startfile(newnotepad.txt) it opened up a new notepad application. I would like it to open in the existing one. | Using python to open a file from an existing running application? | 1.2 | 0 | 0 | 74 |
17,961,363 | 2013-07-31T04:00:00.000 | 0 | 0 | 1 | 0 | python,pyqt,qtreewidget | 17,962,727 | 1 | true | 0 | 1 | Every QWidget has a contextMenuPolicy property which defines what to do when a context menu is requested. The simplest way to do what you need is like this:
Create QAction objects that call methods you want.
Add these actions to your tree widgets using widget.addAction()
Call widget.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
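A minimal PyQt4 sketch of those three steps, giving each tree its own action set (the labels and handler are placeholders):
import sys
from PyQt4 import QtGui, QtCore
def on_action():
    pass  # replace with a real handler
def add_menu(tree, labels):
    for label in labels:
        action = QtGui.QAction(label, tree)
        action.triggered.connect(on_action)
        tree.addAction(action)
    tree.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
app = QtGui.QApplication(sys.argv)
tree1, tree2 = QtGui.QTreeWidget(), QtGui.QTreeWidget()
add_menu(tree1, ['Add node', 'Rename'])  # one set of options
add_menu(tree2, ['Expand all', 'Delete'])  # a different set for the other tree
tree1.show(); tree2.show()
sys.exit(app.exec_())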
That's it. Context menu for the widget will contain actions you added. | 1 | 1 | 0 | i have 2 treewidgets placed in a frame in mainwindow. how can i have 2 different set of context menu options for the 2 treewidgets? i need individual set of right click options for the treewidgets.Thanks in advance.. | PyQT treewidgets with different context menu options | 1.2 | 0 | 0 | 1,433 |
17,961,391 | 2013-07-31T04:04:00.000 | 1 | 0 | 0 | 0 | python,opencv | 17,971,361 | 2 | true | 0 | 0 | You'll have to install all the libraries you want to use together with OpenCV for Python 2.7. This is not much of a problem, you can do it with pip in one line, or choose one of the many pre-built scientific Python packages. | 1 | 0 | 1 | I have Python 3.3 and 2.7 installed on my computer
For Python 3.3, I installed many libraries like numpy, scipy, etc
Since I also want to use opencv, which only supports python 2.7 so far, I installed opencv under Python 2.7.
Hey, here comes the problem, what if I want to import numpy as well as cv in the same script? | OpenCV Python 3.3 | 1.2 | 0 | 0 | 1,449 |
17,966,015 | 2013-07-31T09:03:00.000 | 0 | 0 | 0 | 0 | python,google-app-engine,webapp2 | 17,979,219 | 1 | true | 1 | 0 | Use the static handler:
You don't need to start up an instance to serve your file. This generally means it'll be served quicker, and you save on CPU hours.
You don't have to worry about edge caching.
The cons might be that the files are static, and it might require more manual intervention with your framework. | 1 | 0 | 0 | In a project I'm developing I'm using several Python projects as dependencies. These projects each come with static files (JavaScript, images, etc.) and a set of handlers (with default URLs). To register the URLs for the handlers I add them to the routes in the WSGI application. The static files however need to be registered in the app.yaml. This is something I would like to avoid so it becomes a breeze to register both handler URLs and static files.
I thought about implementing a request handler that takes a file location and serves it with HTTP cache (like I think the default static handlers do).
I've discussed the idea with a colleague and he thought this was a bad idea. He told me that when registering the static files in the app.yaml the files are served in a more optimized way (possibly without Python).
Before I go and implement a static handler I'd like to hear what would be the pros/cons of both methods and if the static handler idea is a good idea.
In current projects we let Buildout generate the app.yaml from a template. The static files are added there. The (obvious) downside is that this process is error prone (if done automatically) or redundant (if done manually). | Serve static files through a custom handler or register in app.yaml? | 1.2 | 0 | 0 | 105 |
17,968,422 | 2013-07-31T10:54:00.000 | 1 | 0 | 0 | 0 | python,django,linux,ubuntu | 17,968,753 | 3 | false | 1 | 0 | The system path is a system environment variable that contains a list of folders where the OS will search for applications, scripts, etc.
On Windows, django-admin.py is in C:\Python\Scripts, so if you have set the PATH environment variable and added all the required Python folders to it, like C:\Python;C:\Python\Lib;C:\Python\Scripts;C:\Python\Lib\site-packages, the OS will automatically find django-admin.py when you type the command django-admin.py startproject myproj on the command line.
It's the same on Linux: django-admin.py is at /usr/bin/django-admin.py if you installed Django for the default Python installation.
So one option is to create an alias for that script so that you can run it from wherever you want.
I am using CentOS, and what I did was edit /etc/bashrc and add
alias djangoadmin='/usr/bin/django-admin.py' and it works for me very well. | 2 | 1 | 0 | I am trying a python, django tutorial. It says type django-admin.py however I get 'command not found' with this.
Someone told me that the problem could be that django is not in your system path, what does that mean?
I am using ubuntu. | Putting something on your system path | 0.066568 | 0 | 0 | 77 |
17,968,422 | 2013-07-31T10:54:00.000 | 0 | 0 | 0 | 0 | python,django,linux,ubuntu | 17,972,309 | 3 | false | 1 | 0 | If you installed Django from the Ubuntu repositories via apt-get or synaptic, the script will be simply django-admin (without the .py). | 2 | 1 | 0 | I am trying a python, django tutorial. It says type django-admin.py however I get 'command not found' with this.
Someone told me that the problem could be that django is not in your system path, what does that mean?
I am using ubuntu. | Putting something on your system path | 0 | 0 | 0 | 77 |
17,974,995 | 2013-07-31T15:48:00.000 | 3 | 0 | 1 | 0 | python,security,encryption,passwords | 17,976,478 | 2 | true | 0 | 0 | It seems that if you give someone the program and it needs to use the API key, there is no way to avoid giving out the API key. The best you can hope for is to obscure it enough that someone will think it is easier to get the API key elsewhere. Supposing that the API key is so difficult to get elsewhere that someone persists in attempting to decode it from your program, they will eventually get it.
Consider that the end user will be able to snoop on communications with the server, even going man-in-the-middle on an SSL connection, where you are almost certainly sending the key in plain text anyway.
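If you still want to keep the key out of a casual strings dump (the nuisance crypto suggested next), the stdlib covers it:
import codecs
obscured = codecs.encode('my-secret-api-key', 'rot13')  # store this string
api_key = codecs.decode(obscured, 'rot13')  # recover it at runtime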
Apply some nuisance crypto, like rot13, and forget about it. | 1 | 3 | 0 | I'm currently writing a program sending data to a server using a private apikey.
I don't want to keep the key in plaintext, but i need it to contact the server.
What kind of reversible encryption could work for this ? | How to safely store sensitive data? | 1.2 | 0 | 0 | 1,314 |
17,979,028 | 2013-07-31T19:17:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,generator | 17,979,067 | 1 | true | 0 | 0 | StopIteration is the proper exception to raise to stop an iteration entirely. However, max_depth shouldn't stop the traversal, the traversal should simply not recursively descend into child nodes when it's already at max_depth depth. | 1 | 0 | 0 | What exception should a Python generator function raise when it ends prematurely?
Context: searches of trees represented as classes with __iter__ defined allowing code like for i in BreadthFirstSearch(mytree).
These searches have a max_depth value after which the it should stop returning values.
What exception should be raised when this occurs, or should this be done some other way? | Python generator function premature end exception | 1.2 | 0 | 0 | 160 |
17,980,525 | 2013-07-31T20:38:00.000 | 1 | 0 | 1 | 0 | python,parallel-processing | 17,980,957 | 2 | false | 0 | 0 | If the C calls are slow (like downloading, database requests, or other I/O), you can just use threading.Thread.
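A sketch of that I/O-bound case, with a few threads draining a queue of URLs:
import threading
from queue import Queue  # Queue on Python 2
from urllib.request import urlopen  # urllib2.urlopen on Python 2
def process(data):
    print(len(data))  # stand-in for the real content handler
urls = Queue()
for u in ['http://example.com/a', 'http://example.com/b']:
    urls.put(u)
def worker():
    while True:
        process(urlopen(urls.get()).read())  # blocking I/O releases the GIL
        urls.task_done()
for _ in range(4):
    t = threading.Thread(target=worker)
    t.daemon = True  # let the program exit once the queue drains
    t.start()
urls.join()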
If Python code is slow (like frameworks, your own logic, or non-accelerated parsers), you need to use a multiprocessing Pool or Process. That also speeds up Python code, but it is harder to keep safe and needs a deep understanding of how it works in complex code (locks, semaphores). | 1 | 1 | 0 | I have an architecture which is basically a queue with url addresses and some classes to process the content of those url addresses. At the moment the code works good, but it is slow to sequentially pull a url out of the queue, send it to the correspondent class, download the url content and finally process it.
It would be faster and make proper use of resources if for example it could read n urls out of the queue and then shoot n processes or threads to handle the downloading and processing.
I would appreciate if you could help me with these:
What packages could be used to solve this problem ?
What other approach can you think of ? | Parallel data processing in Python | 0.099668 | 0 | 0 | 392 |
17,982,992 | 2013-07-31T23:55:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine | 18,027,540 | 1 | true | 1 | 0 | It's an sqlite file, can just read it with sqlite3 module. | 1 | 0 | 0 | I want to read the logs.db file with an external script, but I don't know what format it's in (it's binary) to know what to read it with. It's a massive rabbit hole to try to figure out the log_service module, and I'm hoping I can shortcut that and just open it into readable text some other way.
Any ideas? | Reading the app engine logs.db file? | 1.2 | 0 | 0 | 67 |
17,984,890 | 2013-08-01T03:42:00.000 | 0 | 1 | 0 | 0 | python,coding-style,project-management,jira,issue-tracking | 18,004,681 | 2 | false | 1 | 0 | Every time you revisit code, make a list of the information you are not finding. Then the next time you create code, make sure that information is present. It can be in comments, Wiki, bugs or even text notes in a separate file. Make the notes useful for other people, so private notebooks aren't a good idea except for personal notes. | 1 | 3 | 0 | I have been in this problem for long time and i want to know how its done in real / big companies project.
Suppose I have a project to build a website. Now I divide the project into subtasks and do them.
But suppose I have task1 in hand, like exporting the page to PDF. I spend 3 days on that, come across various problems and many Stack Overflow questions, and in the end I solve it.
Now, 4 months later, someone tells me that there is some error in the code.
By then I have largely (60%) forgotten how I did it and why I did it this way. I document the code, but I can't write the whole story in the code.
Then I have to spend a lot of time on the code to figure out what the problem was and why I added this line, etc.
I want to know whether there is any way to log the steps taken in completing the project.
So that I can see how I ended up with the code, what errors I got, what questions I asked on SO, etc.
How do people do it in real projects? Which software should I use?
I know that in our project management software, JIRA, we have tasks, but that does not cover what steps I took to solve those tasks.
What is the best way, so that when I look back at my 2-year-old project, I know how I solved a particular task? | What is the best way to track / record the current programming project u work on | 0 | 0 | 0 | 278 |
17,987,732 | 2013-08-01T07:30:00.000 | 0 | 0 | 0 | 0 | python,sqlite,pysqlite | 17,988,741 | 1 | true | 0 | 0 | This behaviour is version dependent.
If you want a guaranteed reordering, you have to copy all records into a new table yourself.
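A sketch of that copy (database and table names are placeholders):
import sqlite3
conn = sqlite3.connect('app.db')
with conn:
    # copying assigns fresh, consecutive ROWIDs in insertion order
    conn.execute("CREATE TABLE items_new AS SELECT * FROM items")
    conn.execute("DROP TABLE items")
    conn.execute("ALTER TABLE items_new RENAME TO items")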
(This works with both implicit and explicit ROWIDs.) | 1 | 1 | 0 | I'm working on an app that employs the python sqlite3 module. My database makes use of the implicit ROWID column provided by sqlite3. I expected that the ROWIDs be reordered after I delete some rows and vacuum the database. Because in the sqlite3 official document:
The VACUUM command may change the ROWIDs of entries in any tables that
do not have an explicit INTEGER PRIMARY KEY.
My pysqlite version is 2.6.0 and the sqlite version is 3.5.9. Can anybody tell me why it is not working? Anything I should take care when using vacuum?
P.S. I have a standalone sqlite installed whose version is 3.3.6. I tested the vacuum statement in it, and the ROWIDs got updated. So could the culprit be the version? Or could it be a bug of pysqlite?
Thanks in advance for any ideas or suggestions! | Why are not ROWIDs updated after VACUUM when using python sqlite3 module? | 1.2 | 1 | 0 | 134 |
17,988,237 | 2013-08-01T07:57:00.000 | 0 | 0 | 0 | 0 | python,sms,nlp,normalization | 17,988,306 | 1 | false | 0 | 0 | Just set up a dictionary of short cuts that you wish to translate, split your message, replace any matches in the dictionary and join. | 1 | 0 | 0 | I am trying to do sms normalization with the help of python
suppose the request is "btw m gng home"
It should translate it into "by the way I am going home"
Can somebody tell me some logic for this which I can apply. Can I use NLP for the same purpose.If yes then how? | can somebody tell me the method for sms normalization with python | 0 | 0 | 0 | 101 |
17,988,283 | 2013-08-01T07:59:00.000 | 1 | 0 | 0 | 0 | python | 17,988,454 | 3 | false | 0 | 0 | Put the python code to be called into a namespace/module that is visible to python through sys.path and import the methods/classes in your secondary .py files. That way you can directly access the code and execute it the way you require.
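For example (module names invented, each assumed to expose a main() function):
# runner.py -- drives the three scripts packaged as importable modules
import task_a
import task_b
import task_c
task_a.main()
task_b.main()
task_c.main()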
Like other answers already suggest you could execute the code in your secondary files directly but I would personally always prefer to package it up and import it - it's much cleaner and easier to maintain as you can do more selective changes to the code in your secondary files without affecting any part that imports already existing parts of it. | 2 | 2 | 0 | I need another python script that will execute these 3 scripts. | How to execute three .py files from another python file? | 0.066568 | 0 | 0 | 386 |
17,988,283 | 2013-08-01T07:59:00.000 | 1 | 0 | 0 | 0 | python | 17,988,349 | 3 | false | 0 | 0 | import - will execute code which you import (once)
os.system("python scriptname.py")
subprocess
popen | 2 | 2 | 0 | I need another python script that will execute these 3 scripts. | How to execute three .py files from another python file? | 0.066568 | 0 | 0 | 386 |
17,988,389 | 2013-08-01T08:06:00.000 | 0 | 0 | 1 | 0 | python,module | 17,988,797 | 2 | false | 0 | 0 | As far as I am aware, there is no single python module that can generically manipulate both Microsoft and OpenOffice document formats.
That said, both Microsoft Office and OpenOffice (can) use XML to store their documents. For Office 2003 XML is optional, but from Office 2007 onwards it is the default.
So you can follow two approaches:
quick-and-dirty
Using an XML toolkit and XPath, select (XML) text nodes in the document. Run your replacement routine on each text node.
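For instance, a sketch with lxml, assuming the relevant XML part (e.g. word/document.xml from a .docx) has been extracted:
from lxml import etree
tree = etree.parse('document.xml')
for node in tree.xpath('//text()'):  # every XML text node
    if '$$TITLE$$' in node:
        parent = node.getparent()
        if node.is_text:
            parent.text = parent.text.replace('$$TITLE$$', 'My Title')
        else:
            parent.tail = parent.tail.replace('$$TITLE$$', 'My Title')
tree.write('document.xml')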
neat-but-slow
Study the XML format of each document type. Using an XML toolkit and XPath, select the nodes that will contain (document) text. Run your replacement routine on each of the text nodes.
I would start with quick-and-dirty and see how far it gets you. Then if you see that nodes are changed that you did not want to be changed, you can add add-hoc measures to prevent that based on studying the XML formats. | 1 | 0 | 0 | Can anyone reccomend python module for manipulating documents. I need module which can replace any vars in text( i.e. $$TITLE$$) without format lossing. Module need for Microsoft Word 2003/2007, OpenDocuments. | python module for document templating mange | 0 | 0 | 0 | 47 |
17,990,496 | 2013-08-01T09:42:00.000 | 0 | 0 | 0 | 1 | python,eclipse,pydev,subclipse | 17,991,013 | 1 | false | 0 | 0 | You must not "check out" the directory, you have to "export" it. You can export anything from svn into any directory.
Also you could simply copy the other directory (from another check out) and delete the hidden .svn directory below it. If the directory contains subdirectories, every .svn directory must be deleted in the subdirectories as well. | 1 | 0 | 0 | Is it possible to ckeckout directory from SVN repo to existing project? My motivation: I using PyDev and have a directory with python package and I want to check it out. But problem is subpackages don't see root Python package and I can't add to PYTHONPATH directory which is outside the project.
What I need is to create a directory with a project and checkout directory with my python package into project directory. But I can't do it with Sublclipse, because it checkout python package directly to the project directory. | Subclipse - ckeckout directory to existing project | 0 | 0 | 0 | 142 |
17,991,190 | 2013-08-01T10:14:00.000 | 0 | 0 | 0 | 0 | python,django,django-admin-tools | 18,301,020 | 2 | false | 1 | 0 | After upgrading django admin tools I faced the same problem and ended up dropping tables admin_tools_dashboard_preferences and admin_tools_menu_bookmark and recreating them using python manage.py syncdb. Obviously, it will erase all custom parameters you may set before so make sure you made a backup. | 1 | 2 | 0 | I upgraded Django admin_tools to the latest version 0.5 . And I'm using Django 1.3
Now I am getting this error when I go to admin pages:
OperationalError: (1054, "Unknown column 'admin_tools_dashboard_preferences.dashboard_id' in 'field list'")
There are no instructions mentioned in the documentation for fixing this. What ALTER TABLE should I fire without letting go of the old data?
PS: I do not use South. | Django admin tools new version model changes | 0 | 0 | 0 | 500 |
17,995,963 | 2013-08-01T13:51:00.000 | 0 | 0 | 0 | 0 | python,django,django-south | 17,996,086 | 4 | false | 0 | 0 | are you using south?
If you are, there is a migration history table that South maintains.
Make sure to delete the row mentioning the migration you want to run again. | 2 | 0 | 0 | I am stuck with this issue: I had some migration problems and I tried many times and on the way, I deleted migrations and tried again and even deleted one table in db. there is no data in db, so I don't have to fear. But now if I try syncdb it is not creating the table I deleted manually.
Honestly, I get really stuck every time with this kind of migration issue.
What should I do to create the tables again? | syncdb is not creating tables again? | 0 | 1 | 0 | 824 |
17,995,963 | 2013-08-01T13:51:00.000 | 0 | 0 | 0 | 0 | python,django,django-south | 29,407,625 | 4 | false | 0 | 0 | Try renaming the migration file and running python manage.py syncdb. | 2 | 0 | 0 | I am stuck with this issue: I had some migration problems and I tried many times and on the way, I deleted migrations and tried again and even deleted one table in db. there is no data in db, so I don't have to fear. But now if I try syncdb it is not creating the table I deleted manually.
Honestly, I get really stuck every time with this such kind of migration issues.
What should I do to create the tables again? | syncdb is not creating tables again? | 0 | 1 | 0 | 824 |
17,998,464 | 2013-08-01T15:35:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing,master | 18,034,766 | 2 | false | 0 | 0 | Here's one way to implement your workflow:
Have two multiprocessing.Queue objects: tasks_queue and results_queue. The tasks_queue will hold device outputs, and results_queue will hold the results of the assertions.
Have a pool of workers, where each worker pulls device output from tasks_queue, parses it, asserts, and puts the result of the assertion on the results_queue.
Have another process continuously polling the device and putting device output on the tasks_queue.
Have one last process continuously polling results_queue, and ending the overall program when the desired number of results (successful assertions) is reached.
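A condensed sketch of that layout (the device read and the parse/assert logic are stand-ins):
import multiprocessing as mp
def read_device():
    return 'OK'  # stand-in for the real device poll
def parse_and_assert(output):
    return output == 'OK'  # stand-in for parse + assert
def worker(tasks, results):
    for output in iter(tasks.get, None):  # None is a shutdown sentinel
        results.put(parse_and_assert(output))
def poller(tasks):
    while True:
        tasks.put(read_device())
if __name__ == '__main__':
    tasks, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    p = mp.Process(target=poller, args=(tasks,))
    p.daemon = True  # dies with the main process
    p.start()
    successes = sum(results.get() for _ in range(100))  # wait for 100 results
    p.terminate()
    for w in workers:
        tasks.put(None)
    for w in workers:
        w.join()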
Total number of processes (multiprocessing.Process objects) is 2 + k, where k is the number of workers in the pool. | 1 | 1 | 0 | I'm fairly familiar with the python multiprocessing module, but I'm unsure of how to implement this setup. My project has this basic flow:
request serial device -> gather response -> parse response -> assert response -> repeat
It is right now a sequential operation that loops over this until it has gather the desired number of asserted responses. I was hoping to speed this task up by having a 'master process' do the first two operations, and then pass off the parsing and assertion task into a queue of worker processes. However, this is only beneficial if the master process is ALWAYS running. I'm guaranteed to be working on a multi-core machine.
Is there any way to have a process in the multiprocessing module always have focus / make run so I can achieve this? | Python Multiprocessing Management | 0.099668 | 0 | 0 | 743 |
18,001,984 | 2013-08-01T18:39:00.000 | 3 | 0 | 0 | 0 | python,workflow,plone | 18,020,552 | 2 | false | 1 | 0 | The recommended way is to set a guard instead.
The guard expression should be able to look up a view to facilitate more complex guard code, but when a guard returns False the transition isn't even listed as available. | 1 | 4 | 0 | I am working on a complex validation in a dexterity content type which should check the dependencies across several fields at the workflow transition time - I want it to work in the SimplePublicationWorkflow being triggered when the content is sent from "private" to "pending".
I've registered an event listener for IBeforeEvent and hooked it up - but nothing done there short of raising an exception can stop the transition from happening. (and if you raise an exception there, it goes uncaught and the user sees an error page instead of a custom message).
So, what is the recommended way to validate a transition in modern Plone? I've came across documentation suggesting adding External methods to be called on the Guard expression of the transition - but I would not like to use external methods, and if possible, I'd like to keep the default workflow. Creating a custom one is an option provided a clean way to do the check. | How to abort a workflow transition in Plone | 0.291313 | 0 | 0 | 298 |
18,002,824 | 2013-08-01T19:25:00.000 | 0 | 1 | 0 | 1 | php,python,call,command-line-interface | 18,003,558 | 1 | false | 0 | 0 | Seems to have been caused by too much coverage data exhausting PHP_CodeCoverage_Report_HTML. No idea why the php script's output was suppressed, making me believe the script never started running.
After asking for more memory using ini_set('memory_limit', '2048M'); in the start of the php script, the success rate went up dramatically (5/6 successful builds so far).
I guess I'll need to play around with memory management in php/zend to properly handle this. | 1 | 0 | 0 | The short version
I want to, in python, subprocess.call(['php', '/path/somescript.php']), the first line of the php script is basically "echo 'Here!';". But the subprocess.call returns an error code of -11, and the php script does not get to execute its first line and echo anything to the output. This is all happening on an Ubuntu Server 12.04.2 and a Ubuntu Desktop 12.04.2.
Can anybody point me in the direction of what the -11 return code might mean? (Is it coming from python, the system, or the php command?
A couple of times, I've seen it run deep into the php script and then fail by printing "zend_mm_heap corrupted" and returning 1.
The more descriptive version of the question:
I have a python script that, after running some phpunit tests using subprocess.call(['phpunit', ...]), wants to run another php script to collect the code coverage data gathered while running the tests, by doing subprocess.call(['php', '/path/coverage_collector.php']).
For months, the script worked fine, but today, after adding a couple more files & tests, it started failing (not a 100% of the time, about 5-10% of times it works).
When it fails, subprocess.call returns -11, and the first line of coverage_collector.php has not managed to echo its message to stdout. A couple of times it ran deeper into the php script, and failed with error code 1 and printed "zend_mm_heap corrupted".
I have a directory structure where each folder may contain subfolders, each folder gets its unit tests executed, and then coverage data is collected for that folder + its subfolders.
The script works fine on all the folders and their subfolders (executing all the tests & collecting all of the coverage), and used to work fine on the root level folder too (and is currently working fine for a lot of smaller projects with the same exact structure and scripts) - until today, after it started failing after an innocent enough code checkin, that added some files and tests to one of the php projects using the script.
The weird thing is that it's failing in this weird spot - while trying to call a php command, without even getting to execute the first line of the php script, and this happens just seconds after the same php script has been executed for a number of other folders and worked fine.
I'm suspecting it might be due to the fact that the root level script simply has more data to process - combining its own coverage with that of all of the subfolders (which might explain the zend heap corruption, when that occurs), but that still does not explain why the majority of times the call fails with -11, and does not let the php script even start working on the collecting the coverage data.
Any ideas? | What does this php return code (-11) mean (subprocess.call'ed from python)? | 0 | 0 | 0 | 287 |
18,005,198 | 2013-08-01T21:46:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,virtualenv | 18,005,604 | 1 | false | 0 | 0 | venv doesn't create redistributable standalone applications, it creates virtual environments.
If you want a standalone application, you need to use some tool to build it—py2exe, PyInstaller, py2app, cx_Freeze, etc. These tools have all kinds of logic to do things like bundle up just enough of Python itself, its stdlib, your third-party modules, C libraries any of the above depend on, etc., and organize everything in such a way that it just works on any machine of a given platform. | 1 | 0 | 0 | I am trying to run my Python application using Python 3 virtual env but without Python 3 installed, my scripts don't run. Am I doing something wrong? | Python virtual env still requires Python to be installed in order to run scripts | 0 | 0 | 0 | 72 |
18,005,678 | 2013-08-01T22:24:00.000 | 1 | 0 | 1 | 0 | python,string,syntax | 18,005,699 | 2 | false | 0 | 0 | From the Python docs:
If we make the string literal a “raw” string, \n sequences are not converted to newlines, but the backslash at the end of the line, and the newline character in the source, are both included in the string as data.
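The practical consequence is that a raw string literal cannot end with an odd number of backslashes, because the final backslash still escapes the closing quote at the tokeniser level:
print(r'\n')  # prints \n -- two characters, no newline
# print(r'\')  # SyntaxError: the trailing backslash escapes the quote
print('\\')  # the way to get a single backslash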
You have the wrong idea about raw strings. | 1 | 2 | 0 | I was under the impression that in Python a raw string, written as r'this is a raw string' would omit any escape characters, and print EXACTLY what is between the quotes. My problem is that when I try print r'\' I get SyntaxError: EOL while scanning string literal. print r'\n' correctly prints \n, though. | Escape characters in raw Python string | 0.099668 | 0 | 0 | 5,469 |
18,006,014 | 2013-08-01T22:53:00.000 | 2 | 1 | 0 | 1 | python,nginx,uwsgi,bottle | 49,163,067 | 3 | false | 1 | 0 | I also suggest you look at running bottle via gevent.pywsgi server. It's awesome, super simple to setup, asynchronous, and very fast.
Plus bottle has an adapter built for it already, so even easier.
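For example, a minimal sketch using that adapter:
from bottle import Bottle, run
app = Bottle()
@app.route('/ping')
def ping():
    return 'pong'
# 'gevent' selects bottle's built-in gevent-based server adapter
run(app, server='gevent', host='0.0.0.0', port=8080)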
I love bottle, and this concept that it is not meant for large projects is ridiculous. It's one of the most efficient and well written frameworks, and can be easily molded without a lot of hand wringing. | 2 | 11 | 0 | I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).
We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.
I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.
I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:
Use Bottle only -- As http server + App
Use Bottle on top of uwsgi -- Use uwsgi as the HTTP server
Use Bottle with nginx/uwsgi
Questions:
If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
Would the uwsgi/bottle (or Flask) combination be considered production-ready?
Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one? | Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle | 0.132549 | 0 | 0 | 7,565 |
18,006,014 | 2013-08-01T22:53:00.000 | 15 | 1 | 0 | 1 | python,nginx,uwsgi,bottle | 18,006,120 | 3 | true | 1 | 0 | Flask vs Bottle comes down to a couple of things for me.
How simple is the app. If it is very simple, then bottle is my choice. If not, then I got with Flask. The fact that bottle is a single file makes it incredibly simple to deploy with by just including the file in our source. But the fact that bottle is a single file should be a pretty good indication that it does not implement the full wsgi spec and all of its edge cases.
What does the app do. If it is going to have to render anything other than Python->JSON then I go with Flask for its built in support of Jinja2. If I need to do authentication and/or authorization then Flask has some pretty good extensions already for handling those requirements. If I need to do caching, again, Flask-Cache exists and does a pretty good job with minimal setup. I am not entirely sure what is available for bottle extension-wise, so that may still be worth a look.
The problem with using bottle's built in server is that it will be single process / single thread which means you can only handle processing one request at a time.
To deal with that limitation you can do any of the following in no particular order.
Eventlet's wsgi wrapping the bottle.app (single threaded, non-blocking I/O, single process)
uwsgi or gunicorn (the latter being simpler), which is most often set up as single threaded, multi-process (workers)
nginx in front of uwsgi.
3 is most important if you have static assets you want to serve up as you can serve those with nginx directly.
2 is really easy to get going (esp. gunicorn) - though I use uwsgi most of the time because it has more configurability to handle some things that I want.
1 is really simple and performs well... plus there is no external configuration or command line flags to remember. | 2 | 11 | 0 | I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).
We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.
I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.
I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:
Use Bottle only -- As http server + App
Use Bottle on top of uwsgi -- Use uwsgi as the HTTP server
Use Bottle with nginx/uwsgi
Questions:
If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
Would the uwsgi/bottle (or Flask) combination be considered production-ready?
Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one? | Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle | 1.2 | 0 | 0 | 7,565 |
18,009,550 | 2013-08-02T05:45:00.000 | 0 | 0 | 1 | 0 | python,pythonpath | 18,010,177 | 3 | false | 0 | 0 | PYTHONPATH gets merged into sys.path, and any module can modify sys.path before importing other modules, so the order is not guaranteed to stay fixed. | 1 | 0 | 0 | Will the sources in PYTHONPATH always be searched in the very same order as they are listed? Or may the order of them change somewhere?
The specific case I'm wondering about is the view of PYTHONPATH before Python is started and if that differs to how Python actually uses it. | Will the first source in PYTHONPATH always be searched first? | 0 | 0 | 0 | 752 |
18,013,832 | 2013-08-02T09:48:00.000 | 1 | 0 | 1 | 0 | powershell,python-2.7,pip | 18,014,559 | 1 | true | 0 | 0 | Unlike the old command shell, PowerShell will not run programs or scripts in the current directory by default.
You should modify your path to include the specific directory or directories that you need to include.
I would advise against adding . to your path (which would make PowerShell behave more similarly to DOS) as this introduces a possible attack vector. | 1 | 0 | 0 | I just installed pip, distribute, nose and virtualenv in windows using powershell. For some reason powershell makes me add .\ before pip and nosetests? Does anyone know why?
I read the help in powershell and it talks about the correct path which I think I have. All are installed in Python27/Scripts... same path I'm using in Powershell.
Powershell gives me the "doesnt recognize as cmlet" etc then suggests I should use the .\ before... When I use this it works.
Looked on here and couldn't find the answer so forgive me if this has been asked previously. | Why is powershell telling me I need .\ before pip | 1.2 | 0 | 0 | 149 |
18,014,122 | 2013-08-02T10:00:00.000 | 3 | 1 | 0 | 1 | python,path,environment-variables,interpreter,uwsgi | 18,021,303 | 1 | true | 0 | 0 | uWSGI is not a python application (it only calls libpython functions) so the effective executable is the uwsgi binary. If you use virtualenvs you can assume the binary is in venv/bin/python | 1 | 3 | 0 | How can I get python interpreter path in uwsgi process (if I started it with -h parameter)? I tryed to use VIRTUAL_ENV and UWSGI_PYHOME environment variables, but they are empty, I do not know why. Also i tryed to use sys.executable, but it points to uwsgi process path. | How to get python interpreter path in uwsgi process | 1.2 | 0 | 0 | 1,200 |
18,017,150 | 2013-08-02T12:42:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,app-engine-ndb,graph-databases | 18,035,092 | 2 | false | 1 | 0 | There's two ways to implement one-to-many relationships in App Engine.
Inside entity A, store a list of keys to entities B1, B2, B3. In the old DB, you'd use a ListProperty of db.Key. In ndb you'd use a KeyProperty with repeated = True.
Inside entity B1, B2, B3, store a KeyProperty to entity A.
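In ndb terms, a sketch of the two layouts (the entity names echo the conference-track example from the question):
from google.appengine.ext import ndb
# Approach 1: the parent holds a repeated KeyProperty of its children
class Track(ndb.Model):
    paper_keys = ndb.KeyProperty(repeated=True)
# Approach 2: each child points back at its parent
class Paper(ndb.Model):
    track_key = ndb.KeyProperty()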
If you use 1:
When you have Entity A, you can fetch B1, B2, B3 by id. This can be potentially more consistent than the results of a query.
It could be slightly less expensive since you save 1 read operation over a query (assuming you don't count the cost of fetching entity A). Writing B instances is slightly cheaper since it's one less index to update.
You're limited in the number of B instances you can store by the maximum entity size and number of indexed properties on A. This makes sense for things like conference tracks since there's generally a limited number of tracks that doesn't go into the thousands.
If you need to sort the order of B1, B2, B3 arbitrarily, it's easier to store them in order in a list than to sort them using some sorted indexed property.
If you use 2:
You only need entity A's Key in order to query for B1, B2, B3. You don't actually need to fetch entity A to get the list.
You can have pretty much unlimited # of B entities. | 1 | 1 | 0 | I'm designing a g+ application for a big international brand. the entities I need to create are pretty much in form of a graph, hence a lot of many-to-many relations (arcs) connecting nodes that can be traversed in both directions. I'm reading all the readable docs online, but I haven't found anything so far specific to ndb design best practices and guidelines. unfortunately I am under nda, and cannot reveal details of the app, but it can match almost one to one the context of scientific conferences with proceedings, authors, papers and topics.
below the list of entities envisioned so far (with context shifted to match the topics mentioned):
organization (e.g. acm)
conference (e.g. acm multimedia)
conference issue (e.g. acm multimedia 13)
conference track (e.g. nosql, machine learning, computer vision, etc.)
author (e.g. myself)
paper (e.g. "designing graph like db for ndb")
as you can see, I can visit and traverse the graph through any direction (or facet, from a frontend point of view):
author with co-authors
author to conference tracks
conference tracks to papers
...
and so on, you fill the list.
I want to make it straight and solid because it will launch with a lot of p.r. and will need to scale consistently overtime, both in content and number of users. I would like to code it from scratch hence designing my own models, restful api to read/write this data, avoiding non-rel django and keeping the presentation layer to a minimum template mechanism. I need to check with the company where I work, but we might be able to release part of the code with a decent open source license (ideally, a restful service for ndb models).
if anyone could point me towards the right direction, that would be awesome.
thanks!
thomas
[edit: corrected typo related to many-to-many relations] | best practice for graph-like entities on appengine ndb | 0.099668 | 1 | 0 | 478 |
18,019,599 | 2013-08-02T14:35:00.000 | 1 | 0 | 0 | 0 | python,properties,cgkit | 18,019,711 | 1 | false | 0 | 1 | cgkit.wintab.Packet is a class. You need to look up x on an instance of the class, not the class itself. | 1 | 1 | 0 | I'm using the cgkit.wintab module in python to access x,y coordinates of my Wacom tablet. The command cgkit.wintab.Packet.x is supposed to give me the value of the x co-ordinate of the touch on the tablet. Instead I get the response <property object at (hex memory address)>.
How do I extract the value of the x co-ordinate from this object?
Thanks! | How to get the value of a Property Object in Python? | 0.197375 | 0 | 0 | 1,974 |
18,020,557 | 2013-08-02T15:20:00.000 | 2 | 0 | 0 | 0 | python,class,google-app-engine | 18,020,650 | 2 | false | 1 | 0 | Make a separate .py file called something like classes_and_variables.py, place all the classes you use in your code in that file, and import it at startup in both files. | 1 | 2 | 0 | I have a main.py which contains class definitions for objects that are fetched from db and displayed.
I also have a scrape.py that fetches these same sorts of objects from the web, and stores them to the db.
How do I avoid having to have class definitions for these objects in both main.py and scrape.py? | GAE, Python: How to define db object classes in only one place? | 0.197375 | 0 | 0 | 65 |
18,022,429 | 2013-08-02T17:04:00.000 | 0 | 0 | 1 | 0 | python,build,python-module,python-install | 18,048,958 | 1 | true | 0 | 0 | Reinstall them. It may seem like a no-brainer to reuse modules (in a lot of cases, you can), but in the case of modules that have compiled code - for long term systems administration this can be an utter nightmare.
Consider supporting multiple versions of Python for multiple versions / architectures of Linux. Some modules will reference libraries in /usr/local/lib, but those libraries can be the wrong arch or wrong version.
You're better off making a requirements.txt file and using pip to install them from source. | 1 | 1 | 0 | I normally use python 2.7.3 traditionally installed in /usr/local/bin, but I needed to rebuild python 2.6.6 (which I did without using virtualenv) in another directory ~/usr/local/ and rebuild numpy, scipy, all libraries I needed different versions from what I had for python 2.7.3 there...
But all the other packages that I want exactly as they were (meaning same version) in my default installation, I don't know how to just use them in the python 2.6.6 without having to download tarballs, build and installing them using --prefix=/home/myself/usr/local/bin.
Is there a fast or simpler way of "re-using" those packages in my "local" python 2.6.6? | Allowing python use modules from other python installation | 1.2 | 0 | 0 | 48 |
18,023,356 | 2013-08-02T18:02:00.000 | 1 | 0 | 1 | 0 | python,levenshtein-distance | 18,023,402 | 1 | false | 0 | 0 | Create a dictionary keyed by zipcode, with lists of company names as the values. Now you only have to match company names per zipcode, a much smaller search space. | 1 | 0 | 1 | I have a huge list of company names and a huge list of zipcodes associated with those names. (>100,000).
I have to output similar names (for example, AJAX INC and AJAX are the same company, I have chosen a threshold of 4 characters for edit distance), but only if their corresponding zipcodes match too.
The trouble is that I can put all these company names in a dictionary, and associate a list of zipcode and other characteristics with that dictionary key. However, then I have to match each pair, and with O(n^2), it takes forever. Is there a faster way to do it? | Disambiguation of Names using Edit Distance | 0.197375 | 0 | 0 | 164 |
18,025,411 | 2013-08-02T20:12:00.000 | 0 | 0 | 1 | 0 | python,dictionary,split,multiprocessing | 18,025,494 | 2 | false | 0 | 0 | You don't need to split the dictionary; all you need to do is split the keys into 20 groups and work on the same dictionary. I think that is simpler. | 1 | 2 | 0 | I am running a simulation with around 10 million unique DNA sequences stored in a dictionary. I need to process each sequence by going through it in overlapping groups of 5 letters (take the first 5 letters, shift the index by one, take another 5),
and passing each group to a separate function for processing. This takes quite a bit of time, as it currently goes through the sequences one by one in a for loop.
What I am looking for is a way to split the dictionary into approximately 20 chunks that I can process with multiprocessing. Is there an easier way than just going through each key and filling up 20 dictionaries iteratively? | Python: Process large dictionary in chunks using multiprocessing | 0 | 0 | 0 | 1,272
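A minimal sketch of the answer's suggestion, slicing the items into 20 groups and farming them out with a multiprocessing.Pool; the sequences and the per-window processing are placeholders:

```python
from multiprocessing import Pool

def windows(seq, size=5):
    # every overlapping 5-letter window of one sequence
    return [seq[i:i + size] for i in range(len(seq) - size + 1)]

def work(chunk):
    # chunk is a list of (name, sequence) pairs; stand-in for the real processing
    return {name: windows(seq) for name, seq in chunk}

if __name__ == '__main__':
    sequences = {'seq1': 'ACGTACGTAC', 'seq2': 'TTGACCATTG'}  # toy stand-in for the real dict
    items = list(sequences.items())
    chunks = [items[i::20] for i in range(20)]  # 20 roughly equal slices, no new dicts built
    pool = Pool(processes=20)
    results = pool.map(work, chunks)
    pool.close()
    pool.join()
```

Note that each worker receives a copy of its chunk, not shared access to the original dictionary.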
18,025,763 | 2013-08-02T20:40:00.000 | 2 | 0 | 1 | 1 | python,memory-management,garbage-collection | 18,025,982 | 2 | false | 0 | 0 | Usually in this kind of situation, refactoring is the only way out.
You mentioned you're storing a lot in memory, perhaps in a dict or a set, and then writing the output to a single file.
Maybe you can append output to the output file after processing each input, then do a quick clean-up before processing new input file. That way, RAM usage can be reduced.
Appending can even be done line by line from the input, so that almost nothing needs to be held in memory.
I don't know the specific algorithm you're using, but given that you mentioned no sharing between files is needed, this may help. Remember to flush the output too :P | 1 | 1 | 0 | I need to sequentially read large text files, store a lot of data in memory, and then use that data to write a large file. These read/write cycles are done one at a time, and there is no common data, so I don't need any of the memory to be shared between them.
I tried putting these procedures in a single script, hoping that the garbage collector would delete the old, no-longer-needed objects when the RAM got full. However, this was not the case. Even when I explicitly deleted the objects between cycles it would take far longer than running the procedures separately.
Specifically, the process would hang, using all available RAM but almost no CPU. It also hung when gc.collect() was called. So, I decided to split each read/write procedure into separate scripts and call them from a central script using execfile(). This didn't fix anything, sadly; the memory still piled up.
I've settled on the obvious solution, which is to simply call the subscripts from a shell script rather than using execfile(). However, I would like to know if there is a way to make this work. Any input? | Clearing memory between memory-intensive procedures in Python | 0.197375 | 0 | 0 | 538
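A minimal sketch of the streaming pattern the answer describes, with an illustrative per-line transform and made-up file names, so nothing accumulates between files:

```python
def transform(line):
    # stand-in for the real per-line processing
    return line.upper()

input_paths = ['input1.txt', 'input2.txt']  # hypothetical inputs

with open('output.txt', 'w') as out:
    for path in input_paths:
        with open(path) as src:
            for line in src:            # read one line at a time, never the whole file
                out.write(transform(line))
        out.flush()                      # per the answer: flush between inputs
```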
18,028,841 | 2013-08-03T02:58:00.000 | 1 | 0 | 0 | 1 | google-app-engine,python-2.7,codenvy | 23,685,697 | 2 | false | 1 | 0 | I discovered that if you go to the Google App Engine Developer Console, there is a menu on the left. Click on App Engine, then click on Logs. There you can see the internal server error (error 500) log, which pretty much tells you what went wrong. | 1 | 2 | 0 | I'm working with Codenvy writing a Google App Engine app, and I have found it to be INSANELY difficult to debug. If there's a syntax error I have to find it manually, as the web page that loads when testing gives me an error 500. Also, I often want to print, but Codenvy doesn't support printing for Python (that, or I don't understand the correct method). Has anyone else experienced this and been able to help? Perhaps developing in the cloud isn't as easy as I was hoping... | Python 2.7 - Codenvy - Debugging issues | 0.099668 | 0 | 0 | 427
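One hedged workaround for the missing print output (not Codenvy-specific): App Engine captures Python's standard logging module, so log lines show up under the same Logs page the answer points to. The function and value here are illustrative:

```python
import logging

def debug_checkpoint(value):
    # appears in the App Engine Logs view instead of stdout
    logging.info('checkpoint reached, value=%r', value)

debug_checkpoint({'status': 'testing'})
```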
18,031,073 | 2013-08-03T08:50:00.000 | 0 | 0 | 1 | 0 | python,parallel-processing,synchronization,memcached | 18,031,233 | 2 | false | 0 | 0 | Probably your simplest answer would be to put your data into a database and let the database handle arbitration.
Most databases provide a mechanism for record locking that can be used for exactly this sort of thing, since one essential feature of databases (other than storing data) is the ability for multiple users to read and write records concurrently. | 1 | 0 | 0 | I have multiple processes that should access the same data. The idea was to use memcache for this. But the problem is: if p1 reads data and right after that p2 does the same, then when p1 stores the altered data in mc and p2 does the same, p2 overrides the changes p1 made. If this were threads in the same process I would use a lock, but this could be done by multiple different processes, possibly using java, python, or php.
So it seems like memcache is not the right choice for this. I need something that will handle the locking and everything, but is otherwise dead simple key/value storage.
Is there some lib or system for this? How could this be done? | Python multiple workers altering the same data | 0 | 0 | 0 | 215
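A minimal sketch of the database-arbitration idea using only the standard library's sqlite3 (a stand-in; server databases offer SELECT ... FOR UPDATE row locks instead). The table and key names are made up:

```python
import sqlite3

conn = sqlite3.connect('shared.db', timeout=10, isolation_level=None)
conn.execute('CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)')

conn.execute('BEGIN IMMEDIATE')  # take the write lock up front; other processes block until COMMIT
row = conn.execute('SELECT v FROM kv WHERE k = ?', ('counter',)).fetchone()
new_value = str(int(row[0]) + 1) if row else '1'
conn.execute('INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)', ('counter', new_value))
conn.execute('COMMIT')
```

Because the read and the write happen inside one immediate transaction, the p1/p2 lost-update problem from the question cannot occur.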