Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
13,001,913 |
2012-10-21T20:32:00.000
| 8 | 0 | 1 | 0 |
python,hash,dictionary,equality
| 13,001,967 | 3 | true | 0 | 0 |
First, __hash__(myNewMyClassObj) gets called. If no object with the same hash is found in the dictionary, Python assumes myNewMyClassObj is not in the dictionary. (Note that Python requires that whenever __eq__ evaluates as equal for two objects, their __hash__ must be identical.)
If some objects with the same __hash__ are found in the dictionary, __eq__ gets called on each of them. If __eq__ evaluates as equal for any of them, then myNewMyClassObj in dict_ returns True.
Thus, you just need to make sure both __eq__ and __hash__ are fast.
To your follow up question: yes, dict_ stores only one of a set of equivalent MyClass objects (as defined by __eq__). (As does set.)
Note that __eq__ is only called on the objects that had the same hash and got allocated to the same bucket. The number of such objects is usually a very small number (dict implementation makes sure of that). So you still have (roughly) O(1) lookup performance.
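A minimal sketch of the lookup behaviour described above; the class and its key attribute are made up for illustration:

```python
class MyClass(object):
    def __init__(self, key):
        self.key = key

    def __hash__(self):
        # Equal objects must return equal hashes.
        return hash(self.key)

    def __eq__(self, other):
        return isinstance(other, MyClass) and self.key == other.key


dict_ = {MyClass(1): "expensive result"}
new_obj = MyClass(1)        # a distinct but equivalent instance
print(new_obj in dict_)     # True: __hash__ finds the bucket, __eq__ confirms it
print(dict_[new_obj])       # "expensive result"
```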
| 2 | 6 | 0 |
I have a class (let's call it myClass) that implements both __hash__ and __eq__. I also have a dict that maps myClass objects to some value, computing which takes some time.
Over the course of my program, many (in the order of millions) myClass objects are instantiated. This is why I use the dict to keep track of those values.
However, sometimes a new myClass object might be equivalent to an older one (as defined by the __eq__ method). So rather than compute the value for that object again, I'd rather just lookup the value of older myClass object in the dict. To accomplish this, I do if myNewMyClassObj in dict.
Here's my question:
When I use that in clause, what gets called, __hash__ or __eq__? The point of using a dict is that it's O(1) lookup time. So then __hash__ must be called. But what if __hash__ and __eq__ aren't equivalent methods? In that case, will I get a false positive for if myNewMyClassObj in dict?
Follow up question:
I want to minimize the number of entries in my dict, so I would ideally like to keep only one of a set of equivalent myClass objects in the dict. So again, it seems that __eq__ needs to be called when computing if myNewClassObj in dict, which would defile a dict's O(1) lookup time to an O(n) lookup time
|
What happens when you call `if key in dict`
| 1.2 | 0 | 0 | 607 |
13,002,676 |
2012-10-21T22:07:00.000
| 0 | 0 | 0 | 0 |
python,http,post,upload,cherrypy
| 26,299,500 | 2 | false | 1 | 0 |
Huge file uploads are always problematic. What would you do if the connection closed in the middle of the upload? Use a chunked file upload method instead.
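A minimal sketch of reading an upload in chunks with CherryPy; the handler and destination path are illustrative, and the 413 message quoted in the question looks like CherryPy's server.max_request_body_size limit, which the config line below lifts:

```python
import cherrypy

class UploadApp(object):
    @cherrypy.expose
    def upload(self, myFile):
        size = 0
        with open('/tmp/received.bin', 'wb') as out:   # placeholder destination
            while True:
                data = myFile.file.read(8192)          # read the part in 8 KB chunks
                if not data:
                    break
                out.write(data)
                size += len(data)
        return "Received %d bytes" % size

# 0 removes the request-body cap; in production set a sane finite limit instead.
cherrypy.config.update({'server.max_request_body_size': 0})
cherrypy.quickstart(UploadApp())
```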
| 1 | 3 | 0 |
I have a cherrypy web server that needs to be able to receive large files over http post. I have something working at the moment, but it fails once the files being sent gets too big (around 200mb). I'm using curl to send test post requests, and when I try to send a file that's too big, curl spits out "The entity sent with the request exceeds the maximum allowed bytes." Searching around, this seems to be an error from cherrypy.
So I'm guessing that the file being sent needs to be sent in chunks? I tried something with mmap, but I couldn't get it too work. Does the method that handles the file upload need to be able to accept the data in chunks too?
|
Python: sending and receiving large files over POST using cherrypy
| 0 | 0 | 1 | 4,776 |
13,004,359 |
2012-10-22T02:38:00.000
| 6 | 0 | 1 | 0 |
python,regex
| 13,004,581 | 3 | true | 0 | 0 |
If you are after a general solution, your algorithm would need to look something like:
Read a chunk of the stream into a buffer.
Search for the regexp in the buffer
If the pattern matches, do whatever you want with the match, discard the start of the buffer up to match.end() and go to step 2.
If the pattern does not match, extend the buffer with more data from the stream
This could end up using a lot of memory if no matches are found, but it is difficult to do better in the general case (consider trying to match .*x as a multi-line regexp in a large file where the only x is the last character).
If you know more about the regexp, you might have other cases where you can discard part of the buffer.
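A rough sketch of the buffered loop described above; the chunk size is arbitrary, and a real implementation still has to decide how to treat a match that might grow if more data arrived:

```python
import re

def stream_findall(stream, pattern, chunk_size=4096):
    """Yield matches of a (possibly multi-line) pattern from a file-like stream."""
    regex = re.compile(pattern, re.DOTALL)
    buf = ""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            # End of stream: report whatever still matches in the buffer.
            for m in regex.finditer(buf):
                yield m.group(0)
            return
        buf += chunk
        m = regex.search(buf)
        while m:
            yield m.group(0)
            buf = buf[m.end():]   # discard everything up to the end of the match
            m = regex.search(buf)
```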
| 2 | 6 | 0 |
Suppose you want to do a regular expression search and extract over a pipe, but the pattern may cross multiple lines. How do you do it? Maybe a regular expression library that works on a stream?
I would like to do this job using a Python library, but any solution will be OK, as long as it is a library and not a command-line tool, of course.
BTW, I know how to solve my current problem; I'm just seeking a general solution.
If no such library exists, why can an ordinary regex library not work with a stream, given that the regex matching algorithm never needs backward scanning?
|
regular expression on stream instead of string?
| 1.2 | 0 | 0 | 2,524 |
13,004,359 |
2012-10-22T02:38:00.000
| -2 | 0 | 1 | 0 |
python,regex
| 13,004,594 | 3 | false | 0 | 0 |
I do not believe that it is possible to use a regular expression on a stream, because without the entire piece of data you can't make a positive match. This means that you would only have a probable match.
However, as @James Henstridge stated, you could use buffers to overcome this.
| 2 | 6 | 0 |
Suppose you want to do a regular expression search and extract over a pipe, but the pattern may cross multiple lines. How do you do it? Maybe a regular expression library that works on a stream?
I would like to do this job using a Python library, but any solution will be OK, as long as it is a library and not a command-line tool, of course.
BTW, I know how to solve my current problem; I'm just seeking a general solution.
If no such library exists, why can an ordinary regex library not work with a stream, given that the regex matching algorithm never needs backward scanning?
|
regular expression on stream instead of string?
| -0.132549 | 0 | 0 | 2,524 |
13,004,789 |
2012-10-22T03:56:00.000
| 0 | 0 | 0 | 0 |
python,mysql,passwords
| 16,186,975 | 1 | false | 0 | 0 |
Try the MySQLdb package; it lets you use punctuation characters in the password when connecting to the database.
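A minimal sketch of such a connection (host, port, credentials and database are placeholders); the password is passed verbatim as a keyword argument, so characters like $ or @ need no escaping:

```python
import MySQLdb

# The password string is passed through as-is; no doubling or backslash escaping.
conn = MySQLdb.connect(host='localhost', port=3306, user='appuser',
                       passwd='$abcdef', db='appdb')
cur = conn.cursor()
cur.execute('SELECT VERSION()')
print(cur.fetchone())
conn.close()
```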
| 1 | 2 | 0 |
I cannot get a connection to a MySQL database if my password contains punctuation characters in particular $ or @. I have tried to escape the characters, by doubling the $$ etc. but no joy.
I have tried the pymysql library and the _mssql library.
the code...
self.dbConn = _mysql.connect(host=self.dbDetails['site'], port=self.dbDetails['port'], user=self.dbDetails['user'], passwd=self.dbDetails['passwd'], db=self.dbDetails['db'])
where self.dbDetails['passwd'] = "$abcdef".
I have tried '$$abcdef', and re.escape(self.dbDetails['passwd']), and '\$abcdef' but nothing works until I change the users password to remove the "$". Then it connects just fine. The only error I am getting is a failure to connect. I guess I will have to figure out how to print the actual exception message.
|
How to connect with passwords that contains characters like "$" or "@"?
| 0 | 1 | 0 | 432 |
13,006,151 |
2012-10-22T06:42:00.000
| 2 | 0 | 0 | 1 |
python,django,celery,django-celery
| 36,787,909 | 3 | false | 1 | 0 |
I think you are trying to avoid a race condition between your own script and the workers, not asking for a way to delay a task run.
In that case you can create a wrapper task and, inside it, call each of your tasks with .apply(), not .apply_async() or .delay(), so that those tasks run sequentially.
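A minimal sketch of that wrapper-task idea; the task names and broker URL are placeholders:

```python
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')   # placeholder broker URL

@app.task
def step_one(item_id):
    pass   # real work goes here

@app.task
def step_two(item_id):
    pass   # real work goes here

@app.task
def run_steps(item_id):
    # .apply() runs each task locally and synchronously, so step_two only
    # starts after step_one has finished instead of racing on a worker.
    step_one.apply(args=(item_id,))
    step_two.apply(args=(item_id,))
```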
| 1 | 18 | 0 |
I have a small script that enqueues tasks for processing. This script makes a whole lot of database queries to get the items that should be enqueued. The issue I'm facing is that the celery workers begin picking up the tasks as soon as it is enqueued by the script. This is correct and it is the way celery is supposed to work but this often leads to deadlocks between my script and the celery workers.
Is there a way I could enqueue all my tasks from the script but delay execution until the script has completed or until a fixed time delay?
I couldn't find this in the documentation of celery or django-celery. Is this possible?
Currently as a quick-fix I've thought of adding all the items to be processed into a list and when my script is done executing all the queries, I can simply iterate over the list and enqueue the tasks. Maybe this would resolve the issue but when you have thousands of items to enqueue, this might be a bad idea.
|
How can I defer the execution of Celery tasks?
| 0.132549 | 0 | 0 | 25,567 |
13,014,789 |
2012-10-22T15:35:00.000
| 0 | 0 | 0 | 0 |
python,merge,python-3.x,stata
| 13,016,728 | 2 | false | 0 | 0 |
Type "help shell" in Stata. What you want to do is shell out from Stata, call Python, and then have Stata resume whatever you want it to do after the Python script has completed.
| 1 | 1 | 1 |
This is probably very easy, but after looking through documentation and possible examples online for the past several hours I cannot figure it out.
I have a large dataset (a spreadsheet) that gets heavily cleaned by a DO file. In the DO file I then want to save certain variables of the cleaned data as a temp .csv, run some Python scripts that produce a new CSV, and then append that output to my cleaned data.
If that was unclear here is an example.
After cleaning my data set (XYZ) goes from variables A to Z with 100 observations. I want to take variables A and D through F and save it as test.csv. I then want to run a python script that takes this data and creates new variables AA to GG. I want to then take that information and append it to the XYZ dataset (making the dataset now go from A to GG with 100 observations) and then be able to run a second part of my DO file for analysis.
I have been doing this manually and it is fine but the file is going to start changing quickly and it would save me a lot of time.
|
Calling Python from Stata
| 0 | 0 | 0 | 2,897 |
13,014,946 |
2012-10-22T15:44:00.000
| 0 | 0 | 1 | 0 |
python,macos,ipython
| 29,693,294 | 4 | false | 0 | 0 |
After having trouble in IPython with the up and down arrows for accessing history, and browsing this post, a simple solution (turning off Scroll Lock) turned out to work for me.
| 2 | 6 | 0 |
In my installation of ipython I have this strange problem where I cannot reliably move through command history with up and down arrows... a lot of the time it just doesn't work (nothing happens on the key press). Also sometimes writing normal characters at the end of the command just doesn't work.
My system: Mac OSX Lion
I have readline installed...
thank you for the help!
david
|
ipython up and down arrow strange behaviour
| 0 | 0 | 0 | 4,181 |
13,014,946 |
2012-10-22T15:44:00.000
| 7 | 0 | 1 | 0 |
python,macos,ipython
| 14,385,255 | 4 | false | 0 | 0 |
Make sure you installed readline before ipython.
sudo pip uninstall ipython
sudo pip install readline ipython
(I know this question is a few months old, but for future reference)
| 2 | 6 | 0 |
In my installation of ipython I have this strange problem where I cannot reliably move through command history with up and down arrows... a lot of the time it just doesn't work (nothing happens on the key press). Also sometimes writing normal characters at the end of the command just doesn't work.
My system: Mac OSX Lion
I have readline installed...
thank you for the help!
david
|
ipython up and down arrow strange behaviour
| 1 | 0 | 0 | 4,181 |
13,015,593 |
2012-10-22T16:22:00.000
| 0 | 0 | 0 | 0 |
python,nltk
| 15,627,502 | 2 | false | 0 | 0 |
This is a very late answer, but perhaps it will help someone.
What you're asking about is regression. Regarding Jacob's answer, linear regression is only one way to do it. However, I agree with his recommendation of scikit-learn.
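A minimal sketch of treating the 0-5 score as a regression target with scikit-learn; the tiny training set and the choice of TfidfVectorizer plus Ridge are purely illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

texts = ["great plot and acting", "terrible, boring film", "decent but forgettable"]
scores = [4.5, 0.5, 2.5]   # made-up training scores in the 0-5 range

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(texts, scores)
print(model.predict(["great acting but a boring plot"]))  # predicted score, not a label
```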
| 1 | 8 | 1 |
In the light of a project I've been playing with Python NLTK and Document Classification and the Naive Bayes classifier. As I understand from the documentation, this works very well if your different documents are tagged with either pos or neg as a label (or more than 2 labels)
The documents I'm working with that are already classified don't have labels, but they have a score, a floating point between 0 and 5.
What I would like to do is build a classifier, like the movies example in the documentation, but that would predict the score of a piece of text, rather than the label. I believe this is mentioned in the docs but never further explored as 'probabilities of numeric features'
I am not a language expert nor a statistician so if someone has an example of this lying around I would be most grateful if you would share this with me. Thanks!
|
NLTK: Document Classification with numeric score instead of labels
| 0 | 0 | 0 | 1,239 |
13,017,421 |
2012-10-22T18:24:00.000
| 2 | 0 | 0 | 0 |
python,django
| 13,017,620 | 1 | false | 1 | 0 |
I would imagine that calling sleep() blocks the execution of all Django code in most cases. However, it might depend on the deployment architecture (e.g. gevent, gunicorn, etc.). For instance, if you are using a server that fires a Django thread for each request, then no, it will not block all the code.
In most cases, however, using something like Celery is likely to be a much better solution, because (1) you don't reinvent the wheel and (2) it has been tested.
| 1 | 5 | 0 |
In Django, if the view uses a sleep() function while answering a request, does this block the handling of the whole queue of requests?
If so, how can I delay an HTTP answer without this blocking behavior? Can we do that out of the box and avoid using a job queue like Celery?
|
Is sleep() blocking the handling of requests in Django?
| 0.379949 | 0 | 0 | 1,668 |
13,018,147 |
2012-10-22T19:12:00.000
| 16 | 0 | 0 | 0 |
python,django,django-socialauth
| 13,032,929 | 5 | true | 1 | 0 |
DSA doesn't log out accounts (or flush sessions) at the moment. AuthAlreadyAssociated highlights the scenario where the current user is not associated with the social account being used. There are a couple of solutions that might suit your project:
Define a subclass of social_auth.middleware.SocialAuthExceptionMiddleware and override the default behavior (process_exception()) to redirect or set up the warning you like in the way you prefer (see the sketch after this list).
Add a pipeline method (replacing social_auth.backend.pipeline.social.social_auth_user) that logs out the current user instead of raising an exception.
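A rough sketch of the first option, assuming the middleware class path quoted above matches the installed django-social-auth version; the redirect target is a placeholder and the exception is matched by class name to avoid guessing its import path:

```python
from django.shortcuts import redirect
from social_auth.middleware import SocialAuthExceptionMiddleware

class RedirectSocialAuthExceptionMiddleware(SocialAuthExceptionMiddleware):
    def process_exception(self, request, exception):
        # Match on the class name so we don't depend on the exception's module path.
        if exception.__class__.__name__ == 'AuthAlreadyAssociated':
            return redirect('/accounts/profile/')   # placeholder destination
        return super(RedirectSocialAuthExceptionMiddleware,
                     self).process_exception(request, exception)
```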
| 1 | 18 | 0 |
Say I create a user using Facebook (call it fbuser) or Google (googleuser). If I then create another user through the normal Django admin (normaluser) and try logging in again using Facebook or Google while that third user (normaluser) is logged in, it throws an AuthAlreadyAssociated exception.
Ideally it should throw an error called you are already logged in as
user normaluser.
Or it should log out normal user, and try associating with the
account which is already associated with FB or Google, as the case
may be.
How do I implement one of these two above features? All advice welcome.
Also when I try customizing SOCIAL_AUTH_PIPELINE, it is not possible to login with FB or Google, and it forces the login URL /accounts/login/
|
AuthAlreadyAssociated Exception in Django Social Auth
| 1.2 | 0 | 0 | 9,224 |
13,018,157 |
2012-10-22T19:13:00.000
| 1 | 1 | 0 | 0 |
python,macos,mercurial,build-automation
| 13,667,498 | 1 | true | 0 | 0 |
You can always check the exit code of the commands you use (see the sketch after this list):
hg add (if new, unversioned files appeared in WC) "Returns 0 if all files are successfully added": non-zero means "some troubles here, not all files added"
hg commit "Returns 0 on success, 1 if nothing changed": 1 means "no commit, nothing to push"
hg push "Returns 0 if push was successful, 1 if nothing to push"
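A minimal sketch of driving those commands from a Python script and branching on the exit codes; the repository path is a placeholder and notify_developers() is a hypothetical email helper:

```python
import subprocess

def hg(args):
    """Run a Mercurial command in the asset repository and return its exit code."""
    return subprocess.call(['hg'] + args, cwd='/path/to/asset/repo')  # placeholder path

if hg(['add']) != 0:
    notify_developers('hg add failed')                      # hypothetical email helper
elif hg(['commit', '-m', 'auto: new assets']) == 0:         # 1 would mean nothing changed
    if hg(['push']) != 0:
        notify_developers('hg push failed (pull needed?)')  # hypothetical email helper
```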
| 1 | 1 | 0 |
What I would like it is to run a script that automatically checks for new assets (files that aren't code) that have been submitted to a specific directory, and then every so often automatically commit those files and push them.
I could make a script that does this through the command line, but I was mostly curious if mercurial offered any special functionality for this, specifically I'd really like some kind of return error code so that my script will know if the process breaks at any point so I can send an email with the error to specific developers. For example if for some reason the push fails because a pull is necessary first, I'd like the script to get a code so that it knows this and can handle it properly.
I've tried researching this and can only find things like automatically doing a push after a commit, which isn't exactly what I'm looking for.
|
Automating commit and push through mercurial from script
| 1.2 | 0 | 0 | 448 |
13,018,968 |
2012-10-22T20:05:00.000
| 3 | 0 | 0 | 0 |
python,image-processing
| 13,019,636 | 2 | false | 0 | 0 |
You are making this way too hard. I handled this in production code by generating a histogram of the image, throwing away outliers (1 black pixel doesn't mean that the whole image has lots of black; 1 white pixel doesn't imply a bright image), then seeing if the resulting distribution covered a sufficient range of brightnesses.
In stats terms, you could also see if the histogram approximates a Gaussian distribution with a satisfactorily large standard deviation. If the whole image is medium gray with a tiny stddev, then you have a low contrast image - by definition. If the mean is approximately medium-gray but the stddev covers brightness levels from say 20% to 80%, then you have a decent contrast.
But note that neither of these approaches require anything remotely resembling machine learning.
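A minimal sketch of the histogram/standard-deviation check with PIL; the threshold is an arbitrary number for illustration, not a recommended value:

```python
from PIL import Image, ImageStat

img = Image.open('photo.jpg').convert('L')   # grayscale
stat = ImageStat.Stat(img)
mean, stddev = stat.mean[0], stat.stddev[0]

# Arbitrary illustrative cutoff: a tiny spread of brightness values
# around the mean suggests a low-contrast image.
if stddev < 30:
    print('low contrast (mean %.1f, stddev %.1f)' % (mean, stddev))
else:
    print('reasonable contrast (mean %.1f, stddev %.1f)' % (mean, stddev))
```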
| 1 | 1 | 1 |
I want an algorithm to detect if an image is of high professional quality or is done with poor contrast, low lighting etc. How do I go about designing such an algorithm.
I feel that it is feasible, since if I press a button in Picasa it tries to fix the lighting, contrast and color. Now I have seen that in good pictures, if I press the auto-fix buttons, the change is not as big as in the bad images. Could this be used as a lead?
Please throw any ideas at me. Also if this has already been done before, and I am doing the wheel invention thing, kindly stop me and point me to previous work.
thanks much,
|
How to automatically detect if image is of high quality?
| 0.291313 | 0 | 0 | 3,360 |
13,021,093 |
2012-10-22T22:50:00.000
| -1 | 0 | 1 | 0 |
python,regex
| 13,021,174 | 3 | false | 0 | 0 |
It means: the string /actors, followed by an optional capture group, which contains a literal . and then one or more of whatever the non-literal . is configured to match.
| 1 | 2 | 0 |
I am confused about the semantics of the following Python regular expression:
r"/actors(\\..+)?"
I looked through the Python documentation section on regular expressions, but couldn't make sense of this expression. Can someone help me out?
|
Python Regular Expression (\..+)?
| -0.066568 | 0 | 0 | 484 |
13,021,375 |
2012-10-22T23:20:00.000
| 0 | 0 | 0 | 0 |
ironpython,web-hosting,shared-hosting
| 13,024,373 | 1 | false | 1 | 0 |
IronPython should work in shared hosting environments. I'm assuming they have some sort of partial-trust setup and not a full-trust environment; if it's full-trust, there's no issues. If not, it should still work, but it hasn't been as heavily tested. You have to deploy it with your project (in the bin directory), but aside from that, it should just work.
You can use NuGet to add it to your project ("IronPython"), or find the necessary files in the Platforms/Net40 directory of an installation or the zip file.
| 1 | 0 | 0 |
Does anyone have experience running IronPython in a shared hosting environment? Am using one hosting company but they don't support it. It's a project mixing ASP.NET MVC 4 with IronPython.
I would do a VM somewhere if all else fails, but figured I give this a shot to save a few bucks. #lazystackoverflow
Thanks,
-rob
|
Using IronPython at a hosting company
| 0 | 0 | 0 | 136 |
13,023,103 |
2012-10-23T03:21:00.000
| 1 | 0 | 0 | 0 |
python,django,apache,mahout
| 13,696,473 | 1 | false | 1 | 0 |
I think you could build an independent application with Mahout, and your Python application would just be a client.
| 1 | 1 | 0 |
I am building a web application in Python/Django.
I need to apply some machine learning algorithms to some data. I know there are libraries available for Python, but someone in my company was saying that Mahout is a very good tool for that.
I want to know whether I can use it with Python/Django, or whether I should do that with Python libraries only.
|
Can i use apache mahout with django application
| 0.197375 | 0 | 0 | 326 |
13,024,361 |
2012-10-23T06:01:00.000
| 0 | 0 | 0 | 0 |
python,django,mongodb,django-models,django-nonrel
| 13,031,452 | 1 | true | 1 | 0 |
After some deep digging into the Django models I was able to solve the problem. The save() method in turn calls the save_base() method, which saves the returned result (the id, in Mongo's case) into self.id. This _id field can then be picked up by overriding the save() method on the model.
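A minimal sketch of that override; the model, its field, and the log_change() helper are hypothetical placeholders for whatever change-tracking store you use:

```python
from django.db import models

class TrackedDocument(models.Model):
    name = models.CharField(max_length=100)

    def save(self, *args, **kwargs):
        super(TrackedDocument, self).save(*args, **kwargs)
        # save_base() has run by now, so the Mongo-generated _id is available
        # on the instance; hand it to whatever records the modification.
        log_change(self.pk)   # hypothetical helper writing to the audit db
```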
| 1 | 0 | 0 |
I am using the Django non-rel version with a MongoDB backend. I am interested in tracking the changes that occur on model instances, e.g. if someone creates, edits or deletes a model instance. The backend db is Mongo, hence models have an associated "_id" field in their respective collections/dbs.
Now i want to extract this "_id" field on which this modif operation took place. The idea is to write this "_id" field to another db so someone can pick it up from there and know what object was updated.
I thought about overriding the save() method from Django "models.Model" since all my models are derived from that. However the mongo "_id" field is obviously not present there since the mongo-insert has not taken place yet.
Is there any possibility of a pseudo post-save() method that can be called after the save operation has taken place into mongo? Can django/django-toolbox/pymongo provide such a combination?
|
Django-Nonrel(mongo-backend):Model instance modification tracking
| 1.2 | 1 | 0 | 132 |
13,025,856 |
2012-10-23T07:56:00.000
| 0 | 0 | 1 | 0 |
java,python,pylucene,jcc
| 13,026,184 | 1 | false | 1 | 0 |
You could create a proxy class in Python that calls the Java class. Then, on the proxy class, you can override whatever you need.
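A minimal sketch of the proxy idea; the wrapped object and the overridden method name are illustrative, since the actual JCC-generated class depends on the Java library being wrapped:

```python
class WidgetProxy(object):
    """Wraps a JCC-generated Java object and overrides selected methods."""

    def __init__(self, java_widget):
        self._widget = java_widget

    def render(self, *args, **kwargs):
        # Overridden behaviour lives here instead of in the Java class.
        return 'custom rendering'

    def __getattr__(self, name):
        # Everything not overridden is delegated to the wrapped Java object.
        return getattr(self._widget, name)
```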
| 1 | 0 | 0 |
I'm using JCC to create a Python wrapper for a Java library and I need to override a method from a Java class inside the Python script. Is it possible? How can you do that if it is possible?
|
Overriding a Java class from Python using JCC. Is that possible?
| 0 | 0 | 0 | 148 |
13,026,437 |
2012-10-23T08:35:00.000
| 0 | 0 | 1 | 0 |
python,multithreading,concurrency,process,multiprocess
| 13,026,805 | 2 | false | 0 | 0 |
There's no official Python API for killing threads -- you need to use an OS-specific method.
If you spawn a new process (using multiprocessing.Process) then you can kill it with .terminate(). Of course, this will cause it to stop immediately -- it won't clean up after itself and it may pollute any shared data structures.
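A minimal sketch of that multiprocessing approach; the compile job is a stand-in and the sleep simulates waiting for a kill command from the message queue:

```python
import multiprocessing
import time

def compile_job():
    while True:
        time.sleep(1)       # stand-in for real compilation work

if __name__ == '__main__':
    proc = multiprocessing.Process(target=compile_job)
    proc.start()

    time.sleep(3)           # pretend a "kill" command just arrived on the queue
    proc.terminate()        # stops the process immediately (no cleanup runs)
    proc.join()
    print('compile process killed; spawn a new Process the same way when needed')
```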
| 1 | 0 | 0 |
In Python (2.6.6), what is the best way to have a thread/process checking a network (message queue) for things while concurrently doing work (compiling)? If I receive a command down the message queue, I must be able to kill and spawn compile threads.
|
Python threading - listen to network while compiling
| 0 | 0 | 0 | 137 |
13,027,848 |
2012-10-23T09:53:00.000
| 1 | 0 | 0 | 0 |
django,python-2.7,mod-wsgi,amazon-elastic-beanstalk
| 14,729,865 | 4 | false | 1 | 0 |
To get around the mod_wsgi limitation, you can deploy your application under your own WSGI container, like uWSGI, and add configuration to Apache so it serves as a reverse proxy for your WSGI container.
You can use container_commands to place your Apache configuration files under /etc/httpd/...
| 1 | 6 | 0 |
According to the docs, AWS Elastic Beanstalk supports Python 2.6. I wonder if anyone has set up a custom AMI using the EBS backed 64 bit Linux AMI to run django under Python 2.7 on the beanstalk? While most aspects of a set up under 2.7 will probably be straightforward using virtualenv or changing the symlinks, I'm worried about the amazon build of mod_wsgi. I understand that depending on how mod_wsgi has been compiled there may be issues with running it in combination with Python 2.7. I also wonder if there will be any postgreSQL issues...
|
Django running under python 2.7 on AWS Elastic Beanstalk
| 0.049958 | 0 | 0 | 1,731 |
13,033,782 |
2012-10-23T15:18:00.000
| 1 | 0 | 1 | 0 |
python,data-structures,python-2.7
| 13,038,251 | 2 | false | 0 | 0 |
What is efficient would depend on what you are doing and how often you are doing it. There isn't much information in your question to hazard a guess.
For example, it is not clear whether all balls in the same box have the same colour. If that is so, then you could assign the colour to the box rather than to the ball for maybe space efficiency; but what do you want to make efficient?
A lot of guesswork could be spared if you showed some of your code.
| 1 | 1 | 0 |
Here's the problem I'm trying to solve:
I have boxes A, B, C, and D and have balls a ,b ,c ,d ,e, f, ... -- each of which is in one of the aforementioned boxes. So I know that, for example, balls a, b, c and d are in box A. Balls e, j, p, and w are in box B, etc.
I'm trying to color each ball based on which box contains it, so I need a data structure to keep and handle this information efficiently.
My first thought was to keep a dict like {'a':'A', 'b':'A',... 'w' : 'B' ...} and if a's value is A color him (for example) red, but I'm not sure that this is the best way to keep info in this case.
|
I need an appropriate Python data structure
| 0.099668 | 0 | 0 | 112 |
13,033,979 |
2012-10-23T15:28:00.000
| 0 | 0 | 0 | 0 |
python,django,django-models
| 13,034,305 | 4 | false | 1 | 0 |
Add a ManyToMany relationship from your Article model to the User model. Every time a user likes an article, add them to it. The length of that field will be the number of likes on that article.
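A minimal sketch of that relationship; the model and field names are illustrative:

```python
from django.contrib.auth.models import User
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    # Users who liked this article.
    likes = models.ManyToManyField(User, related_name='liked_articles', blank=True)

# usage, e.g. in a view:
#   article.likes.add(request.user)
#   like_count = article.likes.count()
```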
| 1 | 3 | 0 |
I have a django project using the built in user model.
I need to add relationships to the user. For now a "like" relationship for articles the user likes and a "following" relationship for other users followed.
What's the best way to define these relationships? The Django docs recommend creating a Profile model with a one-to-one relation to the user in order to add fields to the user, but given that no extra fields will be added to the user profile in my case, this is overkill.
Any suggestions?
|
django add relationships to user model
| 0 | 0 | 0 | 7,527 |
13,034,991 |
2012-10-23T16:21:00.000
| 86 | 0 | 1 | 0 |
python,python-3.x,jit
| 13,035,238 | 7 | true | 0 | 0 |
First off, Python 3(.x) is a language, for which there can be any number of implementations. Okay, to this day no implementation except CPython actually implements those versions of the language. But that will change (PyPy is catching up).
To answer the question you meant to ask: CPython, 3.x or otherwise, does not, never did, and likely never will, contain a JIT compiler. Some other Python implementations (PyPy natively, Jython and IronPython by re-using JIT compilers for the virtual machines they build on) do have a JIT compiler. And there is no reason their JIT compilers would stop working when they add Python 3 support.
But while I'm here, also let me address a misconception:
Usually a JIT compiler is the only thing that can improve performances in interpreted languages
This is not correct. A JIT compiler, in its most basic form, merely removes interpreter overhead, which accounts for some of the slow down you see, but not for the majority. A good JIT compiler also performs a host of optimizations which remove the overhead needed to implement numerous Python features in general (by detecting special cases which permit a more efficient implementation), prominent examples being dynamic typing, polymorphism, and various introspective features.
Just implementing a compiler does not help with that. You need very clever optimizations, most of which are only valid in very specific circumstances and for a limited time window. JIT compilers have it easy here, because they can generate specialized code at run time (it's their whole point), can analyze the program easier (and more accurately) by observing it as it runs, and can undo optimizations when they become invalid. They can also interact with interpreters, unlike ahead of time compilers, and often do it because it's a sensible design decision. I guess this is why they are linked to interpreters in people's minds, although they can and do exist independently.
There are also other approaches to make Python implementation faster, apart from optimizing the interpreter's code itself - for example, the HotPy (2) project. But those are currently in research or experimentation stage, and are yet to show their effectiveness (and maturity) w.r.t. real code.
And of course, a specific program's performance depends on the program itself much more than the language implementation. The language implementation only sets an upper bound for how fast you can make a sequence of operations. Generally, you can improve the program's performance much better simply by avoiding unnecessary work, i.e. by optimizing the program. This is true regardless of whether you run the program through an interpreter, a JIT compiler, or an ahead-of-time compiler. If you want something to be fast, don't go out of your way to get at a faster language implementation. There are applications which are infeasible with the overhead of interpretation and dynamicness, but they aren't as common as you'd think (and often, solved by calling into machine code-compiled code selectively).
| 2 | 88 | 0 |
I found that when I ask something demanding of Python, it doesn't use my machine's resources at 100% and it's not really fast. It's fast compared to many other interpreted languages, but when compared to compiled languages I think the difference is really remarkable.
Is it possible to speed things up with a Just In Time (JIT) compiler in Python 3?
Usually a JIT compiler is the only thing that can improve performance in interpreted languages, so that's what I'm referring to; if other solutions are available I would love to accept new answers.
|
Does the Python 3 interpreter have a JIT feature?
| 1.2 | 0 | 0 | 61,675 |
13,034,991 |
2012-10-23T16:21:00.000
| 2 | 0 | 1 | 0 |
python,python-3.x,jit
| 13,035,216 | 7 | false | 0 | 0 |
If you mean JIT as in a just-in-time compiler to a bytecode representation, then it has such a feature (since 2.2). If you mean JIT to machine code, then no. Yet the compilation to bytecode provides a lot of performance improvement. If you want it to compile to machine code, then PyPy is the implementation you're looking for.
Note: PyPy doesn't work with Python 3.x
| 2 | 88 | 0 |
I found that when I ask something demanding of Python, it doesn't use my machine's resources at 100% and it's not really fast. It's fast compared to many other interpreted languages, but when compared to compiled languages I think the difference is really remarkable.
Is it possible to speed things up with a Just In Time (JIT) compiler in Python 3?
Usually a JIT compiler is the only thing that can improve performance in interpreted languages, so that's what I'm referring to; if other solutions are available I would love to accept new answers.
|
Does the Python 3 interpreter have a JIT feature?
| 0.057081 | 0 | 0 | 61,675 |
13,036,605 |
2012-10-23T18:12:00.000
| 0 | 0 | 0 | 0 |
python,gtk,pygtk
| 13,037,240 | 1 | false | 0 | 1 |
Silly me, just use GtkTreeView:Hover or GtkTreeView:Selected
| 1 | 0 | 0 |
What is the class name I should use to style a TreeView's rows?
I've tried GtkCellRendererText, but it doesn't work.
|
Styling TreeView's row using CSS
| 0 | 0 | 0 | 467 |
13,040,048 |
2012-10-23T22:04:00.000
| 2 | 0 | 0 | 0 |
python,json,reddit
| 13,040,391 | 1 | true | 1 | 0 |
Would it make sense to keep the Python scraper application running on
it's own server, which then writes the scraped URL's to the database?
Yes, that is a good idea. I would set up a cron job to run the program every so often. Depending on the load you're expecting, it doesn't necessarily need to be on its own server. I would have it as its own application.
I heard it may make sense to split the application and one does the
reading while the other does the writing, whats this about?
I am assuming the person who said this meant that you should have an application to write to your database (your python script) and an application to read URLs from the database (your WordPress wrapper, or perhaps another Python script to write something WordPress can understand).
What would the flow of the Python code look like? I can fumble my way
around writing it but I just am not entirely sure on how it should
flow.
This is a somewhat religious matter among programmers. However I feel that your program should be simple enough. I would simply grab the JSON and have a query that inserts into the database if the entry doesn't exist yet.
What else am I not thinking of here, any tips?
I personally would use urllib2 and MySQLdb modules for the Python script. Good luck!
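A rough sketch of that flow with the modules mentioned above (urllib2 and MySQLdb, i.e. Python 2 era); the table, credentials and User-Agent are placeholders, and reddit's listing is assumed to expose each post's URL under data -> children -> data:

```python
import json
import urllib2
import MySQLdb

req = urllib2.Request('http://www.reddit.com/.json',
                      headers={'User-Agent': 'my-top-links-bot/0.1'})  # placeholder UA
listing = json.load(urllib2.urlopen(req))

db = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='links')
cur = db.cursor()
for child in listing['data']['children']:
    post = child['data']
    # INSERT IGNORE skips URLs already stored (assumes a UNIQUE index on url).
    cur.execute('INSERT IGNORE INTO top_links (url, title) VALUES (%s, %s)',
                (post['url'], post['title']))
db.commit()
db.close()
```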
| 1 | 2 | 0 |
I'm extremely new to Python, read about half a beginner book for Python3. I figure doing this will get me going and learning with something I actually want to do instead of going through some "boring" exercises.
I'm wanting to build an application that will scrape Reddit for the top URL's and then post these onto my own page. It would only check a couple times a day so no hammering at all here.
I want to parse the Reddit json (http://www.reddit.com/.json) and other subreddits json into URL's that I can organize into my own top list and have my own categories as well on my page so I don't have to keep visiting Reddit.
The website will be a Wordpress template with the DB hosted on it's own server (mysql). I will be hosting this on AWS using RDS, ELB, Auto-scaling, and EC2 instances for the webservers.
My questions are:
-Would it make sense to keep the Python scraper application running on it's own server, which then writes the scraped URL's to the database?
-I heard it may make sense to split the application and one does the reading while the other does the writing, whats this about?
-What would the flow of the Python code look like? I can fumble my way around writing it but I just am not entirely sure on how it should flow.
-What else am I not thinking of here, any tips?
|
Scraping news sites with Python
| 1.2 | 0 | 1 | 570 |
13,040,834 |
2012-10-23T23:25:00.000
| 2 | 0 | 0 | 1 |
python,linux,networking,packet
| 13,040,908 | 2 | false | 0 | 0 |
No; there is no /dev/eth1 device node -- network devices are in a different namespace from character/block devices like terminals and hard drives. You must create an AF_PACKET socket to send raw IP packets.
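A rough sketch of the AF_PACKET approach (Linux only, requires root); the interface name and MAC addresses are placeholders, and because the file holds bare IP packets an Ethernet header has to be prepended to each one; splitting the file into individual packets is left out here:

```python
import socket

ETH_P_ALL = 0x0003                        # standard Linux "all protocols" constant

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind(('eth1', 0))                    # placeholder interface name

dst_mac = b'\xff\xff\xff\xff\xff\xff'     # placeholder destination MAC
src_mac = b'\x00\x11\x22\x33\x44\x55'     # placeholder source MAC
ethertype_ipv4 = b'\x08\x00'

with open('packets.bin', 'rb') as f:
    ip_packet = f.read()                  # naive: assumes one IP packet per file

# The socket sends whatever bytes it is given as a complete link-layer frame.
sock.send(dst_mac + src_mac + ethertype_ipv4 + ip_packet)
sock.close()
```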
| 1 | 3 | 0 |
I have a file which contains raw IP packets in binary form. The data in the file contains a full IP header, TCP\UDP header, and data. I would like to use any language (preferably python) to read this file and dump the data onto the line.
In Linux I know you can write to some devices directly (echo "DATA" > /dev/device_handle). Would using python to do an open on /dev/eth1 achieve the same effect (i.e. could I do echo "DATA" > /dev/eth1)
|
Writing raw IP data to an interface (linux)
| 0.197375 | 0 | 1 | 2,540 |
13,042,897 |
2012-10-24T04:09:00.000
| 2 | 1 | 1 | 0 |
python,py2exe
| 16,571,073 | 3 | false | 0 | 0 |
Using os.system() will be problematic for many reasons; for example, when you have spaces or Unicode in the file names. It will also be more opaque relative to exceptions/failures.
If this is on windows, using win32file.CopyFile() is probably the best approach, since that will yield the correct file attributes, dates, permissions, etc. relative to the original file (that is, it will be more similar to the results you'd get by using Explorer to copy the file).
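A minimal sketch of the win32file route; the paths are placeholders and the third argument controls whether an existing destination makes the copy fail:

```python
import win32file   # part of the pywin32 package

src = r'C:\data\source.txt'        # placeholder paths
dst = r'C:\data\destination.txt'

# 0 = overwrite an existing destination, 1 = fail if it already exists.
win32file.CopyFile(src, dst, 0)
```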
| 1 | 4 | 0 |
I need an alternative to the shutil module, in particular shutil.copyfile.
It is a little known bug with py2exe that makes the entire shutil module useless.
|
Alternative to shutil.copyfile
| 0.132549 | 0 | 0 | 4,141 |
13,049,515 |
2012-10-24T12:45:00.000
| 1 | 0 | 0 | 0 |
python,google-app-engine,webapp2,wtforms
| 13,051,668 | 1 | true | 1 | 0 |
I think this will work if the routes are part of the same app.
But why not use a single handler with get and post methods plus a method _create, which can be called (self._create instead of a redirect) by get and post to render the template with the form? It is faster than a browser redirect and you can pass arguments in an easy way.
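A rough sketch of that single-handler pattern in webapp2; the route, template name, and the parse_form()/save_item()/render_template() helpers are hypothetical:

```python
import webapp2

class ItemHandler(webapp2.RequestHandler):
    def get(self):
        self._create()

    def post(self):
        form = parse_form(self.request.POST)        # hypothetical WTForms helper
        if not form.validate():
            # No redirect: re-render the form with its values and errors intact.
            return self._create(form=form)
        save_item(form)                             # hypothetical persistence helper
        return self.redirect('/items')

    def _create(self, form=None):
        self.response.write(render_template('item_form.html', form=form))  # hypothetical

app = webapp2.WSGIApplication([('/items/new', ItemHandler)])
```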
| 1 | 1 | 0 |
I am having a problem with webapp2 and wtforms. More specifically I have defined two methods in two different handlers, called:
create, which is a GET method listening to a specific route
save, which is a POST method listening to another route
In the save method I validate my form and, if it fails, I want to redirect to the create method via the redirect_to method, where I can render the template with the form. Is this possible in any way? I found an example of how this can be done in the same handler with get and post methods, but is this possible with methods of different handlers?
Thanks in advance!
|
Webapp2 + WTForms issue: How to pass values and errors back to user?
| 1.2 | 0 | 0 | 348 |
13,049,553 |
2012-10-24T12:48:00.000
| 0 | 0 | 1 | 0 |
python,coding-style,documentation,inline
| 13,049,627 | 2 | false | 0 | 0 |
Lines are usually limited to 80 characters in the computer world; the PEP 8 style guide recommends a maximum of 79.
| 1 | 0 | 0 |
When writing inline documentation, is there a standard method for line breaks when it comes longer lines of text (Obviously this shouldn't happen too frequently)?
For example:
"This modules does blah blah blah blah blah blah blah blah blah blah blah blahblah blah blah blah blah blahblah blah blah blah blah blahblah blah blah blah blah blah"
Or should it be:
"This modules does blah blah blah blah blah blah blah blah blah blah blah
blahblah blah blah blah blah blahblah blah blah blah blah blahblah blah
blah blah blah blah"
If there is no well defined standard or tradition for this I'm not really looking for a debate, in which case the question should probably just be closed. But if there is a standard or very common practice I would like to know what it is.
|
Line Length for Inline Python Documentation
| 0 | 0 | 0 | 127 |
13,053,253 |
2012-10-24T16:13:00.000
| 3 | 0 | 1 | 0 |
python,multithreading
| 13,053,314 | 2 | true | 0 | 0 |
Run two threads, and open and read the file separately in each thread; you can use seek() to jump to specific positions.
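A minimal sketch of that approach; the file name and the 4-byte slices are illustrative:

```python
import threading

def read_slice(path, offset, length, results, idx):
    # Each thread opens its own file handle, so the seeks don't interfere.
    with open(path, 'rb') as f:
        f.seek(offset)
        results[idx] = f.read(length)

results = [None, None]
threads = [
    threading.Thread(target=read_slice, args=('data.bin', 0, 4, results, 0)),
    threading.Thread(target=read_slice, args=('data.bin', 4, 4, results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # first four bytes, then the next four bytes
```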
| 1 | 0 | 0 |
I have a file onto which I have written some data. Say 8 bytes of data
Now using my python script, I want to read the first four bytes using one thread and the next 4 bytes using another thread while the first thread is still running or suspended.
How can I do this using python? i.e
1) Read first 4 bytes using thread1 from file1
2) while thread1 running or suspended, read next 4 bytes from file1 using thread2
|
Concurrent reads on a file in python
| 1.2 | 0 | 0 | 360 |
13,054,970 |
2012-10-24T17:56:00.000
| 22 | 0 | 1 | 0 |
python,python-2.7
| 13,054,992 | 3 | false | 0 | 0 |
pydoc foo.bar from the command line or help(foo.bar) or help('foo.bar') from Python.
| 2 | 41 | 0 |
I know this question is very simple, I know it must have been asked a lot of times and I did my search on both SO and Google but I could not find the answer, probably due to my lack of ability of putting what I seek into a proper sentence.
I want to be able to read the docs of what I import.
For example if I import x by "import x", I want to run this command, and have its docs printed in Python or ipython.
What is this command-function?
Thank you.
PS. I don't mean dir(), I mean the function that will actually print the docs for me to see and read what functionalities etc. this module x has.
|
How to print module documentation in Python
| 1 | 0 | 0 | 50,842 |
13,054,970 |
2012-10-24T17:56:00.000
| 1 | 0 | 1 | 0 |
python,python-2.7
| 54,911,846 | 3 | false | 0 | 0 |
You can use the help() function to display the documentation, or you can look at the method.__doc__ attribute.
E.g. help(input) will give the documentation for the input() method.
| 2 | 41 | 0 |
I know this question is very simple, I know it must have been asked a lot of times and I did my search on both SO and Google but I could not find the answer, probably due to my lack of ability of putting what I seek into a proper sentence.
I want to be able to read the docs of what I import.
For example if I import x by "import x", I want to run this command, and have its docs printed in Python or ipython.
What is this command-function?
Thank you.
PS. I don't mean dir(), I mean the function that will actually print the docs for me to see and read what functionalities etc. this module x has.
|
How to print module documentation in Python
| 0.066568 | 0 | 0 | 50,842 |
13,055,699 |
2012-10-24T18:46:00.000
| 0 | 0 | 0 | 0 |
python,snmp
| 13,191,755 | 1 | false | 0 | 0 |
What exactly do you want to know? Whether:
the python net-snmp API supports v3 or not
or
the OID for finding the request-id of an SNMP packet
| 1 | 1 | 0 |
Which SNMP library offers an API to get single properties of captured (tcpdump) SNMP packets like the request-ID or the protocol version?
I found that pySNMP offers such a low-level API but only for v1/v2c versions. But I need both v2c and v3.
|
How to get the request ID of a SNMP packet in Python
| 0 | 0 | 1 | 624 |
13,057,113 |
2012-10-24T20:24:00.000
| 3 | 0 | 0 | 0 |
python,machine-learning,scikit-learn
| 13,057,566 | 1 | true | 0 | 0 |
You probably need to derive from the KMeans class and override the following methods to use your vocabulary logic:
fit_transform will only be called on the train data
transform will be called on the test data
Maybe class derivation is not always the best option. You can also write your own transformer class that wraps calls to an embedded KMeans model and provides the fit / fit_transform / transform API that is expected by the Pipeline class for the first stages.
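A rough sketch of such a wrapper transformer for the bag-of-visual-words step; the vocabulary size is arbitrary and the descriptor handling is simplified. Because KMeans only runs inside fit(), the vocabulary is learned from training folds and merely applied to test folds:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cluster import KMeans

class BOWVocabulary(BaseEstimator, TransformerMixin):
    """X is a list of per-image descriptor arrays, each of shape (n_i, d)."""

    def __init__(self, n_words=100):
        self.n_words = n_words

    def fit(self, X, y=None):
        # Learn the visual vocabulary from the training descriptors only.
        self.kmeans_ = KMeans(n_clusters=self.n_words).fit(np.vstack(X))
        return self

    def transform(self, X):
        # Histogram of nearest vocabulary words per image; no re-clustering here.
        return np.array([
            np.bincount(self.kmeans_.predict(desc), minlength=self.n_words)
            for desc in X
        ])
```

A pipeline built from this transformer plus a classifier can then be handed to GridSearchCV as usual.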
| 1 | 2 | 1 |
I would like to be use GridSearchCV to determine the parameters of a classifier, and using pipelines seems like a good option.
The application will be for image classification using Bag-of-Word features, but the issue is that there is a different logical pipeline depending on whether training or test examples are used.
For each training set, KMeans must run to produce a vocabulary that will be used for testing, but for test data no KMeans process is run.
I cannot see how it is possible to specify this difference in behavior for a pipeline.
|
Using custom Pipeline for Cross Validation scikit-learn
| 1.2 | 0 | 0 | 1,601 |
13,059,142 |
2012-10-24T23:00:00.000
| 5 | 1 | 0 | 0 |
python,ruby,sqlite
| 13,059,204 | 1 | true | 0 | 0 |
There is no good reason to choose one over the other as far as sqlite performance or usability.
Both languages have perfectly usable (and pythonic/rubyriffic) sqlite3 bindings.
In both languages, unless you do something stupid, the performance is bounded by the sqlite3 performance, not by the bindings.
Neither language's bindings are missing any uncommon but sometimes performance-critical functions (like an "exec many", manual transaction management, etc.).
There may be language-specific frameworks that are better or worse in how well they integrate with sqlite3, but at that point you're choosing between frameworks, not languages.
| 1 | 0 | 0 |
Which of these two languages interfaces better and delivers a better performance/toolset for working with sqlite database? I am familiar with both languages but need to choose one for a project I'm developing and so I thought I would ask here. I don't believe this to be opinionated as performance of a language is pretty objective.
|
ruby or python for use with sqlite database?
| 1.2 | 1 | 0 | 150 |
13,059,316 |
2012-10-24T23:18:00.000
| 1 | 0 | 1 | 0 |
python,wxpython,exe,pyinstaller
| 13,059,522 | 3 | false | 0 | 1 |
Checking for sys.frozen is a really good approach. You can also look into img2py which will let you load the binary data for images into a .py file. Later, instead of having to open files, they can be imported.
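A minimal sketch of the sys.frozen check for locating the .ico/.png files at run time; the resource names are placeholders (one-file PyInstaller builds unpack to sys._MEIPASS rather than the executable's directory):

```python
import os
import sys

if getattr(sys, 'frozen', False):
    # Running from the PyInstaller-built executable.
    base_dir = os.path.dirname(sys.executable)
else:
    # Running as a plain script.
    base_dir = os.path.dirname(os.path.abspath(__file__))

icon_path = os.path.join(base_dir, 'app.ico')    # placeholder resource names
logo_path = os.path.join(base_dir, 'logo.png')
```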
| 1 | 2 | 0 |
I'm trying to get a wxPython app working as an exe. I've heard that PyInstaller is now superior to py2exe. I'd like to include my .ico and two .png files that the script requires to run. What would the spec file for this look like? I can't seem to find a decent example anywhere. I have PyInstaller installed, but I can't find this "makespec" anywhere.
|
wxPython to exe with PyInstaller?
| 0.066568 | 0 | 0 | 2,693 |
13,059,891 |
2012-10-25T00:26:00.000
| 0 | 0 | 0 | 0 |
python,scroll,pygame,geometry-surface
| 13,068,085 | 3 | false | 0 | 1 |
I don't think so, I have an idea though. I'm guessing your background wraps horizontally and always to the right, then you could attach part of the beginning to the end of the background.
Example: if you have a 10,000px background and your viewport is 1,000px, attach the first 1,000px to the end of the background, so you'll have an 11,000px background. Then when the viewport reaches the end of the background, you just move it to the 0px position and continue moving right.
| 3 | 2 | 0 |
I'm coding a game where the viewport follows the player's ship in a finite game world, and I am trying to make it so that the background "wraps" around in all directions (you could think of it as a 2D surface wrapped around a sphere - no matter what direction you travel in, you will end up back where you started).
I have no trouble getting the ship and objects to wrap, but the background doesn't show up until the viewport itself passes an edge of the gameworld. Is it possible to make the background surface "wrap" around?
I'm sorry if I'm not being very articulate. It seems like a simple problem and tons of games do it, but I haven't had any luck finding an answer. I have some idea about how to do it by tiling the background, but it would be nice if I could just tell the surface to wrap.
|
Wrapping a pygame surface around a viewport
| 0 | 0 | 0 | 1,379 |
13,059,891 |
2012-10-25T00:26:00.000
| 0 | 0 | 0 | 0 |
python,scroll,pygame,geometry-surface
| 13,096,965 | 3 | false | 0 | 1 |
I was monkeying around with something similar to what you described that may be of use. I decided to try using a single map class which contained all of my Tiles, and I wanted only part of it loaded into memory at once so I broke it up into Sectors (32x32 tiles). I limited it to only having 3x3 Sectors loaded at once. As my map scrolled to an edge, it would unload the Sectors on the other side and load in new ones.
My Map class would have a Rect of all loaded Sectors, and my camera would have a Rect of where it was located. Each tick I would use those two Rects to find what part of the Map I should blit, and if I should load in new Sectors. Once you start to change what Sectors are loaded, you have to shift
Each sector had the following attributes:
1. Its Coordinate, with (0, 0) being the topleft most possible Sector in the world.
2. Its Relative Sector Coordinate, with (0, 0) being the topleft most loaded sector, and (2,2) the bottom right most if 3x3 were loaded.
3. A Rect that held the area of the Sector
4. A bool to indicate of the Sector was fully loaded
Each game tick would check the bool to see if the Sector was fully loaded, and if not, call next on a generator that would blit X tiles onto the Map surface.
The entire Surface
Each update would unload, load, or update an existing Sector
When an existing Sector was updated, it would shift
It would unload Sectors on update, and then create the new ones required. After being created, each Sector would start a generator that would blit X amount of tiles per update
| 3 | 2 | 0 |
I'm coding a game where the viewport follows the player's ship in a finite game world, and I am trying to make it so that the background "wraps" around in all directions (you could think of it as a 2D surface wrapped around a sphere - no matter what direction you travel in, you will end up back where you started).
I have no trouble getting the ship and objects to wrap, but the background doesn't show up until the viewport itself passes an edge of the gameworld. Is it possible to make the background surface "wrap" around?
I'm sorry if I'm not being very articulate. It seems like a simple problem and tons of games do it, but I haven't had any luck finding an answer. I have some idea about how to do it by tiling the background, but it would be nice if I could just tell the surface to wrap.
|
Wrapping a pygame surface around a viewport
| 0 | 0 | 0 | 1,379 |
13,059,891 |
2012-10-25T00:26:00.000
| 0 | 0 | 0 | 0 |
python,scroll,pygame,geometry-surface
| 13,131,429 | 3 | true | 0 | 1 |
Thanks everyone for the suggestions. I ended up doing something a little different from the answers provided. Essentially, I made subsurfaces of the main surface and used them as buffers, displaying them as appropriate whenever the viewport included coordinates outside the world. Because the scrolling is omnidirectional, I needed to use 8 buffers, one for each side and all four corners. My solution may not be the most elegant, but it seems to work well, with no noticeable performance drop.
| 3 | 2 | 0 |
I'm coding a game where the viewport follows the player's ship in a finite game world, and I am trying to make it so that the background "wraps" around in all directions (you could think of it as a 2D surface wrapped around a sphere - no matter what direction you travel in, you will end up back where you started).
I have no trouble getting the ship and objects to wrap, but the background doesn't show up until the viewport itself passes an edge of the gameworld. Is it possible to make the background surface "wrap" around?
I'm sorry if I'm not being very articulate. It seems like a simple problem and tons of games do it, but I haven't had any luck finding an answer. I have some idea about how to do it by tiling the background, but it would be nice if I could just tell the surface to wrap.
|
Wrapping a pygame surface around a viewport
| 1.2 | 0 | 0 | 1,379 |
13,060,069 |
2012-10-25T00:50:00.000
| 0 | 0 | 1 | 0 |
python,algorithm,colors,python-imaging-library
| 13,062,863 | 3 | false | 0 | 0 |
K-means is a good choice for this task because you know the number of main colors beforehand. You need to optimize K-means. I think you can reduce your image size, just scale it down to 100x100 pixels or so. Find the size at which your algorithm works with acceptable speed. Another option is to use dimensionality reduction before k-means clustering.
And try to find fast k-means implementation. Writing such things in python is a misuse of python. It's not supposed to be used like this.
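A minimal sketch of the shrink-then-cluster idea using PIL plus SciPy's k-means; the target size and the number of colours are arbitrary choices:

```python
import numpy as np
from PIL import Image
from scipy.cluster.vq import kmeans

img = Image.open('photo.jpg').convert('RGB')
img.thumbnail((100, 100))                       # shrink before clustering

pixels = np.asarray(img).reshape(-1, 3).astype(float)
centroids, _ = kmeans(pixels, 5)                # 5 dominant colours
print(centroids.astype(int))                    # rows are approximate RGB values
```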
| 1 | 8 | 1 |
Does anyone know a fast algorithm to detect main colors in an image?
I'm currently using k-means to find the colors together with Python's PIL but it's very slow. One 200x200 image takes 10 seconds to process. I've several hundred thousand images.
|
Fast algorithm to detect main colors in an image?
| 0 | 0 | 0 | 4,441 |
13,060,146 |
2012-10-25T01:00:00.000
| 3 | 0 | 1 | 0 |
python,windows,installation,windows-installer
| 13,060,316 | 2 | false | 0 | 0 |
You can do this with any of the installer applications out there. Each of the dependent installers has a silent install option, so your installer just needs to invoke the installers for each of the dependencies in the right order. I won't recommend any windows installer application in particular because I don't like any of them, but they will all do what you want.
The other option you have is to use py2exe which can bundle everything into a single exe file that runs in its own python environment. The plus side to this is you don't have to worry about installing Python in the users environment and have the user potentially uninstall python and then have your app stop working because everything is in a standalone environment.
Other ways that I have seen this done is with a custom exe written in whatever compiled Windows Language you prefer that does all this for you, but this takes a lot of work.
You could also get the advantage of the py2exe route with a little work on an installer you write with either an installer app or a standalone exe that handles the install, by manually placing the python.exe, dll and related code in the proper directories relative to your application code. You may have to mess with your PYTHONPATH environment setting when your app starts to get everything working, but this way you don't have to worry about installing Python and whether the user already has Python installed or if they uninstall it because then you have the Python version you need bundled with your app.
One thing to note is that if you are worried about size the Python installer itself is about 10 MB before any dependencies, but a lot of that is not relevant to an end user using your app, There is no Python Runtime Environment installer like there is a Java runtime Environment installer that just install what you need to run Python, you always get the development tools.
Hope this helps a little.
| 1 | 1 | 0 |
I know nothing on this subject, but I need suggestions about the best tools or method for creating a setup program that installs python, some custom python modules, some other python modules such as PIL, and some EXE dependencies, all living on a network repository, on windows machines. In the repository are installers for python (msi file), PIL (exe file), the custom python modules (pyc files), and two windows executables (and exe file and a zip file). Any advice welcome.
|
how to write installer (installing python, python modules and other dependencies) for windows boxes?
| 0.291313 | 0 | 0 | 2,621 |
13,060,254 |
2012-10-25T01:15:00.000
| 1 | 0 | 1 | 0 |
python,regex
| 13,060,269 | 4 | false | 0 | 0 |
Search for [^a-zA-Z] and replace with ' '
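A one-line sketch of that substitution using the example string from the question:

```python
import re

print(re.sub(r'[^a-zA-Z]', ' ', 'abcdükl*m'))   # -> 'abcd kl m'
```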
| 1 | 0 | 0 |
How can I replace any character outside of the English alphabet?
For example, 'abcdükl*m' replaced with a ' ' would be 'abcd kl m'
|
Replace any character outside of the English alphabet in Python?
| 0.049958 | 0 | 0 | 3,730 |
13,060,427 |
2012-10-25T01:40:00.000
| 1 | 0 | 0 | 0 |
python,sql,sorting,select
| 13,060,535 | 2 | false | 0 | 0 |
This is a very general question, but there are multiple things that you can do to possibly make your life easier.
1. CSV: These are very useful if you are storing data that is ordered in columns, and if you are looking for easy-to-read text files.
2. Sqlite3: Sqlite3 is a database system that does not require a server (it uses a file instead), and is interacted with just like any other database system. However, for very large scale projects that handle massive amounts of data, it is not recommended.
3. MySQL: MySQL is a database system that requires a server to interact with, but can be tweaked for very large scale projects as well as small scale projects.
There are many other different types of systems though, so I suggest you search around and find that perfect fit. However, if you want to mess around with Sqlite3 or CSV, both Sqlite3 and CSV modules are supplied in the standard library with python 2.7 and 3.x I believe.
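A minimal sketch of the standard-library sqlite3 option; the table layout and sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect('data.db')
conn.execute('CREATE TABLE IF NOT EXISTS measurements (name TEXT, qty REAL)')
conn.executemany('INSERT INTO measurements VALUES (?, ?)',
                 [('a', 1.0), ('b', 2.5), ('c', 0.4)])
conn.commit()

# Sorting and conditional selection happen in SQL instead of the spreadsheet.
rows = conn.execute('SELECT name, qty FROM measurements '
                    'WHERE qty > 0.5 ORDER BY qty DESC').fetchall()
print(rows)
conn.close()
```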
| 1 | 0 | 0 |
I have huge tables of data that I need to manipulate (sort, calculate new quantities, select specific rows according to some conditions and so on...). So far I have been using a spreadsheet software to do the job but this is really time consuming and I am trying to find a more efficient way to do the job.
I use python but I could not figure out how to use it for such things. I am wondering if anybody can suggest something to use. SQL?!
|
sorting and selecting data
| 0.099668 | 1 | 0 | 97 |
13,061,800 |
2012-10-25T04:54:00.000
| 1 | 0 | 0 | 0 |
python,csv,etl,sql-loader,smooks
| 14,449,025 | 3 | false | 0 | 0 |
Create a process/script that will call a procedure to load the CSV files into an external Oracle table, and another script to load them into the destination table.
You can also add cron jobs to call these scripts, which will keep track of incoming CSV files in the directory, process them and move each CSV file to an output/processed folder.
Exceptions can also be handled accordingly, by logging them or sending out an email. Good luck.
| 2 | 3 | 1 |
I have come across a problem and am not sure which would be the most suitable technology to implement it. I would be obliged if you can suggest some options based on your experience.
I want to load data from 10-15 CSV files, each of them fairly large, 5-10 GB. By load data I mean convert each CSV file to XML and then populate around 6-7 staging tables in Oracle using this XML.
The data needs to be populated such that the elements of the XML, and eventually the rows of the table, come from multiple CSV files. So, for example, an element A would have sub-elements with data coming from CSV file 1, file 2, file 3, etc.
I have a framework built on Top of Apache Camel, Jboss on Linux. Oracle 10G is the database server.
Options I am considering,
Smooks - However, the problem is that Smooks serializes one CSV at a time, and I can't afford to hold on to the half-baked Java beans until the other CSV files are read, since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated and written to disk as XML.
SQLLoader - I could skip the XML creation altogether and load the CSVs directly into the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader into the same tables, updating the records after the first file, and b. apply some translation rules while loading the staging tables.
Python script to convert the CSV to XML.
SQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need).
Thanks in advance. If someone can point me in the right direction or give me some insights from his/her personal experience it will help me make an informed decision.
regards,
-v-
PS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.
|
Choice of technology for loading large CSV files to Oracle tables
| 0.066568 | 1 | 0 | 2,011 |
13,061,800 |
2012-10-25T04:54:00.000
| 2 | 0 | 0 | 0 |
python,csv,etl,sql-loader,smooks
| 13,062,737 | 3 | true | 0 | 0 |
Unless you can use some full-blown ETL tool (e.g. Informatica PowerCenter, Pentaho Data Integration), I suggest the 4th solution - it is straightforward and the performance should be good, since Oracle will handle the most complicated part of the task.
| 2 | 3 | 1 |
I have come across a problem and am not sure which would be the most suitable technology to implement it. I would be obliged if you can suggest some options based on your experience.
I want to load data from 10-15 CSV files, each of them fairly large, 5-10 GB. By load data I mean convert each CSV file to XML and then populate around 6-7 staging tables in Oracle using this XML.
The data needs to be populated such that the elements of the XML, and eventually the rows of the table, come from multiple CSV files. So, for example, an element A would have sub-elements with data coming from CSV file 1, file 2, file 3, etc.
I have a framework built on Top of Apache Camel, Jboss on Linux. Oracle 10G is the database server.
Options I am considering,
Smooks - However, the problem is that Smooks serializes one CSV at a time, and I can't afford to hold on to the half-baked Java beans until the other CSV files are read, since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated and written to disk as XML.
SQLLoader - I could skip the XML creation altogether and load the CSVs directly into the staging tables using SQLLoader. But I am not sure if I can a. load multiple CSV files in SQL Loader into the same tables, updating the records after the first file, and b. apply some translation rules while loading the staging tables.
Python script to convert the CSV to XML.
SQLLoader to load a different set of staging tables corresponding to the CSV data and then writing stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid given the amount of changes to my existing framework it would need).
Thanks in advance. If someone can point me in the right direction or give me some insights from his/her personal experience it will help me make an informed decision.
regards,
-v-
PS: The CSV files are fairly simple with around 40 columns each. The depth of objects or relationship between the files would be around 2 to 3.
|
Choice of technology for loading large CSV files to Oracle tables
| 1.2 | 1 | 0 | 2,011 |
13,062,423 |
2012-10-25T05:55:00.000
| 0 | 0 | 1 | 0 |
python,coding-style,mutable,mutability
| 13,062,474 | 4 | false | 0 | 0 |
I suppose it depends on the use case. I don't see why returning an object from an in-place operation would hurt, other than maybe you won't use the result, but that's not really a problem if you're not being super-fastidious about pure functionalism. I like the call-chaining pattern, such as jQuery uses, so I appreciate it when functions return the object they've acted upon, in case I want to use it further.
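For illustration, here is a hypothetical class (not from the question) whose mutating methods return self so calls can be chained:

    class Box:
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)   # mutate in place...
            return self               # ...and hand the object back for chaining

        def clear(self):
            self.items[:] = []
            return self

    b = Box().add(1).add(2).clear().add(3)
    print(b.items)  # -> [3]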
| 1 | 27 | 0 |
I'm talking mostly about Python here, but I suppose this probably holds for most languages. If I have a mutable object, is it a bad idea to make an in-place operation also return the object? It seems like most examples just modify the object and return None. For example, list.sort.
|
Is making in-place operations return the object a bad idea?
| 0 | 0 | 0 | 3,445 |
13,068,227 |
2012-10-25T12:08:00.000
| 1 | 0 | 0 | 0 |
python,mysql,mysql-python
| 13,299,592 | 2 | true | 0 | 0 |
MySQLdb-1.2.4 (to be released within the next week) and the current release candidate has support for MySQL-5.5 and newer and should solve your problem. Please try 1.2.4c1 from PyPi (pip install MySQL-python)
| 1 | 2 | 0 |
I'm using python's MySQLdb to fetch rows from a MySQL 5.6.7 db, that supports microsecond precision datetime columns. When I read a row with MySQLdb I get "None" for the time field. Is there are way to read such time fields with python?
|
How to read microsecond-precision mysql datetime fields with python
| 1.2 | 1 | 0 | 310 |
13,068,257 |
2012-10-25T12:10:00.000
| 11 | 0 | 0 | 0 |
python,multithreading,numpy,machine-learning,scikit-learn
| 13,084,224 | 2 | false | 0 | 0 |
For linear models (LinearSVC, SGDClassifier, Perceptron...) you can chunk your data, train independent models on each chunk, and build an aggregate linear model (e.g. SGDClassifier) by sticking the average values of coef_ and intercept_ into it as attributes. The predict methods of LinearSVC, SGDClassifier and Perceptron compute the same function (a linear prediction using a dot product with an intercept_ threshold, plus one-vs-all multiclass support), so the specific model class you use for holding the averaged coefficients is not important.
However as previously said the tricky point is parallelizing the feature extraction and current scikit-learn (version 0.12) does not provide any way to do this easily.
Edit: scikit-learn 0.13+ now has a hashing vectorizer that is stateless.
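A rough outline of that averaging idea (chunking and feature extraction are omitted; the attribute handling follows the description above and may need adjusting for your scikit-learn version):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def average_linear_models(models):
        # one aggregate model whose coefficients are the mean of the per-chunk models
        avg = SGDClassifier()
        avg.coef_ = np.mean([m.coef_ for m in models], axis=0)
        avg.intercept_ = np.mean([m.intercept_ for m in models], axis=0)
        avg.classes_ = models[0].classes_   # assumes every chunk saw the same classes
        return avg

    # models = [SGDClassifier().fit(X_chunk, y_chunk) for X_chunk, y_chunk in chunks]
    # clf = average_linear_models(models)
    # predictions = clf.predict(X_test)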
| 1 | 10 | 1 |
I got LinearSVC working against the training set and test set using the load_file method; now I am trying to get it working in a multiprocessor environment.
How can I get multiprocessing to work with LinearSVC().fit() and LinearSVC().predict()? I am not really familiar with scikit-learn's datatypes yet.
I am also thinking about splitting the samples into multiple arrays, but I am not familiar with numpy arrays and scikit-learn data structures.
Doing this, it would be easier to feed into multiprocessing.Pool(): split the samples into chunks, train them, and combine the trained sets back later. Would it work?
EDIT:
Here is my scenario:
Let's say we have 1 million files in the training sample set. When we want to distribute the TfidfVectorizer processing over several processors, we have to split those samples (in my case there will only be two categories, so let's say 500,000 samples each to train). My server has 24 cores with 48 GB, so I want to split each topic into 1000000 / 24 chunks and process TfidfVectorizer on them. I would do the same for the testing sample set, as well as for SVC.fit() and decide(). Does it make sense?
Thanks.
PS: Please do not close this .
|
Multiprocessing scikit-learn
| 1 | 0 | 0 | 11,435 |
13,068,300 |
2012-10-25T12:12:00.000
| 4 | 0 | 0 | 1 |
python,cloud,celery,distributed-computing
| 13,068,496 | 1 | true | 0 | 0 |
IMHO it's a very good idea. I have used it a few times on Amazon EC2 in this manner and it was great each time.
One of the big advantages is that it can handle failure of worker servers, so the dynamic nature of the infrastructure is not a problem and you still get things done.
I'm sorry that this answer is so brief, but I believe it answers OPs question. There's not much more to it. Celery is great, does the job, has good docs. Go with it :)
| 1 | 1 | 0 |
I'm going to use Celery to manage tasks in a cluster. There will be one master server and some worker servers. The master sends tasks to the worker servers (any number) and gets the result. Task state should be trackable. The backend is RabbitMQ.
Is using Celery in this case a good idea? Or are there better solutions?
|
Is using Celery for task management in cluster good idea?
| 1.2 | 0 | 0 | 858 |
13,070,759 |
2012-10-25T14:24:00.000
| 1 | 1 | 0 | 0 |
php,python
| 13,070,806 | 1 | false | 0 | 0 |
Not at runtime - this would make no sense due to the overheads involved and the risk of the download failing.
| 1 | 0 | 0 |
Is there a way to dynamically download and install a package like AWS API from a PHP or Python script at runtime?
Thanks.
|
Python/PHP - Downloading and installing AWS API
| 0.197375 | 0 | 1 | 42 |
13,073,147 |
2012-10-25T16:31:00.000
| 1 | 0 | 0 | 0 |
python
| 13,073,177 | 2 | true | 1 | 0 |
There are a number of tools out there for this purpose. For example, Selenium, which even has a package on PyPI with Python bindings for it, will do the job.
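A minimal Selenium sketch (the URL and field names are placeholders, and the locator helpers differ a bit between Selenium versions):

    from selenium import webdriver

    driver = webdriver.Firefox()                    # opens a real browser window
    driver.get('http://example.com/form')           # placeholder URL
    driver.find_element_by_name('username').send_keys('me')      # placeholder field names
    driver.find_element_by_name('password').send_keys('secret')
    driver.find_element_by_name('submit').click()
    driver.quit()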
| 1 | 1 | 0 |
I am not sure if this is possible, but I was wondering if it would be possible to write a script or program that would automatically open up my web browser, go to a certain site, fill out information, and click "send"? And if so, where would I even begin? Here's a more detailed overview of what I need:
Open browser
Go to website
Fill out a series of forms
Click OK
Fill out more forms
Click OK
Thank you all in advance.
|
script to open web browser and enter data
| 1.2 | 0 | 1 | 1,062 |
13,077,263 |
2012-10-25T21:04:00.000
| 1 | 1 | 0 | 0 |
python,image-processing,pixel,imaging
| 13,078,321 | 2 | true | 0 | 0 |
If you want to keep a compressed file format, you can break each image up into smaller rectangles and store them separately. Using a fixed size for the rectangles will make it easier to calculate which one you need. When you need the pixel value, calculate which rectangle it's in, open that image file, and offset the coordinates to get the proper pixel.
This doesn't completely optimize access to a single pixel, but it can be much more efficient than opening an entire large image.
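A sketch of that bookkeeping with PIL, assuming fixed 256x256 tiles and a made-up file naming scheme:

    from PIL import Image

    TILE = 256  # fixed tile size in pixels

    def get_pixel(image_id, x, y):
        # which tile does the pixel fall in, and where inside that tile?
        tx, ty = x // TILE, y // TILE
        tile = Image.open('%s_tile_%d_%d.png' % (image_id, tx, ty))
        return tile.getpixel((x % TILE, y % TILE))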
| 1 | 0 | 0 |
Is there any way with Python to directly get (only get, not modify) a single pixel (to read its RGB color) from an image (in a compressed format if possible) without having to load the whole image into RAM or process it (to spare the CPU)?
More details:
My application is meant to have a huge database of images, and only of images.
So what I chose is to store images directly on the hard drive; this will avoid the additional workload of a DBMS.
However I would like to optimize some more, and I'm wondering if there's a way to directly access a single pixel from an image (the only action on images that my application does), without having to load it in memory.
Does PIL pixel access allow that? Or is there another way?
The encoding of the images is my own choice, so I can change it whenever I want. Currently I'm using PNG or JPG. I can also store them raw, but I would prefer to keep the images a bit compressed if possible. But I think hard drives are cheaper than CPU and RAM, so even if the images must stay raw in order to do that, I think it's still a better bet.
Thank you.
UPDATE
So, as I feared, it seems that it's impossible to do with variable compression formats such as PNG.
I'd like to refine my question:
Is there a constant compression format (not necessarily specific to an image format, I'll access it programmatically), which would allow to access any part by just reading the headers?
Technically, how to efficiently (read: fast and non blocking) access a byte from a file with Python?
SOLUTION
Thanks to all; I have successfully implemented the functionality I described by using run-length encoding on every row and padding every row to the length of the longest row.
This way, by prepending a header that describes the fixed number of columns for each row, I can easily access a row using first a file.readline() to get the header data, then file.seek(headersize + fixedsize*y, 0) where y is the row currently selected.
Files are compressed, and in memory I only fetch a single row, and my application doesn't even need to uncompress it because I can compute where the pixel is exactly by just iterating over every RLE values. So it is also very easy on CPU cycles.
|
Direct access to a single pixel using Python
| 1.2 | 0 | 0 | 693 |
13,078,071 |
2012-10-25T22:08:00.000
| 2 | 0 | 1 | 1 |
python,subprocess
| 13,078,126 | 2 | false | 0 | 0 |
When I use subprocess.Popen, it starts the separate program, but does so under the original program's Python instance...
Incorrect.
... so that they share the first Python console.
This is the crux of your problem. If you want it to run in another console then you must run another console and tell it to run your program instead.
... I'm aiming for cross-platform compatibility ...
Sorry, there's no cross-platform way to do it. You'll need to run the console/terminal appropriate for the platform.
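A per-platform sketch of that idea (xterm is just one possible terminal emulator on Linux; substitute whatever is installed):

    import subprocess
    import sys

    def launch_in_new_console(args):
        if sys.platform == 'win32':
            # Windows can be asked for a brand new console window
            return subprocess.Popen(args, creationflags=subprocess.CREATE_NEW_CONSOLE)
        # elsewhere, start a terminal emulator and tell it to run the program
        return subprocess.Popen(['xterm', '-e'] + list(args))

    # launch_in_new_console(['python', 'game.py'])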
| 1 | 8 | 0 |
I'm trying to run an external, separate program from Python. It wouldn't be a problem normally, but the program is a game and has a Python interpreter built into it. When I use subprocess.Popen, it starts the separate program, but does so under the original program's Python instance, so that they share the first Python console. I can end the first program fine, but I would rather have separate consoles (mainly because I have the console start off hidden, but it gets shown when I start the program from Python with subprocess.Popen).
I would like it if I could start the second program wholly on its own, as though I just 'double-clicked on it'. Also, os.system won't work because I'm aiming for cross-platform compatibility, and that's only available on Windows.
|
Start Another Program From Python >Separately<
| 0.197375 | 0 | 0 | 13,835 |
13,080,270 |
2012-10-26T03:17:00.000
| 1 | 0 | 1 | 0 |
python,review-board
| 13,080,371 | 2 | false | 0 | 0 |
I just think that before computing the diff, you should reformat the JSON objects, say into a consistent alphabetical and numeric order, so that ordering differences don't show up in the diff.
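A sketch of that normalisation plus diff using only the standard library (whether Reviewboard accepts the result as a manual diff is something to verify separately):

    import difflib
    import json

    def json_diff(old, new):
        # sort keys so ordering differences never show up in the diff
        a = json.dumps(old, indent=4, sort_keys=True).splitlines()
        b = json.dumps(new, indent=4, sort_keys=True).splitlines()
        return '\n'.join(difflib.unified_diff(a, b, fromfile='old.json',
                                              tofile='new.json', lineterm=''))

    print(json_diff({'x': 1, 'y': 2}, {'y': 3, 'x': 1}))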
| 1 | 0 | 0 |
How do I create a diff of two JSON objects such that it is in the manual diff format which can be sent to Reviewboard? I need to generate the diff from inside a Python script. I think manual diffs are generated using the "diff file1 file2" command line utility. Can I generate a similar Reviewboard-compatible diff using difflib? Or is there another library that I need to use? Thanks!
|
How to create a manual diff between two Json objects which can be sent to Reviewboard using python?
| 0.099668 | 0 | 0 | 515 |
13,081,659 |
2012-10-26T06:09:00.000
| 7 | 1 | 0 | 0 |
python,django,wsgi,pyc
| 13,081,746 | 1 | true | 0 | 0 |
The best strategy for doing deployments is to write the deployed files into a new directory, and then use a symlink or similar to swap the codebase over in a single change. This has the side-benefit of also automatically clearing any old .pyc files.
That way, you get the best of both worlds - clean and atomic deployments, and the caching of .pyc if your webapp needs to restart.
If you keep the last N deployment directories around (naming them by date/time is useful), you also have an easy way to "roll back" to a previously deployed version of the code. If you have multiple server machines, you can also deploy to all of the machines but wait to switch them over until all of them have gotten the new code.
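A bare-bones sketch of the symlink swap on a POSIX system (the paths are made up; most people script this in their deploy tooling rather than in the app itself):

    import os

    def switch_release(new_release_dir, current_link='/srv/app/current'):
        # point a temporary link at the freshly written release directory...
        tmp_link = current_link + '.tmp'
        if os.path.lexists(tmp_link):
            os.remove(tmp_link)
        os.symlink(new_release_dir, tmp_link)
        # ...then atomically rename it over the live link
        os.rename(tmp_link, current_link)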
| 1 | 2 | 0 |
Seems like with ever increasing frequency, I am bit by pyc files running outdated code.
This has led to deployment scripts scrubbing *.pyc each time, otherwise deployments don't seem to take effect.
I am wondering, what benefit (if any) is there to pyc files in a long-running WSGI application? So far as I know, the only benefit is improved startup time, but I can't imagine it's that significant--and even if it is, each time new code is deployed you can't really use the old pyc files anyways.
This makes me think that best practice would be to run a WSGI application with the PYTHONDONTWRITEBYTECODE environment variable set.
Am I mistaken?
|
Is there any benefit to pyc files in a WSGI app where deployments happen several times per week?
| 1.2 | 0 | 0 | 593 |
13,083,026 |
2012-10-26T07:58:00.000
| 2 | 1 | 1 | 0 |
python,import
| 13,083,221 | 3 | false | 0 | 0 |
If you don't want Python to search the current folder before the built-in module locations,
you can change sys.path.
Upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter
(in interactive mode sys.path[0] is the empty string, which directs Python to search for modules in the current directory first). You can move this entry to the end of the list; that way Python will first search all the other locations before falling back to the current directory.
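A sketch of that reordering, done at the very top of the script before the import:

    import sys

    # move the script's own directory (sys.path[0]) to the end of the search path,
    # so the copy in C:\Python27\Lib (already on sys.path) is found first
    sys.path.append(sys.path.pop(0))

    import my_tools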
| 1 | 5 | 0 |
Imagine I have a script, let's say my_tools.py that I import as a module. But my_tools.py is saved twice: at C:\Python27\Lib
and at the same directory from where the script is run that does the import.
Can I change the order where python looks for my_tools.py first? That is, to check first if it exists at C:\Python27\Lib and if so, do the import?
|
Can I change the order where python looks for a module first?
| 0.132549 | 0 | 0 | 2,883 |
13,084,686 |
2012-10-26T09:55:00.000
| -1 | 0 | 0 | 1 |
python,linux,storage,hdf5,data-acquisition
| 13,102,233 | 3 | false | 0 | 0 |
In your case, you could just create 15 files and save each sample sequentially into the corresponding file. This will make sure the requested samples are stored contiguously on disk and hence reduce the number of disk seeks while reading.
| 2 | 7 | 0 |
I'm building a system for data acquisition. Acquired data typically consists of 15 signals, each sampled at (say) 500 Hz. That is, each second approx 15 x 500 x 4 bytes (signed float) will arrive and have to be persisted.
The previous version was built on .NET (C#) using a DB4O db for data storage. This was fairly efficient and performed well.
The new version will be Linux-based, using Python (or maybe Erlang) and ... Yes! What is a suitable storage-candidate?
I'm thinking MongoDB, storing each sample (or actually a bunch of them) as BSON objects. Each sample (block) will have a sample counter as a key (indexed) field, as well as a signal source identification.
The catch is that I have to be able to retrieve samples pretty quickly. When requested, up to 30 seconds of data have to be retrieved in much less than a second, using a sample counter range and requested signal sources. The current (C#/DB4O) version manages this OK, retrieving data in much less than 100 ms.
I know that Python might not be ideal performance-wise, but we'll see about that later on.
The system ("server") will have multiple acquisition clients connected, so the architecture must scale well.
Edit: After further research I will probably go with HDF5 for sample data and either Couch or Mongo for more document-like information. I'll keep you posted.
Edit: The final solution was based on HDF5 and CouchDB. It performed just fine, implemented in Python, running on a Raspberry Pi.
|
What is a good storage candidate for soft-realtime data acquisition under Linux?
| -0.066568 | 0 | 0 | 877 |
13,084,686 |
2012-10-26T09:55:00.000
| 2 | 0 | 0 | 1 |
python,linux,storage,hdf5,data-acquisition
| 13,143,593 | 3 | false | 0 | 0 |
Using the keys you described, you should be able to scale via sharding if necessary. 120 kB / 30 sec is not that much, so I think you do not need to shard too early.
If you compare that to just using files, you'll get more sophisticated queries and built-in replication for high availability, DS or offline processing (MapReduce etc.).
| 2 | 7 | 0 |
I'm building a system for data acquisition. Acquired data typically consists of 15 signals, each sampled at (say) 500 Hz. That is, each second approx 15 x 500 x 4 bytes (signed float) will arrive and have to be persisted.
The previous version was built on .NET (C#) using a DB4O db for data storage. This was fairly efficient and performed well.
The new version will be Linux-based, using Python (or maybe Erlang) and ... Yes! What is a suitable storage-candidate?
I'm thinking MongoDB, storing each sample (or actually a bunch of them) as BSON objects. Each sample (block) will have a sample counter as a key (indexed) field, as well as a signal source identification.
The catch is that I have to be able to retrieve samples pretty quickly. When requested, up to 30 seconds of data have to be retrieved in much less than a second, using a sample counter range and requested signal sources. The current (C#/DB4O) version manages this OK, retrieving data in much less than 100 ms.
I know that Python might not be ideal performance-wise, but we'll see about that later on.
The system ("server") will have multiple acquisition clients connected, so the architecture must scale well.
Edit: After further research I will probably go with HDF5 for sample data and either Couch or Mongo for more document-like information. I'll keep you posted.
Edit: The final solution was based on HDF5 and CouchDB. It performed just fine, implemented in Python, running on a Raspberry Pi.
|
What is a good storage candidate for soft-realtime data acquisition under Linux?
| 0.132549 | 0 | 0 | 877 |
13,085,658 |
2012-10-26T11:00:00.000
| 1 | 0 | 0 | 0 |
python,django,postgresql,django-south
| 13,085,822 | 2 | true | 1 | 0 |
If you add a column to a table, which already has some rows populated, then either:
the column is nullable, and the existing rows simply get a null value for the column
the column is not nullable but has a default value, and the existing rows are updated to have that default value for the column
To produce a non-nullable column without a default, you need to add the column in multiple steps. Either:
add the column as nullable, populate the defaults manually, and then mark the column as not-nullable
add the column with a default value, and then remove the default value
These are effectively the same, they both will go through updating each row.
I don't know South, but from what you're describing, it is aiming to produce a single DDL statement to add the column, and doesn't have the capability to add it in multiple steps like this. Maybe you can override that behaviour, or maybe you can use two migrations?
By contrast, when you are creating a table, there clearly is no existing data, so you can create non-nullable columns without defaults freely.
| 2 | 0 | 0 |
I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something.
What I don't get is that many of the fields I initially wrote in my models (say, before the initial schemamigration --init, or in an app converted to South; I did both) were never run against this check, since I didn't get the null/default error.
Is it normal?
Why is it so? And why is South checking this null/default thing anyway?
|
South initial migrations are not forced to have a default value?
| 1.2 | 1 | 0 | 107 |
13,085,658 |
2012-10-26T11:00:00.000
| 0 | 0 | 0 | 0 |
python,django,postgresql,django-south
| 13,085,826 | 2 | false | 1 | 0 |
When you have existing records in your database and you add a column to one of your tables, you will have to tell the database what to put in there; South can't read your mind :-)
So unless you mark the new field null=True or provide a default value, it will raise an error. If you had an empty database, there are no values to be set, but a model field would still require basic properties. If you look deeper at the field class you're using, you will see Django sets some default values, like max_length and null (depending on the field).
| 2 | 0 | 0 |
I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something.
What I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run against this check, since I didn't have the null/default error.
Is it normal?
Why is it so? And why is South checking this null/default thing anyway?
|
South initial migrations are not forced to have a default value?
| 0 | 1 | 0 | 107 |
13,085,946 |
2012-10-26T11:18:00.000
| 0 | 1 | 0 | 0 |
python,django,gmail,gmail-imap
| 13,087,381 | 2 | false | 1 | 0 |
I would suggest you look at context.io; I've used it before and it works great.
| 1 | 3 | 0 |
I'm looking for an API or library that gives me access to all features of Gmail from a Django web application.
I know I can receive and send email using IMAP or POP3. However, what I'm looking for are all the GMail features such as marking emails with star or important marker, adding or removing tags, etc.
I know there is a Settings API that allows me to create or delete labels and filters, but I haven't found anything that actually allows me to set labels to emails, or set emails as starred, and so on.
Can anyone give me a pointer?
|
A Django library for Gmail
| 0 | 0 | 0 | 1,086 |
13,090,222 |
2012-10-26T15:51:00.000
| 0 | 0 | 0 | 0 |
python,user-interface,pyqt4
| 14,197,531 | 1 | false | 0 | 1 |
There is no need to search for a fully-featured library; I think it would be unnecessary and a waste of time. There are lots of frustrating issues, such as localization settings, number format settings, validation issues, etc.
I recommend using a Qt Designer form layout to create a form quickly.
Then use PyQt4 validation (QValidator), and you can use regular expressions (Python's new regex module is better than QRegExp).
In the end, manage your custom objects via jsonpickle or just Python's pickle.
| 1 | 1 | 0 |
My question is not specific to Python/PyQt4 but it's the language and api I'm currently using. I want to know if there is a library to automatically generate GUI forms from public parameters of an object. It would be very useful for automatic settings generation.
|
Best practice to generate the settings view of an UI
| 0 | 0 | 0 | 70 |
13,090,227 |
2012-10-26T15:52:00.000
| 1 | 0 | 0 | 0 |
python,user-interface,tkinter,ttk
| 13,090,292 | 2 | true | 0 | 1 |
OK, this might not be the best way, but for each widget in the tabs, pass a variable into the functions, which is then used in an if statement to check which tab is currently selected. As you're only using two tabs this could be a Boolean; if more tabs are needed, a more complex step will be needed, but that is a simple, if not pretty, way to do this.
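A small runnable sketch of the shared-function approach (Python 3 module names; in Python 2 the imports are Tkinter and ttk):

    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    notebook = ttk.Notebook(root)
    tab_data = [['tab 0 data'], ['tab 1 data']]   # one data set per tab

    for title in ('First', 'Second'):
        notebook.add(ttk.Frame(notebook), text=title)
    notebook.pack()

    def shared_handler():
        # .select() names the selected pane; .index() maps it to 0 or 1
        index = notebook.index(notebook.select())
        print('working on', tab_data[index])

    tk.Button(root, text='Do it', command=shared_handler).pack()
    root.mainloop()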
| 1 | 4 | 0 |
I am currently making a GUI using Tk and have implemented a ttk notebook with two separate tabs. Each tab holds its own data but calls the same functions to interact with that data. Is that a sane way to do this, or should I just make more functions and call them separately? The functions need to know which tab is currently selected.
Thanks.
|
Two tabs using ttk notebook, but separate functions for the two?
| 1.2 | 0 | 0 | 1,005 |
13,090,476 |
2012-10-26T16:06:00.000
| 1 | 0 | 1 | 0 |
google-app-engine,memory-management,python-2.7,multi-tenant
| 13,126,025 | 1 | false | 1 | 0 |
I agree with Nick: there should be no Python code in the tenant-specific zip. To solve the memory issue I would cache most of the pages in the datastore; to serve them you don't need to have all tenants loaded in your instances. You might also want to look into pre-generating HTML views on save rather than on request.
| 1 | 1 | 0 |
I have a multitenant app with a zipped package for each tenant/client which contains the templates and handlers for that client's public site. Right now I have under 50 tenants and it's fine to keep the imported apps in memory after the first request to that specific client's domain.
This approach works well, but I have to redeploy the app with the new client's zipped package every time I make changes and/or a new client gets added.
Now I'm working on making it possible to upload those packages via web upload and store them in the blobstore.
My concerns now are:
Getting the packages from the blobstore is of course slower than importing a zipped package from the filesystem.
But this is not the biggest issue.
How do I load/import a module that is not in the filesystem and has no path?
If every client's package is around 1 MB it's not a problem as long as the client base is low, but what if it rises to 1k clients or even more? Obviously I then don't have enough memory to hold a few GB of data in memory.
What is the best way to deal with this?
If I use the instance memory to store a previously loaded tenant package, how would I invalidate the data in memory when a new package gets uploaded?
I would appreciate some thoughts about how to deal with this kind of situation.
|
zipped packages and in memory storage strategies
| 0.197375 | 0 | 0 | 57 |
13,090,479 |
2012-10-26T16:06:00.000
| 11 | 0 | 0 | 0 |
python,django,django-admin
| 13,090,557 | 1 | true | 1 | 0 |
This sounds like a browser issue rather than a Django issue.
To unselect an element in a multiple select, press the Ctrl key (linux / windows) or the Command key (mac) when you click on it.
| 1 | 3 | 0 |
Having A = ManyToManyField(B, null=True, blank=True), when I go in A's admin page, it seems I can't unselect every entries in the ManyToMany box after having clicked on a B element.
And even if I don't click on any entry, there is a related B element selected after saving (the first B element I guess).
But I want to add A elements without having to relate them to any one of B...
Is there any way to say to Django admin to select no element? (other than creating a dummy B element for those situations)
|
ManyToMany in Django admin: select none
| 1.2 | 0 | 0 | 2,338 |
13,093,951 |
2012-10-26T20:25:00.000
| 2 | 0 | 0 | 0 |
python,django,gethostbyaddr
| 38,819,664 | 3 | false | 1 | 0 |
You can just print HttpRequest.META and find what you want; I think req.META['HTTP_ORIGIN'] is the thing you need. It's the same as the browser address bar value.
| 1 | 2 | 0 |
What is the easiest way to obtain the user's host/domain name, if available?
Or is there a function to lookup the IP address of the user to see if it is bound to a named address? (i.e. gethostbyaddr() in PHP)
HttpRequest.get_host() only returns the IP address of the user.
|
Django get client's domain name/host name
| 0.132549 | 0 | 0 | 10,741 |
13,094,941 |
2012-10-26T21:51:00.000
| 4 | 0 | 1 | 0 |
python,spyder
| 13,096,004 | 6 | true | 0 | 0 |
At the time this question was asked, Python 3 was not supported by Spyder (and this answer said so, giving some details of the then-incomplete porting efforts).
But that's not the case any longer! Recent builds of Spyder should work with Python 3. Check out the other answers for some links to places to get it from (though they may be out of date themselves by this point).
| 3 | 25 | 0 |
By default Spyder uses Python 2.7.2, and my question is: is there a way to set up Spyder so that it automatically uses Python 3.x? Thanks!
|
Switch to Python 3.x in Spyder
| 1.2 | 0 | 0 | 40,370 |
13,094,941 |
2012-10-26T21:51:00.000
| 2 | 0 | 1 | 0 |
python,spyder
| 32,678,175 | 6 | false | 0 | 0 |
On Ubuntu 14.04 I found spyder3 in the official repository.
| 3 | 25 | 0 |
By default Spyder uses Python 2.7.2, and my question is: is there a way to set up Spyder so that it automatically uses Python 3.x? Thanks!
|
Switch to Python 3.x in Spyder
| 0.066568 | 0 | 0 | 40,370 |
13,094,941 |
2012-10-26T21:51:00.000
| 18 | 0 | 1 | 0 |
python,spyder
| 17,067,938 | 6 | false | 0 | 0 |
Since the end of May 2013, version v2.3.0dev1 of Spyder works with Python 3.3 and above.
It is in a usable state but there are a few minor problems.
Hopefully they will be resolved soon.
| 3 | 25 | 0 |
By default Spyder uses Python 2.7.2, and my question is: is there a way to set up Spyder so that it automatically uses Python 3.x? Thanks!
|
Switch to Python 3.x in Spyder
| 1 | 0 | 0 | 40,370 |
13,095,994 |
2012-10-27T00:11:00.000
| 0 | 0 | 0 | 1 |
python,linux
| 13,097,273 | 2 | false | 0 | 0 |
If it is the login program for X11, you can put it into ~/.xinitrc. That is the X session startup script.
| 1 | 1 | 0 |
I am trying to write a custom login program for a linux system using Python 3. what is the best way to have an application automatically run at startup?
|
Running a Python Application at Startup
| 0 | 0 | 0 | 791 |
13,097,843 |
2012-10-27T06:25:00.000
| 0 | 0 | 1 | 0 |
python,plone,zope
| 13,676,795 | 2 | false | 1 | 0 |
If you change the content in any way (or just re-save it) a duplicate of the object is created (which allows you to undo later). If you change only the metadata (like the title) the object is usually not duplicated.
These duplicated "backup" copies are removed (and the undo option for them) whenever the database is packed.
These rules depend on the object being persistent: that covers almost all normal Zope (and Plone) objects. Some exceptions may exist, but they are rare.
| 1 | 4 | 0 |
In plone, how many physical copies of a file (or any content) exist if it is revised say 4 times? I am using plone 4.1 wherein the files and images are stored on the file system.
|
Do as many copies as the number of revisions exist for a file in plone?
| 0 | 0 | 0 | 122 |
13,097,975 |
2012-10-27T06:52:00.000
| 5 | 0 | 0 | 1 |
python,google-app-engine,deployment
| 13,102,795 | 1 | true | 1 | 0 |
No, there isn't. If you change one file, you need to package and upload the whole application.
| 1 | 4 | 0 |
Is it possible to update a single .py file in an existing GAE app, something like the way we update cron.yaml using appcfg.py update_cron?
Is there any way to update a .py file?
Regards.
|
Can I deploy (update) Single Python file to existing Google App Engine application?
| 1.2 | 0 | 0 | 1,137 |
13,098,457 |
2012-10-27T08:19:00.000
| 3 | 0 | 0 | 0 |
python,django,macos,pip
| 13,098,637 | 4 | false | 1 | 0 |
django-admin is not on your path. You could search for it with find / -name django-admin.py and add its location to your .profile/.bashrc/whatever. Let me recommend using virtualenv for everything Python-related you do, though. Installing into a local environment prevents this kind of problem.
Each environment comes with its own Python distribution, so you can keep different versions of Python in different environments. It also ignores globally installed packages with the --no-site-packages flag (which is default) but this doesn't work properly with packages installed using eg Ubuntu's apt-get (they are in dist-packages iirc). Any packages installed using pip or easy_install inside the environment are also only local. This lets you simulate different deployments. But most importantly, it keeps the global environment clean.
| 1 | 4 | 0 |
I installed python using: brew install python and then eventually pip install Django. However when I try to run django-admin.py startproject test I just get a file not found. What did I forget?
|
Installing Django with pip, django-admin not found
| 0.148885 | 0 | 0 | 7,509 |
13,099,032 |
2012-10-27T09:50:00.000
| 2 | 0 | 0 | 1 |
python,logging,uwsgi,gevent
| 13,099,158 | 2 | true | 1 | 0 |
If latency is a crucial factor for your app, indefinitely writing to disk could make things really bad.
If you want to survive a reboot of your server while redis is still down, I see no other solution than writing to disk; otherwise you may want to try a ramdisk.
Are you sure having a second server with a second instance of redis would not be a better choice?
Regarding logging, I would simply use low-level I/O functions, as they have less overhead (even if we are talking about very few machine cycles).
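A sketch of the low-level fallback write (the filename is arbitrary, and the payload is assumed to already be bytes):

    import os

    FALLBACK_FD = os.open('redis_fallback.log',
                          os.O_WRONLY | os.O_CREAT | os.O_APPEND)

    def log_failed_item(payload):
        # one os.write per record; O_APPEND makes every write land at the end of the file
        os.write(FALLBACK_FD, payload + b'\n')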
| 1 | 1 | 0 |
I am using the gevent loop in uWSGI and I write to a redis queue. I get about 3.5 qps. On occasion there will be an issue with the redis connection, so... if the write fails, then write to a file where a separate process will do cleanup later. Because my app is very latency-aware, what is the fastest way to dump to disk in Python? Will Python logging suffice?
|
Fastest way to write to a log in python
| 1.2 | 0 | 0 | 2,514 |
13,099,963 |
2012-10-27T11:56:00.000
| 1 | 0 | 1 | 0 |
python,oop,class,decoupling,astronomy
| 13,100,096 | 1 | true | 0 | 1 |
Instead of body IDs, why not add the Bodys themselves to the dictionary of pygame objects? After all, a Python variable is just a label, so the renderer wouldn't need to know anything about what the variable represents. And it might save you having to look up IDs.
A related option is to add one or more viewport objects to your universe. No matter what the implementation of the viewing mechanism, you typically don't want to show the whole universe, so a viewport would be a proper attribute of the universe. A couple of methods would go with that viewport (apart from creating and sizing it). First, a method to get all Bodys in the viewport (that could just return a list you'd keep in the viewport object). Second, a method to get a tuple containing two lists: one of Bodys that have appeared in the viewport, and one of Bodys that have gone out of the viewport since the last update. The viewport should also have an update method that is called by the universe's update method.
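A rough sketch of such a viewport object (the names and the rectangle test are my own guesses at what fits your Uni/Body classes):

    class Viewport(object):
        def __init__(self, x, y, width, height):
            self.rect = (x, y, width, height)
            self._visible = set()

        def _contains(self, body):
            x, y, w, h = self.rect
            return x <= body.x < x + w and y <= body.y < y + h

        def visible_bodies(self):
            return list(self._visible)

        def update(self, bodies):
            # called from Uni.update(); reports which bodies entered or left since last time
            now = set(b for b in bodies if self._contains(b))
            entered, left = now - self._visible, self._visible - now
            self._visible = now
            return entered, left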
| 1 | 1 | 0 |
I'm working on an astronomy project, making one of those gravity simulator programs. I have a Uni class, which represents a universe filled with celestial bodies (instances of the Body class).
This Uni class is capable of updating itself, adding new bodies, and removing bodies by their id. It's completely math-based, and should work alone.
Around it, I'm planning to build the program that uses PyGame to optionally display the simulation in real time and MatPlotLib to analyze the results. However, I am a bit confused over how to keep my computation (Uni) and rendering (Renderer) decoupled!
I envisioned it like this:
The main program:
Imports PyGame, initializes it, and creates a screen.
Instantiates a universe, fills it with bodies, etc (actually done by a FileManager, which reads JSON specs for a uni).
Creates a Renderer
Enters a while run: loop:
uni.update(dt)
#Listen to PyGame events, respond
r.render(screen, uni, ui) #The UI class has a list of UI elements.
However, the renderer needs to keep a persistent list of PyGame surfaces and images that need to be drawn, and there's the problem. Neither the Uni instances nor the Body instances are allowed to be aware of PyGame, so they can't keep those surfaces themselves.
The renderer, on the other hand, is only there for its render method, which can't just create and destroy PyGame surfaces as needed (I guess that would be performance-heavy).
One possible solution would be to have the renderer have a dictionary of PyGame objects, all identified by body ids. Then, it would iterate over it, remove any gone bodies, and add any new ones each frame.
Is this the right way to go?
|
How to properly decouple computation from rendering
| 1.2 | 0 | 0 | 396 |
13,101,486 |
2012-10-27T15:25:00.000
| 1 | 0 | 0 | 0 |
python,pyqt
| 13,101,822 | 1 | true | 0 | 1 |
Have you considered using a QGraphicsView? This allows scrolling in addition to efficient rendering of only the visible objects (and plenty of other benefits such as hit testing).
| 1 | 0 | 0 |
I have a Widget that is huge (80,000 px long maybe? 800 elements at 100px each) because it lays out many smaller widgets. I've put the huge widget into a QScrollArea. But the scroll area still renders the entire widget. This causes manipulation of the widget to be choppy, and I want things to be smoother.
Instead I want the QScrollAea to be intelligent enough to only render the elements that I know will be displayed. (The elements are ordered and are all the same fixed size, so this computation should be fast)
What's the best approach to go about this? Should QScrollArea already be doing this?
Does QListView already implement this functionality? (But I want my own custom widget in there; it has buttons that interact with the user, so QListWidget doesn't cut it.)
|
How can I limit the rendering done by QScrollArea?
| 1.2 | 0 | 0 | 194 |
13,104,279 |
2012-10-27T21:19:00.000
| 0 | 0 | 1 | 1 |
python,python-2.7,cygwin,opencl,pyopencl
| 13,387,079 | 2 | false | 0 | 0 |
Did you install Python into Cygwin?
If not, launch setup.exe, get to the packages screen, and do a search for python.
You can install 2.6.8 and 3 side by side if you want.
After that it's like using python anywhere else. You can do a $ python my.py to run my.py. You can install easy_install or pip, etc. if they'll help. Otherwise, follow the directions for PyOpenCL and you should be good!
| 1 | 0 | 0 |
I cannot figure out how to install pyopencl with Cygwin. Never used Cygwin before so I am very lost as to how I initiate python and use it to run my .py setup files.
|
Can someone help walk me through installing PyOpenCL using Cygwin?
| 0 | 0 | 0 | 1,353 |
13,108,615 |
2012-10-28T11:56:00.000
| 0 | 1 | 0 | 0 |
python,sockets,smtp
| 13,108,630 | 2 | false | 0 | 0 |
These are simply newline characters. In GMail they'll be processed and "displayed" so you don't see them. But they are still part of the email text message so it makes sense that get_payload() returns them.
| 1 | 0 | 0 |
I am not sure if this is the right forum to ask, but I give it a try.
A device is sending an e-mail to my code, in which I am trying to receive the e-mail via a socket in Python and to decode it with Message.get_payload() calls. However, I always have a \n.\n at the end of the message.
If the same device sends the same message to a genuine email client (e.g. Gmail), I get the correct original message without the \n.\n.
I would like to know what it is with this closing set of special characters in SMTP/E-Mail handling/sending, and how to encode it away.
|
Mysterious characters at the end of E-Mail, received with socket in python
| 0 | 0 | 1 | 92 |
13,111,899 |
2012-10-28T18:50:00.000
| 1 | 0 | 1 | 0 |
python,wxpython,exe
| 13,111,965 | 2 | true | 0 | 1 |
I use PyInstaller, as it is very easy to use and the applications run without any problems.
Some Windows DLLs are needed for the bundled Python interpreter and possibly for win32 API calls.
With PyInstaller you can bundle everything needed (except config files and DBs, of course) using the -F flag.
| 1 | 0 | 0 |
I've been trying for over a week now to make a .exe of a wxPython script. I still have a number of questions, and the process of creating an exe is still quite unclear.
Which utility should be used? I've heard to use py2exe, pyinstaller, gui2exe, some combination of gui2exe and some shady .bat file, etc. Which is best for a wxPython app?
What's the deal with .dlls? What do they do, do I need to "bundle" them, and why wouldn't they be on the user's computer?
Can you bundle things into the .exe, or do they need to be in the directory the .exe is in?
These things have not been made particularly clear, as I've been trying to find out exactly how to make my program into an exe for a while.
|
Creating wxPython GUI .exe?
| 1.2 | 0 | 0 | 2,794 |
13,112,473 |
2012-10-28T20:00:00.000
| 0 | 0 | 0 | 0 |
python,windows,windows-7,emacs,python-3.3
| 13,112,953 | 1 | false | 0 | 1 |
Here the output arrives in a buffer named *Python*, which unfortunately is not displayed by default. M-x list-buffers should mention it.
| 1 | 1 | 0 |
I cannot get emacs to evaluate my buffer. I put it into python mode and started the interpreter but C-c C-c does not seem to do anything. I also tried C-c C-l to load the file but after selecting the file nothing happens. Typing directly into the python shell does work.
I tried it out in linux and everything worked fine so I know I am using the correct commands and there is no problem with my code.
I am running GNU Emacs 24.2.1 and Python 3.3 on Windows 7. I am new to emacs and I like it so far, but unless I can get the shell working I will need to switch to a different editor.
Update: I am trying to run an application developed with the Pyglet library which creates its own window to display graphics in.
Update #2: So if I try to evaluate the buffer and then go to the python buffer and stop compilation and then evaluate it again then it works. This is obviously not ideal.
Also maybe related, any errors or exceptions will not show up in the shell unless I go into the shell and hit enter.
|
Python Pyglet application not working in Windows Emacs
| 0 | 0 | 0 | 201 |
13,114,116 |
2012-10-28T23:30:00.000
| 1 | 0 | 1 | 1 |
python,software-design
| 13,114,163 | 1 | true | 0 | 0 |
If files logically belong together in pairs, the least error prone method is probably to require them to be entered together, e.g.
mycommand -Pair FileA1,FileA2 -Pair FileB1,FileB2
That way, you can enforce the contract that files must be entered in pairs (any -Pair argument without two input files can generate an error), and it is obvious to the user that the files must be entered together.
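With argparse, for example, each pair can be forced to arrive as a unit (the option name and file roles are invented):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--pair', nargs=2, metavar=('U_FILE', 'M_FILE'),
                        action='append', required=True,
                        help='one U/M file pair; repeat the option for each pair')
    args = parser.parse_args()

    if len(args.pair) != 5:
        parser.error('exactly five --pair options are required')

    for u_file, m_file in args.pair:
        print(u_file, m_file)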
| 1 | 0 | 0 |
I'm writing a Python script that takes five pairs of files as arguments. I would like to allow the user to input these files as command-line arguments, but I'm worried he will put the files in the wrong order or not put a file right after the file it's paired with. How can I design my command-line arguments to avoid this problem in the least clunky way possible?
For example, if the files are "U1", "M1", "U2", "M2", "U3", "M3", "U4", "M4", "U5", "M5", I'm afraid the person might put the files in the order "U1 U2 U3 U4 U5 M1 M2 M3 M4 M5", or "U1 M2 U3 M4 M5 ..."
|
What is good software design practice for taking multiple pairs of files on the command line?
| 1.2 | 0 | 0 | 65 |
13,114,430 |
2012-10-29T00:17:00.000
| 3 | 0 | 1 | 0 |
python,regex
| 13,114,523 | 1 | true | 0 | 0 |
This is easiest done using two regexen. "^(a)bc(1)23" and "ab(c)12(3)$". It may be possible to merge these two, but the regular expression will get pretty unreadable.
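A sketch with Python's re module, mirroring those two patterns:

    import re

    s = 'abc123abc123abc123'

    first = re.match(r'^(a)bc(1)23', s)    # first 'a' and first '1'
    last = re.search(r'ab(c)12(3)$', s)    # last 'c' and last '3'

    print(first.group(1), first.group(2), last.group(1), last.group(2))
    # -> a 1 c 3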
| 1 | 0 | 0 |
I got strings which go:
abc123abc123abc123
abc123
abc123abc123abc123abc123abc123abc123
etc. (varying numbers of the repeating unit abc123; I don't know in advance how many times it repeats)
The task is to extract the first 1 and the first a, and the last c and the last 3. Is it possible to do it with one regex, and if so, how exactly? I kept count of the repeating units and, based on the count, I have been able to perform the task with a few regexes, but I would like to use one regex if possible. Thanks
Edit:
In the real situation it is more like a:(a number)bc:(a number)1:(a number)23:(a number) etc., and I have to capture the first a number, the first 1 number, the last c number and the last 3 number.
Jeff
|
regex to match repeating string python
| 1.2 | 0 | 0 | 167 |
13,115,435 |
2012-10-29T03:11:00.000
| 7 | 0 | 1 | 0 |
python,list,slice
| 13,115,443 | 6 | false | 0 | 0 |
Slices don't wrap like that, but a[-3:] + a[:3] would give you that list.
| 2 | 6 | 0 |
Is there any simple way to invert a list slice in python? Give me everything except a slice? For example:
Given the list a = [0,1,2,3,4,5,6,7,8,9] I want to be able to extract [7,8,9,0,1,2] i.e. everything but a[3:7].
Thinking about it logically, I thought that a[-3:3] would give me what I want, but it only returns an empty list.
I am preferring a solution which will work for both python 2 and 3
|
Invert slice in python
| 1 | 0 | 0 | 3,316 |
13,115,435 |
2012-10-29T03:11:00.000
| 0 | 0 | 1 | 0 |
python,list,slice
| 13,123,879 | 6 | false | 0 | 0 |
OK, so this may not be exactly what you want, but is useful in some situations where you might want a slice like that.
There are two important disclaimers.
It doesn't preserve order
It removes repeated items
list(set(a).difference(a[3:7]))
| 2 | 6 | 0 |
Is there any simple way to invert a list slice in python? Give me everything except a slice? For example:
Given the list a = [0,1,2,3,4,5,6,7,8,9] I want to be able to extract [7,8,9,0,1,2] i.e. everything but a[3:7].
Thinking about it logically, I thought that a[-3:3] would give me what I want, but it only returns an empty list.
I am preferring a solution which will work for both python 2 and 3
|
Invert slice in python
| 0 | 0 | 0 | 3,316 |
13,115,599 |
2012-10-29T03:42:00.000
| 1 | 1 | 0 | 0 |
python,google-app-engine,google-drive-api,google-api-python-client
| 13,125,588 | 1 | true | 1 | 0 |
There are the mock http and request classes that the apiclient package uses for its own testing. They are in apiclient/http.py and you can see how to use them throughout the test suite.
| 1 | 2 | 0 |
There are several components involved in auth and the discovery based service api.
How can one test request handlers wrapped with decorators used from oauth2client (eg oauth_required, etc), httplib2, services and uploads?
Are there any commonly available mocks or stubs?
|
How can one test appengine/drive/google api based applications?
| 1.2 | 0 | 1 | 259 |
13,116,301 |
2012-10-29T05:27:00.000
| 4 | 0 | 0 | 0 |
python,django
| 13,116,348 | 2 | true | 1 | 0 |
Model.objects.none() always gives you an empty queryset
| 1 | 0 | 0 |
Is there any way to specify a Django QuerySet which will do nothing but still be a valid QuerySet? An empty queryset should ideally not hit the DB, and its results would be empty.
|
Django Queryset for no-operation
| 1.2 | 0 | 0 | 126 |
13,117,502 |
2012-10-29T07:28:00.000
| 3 | 0 | 0 | 1 |
python,ssh,twisted
| 13,274,700 | 1 | false | 0 | 0 |
I haven't used Twisted and don't know Conch at all, but with nobody else answering, I'll give it a shot.
As a general principle, you probably want to buffer very little if any in the middle of the network. (Jim Gettys' notes on "buffer bloat" are enlightening.) So it's clear that you're asking a sensible question.
I assume Conch calls a function in your code when data arrives from the client. Does it suffice to simply not return from that call until you can deliver the data to the backend server? The kernel is still going to buffer data in both the inbound and outbound sockets, so the condition won't be signalled to the downstream client immediately, but I'd expect it to settle into a steady state.
As an alternative, of course, you could tunnel across this router at a different layer than SSH. If you tunnel at a lower layer so you have one end-to-end TCP connection, then the TCP stack should figure out a good window size.
If you tunnel at a higher layer, by doing a git push to the intermediate server and then using a post-receive hook to push the objects the rest of the way, then you get maximal buffering (it's all spooled to disk) and faster response time to the client, though longer total latency. It has the distinct advantage of being much simpler to implement.
| 1 | 5 | 0 |
I have a Twisted Conch SSH server and the typical scenario is this:
git via OpenSSH client >>--- WAN1 --->> Twisted conch svr >>--- WAN2 -->> Git server
There will be occasions when the 'git push' sends data faster over WAN1 than I can proxy it over WAN2, so I need to tell the client to slow down (well before any TCP packet loss causes adjustments to the TCP window size) to avoid buffering too much on the Twisted server. Reading the RFC for SSH, this is accomplished by not acknowledging via a window adjust; this will then cause the git push to block on a syscall write to the pipe backed by OpenSSH.
Looking at conch/ssh/connection.py:L216 in the method def ssh_CHANNEL_DATA(self, packet):
I can accomplish this by setting localWindowSize to 0, and in-flight data will still land, as the predicate on line 230 should still pass (given localWindowLeft). I am wondering if this is the correct approach, or am I missing something blindingly obvious with regards to flow control with Twisted SSH Conch? *
Note: I acknowledge there are placeholder methods for stopWriting and startWriting on (channel) that I can override, so I have hooks to control the other side of the transmission ('git pull'), but I'm interested in the other side. Also, IPushProducer/IPullProducer don't seem applicable at this level and I can't see how I can tie in these higher abstractions without butchering Conch.
|
Twisted Conch - Flow control
| 0.53705 | 0 | 0 | 485 |
13,121,529 |
2012-10-29T12:20:00.000
| 0 | 0 | 0 | 0 |
python,excel,win32com,office-2013
| 42,290,194 | 3 | false | 0 | 0 |
wilywampa's answer corrects the problem. However, the combrowse.py at win32com\client\combrowse.py can also be used to get the IID (Interface Identifier) from the registered type libraries folder and subsequently integrate it with code as suggested by @cool_n_curious. But as stated before, wilywampa's answer does correct the problem and you can just use the makepy.py utility as usual.
| 1 | 4 | 0 |
I'm using Python and Excel with Office 2010 and have no problems there.
I used Python's makepy module in order to bind to the Excel COM objects.
However, on a different computer I've installed office 2013 and when I launched makepy no excel option was listed (as opposed to office 2010 where 'Microsoft Excel 14.0 Object Library' is listed by makepy).
I've searched for 'Microsoft Excel 15.0 Object Library' in the registry and it is there.
I tried to use : makepy -d 'Microsoft Excel 15.0 Object Library'
but that didn't work.
Help will be much appreciated.
Thanks.
|
Python Makepy with Office 2013 (office 15)
| 0 | 1 | 0 | 2,318 |
13,122,575 |
2012-10-29T13:27:00.000
| 3 | 0 | 1 | 0 |
python,string,compression,whitespace
| 13,122,949 | 3 | false | 0 | 0 |
If you don't care about the exact compressed form, you may want to look at zlib.compress and zlib.decompress. zlib is a standard Python library that can compress a single string and will probably get better compression than a self-implemented compression algorithm.
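A quick sketch of that route (the round trip gives back the exact original string; for very short inputs the zlib header overhead can outweigh the savings):

    import zlib

    original = '1254' + ',' * 16 + '982'
    packed = zlib.compress(original.encode('ascii'))
    restored = zlib.decompress(packed).decode('ascii')

    assert restored == original
    print(len(original), 'chars ->', len(packed), 'bytes')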
| 1 | 3 | 0 |
I have strings with blocks of the same character in, eg '1254,,,,,,,,,,,,,,,,982'. What I'm aiming to do is replace that with something along the lines of '1254(,16)982' so that the original string can be reconstructed. If anyone could point me in the right direction that would be greatly appreciated
|
How to compress by removing duplicates in python?
| 0.197375 | 0 | 0 | 253 |
13,124,913 |
2012-10-29T15:43:00.000
| 1 | 0 | 0 | 0 |
python,django,uwsgi,mezzanine
| 14,267,828 | 1 | true | 1 | 0 |
After a long time I've figured out what the problem is! I had followed some directions on how to set up uwsgi with nginx that said to include a line saying uwsgi_param SCRIPT_NAME /;. The purpose of SCRIPT_NAME is to provide the base path for the UWSGI application, so in this case it serves to double the slashes. I found the same problem occurring in pyramid. I suspect this will occur with any UWSGI application.
| 1 | 2 | 0 |
Running Django behind UWSGI, I have set up an instance of Mezzanine that is almost working perfectly. The only problem is that the admin login page does not work properly. If you just try to log in normally, then the browser is redirected to http://admin/. The html form action attribute is set to //admin/ instead of /admin/, so the browser sees "admin" as being a domain name instead of the root directory of the current domain.
I've tried wading through the Django and Mezzanine package codes, but I can't see anything in there that should be causing an extraneous slash. I found one web page saying that changing settings.FORCE_SCRIPT_NAME to "/" could cause this, but I am not overriding the default value of None so this shouldn't be the cause.
In urls.py I have the following (which I think is the default):
urlpatterns = patterns("",
    # Change the admin prefix here to use an alternate URL for the
    # admin interface, which would be marginally more secure.
    ("^admin/", include(admin.site.urls)),
    ....
|
UWSGI adding double slash to admin login form in Django
| 1.2 | 0 | 0 | 768 |
13,125,271 |
2012-10-29T16:03:00.000
| 3 | 0 | 1 | 0 |
python
| 13,125,435 | 2 | false | 0 | 0 |
Create a factory class which returns an implementation based on the parameter. You can then have a common base class for both DB types, one implementation for each, and let the factory create, configure and return the correct implementation to the user based on that parameter.
This works well when the two classes behave very similarly, but as soon as you want to use DB-specific features it gets ugly, because you need methods like isFeatureXSupported() (the good approach) or isOracle() (simpler but bad, since it moves knowledge of which DB has which feature from the helper class into the app code).
Alternatively, you can implement all features for both and throw an exception when one isn't supported. In your code, you can then look for the exception to check this. This makes the code cleaner, but now you can't really check whether a feature is available without actually using it. That can cause problems in the app code (when you want to disable menus, for example, or when the app could do it some other way).
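A minimal sketch of the factory approach described above; the class and method names are invented for illustration:

class BaseDB(object):
    def __init__(self, verbose=False, stop_on_error=True):
        self.verbose = verbose
        self.stop_on_error = stop_on_error

    def connect(self):
        raise NotImplementedError


class OracleDB(BaseDB):
    def connect(self):
        pass  # Oracle-specific connection logic goes here


class MySQLDB(BaseDB):
    def connect(self):
        pass  # MySQL-specific connection logic goes here


_IMPLEMENTATIONS = {"Oracle": OracleDB, "MySQL": MySQLDB}


def make_db(db_type, **kwargs):
    """Factory: return the implementation matching the keyword."""
    try:
        return _IMPLEMENTATIONS[db_type](**kwargs)
    except KeyError:
        raise ValueError("Unsupported database type: %r" % db_type)


# Existing calling code keeps passing the same keyword as before.
db = make_db("Oracle", verbose=True)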
| 1 | 5 | 0 |
I have a class that can interface with either Oracle or MySQL. The class is initialized with a keyword of either "Oracle" or "MySQL" and a few other parameters that are standard for both database types (what to print, whether or not to stop on an exception, etc.).
It was easy enough to add if Oracle do A, elif MySQL do B as necessary when I began, but as I add more specialized code that only applies to one database type, this is becoming ugly. I've split the class into two, one for Oracle and one for MySQL, with some shared functions to avoid duplicate code.
What is the most Pythonic way to handle calling these new classes? Do I create a wrapper function/class that uses this same keyword and returns the correct class? Do I change all of my code that calls the old generic class to call the correct DB-specific class?
I'll gladly mock up some example code if needed, but I didn't think it was necessary. Thanks in advance for any help!
|
Most Pythonic way to handle splitting a class into multiple classes
| 0.291313 | 1 | 0 | 126 |
13,127,381 |
2012-10-29T18:16:00.000
| 1 | 0 | 0 | 0 |
python,chipmunk,pymunk
| 13,130,409 | 1 | true | 0 | 1 |
There are a couple of unsafe methods to modify a shape. Right now (v3.0) pymunk only supports updates of the Circle and Segment shapes. However, I just committed a method to update the Poly shape as well, available in the latest trunk of pymunk.
If you don't want to run the latest trunk, I suggest you just replace the shape instead of modifying it. The end result will be the same anyway.
(The reason why modification of shapes is discouraged is that it's very hard to do a good simulation when the resize happens magically in one instant. For example, how should a collision be resolved for a small object that, after a resize, would end up lying inside a large object?)
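A rough sketch of the replace-the-shape approach; the sizes are invented, and Poly.create_box is assumed to be available (it is in newer pymunk releases):

import pymunk

space = pymunk.Space()
body = pymunk.Body(mass=1, moment=10)

# Original, full-size hitbox attached to the character body.
big_box = pymunk.Poly.create_box(body, (40, 80))
space.add(body, big_box)

def shrink_hitbox(space, body, old_shape):
    # Swap the attached shape rather than mutating it in place.
    space.remove(old_shape)
    small_box = pymunk.Poly.create_box(body, (40, 40))  # temporary smaller box
    space.add(small_box)
    return small_box

small_box = shrink_hitbox(space, body, big_box)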
| 1 | 1 | 0 |
I am just getting started with pymunk, and I have a problem that I wasn't able to find a solution to in the documentation.
I have a character body that changes shape during a specific animation. I know how to attach shapes to a physics body, but how do I change them? Specifically, I need to change the box to a smaller one temporarily.
Is that possible?
|
Changing the shape of a pymunk/Chipmunk physics body
| 1.2 | 0 | 0 | 898 |
13,127,708 |
2012-10-29T18:40:00.000
| 0 | 0 | 0 | 0 |
python,django,ubuntu
| 13,127,985 | 1 | true | 1 | 0 |
If the file size is greater than 2.5MB Django will write the uploaded file to your /tmp directory (on Linux) before saving it. After the upload is complete you can remove the file manually or you can have a cron job (or something similar) to remove the temp files automatically.
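If it helps, a hedged sketch of looking at the temporary file from a Django view; temporary_file_path() only exists when Django actually streamed the upload to disk (i.e. the file was big enough), and the 'video' field name is a placeholder:

from django.http import HttpResponse

def upload_view(request):
    uploaded = request.FILES["video"]  # placeholder field name
    # Only large uploads (TemporaryUploadedFile) have a path on disk;
    # small ones are kept in memory. FILE_UPLOAD_TEMP_DIR in settings
    # controls which directory is used.
    if hasattr(uploaded, "temporary_file_path"):
        print("Django streamed the upload to:", uploaded.temporary_file_path())
    # ... save/process the file here ...
    return HttpResponse("ok")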
| 1 | 0 | 0 |
When I use a form to upload a large video to the server, a temp .upload file is created in the /tmp directory. Where is this .upload file created? Can I remove it after the upload is complete? I use Django and Python on Ubuntu.
I check Django documentation for file upload. It says that:
"If an uploaded file is too large, Django will write the uploaded file to a temporary file stored in your system's temporary directory. On a Unix-like platform this means you can expect Django to generate a file called something like /tmp/tmpzfp6I6.upload. If an upload is large enough, you can watch this file grow in size as Django streams the data onto disk."
How can I get Django to remove this file automatically after the upload is complete? And how can I get the path of this temporary .upload file?
Thanks
|
How to remove .upload file
| 1.2 | 0 | 0 | 594 |
13,128,466 |
2012-10-29T19:41:00.000
| 0 | 0 | 0 | 0 |
python,tkinter
| 13,129,010 | 2 | false | 0 | 1 |
What you create with Tkinter is not pointless. It sounds to me like you're trying to build a stand-alone program in Python, using the Tkinter library to provide the GUI. Once you have a script working, you can use a tool to bundle it into a standalone program. Look into using py2app on a Mac, or py2exe on Windows. Google them and see if that's what you're looking for.
| 1 | 0 | 0 |
I have looked at similar questions that may answer my question but I am still very unclear on how to go about the following:
I can create programs to run in the Python Shell in Idle and I can also set up windows with widgets in Tkinter, but whatever I create in Tkinter is pointless because I cannot figure out how to take my Python Shell code and "wrap" it in the Tkinter GUI.
I have assumed that it cannot be done, and that entirely new code must be written using Tkinter-specific calls. I am very confused about how to create a well-rounded program without being left with just a GUI "skeleton" with random buttons, labels, entries, etc. and a Python program that is very unappealing and can only run in the ugly little Shell.
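To make the idea concrete, a tiny hedged sketch of "wrapping" console-style logic in Tkinter: the same function that used to print to the shell is simply called from a button callback and its result shown in a label (the widget layout is invented):

import tkinter as tk  # "Tkinter" (capital T) on Python 2

def shout(text):
    # The kind of logic that used to live in a shell-only script.
    return text.upper() + "!"

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()

result = tk.Label(root, text="")
result.pack()

def on_click():
    # Instead of print(), push the result into a widget.
    result.config(text=shout(entry.get()))

tk.Button(root, text="Run", command=on_click).pack()
root.mainloop()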
|
Porting a Python Shell Program to a Tkinter GUI
| 0 | 0 | 0 | 1,228 |
13,131,139 |
2012-10-29T23:27:00.000
| 1 | 0 | 1 | 0 |
python,nltk,lemmatization
| 50,628,168 | 5 | false | 0 | 0 |
If you are running machine learning algorithms on your text, you may use n-grams instead of word tokens. It is not strictly lemmatization, but it detects series of n similar letters and is surprisingly powerful at grouping words with the same meaning.
I use sklearn's CountVectorizer(analyzer='char_wb'), and for some specific texts it is way more efficient than a bag of words.
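A small hedged sketch of the character n-gram idea with scikit-learn; the ngram_range is just a reasonable starting point, not something prescribed above:

from sklearn.feature_extraction.text import CountVectorizer

docs = ["je voudrais un cafe", "nous voulons du cafe"]

# 'char_wb' builds character n-grams inside word boundaries, so related
# forms such as 'voudrais' and 'voulons' share fragments like 'vou'.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vectorizer.fit_transform(docs)

print(X.shape)
# Use get_feature_names() instead on older scikit-learn versions.
print(vectorizer.get_feature_names_out()[:10])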
| 1 | 29 | 0 |
I have some text in French that I need to process in some ways. For that, I need to:
First, tokenize the text into words
Then lemmatize those words to avoid processing the same root more than once
As far as I can see, the wordnet lemmatizer in the NLTK only works with English. I want something that can return "vouloir" when I give it "voudrais" and so on. I also cannot tokenize properly because of the apostrophes. Any pointers would be greatly appreciated. :)
|
Lemmatize French text
| 0.039979 | 0 | 0 | 29,989 |
13,131,699 |
2012-10-30T00:41:00.000
| 1 | 0 | 1 | 1 |
python,windows,path,os.system
| 13,140,093 | 1 | true | 0 | 0 |
I think you can add the location of the files to the PATH environment variable. Follow these steps: go to My Computer -> right click -> Properties -> Advanced System Settings -> click Environment Variables. Now click PATH and then click EDIT. In the variable value field, go to the end, append ';' (without quotes), and then add the absolute path of the folder containing the .exe you want to run via your program.
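Once the right directory is on PATH, the executable can also be resolved programmatically; a hedged sketch using only the standard library (shutil.which requires Python 3.3+):

import subprocess
import shutil

# shutil.which() searches the directories listed in PATH, the same lookup
# the shell performs when you type a bare command name.
exe = shutil.which("firefox")
if exe is None:
    print("firefox was not found on PATH")
else:
    subprocess.Popen([exe])  # generally safer than os.system for launching programs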
| 1 | 1 | 0 |
I am trying to create a Python program that uses the os.system() function to create a new process (application) based on user input... However, this only works when the user inputs "notepad.exe". It does not work, for instance, when a user inputs "firefox.exe". I know this is a path issue because the error says that the file does not exist. I assume then that Windows has some default path setup for notepad that does allow notepad to run when I ask it to? So this leads to my question: is there any way to programmatically find the path to any application a user inputs, assuming it does in fact exist? I find it hard to believe the only way to open a file is by defining the entire path at some point. Or maybe there's a way that Windows does this for me that I do not know how to access? Any help would be great, thanks!
|
Python - get file path programmatically?
| 1.2 | 0 | 0 | 598 |
13,133,304 |
2012-10-30T05:41:00.000
| 5 | 0 | 1 | 0 |
python,django,arrays
| 13,134,299 | 2 | false | 0 | 0 |
You want abs(a - b), not abs(abs(a)-abs(b))
| 1 | 1 | 0 |
I have these numbers: a = 7, b = 9.
Now I want to subtract the two numbers.
b - a = 2, which is fine,
but a - b = -2.
I only want to know the difference, i.e. 2, regardless of sign, like we have with the mod operator.
How can I do that in Python?
|
How can i get unsigned magnitude of the -ve number in python
| 0.462117 | 0 | 0 | 282 |
13,133,986 |
2012-10-30T06:47:00.000
| 0 | 1 | 0 | 0 |
python,vb.net,heuristics
| 13,134,922 | 2 | false | 0 | 0 |
Heuristic can roughly be translated as 'rule of thumb'.
It's not a programming-specific concept.
| 1 | 0 | 0 |
Could I please have some ideas for a project utilising heuristics?
Thank you in advance for your help.
|
Programming with Heuristics?
| 0 | 0 | 0 | 4,568 |