Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14,552,348 |
2013-01-27T21:10:00.000
| 0 | 1 | 1 | 0 |
visual-c++,python-2.7,visual-studio-2005,manifest,boost-python
| 32,307,577 | 13 | false | 0 | 0 |
In my case, I realised the problem appeared when, after compiling the app into an exe file, I renamed that file. Keeping the original name of the exe file avoids showing the error.
| 5 | 41 | 0 |
I am working on an application which uses Boost.Python to embed the Python interpreter. This is used to run user-generated "scripts" which interact with the main program.
Unfortunately, one user is reporting runtime error R6034 when he tries to run a script. The main program starts up fine, but I think the problem may be occurring when python27.dll is loaded.
I am using Visual Studio 2005, Python 2.7, and Boost.Python 1.46.1. The problem occurs only on one user's machine. I've dealt with manifest issues before, and managed to resolve them, but in this case I'm at a bit of a loss.
Has anyone else run into a similar problem? Were you able to solve it? How?
|
Runtime error R6034 in embedded Python application
| 0 | 0 | 0 | 52,195 |
14,552,460 |
2013-01-27T21:22:00.000
| 1 | 0 | 1 | 0 |
python-2.7
| 14,552,508 | 2 | false | 0 | 0 |
Why not generate all the integers from 0 to 2^16-1, and convert to hex? Yes, I know that it could be done more efficiently, but you are doing this ONCE. Why make things difficult?
| 1 | 0 | 0 |
I'm working on a cryptography project and I need to generate all possible hexadecimal numbers (each number being 16 bits long in binary) and put them in a list in order to use them later.
Any suggestions?
Thanks in advance
|
Generate a list of hexadecimal numbers in python
| 0.099668 | 0 | 0 | 3,586 |
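A minimal sketch of the approach suggested in the answer above: enumerate every 16-bit value once and format it as hex.

```python
# Build the full list of 16-bit values rendered as 4-digit hex strings.
hex_values = [format(n, '04x') for n in range(2 ** 16)]

print(hex_values[:3])   # ['0000', '0001', '0002']
print(hex_values[-1])   # 'ffff'
print(len(hex_values))  # 65536
```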
14,553,762 |
2013-01-28T00:05:00.000
| 2 | 0 | 1 | 0 |
python,oop,tree,class-method,instance-method
| 14,553,768 | 1 | true | 0 | 0 |
Make a Tree subclass of Node and add the tree-only methods on that class instead.
Then make your root an instance of Tree, the rest of your graph uses Node instances.
| 1 | 2 | 0 |
Need a way to indicate/enforce that certain methods can only be done on the root Node of my tree data structure. I'm working in Python 2.x
I have a class, Node, which I use in conjunction with another class, Edge, to build a tree data structure (edges are letters and nodes are words, in this case).
Some methods in Node are needed for every instance of the Node, like get_word, which runs backwards through the tree to determine the word being represented by that Node. But other Node operations, like load_word_into_tree, seem more like class methods -- they operate on the entire tree. Furthermore, the way I have structured that call, it requires the root node and the root node only as its input. If it is called on any other node, it'll totally mess up the tree.
I see two options:
Make load_word_into_tree an instance method, but raise an error if it is called on any Node that isn't the root. I'm leaning towards this, but something just seems not right about it. In my mind, instance methods are methods that every instance should need, and to have this method tacked on to every Node when it can only ever be used for the root seems like a waste.
Make load_word_into_tree a class method, but pass it the root node as an arg. This gets around the problem of a 'wasteful' instance method, but also seems like a misuse of concept of a class method, since it takes a single node as its input. Furthermore, I'm not sure what use I'd have for the required cls variable available to every class method.
Any help on where and how to implement this function would be greatly appreciated.
|
Root node operations in tree data structures
| 1.2 | 0 | 0 | 487 |
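A minimal sketch of the accepted suggestion above; the attribute names and the trie-like loading logic are assumptions, not the asker's actual code.

```python
class Node(object):
    def __init__(self, parent=None, letter=None):
        self.parent = parent    # edge letter leading to this node
        self.letter = letter
        self.children = []

    def get_word(self):
        # Per-instance behaviour every node needs: walk back to the root.
        letters = []
        node = self
        while node.parent is not None:
            letters.append(node.letter)
            node = node.parent
        return ''.join(reversed(letters))


class Tree(Node):
    """Root-only operations live here, so calling them on an inner
    Node is impossible rather than merely discouraged."""

    def load_word_into_tree(self, word):
        node = self
        for letter in word:
            child = Node(parent=node, letter=letter)
            node.children.append(child)
            node = child
```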
14,554,551 |
2013-01-28T02:14:00.000
| 3 | 0 | 0 | 0 |
python,web2py
| 14,554,639 | 1 | true | 1 | 0 |
You can use the admin interface to install (i.e., unpack) the app. From that point, the app is just a bunch of files in folders, so you can use any editor, IDE, and version control system on those files as you see fit.
| 1 | 1 | 0 |
I've created a web2py app using the admin interface, but I want to use my own editor and version control. I've downloaded the packed app, but what do I do with it?
|
web2py - setting up my own environment
| 1.2 | 0 | 0 | 95 |
14,555,393 |
2013-01-28T04:23:00.000
| 1 | 0 | 0 | 0 |
python,flask,distribution
| 14,559,747 | 2 | false | 1 | 0 |
Why distribute it at all? If the user you want to use it is on the same local network as the Flask application, just give them the IP address and they can access it via a browser just as you are doing, and no access to the source code either!
| 1 | 5 | 0 |
I've made a simple Flask app which is essentially a wrapper around sqlite3. It basically runs the dev server locally and you can access the interface from a web browser. At present, it functions exactly as it should.
I need to run it on a computer operated by someone with less-than-advanced computing skills. I could install Python on the computer, and then run my .py file, but I am uncomfortable with the files involved being "out in the open". Is there a way I can put this app into an executable file? I've attempted to use both py2exe and cx_freeze, but both of those raised an ImportError on "image". I also tried zipping the file (__main__.py and all that) but was greeted with 500 errors attempting to run the file (I am assuming that the file couldn't access the templates for some reason.)
How can I deploy this Flask app as an executable?
|
Distributing a local Flask app
| 0.099668 | 0 | 0 | 1,761 |
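If you take the serve-it-over-the-LAN route from the answer above, a minimal sketch is to bind the dev server to all interfaces (the route and port here are placeholders):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'It works!'

if __name__ == '__main__':
    # 0.0.0.0 makes the dev server reachable from other machines on the
    # local network, e.g. http://192.168.1.10:5000/ (address hypothetical).
    app.run(host='0.0.0.0', port=5000)
```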
14,556,744 |
2013-01-28T06:42:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,webserver,hosting,tornado
| 14,639,967 | 3 | false | 1 | 0 |
At Heroku, the WebSockets protocol is not yet supported on the Cedar stack.
| 1 | 2 | 0 |
Is there any hosting service for hosting simple apps developed using Tornado (like we host on Google App Engine)? Is it possible to host on Google App Engine? The app just manages some student data (adding, removing, searching, etc.) and is developed using Python.
Thanks in advance
|
where I host apps developed using tornado webserver
| 0.066568 | 0 | 0 | 1,996 |
14,559,547 |
2013-01-28T10:00:00.000
| 0 | 0 | 0 | 0 |
python,neural-network,image-recognition,online-algorithm
| 14,756,113 | 2 | false | 0 | 0 |
This is not entirely correct.
A 3-layer feedforward MLP can theoretically replicate any CONTINUOUS function.
If there are discontinuities, then you need a 4th layer.
Since you are dealing with pixelated screens and such, you probably would need to consider a fourth layer.
Finally, if you are looking at circular shapes, etc., then a radial basis function (RBF) network may be more suitable.
| 1 | 0 | 1 |
I learned that neural networks can replicate any function.
Normally the neural network is fed with a set of descriptors to its input neurons and then gives out a certain score at its output neuron. I want my neural network to recognize certain behaviours from a screen. Objects on the screen are already preprocessed and clearly visible, so recognition should not be a problem.
Is it possible to use the neural network to recognize a pixelated picture of the screen and make decisions on that basis? The amount of training data would be huge, of course. Is there a way to teach the ANN by online supervised learning?
Edit:
Because a commenter said the programming problem would be too general:
I would like to implement this in Python first, to see if it works. If anyone could point me to a resource where I could do this online-learning thing with Python, I would be grateful.
|
Can a neural network recognize a screen and replicate a finite set of actions?
| 0 | 0 | 0 | 888 |
14,563,584 |
2013-01-28T13:54:00.000
| 0 | 1 | 0 | 0 |
python,eclipse,autocomplete
| 46,099,882 | 2 | false | 0 | 0 |
Just press Ctrl + Space, as Jonas Karlsson said.
| 1 | 5 | 0 |
Basically I have code completion working (to the best of my knowledge that it 'works') in Eclipse, but it's not nearly as good as what Visual Studio has. I have it set to call auto-complete when ( is pressed, but doing this does not show a list of the method parameters. I have to mouse over the method for that to happen, and I'd prefer for it to happen while I type, like Intellisense in VS.
I'm using Aptana 3 with PyDev if it's relevant.
|
How do I get Eclipse to show me a method's signature while typing?
| 0 | 0 | 0 | 2,276 |
14,563,591 |
2013-01-28T13:55:00.000
| 2 | 0 | 0 | 0 |
python,wxpython,wxwidgets,py2app
| 14,571,464 | 1 | true | 0 | 1 |
Make sure you are using the latest py2app, it looks like they've resolved some issues lately related to semi-standalone, perhaps that may affect your build too.
Use the Python from Python.org, not Apple's python installed with the OS.
Don't use the --semi-standalone flag or options like it in the setup script.
That should be all there is to it. Last I checked creating standalone applications was the default when using the Python.org Python, and it by default should be copying in the other packages (including wx) that your application imports as well. You can look inside the generated application bundle to see exactly what it is and isn't including and you can then adjust your setup script as needed.
| 1 | 2 | 0 |
An app I made from my script with py2app doesn't work on newer versions of OS X. I was told that this was because the build was partially standalone, meaning it requires my version of wxPython, but doesn't include it. How can I make it fully standalone (where my version of wxPython is included), or not standalone, where it uses whatever version of wxPython the host has installed? Which would be preferable, and how can I check that it has worked if I only have one mac with one version of wxPython?
|
Toggling py2app standalone?
| 1.2 | 0 | 0 | 684 |
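For reference, a minimal py2app setup script looks roughly like this (the script name is a placeholder); building it without --semi-standalone should then copy wxPython into the bundle:

```python
# setup.py -- build with:  python setup.py py2app
from setuptools import setup

setup(
    app=['myscript.py'],        # hypothetical entry-point script
    setup_requires=['py2app'],
)
```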
14,563,801 |
2013-01-28T14:06:00.000
| 6 | 0 | 0 | 0 |
python,openerp,erp
| 14,564,692 | 1 | true | 1 | 0 |
OpenERP 6.1 modules cannot be used directly in OpenERP 7. You have to make some basic changes
to the 6.1 modules: the tree and form tags now require a string attribute, and forms should include version="7". If you have inherited some basic modules like sale or purchase, then you have to update the inherit xpath expressions as well. Some models, such as res.partner.address, were removed, so you have to take care of this and replace them with res.partner.
Thanks
| 1 | 2 | 0 |
I have a couple of OpenERP modules implemented for OpenERP version 6.1. When I installed OpenERP 7.0, I copied these modules into the addons folder for OpenERP 7. After that, I tried to update the modules list through the web interface, but nothing changed. I also started the server again with the options --database=mydb --update=all, but the modules list didn't change. Did I miss something? Is it possible to use modules from version 6.1 in OpenERP 7?
Thanks for advice.
UPDATE:
I already exported my database from version 6.1 into a *.sql file. Will OpenERP 7 work if I just import this data into the new database I created with OpenERP 7?
|
OpenERP 7 with modules from OpenERP 6.1
| 1.2 | 1 | 0 | 3,217 |
14,564,440 |
2013-01-28T14:41:00.000
| 1 | 0 | 0 | 0 |
python,performance
| 14,564,521 | 4 | false | 0 | 0 |
Threading is definitely what you need. It will remove the serialized nature of your algorithm, and since it is mostly IO-bound, you will gain a lot by sending HTTP requests in parallel.
Your flow would become:
MySQL query to get all of the active domains to scan (6,300 give or take)
Iterate through each domain and create a thread that will use urllib to send an HTTP request to each
Log the results in threads
You can make this algorithm better by creating n worker threads with queues, and adding domains to the queues instead of creating one thread per domain. I just wanted to make things a little bit easier for you since you're not familiar with threads.
| 2 | 2 | 0 |
I have a working Python script that checks the 6,300 or so sites we have to ensure they are up by sending an HTTP request to each and measuring the response. Currently the script takes about 40 min to run completely, so I was interested in ways to speed it up; two thoughts were either threading or multiple running instances.
This is the order of execution now:
MySQL query to get all of the active domains to scan (6,300 give or take)
Iterate through each domain and using urllib send an HTTP request to each
If the site doesn't return '200' then log the results
repeat until complete
This seems like it could possibly be sped up significantly with threading but I am not quite sure how that process flow would look since I am not familiar with threading.
If someone could offer a sample high-level process flow and any other pointers for working with threading or offer any other insights on how to improve the script in general it would be appreciated.
|
Python Script - Improve Speed
| 0.049958 | 0 | 0 | 324 |
14,564,440 |
2013-01-28T14:41:00.000
| 2 | 0 | 0 | 0 |
python,performance
| 14,564,562 | 4 | true | 0 | 0 |
The flow would look something like this:
Create a domain Queue
Create a result Queue
MySQL query to get all of the active domains to scan
Put the domains in the domain Queue
Spawn a pool of worker threads
Run the threads
Each worker will get a domain from the domain Queue, send a request and put the result in the result Queue
Wait for the threads to finish
Get everything from the result Queue and log it
You'll probably want to tune the number of threads in the pool, rather than spawning 6,300 threads, one for every domain.
| 2 | 2 | 0 |
I have a working Python script that checks the 6,300 or so sites we have to ensure they are up by sending an HTTP request to each and measuring the response. Currently the script takes about 40 min to run completely, so I was interested in ways to speed it up; two thoughts were either threading or multiple running instances.
This is the order of execution now:
MySQL query to get all of the active domains to scan (6,300 give or take)
Iterate through each domain and using urllib send an HTTP request to each
If the site doesn't return '200' then log the results
repeat until complete
This seems like it could possibly be sped up significantly with threading but I am not quite sure how that process flow would look since I am not familiar with threading.
If someone could offer a sample high-level process flow and any other pointers for working with threading or offer any other insights on how to improve the script in general it would be appreciated.
|
Python Script - Improve Speed
| 1.2 | 0 | 0 | 324 |
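A hedged sketch of the accepted flow above, using Python 2 module names; the hard-coded domain list stands in for the MySQL query results:

```python
import urllib2
from Queue import Queue
from threading import Thread

NUM_WORKERS = 20  # tune this; far fewer than one thread per domain

def worker(domain_q, result_q):
    while True:
        domain = domain_q.get()
        try:
            code = urllib2.urlopen('http://' + domain, timeout=10).getcode()
        except Exception:
            code = None  # unreachable or non-2xx counts as down
        result_q.put((domain, code))
        domain_q.task_done()

domain_q, result_q = Queue(), Queue()
for domain in ['example.com', 'example.org']:  # really: rows from MySQL
    domain_q.put(domain)

for _ in range(NUM_WORKERS):
    t = Thread(target=worker, args=(domain_q, result_q))
    t.daemon = True
    t.start()

domain_q.join()  # wait until every domain has been processed

while not result_q.empty():
    domain, code = result_q.get()
    if code != 200:
        print('DOWN: %s (%s)' % (domain, code))
```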
14,567,172 |
2013-01-28T17:00:00.000
| 0 | 0 | 0 | 0 |
python,sql,django,django-database
| 14,567,526 | 4 | false | 1 | 0 |
select * from app_model where name = %s is a prepared statement. I would recommend you log the statement and the parameters separately. In order to get a well-formed query you need to do something like "select * from app_model where name = %s" % quote_string("user"), or more generally query % map(quote_string, params).
Please note that quote_string is DB-specific and the Python DB-API 2.0 does not define a quote_string method, so you need to write one yourself. For logging purposes I'd recommend keeping the queries and parameters separate, as it allows for far better profiling: you can easily group the queries without taking the actual values into account.
| 1 | 0 | 0 |
I am trying to analyse the SQL performance of our Django (1.3) web application. I have added a custom log handler which attaches to django.db.backends and set DEBUG = True, this allows me to see all the database queries that are being executed.
However the SQL is not valid SQL! The actual query is select * from app_model where name = %s with some parameters passed in (e.g. "admin"); however, the logging message doesn't quote the params, so the SQL is select * from app_model where name = admin, which is wrong. This also happens using django.db.connection.queries. AFAIK the Django debug toolbar has a complex custom cursor to handle this.
Update For those suggesting the Django debug toolbar: I am aware of that tool, it is great. However it does not do what I need. I want to run a sample interaction of our application and aggregate the SQL that's used. DjDT is great for showing and shallow learning, but not great for aggregating and summarizing the interaction of dozens of pages.
Is there any easy way to get the real, legit, SQL that is run?
|
How to retrieve the real SQL from the Django logger?
| 0 | 1 | 0 | 189 |
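A hedged illustration of that separation; quote_string here is a naive stand-in you would have to write yourself, as the answer notes, not a real Django or DB-API helper:

```python
def quote_string(value):
    # Naive single-quote escaping for display purposes only;
    # this is NOT injection-safe and is DB-specific in real code.
    return "'%s'" % str(value).replace("'", "''")

sql = 'select * from app_model where name = %s'
params = ['admin']

print(sql % tuple(quote_string(p) for p in params))
# select * from app_model where name = 'admin'
```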
14,568,331 |
2013-01-28T18:08:00.000
| 2 | 0 | 1 | 0 |
python,notepad++,indentation
| 14,568,395 | 2 | false | 0 | 0 |
Option 1:
Search Replace
Search Mode - Regex
Find what ^\s
Replace with <2 space characters>
Option 2:
Do a block select of all the columns. For block select, use ALT + SHIFT followed by dragging your mouse all the way from start to end
Add as many spaces as you want
| 2 | 1 | 0 |
I have written a large block of code in Python that I just realized is indented exactly three spaces (not four!). Using Notepad++ as my IDE, I cannot find any way to indent by exactly one additional space to make it line up with everything else.
I imagine there is some way to write a macro to shift everything by one space, but I have little intention of mastering Notepad++'s macros just for this one case. Perhaps there is even a setting I missed?
Is there a non-manual way to indent to the proper alignment (adding one space)?
|
Indent by exactly one space
| 0.197375 | 0 | 0 | 380 |
14,568,331 |
2013-01-28T18:08:00.000
| 2 | 0 | 1 | 0 |
python,notepad++,indentation
| 14,568,398 | 2 | true | 0 | 0 |
Just to write up the comment as an answer (as asked by the OP).
You just need to do a find and replace with a regular expression that matches three space characters at the beginning of a line and replaces them with four. Capture the first non-space character so it isn't lost: the pattern to match would be something like ^\s{3}([^\s]), replaced with four spaces followed by \1.
| 2 | 1 | 0 |
I have written a large block of code in Python that I just realized is indented exactly three spaces (not four!). Using Notepad++ as my IDE, I cannot find any way to indent by exactly one additional space to make it line up with everything else.
I imagine there is some way to write a macro to shift everything by one space, but I have little intention of mastering Notepad++'s macros just for this one case. Perhaps there is even a setting I missed?
Is there a non-manual way to indent to the proper alignment (adding one space)?
|
Indent by exactly one space
| 1.2 | 0 | 0 | 380 |
14,570,901 |
2013-01-28T20:44:00.000
| 1 | 0 | 0 | 0 |
python,html,search,indexing,django-flatpages
| 14,570,969 | 1 | true | 1 | 0 |
With Solr, you would write code that retrieves content to be indexed, parses out the target portions from the each item then sends it to Solr for indexing.
You would then interact with Solr for search, and have it return either the entire indexed document, an ID, or some other identifying information about the original indexed content, using that to display results to the user.
| 1 | 1 | 0 |
I'm looking to add search capability into an existing entirely static website. Likely, the new search functionality itself would need to be dynamic, as the search index would need to be updated periodically (as people make changes to the static content), and the search results will need to be dynamically produced when a user interacts with it. I'd hope to add this functionality using Python, as that's my preferred language, though am open to ideas.
The Google Web Search API won't work in this case because the content being indexed is on a private network. Django haystack won't work for this case, as that requires that the content be stored in Django models. A tool called mnoGoSearch might be an option, as I think it can spider a website like Google does, but I'm not sure how active that project is anymore; the project site seems a bit dated.
I'm curious about using tools like Solr, ElasticSearch, or Whoosh, though I believe that those tools are only the indexing engine and don't handle the parsing of search content. Does anyone have any recommendations as to how one may index static html content for retrieving as a set of search results? Thanks for reading and for any feedback you have.
|
Search index for flat HTML pages
| 1.2 | 0 | 0 | 349 |
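Since Whoosh is one of the options the question names, here is a hedged sketch of indexing extracted page text with it (the directory, path, and text-extraction step are assumptions):

```python
import os
from whoosh.index import create_in
from whoosh.fields import Schema, ID, TEXT
from whoosh.qparser import QueryParser

schema = Schema(path=ID(stored=True, unique=True), content=TEXT)
if not os.path.exists('indexdir'):
    os.mkdir('indexdir')
ix = create_in('indexdir', schema)

writer = ix.writer()
# In practice you would walk the static site and strip HTML tags first.
writer.add_document(path=u'/about.html', content=u'extracted page text here')
writer.commit()

with ix.searcher() as searcher:
    query = QueryParser('content', ix.schema).parse(u'page')
    for hit in searcher.search(query):
        print(hit['path'])
```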
14,571,975 |
2013-01-28T21:55:00.000
| 1 | 1 | 0 | 0 |
python,opencv,computer-vision,robot
| 14,572,614 | 2 | false | 1 | 0 |
I'd do the following, and I'm pretty sure it would work:
I assume that the background of the video stream (the robot's vicinity) is pretty static, so the first step is:
1. background subtraction
2. detect movement in the foreground, this is your robot and everything else that changes from the background model, you'll need some thresholding here
3. connected-component detection to get the blobs
4. identify the blob corresponding to the robot (biggest?)
5. now you can get the coordinates of the blob
6. you can compute the heading if you track your blob through multiple frames
you can find good examples by googling the keywords
A distinctive color would work with color filtering, template matching and the like, but the above method is more general.
| 1 | 0 | 0 |
Here's my problem:
Suppose there's a course for robots to go through, and there's an overhead webcam that can see the whole of it, which the robot can use to navigate. Now the question is, what's the best way to detect the robot (position and heading) in the image from this webcam? I was thinking about a few solutions, like putting LEDs on it, or two separate colored circles, but those don't seem to be the best way to do it.
Is there a better solution to this, and if yes, I would really appreciate some opencv2 python code example of it, as I'm new to computer vision.
|
How can I detect my robot from an overhead webcam image?
| 0.099668 | 0 | 0 | 684 |
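A hedged sketch of steps 1-5 above with OpenCV's Python bindings (file names are placeholders; the findContours return signature matches the 2.4-era API):

```python
import cv2

background = cv2.imread('background.jpg', 0)  # 0 = load as grayscale
frame = cv2.imread('frame.jpg', 0)

# 1-2: background subtraction plus a threshold to get a foreground mask
diff = cv2.absdiff(frame, background)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# 3-4: connected components as contours; assume the biggest blob is the robot
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    robot = max(contours, key=cv2.contourArea)
    # 5: the blob's coordinates
    x, y, w, h = cv2.boundingRect(robot)
    print('robot centre: (%d, %d)' % (x + w // 2, y + h // 2))
```

Tracking the centre across frames (step 6) then gives the heading.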
14,573,082 |
2013-01-28T23:19:00.000
| 2 | 0 | 1 | 0 |
python,pickle
| 14,573,154 | 2 | false | 0 | 0 |
pickle.dumps is the function that serializes Python objects into Pickle strings, and pickle.loads converts serialized Pickle strings into Python objects. You're basically asking how to deserialize an object which isn't serialized. The answer is that you can't.
However, bikeshredder is correct -- your string is already in JSON's serialization format, so you can use json.loads.
| 1 | 1 | 0 |
Say I have a string "[A,B,C]" and I want to convert it into a list [A,B,C]. I have googled around and I know I should use pickle, but it seems that before I can pickle.loads I have to pickle.dumps the object. Is that true? If yes, how can I work around this?
|
Can I pickle a string without any dump?
| 0.197375 | 0 | 0 | 1,690 |
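A small illustration, assuming the elements of the string are quoted so it is valid JSON; ast.literal_eval is the equivalent for Python-literal syntax:

```python
import json
import ast

print(json.loads('["A", "B", "C"]'))        # ['A', 'B', 'C']
print(ast.literal_eval("['A', 'B', 'C']"))  # ['A', 'B', 'C']
```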
14,573,728 |
2013-01-29T00:28:00.000
| 38 | 0 | 0 | 0 |
python,scipy,scikits,statsmodels
| 14,575,243 | 3 | true | 0 | 0 |
Statsmodels has scipy.stats as a dependency. Scipy.stats has all of the probability distributions and some statistical tests. It's more like library code in the vein of numpy and scipy. Statsmodels on the other hand provides statistical models with a formula framework similar to R and it works with pandas DataFrames. There are also statistical tests, plotting, and plenty of helper functions in statsmodels. Really it depends on what you need, but you definitely don't have to choose one. They have different aims and strengths.
| 3 | 26 | 1 |
I need some advice on selecting a statistics package for Python. I've done quite a lot of searching, but I'm not sure I have everything right, specifically regarding the differences between statsmodels and scipy.stats.
One thing that I know is that those with the scikits namespace are specific "branches" of scipy, and that what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python?
Thanks.
--EDIT--
I changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.
|
Python statistics package: difference between statsmodel and scipy.stats
| 1.2 | 0 | 0 | 16,292 |
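A small illustration of the split described in the accepted answer (the data frame is made up):

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# scipy.stats: probability distributions and statistical tests
print(stats.norm.cdf(1.96))  # ~0.975

# statsmodels: R-style formula models on pandas DataFrames
df = pd.DataFrame({'x': np.arange(10.0)})
df['y'] = 2.0 * df['x'] + 1.0
fit = smf.ols('y ~ x', data=df).fit()
print(fit.params)  # Intercept ~1.0, x ~2.0
```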
14,573,728 |
2013-01-29T00:28:00.000
| -1 | 0 | 0 | 0 |
python,scipy,scikits,statsmodels
| 14,574,087 | 3 | false | 0 | 0 |
I think THE statistics package is numpy/scipy. It works also great if you want to plot your data using matplotlib.
However, as far as I know, matplotlib doesn't work with Python 3.x yet.
| 3 | 26 | 1 |
I need some advice on selecting a statistics package for Python. I've done quite a lot of searching, but I'm not sure I have everything right, specifically regarding the differences between statsmodels and scipy.stats.
One thing that I know is that those with the scikits namespace are specific "branches" of scipy, and that what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python?
Thanks.
--EDIT--
I changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.
|
Python statistics package: difference between statsmodel and scipy.stats
| -0.066568 | 0 | 0 | 16,292 |
14,573,728 |
2013-01-29T00:28:00.000
| 5 | 0 | 0 | 0 |
python,scipy,scikits,statsmodels
| 14,575,672 | 3 | false | 0 | 0 |
I try to use pandas/statsmodels/scipy for my work on a day-to-day basis, but sometimes those packages come up a bit short (LOESS, anybody?). The problem with the RPy module is (last I checked, at least) that it wants a specific version of R that isn't current---my R installation is 2.16 (I think) and RPy wanted 2.14. So either you have to have two parallel installations of R, or you have to downgrade. (If you don't have R installed, then you can just install the correct version of R and use RPy.)
So when I need something that isn't in pandas/statsmodels/scipy I write R scripts, and run them with the subprocess module. This lets me interact with R as little as possible (which I really don't like programming in), but I can still leverage all the stuff that R has that the Python packages don't.
The lesson is that there isn't ever one solution to any problem---you have to assemble a whole bunch of parts that are all useful to you (and maybe write some of your own), in a way that you understand, to solve problems. (R aficionados will disagree, of course!)
| 3 | 26 | 1 |
I need some advice on selecting a statistics package for Python. I've done quite a lot of searching, but I'm not sure I have everything right, specifically regarding the differences between statsmodels and scipy.stats.
One thing that I know is that those with the scikits namespace are specific "branches" of scipy, and that what used to be scikits.statsmodels is now called statsmodels. On the other hand there is also scipy.stats. What are the differences between the two, and which one is the statistics package for Python?
Thanks.
--EDIT--
I changed the title because some answers are not really related to the question, and I suppose that's because the title is not clear enough.
|
Python statistics package: difference between statsmodel and scipy.stats
| 0.321513 | 0 | 0 | 16,292 |
14,574,836 |
2013-01-29T02:46:00.000
| 2 | 0 | 1 | 0 |
python,scheme
| 14,574,870 | 2 | true | 0 | 0 |
In Scheme terms it does not make sense to refer to pairs as two-element tuples because that would imply that there's such a thing as a three-element tuple or a four-element tuple in Scheme, but there's not.
That said the closest Python concept to a Scheme pair would indeed be a two-element tuple. A list of pairs is definitely not the same as a list of lists.
Oh and to answer the question you implied in your title:
In Scheme a list is either the empty list (()) or a pair whose second element is a list. So every list is a pair, but some pairs aren't lists. For example the pair (1 . (2 . ())) is a list (more commonly written as (1 2)), but the pair (1 . 2) is not a list because 2 is not a list.
None of this applies to Python. Python lists are growable arrays - not linked lists made out of pairs/tuples.
| 1 | 0 | 0 |
In Scheme, if you have a list of pairs, like:
((4 . 7) (4 . 9))
isn't this basically a list of two-element tuples? So if you were to write this in Python, would it be like:
[[4, 7], [4,9]] or [(4, 7), (4,9)]?
I want it to be as close as possible to Python. Or would creating a class be even closer?
|
Difference between pair in scheme and tuple in python?
| 1.2 | 0 | 0 | 1,074 |
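The correspondence from the accepted answer in a few lines, mirroring pairs with two-element tuples (a sketch, not idiomatic Python):

```python
# Scheme's (1 . 2) -- a pair that is not a list:
pair = (1, 2)

# Scheme's (1 2), i.e. (1 . (2 . ())), as nested pairs:
scheme_list = (1, (2, ()))

# The original ((4 . 7) (4 . 9)) rendered the same way:
list_of_pairs = ((4, 7), ((4, 9), ()))
```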
14,575,161 |
2013-01-29T03:24:00.000
| 1 | 1 | 0 | 0 |
python,gevent,pyro
| 18,750,345 | 1 | true | 0 | 0 |
I use gevent.spawn(daemon.requestLoop). I can't say more without knowing more about the specifics.
| 1 | 1 | 0 |
Is it possible to use Pyro and gevent together? How would I go about doing this?
Pyro wants to have its own event loop, which underneath probably uses epoll etc. I am having trouble reconciling the two.
Help would be appreciated.
|
How can I use Pyro with gevent?
| 1.2 | 0 | 0 | 304 |
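A hedged sketch of that suggestion with the Pyro4 API (untested across version combinations; whether to monkey-patch depends on your setup):

```python
from gevent import monkey; monkey.patch_all()
import gevent
import Pyro4

class Echo(object):
    def ping(self):
        return 'pong'

daemon = Pyro4.Daemon()
uri = daemon.register(Echo())
print(uri)

# Run Pyro's event loop inside a greenlet instead of blocking the process.
server = gevent.spawn(daemon.requestLoop)
server.join()
```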
14,577,790 |
2013-01-29T07:31:00.000
| 0 | 0 | 1 | 0 |
python,set
| 14,577,827 | 2 | false | 0 | 0 |
I don't know of an arbitrary limit on the number of items in a set. More than likely, the limit is tied to the available memory.
| 1 | 2 | 0 |
I am trying to use a Python set as a filter for ids from a MySQL table.
The Python set stores all the ids to filter (about 30,000 right now); this number will grow slowly over time, and I am concerned about the maximum capacity of a Python set. Is there a limit to the number of elements it can contain?
|
Is there a limit to the number of values that a python set can contain?
| 0 | 1 | 0 | 2,460 |
14,580,684 |
2013-01-29T10:25:00.000
| 1 | 0 | 0 | 0 |
python,pandas,matplotlib,jupyter-notebook
| 14,600,682 | 1 | true | 0 | 0 |
Ok, if you go that route, this answer stackoverflow.com/a/5314808/243434 on how to capture matplotlib figures as inline PNGs may help – @crewbum
To prevent duplication of plots, try running with pylab disabled (double-check your config files and the command line). – @crewbum
--> this last requires a restart of the notebook: ipython notebook --pylab (NB no inline)
| 1 | 4 | 1 |
I have an IPython Notebook that is using Pandas to back-test a rule-based trading system.
I have a function that accepts various scalars and functions as parameters and outputs a stats pack as some tables and a couple of plots.
For automation, I want to be able to format this nicely into a "page" and then call the function in a loop while varying the inputs and have it output a number of pages for comparison, all from a single notebook cell.
The approach I am taking is to create IpyTables and then call _repr_html_(), building up the HTML output along the way so that I can eventually return it from the function that runs the loop.
How can I capture the output of the plots this way - matplotlib subplot objects don't seem to implement _repr_html_()?
Feel free to suggest another approach entirely that you think might equally solve the problem.
TIA
|
How to grab matplotlib plot as html in ipython notebook?
| 1.2 | 0 | 0 | 2,790 |
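A hedged sketch of the inline-PNG capture that answer points to (Python 2 StringIO; on Python 3 you would use io.BytesIO):

```python
import base64
from cStringIO import StringIO
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 4, 9, 16])

buf = StringIO()
fig.savefig(buf, format='png')
img = '<img src="data:image/png;base64,%s"/>' % base64.b64encode(buf.getvalue())
# Append img to the HTML you are building, alongside the table HTML.
```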
14,582,415 |
2013-01-29T12:02:00.000
| 0 | 0 | 1 | 0 |
python,tornado
| 44,468,620 | 3 | false | 0 | 0 |
@tornado.web.asynchronous is essentially a just a marker you put on a handler method like get() or post() that tells the framework that it shouldn't call finish() automatically when the method returns, because it contains code that is going to set up finish() to be called at a later time.
| 1 | 18 | 0 |
If code doesn't use this decorator, is it non-blocking?
Why is it named asynchronous; does adding the decorator make the code asynchronous?
Why is @tornado.gen always used together with @tornado.web.asynchronous?
|
what does @tornado.web.asynchronous decorator mean?
| 0 | 0 | 0 | 8,567 |
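The classic pattern the decorator enables, as a hedged sketch with the era's callback-style AsyncHTTPClient:

```python
import tornado.httpclient
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # Because of the decorator, returning from get() does NOT finish
        # the request; we finish it ourselves in the callback.
        client = tornado.httpclient.AsyncHTTPClient()
        client.fetch('http://example.com/', callback=self.on_response)

    def on_response(self, response):
        self.write(response.body)
        self.finish()
```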
14,586,171 |
2013-01-29T15:18:00.000
| 1 | 0 | 0 | 1 |
python,matlab,simulink
| 14,650,472 | 1 | false | 0 | 0 |
Yes, it is quite possible. You should take a look at "Real-Time Testing" document which you can find in your dSPACE installation directory.
| 1 | 1 | 0 |
I hope someone can help us.
We are using a dSpace 1103 out of Simulink/Matlab and ControlDesk.
What I would like to know is: is it possible to use Python in ControlDesk to transfer data into the dSPACE from the network? I mean, write a UDP listener in Python and use that script to update variables inside the Simulink/Matlab model?
Or is there any other good way to transfer data from a program into ControlDesk such that the changes are sent to dSPACE?
Another question: how long does it normally take, after I change a variable in ControlDesk, for the change to be applied inside dSPACE (1-2 ms)?
Is this completely stochastic, or more or less a constant value?
Thanks a lot.
|
Python and ControlDesk interaction
| 0.197375 | 0 | 0 | 3,445 |
14,587,135 |
2013-01-29T16:08:00.000
| 0 | 1 | 0 | 1 |
python,unix,ssh,autosys
| 18,859,497 | 2 | false | 0 | 0 |
After reading your comment on the first answer, you might want to create a bash script with the bash path as the interpreter line, followed by the autosys commands.
This will create a bash shell and run the commands from the script in that shell.
Again, if you are using autosys commands in the shell, you should set the autosys environment up for the user before running any autosys commands.
| 1 | 0 | 0 |
I would like to achieve the following things:
A given file contains a job list which I need to execute one by one on a remote server using SSH APIs, storing the results.
When I try to call the following command directly on the remote server using PuTTY it executes successfully, but when I try to execute it through Python SSH programming it says it can't find autosys.ksh.
autosys.ksh autorep -J JOB_NAME
Any ideas? Please help. Thanks in advance.
|
How to call .ksh file as part of Unix command through ssh in Python
| 0 | 0 | 0 | 1,343 |
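One hedged way to apply the set-the-environment-up advice from Python with paramiko (host, credentials, and the profile path are placeholders):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remote-host', username='user', password='secret')

# Non-interactive SSH sessions usually skip the login profile, which is
# often why autosys.ksh "can't be found"; source the profile explicitly.
cmd = '. ~/.profile && autosys.ksh autorep -J JOB_NAME'
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read())
print(stderr.read())
client.close()
```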
14,592,328 |
2013-01-29T21:08:00.000
| 0 | 0 | 0 | 0 |
python,linux,excel,view,protected
| 14,592,481 | 1 | false | 0 | 0 |
Figured this out. Just used the for loop to keep a running total. Sorry for the wasted question.
| 1 | 0 | 0 |
No code examples here. Just running into an issue with Microsoft Excel 2010 where I have a python script on linux that pulls data from csv files, pushes it into excel, and emails that file to a certain email address as an attachment.
My problem is that I'm using formulas in my Excel file, and when it first opens up it goes into "Protected View". My formulas don't load until after I click "Enable Editing". Is there any way to get my numbers to show up even if Protected Mode is on?
|
Protected View in Microsoft Excel 2010 and Python
| 0 | 1 | 0 | 853 |
14,592,390 |
2013-01-29T21:12:00.000
| 4 | 1 | 0 | 1 |
python,linux,bash,user-interface,command-line-interface
| 14,592,451 | 5 | false | 0 | 0 |
It can check the value of $DISPLAY to see whether or not it's running under X11, and $(tty) to see whether it's running on an interactive terminal. if [[ $DISPLAY ]] && ! tty; then chances are good you'd want to display a GUI popup.
| 1 | 4 | 0 |
I want to do the following:
If the bash/python script is launched from a terminal, it shall do something such as printing an error message. If the script is launched from a GUI session, like double-clicking from a file browser, it shall do something else, e.g. display a GUI message box.
|
How can Linux program, e.g. bash or python script, know how it was started: from command line or interactive GUI?
| 0.158649 | 0 | 0 | 788 |
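The same checks expressed in Python (a sketch; the GUI branch is left as a stub):

```python
import os
import sys

if sys.stdout.isatty():
    # Started from an interactive terminal: plain text is fine.
    sys.stderr.write('error: something went wrong\n')
elif os.environ.get('DISPLAY'):
    # No terminal but an X11 display: likely launched from the GUI,
    # so show a dialog instead (e.g. via Tkinter).
    pass
```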
14,592,874 |
2013-01-29T21:44:00.000
| 2 | 1 | 0 | 0 |
python,twitter
| 14,593,070 | 1 | true | 0 | 0 |
I think you should use an id parameter in your URL: max_id lets you fetch pages of tweets older than a given id (since_id does the opposite, returning only newer ones). So, for the next page, you should set the max_id parameter from the last id on your current page.
| 1 | 0 | 0 |
So I'm trying to run a search query through the Twitter API in Python. I can get it to return up to 100 results using the "count" parameter. Unfortunately, version 1.1 doesn't seem to have the "page" parameter that was present in 1.0. Is there some sort of alternative for 1.1? Or, if not, does anyone have any suggestions for alternative ways to get a decent amount of tweets returned for a subject.
Thanks.
Update with solution:
Thanks to Ersin below.
I queried as I normally would for a page, and when it returned I would check the id of the oldest tweet. I'd then use that as the max_id in the next URL.
|
Returning more than one page in Python Twitter search
| 1.2 | 0 | 1 | 297 |
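A hedged sketch of that loop; search_page is a hypothetical wrapper around however you call the v1.1 search endpoint:

```python
def fetch_all(query, search_page, pages=10):
    """Page backwards through search results using max_id."""
    tweets, max_id = [], None
    for _ in range(pages):
        params = {'q': query, 'count': 100}
        if max_id is not None:
            params['max_id'] = max_id
        batch = search_page(params)  # hypothetical API call
        if not batch:
            break
        tweets.extend(batch)
        # Next page: everything strictly older than the oldest tweet seen.
        max_id = min(t['id'] for t in batch) - 1
    return tweets
```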
14,592,879 |
2013-01-29T21:44:00.000
| 1 | 1 | 0 | 1 |
python,python-idle
| 62,436,025 | 2 | false | 0 | 0 |
Your function keys are locked, I think.
Function keys can be unlocked with the Fn key + Esc.
Then F5 will work without any issue.
| 2 | 4 | 0 |
I cannot run any script by pressing F5 or selecting run from the menus in IDLE. It stopped working suddenly. No errors are coughed up. IDLE simply does nothing at all.
Tried reinstalling python to no effect.
Cannot run even the simplest script.
Thank you for any help or suggestions you have.
Running Python 2.6.5 on windows 7.
Could not resolve the problem with idle. I have switched to using pyDev in Aptana Studio 3.
|
IDLE no longer runs any script on pressing F5
| 0.099668 | 0 | 0 | 7,570 |
14,592,879 |
2013-01-29T21:44:00.000
| 1 | 1 | 0 | 1 |
python,python-idle
| 48,695,999 | 2 | false | 0 | 0 |
I am using a Dell laptop, and ran into this issue. I found that if I pressed Function + F5, the program would run.
On my laptop keyboard, function-key items are in blue (main functions in white). The Esc (escape) key has a blue lock with 'Fn' on it. I pressed Fn + Esc, and it unlocked my function keys. I can now run a program in the editor by only pressing F5.
Note: Running Python 3 - but I do not think this is an issue with IDLE or Python - I think this is a keyboard issue.
| 2 | 4 | 0 |
I cannot run any script by pressing F5 or selecting run from the menus in IDLE. It stopped working suddenly. No errors are coughed up. IDLE simply does nothing at all.
Tried reinstalling python to no effect.
Cannot run even the simplest script.
Thank you for any help or suggestions you have.
Running Python 2.6.5 on windows 7.
Could not resolve the problem with idle. I have switched to using pyDev in Aptana Studio 3.
|
IDLE no longer runs any script on pressing F5
| 0.099668 | 0 | 0 | 7,570 |
14,595,855 |
2013-01-30T02:46:00.000
| 2 | 0 | 1 | 0 |
python,regex
| 14,595,888 | 3 | false | 0 | 0 |
They are skipping because your regular expression is consuming two characters: [^\ ] and {. You need to use a zero-width negative lookbehind for the preceding space so that it is not consumed: (?<!\s)\{. Then you can just replace the match with " {", without the lambda hassle.
| 1 | 1 | 0 |
I have the string aa{{{a {{ {aaa{ that I would like to translate to aa { { {a { { {aaa {. Basically, every { must have a space character before it.
The regular expression substitution I am currently using is: re.sub(r'[^\ ]{', lambda x:x.group(0)[0]+' {', test_case)
The result from the function is: aa {{ {a { { {aaa { (Close, but there is a {{ in the string)
My method performs very well on section like a{a{a. However if two { characters are together like a{{a it only seems to operate on the first { and completely neglect the following {.
A clearer example is a large series of {{{{{{{{{{{{. My regex substitution returns: { {{ {{ {{ {{ {{ {, which clearly skips over every other character given tightly nested {.
Why are they skipping? Any help to untangle this confusion would be greatly appreciated!
P.S. I am sorry to everyone out there who has the strong desire to close all the opened curly braces.
|
Confusion with re.sub
| 0.132549 | 0 | 0 | 176 |
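With the zero-width lookbehind, the skipping goes away because nothing besides the brace itself is consumed:

```python
import re

test_case = 'aa{{{a {{ {aaa{'
print(re.sub(r'(?<!\s)\{', ' {', test_case))
# aa { { {a { { {aaa {
```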
14,596,057 |
2013-01-30T03:12:00.000
| 2 | 0 | 1 | 0 |
python,package,setuptools,distutils,distutils2
| 14,596,192 | 3 | false | 0 | 0 |
The .py extension is only necessary when you want to import the module, AFAICT. Remember that pip, easy_install, and the like are simply executable files with the shebang at the top. The only OS that relies on file extensions for execution purposes is Windows.
| 2 | 7 | 0 |
If I'm writing a package in Python for distribution and I put some scripts to be regarded as executables in the scripts argument of setup.py, is there a standard way to make them not have the *.py extension? Is it sufficient to just make files that do not have the .py extension, or is anything extra needed? Will removing the .py from the filename break any of the functionality associated with Python tools like setup.py/distutils etc.? Thanks.
|
Distributing Python scripts without .py extension
| 0.132549 | 0 | 0 | 2,536 |
14,596,057 |
2013-01-30T03:12:00.000
| 0 | 0 | 1 | 0 |
python,package,setuptools,distutils,distutils2
| 14,596,618 | 3 | false | 0 | 0 |
If the script is meant to be executed from the command line, the .py extension doesn't actually do anything for you. The script will be executed using the Python interpreter under two circumstances:
You explicitly said to do so at the command line: $ python nameofyourscript
You explicitly said to do so by including a shebang at the top of the script pointing to Python. The preferred version of that is #!/usr/bin/env python.
By including a shebang in each of your scripts, you can name the file anything you want.
Without doing one of these things, the script will be executed as a normal script meant for whatever shell you are using.
| 2 | 7 | 0 |
If I'm writing a package in Python for distribution and I put some scripts to be regarded as executables in the scripts argument of setup.py, is there a standard way to make them not have the *.py extension? Is it sufficient to just make files that do not have the .py extension, or is anything extra needed? Will removing the .py from the filename break any of the functionality associated with Python tools like setup.py/distutils etc.? Thanks.
|
Distributing Python scripts without .py extension
| 0 | 0 | 0 | 2,536 |
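A minimal sketch of the packaging side: the installed script keeps whatever name you give it, and the shebang (e.g. #!/usr/bin/env python as the script's first line) makes it runnable. Names here are placeholders:

```python
# setup.py
from distutils.core import setup

setup(
    name='mytool',
    version='0.1',
    scripts=['bin/mytool'],  # no .py extension; just a file with a shebang
)
```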
14,596,416 |
2013-01-30T03:54:00.000
| 3 | 0 | 0 | 0 |
python,z3,pickle
| 14,605,652 | 1 | false | 0 | 0 |
Yes, the Z3 Python API is a wrapper over the Z3 shared library (i.e., a DLL on Windows).
It is feasible to add methods __getstate__() and __setstate(state)__ to the Z3 Python objects that wrap formulas, models, etc. If these methods are available, the Python pickler will use them.
So, in principle, this functionality can be added. That is, we can add to the Z3 API (the C API) procedures for encoding/decoding Z3 expressions/formulas and models into byte streams. These APIs are then used to implement __getstate__() and __setstate(state)__. There are some details:
Sharing: suppose we have a Python list of Z3 expressions, and these expressions share a lot of sub-expressions. The Python pickler would invoke __getstate__() for each element of the list, and Z3 would encode the shared sub-expressions multiple times. The problem is that, for Python, each Z3 expression is a "blob", and the Z3 encoder/serializer does not know that these different expressions are part of a bigger Python data-structure. So, users should be careful when pickling a Python object that contains references to many different Z3 objects. Note that, in some cases, it is easy to fix this issue. For example, we can use a Z3 ASTVector instead of a Python list of Z3 expressions. Then, Z3 can encode the ASTVector as one big "blob" where each shared sub-expression is encoded only once.
Z3 objects, such as expressions and models, are associated with a context. Note that most procedures in the Python API have an extra ctx parameter. For example, Int('x') creates an integer variable named x in the default context, and Int('x', ctx) creates it in the context ctx. Multiple contexts are useful because we can concurrently access them from different execution threads. When we unpickle a Z3 object, we have to decide in which context we will store it. A possibility is to set a global parameter that specifies the context to be used. If it is not set, then the default context is used. This is not a perfect solution. Suppose we have a Python data-structure that contains references to Z3 expressions from different contexts, and we pickle it. Then, when we unpickle the data, all expressions would be added to the same Z3 context. Perhaps, this is not a big problem, since most users use only one Z3 context, and the ones that use multiple contexts usually do not store references to expressions from different contexts in the same Python object.
Please feel free to suggest alternative solutions. None of us in the Z3 team is a Python expert.
| 1 | 3 | 0 |
Is support for pickling (or serializing) Z3 objects being considered for future releases? I am currently trying to pickle the model produced by the Z3 Python API to a file, and I get the error message ctypes objects containing pointers cannot be pickled, which I take to mean that the Python API is merely a wrapper around the Z3 DLL.
Or is there a better way to save the objects produced by the Z3 Python API to files for future use?
Thanks!
|
Pickling Z3 Python Objects
| 0.53705 | 0 | 0 | 590 |
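For reference, a generic illustration of the hooks discussed in the answer, on an ordinary Python class (nothing Z3-specific):

```python
import pickle

class Wrapped(object):
    def __init__(self, text):
        self.text = text

    def __getstate__(self):
        # Called by pickle.dumps: return something picklable.
        return {'encoded': self.text.encode('utf-8')}

    def __setstate__(self, state):
        # Called by pickle.loads: rebuild the instance from that state.
        self.text = state['encoded'].decode('utf-8')

copy = pickle.loads(pickle.dumps(Wrapped(u'hello')))
print(copy.text)  # hello
```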
14,599,820 |
2013-01-30T08:44:00.000
| 0 | 1 | 0 | 0 |
python,sikuli
| 31,121,679 | 3 | false | 0 | 0 |
Do you want to run the file as an executable script or use its contents?
As an executable script, make sure that the file is a valid script and will execute something when called, then use subprocess.Popen() from another .py file to execute that file.
To use the module's contents make sure the file is on the PYTHONPATH and use import name and everything within name will now be available for use.
| 1 | 1 | 0 |
I have a Python script (name.py) written in a separate file and now I want to execute that code using Sikuli.
I have tried
openApp but it's not working.
It's possible I made a mistake somewhere, but I'm still looking for working logic.
|
How to execute python script file (filename.py) using sikuli
| 0 | 0 | 0 | 3,069 |
14,607,461 |
2013-01-30T15:23:00.000
| 6 | 0 | 1 | 0 |
python,sublimetext
| 14,609,698 | 1 | false | 0 | 0 |
SublimeText clears auto-indent whitespace automatically if the trim_automatic_white_space setting is enabled (default). This only affects blank lines.
Python does not care about whitespace on blank lines; blank lines do not need to match the indentation of the rest of the code. However, if you copy lines to the python interpreter, empty lines signal the end of a block and that block is then compiled; this is different from running a saved file directly.
If you see indentation errors when running your Python file, you are mixing tabs and spaces elsewhere in your code. Run your code with python -tt modulename.py to test. For Python code, you really want to use spaces only (convert tabs to spaces; set Sublime to use spaces for indentation).
| 1 | 2 | 0 |
I can't work properly with Python because Sublime Text 2 is deleting indentation. If I add to and edit another part of the document, ST again removes all tabulations. Obviously Python throws the error: IndentationError: unexpected indent.
How can I adjust this?
|
Sublime text is deleting tabulations on a document continuously
| 1 | 0 | 0 | 752 |
14,611,236 |
2013-01-30T18:34:00.000
| 3 | 0 | 1 | 0 |
import,python-2.7,pydev,nltk
| 17,776,071 | 2 | false | 0 | 0 |
@TheGT seems to be on the correct path, though I found the instructions a little confusing. My solution:
Project->Properties->PYDEV-PYTHONPATH->External Libraries
Add source folder (button)
/Library/Python/2.7/site-packages/nltk-2.0.4-py2.7.egg
Obviously, your path, version, etc... could be different.
Here's what seems odd.
There's a button to add zip/jar/egg and that doesn't want to work correctly with the nltk...directory...egg. The nltk egg behaves like a directory in the chooser (i.e. continues to drill down rather than return).
On the other hand, the source folder button does allow you to choose a folder... so I chose the egg and that seems to work.
It seems like the nltk egg is not configured correctly for OSX. And, depending on how it is accessed, it can behave like a folder or a final destination.
NOTE: Adding the nltk egg into the external libraries path of your project makes the error go away. But adding the egg into preferences>PyDev>Interpreter does not appear to resolve the problem (on it's own).
| 2 | 1 | 0 |
I want to get rid of this error message, and I want to have the benefits of auto-completion and suggestions. PyDev obviously does find nltk, because when running the code from inside the IDE it works, not only from the console.
Surely someone knows why I get this "unresolved import" error message when, on the other hand, clicking "run" works perfectly well.
|
PyDev: "Unresolved import nltk" When running, pydev imports it
| 0.291313 | 0 | 0 | 2,954 |
14,611,236 |
2013-01-30T18:34:00.000
| 1 | 0 | 1 | 0 |
import,python-2.7,pydev,nltk
| 15,373,662 | 2 | false | 0 | 0 |
I faced the exact same error when I was trying to use nltk in my project. I did two things to make the unresolved-import error go away.
I added the setupctools**.egg file (the file that is used to install nltk in mac/*nix systems) as an external library
[Project->Properties->PYDEV-PYTHONPATH->External Libraries]
I am using Eclipse Indigo, and Python 2.6.1 on my mac btw.
I restarted Eclipse
Bam! - the error goes away.
Although, the error is not there anymore, I would like to know why Eclipse was behaving this way. The strange thing to note was that when I tried to run the program, the program did run successfully, even though eclipse marked "import nltk" as unresolved import.
| 2 | 1 | 0 |
I want to get rid of this error message and I want to have the benefits of auto completion and suggestions. PyDev obviously does find nltk, because when running it from inside the IDE it works. Not only from console.
Surely someone needs to know why I got this "unresolved import" error message but on the other way when clicking on "run" it works perfectly well.
|
PyDev: "Unresolved import nltk" When running, pydev imports it
| 0.099668 | 0 | 0 | 2,954 |
14,612,294 |
2013-01-30T19:41:00.000
| 8 | 0 | 0 | 0 |
python,firefox,selenium
| 14,612,967 | 1 | true | 0 | 0 |
On your mac, have you looked in /var/folders/? You might find a bunch of anonymous*webdriver-profile folders a few levels down. (mine appear in /var/folders/sm/jngvd6s57ldb916b7h25d57r0000dn/T/)
Also, are you using driver.close() or driver.quit()? I thought driver.quit() cleans up the temp folder, but I could be wrong.
| 1 | 5 | 0 |
I'm running some fairly simple tests using browsermob and selenium to open firefox browsers and navigate through some random pages. Each firefox instance is supposed to be independent, and none of them share any cookies or cache. On my Mac OS X machine, this works quite nicely. The browsers open, navigate through a bunch of pages and then close.
On my Windows machine, however, even after the firefox browser closes, the tmp** folders remain, and after leaving the test going for a while, they begin to take up a lot of space. I was under the impression that each newly spawned browser would have its own profile, which it clearly does, but that it would delete the profile it made when the browser closes.
Is there an explicit selenium command I'm missing to enforce this behaviour?
Additionally, I've noticed that some of the tmp folders are showing up in AppData/Local/Temp/2 and that many others are showing up in the folder where I started running the script...
|
Selenium not deleting profiles on browser close
| 1.2 | 0 | 1 | 3,222 |
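The distinction in code (a sketch; quit() is the call that is supposed to tear the temporary profile down):

```python
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get('http://example.com/')
    # ... navigate through the random pages ...
finally:
    driver.quit()  # quit(), not close(): ends the session and cleans up
```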
14,612,802 |
2013-01-30T20:13:00.000
| 0 | 0 | 0 | 0 |
python,gtk
| 14,612,976 | 1 | false | 0 | 1 |
Are you using a theme? Check the panel.rc file and look for bg[ACTIVE]. Change that value and the button should change colour.
| 1 | 0 | 0 |
I have a Gtk IconView. Currently, selected items are drawn with a different background color (this is the normal behavior). However, I'd like to be able to distinguish between "selected" and "active" items by using a different background color for the "active" item. How can I achieve that?
|
How to distinguish active and selected item in Gtk IconView?
| 0 | 0 | 0 | 127 |
14,614,196 |
2013-01-30T21:32:00.000
| 2 | 0 | 0 | 1 |
python,unix,cron
| 14,614,330 | 2 | false | 0 | 0 |
Have the program run every 5 hours -- I'm not too familiar with system-level timing operations.
For *nix, cron is the default solution to accomplish this.
Have the program efficiently run in the background -- I want these 'updates' to occur without the user knowing.
Using cron, the program will be run in the background on your server. The user shouldn't be adversely affected by it. If the user loads a page viewing mp3s you have scraped, and in the midst of your script running/saving data to the database the user hits refresh, the new mp3s might show up; I don't know if this is what you had in mind by "without the user knowing".
Have the program activate on startup -- I know how I would set this up as a user, but I'm not sure how to add such a configuration to the python file, if that's even possible. Keep in mind that this is going to be a simple .py script -- I'm not compiling it into an executable.
I'm pretty sure cron entries persist across reboots (I'm not 100% sure); make sure the cron daemon is started on boot.
| 1 | 2 | 0 |
I have a fairly light script that I want to run periodically in the background every 5 hours or so. The script runs through a few different websites, scans them for new material, and either grabs .mp3 files from them or likes songs on youtube based on their content. There are a few things I want to achieve with this program that I am unsure of how to attain:
Have the program run every 5 hours -- I'm not too familiar with system-level timing operations.
Have the program efficiently run in the background -- I want these 'updates' to occur without the user knowing.
Have the program activate on startup -- I know how I would set this up as a user, but I'm not sure how to add such a configuration to the python file, if that's even possible. Keep in mind that this is going to be a simple .py script -- I'm not compiling it into an executable.
The program is designed mainly with OSX and other Unix based systems in mind. Any advice on achieving some of these goals?
|
How to run a python background process periodically
| 0.197375 | 0 | 0 | 2,035 |
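For completeness, the matching crontab entry would look something like this (paths are placeholders); `0 */5 * * *` fires at minute 0 of every fifth hour:

```
0 */5 * * * /usr/bin/python /path/to/updater.py >> /tmp/updater.log 2>&1
```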
14,614,966 |
2013-01-30T22:20:00.000
| -2 | 0 | 0 | 0 |
python,numpy,scipy,signal-processing
| 15,783,554 | 6 | false | 0 | 0 |
SciPy will support any filter. Just calculate the impulse response and use any of the appropriate scipy.signal filter/convolve functions.
| 1 | 7 | 1 |
SciPy/Numpy seems to support many filters, but not the root-raised cosine filter. Is there a trick to easily create one rather than calculating the transfer function? An approximation would be fine as well.
|
Easy way to implement a Root Raised Cosine (RRC) filter using Python & Numpy
| -0.066568 | 0 | 0 | 13,131 |
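A hedged sketch of the calculate-the-impulse-response advice: the standard root-raised-cosine formula evaluated on a tap grid, with its two singular points special-cased (the unit-energy normalization is an assumption):

```python
import numpy as np

def rrc_taps(beta, sps, num_taps):
    """Root-raised-cosine impulse response.

    beta: roll-off factor (0 < beta <= 1)
    sps: samples per symbol
    num_taps: filter length (odd keeps it symmetric about t = 0)
    """
    t = (np.arange(num_taps) - (num_taps - 1) / 2.0) / float(sps)
    h = np.empty(num_taps)
    for i, ti in enumerate(t):
        if abs(ti) < 1e-10:
            h[i] = 1.0 - beta + 4.0 * beta / np.pi
        elif abs(abs(ti) - 1.0 / (4.0 * beta)) < 1e-10:
            h[i] = (beta / np.sqrt(2.0)) * (
                (1.0 + 2.0 / np.pi) * np.sin(np.pi / (4.0 * beta)) +
                (1.0 - 2.0 / np.pi) * np.cos(np.pi / (4.0 * beta)))
        else:
            h[i] = ((np.sin(np.pi * ti * (1.0 - beta)) +
                     4.0 * beta * ti * np.cos(np.pi * ti * (1.0 + beta))) /
                    (np.pi * ti * (1.0 - (4.0 * beta * ti) ** 2)))
    return h / np.sqrt(np.sum(h ** 2))

taps = rrc_taps(beta=0.35, sps=8, num_taps=101)
# Apply with np.convolve(signal, taps) or scipy.signal.lfilter(taps, 1, signal).
```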
14,617,604 |
2013-01-31T02:42:00.000
| 0 | 0 | 1 | 0 |
java,math,python-2.7,rounding,parentheses
| 14,617,905 | 1 | true | 0 | 0 |
You are better off doing the multiply before the divide, since the result of a divide is less likely to be representable exactly, while an overflow during this multiplication is improbable.
| 1 | 1 | 0 |
I was recently debugging a program that consistently returned errors just a few decimal places off. It turns out the error was on the very line I believed I didn't need to check: 999999 * (.18 / 12).
I knew the parentheses were not necessary because mathematically the expression should evaluate to the same answer regardless of their presence, but I included them just for clarity. When I typed the statement into the Python interpreter, however, it returned 14999.98499999999, while the correct answer should be 14999.985. When I removed the parentheses and typed 999999 * .18 / 12, I got the correct answer (14999.985). Just to make sure it wasn't just a Python thing, I tried it with Java too: same answer.
Why is this? I understand that computers can't store fractions as exact values, so some degree of error is to be expected, but what's the difference in the way the computer evaluates those two statements?
TL;DR: 999999 * (.18 / 12) gives me a different answer than 999999 * .18 / 12 when mathematically, they should both evaluate to 14999.985. Why?
|
Using parenthesis causes decimal errors on Python and Java
| 1.2 | 0 | 0 | 123 |
14,617,983 |
2013-01-31T03:31:00.000
| 0 | 0 | 0 | 1 |
python,windows,delphi,winapi,filesystems
| 14,619,854 | 2 | false | 0 | 0 |
You can set up a filter driver which can act in two ways: (1) modify the flags when the file is opened, and (2) capture the data when it's written to the file and save a copy of the data elsewhere.
This approach is much more lightweight and efficient than the volume shadow copy service mentioned in the comments; however, it requires having a filter driver. Several such drivers exist on the market (i.e. products which include a driver and let you write business logic in user mode), yet they are costly and can be overkill in your case. Still, if you need this for private use only, contact me privately for a license for our CallbackFilter.
Update: if you want to let the writer open the file which has been already opened, then a filter which will modify flags when the file is being opened is your only option.
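For completeness, this is how a reader normally requests maximal sharing via pywin32 (the path is hypothetical). Note this only affects opens that happen after yours; it cannot override an exclusive lock the writer already holds, which is why the filter-driver route exists:

```python
import win32con
import win32file

# Open read-only while allowing everyone else to read, write, or delete.
handle = win32file.CreateFile(
    r'\\server\share\data.dat',
    win32con.GENERIC_READ,
    win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE | win32con.FILE_SHARE_DELETE,
    None, win32con.OPEN_EXISTING, 0, None)
err, data = win32file.ReadFile(handle, 4096)   # read the first 4 KB
handle.Close()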
| 1 | 2 | 0 |
I have a file that I want to read. The file may at any time be overwritten by another process. I do not want to block that writing. I am prepared to manage corruption to the data that I read, but do not want my reading to be in any way change the behaviour of the writing process.
The process that is writing the file is a delphi program running locally on the server. It opens the file using fmCreate. fmCreate tries to open the file exclusively and fails if there are any other handles on the file.
I am reading the file from a python script that accesses the file remotely across our network.
I am interested in whether there is a solution, independent of whether it is supported by python or delphi. I want to know if there is any way of achieving this under windows without modifying the writing program.
Edit: To reiterate, this is not a duplicate. The other question was trying to get read access to a file that is being written to. I want the writer to have access to a file that I have open for reading. These are different questions (although I fear the answer will be similar, that it can't be done.)
|
Reading a windows file without preventing another process from writing to it
| 0 | 0 | 0 | 1,040 |
14,624,421 |
2013-01-31T11:16:00.000
| 0 | 1 | 0 | 1 |
python,post,push-notification,bitbucket,githooks
| 14,627,911 | 1 | false | 0 | 0 |
Select the administration menu for the repository (the gear symbol), then Services. There you can set up integration with external services, such as email or twitter.
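On the receiving end, a minimal sketch of an endpoint you could point such a service at during development (Flask-based and entirely hypothetical; inspect the incoming payload before relying on specific field names, as they vary by service and version):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/someurl', methods=['POST'])
def commit_hook():
    # Services may POST JSON or form-encoded data; print it to see the shape.
    payload = request.get_json(silent=True) or dict(request.form)
    print(payload)                 # dig the commit messages out of this
    return '', 204

app.run(port=8000)
```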
| 1 | 0 | 0 |
I want to fetch the commit message from my bitbucket repository each time a user does a push operation.
How can I do that?
I am in a development environment, so is there any way I can post to localhost/someurl for each commit to my repository?
Else suggest other ways by which I can achieve this.
Thanks in advance for help.
|
get bitbucket commit message for each push
| 0 | 0 | 0 | 258 |
14,627,334 |
2013-01-31T13:50:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine
| 14,630,265 | 1 | false | 1 | 0 |
No, there is no automated way for an async URL Fetch to store its data in memcache automatically on completion. You have to do it in your code, but this defeats what you are trying to do.
Also remember that memcache is volatile and its content can be purged at any time.
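A minimal sketch of doing it in code on the Python 2 runtime: cache the result of the fetch, not the RPC object (which isn't picklable). The URL is hypothetical:

```python
from google.appengine.api import memcache, urlfetch

rpc = urlfetch.create_rpc()
urlfetch.make_fetch_call(rpc, 'http://example.com/data')
# ... do other work while the fetch is in flight ...
result = rpc.get_result()                    # blocks until the fetch completes
memcache.set('fetched-data', result.content, time=300)  # may still be evicted
```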
| 1 | 1 | 0 |
Is it possible to make an async url fetch on appengine and to store the rpc object in the memcache?
What I try to do is to start the async url fetch within a task, but I don't want the task to wait until the fetch has finished.
Therefore I thought I would just write it to memcache and access it later from outside the task which created the fetch.
|
Async URL Fetch and Memcache on Appengine
| 0 | 0 | 0 | 150 |
14,631,306 |
2013-01-31T17:09:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine
| 29,110,829 | 2 | false | 1 | 0 |
I had the same question so I dug into the nosegae code, and then into the actual testbed code.
All you need to do is set nosegae_blobstore = True where you're setting up all the other stubs. This sets up a dict-backed blobstore stub.
| 1 | 1 | 0 |
We're using nose with nose-gae for unit testing our controllers and models. We now have code that hits the blobstore and files API. We are having a hard time testing those due to a lack of testing proxies/mocks. Is there a good way to unit test these services, or, lacking unit testing, is there a way to automate acceptance tests for those APIs? TIA.
|
Unit testing GAE Blobstore (with nose)
| 0 | 0 | 0 | 123 |
14,632,862 |
2013-01-31T18:43:00.000
| -1 | 0 | 1 | 0 |
python
| 14,632,922 | 3 | true | 0 | 0 |
Lists are passed by address, so the extra overhead is just a single function parameter (a pointer). I don't think it is noticeable.
You'd have to test it yourself, but I would be surprised if it were significant.
| 3 | 0 | 0 |
In python, if I have a recursive function that modifies a list of integers, and assume the list is large, which is faster to do: keep the list as a global variable, and not pass it as an argument, or pass it as an argument and not make it global?
|
What's faster, recursion or global variables?
| 1.2 | 0 | 0 | 310 |
14,632,862 |
2013-01-31T18:43:00.000
| 1 | 0 | 1 | 0 |
python
| 14,633,172 | 3 | false | 0 | 0 |
For fast and clean code, replace the recursion with an iterative approach using an explicit stack (unless there's an algorithmic alternative), instead of resorting to globals.
| 3 | 0 | 0 |
In python, if I have a recursive function that modifies a list of integers, and assume the list is large, which is faster to do: keep the list as a global variable, and not pass it as an argument, or pass it as an argument and not make it global?
|
What's faster, recursion or global variables?
| 0.066568 | 0 | 0 | 310 |
14,632,862 |
2013-01-31T18:43:00.000
| 0 | 0 | 1 | 0 |
python
| 14,633,280 | 3 | false | 0 | 0 |
If the list is passed by reference and not duplicated in the recursive function, then the performance difference will be negligible, and you should use whichever method will result in more clear and maintainable code. Usually this would be passing the array as a parameter, but not always.
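A quick, informal check (numbers vary by machine and interpreter) confirming that passing the list costs about the same as reading a global, since only a reference changes hands:

```python
import timeit

setup = """
data = list(range(100000))
def with_param(lst):
    return len(lst)
def with_global():
    return len(data)
"""
print(timeit.timeit('with_param(data)', setup=setup))  # pass as argument
print(timeit.timeit('with_global()', setup=setup))     # read as global
```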
| 3 | 0 | 0 |
In python, if I have a recursive function that modifies a list of integers, and assume the list is large, which is faster to do: keep the list as a global variable, and not pass it as an argument, or pass it as an argument and not make it global?
|
What's faster, recursion or global variables?
| 0 | 0 | 0 | 310 |
14,633,062 |
2013-01-31T18:55:00.000
| 0 | 0 | 1 | 0 |
python,rabbitmq,python-multithreading
| 14,633,085 | 5 | false | 0 | 0 |
I don't think you should want to. MQ means asynchronous processing; doing both consuming and producing in the same thread defeats the purpose, in my opinion.
| 2 | 3 | 0 |
can both consuming and publishing be done in one Python thread using RabbitMQ channels?
|
RabbitMQ: can both consuming and publishing be done in one thread?
| 0 | 0 | 0 | 2,232 |
14,633,062 |
2013-01-31T18:55:00.000
| 0 | 0 | 1 | 0 |
python,rabbitmq,python-multithreading
| 14,641,830 | 5 | false | 0 | 0 |
I think the simple answer to your question is yes, but it depends on what you want to do. My guess is that you have a loop consuming from one channel which, after some (small or large) processing, sends the message on to another queue (or exchange) on a different channel; I do not see any problem with that at all. Though it might be preferable to dispatch it to a different thread, it is not necessary.
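A minimal sketch of that consume-then-publish loop with pika's BlockingConnection (assumes a local broker and pre-declared 'in' and 'out' queues; note that basic_consume's argument order changed in pika 1.0):

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = conn.channel()

def on_message(ch, method, properties, body):
    ch.basic_publish(exchange='', routing_key='out', body=body)  # re-publish
    ch.basic_ack(delivery_tag=method.delivery_tag)               # then ack

channel.basic_consume(on_message, queue='in')   # pika < 1.0 argument order
channel.start_consuming()
```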
If you give more details about your process then it might help give a more specific answer.
| 2 | 3 | 0 |
can both consuming and publishing be done in one Python thread using RabbitMQ channels?
|
RabbitMQ: can both consuming and publishing be done in one thread?
| 0 | 0 | 0 | 2,232 |
14,633,329 |
2013-01-31T19:11:00.000
| 1 | 0 | 1 | 0 |
python,gis,census
| 14,633,770 | 3 | false | 0 | 0 |
Well, I got it.
ex = 'Block 2022, Block Group 2, Census Tract 1, Shelby County, Tennessee'
# '47157' is state (47) + county (157), constant here. ex[40:len(ex)-26] is
# the tract number (the text between 'Census Tract ' and ', Shelby County,
# Tennessee'), zero-padded to 4 digits. '0' + ex[24] is the 2-digit group,
# and ex[6:10] is the 4-digit block.
new_id = '47157' + ex[40:len(ex) - 26].zfill(4) + '0' + ex[24] + ex[6:10]
State and county values are constant; block groups only go to one digit (afaik).
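A sturdier version of the same idea, avoiding the hard-coded offsets (hypothetical, and it assumes integer tract numbers; tracts like '1.01' would need extra handling). It continues from the ex string above:

```python
import re

m = re.match(r'Block (\d+), Block Group (\d+), Census Tract (\d+),', ex)
block, group, tract = m.groups()
new_id = '47157' + tract.zfill(4) + group.zfill(2) + block.zfill(4)
```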
| 1 | 2 | 0 |
I've been tasked to dig through census data for things at the block level.
After learning how to navigate AND find what I'm looking for, I hit a snag.
tabblock polygons (block level polygon) have an id consisting of a 15 length string,
ex: '471570001022022'
but the format from the census data is labelled:
'Block 2022, Block Group 2, Census Tract 1, Shelby County, Tennessee'
the block id is formatted:
state-county-tract-group-block, with some leading zeros to make 15 characters.
sscccttttggbbbb
Does anyone know a quick way to get this into a usable format?
I thought I would ask before I spend my time trying to cook up a python script.
Thanks,
gm
|
Reformat census title
| 0.066568 | 0 | 0 | 88 |
14,635,036 |
2013-01-31T20:59:00.000
| 2 | 0 | 0 | 0 |
java,android,python
| 14,635,099 | 2 | false | 0 | 0 |
Try sending a UDP packet to the special broadcast address 255.255.255.255. Every device in the network should receive a copy of that packet (barring firewalls), and you can arrange to have the server reply to the packet with its identity.
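A minimal discovery sketch of the client side in Python terms (the port and probe string are hypothetical; the CherryPy host would run a matching UDP listener that replies with its identity):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # allow broadcast
s.settimeout(2.0)
s.sendto(b'WHO_HAS_SERVER', ('255.255.255.255', 9999))
try:
    data, addr = s.recvfrom(1024)
    print('server found at', addr[0])
except socket.timeout:
    print('no reply')
```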
| 1 | 6 | 0 |
my apologies if this is a trivial question.
I've recently begun doing some android programming and I'm writing a simple app that allows you to use your android device as a controller for your windows PC. Specifically it allows the user to do things like turn off the machine, make it sleep, reboot it, etc. I'm currently using a python library called CherryPy as a server on the windows machine to execute the actual win32api calls to perform the desired function. What I'm not sure about is how to discover (dynamically) which machine on the network is actually hosting the server. Everything is working fine if I hardcode my machine's public IP into the android app, but obviously that is far less than ideal. I've considered having the user manually enter their machine's public IP in the app, but if there's a way to, say, broadcast a quick message to all machines on the WiFi and check for a pre-canned response that my Python server would send out, that'd be wonderful. Is that possible?
Thanks in advance guys.
|
Broadcast a message to all available machines on WiFi
| 0.197375 | 0 | 1 | 1,597 |
14,635,549 |
2013-01-31T21:33:00.000
| 3 | 0 | 1 | 0 |
python,math,vector-graphics
| 14,635,675 | 1 | true | 0 | 0 |
No; the standard is numpy. I wouldn't think of it as overkill; think of it as a very well written and tested library, even if you do just need a small portion of it. All the basic vector & matrix operations are implemented efficiently (dropping down to C and Fortran), which makes it fast and memory efficient. Don't make your own; use numpy.
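For reference, the 2D basics are one-liners in numpy:

```python
import numpy as np

v = np.array([3.0, 4.0])
unit = v / np.linalg.norm(v)                 # normalize
scaled = 2.5 * v                             # scale
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = R.dot(v)                           # rotate 30 degrees CCW
```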
| 1 | 0 | 1 |
I'm about to write my very own scaling, rotation, and normalization functions in python. Is there a convenient way to avoid this? I found NumPy, but it kind of seems like overkill for my little 2D needs.
Are there basic vector operations available in the std python libs?
|
Python vector transformation (normalize, scale, rotate etc.)
| 1.2 | 0 | 0 | 1,480 |
14,635,693 |
2013-01-31T21:42:00.000
| 2 | 0 | 0 | 0 |
python,google-app-engine,mapreduce
| 20,688,782 | 1 | true | 0 | 0 |
I don't think such functionality exists (yet?) in the GAE Mapreduce library.
Depending on the size of your dataset and the type of output required, you can hack your way around it with a small time investment by co-opting the reducer as another output writer. For example, if one of the reducer outputs should go straight back to the datastore and another should go to a file, you could open a file yourself and write the outputs to it. Alternatively, you could serialize and explicitly store the intermediate map results to a temporary datastore using operation.db.Put, and perform separate Map or Reduce jobs on that datastore. Of course, that will end up being more expensive than the first workaround.
In your specific key-value example, I'd suggest writing to a Google Cloud Storage File, and postprocessing it to split it into three files as required. That'll also give you more control over final file names.
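The postprocessing step itself is plain Python; a generic sketch (the file names and the tab-separated key/value format are hypothetical):

```python
outputs = {}
with open('mapreduce_output.txt') as src:
    for line in src:
        key, value = line.rstrip('\n').split('\t', 1)
        if key not in outputs:                          # one file per key
            outputs[key] = open('output_%s.txt' % key, 'w')
        outputs[key].write(value + '\n')
for f in outputs.values():
    f.close()
```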
| 1 | 0 | 1 |
I have a data set which I do multiple mappings on.
Assuming that I have 3 key-value pairs for the reduce function, how do I modify the output such that I have 3 blobfiles, one for each of the key-value pairs?
Do let me know if I can clarify further.
|
GAE MapReduce, How to write Multiple Outputs
| 1.2 | 0 | 0 | 139 |
14,636,918 |
2013-01-31T23:16:00.000
| 1 | 0 | 0 | 0 |
python,numpy,a-star
| 42,075,989 | 4 | false | 0 | 0 |
No, there is no A* search in Numpy.
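That said, a compact implementation in plain Python is short enough to sketch here (a basic grid A* with a Manhattan heuristic; numpy is not required):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D grid: grid[r][c] is 0 (free) or 1 (blocked)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]            # (f, g, node) min-heap
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:                          # reconstruct the path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]):
                ng = g + 1
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None                                   # no path exists
```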
| 1 | 11 | 1 |
I tried searching stackoverflow for the tags [a-star] [and] [python] and [a-star] [and] [numpy], but got nothing. I also googled it, but whether due to the tokenizing or its non-existence, I got nothing.
It's not much harder than your coding-interview tree traversals to implement, but it would be nice to have a correct, efficient implementation for everyone.
Does numpy have A*?
|
A-star search in numpy or python
| 0.049958 | 0 | 0 | 15,316 |
14,637,696 |
2013-02-01T00:25:00.000
| 5 | 0 | 1 | 0 |
python,string,performance,list,coding-style
| 14,637,848 | 6 | true | 0 | 0 |
I think both are OK, but I think that unless speed is a big consideration that max(len(w) for w in words) is the most readable.
When I was looking at them, it took me longer to figure out what len(max(words, key=len)) was doing, and I was still wrong until I thought about it more. Code should be immediately obvious unless there's a good reason for it not to be.
It's clear from the other posts (and my own tests) that the less readable one is faster, but it's not as if either of them is dog slow. And unless the code is on a critical path, it's not worth worrying about.
Ultimately, I think more readable is more Pythonic.
As an aside, this is one of the few cases in which Python 2 is notably faster than Python 3 for the same task.
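For anyone who wants to reproduce the timing (results vary by Python version and data):

```python
import timeit

words = ['pneumonoultramicroscopicsilicovolcanoconiosis', 'cat', 'zebra'] * 1000
print(timeit.timeit(lambda: len(max(words, key=len)), number=1000))
print(timeit.timeit(lambda: max(len(w) for w in words), number=1000))
```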
| 1 | 7 | 0 |
What is the more pythonic way of getting the length of the longest word:
len(max(words, key=len))
Or:
max(len(w) for w in words)
Or.. something else? words is a list of strings.
I am finding I need to do this often and after timing with a few different sample sizes the first way seems to be consistently faster, despite seeming less efficient at face value (the redundancy of len being called twice seems not to matter - does more happen in C code in this form?).
|
Length of longest word in a list
| 1.2 | 0 | 0 | 14,986 |
14,638,799 |
2013-02-01T02:31:00.000
| 0 | 0 | 0 | 0 |
python,django,django-templates,django-views,template-inheritance
| 14,639,666 | 5 | false | 1 | 0 |
I believe separating your upload-related functionality out into separate views is a better way to go about it. That way all your templates (inheriting from base.html) will refer to the appropriate view for uploads.
You can use the HTTP_REFERER header to redirect to the appropriate page from the upload views.
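A minimal sketch of that shape (Django 1.x era; handle_upload and the form field name are hypothetical stand-ins for your existing upload logic):

```python
from django.http import HttpResponseRedirect

def upload_view(request):
    if request.method == 'POST':
        handle_upload(request.FILES['file'])   # your existing upload code
    # send the user back to whichever page the modal was opened from
    return HttpResponseRedirect(request.META.get('HTTP_REFERER', '/'))
```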
| 3 | 1 | 0 |
I have a parent template that I'm using in many parts of the site, called base.html. This template holds a lot of functional components, such as buttons that trigger different forms (inside modal windows) allowing users to upload different kinds of content, etc. I want users to be able to click these buttons from almost any part of the site (from all the templates that inherit from base.html).
I've written a view that handles the main page of the site, HomeView (it renders homepage.html, which inherits from base.html). I've written a bunch of functionality into this view, which handles all the uploads.
Since many templates are going to inherit from base.html, and therefore have all that same functionality, do I have to copy-and-paste the hundreds of lines of code from the HomeView into the views that render all the other pages??
There's got to be a better way, right?
How do I make sure that the functionality in a parent base template holds true for all views which call child templates that inherit from this base template?
|
Django: How do I extend the same functionality to many views?
| 0 | 0 | 0 | 159 |
14,638,799 |
2013-02-01T02:31:00.000
| 0 | 0 | 0 | 0 |
python,django,django-templates,django-views,template-inheritance
| 14,645,465 | 5 | false | 1 | 0 |
You can render many templates from just one view by requiring a unique value in each, or by using the request session.
| 3 | 1 | 0 |
I have a parent template that I'm using in many parts of the site, called base.html. This template holds a lot of functional components, such as buttons that trigger different forms (inside modal windows) allowing users to upload different kinds of content, etc. I want users to be able to click these buttons from almost any part of the site (from all the templates that inherit from base.html).
I've written a view that handles the main page of the site, HomeView (it renders homepage.html, which inherits from base.html). I've written a bunch of functionality into this view, which handles all the uploads.
Since many templates are going to inherit from base.html, and therefore have all that same functionality, do I have to copy-and-paste the hundreds of lines of code from the HomeView into the views that render all the other pages??
There's got to be a better way, right?
How do I make sure that the functionality in a parent base template holds true for all views which call child templates that inherit from this base template?
|
Django: How do I extend the same functionality to many views?
| 0 | 0 | 0 | 159 |
14,638,799 |
2013-02-01T02:31:00.000
| 0 | 0 | 0 | 0 |
python,django,django-templates,django-views,template-inheritance
| 14,646,308 | 5 | false | 1 | 0 |
Load the functionality part with ajax in your base.html.
That way you have a view method that deals exclusively with that functionality.
| 3 | 1 | 0 |
I have a parent template that I'm using in many parts of the site, called base.html. This template holds a lot of functional components, such as buttons that trigger different forms (inside modal windows) allowing users to upload different kinds of content, etc. I want users to be able to click these buttons from almost any part of the site (from all the templates that inherit from base.html).
I've written a view that handles the main page of the site, HomeView (it renders homepage.html, which inherits from base.html). I've written a bunch of functionality into this view, which handles all the uploads.
Since many templates are going to inherit from base.html, and therefore have all that same functionality, do I have to copy-and-paste the hundreds of lines of code from the HomeView into the views that render all the other pages??
There's got to be a better way, right?
How do I make sure that the functionality in a parent base template holds true for all views which call child templates that inherit from this base template?
|
Django: How do I extend the same functionality to many views?
| 0 | 0 | 0 | 159 |
14,639,338 |
2013-02-01T03:40:00.000
| 0 | 0 | 0 | 1 |
python,linux,macos,window,cross-platform
| 14,640,571 | 1 | false | 0 | 0 |
Not on a cross-platform basis. While windows do have IDs on both Linux and Mac OS, the meaning of the IDs is quite different, as is what you can do with them. There's basically nothing in common between the two.
And no, you cannot get those IDs when you launch an application, as the window(s) aren't created until later.
| 1 | 0 | 0 |
Is there any way to query window id by window name from python? Something that would work cross-platform perhaps (linux / mac)?
Or even better catch that id when starting a new window directly from os.sys ?
|
Query window id from python in linux and mac
| 0 | 0 | 0 | 1,222 |
14,639,480 |
2013-02-01T03:56:00.000
| 5 | 0 | 1 | 0 |
python,iterator,generator
| 14,640,048 | 2 | false | 0 | 0 |
A general practice that I've been taught to follow is to return the same data type for all valid values within the input domain, if you can. It makes it easier for others to use your code, and your documentation will be cleaner. For values that are outside of the valid input domain, raise Exceptions.
An empty iterator, rather than None, seems to be a better practice in this case. I know other programming languages like to return null in these instances, but I don't see a benefit to doing that in the scenario you described.
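A small illustration of why: the caller never needs a None check, the loop just runs zero times:

```python
def find_matches(items, pred):
    if items is None:
        return iter(())          # empty iterator instead of None
    return (x for x in items if pred(x))

for m in find_matches(None, str.isdigit):
    print(m)                     # body never executes; no TypeError either
```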
| 1 | 5 | 0 |
Opinions seem to be mixed on this -- is there a Pythonic "right way" to do this?
|
Is it bad Python style to return empty iterators rather than None?
| 0.462117 | 0 | 0 | 3,651 |
14,639,538 |
2013-02-01T04:04:00.000
| 0 | 0 | 1 | 0 |
python,templates,data-structures,stream
| 14,645,959 | 1 | true | 0 | 0 |
I'd declare ctypes.Structure subclasses in Python as templates, and then use the regular ctypes functions to marshal and unmarshal between byte arrays and structs.
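A minimal sketch of that approach (field names and types are hypothetical; mirror the real C++ struct, and mind compiler padding via _pack_):

```python
import ctypes

class Record(ctypes.Structure):
    _fields_ = [('id', ctypes.c_int32),
                ('value', ctypes.c_double)]

size = ctypes.sizeof(Record)
with open('data.bin', 'rb') as f:
    chunk = f.read(size)
    while len(chunk) == size:              # read fixed-size records in a loop
        rec = Record.from_buffer_copy(chunk)
        print(rec.id, rec.value)
        chunk = f.read(size)
```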
| 1 | 0 | 0 |
I have a data file containing written data (which forms the data of a C++ structure from an application). Now I am using Python to read this data file. What is the best way to provide this structure as an input template, and then have logic to read based on this template?
The idea is that if the structure in my C++ application changes, then to maintain compatibility I would only like to change the template, and no other change should be needed for the data read.
|
Unmarshal data stream according to rule file in python
| 1.2 | 0 | 0 | 125 |
14,642,443 |
2013-02-01T08:40:00.000
| 0 | 0 | 0 | 0 |
python,audio,random,microphone
| 40,968,039 | 2 | false | 1 | 0 |
You can use the speech_recognition module or the PyAudio module for recording speech. I used the PyAudio module with Microsoft's cognitive services and it worked fine for me.
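A minimal PyAudio sketch that reads the microphone as raw samples rather than recording to disk (parameters are typical defaults; adjust to your device):

```python
import numpy as np
import pyaudio

CHUNK, RATE = 1024, 44100
pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)
for _ in range(100):                          # roughly 2.3 s of audio
    raw = stream.read(CHUNK)
    samples = np.frombuffer(raw, dtype=np.int16)
    print(samples[:4])                        # feed the noise to your RNG
stream.stop_stream()
stream.close()
pa.terminate()
```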
| 1 | 6 | 0 |
I am trying to make python grab data from my microphone, as I want to make a random generator which will use noise from it.
So basically I don't want to record the sounds, but rather read them in as data, in real time.
I know that Labview can do this, but I dislike that framework and am trying to get better at python.
Any help/tips?
|
Python read microphone
| 0 | 0 | 0 | 19,017 |
14,643,282 |
2013-02-01T09:34:00.000
| 1 | 1 | 1 | 0 |
python,postgresql,public-key-encryption,gnupg,pgcrypto
| 14,660,589 | 1 | false | 0 | 0 |
Have a look at PyCrypto; it doesn't seem to use forking. pgcrypto can be configured to fit most crypto configurations.
| 1 | 4 | 0 |
For a project I am working on I would like to use pgcrypto-compatible encryption in Python, specifically the public-key encryption part.
The problem I have is that most (all) of the implementations make use of subprocess-like approaches to fork gpg; as I have to encrypt a lot of data (50,000+ entries per session), this approach will not work for me.
Can someone give me some pointers on how this could be achieved?
|
How to encrypt in a pgcrypto compatible way in python
| 0.197375 | 0 | 0 | 690 |
14,643,579 |
2013-02-01T09:52:00.000
| 0 | 0 | 0 | 0 |
python,opengl,pygame
| 14,807,386 | 2 | false | 0 | 1 |
I'm going to use pymunk, the Python port of Chipmunk.
I did a silly experiment with it a little over a year ago when I first started programming. It was pretty easy; I had just totally forgotten about it.
I couldn't get pybox2d to work in any Python version.
| 1 | 0 | 0 |
I'm writing a game with two processes. One for rendering with OpenGL. The other is for Collision detection. This is so I can use more than a single core.
However I can't use any pygame surfaces without the display open. So I can't use bitmasks to do pixel perfect collision or any other collision for that matter.
I've tried to simply open another window just to see if I can the Surfaces to work but I can't open a second pygame window without getting an OpenGL function error.
You can open two non-OpenGL windows with pygame in two separate processes but I'm using OpenGL.
I figured there might be somewhere I can insert a pointer to the display to get the surfaces to stop saying Dead Display. Some kind of SDL variable I can manipulate in the second process to say "its not Dead its here". Or some other way to use the pixel perfect collision.
I'm open to pixel perfect alternatives that don't use pygame.
|
Using pygames Collision module without initializing display
| 0 | 0 | 0 | 141 |
14,644,986 |
2013-02-01T11:08:00.000
| 2 | 1 | 1 | 0 |
python,py2exe
| 14,645,057 | 2 | false | 0 | 0 |
In short: nothing. Any executable can always be reverse-engineered.
More in detail: do you really think your code is so valuable that people would spend months doing that?
Also keep in mind that if you import any module released under the GPL, you would be doing something illegal by not releasing your code under the GPL as well.
| 1 | 1 | 0 |
My project requires that my python files be converted with py2exe. Fair enough, my py2exe is working. Assume my binary is called "test.exe". I know that my test.exe contains all the pyc files of my python files. What I want to do is protect my test.exe so that my source is not seen; in other words I don't want it to be decompiled. What can I do for this?
|
protect binary generated from py2exe python
| 0.197375 | 0 | 0 | 930 |
14,647,006 |
2013-02-01T13:07:00.000
| 3 | 1 | 0 | 0 |
python,dataset,statistics,python-module,spss
| 14,669,613 | 9 | false | 0 | 0 |
The benefit of using the IBM libraries is that they get this rather complex binary file format right. They are free, they relieve you of the burden of writing code for this format, and the license permits you to redistribute them. What more could you ask?
| 1 | 39 | 0 |
Is there a module for Python to open IBM SPSS (i.e. .sav) files? It would be great if there's something up-to-date which doesn't require any additional dll files/libraries.
|
Is there a Python module to open SPSS files?
| 0.066568 | 0 | 0 | 47,105 |
14,647,317 |
2013-02-01T13:26:00.000
| 2 | 0 | 1 | 0 |
python,python-2.7,virtualenv,pycharm,pythonpath
| 14,661,799 | 3 | true | 0 | 0 |
Even when a package is not using setuptools pip monkeypatches setup.py to force it to use setuptools.
Maybe you can remove that PYTHONPATH hack and pip install -e /path/to/package.
| 1 | 6 | 0 |
Yesterday, I edited the bin/activate script of my virtualenv so that it sets the PYTHONPATH environment variable to include a development version of some external package. I had to do this because the setup.py of the package uses distutils and does not support the develop command à la setuptools. Setting PYTHONPATH works fine as far as using the Python interpreter in the terminal is concerned.
However, just now I opened the project settings in PyCharm and discovered that PyCharm is unaware of the external package in question - PyCharm lists neither the external package nor its path. Naturally, that's because PyCharm does not (and cannot reliably) parse or source the bin/activate script. I could manually add the path in the PyCharm project settings, but that means I have to repeat myself (once in bin/activate, and again in the PyCharm project settings). That's not DRY and that's bad.
Creating, in site-packages, a symlink that points to the external package is almost perfect. This way, at least the source editor of PyCharm can find the package and so does the Python interpreter in the terminal. However, somehow PyCharm still does not list the package in the project settings and I'm not sure if it's ok to leave it like that.
So how can I add the external package to my virtualenv/project in such a way that…
I don't have to repeat myself; and…
both the Python interpreter and PyCharm would be aware of it?
|
PYTHONPATH vs symbolic link
| 1.2 | 0 | 0 | 2,662 |
14,650,989 |
2013-02-01T16:41:00.000
| 0 | 0 | 0 | 0 |
python,django,amazon-ec2,python-django-storages
| 15,756,378 | 1 | false | 1 | 0 |
If you're using virtualenv I find you don't need to add in sudo. So just try doing a pip install django-storages?
| 1 | 0 | 0 |
I'm working on deploying my first django app to an EC2 server. I'm serving my static files from an S3 server, so I'm using the django-storages app.
I installed it using sudo pip install django-storages on the EC2 server. However, I keep getting the error "no module found" when I try to import it. Yet, when I run pip freeze django-storages shows up as installed.
I followed the exact same procedure on my development machine and everything works perfectly. Any ideas?
I should also mention that the EC2 server is running the bitnami ubunutu 64 bit django stack.
|
Django-Storages not being installed properly
| 0 | 0 | 0 | 173 |
14,651,973 |
2013-02-01T17:40:00.000
| 0 | 1 | 0 | 0 |
python,pdf,selenium,webdriver,selenium-webdriver
| 14,760,698 | 1 | true | 1 | 0 |
We ultimately accomplished this by clearing firefox's temporary internet files before the test, then looking for the most recently created file after the report was generated.
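A sketch of the diff-the-directory part of that approach (the temp path is per-machine and hypothetical):

```python
import glob
import os

temp = r'C:\Documents and Settings\me\Local Settings\Temp'  # hypothetical
before = set(glob.glob(os.path.join(temp, '*')))
# ... click the report link with webdriver here ...
created = set(glob.glob(os.path.join(temp, '*'))) - before
if created:
    report = max(created, key=os.path.getctime)   # newest of the new files
```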
| 1 | 2 | 0 |
We test an application developed in house using a python test suite which accomplishes web navigations/interactions through Selenium WebDriver. A tricky part of our web testing is in dealing with a series of pdf reports in the app. We are testing a planned upgrade of Firefox from v3.6 to v16.0.1, and it turns out that the way we captured reports before no longer works, because of changes in the directory structure of firefox's temp folder. I didn't write the original pdf capturing code, but I will refactor it for whatever we end up using with v16.0.1, so I was wondering if there' s a better way to save a pdf using Python's selenium webdriver bindings than what we're currently doing.
Previously, for Firefox v3.6, after clicking a link that generates a report, we would scan the "C:\Documents and Settings\\Local Settings\Temp\plugtmp" directory for a pdf file (with a specific name convention) to be generated. To be clear, we're not saving the report from the webpage itself, we're just using the one generated in firefox's Temp folder.
In Firefox 16.0.1, after clicking a link that generates a report, the file is generated in "C:\Documents and Settings\ \Local Settings\Temp\tmp*\cache*", with a random file name, not ending in ".pdf". This makes capturing this file somewhat more difficult, if using a technique similar to our previous one - each browser has a different tmp*** folder, which has a cache full of folders, inside of which the report is generated with a random file name.
The easiest solution I can see would be to directly save the pdf, but I haven't found a way to do that yet.
To use the same approach as we used in FF3.6 (finding the pdf in the Temp folder directory), I'm thinking we'll need to do the following:
Figure out which tmp*** folder belongs to this particular browser instance (which we can do by inspecting the tmp*** folders that exist before and after the browser is instantiated)
Look inside that browser's cache for a file generated immediately after the pdf report was generated (which we can do by comparing timestamps)
In cases where multiple files are generated in the cache, we could possibly sort based on size, and take the largest file, since the pdf will almost certainly be the largest temp file (although this seems flaky and will need to be tested in practice).
I'm not feeling great about this approach, and was wondering if there's a better way to capture pdf files. Can anyone suggest a better approach?
Note: the actual scraping of the PDF file is still working fine.
|
Capturing PDF files using Python Selenium Webdriver
| 1.2 | 0 | 1 | 1,728 |
14,651,989 |
2013-02-01T17:41:00.000
| 1 | 0 | 0 | 0 |
python,input
| 14,652,103 | 2 | true | 0 | 1 |
There are a few ways to do this, and they are all different.
If your game is a terminal application using curses, you would catch the q when you call getch(), and then raise SystemExit or simply break out of your while loop that many curses applications use.
Using tkinter or another GUI library, you would bind a key press event to your Frame widget that holds the tic-tac-toe board.
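For the curses variant, a minimal non-blocking loop might look like this:

```python
import curses

def main(stdscr):
    stdscr.nodelay(True)            # getch() returns -1 instead of blocking
    while True:
        if stdscr.getch() == ord('q'):
            break                   # quit at any point in the game
        # ... update and draw the tic-tac-toe board here ...

curses.wrapper(main)
```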
| 1 | 1 | 0 |
Is there a way to program a function that takes user input without requesting it? For example, during a game of tic-tac-toe, the user could press "Q" at any time and the program would close?
|
Python 3.2 Passive User Input
| 1.2 | 0 | 0 | 362 |
14,653,860 |
2013-02-01T19:46:00.000
| 0 | 0 | 0 | 0 |
python,qt,qt4
| 14,654,391 | 1 | true | 0 | 1 |
This behaves fine with Qt 4.8.5, and as far as I can tell from the source it should work with versions as old as Qt 4.5.
Try upgrading Qt to a reasonably recent version, or at least try your code on a more modern version.
| 1 | 0 | 0 |
I am using QT4 (4.2.1) with python 2.4 on CentOS.
I assigned QActions with shortcuts to my menu and disable/enable them accordingly. I have event handlers assigned to the triggered event for the actions. Everything works as expected except that the shortcuts trigger the events for disabled actions. For example, I have a Delete QAction with a Del shortcut. I see the disabled Delete menu option, but if I hit the Del key my triggered event handler gets called. This is kind of odd...
Is this by design or I am doing something wrong?
As a workaround I am now checking QAction isEnabled() in each action event handler but is there a way to not get triggered events for disabled actions?
Thank you very much for your help,
Leo
|
Shortcuts trigger events for disabled QActions
| 1.2 | 0 | 0 | 199 |
14,655,681 |
2013-02-01T21:55:00.000
| 0 | 0 | 0 | 0 |
c++,python,algorithm,random,montecarlo
| 14,655,846 | 3 | true | 0 | 0 |
Acceptance/Rejection:
Find a function that is always higher than the pdf. Generate two random variates: scale the first one to propose a value, and use the second to decide whether to accept or reject that proposal. Rinse and repeat until you accept a value.
Sorry I can't be more specific, but I haven't done it for a while..
Its a standard algorithm, but I'd personally implement it from scratch, so I'm not aware of any implementations.
| 3 | 0 | 1 |
I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution.
I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF to get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow.
So I guess my question has two parts:
Is there a method/algorithm to quickly generate finite, discrete random variates without using the CDF?
Is there a Python module and/or a C++ library which already has this functionality?
|
Is there a way to generate a random variate from a non-standard distribution without computing CDF?
| 1.2 | 0 | 0 | 374 |
14,655,681 |
2013-02-01T21:55:00.000
| 0 | 0 | 0 | 0 |
c++,python,algorithm,random,montecarlo
| 14,657,373 | 3 | false | 0 | 0 |
Indeed, acceptance/rejection is the way to go if you know your pdf analytically. Let's call it f(x). Find a pdf g(x) such that there exists a constant c with c·g(x) > f(x) everywhere, and such that you know how to simulate a variable with pdf g(x). For example, as you work with a distribution with finite support, a uniform will do: g(x) = 1/(size of your domain) over the domain.
Then draw a pair (G, U) such that G is simulated with pdf g(x), and U is uniform on [0, c·g(G)]. If U < f(G), accept G as your variable; otherwise draw again. The G you finally accept will have f as its pdf.
Note that the constant c determines the efficiency of the method. The smaller c is, the more efficient you will be; basically you will need on average c drawings to get one accepted variable. Pick a function g that is simple enough (don't forget you need to draw variables using g as a pdf) but with the smallest possible c.
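A minimal sketch for a finite discrete pmf with a uniform proposal (so c·g(x) reduces to max(f)); since the parameters change constantly, there is nothing to precompute, you just pass the current pmf on each call:

```python
import random

def sample(pmf):
    """Draw one variate from a finite discrete pmf (a list summing to 1)."""
    n, fmax = len(pmf), max(pmf)
    while True:
        x = random.randrange(n)               # proposal: uniform over support
        if random.uniform(0, fmax) < pmf[x]:  # accept with prob f(x) / fmax
            return x
```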
| 3 | 0 | 1 |
I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution.
I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF to get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow.
So I guess my question has two parts:
Is there a method/algorithm to quickly generate finite, discrete random variates without using the CDF?
Is there a Python module and/or a C++ library which already has this functionality?
|
Is there a way to generate a random variate from a non-standard distribution without computing CDF?
| 0 | 0 | 0 | 374 |
14,655,681 |
2013-02-01T21:55:00.000
| 0 | 0 | 0 | 0 |
c++,python,algorithm,random,montecarlo
| 18,890,513 | 3 | false | 0 | 0 |
If acceptance/rejection is also too inefficient, you could try a Markov chain Monte Carlo (MCMC) method. These generate a sequence of samples, each dependent on the previous one, so by skipping blocks of them you can subsample a more or less independent set. They only need the PDF, or even just a multiple of it. They usually work with fixed distributions, but can also be adapted to slowly changing ones.
| 3 | 0 | 1 |
I'm trying to write a Monte Carlo simulation. In my simulation I need to generate many random variates from a discrete probability distribution.
I do have a closed-form solution for the distribution and it has finite support; however, it is not a standard distribution. I am aware that I could draw a uniform[0,1) random variate and compare it to the CDF to get a random variate from my distribution, but the parameters in the distributions are always changing. Using this method is too slow.
So I guess my question has two parts:
Is there a method/algorithm to quickly generate finite, discrete random variates without using the CDF?
Is there a Python module and/or a C++ library which already has this functionality?
|
Is there a way to generate a random variate from a non-standard distribution without computing CDF?
| 0 | 0 | 0 | 374 |
14,655,765 |
2013-02-01T22:02:00.000
| 0 | 1 | 0 | 0 |
php,python,web
| 14,655,886 | 2 | false | 0 | 0 |
It's not only PHP that has to be considered for large file uploads; your web server also needs to support them, at least in nginx's case via its configuration. I don't know how httpd handles that, but as you said, splitting into chunks is a viable solution. FTP is another option.
| 1 | 1 | 0 |
Hi, I wanted to know if uploading large files like videos (over 200 MB - 1 GB) from PHP is a good option after setting up the server configuration like max_post_size, execution time, etc. The reason I ask this question is because I read somewhere that when a large file is uploaded, best practice is to break that file into chunks and upload it (I think YouTube does that). Do I need to use another language like Python or C++ for uploading large files, or is PHP enough? If I need to use another language, can anyone please help me with reading material for that.
Thank you.
|
Is php good for large file uploads such as videos
| 0 | 0 | 0 | 978 |
14,660,277 |
2013-02-02T08:58:00.000
| 0 | 0 | 1 | 0 |
python
| 14,660,409 | 2 | false | 0 | 0 |
Often, packages are split into those parts for use and those parts that are required for development. You probably have those parts for use installed (interpreter, modules), but the libraries for development of modules are missing. Look for a python-dev package.
| 1 | 1 | 0 |
I'm building a package with Python 2.7.3 and gcc 4.7.2. The build is scons-based, and it terminates complaining that it can't find the 'C library for python2.7'.
What is this C library and how do I build it?
|
How to build a C library for python
| 0 | 0 | 0 | 109 |
14,661,070 |
2013-02-02T10:47:00.000
| 1 | 0 | 1 | 1 |
emacs,ipython
| 14,666,166 | 2 | true | 0 | 0 |
In emacs you can use python-mode, and from there send the code to the *REPL* buffer with C-c C-c.
When you send the buffer for the first time, it asks you which executable to use for python, so you can use ipython or another one.
| 2 | 0 | 0 |
The environment is Emacs 24.1.1 on Ubuntu, using IPython for python programming.
The auto indent works well when running the ipython command in a shell directly, but when I run ipython inside emacs there is no auto indent any more. Even worse, when I type TAB it prompts the completion buffer. I have searched this issue many times but still have not found a practical method; as a result I have to enter spaces manually.
Could anyone help to resolve these issues?
1. auto indent in the emacs ipython shell
2. disable completion in the emacs ipython shell separately, keeping Tab-completion working when I am not in the ipython interactive shell
|
The tab indent in emacs ipython shell
| 1.2 | 0 | 0 | 592 |
14,661,070 |
2013-02-02T10:47:00.000
| 0 | 0 | 1 | 1 |
emacs,ipython
| 14,671,433 | 2 | false | 0 | 0 |
Any invocation of ipython-shell should do a correct setup, so please file a bug report.
If you are running python-mode.el (the modeline shows "Py"), please check out the current trunk first. When bazaar is available:
bzr branch lp:python-mode
| 2 | 0 | 0 |
The environment is Emacs 24.1.1 on Ubuntu, using IPython for python programming.
The auto indent works well when running the ipython command in a shell directly, but when I run ipython inside emacs there is no auto indent any more. Even worse, when I type TAB it prompts the completion buffer. I have searched this issue many times but still have not found a practical method; as a result I have to enter spaces manually.
Could anyone help to resolve these issues?
1. auto indent in the emacs ipython shell
2. disable completion in the emacs ipython shell separately, keeping Tab-completion working when I am not in the ipython interactive shell
|
The tab indent in emacs ipython shell
| 0 | 0 | 0 | 592 |
14,664,554 |
2013-02-02T17:36:00.000
| 3 | 0 | 1 | 0 |
python,virtualenv,pip,easy-install,distribute
| 14,664,665 | 1 | true | 0 | 0 |
Unless you are doing something really unusual in your setup.py, it will work fine with virtualenv and pip. This is because setup.py is quite declarative in the first place (despite being an executable program), and virtualenv and pip were designed to work with existing packages which were written before they existed.
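For reference, a minimal setuptools-based setup.py of that shape (names hypothetical):

```python
from setuptools import setup

setup(name='mypackage',
      version='0.1',
      packages=['mypackage'],
      scripts=['bin/mytool'])     # installed into the virtualenv's bin/
```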
| 1 | 2 | 0 |
How can a Python package, written with the usual setup.py setup of distutils (containing a scripts setting for scripts meant to be executed) be made compatible with virtualenv and pip? Is there a guide for Do's and Don'ts or issues to keep in mind when writing a package in a way so as to be compatible with virtual environments? Or is any setuptools package guaranteed to work with virtualenv? thanks.
|
Making a Python package compatible with virtualenv
| 1.2 | 0 | 0 | 70 |
14,665,379 |
2013-02-02T19:03:00.000
| 0 | 0 | 0 | 0 |
python,numpy
| 14,665,480 | 2 | false | 0 | 0 |
numpy.loadtxt() is the function you are looking for. This returns a two-dimensional array.
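For two-column data like the sample shown in the question below:

```python
import numpy as np

data = np.loadtxt('data.txt')        # shape (4, 2) for the sample shown
t, y = data[:, 0], data[:, 1]        # split the two columns
```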
| 1 | 0 | 1 |
I need some help taking data from a .txt file and putting it into an array. I have a very rudimentary understanding of Python, and I have read through the documentation cited in threads relevant to my problem, but after hours of attempting to do this I still have not been able to get anywhere. The data in my file looks like this:
0.000000000000000000e+00 7.335686114232199684e-02
1.999999999999999909e-07 7.571960558042964973e-01
3.999999999999999819e-07 9.909475704320810374e-01
5.999999999999999728e-07 3.412754086075696081e-01
I used numpy.genfromtxt, but got the following output: array(nan)
Could you tell me what the proper way to do this is?
|
Taking data from a text file and putting it into a numpy array
| 0 | 0 | 0 | 132 |
14,667,144 |
2013-02-02T22:20:00.000
| 0 | 0 | 1 | 0 |
python,ipython,ipython-notebook
| 16,352,660 | 1 | false | 0 | 0 |
The latest development version of IPython now supports raw_input in the notebook (since the beginning of May 2013, for future readers).
| 1 | 2 | 0 |
I'm running a Python program that wants to accept raw_input, which the IPython notebook does not support (a known limitation).
What is a recommended way to achieve the functionality (a workaround)? What I'd like is to be able to run the program, accept input and respond (the choices will be determined based on information retrieved), and also to prompt for user id and password info.
Of course I'd like to do as little violence to the existing code as possible.
I found IPython.utils.io.raw_input_ext(prompt='', ps2='... ') in the Ipython docs but it calls raw_input and gets the same not implemented error
|
Ipython raw_input work around?
| 0 | 0 | 0 | 1,885 |
14,667,208 |
2013-02-02T22:30:00.000
| 1 | 0 | 1 | 0 |
python,twain
| 17,285,395 | 1 | true | 0 | 0 |
TWAIN doesn't let you read single lines from the device; it's a much higher-level API than that. Even when TWAIN is transferring buffers of image data, the driver will assume it has to apply various corrections to the data, such as correcting for non-uniform lighting across the bar, which you would have to model and undo.
I think you'd be much better off searching for a USB interface to a CCD, or some kind of capture card that lets you talk more directly to your sensor. Lots of enthusiasts out there doing things along those lines.
| 1 | 0 | 0 |
I'd like to build a spectrophotometer using the CCD of a desktop scanner as the detector.
TWAIN should allow me to do that via the existing USB interface of the scanner (i.e. removing the CCD from the scanner unit and just using it without the scanning hardware).
Are any of the existing python twain packages fine-grained enough to repeatedly access the single-line output of a desktop scanner CCD?
|
python & twain : access scanner CCD with TWAIN on line-by-line basis
| 1.2 | 0 | 0 | 634 |
14,667,218 |
2013-02-02T22:31:00.000
| 0 | 0 | 1 | 0 |
python,string
| 14,667,497 | 5 | false | 0 | 0 |
You don't want to assign each letter to a separate variable. Then you'd be writing the rest of your program without even being able to know how many variables you have defined! That's an even worse problem than dealing with the whole string at once.
What you instead want to do is have just one variable holding the string, but you can refer to individual characters in it with indexing. Say the string is in s, then s[0] is the first character, s[1] is the second character, etc. And you can find out how far up the numbers go by checking len(s) - 1 (because the indexes start at 0, a length 1 string has maximum index 0, a length 2 string has maximum index 1, etc).
That's much more manageable than figuring out how to generate len(s) variable names, assign them all to a piece of the string, and then know which variables you need to reference.
Strings are immutable though, so you can't assign to s[1] to change the 2nd character. If you need to do that you can instead create a list with e.g. l = list(s). Then l[1] is the second character, and you can assign l[1] = something to change the element in the list. Then when you're done you can get a new string out with s_new = ''.join(l) (join builds a string by joining together a sequence of strings passed as its argument, using the string it was invoked on to the left as a separator between each of the elements in the sequence; in this case we're joining a list of single-character strings using the empty string as a separator, so we just get all the single-character strings joined into a single string).
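Putting that together (Python 2 raw_input; use input() on Python 3):

```python
word = raw_input('Enter a word: ')
print(len(word))                 # how many letters
for i, ch in enumerate(word):
    print(i, ch)                 # position and letter

letters = list(word)             # mutable copy
letters[1] = 'X'                 # strings themselves can't be assigned into
new_word = ''.join(letters)
```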
| 1 | 0 | 0 |
I need to make a program in which the user inputs a word, and I need to do something to each individual letter in that word. They cannot enter it one letter at a time, just one word.
I.e., someone enters "test": how can I make my program know that it is a four-letter word, and how do I break it up, like making my program create four variables, each set to a different letter? It should also be able to work with bigger and smaller words.
Could I use a for statement? Something like "for each letter, set that letter to a variable"? But what if it was, say, a 20-character word? How would the program get all the variable names and such?
|
Python, breaking up Strings
| 0 | 0 | 0 | 3,180 |
14,667,578 |
2013-02-02T23:15:00.000
| 1 | 0 | 1 | 0 |
python,list,data-structures,set,append
| 14,667,595 | 5 | false | 0 | 0 |
You could probably use a set object instead. Just add numbers to the set. They inherently do not replicate.
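Both options in brief: the `in` check is a linear scan, while a set gives constant-time membership:

```python
numbers = []
n = 5
if n not in numbers:    # O(len(numbers)) scan
    numbers.append(n)

unique = set()
unique.add(n)           # duplicates are ignored automatically
```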
| 1 | 37 | 0 |
I am writing a python program where I will be appending numbers into a list, but I don't want the numbers in the list to repeat. So how do I check if a number is already in the list before I do list.append()?
|
check if a number already exist in a list in python
| 0.039979 | 0 | 0 | 180,790 |
14,668,994 |
2013-02-03T02:47:00.000
| 0 | 0 | 1 | 0 |
python
| 14,669,038 | 2 | false | 0 | 0 |
As Hyperboreus says, there is no x++ operator in Python. I think it's interesting to guess at why: I think it's that Python makes a point of assignment not being an expression, and users of x++ experienced in other languages might expect the result of this expression to be the un-incremented value of x. If assignment isn't an expression with a value, then there is no difference between x++ and ++x. I think having one of these but not the other would be confusing, but having them both do the same thing would be redundant.
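The idiom to reach for instead is augmented assignment:

```python
x = 0
x += 1      # Python's equivalent of x++; likewise x -= 1, x *= 2, ...
```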
| 1 | 0 | 0 |
I know in Java, if you want to add one to a variable, you can use x++. Is there anything similar to this in Python?
|
Python - how to shorten operations
| 0 | 0 | 0 | 270 |
14,669,283 |
2013-02-03T03:42:00.000
| 0 | 0 | 0 | 0 |
python,html,parsing,beautifulsoup
| 55,544,880 | 2 | false | 1 | 0 |
It can be due to an invalid character (from charset encoding/decoding), and therefore BeautifulSoup has trouble parsing the input.
I solved it by passing my string directly to BeautifulSoup without doing any encoding/decoding myself.
In my case, I had been trying to convert UTF-16 to UTF-8 myself.
| 1 | 7 | 0 |
While processing html using BeautifulSoup, the < and > were converted to &lt; and &gt;. Since the tag anchors were all converted, the whole soup lost its structure. Any suggestions?
|
< and > changed to &lt; and &gt; while parsing html with beautifulsoup in python
| 0 | 0 | 1 | 8,456 |
14,669,819 |
2013-02-03T05:38:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine
| 14,670,069 | 2 | false | 1 | 0 |
Yes and no.
Appengine is great in terms of reliability, server speed, features, etc. However, it has two main drawbacks: You are in a sandboxed environment (no filesystem access, must use datastore), and you are paying by instance hour. Normally, if you're just hosting a small server accessed once in a while, you can get free hosting; if you are running a cron job all day every day, you must use at least one instance at all times, thus costing you money.
Your concerns about speed and propagation on google's servers are moot; they have a global time server pulsating through their datacenters ensuring your operations are atomic; if you request data with consistency=STRONG, so long as your get begins after the put, you will see the updated data.
| 1 | 0 | 0 |
I have a python script that creates a few text files, which are then uploaded to my current web host. This is done every 5 minutes. The text files are used in a software program which fetches the latest version every 5 min. Right now I have it running on my web host, but I'd like to move to GAE to improve reliability. (Also because my current web host does not allow for just plain file hosting, per their TOS.)
Is google app engine right for me? I have some experience with python, but none related to web technologies. I went through the basic hello world tutorial and it seems pretty straightforward for a website, but I don't know how I would implement my project. I also worry about any caching which could cause the latest files not to propagate fast enough across google's servers.
|
Is google app engine right for me (hosting a few rapidly updating text files created w/ python)
| 0 | 0 | 0 | 108 |
14,670,990 |
2013-02-03T09:12:00.000
| 0 | 0 | 0 | 0 |
python,django,linux,unix,sorting
| 14,671,017 | 1 | false | 1 | 0 |
It is highly unlikely that sort would, of its own volition, change line endings from Unix to Windows. It is more likely that A.csv already contains Windows line endings, and sort merely preserves them. If it is your script that's creating A.csv in the first place, double-check the newline convention that's being used.
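A quick way to check the newline convention of the input, sidestepping any text-mode translation:

```python
with open('A.csv', 'rb') as f:
    print(repr(f.read(200)))    # look for '\r\n' (Windows) vs '\n' (Unix)
```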
| 1 | 0 | 0 |
I am messing around with Django. I have a custom admin script in one of my apps (inside the management/commands folder) that has a subprocess.call() line. I am doing a 'sort A.csv -o A_sorted.csv' call. The sorted file that gets written is full of '^M' at the end of every line. I find this doesn't happen when running the sort command from the command line or calling the same command through subprocess.call() from within a normal python script not running in Django.
Any ideas on why this is happening and what I can do to keep this from happening?
Thanks.
|
subprocess.call() of 'sort' command within Django script is adding ^M to the end of my files
| 0 | 0 | 0 | 129 |
14,672,039 |
2013-02-03T11:36:00.000
| 1 | 0 | 0 | 0 |
python,keypress,joystick,analog-digital-converter
| 14,709,657 | 1 | true | 0 | 1 |
Use PostMessage with the WM_CHAR command (if you're using Windows - you didn't say).
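A rough Windows-only sketch with ctypes (the window title is hypothetical; WM_CHAR targets one specific window rather than the whole system):

```python
import ctypes

WM_CHAR = 0x0102
user32 = ctypes.windll.user32
hwnd = user32.FindWindowW(None, u'My Game Window')   # hypothetical title

def on_joystick(value):
    if value != 0:
        key = 'w' if value > 0 else 's'
        user32.PostMessageW(hwnd, WM_CHAR, ord(key), 0)
```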
| 1 | 0 | 0 |
In python I want to simulate a joystick that, when used, gives values between -63 and +63, let's say. When the value is positive I want to press the "w" key, and the "s" key when it's negative.
I am not having problems receiving the values, but I am with transforming these analog values into digital key presses. Does anyone have any idea how to do it (code can be in any language, I just need a general idea)?
|
Transforming analog keypresses to digital
| 1.2 | 0 | 0 | 118 |
14,672,753 |
2013-02-03T13:02:00.000
| 3 | 0 | 0 | 0 |
python,flask
| 14,673,087 | 2 | false | 1 | 0 |
For requests that take a long time, you might want to consider starting a background job for them.
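Separately, if the issue is just that Flask's built-in development server handles one request at a time, it can be told to use threads (a stopgap for development, not a production setup; `app` here is the Flask application from the question):

```python
if __name__ == '__main__':
    app.run(threaded=True)    # dev server handles each request in its own thread
```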
| 1 | 65 | 0 |
My Flask application has to do quite a large calculation to fetch a certain page. While Flask is running that function, another user cannot access the website, because Flask is busy with the large calculation.
Is there any way that I can make my Flask application accept requests from multiple users?
|
Handling multiple requests in Flask
| 0.291313 | 0 | 1 | 72,055 |
14,673,642 |
2013-02-03T14:41:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,app-engine-ndb,bigtable
| 14,713,169 | 3 | false | 1 | 0 |
This is indeed a frustrating issue. I've been doing some work in this area lately to get some general count stats - basically, the number of entities that satisfy some query. count() is a great idea, but it is hobbled by the datastore RPC timeout.
It would be nice if count() supported cursors somehow so that you could cursor across the result set and simply add up the resulting integers rather than returning a large list of keys only to throw them away. With cursors, you could continue across all 1-minute / 10-minute boundaries, using the "pass the baton" deferred approach. With count() (as opposed to fetch(keys_only=True)) you can greatly reduce the waste and hopefully increase the speed of the RPC calls, e.g., it takes a shocking amount of time to count to 1,000,000 using the fetch(keys_only=True) approach - an expensive proposition on backends.
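A paging sketch of that fetch_page(keys_only=True) counting approach, carrying the cursor across batches (query is an ndb.Query):

```python
def count_all(query, page_size=1000):
    total, cursor, more = 0, None, True
    while more:
        if cursor is None:                       # first batch has no cursor
            keys, cursor, more = query.fetch_page(page_size, keys_only=True)
        else:
            keys, cursor, more = query.fetch_page(page_size, keys_only=True,
                                                  start_cursor=cursor)
        total += len(keys)
    return total
```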
Sharded counters are a lot of overhead if you only need/want periodic count statistics (e.g., a daily count of all my accounts in the system by, e.g., country).
| 1 | 4 | 0 |
For 100k+ entities in the google datastore, ndb.query().count() is going to be cancelled by the deadline, even with an index. I've tried the produce_cursors option, but only iter() and fetch_page() return a cursor; count() doesn't.
How can I count large entities?
|
ndb.query.count() failed with 60s query deadline on large entities
| 0.132549 | 1 | 0 | 2,669 |
14,674,487 |
2013-02-03T16:14:00.000
| 0 | 0 | 1 | 0 |
python,configuration
| 15,259,314 | 1 | false | 1 | 0 |
Create a default settings module which contains your desired default settings. Create a second module, intended to be used by the user, with a from default_settings import * statement at the top, instructing the user to write any overrides into this module instead.
Python is rather expressive, so in most cases, if you can expect the user to understand it on any level, you can use a Python module itself as the configuration file.
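The pattern in miniature (module names hypothetical):

```python
# default_settings.py -- every property and its default, in one place:
TIMEOUT = 30
DEBUG = False

# settings.py -- the user-facing module; overrides go below the import:
from default_settings import *
DEBUG = True

# application code reads from settings, falling back to defaults implicitly:
import settings
print(settings.TIMEOUT, settings.DEBUG)   # 30 True
```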
| 1 | 2 | 0 |
What is the most universal and best application configurations management method? I want to have these properties in order to have "good configuration management":
A list of all available properties and their default values in one
place.
A list of properties which can be changed by an app user, also in one
place.
When I retrieve a specific property, it's value is returned from the
2nd list (user changeable configs) or if it's not there, from the
first list.
So far, what I did was hard-code the 1st list as an object (more specifically a dict), write a .conf file used by ConfigParser so an app user can easily change some of the properties (2nd list), and write a public method on the config object to retrieve a property by its name, or raise an exception if it's not there. In the end, one object was responsible for managing all the stuff (parsing the file, raising exceptions, overriding properties, etc.). But I was wondering if there's a built-in library which does more or less the same thing, or an even better way to manage configuration which takes into account the KISS, DRY and other principles (I'm not always successful at that with this method)?
Thanks in advance.
|
How to properly manage application configurations
| 0 | 0 | 0 | 175 |
14,676,686 |
2013-02-03T20:02:00.000
| 0 | 0 | 1 | 0 |
python,cmd,pyinstaller
| 67,405,629 | 1 | false | 0 | 0 |
Try adding -m after python.
Also, from what I have seen and tried, "pyinstaller.py" doesn't work, but it works after removing the ".py".
So perhaps you can try the following:
python -m PyInstaller -F myscript.py
Note: 'PyInstaller' is case sensitive in this invocation; it doesn't work if you just use 'pyinstaller'.
'pyinstaller' does work, however, in the following case: pyinstaller -F myscript.py
| 1 | 2 | 0 |
after trying to build a standalone by entering
python pyinstaller.py -F myscript.py
in the pyinstaller directory I get an error:
error: Requires at least one scriptname file or exactly one .specfile
I have the script in the same directory as the pyinstaller. What might be causing the error?
EDIT: To answer the comments: I run the command from the same directory as pyinstaller. I can access both files.
|
Pyinstaller error- requires at least one scriptname file
| 0 | 0 | 0 | 1,926 |
14,676,954 |
2013-02-03T20:29:00.000
| 1 | 0 | 1 | 0 |
python,interactive
| 16,405,369 | 1 | false | 0 | 0 |
I suggest you give IPython a try. It is an advanced interactive programming environment for Python. Inside IPython you can use the magic command %edit to open an external editor like Notepad++ to type your code, and then execute the code in IPython. Or you can type %run to load your script into IPython. There are a lot of other magic commands to help improve your efficiency.
You can also try IPython's new notebook feature.
| 1 | 0 | 0 |
I use python as follows...
notepad++ is on left of screen.
Black command prompt is on right of screen.
I write my python prog in notepad and save it to dir python as ggg.py ,then using (control S).
I switch to command prompt screen and run it.
Any errors I go back and fix and run again.
Could it be that using the python with the >>> prompt is more efficient.ie How can I duplicate
the above using interactive python(>>>)? Is it easier than my current method?
acorn.
|
develop with notepad and run in python, or use Interactive Python?
| 0.197375 | 0 | 0 | 221 |