Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 or 1) | Python Basics and Environment (int64, 0 or 1) | System Administration and DevOps (int64, 0 or 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 or 1) | GUI and Desktop Applications (int64, 0 or 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 or 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 or 1) | Networking and APIs (int64, 0 or 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
15,443,732 |
2013-03-15T23:31:00.000
| 1 | 1 | 0 | 1 |
python,optimization,nginx,redis,uwsgi
| 45,384,113 | 4 | false | 1 | 0 |
You said nothing about writing this data back; is it static? In this case, the solution is very simple, and I have no clue what is up with all the "it's not feasible" responses.
Uwsgi workers are always-running applications. So data absolutely gets persisted between requests. All you need to do is store stuff in a global variable, that is it. And remember it's per-worker, and workers do restart from time to time, so you need proper loading/invalidation strategies.
If the data is updated very rarely (rarely enough to restart the server when it does), you can save even more. Just create the objects during app construction. This way, they will be created exactly once, and then all the workers will fork off the master and reuse the same data. Of course, it's copy-on-write, so if you update it, you will lose the memory benefits (the same thing will happen if Python decides to compact its memory during a GC run, so it's not super predictable).
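For illustration, a minimal per-worker cache along the lines this answer describes; the Redis client, key name, and TTL policy are placeholders, not part of the original answer:

```python
import json
import time

_CACHE = {}        # module-level: lives for the lifetime of this uwsgi worker
_LOADED_AT = 0.0
_TTL = 300         # refresh at most every 5 minutes (placeholder policy)

def get_global_data(redis_client):
    global _CACHE, _LOADED_AT
    if time.time() - _LOADED_AT > _TTL:
        _CACHE = json.loads(redis_client.get("global_data"))  # hypothetical key
        _LOADED_AT = time.time()
    return _CACHE
```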
| 2 | 8 | 0 |
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question):
I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB.
Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit).
Assuming even 3,000 requests per second (this isn't a live server so I don't have real numbers, but 3,000 rps is a reasonable assumption; it could be more), this means 96MB/s of transfer from Redis or Riak, in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects 3,000 times every second.
All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second.
But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken.
Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above!
So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python would work in terms of starting new processes and memory allocation, would help greatly.
P.S:
Have gone through some of the documentation for nginx, uwsgi etc., but haven't fully understood the ramifications for my use case yet. Hope to make some progress on that going forward.
If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above, in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
|
Persistent in-memory Python object for nginx/uwsgi server
| 0.049958 | 0 | 0 | 4,976 |
15,448,884 |
2013-03-16T11:47:00.000
| 0 | 0 | 1 | 0 |
python,installation,pythonpath
| 15,448,915 | 2 | false | 0 | 0 |
You are correct. The end user should be able to run your exe even if Python is not on their system. Instead, when the end user runs your exe, it should make calls to the .pyc/.pyo files, and those files will be packaged by PyInstaller. Also, you need to make sure that you point the path to those .pyc/.pyo files.
| 1 | 2 | 0 |
Hi, I have created an application that allows end-user Python scripting. The main portion of the application is written in Python, which I have compiled to an exe using PyInstaller; that part of the application works just fine.
This application then calls a DLL that embeds Python, which in turn calls some end-user Python scripts. There was no problem with this while I was testing it, but once I compiled the program using PyInstaller, the DLL prints the error "ImportError: No module named site".
I'm on Windows with Python 2.7.
From what I can tell from other posts, this is a problem with the PYTHONHOME and PYTHONPATH environment variables, which I'm sure I can set from within the DLL. However, considering that the end user may not have Python installed on their computer, do I need to ship the full Python 2.7 installation with my program, pointing PYTHONHOME and PYTHONPATH at that installation? Is this the correct way to go about this?
|
Embedding python scripting installation ImportError: No module named site
| 0 | 0 | 0 | 3,125 |
15,452,390 |
2013-03-16T17:32:00.000
| 1 | 0 | 1 | 0 |
python,virtualenv
| 15,454,563 | 1 | true | 0 | 0 |
This is exactly what virtual environments are for.
Create your virtual environment and activate, then any 'pip install' or 'easy_install' that you do will only affect that environment, not your site.
If I were you, once you get 2.8 working, I would install 3.3 in a different virtualenv and then think about deleting the site-wide Weblogo.
| 1 | 0 | 0 |
so I have a question about installing multiple versions of a single program. Apparently I need to use Weblogo-3.3 for one part of my project, but another program I'm using for a different part uses Weblogo-2.8.2 as a dependency, and cannot work with 3.3. This is...problematic, as I need to do both parts. Both use python 2.7.
Is there any way I can use a virtual environment to selectively install and run Weblogo-2.8? I'm concerned that even if I do that and try to run the program that uses it as a dependency, it will try and call the Weblogo-3.3. Won't they both be in python's dist-packages folder and cause conflicts?
I was about to try to install it with Virtualenv, but I didn't want to mess up my current installation of Weblogo-3.3 so I was going to hold off until I knew for sure. Thanks!
|
Using Virtualenv to install two versions of a program
| 1.2 | 0 | 0 | 105 |
15,456,709 |
2013-03-17T01:39:00.000
| 0 | 0 | 0 | 0 |
python,excel,google-drive-api
| 15,505,507 | 1 | true | 0 | 0 |
Ended up just downloading with xlrd and using that. Thanks for the link Rob.
| 1 | 0 | 0 |
So I know how to download Excel files from Google Drive in .csv format. However, since .csv files do not support multiple sheets, I have developed a system in a for loop to add the '&grid=tab_number' to the file download url so that I can download each sheet as its own .csv file. The problem I have run into is finding out how many sheets are in the excel workbook on the Google Drive so I know how many times to set the for loop for.
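For what it's worth, xlrd can report the sheet count directly once the workbook is downloaded; a minimal sketch (the local path is a placeholder):

```python
import xlrd

book = xlrd.open_workbook("workbook.xls")   # downloaded copy of the Drive file
print(book.nsheets)                         # how many times to run the loop
for tab_number in range(book.nsheets):
    sheet = book.sheet_by_index(tab_number)
    print(sheet.name, sheet.nrows)
```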
|
Complicated Excel Issue with Google API and Python
| 1.2 | 1 | 1 | 95 |
15,461,429 |
2013-03-17T13:45:00.000
| 0 | 0 | 1 | 0 |
python,datetime
| 15,461,593 | 1 | true | 0 | 0 |
I would write a function to check if a given tuple is within your range and then use a list comprehension such as
flagged = [x for x in myList if inMyRange(x)]
To get a list of all flagged ranges. Or perform an operation on the flagged items in the comprehension itself
operated = [myOperation(x) for x in myList if inMyRange(x)]
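A minimal sketch of such an inMyRange for a window that wraps midnight (23:00-01:15); it flags a pair when either endpoint falls inside the window, which matches the examples in the question, though full interval-overlap semantics would need more logic:

```python
def to_minutes(hhmm):
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

LOW, HIGH = to_minutes("23:00"), to_minutes("01:15")

def in_my_range(pair):
    def in_window(t):
        return t >= LOW or t <= HIGH   # wrapped window spans midnight
    return any(in_window(to_minutes(t)) for t in pair)

my_list = [("21:00", "23:33"), ("02:00", "06:00")]
flagged = [x for x in my_list if in_my_range(x)]
print(flagged)   # [('21:00', '23:33')]
```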
| 1 | 0 | 0 |
I have time data in the form of:[ (from , to) , (.. , ..) , ..]
[('16:35', '16:10'),
('18:45', '18:15'),
('19:14', '12:15'),
('10:36', '00:10'),
('21:08', '13:40'),
('22:20', '06:10'),
('03:20', '16:40'),
('23:56', '12:10'),
('00:16', '21:30'),
I need to perform an operation like: if a time range falls within the window from 23:00 to 01:15, then I need to flag it. For example, (21:00, 23:33) should be flagged, while (02:00, 06:00) should not. The midnight wrap-around should also be taken care of.
Any tips?
|
Python : Operations on DateTime range intervals
| 1.2 | 0 | 0 | 173 |
15,463,191 |
2013-03-17T16:29:00.000
| 1 | 0 | 0 | 0 |
python,opencv,image-processing,computer-vision
| 18,670,288 | 1 | false | 0 | 0 |
Image statistics such as mean, std, etc. are not sufficient to answer the question, and Canny may not be the best approach; it all depends on the characteristics of the image. To learn about those characteristics and approaches, you may google for a survey of image segmentation / edge detection methods. Problems of this kind also often involve some pre-processing and post-processing steps.
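For reference, the median-based heuristic from the question can at least be automated as a starting point; a sketch using OpenCV and NumPy, with sigma as a tunable assumption:

```python
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    # derive both hysteresis thresholds from the image median
    m = np.median(gray)
    low = int(max(0, (1.0 - sigma) * m))
    high = int(min(255, (1.0 + sigma) * m))
    return cv2.Canny(gray, low, high)
```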
| 1 | 2 | 1 |
I'm trying to choose the best parameters for the hysteresis phase in the canny function of OpenCV. I found some similar questions in stackoverflow but they didn't solve my problem. So far I've found that there are two main approaches:
Compute mean and standard deviation and set the thresholds as: lowT = mean - std, highT = mean+std
Compute the median and set the thresholds as: 0.6*median, 1.33*median
However, neither of these thresholds is the best fit for my data. Manually, I've found that lowT=100, highT=150 are the best values. The data (a gray-scale image) has the following properties:
median=202.0, mean=206.6283375, standard deviation = 35.7482520742
Does anybody know where the problem is, or where I can find more information about this?
|
Choosing the threshold values for hysteresis
| 0.197375 | 0 | 0 | 2,035 |
15,469,799 |
2013-03-18T04:31:00.000
| 0 | 0 | 0 | 1 |
python,xcode,interface
| 15,469,982 | 3 | true | 0 | 1 |
Open Automator
Choose "Application"
Drag a "Run Shell Script" onto the workflow panel
Choose "/usr/bin/python" as the shell. Paste in your script, and select Pass Input: "to stdin"
Or, choose bash as the shell, and simply have the automator script run your Python script with Pass Input "as arguments" selected on the top right. You'll then use the contents of $@ as your arguments.
Save the application.
Done. You have a .app onto which files can be dragged.
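The Python side of the "as arguments" variant could look like this minimal sketch (the processing itself is a placeholder):

```python
import sys

for path in sys.argv[1:]:            # each dragged file arrives as an argument
    with open(path) as f:
        print(path, len(f.read()))   # replace with your real processing
```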
| 1 | 3 | 0 |
So I have a lot of python scripts that I have written for my work but no one in my lab knows how to use Python so I wanted to be able to generate a simple Mac App where you can 'Browse' for a file on your computer and type in the name of the file that you want to save . . . everything else will be processed by the application for the python script I have generated.
Does anyone know if this is possible? I watched some tutorials on people generating applications in Xcode with Objective C but I don't want to have to learn a new language to reconstruct my Python scripts.
Thank you
|
Is it possible to create Python-based Application in Xcode or equivalent?
| 1.2 | 0 | 0 | 3,884 |
15,470,346 |
2013-03-18T05:33:00.000
| 1 | 0 | 1 | 0 |
python,list,slice,colon
| 15,470,366 | 3 | false | 0 | 0 |
There's nothing at index len(somelist) (list indices start at 0 in Python). Therefore, trying to access a non-existent element raises an error.
However, list slicing (with the myList[i:] syntax) returns a new list containing the elements at and after index i. Since there are no elements in the list at index i or after, an empty list is returned.
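A quick demonstration of the asymmetry described above:

```python
somelist = [1, 2, 3]
print(somelist[len(somelist):])   # [] - slices clamp out-of-range indices
somelist[len(somelist):] = [4]    # same effect as somelist.append(4)
print(somelist)                   # [1, 2, 3, 4]
# somelist[len(somelist)]         # would raise IndexError: index out of range
```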
| 1 | 8 | 0 |
I understand that somelist[len(somelist)] cannot access an index that is outside of the defined list - this makes sense.
But why then does Python allow you to do somelist[len(somelist):]?
I've even read that somelist[len(somelist):] = [1] is equivalent to somelist.append(1)
But why does slice notation change the fact that the index "len(somelist)" is still outside the range of the list?
|
Why does somelist[len(somelist)] generate an IndexError but not somelist[len(somelist):]?
| 0.066568 | 0 | 0 | 218 |
15,471,372 |
2013-03-18T07:06:00.000
| 2 | 0 | 0 | 0 |
python,machine-learning,scikit-learn
| 15,743,100 | 2 | true | 0 | 0 |
Do you just want to do majority voting? This is not implemented afaik. But as I said, you can just average the predict_proba scores. Or you can use a LabelBinarizer on the predictions and average those. That would implement a voting scheme.
Even if you are not interested in the probabilities, averaging the predicted probabilities might be more robust than doing a simple voting. This is hard to tell without trying out, though.
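A minimal sketch of the probability-averaging idea on toy data (the choice of classifiers is illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([0] * 5 + [1] * 5)

clfs = [LogisticRegression(), SVC(probability=True)]
for clf in clfs:
    clf.fit(X, y)

# average the class-probability estimates, then pick the argmax class
avg_proba = np.mean([clf.predict_proba(X) for clf in clfs], axis=0)
votes = clfs[0].classes_[np.argmax(avg_proba, axis=1)]
print(votes)
```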
| 1 | 1 | 1 |
Is there any way to combine different classifiers into one in sklearn? I found the sklearn.ensemble package. It contains different models, like AdaBoost and RandomForest, but they use decision trees under the hood, and I want to use different methods, like SVM and logistic regression. Is it possible with sklearn?
|
Ensemble methods with scikit-learn
| 1.2 | 0 | 0 | 694 |
15,478,126 |
2013-03-18T13:33:00.000
| 3 | 0 | 0 | 0 |
python,rss,urllib,bots
| 15,478,540 | 1 | false | 0 | 0 |
You could use an HTTP sniffer (like Fiddler) or any protocol sniffer (tcpdump, Wireshark) to inspect your network traffic to Google and check whether your urllib request and your wget/browser requests differ. Also check and compare all the cookies and HTTP headers of both requests. And remember that for IPs with a large number of requests to Google, Google sends a captcha every N requests, so if you need to parse its content you may need to use some proxies for Google parsing.
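One concrete difference worth checking is the User-Agent header; a Python 2 era sketch (the feed URL is a placeholder):

```python
import urllib2

feed_url = "http://www.google.com/..."   # your search results feed (placeholder)
req = urllib2.Request(feed_url, headers={
    # present a browser-like User-Agent instead of urllib's default
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0",
})
print(urllib2.urlopen(req).read())
```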
| 1 | 2 | 0 |
I am trying to import results from a Google Search results rss/xml feed into my website but every time I run the python script I get a message from Google:
Our systems have detected unusual traffic from your computer network.
This page checks to see if it's really you sending the requests and
not a robot.
The script uses urllib to download pages and works with other rss feeds.
This doesn't really make sense, as I thought RSS feeds were supposed to be consumed by software (bots). I left the script over the weekend and ran it on Monday morning but still got the message, so I am not hitting their servers too much.
I can load the feed in my browser, though, and I can also download the feed using wget on the server.
|
How to fetch a google search results feed with a python script and not be identified as a bot?
| 0.53705 | 0 | 1 | 1,138 |
15,479,928 |
2013-03-18T14:59:00.000
| 17 | 0 | 1 | 0 |
python,dictionary,set,python-internals
| 19,749,013 | 5 | false | 0 | 0 |
"Arbitrary" isn't the same thing as "non-determined".
What they're saying is that there are no useful properties of dictionary iteration order that are "in the public interface". There almost certainly are many properties of the iteration order that are fully determined by the code that currently implements dictionary iteration, but the authors aren't promising them to you as something you can use. This gives them more freedom to change these properties between Python versions (or even just in different operating conditions, or completely at random at runtime) without worrying that your program will break.
Thus if you write a program that depends on any property at all of dictionary order, then you are "breaking the contract" of using the dictionary type, and the Python developers are not promising that this will always work, even if it appears to work for now when you test it. It's basically the equivalent of relying on "undefined behaviour" in C.
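A small CPython illustration of "determined by the implementation, not by you" (this behavior is not guaranteed by the language):

```python
s1 = {100, 3, 7}
s2 = {7, 3, 100}
print(list(s1), list(s2))   # identical order for both: iteration follows the
                            # internal hash-table layout, not insertion order
```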
| 1 | 161 | 0 |
I don't understand how looping over a dictionary or set in Python can happen in 'arbitrary' order.
I mean, it's a programming language, so everything in the language must be 100% determined, correct? Python must have some kind of algorithm that decides which part of the dictionary or set is chosen first, second, and so on.
What am I missing?
|
Why is the order in dictionaries and sets arbitrary?
| 1 | 0 | 0 | 20,118 |
15,482,842 |
2013-03-18T17:13:00.000
| 0 | 0 | 0 | 0 |
python,node.js,openshift,openshift-client-tools
| 15,485,252 | 1 | false | 0 | 0 |
What do you mean you need the RHC client open? It just sends commands and then finishes executing.
The RHC client is a stateless means to send REST requests to the OpenShift broker - it has absolutely no connection to the node running your app UNLESS you use something like the RHC APP TAIL command. Do you mean the app stays up while you are SSH'd in? Please don't forget we spin the app down after a day of no use, and it will take a little while for it to spin up again.
Is there a GitHub repo of the app?
There is no need for a post-deploy script.
What do the logs for your application say? You can get to them either by doing RHC APP TAIL or by SSH'ing into your application and going to ~/python-2.6/logs.
| 1 | 0 | 0 |
I'm using OpenShift to deploy my app, which uses Python, MongoDB and Node.js.
After pushing all my code and data to the server, it says the service is not available once the rhc client is closed.
Is that because I did not write a post-deploy script?
|
Openshift script after deployment
| 0 | 0 | 0 | 263 |
15,484,850 |
2013-03-18T19:06:00.000
| 0 | 0 | 0 | 1 |
python,rabbitmq,amqp
| 15,501,068 | 1 | false | 0 | 0 |
What you're describing sounds like a pretty typical middleware pipeline. While that achieves the same effect of modifying messages before they are delivered to their intended consumer, it doesn't work by accessing queues.
The basic idea is that all messages first go into a special queue where they are delivered to the middleware. The middleware then composes a new message, based on the one it just received, and then publishes that to the intended recipient's queue.
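A rough sketch of such a middleware consumer; it uses pika rather than amqplib purely because its API is compact (queue names and the pika 1.x on_message_callback signature are assumptions):

```python
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="incoming")
channel.queue_declare(queue="user_queue")

def on_message(ch, method, properties, body):
    msg = json.loads(body)
    msg["state"] = "updated"              # rewrite the body before re-publishing
    ch.basic_publish(exchange="", routing_key="user_queue", body=json.dumps(msg))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="incoming", on_message_callback=on_message)
channel.start_consuming()
```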
| 1 | 0 | 0 |
I use RabbitMQ in Python via amqplib. I am trying to use AMQP for something more than just a queue, if that's possible - searching messages by ID, modifying them before dequeuing, deleting them from the queue before dequeuing. These operations are used to store and update a real users' queue for a load balancer, and that queue can be updated asynchronously whenever a real user's state changes (for example, if a user is dead, his AMQP message must be deleted; every such change must be reflected in the users' AMQP queue, in the appropriate user's AMQP message) before the actual dequeuing of a message happens.
My questions are the following :
1. Is there a way through amqplib to modify an AMQP message body in some queueN before it is dequeued, searching for it by some ID in its header? I mean - I want to modify the message body before the receiver dispatches it.
2. Is there a way for a worker to pop exactly 5 (or any number of) last messages from queueN via amqplib?
3. Can I asynchronously delete a message from queueN before it is dequeued, so that its neighbors take its place in queueN?
4. What is the way for a message ID1 from queueN to get its real current queue position, counted from the beginning of queueN? Does AMQP store/update the real queue position of any message?
Thanks in advance.
UPDATE: according to the RabbitMQ documentation, there are problems with this kind of random access to messages in an AMQP queue. Please advise another suitable queue design in Python that supports fast asynchronous access to its elements - searching for a message by its body, updating/deleting queue messages, and quickly getting the queue index of any message. We tried a deque plus an additional dict with user_info, but in that case we need to lock the deque+dict on each update to avoid race conditions. The main purpose is to serve a load balancer's queue and get rid of blocking when counting changes in the queue.
|
Message queue with random read\write access to queue elements before dequeing (rabbitmq or)
| 0 | 0 | 0 | 875 |
15,485,567 |
2013-03-18T19:47:00.000
| 0 | 0 | 1 | 1 |
python
| 15,486,937 | 1 | false | 0 | 0 |
I am pretty sure the OS X build tools (Xcode et al.) exist only on Apple platforms, and there is no business rationale for Apple to port them to Windows.
So the probable answer is "buy a Mac".
| 1 | 0 | 0 |
Given that the code has been written independently of platform, how do I build a package for Mac OS when I am on Windows and the package has been successfully built there? I can use python setup.py bdist_msi on Windows, but not python setup.py bdist_dmg, since I am not on a Mac. What can I do about that?
Python 3.3, tkinter, cxFreeze, Windows 8.
|
Build package for OSX when on Windows (Python 3.3, tkinter)
| 0 | 0 | 0 | 98 |
15,486,008 |
2013-03-18T20:13:00.000
| 2 | 0 | 1 | 0 |
python
| 15,486,143 | 3 | true | 0 | 0 |
IMHO: do NEVER touch the ConfigParser module; it's outdated (read: obviously not brought up to date) and has quite some quirks. API for comments? Forget it! Access to the DEFAULT section? LOL! Wondering about unparsed config fields? Well, guess what: ConfigParser iterates over all your config files and fails silently (glossing over), only to maybe EVENTUALLY produce a condensed exception trace over ALL errors. It's not easy to find out which error belongs to which file. And there are no nested config sections...
Rather CSV than ConfigParser!
IMO: use JSON for config files. It's very flexible, a direct mapping between Python data structures and the JSON string representation is possible, and it's still human-readable. Interchange between different languages is quite easy, too.
The curly braces in the file aren't that pretty, but you can't have it all! ;-)
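A minimal round-trip sketch of the JSON approach (file name and keys are placeholders):

```python
import json

config = {"host": "localhost", "port": 8080, "debug": False}
with open("daemon.conf", "w") as f:
    json.dump(config, f, indent=4)   # human-readable on disk

with open("daemon.conf") as f:
    config = json.load(f)            # maps straight back to dicts/lists
```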
| 2 | 6 | 0 |
I'm writing a daemon script in Python and would like to be able to specify settings in a file that the program expects to read (like .conf files). The standard library has configparser and xdrlib file formats, but neither seems pythonic; the former being a Microsoft standard and the latter a Sun Microsystems standard.
I could write my own, but I'd rather stick to a well-known and documented standard, than reinvent the wheel.
|
What's the most pythonic method for specifying a configuration file?
| 1.2 | 0 | 0 | 187 |
15,486,008 |
2013-03-18T20:13:00.000
| 5 | 0 | 1 | 0 |
python
| 15,486,098 | 3 | false | 0 | 0 |
Unless you have any particularly complex needs, there's nothing wrong with using configparser. It's a simple, flat file format that's easily understood and familiar to many people. If that really rubs you the wrong way, there's always the option of using JSON or YAML config but those are somewhat more heavyweight formats.
A third popular option is to just use Python for your configuration file. This gives you more flexibility but also increases the odds of introducing strange bugs by 'just' modifying your config file.
| 2 | 6 | 0 |
I'm writing a daemon script in Python and would like to be able to specify settings in a file that the program expects to read (like .conf files). The standard library has configparser and xdrlib file formats, but neither seems pythonic; the former being a Microsoft standard and the latter a Sun Microsystems standard.
I could write my own, but I'd rather stick to a well-known and documented standard, than reinvent the wheel.
|
What's the most pythonic method for specifying a configuration file?
| 0.321513 | 0 | 0 | 187 |
15,486,091 |
2013-03-18T20:17:00.000
| 3 | 0 | 0 | 0 |
python,application-restart,panda3d
| 15,522,504 | 2 | false | 0 | 1 |
Not really. Panda3D doesn't really make this abstraction, that's up to the application developer. It depends on the specific case what "restart" really means; do you want to close the existing window and then reopen a new one? If not, then that means that you have to keep many Panda3D objects in place and can't simply recreate the ShowBase instance. But do you then want to unload any models loaded into memory? Do you want to resend the geometry for your UI objects to the graphics card?
So, depending on your specific needs, you will have to unload and get rid of the objects that you need to restart and then recreate them. If you use an object-oriented approach and structure your objects properly, this should be straightforward - i.e., you implement an unload() on your Game object that unloads things specific to that game, then let the references to that object go (causing it to be garbage collected), and create a new one. (Beware of circular references! If you have them, they may keep old instances of your objects in memory even after they have gone out of scope.)
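In outline, the structure this answer suggests might look like the following sketch (the Game class and its contents are hypothetical, not Panda3D API, apart from the typical removeNode() cleanup call):

```python
class Game(object):
    def __init__(self, base):
        self.base = base
        self.models = []      # load models, build UI, start tasks...

    def unload(self):
        for m in self.models:
            m.removeNode()    # typical Panda3D NodePath cleanup
        self.models = []

def restart(base, old_game):
    old_game.unload()
    return Game(base)         # drop the old reference so it can be collected
```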
| 1 | 1 | 0 |
I have a game, written in Python, and I use Panda3d.
Now I want to have a restart option; that is, I want to press a button so that main is executed again, just as if the current instance of the game had never existed.
Is there a simple way to do this?
|
panda3d - how to restart a game
| 0.291313 | 0 | 0 | 583 |
15,487,848 |
2013-03-18T22:06:00.000
| 1 | 1 | 1 | 1 |
python,linux
| 15,487,877 | 3 | false | 0 | 0 |
Run /aaa/python2.5 python_code.py. If you use Python 2.5 more often, consider changing the $PATH variable to make Python 2.5 the default.
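To confirm which interpreter and which copy of the module you are actually getting, a quick check from inside Python (paths taken from the question):

```python
import sys
print(sys.executable)                      # which python binary is running
sys.path.insert(0, "/aaa/python2.5/lib")   # prefer the maintained copy
import bbb
print(bbb.__file__)                        # should now point under /aaa
```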
| 1 | 2 | 0 |
I am doing maintenance on some Python code. Python is installed in /usr/bin, the code is installed in /aaa, and a Python 2.5 is installed under /aaa/python2.5. Each time I run Python, it uses the /usr/bin one. How do I make it run /aaa/python2.5?
Also, when I run python -v and then import bbb; bbb.__file__, it shows that it uses the bbb module under /usr/ccc/ (I don't know why), instead of the bbb module under /aaa/python2.5/lib.
How do I make it run python2.5 and use the /aaa/python2.5/lib modules? The reason I'm asking is that if we maintain code that other people are still using, we need to install the code under a new directory and modify, run, and debug it there.
|
How to run python in different directory?
| 0.066568 | 0 | 0 | 1,363 |
15,488,944 |
2013-03-18T23:36:00.000
| -1 | 0 | 1 | 0 |
python,pygame
| 15,625,491 | 3 | false | 0 | 1 |
You cannot run a Python program without Python installed. However, you can turn the program into an exe file using a converter. Some converters are py2exe, PyInstaller, and cx_Freeze. I personally recommend cx_Freeze. If you use cx_Freeze, you must add the line import pygame._view to your source code, due to a bug.
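For reference, a minimal cx_Freeze setup.py sketch (names are placeholders); build it with python setup.py build:

```python
from cx_Freeze import setup, Executable

setup(
    name="mygame",
    version="0.1",
    options={"build_exe": {"packages": ["pygame"]}},  # bundle pygame explicitly
    executables=[Executable("main.py")],
)
```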
| 2 | 1 | 0 |
Do you have to have pygame installed to run pygame games? I have programmed a game in pygame on my Raspberry Pi (using the Adafruit WebIDE), and I don't want to have to run it on the Pi itself, so I am planning to use it on my Windows 8 box, and I don't have pygame installed on the Windows box.
|
Do you have to have pygame installed to run pygame games?
| -0.066568 | 0 | 0 | 4,043 |
15,488,944 |
2013-03-18T23:36:00.000
| -1 | 0 | 1 | 0 |
python,pygame
| 15,488,983 | 3 | false | 0 | 1 |
Yes - for a computer to run a program, it needs to have the modules the program uses. It's like wanting to solve a quadratic equation when numbers don't exist yet. Especially if you want to edit the game on your Windows box, you would need to have Pygame installed there.
| 2 | 1 | 0 |
Do you have to have pygame installed to run pygame games? I have programmed a game in pygame on my Raspberry Pi (using the Adafruit WebIDE), and I don't want to have to run it on the Pi itself, so I am planning to use it on my Windows 8 box, and I don't have pygame installed on the Windows box.
|
Do you have to have pygame installed to run pygame games?
| -0.066568 | 0 | 0 | 4,043 |
15,489,371 |
2013-03-19T00:20:00.000
| 0 | 1 | 0 | 1 |
python,serial-port,tty,modbus
| 15,494,099 | 2 | false | 0 | 0 |
There is no straightforward solution to trick your Linux server into thinking that a MODBUS RTU connection is actually a MODBUS TCP connection.
In all cases, your modem will have to transfer data from TCP to serial (and the other way around). So I assume that either:
1) somehow you can program your modem and instruct it to do whatever you want, or
2) the manufacturer of the modem has provided a built-in mechanism to do that.
In case 1), you should program your modem so that it replaces TCP ADUs with RTU ADUs (and the other way around) when copying data from the TCP connection to the RS link.
In case 2), simply provide your RTU frame to whatever API the manufacturer devised.
| 2 | 0 | 0 |
I will be creating a connection between my Linux server and a cellular modem where the modem will act as a server for serial over TCP.
The modem itself is connected to a modbus device (industrial protocol) via an RS232 connection.
I would like to use pymodbus to facilitate talking to the end modbus device. However, I cannot use the TCP modbus option in PyModbus as the end device speaks serial modbus (Modbus RTU). And I cannot use the serial modbus option in Pymodbus as it expects to open an actual local serial port (tty device) on the linux server.
How can I bridge the serial connection such that the pymodbus library will see the connection as a local serial device?
|
Pymodbus (Serial) over a tcp serial connection
| 0 | 0 | 0 | 1,866 |
15,489,371 |
2013-03-19T00:20:00.000
| 0 | 1 | 0 | 1 |
python,serial-port,tty,modbus
| 16,742,894 | 2 | false | 0 | 0 |
I actually was working on something similar and decided to make my own Serial/TCP bridge. Using virtual serial ports to handle the communication with each of the modems.
I used the minimalmodbus library although I had to modify it a little in order to handle the virtual serial ports.
I hope you solved your problem and if you didn't I can try to help you out.
| 2 | 0 | 0 |
I will be creating a connection between my Linux server and a cellular modem where the modem will act as a server for serial over TCP.
The modem itself is connected to a modbus device (industrial protocol) via an RS232 connection.
I would like to use pymodbus to facilitate talking to the end modbus device. However, I cannot use the TCP modbus option in PyModbus as the end device speaks serial modbus (Modbus RTU). And I cannot use the serial modbus option in Pymodbus as it expects to open an actual local serial port (tty device) on the linux server.
How can I bridge the serial connection such that the pymodbus library will see the connection as a local serial device?
|
Pymodbus (Serial) over a tcp serial connection
| 0 | 0 | 0 | 1,866 |
15,491,308 |
2013-03-19T04:07:00.000
| 1 | 1 | 0 | 1 |
python,serial-port,tty,modbus
| 15,680,046 | 1 | false | 0 | 0 |
If I understand correctly, you need to make a connection of this kind:
[pyModbus <-(fake serial)-> process] <-(tcp/ip)-> [modem <-(serial)-> device]
I suggest using socat for this.
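For example, socat can be launched from Python to create the fake tty; the host, port and link path below are placeholders:

```python
import subprocess

# creates /tmp/ttyVIRT0, a pty bridged to the modem's TCP serial port
subprocess.Popen([
    "socat",
    "pty,link=/tmp/ttyVIRT0,raw,echo=0",
    "tcp:192.168.1.50:4001",
])
# pymodbus can then open /tmp/ttyVIRT0 as if it were a local serial device
```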
| 1 | 1 | 0 |
I have a library (PyModbus) I would like to use that requires a tty device as it will be communicating with a device using serial connection. However, the device I am going to talk to is going to be behind a modem that supports serial over tcp (the device plugs into a com port on the modem).
Without the modem in the way it would be trivial. I would connect a usb serial cable to the device and the other end to the computer. With the modem in the way, the server has to connect to a tcp port on the modem and pump serial data through that. The modem passes the data received to the device connected to the com port.
In linux, whats the best way to create a fake tty from the "serial over tcp connection" for momentary use and then be destroyed. This would happen periodically, and an individual linux server may have 10~500 of these emulated device open at any given time.
|
Create a fake TTY device from a serial-over TCP connection
| 0.197375 | 0 | 0 | 1,308 |
15,493,342 |
2013-03-19T06:59:00.000
| 0 | 1 | 1 | 0 |
python,emacs,restructuredtext
| 28,541,254 | 3 | false | 0 | 0 |
As far as editing purposes go, narrowing to the docstring and activating rst-mode should be the way to go.
python-mode.el provides py--docstring-p, which might easily be adapted for python.el.
Then binding the whole thing to some idle timer would do the narrowing/switching.
What remains is some expression that toggles rst-mode off and widens again.
| 1 | 13 | 0 |
How to I get Emacs to use rst-mode inside of docstrings in Python files? I vaguely remember that different modes within certain regions of a file is possible, but I don't remember how it's done.
|
Have Emacs edit Python docstrings using rst-mode
| 0 | 0 | 0 | 1,425 |
15,494,054 |
2013-03-19T07:49:00.000
| 1 | 0 | 1 | 0 |
list,python-2.7
| 15,494,144 | 1 | true | 0 | 0 |
Try a list of tuples... it may help.
There is also an array module.
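The array module stores machine-typed numbers far more compactly than a list of Python int objects; a minimal sketch:

```python
from array import array

a = array("i", [1, 2, 3])   # typecode "i": C int, ~4 bytes per element
a.append(4)
print(a[0], len(a))         # indexing and length work like a list
```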
| 1 | 0 | 0 |
I have a text file of approx 300MB, and each line is a (short) list of integer; most lines contain just 1 integer, the longest list contains 10. If I create a list of lists in Python, one list for each line, with the entries cast as int, I run into MemoryErrors... how can that be when I have 3GB of RAM? Environment is Python 2.7.3 on XP.
|
Alternatives for integer list in Python
| 1.2 | 0 | 0 | 389 |
15,495,949 |
2013-03-19T09:42:00.000
| 1 | 0 | 0 | 0 |
python,file,amazon-s3,boto
| 15,503,402 | 1 | false | 1 | 0 |
No, there really isn't any way to do this without putting some sort of service between the people clicking on the links and the S3 objects.
The reason is that access to S3 content is determined by your AWS access_key and secret_key. There is no way to "log in" with these credentials, and logging into the AWS web console uses a different set of credentials that are only useful for the console; it does not authenticate you with the S3 service.
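So the usual compromise is a signed, expiring URL; a boto 2 sketch (bucket and key names are placeholders):

```python
import boto

conn = boto.connect_s3()   # uses your AWS access_key / secret_key
key = conn.get_bucket("my-bucket").get_key("report.pdf")
url = key.generate_url(expires_in=3600)   # link valid for one hour
print(url)
```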
| 1 | 1 | 0 |
I'm having trouble with S3 files. I have some python code using boto that uploads file to S3, and I want to write to a log file links to the files I created for future reference.
I can't seem to find a way to generate a link that works only for authenticated people. I can create a link using the generate_url method, but then anybody who clicks on that link can access the file. Any other way of creating the URL produces a link that doesn't work even when I'm logged in (I get an XML "access denied" response).
Does anybody know a way of doing this? Preferably permanent links, but I can live with temporary links that expire after a given time.
Thanks,
Ophir
|
Creating links to private S3 files which still requires authentication
| 0.197375 | 0 | 1 | 415 |
15,496,065 |
2013-03-19T09:48:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,sdk
| 15,499,700 | 2 | false | 1 | 0 |
@Bharadwaj Please check whether the version number you specified in the command actually exists on App Engine.
Also make sure that you are providing the right App Engine credentials.
| 1 | 0 | 0 |
My app name is nfcVibe, but I am still getting the error below. Can anyone suggest how to download my app? I think I gave the command correctly, but I can't tell where it is going wrong.
C:\Program Files\Google\google_appengine>appcfg.py download_app -A nfcVibe -V 1
"e:\nfcvibe1"
03:11 PM Host: appengine.google.com
03:11 PM Fetching file list...
Error 400: --- begin server output ---
Client Error (400)
The request is invalid for an unspecified reason.
--- end server output ---
|
while downloading app from google app engine its throwing error <400>
| 0 | 0 | 0 | 304 |
15,500,808 |
2013-03-19T13:26:00.000
| 0 | 0 | 1 | 0 |
python,ply
| 15,502,318 | 1 | true | 0 | 0 |
It turns out you can use %prec fakeToken at the end of the production with a different precedence, and insert fakeToken in the right place in the precedence list.
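A minimal PLY sketch of that trick, assuming a toy grammar with only names, | and , (note the caveat in the comments: %prec changes the rule's precedence for reduce decisions, while shift decisions still use the token's own precedence entry):

```python
import ply.lex as lex
import ply.yacc as yacc

tokens = ("NAME", "PIPE", "COMMA")
t_NAME = r"[a-zA-Z_]\w*"
t_PIPE = r"\|"
t_COMMA = r","
t_ignore = " "

def t_error(t):
    t.lexer.skip(1)

precedence = (
    ("left", "TYPE_PIPE"),   # fake token: '|' binds looser than ',' in types
    ("left", "COMMA"),
    ("left", "PIPE"),        # ordinary expression-language '|'
)

def p_type_union(p):
    "type : type PIPE type %prec TYPE_PIPE"
    p[0] = ("union", p[1], p[3])

def p_type_tuple(p):
    "type : type COMMA type"
    p[0] = ("tuple", p[1], p[3])

def p_type_name(p):
    "type : NAME"
    p[0] = p[1]

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse("a | b , c"))   # ('union', 'a', ('tuple', 'b', 'c'))
```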
| 1 | 0 | 0 |
I'm writing a parser in PLY for a language that consists of two sublanguages: the "normal" expression language, and the language of type annotations. The problem is that they share some tokens, and that precedence differs between the two languages.
For example, in the expression language a | b, c should be equivalent to (a | b), c (and means the same as in Python), while in the type language the same should be equivalent to a | (b, c) (either type a or type b, c, that is a tuple with members of type b and type c).
The real problem is a bit more complicated than that, but it's still basically the same.
Would it be possible in PLY to temporarily change precedence? If not, would there be another solution that I can apply?
|
Precedence for sublanguages
| 1.2 | 0 | 0 | 206 |
15,507,009 |
2013-03-19T17:59:00.000
| 1 | 0 | 0 | 0 |
python,matlab,wxpython,tkinter
| 15,508,213 | 3 | false | 0 | 1 |
I haven't used Matlab before, so I'm not sure about its GUI. But if you tend to use Python interactively, you may want to give IPython a try. IPython with Qt can give you an elegant GUI.
| 1 | 0 | 0 |
I'm really trying to fall in love with Python and move away from Matlab. Is it possible to make Matlab style GUIs in Python? How easy is it by comparison? (I make matlab GUIs programmatically, wouldn't dare use GUIDE) Can I put matplotlib graphics in these GUIs? Is tk or wx (or something else) better for this?
|
Can python make Matlab style GUIs
| 0.066568 | 0 | 0 | 2,810 |
15,510,254 |
2013-03-19T20:55:00.000
| 1 | 1 | 0 | 1 |
python,unix,python-2.7,unix-timestamp,dropbox-api
| 15,529,123 | 1 | false | 0 | 0 |
The modified time on the Dropbox server isn't necessarily going to be the modified time on the client, but rather the time the file was uploaded to the server. You can use the 'rev' property on files from the /metadata call to keep track of files instead.
| 1 | 0 | 0 |
I'm doing a file sync between a client, server and Dropbox (Mac client, Debian server). I'm looking at the mod times of files to determine which is newest. On the client I'm using os.path.getmtime(filePath) to get the modified time.
When I check the last modification time of the file on the client and then, after uploading, check again on the server or Dropbox, there is a varying difference in the times for the same file. I thought file mod times were associated with the file rather than the OS it is on, so if the file was last modified on the client, that mod-time stamp should be the same when checked on the server?
Could anyone clarify if uploading the file has an impact on the mod time, or suggest where this variation in time for one file could be coming from? Any advice would be greatly appreciated!
|
File Mod Time Discrepancies On Upload
| 0.197375 | 0 | 0 | 77 |
15,511,400 |
2013-03-19T22:05:00.000
| 1 | 0 | 1 | 0 |
python,integer-division
| 15,511,436 | 2 | false | 0 | 0 |
Integer division takes (I believe) the floor() of whatever float comes out, more or less.
So that's -2 for the first division and 1 for the second.
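Concretely, under Python 2's integer division:

```python
# Python 2: / on two ints floors toward negative infinity
print(-103 / 100)   # -2, because floor(-1.03) == -2
print(103 / 100)    # 1, because floor(1.03) == 1
print(-103 % 100)   # 97, keeping (-2 * 100) + 97 == -103 consistent
```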
| 1 | 9 | 0 |
Why does -103/100 == -2 but 103/100 == 1 in Python? I can't seem to understand why.
|
Why does -103/100 == -2 but 103/100 == 1 in Python?
| 0.099668 | 0 | 0 | 320 |
15,512,276 |
2013-03-19T23:12:00.000
| 0 | 0 | 0 | 0 |
python,numpy,dataset
| 15,513,160 | 2 | false | 0 | 0 |
You could assign a unique sequential number to each row, then choose a random sample of those numbers, then serially extract each relevant row to a new file.
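A sketch of that idea with NumPy (the row count is approximated from the question; the seed is arbitrary):

```python
import numpy as np

n_rows = 192 * 100000                    # ~ total rows across all files
idx = np.random.RandomState(0).permutation(n_rows)

test_val = idx[: n_rows // 5]            # a random fifth for test+validation
val = test_val[: len(test_val) // 5]     # a fifth of that fifth for validation
test = test_val[len(test_val) // 5:]
train = idx[n_rows // 5:]                # remaining rows, already shuffled
```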
| 1 | 0 | 1 |
I have a large dataset. It's currently in the form of uncompressed numpy array files that were created with numpy.array.tofile(). Each file is approximately 100000 rows of 363 floats each. There are 192 files totalling 52 Gb.
I'd like to separate a random fifth of this data into a test set, and a random fifth of that test set into a validation set.
In addition, I can only train on 1 Gb at a time (limitation of GPU's onboard memory) So I need to randomize the order of all the data so that I don't introduce a bias by training on the data in the order it was collected.
My main memory is 8 Gb in size. Can anyone recommend a method of randomizing and partitioning this huge dataset?
|
How should I divide a large (~50Gb) dataset into training, test, and validation sets?
| 0 | 0 | 0 | 756 |
15,513,699 |
2013-03-20T01:28:00.000
| 0 | 0 | 0 | 0 |
python,http,http-headers
| 15,513,793 | 2 | false | 1 | 0 |
I guess you will have to create a list of all known file extensions that you do NOT want, and then scan the content of the HTTP response, checking with if substring not in blacklist:.
The problem is all the hrefs ending with TLDs, forward slashes, URL parameters and so on, so I think it would be easier to check for the stuff you know you don't want.
| 1 | 2 | 0 |
I want to be able to get the list of all URLs that a browser will do a GET request for when we try to open a page. For example, if we try to open cnn.com, there are multiple URLs within the first HTTP response which the browser recursively requests.
I'm not trying to render a page, but I'm trying to obtain a list of all the URLs that are requested when a page is rendered. Doing a simple scan of the HTTP response content wouldn't be sufficient, as there could be images referenced in the CSS which are downloaded too. Is there any way I can do this in Python?
|
How can I extract the list of urls obtained during a HTML page render in python?
| 0 | 0 | 1 | 996 |
15,514,294 |
2013-03-20T02:38:00.000
| 3 | 0 | 0 | 0 |
java,android,c++,python,django
| 15,514,369 | 2 | false | 1 | 1 |
You might want to check out the likes of PhoneGap, Scala, Groovy, Mirah, Rhodes, and Clojure.
| 1 | 1 | 0 |
I want to learn how to program apps for Android, but I am not very fond of Java. I read that you can build Android apps with Python and C++. So can I build apps completely without using Java? Also, what are the advantages of C++, Python, and Java when building for Android? Another question: will the Django framework work for Android? Thank you for your time.
|
Is it possible to build Android apps without Java?
| 0.291313 | 0 | 0 | 3,672 |
15,514,593 |
2013-03-20T03:10:00.000
| 2 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter,importerror
| 19,124,508 | 21 | false | 0 | 0 |
Before installing IPython, I installed modules through easy_install; say, sudo easy_install mechanize.
After installing IPython, I had to re-run easy_install for IPython to recognize the modules.
| 6 | 193 | 0 |
I'm trying to run a script that launches, amongst other things, a Python script. I get an ImportError: No module named ..., however, if I launch ipython and import the same module in the same way through the interpreter, the module is accepted.
What's going on, and how can I fix it? I've tried to understand how Python uses PYTHONPATH but I'm thoroughly confused. Any help would be greatly appreciated.
|
"ImportError: No module named" when trying to run Python script
| 0.019045 | 0 | 0 | 367,136 |
15,514,593 |
2013-03-20T03:10:00.000
| 6 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter,importerror
| 25,022,624 | 21 | false | 0 | 0 |
Doing sys.path.append('my-path-to-module-folder') will work, but to avoid having to do this in IPython every time you want to use the module, you can add export PYTHONPATH="my-path-to-module-folder:$PYTHONPATH" to your ~/.bash_profile file.
| 6 | 193 | 0 |
I'm trying to run a script that launches, amongst other things, a Python script. I get an ImportError: No module named ..., however, if I launch ipython and import the same module in the same way through the interpreter, the module is accepted.
What's going on, and how can I fix it? I've tried to understand how Python uses PYTHONPATH but I'm thoroughly confused. Any help would be greatly appreciated.
|
"ImportError: No module named" when trying to run Python script
| 1 | 0 | 0 | 367,136 |
15,514,593 |
2013-03-20T03:10:00.000
| 18 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter,importerror
| 27,944,947 | 21 | false | 0 | 0 |
Just create an empty Python file with the name __init__.py under the folder that is showing the error, while running the Python project.
| 6 | 193 | 0 |
I'm trying to run a script that launches, amongst other things, a Python script. I get an ImportError: No module named ..., however, if I launch ipython and import the same module in the same way through the interpreter, the module is accepted.
What's going on, and how can I fix it? I've tried to understand how Python uses PYTHONPATH but I'm thoroughly confused. Any help would be greatly appreciated.
|
"ImportError: No module named" when trying to run Python script
| 1 | 0 | 0 | 367,136 |
15,514,593 |
2013-03-20T03:10:00.000
| 1 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter,importerror
| 36,972,264 | 21 | false | 0 | 0 |
I had a similar problem and fixed it by calling python3 instead of python; my modules were in Python 3.5.
| 6 | 193 | 0 |
I'm trying to run a script that launches, amongst other things, a Python script. I get an ImportError: No module named ..., however, if I launch ipython and import the same module in the same way through the interpreter, the module is accepted.
What's going on, and how can I fix it? I've tried to understand how Python uses PYTHONPATH but I'm thoroughly confused. Any help would be greatly appreciated.
|
"ImportError: No module named" when trying to run Python script
| 0.009524 | 0 | 0 | 367,136 |
15,514,593 |
2013-03-20T03:10:00.000
| 0 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter,importerror
| 54,964,060 | 21 | false | 0 | 0 |
Solution without scripting:
Open Spyder -> Tools -> PYTHONPATH manager
Add Python paths by clicking "Add Path".
E.g: 'C:\Users\User\AppData\Local\Programs\Python\Python37\Lib\site-packages'
Click "Synchronize..." to allow other programs (e.g. Jupyter Notebook) use the pythonpaths set in step 2.
Restart Jupyter if it is open
| 6 | 193 | 0 |
I'm trying to run a script that launches, amongst other things, a Python script. I get an ImportError: No module named ..., however, if I launch ipython and import the same module in the same way through the interpreter, the module is accepted.
What's going on, and how can I fix it? I've tried to understand how Python uses PYTHONPATH but I'm thoroughly confused. Any help would be greatly appreciated.
|
"ImportError: No module named" when trying to run Python script
| 0 | 0 | 0 | 367,136 |
15,514,593 |
2013-03-20T03:10:00.000
| 0 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter,importerror
| 45,054,485 | 21 | false | 0 | 0 |
I found yet another source of this discrepancy:
I have ipython installed both locally and in commonly in virtualenvs. My problem was that, inside a newly made virtualenv with ipython, the system ipython was picked up, which was a different version than the python and ipython in the virtualenv (a 2.7.x vs. a 3.5.x), and hilarity ensued.
I think the smart thing to do whenever installing something that will have a binary in yourvirtualenv/bin is to immediately run rehash or similar for whatever shell you are using so that the correct python/ipython gets picked up. (Gotta check if there are suitable pip post-install hooks...)
| 6 | 193 | 0 |
I'm trying to run a script that launches, amongst other things, a Python script. I get an ImportError: No module named ..., however, if I launch ipython and import the same module in the same way through the interpreter, the module is accepted.
What's going on, and how can I fix it? I've tried to understand how Python uses PYTHONPATH but I'm thoroughly confused. Any help would be greatly appreciated.
|
"ImportError: No module named" when trying to run Python script
| 0 | 0 | 0 | 367,136 |
15,514,641 |
2013-03-20T03:15:00.000
| 1 | 0 | 0 | 0 |
python,performance,algorithm,numpy,kdtree
| 15,514,922 | 2 | false | 0 | 0 |
The first thing that comes to my mind is:
If we calculate the distance between every two atoms in the set, it takes O(N^2) operations. That is very slow.
What about introducing a static orthogonal grid with some cell size (for example, close to the distance you are interested in) and then determining which atoms belong to each cell of the grid (this takes O(N) operations)? After this procedure you can reduce the time needed to search for neighbors.
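A minimal sketch of the grid idea (the cell size is a placeholder tied to the cutoff distance):

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell_size):
    """Bucket point indices by integer grid cell; O(N)."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(np.floor(p / cell_size).astype(int))].append(i)
    return grid

points = np.random.rand(10000, 3)
grid = build_grid(points, cell_size=0.2)
# candidate neighbors of a point are the points in its own cell and the
# 26 surrounding cells; everything else is farther than one cell away
```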
| 2 | 0 | 1 |
In a Python application I'm developing, I have an array of 3D points (of size between 2 and 100,000), and I have to find the points that are within a certain distance of each other (say between two values, like 0.1 and 0.2). I need this for a graphics application, and this search should be very fast (~1/10 of a second for a sample of 10,000 points).
As a first experiment I tried the scipy.spatial.KDTree.query_pairs implementation, and with a sample of 5,000 points it takes 5 seconds to return the indices. Do you know of any approach that may work for this specific case?
A bit more about the application:
The points represents atom coordinates and the distance search is useful to determine the bonds between atoms. Bonds are not necessarily fixed but may change at each step, such as in the case of hydrogen bonds.
|
Finding points in space closer than a certain value
| 0.099668 | 0 | 0 | 772 |
15,514,641 |
2013-03-20T03:15:00.000
| 5 | 0 | 0 | 0 |
python,performance,algorithm,numpy,kdtree
| 15,514,859 | 2 | true | 0 | 0 |
Great question! Here is my suggestion:
Divide each coordinate by your "epsilon" value of 0.1/0.2/whatever and round the result to an integer. This creates a "quotient space" of points where distance no longer needs to be determined using the distance formula, but simply by comparing the integer coordinates of each point. If all coordinates are the same, then the original points were within approximately the square root of three times epsilon from each other (for example). This process is O(n) and should take 0.001 seconds or less.
(Note: you would want to augment the original point with the three additional integers that result from this division and rounding, so that you don't lose the exact coordinates.)
Sort the points in numeric order using dictionary-style rules and considering the three integers in the coordinates as letters in words. This process is O(n * log(n)) and should take certainly less than your 1/10th of a second requirement.
Now you simply proceed through this sorted list and compare each point's integer coordinates with the previous and following points. If all coordinates match, then both of the matching points can be moved into your "keep" list of points, and all the others can be marked as "throw away." This is an O(n) process which should take very little time.
The result will be a subset of all the original points, which contains only those points that could be possibly involved in any bond, with a bond being defined as approximately epsilon or less apart from some other point in your original set.
This process is not mathematically exact, but I think it is definitely fast and suited for your purpose.
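A rough NumPy rendering of steps 1-3, under the same approximation caveat the answer states:

```python
import numpy as np

def candidate_mask(points, eps):
    keys = np.floor(points / eps).astype(np.int64)     # step 1: divide and round
    order = np.lexsort(keys.T)                         # step 2: dictionary-style sort
    sk = keys[order]
    eq = np.all(sk[1:] == sk[:-1], axis=1)             # consecutive rows identical?
    keep_sorted = np.r_[eq, False] | np.r_[False, eq]  # step 3: compare neighbors
    keep = np.zeros(len(points), dtype=bool)
    keep[order] = keep_sorted
    return keep                                        # True = possibly bonded
```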
| 2 | 0 | 1 |
In an python application I'm developing I have an array of 3D points (of size between 2 and 100000) and I have to find the points that are within a certain distance from each other (say between two values, like 0.1 and 0.2). I need this for a graphic application and this search should be very fast (~1/10 of a second for a sample of 10000 points)
As a first experiment I tried to use the scipy.spatial.KDTree.query_pairs implementation, and with a sample of 5000 point it takes 5 second to return the indices. Do you know any approach that may work for this specific case?
A bit more about the application:
The points represents atom coordinates and the distance search is useful to determine the bonds between atoms. Bonds are not necessarily fixed but may change at each step, such as in the case of hydrogen bonds.
|
Finding points in space closer than a certain value
| 1.2 | 0 | 0 | 772 |
15,515,299 |
2013-03-20T04:24:00.000
| 0 | 0 | 0 | 1 |
google-app-engine,python-2.x
| 44,672,760 | 3 | false | 1 | 0 |
If you are using GAE Flex (where the secure: directive doesn't work), the only way I've found to detect this (in order to redirect http->https myself) is to check whether request.environ['HTTP_X_FORWARDED_PROTO'] == 'https'.
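Inside a webapp2.RequestHandler, one place to enforce this is dispatch(); a sketch combining the scheme check with the forwarded-header fallback mentioned above:

```python
import webapp2

class SecureHandler(webapp2.RequestHandler):
    def dispatch(self):
        proto = self.request.environ.get('HTTP_X_FORWARDED_PROTO',
                                         self.request.scheme)
        if proto != 'https':
            self.abort(403)   # reject anything that isn't https
        else:
            super(SecureHandler, self).dispatch()
```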
| 1 | 1 | 0 |
I would like to know if there is a way to validate that a request (say a POST or a GET) was made over https.
I need to check this in a webapp2.RequestHandler to reject every request that is not sent via https.
best regards
|
how to check the request is https in app engine python
| 0 | 0 | 1 | 428 |
15,516,354 |
2013-03-20T05:58:00.000
| 0 | 0 | 0 | 0 |
python,selenium
| 15,517,460 | 1 | false | 0 | 0 |
The Selenium IDE 1.10.0 release notes say it only supports up to Firefox 17, so you may face issues with v19. Would you re-try after downgrading Firefox, please?
| 1 | 1 | 0 |
I am trying to create a sample Python script using Selenium IDE 1.10.0 with Firefox version 19.0.2.
I am able to create the script, but at run time I get the exception: "INFO - Got result: Failed to start new browser session: Error while launching browser on session null".
So my question is: can I run the generated script against Firefox version 19.0.2? If yes, why am I getting this error? If not, please give me your input.
Thanks in advance,
Abhishek
|
Does Selenium IDE 1.10.0 support Firefox 19.0.2
| 0 | 0 | 1 | 934 |
15,516,572 |
2013-03-20T06:15:00.000
| 0 | 0 | 1 | 0 |
python,audio,mp3,text-to-speech
| 71,868,861 | 3 | false | 0 | 0 |
You can bypass the 100-character rule by splitting the theText string into a list: theText = f.read().split("<null>") (I used "<null>" as the delimiter). Put the delimiter in the text at the end of every sentence, or at a space before 100 characters.
Then create a for loop, for section in theText:, and for every section run engine.say(section).
I hope this helps with 100-character limits!
| 1 | 6 | 0 |
I can convert text to speech in Python using pyttsx, and I can record audio from a microphone (headset) to an mp3 file.
What I want to do is convert text directly to an mp3 file.
Is there a way to capture the audio that pyttsx plays into memory or a unicode string?
Can anyone help me store the audio in memory, or show how I can convert that data to an mp3 file?
|
How can I convert text to speech (mp3 file) in python?
| 0 | 0 | 0 | 9,809 |
15,517,766 |
2013-03-20T07:33:00.000
| 0 | 0 | 0 | 1 |
python,django,google-app-engine,python-2.7,django-nonrel
| 15,525,796 | 2 | false | 1 | 0 |
The Django library built into GAE is straight-up normal Django, with an SQL ORM. So you can use it with Cloud SQL, but not with the HRD.
django-nonrel is up to 1.4.5, according to the messages on the newsgroup. The documentation, unfortunately, is sorely behind.
| 1 | 0 | 0 |
AppEngine 1.7.6 has promoted Django 1.4.2 to GA.
I wonder how, and whether, people are using this. The reason for my question is that django-nonrel seems to be stuck on Django 1.3 and there are no signs of an updated release.
What I would like to use from Django are the controllers, views and especially form validation.
|
AppEngine 1.7.6 and Django 1.4.2 release
| 0 | 0 | 0 | 154 |
15,519,904 |
2013-03-20T09:40:00.000
| 0 | 0 | 0 | 0 |
php,c++,python,flask
| 15,520,011 | 3 | false | 1 | 1 |
You can use sockets: start listening on some port from your C++ program, then connect from PHP and send/receive data to/from your program.
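Since the question also mentions Flask, here is one possible Python sketch that shells out to the compiled comparator instead (the binary name and paths are placeholders):

```python
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/compare", methods=["POST"])
def compare():
    request.files["a"].save("/tmp/a.png")
    request.files["b"].save("/tmp/b.png")
    # run the C++ binary and return whatever it prints to the browser
    return subprocess.check_output(["./compare", "/tmp/a.png", "/tmp/b.png"])

if __name__ == "__main__":
    app.run()
```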
| 1 | 0 | 0 |
I have C++ code that can be compiled under Linux, Windows or Mac OS. The code compares two images. I would like to have its front end running in a browser and make it available on the web.
I am familiar with hosting and DNS, and that is not the issue. What I can't seem to figure out is:
How do I invoke the program once the image is uploaded by users?
The results from the code need to be displayed back in the browser. How can a callback be set up for this?
Is there a PHP solution? Or Python (with Flask)?
|
Web front-end for c++ code
| 0 | 0 | 0 | 959 |
15,524,030 |
2013-03-20T12:50:00.000
| 4 | 0 | 1 | 0 |
python,list,logging,dictionary,system
| 15,524,068 | 3 | false | 0 | 0 |
You should look into collections.Counter. Your question is a bit unclear.
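A minimal Counter sketch over a log file (the filename and word are placeholders):

```python
from collections import Counter

with open("server.log") as f:
    counts = Counter(f.read().split())

print(counts["error"])           # occurrences of one word
print(counts.most_common(10))    # the ten most frequent words
```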
| 1 | 0 | 0 |
Can anyone tell me how to count the number of times a word appears in a file? I've already read the file into a list in the terminal. Should I put the list into a dictionary, or start over and read the file into a dictionary instead of a list? The file is a log file, if that matters...
|
Counting words in python
| 0.26052 | 0 | 0 | 1,257 |
15,524,627 |
2013-03-20T13:15:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,queue,python-multithreading
| 15,524,819 | 1 | true | 0 | 0 |
Using queues
Usually, a queue is used in a scenario with a bunch of worker threads that get their jobs from the queue. Free threads wait on the queue for new jobs to be put into it. A job is then executed by one thread while all remaining threads wait for the next job. If more jobs are posted than there are threads available, the queue starts to fill up.
That doesn't apply to your scenario as you describe it. Maybe you can just read the data directly without putting it in a queue. If you write to shared data structures, you should consider a locking strategy.
You should read up on parallel programming in general. The concepts are fairly language independent. Then you can read a tutorial about threads with Python. There is plenty of material on the internet about both topics.
Edit:
Communication between threads using threading.Event
The simplest way to communicate between two threads is a threading.Event. The event can be set to true or false. Usually, one thread sets the event and another thread checks the value of the Event and acts accordingly. For example, the event could indicate that there is something new to do. The indicating thread first fills up the data structures that are necessary for the upcoming task and then sets the event to true. Another thread that was waiting on the event is activated once the event is true. Subsequently, it reads out the data structures and performs the task.
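A minimal sketch of that pattern (the shared dict and the job string are illustrative placeholders):

```python
import threading

event = threading.Event()
shared = {}

def worker():
    # Block until the producer signals that the data is ready.
    event.wait()
    print("worker got:", shared["job"])

t = threading.Thread(target=worker)
t.start()

# Fill the shared structure first, then signal the waiting thread.
shared["job"] = "compute row 3, col 7"
event.set()
t.join()
```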
| 1 | 2 | 0 |
I am looking for a way to pass values (e.g. integers, arrays) between multiple threads in Python. I understand that this task can be achieved by using the Queue module, but I am not very familiar with either Python or this specific module.
I have the following scenario: each thread needs to do some calculations based on its own data or data from other threads. Also, each thread knows which other thread holds the data it needs for a specific job (all threads have an array of all threads, so any thread knows that for a task X it needs to get the data from a specific thread (row, col) in that array).
How can this communication between threads be done using the Queue module, or perhaps another technique (the Queue module seemed to be the right tool for this job)?
Any help is most appreciated. Thanks a lot.
|
Passing values between threads using queue module in Python
| 1.2 | 0 | 0 | 2,566 |
15,524,753 |
2013-03-20T13:20:00.000
| 3 | 0 | 1 | 0 |
python,set
| 15,524,837 | 1 | true | 0 | 0 |
A set in Python is itself a hash table, so implementing difference for it is not as hard as you imagine. Looking at it from a higher level, how does one implement set difference? Iterate over one of the collections and add to the result all elements that are not present in the other.
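A sketch of that high-level idea in plain Python (the built-in operator does the same job in C, with the same O(1) average membership tests):

```python
def difference(a, b):
    # Keep every element of a that is not in b; membership tests against
    # a set average O(1) because sets are hash tables.
    other = set(b)
    return {x for x in a if x not in other}

print(difference({1, 2, 3, 4}, {2, 4}))  # {1, 3}
print({1, 2, 3, 4} - {2, 4})             # the built-in equivalent
```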
| 1 | 4 | 0 |
Recently, I have been looking through some Python modules to understand their behavior and how optimized their implementations are. Can anyone tell me what algorithm Python uses to perform the set difference operation? One possible way to achieve set difference is by using hash tables, which involves extra O(N) space. I tried to find the source code of the set operations but was not able to find the code location. Please help.
|
how does python' set difference work internally?
| 1.2 | 0 | 0 | 956 |
15,524,822 |
2013-03-20T13:23:00.000
| 0 | 0 | 1 | 0 |
python,regex,python-2.7
| 15,524,921 | 3 | false | 0 | 0 |
Have you considered using the Levenshtein distance algorithm for help? It is used to determine how similar two strings are to each other.
Here is a naive implementation (a runnable sketch follows the steps below):
For i = 0 to len(haystack_str) - len(needle_str)
Let potential_match = haystack_str[i:i+len(needle_str)]
See what the Levenshtein distance is between potential_match and needle_str
If the distance is 0, you have a perfect match
If the distance is less than threshold, you have an imperfect but close enough match
Otherwise, continue to the next i
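Here is a minimal sketch of the sliding-window idea. Because every window has the same length as the pattern, counting positional mismatches (Hamming distance) is enough here, which is simpler than full Levenshtein:

```python
def fuzzy_find(haystack, needle, max_mismatches):
    positions = []
    n = len(needle)
    for i in range(len(haystack) - n + 1):
        window = haystack[i:i + n]
        # Count the positions where the window differs from the pattern.
        mismatches = sum(1 for a, b in zip(window, needle) if a != b)
        if mismatches <= max_mismatches:
            positions.append(i)
    return positions

print(fuzzy_find('ATGTCGATCGATGCTAGCTATAGATAAAA', 'ATG', 1))
# -> [0, 6, 10, 19, 23]
```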
| 1 | 0 | 0 |
I have a list of substrings of equal length, and for each of them I want to find its positions in a big string. The tricky part is that I should also find substrings with a limited number of mismatches (the number of allowed mismatches is given too). I thought I could do this with regular expressions, but I can't find how. UPD: I'm using Python 2.7.
Example:
Input string: s = 'ATGTCGATCGATGCTAGCTATAGATAAAA', the input substring is s0 = 'ATG', and the number of mismatches allowed is n = 1. What I want returned is an iterable, let's say a list, of positions: [0, 10, 19, 23, 6], which correspond to the positions of 'ATG' (twice), 'ATA' (twice) and 'ATC' respectively, as no other 3-mers within the mismatch limit occur in the string.
|
How to find an imperfect substring?
| 0 | 0 | 0 | 1,936 |
15,529,417 |
2013-03-20T16:35:00.000
| 2 | 0 | 1 | 0 |
python,module,version
| 15,529,520 | 2 | false | 0 | 0 |
I'm not sure if it's possible to change the active installed version of a given module. Given my understanding of how imports and site-packages work, I'm leaning towards no.
Have you considered using virtualenv, though?
With virtualenv, you could create multiple shared environments: one for biopython 1.58, another for 1.61, another for whatever other special situations you need. They don't need to be locked down to a particular user, so while it would take more space than you desired, it could take less space than everyone having their own Python environment.
| 1 | 4 | 0 |
So I am working on a shared computer. It is a workhorse for computations across the department. The problem we have run into is controlling the versions of imported modules. Take Biopython, for example: some people require an older version, 1.58, and yet others require the latest, 1.61. How would I have both versions of the module installed side by side, and how does one specifically access a particular version? I ask because sometimes these APIs change and break old scripts for other people (or they expect certain functionality that is no longer there).
I understand that one could locally (i.e. per user) install the module and specifically direct Python to that module. Is there another way to handle this? Or would everyone have to set an export PYTHONPATH before using it?
|
How do I access different python module versions?
| 0.197375 | 0 | 0 | 361 |
15,530,071 |
2013-03-20T17:03:00.000
| 2 | 0 | 0 | 0 |
python,screen-scraping,scrapy
| 25,308,766 | 1 | false | 1 | 0 |
Before trying to give you an idea...
I must say I would try your database option first. Databases are made for exactly that, and even if your DB gets really big, it should not slow the crawling down significantly.
And one lesson I have learned: "First do the dumb implementation. After that, you try to optimize." Most of the time when you optimize first, you just optimize the wrong part.
But, if you really want another idea...
Scrapy's default is not to crawl the same URL twice. So, before starting the crawl you can put the already scraped URLs (from 3 days before) into the list that Scrapy uses to know which URLs were already visited. (I don't know how to do that.)
Or, simpler, in your item parser you can just check whether the URL was already scraped, and return None or scrape the new item accordingly.
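A sketch of that second suggestion using the modern Scrapy API: load the already-scraped URLs once at spider start (so there is no per-item database query) and skip known items in the parser. The XPath, the seed URL, and load_seen_urls() are placeholders for your own pieces:

```python
import scrapy

class IncrementalSpider(scrapy.Spider):
    name = "incremental"
    start_urls = ["http://example.com/category"]  # placeholder seed URL

    def __init__(self, *args, **kwargs):
        super(IncrementalSpider, self).__init__(*args, **kwargs)
        # Load previously scraped URLs once, e.g. from your SQL database.
        # load_seen_urls() is a hypothetical helper you would write.
        self.seen = load_seen_urls()

    def parse(self, response):
        for href in response.xpath("//a[@class='item']/@href").extract():
            url = response.urljoin(href)
            if url in self.seen:
                continue  # already scraped on an earlier run
            yield scrapy.Request(url, callback=self.parse_item)

    def parse_item(self, response):
        # Extract and yield the item fields here.
        pass
```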
| 1 | 3 | 0 |
Please help me solve the following case:
Imagine a typical classifieds category page: a page with a list of items. When you click an item you land on its internal page. Currently my crawler scrapes all these URLs, then scrapes each URL to get the details of the item, and checks whether the initial seed URL has a next page. If it has, it goes to the next page and does the same. I am storing these items in an SQL database.
Let's say 3 days later there are new items on the seed URL and I want to scrape only the new items. Possible solutions are:
At the time of scraping each item, I check the database to see if the URL has already been scraped. If it has, I simply ask Scrapy to stop crawling further.
Problem: I don't want to query the database each time. My database is going to be really large and it will eventually make crawling super slow.
I try to store the last scraped URL and pass it in at the beginning, and the moment the crawler finds this last_scraped_url it simply stops.
Not possible, given the asynchronous nature of crawling: URLs are not scraped in the same order they are received from the seed URLs.
(I tried every method to make it run in order, but that's not possible at all.)
Can anybody suggest any other ideas? I have been struggling with it for the past three days.
Appreciate your replies.
|
Scrapy Case : Incremental Update of Items
| 0.379949 | 0 | 1 | 1,365 |
15,530,866 |
2013-03-20T17:39:00.000
| 2 | 0 | 0 | 1 |
google-app-engine,python-2.7
| 15,538,956 | 5 | false | 1 | 0 |
I updated the GAE SDK from 1.7.5 to 1.7.6, and since then I started getting this error. I reverted back to 1.7.5 and the application is functioning normally :)
| 3 | 2 | 0 |
This is my first program in GAE. I'm working with the latest GAE SDK and Python 2.7 on Windows XP 32-bit. All was working fine, but to my surprise I'm now getting the following error:
2013-03-20 22:48:26 Running command: "['C:\\Python27\\pythonw.exe', 'C:\\Program Files\\Google\\google_appengine\\dev_appserver.py', '--skip_sdk_update_check=yes', '--port=9080', '--admin_port=8001', u'B:\\AppEngg\\huddle-up']"
INFO 2013-03-20 22:48:27,236 devappserver2.py:401] Skipping SDK update check.
WARNING 2013-03-20 22:48:27,253 api_server.py:328] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-03-20 22:48:27,283 api_server.py:152] Starting API server at: http://localhost:1127
INFO 2013-03-20 22:48:27,299 api_server.py:517] Applying all pending transactions and saving the datastore
INFO 2013-03-20 22:48:27,299 api_server.py:520] Saving search indexes
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 194, in
_run_file(__file__, globals())
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 190, in _run_file
execfile(script_path, globals_)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 545, in
main()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 538, in main
dev_server.start(options)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 513, in start
self._dispatcher.start(apis.port, request_data)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\dispatcher.py", line 95, in start
servr.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\server.py", line 827, in start
self._watcher.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\win32_file_watcher.py", line 74, in start
raise ctypes.WinError()
WindowsError: [Error 6] The handle is invalid.
2013-03-20 22:48:27 (Process exited with code 1)
I Googled it, but it seems that most of the people getting this error have something wrong in their PATH config or are on x64 Windows.
|
Windows Error in Google App Engine
| 0.07983 | 0 | 0 | 1,412 |
15,530,866 |
2013-03-20T17:39:00.000
| 0 | 0 | 0 | 1 |
google-app-engine,python-2.7
| 25,406,846 | 5 | false | 1 | 0 |
I got exactly the same problem with SDK 1.99 on Windows 8.
I was running a test script (.yaml and .go files) from Google Go's own working directory.
Moving my code to its own subdirectory solved the problem.
| 3 | 2 | 0 |
This is my first program in GAE. I'm working with the latest GAE SDK and Python 2.7 on Windows XP 32-bit. All was working fine, but to my surprise I'm now getting the following error:
2013-03-20 22:48:26 Running command: "['C:\\Python27\\pythonw.exe', 'C:\\Program Files\\Google\\google_appengine\\dev_appserver.py', '--skip_sdk_update_check=yes', '--port=9080', '--admin_port=8001', u'B:\\AppEngg\\huddle-up']"
INFO 2013-03-20 22:48:27,236 devappserver2.py:401] Skipping SDK update check.
WARNING 2013-03-20 22:48:27,253 api_server.py:328] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-03-20 22:48:27,283 api_server.py:152] Starting API server at: http://localhost:1127
INFO 2013-03-20 22:48:27,299 api_server.py:517] Applying all pending transactions and saving the datastore
INFO 2013-03-20 22:48:27,299 api_server.py:520] Saving search indexes
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 194, in
_run_file(__file__, globals())
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 190, in _run_file
execfile(script_path, globals_)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 545, in
main()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 538, in main
dev_server.start(options)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 513, in start
self._dispatcher.start(apis.port, request_data)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\dispatcher.py", line 95, in start
servr.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\server.py", line 827, in start
self._watcher.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\win32_file_watcher.py", line 74, in start
raise ctypes.WinError()
WindowsError: [Error 6] The handle is invalid.
2013-03-20 22:48:27 (Process exited with code 1)
I Googled it, but it seems that most of the people getting this error have something wrong in their PATH config or are on x64 Windows.
|
Windows Error in Google App Engine
| 0 | 0 | 0 | 1,412 |
15,530,866 |
2013-03-20T17:39:00.000
| 1 | 0 | 0 | 1 |
google-app-engine,python-2.7
| 15,578,142 | 5 | false | 1 | 0 |
I had the same issue with GAE SDK 1.7.6, downgrading to 1.7.5 solved it for me too.
| 3 | 2 | 0 |
This is my first program in GAE. I'm working with the latest GAE SDK and Python 2.7 on Windows XP 32-bit. All was working fine, but to my surprise I'm now getting the following error:
2013-03-20 22:48:26 Running command: "['C:\\Python27\\pythonw.exe', 'C:\\Program Files\\Google\\google_appengine\\dev_appserver.py', '--skip_sdk_update_check=yes', '--port=9080', '--admin_port=8001', u'B:\\AppEngg\\huddle-up']"
INFO 2013-03-20 22:48:27,236 devappserver2.py:401] Skipping SDK update check.
WARNING 2013-03-20 22:48:27,253 api_server.py:328] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-03-20 22:48:27,283 api_server.py:152] Starting API server at: http://localhost:1127
INFO 2013-03-20 22:48:27,299 api_server.py:517] Applying all pending transactions and saving the datastore
INFO 2013-03-20 22:48:27,299 api_server.py:520] Saving search indexes
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 194, in
_run_file(__file__, globals())
File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 190, in _run_file
execfile(script_path, globals_)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 545, in
main()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 538, in main
dev_server.start(options)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 513, in start
self._dispatcher.start(apis.port, request_data)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\dispatcher.py", line 95, in start
servr.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\server.py", line 827, in start
self._watcher.start()
File "C:\Program Files\Google\google_appengine\google\appengine\tools\devappserver2\win32_file_watcher.py", line 74, in start
raise ctypes.WinError()
WindowsError: [Error 6] The handle is invalid.
2013-03-20 22:48:27 (Process exited with code 1)
I Googled it, but it seems that most of the people getting this error have something wrong in their PATH config or are on x64 Windows.
|
Windows Error in Google App Engine
| 0.039979 | 0 | 0 | 1,412 |
15,534,297 |
2013-03-20T20:42:00.000
| 1 | 1 | 0 | 0 |
python,service,web
| 15,534,482 | 1 | false | 1 | 0 |
I'm no expert on this topic, but what I would do is set up a database in between (on the Synology rather than on the Raspberry Pi). Let's call your Synology the server, and the Raspberry Pi a sensor client.
I would host a database on the server and push the data from the sensor client. The data would be pushed either using an API through web services, or through something lower level if you need it faster (some code needed on the server side for this); or, since the client computer is under your control, it could write directly into the database.
Your concrete choice between a database, a web service or another API depends on:
How much data has to be pushed?
How fast does the data have to be pushed?
How much do you trust your network?
How much do you trust your sensor client?
I've never used it, but I suggest you use SQLAlchemy for connecting to the database (from both sides).
If in some use case the remote server can be down, the sensor client should store the sensor data in a local file and push it when the server comes back online.
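If you go the web-service route, a minimal push loop on the sensor client could look like the sketch below. The endpoint URL and the read_sensors() helper are placeholders for your own server address and your existing sensor-reading code:

```python
import json
import time
import urllib2  # Python 2; on Python 3 use urllib.request

SERVER_URL = "http://diskstation.local:8000/sensors"  # placeholder endpoint

def push_reading(reading):
    # POST one JSON-encoded sensor reading to the server.
    req = urllib2.Request(SERVER_URL, json.dumps(reading),
                          {"Content-Type": "application/json"})
    urllib2.urlopen(req)

while True:
    push_reading(read_sensors())  # read_sensors() wraps your existing code
    time.sleep(60)                # push once a minute
```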
| 1 | 2 | 0 |
I'm looking for ideas on how to display sensor data on a webpage hosted by a Synology DiskStation, where the data comes from sensors connected to a Raspberry Pi. This is going to be implemented in Python.
I have put the sensors together and connected them to the Raspberry Pi. I also have the Python code, so I can read the sensors. I have a webpage up and running on the DiskStation using Python. But how do I get the data from the Raspberry Pi to the DiskStation? Currently the reading is only done when the webpage is displayed.
I guess some kind of web service on the Raspberry Pi? I have looked at Pyro4, but it doesn't look like it can be installed on the DiskStation. And I would prefer not to install a whole web server framework on the Raspberry Pi.
Do you have a suggestion ?
|
Move data from Raspberry pi to a synology diskstation to present in a webpage
| 0.197375 | 0 | 0 | 699 |
15,535,205 |
2013-03-20T21:35:00.000
| 16 | 0 | 1 | 0 |
python
| 15,535,251 | 4 | false | 0 | 0 |
It means "all elements of the sequence but the last". In the context of f.readline()[:-1] it means "I'm pretty sure that line ends with a newline and I want to strip it".
| 2 | 51 | 0 |
Working on a python assignment and was curious as to what [:-1] means in the context of the following code: instructions = f.readline()[:-1]
Have searched on here on S.O. and on Google but to no avail. Would love an explanation!
|
What does [:-1] mean/do in python?
| 1 | 0 | 0 | 149,801 |
15,535,205 |
2013-03-20T21:35:00.000
| 3 | 0 | 1 | 0 |
python
| 15,535,244 | 4 | false | 0 | 0 |
It gets all the elements from the list (or characters from a string) but the last element.
: represents slicing from the start of the list
-1 refers to the last element of the list (as the end index, it is excluded)
| 2 | 51 | 0 |
Working on a python assignment and was curious as to what [:-1] means in the context of the following code: instructions = f.readline()[:-1]
Have searched on here on S.O. and on Google but to no avail. Would love an explanation!
|
What does [:-1] mean/do in python?
| 0.148885 | 0 | 0 | 149,801 |
15,538,867 |
2013-03-21T03:17:00.000
| 0 | 1 | 0 | 1 |
python,eclipse,configuration,pydev,interpreter
| 18,466,358 | 1 | false | 0 | 0 |
I've faced the same problem. The solution was reinstalling Aptana (or Eclipse; also tested on Kepler 4.2.x).
The source of the problem was the path to your Eclipse/Aptana installation. I think the trouble here is caused by diacritic symbols in your name 'Andres Diaz', judging by your username here (my case was a Cyrillic username and the user home folder 'Михаил' on Windows 8). The path to your Python interpreter does not matter here.
The cure is: move/reinstall your Eclipse to a folder whose path does not contain any non-ASCII character. In my case I moved Aptana Studio from C:\Users\Михаил\Aptana3 to C:\Aptana3 and (maybe it's not necessary, I don't know) its workspace also to the root C:\ folder.
P.S. I think this can be useful for those who face the same problem, since I was not able to find any answer on how to solve it, just a lot of similar questions.
P.P.S. Sorry for my English; languages are not my strongest skill.
I just recently installed the PyDev 2.6 plugin for Eclipse (I run Eclipse SDK 4.2.1), and when I try to configure the Python interpreter with the path C:\Python27\python.exe, it gives me an "Error info on interpreter" message, and in the error log it says:
com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: invalid byte 2 of the 3-byte UTF-8 sequence
I have read other similar questions on this website about the same issue, but the solutions do not suit my situation, as I don't have any Unicode characters in my path. I run Python 2.7.3. I would really appreciate any help or advice on how to solve this issue, as I would really love to start coding Python in Eclipse soon. Cheers.
|
Error when configuring Python interpreter for PyDev in Eclipse
| 0 | 0 | 0 | 362 |
15,539,096 |
2013-03-21T03:44:00.000
| 0 | 0 | 1 | 0 |
python,ubuntu,odt
| 25,458,631 | 2 | false | 0 | 0 |
An easy way is to just rename foo.odt to foo.zip and then extract it. The extracted directory contains many files, including a Pictures folder.
However, I think it's better to change its type to docx and then do the processing on the docx (extract it), because that extracts images with better names (image1, image2, ...).
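Since an .odt file is just a zip archive, the rename step isn't even needed; Python's zipfile module can read it directly (document.odt is a placeholder name):

```python
import zipfile

# An .odt file is a zip archive; the embedded images live under Pictures/
# and the document text lives in content.xml.
with zipfile.ZipFile("document.odt") as odt:
    for name in odt.namelist():
        if name.startswith("Pictures/"):
            odt.extract(name, "extracted")  # writes extracted/Pictures/...
```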
| 1 | 1 | 0 |
How can I extract the tables, text and pictures in an ODT (OpenDocument Text) file and output them to another ODT file, using Python on Ubuntu?
|
How can I extract the tables, text and the pictures in ODT(OpenDocumentText) format using Python?
| 0 | 0 | 0 | 894 |
15,540,360 |
2013-03-21T05:48:00.000
| 1 | 0 | 0 | 0 |
python,google-app-engine,google-drive-api,google-oauth,oauth2client
| 15,593,836 | 1 | true | 1 | 0 |
Sorry, you can't do that. You will need to re-authorize the user. I agree that it would be nice to incrementally add scopes, but you will still need to show an authorization page, so I think you won't gain much doing that.
| 1 | 1 | 0 |
Tried adding additional scopes using oauth2client's OAuth2DecoratorFromClientSecrets via the scopes parameter.
I believe users of an application would prefer to gradually expand privileges; as its needed, and as trust forms...
What is the best way to add/expand/remove scopes when the application has an existing grant? Revoke and reauthorize?
|
How should an application add/remove scopes to an existing grant?
| 1.2 | 0 | 0 | 140 |
15,540,640 |
2013-03-21T06:10:00.000
| 0 | 0 | 1 | 0 |
python-2.7,virtualenv,scientific-computing
| 15,540,786 | 2 | false | 0 | 0 |
There's no performance overhead to using virtualenv. All it's doing is using different locations in the filesystem.
The only "overhead" is the time it takes to set it up. You'd need to install each package in your virtualenv (numpy, pandas, etc.)
| 2 | 5 | 1 |
I have received several recommendations to use virtualenv to clean up my Python modules. I am concerned because it seems too good to be true. Has anyone found downsides related to performance or memory issues when working with multicore settings, StarCluster, numpy, scikit-learn, pandas, or the IPython notebook?
|
Are there any downsides to using virtualenv for scientific python and machine learning?
| 0 | 0 | 0 | 1,022 |
15,540,640 |
2013-03-21T06:10:00.000
| 3 | 0 | 1 | 0 |
python-2.7,virtualenv,scientific-computing
| 15,540,795 | 2 | true | 0 | 0 |
Virtualenv is the best and easiest way to keep some sort of order when it comes to dependencies. Python is really behind Ruby (bundler!) when it comes to dealing with installing and keeping track of modules. The best tool you have is virtualenv.
So I suggest you create a virtualenv directory for each of your applications, put together a file where you list all the 'pip install' commands you need to build the environment and ensure that you have a clean repeatable process for creating this environment.
I think that the nature of the application makes little difference. There should not be any performance issue since all that virtualenv does is to load libraries from a specific path rather than load them from the directory where they are saved by default.
In any case (this may be completely irrelevant), but if performance is an issue, then perhaps you ought to be looking at a compiled language. Most likely though, any performance bottlenecks could be improved with better coding.
| 2 | 5 | 1 |
I have received several recommendations to use virtualenv to clean up my Python modules. I am concerned because it seems too good to be true. Has anyone found downsides related to performance or memory issues when working with multicore settings, StarCluster, numpy, scikit-learn, pandas, or the IPython notebook?
|
Are there any downsides to using virtualenv for scientific python and machine learning?
| 1.2 | 0 | 0 | 1,022 |
15,543,783 |
2013-03-21T09:34:00.000
| 3 | 1 | 0 | 0 |
c++,python,c,ctypes,embedding
| 15,544,287 | 2 | false | 0 | 1 |
It depends; there's no definitive answer. If you write bad code in C++ it could be even slower than well-written Python code.
Assuming that you can write good quality C++ code, you can expect speedups up to 20x in the performance critical parts.
As the other answer says, NumPy is a good option for numerical bottlenecks (if you think in matrix operations rather than loops!); and SciPy comes with weave, which allows you to embed inline C++ and other goodies.
| 1 | 11 | 0 |
Does embedding C++ code in Python using ctypes, Boost.Python, etc. make your Python application faster?
Suppose I am making an application in PyGTK and I need some functions that have to be fast. So if I use C++ for certain tasks in my application, will it be beneficial?
And what are the other options to make Python code faster?
|
Does embedding c++ code in python make your python application faster?
| 0.291313 | 0 | 0 | 624 |
15,550,178 |
2013-03-21T14:23:00.000
| 2 | 0 | 1 | 0 |
google-app-engine,python-2.7,google-cloud-datastore,app-engine-ndb
| 15,550,220 | 2 | false | 0 | 0 |
I would say that DateProperty is for when you want the date part only (i.e. dd/mm/yyyy or whatever format), and DateTimeProperty is useful when you want a full date and time representation (i.e. dd/mm/yyyy 00:00:00 or whatever format).
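A minimal model sketch showing both, each with the auto_now_add=True intent from the question (the model and field names are illustrative):

```python
from google.appengine.ext import ndb

class Event(ndb.Model):
    # Stores a date only, e.g. 2013-03-21.
    day = ndb.DateProperty(auto_now_add=True)
    # Stores a full timestamp, e.g. 2013-03-21 14:23:00.
    created = ndb.DateTimeProperty(auto_now_add=True)
```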
| 1 | 0 | 0 |
In layman's terms, what's the difference between ndb.DateProperty and ndb.DateTimeProperty? When would I use which? For example, my intent is to use either with the param auto_now_add=True.
|
difference between DateProperty and DateTimeProperty
| 0.197375 | 0 | 0 | 734 |
15,551,092 |
2013-03-21T15:02:00.000
| 0 | 0 | 0 | 0 |
python,django
| 15,551,326 | 1 | true | 1 | 0 |
Your form will be more complicated than a simple ModelForm.
Maybe you could subclass ModelForm and add a pair of DateFields (a start and an end) for each DateTimeField in the model...
As for building the query, that will take some work too.
Consider hardcoding the extra date fields if you only want to filter a single model.
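A minimal sketch of the hardcoded variant: a plain form with a start and an end widget, and a helper that narrows a queryset by range. It assumes the model has a date field named created (adjust the name to your model):

```python
from django import forms

class DateRangeForm(forms.Form):
    # Two widgets for one model field: the start and the end of the range.
    start_date = forms.DateField(required=False)
    end_date = forms.DateField(required=False)

def filter_by_range(form, queryset):
    # Assumes the model has a 'created' date field; adjust to your model.
    if form.cleaned_data.get("start_date"):
        queryset = queryset.filter(created__gte=form.cleaned_data["start_date"])
    if form.cleaned_data.get("end_date"):
        queryset = queryset.filter(created__lte=form.cleaned_data["end_date"])
    return queryset
```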
| 1 | 0 | 0 |
I want to filter some models in a list.
I know I can use a ModelForm and filter in my view.
But my question is: how can I take advantage of ModelForms to filter a date field by range?
Also, I wish my form would generate two date widgets for my date field: one for the start date and another for the end date.
|
Can I use ModelForm to filter a Date by range?
| 1.2 | 0 | 0 | 57 |
15,552,658 |
2013-03-21T16:12:00.000
| 2 | 0 | 1 | 0 |
python,python-3.x,format,locale
| 15,552,828 | 1 | true | 0 | 0 |
Unfortunately, the locale module gets and sets global state; this is intrinsic to the design of locale.
The various workarounds include wrapping locale changes in locks or delegating the formatting to a subprocess that runs as a service.
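A minimal sketch of the lock-based workaround: serialize every setlocale call, format, then restore the previous locale. The locale name passed in must be installed on the system for this to succeed:

```python
import locale
import threading

_locale_lock = threading.Lock()

def format_number(value, loc):
    # setlocale mutates process-wide state, so serialize access to it.
    with _locale_lock:
        saved = locale.setlocale(locale.LC_NUMERIC)  # remember current setting
        try:
            locale.setlocale(locale.LC_NUMERIC, loc)
            return locale.format_string("%.2f", value, grouping=True)
        finally:
            locale.setlocale(locale.LC_NUMERIC, saved)

print(format_number(1234567.89, "de_DE.UTF-8"))  # '1.234.567,89' if installed
```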
| 1 | 2 | 0 |
Is there a correct way to format numbers by locale (getting the correct decimal separator) without modifying global state? This is for text generation server-side, so setlocale is not a good idea, and Babel does not yet support Python 3
|
How to format numbers by (several different) locales in Python 3
| 1.2 | 0 | 0 | 177 |
15,552,788 |
2013-03-21T16:18:00.000
| 1 | 0 | 0 | 0 |
python,xmpp,chat
| 15,570,548 | 1 | true | 0 | 0 |
In case it helps anyone, I figured it out. You just need to specify an id attribute in each chat message. The ids can be random, but each message should have a different one. I assume Google Talk was 'blocking' repeated messages because, without an id, it couldn't tell whether the messages were distinct or just repeats.
| 1 | 1 | 0 |
I've created an XMPP chat client in Python. Chat generally works, except that Google Talk seems to 'block' some messages sent from my chat client to a user on Google Talk. For example, if I send the same one-word message 'hi' multiple times to a Google Talk user, it is only displayed once. However, when sending that same sequence of messages to a user on iChat or Adium, all of the 'hi's are shown. Sometimes Google Talk also doesn't display the first 1-2 incoming messages from my client.
Otherwise, chatting works. My client never has any trouble with incoming chats. Thoughts?
|
XMPP Chat Client - Some IM messages to Google Talk don't get received
| 1.2 | 0 | 1 | 277 |
15,553,685 |
2013-03-21T17:02:00.000
| 2 | 0 | 1 | 0 |
python,linux,select,epoll
| 15,554,684 | 1 | true | 0 | 0 |
Have the object write to one side of a pipe(2), and pass the other end to epoll.register(). Obviously the object can't run in the same thread and at the same time as epoll.poll(), but that still leaves other valid use cases.
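A minimal single-process sketch of the mechanism: the pipe's read end is a real file descriptor, so epoll can watch it, and the 'object' signals readiness by writing a byte (Linux only, since select.epoll is Linux-specific):

```python
import os
import select

r, w = os.pipe()           # the read end is a real fd that epoll understands
ep = select.epoll()
ep.register(r, select.EPOLLIN)

os.write(w, b"x")          # the 'object' signals readiness by writing a byte

for fd, events in ep.poll(1):
    if fd == r:
        os.read(r, 1)      # drain the byte, then handle the object's event
        print("object is ready")
```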
| 1 | 3 | 0 |
Is it possible to create an object that will support epoll()?
I assume the epoll_* system calls' dependence on a compatible system fd makes it difficult, if not impossible, to create an object with a compatible 'pseudo fd', but I thought I'd see if I was wrong. (It happens :p)
|
Creating epoll()able objects
| 1.2 | 0 | 0 | 192 |
15,555,468 |
2013-03-21T18:35:00.000
| 1 | 1 | 0 | 1 |
python,nose
| 15,581,683 | 2 | true | 0 | 0 |
It looks like the expected way to handle this in nose is to use the logger framework within your tests, and then control the level to be captured with the --logging-level option.
By default nose will capture all logs made by the tests, but a filter can be specified using --logging-filter config parameter.
| 1 | 4 | 0 |
I've got some tests which log to stdout, and I'd like to change the log level in my test script based on the verbosity that nose is running on.
How can I access the verbosity of the running nose instance, from within one of the tests being run?
|
Accessing nose verbosity programmatically
| 1.2 | 0 | 0 | 930 |
15,557,790 |
2013-03-21T20:49:00.000
| 1 | 0 | 1 | 0 |
python
| 15,557,820 | 3 | false | 0 | 0 |
Use negative indexing.
seq[-1] is the last element of a sequence, seq[-3:] gives you the last three, and in general seq[-n:] gives you the last n. One caveat: for n == 0, seq[-0:] returns the whole sequence, not an empty one.
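For example:

```python
seq = [1, 2, 3, 4, 5]
n = 3
print(seq[-n:])               # [3, 4, 5]
print(seq[-n:] if n else [])  # guards n == 0, where seq[-0:] is the whole list
```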
| 1 | 0 | 0 |
e.g., for a sequence of unknown length, what is the most "Pythonic" way of getting the last n elements?
Obviously I could calculate the starting and ending indices. Is there anything slicker?
|
Is there a Python idiom for the last n elements of a sequence?
| 0.066568 | 0 | 0 | 306 |
15,562,446 |
2013-03-22T03:55:00.000
| 0 | 0 | 0 | 0 |
python,flask,flask-extensions
| 63,349,977 | 16 | false | 1 | 0 |
Google Cloud VM instance + Flask App
I hosted my Flask application on a Google Cloud Platform virtual machine.
I started the app using python main.py, but the problem was that Ctrl+C did not stop the server.
The command sudo netstat -tulnp | grep :5000 finds the process listening on port 5000; killing that PID stops the server.
My Flask app runs on port 5000 by default.
Note: My VM instance is running on Linux 9.
This works for that setup; I haven't tested it on other platforms.
Feel free to update or comment if it works for other versions too.
| 3 | 154 | 0 |
I want to implement a command which can stop a Flask application using flask-script.
I have searched for a solution for a while. Because the framework doesn't provide an app.stop() API, I am curious how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.
|
How to stop flask application without using ctrl-c
| 0 | 0 | 0 | 204,744 |
15,562,446 |
2013-03-22T03:55:00.000
| -6 | 0 | 0 | 0 |
python,flask,flask-extensions
| 56,710,034 | 16 | false | 1 | 0 |
For Windows, it is quite easy to stop/kill the flask server:
Go to Task Manager
Find flask.exe
Select and End process
| 3 | 154 | 0 |
I want to implement a command which can stop a Flask application using flask-script.
I have searched for a solution for a while. Because the framework doesn't provide an app.stop() API, I am curious how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.
|
How to stop flask application without using ctrl-c
| -1 | 0 | 0 | 204,744 |
15,562,446 |
2013-03-22T03:55:00.000
| 6 | 0 | 0 | 0 |
python,flask,flask-extensions
| 59,755,698 | 16 | false | 1 | 0 |
If you're working on the CLI and only have one flask app/process running (or rather, you just want to kill any flask process running on your system), you can kill it with:
kill $(pgrep -f flask)
| 3 | 154 | 0 |
I want to implement a command which can stop a Flask application using flask-script.
I have searched for a solution for a while. Because the framework doesn't provide an app.stop() API, I am curious how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.
|
How to stop flask application without using ctrl-c
| 1 | 0 | 0 | 204,744 |
15,564,876 |
2013-03-22T07:20:00.000
| -2 | 0 | 1 | 0 |
python,regex
| 15,564,998 | 2 | false | 0 | 0 |
I think you are looking for a way to extract proper nouns from sentences. You should look at NLTK for a proper approach. A regex can only help with limited, regular patterns. On the other hand, you seem to be asking for the ability to parse human language, which is non-trivial (for computers).
| 1 | 5 | 0 |
I want to look for a phrase, match up to a few words following it, but stop early if I find another specific phrase.
For example, I want to match up to three words following "going to the", but stop the matching process if I encounter "to try". So for example "going to the luna park" will result with "luna park"; "going to the capital city of Peru" will result with "capital city of" and "going to the moon to try some cheesecake" will result with "moon".
Can it be done with a single, simple regular expression (preferably in Python)? I've tried all the combinations I could think of, but failed miserably :).
|
Regular Expressions: Match up to a word or a maximum number of words
| -0.197375 | 0 | 0 | 309 |
15,566,117 |
2013-03-22T08:49:00.000
| 0 | 1 | 1 | 0 |
python,unit-testing,testing,language-agnostic
| 15,566,501 | 8 | false | 0 | 0 |
I can't see an easy way to refactor a test suite, and depending on the extent of your refactoring you're obviously going to have to change the test suite. How big is your test suite?
Refactoring properly takes time and attention to detail (and a lot of Ctrl+C Ctrl+V!). Whenever I've refactored my tests I don't try to find quick ways of doing things, besides find & replace, because there is too much risk involved.
You're best off doing things properly, and manually, albeit slowly, if you want to keep the quality of your tests.
| 6 | 9 | 0 |
I have a test suite for my app. As the test suite grew organically, the tests accumulated a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite doesn't change with the refactoring. How can I test that my tests are invariant under the refactoring?
(I am using Python + unittest, but I guess the answer to this can be language agnostic.)
|
How do I test/refactor my tests?
| 0 | 0 | 0 | 432 |
15,566,117 |
2013-03-22T08:49:00.000
| 2 | 1 | 1 | 0 |
python,unit-testing,testing,language-agnostic
| 15,566,738 | 8 | false | 0 | 0 |
Interesting question - I'm always keen to hear discussions of the type "how do I test the tests?!". And good points from @marksweb above too.
It's always a challenge to check that your tests are actually doing what you want them to do and testing what you intend, but it's good to get this right and do it properly. I always try to follow the rule of thumb that testing should make up 1/3 of the development effort in any project... regardless of the project's time constraints, pressures and problems that inevitably crop up.
If you intend to continue and grow your project have you considered refactoring like you say, but in a way that creates a proper test framework that allows test driven development (TDD) of any future additions of functionality or general expansion of the project?
| 6 | 9 | 0 |
I have a test suite for my app. As the test suite grew organically, the tests accumulated a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite doesn't change with the refactoring. How can I test that my tests are invariant under the refactoring?
(I am using Python + unittest, but I guess the answer to this can be language agnostic.)
|
How do I test/refactor my tests?
| 0.049958 | 0 | 0 | 432 |
15,566,117 |
2013-03-22T08:49:00.000
| 0 | 1 | 1 | 0 |
python,unit-testing,testing,language-agnostic
| 15,566,925 | 8 | false | 0 | 0 |
Don't refactor the test suite.
The purpose of refactoring is to make it easier to maintain the code, not to satisfy some abstract criterion of "code niceness". Test code doesn't need to be nice, it doesn't need to avoid repetition, but it does need to be thorough. Once you have a test that is valid (i.e. it really does test necessary conditions on the code under test), you should never remove it or change it, so test code doesn't need to be easy to maintain en masse.
If you like, you can rewrite the existing tests to be nice, and run the new tests in addition to the old ones. This guarantees that the new combined test suite catches all the errors that the old one did (and maybe some more, as you expand the new code in future).
There are two ways that a test can be deemed invalid -- you realise that it's wrong (i.e. it sometimes fails falsely for correct code under test), or else the interface under test has changed (to remove the API tested, or to permit behaviour that previously was a test failure). In that case you can remove a test from the suite. If you realise that a whole bunch of tests are wrong (because they contain duplicated code that is wrong), then you can remove them all and replace them with a refactored and corrected version. You don't remove tests just because you don't like the style of their source.
To answer your specific question: to test that your new test code is equivalent to the old code, you would have to ensure (a) all the new tests pass on your currently-correct-as-far-as-you-know code base, which is easy, but also (b) the new tests detect all the errors that the old tests detect, which is usually not possible because you don't have on hand a suite of faulty implementations of the code under test.
| 6 | 9 | 0 |
I have a test suite for my app. As the test suite grew organically, the tests accumulated a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite doesn't change with the refactoring. How can I test that my tests are invariant under the refactoring?
(I am using Python + unittest, but I guess the answer to this can be language agnostic.)
|
How do I test/refactor my tests?
| 0 | 0 | 0 | 432 |
15,566,117 |
2013-03-22T08:49:00.000
| 1 | 1 | 1 | 0 |
python,unit-testing,testing,language-agnostic
| 15,567,104 | 8 | false | 0 | 0 |
In theory you could write a test for the test, mocking the actual object under test. But I guess that is just way too much work and not worth it.
So what you are left with are some strategies, that will help, but not make this fail safe.
Work very carefully and slowly. Use the features of you IDEs as much as possible in order to limit the chance of human error.
Work in pairs. A partner looking over your shoulder might just spot the glitch that you missed.
Copy the test, then refactor it. When done, introduce errors in the production code to ensure both tests find the problem in the same (or equivalent) ways. Only then remove the original test.
The last step can be done by tools, although I don't know the python flavors. The keyword to search for is 'mutation testing'.
Having said all that, I'm personally satisfied with steps 1+2.
| 6 | 9 | 0 |
I have a test suite for my app. As the test suite grew organically, the tests accumulated a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite doesn't change with the refactoring. How can I test that my tests are invariant under the refactoring?
(I am using Python + unittest, but I guess the answer to this can be language agnostic.)
|
How do I test/refactor my tests?
| 0.024995 | 0 | 0 | 432 |
15,566,117 |
2013-03-22T08:49:00.000
| 0 | 1 | 1 | 0 |
python,unit-testing,testing,language-agnostic
| 15,587,332 | 8 | false | 0 | 0 |
Test code can be the best low-level documentation of your API, since it does not go out of date as long as the tests pass and are correct. But messy test code doesn't serve that purpose very well, so refactoring is essential.
Also, your tested code might change over time, and so will the tests. If you want that to be smooth, code duplication must be minimized and readability is key.
Tests should be easy to read, always test one thing at once, and make the following explicit:
what are the preconditions?
what is being executed?
what is the expected outcome?
If that is considered, it should be pretty safe to refactor the test code, one step at a time; and, as @Don Ruby mentioned, let your production code be the test for the test. A short example of a test shaped this way follows below.
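A sketch of a test that keeps the precondition, execution and expected outcome each explicit (Cart and Item are hypothetical classes standing in for your own code under test):

```python
import unittest

class TestCart(unittest.TestCase):
    def test_total_applies_discount(self):
        # Precondition: a cart holding one discounted item.
        cart = Cart()                            # hypothetical class under test
        cart.add(Item(price=100, discount=0.1))  # hypothetical item type
        # Execution: the single behaviour being tested.
        total = cart.total()
        # Expected outcome: one explicit assertion.
        self.assertEqual(total, 90)
```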
For many refactorings you can often rely on advanced IDE tooling, provided you watch out for side effects in the extracted code.
Although I agree that refactoring without proper test coverage should be avoided, I think writing tests for your tests is almost absurd in usual contexts.
| 6 | 9 | 0 |
I have a test suite for my app. As the test suite grew organically, the tests accumulated a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite doesn't change with the refactoring. How can I test that my tests are invariant under the refactoring?
(I am using Python + unittest, but I guess the answer to this can be language agnostic.)
|
How do I test/refactor my tests?
| 0 | 0 | 0 | 432 |
15,566,117 |
2013-03-22T08:49:00.000
| 3 | 1 | 1 | 0 |
python,unit-testing,testing,language-agnostic
| 15,649,053 | 8 | false | 0 | 0 |
Coverage.py is your friend.
Move all the tests you want to refactor into "system tests" (or some such tag). Refactor the tests you want (you would be writing unit tests here, right?) and monitor the coverage:
After running your new unit tests but before running the system tests
After running both the new unit tests and the system tests.
In the ideal case, the coverage would be the same or higher, and then you can trash your old system tests.
FWIW, py.test provides a mechanism for easily tagging tests and running only specific tests, and it is compatible with unittest2 tests.
| 6 | 9 | 0 |
I have a test suite for my app. As the test suite grew organically, the tests accumulated a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite doesn't change with the refactoring. How can I test that my tests are invariant under the refactoring?
(I am using Python + unittest, but I guess the answer to this can be language agnostic.)
|
How do I test/refactor my tests?
| 0.07486 | 0 | 0 | 432 |
15,569,151 |
2013-03-22T11:23:00.000
| 2 | 0 | 1 | 0 |
python,debugging,matlab,ipython
| 23,613,002 | 7 | false | 0 | 0 |
I have moved from Matlab and R to Python. I have tried different editors, so I can give you some advice.
1- Spyder is the closest to Matlab, but my impression is that it is not very good. It often crashes when I start long simulations with a lot of data.
If you are new to Python I suggest you use this one for a while and then move to something else.
2- emacs python-mode. Works very well. In my opinion it is difficult to configure and probably not the best choice if you are not familiar with Python.
3- PyCharm. I have just started to use PyCharm and it seems to be very good (it reminds me of RStudio). I do not think it supports an interactive console like the one inside Spyder or emacs.
You can still obtain something similar in debug mode.
4- A lot of people love the IPython notebook, but I think it is not a good choice for long code. It is good if you want something easy to visualize.
| 1 | 9 | 0 |
I'm trying to migrate from Matlab to Python. One of the things that is nice about Matlab is that when debugging I can put a breakpoint in some code and do something to call that code from the command line. Using PyCharm + IPython I haven't found a way to do this in Python. It seems I have to run an entire script in debug mode to do any debugging, rather than being able to do so from a simple command. I suppose I could write a one-line script with the command I'm interested in, but it seems like there should be a better way. What is the Python way of doing this?
|
Debugging with breakpoints from console in Python
| 0.057081 | 0 | 0 | 6,830 |
15,570,452 |
2013-03-22T12:34:00.000
| 1 | 1 | 1 | 1 |
python,phpstorm
| 16,013,655 | 1 | true | 0 | 0 |
Use Settings | File Types | Ignore Files and Folders to exclude directories by name or pattern.
| 1 | 1 | 0 |
I was wondering if anyone knew a code fix for the pstorm Python script so that you could exclude directories from being indexed when you open a directory from the command line.
I know this is not currently a feature in the IDE, but maybe there is a workaround someone knows of.
Thanks
|
Exclude Directories when using Pstorm in PhpStorm
| 1.2 | 0 | 0 | 174 |
15,572,295 |
2013-03-22T14:04:00.000
| 14 | 1 | 0 | 1 |
python,eclipse,pydev
| 15,580,217 | 1 | true | 0 | 0 |
PyDev has a find references with Ctrl+Shift+G (not sure that'd be what you're calling a call hierarchy).
| 1 | 12 | 0 |
Is there a way to get a good call hierarchy in PyDev?
I want to be able to select a function and see in which files it is called and eventually by which other functions. I tried the Hierarchy View in Eclipse by pressing F4, but it does not output what I want.
|
Good Call Hierarchy in Eclipse/PyDev
| 1.2 | 0 | 0 | 4,318 |
15,573,229 |
2013-03-22T14:47:00.000
| 4 | 0 | 1 | 0 |
python,metaprogramming
| 15,573,285 | 1 | true | 0 | 0 |
You don't provide it to type as an argument, the metaclass would subclass type itself, so you would call the metaclass' constructor instead.
| 1 | 3 | 0 |
I would like to create a new type using type(...). How do I provide the metaclass for this type?
|
In python, how would I create a new type from scratch which has a Meta class in it
| 1.2 | 0 | 0 | 75 |
15,575,466 |
2013-03-22T16:35:00.000
| 24 | 0 | 0 | 0 |
python,graph,matplotlib
| 15,578,952 | 1 | true | 0 | 0 |
You can save the images in a vector format so that they will be scalable without quality loss. Such formats are PDF and EPS. Just change the extension to .pdf or .eps and matplotlib will write the correct image format. Remember that LaTeX likes EPS and PDFLaTeX likes PDF images, although most modern LaTeX executables are PDFLaTeX in disguise and convert EPS files on the fly (the same effect as including the epstopdf package in your preamble, which may not perform as well as you'd like).
Alternatively, increase the DPI, a lot. These are the numbers you should keep in mind:
300dpi: plain paper prints
600dpi: professional paper prints. Most commercial office printers reach this in their output.
1200dpi: professional poster/brochure grade quality.
I use these to adapt the quality of PNG figures in conjunction with figure's figsize option, which allows for correctly scaled text and graphics as you improve the quality through dpi.
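A minimal sketch combining both suggestions, vector output plus a high-DPI raster fallback, with figsize controlling proportions:

```python
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(6, 4))    # size in inches; keeps text proportions sane
plt.plot([0, 1, 2], [0, 1, 4])

fig.savefig("figure.pdf")           # vector output: scales without quality loss
fig.savefig("figure.png", dpi=600)  # raster output at professional print quality
```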
| 1 | 18 | 1 |
I am using a python program to produce some data, plotting the data using matplotlib.pyplot and then displaying the figure in a latex file.
I am currently saving the figure as a .png file, but the image quality isn't great. I've tried changing the DPI in matplotlib.pyplot.figure(dpi=200) etc., but this seems to make little difference. I've also tried different image formats, but they all look a little faded and not very sharp.
Has anyone else had this problem?
Any help would be much appreciated
|
How do you improve matplotlib image quality?
| 1.2 | 0 | 0 | 18,876 |
15,575,872 |
2013-03-22T16:54:00.000
| 1 | 0 | 1 | 0 |
python,multithreading,user-interface,parallel-processing,queue
| 15,576,036 | 1 | true | 0 | 1 |
Most GUI systems are event driven and expect all event handling to come from a single thread. This is true of the Windows event system, Android events, Swing, and probably many others. In the case of GUIs, the practical benefit of making all the event-management functions thread-safe is small, while the difficulty is quite large. Most large-scale concurrent systems do combine event-based and thread-based approaches to concurrency, for instance modern browsers. In your case, it's much simpler to just register an update event and have it posted to the event-dispatching thread by your worker processes/threads. This way your GUI remains responsive to other windowing events, as it is only being notified periodically.
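A toolkit-free sketch of the underlying pattern: several worker processes push progress updates into one shared queue, and a single consumer (which in a real application would be the GUI's event-dispatching thread) drains it:

```python
import multiprocessing
import time

def worker(worker_id, queue):
    for pct in range(0, 101, 25):
        queue.put((worker_id, pct))  # each process reports through one queue
        time.sleep(0.1)              # stand-in for real computation

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(i, queue))
             for i in range(4)]
    for p in procs:
        p.start()
    # In a real GUI this loop would run on, or post to, the event thread.
    for _ in range(4 * 5):           # 4 workers x 5 updates each
        worker_id, pct = queue.get()
        print("worker %d: %d%%" % (worker_id, pct))
    for p in procs:
        p.join()
```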
| 1 | 0 | 0 |
My main question: can you have several processes write to one queue in a loop and use that queue to update a GUI?
I have been looking at the posts regarding queues and multiple processes and I was wondering if anyone had any idea if it is possible or beneficial to use combinations of them.
My thought process is this: since all processors are made now with ~8 cores, most programs I make should have the ability to access this power if there is any part of the program that is at all computationally expensive. I would like to have a GUI which displays the progress of several different processes at the same time. I would like each of these processes to use as much of the processors as possible, but they all have to write to the GUI at the same time, so from what I have read, it seems like a queue will work for this.
Is the best way to approach this to have several processes communicate to a queue via a pipe, and have the queue update the GUI?
At the moment I am using PyQt signals and slots, but I feel this is a bad solution for modern times since it only uses one CPU core.
|
Shared queue across multiple processes
| 1.2 | 0 | 0 | 356 |
15,577,185 |
2013-03-22T18:13:00.000
| 3 | 1 | 0 | 0 |
java,python,c,modulo
| 15,577,257 | 4 | true | 0 | 0 |
Python's % operator follows the sign of the divisor: for a positive divisor the result always lies in [0, divisor), regardless of the sign of the dividend. C's and Java's % instead follow the sign of the dividend, which is why -1 % 26 gives -1 there but 25 in Python.
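A quick demonstration in Python, where math.fmod provides the C-style behavior for comparison:

```python
import math

print(-1 % 26)            # 25   (result takes the sign of the divisor)
print(math.fmod(-1, 26))  # -1.0 (C-style remainder, sign of the dividend)
```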
| 1 | 4 | 0 |
Why does the modulo operator not work as I expected in C and Java?
|
Why -1%26 = -1 in Java and C, and why it is 25 in Python?
| 1.2 | 0 | 0 | 963 |
15,578,942 |
2013-03-22T20:02:00.000
| 6 | 1 | 0 | 0 |
python,selenium
| 15,579,077 | 3 | false | 0 | 0 |
Selenium isn't actually a testing framework, it's a browser driver. You don't write tests in Selenium any more than you write GUI apps in OpenGL. You usually write tests in a unit testing framework like unittest, or something like nose or lettuce built on top of it. Your tests then use Selenium to interact with a browser, as they use a database API to access the DB or an HTTP library to communicate with web services.
| 2 | 0 | 0 |
I wonder, what is the advantage of using Selenium for automation if at the end of the test it emits no report on whether the test passed or failed?
|
Selenium Webdriver Testing - Python
| 1 | 0 | 1 | 590 |
15,578,942 |
2013-03-22T20:02:00.000
| 0 | 1 | 0 | 0 |
python,selenium
| 15,594,935 | 3 | false | 0 | 0 |
It is up to the discretion of the user what to do with Selenium WebDriver automation and how to report the test results. Selenium WebDriver gives you the power to control your web browser and to automate your web application tests.
Just as in any other automation tool, where you have to program the conditions that check your pass or fail criteria, in Selenium this also has to be programmed. It is totally up to the programmer how to report the results and which template to follow. You will have to write your own code to format and store the test results.
| 2 | 0 | 0 |
I wonder, what is the advantage of using Selenium for automation if at the end of the test it emits no report on whether the test passed or failed?
|
Selenium Webdriver Testing - Python
| 0 | 0 | 1 | 590 |
15,579,545 |
2013-03-22T20:45:00.000
| 0 | 0 | 0 | 0 |
python,webapp2,temporary-files
| 15,587,348 | 1 | false | 1 | 0 |
I would suggest using a client-created UUID on the server; when the server already has a file stored under that name, it sends back an error (forcing a retry) to the client. Under most circumstances, the UUID will be completely unique and won't collide with anything already stored. If it does, the client can pick a new name and try again. To make this slightly better, wait a random number of milliseconds between retries to reduce the likelihood of repeated collisions.
That'd be my approach to this specific, insecure, short-term storage problem.
As for removal, I'd leave the server responsible for removing files at intervals, basically checking whether any file is more than 5 minutes old and removing it. As long as in-progress downloads keep the file open, this shouldn't interrupt them.
in a background thread as necessary if you expect to be running a long time
at startup (which will require persisting these to disk)
at shutdown (doesn't require persisting to disk)
However, all of these mechanisms are prone to leaving unnecessary files on the server if you crash or lose the persistent information, so I'd still recommend making the deletion the responsibility of the server.
| 1 | 0 | 0 |
What's the best way to name and store a generated file on a server, such that if the user requests the file in the next 5 minutes or so, you return it, otherwise, return an error code? I am using Python and Webapp2 (although this would work with any WSGI server).
|
Store file on server for a short time period
| 0 | 0 | 0 | 79 |
15,582,962 |
2013-03-23T03:19:00.000
| 1 | 0 | 1 | 0 |
python,linux,rest,concurrency,parallel-processing
| 15,583,041 | 1 | true | 0 | 0 |
No, multiple python processes executed separately will not share threads or any other state.
The most likely case is that 300MB/s is either the fastest your client can support, or the fastest your server can support.
300MB/s is extremely fast, so much so that I wonder if you haven't confused megabytes with megabits.
| 1 | 2 | 0 |
I have a Python script that sends 4GB worth of data to a server in 10MB chunks using REST API. No matter how many of these scripts I invoke concurrently, I get exactly the same overall throughput client-side (10Gb network, server class system):
1 invocation = 300MB/s
2 invocations = 300MB/s
4 invocations = 300MB/s
8 invocations = 300MB/s
At first I though it was some kind of disk read limitation, but I modified the script so that it does not require hard drive access and uses minimal memory and I still get the exact same throughput. CPU and memory usage during execution is minimal.
Researching further, I read that the Python interpreter is single threaded. That is fine (and makes sense I guess), but is it possible that only one instance of the Python interpreter is invoked at a time, despite multiple Python scripts being invoked concurrently?
|
Does only one Python interpreter execute multiple concurrent scripts?
| 1.2 | 0 | 0 | 371 |
15,591,618 |
2013-03-23T20:19:00.000
| 0 | 0 | 0 | 0 |
python,wysiwyg
| 15,591,769 | 3 | false | 0 | 1 |
See Glade, particularly in use with the libglade Python bindings.
| 1 | 0 | 0 |
I have been searching for months now and am growing quite frustrated.
I just love Python.
So after doing a lot of console-based stuff I wanted to do some graphical UIs as well.
I am aware of most of the frameworks (wxPython, Glade, Tk, etc.).
But: I do not want to write the code for the GUI itself by hand! Declaring every element by hand, thinking about grids and doing trial and error to find out just how many pixels you have to move an object to get it in the right place... well, let's say that just sounds like the 1990s to me, and it is no fun at all.
So to put it plainly and simply, what I am looking for is a solution that allows me to design a GUI graphically (WYSIWYG) and have event-based linking to Python code.
Almost all major languages have that: for C/C++ there are certainly the most IDEs/tools that can do that. For Java there is NetBeans with Swing (an example of what I want; it would be ideal if that UI designer in NetBeans could spit out Jython code, but no: Python is supported, but not UI design). Even Mono/Visual Basic etc. have tools like that.
So why the hell is there nothing for Python?
P.S. And please, no comments like "If you are a real programmer you do it by hand to get cleaner code". If I want something very specific I edit it by hand, but designing a standard UI by hand is a waste of time.
|
Is there really no event-based WYSIWYG GUI builder for Python/Jython etc.
| 0 | 0 | 0 | 5,155 |
15,592,613 |
2013-03-23T22:04:00.000
| 1 | 0 | 0 | 0 |
python,gtk
| 15,596,366 | 1 | true | 0 | 1 |
Your editor is most likely saving the source file in another encoding, such as Latin-1 or Windows-1252, where GTK expects UTF-8. Try replacing "après" with u"apr\u00e8s".encode("utf-8"). If that makes it work, the problem lies there.
To correctly fix the problem, you need to:
declare the encoding to Python with a # -*- coding: utf-8 -*- comment at the top of the file
make sure your editor is saving the file in the declared encoding. If necessary, use a hex editor to verify this.
use Unicode string literals for non-ASCII strings, i.e. u"après" instead of "après". Where unicode strings are not accepted, use u"après".encode("utf-8"). PyGTK generally accepts Unicode strings, so explicit encoding to UTF-8 should not be necessary.
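Putting those three pieces together, the fixed source file might look like this (a minimal sketch; everything beyond the literal itself is assumed scaffolding):

# -*- coding: utf-8 -*-
import gtk

# A Unicode literal; PyGTK accepts unicode strings and handles the
# conversion to UTF-8 itself.
menu_item = gtk.MenuItem(u"après")

# Where an API insists on a byte string, encode explicitly rather than
# relying on whatever encoding the editor saved the file in.
label_bytes = u"après".encode("utf-8")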
| 1 | 0 | 0 |
The Python code menu_item = gtk.MenuItem("après") gives a warning ("Gtk warning: Invalid input string") and the menu item is not shown. What should I add or change to have the menu item displayed?
|
Gtk gives a warning and won't show my menu item
| 1.2 | 0 | 0 | 460 |
15,592,980 |
2013-03-23T22:45:00.000
| 0 | 0 | 0 | 0 |
python,netezza
| 15,643,468 | 3 | false | 0 | 0 |
You need to get the nzcli installed on the machine that you want to run nzload from; your sysadmin should be able to put it on your Unix/Linux application server. There's a detailed process to setting it all up, caching the passwords, etc. - the sysadmin should be able to do that too.
Once it is set up, you can create NZ control files to point to your data files and execute a load. The Netezza Data Loading guide has detailed instructions on how to do all of this (it can be obtained through IBM).
You can do it through Aginity as well if you have the CREATE EXTERNAL TABLE privilege: you can do an INSERT INTO ... FROM EXTERNAL ... REMOTESOURCE ODBC to load the file over an ODBC connection.
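As a rough sketch of that last approach driven from Python via pyodbc; the DSN, credentials, table name, file path, and the exact USING options are all assumptions - check the Netezza Data Loading guide for your version:

import pyodbc

# Placeholder DSN and credentials.
conn = pyodbc.connect("DSN=NZSQL;UID=admin;PWD=secret")
cur = conn.cursor()
# Stream the local CSV through the ODBC connection as an external table.
cur.execute("""
    INSERT INTO target_table
    SELECT * FROM EXTERNAL '/path/to/huge_file.csv'
    USING (DELIMITER ',' REMOTESOURCE 'ODBC')
""")
conn.commit()
conn.close()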
| 2 | 2 | 1 |
I have a huge CSV file which contains millions of records, and I want to load it into a Netezza DB using a Python script. I have tried a simple insert query, but it is very, very slow.
Can someone point me to an example Python script, or give me some idea of how I can do this?
Thank you
|
How to use NZ Loader (Netezza Loader) through Python Script?
| 0 | 1 | 0 | 4,583 |
15,592,980 |
2013-03-23T22:45:00.000
| 1 | 0 | 0 | 0 |
python,netezza
| 17,522,337 | 3 | false | 0 | 0 |
You can use nz_load4 to load the data. It is a support utility found in /nz/support/contrib/bin.
The syntax is the same as for nzload. By default, nz_load4 loads the data using 4 threads, and you can go up to 32 threads via its thread option.
For more details, including the exact flag names, run nz_load4 -h.
It will create log files based on the number of threads used.
| 2 | 2 | 1 |
I have a huge CSV file which contains millions of records, and I want to load it into a Netezza DB using a Python script. I have tried a simple insert query, but it is very, very slow.
Can someone point me to an example Python script, or give me some idea of how I can do this?
Thank you
|
How to use NZ Loader (Netezza Loader) through Python Script?
| 0.066568 | 1 | 0 | 4,583 |
15,593,572 |
2013-03-24T00:09:00.000
| 0 | 0 | 0 | 0 |
python,django,postgresql,postgis,geodjango
| 15,593,621 | 3 | false | 1 | 0 |
You're probably right: PostGIS/GeoDjango is likely overkill, but making your own Django app would not be much trouble for this simple task. Django offers a lot in terms of templating, etc., and with the built-in admin it is pretty easy to enter single records. And GeoDjango is part of contrib, so you can always use it later if your project needs it.
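If you do go the custom route for the nearest-point part, a brute-force haversine pass is often fast enough for a few thousand candidates. A plain-Python sketch; the point layout and function names are assumptions:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) pairs, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest(origin, points, k=5):
    # Return the k closest points as (distance_km, point) pairs.
    lat, lon = origin
    return sorted(
        (haversine_km(lat, lon, p[0], p[1]), p) for p in points
    )[:k]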
| 1 | 0 | 0 |
For my app, I need to determine the nearest points to some other point, and I am looking for a simple but relatively fast (in terms of performance) solution. I was thinking about using PostGIS and GeoDjango, but I think my app is not really that "geographic" (I still don't really know what that means, though). The geographic part (around 5 percent of the whole) is that I need to keep coordinates of objects (people and places), and then there is this task of finding the nearest points. To put it simply, PostGIS and GeoDjango seem to be overkill here.
I was also thinking of django-haystack with Solr or Elasticsearch, because I am going to need strong text search capabilities, and these engines also offer "geographic" features. But I am not sure about that either, as I am worried about core DB <-> search engine synchronisation and the hardware requirements for these engines. At the moment I am more inclined to use PostgreSQL trigrams and some custom way to solve the "find near points" problem. Is there a good one?
|
Django + postgreSQL: find near points
| 0 | 1 | 0 | 689 |
15,593,813 |
2013-03-24T00:41:00.000
| 1 | 0 | 0 | 0 |
python,loading,gif
| 15,593,845 | 1 | true | 0 | 1 |
Simply put, you cannot display multi-frame GIFs in Pygame unless you use an extra library. Instead, explode your GIF into its individual frames. You will have to do everything manually, as Pygame does not handle frame timing, which is necessary for animated GIFs.
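A common way to explode the GIF at runtime is to lean on PIL/Pillow for frame extraction and hand the frames to Pygame. A sketch; the file name, window size, and frame rate are assumptions:

import pygame
from PIL import Image, ImageSequence

pygame.init()
screen = pygame.display.set_mode((200, 200))
clock = pygame.time.Clock()

# Explode the GIF into a list of pygame surfaces up front.
gif = Image.open("animation.gif")
frames = [
    pygame.image.fromstring(f.convert("RGBA").tobytes(), f.size, "RGBA")
    for f in ImageSequence.Iterator(gif)
]

i = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.blit(frames[i], (0, 0))
    pygame.display.flip()
    i = (i + 1) % len(frames)  # advance one frame per tick
    clock.tick(10)  # crude fixed rate; real GIFs store per-frame delays
pygame.quit()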
| 1 | 2 | 0 |
Does anyone know a simple way to load a .gif image into Python using Pygame? I tried loading a .gif image using pygame.image.load(path), which worked, although only the first frame loaded. Ever since, I have had to use a loop to display multiple images in sequence.
|
Loading a multiframe/moving .gif image into python using pygame
| 1.2 | 0 | 0 | 944 |
15,608,229 |
2013-03-25T05:27:00.000
| 36 | 0 | 1 | 0 |
python,python-3.x
| 15,608,332 | 3 | false | 0 | 0 |
There are a couple of things to understand here. One is the difference between buffered I/O and unbuffered I/O. The concept is fairly simple: with buffered I/O, there is an internal buffer, and output is only "flushed" when that buffer is full (or when some other event happens, such as reaching a newline). With unbuffered I/O, whenever a call is made to output something, it is written immediately, one character at a time.
Most I/O functions fall into the buffered category, mainly for performance reasons: it's a lot faster to write chunks at a time (all I/O functions eventually get down to syscalls of some description, which are expensive.)
flush lets you manually choose when you want this internal buffer to be written - a call to flush will write any characters in the buffer. Generally, this isn't needed, because the stream will handle this itself. However, there may be situations when you want to make sure something is output before you continue - this is where you'd use a call to flush().
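A quick way to see the difference on a typical line-buffered terminal: without flush=True the dots tend to appear all at once after five seconds; with it, one per second.

import time

for _ in range(5):
    # end="" suppresses the newline, so nothing would force the buffer out;
    # flush=True pushes each dot to the terminal immediately.
    print(".", end="", flush=True)
    time.sleep(1)
print()  # final newline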
| 1 | 53 | 0 |
There is an optional boolean argument to the print() function, flush, which defaults to False.
The documentation says it is to forcibly flush the stream.
I don't understand the concept of flushing. What is flushing here? What is flushing of stream?
|
What does print()'s `flush` do?
| 1 | 0 | 0 | 30,447 |