Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
13,852,646 |
2012-12-13T03:51:00.000
| 2 | 1 | 1 | 0 |
python,arrays,parallel-processing,fortran,f2py
| 13,858,423 | 3 | false | 0 | 0 |
An alternative approach to VladimirF's suggestion could be to set up the two parts as a client-server construct, where your Python part talks to the Fortran part using sockets. Though this comes with the burden of implementing some protocol for the interaction, it has the advantage that you get a clean separation and can even go on running the two parts on different machines, with the interaction going over the network.
In fact, with this approach you could even do the embarrassingly parallel part by spawning as many instances of the Fortran application as needed and feeding them all with different data.
| 1 | 5 | 1 |
I have a python script I hope to do roughly this:
calls some particle positions into an array
runs algorithm over all 512^3 positions to distribute them to an NxNxN matrix
feed that matrix back to python
use plotting in python to visualise matrix (i.e. mayavi)
First I have to write it in serial, but ideally I want to parallelize step 2 to speed up computation. What tools/strategy might get me started? I know Python and Fortran well but not much about how to connect the two for my particular problem. At the moment I am doing everything in Fortran and then loading my Python program - I want to do it all at once. I've heard of f2py but I want to get experienced people's opinions before I go down one particular rabbit hole. Thanks
Edit: The thing I want to make parallel is 'embarrassingly parallel' in that it is just a loop over N particles, and I want to get through that loop as quickly as possible.
|
I want Python as the front end and Fortran as the back end. I also want to make the Fortran part parallel - best strategy?
| 0.132549 | 0 | 0 | 929 |
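A minimal sketch of the client-server idea from the answer above, on the Python side only; the host, port and length-prefixed framing are illustrative assumptions, not part of any existing API, and the Fortran side would have to implement the matching protocol.

```python
import socket
import struct

HOST, PORT = "127.0.0.1", 50007   # assumed address of the Fortran server

def send_positions(positions):
    """Send a flat sequence of doubles and return the raw reply bytes."""
    payload = struct.pack("<%dd" % len(positions), *positions)
    with socket.create_connection((HOST, PORT)) as sock:
        # length-prefixed framing so the server knows how many bytes to read
        sock.sendall(struct.pack("<I", len(payload)) + payload)
        reply_len = struct.unpack("<I", sock.recv(4))[0]
        chunks, received = [], 0
        while received < reply_len:
            chunk = sock.recv(min(4096, reply_len - received))
            if not chunk:
                break
            chunks.append(chunk)
            received += len(chunk)
    return b"".join(chunks)
```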
13,853,053 |
2012-12-13T04:39:00.000
| 8 | 1 | 1 | 0 |
python,c,compilation
| 13,853,083 | 2 | false | 0 | 0 |
Byte code is not natural to the CPU, so it needs interpretation (by CPU-native code called an interpreter). The advantage of byte code is that it enables optimizations and pre-computations, and it saves space. A C compiler produces machine code, and machine code does not need interpretation; it is native to the CPU.
| 2 | 8 | 0 |
I know this is probably a very obvious answer and that I'm exposing myself to less-than-helpful snarky comments, but I don't know the answer so here goes.
If Python compiles to bytecode at runtime, is it just that initial compiling step that takes longer? If that's the case wouldn't that just be a small upfront cost in the code (ie if the code is running over a long period of time, do the differences between C and python diminish?)
|
What makes C faster than Python?
| 1 | 0 | 0 | 7,934 |
13,853,053 |
2012-12-13T04:39:00.000
| 18 | 1 | 1 | 0 |
python,c,compilation
| 13,853,280 | 2 | true | 0 | 0 |
It's not merely the fact that Python code is interpreted which makes it slower, although that definitely sets a limit to how fast you can get.
If the bytecode-centric perspective were right, then to make Python code as fast as C all you'd have to do is replace the interpreter loop with direct calls to the functions, eliminating any bytecode, and compile the resulting code. But it doesn't work like that. You don't have to take my word for it, either: you can test it for yourself. Cython converts Python code to C, but a typical Python function converted and then compiled doesn't show C-level speed. All you have to do is look at some typical C code thus produced to see why.
The real challenge is multiple dispatch (or whatever the right jargon is -- I can't keep it all straight), by which I mean the fact that whereas a+b if a and b are both known to be integers or floats can compile down to one op in C, in Python you have to do a lot more to compute a+b (get the objects that the names are bound to, go via __add__, etc.)
This is why to make Cython reach C speeds you have to specify the types in the critical path; this is how Shedskin makes Python code fast using (Cartesian product) type inference to get C++ out of it; and how PyPy can be fast -- the JIT can pay attention to how the code is behaving and specialize on things like types. Each approach eliminates dynamism, whether at compile time or at runtime, so that it can generate code which knows what it's doing.
| 2 | 8 | 0 |
I know this is probably a very obvious answer and that I'm exposing myself to less-than-helpful snarky comments, but I don't know the answer so here goes.
If Python compiles to bytecode at runtime, is it just that initial compiling step that takes longer? If that's the case wouldn't that just be a small upfront cost in the code (ie if the code is running over a long period of time, do the differences between C and python diminish?)
|
What makes C faster than Python?
| 1.2 | 0 | 0 | 7,934 |
13,855,390 |
2012-12-13T07:56:00.000
| 1 | 0 | 0 | 0 |
python,sockets,python-2.7,sync,dropbox
| 30,990,211 | 2 | false | 0 | 0 |
You may consider packaging the file into a torrent, and transferring it that way. Torrents have LOTS of error recovery. Many large companies, for example, Blizzard, use torrents to deliver content to their users.
You'll still need a way to transfer the torrent info, of course.
See the Python package libtorrent and the server software called opentracker.
I've also done file transfer with sockets, which is fine if the internet connection is uninterrupted, and you don't want to parallel stream.
| 1 | 1 | 0 |
I wrote a Python app which transfers files via sockets to a server. It always works, but my question is: is that a good way to transfer files from a desktop client to a server via sockets? How, for example, do the Google Drive or Dropbox desktop clients synchronize files (as far as I know, for already existing files the GD client sends only changes, like rsync), and what about new files?
|
Python file transfer
| 0.099668 | 0 | 1 | 1,435 |
13,858,776 |
2012-12-13T11:22:00.000
| 1 | 0 | 1 | 0 |
python,debugging,pdb
| 13,961,662 | 4 | false | 0 | 0 |
Using pdb, any function call can be stepped into. For any other statement, pdb can print the values of the relevant names in the line. What additional functionality are you looking for that isn't covered?
If you're trying to 'step into' things like a list comprehension, that won't work from a pure Python perspective because it's a single opcode. At some point for every expression you'll need to tell your students 'and this is where Python goes into the C implementation and evaluates this...'.
| 1 | 7 | 0 |
I want to build a visual debugger, which helps programming students to see how expression evaluation takes place (how subexpressions get evaluated and "replaced" by their values, something like expression evaluation visualizer in Excel).
Looks like you can't step through this process with Python's pdb, as its finest step granularity is line of code. Is it somehow possible to step through Python bytecode? Any other ideas how to achieve this goal?
EDIT: I need a lightweight solution that can be built on top of CPython standard library.
|
How to step through Python expression evaluation process?
| 0.049958 | 0 | 0 | 1,143 |
13,859,124 |
2012-12-13T11:44:00.000
| 0 | 0 | 0 | 0 |
python,html,parsing
| 18,350,504 | 3 | true | 1 | 0 |
No, at the moment there is no such HTML parser, and every parser has its own limitations.
| 1 | 1 | 0 |
I want to parse HTML code in Python and have already tried Beautiful Soup and pyquery. The problem is that those parsers modify the original code, e.g. they insert some tags and so on. Is there any parser out there that does not change the code?
I tried HTMLParser but no success! :(
It doesn't modify the code and just tells me where tags are placed. But it fails in parsing web pages like mail.live.com
Any idea how to parse a web page just like a browser?
|
python html parser which doesn't modify actual markup?
| 1.2 | 0 | 1 | 273 |
13,862,300 |
2012-12-13T14:48:00.000
| 2 | 0 | 1 | 0 |
python,list
| 13,862,492 | 1 | true | 0 | 0 |
Returning tuples is a fairly standard way to give back immutable lists. Another option would be to return an immutable "view" of the list. I don't think the stdlib currently contains such a class, so you'd probably have to roll your own, but it would be fairly straight forward. Basically, the class would contain a single private instance variable (the underlying list); and would implement only the read operations (__getitem__, __len__, etc) and would delegate them to the instance variable (wrapping any child items in "view" objects as necessary).
| 1 | 3 | 0 |
I'm working on my Python skills and am trying out creating some classes. Since Python passes objects by reference, any list returned by a class method can be modified by the caller, which would then reflect back in the class (as I have it). What would be the correct way of avoiding this? I was thinking of either converting the nested list to nested tuples or doing a deep copy.
|
Preventing modification of returned lists in python
| 1.2 | 0 | 0 | 118 |
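A minimal sketch of the read-only "view" class described in the answer above; the class name ListView is made up for illustration.

```python
from collections.abc import Sequence   # "collections" on Python 2

class ListView(Sequence):
    """Read-only wrapper around a list: len(), indexing and iteration work,
    but no mutating methods are exposed."""
    def __init__(self, data):
        self._data = data                     # the underlying (private) list

    def __len__(self):
        return len(self._data)

    def __getitem__(self, index):
        value = self._data[index]
        # wrap nested lists so callers cannot mutate them either
        return ListView(value) if isinstance(value, list) else value

view = ListView([1, [2, 3]])
print(view[1][0])     # 2
# view[0] = 99        # TypeError: item assignment is not supported
```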
13,862,562 |
2012-12-13T15:01:00.000
| 7 | 0 | 0 | 1 |
python,py2exe,pyinstaller,cx-freeze
| 13,862,767 | 1 | true | 0 | 0 |
I already had a solution when writing the question - I'm putting it here because it's probable that other people will find it here easily.
The solution: create an empty __init__.py in Lib/site-packages/google of your Python installation directory, and compile it somehow (import google in an interactive Python session, for example).
When there is an __init__.pyc in the package directory, the freezing tools start to work.
| 1 | 4 | 0 |
When trying to freeze python (2.7) application with any of cx_freeze, bbfreeze, pyinstaller or py2exe, the frozen application cannot find google.protobuf.
In logs of the freezing process there is usually something like 'cannot find google'. So the google package is not found and not packaged, although it's in python's site-packages and the non-frozen version works just fine.
|
Google protocol buffers not found when trying to freeze python app
| 1.2 | 0 | 0 | 1,539 |
13,867,475 |
2012-12-13T19:58:00.000
| 8 | 0 | 0 | 0 |
python,user-interface,wxpython,tkinter,pyside
| 13,867,619 | 1 | true | 0 | 1 |
This is just a really generalized high-level explanation about "GUI toolkits"...
Let's say you decide to use the Qt framework. This framework is written in C++. There are two different Python bindings that can be used, allowing one to write a GUI application in Python against the same API as the C++ version.
The python bindings provide a wrapping around calls into the C++ code. PyQt4 for instance uses sip, while PySide uses shiboken. These are just language wrapping tools that take specifications for how to map between the C++ objects and their intended python interface.
Ok, so you start using PyQt... All of the code you write has to pass through the python interpreter. Some of it may be pure python. Some of it will call into C++ libs to create things like your widgets. In Qt, there will be a C++ pointer associated with the python instance counterpart.
It is the C++ layer that is then communicating with the window manager of your platform, to turn platform-independent API calls into something platform specific, like how to exactly draw a button or menu.
Whether you create a console only or GUI based python application, it all goes through the python interpreter to interpret your python code. Something must interpret the python language for you.
| 1 | 3 | 0 |
I got curious and have been reading about GUI development using Python for the past hour. After reading documentation of wxPython, PyQt, Nokia's Python bindings for Qt along with Tkinter a question came to my mind.
When I create a console application with Python, it runs using the embedded Python interpreter (which I assume is usually, if not always, CPython in my case).
So I was wondering, what's the case with these "widget toolkits"?
How is the Python code executed and what interprets it (or executed it)?
Which part of my Python code is interpreted using the Python
interpreter?
Or does the Python code get lexically analysed and then parsed by the widget
toolkit, which then interprets and executes it (or compiles it during build)?
I am looking forward to understanding what goes on in the background, compared with the (somewhat simpler to understand) interpretation of plain Python applications by the Python interpreter.
Thank you.
PS. To whichever genius thinks that this question deserves to be closed;
A lot of people wonder about the internals of external libraries and systems, especially those which are not as simple as they look. There currently isn't any question explaining this on SE.
|
How is Python interpreted with wxPython, PyQt, PySide or Tkinter?
| 1.2 | 0 | 0 | 1,387 |
13,867,676 |
2012-12-13T20:11:00.000
| 5 | 0 | 1 | 0 |
python,python-2.7
| 13,867,810 | 3 | false | 0 | 0 |
Circular imports are a "code smell," and often (but not always) indicate that some refactoring would be appropriate. E.g., if A.x uses B.y and B.y uses A.z, then you might consider moving A.z into its own module.
If you do think you need circular imports, then I'd generally recommend importing the module and referring to objects with fully qualified names (i.e, import A and use A.x rather than from A import x).
| 2 | 9 | 0 |
One way is to use import x, without using the "from" keyword. So then you refer to things with their namespace everywhere.
Is there any other way? Like doing something along the lines of C++ include guards (#ifndef __b__ / #define __b__)?
|
Avoiding circular (cyclic) imports in Python?
| 0.321513 | 0 | 0 | 12,783 |
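A tiny illustration of the "import the module and use qualified names" advice from the answer above, with two hypothetical modules that need each other.

```python
# a.py
import b                # whole-module import instead of "from b import y"

def x():
    return b.y() + 1    # b.y is looked up at call time, after both modules loaded

# b.py
import a

def y():
    return 2

def uses_a():
    return a.x()        # works because nothing is accessed at import time
```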
13,867,676 |
2012-12-13T20:11:00.000
| 1 | 0 | 1 | 0 |
python,python-2.7
| 13,868,011 | 3 | false | 0 | 0 |
If you're trying to do from A import *, the answer is very simple: Don't do that. You're usually supposed to do import A and refer to the qualified names.
For quick&dirty scripts, and interactive sessions, that's a perfectly reasonable thing to do—but in such cases, you won't run into circular imports.
There are some cases where it makes sense to do import * in real code. For example, if you want to hide a module structure that's complex, or that you generate dynamically, or that changes frequently between versions, or if you're wrapping up someone else's package that's too deeply nested, import * may make sense from a "wrapper module" or a top-level package module. But in that case, nothing you import will be importing you.
In fact, I'm having a hard time imagining any case where import * is warranted and circular dependencies are even a possibility.
If you're doing from A import foo, there are ways around that (e.g., import A then foo = A.foo). But you probably don't want to do that. Again, consider whether you really need to bring foo into your namespace—qualified names are a feature, not a problem to be worked around.
If you're doing the from A import foo just for convenience in implementing your functions, because A is actually long_package_name.really_long_module_name and your code is unreadable because of all those calls to long_package_name.really_long_module_name.long_class_name.class_method_that_puts_me_over_80_characters, remember that you can always import long_package_name.really_long_module_name as P and then use P for you qualified calls.
(Also, remember that with any from done for implementation convenience, you probably want to make sure to specify a __all__ to make sure the imported names don't appear to be part of your namespace if someone does an import * on you from an interactive session.)
Also, as others have pointed out, most, but not all, cases of circular dependencies, are a symptom of bad design, and refactoring your modules in a sensible way will fix it. And in the rare cases where you really do need to bring the names into your namespace, and a circular set of modules is actually the best design, some artificial refactoring may still be a better choice than foo = A.foo.
| 2 | 9 | 0 |
One way is to use import x, without using the "from" keyword. So then you refer to things with their namespace everywhere.
Is there any other way? Like doing something along the lines of C++ include guards (#ifndef __b__ / #define __b__)?
|
Avoiding circular (cyclic) imports in Python?
| 0.066568 | 0 | 0 | 12,783 |
13,869,132 |
2012-12-13T21:53:00.000
| 0 | 0 | 1 | 0 |
python
| 13,869,243 | 2 | false | 0 | 0 |
Make an empty list. Then start an endless loop where you read the user input and append to the list. The break depends on another user input ("yes" or "no").
| 1 | 0 | 0 |
What can I write that will add on whatever the user inputs (raw_input / input) to an already existing list? Also, this will have to continue on. For example, if the user adds one item to a list, it will ask if they would like to add another. And if the answer is yes, then it will add another item to the list.
Thanks for any help!
|
PYTHON - Add on to list
| 0 | 0 | 0 | 215 |
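A minimal sketch of the loop the answer above describes, using raw_input as mentioned in the question (input on Python 3).

```python
items = []                                   # the already existing list
while True:
    items.append(raw_input("Enter an item: "))
    answer = raw_input("Add another? (yes/no): ").strip().lower()
    if answer != "yes":
        break
print(items)
```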
13,870,894 |
2012-12-14T00:25:00.000
| 1 | 0 | 0 | 0 |
python,django,django-signals
| 13,892,073 | 1 | true | 1 | 0 |
I would suggest you use django-celery with RabbitMQ. You can put the notification logic into Celery tasks and have your view start the task queue. Have a look... I hope it will be helpful to you.
| 1 | 2 | 0 |
It's hard to explain what I am trying to achieve. Please have the patience to go through this, and let me know if you have any questions.
Say I have a Django project with two applications which I would like to have them coupled loosely. One of the application is 'Jobs' and other is 'Notifications'.
Now I want to create notifications when the Job instance is updated. So, I was thinking of using Django Signals. But some of the reservations I have are:
If I use the built-in signals like post_save, I could validate the conditions on the job instance and generate a notification (which is good). But the problem comes when, in the same view logic, I call the save method on the job instance multiple times. This would generate notifications multiple times. Alternatively, if I use home-made signals I would be required to call them manually, which is not good for loose coupling.
Moreover, the signals are not asynchronous, so I would have to wait for the notification generation to complete before I can proceed.
Can anyone please suggest a good implementation strategy using signals? One solution I was looking into was Python threading, which seems to take care of the asynchronous problem. But are there any other consequences of using threading?
|
Django Signal vs Python Threading
| 1.2 | 0 | 0 | 769 |
13,873,119 |
2012-12-14T05:20:00.000
| 0 | 0 | 0 | 0 |
python,django,amazon-s3
| 13,892,252 | 2 | false | 1 | 0 |
No, it's not possible to create a bucket for each user, as Amazon allows only 100 buckets per account. So unless you are sure you will not have more than 100 users, it would be a very bad idea.
The ideal solution is to track each user's storage in your Django app itself, in the database. I guess you would be using the boto S3 library for storing the files; it returns the byte size after each upload, and you can store that.
There is also another way out: you could create many folders inside a bucket, with each folder specific to a user. But the best approach is still to track the storage usage in your app.
| 1 | 0 | 0 |
I'm building a file hosting app that will store all client files within a folder on an S3 bucket. I then want to track the amount of usage on S3 recursively per top folder to charge back the cost of storage and bandwidth to each corresponding client.
Front-end is django but the solution can be python for obvious reasons.
Is it better to create a bucket per client programmatically?
If I do go with the approach of creating a bucket per client, is it then possible to get the cost of cloudfront exposure of the bucket if enabled?
|
How can I track s3 bucket folder usage with python?
| 0 | 1 | 1 | 818 |
13,873,190 |
2012-12-14T05:27:00.000
| 0 | 0 | 0 | 0 |
python,user-interface,python-2.7,tkinter,gui-designer
| 13,875,940 | 2 | false | 0 | 1 |
Are you sure you have to create a class for a Dialog? Isn't Tkinter built-in dialog class ok?
You could provide an iterator of Dialogs to a next() function, which every Next button would call when clicked. Did you mean something like that?
| 1 | 2 | 0 |
I want to write a program that asks the user a series of questions in different dialog boxes. Each box shows up one at a time, and goes to the next box if the button next is clicked. My question is do I create a class for each Dialog and just call the next class once the button next is clicked? Or is there a more elegant solution to this?
|
How to design GUI sequence in Tkinter with Python 2.7
| 0 | 0 | 0 | 1,732 |
13,873,719 |
2012-12-14T06:23:00.000
| 5 | 0 | 0 | 0 |
python,selenium,scrapy,selenium-webdriver,phantomjs
| 13,873,743 | 2 | false | 1 | 0 |
Use HtmlUnitDriver. To make it fail-proof you would have to make some changes accordingly, but it will work without a browser.
| 1 | 3 | 0 |
I have a use case where I need to fill the form in a website but don't have access to API. Currently we are using webdriver along with browser but it gets very heavy and not fool proof as the process is asynchronous. Is there any way I can do it without browser and also make the process synchronous by closely monitoring the pending requests?
Casperjs and htmlunitdriver seems to be some of the best options I have. Can someone explain advantages or disadvantages in terms of maintenance, fail-proof, light weight.
I would need to navigate complex and many different types of webpages. Some of the webpages I would like to navigate are heavily JS driven.
Can Scrapy be used for this purpose?
|
Navigation utility without browser, light weight and fail-proof
| 0.462117 | 0 | 1 | 811 |
13,873,755 |
2012-12-14T06:26:00.000
| 1 | 0 | 0 | 0 |
python,django,web,static
| 13,873,805 | 1 | true | 1 | 0 |
Django requires STATIC_DIR to be an absolute path.
Set a variable like PROJECT_DIR to os.path.dirname(os.path.realpath(__file__)),
then set STATIC_DIR to os.path.join(PROJECT_DIR, 'static').
| 1 | 1 | 0 |
I have searched around and apologize if this is a basic question. I am trying to get my django app to serve static files. If the STATIC_URL is set to the absolute path (ie http://localhost/static) then the files work however if STATIC_URL is relative like /static/ it doesn't pull in any static files.
I would like it to be able to use /static/ for when I move the application to a production server and have a reverse proxy serving the static files.
|
Django absolute url works but relative url does not for static files
| 1.2 | 0 | 0 | 1,375 |
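A sketch of what the answer above describes, written as a settings.py fragment; the answer calls the path setting STATIC_DIR, shown here as Django's stock STATIC_ROOT, so adjust the name to whatever your project actually uses.

```python
# settings.py
import os

PROJECT_DIR = os.path.dirname(os.path.realpath(__file__))

STATIC_URL = '/static/'                             # relative URL used by templates
STATIC_ROOT = os.path.join(PROJECT_DIR, 'static')   # absolute path on disk
```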
13,875,584 |
2012-12-14T09:09:00.000
| 1 | 0 | 0 | 0 |
python,python-3.x,k-means
| 13,875,710 | 2 | false | 0 | 0 |
One way is to sort your list and then run over the elements, comparing each one to the previous one. If they are not equal, add 1 to your "distinct counter". The scan itself is O(n) on top of the sort; for the sort you can use whatever algorithm you prefer, such as quick sort or merge sort, and there is most likely a sorting routine available in the library you use.
Another option is to create a hash table and add all the elements. The number of insertions will be the number of distinct elements, since repeated elements will not be inserted. Each insertion is O(1) on average, so the whole pass is O(n); maybe this is the better solution. Good luck!
Hope this helps,
Dídac Pérez
| 1 | 0 | 1 |
I'm trying to do K-Means Clustering using Kruskal's Minimum Spanning Tree Algorithm. My original design was to run the full-length Kruskal algorithm of the input and produce an MST, after which delete the last k-1 edges (or equivalently k-1 most expensive edges).
Of course this is the same as running Kruskal algorithm and stopping it just before it adds its last k-1 edges.
I want to use the second strategy i.e instead of running the full length Kruskal algorithm, stop it just after the number of clusters so far equals K. I'm using Union-Find data structure and using a list object in this Union-Find data structure.
Each vertex on this graph is represented by its current cluster on this list e.g [1,2,3...] means vertices 1,2,3 are in their distinct independent clusters. If two vertices are joined their corresponding indices on the list data structure are updated to reflect this.
e.g merging vertices 2 and 3 leaves the list data object as [1,2,2,4,5.....]
My strategy is then every time two nodes are merged, count the number of DISTINCT elements in the list and if it equals the number of desired clusters, stop. My worry is that this may not be the most efficient option. Is there a way I could count the number of distinct objects in a list efficiently?
|
Efficient way to find number of distinct elements in a list
| 0.099668 | 0 | 0 | 296 |
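Both counting strategies from the answer above, in a few lines; clusters stands in for the Union-Find list described in the question.

```python
clusters = [1, 2, 2, 4, 5]

# hash-based: build a set and take its size (O(n) on average)
distinct = len(set(clusters))

# sort-based: sort, then count boundaries between unequal neighbours (O(n log n))
s = sorted(clusters)
distinct_sorted = 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)

assert distinct == distinct_sorted == 4
```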
13,875,641 |
2012-12-14T09:13:00.000
| 0 | 0 | 1 | 0 |
python,time,tkinter
| 13,875,712 | 2 | false | 0 | 1 |
At first thought, you can have a Tk config dialog where you can add new rules, edit message-box strings etc. Then you will have a loop (or scheduled callback) in which you will use the time module; from there you can access the system time.
If you want to display Windows message boxes, you can find them in the ctypes library.
| 1 | 3 | 0 |
I am using Windows 7 (32bit).
As a programming exercise, I have to make a reminder using tkinter.
To be more specified :
My main project is to help people who suffer from Alzheimer's disease.
So, one of my targets is to make a reminder for helping them do the necessary activities/things.
For example,
At 14:00 o'clock, create a messagebox (and if it possible play a sound)..saying "It's time to eat".
Then at 18:00 o'clock, create a messagebox (and if it possible play a sound)..saying "It's time for a walk".
etc...
How I can do this?
Is there a special module or some tools in python(or in tkinter) that can help me?
Before, when coding in Visual Basic, I remember I used something like a "Timer", but I can't remember more about it. :P
Thanks in advance.
|
Reminder Using Tkinter
| 0 | 0 | 0 | 292 |
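A minimal sketch of the idea: a Tkinter callback scheduled with after() checks the system clock once a minute and pops a message box at the configured times. The module names are the Python 2 ones to match the question, and the reminder texts come from the question itself.

```python
import time
import Tkinter as tk            # "tkinter" on Python 3
import tkMessageBox             # "tkinter.messagebox" on Python 3

REMINDERS = {"14:00": "It's time to eat", "18:00": "It's time for a walk"}

def check_reminders():
    now = time.strftime("%H:%M")
    if now in REMINDERS:
        tkMessageBox.showinfo("Reminder", REMINDERS[now])
    root.after(60 * 1000, check_reminders)   # check again in one minute

root = tk.Tk()
root.withdraw()                  # no main window needed, only message boxes
check_reminders()
root.mainloop()
```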
13,877,327 |
2012-12-14T10:58:00.000
| 2 | 0 | 0 | 0 |
python,django,amazon-web-services,amazon-s3,django-uploads
| 13,877,675 | 3 | false | 1 | 0 |
Uhm, you need to be more specific with your question, but we're doing the same thing and the workflow is as follows:
1) You get the file handle on file upload from request.FILES and store it somewhere on your local filesystem, so you don't work on the stream -- which is what I would guess is causing your problems
2) You use PIL (or better yet, Pillow) to manipulate the image on the FS, do resizing, thumbnailing, whatever.
3) You use Boto (http://boto.cloudhackers.com/en/latest/) to upload to S3, because Boto takes the handling of AWS out of your hands.
It's quite straightforward and works well
| 2 | 1 | 0 |
I am trying to upload images, then create a thumbnail of each, and then store both in S3. After the file has been uploaded I first upload it to S3 and then try to create the thumbnail, but that doesn't work, as PIL is then not able to recognise the image. And secondly, if I create the thumbnail first, then while uploading the original image I get EOF.
I think Django allows the uploaded files to be read only once... Please kindly tell me a way to do so... Thanks in advance
|
Image upload and Manipulation in Django
| 0.132549 | 0 | 0 | 955 |
13,877,327 |
2012-12-14T10:58:00.000
| 0 | 0 | 0 | 0 |
python,django,amazon-web-services,amazon-s3,django-uploads
| 13,889,477 | 3 | false | 1 | 0 |
I finally figured it out. The problem was that the uploaded file is stored in a stream, so every time I read the file it would reach EOF.
The only and best way out is to seek(0) every time before we read the file.
This is also needed when playing with other files in Django.
| 2 | 1 | 0 |
I am trying to upload images, then create a thumbnail of each, and then store both in S3. After the file has been uploaded I first upload it to S3 and then try to create the thumbnail, but that doesn't work, as PIL is then not able to recognise the image. And secondly, if I create the thumbnail first, then while uploading the original image I get EOF.
I think Django allows the uploaded files to be read only once... Please kindly tell me a way to do so... Thanks in advance
|
Image upload and Manipulation in Django
| 0 | 0 | 0 | 955 |
13,877,727 |
2012-12-14T11:23:00.000
| 0 | 0 | 0 | 0 |
python,tkinter
| 13,877,837 | 2 | false | 0 | 1 |
To run another file, you can import it, or load its source with
source = open("other_script.py", "r").read()
and then use
exec(source)
to run the program inside it.
| 1 | 2 | 0 |
Is it possible to run my program inside Tkinter?
I have a program which fits curves. I want to give it a GUI and am looking for ways to insert it into Tkinter.
I want my program to run after clicking a BUTTON widget. Is there an option in Tkinter to run another .py file?
|
Run my program inside Tkinter program
| 0 | 0 | 0 | 2,372 |
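A minimal sketch of wiring a Tkinter button to run another script with exec, as the answer above suggests; fit.py is a hypothetical file name for the curve-fitting program.

```python
import Tkinter as tk             # "tkinter" on Python 3

def run_fit():
    with open("fit.py") as f:    # read the other program's source
        source = f.read()
    exec(source, {"__name__": "__main__"})   # run it in its own namespace

root = tk.Tk()
tk.Button(root, text="Run curve fit", command=run_fit).pack(padx=20, pady=20)
root.mainloop()
```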
13,879,230 |
2012-12-14T13:02:00.000
| 1 | 0 | 0 | 0 |
python,svg
| 13,879,820 | 2 | false | 0 | 0 |
Try 'Inkscape' (IMO the best SVG editor out there): look at its source code and see how they do it - and possibly you can reuse its libraries (it has an embedded Python scripting engine too) without much rework.
| 1 | 6 | 0 |
I'm looking for a library with Python bindings that can do calculations on SVG paths, such as calculating the length, and finding the coordinates of a point on the paths (ie, say the coordinates of the point 24.4% the length of the path).
Is there something around already?
A C-library would be acceptable as well, as I can easily make my own Python bindings.
|
Library for SVG path calculations
| 0.099668 | 0 | 0 | 3,164 |
13,879,569 |
2012-12-14T13:26:00.000
| 1 | 0 | 1 | 1 |
python,shebang
| 13,879,633 | 4 | false | 0 | 0 |
As you note, they probably both work on linux. However, if someone has installed a newer version of python for their own use, or some requirement makes people keep a particular version in /usr/bin, the env allows the caller to set up their environment so that a different version will be called through env.
Imagine someone trying to see if python 3 works with the scripts. They'll add the python3 interpreter first in their path, but want to keep the default on the system running on 2.x. With a hardcoded path that's not possible.
| 2 | 10 | 0 |
What should the shebang for a Python script look like?
Some people support #!/usr/bin/env python because it can find the Python interpreter intelligently. Others support #!/usr/bin/python, because now in most GNU/Linux distributions python is the default program.
What are the benefits of the two variants?
|
#!/usr/bin/python and #!/usr/bin/env python, which support?
| 0.049958 | 0 | 0 | 9,499 |
13,879,569 |
2012-12-14T13:26:00.000
| 4 | 0 | 1 | 1 |
python,shebang
| 13,879,608 | 4 | false | 0 | 0 |
I use #!/usr/bin/env python as the default install location on OS-X is NOT /usr/bin. This also applies to users who like to customize their environment -- /usr/local/bin is another common place where you might find a python distribution.
That said, it really doesn't matter too much. You can always test the script with whatever python version you want: /usr/bin/strange/path/python myscript.py. Also, when you install a script via setuptools, the shebang seems to get replaced by the sys.executable which installed that script -- I don't know about pip, but I would assume it behaves similarly.
| 2 | 10 | 0 |
What should the shebang for a Python script look like?
Some people support #!/usr/bin/env python because it can find the Python interpreter intelligently. Others support #!/usr/bin/python, because now in most GNU/Linux distributions python is the default program.
What are the benefits of the two variants?
|
#!/usr/bin/python and #!/usr/bin/env python, which support?
| 0.197375 | 0 | 0 | 9,499 |
13,881,533 |
2012-12-14T15:24:00.000
| 4 | 0 | 0 | 0 |
python,sqlite
| 13,881,814 | 2 | false | 0 | 0 |
You can create a class which wraps sqlite3. It takes its .connect() method and maybe others and exposes it to the outside, and then you add your own stuff.
Another option would be subclassing - if that works.
| 1 | 1 | 0 |
How would I extend the sqlite3 module so that if I import Database I can use Database.connect() as an alias to sqlite3.connect(), but also define extra non-standard methods?
|
How do I extend a python module to include extra functionality? (sqlite3)
| 0.379949 | 1 | 0 | 206 |
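A minimal sketch of the wrapping idea from the answer above: a small module that re-exports sqlite3.connect and adds its own helpers. The module and helper names are illustrative.

```python
# database.py - thin wrapper around the standard sqlite3 module
import sqlite3

connect = sqlite3.connect        # Database.connect() becomes an alias

def connect_with_rows(path):
    """Non-standard extra: connect and return rows as sqlite3.Row objects."""
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row
    return conn

# usage:
#   import database as Database
#   conn = Database.connect(":memory:")
```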
13,884,439 |
2012-12-14T18:37:00.000
| 2 | 0 | 1 | 0 |
python,audio
| 13,884,538 | 2 | false | 0 | 0 |
The answer is highly platform dependent and more details are required. Different Operating Systems have different ways of handling Interprocess Communication, or IPC.
If you're using a UNIXlike environment, there are a rich set of IPC primitives to work with. Pipes, SYS V Message Queues, shared memory, sockets, etc. In your case I think it would make sense to use a pipe or a socket, depending on whether the A and B are running in the same process or not.
Update:
In your case, I would use Python's subprocess and/or os modules and a pipe. The idea here is to create calling contexts to the two APIs in processes which share a parent process, where the parent has also created a unidirectional named pipe and passed it to its children. Then, data written to the named pipe in create_recorder will immediately be available for read()ing from the named pipe.
| 1 | 1 | 0 |
Suppose I have two functions drawn from two different APIs, function A and B.
By default, function A outputs audio data to a wav file.
By default, function B takes audio input from a wav file and process it.
Is it possible to stream the data from function A to B? If so, how do I do this? I work on lubuntu if that is relevant.
This is function A I'm thinking about from the PJSUA python API:
create_recorder(self, filename)
Create WAV file recorder.
Keyword arguments
filename -- WAV file name
Return:
WAV recorder ID
And this is function B from the Pocketsphinx Python API
decode_raw(...)
Decode raw audio from a file.
Parameters:
fh (file) - Filehandle to read audio from.
uttid (str) - Identifier to give to this utterance.
maxsamps (int) - Maximum number of samples to read. If not specified or -1, the rest of the file will be read.
update:
When I try to pass the filename of a socket or named pipe, it outputs this error message, seems that the C function that the python bindings use doesn't like anything but .wav files... Why would that be?
pjsua_aud.c .pjsua_recorder_create() error: unable to determine file format for /tmp/t_fifo. Exception: Object: LIb, operation=create(recorder), error=Option/operation is not supported (PJ_ENOTSUP)
I need to use a value returned by create_recorder(), it is an int that is used to get the wav recorder id (which is not passed on directly to decode_raw() but rather passed on to some other function.
|
Redirecting audio output from one function to another function in python
| 0.197375 | 0 | 0 | 1,006 |
13,885,520 |
2012-12-14T20:05:00.000
| 0 | 0 | 0 | 0 |
python
| 13,885,541 | 2 | false | 1 | 0 |
Use the glob module to get a list of *.html files.
| 1 | 1 | 0 |
I have a folder (courses) with sub-folders and a random number of files. I want to run multiple search-and-replaces on those random files. Is it possible to do a wildcard search for .html and have the replaces run on every HTML file?
Search and replaces:
1) "</b>" to "</strong>"
2) "</a>" to "</h>"
3) "<p>" to "</p>"
Also all these replaces have to be run on every file in the folder and sub-folders.
Thank you so much
|
Run multiple find and replaces in Python (on every file in the folder and sub-folder)
| 0 | 0 | 0 | 139 |
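A sketch of the approach; the answer suggests glob, and the version below uses os.walk with fnmatch so the sub-folders are covered too. The replacement pairs are the ones listed in the question.

```python
import os
import fnmatch

REPLACEMENTS = [("</b>", "</strong>"), ("</a>", "</h>"), ("<p>", "</p>")]

for dirpath, dirnames, filenames in os.walk("courses"):
    for name in fnmatch.filter(filenames, "*.html"):
        path = os.path.join(dirpath, name)
        with open(path) as f:
            text = f.read()
        for old, new in REPLACEMENTS:        # apply every replacement in turn
            text = text.replace(old, new)
        with open(path, "w") as f:
            f.write(text)
```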
13,886,168 |
2012-12-14T20:59:00.000
| 4 | 0 | 1 | 0 |
python
| 30,484,534 | 16 | false | 0 | 0 |
The pass statement does nothing. It can be used when a statement is required syntactically but the program requires no action.
| 1 | 418 | 0 |
I am in the process of learning Python and I have reached the section about the pass statement. The guide I'm using defines it as being a null statement that is commonly used as a placeholder.
I still don't fully understand what that means though. What would be a simple/basic situation where the pass statement would be used and why would it be needed?
|
How to use the pass statement
| 0.049958 | 0 | 0 | 307,390 |
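Two typical placeholder uses of pass, for illustration only.

```python
class NotReadyError(Exception):
    pass        # the class body must contain a statement, but nothing else is needed

def todo_later():
    pass        # stub so the module still imports and runs while the body is unwritten
```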
13,889,066 |
2012-12-15T03:39:00.000
| 3 | 0 | 1 | 0 |
python
| 13,889,088 | 5 | false | 0 | 0 |
You can do timings within Python, but if you want to know the overall CPU consumption of your program, that is kind of silly to do. The best thing to do is to just use the GNU time program. It even comes standard in most operating systems.
| 1 | 7 | 0 |
Pretty simple, I'd like to run an external command/program from within a Python script, once it is finished I would also want to know how much CPU time it consumed.
Hard mode: running multiple commands in parallel won't cause inaccuracies in the CPU consumed result.
|
Run an external command and get the amount of CPU it consumed
| 0.119427 | 0 | 0 | 2,139 |
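Besides the GNU time program, the child's CPU time can also be read from Python itself; a sketch using os.times() deltas around a subprocess call (Unix-like systems, Python 3 attribute names). Note that the children counters are aggregated across all waited-for children, which is exactly the parallel-run caveat from the question.

```python
import os
import subprocess

def run_and_measure(cmd):
    """Run cmd, return (exit_code, CPU seconds consumed by the child)."""
    before = os.times()
    code = subprocess.call(cmd)
    after = os.times()
    cpu = (after.children_user - before.children_user) + \
          (after.children_system - before.children_system)
    return code, cpu

print(run_and_measure(["sleep", "1"]))   # CPU time near 0 even though 1s elapses
```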
13,890,935 |
2012-12-15T09:14:00.000
| 3 | 0 | 1 | 0 |
python,time,timezone
| 39,049,225 | 9 | false | 0 | 0 |
There is no such thing as an "epoch" in a specific timezone. The epoch is well-defined as a specific moment in time, namely Jan 1 1970 00:00:00 UTC, so changing the timezone only changes how that moment is displayed, not the moment itself. time.time() returns the number of seconds since that epoch.
| 1 | 654 | 0 |
Does time.time() in the Python time module return the system's time or the time in UTC?
|
Does Python's time.time() return the local or UTC timestamp?
| 0.066568 | 0 | 0 | 1,177,945 |
13,892,113 |
2012-12-15T12:20:00.000
| 6 | 0 | 0 | 0 |
python,rss
| 13,892,148 | 1 | false | 0 | 0 |
RSS is just content in a particular format.
See what Content-Type the server returns for the given URL. However, this may not be specific and a server may not necessarily return the correct header.
Try to parse the content of the URL as RSS and see if it is successful - this is likely the only definitive proof that a given URL is a RSS feed.
| 1 | 0 | 0 |
I am trying to find a way to detect whether a given URL has an RSS feed or not. Any suggestions?
|
given a URL in python, how can i check if URL has RSS feed?
| 1 | 0 | 1 | 812 |
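A sketch of the "try to parse it" approach using the third-party feedparser package; the package choice is an assumption, since the answer above does not name a library.

```python
import feedparser            # pip install feedparser

def looks_like_feed(url):
    parsed = feedparser.parse(url)
    # bozo is set when the document could not be parsed as a feed;
    # a real feed also reports a version string (rss20, atom10, ...)
    return not parsed.bozo and bool(parsed.version)

print(looks_like_feed("http://example.com/feed.xml"))   # hypothetical URL
```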
13,893,486 |
2012-12-15T15:28:00.000
| 1 | 0 | 1 | 0 |
python,database,text,python-2.7
| 13,893,825 | 2 | false | 0 | 0 |
Disclaimer: As always with performance, don't rely on assumptions, but measure.
That being said, here are some considerations:
Whether you use a database or plain text files, the choice of data structure and algorithm may have a significant effect on performance. For example, a brute-force search through a list will be inefficient in either case.
An optimized in-memory data structure is likely faster than an on-disk database.
On the other hand, the database solution may use memory more effectively.
| 1 | 3 | 0 |
Just a query into both personal experience and understanding of limitations etc. If I had, for example, a text file with 100,000 lines (entries) and a database with 100,000 identical entries, each containing one word and no doubles, which one would I be able to process faster and which would consume the least memory?
It is my understanding that I could load the entire text file into memory as a list at the start (it is only about 1 MB). This information is being used to confirm string contents. Every word (delimited by a space) in the string has to exist in the file, or else it gets changed to the most similar entry in the list. In a nutshell, it's a very high-level auto-correct. Sadly, however, I have to reinvent the wheel.
So anyway, my question still stands. Which is my best choice? I'm trying to use the fewest external modules possible, so I'm thinking I might stick with SQLite (it's standard, is it not? Though one more can't hurt) If newline delimited text files are both my fastest and most economical option, is there a specific way I should go about handling them? I want this script to be able to perform at least 100 match operations in a second, if that's computationally possible with a language such as Python.
|
Text or database, speed and resource consumption in python
| 0.099668 | 0 | 0 | 1,388 |
13,895,763 |
2012-12-15T20:11:00.000
| 1 | 0 | 1 | 1 |
python,linux,ubuntu,python-3.x,pyside
| 13,895,878 | 2 | false | 0 | 0 |
I think you should install PySide from its source distribution that has a setup.py, and then run python3.3 setup.py build and sudo python3.3 setup.py install, because if you install via apt, for example, it will use the default interpreter, which is the 3.2 you mentioned.
| 1 | 0 | 0 |
So, to keep it simple: Ubuntu 12.10 has Python 3.2 pre-installed and it is linked to "python3". I downloaded Python 3.3 and its command is "python3.3". However, I installed PySide for python3 from Synaptic. Using "from PySide.QtCore import *" fails on Python 3.3, BUT when I run just "python3" (i.e. 3.2), everything works fine. Synaptic just installed the lib for Python 3.2, which is the default for python3 in Ubuntu. How can I force Synaptic to install modules for Python 3.3?
Thanks
|
Installing python modules for specific version on linux (pySide)
| 0.099668 | 0 | 0 | 1,564 |
13,898,247 |
2012-12-16T03:04:00.000
| 0 | 0 | 1 | 0 |
python,floating-point,precision,solver,sympy
| 13,898,797 | 4 | false | 0 | 0 |
Going with float instead of double should reduce storage by half, and probably speed things up by at least a factor of 2 - moving from double to single precision has benefits when you're not doing anything nonlinear or stateful.
Other parallelization and algorithm optimization techniques may also help, but that depends on the code, so show it to us.
| 2 | 1 | 0 |
The solve functionality in sympy takes some time to come up with the solution. Is it possible to speed it up by reducing the required precision (I don't really need 15 digits after the decimal point!)?
|
speed up sympy solver by reducing the precision?
| 0 | 0 | 0 | 1,406 |
13,898,247 |
2012-12-16T03:04:00.000
| 1 | 0 | 1 | 0 |
python,floating-point,precision,solver,sympy
| 13,901,413 | 4 | false | 0 | 0 |
Note that having 15 printed decimals does not mean that the relative error bound is 10^-15.
I encourage you to analyze the effective precision before switching to single-precision floats.
Using an arbitrary-precision package, as suggested above, is a good way to check how the results are altered: double the number of digits and see how your results vary. Also check the effect of slight variations of your inputs.
| 2 | 1 | 0 |
The solve functionality in sympy takes some time to come up with the solution. Is it possible to speed it up by reducing the required precision (I don't really need 15 digits after the decimal point!)?
|
speed up sympy solver by reducing the precision?
| 0.049958 | 0 | 0 | 1,406 |
13,898,391 |
2012-12-16T03:38:00.000
| 1 | 0 | 0 | 0 |
wxpython
| 13,918,810 | 1 | true | 0 | 1 |
Notebook tabs always start off in the top left for the native widget. The only thing you can change is which side the tabs appear on (i.e. top, left, bottom or right). You cannot control where on the side they appear.
You might be able to take FlatNotebook and hack it a bit to add this functionality since it is written in pure Python versus wx.Notebook which is wrapped C++.
| 1 | 0 | 0 |
How can I have the wxPython notebooks tabs be center on the top?
EXTRA TEXT TO MAKE IT LOOK LONGER EVER THOUGH IT IS A SIMPLE QUESTION.
|
wxPython notebook have tabs be center top?
| 1.2 | 0 | 0 | 106 |
13,899,823 |
2012-12-16T08:46:00.000
| 2 | 0 | 0 | 0 |
python,pip,web2py
| 14,100,013 | 2 | true | 1 | 0 |
I think I can give the answer to my own question: we don't need to install web2py, just download it and run it with python.
| 1 | 5 | 0 |
I tried to install Web2py with pip. The installation completed successfully, but after that I don't know how to start the server. I know there are three apps, 'w2p_clone', 'w2p_apps' and 'w2p_run', but I don't know how to use them. Also, I did not set up a virtualenv for Web2py; however, even without a virtualenv I can start the Web2py server from the source code (like python web2py.py).
I just want to know how to use pip install for Web2py.
Thank you very much.
|
pip install web2py
| 1.2 | 0 | 0 | 2,902 |
13,905,861 |
2012-12-16T21:58:00.000
| 11 | 0 | 0 | 1 |
python,debian,supervisord
| 13,905,927 | 1 | true | 0 | 0 |
To be able to run any subprocess as a different user from what supervisord is running as, you must run supervisord as root.
When you run supervisord as a user other than root, it cannot run subprocesses under another user. This is a UNIX process security restriction.
| 1 | 3 | 0 |
I have been trying to get supervisor running as a non-root user but ran into problems time after time. The more I have read into it, the more it looks like supervisor is meant to be run as root.
I even read somewhere that it is only possible to run subprocesses as their own users under supervisor if supervisor is running as root.
My question is: is it possible to get supervisor to run as non-root and still start subprocesses as non-root users? Secondly, other than creating the user and setting the user in supervisor.conf, is there anything else I have to do?
|
Supervisor as non-root user
| 1.2 | 0 | 0 | 7,256 |
13,905,936 |
2012-12-16T22:06:00.000
| 1 | 0 | 1 | 0 |
python,list,integer,type-conversion
| 50,688,445 | 12 | false | 0 | 0 |
You can use a list comprehension.
First convert the value to a string so you can iterate over it; then each character can be converted back to an integer. For example, with value = 12345:
l = [int(item) for item in str(value)]
| 1 | 61 | 0 |
What is the quickest and cleanest way to convert an integer into a list?
For example, change 132 into [1,3,2] and 23 into [2,3]. I have a variable which is an int, and I want to be able to compare the individual digits so I thought making it into a list would be best, since I can just do int(number[0]), int(number[1]) to easily convert the list element back into int for digit operations.
|
Converting integer to digit list
| 0.016665 | 0 | 0 | 189,811 |
13,905,975 |
2012-12-16T22:09:00.000
| 2 | 0 | 0 | 0 |
python,report,openerp,tryton
| 13,906,107 | 4 | true | 1 | 0 |
in the first table cell, this worked:
[[((case.excluded == False) or removeParentNode('blockTable')) and '']][[case.name]]
although I am still interested in knowing if there's a more logical way than destroying the entire created blockTable, especially since I'll be trying to figure out how not to leave an empty line when removing the parent node 'blockTable'.
| 2 | 2 | 0 |
I've been struggling trying to mimic in openerp a report found in Tryton for the module Health GNU. In their report folder lies a report.odt file very similar to any sxw report found in openerp, with a few exceptions that is. For instance, instead of openERP's:
[[repeatIn(objects,'test')]]
we have an opening and a closing for tag, making the previous example as such:
<FOR EACH="TEST IN OBJECTS"> .... </FOR>
How can I mimic the following in a traditional sxw report:
<for each="case in test.critearea">
<if test="case.excluded==0"> #this is outside the table
...values in table... #table starts here
</if>
<for>
which basically excludes an entire row when matched.
Using familiar syntax such as [[ case.excluded==False ]] didn't work.
|
openerp reporting syntax
| 1.2 | 0 | 0 | 1,361 |
13,905,975 |
2012-12-16T22:09:00.000
| 2 | 0 | 0 | 0 |
python,report,openerp,tryton
| 13,925,567 | 4 | false | 1 | 0 |
You can iterate over a list generated by a function defined in the report's related .py file.
Just look for examples in the addons; there are plenty of them, like:
account/report/account_aged_partner_balance.rml:
[[ repeatIn(get_lines(data['form']), 'partner') ]]
| 2 | 2 | 0 |
I've been struggling trying to mimic in openerp a report found in Tryton for the module Health GNU. In their report folder lies a report.odt file very similar to any sxw report found in openerp, with a few exceptions that is. For instance, instead of openERP's:
[[repeatIn(objects,'test')]]
we have an opening and a closing for tag, making the previous example as such:
<FOR EACH="TEST IN OBJECTS"> .... </FOR>
How can I mimic the following in a traditional sxw report:
<for each="case in test.critearea">
<if test="case.excluded==0"> #this is outside the table
...values in table... #table starts here
</if>
<for>
which basically excludes an entire row when matched.
Using familiar syntax such as [[ case.excluded==False ]] didn't work.
|
openerp reporting syntax
| 0.099668 | 0 | 0 | 1,361 |
13,906,679 |
2012-12-16T23:50:00.000
| 0 | 1 | 1 | 0 |
c++,python,memory,ram
| 13,928,292 | 4 | false | 0 | 0 |
Your problem description is kind of vague and can be read in several different ways.
One way in which I read this is that you have some kind of ASCII representation of a data structure on disk. You read this representation into memory, and then grep through it one or more times looking for things that match a given regular expression.
Speeding this up depends a LOT on the data structure in question.
If you are simply doing line splitting, then maybe you should just read the whole thing into a byte array using a single read instruction. Then you can alter how you grep to use a byte-array grep that doesn't span multiple lines. If you fiddle the expression to always match a whole line by putting ^.*? at the beginning and .*?$ at the end (the ? forces a minimal instead of maximal munch) then you can check the size of the matched expression to find out how many bytes forward to go.
Alternately, you could try using the mmap module to achieve something similar without having to read anything and incur the copy overhead.
If there is a lot of processing going on to create your data structure and you can't think of a way to use the data in the file in a very raw way as a simple byte array, then you're left with various other solutions depending, though of these it sounds like creating a daemon is the best option.
Since your basic operation seems to be 'tell me which tables entries match a regexp', you could use the xmlrpc.server and xmlrpc.client libraries to simply wrap up a call that takes the regular expression as a string and returns the result in whatever form is natural. The library will take care of all the work of wrapping up things that look like function calls into messages over a socket or whatever.
Now, your idea of actually keeping it in memory is a bit of a red-herring. I don't think it takes 30 minutes to read 2G of information from disk these days. It likely takes at most 5, and likely less than 1. So you might want to look at how you're building the data structure to see if you could optimize that instead.
What pickle and/or marshal will buy you is highly optimized code for building the data structure out of a serialized form. This will cause the data structure creation to possibly be constrained by disk read speeds instead. That means the real problem you're addressing is not reading it off disk each time, but building the data structure in your own address space.
And holding it in memory and using a daemon isn't a guarantee that it will stay in memory. It just guarantees that it stays built up as the data structure you want within the address space of a Python process. The os may decide to swap that memory to disk at any time.
Again, this means that focusing on the time to read it from disk is likely not the right focus. Instead, focus on how to efficiently re-create (or preserve) the data structure in the address space of a Python process.
Anyway, that's my long-winded ramble on the topic. Given the vagueness of your question, there is no definite answer, so I just gave a smorgasbord of possible techniques and some guiding ideas.
| 3 | 2 | 0 |
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved?
Background:
I have written a program that finds which lines in the input table match the given regular expression. I can find all the lines in roughly one second or less. However the problem is that i process the input table into a python object every time i start this program. This process takes about 30minutes.
This program will eventually run on a machine with over 128GB of RAM. The python object takes about 2GB of RAM. The input table changes rarely and therefore the python object (that i'm currently recalculating every time) actually changes rarely. Is there a way that i can create this python object once, store it in RAM 24/7 (recreate if input table changes or server restarts) and then use it every time when needed?
NOTE: The python object will not be modified after creation. However i need to be able to recreate this object if needed.
EDIT: Only solution i can think of is just to keep the program running 24/7 (as a daemon??) and then issuing commands to it as needed.
|
Storing large python object in RAM for later use
| 0 | 0 | 0 | 2,201 |
13,906,679 |
2012-12-16T23:50:00.000
| 2 | 1 | 1 | 0 |
c++,python,memory,ram
| 13,924,546 | 4 | false | 0 | 0 |
We regularly load and store much larger chunks of memory than 2 Gb in no time (seconds). We can get 350 Mb/s from our 3 year old SAN.
The bottlenecks /overheads seem to involve mainly python object management. I find that using marshal is much faster than cPickle. Allied with the use of data structures which involve minimal python object handles, this is more than fast enough.
For data structures, you can either use array.array or numpy. array.array is slightly more portable (no extra libraries involved) but numpy is much more convenient in many ways.
For example, instead of having 10 million integer (python objects), you would create a single array.array('i') with 10 million elements.
The best part to using marshal is that it is a very simple format you can write to and read from easily using c/c++ code.
| 3 | 2 | 0 |
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved?
Background:
I have written a program that finds which lines in the input table match the given regular expression. I can find all the lines in roughly one second or less. However the problem is that i process the input table into a python object every time i start this program. This process takes about 30minutes.
This program will eventually run on a machine with over 128GB of RAM. The python object takes about 2GB of RAM. The input table changes rarely and therefore the python object (that i'm currently recalculating every time) actually changes rarely. Is there a way that i can create this python object once, store it in RAM 24/7 (recreate if input table changes or server restarts) and then use it every time when needed?
NOTE: The python object will not be modified after creation. However i need to be able to recreate this object if needed.
EDIT: Only solution i can think of is just to keep the program running 24/7 (as a daemon??) and then issuing commands to it as needed.
|
Storing large python object in RAM for later use
| 0.099668 | 0 | 0 | 2,201 |
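A small sketch of the marshal plus array.array combination the answer above recommends; since array objects themselves are not marshallable, the raw bytes are stored alongside the typecode. The file name is illustrative.

```python
import array
import marshal

values = array.array('i', range(10 ** 6))     # many ints in one compact buffer

# save: marshal a small dict holding the typecode and the raw bytes
with open("table.bin", "wb") as f:
    marshal.dump({"typecode": values.typecode, "data": values.tobytes()}, f)

# load: rebuild the array from the stored bytes
with open("table.bin", "rb") as f:
    blob = marshal.load(f)
restored = array.array(blob["typecode"])
restored.frombytes(blob["data"])
assert restored == values
```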
13,906,679 |
2012-12-16T23:50:00.000
| 2 | 1 | 1 | 0 |
c++,python,memory,ram
| 13,906,794 | 4 | false | 0 | 0 |
You could try pickling your object and saving it to a file, so that each time the program runs it just has to deserialise the object instead of recalculating it. Hopefully the server's disk cache will keep the file hot if necessary.
| 3 | 2 | 0 |
Is it possible to store Python (or C++) data in RAM for later use, and how can this be achieved?
Background:
I have written a program that finds which lines in the input table match the given regular expression. I can find all the lines in roughly one second or less. However, the problem is that I process the input table into a Python object every time I start this program. This process takes about 30 minutes.
This program will eventually run on a machine with over 128 GB of RAM. The Python object takes about 2 GB of RAM. The input table changes rarely, and therefore the Python object (which I'm currently recalculating every time) also changes rarely. Is there a way that I can create this Python object once, store it in RAM 24/7 (recreating it if the input table changes or the server restarts), and then use it whenever needed?
NOTE: The Python object will not be modified after creation. However, I need to be able to recreate this object if needed.
EDIT: The only solution I can think of is to keep the program running 24/7 (as a daemon?) and then issue commands to it as needed.
|
Storing large python object in RAM for later use
| 0.099668 | 0 | 0 | 2,201 |
13,907,359 |
2012-12-17T01:47:00.000
| 1 | 0 | 0 | 0 |
python,mysql,django,django-orm
| 14,781,925 | 2 | false | 1 | 0 |
I know that this might be off-topic...
but maybe it prevents some headache:
before you make any Unicode-related change, please get to know what Unicode means, and note that what you wrote ("ö" == ö) only holds when the text is encoded as UTF-8.
| 1 | 2 | 0 |
I'm setting up a django-admin on top of a legacy MySQL database.
The database declares that it is latin-1 encoded. Some of the entered data in the database is indeed in latin-1 but some is actually UTF-8. This shows up as corrupt characters like: é € ä ö
The legacy application does some black magic to hide these errors and I cannot modify the database.
I found a Python library ftfy that can convert latin-1 corrupted UTF-8 to real UTF-8, for example the above chars get translated to "é € ä ö". I want to use it on all django.db.models.CharField and django.db.models.TextField data that is loaded from database. How to do it?
I tried to subclass django.db.models.CharField and django.db.models.TextField but couldn't figure out where to intercept the data from database. Optimal solution would be something like FTFYCharField which would always correct data that it gets from database.
|
Correct latin-1 encoded UTF-8 in Django-ORM
| 0.099668 | 0 | 0 | 937 |
13,908,455 |
2012-12-17T04:54:00.000
| 6 | 0 | 0 | 0 |
python,authentication,flask
| 13,908,497 | 1 | false | 1 | 0 |
Flask does not require any authentication by default. Usually you have to decorate your view functions if you want such behaviour. So your error is most likely within your web server configuration.
| 1 | 1 | 0 |
I'm working on a Flask application and don't see in the documentation how to turn off the requirement for a user to be logged in. We need to do this for some REST services coinciding
(possibly) with some pages that do require logins.
What's the best way to do this? I've scoured the docs and snippets and don't see how to turn off the requirement for certain blueprints. I'm getting 401 (Unauthorized) pages on all that I try.
Thanks!
|
How to set Flask blueprints to allow anonymous access?
| 1 | 0 | 0 | 344 |
13,909,225 |
2012-12-17T06:17:00.000
| 3 | 0 | 0 | 0 |
python,rest,python-2.7
| 13,923,221 | 2 | true | 0 | 0 |
Errr?
WSDL usually refers to SOAP, which to my knowledge encapsulates the actual remote call protocol inside its own protocol and just happens to use HTTP as a transport.
REST usually refers to using HTTP methods appropriately, e.g. DELETE /frobnication/1 would delete it, PUT /frobnication/1 would completely replace the thing (resource) under that URL, and POST /frobnication/1 updates it (HTTP does have a few more methods).
REST doesn't usually have a WSDL, though; IIRC, there is some talk about "commonly known entry points" (Google for that).
Vote me down, but to me that question seems to mix up two completely different topics...
| 1 | 1 | 0 |
I want to generate a WSDL file for my REST web service. I also need to parse it in Python. How can I do this?
|
How do I generate a WSDL file for my REST client?
| 1.2 | 0 | 1 | 800 |
13,910,113 |
2012-12-17T07:42:00.000
| 1 | 0 | 0 | 0 |
javascript,python,django,web-applications,backbone.js
| 13,910,427 | 3 | false | 1 | 0 |
You need a REST backend in your django application in order to communicate with backbone.
Django views are built to respond with html but they can also respond with json.
I wouldn't recommend trying to build your own json views though, but rather use something like django-tastypie
| 1 | 1 | 0 |
I'm building a simple application as part of my learning(to understand REST APIs) using django and backbone. I would like to create a user registration and pass these values to django backend using json. Can someone point me to some examples and source codes to construct good APIs using django and using them with backbone?
|
User registration using django and backbone over a RESTful API
| 0.066568 | 0 | 0 | 455 |
13,910,576 |
2012-12-17T08:23:00.000
| 0 | 0 | 1 | 0 |
python,sqlalchemy
| 13,910,851 | 3 | false | 0 | 0 |
This is a classic case of buffering. Try a reasonably large chunk and reduce it if there's too much disk I/O (or you don't like it causing long pauses, etc) or increase it if your profile shows too much CPU time in I/O calls.
To implement, use an array; on each "write" you append an item to the array. Have a separate "flush" function that writes the whole thing. On each append, check whether the array has reached its maximum size; if so, write everything out and clear the array. At the end, call the flush function to write the partially filled array.
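A rough sketch of that pattern, assuming an SQLAlchemy session (the chunk size is a placeholder to tune against your I/O profile):

    class BufferedWriter(object):
        def __init__(self, session, chunk_size=1000):
            self.session = session          # an SQLAlchemy session
            self.chunk_size = chunk_size
            self.pending = []

        def write(self, item):
            self.pending.append(item)
            if len(self.pending) >= self.chunk_size:
                self.flush()

        def flush(self):
            # Write everything buffered so far and clear the buffer.
            if self.pending:
                self.session.add_all(self.pending)
                self.session.commit()
                self.pending = []

Call write() for each parsed item and call flush() once at the end for the partially filled buffer.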
| 1 | 0 | 0 |
I developed an application which parses a lot of data, but if I commit the data after parsing all the data, it will consume too much memory. However, I cannot commit it each time, because it costs too much hard disk I/O.
Therefore, my question is how can I know how many uncommitted items are in the session?
|
Find out how many uncommitted items are in the session
| 0 | 0 | 0 | 1,666 |
13,911,219 |
2012-12-17T09:13:00.000
| 0 | 1 | 0 | 0 |
python,database,web,payment-processing
| 13,914,479 | 1 | true | 0 | 0 |
Most payment processors have a sandbox/developers account where you can process transactions in a test mode so you can fully test them as if you were in a live environment.
| 1 | 0 | 0 |
I believe this question is probably outside of the scope of SO, but I was wondering what the best practice is for testing a payment processing feature?
For any feature developed, it's been relatively easy to test, if not through unit testing then through a front-end walkthrough, but with this, I'm at a bit of a loss, as I have not done this before.
What is suggested here?
|
Testing Payment Processing Feature
| 1.2 | 0 | 0 | 85 |
13,919,367 |
2012-12-17T17:51:00.000
| 3 | 1 | 0 | 0 |
python,bittorrent,libtorrent
| 13,938,080 | 1 | false | 0 | 0 |
The closest thing you have to alerts is probably stats_alert. It will tell you the number of payload bytes uploaded. It won't give you the granularity of a full block being sent, though.
If you'd like to add an alert, have a look at bt_peer_connection::write_piece.
patches are welcome!
| 1 | 2 | 0 |
I am trying to get alerts for the data that I'm sending peers. My code works great for incoming blocks by looking for libtorrent.block_finished_alert but I want to know when and what I am sending to peers. I can't find an alert that will give me the equivalent for outbound transfers. I need to know the file and offset (the peer request).
Is there an alert for outbound block requests?
I'm using the python bindings but C++ code is fine too.
|
Get alerts for upload activity with libtorrent (rasterbar)
| 0.53705 | 0 | 0 | 384 |
13,919,448 |
2012-12-17T17:56:00.000
| 1 | 0 | 0 | 0 |
python,sqlite,python-2.7
| 13,919,496 | 3 | false | 0 | 0 |
SQLite can handle huge transactions with ease, so why not commit at the end? Have you tried this at all?
If you do feel one transaction is a problem, why not commit every n transactions? Process rows one by one, insert as needed, but after every n executed insertions add a connection.commit() to spread the load.
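A sketch of the "commit every n insertions" idea with the standard sqlite3 module (the table layout and batch size are made up for illustration):

    import sqlite3

    BATCH = 10000

    def load(rows, db_path='out.db'):
        # rows: any iterable of (a, b) tuples, e.g. a generator reading from Postgres
        conn = sqlite3.connect(db_path)
        cur = conn.cursor()
        cur.execute('CREATE TABLE IF NOT EXISTS data (a TEXT, b TEXT)')
        for i, row in enumerate(rows, 1):
            cur.execute('INSERT INTO data VALUES (?, ?)', row)
            if i % BATCH == 0:
                conn.commit()            # flush a batch to disk, keep memory bounded
        conn.commit()                    # commit the final partial batch
        conn.close()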
| 2 | 0 | 0 |
I'm trying to write a loader for SQLite that will load simple rows into the DB as fast as possible.
The input data looks like rows retrieved from a Postgres DB. The approximate number of rows that will go into SQLite is from 20 million to 100 million.
I cannot use any DB other than SQLite due to project restrictions.
My question is:
what is the proper logic for writing such a loader?
On the first try, I tried to write a set of encapsulated generators that would take one row from Postgres, slightly amend it, and put it into SQLite. I ended up creating a separate SQLite connection and cursor for each row. And that looks awful.
On the second try, I moved the SQLite connection and cursor out of the generator, to the body of the script, and it became clear that I do not commit data to SQLite until I fetch and process all 20 million records. And this could possibly crash all my hardware.
On the third try, I started to consider keeping the SQLite connection away from the loops, but creating/closing the cursor each time I process and push one row to SQLite. This is better, but I think it also has some overhead.
I also considered playing with transactions: one connection, one cursor, one transaction, and a commit called in the generator each time a row is pushed to SQLite. Is this the right way to go?
Is there some widely-used pattern for writing such a component in Python? Because I feel as if I am reinventing the wheel.
|
How to write proper big data loader to sqlite
| 0.066568 | 1 | 0 | 293 |
13,919,448 |
2012-12-17T17:56:00.000
| 0 | 0 | 0 | 0 |
python,sqlite,python-2.7
| 13,976,529 | 3 | true | 0 | 0 |
Finally I managed to resolve my problem. The main issue was the excessive number of insertions into SQLite. After I started to load all the data from Postgres into memory and aggregate it properly to reduce the number of rows, I was able to decrease the processing time from 60 hours to 16 hours.
| 2 | 0 | 0 |
I'm trying to write a loader for SQLite that will load simple rows into the DB as fast as possible.
The input data looks like rows retrieved from a Postgres DB. The approximate number of rows that will go into SQLite is from 20 million to 100 million.
I cannot use any DB other than SQLite due to project restrictions.
My question is:
what is the proper logic for writing such a loader?
On the first try, I tried to write a set of encapsulated generators that would take one row from Postgres, slightly amend it, and put it into SQLite. I ended up creating a separate SQLite connection and cursor for each row. And that looks awful.
On the second try, I moved the SQLite connection and cursor out of the generator, to the body of the script, and it became clear that I do not commit data to SQLite until I fetch and process all 20 million records. And this could possibly crash all my hardware.
On the third try, I started to consider keeping the SQLite connection away from the loops, but creating/closing the cursor each time I process and push one row to SQLite. This is better, but I think it also has some overhead.
I also considered playing with transactions: one connection, one cursor, one transaction, and a commit called in the generator each time a row is pushed to SQLite. Is this the right way to go?
Is there some widely-used pattern for writing such a component in Python? Because I feel as if I am reinventing the wheel.
|
How to write proper big data loader to sqlite
| 1.2 | 1 | 0 | 293 |
13,920,682 |
2012-12-17T19:19:00.000
| 1 | 0 | 1 | 0 |
python,.net,powerbuilder
| 13,943,263 | 1 | true | 0 | 1 |
A PowerBuilder application can load a DataWindow from a PBL (doesn't have to be in the library path), modify it, and save it back to the PBL. I've written a couple of tools that do that. PowerBuilder will allow you to modify the DataWindow according to its object model using the modify method. I don't know why anyone would want to reinvent all of this. I recall seeing Python bindings for PB somewhere. You could get the DW syntax from PB, call out to Python, then save it back in PB. But you'd have to do all the parsing in Python, whereas PB already understands the DW. Finally, I'm surprised Terry didn't plug PBL Peeper. You could use PBL Peeper to export the DataWindows, massage them to your heart's content in Python, then import them back into PB.
| 1 | 1 | 0 |
I want to get the content of DataWindow from PBL (PowerBuilder Library) file and edit it in place. The idea is to read the pbl file, and access individual DataWindows to modify source code. Somehow, I have managed to do the first part with PblReader .NET library using IronPython. It allows me to read PBL files, and access DataWindow source code. However it doesn't support modifications. I would like to know if anyone have an idea for editing PBL files?
|
idea/solution how to edit PBL (PowerBuilder Library) files?
| 1.2 | 0 | 0 | 3,599 |
13,921,373 |
2012-12-17T20:07:00.000
| 0 | 0 | 0 | 0 |
python,django,caching
| 13,922,532 | 4 | false | 1 | 0 |
If you are running it on Apache with mod_wsgi, just update the timestamp of the wsgi file every time you make a change to a model. Apache automatically restarts the application when the wsgi file gets updated.
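For example, a post_save handler could simply touch the file (the path is hypothetical and depends on your deployment):

    import os

    WSGI_FILE = '/srv/myproject/django.wsgi'   # hypothetical path to your wsgi script

    def touch_wsgi(sender, **kwargs):
        # Bumping the mtime is what triggers mod_wsgi to reload the application.
        os.utime(WSGI_FILE, None)

Connect touch_wsgi to the post_save signal of the settings models you care about.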
| 2 | 1 | 0 |
We have certain sysadmin settings that we expose to superusers of our django webapp. Things like the domain name (uses contrib.sites) and single sign-on configuration. Some of these settings are cached by the system, sometimes because we want to avoid an extra DB hit in the middleware on every request if we can help it, sometimes because it's contrib.sites, which has its own caching. So when the settings get changed, the changes don't take effect until the app is reloaded.
We want the app to restart itself when these changes are made, so that our clients don't need to pester us to do the restart for them.
Our webapp is running on apache via mod_wsgi, so we should be able to do this just by touching the wsgi file for the app whenever one of these settings is changed, but it feels a little weird to do that, and I'm worried there's some more graceful convention we should be following.
Is there a right way to apply updates that are cached and require the app to reload? Invalidating the caches for these things will be pretty hairy, so I think I'd avoid that unless the app restart thing has serious drawbacks.
|
Best practice for making a django webapp restart itself
| 0 | 0 | 0 | 403 |
13,921,373 |
2012-12-17T20:07:00.000
| 0 | 0 | 0 | 0 |
python,django,caching
| 13,929,058 | 4 | false | 1 | 0 |
It depends on your setup:
If you are using wsgi on a single server, you could touch the wsgi file to let Apache restart every instance of the app.
If you are using gunicorn, you probably use supervisord to control it. Then a supervisorctl restart APPNAME would be the solution.
If you scale your app across multiple servers, you have to ensure that every server restarts its instances. There are several ways to achieve this:
use the same filesystem: if you are using mod_wsgi, then a touch would count for every server
log in to the other servers using ssh and make them restart your app
I am sure there are more ways to restart your app, but it highly depends on your setup and whether or not you have to restart all instances or only one.
| 2 | 1 | 0 |
We have certain sysadmin settings that we expose to superusers of our django webapp. Things like the domain name (uses contrib.sites) and single sign-on configuration. Some of these settings are cached by the system, sometimes because we want to avoid an extra DB hit in the middleware on every request if we can help it, sometimes because it's contrib.sites, which has its own caching. So when the settings get changed, the changes don't take effect until the app is reloaded.
We want the app to restart itself when these changes are made, so that our clients don't need to pester us to do the restart for them.
Our webapp is running on apache via mod_wsgi, so we should be able to do this just by touching the wsgi file for the app whenever one of these settings is changed, but it feels a little weird to do that, and I'm worried there's some more graceful convention we should be following.
Is there a right way to apply updates that are cached and require the app to reload? Invalidating the caches for these things will be pretty hairy, so I think I'd avoid that unless the app restart thing has serious drawbacks.
|
Best practice for making a django webapp restart itself
| 0 | 0 | 0 | 403 |
13,921,647 |
2012-12-17T20:27:00.000
| 165 | 0 | 0 | 0 |
python,pandas
| 13,921,674 | 2 | true | 0 | 0 |
df.shape, where df is your DataFrame.
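For example:

    import pandas as pd

    df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
    print(df.shape)    # (3, 2) -> (number of rows, number of columns)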
| 1 | 104 | 1 |
New to Python.
In R, you can get the dimension of a matrix using dim(...). What is the corresponding function in Python Pandas for their data frame?
|
Python - Dimension of Data Frame
| 1.2 | 0 | 0 | 147,970 |
13,922,955 |
2012-12-17T22:01:00.000
| 21 | 0 | 0 | 1 |
centos,mysql-python
| 13,932,070 | 3 | false | 0 | 0 |
So it transpires that mysql_config is part of mysql-devel. mysql-devel is for compiling the mysql client, not the server. Installing mysql-devel allows the installation of MySQL-python.
| 1 | 13 | 0 |
I'm attempting to install MySQL-python on a machine running CentOS 5.5 and python 2.7. This machine isn't running a mysql server, the mysql instance this box will be using is hosted on a separate server. I do have a working mysql client. On attempting sudo pip install MySQL-python, I get an error of EnvironmentError: mysql_config not found, which as far as I can tell is a command that just references /etc/my.cnf, which also isn't present. Before I go on some wild goose chase creating spurious my.cnf files, is there an easy way to get MySQL-python installed?
|
Installing MySQL-python without mysql-server on CentOS
| 1 | 1 | 0 | 27,898 |
13,924,980 |
2012-12-18T01:26:00.000
| 3 | 0 | 0 | 0 |
python,qt,pyqt
| 13,928,629 | 1 | true | 0 | 1 |
If you had used QGraphicsScene instead of rolling your own, you could have used the items(..) methods to very efficiently find your children in a particular area.
It's only possible in QGraphicsScene because it uses a BSP spatial acceleration structure, so if you cannot migrate to QGraphicsScene in a reasonable amount of time, you are going to have to write your own. It's not as hard as it sounds; I've written numerous bounding volume hierarchy structures and they're quite straightforward.
| 1 | 3 | 0 |
This is my first question ever so bear with me!
Currently in my program, I have a parent widget which acts as a canvas. The user can add or remove widgets to the parent at run-time. Those widgets are then given an absolute position, that is, they are not positioned by a layout. Once added, a widget can be moved around arbitrarily by the user.
I want the user to be able to select a group of widgets by dragging a box around them. I have already coded the part that displays the rectangle while the user is dragging. Now, I want to be able to retrieve all the widgets within that rectangle (region).
I am aware of the findChild() and findChildren() functions, and they indeed do return the children as they are supposed to. But what I'd really need is a way to limit the search to the boundaries of the region since there will most-likely be quite a lot of widgets within the 'canvas'. (There could be thousands of widgets spread over a very large area due to the nature of what I'm doing!)
Here is my question: What would be my best option? Should I just go ahead and use findChildren() and loop through the list to find the children within the region manually. Or should I loop through all the pixels within the region using findChild(x, y)? Or perhaps there is an even simpler solution that would speed up the process? Something along the lines of findChildren(x, y, width, height)?
Hopefully my question made sense. I tried to explain things as best as I could. Thanks!
|
Qt: get all children within region of the parent
| 1.2 | 0 | 0 | 493 |
13,930,458 |
2012-12-18T10:03:00.000
| 0 | 0 | 1 | 0 |
python,regex
| 13,930,491 | 4 | false | 0 | 0 |
You can't do that with regular expressions alone. But you can use a regular expression to extract the string of a's and then check its length.
Then you can create a replacement string of the appropriate length and do a replace.
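For example (the halving rule is just an illustration of what you can do once you know n):

    import re

    s = 'aaaaaab'
    m = re.search(r'a+', s)                 # find the run of a's
    if m:
        n = len(m.group(0))                 # n == 6 here
        s = s[:m.start()] + 'a' * (n // 2) + s[m.end():]
    print(s)                                # 'aaab'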
| 1 | 0 | 0 |
For example, there is a string like aaaaaab, where a repeats n times (in this case, n=6). How could I get the number n?
Then I want to use n to replace the run, e.g. repeating a only n/2 times (aaab) or n-2 times (aaaab). How can I do this?
|
How to get the number of replications of a pattern and use it
| 0 | 0 | 0 | 67 |
13,931,048 |
2012-12-18T10:35:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,ubuntu
| 13,931,114 | 1 | false | 0 | 0 |
You probably shouldn't create so many threads. Instead, use Queue.Queues to communicate between different threads.
| 1 | 0 | 0 |
I have code that uses threading in Python. Each thread, if a condition is met, will create 2 new threads. Now the problem is that the total number of active threads exceeds the total number of threads supported by Ubuntu 12.04. Each thread in the active thread queue needs room to create new threads before it terminates. My system has 8 CPUs. Now my code is going into deadlock.
|
Thread queue is full - python
| 0.379949 | 0 | 0 | 112 |
13,931,924 |
2012-12-18T11:26:00.000
| 7 | 0 | 0 | 0 |
python,sockets
| 13,932,042 | 3 | true | 0 | 0 |
From documentation,
socket.gethostname returns a string containing the hostname of the machine where the Python interpreter is currently executing.
socket.getfqdn returns a fully qualified domain name if it's available or gethostname otherwise.
Fully qualified domain name is a domain name that specifies its exact location in the tree hierarchy of the DNS. From wikipedia examples:
For example, given a device with a local hostname myhost and a parent
domain name example.com, the fully qualified domain name is
myhost.example.com.
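You can see the difference directly; the exact output depends on the machine's hosts/DNS configuration:

    import socket

    print(socket.gethostname())                        # e.g. 'myhost'
    print(socket.getfqdn())                            # e.g. 'myhost.example.com'
    print(socket.gethostbyname(socket.gethostname()))  # the two lookups may resolve
    print(socket.gethostbyname(socket.getfqdn()))      # to different addresses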
| 3 | 8 | 0 |
Just the title: what is the difference between them?
In python, socket.gethostbyname(socket.gethostname()) and socket.gethostbyname(socket.getfqdn()) return different results on my computer.
|
what's the difference between gethostname and getfqdn?
| 1.2 | 0 | 1 | 10,295 |
13,931,924 |
2012-12-18T11:26:00.000
| 0 | 0 | 0 | 0 |
python,sockets
| 13,931,968 | 3 | false | 0 | 0 |
The hostname is not the fully qualified domain name, hence why they return different results.
getfqdn() will return the fully qualified domain name while gethostname() will return the hostname.
| 3 | 8 | 0 |
Just the title: what is the difference between them?
In python, socket.gethostbyname(socket.gethostname()) and socket.gethostbyname(socket.getfqdn()) return different results on my computer.
|
what's the difference between gethostname and getfqdn?
| 0 | 0 | 1 | 10,295 |
13,931,924 |
2012-12-18T11:26:00.000
| 4 | 0 | 0 | 0 |
python,sockets
| 43,330,159 | 3 | false | 0 | 0 |
Note that the selected reply above is quite confusing.
YES, socket.getfqdn WILL return a fully qualified hostname. But if it's going to be 'localhost.localdomain', you probably actually want socket.gethostname instead, so that you get something that is somewhat usable.
The difference is that one reads from /etc/hostname and /etc/domainname while the other reads the kernel nodename. Depending on your distribution, configuration, OS, etc. your mileage WILL vary.
What this means is that you generally want to first check socket.getfqdn, and verify if it returns 'localhost.localdomain'. if it does, use socket.gethostname instead.
Finally, python also has platform.node which is basically the same as socket.gethostname on python, though this might be a better choice for multiplatform code.
That's quite an important detail.
| 3 | 8 | 0 |
Just the title: what is the difference between them?
In python, socket.gethostbyname(socket.gethostname()) and socket.gethostbyname(socket.getfqdn()) return different results on my computer.
|
what's the difference between gethostname and getfqdn?
| 0.26052 | 0 | 1 | 10,295 |
13,933,169 |
2012-12-18T12:36:00.000
| 8 | 1 | 0 | 1 |
python,linux
| 13,933,228 | 7 | false | 0 | 0 |
yes there is. add
#!/usr/bin/env python
to the beginning of the file and do
chmod u+rx <file>
assuming your user owns the file, otherwise maybe adjust the group or world permissions.
.py files under windows are associated with python as the program to run when opening them just like MS word is run when opening a .docx for example.
| 1 | 32 | 0 |
I am using Linux Mint, and to run a Python file I have to type in the terminal: python [file path]. Is there a way to make the file executable, so that it runs the python command automatically when I double-click it?
And since I stopped dealing with Windows ages ago, I wonder if the .py files there are automatically executable, or do I need some extra steps?
Thanks
|
How to execute python file in linux
| 1 | 0 | 0 | 290,768 |
13,933,507 |
2012-12-18T12:54:00.000
| 0 | 0 | 1 | 0 |
python,multiprocessing
| 13,937,001 | 4 | false | 0 | 0 |
I found that a client-server architecture was the solution for me: running a server, and spawning many clients that talk to the server and to each other directly, something like a messenger.
Talking/communication can be achieved through the network or a text file located in memory (to speed things up and spare the hard drive).
Bakuriu gave you a good tip about the logging module.
| 1 | 0 | 0 |
I want to run a function continuously in parallel to my main process. How do I do it in Python? multiprocessing? threading, or the thread module?
I am new to Python. Any help much appreciated.
|
continuously running function in background in python
| 0 | 0 | 0 | 8,013 |
13,934,268 |
2012-12-18T13:39:00.000
| 1 | 0 | 0 | 0 |
python,python-2.7,tkinter,pypdf
| 13,934,811 | 2 | false | 0 | 1 |
Tkinter has no support for displaying pdf.
| 1 | 1 | 0 |
I'm using pyPdf to crop PDF pages.
The only thing I miss is a GUI for this script.
I picked the tkinter module to build the GUI, but I cannot find out whether it is possible to display PDF pages in a GUI created with tkinter.
Any thoughts?
Thank you.
|
display pdf pages with GUI on tkinter
| 0.099668 | 0 | 0 | 2,722 |
13,934,311 |
2012-12-18T13:42:00.000
| 1 | 1 | 0 | 0 |
c#,java,php,python,api
| 13,934,575 | 4 | false | 0 | 0 |
You could look for unusual character combinations like many consecutive vowels/consonants, watch your registrations, and create a list of recurring patterns (like asd) in false names.
I would refrain from automatically blocking those inputs and rather mark them for examination.
| 4 | 2 | 0 |
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea. Crazy keyboard inputs.
I am trying akismet, but it is not specifically for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/).
Thanks in advance.
|
service or strategy to detect if users enter fake names?
| 0.049958 | 0 | 0 | 361 |
13,934,311 |
2012-12-18T13:42:00.000
| 0 | 1 | 0 | 0 |
c#,java,php,python,api
| 13,934,456 | 4 | false | 0 | 0 |
Ask for a real email and send info to connect there. Then get info from the account.
No way is really safe anyway.
| 4 | 2 | 0 |
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea. Crazy keyboard inputs.
I am trying akismet, but it is not specifically for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/).
Thanks in advance.
|
service or strategy to detect if users enter fake names?
| 0 | 0 | 0 | 361 |
13,934,311 |
2012-12-18T13:42:00.000
| 0 | 1 | 0 | 0 |
c#,java,php,python,api
| 49,864,949 | 4 | false | 0 | 0 |
If speed isn't an issue, download a list of the top 100k most common names, throw them in an O(1) lookup data structure, see if the input is there, and if not, you could always compare the input to the entries using a string similarity algorithm.
Although if you do, you will probably want to bucket by starting letter to prevent having to perform that calculation on the entire list.
| 4 | 2 | 0 |
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea. Crazy keyboard inputs.
I am trying akismet, but it is not specifically for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/).
Thanks in advance.
|
service or strategy to detect if users enter fake names?
| 0 | 0 | 0 | 361 |
13,934,311 |
2012-12-18T13:42:00.000
| 2 | 1 | 0 | 0 |
c#,java,php,python,api
| 13,934,347 | 4 | false | 0 | 0 |
One strategy is having a black list with weird names and/or a white list with normal names, to reject/accept names. But it can be difficult to find or build one.
| 4 | 2 | 0 |
I'm searching for services/strategies to detect when names entered in forms are spammy, for example: asdasdasd, ksfhaiodsfh, wpoeiruopwieru, zcpoiqwqwea. Crazy keyboard inputs.
I am trying akismet, but it is not specifically for names (http://kemayo.wordpress.com/2005/12/02/akismet-py/).
Thanks in advance.
|
service or strategy to detect if users enter fake names?
| 0.099668 | 0 | 0 | 361 |
13,935,864 |
2012-12-18T15:10:00.000
| 0 | 0 | 0 | 0 |
javascript,jquery,python,html,django
| 14,459,121 | 1 | false | 1 | 0 |
I'm not sure that I understand you correctly, but if you are talking about Named Destinations in a PDF file, you should:
get the Named Destinations of the PDF file with the pyPdf library's "getNamedDestinations" method
open the PDF file with the nameddest parameter, like "http://example.org/doc.pdf#nameddest=something"
| 1 | 1 | 0 |
I'm developing a simple site in Django.
In this site the user has to upload a PDF file,
and then he can see it with a tree of sections at the side.
The challenge is:
A. How can I read the file's sections?
It could be in Django, Python, JavaScript, or any other idea.
B. How can I jump to a specific section when the user clicks on it?
I display the file with an HTML object tag.
Thanks for any reply.
|
Django: How can i get the sections of pdf file and jump to them?
| 0 | 0 | 0 | 119 |
13,936,563 |
2012-12-18T15:47:00.000
| 5 | 0 | 0 | 0 |
python,netcdf
| 16,386,862 | 5 | false | 0 | 0 |
If you want to only use the netCDF-4 API to copy any netCDF-4 file, even those with variables that use arbitrary user-defined types, that's a difficult problem. The netCDF4 module at netcdf4-python.googlecode.com currently lacks support for compound types that have variable-length members or variable-length types of a compound base type, for example.
The nccopy utility that is available with the netCDF-4 C distribution shows it is possible to copy an arbitrary netCDF-4 file using only the C netCDF-4 API, but that's because the C API fully supports the netCDF-4 data model. If you limit your goal to copying netCDF-4 files that only use flat types supported by the googlecode module, the algorithm used in nccopy.c should work fine and should be well-suited to a more elegant implementation in Python.
A less ambitious project that would be even easier is a Python program that would copy any netCDF "classic format" file, because the classic model supported by netCDF-3 has no user-defined types or recursive types. This program would even work for netCDF-4 classic model files that also use performance features such as compression and chunking.
| 1 | 5 | 0 |
I would like to make a copy of a netCDF file using Python.
There are very nice examples of how to read or write a netCDF file, but perhaps there is also a good way to read the variables in and then write them out to another file.
A good, simple method would be nice, in order to get the dimensions and dimension variables into the output file at the lowest cost.
|
copy netcdf file using python
| 0.197375 | 0 | 0 | 3,485 |
13,937,326 |
2012-12-18T16:28:00.000
| 1 | 1 | 0 | 0 |
gdata-api,google-api-client,google-api-python-client,google-provisioning-api
| 13,938,196 | 1 | false | 0 | 0 |
There is no group rename function for groups as there is for users. With the Group Settings and Provisioning APIs though, you can capture much of the group specifics and migrate that over to a new group. You would lose:
-Group Archive
-Managers (show only as members)
-Email Delivery (Immediate, Digest, No-Delivery, etc)
| 1 | 1 | 0 |
I am trying to find a way to rename (change email address aka group id) a google group via api. Using the python client libraries and the provisioning api i am able to modify the group name and description, and I have used the group settings api to modify a group's settings. Is there a way to change the email address?
|
Is it possible to change email address of a Google group via API?
| 0.197375 | 0 | 1 | 923 |
13,938,903 |
2012-12-18T18:07:00.000
| 2 | 1 | 0 | 0 |
c#,python,asp.net,web-services,perl
| 13,939,065 | 3 | false | 0 | 0 |
a) WebSockets in conjunction with Ajax to update only parts of the site would work; the disadvantage is that the client's infrastructure (proxies) must support them (which is currently not the case 99% of the time).
b) With existing infrastructure the approach is long polling. You make an XmlHttpRequest using JavaScript. If no data is present, the request is blocked on the server side for, say, 5 to 10 seconds. If data is available, you answer the request immediately. The client then immediately sends a new request. I managed to get >500 updates per second using a Java client connecting via proxy over HTTP to a webserver (real-time stock data displayed).
You need to bundle several updates with each request in order to get enough throughput.
| 1 | 2 | 0 |
I wonder how to update fast numbers on a website.
I have a machine that generates a lot of output, and I need to show it on line. However my problem is the update frequency is high, and therefore I am not sure how to handle it.
It would be nice to show the last N numbers, say ten. The numbers are updated at 30Hz. That might be too much for the human eye, but the human eye is only for control here.
I wonder how to do this. A page reload would keep the browser continuously loading a page, and for a web page something more then just these numbers would need to be shown.
I might generate a raw web engine that writes the number to a page over a specific IP address and port number, but even then I wonder whether this page reloading would be too slow, giving a strange experience to the users.
How should I deal with such an extreme update rate of data on a website? Usually websites are not like that.
In the tags for this question I named the languages that I understand. In the end I will probably write in C#.
|
Rapid number updates on a website
| 0.132549 | 0 | 1 | 114 |
13,939,773 |
2012-12-18T19:06:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,full-text-search
| 13,942,535 | 1 | false | 1 | 0 |
Unfortunately yes; we don't yet have a way for you to weight different fields more or less than others. Sorry!
| 1 | 0 | 0 |
I have a python appengine app that uses full text search. The document model has something like
title: title
abstract: short abstract
full text: lots and lots of text
If someone searches for a string, I want it ordered such that score for matches in title >> abstract >> full text. There doesn't seem to be a way to do this with the exiting scoring options, am I out of luck?
|
Full text search in appengine, make some fields scores more important than others?
| 0.379949 | 0 | 0 | 152 |
13,939,913 |
2012-12-18T19:16:00.000
| 12 | 0 | 1 | 0 |
python,pickle
| 13,940,039 | 2 | false | 0 | 0 |
There is no sure way other than to try to unpickle it, and catch exceptions.
| 1 | 22 | 0 |
Is there any way of checking if a file has been created by pickle? I could just catch exceptions thrown by pickle.load but there is no specific "not a pickle file" exception.
|
How to test if a file has been created by pickle?
| 1 | 0 | 0 | 8,307 |
13,940,144 |
2012-12-18T19:33:00.000
| 1 | 0 | 1 | 0 |
python
| 13,940,217 | 3 | false | 0 | 0 |
If you want to reuse your code, you shouldn't keep it after if __name__ == '__main__': put the logic in functions/classes/modules and make the simplest call possible from that part of the program (a minimal sketch is shown after the Zen quotes below). And let me mention the Zen of Python here (at least two points matter in your case):
Sparse is better than dense.
Readability counts.
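Concretely, the layout looks like this:

    def run():
        # the real work lives in functions/classes like this one
        print('doing the actual work')

    def main():
        run()

    if __name__ == '__main__':
        main()    # the guard stays a one-liner; everything above is importable and testable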
| 1 | 2 | 0 |
I want to hear an opinion on the question: Is it a bad idea to have a lot of code after
if __name__ == '__main__':
The reason I'm asking is that my current project has about 400 lines of code, and as it grows, I keep adding lines after the above statement. This program is expected to reach about 3000 lines of code, and I'm worried that I will have too much code after this statement. So the question is: 'Is it a good, Pythonic way to write a lot of code after this statement?'
|
Design: Code after "if __name__ == __main__" test
| 0.066568 | 0 | 0 | 1,702 |
13,940,449 |
2012-12-18T19:52:00.000
| 2 | 1 | 0 | 0 |
python,html,tidesdk
| 13,948,150 | 1 | false | 1 | 0 |
The current version has a very old WebKit, and because of that the HTML5 support is lacking. Audio and video tags are currently not supported on Windows because the underlying WebKit implementation (WinCairo) does not support them. We are working on the first part, to use the latest WebKit; once that is completed we are also planning to work on audio/video support on Windows.
| 1 | 0 | 0 |
I am trying to stream audio in my TideSDK application, but it seems to be quite difficult. HTML5 audio does not work for me, and neither do video tags. The player simply keeps loading. I've tested and confirmed that my code works in many other browsers.
My next attempt was VLC via Python bindings. But without any confirmation, I do believe you need to have VLC installed for the vlc.py file to work?
Basically, what I want to do is play audio in a sophisticated way (probably through Python) and wrap it in my TideSDK application. I want it to work out of the box - nothing for my end users to install.
I am, by the way, pretty new to the whole Python thing, but I learn fast, so I'd love to see some examples of how to get started!
Perhaps a quite quirky way to do it would be using Flash, but I'd love not to.
For those of you who are not familiar with TideSDK, it's a way to build desktop applications with HTML, CSS, Python, Ruby and PHP.
|
Streaming audio in Python with TideSDK
| 0.379949 | 0 | 0 | 401 |
13,941,436 |
2012-12-18T21:01:00.000
| 0 | 1 | 0 | 0 |
python
| 13,941,495 | 2 | false | 0 | 0 |
This is more of an email question than a Python question. I'd refer to the email RFC. However, to address your question: between lines of the body you should put a CRLF.
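For example, joining your list with CRLF before handing the message to smtplib (addresses are placeholders):

    lines = ['first line', 'second line', 'third line']
    body = '\r\n'.join(lines)

    message = (
        'From: sender@example.com\r\n'
        'To: recipient@example.com\r\n'
        'Subject: Report\r\n'
        '\r\n' + body        # the blank line separates headers from the body
    )
    # server.sendmail('sender@example.com', ['recipient@example.com'], message)

where server is your existing smtplib.SMTP connection.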
| 1 | 0 | 0 |
I am using smtplib to send emails with python. I can get the email to send with the info I want in the body, but I can't find a good source to look over on how I can format the mail itself.
Anyone know of a good resource....
I have a list that I want to iterate over and put lines between.
|
smtplib email formatting
| 0 | 0 | 0 | 2,444 |
13,944,387 |
2012-12-19T01:44:00.000
| -2 | 0 | 1 | 0 |
python,filepath
| 13,944,530 | 3 | false | 0 | 0 |
It is OS-independent. If you hardcode your paths as C:\Whatever they will only work on Windows. If you hardcode them with the Unix standard "/" they will only work on Unix. os.path.join detects the operating system it is running under and joins the paths using the correct symbol.
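For example:

    import os

    path = os.path.join('data', 'reports', 'out.txt')
    # 'data\\reports\\out.txt' on Windows, 'data/reports/out.txt' on Unix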
| 1 | 106 | 0 |
I'm not able to see the bigger picture here I think; but basically I have no idea why you would use os.path.join instead of just normal string concatenation?
I have mainly used VBScript so I don't understand the point of this function.
|
Why use os.path.join over string concatenation?
| -0.132549 | 0 | 0 | 21,321 |
13,944,798 |
2012-12-19T02:45:00.000
| 0 | 0 | 0 | 0 |
python,google-drive-api
| 13,944,967 | 1 | true | 1 | 0 |
The solution you have presented is the correct one. As you have realized, the Drive file system is not exactly like a hierarchical file system, so you will have to perform these checks.
One optimization you could perform is to try to find the grand-child folder (Sub2) first, so you will save a number of calls.
| 1 | 2 | 0 |
I have a Django app that needs to create a file in google drive: in FolderB/Sub1/Sub2/file.pdf. I have the id for FolderB but I don't know if Sub1 or Sub2 even exist. If not it should be created and the file.pdf should be put in it.
I figure I can look at children at each level and create the folder at each level if its not there, but this seems like a lot of checks and api calls just to create one file. Its also a harder task trying to accommodate multiple folder structures (ie, one python function that can accept any path of any depth and upload a file there)
|
How do I upload a file to a subfolder even if it doesn't exist
| 1.2 | 0 | 0 | 206 |
13,945,351 |
2012-12-19T04:09:00.000
| 0 | 0 | 1 | 1 |
python-3.x,installation,package,fipy
| 13,946,326 | 2 | false | 0 | 0 |
No, FiPy does not support Python 3. You need to either use Python 2 or help update FiPy to support Python 3. Contact the authors about that.
| 1 | 3 | 0 |
I would like to install FiPy on Python 3.3 (Windows 7 or Linux/Ubuntu-Mint); is it possible? I need to use FiPy to improve my Python 3.3 coding. Do I have to translate it to Python 2.7?
Does anyone know? Any suggestions?
|
how to Install Fipy on Python 3.3
| 0 | 0 | 0 | 2,625 |
13,947,467 |
2012-12-19T07:28:00.000
| 3 | 0 | 1 | 0 |
python,linux,ubuntu,version
| 13,948,475 | 2 | false | 0 | 0 |
You can install the library in one location like /opt and then create two soft links inside /usr/lib/python2.6 and /usr/lib/python2.7 pointing to that library.
| 1 | 8 | 0 |
I have a pure python module that will work for both Python 2.6 and 2.7. Instead of putting the module into the python version specific paths, is it possible to place the library at one location that will be accessed by both Python 2.6 and 2.7? System is Ubuntu.
|
Where should version independent python library go?
| 0.291313 | 0 | 0 | 149 |
13,950,786 |
2012-12-19T10:55:00.000
| 2 | 0 | 1 | 0 |
python,windows,service,windows-services
| 13,951,161 | 1 | true | 0 | 0 |
Python service creation using win32service uses the pythonservice.exe in C:\Python27\Lib\site-packages\win32 by default.
You can perform os.chdir(yourdir) in your code just before service creation;
the best thing would be to use absolute paths and to set a proper sys.path within your script for accessing files.
| 1 | 6 | 0 |
I am trying to run a python application as a Windows Service. The code I have installs and starts but I am having issues importing modules and classes which are part of the application.
Note: Python libraries are being included fine.
I have checked the python path and all the correct values are in there, (including the application directory) which is leading me to believe that the windows service could be running in a different location.
Does a python application running as a windows service get run from a different location on windows?
|
Python, Windows service Import error
| 1.2 | 0 | 0 | 1,766 |
13,953,039 |
2012-12-19T13:02:00.000
| -2 | 0 | 0 | 0 |
python,google-app-engine,gae-search
| 13,954,922 | 2 | false | 1 | 0 |
Could be a bug in the way you build your query, since it's not shown.
Could be that you don't have an index for the case that isn't working.
| 1 | 8 | 0 |
trying to figure out whether this is a bug or by design. when no query_string is specified for a query, the SearchResults object is NOT sorted by the requested column. for example, here is some logging to show the problem:
Results are returned unsorted on return index.search(query):
query_string = ''
sort_options string: search.SortOptions(expressions=[search.SortExpression(expression=u'firstname', direction='ASCENDING', default_value=u'')], limit=36)
Results are returned sorted on return index.search(query):
query_string = 'test'
sort_options string: search.SortOptions(expressions=[search.SortExpression(expression=u'firstname', direction='ASCENDING', default_value=u'')], limit=36)
This is how I'm constructing my query for both cases (options has limit, offset and sort_options parameters):
query = search.Query(query_string=query_string, options=options)
|
sort_options only applied when query_string is not empty?
| -0.197375 | 1 | 0 | 103 |
13,955,176 |
2012-12-19T15:03:00.000
| 3 | 1 | 1 | 0 |
python,string,error-handling,filepath
| 13,955,742 | 2 | false | 0 | 0 |
Use a raw string instead of a normal string, i.e.
use r'filepath'
It fixes the problem of the backslash "\".
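For example, this is exactly what happens with the path from the question:

    print('D:\ful_automate\dl')     # \f becomes a form feed, i.e. the string 'D:\x0cul_automate\dl'
    print(r'D:\ful_automate\dl')    # the raw string keeps every backslash literal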
| 1 | 9 | 0 |
I need to put a lot of filepaths in the form of strings in Python as part of my program. For example one of my directories is D:\ful_automate\dl. But Python recognizes some of the characters together as other characters and throws an error. In the example the error is IOError: [Errno 22] invalid mode ('wb') or filename: 'D:\x0cul_automate\\dl. It happens a lot for me and every time I need to change the directory name to one that may not be problematic.
|
File paths in Python in the form of string throw errors
| 0.291313 | 0 | 0 | 22,129 |
13,956,049 |
2012-12-19T15:49:00.000
| 1 | 0 | 0 | 0 |
python,django-admin
| 13,956,144 | 1 | true | 1 | 0 |
What's happening
Chrome is not fetching the page again; it is just displaying what was there "before", which Chrome stored in its cache.
Chrome is not even asking your webserver about the page, and as far as your app knows, you never went back to that page.
Don't worry about it
However, if you:
Refresh the page
Try to POST some data,
Chrome will have to hit your webserver, and Django will redirect to the login page.
| 1 | 1 | 0 |
I am new to Django. I am trying out the Django admin/auth system. I logged into the admin site first and then I logged out. After logging out and clicking the back button in my browser (Chrome), I still get the old admin page. Should it not redirect to the login page instead? If so, is there some configuration issue in my setup?
|
clicking the back button in django admin site
| 1.2 | 0 | 0 | 518 |
13,956,055 |
2012-12-19T15:49:00.000
| 1 | 1 | 0 | 0 |
c++,python,boost-python
| 13,989,547 | 1 | false | 0 | 1 |
How are you holding a "C++ strong reference" to the wrapped class ?
I'm quite rusty on boost python, but I believe it's the boost::shared_ptr's deleter presence which ensures lifetime management.
If that isn't the problem, you probably need to hold the instance in C++ in a boost::python::object.
| 1 | 6 | 0 |
I've wrapped a C++ class using Boost.Python. These Objects have strong references (boost::shared_ptr) on the C++-side, and there may be intermittent strong references in Python as well. So far, everything works well. However, if I create a python weak reference from one of the strong references, this weak reference is deleted as soon as the last python strong reference disappears. I'd like the weak reference to stay alive until the last strong reference on the C++ side disappears as well. Is it possible to achieve that?
Phrased another way: Is there a way to find out from python if a particular C++ object (wrapped by Boost.Python) still exists?
|
Boost.Python: Getting a python weak reference to a wrapped C++ object
| 0.197375 | 0 | 0 | 241 |
13,956,774 |
2012-12-19T16:27:00.000
| 6 | 1 | 0 | 1 |
python,google-app-engine
| 13,957,307 | 1 | true | 1 | 0 |
You can try one of the following ways:
Log
write a log entry to the datastore each time you call send_mail.
write a log with the logging module and check the log in the dashboard.
Mail
while sending the email, add a debug email address in the email's "bcc" field.
you can also check the "sent mail" folder of the email account used as the sender.
| 1 | 4 | 0 |
Wondering if it is possible to see a history of emails that a GAE app has sent? Need to look into the history for debugging purposes.
Note that logging when I send the email or bcc'ing a user are not options for this particular question as the period I'm curious about was in the past (since then we are bcc'ing).
|
Does a GAE app keep a log of the emails it sends?
| 1.2 | 0 | 0 | 88 |
13,957,413 |
2012-12-19T17:02:00.000
| 5 | 1 | 0 | 0 |
python,selenium
| 13,957,461 | 1 | false | 1 | 0 |
You can use class or module level setup and teardown methods instead of test level setup and teardown. Be careful with this though, as if you don't reset your test environment explicitly in each test, you have to handle cleaning everything out (cookies, history, etc) manually, and recovering the browser if it has crashed, before each test.
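A rough sketch with unittest and the Python bindings (the reset steps in setUp depend entirely on your application):

    import unittest
    from selenium import webdriver

    class SuiteWithSharedBrowser(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            cls.driver = webdriver.Firefox()      # opened once for the whole class

        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()                     # closed once after all tests

        def setUp(self):
            self.driver.delete_all_cookies()      # cheap per-test reset

        def test_homepage_loads(self):
            self.driver.get('http://example.com') # placeholder URL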
| 1 | 1 | 0 |
We have a suite of selenium tests that on setup and teardown open and close the browser to start a new test.
This approach takes a long time for tests to run as the opening and closing is slow. Is there any way to open the browser once in the constructor then reste on setup and cleanup on teardown, then on the deconstructor close the browser?
Any example would be really appreciated.
|
Selenium test suite only open browser once
| 0.761594 | 0 | 1 | 601 |
13,957,829 |
2012-12-19T17:27:00.000
| 12 | 0 | 1 | 0 |
python,keyword,raise
| 13,957,884 | 6 | false | 0 | 0 |
raise causes an exception to be raised. Some other languages use the verb 'throw' instead.
It's intended to signal an error situation; it flags that the situation is exceptional to the normal flow.
Raised exceptions can be caught again by code 'upstream' (a surrounding block, or a function earlier on the stack) to handle it, using a try, except combination.
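A small example:

    def divide(a, b):
        if b == 0:
            raise ValueError('b must not be zero')   # signal the problem to the caller
        return a / b

    try:
        divide(1, 0)
    except ValueError as err:    # code upstream decides how to handle it
        print(err)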
| 1 | 311 | 0 |
I have read the official definition of "raise", but I still don't quite understand what it does.
In simplest terms, what is "raise"?
Example usage would help.
|
How to use "raise" keyword in Python
| 1 | 0 | 0 | 435,877 |
13,958,475 |
2012-12-19T18:10:00.000
| 2 | 0 | 0 | 0 |
python,html,django,google-app-engine,datastore
| 13,958,619 | 1 | true | 1 | 0 |
If you're using CloudSQL, django models are supported, and you'll be fine.
If you're using the HRD, getting the admin pages to work would be more difficult.
django models are not supported. The app engine SDK comes with django-style forms that work with the GAE db.Model fields.
Alternatively, you can use django-nonrel which includes a translation layer that allows django models to be used with GAE. The translation layer has various limitations, most prominently, many-to-many relations aren't support. This breaks the Django permissions module which is used by the admin. I've seen some attempts documented to get around this, but I'm not sure how successful people have been.
| 1 | 2 | 0 |
I have been learning Django so I could use the Django admin form in my GAE application. I have just read that GAE doesn't support the Django models, so I am thinking that it also does not support the admin form.
So the question is: does GAE support any other 'forms' or 'reports' environment, or do you have to do everything with HTML?
|
does GAE support django admin form
| 1.2 | 0 | 0 | 67 |
13,959,257 |
2012-12-19T19:00:00.000
| 2 | 0 | 1 | 0 |
python,windows
| 13,959,312 | 3 | false | 0 | 1 |
but it shuts itself down within a second, and the program isn't executed.
Yes, the program is being executed. It is being executed so quickly that you don't have a chance to see what is happening.
Put raw_input() at the end of your code so that the console window doesn't close instantly.
| 2 | 1 | 0 |
I'm a beginning programmer in Python and I have a strange problem with my computer. When I have a .py file on my computer (containing a script that works), and I double click on it to open, the following happens: the program opens (it's the black screen view), but it shuts itself down within a second, and the program isn't executed. However, when I right click and choose "edit with IDLE", everything works as normal.
I didn't have this problem from the beginning on, but then I installed some other versions of Python and I think that's the moment when scripts didn't want to open anymore.
|
computer (windows XP) doesn't want to open .py
| 0.132549 | 0 | 0 | 218 |
13,959,257 |
2012-12-19T19:00:00.000
| 2 | 0 | 1 | 0 |
python,windows
| 13,959,538 | 3 | false | 0 | 1 |
Austin is right: you made a console app, and once it's finished (in microseconds), it closes itself.
Create a main loop or use Austin's trick to force the app not to close itself.
| 2 | 1 | 0 |
I'm a beginning programmer in Python and I have a strange problem with my computer. When I have a .py file on my computer (containing a script that works), and I double click on it to open, the following happens: the program opens (it's the black screen view), but it shuts itself down within a second, and the program isn't executed. However, when I right click and choose "edit with IDLE", everything works as normal.
I didn't have this problem from the beginning on, but then I installed some other versions of Python and I think that's the moment when scripts didn't want to open anymore.
|
computer (windows XP) doesn't want to open .py
| 0.132549 | 0 | 0 | 218 |
13,959,271 |
2012-12-19T19:00:00.000
| 4 | 0 | 1 | 0 |
python,multithreading,path
| 13,959,291 | 2 | false | 0 | 0 |
Don't use os.chdir. Instead, use os.path.join to form full paths.
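For example, instead of chdir-ing into IMG_FOLDER, build each path explicitly (the folder name comes from the question, the rest is illustrative):

    import os

    IMG_FOLDER = '/path/to/images'    # hypothetical absolute path

    def read_image(filename):
        full_path = os.path.join(IMG_FOLDER, filename)
        with open(full_path, 'rb') as f:   # no global working directory involved,
            return f.read()                # so this is safe from any thread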
| 2 | 1 | 0 |
I'm writing a Python script within which I sometimes change directories with os.chdir(IMG_FOLDER) in order to do my file operations. That works fine as long as I have one thread only (as I can go back to where I came from before leaving the function). Now, in the case of multithreading, I would require a separate "os path" instance for each thread, otherwise it might mess up my file operations, hey?
How do I best go about this?
|
separate working path for thread?
| 0.379949 | 0 | 0 | 803 |
13,959,271 |
2012-12-19T19:00:00.000
| 0 | 0 | 1 | 0 |
python,multithreading,path
| 13,963,660 | 2 | true | 0 | 0 |
The ultimate solution to this problem was that I:
use absolute paths, no more relative ones, as suggested by Perkins
when data is being received in my main thread, I write it to e.g. data.tmp, and only once the write process is completely finished is it renamed to the name that I'm scanning for in the separate thread.
| 2 | 1 | 0 |
I'm writing a Python script within which I sometimes change directories with os.chdir(IMG_FOLDER) in order to do my file operations. That works fine as long as I have one thread only (as I can go back to where I came from before leaving the function). Now, in the case of multithreading, I would require a separate "os path" instance for each thread, otherwise it might mess up my file operations, hey?
How do I best go about this?
|
separate working path for thread?
| 1.2 | 0 | 0 | 803 |
13,960,897 |
2012-12-19T20:47:00.000
| 4 | 0 | 0 | 0 |
java,python,selenium,webdriver
| 35,039,909 | 5 | false | 1 | 0 |
"If you're running selenium tests against a Java application, then it makes sense to drive your tests with Java." This is untrue. It makes no difference what the web application is written in.
Personally I prefer Python because it's equally as powerful as other languages, such as Java, and far less verbose, making code maintenance less of a headache. However, if you choose a language, don't write it as if you were programming in another language. For example, if you're writing in Python, don't write as if you were using Java.
| 5 | 6 | 0 |
I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something.
Thanks for any input!
|
Selenium Webdriver with Java vs. Python
| 0.158649 | 0 | 1 | 12,904 |
13,960,897 |
2012-12-19T20:47:00.000
| 0 | 0 | 0 | 0 |
java,python,selenium,webdriver
| 13,992,922 | 5 | false | 1 | 0 |
It really does not matter, not even the documentation; the Selenium lib is not big at all.
Moreover, if you are good at development, you'll wrap Selenium in your own code and will never use driver.find(By.whatever(description)) directly. You'd also use some standards, and By.whatever will become By.xpath only.
Personally, I prefer Python, and the reason is that my other software tests use other Python libs -> this way I can unite my tests.
| 5 | 6 | 0 |
I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something.
Thanks for any input!
|
Selenium Webdriver with Java vs. Python
| 0 | 0 | 1 | 12,904 |