Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12,635,762 |
2012-09-28T07:45:00.000
| 1 | 0 | 1 | 0 |
python,multithreading,gil
| 12,636,228 | 3 | false | 0 | 1 |
The Python interpreter is not aware of C-launched threads in any way, so they can happily churn through their own CPU time.
However, I doubt this is the correct solution for your performance problems. First try using multiple processes with the multiprocessing module. If interprocess IO turns out to be too costly after that, you can resort to trickery like C threads. This would make your program an order of magnitude more complex, so avoid it if possible.
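A minimal sketch of the multiprocessing route; crunch() is a made-up placeholder for the CPU-bound work:
from multiprocessing import Pool

def crunch(n):
    # placeholder CPU-bound computation
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    pool = Pool(4)  # 4 worker processes, each with its own GIL
    print(pool.map(crunch, [10 ** 6] * 8))
    pool.close()
    pool.join()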
| 3 | 7 | 0 |
I have heard of Python's GIL problem, which means that only one Python thread can execute Python bytecode at a time, even on a multicore machine. So a multithreaded Python program is not a good idea.
I am wondering if I can write a C extension that uses pthreads to potentially boost the performance of my program. I'd like to create a thread in my C extension and make it run in parallel with Python's main thread.
My assumption is that the Python main thread will do things more related to IO, while the pthread in my C extension will spend most of its time doing computation. The Python main thread communicates with the thread in the C extension using a queue (like a producer-consumer model).
Is there any difference between multithreading in Python and in a C extension?
|
Could a C extension for multithreaded Python boost performance?
| 0.066568 | 0 | 0 | 1,191 |
12,635,762 |
2012-09-28T07:45:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,gil
| 12,636,279 | 3 | true | 0 | 1 |
To answer your original question:
Yes, C extensions can be immune to the GIL, provided they do not call any Python API functions without the GIL held. So, if you need to communicate with the Python app, you'd need to acquire the GIL to do so. If you don't want to get your hands too dirty with the C API, you can use ctypes to call a C library (which can just use pthreads as usual), or Cython to write your C extension in a Python-like syntax.
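A hedged ctypes sketch of that option; libwork.so and heavy_compute are made-up names, and ctypes releases the GIL for the duration of the foreign call:
import ctypes

lib = ctypes.CDLL('./libwork.so')  # hypothetical C library built with pthreads
lib.heavy_compute.argtypes = [ctypes.c_int]
lib.heavy_compute.restype = ctypes.c_long

result = lib.heavy_compute(1000000)  # the GIL is released while this runs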
| 3 | 7 | 0 |
I have heard of Python's GIL problem, which means that only one Python thread can execute Python bytecode at a time, even on a multicore machine. So a multithreaded Python program is not a good idea.
I am wondering if I can write a C extension that uses pthreads to potentially boost the performance of my program. I'd like to create a thread in my C extension and make it run in parallel with Python's main thread.
My assumption is that the Python main thread will do things more related to IO, while the pthread in my C extension will spend most of its time doing computation. The Python main thread communicates with the thread in the C extension using a queue (like a producer-consumer model).
Is there any difference between multithreading in Python and in a C extension?
|
Could a C extension for multithreaded Python boost performance?
| 1.2 | 0 | 0 | 1,191 |
12,635,762 |
2012-09-28T07:45:00.000
| 0 | 0 | 1 | 0 |
python,multithreading,gil
| 12,636,431 | 3 | false | 0 | 1 |
Provided one thread runs CPU-bound while the other runs IO-bound, I don't see a problem.
The IO-bound thread will call IO routines which usually release the GIL while doing their work, effectively allowing the other thread to run.
So give the "simple" solution a try and switch only if it really doesn't work the way you want it to.
| 3 | 7 | 0 |
I have heard of Python's GIL problem, which means that only one Python thread can execute Python bytecode at a time, even on a multicore machine. So a multithreaded Python program is not a good idea.
I am wondering if I can write a C extension that uses pthreads to potentially boost the performance of my program. I'd like to create a thread in my C extension and make it run in parallel with Python's main thread.
My assumption is that the Python main thread will do things more related to IO, while the pthread in my C extension will spend most of its time doing computation. The Python main thread communicates with the thread in the C extension using a queue (like a producer-consumer model).
Is there any difference between multithreading in Python and in a C extension?
|
Could a C extension for multithreaded Python boost performance?
| 0 | 0 | 0 | 1,191 |
12,636,812 |
2012-09-28T09:00:00.000
| 0 | 0 | 0 | 0 |
python,django,dynatree
| 12,659,689 | 3 | false | 1 | 0 |
There is a difference between 'selected' and 'active'.
Selection is typically done using checkboxes, while only one node can be active (typically activated by a mouse click).
A second click on an active node will not fire an 'onActivate' event, but you can implement the 'onClick' handler to catch this and call node.deactivate().
| 3 | 1 | 0 |
I'm currently working on a Django project and I'm using Dynatree to build a tree view.
I have two trees; the first tree has items that the user can select, and the selected items are moved to the second tree. Is there a way to do this in Dynatree? There should also be an option for the user to "unselect" an item so it moves back to the first tree. So when the user unselects an item, how can I move it back to its original parent node? Thanks in advance.
|
Moving nodes from one tree to another with Dynatree
| 0 | 0 | 0 | 908 |
12,636,812 |
2012-09-28T09:00:00.000
| 1 | 0 | 0 | 0 |
python,django,dynatree
| 19,162,943 | 3 | false | 1 | 0 |
I have a dynatree on a DIV dvAllLetterTemplates (which contains a master list) and a DIV dvUserLetterTemplates to which items are to be copied.
//Select the Parent Node of the Destination Tree
var catNode = $("#dvUserLetterTemplates").dynatree("getTree").selectKey(catKey, false);
if (catNode != null) {
//Select the source node from the Source Tree
var tmplNode = $("#dvAllLetterTemplates").dynatree("getTree").selectKey(arrKeys[i], false);
if (tmplNode != null) {
//Make a copy of the source node
var ndCopy = tmplNode.toDict(false, null);
//Add it to the parent node
catNode.addChild(ndCopy);
//Remove the source node from the source tree (to prevent duplicate copies)
tmplNode.remove();
//Refresh both trees
$("#dvUserLetterTemplates").dynatree("getTree").redraw();
$("#dvAllLetterTemplates").dynatree("getTree").redraw();
}
}
| 3 | 1 | 0 |
I'm currently working on a Django project and I'm using Dynatree to build a tree view.
I have two trees; the first tree has items that the user can select, and the selected items are moved to the second tree. Is there a way to do this in Dynatree? There should also be an option for the user to "unselect" an item so it moves back to the first tree. So when the user unselects an item, how can I move it back to its original parent node? Thanks in advance.
|
Moving nodes from one tree to another with Dynatree
| 0.066568 | 0 | 0 | 908 |
12,636,812 |
2012-09-28T09:00:00.000
| 2 | 0 | 0 | 0 |
python,django,dynatree
| 15,516,606 | 3 | true | 1 | 0 |
I've solved my problem using the copy/paste concept of a context menu.
And for the second problem, I used a global variable to store the original parent node, so when the user unselects the item it moves back to its original parent.
| 3 | 1 | 0 |
I'm currently working on a Django project and I'm using Dynatree to build a tree view.
I have two trees; the first tree has items that the user can select, and the selected items are moved to the second tree. Is there a way to do this in Dynatree? There should also be an option for the user to "unselect" an item so it moves back to the first tree. So when the user unselects an item, how can I move it back to its original parent node? Thanks in advance.
|
Moving nodes from one tree to another with Dynatree
| 1.2 | 0 | 0 | 908 |
12,637,248 |
2012-09-28T09:26:00.000
| 1 | 0 | 1 | 0 |
python,indentation
| 12,637,266 | 2 | false | 0 | 0 |
Notepad++ (Windows), Sublime Text, vim (Unix/Linux), etc. There are a lot of them.
| 2 | 0 | 0 |
Another indentation question, but I didn't find this particular one anywhere, so here goes.
Is there some simple way, or some good editor, that lets me use Tab to create a 4-space indentation instead of the hard-tab jump?
I really want to be able to use Tab while coding, but I want it to conform to the 4-space standard.
|
Python indentation; Tab = 4*whitespace?
| 0.099668 | 0 | 0 | 104 |
12,637,248 |
2012-09-28T09:26:00.000
| 1 | 0 | 1 | 0 |
python,indentation
| 12,637,281 | 2 | true | 0 | 0 |
You are looking for a feature called "soft tabs"; many editors offer it. Go search for it ;)
| 2 | 0 | 0 |
Another indentation question, but I didn't find this particular one anywhere, so here goes.
Is there some simple way, or some good editor, that lets me use Tab to create a 4-space indentation instead of the hard-tab jump?
I really want to be able to use Tab while coding, but I want it to conform to the 4-space standard.
|
Python indentation; Tab = 4*whitespace?
| 1.2 | 0 | 0 | 104 |
12,639,930 |
2012-09-28T12:21:00.000
| 0 | 1 | 0 | 1 |
python,cgi,iis-7.5
| 21,917,122 | 1 | false | 0 | 0 |
I've solved the _urandom() error by changing the IIS 7.5 settings to Impersonate User = yes. I'm not a Windows admin, so I cannot elaborate.
Afterwards, import cgi inside the Python script worked just fine.
| 1 | 5 | 0 |
I’m having a very strange issue with running a python CGI script in IIS.
The script is running in a custom application pool which uses a user account from the domain for identity. Impersonation is disabled for the site and Kerberos is used for authentication.
When the account is a member of the “Domain Admins” group, everything works like a charm.
When the account is not a member of “Domain Admins”, I get an error on the very first line in the script: “import cgi”. It seems that the import eventually leads to a random number being generated, and it’s the call to _urandom() which fails with a “WindowsError: [Error 5] Access is denied”.
If I run the same script from the command prompt, logged in as the same user as the one from the application pool, everything works like a charm.
When searching the web I have found out that the _urandom on windows is backed by the CryptGenRandom function in the operating system. Somehow it seems like my python CGI script does not have access to that function when running from the IIS, while it has access to that function when run from a command prompt.
To complicate things further, when logging in as the account running the application pool and then invoking the CGI script from the web browser, it works. It turns out I have to be logged in with the same user as the application pool for it to work. As I previously stated, impersonation is disabled, but it seems like the identity is somehow passed along to the security functions in Windows.
If I modify the random.py file that calls the _urandom() function to just return a fixed number, everything works fine, but then I have probably broken a lot of the security functions in python.
So have anyone experienced anything like this? Any ideas of what is going on?
|
Python CGI in IIS: issue with urandom function
| 0 | 0 | 0 | 928 |
12,640,409 |
2012-09-28T12:52:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,google-docs-api,blobstore,google-cloud-storage
| 12,648,928 | 3 | false | 1 | 0 |
If you're already using the Files API to read and write the files, I'd recommend you use Google Cloud Storage rather than the Blobstore. GCS offers a richer RESTful API (makes it easier to do things like access control), does a number of things to accelerate serving static data, etc.
| 2 | 2 | 0 |
I have a Google App Engine app where I need to store text files that are larger than 1 MB (the maximum entity size).
I'm currently storing them in the Blobstore and I make use of the Files API for reading and writing them. Current operations include uploading them from a user, reading them to process and update, and presenting them to a user. Eventually, I would like to allow a user to edit them (likely as a Google doc).
Are there advantages to storing such text files in Google Cloud Storage, as a Google Doc, or in some other location instead of using the Blobstore?
|
Storing text files > 1MB in GAE/P
| 0.066568 | 0 | 0 | 223 |
12,640,409 |
2012-09-28T12:52:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,google-docs-api,blobstore,google-cloud-storage
| 12,641,339 | 3 | false | 1 | 0 |
Sharing data is easier in Google Docs (now Google Drive) and Google Cloud Storage. Using Google Drive you can also use the power of Google Apps Script.
| 2 | 2 | 0 |
I have a Google App Engine app where I need to store text files that are larger than 1 MB (the maximum entity size).
I'm currently storing them in the Blobstore and I make use of the Files API for reading and writing them. Current operations include uploading them from a user, reading them to process and update, and presenting them to a user. Eventually, I would like to allow a user to edit them (likely as a Google doc).
Are there advantages to storing such text files in Google Cloud Storage, as a Google Doc, or in some other location instead of using the Blobstore?
|
Storing text files > 1MB in GAE/P
| 0 | 0 | 0 | 223 |
12,640,557 |
2012-09-28T13:01:00.000
| 2 | 0 | 1 | 0 |
python,windows,distutils
| 12,640,847 | 3 | true | 0 | 0 |
If you are using pip, you can do pip install package --upgrade, but you'll see that essentially it's the same as uninstall followed by a fresh install.
| 2 | 4 | 0 |
I'm running Python on Windows and usually install packages using pre-built binaries. When I upgrade packages (e.g. from matplotlib-1.0.0 to matplotlib-1.1.1), do I need to uninstall the earlier version first?
I did a test upgrading matplotlib without uninstalling the previous version, and everything seems to be okay. matplotlib.__version__ shows '1.1.1'. So did distutils just overwrite files, potentially leaving old files cluttering my site-packages folder? Or did the installer look for previous installations, remove those first, and then install the new version?
|
Do I need to uninstall Python package before upgrading to newer version?
| 1.2 | 0 | 0 | 1,640 |
12,640,557 |
2012-09-28T13:01:00.000
| -1 | 0 | 1 | 0 |
python,windows,distutils
| 24,775,650 | 3 | false | 0 | 0 |
Yes, you need a clean install. I shot myself in the foot when upgrading from matplotlib 1.2.1 to 1.3.1 without removing the old package first.
| 2 | 4 | 0 |
I'm running Python on Windows and usually install packages using pre-built binaries. When I upgrade packages (e.g. from matplotlib-1.0.0 to matplotlib-1.1.1), do I need to uninstall the earlier version first?
I did a test upgrading matplotlib without uninstalling the previous version, and everything seems to be okay. matplotlib.__version__ shows '1.1.1'. So did distutils just overwrite files, potentially leaving old files cluttering my site-packages folder? Or did the installer look for previous installations, remove those first, and then install the new version?
|
Do I need to uninstall Python package before upgrading to newer version?
| -0.066568 | 0 | 0 | 1,640 |
12,642,624 |
2012-09-28T15:07:00.000
| 2 | 0 | 1 | 0 |
python
| 12,642,663 | 4 | false | 0 | 0 |
Globals are not really taboo; it's just that you have to remember to declare them as global in your function before you use them. In my opinion this actually makes them clearer, because the reader sees that you are explicitly using a global.
I would be more afraid of the non-Pythonistas not understanding Python globals, modifying your code and not adding the proper global declarations.
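A tiny illustration of that explicit declaration:
counter = 0

def bump():
    global counter  # without this line, counter += 1 raises UnboundLocalError
    counter += 1

bump()
print(counter)  # 1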
| 2 | 9 | 0 |
Stack Overflow has a lot of questions regarding global variables in python, and it seems to generate some amount of confusion for people coming from other languages. Scoping rules don't exactly work the way a lot of people from other backgrounds expect them to.
At the same time, code is meant to be organized not so much on class level, but on module level. So when everything is not necessarily contained in classes, state that would otherwise be found in member variables can go in module-level variables.
So my question has two parts:
1) Should I be avoiding the use of globals (specifically setting them from within functions and using the global keyword)?
2) If #1 is yes, are there common patterns where they are expected to be used?
I work in a place where lots of different languages abound and I want to mitigate confusion and make sure that pythonistas won't hate me later.
Thank you for any constructive input.
|
Frequency of global variables in python?
| 0.099668 | 0 | 0 | 1,129 |
12,642,624 |
2012-09-28T15:07:00.000
| 3 | 0 | 1 | 0 |
python
| 12,642,790 | 4 | false | 0 | 0 |
In short, yes, you should be avoiding use of the global keyword. It might be in the language for a reason, but I generally consider it a code smell -- if you have some state you want to keep around, encapsulate it in a class (see the sketch below). That's far less fragile than using global.
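A minimal sketch of the class-based alternative:
class Counter(object):
    def __init__(self):
        self.value = 0

    def bump(self):
        self.value += 1

c = Counter()
c.bump()
print(c.value)  # 1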
| 2 | 9 | 0 |
Stack Overflow has a lot of questions regarding global variables in python, and it seems to generate some amount of confusion for people coming from other languages. Scoping rules don't exactly work the way a lot of people from other backgrounds expect them to.
At the same time, code is meant to be organized not so much on class level, but on module level. So when everything is not necessarily contained in classes, state that would otherwise be found in member variables can go in module-level variables.
So my question has two parts:
1) Should I be avoiding the use of globals (specifically setting them from within functions and using the global keyword)?
2) If #1 is yes, are there common patterns where they are expected to be used?
I work in a place where lots of different languages abound and I want to mitigate confusion and make sure that pythonistas won't hate me later.
Thank you for any constructive input.
|
Frequency of global variables in python?
| 0.148885 | 0 | 0 | 1,129 |
12,643,662 |
2012-09-28T16:15:00.000
| 2 | 0 | 0 | 0 |
python,neo4j,py2neo
| 31,026,259 | 5 | false | 0 | 0 |
Well, I myself needed massive performance from Neo4j. I ended up doing the following things to improve graph performance.
Ditched py2neo, since there were a lot of issues with it. Besides, it is very convenient to use the REST endpoint provided by Neo4j; just make sure to use requests sessions.
Use raw Cypher queries for bulk inserts instead of any OGM (Object-Graph Mapper). That is crucial if you need a high-performance system.
Performance was still not enough for my needs, so I ended up writing a custom system that merges 6-10 queries together using WITH * and UNION clauses. That improved performance by a factor of 3 to 5.
Use a larger transaction size, with at least 1000 queries per transaction (see the sketch after this list).
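A hedged sketch of batching Cypher statements over Neo4j's transactional HTTP endpoint; the URL, label, and parameter names are assumptions for illustration:
import requests

session = requests.Session()  # reuse the TCP connection, as advised above
url = "http://localhost:7474/db/data/transaction/commit"
statements = [
    {"statement": "CREATE (n:Person {name: {name}})",
     "parameters": {"name": "node-%d" % i}}
    for i in range(1000)  # batch ~1000 queries into one transaction
]
resp = session.post(url, json={"statements": statements})
resp.raise_for_status()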
| 1 | 18 | 0 |
I am finding Neo4j slow to add nodes and relationships/arcs/edges when using the REST API via py2neo for Python. I understand that this is due to each REST API call executing as a single self-contained transaction.
Specifically, adding a few hundred pairs of nodes with relationships between them takes a number of seconds, running on localhost.
What is the best approach to significantly improve performance whilst staying with Python?
Would using bulbflow and Gremlin be a way of constructing a bulk insert transaction?
Thanks!
|
Fastest way to perform bulk add/insert in Neo4j with Python?
| 0.07983 | 1 | 1 | 12,651 |
12,646,305 |
2012-09-28T19:38:00.000
| 0 | 0 | 0 | 0 |
python,csv,import,postgresql-9.1
| 12,646,923 | 3 | false | 0 | 0 |
Nice chunk of data you have there. I'm not 100% sure about PostgreSQL, but MySQL at least provides SQL commands to feed a CSV directly into a table (PostgreSQL's equivalent is COPY). This bypasses per-row insert overhead and is therefore more than an order of magnitude faster than ordinary insert operations.
So probably the fastest way to go is to create a simple Python script that tells your PostgreSQL server which CSV files, and in which order, to devour into its tables.
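A hedged sketch of that script using psycopg2's COPY support; the connection string, directory, and table name are assumptions:
import glob
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()
for path in sorted(glob.glob("/data/csvs/*.csv")):
    with open(path) as f:
        # COPY streams the whole file in, skipping per-row INSERT overhead
        cur.copy_expert("COPY mytable FROM STDIN WITH CSV", f)
conn.commit()
conn.close()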
| 1 | 9 | 0 |
I see plenty of examples of importing a CSV into a PostgreSQL db, but what I need is an efficient way to import 500,000 CSVs into a single PostgreSQL db. Each CSV is a bit over 500KB (so a grand total of approx. 272GB of data).
The CSVs are identically formatted and there are no duplicate records (the data was generated programmatically from a raw data source). I have been searching and will continue to search online for options, but I would appreciate any direction on getting this done in the most efficient manner possible. I do have some experience with Python, but will dig into any other solution that seems appropriate.
Thanks!
|
Efficient way to import a lot of csv files into PostgreSQL db
| 0 | 1 | 0 | 10,104 |
12,652,336 |
2012-09-29T11:32:00.000
| 3 | 0 | 0 | 1 |
python,tornado,vert.x,sockjs
| 13,562,205 | 2 | false | 0 | 0 |
Vert.x has built-in clustering support. I haven't tried it with many nodes, but it seemed to work well with a few. Internally it uses Hazelcast to organise the nodes.
Vert.x also runs on a JVM, which already has many monitoring/admin tools that might be useful. So Vert.x seems to me like the "batteries included" solution.
| 1 | 2 | 0 |
I've been looking for solutions for supporting WebSockets via SockJS using an independent Python server, and so far I have found two similar ones.
I need to write a complex, scalable web socket based web application, and I'm afraid it will be hard to scale Tornado, and it seems Vertx is better with horizontal scaling of web sockets.
I also understand that Redis can be used in conjunction with Tornado for scaling a pub/sub system horizontally, and HAproxy for scaling the SockJS requests.
Between Vertx and Tornado, what is the preferred solution for writing a scalable system which supports SockJS?
|
Vertx SockJS server vs sockjs-tornado
| 0.291313 | 0 | 1 | 1,306 |
12,652,475 |
2012-09-29T11:52:00.000
| 0 | 1 | 1 | 0 |
python-2.7,runtime
| 12,654,018 | 1 | false | 0 | 0 |
The import statement is like any other executable statement, and can be executed at any point during execution.
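A minimal sketch of that idea; the module name and function are made up, and this assumes the working directory is on sys.path:
import importlib

# write a new module to disk while the program is running
with open('plugin.py', 'w') as f:
    f.write('def run():\n    return "hello from plugin"\n')

plugin = importlib.import_module('plugin')  # available in Python 2.7
print plugin.run()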
| 1 | 0 | 0 |
What is the best way to load a Python module/file after the whole Python program is up and running? My current idea is to save the new Python file to disk and call import on it. I am working with Python 2.7.
Each new python file will have pre-known functions, that will be called by the already running application.
|
loading python after the application is up and running
| 0 | 0 | 0 | 23 |
12,653,026 |
2012-09-29T13:20:00.000
| -2 | 1 | 0 | 0 |
python,embedded
| 12,653,117 | 2 | false | 0 | 0 |
OOP is generally not suitable for embedded development. This is because embedded hardware is limited in memory and OOP is unpredictable with memory usage. It is possible, but you are forced into static objects and methods to have any kind of reliability.
| 2 | 0 | 0 |
Hi all! I've just become interested in embedded development, and as everyone knows, C is the most popular programming language for it. But I prefer to use Python. Is Python suited to any tasks in embedded development or automatic control? And are there any books on this worth recommending? Thanks!
|
Python on automatic control and embedded development
| -0.197375 | 0 | 0 | 664 |
12,653,026 |
2012-09-29T13:20:00.000
| 5 | 1 | 0 | 0 |
python,embedded
| 12,654,265 | 2 | false | 0 | 0 |
The reason C (and C++) are prevalent in embedded systems is that they are systems-level languages with minimal run-time environment requirements, and they can run stand-alone (bare metal), with a simple RTOS kernel, or within a complete OS environment. Both are also almost ubiquitous, being available for most 8, 16, 32 and 64 bit architectures. For example, you can write bootstrap and OS code in C or C++, whereas Python needs both of those already in place just to run.
Python, on the other hand, is an interpreted language (although it is possible to compile it, you would also need cross-compilation tools or an embedded target that could support self-hosted development for that), and a significant amount of system-level code (usually an OS) as well as the interpreter itself is required to support it. All this precludes, for example, deployment on very small systems where C and even C++ can deliver.
Moreover, Python would probably be unsuitable for hard real-time systems due to its intrinsically slower execution and non-deterministic behaviour with respect to memory management.
If your embedded system happened to be running Linux it would of course be possible to use Python, but the number of applications to which it was suited might be limited; and since Linux itself is somewhat resource-hungry, you would probably not deploy it if the only reason was to be able to run Python.
| 2 | 0 | 0 |
Hi all! I've just become interested in embedded development, and as everyone knows, C is the most popular programming language for it. But I prefer to use Python. Is Python suited to any tasks in embedded development or automatic control? And are there any books on this worth recommending? Thanks!
|
Python on automatic control and embedded development
| 0.462117 | 0 | 0 | 664 |
12,654,391 |
2012-09-29T16:30:00.000
| 3 | 0 | 1 | 0 |
python,binary
| 12,654,491 | 3 | true | 0 | 0 |
You didn't specify how your float columns are represented in Python. The cPickle module is a fast general solution, with the drawback that it creates files readable only from Python, and that it should never be allowed to read untrusted data (received from the network). It is likely to just work with all regular datatypes, including numpy arrays.
If you can use numpy and store your data in numpy arrays, look into numpy.save and numpy.savetxt and the corresponding loading functions, which should offer performance superior to manually extracting the data.
array.array also has methods for writing array data to file, with the drawback that the array data is written in the native format and cannot be read from a different architecture.
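A hedged sketch of the numpy route mentioned above; the file names are assumptions:
import numpy as np

data = np.loadtxt('model.txt')  # parse the ASCII file once (slow)
np.save('model.npy', data)      # write a fast binary copy

xy = np.load('model.npy')       # fast on every later run
x, y = xy[:, 0], xy[:, 1]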
| 1 | 1 | 0 |
I am working with information from big models, which means I have a lot of big ASCII files with two float columns (let's say X and Y). However, whenever I have to read these files it takes a long time, so I thought maybe converting them to binary files would make the reading process much faster.
I converted my ASCII files into binary files using the uu.encode(ascii_file, binary_file) command, and it worked quite well (actually, I tested the decode part and recovered the same files).
My question is: is there any way to read the binary files directly into Python and get the data into two variables (x and y)?
Thanks!
|
how to read a binary file into variables in python
| 1.2 | 0 | 0 | 1,455 |
12,654,772 |
2012-09-29T17:22:00.000
| 61 | 0 | 0 | 1 |
python
| 31,773,158 | 2 | false | 0 | 0 |
Of course there IS a way to create files without opening them. It's as easy as calling os.mknod("newfile.txt"). The only drawback is that this call requires root privileges on OS X.
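A portable, root-free alternative sketch: opening in append mode creates the file if it is missing and never truncates an existing one.
import os

def touch(path):
    with open(path, 'a'):
        os.utime(path, None)  # refresh the mtime, like the touch command

touch('newfile.txt')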
| 1 | 311 | 0 |
I'd like to create a file with path x using python. I've been using os.system(y) where y = 'touch %s' % (x). I've looked for a non-directory version of os.mkdir, but I haven't been able to find anything. Is there a tool like this to create a file without opening it, or using system or popen/subprocess?
|
Create empty file using python
| 1 | 0 | 0 | 452,830 |
12,656,098 |
2012-09-29T20:11:00.000
| 5 | 1 | 0 | 0 |
c++,python,performance,cpu-speed
| 12,656,127 | 2 | true | 0 | 0 |
With that many connections, your server will be I/O bound. The frequently cited speed differences between languages like C and C++ and languages like Python and (say) Ruby lie in the interpreter and boxing overhead which slow down computation, not in the realm of I/O.
Not only can you make good and reasonable use of concurrency (both via processes and threads; the GIL is released during I/O and thus does not matter much for I/O-bound programs), there is also a wealth of asynchronous servers. In addition, web servers in general have much better Python integration (e.g. mod_wsgi for Apache) than C and C++. This frees you from writing your own server loop, socket management, etc., which you likely won't do as well as the major servers anyway. This is assuming we're talking about a web service, and not something more arcane which Apache etc. cannot do out of the box.
| 2 | 3 | 0 |
I have to develop a server that has to handle a lot of connections to receive and send small files. The question is whether the performance gain with C++ is worth the time spent developing the code, or whether it is better to use Python and profile the code from time to time to speed it up. Maybe the question is a little abstract without giving a number of connections, but I don't really know: at least 10,000 connections/minute to update client status.
|
C++ vs Python server side performance
| 1.2 | 0 | 0 | 3,596 |
12,656,098 |
2012-09-29T20:11:00.000
| 2 | 1 | 0 | 0 |
c++,python,performance,cpu-speed
| 12,656,117 | 2 | false | 0 | 0 |
I'd expect the server time to be dominated by I/O: network, disk, etc. You'd want to prove that the CPU consumption of the Python program is problematic, and that you've grasped all the low-hanging CPU fruit, before considering a change.
| 2 | 3 | 0 |
I have to develop a server that has to handle a lot of connections to receive and send small files. The question is whether the performance gain with C++ is worth the time spent developing the code, or whether it is better to use Python and profile the code from time to time to speed it up. Maybe the question is a little abstract without giving a number of connections, but I don't really know: at least 10,000 connections/minute to update client status.
|
C++ vs Python server side performance
| 0.197375 | 0 | 0 | 3,596 |
12,658,427 |
2012-09-30T03:39:00.000
| 3 | 0 | 0 | 0 |
python,django,installation
| 12,659,723 | 5 | false | 1 | 0 |
I am not familiar with GoDaddy's setup specifically, but in general, you cannot install Django on shared hosting unless it is supported specifically (a la Dreamhost). So unless GoDaddy specifically mentions Django (or possibly mod_wsgi or something) in their documentation, which is unlikely, you can assume it is not supported.
Theoretically you can install Python and run Django from anywhere you have shell access and sufficient permissions, but you won't be able to actually serve your Django site as part of your shared hosting (i.e., on port 80 and responding to your selected hostname) because you don't have access to the webserver configuration.
You will need either a VPS (GoDaddy offers them but it's not their core business; Linode and Rackspace are other options), or a shared host that specifically supports Django (e.g. Dreamhost), or an application host (Heroku or Google App Engine). I recommend Heroku personally, especially if you are not confident in setting up and maintaining your own webserver.
| 1 | 32 | 0 |
I have never deployed a Django site before. I am currently looking to set it up in my deluxe GoDaddy account. Does anyone have any documentation on how to go about installing python and django on GoDaddy?
|
Installing a django site on GoDaddy
| 0.119427 | 0 | 0 | 61,875 |
12,659,719 |
2012-09-30T08:08:00.000
| 4 | 0 | 0 | 0 |
python,windows,winapi,python-2.7,pywin32
| 12,659,846 | 1 | true | 0 | 1 |
When someone presses the Ctrl+C key in Explorer, Explorer calls OleSetClipboard() with an IDataObject containing various formats, which may include CF_FILES, CFSTR_FILECONTENTS and CFSTR_SHELLIDLIST.
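On the Python side, a hedged pywin32 sketch for reading the copied file paths; CF_HDROP is the standard clipboard format for file lists:
import win32clipboard

win32clipboard.OpenClipboard()
try:
    if win32clipboard.IsClipboardFormatAvailable(win32clipboard.CF_HDROP):
        paths = win32clipboard.GetClipboardData(win32clipboard.CF_HDROP)
        for p in paths:  # a tuple of absolute file paths
            print(p)
finally:
    win32clipboard.CloseClipboard()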
| 1 | 1 | 0 |
I am designing a copy/paste application for Windows using Python. Now I want to register my application with a hotkey for "Ctrl+V", so that when anyone presses "Ctrl+V", the paste is done through my application and not through Windows' default copy/paste mechanism. But I don't know how to get the list of file paths that are to be copied, nor the path of the target window where the paste is to be done. So I want to know what actually happens when someone presses the Ctrl+C key in Windows Explorer.
|
Windows:What actually happens when Ctrl+C is pressed in windows explorer
| 1.2 | 0 | 0 | 339 |
12,660,516 |
2012-09-30T10:24:00.000
| 2 | 0 | 0 | 0 |
pyramid,pickle,python-3.2
| 12,673,353 | 1 | true | 1 | 0 |
Pyramid's debug toolbar keeps objects alive. Deactivating it fixes most memory leak problems. The leak that set off my search for errors in Pyramid doesn't seem to be a problem with Pyramid at all.
| 1 | 3 | 0 |
I've got a Pyramid view that's misbehaving in an interesting way. What the view does is grab a pretty complex object hierarchy from a file (using pickle), does a little processing, then renders an html form. Nice and simple.
Setup:
I'm running Ubuntu 12.04 64bit, Python3.2, Pyramid 1.3.3, SQLAlchemy 0.7.8 and using the standard waitress server.
Symptoms
I was having some efficiency issues, so I used the system monitor to try to see what was up, and found that while Pyramid was doing its processing for the view I described, my RAM usage rose steadily. As the HTML form was displayed in my browser the RAM usage leveled off, but it didn't fall. Reloading the view caused the RAM usage to grow steadily from where it left off. If I keep doing this, all my RAM is used up and everything grinds to a halt.
If I kill the server, the RAM usage drops back down immediately.
Question
What's going on? It's obvious that memory isn't being released between view renderings, but why is this happening? And how can I make it stop? I even tried calling del on stuff before returning from the view and nothing changed.
|
Pyramid app not releasing memory between views
| 1.2 | 0 | 0 | 223 |
12,663,774 |
2012-09-30T18:20:00.000
| 0 | 1 | 0 | 1 |
python,centos
| 12,663,789 | 3 | false | 0 | 0 |
Add #!/usr/bin/env python at the head of your script file.
It tells your system to search for the Python interpreter and execute your script with it.
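A minimal example; the file name comes from the question, and the script must be made executable once:
#!/usr/bin/env python
# save as blabla.py, then run: chmod +x blabla.py
# after that, ./blabla.py works from the shell
print('hey')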
| 1 | 0 | 0 |
python blabla.py will execute, but ./blabla.py gives me a "no such file or directory" error on CentOS 6.3.
/usr/bin/env python does open up Python properly.
I am new to Linux and really would like to get this working. Could someone help?
Thanks in advance!
Note: thanks to all the fast replies!
I did have the #!/usr/bin/env python line at the beginning.
which python gives /usr/bin/python as an output.
And the chmod +x was done as well.
The exact error was "no such file or directory" for ./blabla.py, but python blabla.py runs fine.
|
/usr/bin/env python opens up python, but ./blabla.py does not execute
| 0 | 0 | 0 | 2,334 |
12,664,713 |
2012-09-30T20:11:00.000
| 0 | 0 | 0 | 0 |
python,django,heroku,celery
| 12,664,898 | 1 | false | 1 | 0 |
Based on your use-case description you do not need a scheduler, so APScheduler will not match your requirements well.
Do you have a web dyno besides your worker dyno? The usual design pattern for this type of processing is to set up a control thread or control process (your web dyno) that accepts requests. These requests are then placed on a request queue.
This queue is read by one or more worker threads or worker processes (your worker dyno). I have not worked with Celery, but it looks like a match for your requirements. How many worker threads or worker dynos you will need is difficult to determine from your description. You will also need to specify how many update requests you need to process per second, and whether each request is CPU-bound or IO-bound.
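A minimal Celery sketch of that pattern; the broker URL is an assumption, and update1 is one of the question's own function names with a placeholder body:
from celery import Celery

app = Celery('updates', broker='redis://localhost:6379/0')

@app.task
def update1():
    pass  # fetch and parse HTML here

# enqueue it; a worker started with
#   celery -A tasks worker --concurrency=4
# runs up to 4 tasks at once
update1.delay()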
| 1 | 0 | 0 |
I am working on a project, to be deployed on Heroku in Django, which has around 12 update functions. They each take around 15 minutes to run. Let's call them update1(), update2()...update10().
I am deploying with one worker dyno on Heroku, and I would like to run up to n or more of these at once (they are not really computationally intensive, they are all HTML parsers, but the data is time-sensitive, so I would like them to be called as often as possible).
I've read a lot of Celery and APScheduler documentation, but I'm not really sure which is the best/easiest for me. Do scheduled tasks run concurrently if their times overlap with one another (i.e. if I run one every 2 minutes and another every 3 minutes), or do they wait until each one finishes?
Is there any way I can queue these functions so that at least a few of them are running at once? What is the suggested number of simultaneous calls for this use case?
|
Scheduling update functions in Django and Heroku?
| 0 | 0 | 0 | 164 |
12,666,278 |
2012-10-01T00:22:00.000
| 8 | 0 | 0 | 0 |
python,user-interface,wxpython,pyqt,tkinter
| 12,667,986 | 3 | true | 0 | 1 |
If you're running Ubuntu, PyQt will be installed by default. Most Linux distros will have one of PyGTK or PyQt installed by default. wxPython was most likely installed on your Ubuntu box as a dependency of some other package in your system.
If your target market is Linux, you can just create a deb or rpm package and that'll take care of the dependencies for your application.
For Windows and Mac (and even Linux, if you're so inclined) you could bundle the Python interpreter with your application and its libraries into a native executable format such as .exe, .dmg or .elf using libraries like cx_Freeze, py2exe and py2app. Once this is done, your user will not have to install Python or any of your libraries.
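A minimal cx_Freeze setup.py sketch; app.py and the metadata are placeholder names. Running "python setup.py build" produces a folder containing the executable plus the Python runtime, so end users install nothing:
from cx_Freeze import setup, Executable

setup(
    name='MyGuiApp',
    version='0.1',
    description='hypothetical GUI application',
    executables=[Executable('app.py')],
)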
| 2 | 8 | 0 |
My question is about the ease of distributing a GUI app across the platforms (Mac/Linux/Windows), and I want to know which toolkit makes the user's job easiest.
My current understanding is that a Tkinter app is the easiest for the users (to install), because as long as the user has installed Python on her box, my application should be ready to run there.
For a GUI app written in wxPython or PyQt, the user needs to install wxWidgets or Qt first, which is an extra step, and then install my GUI app. (But my Ubuntu box seems to have the wxWidgets and Qt libraries installed by default; is that the norm, or is the Ubuntu distro just more user-friendly? I guess Windows and Mac probably do not provide them by default, i.e. the users need to download and install them as an extra step.)
|
Python GUI App Distribution: written in wxPython, TKinter or QT
| 1.2 | 0 | 0 | 4,379 |
12,666,278 |
2012-10-01T00:22:00.000
| 4 | 0 | 0 | 0 |
python,user-interface,wxpython,pyqt,tkinter
| 12,666,368 | 3 | false | 0 | 1 |
Tkinter is the only one that's included with Python. wxPython and PyQt need both the wxWidgets or Qt libraries and the wxPython or PyQt bindings to be installed on the system.
However, Tk does not look very nice. If you're already making the user install Python, you could just as well have them install the libraries too. (Or maybe include an installer or something.)
| 2 | 8 | 0 |
My question is about the ease of distributing a GUI app across the platforms (Mac/Linux/Windows), and I want to know which toolkit makes the user's job easiest.
My current understanding is that a Tkinter app is the easiest for the users (to install), because as long as the user has installed Python on her box, my application should be ready to run there.
For a GUI app written in wxPython or PyQt, the user needs to install wxWidgets or Qt first, which is an extra step, and then install my GUI app. (But my Ubuntu box seems to have the wxWidgets and Qt libraries installed by default; is that the norm, or is the Ubuntu distro just more user-friendly? I guess Windows and Mac probably do not provide them by default, i.e. the users need to download and install them as an extra step.)
|
Python GUI App Distribution: written in wxPython, TKinter or QT
| 0.26052 | 0 | 0 | 4,379 |
12,666,421 |
2012-10-01T00:55:00.000
| 7 | 0 | 1 | 0 |
python
| 12,666,425 | 2 | false | 0 | 0 |
Why not [mystring]? It uses the list literal to create a list with just the value of mystring inside.
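For instance:
mystring = 'banana'
result = [mystring]
print(result)  # ['banana']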
| 1 | 3 | 0 |
Let's say I have a string like 'banana' and I'd like to convert it into the list ['banana'] in Python.
I tried ''.join(list('banana')) and other tricks, and I'm still back to square one! Thanks.
|
convert string object to list object in python
| 1 | 0 | 0 | 6,460 |
12,668,710 |
2012-10-01T06:54:00.000
| 5 | 0 | 0 | 0 |
python,plone
| 12,671,365 | 1 | true | 1 | 0 |
In the example above, sample.pdf is presumably a File created in Plone. In this case, the URL without /view will render the file for download, and the URL with /view will render a Plone page with a link to download it. This is standard behaviour.
It isn't really possible to stop people from downloading the PDF. You can modify the File FTI in portal_types (go to portal_types/File in the ZMI and change the method aliases tab). If you change the "(Default)" alias to be the same as the "view" one, it will behave the same. Note that this will affect all files.
Martin
| 1 | 0 | 0 |
After using atreal.richfile.preview in Plone, we get the preview of a PDF file at a URL like http://localhost:8090/plone/sample.pdf/view. If we delete part of the URL, i.e. "/view", and enter http://localhost:8090/plone/sample.pdf in the browser, the PDF can still be viewed, and it becomes printable and copyable. How can I modify things so that the PDF is not displayed when the URL is shortened this way? Using Plone 4.1. Which template and what code need to be added/edited? Please guide.
|
How to modify the url in plone so that we don't get the same page if it is modified?
| 1.2 | 0 | 0 | 298 |
12,669,115 |
2012-10-01T07:33:00.000
| 1 | 0 | 0 | 0 |
python,django,web-services,web-applications,flask
| 12,674,458 | 1 | false | 1 | 0 |
I tend to use Django for "big" projects and Flask for projects requiring less than ~300 lines of code.
The challenges in moving to Flask are, in my view, having to go look for the extensions for forms, mail, databases, etc. when you need them, and referring to several different sets of documentation. But that is naturally the price of flexibility.
One of the key issues I faced was deployment with Fabric. I was used to deploying very quickly with django-fab-deploy, and it took me a little while to set up a comparable generic deployment solution for Flask.
| 1 | 0 | 0 |
This question is for programmers who have used both Django and Flask in real projects.
What challenges do you face moving to Flask?
I'm interested in situations where there may be unexpected difficulties (after using Django).
Specific examples are welcome.
|
What are the non-obvious problems you encounter, moving from Django to Flask?
| 0.197375 | 0 | 0 | 170 |
12,669,938 |
2012-10-01T08:39:00.000
| 0 | 0 | 1 | 1 |
python,cx-freeze
| 36,247,376 | 1 | false | 0 | 0 |
I think that without a certificate it's impossible.
| 1 | 9 | 0 |
I wrote some scripts for a customer. To avoid installing Python and the dependent packages, I packed everything into 3 exe files using cx_Freeze. The first is a Windows service, which does most of the work; the second is a settings wizard; the third is a client that talks to the service. Now I face this task: after the package (made using bdist_msi) is installed, I need to register the service in the system and run the wizard. How do I do that?
|
cx_Freeze. How install service and execute script after install
| 0 | 0 | 0 | 690 |
12,670,874 |
2012-10-01T09:45:00.000
| 9 | 0 | 0 | 0 |
python,pyqt,double-click,qtreeview,qfilesystemmodel
| 15,352,784 | 2 | false | 0 | 1 |
I don't know if you have this in the Python version, but in C++ Qt you just set the edit triggers on the QAbstractItemView:
void setEditTriggers ( EditTriggers triggers )
| 1 | 4 | 0 |
Simple question. I'd like to use F2 or Enter for rename, and double click to open a file.
Using self.treeView.doubleClicked.connect(self.doubleclick) I can do things in my self.doubleClick method, but the renaming is still triggered.
The model is not read-only (model.setReadOnly(False)).
|
How to disable the double click file renaming behavior on QTreeView and QFileSystemModel in PyQt?
| 1 | 0 | 0 | 3,516 |
12,673,450 |
2012-10-01T12:39:00.000
| 2 | 0 | 0 | 0 |
python,win32com
| 12,674,538 | 1 | false | 0 | 0 |
So here is the simple solution.
I checked the default params for the trigger and then I saw that Flags was set to 4, which means DISABLED.
It seems that's the default setting for a new trigger on a task.
| 1 | 0 | 0 |
Here is my problem. When I create a new scheduled task using win32com in Python, there is no next run time for the task. It says 'never' in the Task Scheduler GUI.
My workflow for creating tasks:
try to make a new task; if that fails, get the existing one for update,
create daily triggers for the task,
save it all.
Any advice?
|
Scheduled task - no next run time
| 0.379949 | 0 | 0 | 265 |
12,675,471 |
2012-10-01T14:41:00.000
| 2 | 1 | 0 | 0 |
python,pdf,encoding
| 13,703,110 | 2 | true | 0 | 0 |
Yes.
This will happen when custom font encodings have been used (e.g. Identity-H, Identity-V, etc.) but the fonts have not been embedded properly.
PDFMiner gives garbage output in such cases because the encoding is required to interpret the text.
| 1 | 2 | 0 |
I am using Python 2.7 and PDFMiner to extract text from PDFs. I noticed that sometimes PDFMiner gives me words with strange letters, while PDF viewers don't. Also, for some PDF docs the result returned by PDFMiner and other PDF viewers is the same (strange), but there are docs where PDF viewers can recognize the text (copy-paste works). Here is an example of the returned values:
from the PDF viewer: فتــح بـــاب ا�ستيــراد البيــ�ض والدجــــاج المجمـــد
from PDFMiner: óªéªdG êÉ````LódGh ¢†``«ÑdG OGô``«à°SG ÜÉH í``àa
So my question is: can I get the same result as a PDF viewer, and what is wrong with PDFMiner? Is it missing encodings I don't know about?
|
PDFminer gives strange letters
| 1.2 | 0 | 0 | 1,267 |
12,676,194 |
2012-10-01T15:21:00.000
| 1 | 1 | 0 | 0 |
python,file,byte,python-2.2
| 12,677,772 | 1 | false | 0 | 0 |
Your best option is to manipulate the file directly; this will work regardless of Python version, i.e., 1.x, 2.x, 3.x. Here is a rough outline to get you started (a sketch follows the list):
open the file with 'r+b' (read/write; on POSIX systems you can also just use 'r+')
go to the specific byte in question (use the file's seek() method)
write out the single byte you want changed (use the file's write() method)
close the file (use the file's close() method)
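The sketch, with a made-up path and offset; these calls all exist in Python 2.2:
f = open('data.bin', 'r+b')
f.seek(0x1A)        # jump to the byte's address
f.write(chr(0xFF))  # overwrite exactly one byte
f.close()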
| 1 | 1 | 0 |
I need to write a program that will change bytes in a file at specific addresses. I can only use Python 2.2 (it's a game's module), so... I read once about mmap, but I can't find it in Python 2.2.
|
How to change byte on specific addres
| 0.197375 | 0 | 0 | 191 |
12,683,630 |
2012-10-02T02:08:00.000
| 1 | 1 | 0 | 0 |
python,django,api,rest,tastypie
| 12,686,954 | 2 | true | 1 | 0 |
You can add a new field to the resource and dehydrate it with dehydrate_field_name().
| 1 | 0 | 0 |
How can I add custom fields (in this case, meta codes) to a dispatch_list in Tastypie?
|
Tastypie: Add meta codes to dispatch_list
| 1.2 | 0 | 0 | 883 |
12,688,766 |
2012-10-02T10:36:00.000
| 1 | 0 | 0 | 0 |
python,eclipse,syntax-highlighting
| 12,962,250 | 1 | true | 0 | 0 |
Unfortunately the keywords are not currently customizable.
(They're hard-coded in org.python.pydev.editor.PyCodeScanner.)
You can grab the code and modify it yourself...
I don't know of any attempt to integrate an Eclipse editor with Pygments, but I guess it could be possible.
| 1 | 2 | 0 |
Can you think of some way to realize custom source code highlighting in Eclipse/Pydev?
I'd like to highlight some tokens that are usually not distinguished.
Is there a way to do change the highlighting in Eclipse and/or Pydev? I mean not just change colors, but really introduce new elements.
Or can I incorporate pygments into Eclipse?
Or if all this is hard, what is the easiest way to use another editor with pygments? Can I even embed another editor in Eclipse?
|
Custom highlighting with Eclipse/Python?
| 1.2 | 0 | 0 | 256 |
12,691,551 |
2012-10-02T13:48:00.000
| 1 | 0 | 1 | 0 |
python,datetime
| 12,691,704 | 13 | false | 0 | 0 |
This will take some work, since there isn't any defined construct for holidays in any library (to my knowledge at least). You will need to create your own enumeration of those.
Checking for weekend days is done easily by calling .weekday() on your datetime object: it returns 5 or 6 for Saturday and Sunday.
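A hedged sketch combining both checks; the holiday set must be supplied by the caller, since no standard library defines one:
import datetime

def add_business_days(start, n, holidays=frozenset()):
    day = start
    while n > 0:
        day += datetime.timedelta(days=1)
        if day.weekday() < 5 and day not in holidays:  # Mon-Fri and not a holiday
            n -= 1
    return day

print(add_business_days(datetime.date(2012, 10, 2), 3))  # 2012-10-05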
| 1 | 30 | 0 |
I'm trying to add n (an integer) working days to a given date; the addition has to skip holidays and weekends (they are not included in the working days).
|
Add n business days to a given date ignoring holidays and weekends in python
| 0.015383 | 0 | 0 | 42,545 |
12,693,054 |
2012-10-02T15:08:00.000
| 1 | 0 | 0 | 0 |
c++,python,ruby-on-rails,web-applications,web-crawler
| 12,697,280 | 3 | false | 1 | 0 |
Adding to mechanize:
If your page has a JavaScript component that mechanize can't handle, Selenium drives an actual web browser. If you're hellbent on using Ruby, you can also use WATIR, but Selenium has both Ruby and Python bindings.
| 2 | 3 | 0 |
I have currently been assigned to create a web crawler to automate some reporting tasks I do. This web crawler would have to log in with my credentials, search for specific things in different fields (some with respect to the current date), download CSVs that contain the data if any is available, parse the CSVs quickly to get a quick count, create an email with the CSVs attached, and send it.
I currently know C++ and Python very well and am in the process of learning C, but I was told that Ruby or Ruby on Rails was a great way to do this. Is Ruby on Rails solely for creating web apps? If so, does my task fit the description of a web app, or can I just make a standalone program that runs and does it all?
I would like to know which language would be the easiest to code this in (has easy-to-use modules) and has good libraries/modules for these tasks. What would I need to take into account before undertaking this task? I have until the end of December to make this, and I only work here around 12 hours per week (I'm a student, and this is for my internship). Is this feasible?
Thanks.
|
Easiest way to tackle this web crawling task?
| 0.066568 | 0 | 1 | 321 |
12,693,054 |
2012-10-02T15:08:00.000
| 0 | 0 | 0 | 0 |
c++,python,ruby-on-rails,web-applications,web-crawler
| 12,693,218 | 3 | false | 1 | 0 |
While this is not a great Stack Overflow question, since you are a student and it's for an internship, it seems like it would be in poor form to flag it or down-vote it. :)
Basically, you can pretty much accomplish this task with any of the languages you listed. If you want to learn Ruby as part of your internship experience, then this might be a great project and a way of learning it. But Python would work great also (you could probably use Mechanize). I should probably disclose that I'm a Python developer and I love it. I think it's a great language with great support and tools. I'm sure the Ruby guys feel the same about their language. Again, I think it comes down to what experience you want to take away from your internship. Best of luck.
| 2 | 3 | 0 |
I have currently been assigned to create a web crawler to automate some reporting tasks I do. This web crawler would have to log in with my credentials, search for specific things in different fields (some with respect to the current date), download CSVs that contain the data if any is available, parse the CSVs quickly to get a quick count, create an email with the CSVs attached, and send it.
I currently know C++ and Python very well and am in the process of learning C, but I was told that Ruby or Ruby on Rails was a great way to do this. Is Ruby on Rails solely for creating web apps? If so, does my task fit the description of a web app, or can I just make a standalone program that runs and does it all?
I would like to know which language would be the easiest to code this in (has easy-to-use modules) and has good libraries/modules for these tasks. What would I need to take into account before undertaking this task? I have until the end of December to make this, and I only work here around 12 hours per week (I'm a student, and this is for my internship). Is this feasible?
Thanks.
|
Easiest way to tackle this web crawling task?
| 0 | 0 | 1 | 321 |
12,696,151 |
2012-10-02T18:34:00.000
| 0 | 0 | 1 | 1 |
python
| 15,816,458 | 2 | false | 0 | 0 |
I also had this problem. Like mottyg1 said, it happens when the Python script is run from a directory containing non-English characters. I can't change the directory name, though, and my Python script needed to be in the directory in order to manipulate the filenames there. So my workaround was simply to move the script to a different folder and then pass in the directory containing the files to be changed.
So, to be clear, the problem only occurs when the directory containing the Python file has non-English characters; Python can still handle such characters in its functions, at least as far as I've been able to tell.
| 1 | 12 | 0 |
When running any Python script (by double clicking a .py file on Windows 7) I'm getting a Python: failed to set __main__.__loader__ error message. What to do?
More details:
The scripts work on other machines.
The only version of Python installed on the machine on which the scripts don't work is 3.2.
I get the same error when trying to run the script from the Windows shell (cmd).
Here's an example for the content of a file named "hey.py" that I failed to run on my machine:
print('hey')
|
failed to set __main__.__loader__ in Python
| 0 | 0 | 0 | 5,239 |
12,697,595 |
2012-10-02T20:16:00.000
| 1 | 0 | 0 | 0 |
python,django,migration,django-south
| 12,697,732 | 1 | true | 1 | 0 |
--initial is not about detecting changes; you shouldn't expect it to detect them.
It takes the current state of the tables and exports them as create-table statements to get your first migration off the ground, such that on a new install you simply run "python manage.py migrate" to build your tables from start to finish.
No matter how many times you run --initial, it will generate these migrations with the full table output. Again, it is not about detecting anything; it simply outputs the current state of the tables and is intended to be used as the "initial/first" migration.
| 1 | 0 | 0 |
I'm trying to run a migration with South.
But when I run manage.py schemamigration <my_app> --initial,
it makes wrong modifications, reporting "Added model treinoclub_app.Endereco.
Added model treinoclub_app.Academia".
But I didn't make any changes to these tables.
But I didn't make any changes for this table.
|
south migration making incorrect initial schemamigration
| 1.2 | 0 | 0 | 91 |
12,698,212 |
2012-10-02T20:57:00.000
| 1 | 0 | 0 | 1 |
python,eclipse,celery
| 19,998,790 | 5 | false | 1 | 0 |
I create a management command to test the task; I find it easier than running it from the shell.
| 1 | 29 | 0 |
I need to debug Celery task from the Eclipse debugger.
I'm using Eclipse, PyDev and Django.
First, I open my project in Eclipse and put a breakpoint at the beginning of the task function.
Then, I'm starting the Celery workers from Eclipse by Right Clicking on manage.py from the PyDev Package Explorer and choosing "Debug As->Python Run" and specifying "celeryd -l info" as the argument. This starts MainThread, Mediator and three more threads visible from the Eclipse debugger.
After that I return back to the PyDev view and start the main application by Right Click on the project and choosing Run As/PyDev:Django
My issue is that once the task is submitted via mytask.delay(), it doesn't stop at the breakpoint. I put some traces within the task's code, so I can see that it was executed in one of the worker threads.
So, how do I make the Eclipse debugger stop at a breakpoint placed within the task when it is executed in a Celery worker thread?
|
How to debug Celery/Django tasks running locally in Eclipse
| 0.039979 | 0 | 0 | 23,443 |
12,698,862 |
2012-10-02T21:47:00.000
| 3 | 0 | 0 | 0 |
c++,python,sockets,networking,network-programming
| 12,698,891 | 2 | false | 0 | 0 |
No; you cannot connect to an IPv6 server without some form of IPv6 transit.
Depending on your network, you may be able to set up a 6to4 gateway. This is a server configuration change, though, and is outside the scope of Stack Overflow.
| 1 | 2 | 0 |
My network does not support IPv6, hence I have no access to IPv6 servers. Is there any way to connect to them using sockets in the 'AF_INET' domain, or any other kind of solution? Is there any server on the Internet that does such a conversion for free?
I can read Python and C++.
|
can i connect to a ipv6 address via a AF_INET domain socket?
| 0.291313 | 0 | 1 | 557 |
12,699,132 |
2012-10-02T22:11:00.000
| 1 | 0 | 1 | 0 |
python
| 12,699,266 | 5 | false | 0 | 0 |
There are a lot of permutations. It usually isn't a good idea to generate them all just to count them. You should look for a mathematical formula to solve this, or perhaps use dynamic programming to compute the result.
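For this particular case, a minimal sketch of the formula approach (no enumeration needed): the number of arrangements of 20 D's and 20 R's is C(40, 20) = 40! / (20! * 20!).
from math import factorial

count = factorial(40) // (factorial(20) * factorial(20))  # C(40, 20)
print(count)  # 137846528820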
| 1 | 1 | 0 |
How do you find all the combinations of a 40 letter string?
I have to find how many combinations 20 D and 20 R can make.
As in, one combination could be...
DDDDDDDDDDDDDDDDDDDDRRRRRRRRRRRRRRRRRRRR
That's 1 combination; now how do I figure out the rest?
|
Finding the different combinations
| 0.039979 | 0 | 0 | 217 |
12,699,376 |
2012-10-02T22:30:00.000
| 2 | 0 | 0 | 0 |
python,opengl
| 12,699,435 | 1 | true | 0 | 0 |
Is PyOpenGL the answer?
No. At least not in the way you expect it. If your GPU does support OpenGL-4.3 you could use Compute Shaders in OpenGL, but those are not written in Python
but simply run a "vanilla" python script ported to the GPU.
That's not how GPU computing works. You have to write the shaders or computation kernels in a special language: either OpenCL, OpenGL Compute Shaders or, specific to NVIDIA, CUDA.
Python would then just deliver the framework for getting the GPU computation running.
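For illustration, a minimal PyOpenCL sketch (assuming the pyopencl package and a working OpenCL driver for the AMD GPU; the kernel itself is written in OpenCL C, not Python):
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
""").build()

prg.square(queue, a.shape, None, a_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)   # result now holds a * a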
| 1 | 1 | 1 |
I want to write an algorithm that would benefit from the GPU's superior hashing capability over the CPU.
Is PyOpenGL the answer? I don't want to use drawing tools, but simply run a "vanilla" python script ported to the GPU.
I have an ATI/AMD GPU if that means anything.
|
Can normal algos run on PyOpenGL?
| 1.2 | 0 | 0 | 389 |
12,699,827 |
2012-10-02T23:19:00.000
| 0 | 0 | 1 | 0 |
python,character-encoding,ascii,block,non-ascii-characters
| 12,699,924 | 2 | false | 0 | 0 |
Your Python shell is probably using either ISO-8859-1 or Unicode, not the same character set as Character Map.
chr(219) is also U+00DB, which is probably the Unicode character Û. I don't know what character set you are using, but there aren't any symbol characters that early in the Unicode character set.
| 1 | 7 | 0 |
In IDLE, print(chr(219)) (219's the block character) outputs "Û".
Is there any way to get it to output the block character instead?
This might actually be some sort of computer-wide problem, as I cannot seem to get the block character to print from anywhere, copying it out of charmap and into any textbox just results in the Û.
|
Python: block character will not print
| 0 | 0 | 0 | 5,367 |
12,700,350 |
2012-10-03T00:27:00.000
| 0 | 0 | 0 | 0 |
python-2.7,django-views,django-1.3
| 12,865,154 | 2 | false | 1 | 0 |
As zzzirk says: Django really isn't about flat pages, so the most Djangonic solution is to put flat pages beside the django app, not within it.
| 2 | 0 | 0 |
Building a "history" system that serves static pages based on the date the user asks for. Not all dates have associated pages, and it isn't possible to know, based on what's in the database which do, which don't. I haven't been able to find a way to redirect to a static page because there doesn't seem to be any way to capture the value of the {{STATIC_URL}} tag on the python side. I have got some code that depends on the static file being on the same file system as the django server, but that is clearly wrong. I have two needs:
1: how can I (redirect?) to the static page(s) from my views.py file?
2: how can I query for the existence of a particular one of those static pages?
|
Django: using {{STATIC_URL}} from python side
| 0 | 0 | 0 | 148 |
12,700,350 |
2012-10-03T00:27:00.000
| 0 | 0 | 0 | 0 |
python-2.7,django-views,django-1.3
| 12,739,819 | 2 | false | 1 | 0 |
I'm not certain from your question if you understand the intention of the {{ STATIC_URL }} tag. It is a URL prefix for static content files such as css, javascript, or image files. It is not intended as a path for your own static HTML files. In the end I'm not sure you are asking the right question for what you are wanting to accomplish.
| 2 | 0 | 0 |
Building a "history" system that serves static pages based on the date the user asks for. Not all dates have associated pages, and it isn't possible to know, based on what's in the database which do, which don't. I haven't been able to find a way to redirect to a static page because there doesn't seem to be any way to capture the value of the {{STATIC_URL}} tag on the python side. I have got some code that depends on the static file being on the same file system as the django server, but that is clearly wrong. I have two needs:
1: how can I (redirect?) to the static page(s) from my views.py file?
2: how can I query for the existence of a particular one of those static pages?
|
Django: using {{STATIC_URL}} from python side
| 0 | 0 | 0 | 148 |
12,700,574 |
2012-10-03T00:58:00.000
| 4 | 0 | 1 | 0 |
python
| 12,700,668 | 2 | false | 0 | 0 |
In addition to what @nneonneo said, you should periodically scan through your list of cursor weak references and cull out the Nones; otherwise you will end up with an ever-growing list of Nones.
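A minimal sketch of that approach; note that a plain two-element list cannot be weakly referenced, so the cursor needs to be a small class:
import weakref

class Cursor(object):
    def __init__(self, start, end):
        self.start, self.end = start, end

cursor_refs = []   # the editor holds only weak references

def new_cursor(start, end):
    cur = Cursor(start, end)
    cursor_refs.append(weakref.ref(cur))
    return cur     # the caller holds the only strong reference

def shift_cursors(delta):
    live = []
    for ref in cursor_refs:
        cur = ref()            # None once the caller dropped its reference
        if cur is not None:
            cur.start += delta
            cur.end += delta
            live.append(ref)
    cursor_refs[:] = live      # cull dead references as we update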
| 1 | 2 | 0 |
I have a text editing program that hands out cursors to other parts of the program that require it. The cursor consists of a two part list, [start, end], which needs to be updated every time text is inserted/removed (the start/end index gets moved forward or backwards).
When the cursor is no longer used, I want to stop updating it, since there are many and they are time consuming to update. By not in use, I mean that the object that requested it no longer references it - it no longer cares about it. (For example: it has a list of cursors to all search results for the word 'bob', and a new search was made for the word 'fred', so now it replaces its result list with a new list of new cursors... the old list and its cursors are no longer used.)
I can require that any object using the cursor calls a .finished() method when it no longer needs it. But it would be easier if I could detect when it is no longer being referenced by anything outside of the editor. How do I check this in python (I know the garbage cleanup maintains a list, and deletes it when no longer referenced)?
|
Python: How do I keep track of whether an object is still in 'use'?
| 0.379949 | 0 | 0 | 101 |
12,702,146 |
2012-10-03T04:57:00.000
| 10 | 0 | 0 | 0 |
python,mysql,python-2.7,mysql-connector-python
| 13,899,478 | 8 | false | 0 | 0 |
I met a similar problem under Windows 7 when installing mysql-connector-python-1.0.7-py2.7.msi and mysql-connector-python-1.0.7-py3.2.msi.
After changing from "Install only for yourself" to "Install for all users" when installing Python for windows, the "python 3.2 not found" problem disappear and mysql-connector-python-1.0.7-py3.2.msi was successfully installed.
I guess the problem is that the MySQL connector installer only looks for HKEY_LOCAL_MACHINE entries, and the things it looks for might be under HKEY_CURRENT_USER etc. So the solution that changes the registry directly also works.
| 2 | 12 | 0 |
I have downloaded mysql-connector-python-1.0.7-py2.7.msi from MySQL site
and tried to install it, but it gives the error:
Python v2.7 not found. We only support Microsoft Windows Installer(MSI) from python.org.
I am using Official Python v 2.7.3 on windows XP SP3 with MySQL esssential5.1.66
Need Help ???
|
mysql for python 2. 7 says Python v2.7 not found
| 1 | 1 | 0 | 19,218 |
12,702,146 |
2012-10-03T04:57:00.000
| 0 | 0 | 0 | 0 |
python,mysql,python-2.7,mysql-connector-python
| 19,051,115 | 8 | false | 0 | 0 |
I solved this problem by using 32bit python
| 2 | 12 | 0 |
I have downloaded mysql-connector-python-1.0.7-py2.7.msi from MySQL site
and tried to install it, but it gives the error:
Python v2.7 not found. We only support Microsoft Windows Installer(MSI) from python.org.
I am using Official Python v 2.7.3 on windows XP SP3 with MySQL esssential5.1.66
Need Help ???
|
mysql for python 2. 7 says Python v2.7 not found
| 0 | 1 | 0 | 19,218 |
12,703,241 |
2012-10-03T06:48:00.000
| 1 | 0 | 1 | 0 |
python,file
| 12,703,403 | 3 | false | 0 | 0 |
I don't know if I understood you well, but using open() you create an object representing a file stream. As long as you keep a reference to this object, the file stream stays open. If you call open() again for the same file, you'll make another object representing another file stream; the target of the streams will be the same, but the objects will be different.
I don't know if the garbage collector in Python works the same as in Java, but if it does, all the objects that are not pointed to by any reference will be deleted from memory: not "immediately", but when the garbage collector runs, which is unpredictable. It may be now, and it may be in the next few seconds.
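A small sketch of both points (the filename is a placeholder):
f1 = open('data.txt', 'r')
f2 = open('data.txt', 'r')   # a second, independent file object
print(f1 is f2)              # False: two streams to the same file

with open('data.txt', 'r') as f:   # the idiomatic form
    contents = f.read()
# f is guaranteed closed here, even if an exception occurred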
| 1 | 1 | 0 |
I'm learning how to use open(file, 'r') and was wondering:
If I say open(file1, 'r') and then later try to access that same file using open() again, will it work? Because I never called close() on it. And does it close immediately after opening, because it's not assigned to any variable?
|
Open(file) - Does it stay open?
| 0.066568 | 0 | 0 | 472 |
12,704,933 |
2012-10-03T08:50:00.000
| 0 | 0 | 1 | 0 |
python,refactoring
| 12,705,192 | 2 | false | 0 | 0 |
Eclipse+PyDev has a refactoring tool, but a simple Ctrl+F should suffice to find where you have used the variable.
| 1 | 1 | 0 |
Which tool do people normally use when they refactor Python?
For me, to get rid of a variable, I need to trace through the program to make sure that I completely delete it.
|
How to do refactoring for Python?
| 0 | 0 | 0 | 385 |
12,707,239 |
2012-10-03T11:08:00.000
| 2 | 1 | 0 | 0 |
python,protocols,irc
| 12,721,513 | 1 | true | 0 | 0 |
Are you looking for /notice? (See irchelp.org/irchelp/misc/ccosmos.html#Heading227.)
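In raw protocol terms, the bot would send a line like this over its existing server socket (a sketch; irc_socket, the nick and the text are placeholders):
irc_socket.send("NOTICE somenick :only you can see this\r\n")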
| 1 | 0 | 0 |
I'm writing up an IRC bot from scratch in Python and it's coming along fine.
One thing I can't seem to track down is how to get the bot to send a message to a user that is private (only viewable to them) but within a channel and not a separate PM/conversation.
I know that it must be there somewhere but I can't find it in the docs.
I don't need the full function, just the command keyword to invoke the action from the server (eg PRIVMSG).
Thanks folks.
|
IRC msg to send to server to send a "whisper" message to a user in channel
| 1.2 | 0 | 1 | 2,449 |
12,708,573 |
2012-10-03T12:34:00.000
| 0 | 1 | 0 | 0 |
python,twitter,urlencode
| 12,708,663 | 1 | true | 0 | 0 |
Try it without spaces, i.e. drop the %20 sequences; the track parameter takes a comma-separated list, e.g. track=twitter%2Cwhatever%2Cstreamingd. Doh!
| 1 | 0 | 0 |
I'm trying to track several keywords at once, with the following url:
https://stream.twitter.com/1.1/statuses/filter.json?track=twitter%2C%20whatever%2C%20streamingd%2C%20
But the stream only returns results for the first keyword?! What am I doing wrong?
|
Twitter Public Stream URL when several track keywords?
| 1.2 | 0 | 1 | 163 |
12,709,062 |
2012-10-03T13:00:00.000
| 5 | 0 | 1 | 0 |
python,lambda,inline-if
| 12,709,132 | 4 | false | 0 | 0 |
What's wrong with lambda x: x if x < 3 else None?
| 1 | 51 | 0 |
I was writing some lambda functions and couldn't figure this out. Is there a way to have something like lambda x: x if (x<3) in python? As lambda a,b: a if (a > b) else b works ok. So far lambda x: x < 3 and x or None seems to be the closest i have found.
|
Python lambda with if but without else
| 0.244919 | 0 | 0 | 100,862 |
12,711,511 |
2012-10-03T15:14:00.000
| 1 | 0 | 0 | 1 |
python,embed
| 12,862,310 | 1 | true | 0 | 0 |
Well, the only way I could come up with is to run the Python engine on a separate thread. The main thread is then blocked while the Python thread is running.
When I need to suspend, I block the Python thread and let the main thread run. When necessary, in the main thread's OnIdle, I block it and let Python continue.
It seems to be working fine.
| 1 | 1 | 0 |
I have an app that embeds python scripting.
I'm adding calls to python from C, and my problem is that i need to suspend the script execution let the app run, and restore the execution from where it was suspended.
The idea is that python would call, say "WaitForData" function, so at that point the script must suspend (pause) the calls bail out so the app event loop would continue. When the necessary data is present, i would like to restore the execution of the script, it is like the python call returns at that point.
i'm running single threaded python.
any ideas how can i do this, or something similar, where i'll have the app event loop run before python call exits?
|
suspend embedded python script execution
| 1.2 | 0 | 0 | 164 |
12,711,743 |
2012-10-03T15:27:00.000
| 1 | 0 | 1 | 0 |
python,list
| 12,711,895 | 3 | false | 0 | 0 |
I assume that the data are noisy, in the sense that it could just be anything at all, written in. The main difficulty here is going to be how to define the mapping between your input data, and categories, and that is going to involve, in the first place, looking through the data.
I suggest that you look at what you have, and draw up a list of mappings from input occupations to categories. You can then use pretty much any tool (and if you're using excel, stick with excel) to apply that mapping to each row. Some rows will not fall into any category. You should look at them, and figure out if that is because your mapping is inadequate (e.g. you didn't think of how to deal with veterinarians), or if it is because the data are noisy. If it's noise, you can either deal with the remainder by hand, or try to use some other technique to categorise the data, e.g. regular expressions or some kind of natural language processing library.
Once you have figured out what your problem cases are, come back and ask us about them, with sample data, and the code you have been using.
If you can't even take the first step in figuring out how to run the mapping, do some research, try to write something, then come back with a specific question about that.
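For instance, a minimal sketch of such a mapping (the category names and keywords here are invented; you would derive them from your own data):
CATEGORY_KEYWORDS = {
    'Executive': ['ceo', 'chief executive', 'executive'],
    'Medical': ['doctor', 'nurse', 'physician'],
    'Education': ['teacher', 'professor'],
}

def categorize(occupation):
    text = occupation.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return 'Uncategorized'   # inspect these rows later to refine the mapping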
| 1 | 3 | 1 |
Currently I have a list of 110,000 donors in Excel. One of the pieces of information they give to us is their occupation. I would like to condense this list down to say 10 or 20 categories that I define.
Normally I would just chug through this, going line by line, but since I have to do this for a years worth of data, I don't really have the time to do a line by line of 1,000,000+ rows.
Is there anyway to define my 10 or 20 categories and then have python sort it out from there?
Update:
The data is poorly formatted. People self populate a field either online or on a slip of paper and then mail it into a data processing company. There is a great deal of variance. CEO, Chief Executive, Executive Office, the list goes on.
I used a SORT UNIQ comand and found that my list has ~13,000 different professions.
|
categorizing items in a list with python
| 0.066568 | 0 | 0 | 1,266 |
12,713,797 |
2012-10-03T17:32:00.000
| 0 | 0 | 0 | 0 |
python,numpy,recommendation-engine,topic-modeling,gensim
| 15,066,821 | 3 | false | 0 | 0 |
My trick is to use a search engine such as ElasticSearch; it works very well, and in this way we unified the API of all our recommender systems. Details are listed below:
Train the topic model on your corpus: each topic is an array of words, each word with a probability, and we take the 6 most probable words as the representation of a topic.
For each document in your corpus, we can infer a topic distribution; the distribution is an array of probabilities, one per topic.
For each document, we generate a fake document from the topic distribution and the representations of the topics; for example, the size of the fake document is about 1024 words.
For each document, we generate a query from the topic distribution and the representations of the topics; for example, the size of the query is about 128 words.
All preparation is finished as above. When you want to get a list of similar articles or the like, you can just perform a search:
Get the query for your document, and then perform a search with that query over your fake documents.
We found this approach very convenient.
| 1 | 3 | 1 |
I am looking to compute similarities between users and text documents using their topic representations. I.e. each document and user is represented by a vector of topics (e.g. Neuroscience, Technology, etc) and how relevant that topic is to the user/document.
My goal is then to compute the similarity between these vectors, so that I can find similar users, articles and recommended articles.
I have tried to use Pearson Correlation but it ends up taking too much memory and time once it reaches ~40k articles and the vectors' length is around 10k.
I am using numpy.
Can you imagine a better way to do this? or is it inevitable (on a single machine)?
Thank you
|
Topic-based text and user similarity
| 0 | 0 | 0 | 1,318 |
12,714,434 |
2012-10-03T18:17:00.000
| 1 | 0 | 0 | 1 |
python,telnet,ubuntu-12.04
| 12,788,388 | 1 | true | 0 | 0 |
If you really want to use the system's su program, you will need to create a terminal pair (see man 7 pty). In Python that's the pty.openpty call, which returns a pair of file descriptors: one for you and one for su. Then you have to fork, and in the child process change stdin/out/err to the slave fd and exec su. In the parent process you send data to and receive data from the master fd; the Linux kernel connects the two together.
Alternatively, you could perhaps emulate su instead?
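A rough sketch of the pty route using pty.fork (the user, password and command are placeholders; real code should wait for the actual prompt instead of sleeping):
import os, pty, time

pid, master_fd = pty.fork()
if pid == 0:
    # child: stdin/out/err are now the slave side, so su sees a terminal
    os.execvp('su', ['su', '-', 'someuser', '-c', 'whoami'])
else:
    time.sleep(0.5)                    # crude: wait for the password prompt
    os.write(master_fd, 'password\n')  # answer the prompt via the master fd
    print(os.read(master_fd, 1024))
    os.waitpid(pid, 0)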
| 1 | 4 | 0 |
I am trying to create a Telnet Server using Python on Ubuntu 12.04. In order to be able to execute commands as a different user, I need to use the su command, which then prompts for the password. Now, I know that the prompt is sent to the STDERR stream, but I have no idea which stream I am supposed to send the password to. If I try to send it via STDIN, I get the error: su: must be run from a terminal. How do I proceed?
|
Telnet Server ubuntu - password stream
| 1.2 | 0 | 0 | 690 |
12,714,965 |
2012-10-03T18:54:00.000
| 3 | 0 | 0 | 1 |
python,stream,twisted,boto
| 12,716,129 | 2 | true | 1 | 0 |
boto is a Python library with a blocking API. This means you'll have to use threads to use it while maintaining the concurrent operation that Twisted provides you with (just as you would have to use threads to get any concurrency when using boto ''without'' Twisted; i.e., Twisted does not help make boto non-blocking or concurrent).
Instead, you could use txAWS, a Twisted-oriented library for interacting with AWS. txaws.s3.client provides methods for interacting with S3. If you're familiar with boto or AWS, some of these should already look familiar. For example, create_bucket or put_object.
txAWS would be better if it provided a streaming API so you could upload to S3 as the file is being uploaded to you. I think that this is currently in development (based on the new HTTP client in Twisted, twisted.web.client.Agent) but perhaps not yet available in a release.
| 1 | 4 | 0 |
I have a server which files get uploaded to, I want to be able to forward these on to s3 using boto, I have to do some processing on the data basically as it gets uploaded to s3.
The problem I have is the way they get uploaded I need to provide a writable stream that incoming data gets written to and to upload to boto I need a readable stream. So it's like I have two ends that don't connect. Is there a way to upload to s3 with a writable stream? If so it would be easy and I could pass upload stream to s3 and it the execution would chain along.
If there isn't I have two loose ends which I need something in between with a sort of buffer, that can read from the upload to keep that moving, and expose a read method that I can give to boto so that can read. But doing this I'm sure I'd need to thread the s3 upload part which I'd rather avoid as I'm using twisted.
I have a feeling I'm way over complicating things but I can't come up with a simple solution. This has to be a common-ish problem, I'm just not sure how to put it into words very well to search it
|
Boto reverse the stream
| 1.2 | 1 | 1 | 636 |
12,716,608 |
2012-10-03T20:44:00.000
| 7 | 0 | 1 | 0 |
python,mongodb,pymongo
| 12,716,831 | 3 | true | 0 | 0 |
Calling mycollection.options() returns a dict with 'capped': True if it's a capped collection.
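For example (a sketch; the database and collection names are placeholders, and older pymongo versions use Connection instead of MongoClient):
from pymongo import MongoClient

db = MongoClient().mydb
is_capped = db.mycollection.options().get('capped', False)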
| 1 | 3 | 0 |
I'm writing functionality in Python to ensure the existence, type, and size of mongodb collections. Most of these collections are capped. I know that the mongo shell includes mycollection.iscapped(), but pymongo does not seem to support this functionality.
Within the context of pymongo, what is the best way to tell if a collection is capped collection?
|
Can pymongo detect if a collection is capped?
| 1.2 | 0 | 0 | 1,088 |
12,718,216 |
2012-10-03T23:16:00.000
| 1 | 0 | 1 | 0 |
python
| 12,727,999 | 1 | true | 0 | 1 |
Try importing pdb and just manually setting breakpoints in the code with pdb.set_trace(). This won't work in all multi-threaded cases, but I find that it works in many of them and is a big improvement over the native Eclipse/PyDev debugger.
| 1 | 1 | 0 |
Ok, I am new to python and my code calls some library (which is wrapping some C++ code) and I pass it a callback function on my side (as library needs to). The strange thing is that if I insert a breakpoint in my other part of the code, it will hit and deugger stops in eclipse but none of my breakpoints in the callback hit. The callback is sure called but the breakpoint is somehow ignored by PyDev. What I am doing wrong? The callback is obviously coming on a different thread. I am using Python 2.7
|
In Pydev setting a breakpoint, but breakpoint not hit on callbacks only
| 1.2 | 0 | 0 | 736 |
12,721,616 |
2012-10-04T06:38:00.000
| 6 | 0 | 1 | 0 |
python,windows-7,cygwin,64-bit
| 12,722,722 | 1 | false | 0 | 0 |
To uninstall the Python interpreter (or any other package) from Cygwin:
Run the setup.exe file (the one you downloaded for installing Cygwin)
Make sure the installation folder matches your Cygwin location
On the package selection screen, find the package you want to uninstall (here python)
Change its state from keep to uninstall by clicking on it 3 times
Click next to begin the uninstallation
At the moment, Python 2.7 has not been ported to Cygwin. The latest Python 2 version is 2.6.8-2.
However, Python 3.2 has been ported, so you should check if your script is compatible with it.
You can have Python 2 and 3 installed at the same time on Cygwin – the first can be fired with the classic python command, and the latter with python3.
| 1 | 6 | 0 |
I am trying to run a Python script on a Windows 7-64 bit machine using Cygwin. I can't get the newest version of Python installed in this environment.
Question:
How do I uninstall Python 2.6
Which Python package should I use for Cygwin?
|
Cygwin Newbie: How do I uninstall Python 2.6.x from Cygwin and install Python 2.7.x?
| 1 | 0 | 0 | 7,488 |
12,721,998 |
2012-10-04T07:09:00.000
| 0 | 0 | 1 | 0 |
c#,security,ironpython
| 12,730,482 | 1 | false | 0 | 1 |
The trick is to run IronPython in its own AppDomain and only provide the assemblies you want the user to be able to call to that AppDomain, and then set the security policy to prevent them from referencing more.
| 1 | 0 | 0 |
I want to use IronPython inside my C# app, but I am afraid that it will cause a security issue. I don't want the end user to be able to use all the available DLLs of my app; instead I want to provide custom classes that the user can use in Python.
My question is how to secure my DLLs from being used in IronPython.
The end user may AddReference the DLL; I want to prevent most of the classes from being used.
|
Secure and protect classes or DLLs in .NET from being used in IronPython
| 0 | 0 | 0 | 97 |
12,723,009 |
2012-10-04T08:13:00.000
| 1 | 1 | 0 | 0 |
python,pyramid
| 26,917,124 | 2 | false | 1 | 0 |
Simply add this following code where your Pyramid web app gets initialized.
import mimetypes
mimetypes.add_type('application/x-font-woff', '.woff')
For instance, I have added it in my webapp.py file, which gets called the first time the server gets hit with a request.
| 1 | 4 | 0 |
I am using pyramid web framework to build a website. I keep getting this warning in chrome console:
Resource interpreted as Font but transferred with MIME type application/octet-stream: "http:static/images/fonts/font.woff".
How do I get rid of this warning message?
I have configured static files to be served using add_static_view
I can think of a way to do this by adding a subscriber function for responses that checks if the path ends in .woff and setting the response header to application/x-font-woff. But it does not look like a clean solution. Is there a way to tell Pyramid to do it through some setting.
|
How to set the content type header in response for a particular file type in Pyramid web framework
| 0.099668 | 0 | 0 | 1,474 |
12,723,746 |
2012-10-04T08:59:00.000
| 4 | 0 | 1 | 0 |
python,datetime
| 12,723,975 | 2 | true | 0 | 0 |
The parse() method of dateutil is very flexible and will parse almost anything you throw at it.
However, because of that flexibility, if your input is limited to a certain number of patterns, custom code that checks for those patterns then uses datetime.datetime.strptime() could easily beat it.
Since this depends entirely on the number of patterns you need to test for, the only thing you can do is measure which one will be faster for your specific use cases.
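A quick sketch of the two approaches on the same input:
from datetime import datetime
from dateutil import parser

s = '2012-10-04 08:59:00'
dt1 = datetime.strptime(s, '%Y-%m-%d %H:%M:%S')  # fast, needs a known format
dt2 = parser.parse(s)                            # flexible, guesses the format
assert dt1 == dt2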
| 1 | 2 | 0 |
I'm building a generic custom strToDatetime(string) function. The date string may be in several different formats. The 2 most popular alternatives seem to be datetime.strptime(string, format) and dateutil.parser(string). It seems datetime.strptime() requires a format and dateutil.parser() does not, so the possible solutions seem to be:
Test date strings pattern to find date string format and use datetime.strptime()
Use dateutil.parser()
Is this correct? Alternative 1 (harder and may require maintenance in the future) has advantages, such as performance?
|
Converting date string in unknown format to datetime
| 1.2 | 0 | 0 | 2,330 |
12,728,547 |
2012-10-04T13:39:00.000
| 1 | 0 | 0 | 0 |
python,django
| 12,728,830 | 3 | true | 1 | 0 |
I would separate out the common models, etc. into their own Python package. Then each project will have this package installed, or you can just install the package at the system level and they both can use it. If you handle this properly you shouldn't have to change much of your imports, and you can easily update the package by pushing a new egg to the server and installing via easy_install or pip.
This is the way that I currently have my projects setup. Common non-business logic is separated out into their own project and I create a package from this and then install and use like any other python library.
The little bit of extra work here can save you a lot of time and allows for transparent updating of your common code between various projects as well.
| 1 | 1 | 0 |
I have two Django Projects where I use a lot of common models. Custom user classes, Algorithm classes, Product classes. The two projects are related to e-commerce, both run on different machines and serve completely different purposes.
However, considering that they have these models in "common", I was wondering if it would be worth it to create a third, common project that serves as the "base" with the base models, and then both of these projects would import the common models from this base project.
This would also help as we can join the two different customer and product databases from both e-commerce websites into this big common database.
My questions are:
1) Does anybody have any experience with the possible overhead or can realistically estimate it? Joining the common parts of both Django projects will be necessary down the road, but I estimate there will be a lot of overhead in importing a third project (possibly in real-time).
2) What would be the best approach to importing this third project? I can think of multiple ways:
Creating a packaged, installable Python module such as the existing ones on the internet (setuptools, lxml, tastypie) and importing that module into both Django projects;
Having the project sit on a directory in a machine and importing from that path in real-time inside the Python file (have done it before, works but seems to have some overhead);
EDIT: Our common models/functionality, additionally, contain trade-secreted and patenteable content, therefore public distribution is out of the question. I am guessing the route would be to create something like a package, but not publicly distributed, only distributed and installed on the 2 specific machines.
Can anybody give some feedback on this?
Thank you
|
Including common Django Project in multiple projects
| 1.2 | 0 | 0 | 1,451 |
12,729,549 |
2012-10-04T14:29:00.000
| 1 | 0 | 1 | 0 |
python,git,oop,virtualenv,pypi
| 12,729,643 | 2 | false | 0 | 0 |
Option 1 will cause you pain in the long term. Any non-trivial library is going to have to break backward compatibility at some stage, and you don't want to have to update apps A, B and C because app D needs some new functionality from the library.
| 2 | 0 | 0 |
I have few projects and they use some common code. I refactored this code into common library but then a problem arose. How to manage this common code. I've considered some options which are:
libraries as soft links in filesystem.
libraries as git submodules.
dependencies managed with pip/requirements.txt.
What are pros and cons of this solutions? Do you have another proposals? Which one should i choose and why?
I use Git, and python in virtualenv.
|
Common libraries in many projects
| 0.099668 | 0 | 0 | 193 |
12,729,549 |
2012-10-04T14:29:00.000
| 1 | 0 | 1 | 0 |
python,git,oop,virtualenv,pypi
| 12,729,931 | 2 | true | 0 | 0 |
The third option with virtualenv is really convenient. Just make a requirements file in your project, install the dependencies into your virtualenv, and run the env. Each project can have their own dependencies and virtualenv, and nothing overlaps. You also don't have to worry about installing conflicting modules in your system's Python.
| 2 | 0 | 0 |
I have few projects and they use some common code. I refactored this code into common library but then a problem arose. How to manage this common code. I've considered some options which are:
libraries as soft links in filesystem.
libraries as git submodules.
dependencies managed with pip/requirements.txt.
What are pros and cons of this solutions? Do you have another proposals? Which one should i choose and why?
I use Git, and python in virtualenv.
|
Common libraries in many projects
| 1.2 | 0 | 0 | 193 |
12,729,828 |
2012-10-04T14:41:00.000
| 3 | 1 | 1 | 0 |
java,c++,python,ruby,compiler-construction
| 12,730,127 | 1 | true | 0 | 0 |
You are correct in that runtime dynamic binding is entirely different conceptually from class inheritance.
But as I re-read your question, I don't think I would agree that "Java and C++, runtime dynamic binding is implemented as class inheritance." Class inheritance is simply the definition of broader behavior that includes existing behavior from existing classes. Further, runtime binding doesn't necessarily have anything to do with object orientation; it can refer merely to deferred method resolution.
Class inheritance refers to the "template" for how an object is built, with more and more refined behavior with successive subclasses. Runtime dynamic binding is merely a way of saying that a reference to a method (for example) is deferred until execution time. In a given language, a particular class may leverage runtime dynamic binding, but have inherited classes resolved at compile time.
In a nutshell, Inheritance refers to the definition or blueprint of an object. Runtime dynamic binding is, at its most basic level, merely a mechanism for resolving method calls at execution time.
EDIT I do need to clarify one point on this: Java implements dynamic binding on overridden class methods, while C++ determines a type through polymorphism at runtime, so it is not accurate for me to say that dynamic binding has "no relationship" to class inheritance. At a "macro" level, they're not inherently related, but a given language might leverage it in its inheritance mechanism.
| 1 | 6 | 0 |
I am trying to clarify the concept of runtime dynamic binding and class inheritance in dynamic languages (Python, ruby) and static type languages (java, C++). I am not sure I am right or not.
In dynamic languages like Python and Ruby, runtime dynamic binding is implemented as duck typing. When the interpreter checks the type of an object, it checks whether the object has the specific method (or behaviour) rather than checking the type of the object; runtime dynamic binding does not imply class inheritance. Class inheritance just reduces code duplication in Python and Ruby.
In statically typed languages like Java and C++, runtime dynamic binding can be obtained only through class inheritance. Class inheritance not only reduces code duplication here, but is also used to implement runtime dynamic binding.
In summary, class inheritance and runtime dynamic binding are two difference concepts. In Python and Ruby, they are totally different; in Java and C++ they are mixed together.
Am I right?
|
Difference between runtime dynamic binding and class inheritance
| 1.2 | 0 | 0 | 1,662 |
12,730,293 |
2012-10-04T15:06:00.000
| 23 | 0 | 0 | 0 |
python,sockets,network-programming,telnet
| 12,730,703 | 3 | true | 0 | 0 |
Telnet is a way of passing control information about the communication channel. It defines line-buffering, character echo, etc, and is done through a series of will/wont/do/dont messages when the connection starts (and, on rare occasions, during the session).
That's probably not what your server documentation means. Instead, it probably means that you can open a TCP socket to the port using a program like "Telnet" and interact with a command interpreter on the server.
When the Telnet program connects, it typically listens for these control messages before responding in kind and so will work with TCP/socket connections that don't actually use the telnet protocol, reverting to a simple raw pipe. The server must do all character echo, line buffering, etc.
So in your case, the server is likely using a raw TCP stream with no telnet escape sequences and thus there is no difference.
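If you want to rule out a framing problem, a minimal raw-socket sketch (host, port and command are placeholders); note that many line-oriented servers expect CRLF ("\r\n") terminators, which the telnet program sends for you but a raw socket does not:
import socket

s = socket.create_connection(('example.com', 2323))
s.sendall('status\r\n')   # terminate the command the way telnet would
print(s.recv(4096))
s.close()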
| 1 | 22 | 0 |
I am trying to send commands to a server via a Python script. I can see the socket connection being established on the server, but the commands I am sending across do not seem to make it through (the server does a read on the socket).
The server currently supports a telnet command interpreter. ie: you telnet to the command address and port, and you can start sending
string commands.
My question is , is there anything fundamentally different from sending strings over a tcp socket, as opposed to using telnet.
I have used both raw sockets as well as the Twisted framework.
|
How does telnet differ from a raw tcp connection
| 1.2 | 0 | 1 | 30,360 |
12,730,426 |
2012-10-04T15:14:00.000
| 0 | 0 | 0 | 0 |
python,web-scraping,translation
| 16,074,293 | 3 | false | 1 | 0 |
Or go to MicrosoftTranslator.com, and paste your text in one box, have it translate and cut and paste the result?
Failing that, the MS Translator API can be used for up to 2 million characters a month for free...so maybe use that?
| 1 | 0 | 0 |
I would like to translate a few hundred words for an application I'm writing. This is a simple, one-off project, so I'm not willing to pay for the Google Translate API.
Is there another web service which will do this?
Another idea is to just send a search to Google, and scrape the result from the first result. For example, google 'translate food to spanish'.
However, the page is a mess of obfuscated javascript, and I would need help scraping the result.
I think python would be good for this, but any language will do.
|
Automate translation for personal use
| 0 | 0 | 1 | 184 |
12,736,172 |
2012-10-04T21:21:00.000
| 3 | 0 | 1 | 0 |
python,exception-handling,gevent,greenlets
| 12,774,007 | 1 | true | 0 | 0 |
If you link() the child greenlet to the parent greenlet, then LinkedExited will be raised in the parent when the child exits. At that point you can check the exception property of the child greenlet. It will contain the exception instance raised in the child (if the child finished with an error). Now that you have the exception, you could handle it right away in the parent or you could raise it in the parent.
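A minimal sketch of checking and re-raising the child's exception in the parent:
import gevent

def child():
    raise ValueError('boom in the child')

g = gevent.spawn(child)
g.join()                        # wait for the child to finish
if g.exception is not None:     # the exception instance raised in the child
    raise g.exception           # re-raise it in the parent
# alternatively, g.get() both waits and re-raises in a single call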
| 1 | 4 | 0 |
In using gevent, whenever a child greenlet throws an exception, I would like it to bubble up to the parent (and ideally have the parent throw the exception). In the documentation for greenlets, it says this is automatically done, but this doesn't appear to be the case in gevent.
How do I bubble up exceptions in gevent?
Thanks!
|
Gevent greenlet bubbling up exceptions to the parent
| 1.2 | 0 | 0 | 775 |
12,737,366 |
2012-10-04T23:17:00.000
| 0 | 0 | 0 | 0 |
python-3.x,pygame,pyopengl
| 12,755,970 | 2 | false | 0 | 1 |
PyGame's only relation to PyOpenGL is that PyGame can provide a window for PyOpenGL to render on.
Your question now is whether PyGame's windowing environment is faster than another.
In my experience, GLUT can be very slightly faster than PyGame for windowing (by comparing GLUT and SDL). wxWidgets I think is a bit slower. PyGlet isn't PyOpenGL (although it is a Python OpenGL implementation).
My recommendation: PyGame is easiest to use and provides helpful utilities besides. Use it instead of anything else; any performance differences are negligible.
When you need better windowing support, go for Qt or wxWidgets, in that order.
| 1 | 3 | 0 |
I see that all the tutorials about PyOpenGL use Pygame. Is it possible to use PyOpenGL without Pygame? And if so, is it faster without Pygame or not?
|
PyOpengl without PyGame
| 0 | 0 | 0 | 1,388 |
12,737,993 |
2012-10-05T00:39:00.000
| 0 | 0 | 1 | 0 |
python,image-processing
| 57,430,322 | 3 | false | 0 | 0 |
I would suggest the following procedure:
1) Convert your image to a binary image (an n x n numpy array): 1 for object pixels and 0 for background pixels.
2) Since you want to follow a contour, you can see this problem as: finding all the object pixels belonging to the same object. This can be solved with connected-components labeling.
3) Once you have your objects identified, you can run the marching squares algorithm over each object. MS consists of dividing your image into n squares and then evaluating the values of all the vertices of a given square. MS will find the border by analyzing each square and finding those in which the value of at least one vertex is 0 whereas the other vertices are 1 --> the border/contour is contained in such a square.
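If you don't want to implement marching squares yourself, scikit-image ships an implementation; a sketch (the filename and threshold are placeholders, and the emitted g-code is deliberately naive):
from skimage import io, measure

img = io.imread('doughnut.png', as_grey=True)   # as_gray in newer versions
binary = (img > 0.5).astype(float)              # 1 = object, 0 = background
contours = measure.find_contours(binary, 0.5)   # marching squares under the hood
for contour in contours:                        # one (N, 2) array per contour
    for y, x in contour:
        print('G1 X%.3f Y%.3f' % (x, y))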
| 1 | 3 | 0 |
I am new to Python and would be grateful if anyone can point me the right direction or better still some examples.
I am trying to write a program to convert image (jpeg or any image file) into gcode or x/y coordinate. Instead of scanning x and y direction, I need to follow the contour of objects in the image. For example, a doughnut with outer circle and inner circle, or a face with face outline and inner contour of organs.
I know there is something called marching square, but not sure how to do it in python?
Thank you.
|
Follow image contour using marching square in Python
| 0 | 0 | 0 | 3,955 |
12,738,827 |
2012-10-05T02:50:00.000
| 1 | 0 | 0 | 0 |
java,python,jython,scikit-learn
| 50,292,755 | 6 | false | 0 | 0 |
I found myself in a similar situation.
I'd recommend carving out a classifier microservice. You could have a classifier microservice which runs in Python, and then expose calls to that service over some RESTful API yielding a JSON/XML data-interchange format. I think this is a cleaner approach.
| 1 | 35 | 1 |
I have a classifier that I trained using Python's scikit-learn. How can I use the classifier from a Java program? Can I use Jython? Is there some way to save the classifier in Python and load it in Java? Is there some other way to use it?
|
How can I call scikit-learn classifiers from Java?
| 0.033321 | 0 | 0 | 42,287 |
12,740,424 |
2012-10-05T06:05:00.000
| 1 | 0 | 0 | 0 |
python,sql,django,django-queryset
| 12,740,533 | 2 | false | 1 | 0 |
Try this.
https://docs.djangoproject.com/en/dev/topics/db/sql/
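For example, a minimal sketch of a raw multi-table query (the table and column names are placeholders):
from django.db import connection

cursor = connection.cursor()
cursor.execute(
    'SELECT a.id, b.name FROM myapp_a a JOIN myapp_b b ON b.a_id = a.id')
rows = cursor.fetchall()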
| 1 | 0 | 0 |
I want to select data from multiple tables, so I just want to know whether I can use simple SQL queries for that. If yes, then please give me an example (i.e., where to use these queries and how).
Thanks.
|
Can I used simple sql commands in django
| 0.099668 | 1 | 0 | 77 |
12,743,436 |
2012-10-05T09:32:00.000
| 2 | 0 | 0 | 0 |
python,mysql
| 12,743,439 | 1 | true | 0 | 0 |
There is a property of the connection object called thread_id, which returns an id to be passed to KILL. MySQL has a thread for each connection, not for each cursor, so you are not killing queries, but are instead killing connections. To kill an individual query you must run each query in its own connection, and then kill the connection using the result from thread_id.
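A sketch of the idea (credentials are placeholders; note that in MySQLdb thread_id is exposed as a method on the connection object):
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='u', passwd='p', db='d')
tid = conn.thread_id()   # the server-side thread id for this connection

# ... run the long query on `conn` in its worker thread ...

admin = MySQLdb.connect(host='localhost', user='u', passwd='p', db='d')
admin.cursor().execute('KILL %s', (tid,))   # issued from a separate connection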
| 1 | 0 | 0 |
Background:
I'm working on dataview, and many of the reports are generated by very long running queries. I've written a small query caching daemon in python that accepts a query, spawns a thread to run it, and stores the result when done as a pickled string. The results are generally various aggregations broken down by month, or other factors, and the result sets are consequently not large. So my caching daemon can check whether it has the result already, and return it immediately, otherwise it sends back a 'pending' message (or 'error' or 'failed' or various other messages). The point being, that the client, which is a django web server would get back 'pending' and query again in 5~10 seconds, in the meanwhile putting up a message for the user saying 'your report is being built, please be patient'.
The problem:
I would like to add the ability for the user to cancel a long running query, assuming it hasn't been cached already. I know I can kill a query thread in MySQL using KILL, but is there a way to get the thread/query/process id of the query in a manner similar to getting the id of the last inserted row? I'm doing this through the python MySQLdb module, and I can't see any properties/methods of the cursor object that would return this.
|
Get process id (of query/thread) of most recently run query in mysql using python mysqldb
| 1.2 | 1 | 0 | 1,016 |
12,749,702 |
2012-10-05T15:45:00.000
| 2 | 0 | 0 | 0 |
python,django
| 12,750,335 | 2 | true | 1 | 0 |
Converting this to a model objects doesn't require storing it in database.
Also, if you are sure you don't want to store it, placing it in models.py and making it a Django model may be the wrong idea. Probably it should just be normal Python classes, e.g. in resources.py or something like that, so as not to mistake them for models. I prefer this approach because, although converting may be (very slightly) slower, it allows you to add not only a custom constructor but other methods and properties as well, which is very helpful. It is also just convenient and keeps things organized when you use normal classes and objects.
| 1 | 0 | 0 |
Question
In Django, when using data from an API (that doesn't need to be saved to the database) in a view, is there reason to prefer one of the following:
Convert API data (json) to a json dictionary and pass to the template
Convert API data (json) to the appropriate model object from models.py and then pass that to the template
What I've Considered So Far
Performance: I timed both approaches and averaged them over 25 iterations. Converting the API response to a model object was slower by approximately 50ms (0.4117 vs. 0.4583 seconds, +11%). This did not include timing rendering.
Not saving this data to the database does prevent me from creating many-to-many relationships with the API's data (must save an object before adding M2M relationships), however, I want the API to act as the store for this data, not my app
DRY: If I find myself using this API data in multiple views, I may find convenience in putting all my consumption/cleaning/etc. code in the appropriate object __init__ in models.
Thanks very much in advance.
|
Passing JSON (dict) vs. Model Object to Templates in Django
| 1.2 | 0 | 0 | 662 |
12,750,731 |
2012-10-05T16:53:00.000
| 0 | 0 | 0 | 0 |
android,python,django,dojo
| 12,750,812 | 1 | true | 1 | 1 |
I suggest taking a look at PhoneGap. I use it to embed jQuery Mobile inside an Android application. PhoneGap makes it easy to embed JavaScript inside an Android application; just try the PhoneGap tutorial.
Edit: PhoneGap lets you use the GPS sensor through JavaScript.
| 1 | 1 | 0 |
I am trying to make an Android app using the Dojo toolkit and GeoDjango. It's a project based on GPS work. Can anyone help with this issue? I want the starting steps. I have some source code and the SDK installed on my computer, but I'm still confused about where to start. Can anyone help?
How will I make it possible to create the app? Steps, please?
|
Android apps with Dojo and geodjango
| 1.2 | 0 | 0 | 197 |
12,750,787 |
2012-10-05T16:58:00.000
| 3 | 0 | 0 | 1 |
python,parallel-processing,cluster-computing
| 12,785,406 | 1 | true | 0 | 0 |
Author of jug here.
Jug does handle the dependencies very well. If you change any of the inputs or intermediate steps, running jug status will tell you the state of the computation.
There is currently no way to specify that some tasks (what jug calls jobs) should have multiple processes allocated to them. In the past, whenever I had tasks which were to be run in multiple threads, I was forced to take a worst-case-scenario approach and allocate all processes to the jug execute process.
This means, of course, that single-threaded tasks will take up all the processes. Since the bulk of the computation was in the multi-threaded tasks, it was acceptable.
| 1 | 3 | 0 |
I have the usual large set of dependent jobs and want to run them effectively in a PBS cluster environment. I have been using Ruffus and am pretty happy with it, but I also want to experiment a bit with other approaches.
One that looks interesting in python is jug. However, it appears that jug assumes that the jobs are homogeneous in their requirements. I have some jobs that require 8GB RAM while others require only 100MB; some can consume all processors and some are single-threaded. I'm aiming for being able to quickly assemble a pipeline, run it and have it "update" based on dependencies, and log reasonably so that I can see what jobs still need to be run. Is anyone using jug or other similar system with these types of requirements?
|
Python jug (or other) for embarrassingly parallel jobs in cluster environment with heterogenous tasks
| 1.2 | 0 | 0 | 522 |
12,751,787 |
2012-10-05T18:11:00.000
| 4 | 0 | 1 | 0 |
python,map,filter,lambda
| 12,751,822 | 5 | false | 0 | 0 |
Because filter in python takes only one argument. So you need to define a lambda/function that takes only one argument if you want to use it in filter.
| 2 | 9 | 0 |
This is my code:
filter(lambda n,r: not n%r,range(10,20))
I get the error:
TypeError: <lambda>() takes exactly 2 arguments (1 given)
So then I tried:
foo=lambda n,r:not n%r
Which worked fine. So I thought this will work:
bar=filter(foo,range(10,20))
but again:
TypeError: <lambda>() takes exactly 2 arguments (1 given)
Something similar happens for map as well. But reduce works fine. What am I doing wrong? Am I missing something crucial needed in order to use filter or map?
|
Why is lambda asking for 2 arguments despite being given 2 arguments?
| 0.158649 | 0 | 0 | 15,792 |
12,751,787 |
2012-10-05T18:11:00.000
| 0 | 0 | 1 | 0 |
python,map,filter,lambda
| 12,751,828 | 5 | false | 0 | 0 |
Your lambda function takes two arguments n and r. filter must be called with a function that takes one argument and returns True when the item should be kept. Maybe you meant to define r or n outside your lambda function and then capture it in the closure.
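For example, fixing r outside the lambda so it takes a single argument (a sketch):
r = 7
print(filter(lambda n: not n % r, range(10, 20)))  # [14] in Python 2; wrap in list() on Python 3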
| 2 | 9 | 0 |
This is my code:
filter(lambda n,r: not n%r,range(10,20))
I get the error:
TypeError: <lambda>() takes exactly 2 arguments (1 given)
So then I tried:
foo=lambda n,r:not n%r
Which worked fine. So I thought this will work:
bar=filter(foo,range(10,20))
but again:
TypeError: <lambda>() takes exactly 2 arguments (1 given)
Something similar happens for map as well. But reduce works fine. What am I doing wrong? Am I missing something crucial needed in order to use filter or map?
|
Why is lambda asking for 2 arguments despite being given 2 arguments?
| 0 | 0 | 0 | 15,792 |
12,753,527 |
2012-10-05T20:22:00.000
| 0 | 1 | 0 | 0 |
php,python,apache,url,timeout
| 12,941,867 | 5 | true | 0 | 0 |
While there have been some good answers here, I found that a simple php sleep() call with an override to Apache's timeout was all I needed.
I know that unit tests should be in isolation, but the server this endpoint is hosted on is not going anywhere.
| 2 | 3 | 0 |
I'm unit testing a URL fetcher, and I need a test url which always causes urllib2.urlopen() (Python) to time out. I've tried making a php page with just sleep(10000) in it, but that causes 500 internal server error.
How would I make a resource that causes a connection timeout in the client whenever it is requested?
|
How should I create a test resource which always times out
| 1.2 | 0 | 1 | 615 |
12,753,527 |
2012-10-05T20:22:00.000
| 1 | 1 | 0 | 0 |
php,python,apache,url,timeout
| 12,753,554 | 5 | false | 0 | 0 |
Connection timeout? Use, for example, netcat. Listen on some port (nc -l), and then try to download data from that port: http://localhost:port/. It will open a connection which will never reply.
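A Python equivalent of that listener (the port is a placeholder): accept the connection and then never send a byte, so the client's read blocks until it times out.
import socket, time

srv = socket.socket()
srv.bind(('127.0.0.1', 8123))
srv.listen(1)
conn, addr = srv.accept()   # accept the connection...
time.sleep(3600)            # ...then never reply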
| 2 | 3 | 0 |
I'm unit testing a URL fetcher, and I need a test url which always causes urllib2.urlopen() (Python) to time out. I've tried making a php page with just sleep(10000) in it, but that causes 500 internal server error.
How would I make a resource that causes a connection timeout in the client whenever it is requested?
|
How should I create a test resource which always times out
| 0.039979 | 0 | 1 | 615 |
12,753,896 |
2012-10-05T20:54:00.000
| 2 | 0 | 1 | 1 |
python,dll,tcl
| 12,754,599 | 2 | true | 0 | 0 |
The easiest way would be by using the Tkinter package and its built in Tcl interpreter inside your Python Process.
If it is a Tcl extension dll, it makes no real sense to call it from Python without much setup first.
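A sketch of that setup (the DLL path and command name are placeholders, and this assumes the DLL really is a Tcl extension):
import Tkinter   # `tkinter` on Python 3

tcl = Tkinter.Tcl()                          # a Tcl interpreter with no Tk window
tcl.eval('load {C:/path/to/extension.dll}')  # Tcl's own load command
print(tcl.eval('some_command_from_the_dll'))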
| 2 | 1 | 0 |
I have a script written in Tcl that loads a DLL file. I want to load this file in Python. I am not sure if the DLL file is written in Tcl, but because the Tcl file imports it I think it is. I have tried to use WinDLL("path_to_dll") but I get that there is no module with the provided name.
|
how to load from Python a dll file written in tcl
| 1.2 | 0 | 0 | 221 |
12,753,896 |
2012-10-05T20:54:00.000
| 1 | 0 | 1 | 1 |
python,dll,tcl
| 12,757,979 | 2 | false | 0 | 0 |
Python and Tcl have substantially different C APIs; there's no reason at all to expect that a binary library written for one would work with the other. It might be possible to write a library that would use the API of whichever library it is loaded into — from the Tcl perspective, this would involve linking against the Tcl stub library so that there's no nasty hard binding to the API — but I don't know how to do that on the Python side of things and it's definitely highly tricky stuff to attempt to do. More practical would be to have one library that contains a language-independent implementation and then two more that bind that API to a particular language (the binding layer could even be automatically generated with a tool like SWIG, though that doesn't address language impedance issues).
Of course, if you're just wanting to write the library from one language and consume it from another, you can do that. A library is just bytes on disk, after all. It does usually tend to be easier to let specialist tools (compilers, linkers) look after writing libraries though; the data format of a library isn't exactly the simplest thing ever!
| 2 | 1 | 0 |
I have a script written in Tcl that loads a DLL file. I want to load this file in Python. I am not sure if the DLL file is written in Tcl, but because the Tcl file imports it I think it is. I have tried to use WinDLL("path_to_dll") but I get that there is no module with the provided name.
|
how to load from Python a dll file written in tcl
| 0.099668 | 0 | 0 | 221 |
12,755,611 |
2012-10-06T00:21:00.000
| 5 | 0 | 0 | 0 |
twitter,python-2.7
| 12,755,702 | 1 | true | 0 | 0 |
Found the answer....
loc = [tweepy.api.get_user(friend).followers_count for friend in lof]
| 1 | 2 | 0 |
I have a list for followers:
lof = [31536003, 15066760, 75862029]
I can get the follower count for each follower in the list:
user = tweepy.api.get_user(31536003)
print user.followers_count
However, I am trying to write a list comprehension that can return a list in python. The list should be a list of the follower count of my
lof = [31536003, 15066760, 75862029] and will look something like [100,200,300]
which means user 31536003 has 100 followers, user 15066760 has 200 followers and so on.
How to accomplish this using list comprehension?
|
Find the number of follower using tweepy
| 1.2 | 0 | 1 | 1,505 |
12,755,804 |
2012-10-06T01:00:00.000
| 0 | 0 | 1 | 1 |
python,macos,import,pyopengl
| 12,791,804 | 2 | false | 0 | 0 |
Thanks guys! I figured it out. It was in fact a separate module which I needed to copy over to the "site-packages" location, and then it worked fine. So in summary, no issues with the path; it was just that the appropriate module was not there.
| 1 | 0 | 0 |
I am trying to run a python script on my mac .I am getting the error :-
ImportError: No module named opengl.opengl
I googled a bit and found that I was missing pyopengl. I installed pip. I go to the directory pip-1.0 and then say
sudo pip install pyopengl
and it installs correctly I believe because I got this
Successfully installed pyopengl Cleaning up...
at the end.
I rerun the script but i am still getting the same error .Can someone tell me what I might be missing?
Thanks!
|
No module named opengl.opengl
| 0 | 0 | 0 | 2,646 |
12,756,976 |
2012-10-06T05:18:00.000
| 26 | 1 | 0 | 0 |
python,pyramid,alembic
| 12,757,266 | 2 | true | 0 | 0 |
Just specify alembic -c /some/path/to/another.ini when running alembic commands. You could even put the [alembic] section in your development.ini and production.ini files and just alembic -c production.ini upgrade head.
| 1 | 13 | 0 |
I'm attempting to configure SQLAlchemy Alembic for my Pyramid project and I want to use my developement.ini (or production.ini) for the configuration settings for Alembic. Is it possible to specify the .ini file I wish to use anywhere within Alembic?
|
Use different .ini file for alembic.ini
| 1.2 | 0 | 0 | 5,384 |
12,760,257 |
2012-10-06T13:36:00.000
| 0 | 0 | 1 | 0 |
python,django
| 12,883,745 | 2 | true | 0 | 0 |
I used Redis; it's thread-safe.
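A sketch of the Redis-backed replacement for the global dict (names and values are placeholders):
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

original_name, temp_file_name = 'photo.jpg', '/tmp/tmp1234'  # placeholder values
r.hset('uploads', original_name, temp_file_name)  # any thread/process can write
temp_path = r.hget('uploads', original_name)      # every thread/process sees the same hash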
| 2 | 0 | 0 |
Uploading files (images) via an ajax multi-uploader (http://github.com/valums/file-uploader). Each uploaded file is saved in a temp file. I put the temp file name in a dict, keyed by the original name. Later every image will be resized to several sizes and saved to S3 storage. The problem is that there are at least 2 instances of the dict during uploading, each seeing a different subset of the file names, so I get a partial dict at the end. How or where can I store the dict so I can update it from any thread? I tried a global with locking (I read somewhere that globals are accessible from all threads); it doesn't work.
|
Storing data from multiple threads
| 1.2 | 0 | 0 | 105 |
12,760,257 |
2012-10-06T13:36:00.000
| 0 | 0 | 1 | 0 |
python,django
| 12,760,393 | 2 | false | 0 | 0 |
More info about how uploading files via multi-uploader works in terms of its architecture would be very helpful in your question.
However IMO one should always shy away from global data structures when doing web applications with Python/Django. Why?
Django fcgi and similar setups are meant to run with multiple processes serving the front web server (usually Apache/Nginx), and I don't know of any way to safely and consistently share data between these processes; IMO the architecture doesn't suit this purpose, but is built to do work (fulfill requests) in parallel.
| 2 | 0 | 0 |
Uploading files (images) via an ajax multi-uploader (http://github.com/valums/file-uploader). Each uploaded file is saved in a temp file. I put the temp file name in a dict, keyed by the original name. Later every image will be resized to several sizes and saved to S3 storage. The problem is that there are at least 2 instances of the dict during uploading, each seeing a different subset of the file names, so I get a partial dict at the end. How or where can I store the dict so I can update it from any thread? I tried a global with locking (I read somewhere that globals are accessible from all threads); it doesn't work.
|
Storing data from multiple threads
| 0 | 0 | 0 | 105 |
12,760,904 |
2012-10-06T15:02:00.000
| 2 | 0 | 1 | 0 |
python,variables,types
| 12,760,943 | 3 | true | 0 | 0 |
While you can start your code with something like t = str.isdigit and then check t(n) (you can't make it an attribute of strings because you can't intercept attribute lookup from the outside), this is actually a pretty bad idea. While it's quicker to type, it is significantly harder to read. This will bite you (or anyone else) working on the code a few weeks from now. This is not because abstracting over unnecessary details is bad (it isn't); this snippet simply isn't complicated enough to benefit from an abstraction. Any readable name for this is at least as long as isdigit, so you can't actually win any readability.
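That is, the discouraged shortcut looks like this:
t = str.isdigit    # bind the method to a shorter name
print(t('123'))    # True, same as '123'.isdigit()
print(t('12a'))    # False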
| 1 | 1 | 0 |
I'm new to python but I caught on to the basics pretty quick and decided to start trying to make a program while I'm still learning, since I learn best by actually doing things.
So I'm making a program in Python that will add polynomials, and I need to see if a character from the parser is numeric; I'm using the isdigit() method.
Instead of having to type isdigit() all the time in my code such as n.isdigit(), I want to assign it to a variable t = 'isdigit()' and then type n.t. This doesn't work, so is there an alternative to not typing the whole command?
|
How can i set isdigit() command as a variable?
| 1.2 | 0 | 0 | 314 |