Dataset schema (column, dtype, value range); each record below lists these 22 fields in this order:

  Q_Id                                int64            337 – 49.3M
  CreationDate                        string (length)  23 – 23
  Users Score                         int64            -42 – 1.15k
  Other                               int64            0 – 1
  Python Basics and Environment       int64            0 – 1
  System Administration and DevOps    int64            0 – 1
  Tags                                string (length)  6 – 105
  A_Id                                int64            518 – 72.5M
  AnswerCount                         int64            1 – 64
  is_accepted                         bool             2 classes
  Web Development                     int64            0 – 1
  GUI and Desktop Applications        int64            0 – 1
  Answer                              string (length)  6 – 11.6k
  Available Count                     int64            1 – 31
  Q_Score                             int64            0 – 6.79k
  Data Science and Machine Learning   int64            0 – 1
  Question                            string (length)  15 – 29k
  Title                               string (length)  11 – 150
  Score                               float64          -1 – 1.2
  Database and SQL                    int64            0 – 1
  Networking and APIs                 int64            0 – 1
  ViewCount                           int64            8 – 6.81M
14,941,334
2013-02-18T16:54:00.000
0
0
0
1
python,cluster-computing,scheduler
42,817,839
2
false
0
0
Take a look at ipcluster_tools. The documentation is sparse, but it is easy to use.
1
3
0
We are trying to solve a problem related to cluster job scheduling. We have a set of Python scripts which are executed on a cluster; the launching process is currently done through human interaction, meaning that to start a test we run a bash script which interacts with the cluster to request the resources needed for the execution. What we intend to build is an automatic launching process (which should be robust in the sense that it tracks the job status and, based on that, waits for the job to end, restarts the execution, etc.). Basically we have to implement a layer between the user workstation and the cluster. An additional difficulty is that our layer must be clever enough to interact with different cluster job schedulers. We wonder if there is a tool or framework which would help us interact with the cluster without having to deal with each scheduler's details. We have searched the web but did not find anything suitable for our needs. By the way, the programming language we use is Python. Thanks in advance!
Cluster job scheduler: tools
0
0
0
1,313
14,941,729
2013-02-18T17:15:00.000
24
0
1
1
python,linux,fork
14,942,111
2
true
0
0
Even if COW is employed, CPython uses reference counting and stores the reference count in each object's header. So unless you don't do anything with that data, you'll quickly have spurious writes to the memory in question, which will force the system to copy the data. Pass it to a function? That's another reference, an INCREF, a write to the COW'd memory. Store it in a variable or object attribute? Same. Even just look up a method on it? Ditto. Some builtin data structures allocate the bulk of their data separately from the object (e.g. most collections) for various reasons. If these end up on a different page -- or whatever granularity COW works on -- you may get lucky with those. However, an object referenced from such a collection is not exempt: using it manipulates its refcount just the same. In addition, a bit of data will be shared because there are no writes to it by design (e.g., the native CPython code), and some objects your fork'd process does not touch may be shared (I'm honestly not sure; I think the cycle GC does not write to the object). But Python objects used by Python code are virtually guaranteed to get written to. Similar reasoning applies to PyPy, Jython, IronPython, etc. (only that they fiddle with bits in the object header instead of doing reference counting), though I can't vouch for all possible configurations.
1
9
0
I would like to load a rather large data structure into a process and then fork in the hope to reduce total memory consumption. Will os.fork work that way or copy all of the parent process in Linux (RHEL)?
Will os.fork() use copy on write or do a full copy of the parent-process in Python?
1.2
0
0
3,526
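A hedged, Linux-only sketch of the effect the accepted answer describes: the child below never writes to the list, yet iterating it updates each element's refcount and dirties the COW'd pages. Watching Private_Dirty in /proc/self/smaps is an assumption of this sketch, not something from the answer.

```python
import os

# Build a large structure in the parent, fork, and observe that merely
# *reading* objects in the child forces COW page copies (refcount writes).

def private_dirty_kb():
    # Sum the Private_Dirty lines from /proc/self/smaps (values are in kB).
    total = 0
    with open("/proc/self/smaps") as f:
        for line in f:
            if line.startswith("Private_Dirty:"):
                total += int(line.split()[1])
    return total

data = [str(i) for i in range(10 ** 6)]  # large parent-side structure

pid = os.fork()
if pid == 0:  # child process
    before = private_dirty_kb()
    for item in data:  # no writes, just INCREF/DECREF traffic
        pass
    print("private dirty grew by ~%d kB" % (private_dirty_kb() - before))
    os._exit(0)
os.waitpid(pid, 0)
```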
14,942,462
2013-02-18T17:57:00.000
1
0
1
0
python,database,json,sqlalchemy,python-db-api
14,951,638
1
false
0
0
There's no magic way; you'll have to write a Python program to load your JSON data into a database. SQLAlchemy is a good tool to make it easier.
1
0
0
If we have a JSON file which stores all of our database content, such as table names, rows, and columns, how can we use a DB-API object to insert/update/delete the data from the JSON file into a database such as SQLite or MySQL? Or please share if you have a better idea to handle it. People say it is good to save database data in JSON format, which makes it much more convenient to work with the database in Python. Thanks so much! Please advise!
how will Python DB-API read json format data into an existing database?
0.197375
1
0
454
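A minimal sketch of the suggested approach using only the stdlib DB-API driver for SQLite. The JSON layout ({"table": ..., "columns": ..., "rows": ...}) and file names are hypothetical; a real file needs its own mapping, and table/column names must come from trusted input since they are interpolated into the SQL.

```python
import json
import sqlite3

# Hypothetical layout: {"table": "users", "columns": ["id", "name"],
#                       "rows": [[1, "Bob"], [2, "Joe"]]}
with open("data.json") as f:
    payload = json.load(f)

conn = sqlite3.connect("example.db")
cols = ", ".join(payload["columns"])
placeholders = ", ".join("?" for _ in payload["columns"])

# SQLite allows column definitions without explicit types.
conn.execute("CREATE TABLE IF NOT EXISTS %s (%s)" % (payload["table"], cols))
conn.executemany(
    "INSERT INTO %s (%s) VALUES (%s)" % (payload["table"], cols, placeholders),
    payload["rows"],
)
conn.commit()
conn.close()
```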
14,944,900
2013-02-18T20:39:00.000
0
0
0
0
python,xpath
14,946,219
3
false
1
0
You could do the matching in XPath, and then simply take the resulting node's parent in Python.
1
0
0
I'm trying to create a list of dicts with two data items. The page I'm looking at has 37 matches for //div[@id='content']/*[self::p or self::h2]/a[2]; however, it only has 33 matches for //div[@id='content']/*[self::p or self::h2]/a[contains(@href,'game')]/img[@src] The two xpaths have //div[@id='content']/*[self::p or self::h2] in common. I effectively only want to get the element matched for the first xpath if the second xpath is matched, and leave the 4 without the second element behind. I'm hoping that this can be accomplished with xpath but if not, could use some advice on writing a function that achieves this in python.
conditional xpath? need xpath if more specific xpath is matched
0
0
1
153
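A short sketch of that idea with lxml (an assumption; the question never names its XPath library): run the stricter img query, then climb to each node's parent <a>, so only anchors that actually contain a matching image are kept.

```python
from lxml import html

# Parse a hypothetical local copy of the page.
doc = html.parse("page.html")

# The stricter of the two XPaths from the question.
imgs = doc.xpath(
    "//div[@id='content']/*[self::p or self::h2]"
    "/a[contains(@href,'game')]/img[@src]"
)

# Walk up from each <img> to its parent <a> and collect both data items.
results = [{"href": img.getparent().get("href"),
            "src": img.get("src")} for img in imgs]
```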
14,945,604
2013-02-18T21:29:00.000
3
1
0
0
python,amazon-web-services,boto,amazon-sqs
14,967,271
2
false
0
0
When you read a message from a queue in boto, you get a Message object. This object has an attribute called attributes. It is a dictionary of attributes that SQS keeps about the message, and it includes SentTimestamp.
1
7
0
I can see messages have a sent time when I view them in the SQS message view in the AWS console. How can I read this data using Python's boto library?
SQS: How can I read the sent time of an SQS message using Python's boto library
0.291313
0
1
4,489
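A minimal sketch with classic boto; the region and queue name are placeholders.

```python
import boto.sqs

# Connect and fetch messages, asking SQS to include SentTimestamp
# (milliseconds since the epoch) among the returned attributes.
conn = boto.sqs.connect_to_region("us-east-1")  # region is an assumption
queue = conn.get_queue("my-queue")              # hypothetical queue name

for message in queue.get_messages(attributes=["SentTimestamp"]):
    print(message.attributes["SentTimestamp"])
```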
14,946,512
2013-02-18T22:31:00.000
2
0
0
0
lapack,blas,enthought,intel-mkl,epd-python
14,966,265
2
false
0
0
The EPD Free 7.3 installers do not include MKL. The BLAS/LAPACK libraries which they use are ATLAS on Linux & Windows and Accelerate on OSX.
1
1
1
According to the Enthought website, the EPD Python distribution uses MKL for numpy and scipy. Does EPD Free also use MKL? If not, does it use another library for BLAS/LAPACK? I am using EPD Free 7.3-2. Also, what library does the Windows binary installer for numpy found on scipy.org use?
Does the EPD Free distribution use MKL?
0.197375
0
0
538
14,947,860
2013-02-19T00:35:00.000
0
0
0
1
python,google-app-engine
14,950,038
1
false
1
0
Easiest thing is to modify google/appengine/tools/dev_appserver_import_hook.py and add the module you want to the whitelist. This will allow you to import whatever you want. Now there's a good reason that the imports are restricted in the development server. The restricted imports match what's available on the production environment. So if you add libraries to the whitelist, your code may run on your local development server, but it will not run on the production environment. And no, you can't import restricted modules on production.
1
1
0
I am playing around with local deployment of GAE python SDK. The code that I am trying to run contains many external libraries which are not part of GAE import whitelist. I want to disable the import restrictions and let GAE app import any locally installed module. After walking through the code, I figured out that they use custom import hooks for restricting imports. However, I have not been able to figure out how to disable the overridden import hook. Let me know if you have any idea how this can be accomplished.
How to disable Google App Engine python SDK import hook?
0
0
0
158
14,948,810
2013-02-19T02:28:00.000
1
0
0
0
python,django,django-admin,django-authentication
14,964,876
2
true
1
0
I set up a brand new empty project with a custom user model and attempted to recreate the situation, which led to a diagnosis: we had added the django-usertools package to the project, which has not been updated for Django 1.5 and apparently conflicts with custom user models. Removing that package from the installed apps list in settings resolved the issue.
1
1
0
I have a custom user model (it's actually named User as I didn't see any need to name it otherwise) in my Django 1.5c1 project (currently running on the latest from the Django 1.5 branch on github). AUTH_USER_MODEL is defined in my settings properly, so the auth module works correctly and I can log in etc. fine. However, with the custom user module enabled, the admin site doesn't work. When I add admin.autodiscover() to my urls.py, every page on the site (not just admin pages) throws a NotRegistered exception and says The model User is not registered. The traceback shows that admin.autodiscover() is trying to call admin.site.unregister(User), apparently before it has registered that model. I tried renaming my user model to something other than User, but it didn't seem to work. I also tried creating my own admin.py for that app, and then I tried manually registering my custom User model with the custom UserAdmin model specified in admin.py before admin.autodiscover() ran, but that actually caused a separate exception saying that User was already registered. What should I try next in order to get admin.autodiscover() working?
Django 1.5 custom user model plus admin.autodiscover() breaks app
1.2
0
0
1,388
14,949,586
2013-02-19T03:57:00.000
2
1
0
0
python,pyramid
14,950,809
1
false
1
0
There is a project to eventually remove those templating dependencies and make them available as separate packages. The work started at last year's PyCon sprints and may be continued this year, who knows. OTOH, having those packages installed in your venv doesn't really affect your app, so just avoid using them and only use the JSON renderer or any other renderer. Instead of forking Pyramid and removing those dependencies in setup.py, I propose you join us and work on the removal project so we can all benefit from the same features.
1
1
0
Is there a "good way" to install Pyramid without the templating systems? The templating systems I speak of are Mako and Chameleon. In Single Page Applications (SPA) there is very little need for server-side templating since all of the templates are rendered on the client-side with javascript. I like the power of Pyramid but the template system is unnecessary baggage in some cases. I have a feeling that the only way to accomplish this task is to fork Pyramid and modify the setup.py to remove these dependencies. That may break things,but then again, Pyramid is built in such a way that it may not care as long as nothing tries to call a renderer for one of these templates. Who knows?
Installing Pyramid without the template systems (Mako and Chameleon)
0.379949
0
0
314
14,950,378
2013-02-19T05:21:00.000
6
0
0
1
python,uid
14,950,419
2
false
0
0
The function os.getuid() returns the ID of the user who runs your program. The function os.geteuid() returns the ID of the user whose permissions your program is using. In most cases these will be the same. The well-known case where the values differ is when the setuid bit is set on your program's executable file and the user running the program is different from the user who owns the executable. In that case os.getuid() returns the ID of the user who ran the program, while os.geteuid() returns the ID of the user who owns the executable.
1
40
0
The documentation for os.getuid() says: Return the current process's user id. And for os.geteuid() it says: Return the current process's effective user id. So what is the difference between user id and effective user id? For me both work the same (on both 2.x and 3.x). I am using it to check whether the script is being run as root.
What is difference between os.getuid() and os.geteuid()?
1
0
0
27,249
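A tiny illustration of the two calls; the root check at the end is the usual reason people ask, and it conventionally uses the effective UID.

```python
import os

print(os.getuid())   # real UID: the user who launched the process
print(os.geteuid())  # effective UID: whose permissions apply right now

# They differ only when e.g. the setuid bit is set on the executable.
if os.geteuid() == 0:
    print("running with root privileges")
```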
14,951,940
2013-02-19T07:23:00.000
0
0
0
0
javascript,python,html,css,google-visualization
14,998,894
1
true
0
0
There is a feasible way, but you probably won't like it. First you make your chart using the subheaders (so you have "", "", "1", "2", "1", "2", "1", "2"). Then you make a separate chart below it, make it zero height, no grid lines, no vertical axis, no legend, no nothing (but make sure it has the same chartWidth as the above chart). In the chart below you create a chart using "A", "B", "C", "D" as your categories, and no data (null, null, null, null). Your labels will automatically align to be centered on each pair of top values. However, if you really only have 1 value for the "A" series, then they won't line up (since the labels can't be distributed evenly). You can either create a blank series, or create a double-thick line for A (by putting the same value in both columns), or something tricky like that. If you are willing, you could alternatively write Javascript to place floating text boxes below the horizontal axis labels with some fancy CSS.
1
0
0
I need to create a table in the following format using Google Visualization: a header row A | B | C | D, with a sub-header row 1 | 2 beneath each of B, C and D. I am currently using a table with headers A, B1, B2, C1, C2, D1, D2, but I would like to divide them into sub-headers as shown above. Please let me know whether this is feasible using GVis and, if so, how it can be done; otherwise, please let me know if there are any workarounds for achieving this. Please let me know if any clarification of the question is required. Thanks.
How to use Google Visualization to create a table with Sub Columns?
1.2
0
0
321
14,955,468
2013-02-19T10:43:00.000
1
0
0
1
linux,django,python-2.7,centos
14,955,895
1
true
1
0
Your CentOS relies on python 2.4, so that's not going to work. You should probably create a new system user and install python 2.7 in its home directory (or use your root user and install python in /opt for global usage); you can find plenty of tutorials on Google. After successfully doing so, you can set an alias in your user's bash profile to define which python version to use. It's also common practice to create a virtualenv for each project and/or user.
1
0
0
I am not able to install Django. I am using CentOS 5 and am not able to set the python2.7 environment variable. Previously python 2.4.3 was available on my system, but after installing Python 2.7, when I check the version in the terminal with the "python -V" command it reports python 2.4.3, while "python2.7 -V" shows python 2.7. Please help me with this: 1. I need to set python2.7 as the default version. 2. Help me with the installation of Django.
need assistance to set python2.7 as default & to install Django
1.2
0
0
143
14,959,093
2013-02-19T13:47:00.000
1
0
0
0
python,.net,ironpython
14,959,257
1
true
0
1
Well, I've never used IronPython so I don't know how much help this will be, but what I usually do when trying to figure out these things in regular Python is to print type(sender), print sender and print dir(sender) to the console (or output to a file if you don't have a console available). This should help you figure out what exactly the "sender" parameter is. In the simplest case it could be the button itself, so a simple == will tell you which button it was. Or it could have a method/property that gets you the button object, in which case dir(sender) might contain an obvious one; if not, google the class name obtained from type(sender) and see if you can find any docs.
1
0
0
If there are, let's say, 4 buttons, all with the same Click event, how can I find out which button was pressed? The event looks like this: def Button_Click(self, sender, e). I'm sure I can compare sender to my buttons somehow. But how?
event handling iron python
1.2
0
0
901
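A hedged sketch of what that comparison can look like once sender turns out to be the Button itself; self.button1 and the WinForms Text property are assumptions about the surrounding code, not details from the question.

```python
# Hypothetical shared handler inside the window class.
def Button_Click(self, sender, e):
    if sender == self.button1:  # sender is the Button object that fired
        print("button 1 was pressed")
    else:
        # Many WinForms controls carry their caption on .Text.
        print("pressed:", sender.Text)
```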
14,960,161
2013-02-19T14:42:00.000
1
0
0
0
python,debugging,pdb
14,960,605
2
false
0
0
This is the thing: Ctrl+D does not kill programs, it cuts a read short. When you press Ctrl+D, you interrupt the process's read() call that's waiting for input. Most programs will abort when they read 0 bytes as input. If you press Ctrl+D before entering anything, you'll be sending 0 bytes down the input pipe, possibly inducing a shutdown of the program, which may think there is nothing left to be done. This is not forced, however: if you press some keys and then Ctrl+D, the read() call you interrupted will return that text, and the underlying program decides to wait for another round. That's why, when you press Ctrl+D again without entering any new text, you get the behavior you expect. This is what's probably happening in your case: you type some characters and they get buffered; you press Ctrl+D, and the text reaches ipdb, but it does not detect a newline and thus waits for more; you press Ctrl+D again, and this time 0 bytes reach ipdb, which assumes nothing more is coming and processes the text without a newline.
1
2
0
I am debugging my Python scripts with ipdb. Somehow I have the problem, that after entering a command, for instance n, s, c, b etc. I have to press Ctrl+D two times in order for ipdb to process the command and proceed. Any idea what causes this and how I can turn it off?
ipdb requires Ctrl+D for processing command
0.099668
0
0
463
14,961,151
2013-02-19T15:29:00.000
0
1
0
0
python,django,unit-testing
14,961,957
3
false
1
0
Did you try core developer Karen Tracey's book Django 1.1 Testing and Debugging? Although the title implies it's out of date, most of the advice is still applicable.
1
2
0
I'm trying to get caught up on unit testing, and I've looked over a few books - Debugging Django, Web Dev. with Django, and the official docs, but none seem to cover unit testing thoroughly enough for me. I'm also not an expert in Python web development, so maybe that's why. What I'm looking for is something that starts at an intermediate level of python skill/knowledge and covers Django unit testing from scratch, with a few good real-world examples. Any recommendations on such resources? Much appreciated.
Learning Django unit testing
0
0
0
311
14,962,414
2013-02-19T16:26:00.000
1
1
0
1
python,linux,amazon-web-services,amazon-ec2,amazon-rds
14,966,165
1
true
1
0
It is possible. You just have to write an init script and set up proper symbolic links in the /etc/rc#.d directories. It will be started with a parameter (start or stop) depending on whether the machine is starting up or shutting down.
1
1
0
I have an Amazon Ubuntu instance which I stop and start (not terminate). I was wondering if it is possible to run a script on start and stop of the server. Specifically, I am looking at writing a python boto script to take my RDS volume offline when the EC2 server is not running. Can anyone tell me if this is possible please?
Running a script on EC2 start and stop
1.2
0
1
1,561
14,964,717
2013-02-19T18:25:00.000
1
0
0
1
python,google-app-engine,audio-streaming
14,964,896
1
true
1
0
You can't make long running external calls with App Engine. Maximum deadline (task queue and cron job handler) for UrlFetch is 10 minutes. So, I think it is not possible.
1
1
0
I have a URL address for an audio stream; how can I rebroadcast it on the web under my own address (myapp.appspot.com)? Let me explain why I need it: I have a very narrow channel that will not stand many connections, so I have to do it with GAE. Thanks!
Rebroadcasting an audio stream with python on Google App Engine
1.2
0
0
193
14,968,441
2013-02-19T22:07:00.000
3
0
1
1
python
14,970,241
1
false
0
0
The walk itself can't give you progress, because there's no way of knowing in advance how many entries are under some directory tree.* However, in most programs that use walk, you're actually doing something with the files, which usually takes a whole lot longer than the implicit stat call. For example, grabbing my first program with os.walk in it, list(os.walk(path)) takes 2.301 seconds, while my actual function (despite only operating on a small percentage of those files) takes 139.104 seconds. And I think this kind of thing is pretty typical. So, you can first read in the entire walk (e.g., by using list(os.walk(path))), and then use that information to generate the progress for your real work. In a realistic program, you'd probably want to show an "indeterminate progress bar" with a label like "Determining size..." while doing the list(os.walk(path)), and then replace it with a percentage progress bar with "0/12345 files" once that's done. (In fact, I'm going to go add exactly that indeterminate progress bar to my program, now that I've thought of the idea…) (For a single-threaded interactive program, you obviously wouldn't want to just block on list(os.walk(path)); you might do it in a background thread with a callback to your main thread, or do one iteration of the walk object and runLater the rest each time through the event loop, etc.) * This isn't because no filesystem or OS ever could do such a thing, just because they don't. There would obviously be some tradeoffs—for example, creating and deleting lots of tiny files would be a lot slower if you had to walk up the whole tree updating counts. Classic Mac used to solve this problem by keeping a cached count in the Finder Info… which was great, except that it meant a call that could take either 1us or 1min to return, with no way of predicting which in advance (or interrupting it) programmatically.
1
2
0
for root, dirs, files in os.walk(rootDir, topdown=True): is something regularly used in Python scripts. I'm just wondering, is there any well-known way to provide progress here? When you have a large folder structure this API can take a while. Thanks.
Anyway to provide progress from os.walk?
0.53705
0
0
931
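A minimal sketch of the two-pass idea from the answer (root_dir is a placeholder): walk once to count, then report progress during the real walk. The extra walk costs only stat calls, which are usually cheap relative to the per-file work.

```python
import os

root_dir = "/some/path"  # hypothetical root

# Pass 1: count the files so a percentage is possible.
total = sum(len(files) for _, _, files in os.walk(root_dir))

# Pass 2: do the real work, reporting progress.
done = 0
for root, dirs, files in os.walk(root_dir):
    for name in files:
        # ... real per-file work on os.path.join(root, name) here ...
        done += 1
        print("%d/%d files" % (done, total))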
14,969,739
2013-02-19T23:42:00.000
35
0
1
0
python,garbage-collection,del
14,969,798
4
true
0
0
The del statement doesn't reclaim memory. It removes a reference, which decrements the reference count on the value. If the count is zero, the memory can be reclaimed. CPython will reclaim the memory immediately, there's no need to wait for the garbage collector to run. In fact, the garbage collector is only needed for reclaiming cyclic structures. As Waleed Khan says in his comment, Python memory management just works, you don't have to worry about it.
3
26
0
Calling del on a variable in Python: does this free the allocated memory immediately, or is it still waiting for the garbage collector to collect it (like in Java, where explicit calls have no effect on when the memory will be freed)?
Python del statement
1.2
0
0
11,169
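A small demonstration of the refcounting behaviour the accepted answer describes; note sys.getrefcount reports one extra reference for its own argument.

```python
import sys

data = [0] * 1000000
alias = data
print(sys.getrefcount(data))  # 3: data, alias, and getrefcount's argument

del data   # only unbinds the name; 'alias' still keeps the list alive
del alias  # last reference gone: CPython reclaims the memory right now
```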
14,969,739
2013-02-19T23:42:00.000
3
0
1
0
python,garbage-collection,del
14,969,791
4
false
0
0
"Deletion of a name removes the binding of that name from the local or global namespace". No more, no less. It does nothing to the object the name pointed to, except decrementing its refcount, and if refcount is not zero, the object will not be collected even when GC runs.
3
26
0
Calling del on a variable in Python: does this free the allocated memory immediately, or is it still waiting for the garbage collector to collect it (like in Java, where explicit calls have no effect on when the memory will be freed)?
Python del statement
0.148885
0
0
11,169
14,969,739
2013-02-19T23:42:00.000
0
0
1
0
python,garbage-collection,del
66,877,999
4
false
0
0
Regarding deletion: sometimes you have to work on large datasets where you perform memory-intensive operations and store a large amount of data in a variable, recursively. To save RAM, when you finish the entire operation you should delete the variable if you are no longer using it outside the recursive loop; you can use the command del varname followed by Python's garbage collector, gc.collect(). Regarding speed: speed matters most in applications such as financial applications with regulatory requirements, where you have to make sure the operation completes within the expected timeframe.
3
26
0
Calling del on a variable in Python: does this free the allocated memory immediately, or is it still waiting for the garbage collector to collect it (like in Java, where explicit calls have no effect on when the memory will be freed)?
Python del statement
0
0
0
11,169
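And a sketch of the one case the answers reserve for the collector: a reference cycle, where del alone never drops the refcount to zero (the collector runs automatically; the explicit call below just forces it).

```python
import gc

node = []
node.append(node)  # cycle: the list references itself
del node           # refcount never hits zero, memory not freed yet
gc.collect()       # the cycle detector finds and reclaims it
```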
14,969,929
2013-02-19T23:59:00.000
1
0
1
1
python,shell,process
14,970,626
1
false
0
0
You can't generally access one Python interpreter from another. The most general way to do something like this is to put an interpreter-on-a-socket (or -pipe or whatever) into your server program, and just connect your shell up to that interpreter. Doing this on top of the code module isn't hard, but making it as nice as the normal interactive interpreter shell takes a bit more work. I believe IDLE and IPython both contain lots of useful source code, and possibly even something you can use out of the box, or with minimal changes. It's also possible to share data directly between two separate programs. For example, use multiprocessing.Value on top of mmap, or, more simply, just keep the data in a database file instead of in memory. Then your shell can just read the data without interacting directly with the server. However, this means having appropriate locks in place, or trying to write as atomically as possible and accepting that the shell will still occasionally get garbage because of races. But really, most of the time, if you can afford to dump the data by pickling/JSON/whatever, that's both the easiest and the safest solution.
1
0
0
I have a Python application running on a web server via mod_wsgi, and I am able to access a python shell through SSH on the server. Part of the application generates a dictionary and a small number of lists in memory over time while the application is running. Is there a possible way of starting the Python shell on the server and accessing the dictionary and lists through the shell, or is the only option to program the application to pickle or json them and store them in a file periodically or on an event trigger? Even if this is not specific to a web server situation, is it possible for a Python shell to access an already running Python application?
Python shell access to a separate running script
0.197375
0
0
196
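A minimal sketch of the "just dump it" option the answer calls easiest and safest: write atomically via rename, so a reading shell never sees a half-written file. The path and state dict are placeholders.

```python
import json
import os
import tempfile

STATE_PATH = "/tmp/app_state.json"  # hypothetical location

def dump_state(state):
    # Write to a temp file in the same directory, then rename; rename is
    # atomic on the same filesystem, so readers never see a partial file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(STATE_PATH))
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.rename(tmp, STATE_PATH)

# In the separate SSH shell session:
#   import json; state = json.load(open("/tmp/app_state.json"))
```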
14,972,631
2013-02-20T05:03:00.000
3
0
1
0
python,naming-conventions,abstract-class
14,973,303
6
false
0
0
Create your 'abstract' class and raise NotImplementedError() in the abstract methods. It won't stop people using the class and, in true duck-typing fashion, it will let you know if you neglect to implement the abstract method.
3
20
0
I come from a C# background where the language has some built in "protect the developer" features. I understand that Python takes the "we're all adults here" approach and puts responsibility on the developer to code thoughtfully and carefully. That said, Python suggests conventions like a leading underscore for private instance variables. My question is, is there a particular convention for marking a class as abstract other than just specifying it in the docstrings? I haven't seen anything in particular in the python style guide that mentions naming conventions for abstract classes. I can think of 3 options so far but I'm not sure if they're good ideas: Specify it in the docstring above the class (might be overlooked) Use a leading underscore in the class name (not sure if this is universally understood) Create a def __init__(self): method on the abstract class that raises an error (not sure if this negatively impacts inheritance, like if you want to call a base constructor) Is one of these a good option or is there a better one? I just want to make sure that other developers know that it is abstract and so if they try to instantiate it they should accept responsibility for any strange behavior.
Python abstract classes - how to discourage instantiation?
0.099668
0
0
12,928
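A minimal sketch of the NotImplementedError convention from this answer; the class names are illustrative. A subclass that forgets to override fails loudly the first time the method is actually called, in keeping with duck typing.

```python
class AbstractDevice(object):
    def connect(self):
        raise NotImplementedError("subclasses must implement connect()")

class SerialDevice(AbstractDevice):
    def connect(self):
        print("opening serial port")

SerialDevice().connect()        # works
AbstractDevice().connect()      # raises NotImplementedError
```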
14,972,631
2013-02-20T05:03:00.000
3
0
1
0
python,naming-conventions,abstract-class
14,973,549
6
false
0
0
I just name my abstract classes with the prefix 'Abstract'. E.g. AbstractDevice, AbstractPacket, etc. It's about as easy and to the point as it comes. If others choose to go ahead and instantiate and/or use a class that starts with the word 'Abstract', then they either know what they're doing or there was no hope for them anyway. Naming it thus, also serves as a reminder to myself not to go nuts with deep abstraction hierarchies, because putting 'Abstract' on the front of a whole lot of classes feels stupid too.
3
20
0
I come from a C# background where the language has some built in "protect the developer" features. I understand that Python takes the "we're all adults here" approach and puts responsibility on the developer to code thoughtfully and carefully. That said, Python suggests conventions like a leading underscore for private instance variables. My question is, is there a particular convention for marking a class as abstract other than just specifying it in the docstrings? I haven't seen anything in particular in the python style guide that mentions naming conventions for abstract classes. I can think of 3 options so far but I'm not sure if they're good ideas: Specify it in the docstring above the class (might be overlooked) Use a leading underscore in the class name (not sure if this is universally understood) Create a def __init__(self): method on the abstract class that raises an error (not sure if this negatively impacts inheritance, like if you want to call a base constructor) Is one of these a good option or is there a better one? I just want to make sure that other developers know that it is abstract and so if they try to instantiate it they should accept responsibility for any strange behavior.
Python abstract classes - how to discourage instantiation?
0.099668
0
0
12,928
14,972,631
2013-02-20T05:03:00.000
0
0
1
0
python,naming-conventions,abstract-class
14,974,080
6
false
0
0
Enforcing things is possible, but rather unpythonic. When I came to Python after many years of C++ programming I also tried to do the same; I suppose most people try to if they have experience in more classical languages. Metaclasses would do the job, but Python checks very few things at compile time anyway: your check would still be performed at runtime. So, is the inability to create a certain class really that useful if it is discovered only at runtime? In C++ (and in C# as well) you cannot even compile code that creates an abstract class, and that is the whole point: to discover the problem as early as possible. If you have abstract methods, raising a NotImplementedError exception seems quite enough. NB: raising, not returning an error code! In Python, errors usually should not be silent unless they are silenced explicitly. Document it, and name the class in a way that says it's abstract. That's all. The quality of Python code is ensured mostly with methods that are quite different from those used in languages with advanced compile-time type checking; personally I consider that the most serious difference between dynamically typed languages and the others. Unit tests, coverage analysis, etc. As a result, the design of the code is quite different: everything is done not to enforce things, but to make testing them as easy as possible.
3
20
0
I come from a C# background where the language has some built in "protect the developer" features. I understand that Python takes the "we're all adults here" approach and puts responsibility on the developer to code thoughtfully and carefully. That said, Python suggests conventions like a leading underscore for private instance variables. My question is, is there a particular convention for marking a class as abstract other than just specifying it in the docstrings? I haven't seen anything in particular in the python style guide that mentions naming conventions for abstract classes. I can think of 3 options so far but I'm not sure if they're good ideas: Specify it in the docstring above the class (might be overlooked) Use a leading underscore in the class name (not sure if this is universally understood) Create a def __init__(self): method on the abstract class that raises an error (not sure if this negatively impacts inheritance, like if you want to call a base constructor) Is one of these a good option or is there a better one? I just want to make sure that other developers know that it is abstract and so if they try to instantiate it they should accept responsibility for any strange behavior.
Python abstract classes - how to discourage instantiation?
0
0
0
12,928
14,972,773
2013-02-20T05:17:00.000
1
0
1
0
python,notepad++
46,432,794
7
false
0
0
Please make sure it is not disabled. It happened to me too. To check, go to Preferences, click on Language items on the left, and make sure that Python is not among the disabled languages. Restart Notepad++ after applying the changes.
3
4
0
I know Notepad++ supports Python but under the language menu I cannot find it! At "P," it only lists Pascal, Perl, PHP, Postscript, PowerShell, and strangely, Properties. I am writing some Python scripts and I want the syntax to be highlighted. How can I activate Python highlighting?
Python Syntax Highlighting in Notepad++
0.028564
0
0
15,676
14,972,773
2013-02-20T05:17:00.000
0
0
1
0
python,notepad++
54,853,804
7
false
0
0
Yes, Python is at the bottom of the Language list, not under the P item (I believe it was moved out of P after I played with the settings).
3
4
0
I know Notepad++ supports Python but under the language menu I cannot find it! At "P," it only lists Pascal, Perl, PHP, Postscript, PowerShell, and strangely, Properties. I am writing some Python scripts and I want the syntax to be highlighted. How can I activate Python highlighting?
Python Syntax Highlighting in Notepad++
0
0
0
15,676
14,972,773
2013-02-20T05:17:00.000
8
0
1
0
python,notepad++
14,975,308
7
false
0
0
Maybe it's disabled. Check in Preferences | Language Menu/Tab Settings whether it's among the disabled items.
3
4
0
I know Notepad++ supports Python but under the language menu I cannot find it! At "P," it only lists Pascal, Perl, PHP, Postscript, PowerShell, and strangely, Properties. I am writing some Python scripts and I want the syntax to be highlighted. How can I activate Python highlighting?
Python Syntax Highlighting in Notepad++
1
0
0
15,676
14,977,687
2013-02-20T10:34:00.000
0
0
0
0
python,google-app-engine
14,977,789
2
false
0
0
Use timestamps. If the timestamp of the object you're writing doesn't match the timestamp the object had when you read it, it has been modified in the meantime.
1
1
0
For example, all "Transaction Table" entities are editable by all users. How can I check whether someone else has changed and updated the same entity?
How to check whether someone changed the entity data
0
0
0
92
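A hedged sketch of the timestamp check; 'entity' here is any hypothetical record object whose updated_at field the datastore refreshes on each save (on App Engine, that role is typically played by a DateTimeProperty with auto_now=True), and the whole check should run inside a transaction so the compare-and-set is atomic.

```python
def save_if_unchanged(entity, timestamp_at_read, new_values):
    # Refuse the write if someone else saved between our read and write.
    if entity.updated_at != timestamp_at_read:
        raise ValueError("entity was modified by someone else")
    for key, value in new_values.items():
        setattr(entity, key, value)
    # ... persist the entity here ...
```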
14,979,319
2013-02-20T11:57:00.000
0
0
1
0
python,thread-safety,global-variables
14,980,179
4
false
0
0
If you initialize it once, and you initialize it when the module is loaded (that means: before it can be accessed from other threads), you will have no problems with thread safety at all. No synchronization is needed. But if you mean a more complex scenario, you have to explain it further to get a reasonable code example.
2
9
0
I want to use a global variable, initialize it once, and have thread-safe access to it. Can someone share an example please?
How to use global variable in python, in a threadsafe way
0
0
0
14,487
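A minimal sketch of thread-safe one-time initialization with a lock, for the case the answer sets aside: when the value cannot simply be created at import time. expensive_setup is a hypothetical initializer.

```python
import threading

_lock = threading.Lock()
_value = None

def get_value():
    global _value
    with _lock:  # only one thread runs the initializer
        if _value is None:
            _value = expensive_setup()  # hypothetical one-time setup
    return _value
```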
14,979,319
2013-02-20T11:57:00.000
1
0
1
0
python,thread-safety,global-variables
14,979,833
4
false
0
0
You do have a problem if you are using multiprocessing.Processes. In which case you should take a look at Managers and Queues in the multiprocessing module.
2
9
0
I want to use a global variable, initialize it once, and have thread-safe access to it. Can someone share an example please?
How to use global variable in python, in a threadsafe way
0.049958
0
0
14,487
14,983,709
2013-02-20T15:32:00.000
1
0
0
1
python,opencl,gpgpu,pyopencl,amd-processor
15,017,872
2
false
0
0
On NVIDIA the binary will be in PTX format. 1. Obtain the binary sizes with clGetProgramInfo() using the flag CL_PROGRAM_BINARY_SIZES. 2. Fetch the binaries with clGetProgramInfo() using the flag CL_PROGRAM_BINARIES and store them in a .ptx file. 3. Call clCreateProgramWithBinary() with the .ptx file as input.
1
2
0
I have two python scripts in separate files. The first one contains an opencl program that performs some image processing on the image passed to it and returns the results. The second script reads an image from a file, calls the first script passing the read image as a parameter, and obtains the returned results, which are used for further processing. Now, I have about 100 images in the folder, so the second script calls the first script 100 times, and each time the first script is called the opencl kernel is compiled, which is absolutely unnecessary as all the images have the same format and dimensions. Is there a way to compile the opencl kernel once, store it in a binary format, and call it whenever required? Of course, I can put all the code in one large file, compile the kernel once and call it in a loop 100 times, but I want separate files for convenience. Hardware: CPU: AMD A8 APU, AMD Phenom 2 X4; GPU: AMD Radeon HD 7640G + 7670M Dual Graphics, ATI Radeon HD5770.
Can the compiled opencl program be stored as a seperate binary file?
0.099668
0
0
1,010
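A hedged PyOpenCL sketch of the same steps, since the question uses pyopencl (the kernel file names are placeholders): fetch the compiled binaries once, persist them, and rebuild the program from the binary on later runs. Binaries are device-specific, so cache one per device.

```python
import pyopencl as cl

ctx = cl.create_some_context()
dev = ctx.devices[0]

# First run: compile from source and save the device binary.
src = open("kernel.cl").read()            # hypothetical kernel source
prog = cl.Program(ctx, src).build()
binary = prog.get_info(cl.program_info.BINARIES)[0]
open("kernel.bin", "wb").write(binary)

# Later runs: rebuild from the cached binary, skipping compilation.
cached = open("kernel.bin", "rb").read()
prog2 = cl.Program(ctx, [dev], [cached]).build()
```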
14,986,129
2013-02-20T17:27:00.000
10
0
0
0
python,mysql,django,mysql-python
15,328,753
3
true
1
0
A shorter answer would be, "MySQL doesn't support that type of cursor", so neither does Python-MySQL, so the reason one connection command is preferred is because that's the way MySQL works. Which is sort of a tautology. However, the longer answer is: A 'cursor', by your definition, would be some type of object accessing tables and indexes within an RDMS, capable of maintaining its state. A 'connection', by your definition, would accept commands, and either allocate or reuse a cursor to perform the action of the command, returning its results to the connection. By your definition, a 'connection' would/could manage multiple cursors. You believe this would be the preferred/performant way to access a database, as 'connections' are expensive and 'cursors' are cheap. However: A cursor in MySQL (and other RDMSes) is not the user-accessible mechanism for performing operations. MySQL (and others) perform operations as a "set"; or rather, they compile your SQL command into an internal list of commands, and do numerous, complex bits depending on the nature of your SQL command and your table structure. A cursor is a specific mechanism, utilized within stored procedures (and there only), giving the developer a way to work with data in a procedural way. A 'connection' in MySQL is what you think of as a 'cursor', sort of. MySQL does not expose its internals for you as an iterator, or pointer, that is merely moving over tables. It exposes its internals as a 'connection' which accepts SQL and other commands, translates those commands into an internal action, performs that action, and returns its result to you. This is the difference between a 'set' and a 'procedural' execution style (which is really about the granularity of control you, the user, are given access to, or at least, the granularity inherent in how the RDMS abstracts away its internals when it exposes them via an API).
3
6
0
Example scenario: MySQL running on a single server -> HOSTNAME. Two MySQL databases on that server -> USERS, GAMES. Task -> fetch the 10 newest games from GAMES.my_games_table, and fetch the users playing those games from USERS.my_users_table (assume no joins). In Django as well as Python MySQLdb, why is having one cursor for each database preferable? What is the disadvantage of an extended cursor which is single per MySQL server and can switch databases (e.g. by querying "use USERS;") and then work on the corresponding database? MySQL connections are cheap, but isn't a single connection better than many, if there is a linear flow and no complex transactions which might need two cursors?
Why django and python MySQLdb have one cursor per database?
1.2
1
0
1,351
14,986,129
2013-02-20T17:27:00.000
2
0
0
0
python,mysql,django,mysql-python
15,302,237
3
false
1
0
As you say, MySQL connections are cheap, so for your case, I'm not sure there is a technical advantage either way, outside of code organization and flow. It might be easier to manage two cursors than to keep track of which database a single cursor is currently talking to by painstakingly tracking SQL 'USE' statements. Mileage with other databases may vary -- remember that Django strives to be database-agnostic. Also, consider the case where two different databases, even on the same server, require different access credentials. In such a case, two connections will be necessary, so that each connection can successfully authenticate.
3
6
0
Example scenario: MySQL running on a single server -> HOSTNAME. Two MySQL databases on that server -> USERS, GAMES. Task -> fetch the 10 newest games from GAMES.my_games_table, and fetch the users playing those games from USERS.my_users_table (assume no joins). In Django as well as Python MySQLdb, why is having one cursor for each database preferable? What is the disadvantage of an extended cursor which is single per MySQL server and can switch databases (e.g. by querying "use USERS;") and then work on the corresponding database? MySQL connections are cheap, but isn't a single connection better than many, if there is a linear flow and no complex transactions which might need two cursors?
Why django and python MySQLdb have one cursor per database?
0.132549
1
0
1,351
14,986,129
2013-02-20T17:27:00.000
0
0
0
0
python,mysql,django,mysql-python
15,421,235
3
false
1
0
One cursor per database is not necessarily preferable, it's just the default behavior. The rationale is that different databases are more often than not on different servers, use different engines, and/or need different initialization options. (Otherwise, why should you be using different "databases" in the first place?) In your case, if your two databases are just namespaces of tables (what should be called "schemas" in SQL jargon) but reside on the same MySQL instance, then by all means use a single connection. (How to configure Django to do so is actually an altogether different question.) You are also right that a single connection is better than two, if you only have a single thread and don't actually need two database workers at the same time.
3
6
0
Example scenario: MySQL running on a single server -> HOSTNAME. Two MySQL databases on that server -> USERS, GAMES. Task -> fetch the 10 newest games from GAMES.my_games_table, and fetch the users playing those games from USERS.my_users_table (assume no joins). In Django as well as Python MySQLdb, why is having one cursor for each database preferable? What is the disadvantage of an extended cursor which is single per MySQL server and can switch databases (e.g. by querying "use USERS;") and then work on the corresponding database? MySQL connections are cheap, but isn't a single connection better than many, if there is a linear flow and no complex transactions which might need two cursors?
Why django and python MySQLdb have one cursor per database?
0
1
0
1,351
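A small sketch of the single-connection case from the last answer: when both schemas live on one MySQL instance, qualify the table names instead of issuing USE or opening a second connection. Credentials and column names are placeholders.

```python
import MySQLdb

conn = MySQLdb.connect(host="HOSTNAME", user="app", passwd="secret")
cur = conn.cursor()

# Schema-qualified names let one cursor span both databases.
cur.execute(
    "SELECT id, name FROM GAMES.my_games_table ORDER BY id DESC LIMIT 10"
)
games = cur.fetchall()

cur.execute("SELECT id, username FROM USERS.my_users_table")
users = cur.fetchall()
```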
14,988,689
2013-02-20T19:51:00.000
5
0
1
0
python,encryption,aes
14,989,960
3
false
0
0
The best way of making sure that your ciphertext won't decrypt when it has been changed is to add an authentication tag. An authentication tag is used to provide authentication and integrity of the ciphertext. This tag may consist of a MAC (e.g. AES-CMAC or HMAC using SHA-256) over the ciphertext. This however requires a second key to be secure. Another method is to use authenticated encryption such as GCM. GCM uses a single key and generates an authentication tag (the size can be configured). Make sure you use a correctly generated IV (the IV could be prefixed to the ciphertext, and should be included when calculating the authentication tag), and don't forget that the size of your plain text may not be hidden. You should verify the correctness of the tag before decryption of the ciphertext. Note that in general, you should not encrypt passwords, unless you require access to the precise password at a later date. For verification of passwords, use PBKDF2 instead.
2
5
0
I've written my own encryption method using AES in a project I've been working on lately using PyCrypto. I use a hash to generate a 32-byte password and feed that to AES-256 encryption using CBC. The file input is padded using PKCS#7 padding so its length is divisible by 16. I can encrypt and decrypt the file without incident, and the input file originally encrypted and the output file have the same SHA-256 hash. The only problem I'm finding is that if I supply the wrong passphrase, decryption still happens. This is a problem for what I'm doing, as I need decryption to fail fast if the passphrase is wrong. How can I make this happen? I've heard of other modes of AES encryption, but it seems that PyCrypto only supports ECB, CBC, CFB, OFB, CTR, and OpenPGP. How can I implement cryptographically strong AES which will fail decryption without the right passphrase?
Making AES decryption fail if invalid password
0.321513
0
0
2,069
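A hedged encrypt-then-MAC sketch along the lines of this answer, using PyCrypto for AES (as in the question) and the stdlib hmac module. Both keys are assumed to be independent 32-byte keys derived from the passphrase, and PKCS#7 padding is left out because the question already handles it.

```python
import hashlib
import hmac
import os
from Crypto.Cipher import AES  # PyCrypto, as used in the question

def encrypt(enc_key, mac_key, padded_plaintext):
    # CBC-encrypt with a fresh IV, then MAC over IV + ciphertext.
    iv = os.urandom(16)
    ct = iv + AES.new(enc_key, AES.MODE_CBC, iv).encrypt(padded_plaintext)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def decrypt(enc_key, mac_key, blob):
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    # compare_digest avoids timing leaks (Python 2.7.7+/3.3+).
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong passphrase or corrupted data")
    iv, body = ct[:16], ct[16:]
    return AES.new(enc_key, AES.MODE_CBC, iv).decrypt(body)
```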
14,988,689
2013-02-20T19:51:00.000
3
0
1
0
python,encryption,aes
14,989,032
3
true
0
0
There is nothing about AES (or any other encryption algorithm for that matter) that could allow you to know whether you have the correct key. That said, it's a very useful feature when you actually want to use cryptography outside of the realm of mathematics. What you need to do is add a block with a known value at the start of your message, that way after decrypting the first block you can compare it against the known value and know whether you have the wrong key. If the data you're encrypting has a known header you could use this instead. Alternatively you could send a cryptographic hash (for example SHA-256) of the key along with the message, an attacker would only be able to recover the key if they could break the hash.
2
5
0
I've written my own encryption method using AES in a project I've been working on lately using PyCrypto. I use a hash to generate a 32-byte password and feed that to AES-256 encryption using CBC. The file input is padded using PKCS#7 padding so its length is divisible by 16. I can encrypt and decrypt the file without incident, and the input file originally encrypted and the output file have the same SHA-256 hash. The only problem I'm finding is that if I supply the wrong passphrase, decryption still happens. This is a problem for what I'm doing, as I need decryption to fail fast if the passphrase is wrong. How can I make this happen? I've heard of other modes of AES encryption, but it seems that PyCrypto only supports ECB, CBC, CFB, OFB, CTR, and OpenPGP. How can I implement cryptographically strong AES which will fail decryption without the right passphrase?
Making AES decryption fail if invalid password
1.2
0
0
2,069
14,989,100
2013-02-20T20:16:00.000
1
0
1
0
java,python,javabeans
14,989,253
5
false
1
0
Implements serializable: pick your favorite format and write a function that will serialize it for you; JSON, pickle, and YAML all work, just decide. Getters and setters / private properties: we don't do that here; those are artifacts of bondage languages, and we are all adults in this language. Dummy constructor: again, not something we really worry about, as our constructors are a little bit smarter than other languages'. You can define a single __init__ that does all your initialization; if you must, write a factory or subclass it.
1
23
0
I am fairly new to OOP in Python; I am coming from a Java background. How would you write a JavaBean equivalent in Python? Basically, I need a class that: implements serializable; has getters and setters -> private properties; has a dummy constructor. Any input? I am looking for sample code!
JavaBean equivalent in Python
0.039979
0
0
15,017
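A minimal sketch of what that adds up to: plain attributes, a normal constructor with defaults, and JSON for serialization. The Person class is illustrative.

```python
import json

class Person(object):
    def __init__(self, name="", age=0):
        # Plain attributes; no getters/setters needed.
        self.name = name
        self.age = age

    def to_json(self):
        return json.dumps(self.__dict__)

    @classmethod
    def from_json(cls, s):
        return cls(**json.loads(s))

p = Person("Bob", 29)
print(p.to_json())  # e.g. {"age": 29, "name": "Bob"}
```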
14,991,002
2013-02-20T22:17:00.000
1
0
0
0
python,django,forms,date,callback
14,991,863
2
false
1
0
Why not just use a DateField instead?
1
1
0
I have a form that has a CharField input for a European date. I need to transform it into a Python date object. Is there a way to let the form take care of it during validation? Some callback? I don't want to do it in the view where the form is processed.
Django forms: get the value of a forms.CharField as a date
0.099668
0
0
1,270
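A minimal sketch of the DateField suggestion; the form name, field name, and day-first format are assumptions.

```python
from django import forms

# DateField parses the string during validation, so cleaned_data
# already holds a datetime.date; no work needed in the view.
class EventForm(forms.Form):
    when = forms.DateField(input_formats=["%d/%m/%Y"])  # European format

# In a view:
#   form = EventForm(request.POST)
#   if form.is_valid():
#       date_obj = form.cleaned_data["when"]  # datetime.date
```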
14,991,783
2013-02-20T23:21:00.000
1
0
0
0
c++,python,mysql,c,django
14,992,070
1
true
1
0
This is a completely valid concern and a very common problem. You have described creating a RESTful API. I guess it could be considered a proxy to a database, but it is not usually referred to as a proxy. Django is a great tool to accomplish this. Django even has a couple of packages that will assist in speedy development; Django REST Framework, Tastypie, and django-piston are the most popular. Of course, you could just use plain old Django. Your Django project would be the only thing that interfaces with the database, and clients send authenticated requests to Django, so clients will never connect directly to your database. This gives you fine-grained permission control on a per-client, per-resource basis. "The webserver will be under a much larger load by processing all the SQL queries and this could potentially exceed the limit of my host": I believe scaling a webservice is going to be a lot easier than scaling direct connections from your clients to your database. There are many tried and true methods for scaling apps that make hundreds of requests per second to their databases. Because you have Django between the clients and the database, you can implement caching for frequently requested resources. "Additionally, I would have to figure out some way of serializing and transmitting the SQL results in Python and then unserializing them in C/C++ on the client side": this should be a moot issue. There are lots of extremely popular data interchange formats. I have never used C/C++, but a quick search turned up a couple of C/C++ JSON serializers, and Python has JSON built in for free, so there shouldn't be any custom code to maintain for this if you use a premade C/C++ JSON library. "Any other downsides to this approach people can think of?": I don't think there are any downsides. It is a tried and true method, proven for a decade; the most popular sites in the world expose themselves through RESTful APIs. "Does this sound reasonable and if it does, anything that could ease working on it, such as Python or C libraries to help develop the proxy interface?": it sounds very reasonable, and the Django apps I mentioned at the beginning of this answer should provide some boilerplate to let you get started on your API quicker.
1
3
0
I'm in the process of building a Django powered site that is backed by a MySQL server. This MySQL server is going to be accessed from additional sources, other than the website, to read and write table data; such as a program that users run locally which connects to the database. Currently the program running locally is using the MySQL/C Connector library to connect directly to the sql server and execute queries. In a final release to the public this seems insecure, since I would be exposing the connection string to the database in the code or in a configuration file. One alternative I'm considering is having all queries be sent to the Django website (authenticated with a user's login and password) and then the site will sanitize and execute the queries on the user's behalf and return the results to them. This has a number of downsides that I can think of. The webserver will be under a much larger load by processing all the SQL queries and this could potentially exceed the limit of my host. Additionally, I would have to figure out some way of serializing and transmitting the sql results in Python and then unserializing them in C/C++ on the client side. This would be a decent amount of custom code to write and maintain. Any other downsides to this approach people can think of? Does this sound reasonable and if it does, anything that could ease working on it; such as Python or C libraries to help develop the proxy interface? If it sounds like a bad idea, any suggestions for alternative solutions i.e. a Python library that specializes in this type of proxy sql server logic, a method of encrypting sql connection strings so I can securely use my current solution, etc...? Lastly, is this a valid concern? The database currently doesn't hold any terribly sensitive information about users (most sensitive would be their email and their site password which they may have reused from another source) but it could in the future which is my cause for concern if it's not secure.
Django as a mysql proxy server?
1.2
1
0
377
14,992,577
2013-02-21T00:31:00.000
3
0
1
0
python,json,dictionary,formatting
14,992,707
2
false
0
0
The first is much cleaner, in my opinion. It groups the attributes of each person together, which lends itself well to converting into a Person object. Iteration and sorting are also easier when there is only a single list, and for sorting Python provides attrgetter for simple sort keys. Technically, the second might be more efficient due to fewer dictionaries, but clarity beats any tiny gain from that.
1
1
0
Which of the following two examples is considered a better format for JSON, in terms of convention, standards and/or saving memory (or for any other reason)? Thank you in advance.

Example 1:
{ "items": [
    { "position": "Programmer", "age": 29, "fname": "Bob" },
    { "position": "Developer", "age": 24, "fname": "Joe" },
    { "position": "DBA", "age": 31, "fname": "Dave" },
    { "position": "Systems", "age": 40, "fname": "Cindy" },
    { "position": "Designer", "age": 32, "fname": "Erin" },
    { "position": "NWA", "age": 45, "fname": "Sam" },
    { "position": "Processor", "age": 20, "fname": "Lenny" },
    { "position": "Webmaster", "age": 28, "fname": "Ed" }
] }

Example 2:
{ "position": ["Programmer", "Developer", "DBA", "Systems", "Designer", "NWA", "Processor", "Webmaster"],
  "age": [29, 24, 31, 40, 32, 45, 20, 28],
  "fname": ["Bob", "Joe", "Dave", "Cindy", "Erin", "Sam", "Lenny", "Ed"] }
Formatting a Python Dictionary for JSON
0.291313
0
0
316
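A small sketch of why example 1 iterates and sorts so easily; the answer mentions attrgetter for objects, and itemgetter is the analogue for plain dicts.

```python
from operator import itemgetter

items = [
    {"position": "Programmer", "age": 29, "fname": "Bob"},
    {"position": "Developer", "age": 24, "fname": "Joe"},
]

# Each item is self-contained: no index juggling across parallel lists.
for person in sorted(items, key=itemgetter("age")):
    print(person["fname"], person["age"])
```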
14,994,651
2013-02-21T04:39:00.000
2
0
0
0
python,programming-languages,tkinter
14,994,664
1
false
0
1
Ultimately, what you need to do is put all of the logic behind creating and using a single workspace on a single Frame object. Then you just need to create 2 Frames side-by-side -- Each one holding a "workspace".
1
0
0
My problem is this: I've already created a program with tkinter, but I want to run it twice in the same window, one instance on each side of the window. How can I do that? The idea is to be able to compare the data from both programs, so I want them to work separately. Many thanks. PS: I cannot post an image for you because of my reputation, sorry :(
How can I create 2 workspaces in tkinter
0.379949
0
0
100
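A minimal sketch of the answer's idea: wrap the existing program's UI in a Frame subclass and instantiate it twice, side by side. Written for Python 2's Tkinter, matching the era of the question.

```python
import Tkinter as tk  # 'tkinter' on Python 3

class Workspace(tk.Frame):
    def __init__(self, master):
        tk.Frame.__init__(self, master, borderwidth=2, relief="groove")
        tk.Label(self, text="workspace").pack()
        # ... build the rest of the existing program's widgets here ...

root = tk.Tk()
Workspace(root).pack(side="left", fill="both", expand=True)
Workspace(root).pack(side="left", fill="both", expand=True)
root.mainloop()
```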
14,994,955
2013-02-21T05:08:00.000
0
0
1
0
python,notepad++
63,126,890
2
false
0
0
To apply word wrap, just go to View in the menu bar and check the Word wrap option.
1
3
0
I know the Python coding standard has a limit of 78 characters per line. I am working in Notepad++ and how do I set it so it wraps after 78 characters?
Wrapping in Notepad++
0
0
0
9,109
14,997,400
2013-02-21T08:12:00.000
0
1
0
0
python,string-comparison,confluence,zabbix
15,069,936
1
false
1
0
This might not be the solution you are looking for, but you could have the updates generate an external html page and then use an {html-include} in confluence. So the confluence pages wouldn't be updated, but their displayed content would be correct. The problem with this is that none of the confluence pages would be updated, so if you want a feed to notify people of the changes on confluence it wouldn't get the job done.
1
0
0
I have a python script that runs once a day, connects to our Zabbix monitoring database, pulls out all the active monitoring checks, and documents them in Confluence. My problem is that each host's Confluence page gets updated every time the script runs, even if the monitoring hasn't changed. A quick hack would be to get a hash of the page content, compare it with a hash of the script-generated content, and only replace the page when the hashes don't match. Obviously the problems with this are that the script still needs to generate the whole page content for comparison, and that it replaces the whole page or not at all, losing Confluence's built-in diff checker. I'm hoping to find a more elegant solution, especially one that may allow me to update only the differences...
Automatic Zabbix -> Confluence, creating too many updates
0
0
0
412
14,997,729
2013-02-21T00:31:00.000
0
0
0
0
python,arcgis,python-idle
16,342,517
1
false
0
0
I have ended up running python scripts in a CMD shell, while editing others in IDLE. There's probably a better way, but this works.
1
1
0
Is it possible to run more than one Python script at the same time? When I try to start a second instance of IDLE, I get an error message: "Socket Error: No connection could be made because the target machine actively refused it." and then "IDLE's subprocess didn't make a connection...." Thanks
Multiple python sessions with IDLE
0
0
1
623
14,998,497
2013-02-21T09:20:00.000
1
0
0
0
python,scipy
14,999,516
1
true
0
0
The standard error of a linear regression is the standard deviation of the series obtained by subtracting the fitted model from your data points. It indicates how well your data points can be fitted by a linear model.
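For illustration, a small sketch of where that value comes out of linregress (the data here is made up):

```python
import numpy as np
from scipy import stats

x = np.arange(10)
y = 2.0 * x + np.random.normal(scale=0.5, size=10)  # noisy straight line

slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
# std_err is the standard error of the fitted slope; it is derived from the
# scatter of the residuals y - (slope*x + intercept), not from any per-point
# uncertainties (linregress never sees those).
print(slope, std_err)
```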
1
0
1
I need to fit a straight line to my data to find out if there is a gradient. I am currently doing this with scipy.stats.linregress. I'm a little confused though, because one of the outputs of linregress is the "standard error", but I'm not sure how linregress calculated this, as the uncertainty of your data points is not given as an input. Surely the uncertainty on the data points influence how uncertain the given gradient is? Thank you!
error in Python gradient measurement
1.2
0
0
340
15,003,202
2013-02-21T13:15:00.000
0
0
0
1
python,hadoop,mapreduce,hadoop-streaming
15,004,130
2
false
0
0
Yes, you can specify a name for each job using job.setJobName(String). If you were to set the job name to something distinguishing you should be able to tell them apart. For example, by using something like ManagementFactory.getRuntimeMXBean().getName() you can get the process id and machine name (on Linux anyway; I'm unsure of the behaviour on other operating systems) in the format 1234@localhost, where 1234 is the process id, which you could set as the job name to tell them apart.
1
1
0
I have a Hadoop cluster, and different processes are able to submit mapreduce jobs to this cluster (they all use the same user account). Is there a way to distinguish these jobs? Some kind of description which can be added to a job during submission, like 'This is a job of process "1234", do not touch'? I am using Python and HadoopStreaming, and would like to distinguish jobs using a simple hadoop job -list (or at least using the web management interface).
Description of Hadoop job
0
0
0
109
15,003,468
2013-02-21T13:27:00.000
2
0
1
0
python
15,015,817
2
false
0
0
The ASCII BEL character only sounds a bell where a bell is supported. Many terminals and terminal emulators do give this special meaning to the BEL character, but as you noticed, IDLE and Pydev do not. It is not necessarily a bug, but merely a missing feature.
1
1
0
Environment: Windows 7 English 32-bit, Python 2.7.3. I can print a beep in IPython, but in Pydev or IDLE it doesn't work. It only prints an unrecognized char and does not make a beep sound. Why? Thanks.
How to print a '\007' or '\a' (beep) in IDLE or Pydev
0.197375
0
0
1,594
15,003,487
2013-02-21T13:28:00.000
1
0
0
0
javascript,python,json,sublimetext2
15,007,333
1
false
1
0
I don't know about tree views, but with jsFormat you can get a nice format of the JSON object, basically a pretty-print. Then, if you'd like to get advanced and your JSON contains a lot of objects, you can always use code folding. Although this doesn't work very well with arrays (it folds the complete JSON, for me at least).
1
1
0
Is it possible to create a tree view for arbitrary data (JSON) as a plugin for Sublime Editor. I like working with javascript, and don't like having to switch to my firefox console to inspect objects. Is there an existing plugin/solution? Is it possible for me to make my own - can a tree view be displayed in Sublime Editor easily?
Sublime Editor Tree View
0.197375
0
0
1,374
15,004,772
2013-02-21T14:29:00.000
8
0
1
0
python,iterator,itertools
45,022,592
6
false
0
0
They do very similar things. For a small number of iterables, itertools.chain(*iterables) and itertools.chain.from_iterable(iterables) perform similarly. The key advantage of from_iterable lies in the ability to handle a large (potentially infinite) number of iterables, since all of them need not be available at the time of the call.
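A short sketch of the difference; the infinite-generator case is where from_iterable matters:

```python
from itertools import chain, count, islice

print(list(chain([1, 2], [3, 4])))        # [1, 2, 3, 4] -- iterables as arguments

# chain.from_iterable takes one iterable of iterables, which may be lazy and
# even infinite -- each inner iterable is only built when the chain reaches it:
pairs = ([n, n + 1] for n in count(0))    # infinite generator of lists
print(list(islice(chain.from_iterable(pairs), 6)))  # [0, 1, 1, 2, 2, 3]
```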
3
83
0
I could not find any valid example on the internet where I can see the difference between them and why to choose one over the other.
What is the difference between chain and chain.from_iterable in itertools?
1
0
0
36,967
15,004,772
2013-02-21T14:29:00.000
4
0
1
0
python,iterator,itertools
51,928,887
6
false
0
0
Another way to see it: chain(iterable1, iterable2, iterable3, ...) is for when you already know what iterables you have, so you can write them as these comma-separated arguments. chain.from_iterable(iterable) is for when your iterables (like iterable1, iterable2, iterable3) are obtained from another iterable.
3
83
0
I could not find any valid example on the internet where I can see the difference between them and why to choose one over the other.
What is the difference between chain and chain.from_iterable in itertools?
0.132549
0
0
36,967
15,004,772
2013-02-21T14:29:00.000
0
0
1
0
python,iterator,itertools
60,322,453
6
false
0
0
Another way to look at it: use chain.from_iterable when you have an iterable of iterables, like a nested (or compound) iterable, and use chain for simple iterables.
3
83
0
I could not find any valid example on the internet where I can see the difference between them and why to choose one over the other.
What is the difference between chain and chain.from_iterable in itertools?
0
0
0
36,967
15,004,797
2013-02-21T14:30:00.000
0
0
0
0
python,google-app-engine,google-sites
15,004,843
1
false
1
0
I think you can try to reach the Google APIs developers to tell them about this bug; they might not be aware of it.
1
1
0
I'm using the Google Sites API in my Python code deployed on Google App Engine. I have come across a problem: the Google Sites API allows you to create a site, add users to a site (access permissions), etc. We get status 200 from the API saying that the site has been created, and the same for adding users to the Google Site, but when I go to sites.google.com to access that site it still says 'Creating your site'. I can see a site locked in a wait state for almost a week. We don't have any specific steps to reproduce it; it appears at random. Please suggest what the correct solution is, or if there is no perfect solution then suggest a workaround.
Even though the Google Sites API creates a site, we cannot still access the site
0
0
1
103
15,006,278
2013-02-21T15:40:00.000
2
0
0
1
java,python,jenkins,jenkins-plugins
15,006,928
2
false
0
0
Instead of killing the job, have another job that programmatically terminates all the required jobs. You could reuse the same property file to know which jobs need to be killed. You could use a Groovy script to terminate the jobs.
1
3
0
I have a parent job that triggers many downstream jobs dynamically. I use Python code to generate the list of jobs to be triggered, write it to a properties file, inject the file using the EnvInject plugin, and then use the Parameterized Trigger plugin with the job-list variable (comma separated) to launch the jobs (if anyone knows an easier way of doing this I would love to hear that also!). It works great except that when killing the parent job, the triggered jobs continue to run, and I want them dead also when killing the parent. Is there a plugin or way to implement this? Maybe a hook that is called when a job is killed? EDIT: Sorry for the confusion, I wasn't clear about what I meant by "killing" the job. I mean clicking the red 'x' button in the Jenkins GUI, not the Unix signal. Thanks in advance.
How to kill downstream jobs if upstream job is stopped?
0.197375
0
0
1,600
15,008,875
2013-02-21T17:44:00.000
6
0
0
0
python,artificial-intelligence,neural-network,artificial-life
15,011,126
2
true
0
0
If the environment is benign enough (e.g. it's easy enough to find food) then just moving randomly may be a perfectly viable strategy, and reproductive success may be far more influenced by luck than anything else. Also consider unintended consequences: e.g. if offspring are co-sited with their parent then both are immediately in competition with each other in the local area, and this might be sufficiently disadvantageous to lead to the death of both in the longer term. To test your system, introduce an individual with a "premade" neural network set up to steer the individual directly towards the nearest food (your model is such that such a thing exists and is reasonably easy to write down, right? If not, it's unreasonable to expect it to evolve!). Introduce that individual into your simulation amongst the dumb masses. If the individual doesn't quickly dominate, it suggests your simulation isn't set up to reinforce such behaviour. But if the individual enjoys reproductive success and it and its descendants take over, then your simulation is doing something right and you need to look elsewhere for the reason such behaviour isn't evolving. Update in response to comment: Seems to me this mixing of angles and vectors is dubious. Whether individuals can evolve towards the "move straight towards nearest food" behaviour must rather depend on how well an atan function can be approximated by your network (I'm sceptical). Again, this suggests more testing: set aside all the ecological simulation and just test perturbing a population of your style of random networks to see if they can evolve towards the expected function. (Simpler, better) Have the network output a vector (instead of an angle): the direction the individual should move in (of course this means having 2 output nodes instead of one). Obviously the "move straight towards food" strategy is then just a straight pass-through of the "direction towards food" vector components, and the interesting thing is then to see whether your random networks evolve towards this simple "identity function" (this should also allow introduction of a readymade optimised individual as described above). I'm dubious about the "fixed amount of food" too (I assume you mean as soon as a red dot is consumed, another one is introduced). A more "realistic" model might be to introduce food at a constant rate, and not impose any artificial population limits: population limits are determined by the limitations of the food supply. E.g. if you introduce 100 units of food a minute and individuals need 1 unit of food per minute to survive, then your simulation should find it tends towards a long-term average population of 100 individuals without any need for a clamp to avoid a "population explosion" (although boom-and-bust, feast-or-famine dynamics may actually emerge depending on the details).
1
8
1
I am trying to build a simple evolution simulation of agents controlled by neural networks. In the current version each agent has a feed-forward neural net with one hidden layer. The environment contains a fixed amount of food represented as red dots. When an agent moves, he loses energy, and when he is near food, he gains energy. An agent with 0 energy dies. The input of the neural net is the current angle of the agent and a vector to the closest food. Every time step, the angle of movement of each agent is changed by the output of its neural net. The aim of course is to see food-seeking behavior evolve after some time. However, nothing happens. I don't know if the problem is the structure of the neural net (too simple?) or the reproduction mechanism: to prevent population explosion, the initial population is about 20 agents, and as the population becomes close to 50, the reproduction chance approaches zero. When reproduction does occur, the parent is chosen by going over the list of agents from beginning to end, and checking for each agent whether or not a random number between 0 and 1 is less than the ratio between this agent's energy and the sum of the energy of all agents. If so, the search is over and this agent becomes a parent, as we add to the environment a copy of this agent with some probability of mutations in one or more of the weights in his neural network. Thanks in advance!
Artificial life with neural networks
1.2
0
0
2,480
15,009,146
2013-02-21T17:59:00.000
0
1
1
1
python
47,823,714
3
false
0
0
Another issue is that PyPI packages containing Bash scripts might not run correctly on, e.g., Windows.
1
3
0
What is the best way to include a 'helper' shell script in setup.py that is used by a Python module? I don't want to include it as a script since it is not run on its own. Also, data_files just copies things into the install path (not the module install path), so that does not really seem like the best route. I guess the question is: is there a way of including non-Python (non-C) scripts/binaries in a Python distutils package in a generic way?
python distutils include shell scripts in module directory
0
0
0
1,820
15,010,360
2013-02-21T19:10:00.000
4
0
1
0
java,python,ruby,string,char
15,010,424
5
false
1
0
The bottom line is that it's just how the language designer decided to make it. It's hard to get much further than that. However, one point about C, which is generally considered a lower-level language in that the syntax more accurately reflects the nature of the data and tasks being performed: treating a character as a string would be a level of abstraction uncharacteristic of C. It would make it less clear what the data is like under the covers, and it would almost certainly add overhead when all you needed was a character. Note that C-type languages do support single-character strings, so you really have the best of both worlds, in my opinion.
2
7
0
I've noticed that languages like Java have a char primitive and a string class. Other languages like Python and Ruby just have a string class. Those languages instead use a string of length 1 to represent a character. I was wondering whether that distinction was because of historical reasons. I understand the language that directly influenced Java has a char type, but no strings. Strings are instead formed using char* or char[]. But I wasn't sure if there was an actual purpose for doing it that way. I'm also curious if one way has an advantage over another in certain situations. Why do languages like Java distinguish between the char primitive and the string class, while languages like Ruby and Python do not? Surely there must be some sort of design concern about it, be it convention, efficiency, clarity, ease of implementation, etc. Did the language designer really just pick a character representation out of a hat, so to speak?
Why do languages like Java distinguish between string and char while others do not?
0.158649
0
0
1,089
15,010,360
2013-02-21T19:10:00.000
1
0
1
0
java,python,ruby,string,char
15,010,789
5
false
1
0
I wasn't sure whether that distinction was because of historical reasons (C only has chars; strings are formed with char* or char[]) or if there was an actual purpose for doing it that way. I'm also curious if one way has an advantage over another in certain situations. In C the concept of a "string" is a character array/series of characters that is terminated by an ending character \0. Otherwise a "string" is like any other array in C. In e.g. C# and several other languages the string is treated as an abstraction; a string is more like an opaque object. The object contains methods that work on the string, but exactly how the string is stored is "hidden" from the programmer. The reason for this is that C is a much older language and closer to the hardware than newer languages. How a string is defined in a language (whether single or double quotes are used) is really just an implementation detail that the person(s) designing the language thought was a good thing at the time.
2
7
0
I've noticed that languages like Java have a char primitive and a string class. Other languages like Python and Ruby just have a string class. Those languages instead use a string of length 1 to represent a character. I was wondering whether that distinction was because of historical reasons. I understand the language that directly influenced Java has a char type, but no strings. Strings are instead formed using char* or char[]. But I wasn't sure if there was an actual purpose for doing it that way. I'm also curious if one way has an advantage over another in certain situations. Why do languages like Java distinguish between the char primitive and the string class, while languages like Ruby and Python do not? Surely there must be some sort of design concern about it, be it convention, efficiency, clarity, ease of implementation, etc. Did the language designer really just pick a character representation out of a hat, so to speak?
Why do languages like Java distinguish between string and char while others do not?
0.039979
0
0
1,089
15,010,364
2013-02-21T19:10:00.000
1
0
1
1
python,clojure
15,010,740
2
false
0
0
It sounds like you want sh to return immediately instead of waiting for notepad's exit code. How about writing a sh! macro or somesuch that runs the original sh command on a new Thread? If you're only using this as a convenience in the REPL, it would be entirely unproblematic. EDIT Arthur's answer is better and more Clojurian - go with that.
1
3
0
When working in the python repl I often need to edit multiline code. So I use import os then os.system("notepad npad.py") In clojure I first run (use '[clojure.java.shell :only [sh]]) Then I run (sh "notepad" "jpad.clj") This starts notepad but not in a useful way because the clojure repl now hangs. In other words, until I close notepad I cannot enter code in the repl and I want to keep both open. I know I can easily open notepad without clojure so it is no big deal. However, is there a way for clojure to start an external process without hanging?
Why is my clojure shell result not like what works in python?
0.099668
0
0
189
15,012,162
2013-02-21T21:02:00.000
0
0
0
1
python,hadoop,hadoop-streaming
15,181,203
1
false
0
0
You may consider using NullWritable as output, and generating the SequenceFile directly inside your Python script. You can look at the hadoop-python project on GitHub for candidate code: though it is admittedly a bit large/heavy, it does handle SequenceFile generation.
1
1
0
I wish to convert a binary file in one format to a SequenceFile. I have a Python script that takes that format on stdin and can output whatever I want. The input format is not line-based. The individual records are binary themselves, hence the output format cannot be \t delimited or broken into lines with \n. Can I use the Hadoop Streaming interface to consume a binary format? How do I produce a binary output format? I assume the answer is "No" unless I hear otherwise.
Hadoop Streaming Job with binary input?
0
0
0
546
15,012,694
2013-02-21T21:37:00.000
0
0
0
1
python,usb
15,013,500
2
false
0
0
"Everything is a file" is one of the core ideas of Unix. Windows does not share this philosophy and, as far as I know, doesn't provide an equivalent interface. You're going to have to find a different way. The first way would to be to continue handling everything at a low level & have your code use a different code path under Windows. The only real reason to do this is if your goal is to learn about USB programming at a low level. The other way is to find a library that's already abstracted out the differences between platforms. PySDL immediately comes to mind (followed by PyGame, which is a higher level wrapper around that) but, as that's a gaming/multimedia library, it might be overkill for what you're doing. Google tells me that PyUSB exists and appears to just focus on handing USB devices. PySDL/PyGame have been around a while & are probably more mature so, unless you've got a particular aversion to them, I'd probably stick with them.
2
0
0
I'm trying to access a USB device through Python, but I'm unsure how to find the path to it. The example I'm going from is: pipe = open('/dev/input/js0','r'), in which case this is either a Mac or Linux path. I don't know how to find the path for Windows. Could someone steer me in the proper direction? I've sifted through the forums but couldn't quite find my answer. Thanks, -- Mark
opening a usb device in python -- what is the path in winXP?
0
0
0
1,609
15,012,694
2013-02-21T21:37:00.000
0
0
0
1
python,usb
15,012,889
2
false
0
0
The default USB drive path on Windows is D:\. So, if we have a text document named mydoc.txt in the folder myData, the appropriate path is D:\myData\mydoc.txt.
2
0
0
I'm trying to access a USB device through Python, but I'm unsure how to find the path to it. The example I'm going from is: pipe = open('/dev/input/js0','r'), in which case this is either a Mac or Linux path. I don't know how to find the path for Windows. Could someone steer me in the proper direction? I've sifted through the forums but couldn't quite find my answer. Thanks, -- Mark
opening a usb device in python -- what is the path in winXP?
0
0
0
1,609
15,013,102
2013-02-21T22:03:00.000
1
0
0
0
python,jenkins,jenkins-plugins
15,015,114
2
true
0
0
The closest thing I can think of that Jenkins offers is a file upload. You can upload a file with local changes and then trigger a build; the file will be placed at an already specified location. This feature is enabled by making your build parameterized and adding a File Parameter option. Below is Jenkins' description of this feature: Accepts a file submission from a browser as a build parameter. The uploaded file will be placed at the specified location in the workspace, which your build can then access and use. This is useful for many situations, such as: Letting people run tests on the artifacts they built. Automating the upload/release/deployment process by allowing the user to place the file. Performing data processing by uploading a dataset. It is possible to not submit any file. If that's the case and no file is already present at the specified location in the workspace, then nothing happens. If there's already a file present in the workspace, then this file will be kept as-is.
1
5
0
Is there an easy way to edit our python files in the Jenkins workspace UI? It would be super nice if we could get code highlighting too!
Edit workspace files via jenkins UI
1.2
0
0
8,520
15,014,932
2013-02-22T00:37:00.000
0
0
0
0
python,qt,events,pyqt,pyside
42,914,476
2
false
0
1
A couple of hints people might find useful: A. You need to beware of the following: Every so often the threads want to send stuff back to the main thread. So they post an event and call processEvents. If the code that runs from the event also calls processEvents, then instead of returning to the next statement, Python can instead dispatch a worker thread again, and that can then repeat this process. The net result of this can be hundreds or thousands of nested processEvents calls, which can then result in a recursion level exceeded error message. Moral - if you are running a multi-threaded application, do NOT call processEvents in any code initiated by a thread which runs in the main thread. B. You need to be aware that CPython has a Global Interpreter Lock (GIL) that limits threads so that only one can run at any one time, and the way that Python decides which threads to run is counter-intuitive. Running processEvents from a worker thread does not seem to do what it says on the can, and CPU time is not allocated to the main thread or to Python internal threads. I am still experimenting, but it seems that putting worker threads to sleep for a few milliseconds allows other threads to get a look in.
1
2
0
I have a Qt application written in PySide (Qt Python binding). This application has a GUI thread and many different QThreads that are in charge of performing some heavy lifting - some rather long tasks. As such long task sometimes gets stuck (usually because it is waiting for a server response), the application sometimes freezes. I was therefore wondering if it is safe to call QCoreApplication.processEvents() "manually" every second or so, so that the GUI event queue is cleared (processed)? Is that a good idea at all?
Is calling QCoreApplications.processEvents() on a set interval safe?
0
0
0
1,998
15,016,187
2013-02-22T03:09:00.000
1
0
0
0
python,pandas
15,016,437
2
false
0
0
I think the simplest solution is to split this into two columns in your DataFrame, one for country_code and one for country_name (you could name them something else). When you print or graph, you can select which column is used.
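A minimal sketch of that two-column idea (column names and data here are made up):

```python
import pandas as pd

code_to_name = {'AUS': 'Australia', 'USA': 'United States'}  # your own lookup
df = pd.DataFrame({'country_code': ['AUS', 'USA'], 'gdp': [1.5, 16.2]})

# Keep the code for merging; derive a name column purely for display:
df['country_name'] = df['country_code'].map(code_to_name)

df.set_index('country_code')                 # merge/manipulate keyed on the code
print(df.set_index('country_name')['gdp'])   # print/plot labelled by the name
```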
1
1
1
Is there any way to attach a descriptive version to an index column? For example, I use ISO3 country codes to merge from different data sources: 'AUS' -> Australia, etc. This is very convenient for merging different data sources, but when I want to print the data I would like the descriptive version (i.e. Australia). I am imagining a dictionary attached to the index column of 'CountryCode' (where CountryCode is the key and CountryName is the value) and a flag that will print the value instead of the key, which is used for data manipulation. Is the best solution to generate my own dict() and then, when it comes time to print or graph, merge the country names in? This is OK, except it would be nice for ALL of the dataset information to be carried within the DataFrame object.
Pandas: Attaching Descriptive Dict() to Hierarchical Index (i.e. CountryCode and CountryName)
0.099668
0
0
102
15,018,411
2013-02-22T06:44:00.000
2
0
0
0
python,python-2.7,wxpython
15,018,504
1
false
0
1
To run it with pythonw.exe just give your file a .pyw extension.
1
0
0
I have tried using pythonw, and it only works if I drag and drop the file onto it. I remember reading somewhere there is a way to keep this from happening in the code, but I can't find it. Thanks in advance.
How do I keep shell or IDLE shell from opening when running wxPython
0.379949
0
0
57
15,018,526
2013-02-22T06:54:00.000
3
0
0
0
python,scipy,fft,convolution
15,020,070
3
false
0
0
FFT fast convolution via the overlap-add or overlap-save algorithms can be done in limited memory by using an FFT that is only a small multiple (such as 2x) larger than the impulse response. It breaks the long FFT up into properly overlapped shorter but zero-padded FFTs. Even with the overlap overhead, O(N log N) will beat M*N in efficiency for large enough N and M.
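As a sketch: later SciPy releases expose the overlap-add method directly as signal.oaconvolve (it postdates this question), and it agrees with the single-big-FFT fftconvolve:

```python
import numpy as np
from scipy import signal

sig = np.random.randn(1_000_000)  # long signal
imp = np.random.randn(1_000)      # short impulse response

out_fft = signal.fftconvolve(sig, imp)  # one large FFT pair
out_oa = signal.oaconvolve(sig, imp)    # overlap-add: many short FFTs
assert np.allclose(out_fft, out_oa)
```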
1
11
1
I know that, generally speaking, FFT and multiplication is usually faster than a direct convolve operation when the array is relatively large. However, I'm convolving a very long signal (say 10 million points) with a very short response (say 1 thousand points). In this case fftconvolve doesn't seem to make much sense, since it forces an FFT of the second array to the same size as the first array. Is it faster to just do a direct convolve in this case?
Python SciPy convolve vs fftconvolve
0.197375
0
0
10,751
15,021,521
2013-02-22T10:05:00.000
15
0
0
0
python,machine-learning,scikit-learn
15,038,477
3
true
0
0
DictVectorizer is the recommended way to generate a one-hot encoding of categorical variables; you can use the sparse argument to create a sparse CSR matrix instead of a dense numpy array. I usually don't care about multicollinearity and I haven't noticed a problem with the approaches that I tend to use (i.e. LinearSVC, SGDClassifier, tree-based methods). It shouldn't be a problem to patch the DictVectorizer to drop one column per categorical feature - you simply need to remove one term from DictVectorizer.vocabulary_ at the end of the fit method. (Pull requests are always welcome!)
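A small sketch of the sparse one-hot encoding (the feature dicts are made-up stand-ins for the car-evaluation rows):

```python
from sklearn.feature_extraction import DictVectorizer

rows = [{'safety': 'low', 'doors': '2'},
        {'safety': 'high', 'doors': '4'}]

vec = DictVectorizer(sparse=True)   # CSR matrix instead of a dense array
X = vec.fit_transform(rows)
print(vec.vocabulary_)              # one column per (feature, category) pair
```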
1
12
1
I'm trying to use the car evaluation dataset from the UCI repository and I wonder whether there is a convenient way to binarize categorical variables in sklearn. One approach would be to use the DictVectorizer or LabelBinarizer, but here I'm getting k different features whereas you should have just k-1 in order to avoid collinearity. I guess I could write my own function and drop one column, but this bookkeeping is tedious. Is there an easy way to perform such transformations and get a sparse matrix as a result?
How to encode a categorical variable in sklearn?
1.2
0
0
22,439
15,021,523
2013-02-22T10:05:00.000
2
0
0
0
python,perl,r,openoffice.org
15,030,659
5
false
0
0
As you should be able to tell from the other answers, a better formula is -log10(value) or, in an OpenOffice Calc spreadsheet, =-LOG(value,10). You need to make certain that the value entered does not underflow to 0, however. -LOG(3E-178,10) works (177.522879), but -LOG(1E-320,10) fails because 1E-320 underflows to 0 and an Err.502 is presented. (That's probably why your use of 1/value exploded too.)
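In Python the same idea works as long as the value is still representable as a float; for values that would underflow a double, you can carry the mantissa and exponent separately, since -log10(m * 10**-e) == e - log10(m). A sketch:

```python
import math

print(-math.log10(3e-178))   # 177.52... -- still representable as a float

# 3e-480 underflows a double, so keep mantissa and exponent apart instead:
m, e = 3.0, 480              # represents 3 * 10**-480
print(e - math.log10(m))     # 479.52...
```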
1
2
0
Is there a way to compute -log10(value) where the values are very small, e.g. 3e-178 or 1e-320? I have tried OpenOffice with the formula log((1/value),10); it works fine, but when it encounters extremely small values it gives an error like "division by zero not possible". I guess the same will happen when I use Perl, Python or R. Kindly help in converting these values to -log10(value). Thank you. Note: I want to compute the minus log of the value with base 10.
computing -log10 of very small values
0.07983
0
0
1,662
15,024,894
2013-02-22T13:01:00.000
0
0
0
0
django,python-2.7
32,207,711
3
false
0
0
What you want to do is called single sign-on (SSO), and it's much easier to implement on the actual web server than in Django. So you should check how to do SSO on Apache/Nginx/whateverYouAreUsing; the web server will then forward the authenticated username to your Django app.
1
3
0
When a user logs in to his desktop, the Windows OS authenticates him against the Active Directory server. So whenever he accesses a web page he should not be shown a login page for entering his user id or password. Instead, his user id and domain need to be captured from his desktop and passed to the web server (let him enter the password after that). Is it possible in Python to get the username and domain of the client? win32api.GetUserName() gives the username of the server side. Thanks in advance
how to get username and domain of windows logged in client using python code?
0
0
1
2,471
15,026,230
2013-02-22T14:19:00.000
0
0
0
0
python,database,django,migration,django-south
15,026,416
1
true
1
0
Make the changes to your models then run python manage.py schemamigration yourapp --auto. This will create the migrations for you (you'll see a new file in your migrations directory every time you do this process). Sometimes you really need to edit a migration manually, but you should try and avoid it. Particularly if you have already run the migration (the south app keeps a record of which migrations have been run so it knows the state of your database). South is designed to support moving between different versions of your code without breaking your database. Each migration file in the migrations directory represents a snapshot of your code (specifically a snapshot of your models.py). You migrate from version to version by running python manage.py migrate yourapp version_no
1
2
0
I have installed South and made some migrations. Now there's a 'migrations' directory in the app folder. My question is: when I am refactoring models, which of the files in the migrations directory must I apply the changes to? I think some entries relate directly to the database schema, and others to the code itself. I couldn't find an answer to this in the South docs.
How to handle refactor with south (django)?
1.2
0
0
118
15,027,601
2013-02-22T15:32:00.000
8
1
1
0
python
15,027,654
1
false
1
0
The best answer is to use a standardized format, such as JSON, write something to create the objects from that format in Python, and produce the data from Java. For simple things this will be virtually no effort, though the work naturally grows with the complexity of your objects. Trying to emulate pickle from within Java will be more effort than it's worth, but I guess you could look into Jython if you were really set on the idea.
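A sketch of the Python end (the payload shown is a made-up example of what the Java side might emit with any JSON library):

```python
import json

payload = '{"id": 1, "name": "foo", "score": 0.75}'  # produced by Java

obj = json.loads(payload)   # plain dict: JSON strings, ints and floats
print(obj["name"], obj["score"])  # map directly onto Python types
```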
1
0
0
I want to create a serialized Python object from outside of Python (in this case, from Java) in such a way that Python can read it and treat it as if it were an object in Python. I'll start with simpler objects (int, float, String, and so on) but I'd love to know if this can be done with classes as well. Functionality is first, but being able to do it quickly is a close second. The idea is that I have some data in Java land, but some business logic in Python land. I want to be able to stream data through the python logic as quickly as possible...right now, this data is being serialized as strings and I think this is fairly wasteful. Thank you in advance
Is there a way to efficiently create Python objects from outside of Python?
1
0
0
65
15,029,252
2013-02-22T16:54:00.000
5
0
0
1
python,google-app-engine
15,033,087
1
true
1
0
appcfg.py rollback C:\path\to\my\app is the required command. If you are using Java, the rollback command is same as above, but the path to the application should be to the application's target directory. Otherwise, rollback will fail.
1
2
0
I am using the App Engine Launcher on Windows, and for some reason the last time I deployed my app the transaction wouldn't finish, and now every time I try to deploy I get the error "another transaction by user is already in progress for app: s~myapp, version 1". I have tried doing appcfg.py rollback, which brings up a Python window that then closes again almost immediately (I think it says error, but it closes so fast I can't tell for sure). I have tried doing appcfg.py rollback C:\my\apps\directory\path, which leads to the same as above. I have tried doing C:\Program Files\Google\google_appengine appcfg.py rollback c:\my\app\path, but Windows then tells me it can't find C:\program, and now I'm stuck for things to try.
another transaction already in progress
1.2
0
0
3,012
15,031,315
2013-02-22T18:59:00.000
2
1
1
0
python,multithreading,parallel-processing,multiprocessing
15,031,533
2
false
0
0
There is no definitive answer to your question: it really depends on what the functions do, how often they are called and what level of parallelism you need. The threading and multiprocessing modules work in radically different ways. threading implements native threads within the Python interpreter: fairly inexpensive to create but limited in parallelism due to Python's Global Interpreter Lock (GIL). Threads share the same address space, so may interfere with each other (e.g. if a thread causes the interpreter to crash, all threads, including your app, die), but inter-thread communication is cheap and fast as a result. multiprocessing implements parallelism using distinct processes: the setup is far more expensive than threads (it requires creating a new process), but each process runs its own copy of the interpreter (hence no GIL-related locking issues) and runs in a different address space (isolating your main app). The child processes communicate with the parent over IPC channels, which requires Python objects to be pickled/unpickled - so again, more expensive than threads. You need to figure out which trade-off is best suited to your purpose.
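A minimal multiprocessing sketch for the bot's use case (run_command is a hypothetical stand-in for one of the user-defined callbacks):

```python
from multiprocessing import Pool

def run_command(cmd):
    # stand-in for a user-defined callback; its return value is the reply
    return "result of %s" % cmd

if __name__ == "__main__":
    pool = Pool(processes=4)
    # map blocks until every callback returns; apply_async would let the
    # bot keep serving the channel while results trickle in
    replies = pool.map(run_command, ["!weather", "!uptime", "!fortune"])
    pool.close()
    pool.join()
    print(replies)
```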
1
0
0
I've written an irc bot that runs some commands when told so, the commands are predefined python functions that will be called on the server where the bot is running. I have to call those functions without knowing exactly what they'll do (more I/O or something computationally expensive, nothing harmful since I review them when I accept them), but I need to get their return value in order to give a reply back to the irc channel. What module do you recommend for running several of these callbacks in parallel and why? The threading or multiprocessing modules, something else? I heard about twisted, but I don't know how it will fit in my current implementation since I know nothing about it and the bot is fully functional from the point of view of the protocol. Also requiring the commands to do things asynchronously is not an option since I want the bot to be easily extensible.
What module to use for calling user-defined functions in parallel
0.197375
0
0
221
15,031,694
2013-02-22T19:21:00.000
0
0
1
0
python,pip
66,554,371
13
false
0
0
In my case, it was because this library depended on another local library, which I had not yet installed. Installing the dependency with pip, and then the dependent library, solved the issue.
2
450
0
Is it possible to install packages using pip from the local filesystem? I have run python setup.py sdist for my package, which has created the appropriate tar.gz file. This file is stored on my system at /srv/pkg/mypackage/mypackage-0.1.0.tar.gz. Now in a virtual environment I would like to install packages either coming from pypi or from the specific local location /srv/pkg. Is this possible? PS I know that I can specify pip install /srv/pkg/mypackage/mypackage-0.1.0.tar.gz. That will work, but I am talking about using the /srv/pkg location as another place for pip to search if I typed pip install mypackage.
Installing Python packages from local file system folder to virtualenv with pip
0
0
0
605,734
15,031,694
2013-02-22T19:21:00.000
8
0
1
0
python,pip
53,161,203
13
false
0
0
With your requirements in requirements.txt and eggs_dir as a directory, you can build your local cache: $ pip download -r requirements.txt -d eggs_dir Then using that "cache" is as simple as: $ pip install -r requirements.txt --find-links=eggs_dir
2
450
0
Is it possible to install packages using pip from the local filesystem? I have run python setup.py sdist for my package, which has created the appropriate tar.gz file. This file is stored on my system at /srv/pkg/mypackage/mypackage-0.1.0.tar.gz. Now in a virtual environment I would like to install packages either coming from pypi or from the specific local location /srv/pkg. Is this possible? PS I know that I can specify pip install /srv/pkg/mypackage/mypackage-0.1.0.tar.gz. That will work, but I am talking about using the /srv/pkg location as another place for pip to search if I typed pip install mypackage.
Installing Python packages from local file system folder to virtualenv with pip
1
0
0
605,734
15,031,856
2013-02-22T19:33:00.000
3
0
0
0
javascript,python,postgresql,flot
15,032,100
2
false
1
0
You can't send a Python or JavaScript datetime object over JSON. JSON only accepts more basic data types like strings, ints, and floats. The way I usually do it is to send it as text, using Python's datetime.isoformat(), then parse it on the JavaScript side.
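For flot specifically, what it plots on a time axis is milliseconds since the epoch, so alongside isoformat() you can send that number directly. A sketch, assuming the column holds naive UTC datetimes:

```python
import calendar
from datetime import datetime

dt = datetime(2013, 2, 24, 19, 33)   # e.g. a value from the timestamp column

iso = dt.isoformat()                 # '2013-02-24T19:33:00' -- parse in JS
js_ms = calendar.timegm(dt.utctimetuple()) * 1000  # ms since epoch: flot's native x value
```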
1
8
0
I have a PostgreSQL database with a timestamp column, and I have a REST service in Python that executes a query against the database and returns data to a JavaScript front-end to plot a graph using flot. Now the problem I have is that flot can automatically handle the date using JavaScript's TIMESTAMP, but I don't know how to convert the PostgreSQL timestamps to a JavaScript TIMESTAMP (YES a timestamp, not a date - stop editing if you don't know the answer) in Python. I don't know if this is the best approach (maybe the conversion can be done in JavaScript?). Is there a way to do this?
Converting postgresql timestamp to JavaScript timestamp in Python
0.291313
1
0
4,296
15,034,913
2013-02-22T23:24:00.000
1
0
0
0
python,eclipse,autocomplete,pydev
15,036,543
2
false
0
0
I don't think it's possible in PyDev/Eclipse to have function code completion that does not fill in the parameters. However, I'm not sure why you would want it disabled because Eclipse allows you to TAB through parameter arguments. Typing out the entire function will also not generate the parameters. Lastly, remember that you can always do Ctrl+Delete and Ctrl+Backspace to rapidly delete extra parameters.
2
3
0
I am using the PyDev add-on for Eclipse, and most of the autocomplete features are helpful, but I would rather not have all of the parameters filled in for functions, especially when some of the arguments are optional. I could not find any way to disable this in preferences without disabling code completion altogether.
Is it Possible to Disable PyDev Parameter Code Completion?
0.099668
0
0
816
15,034,913
2013-02-22T23:24:00.000
3
0
0
0
python,eclipse,autocomplete,pydev
15,040,558
2
false
0
0
You can leave Ctrl pressed when you apply the completion (this will leave both the parameters and the parentheses out -- and will also override the next word).
2
3
0
I am using the PyDev add-on for Eclipse, and most of the autocomplete features are helpful, but I would rather not have all of the parameters filled in for functions, especially when some of the arguments are optional. I could not find any way to disable this in preferences without disabling code completion altogether.
Is it Possible to Disable PyDev Parameter Code Completion?
0.291313
0
0
816
15,035,111
2013-02-22T23:41:00.000
1
0
1
0
python,pycharm,pypi,mingw-w64
15,050,948
1
false
0
0
I solved it with help from the folks at #python on freenode. Or better: found a workaround. The problem was basically that I used 64-bit Python on Windows, which doesn't really work well with MinGW64 and such. I installed 32-bit Python, edited distutils.cfg, fixed the -mno-cygwin problem, and it basically worked out of the box. So if anyone else encounters this problem: use 32-bit Python.
1
0
0
currently, i am working with PyCharm on Windows, and i tried to install some packages via PyPi. For convinience, i used the integrated functionality of PyCharm, which does essentially the same as the shell easy_install. However, when installing packages which have to be compiled with gcc, i get some errors. I already browsed a lot of questions here on stackoverflow because of the former errors, and managed to overcome some of the errors (using mingw64, removing the -mno-cygwin parameter from the setup scripts etc) but now i'm totally stuck on this one: build\temp.win-amd64-2.7\Release\cpyamf\amf0.o:amf0.c:(.text+0xb912): undefined reference to `__imp_PyExc_ImportError' c:/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/4.7.1/../../../../x86_64-w64-mingw32/bin/ld.exe: build\temp.win-amd64-2.7\Release\cpyamf\amf0.o: bad reloc address 0x78 in section `.data' collect2.exe: error: ld returned 1 exit status error: command 'gcc' failed with exit status 1 The error occurs on the installation of PyAMF and Twisted, which use cython for some parts. I couldn't find a solution for that one yet. Thanks in advance.
Windows / PyPi / PyCharm linker errors when compiling some cython modules
0.197375
0
0
811
15,036,694
2013-02-23T03:25:00.000
0
0
0
0
python,list
15,036,718
3
false
0
1
Your board is getting multiple references to the same array. You need to replace the * 10 with another list comprehension.
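Concretely, a sketch of the fix:

```python
# [['O', 'O']] * 10 repeats one inner list object ten times, so mutating one
# cell appears to mutate the whole row.  Build a fresh list per cell instead:
board = [[['O', 'O'] for x in range(10)] for y in range(10)]

board[2][3][0] = 'C'   # now changes exactly one cell
```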
1
1
1
I have created a 10 by 10 game board. It is a 2D list, with another list of 2 inside. I used board = [[['O', 'O']] * 10 for x in range(1, 11)]. So it will produce something like ['O', 'O'] ['O', 'O']... ['O', 'O'] ['O', 'O']... Later on I want to set a single cell to have 'C'. I use board.gameBoard[animal.y][animal.x][0] = 'C' (board being the class the gameBoard is in; animal is a game piece; x & y are just ints). Sometimes it will work and the specified cell will become ['C', 'O']; other times it will fill the entire row with ['C', 'O']['C', 'O']['C', 'O']['C', 'O']. Does anyone know why that might be happening?
2D Python list will have random results
0
0
0
168
15,036,815
2013-02-23T03:43:00.000
1
1
0
0
c#,python,asp.net,unit-testing
16,286,713
1
false
0
0
I don't know if you can fit them into one runner or process. I'm also not that familiar with Python. It seems to me that the Python-written tests are on a higher level, though - acceptance tests or integration tests or whatever you want to call them - and the NUnit ones are unit-test level. Therefore I would suggest that you first run the unit tests and, if they pass, the Python ones. You should be able to integrate that into a build script. And as you already suggested, if you can run that on a CI server, that would be my preferred approach in your situation.
1
5
0
What I'm trying to do is combine two approaches, two frameworks, into one solid scope and process... I have a bunch of tests in Python with a self-written TestRunner over the proboscis library, which gave me a good way to write my own Test Result implementation (in which I'm using jinja). This framework is now a solid thing. These tests are for testing the UI (using Selenium) on an ASP.NET site. On the other hand, I have to write tests for business logic. Apparently it would be right to use NUnit or TestDriven.NET for C#. Could you please give me a tip, hint, or advice on how I should integrate these two approaches into one final solution? Maybe the answer would be just to set up a CI server, dunno... Please note, the reason I'm using Python for the ASP.NET portal is its flexibility and the opportunity to build any custom Test Runner, Test Loader, Test Discovery and so on... P.S. Using IronPython is not an option for me. P.P.S. For the sake of clarity: proboscis is the Python library which allows setting the test order and dependencies of a chosen test. And these two options are the requirements! Thank you in advance!
Integrating tests written in Python and tests in C# in one solid solution
0.197375
0
0
923
15,038,135
2013-02-23T07:12:00.000
6
0
0
1
python,logging,amazon-elastic-beanstalk
15,183,303
2
true
1
0
If you need the ability to snapshot log files from the Beanstalk management console, just write your log files to the /opt/python/log/ folder. The Elastic Beanstalk scripts use this folder for log tailing.
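A minimal sketch, assuming the process user can write to that directory (the file name is arbitrary):

```python
import logging

logging.basicConfig(filename='/opt/python/log/myapp.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')
logging.info('this line will show up in Snapshot Logs')
```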
1
15
0
I'm developing a Python application which runs in an AWS Beanstalk environment. For error handling and debugging purposes I write logs to a custom log file in the directory /var/logs/. What should I do in order to be able to snapshot logs from the Elastic Beanstalk management console?
Python on the AWS Beanstalk. How to snapshot custom logs?
1.2
0
0
5,607
15,038,509
2013-02-23T08:06:00.000
0
0
0
0
python,django,uwsgi
15,061,466
1
false
1
0
After some playing around with pdb I finally located the problem. It's about permissions. I set the permissions differently on my dev and production databases -- I am a superuser on the dev database, but only staff on the production database, for whatever reason. The new model I added had its permission set to be visible only to superusers, so obviously I couldn't see it on the admin page. Everything works after I promoted myself.
1
1
0
I'm trying to add a new model to a pre-existing app on my production site with the following steps: (1) adding a model; (2) adding admin.site.register(<ModelName>) in the app's admin.py; (3) ./manage.py schemamigration <appname> --auto; (4) ./manage.py migrate <appname>. The above steps work on my dev machine (with SQLite3), so I continued with: (5) upload the code (models.py, admin.py and the migration file) to the production machine; (6) repeat step 4 on the production machine (with MySQL); (7) service uwsgi restart. The migration works. I can see the new table(s) in the database, and I can use the model correctly (with ./manage.py shell on the production machine). The only problem is that the model is not shown in the admin site. I tried: Dev site + dev database => works. Production site + production database => can't see the model in the admin site. Dev site + production database => can't see the model in the admin site. Is there something I missed? Thanks.
Model added by south migration doesn't show in production admin site
0
0
0
165
15,041,647
2013-02-23T14:34:00.000
0
0
1
0
java,python,algorithm,divide-and-conquer,cosine-similarity
15,042,390
2
false
0
0
Work with the transposed matrix. That is what Mahout does on Hadoop to do this kind of task fast (or just use Mahout). Essentially, computing cosine similarity the naive way is bad because you end up computing a lot of 0 * something. You are better off working in columns and leaving out all the zeros.
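A sketch of the sparse idea in Python (rows come out L2-normalised from TfidfVectorizer, so cosine similarity collapses to a sparse matrix product that never touches the zeros; the strings are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

strings = ["the quick brown fox", "the quick brown dog", "lorem ipsum dolor"]

X = TfidfVectorizer().fit_transform(strings)  # sparse, L2-normalised rows
sims = X * X.T                 # sims[i, j] = cosine(strings[i], strings[j])
print(sims.toarray().round(2))
```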
1
8
1
I need to compute the cosine similarity between strings in a list. For example, I have a list of over 10 million strings, and each string has to be compared for similarity with every other string in the list. What is the best algorithm I can use to efficiently and quickly do such a task? Is a divide-and-conquer algorithm applicable? EDIT: I want to determine which strings are most similar to a given string and be able to have a measure/score associated with the similarity. I think what I want to do falls in line with clustering where the number of clusters is not initially known.
How to efficiently compute the cosine similarity between millions of strings
0
0
0
2,043
15,044,368
2013-02-23T19:02:00.000
1
0
1
0
c#,python,json,console
15,046,751
1
true
0
0
If you cannot find good enough SOAP libs for Python (did you try suds?) and cannot add a JSON (or something else) counterpart to the SOAP web service, then there may be no better way to do this - but this still doesn't make the design good. Some problems just don't have a good answer unless you are ready to write the missing parts yourself.
1
2
0
Could you please share your thoughts on whether this is good design, and which platforms would be more suitable for such functionality: (1) a Python script serves a static page; (2) the user sends a POST, which Python uses to call a C# console application; (3) the C# console app takes commands via stdin, talks to a SOAP web service, and returns JSON; (4) Python parses the JSON and returns results to the user. It actually works (I am pleasantly surprised), but is this the best way to do things? Is there another nice alternative? A better way to do this? Thank you!
executing c# console application from python script - good design?
1.2
0
0
199
15,049,333
2013-02-24T06:39:00.000
8
0
1
0
python,inheritance,python-3.x,protected
15,049,373
2
true
0
0
Member access allowance in Python works by "negotiation" and "treaties", not by force. In other words, the user of your class is supposed to leave their hands off things which are not their business, but you cannot enforce that other than by using _xxx identifiers, making absolutely clear that their access is (normally) not suitable.
2
5
0
I would like to set up a class hierarchy in Python 3.2 with 'protected' access: Members of the base class would be in scope only for derived classes, but not 'public'. A double underscore makes a member 'private', a single underscore indicates a warning but the member remains 'public'. What (if any...) is the correct syntax for designating a 'protected' member.
"Protected" access in Python - how?
1.2
0
0
6,658
15,049,333
2013-02-24T06:39:00.000
3
0
1
0
python,inheritance,python-3.x,protected
15,049,534
2
false
0
0
Double underscores don't make a member 'private' in the C++ or Java sense - Python quite explicitly eschews that kind of language-enforced access rules. A single underscore, by convention, marks an attribute or a method as an "implementation detail" - that is, things outside can still get to it, but this isn't a supported part of the class' interface and, therefore, the guarantees that the class might make about invariants or back/forwards compatibility no longer apply. This solves the same conceptual problem as 'private' (separation of interface and implementation) in a different way. Double underscores invoke name mangling which still isn't 'private' - it is just a slightly stronger formulation of the above, whereby: - This function is an implementation detail of this class, but - Subclasses might reasonably expect to have a method of the same name that isn't meant as an overridden version of the original This takes a little bit of language support, whereby the __name is mangled to include the name of the class - so that subclass versions of it get different names instead of overriding. It is still quite possible for a subclass or outside code to call that method if it really wants to - and the goal of name mangling is explicitly not to prevent that. But because of all this, 'protected' turns out not to make much sense in Python - if you really have a method that could break invariants unless called by a subclass (and, realistically, you probably don't even if you think you do), the Python Way is just to document that. Put a note in your docstring to the effect of "This is assumed to only be called by subclasses", and run with the assumption that clients will do the right thing - because if they don't, it becomes their own problem.
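A short demonstration of that mangling behaviour:

```python
class Base(object):
    def __m(self):            # stored as _Base__m
        return "base"
    def call(self):
        return self.__m()     # always resolves to _Base__m

class Child(Base):
    def __m(self):            # stored as _Child__m -- does not override Base's
        return "child"

c = Child()
print(c.call())               # "base": the two methods stayed apart
print(c._Base__m())           # still callable from outside if you insist
```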
2
5
0
I would like to set up a class hierarchy in Python 3.2 with 'protected' access: Members of the base class would be in scope only for derived classes, but not 'public'. A double underscore makes a member 'private', a single underscore indicates a warning but the member remains 'public'. What (if any...) is the correct syntax for designating a 'protected' member.
"Protected" access in Python - how?
0.291313
0
0
6,658
15,049,661
2013-02-24T07:33:00.000
0
0
1
0
python,skype,skype4py
15,052,112
3
false
0
0
The Type property of the chat object will be either chatTypeDialog or chatTypeMultiChat with the latter being a group chat. You can safely ignore the other legacy enumeration values.
1
0
0
Is there a way to check if a chat is a group chat? Or at least to find out how many users there are in a group - for example by checking the user count: if it is 2, then it is obviously 1-to-1 (single), but if it is anything else, it would be a group chat.
Skype4Py Check If Group Chat
0
0
1
1,405
15,055,029
2013-02-24T18:27:00.000
5
0
0
0
python,web-applications,permissions,security,pyramid
15,057,901
1
true
1
0
Make a readwrite permission. Each view gets one and only one permission but each principal can be mapped to many permissions.
1
10
0
I am configuring access control for a web application based on the Pyramid framework. I am setting up permissions for my view callables using the @view_config decorator. I have two permissions, namely 'read' and 'write'. Now, I want certain views to require both permissions. I was unable to figure out how to do this with view_config - am I missing something, or is there maybe another way to do this?
Multiple permissions in view_config decorator?
1.2
0
0
1,193
15,055,175
2013-02-24T18:40:00.000
4
0
0
0
php,python,mysql,ruby
15,056,205
3
false
0
0
The better question to ask is "why are arrays zero-indexed?" The reason has to do with pointer arithmetic. The index of an array is an offset relative to the pointer address. In C++, given the array char x[5], the expressions x[1] and *(x + 1) are equivalent (pointer arithmetic scales the index by the element size, which for char is 1 byte). So auto-increment fields starting at 1 make sense. There is no real correlation between arrays and these fields.
2
5
0
The first element of arrays (in most programming languages) has an id (index) of 0. The first element (row) of MySQL tables has an (auto incremented) id of 1. The latter seems to be the exception.
Why does MySQL count from 1 and not 0?
0.26052
1
0
2,295
15,055,175
2013-02-24T18:40:00.000
0
0
0
0
php,python,mysql,ruby
15,055,977
3
false
0
0
The main reason, I suppose, is that a row in a database isn't an array and the auto-increment value isn't an index in the sense that an array index is. The primary key id can be any value; to a great extent it is simply essential that it is unique, and it is not guaranteed to be anything else (for example, you can delete a row and it won't renumber). This is a little like comparing apples and oranges! Arrays start at 0 because that's the first number. Auto-increment fields start at whatever number you want them to, and in that case we would all rather it was 1.
2
5
0
The first element of arrays (in most programming languages) has an id (index) of 0. The first element (row) of MySQL tables has an (auto incremented) id of 1. The latter seems to be the exception.
Why does MySQL count from 1 and not 0?
0
1
0
2,295
15,055,561
2013-02-24T19:20:00.000
0
0
1
1
python,debugging,breakpoints,access-violation,ida
15,235,645
2
true
0
0
Apparently the pida_dump script didn't get the right base address, so when I did a rebase the code was like address - old_base_address + new_base_address, and because the old_base_address was wrong it messed up my BP. Thanks anyway for the help!
1
0
0
I am using the winappdbg framework to build a debugger in Python. I can set some breakpoints using event.debug.break_at(event.get_pid(), address), but after setting certain breakpoints (and not while setting them, but once the program hits them!) I get an access violation exception. For example, I can set a breakpoint at 0x48d1ea or 0x47a001, but if I set one at 0x408020 I get the exception. The module base address is 0x400000. 0048D0BE: xor esi,eax 0048D0C0: call [winamp!start+0x25c1] 760DCC50: add [ebx],dh Access Violation Exception event (00000001) at address 779315DE, process 9172, thread 9616 BTW, I am taking the addresses to set the breakpoints on from a pida file generated by IDA. I rebased the file, so the addresses should be aligned. Thanks!
keep getting access violation after setting a breakpoint with winappdbg
1.2
0
0
656
15,056,269
2013-02-24T20:27:00.000
0
1
0
0
python,pyephem,azimuth,altitude
15,056,730
2
false
0
0
Without knowing the details of the internal calculations that PyEphem is doing, I don't know how easy or difficult it would be to invert those calculations to give the result you want. With regards to the "sneaking up on it" option, however, you could pick two starting times (e.g. sunrise and noon) where the azimuth is known to be on either side of (one greater and one less than) the desired value. Then just use a simple "halving the interval" approach to quickly find an approximate solution.
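A sketch of that interval-halving with PyEphem (the Stonehenge coordinates, dates and tolerance here are illustrative; it assumes the azimuth increases monotonically over the chosen interval, e.g. sunrise to solar noon):

```python
import ephem

def sun_alt_at_azimuth(obs, target_az, t_lo, t_hi, tol=1e-6):
    """Bisect between two ephem dates whose sun azimuths bracket target_az (radians)."""
    sun = ephem.Sun()
    while True:
        t_mid = (float(t_lo) + float(t_hi)) / 2.0   # ephem dates are floats, in days
        obs.date = t_mid
        sun.compute(obs)
        if float(t_hi) - float(t_lo) <= tol:
            return sun.alt, ephem.Date(t_mid)
        if float(sun.az) < target_az:
            t_lo = t_mid
        else:
            t_hi = t_mid

obs = ephem.Observer()
obs.lat, obs.lon = '51.1789', '-1.8262'             # roughly Stonehenge
alt, when = sun_alt_at_azimuth(obs, ephem.degrees('135'),
                               ephem.Date('2013/2/24 07:00'),
                               ephem.Date('2013/2/24 12:00'))
print(alt, when)
```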
1
3
0
I am using PyEphem to calculate the location of the Sun in the sky at various times. I have an Observer point (happens to be at Stonehenge) and can use PyEphem to calculate sunrise, sunset, and the altitude angle and azimuth (degrees from N) for the Sun at any hour of the day. Brilliant, no problem. However, what I really need is to be able to calculate the altitude angle of the Sun from an known azimuth. So I would set the same observer point (long/lat/elev/date (just yy/mm/dd, not time)) and an azimuth for the Sun. And from this input, calculate the altitude of the Sun and the time it is at that azimuth. I had hoped I would be able to just set Sun.date and Sun.az and work backwards from those values, but alas. Any thoughts on how to approach this (and if it even is approachable) with PyEphem? The only other option I'm seeing available is to "sneak up" on the azimuth by iterating over a sequence of times until I get within a margin of error of the azimuth I desire, but that is just gross. thanks in advance, Dave
PyEphem: can I calculate Sun's altitude from azimuth
0
0
0
1,610
15,057,301
2013-02-24T22:15:00.000
2
0
1
0
python,multithreading,python-multithreading
15,057,672
3
false
0
0
Threads for the purpose of speed in Python are not a terribly good idea, particularly for CPU-bound operations. The GIL sees off any potential performance improvement from multiple CPUs (the number of which is the theoretical upper bound on your speed increase from threading, though in practice YMMV). For truly independent "checks" you are far better off looking at multiprocessing.
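A minimal sketch of the multiprocessing route, assuming the checks are CPU-bound and independent (the check function here is a stand-in):

```python
from multiprocessing import Pool, cpu_count

def check(item):
    # Stand-in for one CPU-bound, independent check.
    return sum(i * i for i in range(item))

if __name__ == "__main__":
    # cpu_count() is a sensible starting point for CPU-bound work;
    # more workers than cores mostly adds scheduling overhead.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(check, range(1000))
    print(len(results))
```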
1
3
0
I am writing a simple script which should do a big number of checks. Every check is independent, so I decided to put them into multiple threads. However, I don't know how fast the machine the script will run on will be. I've already found quite a nice util to check basic parameters of the target machine, but I am wondering if there's any way to determine the maximum sensible number of threads (I mean the point at which a new thread starts slowing the process down instead of speeding it up)?
How can I determine sensible thread number in python?
0.132549
0
0
2,550
15,057,651
2013-02-24T22:52:00.000
0
0
0
0
python,python-3.3
15,057,740
2
false
1
0
PHP scripts are run server-side and produce an HTML document (among other things). You will never see the PHP source of an HTML document when requesting a website, hence there is no way for Python to grab it either. This isn't even Python-related.
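You can see this for yourself by fetching any PHP-backed page; the URL below is a placeholder:

```python
import urllib.request

# Whatever PHP code produced the page, the response body is the
# rendered output (HTML), never the .php source itself.
with urllib.request.urlopen("http://example.com/index.php") as resp:
    body = resp.read().decode("utf-8", errors="replace")

print(body[:200])  # HTML markup, no PHP
```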
1
0
0
I know how to grab a page's HTML source, but not its PHP. Is that possible with the built-in functions?
Python grabbing pages source with PHP in it
0
0
1
128
15,057,789
2013-02-24T23:08:00.000
0
0
1
0
python,tkinter,multiprocessing
15,057,824
2
false
0
1
Did you try the multiprocessing module? It seems to be the one you're looking for.
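A minimal sketch of the start/stop pattern from the question, using multiprocessing alongside Tkinter; the worker just counts, as a stand-in for the real function:

```python
import multiprocessing
import tkinter as tk

def count_forever():
    # Stand-in for the long-running function to be started/stopped.
    i = 0
    while True:
        print(i)
        i += 1

class App:
    def __init__(self, root):
        self.proc = None
        tk.Button(root, text="Start", command=self.start).pack()
        tk.Button(root, text="Stop", command=self.stop).pack()

    def start(self):
        if self.proc is None or not self.proc.is_alive():
            self.proc = multiprocessing.Process(target=count_forever,
                                                daemon=True)
            self.proc.start()

    def stop(self):
        if self.proc is not None and self.proc.is_alive():
            self.proc.terminate()  # kills the worker; the GUI stays live

if __name__ == "__main__":
    root = tk.Tk()
    App(root)
    root.mainloop()
```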
1
4
0
How do you run multiple processes in Python without multithreading? For example, consider the following problem: we have to make a GUI which has a Start button that starts a function (say, printing all integers) and a Stop button such that clicking it stops the function. How do you do this in Tkinter?
Multiprocessing in Python tkinter
0
0
0
7,177
15,058,771
2013-02-25T01:20:00.000
0
0
0
0
python,pyqt,powerpoint
15,058,874
1
false
0
1
Import the os module, then use os.system("powerpoint.exe filename")
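A hedged sketch of that idea; the file path is hypothetical, and os.startfile (Windows-only) is an alternative that opens the file with whatever application is associated with the extension:

```python
import os

path = r"C:\talks\deck.pptx"   # hypothetical path; adjust as needed

# Opens the file with the associated application (PowerPoint or the
# free PowerPoint Viewer, depending on what is installed).
os.startfile(path)

# Or shell out explicitly, as suggested above:
# os.system('start "" "{}"'.format(path))
```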
1
2
0
I'm struggling to open a PowerPoint presentation via Python. I have an educational program written in PyQt, and I need to open a PowerPoint presentation (in PowerPoint Viewer mode) when I click a push button. I tried using pywin32 and such, but I've had no luck :(
How to open a powerpoint presentation via python?
0
0
0
3,049
15,059,534
2013-02-25T03:16:00.000
0
0
0
0
python,django
15,059,615
2
false
1
0
One way to do it is to create a row in a persistent database (or a Redis key/value pair) for the task which records whether it is running or finished. Have the code set the value to "running" when the task starts and to "done" when it completes. Then have an AJAX call do a GET on a URL that returns the task's status. You can put that in a setInterval() to poll the database periodically and see if it is done. You could send an email on completion, or just have a landing page / dashboard that shows the status of the tasks being run.
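A minimal sketch of the status endpoint, assuming a Task model you define yourself (Task and task_status are illustrative names, not Django built-ins):

```python
import json
from django.http import HttpResponse
from myapp.models import Task  # hypothetical model with a `status` field

def task_status(request, task_id):
    # Returns {"status": "running"} or {"status": "done"} as JSON,
    # suitable for polling from setInterval() on the client.
    task = Task.objects.get(pk=task_id)
    return HttpResponse(json.dumps({"status": task.status}),
                        content_type="application/json")
```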
1
1
0
I'm using Django to develop a classifier service, and users can build a model through an API like http://localhost/api/buildmodel. However, building a model takes a long time, maybe 2 hours, and I'm using a web page to show the result. How should I design my Django program so the request returns immediately and the result is shown after the build finishes? Maybe I could use AJAX, but I want to implement it in Python, e.g. using an async method and calling a callback function after building. Any suggestions will be appreciated.
How to handle a request that takes a long time to run?
0
0
0
73
15,059,749
2013-02-25T03:45:00.000
-1
0
0
0
python,mysql,dbf,dbase
16,302,184
2
false
0
0
When you say you are using dBase, I presume you have access to the dot (.) prompt. At the dot prompt, convert the .dbf file into a delimited text file, then reconvert the delimited text file into a MySQL data file with the appropriate MySQL command. I do not know the actual command for it, but all DBMSs have commands to do that work. To eliminate the duplicates, you will have to do it at the time of populating the data into the .dbf file, through a program written in dBase.
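If you'd rather do the whole round trip in Python, a sketch along these lines is possible; it assumes the third-party dbfread and pymysql packages, and the table and column names are illustrative. Duplicates are skipped by giving the MySQL table a unique key and using INSERT IGNORE:

```python
import pymysql
from dbfread import DBF

conn = pymysql.connect(host="localhost", user="user",
                       password="secret", db="mydb")

with conn.cursor() as cur:
    for record in DBF("data.dbf"):   # records behave like dicts
        # INSERT IGNORE skips rows that violate the unique key,
        # which filters out duplicates on each import run.
        cur.execute(
            "INSERT IGNORE INTO mytable (code, name) VALUES (%s, %s)",
            (record["CODE"], record["NAME"]),
        )

conn.commit()
conn.close()
```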
1
0
0
The DBF files are updated every few hours. We need to import new records into MySQL and skip duplicates. I don't have any experience with DBF files, but as far as I can tell a handful of the ones we're working with don't have unique IDs. I plan to use Python if there are no ready-made utilities that do this.
What's the best way to routinely import DBase (dbf) files into MySQL tables?
-0.099668
1
0
2,974