Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,319,585 | 2009-08-23T21:15:00.000 | 2 | 0 | 0 | 0 | python,sqlalchemy,sqlobject | 1,319,662 | 3 | false | 0 | 0 | You will still be using SQLAlchemy. ResultProxy is actually a dictionary once you go for .fetchmany() or similar.
Use SQLAlchemy as a tool that makes managing connections easier, as well as executing statements. Documentation is pretty much separated in sections, so you will be reading just the part that you need. | 2 | 3 | 0 | Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject/SQLAlchemy). I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell.
Rather than translate a row from the database into an object:
each table is represented by a class
a row is retrieved as a dict
an object representing a cursor provides access to a table like so:
cursor.mytable.get_by_ids(low, high)
removing means setting the time_of_removal to the current time
So essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row.
Type mapping is trivial because each dict (row) being a first class object in python/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types.
If you see any potential problems with going down this road, please let me know. Thanks. | Is this a good approach to avoid using SQLAlchemy/SQLObject? | 0.132549 | 1 | 0 | 569 |
1,319,763 | 2009-08-23T22:33:00.000 | 4 | 0 | 1 | 0 | python,data-structures,collections,dictionary | 1,319,790 | 10 | false | 0 | 0 | An ordered tree is usually better for this cases, but random access is going to be log(n). You should keep into account also insertion and removal costs... | 1 | 36 | 1 | I am looking for a solid implementation of an ordered associative array, that is, an ordered dictionary. I want the ordering in terms of keys, not of insertion order.
More precisely, I am looking for a space-efficient implementation of an int-to-float (or string-to-float for another use case) mapping structure for which:
Ordered iteration is O(n)
Random access is O(1)
The best I came up with was gluing a dict and a list of keys, keeping the last one ordered with bisect and insert.
Any better ideas? | Key-ordered dict in Python | 0.07983 | 0 | 0 | 12,867 |
1,319,895 | 2009-08-23T23:42:00.000 | 0 | 0 | 0 | 0 | python,flash,forms | 1,320,103 | 2 | false | 1 | 0 | For your flash app, there's no difference if the backend is python, php or anything, so you can follow a normal "php + flash contact form" guide and then build the backend using django or any other python web framework, receive the information from the http request (GET or POST, probably the last one) and do whatever you wanted to do with them.
Notice the response from python to flash works the same as with php, it's just http content, so you can use XML or even better, JSON. | 2 | 3 | 0 | I would like to know if it is possible to submit a flash form from python and, if it is, how?
I have done form submitting from python before, but the forms were HTML not flash. I really have no idea on how to do this. In my research about this I kept getting 'Ming'. However, Ming is only to create .swf files and that's not what I intend to do.
Any help on this is greatly appreciated. | How to submit data of a flash form? [python] | 0 | 0 | 0 | 458 |
1,319,895 | 2009-08-23T23:42:00.000 | 1 | 0 | 0 | 0 | python,flash,forms | 1,319,907 | 2 | true | 1 | 0 | You can set the url attribute (I think it's url, please correct me if I'm wrong) on a Flash form control to a Python script - then it will pass it through HTTP POST like any normal HTML form.
You've got nothing to be afraid of, it uses the same protocol to communicate, it's just a different submission process. | 2 | 3 | 0 | I would like to know if it is possible to submit a flash form from python and, if it is, how?
I have done form submitting from python before, but the forms were HTML not flash. I really have no idea on how to do this. In my research about this I kept getting 'Ming'. However, Ming is only to create .swf files and that's not what I intend to do.
Any help on this is greatly appreciated. | How to submit data of a flash form? [python] | 1.2 | 0 | 0 | 458 |
1,320,671 | 2009-08-24T06:00:00.000 | 1 | 0 | 1 | 0 | python,encryption | 1,320,693 | 6 | false | 0 | 0 | From my recent python experience, python doesn't do encryption natively. You need to use an external (3rd party) package. Each of these, obviously, offers a different experience. Which are you using? This will probably determine how your syntax will vary. | 2 | 2 | 0 | I have a 'public key' in a variable named varkey, for getting the public key I used the urllib and stored that public key in a variable. Now I want to encrypt a msg/string using the public key.
It's ok if somebody could lead me to some library. | How to encrypt a string using the key | 0.033321 | 0 | 0 | 16,029 |
1,320,671 | 2009-08-24T06:00:00.000 | -2 | 0 | 1 | 0 | python,encryption | 1,320,686 | 6 | false | 0 | 0 | You could use MD5 or SHA1 hashing along with your key... | 2 | 2 | 0 | I have a 'public key' in a variable named varkey, for getting the public key I used the urllib and stored that public key in a variable. Now I want to encrypt a msg/string using the public key.
It's ok if somebody could lead me to some library. | How to encrypt a string using the key | -0.066568 | 0 | 0 | 16,029 |
1,322,425 | 2009-08-24T13:30:00.000 | -1 | 0 | 0 | 0 | python,django,django-models,django-admin | 1,322,450 | 3 | false | 1 | 0 | You have SQL. You can write SQL UPDATE statements.
You have Python for writing batch scripts that interact with the Django ORM. This works really, really well for bulk changes. | 1 | 4 | 0 | Are there any admin extensions that allow bulk editing of data in Django Admin? (i.e. changing the picture fields of all product models at once. Note that this is needed from a user's POV, so scripting doesn't count.) Any thoughts on the subject are welcome. | Django Admin - Bulk editing data? | -0.066568 | 0 | 0 | 6,630 |
1,322,787 | 2009-08-24T14:37:00.000 | 0 | 0 | 1 | 0 | python,excel | 2,066,291 | 6 | false | 0 | 0 | I had to do this some years back. My solution was to run small Python server that exported the functions using SOAP, then call the functions using Visual Basic's SOAP library. The advantage is that you don't have to ship a Python environment with your spreadsheets. The disadvantage is that the clients will need a network connection. | 1 | 3 | 0 | Somebody really needs to fix this "subjective questions evaluator"
I usually compile my functions in a DLL and call them from excel. That works fine (well, let's just say it works)
Unfortunately, Python cannot be compiled. I know of py2exe, but I don't know whether it can make a DLL.
So, ..., is there any other way ? I appreciate all ideas and suggestions on the matter. | What would be the best way to use python's functions from excel? | 0 | 0 | 0 | 3,850 |
1,323,361 | 2009-08-24T16:21:00.000 | 0 | 0 | 1 | 0 | python,wxpython,wing-ide | 1,323,458 | 1 | true | 0 | 1 | There is a Ignore this exception location check box in the window where the exception is reported in wing, or you could explicitly silence that specific exception in you code with a try except block. | 1 | 0 | 0 | I started using Wing IDE and it's great. I'm building a wxPython app, and I noticed that Wing IDE catches exceptions that are usually caught by wxPython and not really raised. This is usually useful, but I would like to disable this behavior occasionally. How do I do that? | Getting Wing IDE to stop catching the exceptions that wxPython catches | 1.2 | 0 | 0 | 175 |
1,324,238 | 2009-08-24T19:28:00.000 | 3 | 0 | 0 | 0 | python,django-templates,template-engine,mako,jinja2 | 1,698,458 | 5 | false | 1 | 0 | If you can throw caching in the mix (like memcached) then choose based on features and ease of use rather than optimization.
I use Mako because I like the syntax and features. Fortunately it is one of the fastest as well. | 3 | 53 | 0 | Jinja2 and Mako are both apparently pretty fast.
How do these compare to (the less featured but probably good enough for what I'm doing) string.Template ? | What is the fastest template system for Python? | 0.119427 | 0 | 0 | 30,594 |
1,324,238 | 2009-08-24T19:28:00.000 | 1 | 0 | 0 | 0 | python,django-templates,template-engine,mako,jinja2 | 1,324,515 | 5 | false | 1 | 0 | In general you will have to do profiling to answer that question, as it depends on how you use the templates and what for.
string.Template is the fastest, but so primitive it can hardly be called a template in the same breath as the other templating systems, as it only does string replacements, and has no conditions or loops, making it pretty useless in practice. | 3 | 53 | 0 | Jinja2 and Mako are both apparently pretty fast.
How do these compare to (the less featured but probably good enough for what I'm doing) string.Template ? | What is the fastest template system for Python? | 0.039979 | 0 | 0 | 30,594 |
1,324,238 | 2009-08-24T19:28:00.000 | -4 | 0 | 0 | 0 | python,django-templates,template-engine,mako,jinja2 | 1,325,478 | 5 | false | 1 | 0 | I think Cheetah might be the fastest, as it's implemented in C. | 3 | 53 | 0 | Jinja2 and Mako are both apparently pretty fast.
How do these compare to (the less featured but probably good enough for what I'm doing) string.Template ? | What is the fastest template system for Python? | -1 | 0 | 0 | 30,594 |
1,325,481 | 2009-08-25T00:30:00.000 | 0 | 0 | 1 | 0 | python,database,odbc,commit,bulkinsert | 1,325,524 | 2 | true | 0 | 0 | If I understand what you are doing, Python is not going to be a problem. Executing a statement inside a transaction does not create cumulative state in Python. It will do so only at the database server itself.
When you commit you will need to make sure the commit occurred, since having a large batch commit may conflict with intervening changes in the database. If the commit fails, you will have to re-run the batch again.
That's the only problem that I am aware of with large batches and Python/ODBC (and it's not even really a Python problem, since you would have that problem regardless.)
Now, if you were creating all the SQL in memory, and then looping through the memory-representation, that might make more sense. Still, 5000 lines of text on a modern machine is really not that big of a deal. If you start needing to process two orders of magnitude more, you might need to rethink your process. | 1 | 1 | 0 | I am writing a python script that will be doing some processing on text files. As part of that process, i need to import each line of the tab-separated file into a local MS SQL Server (2008) table. I am using pyodbc and I know how to do this. However, I have a question about the best way to execute it.
I will be looping through the file, creating a cursor.execute(myInsertSQL) for each line of the file. Does anyone see any problems waiting to commit the statements until all records have been looped (i.e. doing the commit() after the loop and not inside the loop after each individual execute)? The reason I ask is that some files will have upwards of 5000 lines. I didn't know if trying to "save them up" and committing all 5000 at once would cause problems.
I am fairly new to python, so I don't know all of these issues yet.
Thanks. | Importing a text file into SQL Server in Python | 1.2 | 1 | 0 | 3,467 |
1,325,568 | 2009-08-25T01:10:00.000 | 3 | 0 | 1 | 1 | python,windows,filesystems | 1,325,685 | 5 | false | 0 | 0 | Does it need to be Windows-native? There is at least one protocol which can be both browsed by Windows Explorer, and served by free Python libraries: FTP. Stick your program behind pyftpdlib and you're done. | 2 | 10 | 0 | I want to program a virtual file system in Windows with Python.
That is, a program in Python whose interface is actually an "explorer windows". You can create & manipulate file-like objects but instead of being created in the hard disk as regular files they are managed by my program and, say, stored remotely, or encrypted or compressed or versioned, or whatever I can do with Python.
What is the easiest way to do that? | easiest way to program a virtual file system in windows with Python | 0.119427 | 0 | 0 | 4,784 |
1,325,568 | 2009-08-25T01:10:00.000 | 2 | 0 | 1 | 1 | python,windows,filesystems | 1,325,652 | 5 | false | 0 | 0 | If you are trying to write a virtual file system (I may misunderstand you) - I would look at a container file format. VHD is well documented, along with HDI and (embedded) OSQ. There are basically two things you need to do. One is that you need to decide on a file/container format. After that it is as simple as writing the API to manipulate that container. If you would like it to be manipulated over the internet, pick a transport protocol, then just write a service (which would emulate a file system driver) that listens on a certain port and manipulates this container using your API. | 2 | 10 | 0 | I want to program a virtual file system in Windows with Python.
That is, a program in Python whose interface is actually an "explorer windows". You can create & manipulate file-like objects but instead of being created in the hard disk as regular files they are managed by my program and, say, stored remotely, or encrypted or compressed or versioned, or whatever I can do with Python.
What is the easiest way to do that? | easiest way to program a virtual file system in windows with Python | 0.07983 | 0 | 0 | 4,784 |
1,327,105 | 2009-08-25T09:25:00.000 | 1 | 0 | 1 | 0 | ironpython,ironpython-studio | 1,330,323 | 3 | false | 0 | 1 | I may not understand the question well, but copying IronMath.dll and IronPython.dll to the folder with main.exe and main.dll should work for IronPython 1.x. These .dlls are different for IronPython 2.x.
Edit: Well, I tried PYC with IP 1.1 and it does not work. That means you have to use it with at least IP 2.0.2 (it is located in the Samples\pyc folder). For a simple script like print 'hello' you need to ship the following (along with hello.dll and hello.exe):
IronPython.dll
Microsoft.Scripting.Core.dll
Microsoft.Scripting.dll
Microsoft.Scripting.ExtensionAttribute.dll
For more complicated script you will probably need IronPython.Modules.dll as well. | 3 | 0 | 0 | I am using IronPython studio to create IronPython scripts and convert them into executables. When converted to executables, it creates a Main exe and two dlls (IronMath.dll and IronPython.dll). Is it possible to create the executables without IronPython studio. I tried PYC downloaded from codeplex.com. It creates an exe and a dll with the same name as that of the exe (say main.exe and main.dll). But I need an exe and two dlls (similar to what is created by the IronPython studio). So that I can use other IronPython exes without any separate dlls (these 2 dlls would be enough for any FePy exe). | Generating EXE out of IronPython script | 0.066568 | 0 | 0 | 3,879 |
1,327,105 | 2009-08-25T09:25:00.000 | 0 | 0 | 1 | 0 | ironpython,ironpython-studio | 1,344,862 | 3 | false | 0 | 1 | A DLL is a dynamically linked library. It's required for your application to run properly. All applications written in .NET use them. You just don't know it, because support is built into the .NET framework, which most everyone has installed on their systems. Yay, way to go Microsoft. The DLR (Dynamic Language Runtime) isn't built into any .NET distributable at this time, however (this will change in .NET 4.0). That's why you get the dll file.
Are you writing software that utilizes any .NET libraries? If not, just write it in good ol' CPython (the way you're supposed to). Then, you should look into a program called py2exe. Have you ever used uTorrent? I'm assuming you have. It's built using straight-up CPython + py2exe.
Enjoy. :) | 3 | 0 | 0 | I am using IronPython studio to create IronPython scripts and convert them into executables. When converted to executables, it creates a Main exe and two dlls (IronMath.dll and IronPython.dll). Is it possible to create the executables without IronPython studio. I tried PYC downloaded from codeplex.com. It creates an exe and a dll with the same name as that of the exe (say main.exe and main.dll). But I need an exe and two dlls (similar to what is created by the IronPython studio). So that I can use other IronPython exes without any separate dlls (these 2 dlls would be enough for any FePy exe). | Generating EXE out of IronPython script | 0 | 0 | 0 | 3,879 |
1,327,105 | 2009-08-25T09:25:00.000 | 2 | 0 | 1 | 0 | ironpython,ironpython-studio | 1,389,659 | 3 | true | 0 | 1 | I have created a C# application that uses the IronPython.dll and IronMath.dll to convert the IronPython scripts to executables. This doesn't require IronPython studio to be present. Only the DLLs are enough. The behavior of exe is same as that created by IronPython studio(Integrated with VS2008) | 3 | 0 | 0 | I am using IronPython studio to create IronPython scripts and convert them into executables. When converted to executables, it creates a Main exe and two dlls (IronMath.dll and IronPython.dll). Is it possible to create the executables without IronPython studio. I tried PYC downloaded from codeplex.com. It creates an exe and a dll with the same name as that of the exe (say main.exe and main.dll). But I need an exe and two dlls (similar to what is created by the IronPython studio). So that I can use other IronPython exes without any separate dlls (these 2 dlls would be enough for any FePy exe). | Generating EXE out of IronPython script | 1.2 | 0 | 0 | 3,879 |
1,328,248 | 2009-08-25T13:23:00.000 | 0 | 0 | 1 | 0 | python,optparse | 1,328,298 | 2 | false | 0 | 0 | Are you sure that subclassing is what you want to do? Your overriding behavior could just be implemented in a function. | 2 | 2 | 0 | I have a class that handles command line arguments in my program using python's optparse module. It is also inherited by several classes to create subsets of parameters. To encapsulate the option parsing mechanism I want to reveal only a function add_option to inheriting classes. What this function does is then call optparse.make_option.
Is it a good practice to simply have my add_option method say that it accepts the same arguments as optparse.make_option in the documentation, and forward the arguments as *args and **kwargs?
Should I do some parameter checking beforehand? In a way I want to avoid this to decouple that piece of code as much from a specific version of optparse. | Should I forward arguments as *args & **kwargs? | 0 | 0 | 0 | 1,449 |
1,328,248 | 2009-08-25T13:23:00.000 | 1 | 0 | 1 | 0 | python,optparse | 1,328,511 | 2 | true | 0 | 0 | It seems that you want your subclasses to have awareness of the command line stuff, which is often not a good idea.
You want to encapsulate the whole config input portion of your program so that you can drive it with a command line, config file, other python program, whatever.
So, I would remove any call to add_option from your subclasses.
If you want to discover what your config requirements look like at runtime, I would simply add that data to your subclasses; let each one have a member or method that can be used to figure out what kind of inputs it needs.
Then, you can have an input organizer class walk over them, pull this data out, and use it to drive a command line, config file, or what have you.
But honestly, I've never needed to do this at run time. I usually pull all that config stuff out to its own separate thing which answers the question "What does the user need to tell the tool?", and then the subclasses go looking in the config data structure for what they need.
Is it a good practice to simply have my add_option method say that it accepts the same arguments as optparse.make_option in the documentation, and forward the arguments as *args and **kwargs?
Should I do some parameter checking beforehand? In a way I want to avoid this to decouple that piece of code as much from a specific version of optparse. | Should I forward arguments as *args & **kwargs? | 1.2 | 0 | 0 | 1,449 |
1,329,076 | 2009-08-25T15:33:00.000 | 3 | 0 | 0 | 0 | python,window,pygtk,freeze | 1,329,140 | 1 | true | 0 | 1 | You really shouldn't try to make a program become unresponsive.
If what you want to do is stop the user from using the window, make the dialog modal: gtk.Dialog.set_modal(True) | 1 | 0 | 0 | I want the main window to "gray out, freeze, stop working" when some other window is opened. Is there some default way to do it? Pretty much the same way gtk.Dialog works.
EDIT: Currently I'm just replacing all the contents with a text line, but I guess there should be a better way. | How to freeze/grayish window in pygtk? | 1.2 | 0 | 0 | 310 |
1,331,033 | 2009-08-25T21:14:00.000 | 0 | 0 | 1 | 0 | python,memory,memory-management | 1,331,164 | 3 | false | 0 | 0 | Stop using it when you no longer need it; Python has a garbage collector. Set attributes and variables to None when you are done with them. | 2 | 1 | 0 | I have just written a .psf file in Python for executing an optimization algorithm for Abaqus package, but after some analysis it stops. Could you please help me and write Python code to free the memory?
Thanks | How Can I Empty the Used Memory With Python? | 0 | 0 | 0 | 693 |
1,331,033 | 2009-08-25T21:14:00.000 | 2 | 0 | 1 | 0 | python,memory,memory-management | 1,331,258 | 3 | false | 0 | 0 | You don't really explicitly free memory in Python. What you do is stop referencing it, and it gets freed automatically. Although del does this, it's very rare that you really need to use it in a well designed application.
So this is really a question of how not to use so much memory in Python. I'd say the main hint there is to try to refactor your program to use generators, so that you don't have to hold all the data in memory at once. | 2 | 1 | 0 | I have just written a .psf file in Python for executing an optimization algorithm for Abaqus package, but after some analysis it stops. Could you please help me and write Python code to free the memory?
Thanks | How Can I Empty the Used Memory With Python? | 0.132549 | 0 | 0 | 693 |
1,331,235 | 2009-08-25T21:52:00.000 | 3 | 0 | 1 | 0 | python,import,compilation | 1,331,250 | 4 | true | 0 | 0 | I don't think that's possible - its the way Python works. The best you could do, I think, is to have some kind of automated script which deletes *.pyc files at first. Or you could have a development module which automatically compiles all imports - try the compile module.
I've personally not had this trouble before, but try checking the timestamps on the files. You could try running touch on all the Python files in the directory. (find -name \\*.py -exec touch \\{\\} \\;) | 2 | 1 | 0 | I have have a python file that imports a few frequently changed python files. I have had trouble with the imported files not recompiling when I change them. How do I stop them compiling? | Prevent python imports compiling | 1.2 | 0 | 0 | 2,496 |
1,331,235 | 2009-08-25T21:52:00.000 | 1 | 0 | 1 | 0 | python,import,compilation | 1,331,831 | 4 | false | 0 | 0 | In python 2.6, you should be able to supply the -B option. | 2 | 1 | 0 | I have have a python file that imports a few frequently changed python files. I have had trouble with the imported files not recompiling when I change them. How do I stop them compiling? | Prevent python imports compiling | 0.049958 | 0 | 0 | 2,496 |
1,331,815 | 2009-08-26T00:54:00.000 | 96 | 0 | 1 | 0 | python,regex,cross-platform,eol | 1,331,840 | 2 | true | 0 | 0 | The regex I use when I want to be precise is "\r\n?|\n".
When I'm not concerned about consistency or empty lines, I use "[\r\n]+", I imagine it makes my programs somewhere in the order of 0.2% faster. | 2 | 62 | 0 | My program can accept data that has newline characters of \n, \r\n or \r (eg Unix, PC or Mac styles)
What is the best way to construct a regular expression that will match whatever the encoding is?
Alternatively, I could use universal_newline support on input, but now I'm interested to see what the regex would be. | Regular Expression to match cross platform newline characters | 1.2 | 0 | 0 | 24,989 |
1,331,815 | 2009-08-26T00:54:00.000 | 10 | 0 | 1 | 0 | python,regex,cross-platform,eol | 39,022,365 | 2 | false | 0 | 0 | The pattern can be simplified to \r?\n for a little performance gain, as you probably don't have to deal with the old Mac style (OS 9 is unsupported since February 2002). | 2 | 62 | 0 | My program can accept data that has newline characters of \n, \r\n or \r (eg Unix, PC or Mac styles)
What is the best way to construct a regular expression that will match whatever the encoding is?
Alternatively, I could use universal_newline support on input, but now I'm interested to see what the regex would be. | Regular Expression to match cross platform newline characters | 1 | 0 | 0 | 24,989 |
1,332,598 | 2009-08-26T05:59:00.000 | 6 | 0 | 1 | 0 | java,python | 1,332,722 | 2 | false | 0 | 0 | There is no 100% reliable / portable way to do this, but the following procedure should give you some confidence that Java has been installed and configured properly (on a Linux):
Check that the "JAVA_HOME" environment variable has been set and that it points to a directory containing a "bin" directory and that the "bin" directory contains an executable "java" command.
Check that the "java" command found via a search of "PATH" is the one that was found in step 1.
Run the "java" command with "-version" to see if the output looks like a normal Java version stamp.
This doesn't guarantee that the user has not done something weird.
Actually, if it was me, I wouldn't bother with this. I'd just try to launch the Java app from Python assuming that the "java" on the user's path was the right one. If there were errors, I'd report them. | 1 | 5 | 0 | Using Python, I want to know whether Java is installed. | How to determine whether java is installed on a system through python? | 1 | 0 | 0 | 6,171 |
1,332,846 | 2009-08-26T07:08:00.000 | 0 | 0 | 0 | 0 | iphone,python,clipboard,ipod-touch | 1,333,976 | 1 | false | 1 | 0 | Sorry no, I'm assuming since you mention python that this is a web-based application? If so there is no way you can put something into/take something out of the user's clipboard automatically. However if it is webbased the user will be able to select any text/image and copy to paste elsewhere. | 1 | 1 | 0 | I want to modify a python application written for the ipod/iphone.
It should copy a string into the clipboard so that I can use it in another application.
Is it possible to access the iphone clipboard using python?
Thanks in advance.
UPDATE:
Thanks for replying.
A bit of background: The python program is a vocabulary program running locally on my ipod.
Often I want to look up the vocabulary in a dictionary.
Then I always have to repeat the following steps:
Select and copy the word.
Close the vocabulary program.
Open the dictionary.
Paste the word into the text field.
Press search.
I want to automate the process, therefore I want the python program to copy the word into the clipboard automatically and start the dictionary.
I figured out the part with the starting already, using URL schemes.
I was hoping to be able to automate the copying as well. | How can I access the iphone / ipod clipboard using python? | 0 | 0 | 0 | 994 |
1,332,853 | 2009-08-26T07:10:00.000 | 1 | 0 | 0 | 1 | python,remote-access | 1,332,904 | 5 | false | 0 | 0 | Which OS for the target machines? If 'service' is 'Windows NT service', and your local machine is also Windows, I'd use IronPython as the Python language implementation and call straight into the WMI facilities in the .net System.Management namespace -- they're meant for remote admin like that. | 1 | 0 | 0 | I decided to tackle Python as a new language to learn. The first thing I want to do is code a script that will allow me to remotely restart services on other machines from my local machine. How would I accomplish this when the remote machine requires a username and password to log on? I don't need a full solution to be given to me but maybe some pointers on what libraries I should use or any issues I need to address when writing the script.
EDIT: All the remote machines are using Windows 2003 | How to remotely restart a service on a password protected machine using Python? | 0.039979 | 0 | 0 | 4,489 |
1,332,876 | 2009-08-26T07:15:00.000 | 0 | 0 | 0 | 0 | python,html | 1,332,899 | 3 | false | 1 | 0 | If you rename the folder, I'm not sure how you can get around parsing the .htm file and replacing instances of _files with the new suffix. Perhaps you can use a folder alias (shortcut?) but then that's not a very clean solution. | 1 | 0 | 0 | A bit of background:
When I save a web page from e.g. IE8 as "webpage, complete", the images and such that the page contains are placed in a subfolder with the postfix "_files". This convention allows Windows to synchronize the .htm file and the accompanying folder.
Now, in order to keep the synchronization intact, when I rename the HTML file from my Python script I want the "_files" folder to be renamed also. Is there an easy way to do this, or will I need to
- rename the .htm file
- rename the _files folder
- parse the .htm file and replace all references to the old _files folder name with the new name? | Renaming a HTML file with Python | 0 | 0 | 0 | 531 |
1,334,813 | 2009-08-26T13:47:00.000 | 1 | 0 | 0 | 0 | python,database,statistics,time-series,schemaless | 1,335,132 | 5 | false | 0 | 0 | Plain text files? It's not clear what your 10k data points per 15 minutes translates to in terms of bytes, but in any case text files are easier to store/archive/transfer/manipulate, and you can inspect them directly, just by looking at them. They're fairly easy to work with from Python, too. | 1 | 17 | 0 | I am interested in monitoring some objects. I expect to get about 10000 data points every 15 minutes. (Maybe not at first, but this is the 'general ballpark'). I would also like to be able to get daily, weekly, monthly and yearly statistics. It is not critical to keep the data in the highest resolution (15 minutes) for more than two months.
I am considering various ways to store this data, and have been looking at a classic relational database, or at a schemaless database (such as SimpleDB).
My question is, what is the best way to go along doing this? I would very much prefer an open-source (and free) solution to a proprietary costly one.
Small note: I am writing this application in Python. | What is the best open source solution for storing time series data? | 0.039979 | 1 | 0 | 13,739 |
1,336,489 | 2009-08-26T18:01:00.000 | -3 | 0 | 1 | 0 | python,job-queue | 3,758,551 | 9 | false | 0 | 0 | Also there is Unix 'at'
For more info:
man at | 1 | 14 | 0 | Do you know/use any distributed job queue for python? Can you share links or tools | job queue implementation for python | -0.066568 | 0 | 0 | 13,589 |
1,336,824 | 2009-08-26T19:03:00.000 | 4 | 0 | 1 | 0 | python,calendar,icalendar,python-dateutil | 1,337,063 | 1 | true | 0 | 0 | My guess is probably not. Finding the last date before datetime.max means you have to calculate all the recurrences up until datetime.max, and that will reasonably be a LOT of recurrences. It might be possible to add shortcuts for some of the simpler recurrences. If the rule recurs every year on the same date, for example, you don't really need to compute the recurrences in between. But you do if you have "every third something", and also if there is a maximum number of recurrences, etc. But I guess dateutil doesn't have these shortcuts. It would probably be quite complex to implement reliably.
May I ask why you need to find the last recurrence before datetime.max? It is, after all, almost eight thousand years into the future... :-) | 1 | 2 | 0 | I'm using the python dateutil module for a calendaring application which supports repeating events. I really like the ability to parse ical rrules using the rrulestr() function. Also, using rrule.between() to get dates within a given interval is very fast.
However, as soon as I try doing any other operations (ie: list slices, before(), after(),...) everything begins to crawl. It seems like dateutil tries to calculate every date even if all I want is to get the last date with rrule.before(datetime.max).
Is there any way of avoiding these unnecessary calculations? | Python dateutil.rrule is incredibly slow | 1.2 | 0 | 0 | 2,608 |
1,337,229 | 2009-08-26T20:21:00.000 | 2 | 0 | 1 | 0 | powershell,scripting,ironpython | 24,164,083 | 12 | false | 0 | 0 | A quick and dirty solution is to use CTRL+S to halt the scrolling of the display and CTRL+Q to resume it. | 4 | 45 | 0 | When I call a Powershell script, how can I keep the called script from closing its command window. I'm getting an error and I'm sure I can fix it if I could just read the error.
I have a Powershell script that sends an email with attachment using the .NET classes. If I call the script directly by executing it from the command line or calling it from the Windows Scheduler then it works fine. If I call it from within another script (IronPython, if that matters) then it fails. All scenarios work fine on my development machine. (I really do have to get that "Works on My Machine" logo!) I've got the call to Powershell happening in a way that displays a command window and I can see a flicker of red just before it closes.
Sorry: Powershell 1.0, IronPython 1.1
Solution: powershell -noexit d:\script\foo.ps1
The -noexit switch worked fine. I just added it to the arguments I pass from IronPython. As I suspected, it's something that I can probably fix myself (execution policy, although I did temporarily set as unrestricted with no effect, so I guess I need to look deeper). I'll ask another question if I run into trouble with that.
Thanks to all for the help. I learned that I need to investigate powershell switches a little more closely, and I can see quite a few things that will prove useful in the future. | Powershell window disappears before I can read the error message | 0.033321 | 0 | 0 | 98,225 |
1,337,229 | 2009-08-26T20:21:00.000 | 1 | 0 | 1 | 0 | powershell,scripting,ironpython | 33,607,786 | 12 | false | 0 | 0 | My solution was to execute the script with a command line from the console window instead of right-clicking the file -> execute with powershell.
The console keeps displaying the error messages,
even though the execution of the script ended. | 4 | 45 | 0 | When I call a Powershell script, how can I keep the called script from closing its command window. I'm getting an error and I'm sure I can fix it if I could just read the error.
I have a Powershell script that sends an email with attachment using the .NET classes. If I call the script directly by executing it from the command line or calling it from the Windows Scheduler then it works fine. If I call it from within another script (IronPython, if that matters) then it fails. All scenarios work fine on my development machine. (I really do have to get that "Works on My Machine" logo!) I've got the call to Powershell happening in a way that displays a command window and I can see a flicker of red just before it closes.
Sorry: Powershell 1.0, IronPython 1.1
Solution: powershell -noexit d:\script\foo.ps1
The -noexit switch worked fine. I just added it to the arguments I pass from IronPython. As I suspected, it's something that I can probably fix myself (execution policy, although I did temporarily set as unrestricted with no effect, so I guess I need to look deeper). I'll ask another question if I run into trouble with that.
Thanks to all for the help. I learned that I need to investigate powershell switches a little more closely, and I can see quite a few things that will prove useful in the future. | Powershell window disappears before I can read the error message | 0.016665 | 0 | 0 | 98,225 |
1,337,229 | 2009-08-26T20:21:00.000 | 0 | 0 | 1 | 0 | powershell,scripting,ironpython | 1,337,315 | 12 | false | 0 | 0 | Have you thought about redirecting stdout and stderr to a file ex:
./ascript.ps1 >logs 2>&1
Note: You can create a wrapper script in PowerShell that calls your PowerShell script with all the necessary redirections.
I have a Powershell script that sends an email with attachment using the .NET classes. If I call the script directly by executing it from the command line or calling it from the Windows Scheduler then it works fine. If I call it from within another script (IronPython, if that matters) then it fails. All scenarios work fine on my development machine. (I really do have to get that "Works on My Machine" logo!) I've got the call to Powershell happening in a way that displays a command window and I can see a flicker of red just before it closes.
Sorry: Powershell 1.0, IronPython 1.1
Solution: powershell -noexit d:\script\foo.ps1
The -noexit switch worked fine. I just added it to the arguments I pass from IronPython. As I suspected, it's something that I can probably fix myself (execution policy, although I did temporarily set as unrestricted with no effect, so I guess I need to look deeper). I'll ask another question if I run into trouble with that.
Thanks to all for the help. I learned that I need to investigate powershell switches a little more closely, and I can see quite a few things that will prove useful in the future. | Powershell window disappears before I can read the error message | 0 | 0 | 0 | 98,225 |
1,337,229 | 2009-08-26T20:21:00.000 | 0 | 0 | 1 | 0 | powershell,scripting,ironpython | 67,677,303 | 12 | false | 0 | 0 | My .PS1 script ran fine from the Powershell console but when "double-clicking" or "right-click open with powershell" it would exhibit the 'open/close' problem.
The Fix for me was to rename the script folder to a Name Without Spaces.
Then it all worked - Windows couldn't deal with
"C:\This is my folder\myscript.ps1" but
"C:\This_is_my_folder\myscript.ps1" worked just fine | 4 | 45 | 0 | When I call a Powershell script, how can I keep the called script from closing its command window. I'm getting an error and I'm sure I can fix it if I could just read the error.
I have a Powershell script that sends an email with attachment using the .NET classes. If I call the script directly by executing it from the command line or calling it from the Windows Scheduler then it works fine. If I call it from within another script (IronPython, if that matters) then it fails. All scenarios work fine on my development machine. (I really do have to get that "Works on My Machine" logo!) I've got the call to Powershell happening in a way that displays a command window and I can see a flicker of red just before it closes.
Sorry: Powershell 1.0, IronPython 1.1
Solution: powershell -noexit d:\script\foo.ps1
The -noexit switch worked fine. I just added it to the arguments I pass from IronPython. As I suspected, it's something that I can probably fix myself (execution policy, although I did temporarily set as unrestricted with no effect, so I guess I need to look deeper). I'll ask another question if I run into trouble with that.
Thanks to all for the help. I learned that I need to investigate powershell switches a little more closely, and I can see quite a few things that will prove useful in the future. | Powershell window disappears before I can read the error message | 0 | 0 | 0 | 98,225 |
1,338,095 | 2009-08-26T23:18:00.000 | 1 | 0 | 1 | 0 | iphone,python,pyobjc | 1,338,105 | 2 | true | 1 | 0 | No: it's Apple's deliberate policy decision (no doubt with some technical underpinnings) to not support interpreters/runtimes on iPhone for most languages -- ObjC (and Javascript within Safari) is what Apple wants you to use, not Python, Java, Ruby, and so forth. | 2 | 3 | 0 | Is it currently possible to compile Python and PyObjC for the iPhone such that AppStore applications can written in Python?
If not, is this a purely technical issue or a deliberate policy decision by Apple? | Can I write Python applications using PyObjC that target NON-jailbroken iPhones? | 1.2 | 0 | 0 | 653 |
1,338,095 | 2009-08-26T23:18:00.000 | 0 | 0 | 1 | 0 | iphone,python,pyobjc | 1,338,097 | 2 | false | 1 | 0 | no, apple strictly forbids running any kind of interpreter on iphone, and it is completely policy issue. | 2 | 3 | 0 | Is it currently possible to compile Python and PyObjC for the iPhone such that AppStore applications can written in Python?
If not, is this a purely technical issue or a deliberate policy decision by Apple? | Can I write Python applications using PyObjC that target NON-jailbroken iPhones? | 0 | 0 | 0 | 653 |
1,338,777 | 2009-08-27T03:57:00.000 | 6 | 1 | 0 | 0 | php,python,memcached,scalability | 1,338,810 | 4 | false | 1 | 0 | If your site performance is fine then there's no reason to add caching. Lots of sites can get by without any cache at all, or by moving to a file-system based cache. It's only the super high traffic sites that need memcached.
What's "crazy" is code architecture (or a lack of architecture) that makes adding caching in latter difficult. | 3 | 2 | 0 | I was just reviewing one of my client's applications which uses some old outdated php framework that doesn't rely on caching at all and is pretty much completely database dependent.
I figure I'll just rewrite it from scratch because it's really outdated and in this rewrite I want to implement a caching system. It'd be nice if I could get a few pointers if anyone has done this prior.
Rewrite will be done in either PHP or Python
Would be nice if I could profile before and after this implementation
I have my own server so I'm not restricted by shared hosting | Is it crazy to not rely on a caching system like memcached nowadays ( for dynamic sites )? | 1 | 0 | 0 | 385 |
1,338,777 | 2009-08-27T03:57:00.000 | 10 | 1 | 0 | 0 | php,python,memcached,scalability | 1,338,828 | 4 | true | 1 | 0 | Caching, when it works right (==high hit rate), is one of the few general-purpose techniques that can really help with latency -- the harder part of the problems generically described as "performance". You can enhance QPS (queries per second) measures of performance just by throwing more hardware at the problem -- but latency doesn't work that way (i.e., it doesn't take just one month to make a baby if you set nine mothers to work on it;-).
However, the main resource used by caching is typically memory (RAM or disk as it may be). As you mention in a comment that the only performance problem you observe is memory usage, caching wouldn't help: it would just earmark some portion of memory to use for caching purposes, leaving even less available as a "general fund". As a resident of California I'm witnessing first-hand what happens when too many resources are earmarked, and I couldn't recommend such a course of action with a clear conscience!-) | 3 | 2 | 0 | I was just reviewing one of my client's applications which uses some old outdated php framework that doesn't rely on caching at all and is pretty much completely database dependent.
I figure I'll just rewrite it from scratch because it's really outdated and in this rewrite I want to implement a caching system. It'd be nice if I could get a few pointers if anyone has done this prior.
Rewrite will be done in either PHP or Python
Would be nice if I could profile before and after this implementation
I have my own server so I'm not restricted by shared hosting | Is it crazy to not rely on a caching system like memcached nowadays ( for dynamic sites )? | 1.2 | 0 | 0 | 385 |
1,338,777 | 2009-08-27T03:57:00.000 | 0 | 1 | 0 | 0 | php,python,memcached,scalability | 1,338,864 | 4 | false | 1 | 0 | Depending on the specific nature of the codebase and traffic patterns, you might not even need to re-write the whole site. Horribly inefficient code is not such a big deal if it can be bypassed via cache for 99.9% of page requests.
When choosing PHP or Python, make sure you figure out where you're going to host the site (or if you even get to make that call). Many of my clients are already set up on a webserver and Python is not an option. You should also make sure any databases/external programs you want to interface with are well-supported in PHP or Python. | 3 | 2 | 0 | I was just reviewing one of my client's applications which uses some old outdated php framework that doesn't rely on caching at all and is pretty much completely database dependent.
I figure I'll just rewrite it from scratch because it's really outdated and in this rewrite I want to implement a caching system. It'd be nice if I could get a few pointers if anyone has done this prior.
Rewrite will be done in either PHP or Python
Would be nice if I could profile before and after this implementation
I have my own server so I'm not restricted by shared hosting | Is it crazy to not rely on a caching system like memcached nowadays ( for dynamic sites )? | 0 | 0 | 0 | 385 |
1,340,887 | 2009-08-27T12:51:00.000 | 4 | 0 | 0 | 1 | python,google-app-engine,mod-wsgi | 1,342,175 | 1 | true | 1 | 0 | Webapp is a fine choice for a simple web framework but there are plenty of other simple python web frameworks that have instructions for setting them up in your use case (cherrypy, web.py, etc). Since google developed webapp for gae I don't believe they published instructions for setting it up behind apache.
BigTable is proprietary to Google so you will not be able to run it locally. If you are looking for something with similar performance characteristics I'd look into the schemaless 'document-oriented' databases. | 1 | 1 | 0 | I have been playing with Google App engine a lot lately, from home on personal projects, and I have been really enjoying it. I've converted a few of my coworkers over and we are interested in using GAE for a few of our projects at work.
Our work has to be hosted locally on our own servers. I've done some searching around and I really can't find any information on using the WebApp framework and BigTable locally.
Any information you could provide on setting up a GAE-ish environment on a local Windows server would be much appreciated. I know GAE is much more than just the framework and BigTable - the scalability, propogation of your application/data across many servers are all features we don't need. We just want to get the webapp framework and BigTable up and running through mod_wsgi on Apache. | Locally Hosted Google App Engine (WebApp Framework / BigTable) | 1.2 | 0 | 0 | 911 |
1,340,892 | 2009-08-27T12:52:00.000 | 13 | 1 | 1 | 0 | python,unit-testing,testing | 1,341,053 | 7 | true | 0 | 0 | Where you have to if using a library specifying where unittests should live,
in the modules themselves for small projects, or
in a tests/ subdirectory in your package for larger projects.
It's a matter of what works best for the project you're creating.
Sometimes the libraries you're using determine where tests should go, as is the case with Django (where you put your tests in models.py, tests.py or a tests/ subdirectory in your apps).
If there are no existing constraints, it's a matter of personal preference. For a small set of modules, it may be more convenient to put the unittests in the files you're creating.
For anything more than a few modules I create the tests separately in a tests/ directory in the package. Having testing code mixed with the implementation adds unnecessary noise for anyone reading the code. | 6 | 19 | 0 | Is there a consensus about the best place to put Python unittests?
Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules?
Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?).
I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified.
I'd be interested to know peoples' thoughts on the best way of organizing unittests. | Should Python unittests be in a separate module? | 1.2 | 0 | 0 | 5,565 |
1,340,892 | 2009-08-27T12:52:00.000 | 4 | 1 | 1 | 0 | python,unit-testing,testing | 1,340,964 | 7 | false | 0 | 0 | I generally keep test code in a separate module, and ship the module/package and tests in a single distribution. If the user installs using setup.py they can run the tests from the test directory to ensure that everything works in their environment, but only the module's code ends up under Lib/site-packages. | 6 | 19 | 0 | Is there a consensus about the best place to put Python unittests?
Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules?
Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?).
I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified.
I'd be interested to know peoples' thoughts on the best way of organizing unittests. | Should Python unittests be in a separate module? | 0.113791 | 0 | 0 | 5,565 |
1,340,892 | 2009-08-27T12:52:00.000 | 0 | 1 | 1 | 0 | python,unit-testing,testing | 1,341,011 | 7 | false | 0 | 0 | if __name__ == '__main__', etc. is great for small tests. | 6 | 19 | 0 | Is there a consensus about the best place to put Python unittests?
Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules?
Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?).
I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified.
I'd be interested to know peoples' thoughts on the best way of organizing unittests. | Should Python unittests be in a separate module? | 0 | 0 | 0 | 5,565 |
1,340,892 | 2009-08-27T12:52:00.000 | 3 | 1 | 1 | 0 | python,unit-testing,testing | 1,341,048 | 7 | false | 0 | 0 | There might be reasons other than testing to use the if __name__ == '__main__' check. Keeping the tests in other modules leaves that option open to you. Also - if you refactor the implementation of a module and your tests are in another module that was not edited - you KNOW the tests have not been changed when you run them against the refactored code. | 6 | 19 | 0 | Is there a consensus about the best place to put Python unittests?
Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules?
Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?).
I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified.
I'd be interested to know peoples' thoughts on the best way of organizing unittests. | Should Python unittests be in a separate module? | 0.085505 | 0 | 0 | 5,565 |
1,340,892 | 2009-08-27T12:52:00.000 | 1 | 1 | 1 | 0 | python,unit-testing,testing | 1,341,060 | 7 | false | 0 | 0 | I usually have them in a separate folder called most often test/. Personally I am not using the if __name__ == '__main__' check, because I use nosetests and it handles the test detection by itself. | 6 | 19 | 0 | Is there a consensus about the best place to put Python unittests?
Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules?
Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?).
I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified.
I'd be interested to know peoples' thoughts on the best way of organizing unittests. | Should Python unittests be in a separate module? | 0.028564 | 0 | 0 | 5,565 |
1,340,892 | 2009-08-27T12:52:00.000 | 15 | 1 | 1 | 0 | python,unit-testing,testing | 1,341,119 | 7 | false | 0 | 0 | YES, do use a separate module.
It does not really make sense to use the __main__ trick. Just assume that you have several files in your module, and it does not work anymore, because you don't want to run each source file separately when testing your module.
Also, when installing a module, most of the time you don't want to install the tests. Your end-user does not care about tests, only the developers should care.
No, really. Put your tests in tests/, your doc in doc, and have a Makefile around ready for a make test. Any other approaches are just intermediate solutions, only valid for specific tiny modules.
Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules?
Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?).
I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified.
I'd be interested to know peoples' thoughts on the best way of organizing unittests. | Should Python unittests be in a separate module? | 1 | 0 | 0 | 5,565 |
1,342,377 | 2009-08-27T16:57:00.000 | 1 | 1 | 1 | 0 | python,visual-studio,ironpython | 1,343,191 | 6 | false | 0 | 0 | Firstly, there seems to be a question as to whether python (or various implementations) are as 'powerful' as C#. I'm not quite sure what to take powerful to mean, but in my experience of both languages it will be somewhat easier and faster to write a given piece of code in python than in C#. C# is faster than cpython (although if speed is desired, the psyco python module is well worth a look).
Also I would object to your dismissal of Mono. Mono is great on Linux if you write an application for it from scratch. It is not really meant to be a compatibility layer between Windows and Linux (see Wine!), and if you treat it as such you will only be disappointed.
It just seems to me that you are taking the wrong approach. If you want to convince him that not everything Microsoft is evil, and he is adamant about not learning C#, get him to learn Python (or Ruby, or LUA or whatever) until he is competent, and then introduce him to C# and get him to make his own judgement - I'm fairly in favour of open source, and am far from a rabid Microsoft supporter, but I tried C#, and found I quite liked it.
I think that getting him to use python and visual studio in a suboptimal way will turn him against both of them - far from your desired goal! | 2 | 2 | 0 | I have a friend who I am trying to teach how to program. He comes from a very basic PHP background, and for some reason is ANTI C#, I guess because some of his PHP circles condemn anything that comes from Microsoft.
Anyways - I've told him its possible to use either Ruby or Python with the VS2008 IDE, because I've read somewhere that this is possible.
But I was wondering. Is it really that practical, can you do EVERYTHING with Python in VS2008 that you can do with C# or VB.net.
I guess without starting a debate... I want to know if you're a developer using VS IDE with a language other than VB.net or C#, then please leave an answer with your experience.
If you are (like me) either a VB.net or C# developer, please don't post speculative or subjective answers. This is a serious question, and I don't want it being closed as subjective. ...
Thank you very much.
update
So far we've established that IronPython is the right tool for the job.
Now how practical is it really?
Mono for example runs C# code in Linux, but... ever tried to use it? Not practical at all, lots of code refactoring needs to take place, no support for .net v3.5, etc... | Can you really use the Visual Studio 2008 IDE to code in Python? | 0.033321 | 0 | 0 | 505 |
1,342,377 | 2009-08-27T16:57:00.000 | 2 | 1 | 1 | 0 | python,visual-studio,ironpython | 1,342,463 | 6 | false | 0 | 0 | I find it odd that your friend is against C# but is ok with Visual Studio. There is, after all, an open source development environment for .NET called SharpDevelop. The C# language is a standard. .NET is free (as in beer) and there is an open source implementation of that platform called Mono. The only "un-free" thing is Visual Studio (though there are "Express" versions which are free as in beer). | 2 | 2 | 0 | I have a friend who I am trying to teach how to program. He comes from a very basic PHP background, and for some reason is ANTI C#, I guess because some of his PHP circles condemn anything that comes from Microsoft.
Anyways - I've told him its possible to use either Ruby or Python with the VS2008 IDE, because I've read somewhere that this is possible.
But I was wondering. Is it really that practical, can you do EVERYTHING with Python in VS2008 that you can do with C# or VB.net.
I guess without starting a debate... I want to know if you're a developer using VS IDE with a language other than VB.net or C#, then please leave an answer with your experience.
If you are (like me) either a VB.net or C# developer, please don't post speculative or subjective answers. This is a serious question, and I don't want it being closed as subjective. ...
Thank you very much.
update
So far we've established that IronPython is the right tool for the job.
Now how practical is it really?
Mono for example runs C# code in Linux, but... ever tried to use it? Not practical at all, lots of code refactoring needs to take place, no support for .net v3.5, etc... | Can you really use the Visual Studio 2008 IDE to code in Python? | 0.066568 | 0 | 0 | 505 |
1,343,679 | 2009-08-27T20:51:00.000 | 0 | 0 | 0 | 0 | asp.net,ironpython | 1,350,056 | 3 | false | 1 | 0 | I don't believe that ASP.NET was ever ready for prime time. The framework is contrived and an awful fit for designing web applications. It was made for VB6 programmers that only know how to drag controls onto a design surface.
Most decent (and pretty much all bad) applications written on ASP.NET don't use it as it was designed, and if that's the case, then what's the point? | 2 | 7 | 0 | Has anyone actually built and deployed a website with IronPython and ASP.NET? What were your experiences and is the combination ready for prime-time?
I asked this question just over a year ago. And the consensus seemed to be "not really".
What's the status now? | IronPython and ASP.NET: ready for prime time? | 0 | 0 | 0 | 284 |
1,343,679 | 2009-08-27T20:51:00.000 | 1 | 0 | 0 | 0 | asp.net,ironpython | 1,344,833 | 3 | false | 1 | 0 | I believe that if you want to do anthing useful/em> with .NET + IronPython, you need better support for the dynamicy of Microsoft's CLR environment, and you'll need VS2010 for that.
You may have better luck just building a straight-up Python app. Why bother using ASP.NET? Are you integrating with another codebase? | 2 | 7 | 0 | Has anyone actually built and deployed a website with IronPython and ASP.NET? What were your experiences and is the combination ready for prime-time?
I asked this question just over a year ago. And the consensus seemed to be "not really".
What's the status now? | IronPython and ASP.NET: ready for prime time? | 0.066568 | 0 | 0 | 284 |
1,346,297 | 2009-08-28T10:57:00.000 | 0 | 0 | 1 | 0 | python,macos,py2app | 1,349,783 | 3 | false | 0 | 0 | You probably need to give it your full PYTHONPATH.
Depends on your OS. Here's how to find out where a module is actually being loaded from:
import os  # or any other standard-library module
os.__file__  # the path the os module was loaded from; compare this between the zipped and unzipped app | 2 | 4 | 0 | I've created an app using py2app, which works fine, but if I zip/unzip it, the newly unzipped version can't access standard python modules like traceback, or os. The manpage for zip claims that it preserves resource forks, and I've seen other applications packaged this way (I need to be able to put this in a .zip file). How do I fix this? | Py2App Can't find standard modules | 0 | 0 | 0 | 6,974
1,346,297 | 2009-08-28T10:57:00.000 | 0 | 0 | 1 | 0 | python,macos,py2app | 1,346,359 | 3 | false | 0 | 0 | use zip -y ... to create the file whilst preserving symlinks. | 2 | 4 | 0 | I've created an app using py2app, which works fine, but if I zip/unzip it, the newly unzipped version can't access standard python modules like traceback, or os. The manpage for zip claims that it preserves resource forks, and I've seen other applications packaged this way (I need to be able to put this in a .zip file). How do I fix this? | Py2App Can't find standard modules | 0 | 0 | 0 | 6,974 |
1,346,723 | 2009-08-28T12:30:00.000 | 0 | 0 | 0 | 0 | java,python,web-applications,javafx,turbogears | 1,371,752 | 4 | false | 1 | 0 | Yes, it is possible. If you use JavaFX you can create multiple deployments. For example, NetBeans 6.7.1 with JavaFX creates several possible deployments from one project. Then you can publish this application on the web, on DVD, etc. You will need to slightly customize the standalone deployment for DVD, e.g. to start it as autorun if necessary. JavaFX is a good choice. | 4 | 1 | 0 | Is it possible to develop an application easily available on the web that also can be distributed on DVD (installer or started from the dvd)?
For the moment, we use static html (frameset!) pages (generated by xml files), with one difference: pdf's are only on the DVD version, the web version only shows a preview of these files.
Can this be done with JavaFX, OpenLaszlo or are there better options?
(for example: turbogears, and using tg2exe for DVD version) | JavaFX or RIA desktop app (on dvd) also available on the web? | 0 | 0 | 0 | 316 |
1,346,723 | 2009-08-28T12:30:00.000 | 0 | 0 | 0 | 0 | java,python,web-applications,javafx,turbogears | 1,357,063 | 4 | false | 1 | 0 | Yes JavaFX or Flash applications can be used to develop applications that run in different contexts.
However, it's not clear from your question why these would be preferable over your current solution.
If the information you're sharing is primarily text and you're using DVD because your audience is primarily located in areas with bad Internet connectivity, then your current approach probably makes more sense. JavaFX or Flash might be more fun for developers to write, but that doesn't necessarily serve your audience.
I would suggest that if you are shipping DVDs and are looking for ways to make them more useful than a plain PDF delivery system, you add video to the DVDs. And then maybe it would make more sense to use JavaFX or Flash to drive the UI. | 4 | 1 | 0 | Is it possible to develop an application easily available on the web that also can be distributed on DVD (installer or started from the dvd)?
For the moment, we use static html (frameset!) pages (generated by xml files), with one difference: pdf's are only on the DVD version, the web version only shows a preview of these files.
Can this be done with JavaFX, OpenLaszlo or are there better options?
(for example: turbogears, and using tg2exe for DVD version) | JavaFX or RIA desktop app (on dvd) also available on the web? | 0 | 0 | 0 | 316 |
1,346,723 | 2009-08-28T12:30:00.000 | 0 | 0 | 0 | 0 | java,python,web-applications,javafx,turbogears | 1,508,000 | 4 | false | 1 | 0 | This seems like a job for Flex; however, I know too little about it to give a better answer. | 4 | 1 | 0 | Is it possible to develop an application easily available on the web that also can be distributed on DVD (installer or started from the dvd)?
For the moment, we use static html (frameset!) pages (generated by xml files), with one difference: pdf's are only on the DVD version, the web version only shows a preview of these files.
Can this be done with JavaFX, OpenLaszlo or are there better options?
(for example: turbogears, and using tg2exe for DVD version) | JavaFX or RIA desktop app (on dvd) also available on the web? | 0 | 0 | 0 | 316 |
1,346,723 | 2009-08-28T12:30:00.000 | 0 | 0 | 0 | 0 | java,python,web-applications,javafx,turbogears | 1,347,338 | 4 | false | 1 | 0 | I think if you design it correctly to begin with, a JavaFX app can be interchanged between web-app and desktop-app relatively easily. However, I've only done this with very simple apps (specifically, Tic-Tac-Toe!), so I'm sure there might exist some caveats that I am unaware of (thus the "design it correctly" catch-all). ;)
Why don't you just provide the PDFs in your current web version, rather than redeveloping everything? I'm not aware of any browsers that don't support in-browser PDF reading anymore. | 4 | 1 | 0 | Is it possible to develop an application easily available on the web that also can be distributed on DVD (installer or started from the dvd)?
For the moment, we use static html (frameset!) pages (generated by xml files), with one difference: pdf's are only on the DVD version, the web version only shows a preview of these files.
Can this be done with JavaFX, OpenLaszlo or are there better options?
(for example: turbogears, and using tg2exe for DVD version) | JavaFX or RIA desktop app (on dvd) also available on the web? | 0 | 0 | 0 | 316 |
1,346,965 | 2009-08-28T13:27:00.000 | 2 | 0 | 1 | 0 | visual-studio,memory,performance,python-idle | 1,346,990 | 3 | true | 0 | 0 | I don't think this is possible. OTOH, you could put your computer in suspend-to-disk mode. That would pretty much freeze its state as it is when you leave (that is: VS in RAM) and restore it to the same when you start working. As an additional bonus, you would help to conserve energy and thus might save the earth. | 2 | 0 | 0 | What's up people.
Something's been bothering me for a while now... and I was wondering if any of you might know of a workaround for this.
The C# solution im working on is a huge solution that contains about 20 projects and almost the same amount of unit test projects. Each projects contains hundreds of files. So opening and closing the solution takes a while... but once it's opened, everything is fine.
But, if I leave my computer up for the night (with my solution still opened in VS) and come back the next morning, everything I'll do in VS will be very slow for the next half hour or so.
I know why this happens... it's because Windows seems to remove idle processes from memory (RAM). And when I do something in VS, it takes the data from the pagefile and puts it back into memory, which slows down every single operation I do until the process' memory has been fully restored in RAM.
So my question is, is there a way to tell Windows that VS is a high priority process/application and to leave that process' memory in RAM?
Thanks in advance,
-Oli | Visual Studio 2008 crawling after long idle time | 1.2 | 0 | 0 | 191 |
1,346,965 | 2009-08-28T13:27:00.000 | 0 | 0 | 1 | 0 | visual-studio,memory,performance,python-idle | 1,347,501 | 3 | false | 0 | 0 | AFAIK, changing the process priority won't solve the problem, as the bottleneck seems to be I/O rather than CPU time. If the problem hurts your productivity, it would be well worth it to just buy a few more Gs of RAM (how much depends on your OS and budget). If you can get about 3-4GB of RAM, you can even eliminate the swap file (or close to eliminate it). This will prevent VS from sinking when idle.
Another option would be to create a tool that will walk VS's heap, forcing it into the main memory. This can be done by writing an add-in or by code injection. Have it run before you get to work, and you'll have VS up and about once you get to it. It will, however, require some work, and you might get more than you actually need in memory (some of VS's memory is in the swap file even when you work as usual, as with every other process). | 2 | 0 | 0 | What's up people.
Something's been bothering me for a while now... and I was wondering if any of you might know of a workaround for this.
The C# solution im working on is a huge solution that contains about 20 projects and almost the same amount of unit test projects. Each projects contains hundreds of files. So opening and closing the solution takes a while... but once it's opened, everything is fine.
But, if I leave my computer up for the night (with my solution still opened in VS) and come back the next morning, everything I'll do in VS will be very slow for the next half hour or so.
I know why this happens... it's because Windows seems to remove idle processes from memory (RAM). And when I do something in VS, it takes the data from the pagefile and puts it back into memory, which slows down every single operation I do until the process' memory has been fully restored in RAM.
So my question is, is there a way to tell Windows that VS is a high priority process/application and to leave that process' memory in RAM?
Thanks in advance,
-Oli | Visual Studio 2008 crawling after long idle time | 0 | 0 | 0 | 191 |
1,347,168 | 2009-08-28T13:56:00.000 | 3 | 0 | 1 | 0 | python,python-2.6 | 1,347,750 | 4 | false | 0 | 0 | The main issue will come with any C-coded extensions you may be using: depending on your system, but especially on Windows, such extensions, compiled for 2.5, are likely to not work at all (or at least not quietly and reliably) with 2.6. That's not particularly different from, e.g., migrating from 2.4 to 2.5 in the past.
The simplest solution (IMHO) is to get the sources for any such extensions and reinstall them. On most platforms, and for most extensions, python setup.py install (possibly with a sudo or logged in as administrator, depending on your installation) will work -- you may need to download and install proper "developer" packages, again depending on what system exactly you're using and what you have already installed (for example, on Mac OS X you need to install XCode -- or at least the gcc subset thereof, but it's simplest to install it all -- which in turn requires you to sign up for free at Apple Developer Connection and download the large XCode package).
I'm not sure how hassle-free this approach is on Windows at this time -- i.e., whether you can use free-as-in-beer compilers such as mingw or Microsoft's "express" edition of VS, or have to shell out $$ to MS to get the right compiler. However, most developers of third party extensions do go out on their way to supply ready Windows binaries, exactly because having the users recompile is (or at least used to be) a hassle on Windows, and 2.6 is already widely supported by third-party extension maintainers (since after all it IS just about a simple recompile for them, too;-), so you may be in luck and find all the precompiled binaries you need already available for the extensions you use. | 2 | 2 | 0 | My stuff is developed and running on Python 2.5.2
I want to move some code to 3.x, but that isn't feasible because so many of the external packages I use are not there yet. (Like numpy for instance).
So, I'll do the intermediate step and go to 2.6.2.
My question: If an external module runs on 2.5.2, but doesn't explicitly state that it works with 2.6.x, can I assume it'll be fine? Or not? | Moving to Python 2.6.x | 0.148885 | 0 | 0 | 302 |
1,347,168 | 2009-08-28T13:56:00.000 | 2 | 0 | 1 | 0 | python,python-2.6 | 1,347,199 | 4 | false | 0 | 0 | You can't assume that. However, you should be able to easily test if it works or not.
Also, do not bother trying to move to 3.x for another year or two. 2.6 has many of 3.0's features back-ported to it already, so the transition won't be that bad, once you do make it. | 2 | 2 | 0 | My stuff is developed and running on Python 2.5.2
I want to move some code to 3.x, but that isn't feasible because so many of the external packages I use are not there yet. (Like numpy for instance).
So, I'll do the intermediate step and go to 2.6.2.
My question: If an external module runs on 2.5.2, but doesn't explicitly state that it works with 2.6.x, can I assume it'll be fine? Or not? | Moving to Python 2.6.x | 0.099668 | 0 | 0 | 302 |
1,347,376 | 2009-08-28T14:30:00.000 | 12 | 0 | 1 | 1 | python,macos,osx-snow-leopard | 1,350,316 | 5 | false | 0 | 0 | It ships with both python 2.6.1 and 2.5.4.
$ python2.5
Python 2.5.4 (r254:67916, Jul 7 2009, 23:51:24)
$ python
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) | 3 | 10 | 0 | I would appreciate it if somebody running the final version of Snow Leopard could post what version of Python is included with the OS (on a Terminal, just type "python --version")
Thanks! | Python version shipping with Mac OS X Snow Leopard? | 1 | 0 | 0 | 10,496 |
1,347,376 | 2009-08-28T14:30:00.000 | 1 | 0 | 1 | 1 | python,macos,osx-snow-leopard | 1,352,207 | 5 | false | 0 | 0 | You can get an installer for 2.6.2 from python.org, no reason to go without. | 3 | 10 | 0 | I would appreciate it if somebody running the final version of Snow Leopard could post what version of Python is included with the OS (on a Terminal, just type "python --version")
Thanks! | Python version shipping with Mac OS X Snow Leopard? | 0.039979 | 0 | 0 | 10,496 |
1,347,376 | 2009-08-28T14:30:00.000 | 3 | 0 | 1 | 1 | python,macos,osx-snow-leopard | 1,347,397 | 5 | false | 0 | 0 | Python 2.6.1
(according to the web)
Really good to know :) | 3 | 10 | 0 | I would appreciate it if somebody running the final version of Snow Leopard could post what version of Python is included with the OS (on a Terminal, just type "python --version")
Thanks! | Python version shipping with Mac OS X Snow Leopard? | 0.119427 | 0 | 0 | 10,496 |
1,348,026 | 2009-08-28T16:14:00.000 | 0 | 0 | 1 | 1 | python,multithreading,file | 1,348,441 | 3 | false | 0 | 0 | If you have an id associated with each thread / process that tries to create the file, you could put that id in the suffix somewhere, thereby guaranteeing that no two processes can use the same file name.
This eliminates the race condition between the processes. (An atomic alternative is sketched after this record.) | 1 | 25 | 0 | Currently I have a loop that tries to find an unused filename by adding suffixes to a filename string. Once it fails to find a file, it uses the name that failed to open a new file with that name. The problem is that this code is used in a website and there could be multiple attempts to do the same thing at the same time, so a race condition exists.
How can I keep python from overwriting an existing file, if one is created between the time of the check and the time of the open in the other thread.
I can minimize the chance by randomizing the suffixes, but the chance is already minimized based on parts of the pathname. I want to eliminate that chance with a function that can be told, create this file ONLY if it doesn't exist.
I can use win32 functions to do this, but I want this to work cross platform because it will be hosted on linux in the end. | How do I create a file in python without overwriting an existing file | 0 | 0 | 0 | 12,132 |
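The race can be closed portably with the flags the OS already provides: os.open with os.O_CREAT | os.O_EXCL asks the kernel to create the file only if it does not already exist, atomically, on both Windows and Linux. A hedged sketch follows; the suffix scheme is illustrative, and on Python 3.3+ the simpler open(name, 'x') mode does the same thing.

import os
import errno


def create_unique_file(base, max_tries=1000):
    """Atomically create and return a writable file object with an unused name."""
    for i in range(max_tries):
        name = '%s.%d' % (base, i)  # illustrative suffix scheme
        try:
            # O_CREAT | O_EXCL fails with EEXIST if the file already exists,
            # so two threads/processes can never both "win" the same name.
            fd = os.open(name, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except OSError as e:
            if e.errno == errno.EEXIST:
                continue  # someone else grabbed this name; try the next suffix
            raise
        return os.fdopen(fd, 'w')
    raise RuntimeError('could not find a free filename')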
1,348,710 | 2009-08-28T18:46:00.000 | 6 | 0 | 0 | 0 | python,django,django-admin | 1,349,413 | 1 | true | 1 | 0 | To add a section not associated with an app, you'll have to override the admin index template. Create an admin/ directory in your project templates directory, and copy the file django/contrib/admin/templates/admin/index.html into it. Then you can add whatever markup you want to this file. The only downside (unfortunately there's no good way around it at the moment) is that if you upgrade Django, you'll have to be on the lookout for any changes to that index.html file, and copy those changes over into your version as well. | 1 | 8 | 0 | In the Django admin each app you have registered with the admin gets its own section. I want to add a custom section for reporting that isn't associated with any app. How do I do that? | How do you add a custom section to the Django admin home page? | 1.2 | 0 | 0 | 5,525 |
1,350,466 | 2009-08-29T04:39:00.000 | 1 | 1 | 1 | 0 | python,module,sandbox | 1,350,473 | 7 | false | 0 | 0 | Unfortunately, I think that what you're trying to do is fundamentally impossible. If users can execute arbitrary code in your application then they can do whatever they want. Even if you were able to prevent them from importing certain modules there would be nothing stopping them from writing equivalent functionality themselves (from scratch or using some of the modules that are available).
I don't really know the specifics of implementing a sandbox in Python, but I would imagine it's something that needs to be done at the interpreter level and is far from easy! | 2 | 26 | 0 | I'm writing an application where users can enter a python script and execute it in a sandbox. I need a way to prevent the exec'ed code from importing certain modules, so malicious code won't be as much of a problem. Is there a way to do this in Python? | Preventing Python code from importing certain modules? | 0.028564 | 0 | 0 | 21,579 |
1,350,466 | 2009-08-29T04:39:00.000 | -3 | 1 | 1 | 0 | python,module,sandbox | 1,350,472 | 7 | false | 0 | 0 | You can overload the import mechanism. We used this for a licensing system for plugins; you can easily keep a whitelist / blacklist of module names (a sketch of this approach follows below). | 2 | 26 | 0 | I'm writing an application where users can enter a python script and execute it in a sandbox. I need a way to prevent the exec'ed code from importing certain modules, so malicious code won't be as much of a problem. Is there a way to do this in Python? | Preventing Python code from importing certain modules? | -0.085505 | 0 | 0 | 21,579
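A minimal sketch of that import-hook idea, with the caveat from the other answer that this is not a real security boundary (code running in the same interpreter can simply restore the original hook). The blacklisted module names are just examples; on Python 3 the module to patch is builtins rather than __builtin__.

import __builtin__

BLACKLIST = set(['socket', 'subprocess', 'ctypes'])  # example module names

_real_import = __builtin__.__import__


def _guarded_import(name, *args, **kwargs):
    # Refuse blacklisted top-level modules; everything else passes through unchanged.
    if name.split('.')[0] in BLACKLIST:
        raise ImportError('import of %r is not allowed' % name)
    return _real_import(name, *args, **kwargs)


__builtin__.__import__ = _guarded_import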
1,350,887 | 2009-08-29T08:52:00.000 | 6 | 0 | 1 | 0 | python | 33,278,074 | 2 | false | 0 | 0 | If you want default imports when using the Python shell, you can also set the PYTHONSTARTUP environment variable to point to a Python file that will be executed whenever you start the shell. Put all your default imports in this file (an example startup file follows below). | 1 | 4 | 0 | Is there a place where I can put default imports for all my modules? | Python: Is there a place when I can put default imports for all my modules? | 1 | 0 | 0 | 5,888
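For example (the file path and the chosen imports are arbitrary), put something like the following in a file and set PYTHONSTARTUP to its path; note that this only affects the interactive shell, not scripts or imported modules.

# ~/.pythonstartup.py  -- then e.g.: export PYTHONSTARTUP=~/.pythonstartup.py
# Everything imported here is available in every new interactive session.
import os
import sys
import re
from pprint import pprint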
1,351,227 | 2009-08-29T11:39:00.000 | 9 | 1 | 1 | 0 | c#,asp.net,python,django,programming-languages | 1,351,670 | 5 | true | 0 | 0 | " I understand that Python is dynamically typed, whereas C# is strongly-typed. "
This is weirdly wrong.
Python is strongly typed. A list or integer or dictionary is always of the given type. The object's type cannot be changed.
Python variables are not strongly typed. Indeed, Python variables are just labels on objects. Variables are not declared; hence the description of Python as "dynamic".
C# is statically typed. The variables are declared to the compiler to be of a specific type. The code is generated based on certain knowledge about the variables use at run-time.
Python is "interpreted" -- things are done at run-time -- little is assumed. [Technically, the Python source is compiled into byte code and the byte code is interpreted. Some folks think this is an important distinction.]
C# is compiled -- the compiler generates code based on the declared assumptions.
What conceptual obstacles should I watch out for when attempting to learn Python?
None. If you insist that Python should be like something else; or you insist that something else is more intuitive then you've polluted your own thinking with inappropriate concepts.
No programming language has obstacles. We bring our own obstacles when we impose things on the language.
Are there concepts for which no analog exists in Python?
Since Python has object-oriented, procedural and functional elements, you'd be hard-pressed to find something missing from Python.
How important is object-oriented analysis?
OO analysis helps all phases of software development -- even if you aren't doing an OO implementation. This is unrelated to Python and should be a separate question.
I need to get up to speed in about 2 weeks time (ridiculous maybe?)
Perhaps not. If you start with a fresh, open mind, then Python can be learned in a week or so of diligent work.
If, on the other hand, you compare and contrast Python with C#, it can take you years to get past your C# bias and learn Python. Don't translate C# to Python. Don't translate Python to C#.
Don't go to the well with a full bucket. | 3 | 3 | 0 | I'm new to Python, coming from a C# background and I'm trying to get up to speed. I understand that Python is dynamically typed, whereas C# is strongly-typed. -> see comments. What conceptual obstacles should I watch out for when attempting to learn Python? Are there concepts for which no analog exists in Python? How important is object-oriented analysis?
I believe answers to these and any other questions you might be able to think of would speed up my understanding Python besides the Nike mentality ("just do it")?
A little more context: My company is moving from ASP.NET C# Web Forms to Django. I've gone through the Django tutorial and it was truly great. I need to get up to speed in about 2 weeks time (ridiculous maybe? LOL)
Thank you all for your time and efforts to respond to a realllly broad question(s). | What are some of the core conceptual differences between C# and Python? | 1.2 | 0 | 0 | 3,579 |
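A quick interactive session illustrating the strong-but-dynamic point made in the answer above (Python 2 shown, since that is what the question targets):

>>> 1 + "1"                  # strong typing: no silent coercion between types
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for +: 'int' and 'str'
>>> x = 1                    # dynamic typing: the name x is just a label...
>>> x = "now a string"       # ...so it can be rebound to an object of another type
>>> type(x)
<type 'str'>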
1,351,227 | 2009-08-29T11:39:00.000 | 2 | 1 | 1 | 0 | c#,asp.net,python,django,programming-languages | 1,351,664 | 5 | false | 0 | 0 | You said that Python is dynamically typed and C# is strongly typed, but this isn't true. Strong vs. weak typing and static vs. dynamic typing are orthogonal. Strong typing means str + int doesn't coerce one of the operands, so in this regard both Python and C# are strongly typed (whereas PHP or C is weakly typed). Python is dynamically typed, which means names don't have a defined type at compile time, whereas in C# they do.
I believe answers to these and any other questions you might be able to think of would speed up my understanding Python besides the Nike mentality ("just do it")?
A little more context: My company is moving from ASP.NET C# Web Forms to Django. I've gone through the Django tutorial and it was truly great. I need to get up to speed in about 2 weeks time (ridiculous maybe? LOL)
Thank you all for your time and efforts to respond to a realllly broad question(s). | What are some of the core conceptual differences between C# and Python? | 0.07983 | 0 | 0 | 3,579 |
1,351,227 | 2009-08-29T11:39:00.000 | 1 | 1 | 1 | 0 | c#,asp.net,python,django,programming-languages | 1,351,334 | 5 | false | 0 | 0 | The conceptual differences are important, but mostly in how they result in different attitudes.
The most important of those is "duck typing". I.e., forget what type things are; you don't need to care. You only need to care about what attributes and methods objects have. "If it looks like a duck and walks like a duck, it's a duck". Usually, these attitude changes come naturally after a while.
The biggest conceptual hurdles seem to be:
The significant indenting. But the only ones who hate it are people who have, or are forced to work with, people who change their editor's tab expansion to something other than the default 8.
No compiler, and hence no type testing at the compile stage. Many people coming from statically typed languages believe that the type checking during compilation finds many bugs. It doesn't, in my experience. | 3 | 3 | 0 | I'm new to Python, coming from a C# background and I'm trying to get up to speed. I understand that Python is dynamically typed, whereas C# is strongly-typed. -> see comments. What conceptual obstacles should I watch out for when attempting to learn Python? Are there concepts for which no analog exists in Python? How important is object-oriented analysis?
I believe answers to these and any other questions you might be able to think of would speed up my understanding Python besides the Nike mentality ("just do it")?
A little more context: My company is moving from ASP.NET C# Web Forms to Django. I've gone through the Django tutorial and it was truly great. I need to get up to speed in about 2 weeks time (ridiculous maybe? LOL)
Thank you all for your time and efforts to respond to a realllly broad question(s). | What are some of the core conceptual differences between C# and Python? | 0.039979 | 0 | 0 | 3,579 |
1,351,323 | 2009-08-29T12:24:00.000 | 0 | 0 | 0 | 0 | python,django | 1,351,459 | 2 | false | 1 | 0 | In-memory storage is not persistent, so no.
I think you mean that you only want to write to the database every X new posts or objects. I guess this is for speedup reasons. But since you need to serialize them sooner or later anyway, you don't actually save any time that way. However, you will save time by not flushing the new objects to disk, and most databases already support that.
But you also talk about caching the rendered page, which is read caching. You say you can't cache the finished result there, but you can cache the result of the database query. That means new messages will not show up immediately, but will take a minute or so to appear, which I think most people will see as acceptable.
Update: Not in this case, then. But you should still easily be able to cache the query results and invalidate that cache when new responses are added (see the sketch after this record). That should help.
The thread pages look a little different for each user, so you can't cache the rendered page as whole and caching only some parts of the rendered page is also not an option.
My idea was: I create an object structure of the thread in memory (with every post and other data that is needed to display it). If a new message is posted the structure is updated and every X posts (or every Y minutes, whatever comes first) the new messages are written back to the database. If the app crashes, some posts are lost, but this is definitely okay (for users and admins).
The question: Can I create such a persistent in memory storage without serialization (so no serialize->memcached)? As I understand it, WSGI applications (like Django) run in a continuous process without shutting down between requests, so it should be possible in theory. Is there any API I could use? If not: any point to look?
/edit1: I know that "persistent" usually has a different meaning, but in this case I strictly mean "in between request". | Object store for objects in Django between requests | 0 | 0 | 0 | 5,947 |
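A hedged sketch of the query-result caching with invalidation suggested in the answer above, using Django's low-level cache API. The Post model, field names, cache key format, and 60-second timeout are all illustrative stand-ins for the bulletin-board models in the question.

from django.core.cache import cache
from myboard.models import Post  # hypothetical app/model from the question

CACHE_SECONDS = 60  # illustrative timeout


def get_thread_posts(thread_id):
    key = 'thread-posts-%d' % thread_id
    posts = cache.get(key)
    if posts is None:
        posts = list(Post.objects.filter(thread__id=thread_id).order_by('created'))
        cache.set(key, posts, CACHE_SECONDS)
    return posts


def add_post(thread_id, post):
    post.save()
    cache.delete('thread-posts-%d' % thread_id)  # next page view rebuilds the cached list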
1,352,230 | 2009-08-29T19:56:00.000 | 0 | 0 | 0 | 1 | macos,osx-snow-leopard,python-3.x | 1,649,335 | 4 | false | 0 | 0 | Kenneth Reitz's solution doesn't work for me. In fact, the install works fine but my default PATH still points to /usr/bin/python (v2.6.1). I vaguely recall that we should be modifying our ~/.profile to point to /.../Frameworks and I expected the installer to do this for me (nope).
Anyway, /Library/Frameworks/Python.framework/Versions/3.1/bin exists so we could add it.
But I'm curious why the python bin in there does a crash and burn on me.
No time to resolve this now. Bye. | 1 | 2 | 0 | I've spent some time today playing with getting the source for python 3.1.1 to build on my MacBook Pro using the --enable-framework and --enable-universalsdk options with no success. I will humbly admit that I have no real clue why I can't compile 3.1.1 on Snow Leopard, I did make sure to get the new Xcode version for Snow Leopard, and made sure I also installed the 10.4u SDK. It seems to be choking on the 10.4 SDK during the make stage, and has several error regarding headers for wchar, cursor, and ncursor during the configure stage. I have been able to get a make from a plain configure, and most the test pass, but that just isn't challenging enough. Has anyone else attempted to build python 3.1.1 on a Mac running Snow Leopard | Python 3.1.1 on Mac OS X 10.6 Snow Leopard | 0 | 0 | 0 | 8,496 |
1,352,760 | 2009-08-30T00:58:00.000 | 5 | 0 | 1 | 1 | python,performance,process,background | 1,352,777 | 2 | false | 0 | 0 | If you are using blocking I/O to your devices, then the script won't consume any processor time while waiting for the data (a small sketch follows this record). How much processor you use depends on what sort of computation you are doing with the data. | 1 | 0 | 0 | I'm in the process of writing a python script to act as a "glue" between an application and some external devices. The script itself is quite straightforward and has three distinct processes:
Request data (from a socket connection, via UDP)
Receive response (from a socket connection, via UDP)
Process response and make data available to 3rd party application
However, this will be done repetitively, and for several (+/-200 different) devices. So once it's reached device #200, it would start requesting data from device #001 again. My main concern here is not to bog down the processor whilst executing the script.
UPDATE:
I am using three threads to do the above, one thread for each of the above processes. The request/response is asynchronous as each response contains everything I need to be able to process it (including the sender's details).
Is there any way to allow the script to run in the background and consume as little system resources as possible while doing its thing? This will be running on a windows 2003 machine.
Any advice would be appreciated. | Python script performance as a background process | 0.462117 | 0 | 1 | 738 |
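A small sketch of the blocking request/response pattern described in the accepted answer: while recvfrom() waits, the calling thread sleeps and uses essentially no CPU. Device addresses, the port, the payload, and the timeout are placeholders.

import socket

DEVICES = [('192.168.0.%d' % n, 5000) for n in range(1, 201)]  # placeholder addresses


def poll_device(addr, request=b'STATUS?'):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)  # don't hang forever on a dead device
    try:
        sock.sendto(request, addr)
        data, _ = sock.recvfrom(4096)  # blocks (CPU idle) until a datagram arrives
        return data
    except socket.timeout:
        return None
    finally:
        sock.close()


for addr in DEVICES:
    response = poll_device(addr)
    # hand `response` (or the timeout) to the processing thread/queue here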
1,352,922 | 2009-08-30T02:30:00.000 | 5 | 1 | 1 | 1 | python,bash | 1,352,927 | 5 | false | 0 | 0 | It finds 'python' also in /usr/local/bin, ~/bin, /opt/bin, ... or wherever it may hide. | 3 | 71 | 0 | Anyone know this? I've never been able to find an answer. | Why is '#!/usr/bin/env python' supposedly more correct than just '#!/usr/bin/python'? | 0.197375 | 0 | 0 | 38,302 |
1,352,922 | 2009-08-30T02:30:00.000 | 10 | 1 | 1 | 1 | python,bash | 1,352,941 | 5 | false | 0 | 0 | it finds the python executable in your environment and uses that. it's more portable because python may not always be in /usr/bin/python. env is always located in /usr/bin. | 3 | 71 | 0 | Anyone know this? I've never been able to find an answer. | Why is '#!/usr/bin/env python' supposedly more correct than just '#!/usr/bin/python'? | 1 | 0 | 0 | 38,302 |
1,352,922 | 2009-08-30T02:30:00.000 | 67 | 1 | 1 | 1 | python,bash | 1,352,938 | 5 | true | 0 | 0 | If you're prone to installing python in various and interesting places on your PATH (as in $PATH in typical Unix shells, %PATH on typical Windows ones), using /usr/bin/env will accommodate your whim (well, in Unix-like environments at least) while going directly to /usr/bin/python won't. But losing control of what version of Python your scripts run under is no unalloyed bargain... if you look at my code you're more likely to see it start with, e.g., #!/usr/local/bin/python2.5 rather than with an open and accepting #!/usr/bin/env python -- assuming the script is important I like to ensure it's run with the specific version I have tested and developed it with, NOT a semi-random one;-). | 3 | 71 | 0 | Anyone know this? I've never been able to find an answer. | Why is '#!/usr/bin/env python' supposedly more correct than just '#!/usr/bin/python'? | 1.2 | 0 | 0 | 38,302
1,353,128 | 2009-08-30T04:55:00.000 | 0 | 0 | 0 | 0 | python,django,encryption,django-models | 23,873,516 | 6 | false | 1 | 0 | Some other issues to consider are that the web application will then not be able to sort or easily query on the encrypted fields. It would be helpful to know what administrative functions the client wants you to have. Another approach would be to have a separate app / access channel that does not show the critical data but still allows you to perform your admin functions only. | 4 | 7 | 0 | A client wants to ensure that I cannot read sensitive data from their site, which will still be administered by me. In practice, this means that I'll have database access, but it can't be possible for me to read the contents of certain Model Fields. Is there any way to make the data inaccessible to me, but still decrypted by the server to be browsed by the client? | Encrypted Django Model Fields | 0 | 0 | 0 | 5,260 |
1,353,128 | 2009-08-30T04:55:00.000 | 5 | 0 | 0 | 0 | python,django,encryption,django-models | 9,006,291 | 6 | false | 1 | 0 | This is possible with public key encryption. I have done something similar before in PHP but the idea is the same for a Django app:
All data on this website was stored encrypted using a private key held by the system software. The corresponding public key to decrypt the data was held by the client in a text file.
When the client wanted to access their data, they pasted the public key into an authorisation form (holding the key in the session) which unlocked the data.
When done, they deauthorised their session.
This protected the information against unauthorised access to the web app (so it is safe against weak usernames/passwords) and also from leaks at the database level.
This is still not completely secure: if you have root access to the machine you can capture the key as it is uploaded, or inspect the session information. For that the cure could be to run the reading software on the client's machine and access the database through an API.
I realise this is an old question but I thought I'd clarify that it is indeed possible. | 4 | 7 | 0 | A client wants to ensure that I cannot read sensitive data from their site, which will still be administered by me. In practice, this means that I'll have database access, but it can't be possible for me to read the contents of certain Model Fields. Is there any way to make the data inaccessible to me, but still decrypted by the server to be browsed by the client? | Encrypted Django Model Fields | 0.16514 | 0 | 0 | 5,260 |
1,353,128 | 2009-08-30T04:55:00.000 | 0 | 0 | 0 | 0 | python,django,encryption,django-models | 1,360,064 | 6 | false | 1 | 0 | You and your client could agree on the values being obscured. A simple XOR operation or something similar will make them unreadable in the admin, and they can be decoded just when they are needed on the site.
This way you can safely administer the site without "accidentally" reading something.
Make sure your client understands that it is technically possible for you to get the actual contents but that it would require active effort. | 4 | 7 | 0 | A client wants to ensure that I cannot read sensitive data from their site, which will still be administered by me. In practice, this means that I'll have database access, but it can't be possible for me to read the contents of certain Model Fields. Is there any way to make the data inaccessible to me, but still decrypted by the server to be browsed by the client? | Encrypted Django Model Fields | 0 | 0 | 0 | 5,260 |
1,353,128 | 2009-08-30T04:55:00.000 | 4 | 0 | 0 | 0 | python,django,encryption,django-models | 1,353,174 | 6 | true | 1 | 0 | No, it's not possible to have data that is simultaneously in a form you can't decrypt and in a form the server can decrypt to show to the client. The best you can do is reversible encryption of the content, so that at least if your server is compromised their data is safe. | 4 | 7 | 0 | A client wants to ensure that I cannot read sensitive data from their site, which will still be administered by me. In practice, this means that I'll have database access, but it can't be possible for me to read the contents of certain Model Fields. Is there any way to make the data inaccessible to me, but still decrypted by the server to be browsed by the client? | Encrypted Django Model Fields | 1.2 | 0 | 0 | 5,260
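Purely as an illustration of the "reversible encryption" the accepted answer mentions (the original answers name no library), the third-party cryptography package's Fernet recipe gives symmetric encrypt/decrypt in a few lines. Being symmetric, anyone who holds the key, including the admin, can still decrypt; it mainly protects data at rest in the database.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # generate once; keep it out of the database
f = Fernet(key)

token = f.encrypt(b'sensitive field contents')   # store `token` in the model field
plaintext = f.decrypt(token)                     # done server-side when rendering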
1,353,206 | 2009-08-30T06:10:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,timeout | 1,500,947 | 6 | false | 1 | 0 | You shouldn't write queries like that, at least not to run against your live database. MySQL has a "slow queries" parameter which you can use to identify the queries that are killing you. Most of the time, these slow queries are either buggy or can be sped up by defining a new index or two. | 3 | 7 | 0 | I'm using Django 1.1 with Mysql 5.* and MyISAM tables.
Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out.
I'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down.
How do I avoid this? Can I set maximum query times? | Django: How can you stop long queries from killing your database? | 0 | 1 | 0 | 4,741 |
1,353,206 | 2009-08-30T06:10:00.000 | 1 | 0 | 0 | 0 | python,mysql,django,timeout | 1,353,862 | 6 | true | 1 | 0 | Unfortunately MySQL doesn't allow you an easy way to avoid this. A common method is basically to write a script that checks all running processes every X seconds (based on what you think is "long") and kill ones it sees are running too long. You can at least get some basic diagnostics, however, by setting log_slow_queries in MySQL which will write all queries that take longer than 10 seconds into a log. If that's too long for what you regard as "slow" for your purposes, you can set long_query_time to a value other than 10 to change the threshold. | 3 | 7 | 0 | I'm using Django 1.1 with Mysql 5.* and MyISAM tables.
Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out.
I'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down.
How do I avoid this? Can I set maximum query times? | Django: How can you stop long queries from killing your database? | 1.2 | 1 | 0 | 4,741 |
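A hedged sketch of the watchdog script described in the answer above, using the MySQLdb driver: list running queries with SHOW FULL PROCESSLIST and kill any SELECT that has run longer than a threshold. The credentials and the 30-second limit are placeholders; run it from cron or a loop every few seconds.

import MySQLdb

MAX_SECONDS = 30  # placeholder threshold


def kill_long_queries():
    conn = MySQLdb.connect(host='localhost', user='admin', passwd='secret')
    try:
        cur = conn.cursor()
        cur.execute('SHOW FULL PROCESSLIST')
        # Columns: Id, User, Host, db, Command, Time, State, Info
        for pid, user, host, db, command, seconds, state, info in cur.fetchall():
            too_long = command == 'Query' and seconds > MAX_SECONDS
            if too_long and info and info.lstrip().upper().startswith('SELECT'):
                cur.execute('KILL %d' % pid)  # only kill reads, never writes
    finally:
        conn.close()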
1,353,206 | 2009-08-30T06:10:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,timeout | 1,353,366 | 6 | false | 1 | 0 | Do you know what the queries are? Maybe you could optimise the SQL or put some indexes on your tables? | 3 | 7 | 0 | I'm using Django 1.1 with Mysql 5.* and MyISAM tables.
Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out.
I'm going to try to track down all the corner cases, but its nice to have a safety net so the site doesn't come down.
How do I avoid this? Can I set maximum query times? | Django: How can you stop long queries from killing your database? | 0 | 1 | 0 | 4,741 |
1,353,211 | 2009-08-30T06:13:00.000 | 2 | 1 | 1 | 0 | python,.net,ruby | 1,353,301 | 8 | false | 0 | 0 | Embedding a script engine
Use IronPython as a scripting engine inside your .NET application, for example enabling end-users of your application to change customizable parts with a full-fledged language such as Python.
A possible example might be to expose custom logic to end-users for a work flow engine. | 6 | 4 | 0 | I recall when I first read Pragmatic Programmer that they suggested using scripting languages to make you a more productive programmer.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below. | I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive? | 0.049958 | 0 | 0 | 403 |
1,353,211 | 2009-08-30T06:13:00.000 | 2 | 1 | 1 | 0 | python,.net,ruby | 1,353,228 | 8 | false | 0 | 0 | Advanced Text Processing
Traditional strengths of awk and perl. You can just glue together a bunch of regular expressions to create a simple data-mining system on the go. | 6 | 4 | 0 | I recall when I first read Pragmatic Programmer that they suggested using scripting languages to make you a more productive programmer.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below. | I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive? | 0.049958 | 0 | 0 | 403 |
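As a tiny illustration of that kind of throwaway data mining (the log format is invented), a couple of regular expressions plus a loop is often the whole program:

import re

# Invented log format: "2009-08-30 12:00:01 ERROR something broke"
LINE = re.compile(r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (ERROR|WARN) (.+)$')


def scan(path):
    hits = []
    for line in open(path):
        m = LINE.match(line.rstrip('\n'))
        if m:
            hits.append(m.groups())  # (timestamp, level, message)
    return hits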
1,353,211 | 2009-08-30T06:13:00.000 | 1 | 1 | 1 | 0 | python,.net,ruby | 1,353,220 | 8 | false | 0 | 0 | Quick Prototyping - Both
In the simplest cases, firing up a Python interpreter and writing a line or two is way faster than creating a new project in Visual Studio.
And you can use Ruby too. Or Lua, or even Perl, whatever. The point is the implicit typing and lightweight feel.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below. | I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive? | 0.024995 | 0 | 0 | 403 |
1,353,211 | 2009-08-30T06:13:00.000 | 1 | 1 | 1 | 0 | python,.net,ruby | 1,353,244 | 8 | false | 0 | 0 | Cross platform
Compared to .NET, a simple Python script is more easily ported to other platforms such as Linux. Although it is possible to achieve the same with the likes of Mono, it is simpler to run a Python script file on different platforms.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below. | I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive? | 0.024995 | 0 | 0 | 403 |
1,353,211 | 2009-08-30T06:13:00.000 | 4 | 1 | 1 | 0 | python,.net,ruby | 1,353,229 | 8 | false | 0 | 0 | Less Code
I think productivity is a direct result of how proficient you are in a specific language. That said, the terseness of a language like Python might save some time in getting certain things done.
If I compare how much less code I have to write for simple administration scripts (e.g. clean-up of old files, sketched below) compared to .NET code, there is a certain amount of productivity gain. (Plus it is more fun, which also helps get the job done.)
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below. | I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive? | 0.099668 | 0 | 0 | 403 |
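As an illustration of the clean-up-of-old-files administration script mentioned above, a sketch that removes files older than 30 days; the directory and the age threshold are placeholders.

import os
import time

ROOT = '/var/log/myapp'   # placeholder directory
MAX_AGE_DAYS = 30         # placeholder threshold

cutoff = time.time() - MAX_AGE_DAYS * 24 * 60 * 60
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)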
1,353,211 | 2009-08-30T06:13:00.000 | 1 | 1 | 1 | 0 | python,.net,ruby | 1,353,255 | 8 | false | 0 | 0 | Processing received Email
Python has built-in support for POP3 and IMAP where the standard .NET framework doesn't. Useful for automating email triggered tasks. | 6 | 4 | 0 | I recall when I first read Pragmatic Programmer that they suggested using scripting languages to make you a more productive programmer.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below. | I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive? | 0.024995 | 0 | 0 | 403 |
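For example, the standard-library poplib module (imaplib is the IMAP counterpart) can fetch waiting messages in a few lines; the host and credentials are placeholders, and parsing/acting on the message is left to the email module and your task logic.

import poplib

conn = poplib.POP3('pop.example.com')   # placeholder host
conn.user('automation')                 # placeholder credentials
conn.pass_('secret')

count, _ = conn.stat()                  # (number of messages, mailbox size)
for i in range(1, count + 1):
    response, lines, octets = conn.retr(i)   # raw lines of message i
    # parse `lines` with the email module and trigger the appropriate task here
conn.quit()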
1,353,715 | 2009-08-30T11:49:00.000 | 10 | 1 | 1 | 0 | python,performance | 1,353,775 | 10 | false | 0 | 0 | This sort of premature micro-optimisation is usually a waste of time in my experience, even in C and C++. Write readable code first. If it's running too slowly, run it through a profiler, and if necessary, fix the hot-spots.
Fundamentally, you need to think about return on investment. Is it worth the extra effort in reading and maintaining "optimised" code for the couple of microseconds it saves you? In most cases it isn't.
(Also, compilers and runtimes are getting cleverer. Some micro-optimisations may become micro-pessimisations over time.) | 7 | 5 | 0 | I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as less functions as possible, even if it comes on the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter? | Should I optimise my python code like C++? Does it matter? | 1 | 0 | 0 | 805 |
1,353,715 | 2009-08-30T11:49:00.000 | 3 | 1 | 1 | 0 | python,performance | 1,353,728 | 10 | false | 0 | 0 | I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
Given this, I'd say that you should take your colleague's advice about writing efficient Python but ignore anything he says that goes against prioritizing readability and maintainability of the code, which will probably be more important than the speed at which it'll execute. | 7 | 5 | 0 | I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as less functions as possible, even if it comes on the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter? | Should I optimise my python code like C++? Does it matter? | 0.059928 | 0 | 0 | 805 |
1,353,715 | 2009-08-30T11:49:00.000 | 4 | 1 | 1 | 0 | python,performance | 1,353,788 | 10 | false | 0 | 0 | I agree with others: readable code first ("Performance is not a problem until performance is a problem.").
I only want to add that when you absolutely need to write some unreadable and/or non-intuitive code, you can generally isolate it in few specific methods, for which you can write detailed comments, and keep the rest of your code highly readable. If you do so, you'll end up having easy to maintain code, and you'll only have to go through the unreadable parts when you really need to. | 7 | 5 | 0 | I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as less functions as possible, even if it comes on the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter? | Should I optimise my python code like C++? Does it matter? | 0.07983 | 0 | 0 | 805 |
1,353,715 | 2009-08-30T11:49:00.000 | 2 | 1 | 1 | 0 | python,performance | 1,354,718 | 10 | false | 0 | 0 | I think there are several related 'urban legends' here.
False: Putting the more often-checked condition first in a conditional, and similar optimizations, saves enough time for a typical program that it is worthwhile for a typical programmer.
True: Some, but not many, people are using such styles in Python in the incorrect belief outlined above.
True: Many people use such a style in Python when they think that it improves the readability of a Python program.
About readability: I think it's indeed useful when you give the most useful conditional first, since this is what people notice first anyway. You should also use ''.join() if you mean concatenation of strings since it's the most direct way to do it (the s += x operation could mean something different).
"Call as less functions as possible" decreases readability and goes against Pythonic principle of code reuse. And so it's not a style people use in Python. | 7 | 5 | 0 | I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as less functions as possible, even if it comes on the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter? | Should I optimise my python code like C++? Does it matter? | 0.039979 | 0 | 0 | 805 |
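The ''.join() point above is easy to measure rather than argue about; the standard-library timeit module gives a quick comparison (absolute numbers will vary by machine and interpreter):

import timeit

setup = "pieces = ['x'] * 1000"

concat = timeit.Timer("s = ''\nfor piece in pieces: s += piece", setup)
join = timeit.Timer("''.join(pieces)", setup)

print(concat.timeit(number=1000))   # repeated += on a str
print(join.timeit(number=1000))     # a single ''.join()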
1,353,715 | 2009-08-30T11:49:00.000 | 1 | 1 | 1 | 0 | python,performance | 1,354,833 | 10 | false | 0 | 0 | My visceral reaction is this:
I've worked with guys like your colleague and in general I wouldn't take advice from them.
Ask him if he's ever even used a profiler. | 7 | 5 | 0 | I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as less functions as possible, even if it comes on the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter? | Should I optimise my python code like C++? Does it matter? | 0.019997 | 0 | 0 | 805 |
1,353,715 | 2009-08-30T11:49:00.000 | 13 | 1 | 1 | 0 | python,performance | 1,353,732 | 10 | false | 0 | 0 | The answer is really simple :
Follow Python best practices, not C++ best practices.
Readability in Python is more important than speed.
If performance becomes an issue, measure, then start optimizing. | 7 | 5 | 0 | I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as less functions as possible, even if it comes on the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter? | Should I optimise my python code like C++? Does it matter? | 1 | 0 | 0 | 805 |